“Souls of the ancestor that knock us out” and other tales. A qualitative study to identify demand-side factors influencing malaria case management in Cambodia Background Appropriate case management of suspected malaria in Cambodia is critical given anti-malarial drug resistance in the region. Improving diagnosis and the use of recommended malarial treatments is a challenge in Cambodia where self-treatment and usage of drug cocktails are widespread, a notable difference from malaria treatment seeking in other countries. This qualitative study adds to the limited evidence base on Cambodian practices, aiming to understand the demand-side factors influencing treatment-seeking behaviour, including the types of home treatments, perceptions of cocktail medicines and reasons for diagnostic testing. The findings may help guide intervention design. Methods The study used in-depth interviews (IDIs) (N = 16) and focus group discussions (FGDs) (N = 12) with Cambodian adults from malaria-endemic areas who had experienced malaria fever in the previous two weeks. Data were analysed using NVivo software. Results Findings suggest that Cambodians initially treat suspected malaria at home with home remedies and traditional medicines. When seeking treatment outside the home, respondents frequently reported receiving a cocktail of medicines from trusted providers. Cocktails are perceived as less expensive and more effective than full-course, pre-packaged medicines. Barriers to diagnostic testing include a belief in the ability to self-diagnose based on symptoms, cost and reliance on providers to recommend a test. Factors that facilitate testing include recommendation by trusted providers and a belief that anti-malarial treatment for illnesses other than malaria can be harmful. Conclusions Treatment-seeking behaviour for malaria in Cambodia is complex, driven by cultural norms, practicalities and episode-related factors. Effective malaria treatment programmes will benefit from interventions and communication materials that leverage these demand-side factors, promoting prompt visits to facilities for suspected malaria and challenging patients’ misconceptions about the effectiveness of cocktails. Given the importance of the patient-provider interaction and the pivotal role that providers play in ensuring the delivery of appropriate malaria care, future research and interventions should also focus on the supply-side factors influencing provider behaviour. Background In Cambodia, an estimated 2.65 million people are at risk of malaria [1]. The Cambodian Ministry of Health estimates that 83,777 outpatient and 4,045 inpatient malaria cases were reported in 2009, with this disease accounting for 0.6% of all outpatient cases and 3.5% of all inpatient cases in the same year [2]. Estimated prevalence rates range from 3.0% to 12.3% in malaria-prone provinces, with the epidemiology of malaria varying widely across the country. Prevalence is highest around the tropical forests located on the country borders, covering 60% of Cambodia's landmass. Parasite prevalence rates vary and are reported to reach 15% to 40% in remote, forested areas, with much lower rates in the plains [3]. In the northeast, malaria transmission is relatively high; the reported annual incidence rate lies between 11 and 50 cases per 1,000 inhabitants and Plasmodium falciparum, the deadliest species of malaria, predominates [4]. By contrast, along the western border with Thailand, P. 
falciparum malaria transmission is generally lower than in the northeast and Plasmodium vivax predominates [5,6]. Malaria transmission risk in Cambodia is associated with the rainy season, typically peaking in August and September. Unlike many areas of sub-Saharan Africa, the highest burden of malaria infection afflicts adults who work and stay overnight in the forests. The border between Cambodia and Thailand serves as an epicentre of multidrug resistance [5,7]. Since the 1970s, this area has been the hotspot for the development of anti-malarial-resistant parasites; resistance to anti-malarials, including chloroquine and sulphadoxine-pyrimethamine, subsequently spread to other parts of Asia and Africa [8]. In 2009, artemisinin-resistant P. falciparum malaria was confirmed in Cambodia's Pailin province [9]. Experts believe a number of factors have contributed to the emergence of drug resistance in Cambodia: 1) previously unregulated sales of artemisinin monotherapy; 2) limited access to artemisinin combination therapy (ACT); 3) ACT that are not co-formulated (facilitating continued use of artemisinin monotherapy); and 4) ubiquitous counterfeit and substandard medicines [9]. Over the past 10 years, the Cambodia National Malaria (CNM) programme has pioneered a number of innovative malaria control approaches, many of which have become accepted as standard practice in malaria-endemic nations. For example, since 2000, CNM has recommended using ACT (artesunate and mefloquine) as the first-line treatment for P. falciparum malaria and chloroquine as the first-line treatment for P. vivax malaria. Before treatment, the National Treatment Guidelines instruct providers to confirm malaria infection through microscopy or a rapid diagnostic test (RDT). In 2008, CNM changed the protocol for treating malaria in districts with confirmed multidrug resistance, switching to dihydroartemisinin + piperaquine (DHA + PPQ), a fixed-dose combination, as the first-line treatment. Under the Resistance Containment Programme, CNM launched multiple initiatives including a ban on the sale of artemisinin monotherapy [10] as well as community-level services to facilitate rapid diagnosis and treatment with the correct first-line anti-malarials [11]. Other national malaria control efforts include the provision of highly subsidized RDTs and ACT treatments in the private sector since 2003, the provision of these commodities for free in the public health sector [12,13], and regular monitoring of the quality of anti-malarials in both the public and private health sectors at sentinel sites [14]. Despite these efforts, recent research in Cambodia shows that rates of diagnostic testing and prescription of first-line treatment for confirmed cases remain relatively low among persons with malaria fever. Supply-side data from outlet surveys show that the availability of diagnostic tests and the first-line treatment is variable, with higher availability in the public sector, but lower stocking rates in the private sector [15]. As such, when patients seek out treatment for malaria fever, the diagnostic tests and/or the first-line treatment may not be available. In addition, other supply-side research has shown that many of the anti-malarials may be substandard or fake in Cambodia [14]. Moreover, providers may prescribe unsuitable dosages, incorrect medicines and improper duration of treatment [16]. 
In addition, household survey data suggest that many people with malaria fever rely on home remedies, such as sponge baths and traditional medicines made from a variety of herbal or plant sources, which they self-administer [17,18]. This reliance on self-treatment with home remedies may delay patients from seeking proper care. In addition, while nearly half of all Cambodians who seek care for malaria symptoms receive a blood test, patients most commonly receive medicines sold or dispensed by health providers as "drug cocktails" when treating these fevers [18], a finding supported by other quantitative research [11,19]. Cocktails typically consist of a small plastic bag containing one or more tablets of various medicines including antipyretics, vitamins, anti-malarials, antihistamines and antibiotics [11]. The widespread use of cocktails creates challenges and dangers for combating malaria in Cambodia. First, as it is the provider who decides on the composition of the cocktail, it is unclear what patients receive in their plastic bags and whether they even receive an anti-malarial. If an anti-malarial is provided, it may be an incomplete dose or an oral artemisinin monotherapy, both of which lead to parasite drug resistance [20]. The variation in the number of cocktail packets bought from the provider adds another threat to combating drug resistance. Even though providers often present multiple packets or pills as a full course of treatment, some patients choose not to purchase a full course; factors such as affordability and illness severity sometimes limit the number of cocktail packets that patients buy [11]. For these reasons, Cambodian national malaria control efforts have also focused on increasing consumer awareness of the dangers of cocktail medicines through behaviour change communication (BCC) campaigns. Efforts to change how Cambodians approach malaria treatment face notable challenges. In general, treatment-seeking behaviour for illness is a highly complex process. Around the globe, people frequently seek multiple sources of treatment and many self-medicate or undergo some type of treatment at home, outside of a medical facility. People also often have specific perceptions of medicines, believing that some are more effective than others. Moreover, specific cultural beliefs, norms and attitudes are likely to influence the treatment-seeking process [21][22][23][24]. Numerous research studies, primarily conducted in sub-Saharan Africa, have extensively documented a number of demand-side factors associated with treatment-seeking behaviour, including perceptions about the cause and severity of the illness, quality of care at health facilities, affordability of treatment, proximity of services to patients, and positive manner of the providers [21][22][23][24][25]. In Cambodia, the epidemiology of malaria and the specifics of the treatment environment make the malaria treatment-seeking process vastly different from that found in other parts of the globe, particularly the process in sub-Saharan Africa where most research on this topic has focused thus far. Cambodian adults, specifically forest workers, are most afflicted by malaria, unlike in much of Africa where children under five are most at risk. As a result, caretaking responsibility rests with the individual rather than the caregiver of a young child and access to care is limited, factors which often guide treatment-seeking decisions in Cambodia. 
Moreover, while self-treatment of fever at home is common worldwide, the practice in Cambodia appears to be much higher than in surveyed sub-Saharan African countries [26]. Cocktail medicines are also more commonly used in Cambodia and the Southeast Asia region [11,20], a negligible practice in sub-Saharan Africa [26]. Finally, malaria treatment practices in Cambodia are complicated by the multiple definitions and cultural understandings of "fever". Cambodia's main language, Khmer, uses a variety of terms and definitions for fever such as: fever with chills (krun janh) or hot body (krun ngak) known as "malaria fever"; dengue fever (krun chhiem); or other types of fever or symptoms, such as night fever (krun yop), high temperature (kdao gadow/kdao kluan) or sweating (krung loap) [27]. Such variety may further complicate treatment practices. To date, only a few unpublished papers provide a descriptive picture of the factors that tend to influence malaria treatment in Cambodia [28,29]. One of these studies, a qualitative study from 2004, suggests that a number of factors influence provider decisions in Cambodia [28]. These include stock-outs of test kits and financial incentives to sell medicines rather than test before providing treatment, since a confirmed diagnosis may diminish medicine sales due to negative test results. Results also revealed that patients may prefer to spend money on medicines rather than a test, and may prefer to self-treat based on their symptoms until they become more seriously ill [28]. To expand the limited evidence base, this study uses qualitative methods to explore the demand-side factors that influence malaria treatment-seeking behaviours and patient-provider interactions among Cambodian patients. It aims to shed light on findings from quantitative research studies and offer programmatic recommendations to increase the uptake of appropriate malaria case management in Cambodia. It asks three key questions: 1) Why do people first treat at home and what types of medicines are used? 2) Why do patients take drug cocktails for malaria and what are their perceptions of these medicines? and 3) Why do some patients with malaria fever receive a diagnostic test while others do not? The study findings aim to provide a more nuanced understanding of the patient-provider interaction at a health facility or outlet where malaria treatment is sought, including a patient's perception of the provider. Such findings may prove useful in guiding the design of interventions focused on increasing informed demand for effective malaria case management services in Cambodia. Methods Researchers employed qualitative in-depth interviews (IDIs) and focus group discussions (FGDs) to investigate treatment-seeking behaviours for malaria fever in Cambodia. To understand the complexity of factors related to diagnosis and treatment-seeking behaviours, two groups of participants were recruited: adults who reported they had their malaria fever confirmed through diagnostic testing and those who reported they did not confirm their fever with a diagnostic test. FGDs gathered information on community-level norms and beliefs about parasitological diagnosis and malaria treatment-seeking behaviours. IDIs collected data on participants' individual experiences when they fell ill with fever, their treatment-seeking processes and the dynamics of the patient-provider interaction when seeking care for malaria. 
Sampling A non-probability, purposive sample of the target group was used to recruit study participants from three randomly selected, rural, malaria-endemic districts, located in the heavily forested areas of Pursat and Kratie provinces. To find participants, researchers employed a variety of snowball sampling methods. First, the village chiefs, health providers, and shop assistants and owners who sell medicines from various outlets (e.g. pharmacies, drug stores, etc.) served as key informants to identify people in the area who recently had fever. Potential participants were then asked if they knew of other people in surrounding villages or forest areas who had experienced malaria fever (krun janh/krun ngak) in the previous two weeks. Researchers used a screening questionnaire to determine the respondent's eligibility for inclusion in the study. They asked whether or not the respondent had had malaria fever (krun janh or krun ngak) in the previous two weeks, as opposed to other common types of fever in Cambodia, such as dengue fever (krun chhiem) or night fever (krun yop). They also ascertained whether or not the potential participant's malaria fever had been confirmed using a diagnostic test. Aiming to conduct at least 10 FGDs, with 8-10 participants each, and 12 IDIs, the research team implemented 12 FGDs and 16 IDIs over a two-week period during the rainy season in August 2009. Researchers sought to enrol a similar number of participants into each of the two sampling groups, those who had received a diagnostic test and those who had not. Data collection Teams of four Cambodian social scientists (two women and two men) hired from the local community conducted the IDIs and FGDs in Kratie and Pursat provinces. All were trained to correctly use the guides and study protocols for data management. They held each in-depth interview in a private space, sometimes at the home of the individual. FGDs were held in community centres. Prior to participation, researchers informed study participants of the study objectives and obtained verbal consent from all participants. Incentives were provided in the form of refreshments after participation as well as a traditional Khmer scarf (kroma). Researchers made voice recordings of all IDIs and FGDs, with the consent of participants, and interviewers also took notes of the content, non-verbal behaviour and setting of the interaction. Interview and focus group guides with open-ended questions addressed key topics related to treatment-seeking behaviour. Each of these instruments focused on how participants responded to their fever, where they sought treatment, why they did or did not receive a diagnostic test, what types of treatment they received and the perceived efficacy of those treatments. While both the IDI and FGD instruments asked respondents to describe their most recent episodes of fever, FGD participants discussed their recent fever experiences within the group as a means to help understand the cultural and social norms around malaria treatment-seeking behaviour. The IDI and FGD guides were translated, piloted and revised before and during the fieldwork. This process aimed to improve the clarity of the questions, assist in assessing topic saturation to guide any needed changes in the instruments, and allow for any required increases in the sample size. Ethics Ethical approval was obtained for this study from the Cambodian Ministry of Health Ethical Review Board, 19 July 2009 (#109NECHR). 
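To make the two-part eligibility check concrete, the short sketch below expresses the screening logic in Python. The data structure, field names and example values are hypothetical: the study used a verbal screening questionnaire rather than software, so this is only an illustrative rendering of the criteria described above.

```python
# Illustrative sketch of the eligibility screening described above.
# The dataclass and field names are hypothetical; the study used a paper/verbal questionnaire.
from dataclasses import dataclass
from typing import Optional

MALARIA_FEVER_TERMS = {"krun janh", "krun ngak"}  # terms treated as "malaria fever"

@dataclass
class ScreeningResponse:
    fever_type: str            # e.g. "krun janh", "krun ngak", "krun chhiem", "krun yop"
    days_since_fever: int      # days since the reported fever episode
    had_diagnostic_test: bool  # was the fever confirmed by RDT or microscopy?

def screen(resp: ScreeningResponse) -> Optional[str]:
    """Return the sampling group for an eligible respondent, or None if excluded."""
    if resp.fever_type not in MALARIA_FEVER_TERMS:
        return None  # dengue fever, night fever, etc. are excluded
    if resp.days_since_fever > 14:
        return None  # outside the two-week recall window
    return "tested" if resp.had_diagnostic_test else "untested"

print(screen(ScreeningResponse("krun janh", 5, True)))     # -> tested
print(screen(ScreeningResponse("krun chhiem", 3, False)))  # -> None
```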
Data analysis Recordings from all interviews were transcribed verbatim in the original Khmer and then translated into English. On completion, a member of the bilingual research team read and reviewed each translated English transcript, comparing it against the original Khmer version. The team rectified any discrepancies with the translator until full agreement between the translated transcript and original Khmer version was obtained. Researchers used the NVivo qualitative data analysis package (QSR International Pty 2002) to analyse the data. Following the principles of grounded theory, the research team coded the transcripts according to common themes that emerged from the data, letting the data guide the coding rather than allowing researchers to impose a coding scheme [30]. Given the open nature of the interview questions, this approach enabled the emergence of unexpected concepts and categories that sometimes had dual meanings, such as "trust in a provider". These concepts were inductively generated and coded, meaning that the same respondent could be coded for expressing both trust and lack of trust in a provider. Researchers analysed the frequency with which individuals reported themes, and the extent to which groups mentioned these themes, looking for patterns. This procedure helped clarify which themes consistently emerged across all groups and which were idiosyncratic [31]. To capture the main topics emerging from the data, the research team arranged the descriptive codes into sets of broader themes. For example, the code, "I always believe providers will give me the right treatment to cure my fever" was categorized under the theme, "trust in providers". Researchers avoided coding data into categories that were too small (e.g. "belief that cocktail medicines will cure fever immediately when working in the forest"), as such classification can make results difficult to interpret [32]. Therefore, the team created codes to encapsulate recurring themes, eventually revisiting the entire dataset with the final coding scheme to perform the final analysis. To ensure inter-rater reliability in the coding, the research team employed various procedures. First, the Khmer research staff conducted a paper-based, thematic analysis on 40% of the IDI and FGD transcripts, using the final codes that had emerged from the initial analysis phase. Then, they compared the findings from the Khmer transcript-based analysis with the initial findings from the NVivo analysis. Through this comparison, the team checked to make sure they had coded the themes in a consistent manner, without creating new codes in one type of analysis and not the other. Any discrepancies were resolved through discussion with the larger team of coders and the primary analyst until full agreement was obtained. To confirm the reliability of the findings, the lead researchers presented the results to the Khmer research team upon completion of the analysis. In addition, the team once again verified any quotes used in the results summary with the translators. Results A total of 60 participants (68% male, 32% female) took part in this study, with a mean age of 32 years. Most respondents had no secondary education: 75% had received primary school education alone while 15% had finished secondary education. Another 10% had not received any schooling at all. The large majority of participants were married (85%) and reported working in forested areas (80%). 
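As a rough illustration of the frequency step described in the Data analysis section above (counting how many respondents in each sampling group mentioned a coded theme, to separate consistently emerging themes from idiosyncratic ones), the following Python snippet tallies a small, invented table of coded segments. It is purely illustrative and is not the NVivo workflow the team actually used; the respondent identifiers and themes are placeholders.

```python
# Hypothetical illustration of tallying coded themes by sampling group.
# The records below are invented; in the study, coding was performed in NVivo.
import pandas as pd

coded_segments = pd.DataFrame(
    [  # respondent_id, sampling_group, theme
        ("IDI-01", "tested", "trust in providers"),
        ("IDI-01", "tested", "cost of testing"),
        ("IDI-02", "untested", "self-diagnosis from prior episodes"),
        ("FGD-03", "untested", "trust in providers"),
        ("FGD-03", "untested", "use of traditional medicines"),
    ],
    columns=["respondent_id", "sampling_group", "theme"],
)

# Count distinct respondents mentioning each theme, per group, so a respondent
# who repeats a theme several times is only counted once.
theme_counts = (
    coded_segments
    .drop_duplicates(["respondent_id", "theme"])
    .groupby(["theme", "sampling_group"])["respondent_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(theme_counts)
```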
Per the sampling criteria, all participants reported having malaria fever in the previous two weeks. Twenty-seven participants had received a diagnostic test upon seeking treatment while 33 had not. It should be noted that the sampling groups (those who sought diagnostic testing versus those who did not) did not have any bearing on the results, with the exception of the expected differences in diagnostic testing. The emergent themes remained consistent within each group, so differences in use of self-treatment and cocktail medicines did not appear. Types of treatment taken at home Few participants reported taking immediate steps to treat their fever. Most waited a few days before seeking treatment outside the home, either because they were waiting for symptoms to worsen and/or to ensure the fever was not the result of a simple cold or flu. They also waited because they were in the forest, far away from health centres or outlets where they could purchase medicines. Respondents also reported that medicines are expensive and they did not have the money to treat their fevers. Thus, a common alternative is to treat the fever at home, using traditional remedies. Many believe the symptoms can be treated successfully at home, an impression that is also rooted in their perception of the seriousness of the illness. Moderator: When you have a hot temperature, where do you go for treatment? Respondent 1: First, we treat it at home by ourselves. We use different medicines to make us feel better, and which help reduce the fever. The elders say to use a kind of roof thatching plant, corn and a kind of aquatic herb that is used as a spice and which keeps for a year. We boil the core of the corn and use this to treat the fever. Respondent 2: But if the fever is not any better after two or three days, we may go to a hospital or a shop that sells medicines. Respondent 3: But if it is serious from the start, we will go to the hospital right away. FGD, Pursat Province Home treatment includes traditional medicines, primarily used to delay, alleviate and/or cure symptoms before seeking modern alternatives. Most reported drinking boiled tree roots (e.g. the roots of the lemon tree with alcohol or kapok leaves) or taking warm sponge baths with fragments of ginger, or with guava leaf and the leaf of a small, sour fruit. They also use other natural remedies, as described by this forest worker: Respondent: If we are in the forest, we have nothing with us. So, we just have trees in the forest such as "Ampil Brok Phler" and "Merm Krovanh Chruk". We just eat them when we do not have medicines. Moreover, there are "Cheung Kras" grass, and also coconut stumps which we cut to eat. These are temporary medicines in the forest until we arrive home. When we have a serious fever, we can cut the coconut core into two pieces, tie it with black thread, insert a nail, and then boil them together to drink until we get medicine. . . . The [traditional medicines] can help around 30% or 40%. They can protect us from running a high temperature. But these won't be effective for long, so, three or four days later, we will start shivering again. However, this mixture helps us to be able to ride our ox cart home. IDI, 32 year-old, Male Forest Worker, Pursat Province While treatment at home is commonplace among respondents, they generally sought help outside the home from a qualified provider of health care if the home treatment strategies were perceived as ineffective or failing, or if symptoms worsened. 
This source outside the home was most often the nearest provider of modern medicines. Sometimes respondents resorted to using modern medicines that they had left over from a previous episode of illness, even if these medicines were not anti-malarials. Reasons for taking cocktails While many respondents rely on traditional medicines as a primary treatment, the majority of participants eventually receive some form of modern medication for their malaria symptoms. A few participants mentioned brand names without prompting from the interviewer or FGD facilitator, typically the ACT Malarine (private sector) or A + M (public sector). However, most respondents did not name the medications they received. The most common point shared by all respondents about modern medications focused on cocktails. When they sought modern treatment, they often received medicines without formal packaging, presented as cocktails. Therefore, this section focuses on the study results surrounding the perceptions of cocktail medicines, rather than the perceptions of pre-packaged medicines. Respondents reported the cocktail medicines are presented in different forms. Sometimes, the cocktails consist of complete blister packs that have simply been removed from pre-packaged medicine boxes. In other cases, providers have cut up the blister packs or removed the pills from the blisters or tins altogether, placing these individual pills in the mixture. The providers prepare all of these formulations in small plastic bags. Instructions for taking these cocktail medicines direct participants to take them multiple times a day over the course of several days, typically using the colour of the pill in the cocktail (e.g. blue, white, yellow or red) to denote which medicines to take at specific times. Sometimes the instructions describe a sign or picture imprinted on the pill(s). Some respondents reported linking the colour of the pill with the perceived curative agent. For example, they described the red pills as the pills for energy, while the light blue pills were "new" treatments for malaria. Moderator: What sort of medicine were you offered? Respondent 1: I got pills in a plastic bag. The provider removed different pills from a blister pack and some from a big container, and put them into the plastic bag. Moderator: The provider removed them from the blister pack? Respondent 1: Yes, she did. Moderator: What about others here? What medicine did you get? Respondent 4: I don't know. The health provider just prepared a bag for me to take twice per day, morning and afternoon. I didn't understand [anything] about those medicines. I could not recognize them, but I knew they related to malaria and typhoid. The health providers told me. They gave me the tablets after I got serum. Moderator: Did the medicine come with its original package or cover? Respondent 4: Some came with its original packaging, some didn't. It came in a small plastic bag. FGD, Pursat Province Among study participants, the use of cocktails is widespread and is viewed as a "normal" medicine given by providers. Given the frequency with which respondents mentioned receiving cocktails, this type of treatment is deemed a common, standard treatment for all symptoms. Respondents did not question the efficacy of the cocktail treatments, regarding them as a "normal" and "common" type of treatment for malaria from health providers. They cited trust in the efficacy of these treatments. 
In addition, respondents perceive the inclusion of multiple types of pills in cocktail packages as more effective and less expensive than a pre-packaged medicine. Many described the need to not only cure malaria, but also to reduce fever and headaches; thus, a combination of pills, targeting multiple symptoms, is deemed more effective and more affordable than purchasing multiple pre-packaged medicines. Respondents also generally cited trust in their providers and believed that the provider would only give them an effective medicine: Interviewer: Do you think these [cocktail] treatments are effective? Respondent: Yes, they are good treatments because I always get better after treating [my illness with them]. And I believe in the medicines because they are given to me by health providers. They save us. It is the provider's job to save us. Interviewer: How much do you believe in these treatments? IDI, 29 year-old, Male Forest Worker, Kratie Province Many respondents also discussed how the cocktails are often saved for later illnesses, or for when they return to the forest. They also reported stopping the medicine regimen prematurely because they felt better, or they needed the medicines for other family members or their own future illnesses, as illustrated by this exchange: Reasons for diagnostic testing The data suggest that many respondents are not aware they need a diagnostic blood test to confirm their fever as malaria. Many also believe they do not need to do a test because they are able to self-diagnose malaria. Since many participants claimed they "knew" it was malaria because the symptoms they had experienced were the same as those from previous bouts of the illness, they felt that the additional cost imposed by testing seemed fruitless. Sometimes this perception was linked to previous experiences with diagnostic testing; those who had had prior experience with tests that confirmed fever as malaria now felt in a better position to recognize when fever episodes were malaria. Others reported that providers did not mention the need for a test. Some participants also noted that they did not know where to get tested. Respondent 3: For me, I just went to the shop and got the medicines. The provider did not tell me to take a test or anything. He gave me medicines and told me to take them to cure my fever. So I did. I didn't know anything about a test. FGD, Kratie Province For respondents who mentioned seeking treatment from public or private facilities or clinics, many did not question the need for a diagnostic test because they trusted their provider to give them the appropriate treatment. This finding contrasted with responses from those who obtained treatment from less formal outlets including village shops or markets; these respondents did not comment on the quality of the provider. Participants perceive public and private health care providers as knowledgeable and experienced, as well as able to ascertain what type[s] of medicine is needed based on their symptoms. Providers are viewed as being a source of authority in treatment and diagnosis. Several respondents reported they would accept any decision handed down by the provider. Often they were not challenged by the provider to have a test, as illustrated by the following discussion: Respondent: When I got to the health facility, I told the provider to give me anti-malarial medicine. Also, the provider did not request or provide blood testing for me, but only gave me a cocktail. Interviewer: So what did you tell the provider? 
Respondent: When I arrived, I asked him if he could please give me anti-malarial medicine "for three times" and he did that for me. In fact, he also has test equipment [but he didn't use it on me]. Interviewer: Why did you ask him to give you antimalarial medicine? Respondent: Because I think my disease is really malaria. So I just tell him to give me anti-malarials, which he did for me. Interviewer: So, he did not ask you anything? Respondent: He did ask me a few questions, something like "How did you get malaria?" IDI, 22 year-old Female, Kratie Province In contrast, those who received diagnostic testing noted that blood tests are also seen as part of the treatment plan, as well as something that is prescribed by providers. The data suggest that receiving a test is often dependent on where people purchased or obtained medicines. Typically, respondents mentioned receiving a test when they sought treatment from a public health facility or a private hospital. They also talked about the role of the provider in this process, citing that receiving a test is up to the health provider, either public or private, as illustrated by the quote below: Respondent 5. When I arrived, the doctor said that I have to have a blood test, in order to make it easier to prescribe the proper treatment. If we do not have a blood diagnosis, we cannot know what disease it is, and so we cannot provide proper treatment. After the blood test, I was informed I have three positive signs of malaria and I was given A + M. Respondent 7: First, my feet were cold and I felt my neck was cold too. So, then I warmed myself by the fire. After doing this, I was still cold. Then, I went to sleep without taking any medicine. I started out really cold until I covered myself with a blanket, but I was still cold. So, I took two tablets of paracetamol, which [made me feel] better, although I was sweaty for a while. Then the chills started again. After that, I was brought to the hospital. Moderator: How many days were you sick before you went to the hospital? Respondent 7: Three days, then I went to the hospital. As I did not recover, I went to the hospital for a blood test. They asked me where I had been. And I said that I had been in the highland area. Then they asked me to have a blood test. So, I took it. FGD, Kratie Province In other instances, participants said they specifically sought a diagnostic test in order to understand their illness, or in order to correctly diagnose the illness and find the appropriate treatment. In these cases, although they had familiarity with some malaria symptoms, they recognized they could not identify the illness on their own and, therefore, needed a blood test to confirm the cause of the fever. Others reported taking medicine first without being tested. When the symptoms did not improve, they decided testing was necessary to find the "right" treatment. In addition to aiding the identification of the correct treatment, confirmation of disease also prevented the risky behaviour of taking an anti-malarial when it was not needed. Doing so was seen as "dangerous". Participants explained the effects of unnecessary anti-malarial treatments as "harming the blood vessels", "making blood thick", "weakening the blood" or "shattering the blood bullet". Respondent: Because I felt uneasy inside my body, I even took traditional medicine as well as the cocktail which I had bought nearby. I did not recover. I just spent money on the medicine without getting good results. That was why I needed the blood test. 
Interviewer: When you got the blood test, did you request the provider to do it for you or did you just go there to see what the provider would recommend?" Respondent: I asked him to do it because I felt strange in my body. I always had chills and fever and could not get better by taking the medicine. And I wanted to know what illness it was, and why I did not get better even after taking the medicine. So, I asked him to do the blood test to identify the disease. Interviewer: Before you did the blood test, what did you take? Respondent: Beforehand I just took medicine like paracetamol and stuff like that for when you have a fever, and also the thing called Tetra [tetracycline]. Interviewer: Oh, so you didn't have anti-malarial medicine? Respondent: No, that time I did not know I had caught malaria until I did the blood test. If I had known that I had caught malaria, I could have chosen the right medicine IDI, 34 year-old, Male Forest Worker, Kratie Province Respondents also reported taking a blood test before going to work in the forest, particularly if they were experiencing malaria symptoms already. They wanted to ensure the malaria was cured before they travelled long distances where they would be required to work and sleep in areas far from any health facilities. They also reported taking cocktails with them in order to have medicine on hand in the forest in case they experienced new symptoms, as this participant demonstrates: Interviewer: Uncle, did you do the blood test? Respondent: I did it before going to the forest. Interviewer: How many days did you do the blood test before you went to the forest? Respondent: Three days before I went to the forest. However, before I went to the forest, I already had had fever one or two times. That is why I did the blood test. I thought I could not take cocktails if I go to the forest again and I do not know what disease I catch. So I needed to do the blood test to see what illness I have. Then, I could buy cocktails to take to the forest. IDI, 45 year-old, Male Forest Worker, Pursat Province A number of respondents talked about obtaining multiple blood tests. They reported that providers could not recognize the results or needed to double-check the results at times. Many participants also reported confusion in ascertaining what the results of their tests were. Sometimes, they were unclear if they had tested positive for malaria or other illnesses, namely dengue fever or typhoid. In some cases, providers had given respondents anti-malarials even though their test results were negative. Others reported getting mixed results from the test: Respondent 7: First, I had a blood test. The results showed that I was positive for malaria. I recovered after I took medicine for malaria. Then I got the chills again at 8 o'clock. I told my mother that I was not recovered yet. So, my mother took me to have a blood test again. The results showed I was positive for typhoid, not malaria. Discussion The qualitative results presented in this paper shed light on the complexity of treatment-seeking behaviour for malaria among Cambodians, an important issue to understand in order to effectively implement appropriate malaria case management in the country. This study provides a richer understanding of how patient decisions about treatment choice, the sequence of treatments (from home-based to facility-based care) and diagnosis are based on multiple factors. 
These factors include prior fevers, treatment experiences, local beliefs about how fever should be treated, the influence of social networks, practical considerations such as cost and proximity to health facilities, and cultural norms. This section discusses these issues in more detail and suggests interventions that may improve fever case management and treatment-seeking behaviour among Cambodian patients. Cambodian cultural practices, norms and beliefs, as well as practicalities and malaria episode-related factors, drive decision-making about treatment practices. As demonstrated by other research studies [18,26,33,34], this study shows that treatment-seeking behaviour for malaria in Cambodia often starts with self-treatment prior to any biological diagnosis. When deciding how to treat oneself for suspected cases of malaria fever, Cambodians who work in forested areas weigh their beliefs in the effectiveness of these home-based treatments against their appraisal of available outside treatment options, such as proximity to health facilities, availability of financial resources and perceptions of the illness's severity. Others have also found that the perceived severity and duration of illness are influential in the decision to seek and obtain treatment [21,22,24]. The Cambodians examined in this study generally seek health care outside the home when the self-treatment strategies are perceived as failing, or when the illness is perceived as worsening. However, even if they do intend to obtain a confirmed diagnosis and/or obtain appropriate treatment, making the decision to seek outside care is impacted by transport costs, care costs, the distance to a drug outlet and knowledge of tests, as confirmed by others [21,22,24]. Treatment-seeking behaviour is also mediated by previous treatment-seeking behaviour. As evidenced by other research [35], leftover medicines from previous illness episodes are reportedly saved and administered when another family member becomes ill. Cambodian social norms also encourage the use of drug cocktails for malaria treatment. Cocktail use is widespread, a finding demonstrated by this study and confirmed by other quantitative [11,14,16,17,27] and qualitative [28] studies in the country. This study's interviews and focus groups showed that cocktails are the accepted community norm to treat illnesses, suggesting that this form of medicine is provided for other types of illness, not just malaria. The perceived affordability of these cocktails, compared to pre-packaged therapies, is a strong driver of this community norm. Drug cocktails are believed to treat multiple symptoms more effectively. Other evidence has demonstrated similar conclusions, noting that cocktails are preferred because they are less expensive than pre-packaged treatments and patients can only afford to pay for a certain number of pills at one time [28]. This study also revealed the patient-provider interaction, and the cultural norms influencing this interaction, as pivotal in the context of malaria diagnosis and case management in Cambodia. Because providers are perceived as providing sound advice and knowing the best or correct treatments for malaria, respondents noted that they readily accepted the provider's treatment and recommendations for malaria care, including the provision of drug cocktails. They deemed it unnecessary to question or challenge the provider's advice. This norm persisted even in the absence of a suggested diagnosis from the provider. 
In this sense, providers are very influential in terms of the decision-making process, a finding supported by other research [36,37]. Despite this high regard for providers in Cambodian society, many patients do draw their own conclusions about the cause of their fever, prompting them to sometimes operate proactively with providers. As discovered in other research where malaria is described as an individual disease identifiable from previous symptom episodes [38], many respondents in this study associated their symptoms with previous experiences with the disease. Consequently, they felt confident engaging in self-diagnosis and directly asking for malaria treatment from providers before these practitioners even performed their own exams. These patients are also hesitant to spend additional resources on tests when they "know" they have malaria, a factor that may lead to refusals of diagnostic testing. The patient-provider interaction also complicates the treatment outcome, because the patients may be misguided by providers or are unclear about the recommended treatment or the results of their diagnostic test. Some study respondents reported being confused about the results of their malaria test, relating how they were given anti-malarials even after testing negative for malaria. Others noted that many providers did not recommend or offer a diagnostic test even when they sought treatment, a finding supported by quantitative provider and household studies [10,18,39]. In one of these studies, one-third of the surveyed providers believed that malaria infection could still be confirmed without a blood test, even though the vast majority (88%) understood that other diseases can also cause malaria symptoms [39]. In addition to not recommending diagnostic testing, providers interacting with participants in this current qualitative study also appeared to deliver inappropriate care or improperly prescribed medicines. Such findings are supported by other Cambodian research studies, which found that a large proportion of prescriptions contain two or more drugs that could result in adverse drug reactions, as well as inappropriate practices such as over-prescription of medicines, improper instructions on treatment duration and the provision of incorrect medicines and/or their dosages [14]. Interpreting findings for programmatic decision-making Given the complexity of treatment-seeking behaviour for malaria, the findings from this qualitative study may be useful in shaping communication programmes and other interventions that aim to increase informed demand for appropriate management of malaria. The practical recommendations and programmatic considerations that follow suggest ways to change treatment-seeking behaviour as well as improve patient-provider interactions. Incorporate traditional medicines into behaviour change communication messages The use of traditional medicines is deeply rooted in Cambodian cultural beliefs and norms. Rather than eschewing these remedies, interventions that promote diagnostic testing and first-line treatment for confirmed malaria cases could also consider incorporating local methods just for symptom relief until proper care can be found. For example, communication campaigns could promote the use of specific traditional medicines that are believed to reduce fever for when the patient first experiences symptoms and needs relief while travelling to a facility for a diagnosis and more effective treatment. 
Employ simple messages in all BCC materials Given the complexity of treatment-seeking behaviour in Cambodia, clear and simple messaging is needed in all communication materials directed at patients [40,41]. As proposed in a recent review of socially marketed ACT and RDT in Cambodia, some suggested messages include: 1) "If you are going to buy an anti-malarial, only buy the recommended ACT"; 2) "Before you buy an anti-malarial, get tested first"; and 3) "If you test negative, don't take an anti-malarial" [13]. Existing campaigns that highlight the dangers of cocktails may also want to ensure simple messages address the incorrect perception that cocktails are more effective than pre-packaged medicines, particularly for treating multiple symptoms. Encourage patients to ask for diagnostic testing and appropriate treatments Even though Cambodians have a high regard for providers and often trust their advice without question, this study and others demonstrate that providers do not always practise appropriate case management of malaria. As such, BCC campaigns should encourage patients to advocate for correct and comprehensive care from their providers. Messages could educate patients about the care they should expect from a health facility, instructing them to ask for diagnostic testing and proper anti-malarial treatments. Findings from this study suggest that Cambodian patients may be open to this messaging approach, given that they currently feel confident self-diagnosing malaria, refusing testing and directly asking a provider for medicine. The intervention goal would be to convince patients to change their "ask" from cocktail medicines without a confirmed diagnosis to a full course of first-line treatment based on a diagnostic test first. Focus interventions specifically on provider behaviour and education Provider practices and their influence on the patient are clearly important in determining what the patient receives, as identified in this study as well as through other research [30,[42][43][44][45]. Such influence is of particular concern when the evidence demonstrates poor provider practices and lack of adherence to Cambodia's case management strategy. A wide range of interventions can lead to key improvements in professional practice and patient outcomes [42,46,47]. For example, training of practitioners, provider incentives, recurring supervisory visits, clear treatment protocols and a regular supply of equipment are essential for encouraging appropriate malaria diagnosis and treatment [47][48][49]. Clear guidelines should also be provided to practitioners on how to manage patients presenting with malaria-like symptoms but who are parasite-negative. For designing and monitoring structured and systematic interventions, organisations can employ a number of practical tools, such as provider-based logframes [50], validated behaviour change frameworks and provider behaviour change models [51][52][53][54]. Emphasize the inappropriateness of drug cocktails in provider education The positive associations and perceptions of cocktail medicines suggest that behaviour change communication cannot ignore these formulations and leave them out of campaign messaging. As health providers are the ones compiling these cocktail packages, interventions should target this audience through provider education, mentoring and supervision, emphasising the distribution of complete packages of appropriate ACT for confirmed malaria cases. 
Study limitations As with any qualitative research endeavour, researchers trade off the generalisability of the findings for richness and depth in the data. In this study, the non-random sampling procedure and small sample size reduce the representativeness of the results. Furthermore, the snowball procedure used to recruit participants may have biased the sample, as people are more likely to refer others who are similar to themselves. In addition, the results are specific to the population sampled, given the focus on treatment-seeking in just two areas of Cambodia as well as the inclusion of adults reporting malaria fever only. Therefore, the results may not be generalisable to other areas in the country or the Southeast Asian region, and they cannot be extrapolated to treatment decisions for other types of fever. Additional research is needed to identify whether these findings are generalisable to a wider population. A final study limitation is that the transcripts were not back-translated into English to identify discrepancies. Areas for future research While this qualitative research increases understanding of malaria treatment-seeking behaviour in Cambodia and provides valuable insight into the patient perspective, there still remains a dearth of evidence on the supply side (i.e. the providers) of malaria treatment. Further research is needed on the quality of Cambodian health services, the nature of patient-provider interactions from the provider perspective, and in particular, the array of factors that influence and motivate providers' testing and dispensing behaviours with regard to malaria. Provider research is also needed in the area of malaria drug cocktails, to understand how attitudes and practices may be changing as well as to pinpoint the types of interventions that may prove successful in combating the widespread misuse of these medicine combinations. Additional research on cocktails should also determine what medicines are found in these mixtures (e.g. anti-malarials, antibiotics) to better understand how these cocktails may be used to treat illnesses other than malaria, a topic currently being investigated by a number of research collaborators [55]. Finally, malaria treatment interventions will benefit from research studies focused on patient costs, access issues and provider financial incentives. While previous research has documented the availability, price, market share and use of anti-malarials in Cambodia [15,17,18], no known studies have linked demand- and supply-side data to determine Cambodians' ability to access these first-line treatments. Conclusion Although substantial gains have been made in malaria treatment and service delivery in Cambodia, there is still room for improvement in terms of appropriate case management for suspected malaria cases, namely, higher levels of diagnostic testing prior to treatment and a shift away from the provision of drug cocktails for the disease. Given the pressing issue of artemisinin resistance in the area, including multidrug resistance, encouraging providers to adopt both behaviours is essential. This study, one of the first of its kind in Cambodia and in Southeast Asia, examines the demand-side factors that influence patients' behaviour in treatment choice, the sequence of treatment taken and provider interactions, including the acceptance and demand for testing and first-line anti-malarials. 
Effective intervention programmes will leverage these demand-side factors to promote prompt treatment-seeking behaviour for suspected malaria through channels delivering appropriate case management. On the supply side, given the pivotal role providers play in ensuring the delivery of appropriate care for malaria and their influence in shaping patient treatment-seeking behaviour, interventions designed to improve provider knowledge and the practice of appropriate case management are also needed. Future malaria intervention programmes and research that considers both the patient's and the provider's side of the interaction will strengthen appropriate malaria care in Cambodia and, ultimately, lead to reductions in the malaria disease burden.
Unraveling Differences in Molecular Mechanisms and Immunological Contrasts between Squamous Cell Carcinoma and Adenocarcinoma of the Cervix This study aims to refine our understanding of the inherent heterogeneity in cervical cancer by exploring differential gene expression profiles, immune cell infiltration dynamics, and implicated signaling pathways in the two predominant histological types of cervix carcinoma, Squamous Cell Carcinoma (SCC) and Adenocarcinoma (ADC). Targeted gene expression data that were previously generated from samples of primary cervical cancer were re-analyzed. The samples were grouped based on their histopathology, comparing SCC to ADC. Each tumor in the study was confirmed to be high-risk human papillomavirus (hrHPV) positive. A total of 21 cervical cancer samples were included, with 11 cases of SCC and 10 of ADC. Data analysis revealed a total of 26 differentially expressed genes, with 19 genes being overexpressed in SCC compared to ADC (Benjamini–Hochberg (BH)-adjusted p-value < 0.05). Importantly, the immune checkpoint markers CD274 and CTLA4 demonstrated significantly higher expression in SCC compared to ADC. In addition, SCC showed a higher infiltration of immune cells, including B and T cells, and cytotoxic cells. Higher activation of a variety of pathways was found in SCC samples including cytotoxicity, interferon signaling, metabolic stress, lymphoid compartment, hypoxia, PI3k-AKT, hedgehog signaling and Notch signaling pathways. Our findings show distinctive gene expression patterns, signaling pathway activations, and trends in immune cell infiltration between SCC and ADC in cervical cancer. This study underscores the heterogeneity within primary cervical cancer, emphasizing the potential benefits of subdividing these tumours based on histological and molecular differences. Introduction Cervical carcinoma (CC) continues to be a public health concern, despite the adoption of early detection protocols. Cervical cancer rates have declined in developed countries due to effective screening programs. However, in developing nations, there is a concerning increase in prevalence, attributed in part to inadequate screening and follow-up. Globally, there are 341,000 deaths per year from this malignancy, underlining its significant impact on women's health [1][2][3][4]. Persistent infection with high-risk human papillomavirus (HPV) is the predominant cause of CC, accounting for at least 98% of cases [4,5]. HPV impedes the inflammatory response in the epithelium, leading to diminished cytokine and chemokine production. This results in reduced attraction of dendritic cells, allowing the virus to bypass the immune system and remain in the body [6]. It is not yet fully understood why HPV persists in one individual while being cleared in another. Cervical cancers predominantly fall into two histological subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC) [7]. These arise from different cell types in the cervix, including squamous epithelial cells found in the ectocervix and mucin-containing cylindrical cells found in the endocervix. Although these cancers emerge from distinct cell types within the same organ, they are often treated similarly. This variation in cell of origin may contribute to the differences observed in tumor microenvironment (TME) and immunological pathways [8]. 
The TME, a complex and dynamic network of neoplastic, immune, stromal and endothelial cells, along with extracellular matrix (ECM) components, is increasingly recognized as influential in the course of cancer progression and in response to therapy [9]. The unique molecular and genetic characteristics of SCC and ADC lead to distinct TMEs, with potential implications for the immune response to these cancers [10]. Understanding the specific mechanisms of immune evasion employed by SCC and ADC is crucial, given the growing interest in immunotherapy as a promising avenue for cervical cancer treatment [10].

There are very limited studies focusing on the differences between cervical SCC and ADC with respect to molecular and immunological aspects. These studies highlighted differences in the expression of PD-1/PDL-1, as well as the activation of IL-17, JAK/STAT, and Ras signaling pathways between cervical SCC and ADC [10,11]. While existing research has provided valuable insights into the diversity and complexity of cancers, there remains a need for more in-depth studies. This is particularly true for advancing personalized treatment strategies for more targeted therapies in SCC and ADC. The aim of this study was to evaluate and compare the differential gene expression profiles, immune cell infiltration dynamics, and implicated signaling pathways between cervical SCC and ADC.

Results
A total of 21 primary cervical cancer stage Ib1-2 (FIGO 2009) samples were included for data analysis. The samples were divided into the following two distinct groups based on the histopathology status: Squamous Cell Carcinoma (SCC, n = 11) and Adenocarcinoma (ADC, n = 10). The clinical values of the two groups, including age, HPV status and other details, were similar and did not show any significant variation (Table 1). The mean invasion depth of SCC was 8.22 ± 4.1 mm and of ADC was 7.07 ± 3.3 mm (p-value = 0.49).

Differentially Expressed Genes in SCC and ADC
Analyzing the gene expression data revealed a distinct difference in pattern between SCC and ADC. A total of 19 genes were significantly overexpressed and 7 genes were underexpressed in SCC (BH, p-value < 0.05) (Figure 1B and Table 2). Around 20 of the differentially expressed (DE) genes showed a high Fold-of-Change (FOC) value (log2-fold > 2 or <−2) (Figure 1C).

The Immune Landscape of SCC and ADC
Pairwise similarities were used to identify immune cells in tissue samples as described previously [1]. A higher number of B cells, T cells and cytotoxic cells was found in SCC compared to ADC (Figure 2A,B). However, all the other identified cells such as NK cells, mast cells, dendritic cells, macrophages and neutrophils showed similar numbers in the two groups. Importantly, the immune checkpoint markers CD274 and CTLA4 were found to be significantly more highly expressed in SCC (Figure 2C).
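The cell-type comparison described above can be illustrated with a small, hedged sketch. The authors derived immune cell scores with NanoString's pairwise-similarity-based profiling; the snippet below is only a simplified stand-in that scores each cell type as the mean log2 expression of a few marker genes and then compares SCC and ADC samples with a Mann-Whitney U test, mirroring the comparison in Figure 2A,B. The marker lists, variable names and data layout are illustrative assumptions, not the study's actual panel or pipeline.

```python
# Simplified illustration of per-sample immune cell scoring and SCC-vs-ADC comparison.
# NOTE: marker gene sets and the data layout are hypothetical placeholders; the study
# used NanoString nSolver cell-type profiling, not this exact calculation.
import pandas as pd
from scipy.stats import mannwhitneyu

def cell_type_scores(expr: pd.DataFrame, markers: dict) -> pd.DataFrame:
    """Score each cell type as the mean log2 expression of its marker genes.
    expr: rows = samples, columns = genes (assumed log2-normalized counts)."""
    scores = {}
    for cell_type, genes in markers.items():
        present = [g for g in genes if g in expr.columns]
        scores[cell_type] = expr[present].mean(axis=1)
    return pd.DataFrame(scores)

def compare_groups(scores: pd.DataFrame, groups: pd.Series) -> pd.DataFrame:
    """Two-sided Mann-Whitney U test of each cell-type score between SCC and ADC."""
    rows = []
    for cell_type in scores.columns:
        scc = scores.loc[groups == "SCC", cell_type]
        adc = scores.loc[groups == "ADC", cell_type]
        stat, p = mannwhitneyu(scc, adc, alternative="two-sided")
        rows.append({"cell_type": cell_type, "U": stat, "p_value": p,
                     "median_SCC": scc.median(), "median_ADC": adc.median()})
    return pd.DataFrame(rows).sort_values("p_value")

# Example marker sets (hypothetical, for illustration only):
markers = {"B cells": ["MS4A1", "CD19"], "T cells": ["CD3D", "CD3E"],
           "Cytotoxic cells": ["GZMB", "PRF1"]}
```

In practice the score construction matters as much as the test; the point of the sketch is only the group comparison that underlies the reported cell-infiltration differences.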
Higher Activation of the Immune-Related Pathways in SCC Samples Compared to ADC
Pathway analysis of the profiled genes revealed significant activation of several signaling pathways in SCC compared to ADC. Notably, the cytotoxicity pathway was more activated in SCC (BH, p-value < 0.002), as were notch signaling (BH, p-value < 0.003), interferon signaling (BH, p-value < 0.001), metabolic stress (BH, p-value < 0.015), lymphoid compartment (BH, p-value < 0.016), hypoxia (BH, p-value < 0.024), PI3k-AKT (BH, p-value < 0.029) and hedgehog signaling (BH, p-value = 0.03). On the other hand, only the autophagy pathway scored more highly in ADC compared to SCC (BH, p-value = 0.012) (Figure 3B). Additional Gene Set Enrichment Analysis (GSEA) showed the genes involved in the pathway analysis and provided key insights into the unique molecular dynamics underlying these two cancer subtypes (Figure 4).

Discussion
While the histopathological typing of cervical cancer is routinely documented in pathology reports, its translation into clinical or therapeutic strategies has been lacking. In our study, we identified 26 DE genes between SCC and ADC, 19 of which were found to be highly expressed in SCC. Notably, SCC exhibited heightened immune stimulation, evident in increased immune cell infiltration and elevated scores in immune-related pathways compared to ADC. Crucially, the immune checkpoint targets CD274 and CTLA4 demonstrated elevated expression in SCC. It is important to note that our study selected a highly homogeneous group of primary cervical cancer samples. Our findings suggest that primary cervical cancer can be subdivided into at least two distinct groups, discernible not only on grounds of morphology but also at the molecular and immunological level. Furthermore, our data support the consideration of specific anti-CD274 and anti-CTLA4 drugs in the treatment regimen for SCC, offering a tailored therapeutic approach.

The results of our study are similar to previous findings.
The study by Campos Parra et al. compared the molecular features of cervical SCC and ADC by analyzing data from The Cancer Genome Atlas (TCGA) and a Mexican-Mestizo dataset. They identified 70 consistent DEGs across both datasets, associated with pathways such as IL-17 and JAK/STAT [10]. Our DEGs did not overlap with the DEGs of Campos Parra et al.'s study because the measured genes were different. However, both studies suggest that there is an intricate web of signaling pathways influencing the progression and characteristics of cervical cancer, with certain pathways potentially being more specific to SCC or ADC. Notably, both studies identified signaling pathways (such as Notch1 in the current study and JAK/STAT from the previous study) that play a pivotal role in cell growth, differentiation, and immune responses in cancers [10]. A study by Wild et al., which compared the level of tumor-infiltrating lymphocytes (TILs) between cervical SCC and ADC, found that SCC displayed an increased level of TILs [12]. Additionally, a study by Karpathiou et al. that focused on investigating immune checkpoints revealed that SCC exhibited higher expression of both CTLA-4 and PD-L1 [13].

There are also studies comparing the immune-related genes in lung/esophagus SCC and ADC, which found several immune-associated prognostic DEGs. These results indicate the need for tailored therapeutic strategies based on subtype-specific molecular signatures [14,15].
Lin et al., in their study comparing genetic and immunologic aspects of SCC and ADC in different organs, reported that gene expression profiles determined based on histology are generally the same across the organs. In keeping with our study, they also found DEGs distinguishing SCC from ADC, emphasizing the importance of understanding the molecular differences between these subtypes [16].

Another important factor differentiating these two subtypes can be the HPV type and status. Research on vulvovaginal squamous cell carcinoma indicates that HPV-associated tumors frequently harbor PIK3CA mutations, whereas HPV-independent cancers often exhibit alterations in TERT, TP53, and CDKN2A, emphasizing distinct molecular pathways influenced by HPV types. In particular, HPV 18, which is more frequently associated with adenocarcinoma, may induce distinct molecular alterations compared to HPV 16, commonly linked with squamous cell carcinoma. Studies have demonstrated that HPV 18-related adenocarcinomas exhibit fewer pre-cancerous lesions, suggesting a more direct progression to invasive cancer, which could be attributed to unique gene expression profiles influenced by the viral oncogenes. These findings indicate that the carcinogenic mechanisms of HPV 18 could differ significantly from those of HPV 16, potentially impacting the gene expression landscape and, subsequently, the therapeutic responses [17,18]. This is in line with the study by Preti et al., which reported that different HPV types target different cells and exhibit distinct carcinogenic patterns, influencing the progression of vaginal carcinoma. Their research indicates that HPV-associated carcinogenic mechanisms vary by HPV type, which could impact gene expression profiles and therapeutic responses. These variations highlight the importance of considering HPV genotype when developing targeted treatment strategies [19].

Furthermore, the pronounced immune cell infiltration observed in SCC as compared to ADC may be influenced by the differential immune modulation by HPV types commonly associated with these subtypes. This aligns with findings from Kobayashi et al., which highlight the crucial role of HPV-specific immune responses in shaping the tumor microenvironment. Their review suggests that HPV 16, commonly associated with SCC, may induce a stronger Th1-mediated cellular immune response, which could explain the enhanced immune cell infiltration in SCC observed in our findings [20].

Despite the significance of our findings, it is important to acknowledge that the sample size is limited. Validation using a larger independent cohort of samples would be an important next step. Our study's primary strength lies in its detailed exploration of molecular and immunological differences between cervical squamous cell carcinoma (SCC) and adenocarcinoma (ADC), identifying 26 differentially expressed genes. This discovery is pivotal for understanding the complex biology of cervical cancer, setting a foundation for targeted therapeutic strategies based on molecular profiles.

The implications of our findings are significant for clinical practice, suggesting that molecular profiling could lead to more personalized treatment approaches for cervical cancer. Specifically, our data advocate for the consideration of therapies targeting CD274 and CTLA4 in SCC patients, marking a step towards precision oncology that could substantially improve patient outcomes.
Samples
This study builds upon our previous work that included 34 samples of primary cervical cancer patients and 12 samples of a normal cervix [1]. Targeted gene expression profiles of cancer- and immune-related genes were generated from all the samples.

For the current study, we selected the most clinically homogeneous group of samples, consisting of 21 primary cervical cancer cases that did not develop distant recurrence. All the samples were stage Ib1-2 (FIGO 2009) at primary diagnosis, and all were hrHPV positive. To address the aim of this study, samples were grouped based on histopathology diagnosis. Therefore, the gene expression profile of 11 SCC samples was compared to that of 10 ADC samples (Figure 1A).

Statistical Analysis
The previously generated targeted expression profiles using the IO 360 panel of nanoString were reanalyzed [1]. Expression data were re-uploaded into the nSolver software (version 4.0), and the advanced analysis module (version 2.0, NanoString, Seattle, WA, USA) was used for analysis. The raw gene counts were normalized using the most stable housekeeping genes present in the panel, chosen via the geNorm algorithm [21]. The background threshold was set by calculating twice the mean count of the eight negative controls. Genes with counts surpassing this threshold were deemed as detected. Differentially expressed (DE) genes were calculated using a mixture of binomial models, a log linear model, or a simplified negative binomial model. The results were considered significant after applying Benjamini-Hochberg (BH) correction; a BH-adjusted p-value < 0.05 was considered significant. Variations in pathway and cell type scores were examined using the Mann-Whitney U test. Graphical representations including heatmaps and volcano plots were generated using the TBtools software (version 1.055) [22]. R software (version 4.4.0) was employed for data visualization and correlation analyses.
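A rough sketch of the preprocessing and testing steps described above is given below, for orientation only. It approximates geNorm-style housekeeping normalization with a geometric-mean scaling factor, applies the twice-the-negative-control-mean background threshold, and uses a simple per-gene Welch t-test on log2 counts with Benjamini-Hochberg correction in place of nSolver's mixture of binomial, log-linear and simplified negative binomial models. All variable names and the choice of per-gene test are assumptions, not the actual nSolver workflow.

```python
# Illustrative preprocessing + differential-expression sketch; NOT the nSolver pipeline.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def normalize(raw: pd.DataFrame, housekeeping: list) -> pd.DataFrame:
    """Scale each sample (row) by the geometric mean of its housekeeping-gene counts."""
    geo_mean = np.exp(np.log(raw[housekeeping]).mean(axis=1))
    factors = geo_mean.mean() / geo_mean
    return raw.mul(factors, axis=0)

def detected_genes(norm: pd.DataFrame, neg_controls: list) -> list:
    """Keep genes whose mean count exceeds twice the mean of the negative controls."""
    threshold = 2.0 * norm[neg_controls].mean(axis=1).mean()
    return [g for g in norm.columns
            if g not in neg_controls and norm[g].mean() > threshold]

def differential_expression(norm: pd.DataFrame, groups: pd.Series, genes: list) -> pd.DataFrame:
    """Per-gene two-group test on log2 counts with Benjamini-Hochberg correction."""
    log2 = np.log2(norm[genes] + 1.0)
    rows = []
    for g in genes:
        scc = log2.loc[groups == "SCC", g]
        adc = log2.loc[groups == "ADC", g]
        _, p = ttest_ind(scc, adc, equal_var=False)
        rows.append({"gene": g, "log2FC": scc.mean() - adc.mean(), "p": p})
    res = pd.DataFrame(rows)
    res["BH_p"] = multipletests(res["p"], method="fdr_bh")[1]  # BH-adjusted p-values
    return res.sort_values("BH_p")  # genes with BH_p < 0.05 would be called DE
```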
Conclusions
Primary cervical cancer can be divided into squamous cell carcinoma and adenocarcinoma at both the histopathological and molecular level. Cervical SCC exhibits significantly higher immunogenicity compared to ADC. In addition, SCC expresses significantly higher levels of CD274 and CTLA4, which highlights the possibility of using targeted therapy to treat SCC of the cervix.

Figure 1. Overview of the study and the findings. (A) Schematic representation of the study design [1]. (B) Volcano plot of the differentially expressed genes between Squamous Cell Carcinoma (SCC) and Adenocarcinoma (ADC). The dotted red line presents the BH p-value < 0.05. The horizontal dotted lines present the Fold of Change (FOC) > 1. Every dot presents a measured gene; the orange dots are DEGs higher in SCC, green dots are DEGs lower in SCC (BH, p-value < 0.05) and gray dots are not significantly different between the two groups. (C) Heat-map of the DEG genes that were filtered additionally by FOC > 1.

Figure 2. The immune landscape of cervical Squamous Cell Carcinoma (SCC) and Adenocarcinoma (ADC). (A) Bar plot of the p-values resulting from the comparison of immune cell infiltration scores. Yellow bars are for the significant scores and the blue bars are for the not significant scores. (B) Box plots for significantly abundant immune cells in SCC. (C) Box plots of the differentially expressed immune checkpoint targets. (* p-value < 0.05).

Figure 3. Pathway scores for Squamous Cell Carcinoma (SCC) and Adenocarcinoma (ADC). (A) Bar plot of the p-values resulting from the comparison of pathway scores. Yellow bars are for the significant scores and the blue bars are for the not significant scores. (B) Box plots for the following significant pathways: cytotoxicity, notch signaling, interferon signaling, autophagy, metabolic stress, lymphoid compartment, hypoxia, PI3K-Akt, and hedgehog signaling. (* p-value < 0.05).

Figure 4. Comparing the volcano plots of GSEA for the cytotoxicity, notch signaling, interferon signaling, autophagy, metabolic stress, lymphoid compartment, hypoxia, PI3K-Akt, and hedgehog signaling pathways between SCC (orange dots) and ADC (green dots). The dashed red line shows the significance level of the p-value.

Table 1. Clinical characteristics of included patients.

Table 2. A summary of the DEGs in SCC and ADC.
Omega-3 Fatty Acid Supplementation and Coronary Heart Disease Risks: A Meta-Analysis of Randomized Controlled Clinical Trials Background The clinical benefits of omega-3 fatty acids (FAs) supplementation in preventing and treating coronary heart disease (CHD) remain controversial. Therefore, this study aimed to investigate the clinical benefits of omega-3 FA supplementation, with special attention given to specific subgroups. Methods Randomized controlled trials (RCTs) that compared the effects of omega-3 FA supplementation for CHD vs. a control group and including at least 1,000 patients were eligible for the inclusion in this meta-analysis. The relative risk (RR) of all-cause death, major adverse cardiovascular events (MACEs), cardiovascular death, myocardial infarction (MI), stroke, and revascularization were estimated. We analyzed the association between cardiovascular risk and omega-3 FA supplementation in the total subjects. We focused on the cardiovascular risk compared to omega-3 FA in subgroups with different development stages of CHD, omega-3 FA supplementation application dose, diabetes, and sex. PROSPERO Registration Number: CRD42021282459. Results This meta-analysis included 14 clinical RCTs, including 1,35,291 subjects. Omega-3 FA supplementation reduced the risk of MACE (RR; 0.95; CI: 0.91–0.99; p for heterogeneity 0.27; I2 = 20%; p = 0.03), cardiovascular death (RR; 0.94; CI: 0.89–0.99; p for heterogeneity 0.21; I2 = 25%; p = 0.02), and MI (RR; 0.86; CI: 0.79–0.93; p for heterogeneity 0.28; I2 = 19%; p < 0.01), but had no significant effect on all-cause death, stroke, and revascularization. In the subgroup analysis, omega-3 FA supplementation decreased the incidence of MACE and cardiovascular death in acute patients with MI, the risk of MI and stroke in patients with CHD, and the risk of MI in patients with high-risk CHD. 0.8–1.2 g omega-3 FA supplementation reduced the risk of MACE, cardiovascular death, and MI. It was revealed that gender and diabetes have no significant association between omega-3 FA supplementation and MACE risk. Conclusions Omega-3 FA supplementation had a positive effect in reducing the incidence of MACE, cardiovascular death, MI. Regardless of the stage of CHD, omega-3 FA supplementation can prevent the occurrence of MI. The 0.8–1.2 g omega-3 FA supplementation alleviated CHD risk more effectively than lower or higher doses. Systematic Review Registration https://www.crd.york.ac.uk/prospero/, identifier CRD42021282459. INTRODUCTION Omega-3 fatty acids (FAs) are polyunsaturated FA commonly found in marine fish and closely linked to cardiovascular health. Omega-3 FA mainly contains α-linolenic acid (ALA), docosahexaenoic acid (DHA), and eicosapentaenoic acid (EPA). In 1972, Bang and Dyerberg compared the dietary difference and serum lipoprotein levels between Inuit and Danes (1). They found that Inuit were not prone to coronary heart disease (CHD), and they ate a lot of seal and whale meat and also blubber. Meanwhile, an observational experiment confirmed that the inclusion of marine omega-3 FA was negatively correlated with the risk of CHD as well (2). Omega-3 FA may reduce the risk of CHD by antiinflammatory effect, improve vasomotor and endothelial cell function, and lower serum lipoprotein levels (3). In 1994, Lungershausen et al. found that blood pressure and plasma triglycerides (TAGs) were significantly decreased in patients with hypertension following omega-3 FA supplementation treatment (4). 
The specific mechanism by which omega-3 FA supplementation to reduce TAGs remains unclear. At present, it is generally believed that omega-3 FA supplementation can increase mitochondria β-oxidation and thereby reduced endogenous triglyceride TAG synthesis. Concurrently, omega-3 FA supplementation also increases plasma lipoprotein lipase activity to play a protective role in cardiovascular protection (5,6). In JELIS trial, a lipid intervention trial initiated by Yokoyama et al., daily treatment with 1.8 g EPA in combination with statins proved to be more effective than statins alone in reducing cardiovascular events in patients with high cholesterol (7). However, further investigation is required to determine whether omega-3 FA supplementation dose and application in primary prevention or secondary prevention are associated with cardiovascular events. A meta-analysis that was performed by Balk et al. also revealed that omega-3 FA supplementation application could be utilized as an effective lifestyle strategy for preventing CHD (8). This protective effect is positively correlated with the applied dose (8). A meta-analysis, including 13 randomized controlled trials (RCTs) with 1,27,477 subjects, indicated that omega-3 FA supplementation could reduce the risk of MI (RR 0.88, 95% CI: 0.83-0.94) and CVD death (RR 0.92, 95%, CI: 0.88-0.97, I 2 = 6%) (9). Abdelhamid et al. performed a meta-analysis that included 79 RCTs with enrolled 1,62,796 subjects to evaluate the role of omega-3 FA supplementation in preventing CHD (10). They demonstrated that based on low-certainty evidence, omega-3 FA supplementation for 12-88 months could reduce CHD risk and death rate (10). Nonetheless, based on medium to high-certainty evidence, they illustrated that omega-3 FA supplementation has insignificant effect on cardiovascular mortality and events (10). REDUCE-IT was a double-blind, multicenter RCT that aimed to determine the effect of icosapent ethyl on cardiovascular events after providing established statin therapy to patients with CHD or diabetes and other risk factors. The results of REDUCE-IT manifested that compared to the control group, icosapent ethyl can significantly reduce the risk of ischemic events by 25%, including cardiovascular death. Additionally, icosapent ethyl can significantly decrease triglyceride levels, and when combined with statin therapy, it appears to be promising than using statins alone. This appears to be promising that omega-3 FA supplementation can prevent cardiovascular risks, but different results were obtained in the other two large clinical RCTs (11). The STRENGTH trial was the latest large-scale double-blind RCT involving 13,078 subjects with high cardiovascular risks (12). The median treatment time with omega-3 FA supplementation and corn oil in the control group was 38.2 months. However, they eventually exhibited no significant difference in cardiovascular events between the two groups (12). The VITAL trial was a largescale clinical trial for healthy people. This trial attempted to investigate the effects of omega-3 FA supplementation and vitamin D in the primary prevention of cardiovascular events after a 5-year follow-up. Both the results of VITAL and STRENGTH found that omega-3 FA supplementation had no significant effect on the incidence of cardiovascular events; nevertheless, the secondary endpoints of VITAL revealed that omega-3 FA supplementation reduced the risk of MI by 28% (13). 
A newly published meta-analysis stated no clinical benefit for omega-3 FA supplementation in preventing cardiovascular risks in healthy people and patients with CHD (14). The aforementioned RCTs and meta-analysis results were inconsistent, posing a challenge to the clinical application of omega-3 FA supplementation to prevent cardiovascular events. It is hypothesized that omega-3 FA supplementation effectively prevents cardiovascular events in some specific cases. As recommended by the American Heart Association (AHA), in individuals with preexisting CHD, heart failure (HF), and reduced ejection fraction (EF), employing omega-3 FA supplementation for CHD prevention is a reasonable treatment option (15). Our research included RCTs with a large sample size compared with the previous meta-analyses and divided the population enrolled into subgroups of subjects with high risks of CHD, cases with CHD, and patients with acute MI. This study focused on preventing cardiovascular events following omega-3 FA supplementation application. Meanwhile, we focused on cardiovascular risks of the three subgroups after applying omega-3 FA supplementation and explored the association between omega-3 FA supplementation and cardiovascular events, application dose, sex, and having diabetes or not. METHODS This meta-analysis followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (16). This meta-analysis was performed using the PRISMA checklist (Supplementary Table S1). The protocol of this meta-analysis was registered on the PROSPERO database (https://www.crd.york.ac.uk/prospero/) with the Registration Number CRD42021282459. Search Strategy The search strategy was conducted in accordance with the participants, intervention, comparison, outcome, and study design (PICOS) format as follows: P = adults (above 18 years old) with high risks of CHD or confirmed CHD or MI; I = omega-3 FA supplementation; C = control group with or without placebos; O = all-cause death and cardiovascular outcomes including major adverse cardiovascular events (MACEs), cardiovascular death, myocardial infarction (MI), stroke, and revascularization; S = RCT. Chen Gong and ShiChun Shen independently searched databases including PubMed, Google Scholar, Cochrane library, and Clinicaltrial.gov to screen all the eligible RCTs published before October 01, 2021 without language restrictions. The combination terms of keywords omega-3 fatty acid (FA) supplementation, fish oil, eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), RCT, CHD, cardiovascular disease, myocardial infarction (MI), sudden cardiac death, and stroke were searched in the above database. Inclusion Criteria -RCTs enrolled adult subjects over 18 years old with high risks of CHD or confirmed CHD or not. -The sample size of RCTs was more than 1,000. -RCTs were designed to compare omega-3 FA supplementation to control group with or without placebo. -Outcomes of RCTs include one of the following events: allcause death, MACEs, cardiovascular death (CV death), MI, stroke, and revascularization. Data Extraction In each RCT, we extracted trial registration number if applicable, first author, publication year, trial location, participant characteristics, type and dose of omega-3 FA supplementation, treatment duration, subject number of omega-3 FA supplementation group and control group, reported endpoints, and study design. 
Two researchers independently completed data collection, and if any discrepancies were encountered, they were resolved through negotiation. Major adverse cardiovascular event was defined as a composite of cardiovascular death, MI, and stroke. MI and stroke were defined as non-fatal MI and non-fatal ischemic stroke, respectively. However, not all RCTs reported the specific definition of MI and stroke. If RCT did not specify the definition of MI and stroke, respectively, as non-fatal MI and ischemic stroke, the number of total MI and stroke reported in that RCT was collected in our meta-analysis. When the results of the same RCT were updated, we decided to report the latest report. Assessment of Methodological Quality We assessed the risk of bias based on Cochrane collaboration tool for the methodological quality of included RCTs (17). Assessment elements of Cochrane collaboration tool include random sequence generation, allocation concealment, participant and personnel blinding, blinding of outcome assessment, incomplete outcome data, no selective outcome reporting and other sources of bias. Subgroup Analysis To further analyze the cardiovascular risks of omega-3 FA supplementation on specific populations, we divided the included experimental populations into subgroups with high risks of CHD, diagnosed with CHD, and with acute MI based on the participants' prior medical history to conduct relevant subgroup analysis. The subgroup of MI included RCTs enrolled subjects with acute MI within 3 months. CHD subgroup included RCTs that enrolled patients with old MI for more than 3 months and confirmed CHD. A previous study performed by Bernasconi et al. found that cardiovascular mortality and fatal MI prevention can be achieved with <0.8-1.2 g/d omega-3 FA, and the protective effect quickly plateaued with the increasing dosages (18). According to the treatment doses of omega-3 FA supplementation, we divided the included studies into low-dose studies with dose of <0.8 g, moderate-dose studies with dose equal to or >0.8 and <1.2 g, and also high-dose studies with dose equal to or more than 1.2 g. The subgroup analyses of diabetes and sex were performed to determine the influencing factors of the cardiovascular effect of omega-3 FA supplementation. Statistical Analysis We used the Peter's test and regression test for funnel plot asymmetry to assess the risk of bias. I 2 was used for the heterogeneity between each RCTs. If I 2 was <50% or p for heterogeneity more than 0.10, the fixed-effects model was used, and if I 2 was >50% or p for heterogeneity <0.10, the randomeffects model was used. A sensitivity analysis was conducted to reduce and exclude sources of heterogeneity as follows. (1) We compared the calculation results of the random-effects model and the fixed-effects model to verify the robustness of the results of our research. (2) We eliminated each study in turn to observe the change in I 2 . If the value of I 2 was significantly reduced after an RCT was eliminated, then the RCT is the source of heterogeneity. (3) We performed subgroup analysis according to the medical history of CHD, the doses of omega-3 FA supplementation, diabetes or not, and sex. In this meta-analysis, p < 0.05 was considered significant. R (version 4.1.1) was used to compute statistical tests [relative risks (RRs), confidence intervals, sensitivity analyses, and I²-test]. Tables and forest plots produced by R (version 4.1.1) were used to show data. 
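As a rough illustration of the pooling described above, the sketch below computes study-level log relative risks from event counts, combines them with a fixed-effect inverse-variance model, reports Cochran's Q and I², and optionally switches to a simple DerSimonian-Laird random-effects model when heterogeneity is high. This is a generic meta-analysis calculation written for orientation; it is not the authors' actual R workflow, and the example counts are placeholders.

```python
# Generic inverse-variance pooling of relative risks (illustrative sketch only).
import numpy as np

def pooled_rr(events_t, n_t, events_c, n_c, random_effects=False):
    """Pool per-study relative risks; returns (RR, 95% CI, Cochran's Q, I^2 in %)."""
    e_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
    e_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c      # variance of each log RR
    w = 1.0 / var                                     # fixed-effect weights
    mean_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mean_fe) ** 2)           # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    if random_effects:                                # DerSimonian-Laird tau^2
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w = 1.0 / (var + tau2)
    mean = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (np.exp(mean - 1.96 * se), np.exp(mean + 1.96 * se))
    return np.exp(mean), ci, q, i2

# Hypothetical counts for three studies (treated events, treated N, control events, control N):
rr, ci, q, i2 = pooled_rr([120, 200, 90], [5000, 8000, 4000],
                          [140, 230, 100], [5000, 8000, 4000])
```

Running the same routine with random_effects=True mirrors the sensitivity check of comparing fixed- and random-effects estimates described above.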
RESULTS
This research retrieved 20,819 articles, of which 20,777 were excluded based on the title and abstract. Subsequently, we excluded RCTs with fewer than 1,000 subjects and those assessing non-supplement omega-3 FA (e.g., food fortified with omega-3 or dietary advice). Finally, 14 RCTs, including 135,291 subjects, were included in this meta-analysis. These subjects include people at high CHD risk, people diagnosed with CHD, and patients with acute MI. A total of 67,704 subjects received omega-3 FA supplementation at doses ranging from 0.4 to 4 g, whereas 67,587 cases received control treatment. Figure 1 displays the process of identifying relevant RCTs and obtaining the final set of included studies. Table 1 shows the characteristics of the finally included 14 RCTs. According to the design of each RCT, we used the Cochrane tool to score the 14 RCTs for risk of bias. Figure 2 demonstrates the methodological quality of each RCT and shows that the risk of bias of the RCTs included in our meta-analysis was low.

Endpoints
All RCTs, including 135,291 subjects, reported the occurrence of all-cause death. All-cause death for the omega-3 FA-supplemented group (7.71%) was similar to that of the control group (7.83%) (RR 0.98; 95% CI: 0.95-1.02; p for heterogeneity 0.10; I² = 34%; p = 0.35) (Figure 3A). Peter's test and the funnel plot were employed to detect the risk of bias with p < 0.05. From the perspective of figure geometry, the funnel chart is symmetrical (Figure 3B). The aforementioned evidence showed that the risk of bias of RCTs included in our meta-analysis was low. According to the low heterogeneity of included RCTs, a fixed-effects model was utilized to analyze data. Meanwhile, we used a random-effects model to perform the same analysis. The results analyzed by the random-effects model are consistent with those of the fixed-effects model, confirming the robustness of the current results (Figures 3, 4; Supplementary Figure S1).

Subgroup Analysis
Development Stage of CHD
Three RCTs enrolled 10,481 patients who had an MI within the previous 3 months (19,21,28), four RCTs enrolled 27,036 patients with confirmed CHD (11,22,23,26), and the other seven studies included 97,774 subjects with high CHD risks (7,12,13,20,24,25,27). If an RCT included subjects with different development stages of CHD, it was assigned to the subgroup corresponding to the development stage with the largest number of its participants. The JELIS and STRENGTH trials included many subjects with high risks of CHD and also a small number of subjects with confirmed CHD (7,12). However, no specific subgroup outcomes were reported; therefore, we included them in the subgroup of high risks of CHD. As the number of patients with CHD in the REDUCE-IT trial was larger than that of subjects with high risks of CHD, this RCT was classified into the CHD subgroup in our meta-analysis (11). Although ALPHA OMEGA included all patients with MI, we still included them in the CHD subgroup because the participants had undergone MI more than 3 months earlier (22).

Dose of Omega-3 FA
Two RCTs applied low doses of omega-3 FA supplementation to 7,338 subjects (22,23), the other eight RCTs applied moderate doses of omega-3 FA supplementation to 87,037 subjects (13,19-21,24-27), whereas the remaining four RCTs applied high doses of omega-3 FA supplementation to 40,916 subjects (7,11,12,28). To determine the association between omega-3 FA supplementation doses and cardiovascular risk, we conducted a subgroup analysis according to the applied dose (Figure 6).
Treatment with lower and higher doses of omega-3 FA supplementation did not exhibit similar benefits for MACE, MI, and cardiovascular death. Regardless of the dose of omega-3 FA supplementation applied, the data of the RCTs support that omega-3 FA supplementation has no significant effect on all-cause death, stroke, and revascularization.

Sex
Two RCTs reported the number of MACE events in male and female subjects (12,13). A total of 21,296 male subjects and 17,653 female subjects were enrolled. It can be observed from the results that gender has no significant impact on MACE after applying omega-3 FA supplementation (men: RR 0.98; 95% CI: 0.89-1.08; p for heterogeneity 0.37; I² = 0.0%; p = 0.71 and women: RR 0.93; 95% CI: 0.81-1.07; p for heterogeneity 0.99; I² = 0.0%; p = 0.30, respectively) (Figure 6).

DISCUSSION
This meta-analysis demonstrated that additional omega-3 FA supplementation could decrease the incidence of MACE, cardiovascular death, and MI. However, no significant effect on all-cause death, stroke, and revascularization was observed between omega-3 FA supplementation and the control group. Our subgroup analysis found that omega-3 FA supplementation can reduce the risks of MACE and MI in people with acute MI, the risk of MI and stroke in people with CHD, and the risk of MI in patients with high risks of CHD.

FIGURE 5 | Forest plot of the subgroups of subjects with MI, CHD, and high risks of CHD. MI, myocardial infarction; CHD, coronary heart disease; MACEs, major adverse cardiovascular events; CV death, cardiovascular death. *Random-effects model.

In addition, according to the omega-3 FA dose used in the RCTs, we found that moderate-dose omega-3 FA ranging from 0.8 to 1.2 g attenuated the incidence of MACE, cardiovascular death, and MI. However, lower and higher dose omega-3 FA supplementation did not show a similar advantage in reducing the risk of MACE, cardiovascular death, and MI. A wide range of studies illustrated that the cardiovascular benefits of omega-3 FA supplementation are positively correlated with the given dose (26). There was a discrepancy between this study's results and previously reported findings, which may be associated with the baseline omega-3 index (EPA+DHA in red blood cells) (29). A baseline omega-3 index >8% implied a lower risk of cardiovascular events (30). In the high-dose subgroup, the OMEMI study performed by Kalstad et al. was mainly conducted in Norway, where fish consumption is high. Since the daily diet is rich in omega-3 FA, it is evident that the subjects' baseline omega-3 index is significantly higher. The results of the OMEMI trial revealed that 1.8 g of n-3 PUFAs daily for 2 years in elderly patients with a recent AMI did not reduce the incidence of cardiovascular events or all-cause death. Studies also revealed that a daily dose of 0.8-1.2 g omega-3 FA supplementation can reduce the risks of cardiovascular death and MI (18,29,31). As the dose increases, it will not increase the cardiovascular benefits but rather may increase the risk of malignant arrhythmias such as atrial fibrillation (AF) (18). In the STRENGTH and OMEMI experiments, high omega-3 FA supplementation doses increased the risk of AF. As this adverse impact of excessive doses becomes more prevalent, the cardiovascular advantages of omega-3 FA supplementation may gradually stabilize or even diminish at moderate to high doses (32). Omega-3 FA is mainly found in marine fish.
Previous epidemiological studies stated that compared to non-Mediterranean diets, a Mediterranean diet containing a significant amount of omega-3 FA alleviates cardiovascular risks (33,34). In people on a non-Mediterranean diet, even a small amount of dietary fish can also reduce cardiovascular risk (35). GISSI-P included Italians within 3 months of MI (19). They were on a Mediterranean diet and received omega-3 FA supplementation to reduce the risk of death from MI. Patients with MI within the previous 3-14 days were enrolled in OMEMI; in that trial, no significant difference in cardiovascular events was observed between those who received omega-3 FA supplementation and those who did not (28). In that study, 73.2% of subjects ate fish at least once a week during the study period, significantly reducing the incidence of cardiovascular events. Eating fish once per week has been demonstrated to reduce the risk of cardiovascular events by 52% compared to eating fish once per month in a previous prospective study, which enrolled 20,551 subjects (36).

FIGURE 6 | Forest plot of the subgroups of the application dose of omega-3 FA, diabetes or not, and sex. MACEs, major adverse cardiovascular events; CV death, cardiovascular death; MI, myocardial infarction. *Random-effects model.

In the STRENGTH experiment, all subjects received 4 weeks of statin treatment before grouping, and no significant difference was observed in cardiovascular events between omega-3 FA supplementation treatment and the control group (11). Aung et al. reported the association between omega-3 FA supplementation and cardiovascular events in patients with a history of MI (37). The RCTs enrolled in their meta-analysis included subjects with a medical history of MI as well as some who had acute MI (37). However, the time from MI to omega-3 FA supplementation use was quite different among these subjects, which is a significant factor in reporting biased results. Their meta-analysis demonstrated that omega-3 FA supplementation utilization in patients with MI had no influence on all-cause death, cardiovascular death, MI, any cardiovascular events, and stroke compared to applying placebo (37). In our meta-analysis, we divided patients with MI into old and acute stages according to the occurrence time of MI to decrease the bias. Our subgroup analysis found that omega-3 FA supplementation significantly reduced the risk of MACE and MI in patients with acute MI. This is probably because we had a clear definition of the acute MI subgroup, whereas previous meta-analyses did not subdivide the time of MI; patients with old MI and acute MI were included in their studies without distinction. In Casula's research (38), omega-3 FA supplementation was found to alleviate the risk of MI in patients with CHD, which is compatible with our results. A recent meta-analysis conducted by Rizos et al. performed a prognostic event analysis based on the dose of omega-3 FA supplementation administered in each included study (13). The meta-analysis reported outcomes including all-cause death, cardiovascular death, sudden death, MI, and stroke. In contrast to our research results, they disclosed that an omega-3 FA supplementation dose of <1 g had a non-significant effect on the above results. Their meta-analysis included 17 studies, and the number of subjects included in each study ranged from 72 to 25,869. It is commendable that they included as many studies as possible to ensure the reliability of results.
However, including many studies with small sample sizes may lead to unreliable conclusions. This may explain why their results conflict with ours. Compared with that previous meta-analysis, which contains more RCTs, our research includes 14 RCTs, and we strictly adhered to the inclusion criterion that the sample size of included RCTs must be >1,000. Our results were consistent in calculations by the two effect models and Peter's test. Two previously published meta-analyses provided information about the use of omega-3 FA supplementation in the secondary prevention of CHD (38,39). However, they did not specify the past medical history and CHD development stages of the included populations and included some RCTs with <100 participants. Our research confirmed previous conclusions and provided extra findings with four specific subgroup analyses. Our approach circumvents the aforementioned problems that occur in previously published analyses. First, we excluded RCTs with a small sample size and adopted two effect models to show the robustness of our results. Second, we conducted a subgroup analysis of the included population to analyze the differences in the effect of omega-3 FA supplementation on cardiovascular events for primary or secondary prevention of CHD according to different stages of CHD development. Third, we performed subgroup analyses based on the usage dose of omega-3 FA supplementation, diabetes or not, and sex. Our research provides a better answer to two questions regarding omega-3 FA supplementation: at which development stage of CHD it should be considered to prevent cardiovascular events, and what the proper dose of omega-3 FA supplementation is.

This study encounters some limitations. (1) Although this study strictly followed the inclusion criteria and included experiments with more than 1,000 subjects, the number of studies included in some subgroup analyses was relatively small. More research is still required to support our results. (2) The RCTs included in this study were mostly performed in Western countries and lack sufficient data on Asians. (3) Subjects with CHD and acute MI may receive basic secondary prevention strategies, and whether secondary prevention measures would affect the clinical benefits of omega-3 FA supplementation cannot be excluded. The previously mentioned limitations require more large-scale RCTs to investigate further.

CONCLUSION
This study conducted a meta-analysis of 14 large-scale RCTs to investigate the risk of cardiovascular events after receiving omega-3 FA supplementation. We found that omega-3 FA supplementation can reduce the risk of MACE, cardiovascular death, and MI. Additionally, it exhibits good clinical benefits for primary prevention and secondary prevention of CHD. An omega-3 FA supplementation dose ranging from 0.8 to 1.2 g is superior to other doses in reducing cardiovascular risks.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS
SS and CG conceptualized the study and performed screening, data extraction, and data analysis with R software. Risk of bias was assessed by LZ and YX. Original draft preparation, reviewing, and editing were performed by SS, CG, and KJ. The work was supervised and funded by LM. All authors contributed to the article, approved, read, and agreed to the submitted version of the manuscript.
Accelerated Consensus for Multi-Agent Networks through Delayed Self Reinforcement

This article aims to improve the performance of networked multi-agent systems, which are common representations of cyber-physical systems. The rate of convergence to consensus of multi-agent networks is critical to ensure cohesive, rapid response to external stimuli. The challenge is that increasing the rate of convergence can require changes in the network connectivity, which might not be always feasible. Note that current consensus-seeking control laws can be considered as a gradient-based search over the graph's Laplacian potential. The main contribution of this article is to improve the convergence to consensus by using an accelerated gradient-based search approach. Additionally, this work shows that the accelerated-consensus approach can be implemented in a distributed manner, where each agent applies a delayed self reinforcement, without the need for additional network information or changes to the network connectivity. Simulation results of an example networked system are presented in this work to show that the proposed accelerated-consensus approach with DSR can substantially improve synchronization during the transition by about ten times, in addition to decreasing the transition time by about half, when compared to the case without the DSR approach. This is shown to improve formation control during transitions in networked multi-agent systems.

(Funding from NSF grant CMMI 1536306 is gratefully acknowledged.)

I. INTRODUCTION
Multi-agent networks are common cyber-physical systems with applications such as autonomous vehicles, swarms of robots and other unmanned systems, e.g., [1]-[8]. The performance of such systems, such as the response to external stimuli, depends on rapidly transitioning from one operating point (consensus value) to another, e.g., as seen in flocking, [9], [10]. Thus, there is interest in increasing the rate of convergence to consensus for such networked multi-agent systems. A challenge is that there are fundamental limits to the achievable convergence to consensus using existing graph-based update laws for a given network, e.g., of the form

Z[k+1] = Z[k] - γ K Z[k] = P Z[k],   (1)

where the current state is Z[k], the updated state is Z[k+1], γ is the update gain, K is the graph Laplacian and P is the Perron matrix. Hence the convergence to consensus depends on the eigenvalues of the Perron matrix P, which in turn depends on the eigenvalues of the graph Laplacian K. For example, if the underlying graph is undirected and connected, it is well known that convergence to consensus can be achieved provided the update gain γ is sufficiently small, e.g., [11]. The gain γ can be selected to maximize the convergence rate. However, for a given graph (i.e., a given graph Laplacian K), the range of the acceptable update gain γ is limited, which in turn, limits the achievable rate of convergence as shown in previous work [12]. Although it is possible to change the convergence by choosing the Perron matrix [13], i.e., by choosing a different graph structure for the network, the maximum rate of convergence with current graph-based updates is bounded for a given network structure. As discussed in [12], current limitations in graph-based approaches motivate the development of new approaches to improve the convergence to consensus. Note that the convergence can be slow if the number of agent inter-connections is small compared to the number of agents, e.g., [14].
Randomized time-varying connections can lead to faster convergence, as shown in, e.g., [14]. The update sequence of the agents can also be arranged to improve convergence, e.g., [15]. When such time-variations in the graph structure or selection of the graph Laplacian K are not feasible, the need to maintain stability limits the range of acceptable update gain γ, and therefore, limits the rate of convergence. This convergencerate limitation motivates the proposed effort to develop a new approach to improve the network performance. The major contribution of this work is to use an acceleratedgradient-based approach to modify the standard update law in Eq. (1) for networked multi-agent systems. Previous works have used such acceleration methods (also referred to as the Nesterov's gradient method) to improve the convergence of gradient-based search in learning algorithms, e.g., see [16], [17]. Another contribution is to show that the proposed accelerated approach to consensus can be implemented by using a delayed self reinforcement (DSR), where each agent only uses current and past information from the network. This use of already existing information is advantageous since the consensus improvement is achieved without the need to change the network connectivity and without the need for additional information from the network. This work generalizes the author's previous works in [12], [18], [19], which considered a momentum term only to improve the convergence to consensus. Simulation results of an example networked system are presented in this work to show that the proposed acceleratedconsensus approach with DSR can substantially improve synchronization during the transition by about ten times, in addition to decreasing the transition time by about half, when compared to the case without the DSR approach. This is shown to improve formation control during transitions in networked multi-agent systems. II. PROBLEM FORMULATION A. Background: network-based consensus control Let the multi-agent network be modeled using a graph representation, where the connectivity of the agents is represented by a directed graph (digraph) G = (V, E), e.g., as defined in [11]. Here, the agents are represented by nodes V = {1, 2, . . . , n+1}, n > 1 and their connectivity by edges E ⊆ V × V, where each agent j belonging to the set of neighbors N i ⊆ V of the agent i satisfies j = i and (j, i) ∈ E. B. Graph-based control The consensus control for the multi-agent network is defined by the graph G, aŝ whereẐ represents the states of the agents, k represents the time instants t k = kδ t , γ is the update gain, u is the input to each agent, the weight a i,j is nonzero (and positive) if and only if j is in the set of neighbors N i ⊆ V of the agent i, and the terms l ij of the (n + 1) × (n + 1) Laplacian L of the graph G are real and given by where each row of the Laplacian L adds to zero, i.e., from Eq. (3), the (n + 1) × 1 vector of ones 1 n+1 = [1, . . . , 1] T is a right eigenvector of the Laplacian L with eigenvalue 0, C. Network dynamics One of the agents is assumed to be a virtual source agent, which can be used to specify a desired consensus value Z d . Without loss of generality, the last node, n+1 is assumed to be a virtual source agent. Moreover, each agent in the network should have access to the source agent's state Z s =Ẑ n+1 through the network, as formalized below. 
Assumption 1 (Connected graph): The digraph G is assumed to have a directed path from the source node n + 1 to any other node i in the graph, i.e., i ∈ V \(n + 1). Some properties of the graph G without the source node n + 1 are listed below, e.g., [11]. In particular, consider the n × n pinned Laplacian matrix K obtained by removing the row and column associated with the source node n + 1, the following partitioning of the Laplacian L is invertible, i.e., and B is an Non-zero values of B j implies that the agent j is directly connected to the source Z s . 1) The pinned Laplacian matrix K is invertible from the Assumption 1 and the Matrix-Tree Theorem in [20]. 2) The eigenvalues of K have have strictly-positive, real parts. 3) The product of the inverse of the pinned Laplacian K with B leads to a n × 1 vector of ones, i.e., The dynamics of the non-source agents Z represented by the remaining graph G \s, be given by where P is Perron matrix. A sufficiently small selection of the update gain γ will stabilize the dynamics in Eq. (8), e.g., see [11], i.e., all eigenvalues λ P,i of the Perron matrix where 1 n×n is the n × n identity matrix, will lie inside the unit circle. D. Stable consensus With a stabilizing update gain γ, the state Z of the network (of all non-source agents) converges to a fixed source value Z s , e.g., for a step change in the source value Z s , i.e., Z s [k] = Z d for k > 0 and zero otherwise. Since the eigenvalues of P are inside the unit circle, the solution to Eq. (8) for the step input converges as k → ∞. Therefore, taking the limit as k → ∞ in Eq. (8), and from invertibility of the pinned Laplacian K from Eq. (5). as k → ∞. Then, from Eq. (7), the state Z[k] at the non-source agents reaches the desired state Z d as time step k increases, i.e, Thus, the control law in Eq. (8) achieves consensus. E. Convergence-rate limit For a given pinned Laplacian K, the range of the acceptable update gain γ is limited, which in turn limits the achievable rate of convergence. If is an eigenvalue of the pinned Laplacian K with a corresponding eigenvector V K,m , i.e., then is an eigenvalue of the Perron matrix P for the same eigenvector V K,m , since Lemma 1 (Perron matrix properties): The network dynamics in Eq. (8), is stable if and only if the update gain γ satisfies Proof: See [12]. The model in Eq. (8) can be rewritten as where δ t is the time between updates. For a sufficiently-small update time δ t it can considered as the discrete version of the continuous-time dynamicṡ The eigenvalues of γ δt K increase proportionally with γ and inversely with update time interval δ t . Therefore, the settling time T s of the continuous time system decreases as the gain γ δt increases. The sampling time δ t is bounded from below based on the sensing-computing-actuation bandwidth of the agents in the network, and the gain γ is limited by the network structure as in Lemma 1. Consequently, the smallest possible update time δ t and the given network structure limit the fastest possible settling time for a given network. F. 
F. The settling-time improvement problem

The research problem addressed in this article is to reduce the settling time T_s (from one consensus state to another) under step changes in the source value (i.e., improve convergence), where each agent can modify its update law 1) using only existing information from the network neighbors, 2) without changing the network structure (network connectivity K), and 3) without changing the update-time interval δ_t, which limits the maximum gain γ.

B. Accelerated gradient search

In general, the convergence of the gradient-based approach as in Eq. (17) can be improved using accelerated methods. In particular, the Nesterov modification [16], [17] of the traditional gradient-based method is applied to Eq. (17). This accelerated-gradient-based input results in a modification of the system in Eq. (2), and consequently, the dynamics of the non-source agents Z, represented by the remaining graph G \ s and given by Eq. (8), become the accelerated dynamics of Eq. (26).

Remark 1: For directed graphs, the potential in Eq. (18) does not lead to the graph Laplacian [21], [22]. Instead, the graph potential (without the source node) can be considered directly, and the application of the accelerated-gradient approach leads to the same Eq. (26).

C. Implementation using delayed self reinforcement

The above accelerated-gradient approach for multi-agent networks can be implemented without additional information from the network and without changes to the network connectivity. For an agent i, let v_i[k] = K_i Z[k] be the information obtained from the network, where K_i is the i-th row of the pinned Laplacian K. Then, the update of the agent state Z_i follows from Eqs. (22) and (26), where B_i is the i-th row of the source connectivity matrix B. The delayed self-reinforcement (DSR) approach, however, requires each agent to store delayed versions of the information from the network, as illustrated in Fig. 1.

D. Quantifying synchronization during transition

In general, it is not only important that the network reaches a new consensus value Z_d, but also that during the transition the network states are similar to each other. For example, consider the case when the states Z of the agents are horizontal velocities V_x. Then having similar velocities during the transition (i.e., synchronization during the transition) can aid in maintaining the formation, without the need for additional control actions. The lack of cohesion or synchronization during the transition can be quantified in terms of the deviation ∆ of the response from the network average, accumulated over the transition, as in Eq. (27), where k_{T_s} is the number of steps needed to reach the settling time T_s, which is the time by which all agent responses Z reach and stay within 2% of the final value Z_d, Z̄ is the average value of the state Z over all individual agent state-components z_i, and |·|_1 is the standard vector 1-norm for any vector Ẑ. A normalized measure ∆* that removes the effect of the response speed is obtained by dividing the expression in Eq. (27) by the settling time T_s, i.e., ∆* = ∆/T_s in Eq. (29). Note that the system's transient response is more synchronized if the normalized deviation ∆* is small.

IV. RESULTS AND DISCUSSION

The step responses of an example system, with and without DSR, are comparatively evaluated. Moreover, the impact of using DSR on the response of a networked formation of agents is illustrated when the networked state is the velocity during an acceleration maneuver.

A. System description

The example network used in the simulation is shown in Fig. 2. It consists of n = 25 non-source agents arranged uniformly (initially) on a 5 × 5 grid.
The minimal initial spacing between the agents is one. The last non-source agent Z_n has access to the source Z_s. The update gain γ of the system in Eq. (8) without DSR is selected to ensure stability. The weight of each edge is selected as one, i.e., a_{ij} = 1 in Eq. (3). The maximum value γ̄ of the update gain γ in Eq. (8) can be found from Lemma 1 as γ̄ = 0.2763. The update gain γ needs to be smaller than this maximum value to ensure stability without DSR, and therefore, the following simulations use the update gain γ = 0.1382 = γ̄/2 < γ̄. The discrete-time system without DSR in Eq. (8) settles to within 2% of the final value in k_{T_s} = 1331 steps, and for a settling time of T_s = 1 s, the sampling time is δ_t = T_s / k_{T_s} = 7.5131 × 10^{−4} s.

B. Performance without and with DSR

The desired velocity of the source Z_s is selected to increase, with a sinusoidal acceleration profile, from zero to the desired consensus value of Z_d = 0.02, as shown in Fig. 3. The response of the states Z achieves the final desired value Z_d with a settling time of T_s = 1 s to reach and stay within 2% of the final value Z_d, as shown in Fig. 3. The response with DSR is substantially faster when compared to the response without the DSR, for the same desired source Z_s. With the DSR gain selected as β = 0.95 in Eq. (26), the settling time is T_s = 0.4756 s, i.e., to reach and stay within 2% of the final value Z_d, as shown in Fig. 4. Thus, with a smaller settling time, the response with the DSR-based accelerated consensus is about 50% faster than the response without DSR, as also seen by comparing the responses in Figs. 3 and 4.

In addition to increasing the rate of convergence, and more importantly, the accelerated consensus leads to better synchronization during the transition. The deviation ∆ (from synchronization) as in Eq. (27) of the response without the accelerated consensus, as in Eq. (8), is ∆ = 0.0103. The normalized deviation ∆* in Eq. (29) is the same, ∆* = ∆/T_s = 0.0103, since the settling time is T_s = 1 s without DSR. The use of the accelerated consensus reduces the loss of synchronization during the transition: the deviation is ∆ = 0.0006 for the accelerated-consensus case. Even the normalized deviation (with the smaller settling time T_s = 0.4756 s) for the accelerated consensus using DSR is ∆* = 0.0012, which is about ten times smaller than in the case without the DSR. Thus, the proposed accelerated-consensus approach with DSR can substantially improve synchronization during the transition by about ten times, in addition to decreasing the transition time by about half, when compared to the case without the DSR approach.

C. Impact on formation spacing

To comparatively evaluate the impact of maintaining synchronization during the transition between consensus values, the states Z of the agents are considered to represent the horizontal velocity Z = V_x of each agent. Then, the horizontal position X of each agent is found by integrating the velocity, X[k+1] = X[k] + δ_t Z[k]. The initial and final positions with and without the accelerated consensus are compared in Fig. 5. As seen in the figure, the accelerated-consensus approach implemented with DSR as in Eq. (26) leads to better formation control when compared to the case without the DSR approach as in Eq. (8). In this example, no control actions are taken to maintain the formation, to focus on the comparative evaluation of the performance with and without the proposed accelerated-consensus approach.
Nevertheless, the ability of the accelerated-consensus approach to reduce distortions in the formation can potentially improve the performance of other methods with control actions to maintain the formation. D. Summary of results The use of the accelerated consensus, implemented using DSR, results in a faster convergence to the consensus value. Moreover, during the transition the network is more cohesive with the accelerated-consensus approach, which results in better formation control. While this article focussed on a quadratic potential with a linear network dynamics, the accelerated-consensus approach could also be implemented for the nonlinear case. V. CONCLUSIONS This article showed that accelerated-gradient methods, used to improve the convergence in gradient-based search algorithms, can be used to improve current consensus algorithms in networked multi-agent systems. Moreover, the article developed implementation of the proposed accelerated consensus using delayed self reinforcement (DSR), where each agent only uses current and past information from the network. This is advantageous since the consensus improvement is achieved without the need to change the network connectivity and without the need for additional information from the network. Simulation results showed that the proposed accelerated-consensus approach with DSR can substantially improve synchronization during the transition by about ten times, in addition to decreasing the transition time by about half, when compared to the case without the DSR approach. This was shown to improve formation control during transitions in networked multi-agent systems.
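To convey the flavor of the comparison reported in Section IV, the sketch below simulates a 5 × 5 pinned grid with the gain γ = 0.1382 quoted above, once with the plain update of Eq. (8) and once with a simple momentum-style delayed-self-reinforcement term β(Z[k] − Z[k−1]) added to it. The momentum form, the step (rather than sinusoidal) source profile, and the deviation normalization are simplifying assumptions for illustration and are not claimed to match Eq. (26) or Eq. (27) exactly.

```python
import numpy as np

def pinned_grid_laplacian(m):
    """Pinned Laplacian K and source vector B for an m x m grid of non-source agents
    with unit edge weights; the last grid node is additionally connected to the source."""
    n = m * m
    A = np.zeros((n, n))
    for r in range(m):
        for c in range(m):
            i = r * m + c
            if c + 1 < m:
                A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < m:
                A[i, i + m] = A[i + m, i] = 1.0
    B = np.zeros(n)
    B[-1] = 1.0
    K = np.diag(A.sum(axis=1) + B) - A
    return K, B

def simulate(K, B, gamma, Zs=0.02, steps=4000, beta=0.0):
    """Plain pinned consensus for beta = 0; beta > 0 adds a momentum-style
    self-reinforcement term beta*(Z[k] - Z[k-1]) (assumed DSR variant)."""
    Z = np.zeros(K.shape[0])
    Z_prev = Z.copy()
    hist = []
    for _ in range(steps):
        Z_next = Z + beta * (Z - Z_prev) - gamma * (K @ Z - B * Zs)
        Z_prev, Z = Z, Z_next
        hist.append(Z.copy())
    return np.array(hist)

def settling_steps(hist, Zd, tol=0.02):
    """Steps until every agent reaches and stays within tol*|Zd| of Zd."""
    outside = ~np.all(np.abs(hist - Zd) <= tol * abs(Zd), axis=1)
    idx = np.where(outside)[0]
    return int(idx[-1]) + 1 if idx.size else 1

def deviation(hist, k_ts):
    """Average per-agent deviation from the instantaneous network mean, accumulated
    up to settling (assumed normalization, in the spirit of Eq. (27))."""
    k_ts = max(int(k_ts), 1)
    Zbar = hist[:k_ts].mean(axis=1, keepdims=True)
    return np.abs(hist[:k_ts] - Zbar).mean()

K, B = pinned_grid_laplacian(5)
gamma = 0.1382                              # gain quoted for the example in Section IV
for beta in (0.0, 0.95):                    # without and with the DSR-style term
    hist = simulate(K, B, gamma, beta=beta)
    k_ts = settling_steps(hist, Zd=0.02)
    print(f"beta = {beta}: settling steps = {k_ts}, deviation = {deviation(hist, k_ts):.2e}")
```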
Performance of the high-dimensional propensity score in adjusting for unmeasured confounders High-dimensional propensity scores (hdPS) can adjust for measured confounders, but it remains unclear how well it can adjust for unmeasured confounders. Our goal was to identify if the hdPS method could adjust for confounders which were hidden to the hdPS algorithm. The hdPS algorithm was used to estimate two hdPS; the first version (hdPS-1) was estimated using data provided by 6 data dimensions and the second version (hdPS-2) was estimated using data provided from only two of the 6 data dimensions. Two matched sub-cohorts were created by matching one patient initiated on a high-dose statin to one patient initiated on a low-dose statin based on either hdPS-1 (Matched hdPS Full Info Sub-Cohort) or hdPS-2 (Matched hdPS Hidden Info Sub-Cohort). Performances of both hdPS were compared by means of the absolute standardized differences (ASDD) regarding 18 characteristics (data on seven of the 18 characteristics were hidden to the hdPS algorithm when estimating the hdPS-2). Eight out of the 18 characteristics were shown to be unbalanced within the unmatched cohort. Matching on either hdPS achieved adequate balance (i.e., ASDD <0.1) on all 18 characteristics. Our results indicate that the hdPS method was able to adjust for hidden confounders supporting the claim that the hdPS method can adjust for at least some unmeasured confounders. Introduction The high-dimensional propensity score (hdPS) has been used in different contexts and within multiple databases for the control of confounding by indication and it has been shown to be at least equivalent and potentially superior to the propensity score in this regard [1][2][3][4][5][6][7]. Superiority of the hdPS is generally attributed to the greater number of covariates drawn from the database to include in the final hdPS model [5]. However, the performance of the hdPS has not been assessed when information regarding some of these potential confounders within the examined database is limited. Our aim was to assess the impact of limited information regarding potential confounders on the performance of the hdPS. To achieve this goal, we compared the performance of the hdPS in a scenario where the algorithm had full access to all of the data contained within a database to its performance in a scenario where only partial data were available to the algorithm. The administrative database situation in Quebec, Canada provides an interesting setting in which to examine this issue. There are two distinct sets of medico-administrative databases available in Quebec; the Régie de l'assurance maladie du Québec (RAMQ) databases (physician and pharmacists billing data) and the Maintenance et Exploitation des Données pour l'Étude de la Clientèle Hospitalière (MED-ECHO) databases (hospital discharge data). RAMQ and MED-ECHO data may overlap. However, they differ on their source of information (e.g., only the RAMQ databases provide outpatient data) and may be more detailed in specific areas (e.g., the MED-ECHO databases provide more detailed and more precise information regarding patients' entry date/discharge date and on in-hospital diagnoses and therapeutic and diagnostic procedures) [8][9][10]. To test the performance of the hdPS under conditions of limited information regarding potential confounders, we examined the association between the risk of diabetes and exposure to high versus low statin doses [7,[11][12][13][14][15][16]. 
Assessing this association in a Quebec incident statin user population may be hindered by the presence of confounding by indication since patients started on a higher statin dose have been shown to be sicker and at higher risk for diabetes than those started on a lower dose [7]. We compared the performance of the hdPS within two scenarios: (1) the algorithm used in the hdPS estimation had full access to all the data provided by both the MED-ECHO and RAMQ databases, and (2) the algorithm had only access to the data provided by the MED-ECHO databases. One of the uses of hdPS is to select a matched sub-cohort from the main cohort (all patients initiated on statins) where the characteristics of patients who received treatment A (high dose statins) are similar to the characteristics of patients who received treatment B (low-dose statins) [5]. That is, we assessed the performance of the hdPS on its ability to select a balanced sub-cohort when it is used as a matching criterion [7,[17][18][19][20][21]. The performance of the restricted information hdPS was assessed by comparing the balance achieved with this method to the balance achieved when all information was available to the algorithm. Data sources The different data sources used within this study have been described elsewhere [7]. Briefly, we obtained data on a cohort of 800,551 new statin users from RAMQ. For this study, we used data from both the RAMQ databases (i.e., demographic database, medical services, and claims database and pharmaceutical database) and from the MED-ECHO databases (i.e., hospitalization-description database, hospitalization-diagnoses database, and hospitalization-intervention database). Patient records were linked across all databases by use of a unique identification number which was encrypted to protect patient confidentiality. Access to data was granted by the Commission d'accès à l'information and the protocol was approved by the Centre hospitalier de l'Université de Montréal's ethics' committee. Full cohort The Full Cohort used within this study has been described elsewhere [7]. Briefly, it was comprised of 404,129 patients newly initiated on a statin (either simvastatin, lovastatin, pravastatin, fluvastatin, atorvastatin, or rosuvastatin) between January 1st, 1998 and December 31st, 2010. Patients were defined as having been newly initiated on a statin if they did not receive any statin dispensation in the year prior to the date of first statin dispensation (hereby defined as the cohort entry date). Identification of exposure group All patients were categorized into two groups based on the strength of the daily statin dose of their first statin dispensation [12]. Patients initiated on a daily dose of ≥10 mg of rosuvastatin, ≥20 mg of atorvastatin or ≥40 mg of simvastatin formed the high dose group and the remaining patients formed the low dose group. Identification of the study outcome Onset of diabetes within 2 years follow-up was used as our study outcome. Patients were defined as cases if they received either a dispensation of a drug used in the treatment of diabetes (WHO ATC A10) or a diagnosis of diabetes (ICD-9 code: 250.x; ICD-10 codes: E10.x-E14.x) within the 2 years following the cohort entry date; all other patients were considered to be diabetes-free. High-dimensional propensity score method Two distinct hdPS models were created and resulting hdPS were calculated for all patients included in the Full Cohort. Detailed description of the hdPS method can be found elsewhere [5]. 
Both models were created using the default setting of the SAS hdPS macro v.1 [22]. Six potential data dimensions were defined using the data collected from the year prior to the cohort entry date: (1) drugs dispensed in an outpatient setting, (2) physician claims for procedures codes, (3) physician claims for diagnostic codes, (4) specialty of the physician providing care, (5) hospitalization discharge data for inpatient procedure codes, and (6) hospitalization discharge data for inpatient diagnostic code. Full information model The first hdPS model (hereby defined as hdPS full info model) was created by selecting the top 500 covariates, as assessed by the hdPS algorithm, contained within all 6 data dimensions. In addition to these 500 covariates, the following known confounders were forced within the hdPS full info model: [12] patients' sex, age, poverty level status (yes versus no) at the cohort entry date, year of entry within the cohort (as a categorical variable), and ≥1 hospitalization, ≥5 outpatient visits, ≥5 distinct drugs dispensed to the patient, all within the year prior to the cohort entry date. The resulting hdPS full info model was used to estimate each patient's hdPS-1. Hidden information model The second hdPS model (hereby defined as the hdPS hidden info model) was created by selecting the top 500 covariates, as assessed by the hdPS algorithm, contained within the 2 data dimensions provided from the MED-ECHO databases since it was believed a priori that it would contain less potential covariates, therefore increasing the risk of unmeasured confounding (the 4 data dimensions provided by RAMQ were hidden to the algorithm). In addition to these 500 variables, the following covariates were forced within the hdPS hidden info model: patients' sex, age, and poverty level status (yes versus no) at the cohort entry date, the year of entry within the cohort (as a categorical variable) and ≥1 hospitalization in the year prior to the cohort entry date. Within this model, hospitalization status (≥1 hospitalization yes vs no) was assessed solely from data available within the MED-ECHO databases. Outpatient medical resource utilization and outpatient drug dispensation covariates, forced within the previous model, were excluded from this list since they were based on information solely available within the RAMQ databases. The resulting hdPS hidden info model was used to estimate each patient's hdPS-2. Creation of the matched sub-cohorts Trimming was performed and patients located within nonoverlapping regions of the hdPS-1 distribution were excluded [23][24][25], all other patients were eligible for inclusion within the Matched hdPS Full Info Sub-Cohort. Low dose controls were found for patients initiated on a high dose using a greedy, nearest neighbor 1:1 matching algorithm. Matching occurred if the difference in the logit of hdPS-1 between the nearest neighbors was within a caliper width equal to 0.2 times the SD of the logit of the hdPS-1 [26]. Patients selected by the matching algorithm were included within the Matched hdPS Full Info Sub-Cohort. These two steps were reproduced using hdPS-2 in order to create the Matched hdPS Hidden Info Sub-Cohort. Statistical analyses Patients' baseline characteristics within both sub-cohorts were assessed using the information provided from the full database. 
Absolute standardized differences (ASDD) were used to compare patients' baseline characteristics between patients included in the high dose group versus those included in the low dose group within both matched sub-cohorts [19,21]. ASDD <0.1 are generally assumed to indicate good balance between groups [21,27]. Discrete data are presented in absolute values and percentages and continuous data are presented as mean (± SD). All statistics were performed using SAS version 9.3 (Cary, North Carolina).

Results

Description of the full cohort
The Full Cohort is comprised of 404,129 patients, 264,947 (65.6 %) of which were in the low dose group while the remaining 139,182 patients (34.4 %) were in the high dose group; as mentioned previously, patients in the high dose group were different and overall sicker than those in the low dose group. Specifically, eight of the 18 examined patient characteristics (i.e., sex, ≥5 outpatient medical visits, ≥1 hospitalization, history of myocardial infarction, history of percutaneous coronary intervention, dispensation of beta-blockers, dispensation of angiotensin receptor blockers [ARB], dispensation of angiotensin converting enzyme inhibitors [ACEI]) were unbalanced (ASDD >0.1) within the Full Cohort [7]. Of the 404,129 patients included within the Full Cohort, none were classified as diabetic at the cohort entry date. Diabetes was identified in 12,978 patients (3.2 %) within the 2 years of follow-up.

Table 1 shows the number of potential covariates, with and without the assessment of recurrence, within each of the 6 data dimensions considered within this study. The 4 data dimensions provided from the RAMQ databases accounted for the majority of these potential covariates (n without assessment of recurrence = 2758 [71.6 %]). (Table 1 notes: the hdPS full info model was created from the information present within all 6 data dimensions, while the hdPS hidden info model was limited to the information present within the 2 data dimensions provided by MED-ECHO. MED-ECHO = Maintenance et Exploitation des Données pour l'Étude de la Clientèle Hospitalière; RAMQ = Régie de l'assurance maladie du Québec. Any covariate not present within at least 100 patients is excluded by the hdPS algorithm and was therefore not included within this table.)

Characteristics of the patients included within the Matched hdPS Full Info Sub-Cohort
Using data contained within all 6 available data dimensions, we created the hdPS full info model, which was used to estimate patients' hdPS-1. Three hundred and one patients (0.0 %) had hdPS-1 located within non-overlapping regions and were excluded from the analysis. Among the remaining 403,828 patients, we matched 116,014 patients (28.7 %) from the high dose group to 116,014 patients (28.7 %) from the low dose group based on their individual hdPS-1; selected patients formed the Matched hdPS Full Info Sub-Cohort (Fig. 1). Patients included within the Matched hdPS Full Info Sub-Cohort were on average 64.6 years old (SD 11.2) and 116,688 of them were males (50.3 %) (Table 2). Balance (ASDD <0.1) was obtained in all 18 examined patient characteristics (ASDD ranged from 0.001 to 0.023 with an average of 0.008).

Characteristics of the patients included within the Matched hdPS Hidden Info Sub-Cohort
Using data from the 2 data dimensions selected from the MED-ECHO databases, we created the hdPS hidden info model to estimate each patient's individual hdPS-2. Sixty-six patients (0.0 %) had hdPS-2 located within non-overlapping regions and were excluded from the analysis.
Among the remaining 404,063 patients, we matched 119,376 patients (29.5 %) from the high dose group to 119,376 patients (29.5 %) from the low dose group based on their individual hdPS-2; selected patients formed the Matched hdPS Hidden Info Sub-Cohort (Fig. 1). About half of the patients included within this sub-cohort were male (n = 120,238 [50.4 %]) and the average age was 64.5 years old (SD 11.2) (Table 3). Balance within this sub-cohort was obtained for all 18 examined patient characteristics (ASDD ranged from 0.004 to 0.075 with an average of 0.027), including those which were hidden to the hdPS algorithm (ASDD for the hidden covariates ranged from 0.011 to 0.075 with an average of 0.036).

Relative performance of the two matched sub-cohorts
ASDD obtained within both matched sub-cohorts are shown in Fig. 2. The Matched hdPS Full Info Sub-Cohort was shown to achieve better balance on 16 of the 18 examined patient characteristics; the two remaining characteristics were equally balanced within both matched sub-cohorts. (Notes to Tables 2 and 3: comorbidity status, drug dispensations, and medical utilization rates were assessed in the year prior to the cohort entry date; absolute standardized differences are defined as the between-group difference as a proportion of the pooled standard deviation of the two groups; table footnotes flag characteristics assessed at the cohort entry date, characteristics with 0.10 < ASDD ≤ 0.20 or ASDD > 0.20 within the unmatched populations [7], and, in Table 3, covariates which were hidden to the hdPS algorithm within the hdPS hidden info model.)

Discussion
Our results show that matching on the hdPS hidden info model achieved balance on all 18 examined patient characteristics. This result shows that the hdPS algorithm was able to adjust for imbalance regarding patient characteristics, some of which were unavailable to the hdPS algorithm. Among these, some were very important variables regarding outpatient medical visits and drug dispensations, which can be highly associated with both the choice of treatment and the risk of diabetes. As expected, the hdPS algorithm had access to a greater number of potential covariates when building the hdPS full info model (n = 3854 potential covariates) than when building the hdPS hidden info model (n = 1096 [28.4 %] potential covariates). This difference implied that 431 (86.2 %) of the covariates selected within the hdPS full info model were no longer available for selection and had to be replaced by the algorithm when it built the hdPS hidden info model.

The main strength of our study is that it provides support for the claim that the hdPS is able to adjust for at least some unmeasured confounders. In their original paper, Schneeweiss and colleagues [5] hinted that some of the covariates selected by the hdPS algorithm may not be direct confounders but may actually be proxies of unmeasured confounders. Although adjusting for a perfect proxy of an unmeasured confounder is equivalent to directly adjusting for this confounder [28], it remained unclear if the hdPS could truly adjust for a confounder not present within the examined database. Four important known confounders (i.e., ≥5 medical outpatient visits, dispensation of beta-blockers, dispensation of ARB, and dispensation of ACEI; all shown to be unbalanced within the full cohort) [7] were not available to the hdPS hidden info model. Our results show that this model was able to achieve balance within all examined patient characteristics, including the four previously mentioned (Table 3).
Such a result is of significant value since the PS technique may not adjust for variables not included within the PS model [29]. However, we were unable to identify which covariates selected by the hdPS algorithm were used as proxies for these four confounders.

Our study has limitations. First, although our study shows that the hdPS was able to control for measured confounders which were unavailable to the hdPS algorithm in the restricted data setting, making it reasonable to think that the algorithm is also able to control for some unmeasured confounders, this ability may be specific to these covariates/databases and to this specific population and may not hold in other settings. Second, we only examined a limited number of patient characteristics. Although balance was achieved within both sub-cohorts regarding all 18 examined patient characteristics, we cannot guarantee that this balance would be achieved for other patient characteristics or for other unmeasured confounders. Finally, we did not examine the relative performance of the two matched sub-cohorts with regard to the measure of association which would have been obtained in an eventual etiological study. To do so would require the existence of a "gold standard", providing the nature and magnitude of the "true" association, to which we could compare our results [2-5]. Despite this fact, we consider that the quality of the match is a good marker of the performance of the hdPS method within this study since only this approach could illustrate that the hdPS method truly adjusted for the seven hidden confounders [7,17-21].

In conclusion, our results show that, within the confines of our study, the hdPS was able to adequately adjust for confounders which were hidden to the algorithm. Such results support the claim that the hdPS can adjust for at least some unmeasured confounders and further support its use in future observational studies.

Acknowledgment This study was made possible through data sharing agreements between the Canadian Network for Observational Drug Effect Studies (CNODES) and the provincial government of Quebec. The opinions, results, and conclusions reported in this paper are those of the authors. No endorsement by the province is intended or should be inferred. We would like to also thank the CNODES investigators and collaborators for their contribution in developing the study protocol evaluated in this paper. The work presented within this manuscript has not been published previously except in abstract form and partly within Jason R Guertin's doctoral thesis work.

Compliance with ethical standards

Contributions of authors' statement Jason R Guertin helped in the design of the study, conducted all of the analyses and wrote the first draft of the manuscript. Elham Rahme and Jacques LeLorier both conceived the study and revised the draft of the manuscript.
All authors approved the final draft of the manuscript.

Fig. 2 Head-to-head comparison of the absolute standardized differences obtained within the two matched sub-cohorts. ACEI = angiotensin converting enzyme inhibitors; ARB = angiotensin receptor blockers; BB = beta-blockers; CABG = coronary artery bypass graft; Calc blockers = calcium blockers; CHF = congestive heart failure; hdPS = high-dimensional propensity score; PCI = percutaneous coronary intervention; PVD = peripheral vascular disease. Absolute standardized differences <0.1 are assumed to indicate balance; all 18 patient characteristics were considered to be balanced within the two sub-cohorts.
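As a concrete illustration of the balance diagnostic used throughout this study, the following minimal sketch computes absolute standardized differences for one continuous and one binary covariate using the standard formulas (difference in means, or in proportions, divided by the pooled standard deviation of the two groups). The numbers are synthetic and only stand in for the kind of high-dose versus low-dose comparison reported in Tables 2 and 3.

```python
import numpy as np

def asdd_continuous(x_high, x_low):
    """Absolute standardized difference for a continuous covariate:
    |difference in means| divided by the pooled standard deviation of the two groups."""
    pooled_sd = np.sqrt((np.var(x_high, ddof=1) + np.var(x_low, ddof=1)) / 2.0)
    return abs(np.mean(x_high) - np.mean(x_low)) / pooled_sd

def asdd_binary(p_high, p_low):
    """Absolute standardized difference for a binary covariate, from group prevalences."""
    pooled_sd = np.sqrt((p_high * (1 - p_high) + p_low * (1 - p_low)) / 2.0)
    return abs(p_high - p_low) / pooled_sd

rng = np.random.default_rng(0)
# Synthetic stand-ins for a high-dose and a low-dose group (not study data).
age_high = rng.normal(65.0, 11.0, 5000)      # age at cohort entry, high-dose group
age_low = rng.normal(64.5, 11.0, 5000)       # age at cohort entry, low-dose group
print("ASDD, age:", round(asdd_continuous(age_high, age_low), 3))
print("ASDD, >=1 hospitalization:", round(asdd_binary(0.30, 0.22), 3))
# Values below 0.1 would be read as adequate balance, as in this study.
```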
Prevalence of obesity among adolescents at secondary schools in Kirkuk city : Objective: to identify the secondary school adolescent's obesity Introduction: besity increased in the societies due to poor eating habits like increase consumption of sweetened beverages, energy-dense foods and change in the eating behavior to consumption of refined grains, added sugars, added fats, snacks, beverages, fast foods and eating away from home. (1) Obesity is concerned as a cause for many health problems in the future of obese human such as gall bladder disease, ischemic stroke, osteoporosis, and some types of cancers, (2) adverse consequences of obesity, such as diabetes, cardiovascular disease, (3) .sleep apnea, gastro-esophageal reflux, depression, poor self worth, (4) non-alcoholic fatty liver disease (5) and physical and psychological aspects problems (6) . The issue of overweight and obesity has become a serious public health concern all over the world during the last decades. The prevalence of overweight and obesity is increasing, and obesity is estimated to be a major leading cause of mortality and morbidity, causing an estimated 2.6 million deaths worldwide and 2.3% of the global burden of disease (7) . In recent years, obesity among children and adolescents has emerged as a global epidemic and serious public health problem in the Eastern Mediterranean region. In Saudi Arabia, a country that has experienced marked nutritional changes and rapid urbanization in recent decades, it was estimated that 26.6% and 10.6% of adolescents aged 13-18 years are overweight or obese, respectively (8) . In Iraq study conducted in Karbala city related to Prevalence of Obesity among Adult Population and the result was that obesity affects about 30% of adult population in Karbala (9) . Another study conducted in Al-Najaf Al-Ashraf City to assess self esteem of obese adolescents and the prevalence of obesity was a (17.26%) of the total study samples (1350) adolescents. (10) . Hereditary factors are a major factor contributing to obesity. It most closely correlates with the biological mother's weight (11) . Obesity presents as a disorder of the mechanisms of energy balance in the body. Although predisposition to obesity is partly determined by genetic factors, an obesogenic environment is required for the phenotypic expression. Body weight, like height, is passed on genetically; the body fat content of adopted children shows a better correlation with that of their biological parents (12) . Methodology: Subjects: The study population included Iraqi nationals, male and female students, aged 12 to 15 years. A representative sample of these adolescents (537 students, 270 boys and 267 girls) was selected from schools in Kirkuk city by using the proportional stratified sampling. From each school (20%) of the total number of students were randomly selected by interval number. The obese adolescents' number was 120 out of the total study sample. Anthropometric measurements: The weight is measured for each adolescent participant in the study. It is measured without shoes and light clothes as possible. The investigator used weight scale which is highly reliable and borrowed from the Iraqi Nutrition Research Institute made by (Seca Company, Australia), weight scale is a gift from the United Nation Children′s Fund (UNICEF) and has a capacity of (188.8) kg. Before use the scale, the investigator is checking the scale daily by weight a standard weight. 
During weighing, the scale was placed on a hard-floor surface, and each participant stood still in the center of the platform of the scale with the body weight evenly distributed between both feet. The height of the adolescents is measured without shoes by using a two-meter measuring tape (UNICEF tape measure), which is already reliable. The individual should stand on a flat surface with the weight distributed evenly on both feet, heels together and the head upward. The arms hang freely at the sides, and the head, back, buttocks and heels are against the wall, with the knees fully extended and the line of vision parallel to the floor. (13)

According to the Dietary Guidelines for Americans 2010, body mass index is a measure of weight in kilograms (kg) relative to height in meters squared. Body mass index status categories include underweight, healthy weight, overweight, and obese: underweight, < 5th percentile of BMI for age; normal weight, 5th to < 85th percentile of BMI for age; overweight, 85th to < 95th percentile of BMI for age; obese, ≥ 95th percentile of BMI for age. (14) BMI was calculated by a scientific application program (WHO AnthroPlus), which was obtained from the Iraqi Nutrition Research Institute.

Questionnaire: The sociodemographic data sheet consisted of (12) items categorized as general information (adolescent's age and gender) and socioeconomic data (parents' level of education, parents' occupation status, type of family, total number of family members, number of rooms, house area, house contents and car possession). A question asked about the obesity history of the adolescents' families, i.e., whether obesity was present among family members (father, mother, and brother/sister). Statistical analysis was performed using Microsoft Office Excel 2007 and the SPSS package (version 16). Chi-square statistics were used to determine the presence of an association between the variables.

Results
Table (1) shows that (50.3%) of the adolescent pupils are male, (34.1%) are 13 years old, and (77.5%) of them come from the middle level of the socio-economic status score. Table (2) shows that (22.3%) of the total study sample is obese, a percentage reflecting that just over one fifth of the study sample was obese. Table (3) shows that (55.8%) of the obese adolescents are male, (42.5%) are 13 years old, and (79.2%) of the obese adolescents come from families of the middle level of the socio-economic status score. Table (4) shows that (59.2%) of the obese adolescents have an obesity history in their family, and (22.5%, 24.2%, and 30.8%) of the obese adolescents have obese fathers, obese mothers, and obese brothers or sisters, respectively. (Table notes: No. = number; % = percentage; P = probability level; χ² = Chi-square; Sig. = level of significance; * = significant at p-value ≤ 0.05.)

Discussion:
Table (1) refers to the statistical distribution of the observed frequencies and percentages of some related demographical characteristics for the entire studied sample. Regarding gender, the findings indicate more males than females (50.3% and 49.7%, respectively). Regarding age, the findings indicate that adolescents aged 13 years outnumber the other age groups (34.1%). Regarding socio-economic status, it is found that most of the study sample is from the middle level of SES.
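Before the Table (2) findings are discussed, the BMI-for-age categorization quoted in the Methods can be made explicit with a small sketch; the percentile itself is assumed to come from a growth-reference tool such as the WHO AnthroPlus program used in this study, and the cut-offs below simply restate the guideline values.

```python
def bmi_for_age_category(percentile):
    """Classify a BMI-for-age percentile using the cut-offs quoted in the Methods
    (Dietary Guidelines for Americans 2010)."""
    if percentile < 5:
        return "underweight"
    if percentile < 85:
        return "normal weight"
    if percentile < 95:
        return "overweight"
    return "obese"

# Example: a percentile produced by a growth-reference tool such as WHO AnthroPlus.
print(bmi_for_age_category(96.2))   # -> "obese"
print(bmi_for_age_category(60.0))   # -> "normal weight"
```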
The study results in Table (2) show that, regarding their BMI percentile, more than half of the study sample has normal weight (61.5%) and (22.3%) were obese. This result agrees with Bin Zaal et al., who, in their study of dietary habits associated with obesity among adolescents aged 12-17 years in Dubai, United Arab Emirates, found a prevalence of obesity of 21.3% of the study sample. (15)

Table (3) refers to the statistical distribution of some related demographical characteristics of the obese adolescents. Regarding gender, the findings indicate that obesity is more common in males than in females (55.8% males). Concerning the obese adolescents' age, the findings of this study show that obesity is higher in thirteen-year-old adolescents than in other ages. This result agrees with Bin Zaal et al., who, in the same study of adolescents aged 12-17 years in Dubai, found that obesity was more common in males than in females and peaked at fourteen years old for males and thirteen years old for females. Regarding socioeconomic status, it is found that most of the obese adolescents are from the middle SES. Parental obesity has been identified as a predominant risk factor for childhood obesity, probably owing to a combination of genetic, epigenetic, social and environmental factors. Children with two obese parents have a higher risk of obesity than those with one or no obese parent. (16)

The results of Table (4) refer to the correlation between the obese adolescents and their family history of obesity. The study results indicate that there is a highly significant relationship between obese adolescents and obesity in the family, an obese father, and an obese brother/sister (p = 0.000, 0.037, and 0.000, respectively), while there is no significant relationship between obese adolescents and an obese mother. Svensson et al. agree with this result; in their longitudinal cohort study of associations between severity of obesity in childhood and adolescence, obesity onset and parental BMI, severity of obesity was significantly correlated with both maternal and paternal BMI (P < 0.01). (16) Jiang et al., in their cross-sectional study from rural north China on the association between child and adolescent obesity and parental weight status, found that child/adolescent obesity was significantly associated with parental obesity (12.53% for the father and 14.29% for the mother) at p-value ≤ 0.01. (17)

Recommendations:
1. The Ministry of Health should provide health staff for each school to follow up adolescents' health.
2. Regular visits to schools to detect obesity and its complications.
3. Continue to research the long-term health benefits that result from eating a healthy diet.
4. Research innovative, cost-effective ideas to provide nutritious snacks during the school day.
5. Place posters throughout the school showing foods rich in various nutrients.
6. Healthy food tips in the school newsletter for parents.
7. Provide facilities and an environment for physical exercise in the schools.
8. Educational activities and more orientation about diet and physical exercise at early ages, involving the whole family, to control excess weight.
9. Encourage adolescents and their families to read the list of calories on packaged foods.
27.9% Ef fi cient Monolithic Perovskite/Silicon Tandem Solar Cells on Industry Compatible Bottom Cells The authors acknowledge the support of Thorsten Dullweber and Silke Dorn (both ISFH) for the chemical polishing of CZ wafers. The authors acknowledge funding from HyPerCells (Hybrid Perovskite Solar Cells, http://www.perovskites.de) joint Graduate School, as well as from the German Federal Ministry for Economic Affairs and Energy (BMWi) through the “ PersiST ” project (grant no. 0324037C) as well as ProTandem (grant no. 0324288C). Further funding was provided by the Federal Ministry of Education and Research (BMBF) for funding of the Young Investigator Group Perovskite Tandem Solar Cells within the program “ Materialforschung für die Energiewende ” (grant no. 03SF0540) and by the Helmholtz Association within the projects HySPRINT Innovation lab and TAPAS (Tandem Perovskite and Silicon solar cells — Advanced optoe-lectrical characterization, modelling, and stability). F.L. acknowledges fi nan- cial support from the Alexander von Humboldt Foundation via the Feodor Lynen program. The authors also acknowledge fi nancial support by the Federal Ministry for Economic Affairs and Energy within the framework of the 7th Energy Research Programme (P3T-HOPE, grant no. 03EE1017C). M.J. and M.T. acknowledges fi nancial support from the Slovenian Research Agency (ARRS) within the grants P2-0197 and J2-1727. Open access funding enabled and organized by Projekt DEAL. Introduction Today's photovoltaic market is dominated by crystalline silicon-based solar cell technology. With a record power conversion efficiency (PCE) of 26.7%, [1] silicon single-junction solar cells are approaching their theoretical limit of 29.4%. [2] To overcome this limit, silicon solar cells can be combined with wider bandgap materials into multijunction solar cells, where each photovoltaic active material converts a specific part of the spectrum efficiently into electrical power. With two active materials (commonly termed tandem solar cells), the theoretical PCE limit is %46% based on detailed balance arguments. [3] The excellent optoelectronic properties as well as the tunable bandgap and potentially low-cost fabrication make metal halide perovskites suitable candidates for the top cell material in tandem solar cells. [4][5][6][7][8][9][10][11][12][13] Only 3 years after the first realization of a p-i-n tandem solar cell by Bush et al., [14] the highest scientifically reported efficiency of 29.15% is close to the theoretical limit of silicon single-junction solar cells. [15] With a certified efficiency of 29.52%, Oxford PV surpassed this limit but did not disclose any further details. [16] These high efficiencies are achieved on rather thick front-side polished float-zone (FZ) silicon heterojunction solar cells, which are industrially not relevant for three reasons: 1) chemical-mechanical polishing (CMP) is time consuming and expensive. Therefore, it is desirable to use either chemical polishing as it is used in passivated emitter and rear cell (PERC) industry or double-side textured wafers. The latter approach is favored because such textures can be produced in a standard batch process and they provide optical advantages. Perovskite/ silicon tandem processing on such wafers is addressed in recent publications. 
[4,9,[17][18][19][20][21] However, solution processing of highefficiency tandem solar cells using such textures still remains challenging due to the difficulties of processing very thin perovskite layers on micrometer-sized pyramid structures. 2) FZ-silicon is not used for PV mass production. Instead, Czochralski (CZ) silicon will remain the main method to fabricate silicon ingots, [22] mainly because of lower costs. 3) The absorption of Si for photon energies just above the bandgap, i.e., in the infrared (IR) part of the spectrum, is relatively poor. For tandem cells, however, where the top cell will absorb most of the higher energy photons, the IR response of the bottom cells is crucial. [5,23] Therefore, the bottom cell thickness in most publications on perovskite/silicon tandem solar cells is 260 to 300 μm, whereas according to current market forecasts, the industrially relevant thickness for n-type monocrystalline silicon is just 140 to 150 μm (as cut) in 2022. [24] In this article, we demonstrate for the first time monolithic perovskite/silicon tandem solar cells based on thin non-CMP n-type CZ-silicon bottom cells. The reduced response in the IR region for thinner bottom cells will shift the optimal top cell bandgap for standard test conditions toward larger energies. Solar Cells We use (100)-oriented %130 μm thick (as cut) n-type CZ-silicon wafers with random pyramids on the rear side and a specified resistivity of %5 Ω cm. The front side of these CZ-based bottom cells was chemically polished using standard etching procedures in PERC industry but using a more aggressive treatment, removing up to 20 μm, to obtain a surface compatible with the top-cell processing. [25,26] Tandem solar cells with these type of bottom cells are termed "CZ-based." As a reference, we use (100)-oriented 280 AE 20 μm thick FZ wafers with random pyramids on the rear side, a CMP front side, and a resistivity of %3 Ω cm (in the following termed "FZ-based"). The front and rear side of all wafers are passivated with intrinsic amorphous silicon ((i)a-Si:H) layers. On the rear side, p-doped a-Si:H is deposited on the passivating layer. N-doped nanocrystalline silicon oxide (nc-SiO x :H) optimized in refractive index for optimum NIR incoupling on the front passivating layer serves as electron-selective contact. [5] All silicon layers were deposited by plasma-enhanced chemical vapor deposition (PECVD). On top of the (n) nc-SiO x :H layer, an In 2 O 3 -based transparent conducting oxide (TCO) is deposited, whereas the rear contact consists of a layer stack of aluminum-doped zinc oxide (AZO) and silver. More details can be found in the Materials and methods section in the Supporting Information. After processing the bottom cells, the measured thicknesses of the CZ-and FZ-based bottom cells are 100 and 280 μm, respectively. www.advancedsciencenews.com www.solar-rrl.com leading to falsified roughness values (see Figure S2, Supporting Information, for an adjusted scale). Therefore, atomic force microscopy (AFM) is used to image and analyze the surface of the FZ-based bottom cell ( Figure S3, Supporting Information). The root mean square roughness values (Sq) are extracted from the areas as shown in Figure S3, Supporting Information. They amount to 1 and 736 nm for the FZ-based and CZ-based bottom cells, respectively. The maximum height values (Sz) are 9 nm and 7.7 μm for the FZ-based and CZ-based bottom cells, respectively. 
Although the Sz for CZ silicon is very high, the lateral dimension of the features is large enough to enable complete coverage during spin coating. This is evident for the saw mark visible for the CZ silicon in the CLSM image (Figure 1C): the step height is ≈5-6 μm (see Figure S4, Supporting Information), but since it extends over 100 μm, it should not be problematic for solution-processed perovskite layers. To investigate the influence of the different wafer types (i.e., thickness and topography) on the optical properties, we measured the reflection of the bare wafers. The reflection spectra shown in Figure S5, Supporting Information, demonstrate that the reflection for wavelengths below 950 nm is not affected by the difference in topography or thickness. For longer wavelengths, the reflection is higher for the thin CZ silicon. Less light is absorbed due to the thinner silicon. Thus, the amount of light arriving at the rear side of the cell is increased, which consequently increases the amount of light reflected at the rear side, too. For the same reason, the light reflected at the rear side of the cell is absorbed less while being transferred back through the thinner silicon bottom cell, leading ultimately to an increased outcoupling at the front side and thus increased reflection.

For the p-i-n top cell, which is identical on both types of bottom cells (Figure 1E), a self-assembled monolayer, 2PACz, is used as the hole-selective layer (HSL). In addition to its electrical advantages, it enables conformal coverage on the nonpolished bottom cell. [27] The nominal perovskite composition is Cs0.05(MA0.23FA0.77)0.95Pb(Br0.23I0.77)3, yielding a bandgap of 1.68 eV. On top of the perovskite, LiF and C60 are deposited via thermal evaporation and SnO2 is deposited via atomic layer deposition. After the sputter deposition of zinc-doped indium oxide (IZO) as the transparent conductive oxide, Ag and LiF are thermally evaporated as a split ring-type bus bar electrode and antireflective coating, respectively. The active area of the resultant tandem solar cells is 1 cm2. A detailed description can be found in the Materials and Methods section in the Supporting Information. To monitor the process, opaque perovskite single-junction solar cells with an active area of 0.16 cm2 were fabricated together with the tandem solar cells. The median performance values (10 devices) for the opaque perovskite single-junctions are 78.5% for the fill factor (FF), 20.3 mA cm−2 for the short-circuit current density (J_SC), 1.2 V for the open-circuit voltage (V_OC), and 19.3% for the PCE (see Figure S6, Supporting Information). A maximum efficiency of 19.9% with a V_OC of 1.21 V was obtained in this p-i-n-type configuration, which is among the highest PCE and V_OC values for perovskite cells as typically used in two-terminal tandem solar cells. [28]

Figure 2A shows the external quantum efficiency (EQE) and reflection spectra of the two-terminal (monolithic) tandem solar cell champion devices based on thin CZ and thick FZ bottom cells. In the short-wavelength range, a minor difference in reflection occurs. We attribute this difference to very slight variations of layer thicknesses in the front contact, leading to altered interference patterns. The difference in the long-wavelength range is a result of the difference in bottom cell thickness, as described previously. For both tandem solar cells, the EQE spectra of the perovskite subcells (top cells) are very similar.
Consequently, the photogenerated current densities (J_Ph) under 100 mW cm−2 AM1.5G illumination are also similar in both perovskite subcells (19.56 and 19.44 mA cm−2 for the CZ and FZ cells, respectively). The main difference between the tandem cells occurs in the EQEs of the silicon subcells (bottom cells). The J_Ph in the silicon bottom cell of the thick FZ-based tandem solar cell is 19.08 mA cm−2. The reduced bottom cell thickness in the CZ-based tandem solar cell causes a lower response in the near-IR region, leading to a reduced photogenerated current density of 17.81 mA cm−2. Therefore, the cumulative photogenerated current density decreases from ≈38.52 to ≈37.37 mA cm−2. As the J_SC of tandem solar cells is mainly determined by the J_Ph of the limiting subcell, a lower J_SC for the thin CZ-based tandem solar cell is expected. In contrast, the current mismatch between the subcells increases. As we have reported previously, the tandem's FF is affected by the current mismatch. [7] Generally, the FF increases with larger mismatch. In addition, thinner silicon wafers lead to higher V_OC values due to a decreasing total recombination current density. [2] To estimate the gain in V_OC, we simulated silicon single-junction solar cells (illumination spectrum as in the tandem) with CZ- and FZ-silicon as used for the tandem solar cells with the program Quokka3 (see Figure S7, Supporting Information, and the Materials and Methods section in the Supporting Information for more details). A V_OC enhancement of ≈17 mV is expected when using 100 μm CZ-silicon instead of 280 μm FZ-silicon. Even though the FF of the bottom cell also depends on the thickness and fabrication method, the simulations reveal that, for the configurations investigated in this work, both cell types, FZ and CZ, should deliver the same FF of 82.5% to 83% (see Figure S7, Supporting Information). Summarizing, the thinner CZ-based tandem solar cell is expected to have a lower J_SC, higher FF (due to larger current mismatch), and higher V_OC.

The J-V curves shown in Figure 2B confirm these expectations. The best reference device based on thick FZ wafers has a J_SC of 19.13 mA cm−2, a V_OC of 1.89 V, and an FF of up to 78.01%, and as a result a PCE of up to 28.15%. This value is in very good agreement with our previous results for similar tandem layer stacks. [15] For the thin CZ tandem solar cell, the high FF of 80.89% partially compensates the lower J_SC of 17.81 mA cm−2. Combined with a higher V_OC of 1.94 V, the PCE of this cell is 27.89%. This value is just 0.26% absolute below the PCE of the front-side polished, thick FZ reference cell. Note that another J-V scan of the same cell led to a similar, yet slightly higher, FF of 81.15%, which is to the best of our knowledge the highest FF presented for perovskite/silicon tandem solar cells to date (see Figure S8, Supporting Information). The improvement of the FF per mismatch is higher than what we reported previously, [7] but as elaborated by Boccard et al., the improvement in FF depends strongly on the performance of the individual subcells. [29] Stable operation of the herein presented tandem solar cells is highlighted by 5 min maximum power point (MPP) tracks as shown in Figure S9, Supporting Information. After 300 s of MPP-tracking, PCE values of 28.05% and 27.81% are measured for the FZ- and CZ-based devices, respectively, which is well in line with the J-V curve-derived efficiency.
The illuminated J-V results for three CZ and four FZ tandem solar cells are summarized in Figure S10, Supporting Information. They reveal the same median PCE of 27.8% for both CZ-and FZ-based tandem solar cells. The V OC improvement by 30 to 40 mV for the best devices is slightly more than expected from simulations. Therefore, we measured absolute photoluminescence of the top and bottom cell for both configurations to extract the quasi-Fermi level splitting (QFLS or implied V OC ). [30,31] The intensity of the laser was set to match the current density generated within each subcell under AM1.5 G illumination. The PL spectra, QFLS values, and radiative limits are shown in Figure S11, Supporting Information. For the perovskite subcell, the QFLS values are the same on both the FZ-and CZ-based tandem solar cells. They amount to %1.20 eV, which is consistent with the V OC of the perovskite single-junction solar cells (see Figure S6, Supporting Information). The QFLS of the silicon subcell in the FZ-based tandem solar cell is %690 meV. Consequently, the sum of the perovskite and silicon QFLS for the FZ-based cell is %1.89 eV, which is in very good agreement with its V OC extracted from the J-V curve (1.90-1.91 V for this specific sample). For the CZ-based tandem cell, a QFLS of 710 meV in the Si wafer was calculated. The enhancement of %19 meV compared to the FZ-based cell matches well with the simulated V OC enhancement of 17 mV. We find well-agreeing values of the cumulative QFLS (1.910 eV) and the measured V OC (1.92-1.93 V for this sample) for the CZ-based tandem cells. Therefore, we account the previously mentioned V OC improvement of up to 40 mV to a sample to sample variation. To exclude any structural changes in the perovskite due to different surface topographies of the bottom cells, X-ray diffraction (XRD) measurements were conducted. The XRD patterns acquired for the HSL/perovskite stack deposited on the different bottom cells reveal similar crystallization of the perovskite films on both surfaces (see Figure S12, Supporting Information). We attribute the additional peak around 32.8 for the FZ-sample to stem from the In 2 O 3 -based recombination layer. To analyze the effect of the bottom cell in more detail, we measured the J-V curve of the CZ-based tandem solar cell in a way that the J Ph values of both subcells are equal to the respective J Ph values in the FZ tandem solar cell (i.e., same mismatch conditions for CZ-and FZ-based tandem solar cells). This required to increase the illumination intensity only in the IR region of the spectrum, which can be easily done with the utilized light-emitting diode sunsimulator. In Figure S13, Supporting Information, this J-V measurement with adjusted J Ph values is compared to the J-V of the FZ tandem solar cell under AM1.5G conditions. In addition to the increased V OC , just a slight deviation occurs at voltages just below the MPP. The FF values of both measurements are very similar, demonstrating that the increased FF of the CZ tandem solar cell under AM1.5G conditions arises mainly from the increased current mismatch. [7] The long-term stability of one CZ-and two FZ-based tandem solar cells (nonencapsulated) is shown in Figure S14, Supporting Information. The initial PCE values are 27.6% (CZ), 28.15% (FZ), and 27.4% (FZ). The cells were held at 25 C in air, the relative humidity (RH) was not actively controlled. In addition to the J MPP , V MPP , the resulting PCE, and the normalized PCE, we show time series of the cell temperature and RH. 
The relative humidity ranges from 11% to 26%. After 1000 h of continuous tracking, the cells were still performing at 67% (CZ), 70% (FZ), and 67% (FZ) of their respective initial PCE values, where the main parameter driving the reduction in PCE is J MPP . These efficient monolithic perovskite/silicon tandem solar cells demonstrate that it is not mandatory to use chemical-mechanical polishing for spin-coated perovskite films. Instead, chemical polishing, as already deployed in industry, is sufficient for solution processing such as spin coating. Furthermore, it shows that industry-relevant, almost threefold thinner CZ-silicon wafers can enable the same performance as the thick, CMP FZ-silicon wafers standardly used in lab-scale devices. Optical Simulation The reduced photogenerated current density for thinner silicon bottom cells necessitates adjustments to achieve current matching or power matching conditions. Although the aforementioned experiment and previous reports demonstrate that the tandem solar cells are not highly sensitive to current mismatch conditions, [7,[32][33][34] the highest PCE values might be achieved with current or power matching conditions. Moreover, there are various effects affecting the mismatch conditions, as we will discuss later. Each of these effects needs to be controlled to ultimately obtain a current- or power-matched tandem solar cell. Apart from increasing the IR response of the bottom cell by optical improvements, which is not the focus of this study, current matching can be achieved by reducing the perovskite thickness and/or widening the perovskite bandgap. To shed light on this aspect, we used GenPro4 to simulate the optical performance of tandem solar cells. [35] We simulated tandem solar cells with a silicon bottom cell thickness of 100 and 280 μm. For both bottom cells, the perovskite thickness and its bandgap were varied. The lower limit of the thickness of 700 nm represents a realistic case as this thickness can be easily achieved with solution processing. [9,10,17,18,21,[36][37][38] As the J Ph saturates toward thicker films (Figure 3A), an upper limit of 1500 nm was chosen. For each thickness, the bandgap of the perovskite was varied by shifting the refractive index n and extinction coefficient k (measured via spectral ellipsometry for E g = 1.63 eV) along the wavelength axis to cover a bandgap range of 1.63 to 1.78 eV (Figure S15, Supporting Information). [39,40] The bandgap is taken as the inflection point of the perovskite absorption edge as shown in Figure S16, Supporting Information. All other layers were kept as in the experiment. An example of simulated EQE spectra with various perovskite bandgaps is shown in Figure S17, Supporting Information. Figure 3B shows the ideal top cell bandgap E g,top,matched as a function of the top cell's thickness when utilizing a 280 μm and a 100 μm thick silicon bottom cell (see also Table S3, Supporting Information). The photogenerated current density of the perovskite top cell J Ph,Perovskite increases with thicker perovskite layers. As a consequence, the photogenerated current density of the silicon bottom cell J Ph,Silicon decreases with thicker perovskite layers (Figure S18, Supporting Information). Thus, for thicker perovskite layers, it is necessary to widen the top cell bandgap if current matching is desired. When increasing the thickness from 700 to 1500 nm, the top cell bandgap needs to be increased by 0.047 eV for both bottom cell thicknesses.
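The current-matching argument rests on integrating each subcell's EQE against the AM1.5G spectrum. The following sketch illustrates that bookkeeping with a toy, step-like EQE and a flat placeholder spectrum; it is not the GenPro4 model used here, so only the trend (wider bandgap, lower top-cell photocurrent) is meaningful:

```python
import numpy as np

# Toy illustration of the current-matching bookkeeping: integrate a step-like EQE against a
# placeholder spectrum to see how the top-cell photocurrent shrinks as the bandgap widens.
# This is NOT the GenPro4 optical model used in the paper; only the trend is meaningful.
q = 1.602176634e-19     # C
h = 6.62607015e-34      # J s
c = 2.99792458e8        # m/s

wavelength_nm = np.linspace(300, 1200, 901)
irradiance = np.full_like(wavelength_nm, 1.0)   # flat placeholder in W m^-2 nm^-1
                                                # (replace with tabulated AM1.5G data)

def j_ph_top(bandgap_eV, eqe_plateau=0.90):
    """Photogenerated current density (mA cm^-2) of a top cell with a sharp absorption edge."""
    edge_nm = 1239.84 / bandgap_eV                      # eV -> nm conversion
    eqe = np.where(wavelength_nm <= edge_nm, eqe_plateau, 0.0)
    photon_energy_J = h * c / (wavelength_nm * 1e-9)
    flux = irradiance / photon_energy_J                 # photons m^-2 s^-1 nm^-1
    return 0.1 * q * np.trapz(eqe * flux, wavelength_nm)   # A m^-2 -> mA cm^-2

for eg in (1.63, 1.68, 1.73, 1.78):
    print(f"E_g = {eg:.2f} eV -> toy J_Ph,top = {j_ph_top(eg):.1f} mA cm^-2")
```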
In the best case, this bandgap widening would improve the V OC . As evident from Table S3, Supporting Information, the matched photogenerated current density J Ph,matched stays almost constant. In addition, the bottom cell thickness alters the current matching conditions. We found that the reduction of the bottom cell thickness from 280 to 100 μm requires widening the top cell bandgap by 0.02 eV, regardless of the perovskite's thickness. However, J Ph,matched simultaneously decreases from 19.64 to 19.14 mA cm −2 (average values). Therefore, for a perovskite thickness of 700 nm, its bandgap needs to increase from ≈1.69 to ≈1.71 eV if the bottom cell thickness is reduced. The higher V OC from both top and bottom cell together (40-50 mV) will exactly compensate the reduced J SC (when assuming that J Ph,matched = J SC ). Obviously, the FF of the perovskite top cells needs to remain the same regardless of the perovskite thickness to maintain the high PCE. Ultimately, a trade-off between high J SC (thick silicon wafer and narrow perovskite bandgap) and high V OC (thin silicon wafer and wide perovskite bandgap) needs to be made to yield the highest efficiency. Finding this optimum bottom cell thickness is left for future work. These simulations do not include any optimization of other (e.g., contact) layer thicknesses. The adjustment of these layer thicknesses can reduce the interference patterns that appear, especially in the NIR wavelength range for the silicon subcell (see Figure S18, Supporting Information). This would require an optimization for each individual top cell bandgap and thickness. The same simulations were performed for double-side textured tandem solar cells. As previously simulated and experimentally demonstrated, [4,6,19,20] the additional front-side texture reduces reflection and removes interference patterns, enabling higher J Ph and J SC values (see Figure S19, Supporting Information, for simulated EQEs). [Figure 3: A) Simulated photogenerated current densities J Ph of perovskite/silicon tandem solar cells as a function of the perovskite thickness, for a 100 μm thick silicon bottom cell and a perovskite bandgap of 1.73 eV; the rear side of the tandem cells is textured and the front side is either flat (denoted as "Flat") or textured (denoted as "Textured"); the corresponding EQE spectra are shown in Figure S18, Supporting Information. B) Ideal top cell bandgap as a function of the perovskite thickness.] The same trends appear as for planar devices: with thinner silicon, a larger perovskite bandgap is needed to ensure current matching conditions (see Table S4, Supporting Information and Figure 3). When reducing the silicon thickness from 280 to 100 μm, the perovskite bandgap should increase by 0.019 eV to maintain current matching at the same perovskite thickness. However, this comes along with a reduction of J Ph,matched by ≈0.5 mA cm −2 . Ultimately, the optimum bandgap does not just depend on the thickness of the perovskite layer and the thickness of the silicon wafer. Luminescence from the perovskite top cell into the silicon bottom cell will relax the requirement for current matching conditions. [41] Furthermore, it was previously shown that higher operational temperatures and the respective optical changes in top and bottom cells will lead to different optimum perovskite top cell bandgaps around 1.63 eV for the highest energy yield with thick bottom cell wafers. [20] The transition from a monofacial to a bifacial tandem solar cell reduces the optimum bandgap as well, if current matching is to be maintained. [21]
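A back-of-envelope check of the compensation claim: if the fill factor is held fixed, the ≈0.5 mA cm −2 loss in matched current is roughly offset by the 40-50 mV gain in V OC . The fill factor and baseline V OC in the sketch below are illustrative assumptions; the current densities are the averages quoted above:

```python
# Back-of-envelope check of the compensation argument above, holding the fill factor fixed.
# J_Ph,matched values are the averages quoted in the text; FF and the baseline V_OC are
# illustrative assumptions, not measured values.
ff = 0.79
cases = {
    "280 um Si, narrower perovskite gap": (19.64, 1.890),          # mA cm^-2, V
    "100 um Si, gap widened by ~0.02 eV": (19.14, 1.890 + 0.045),  # V_OC raised by 40-50 mV
}
for label, (j_sc, v_oc) in cases.items():
    print(f"{label}: P_out ~ {ff * j_sc * v_oc:.2f} mW cm^-2")
```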
Therefore, the ideal device design cannot be easily derived from the performance under standard test conditions but needs to be derived for each case individually, considering realistic outdoor conditions. Conclusion We demonstrated perovskite/silicon tandem solar cells based on industrially relevant silicon bottom cells, namely 100 μm thin CZ-wafers with an industrially deployed chemical polishing of the front side and a textured rear side. For comparison, we fabricated tandem cells based on 280 μm thick FZ-wafers with a chemical-mechanically polished front side, as standardly used in lab-scale devices. The CZ-based tandem cells have a PCE of up to 27.89%, which is just slightly below the value of 28.15% for FZ-based tandem cells. However, the median PCE of 27.8% indicates equal performance for both bottom cell types. The median V OC increases from 1.89 V (max. 1.91 V) for the FZ-based cells to 1.92 V (max. 1.94 V) for the CZ-based cells, explained by the higher V OC of the thin CZ bottom cell. Simultaneously, thinner silicon bottom cells present a lower EQE in the IR region, leading to a lower photogenerated current density and, thus, a lower J SC (19.1 vs 17.8 mA cm −2 ). The increased mismatch, when using an identical top cell, results in improved FF values (77.2% vs 80.9%). After 1000 h of continuous MPP-tracking, the nonencapsulated cells still performed at 67% (CZ) and 67 to 70% (FZ) of their initial PCEs. We performed optical simulations to find current matching conditions for the 100 and 280 μm silicon bottom cells with planar and textured front sides. The perovskite bandgap needs to be increased by ≈0.02 eV when using a 100 μm thin silicon wafer instead of the commonly used thickness of 280 μm. Simultaneously, the expected J SC reduces by ≈0.5 mA cm −2 . The higher V OC from both top and bottom cell together (40 to 50 mV) can exactly compensate the reduction in J SC for thinner wafers. Thus, to achieve the highest PCE values with industrial bottom cells, the perovskite's bandgap needs to be widened to values well over 1.71 eV. The precise optimum top cell bandgap in this region is highly important, as these wide bandgap compositions are typically prone to phase segregation or are limited by nonradiative recombination. [42,43] Therefore, this work highlights that further investigation is needed to enable highly efficient and stable wider bandgap compositions and, with that, the highest tandem PCE values when using industry-relevant bottom cells. Experimental Section Detailed information about the fabrication and characterization is given in the Supporting Information. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
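As a closing sanity check, the reported efficiencies can be retraced from the quoted J SC , V OC , and FF values via PCE = J SC × V OC × FF / P in with P in = 100 mW cm −2 ; because medians and maxima are mixed in the summary above, only approximate agreement is expected:

```python
# Sanity check on the headline numbers in the conclusion: PCE = J_SC * V_OC * FF / P_in,
# with P_in = 100 mW cm^-2 for AM1.5G. Median V_OC and the quoted J_SC / FF values are used,
# so only approximate agreement with the reported PCEs is expected.
p_in = 100.0  # mW cm^-2
cells = {
    "FZ (280 um)": {"j_sc": 19.1, "v_oc": 1.89, "ff": 0.772},
    "CZ (100 um)": {"j_sc": 17.8, "v_oc": 1.92, "ff": 0.809},
}
for name, p in cells.items():
    pce_percent = p["j_sc"] * p["v_oc"] * p["ff"] / p_in * 100
    print(f"{name}: estimated PCE ~ {pce_percent:.1f}%")
```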
2021-06-22T17:54:47.421Z
2021-05-05T00:00:00.000
{ "year": 2021, "sha1": "c897595e88c03e68515c20a5a91d272cf97db137", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/solr.202100244", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "9c98c101868c64633ecd1baaec8ff7367f8bc19b", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
67861393
pes2o/s2orc
v3-fos-license
Ultrasound-guided fine-needle aspiration biopsy of thyroid nodules <10 mm in the maximum diameter: does size matter? Objective Ultrasound-guided fine-needle aspiration biopsy (US-FNAB) is a safe and effective method of screening malignant thyroid nodules such as papillary thyroid carcinoma. However, not much data are available regarding the diagnostic efficacy of US-FNAB for papillary thyroid microcarcinoma (≤10 mm in diameter). We aim to compare the diagnostic efficacy of US-FNAB on thyroid nodules between two groups divided by a diameter of 10 mm by correlating the cytological results of US-FNAB with the histopathologic diagnoses in selected patients. Patients and methods Eight hundred twenty-two thyroid nodules (Group A: diameter ≤10 mm, n=620; Group B: diameter >10 mm, n=202) from 797 patients treated between March 2014 and June 2017 were retrospectively evaluated. Only nodules with Thyroid Imaging Reporting and Data System (TIRADS) categories 4–6 were enrolled and sampled by US-FNAB, followed by surgical resection. Results According to The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC) diagnostic categories, 94 thyroid nodules were classified as I, III and IV, and were excluded from the analysis. The resultant 728 thyroid nodules from 721 patients were analyzed. The malignant tendency (TBSRTC V and VI) rates on US-FNAB were 88.2% and 84.6% (P=0.202) in Group A and Group B, respectively, and the malignant rates were 89.5% and 86.9% (P=0.330), respectively, on histopathology. There was a high concordance between cytology and histopathology diagnoses (kappa value =0.797), and no statistical difference in terms of US-FNAB accuracy was found between the two groups (P=0.533). Conclusion For thyroid nodules of TIRADS category 4–6, the diagnostic efficacy of US-FNAB is similar for thyroid nodules either smaller or greater than 10 mm in their maximum diameter. Introduction Thyroid nodules are common entities that are detected in 19%-68% of the population by using high-resolution ultrasound (US). 1 Nevertheless, only a relatively small percentage (~5%) of thyroid nodules are malignant, with papillary thyroid carcinoma (PTC) being the most common pathological type. 2 According to the WHO classification system, papillary thyroid microcarcinomas (PTMCs) are PTCs <10 mm in diameter. 3 Early cancer detection and intervention have been suggested to reduce patient mortality and morbidity. Given its minimal invasiveness and technical simplicity, ultrasound-guided fine-needle aspiration biopsy (US-FNAB) has been widely adopted for characterizing thyroid nodules cytopathologically. 4 The cytology results will help to determine whether or not subsequent thyroidectomy is necessary. According to the American Thyroid Association (ATA) guideline, US-FNAB is recommended for thyroid nodules with a diameter >10 mm and an intermediate to high suspicion US pattern. 5 On the other hand, for patients with nodules ≤10 mm of suspicious US pattern, active sonographic surveillance is recommended instead. 6 Despite the fact that the prognostic advantage of PTMC has been an issue of debate in recent studies, 7-10 early diagnosis and treatment of PTMC might be beneficial to patient prognosis, as small size alone does not guarantee low risk in incidentally found thyroid cancers. 11,12 There are several studies that have evaluated the efficacy of US-FNAB for small thyroid nodules.
13,14 We focused here on the diagnostic efficacy of US-FNAB for ultrasonographically "suspicious", that is, Thyroid Imaging Reporting and Data System (TIRADS) categories 4-6, 15 and small, that is, measuring ≤10 mm as the maximum diameter on US, thyroid nodules. To the best of our knowledge, our research is the first study of this kind with the largest sample size ever reported. This knowledge is important to determine whether US-FNAB should be routinely performed in this subgroup of patients in future clinical practices. Patient selection We retrospectively reviewed 721 patients who had received thyroid US, thyroid nodule US-FNAB and subsequent thyroid surgery at Ningbo Medical Center Lihuili Eastern Hospital and Taipei Medical University Ningbo Medical Center (a tertiary medical center) between March 2014 and June 2017. Thyroid nodules were given cytopathological and histopathologic diagnoses with the criteria listed in the following sections and were grouped according to their maximum diameter as measured on US: Group A (≤10 mm) and Group B (>10 mm). Written informed consents were obtained from all patients before performing US-FNABs and surgeries. Written consents were also obtained from the patients regarding the report of their medical data. Ethical approval for conducting this retrospective study was obtained from the Ethics Committee of Ningbo Medical Center Lihuili Eastern Hospital and Taipei Medical University Ningbo Medical Center (ethical approval no. DYLL2016001) in compliance with the Declaration of Helsinki. Thyroid US evaluation US examinations were performed at the Department of Ultrasound with Philips Q5 US equipment and a 5-12 MHz linear probe. The reasons for thyroid US scan are as follows: 1) palpable cervical mass and/or enlarged lymph nodes; 2) patients with a known history of thyroid nodes and referred from other hospitals and 3) on routine health checks that included thyroid US. US patterns of the thyroid lobes and nodules (eg, calcification, echogenicity, volume, shape, dimensions, long axis/short axis ratio, vascularity) were recorded and all nodules were classified according to TIRADS by using five sonographic parameters (ie, composition, echogenicity, shape, margin and echogenic foci). Patients having TIRADS 4-6 thyroid nodules were included for US-FNAB, while the exclusion criteria were: 1) patients who rejected US-FNAB or were not cooperative for the procedure; 2) accompanied with severe cardiovascular or pulmonary conditions and 3) patients with bleeding history of unknown causes or coagulation disorders (prothrombin time >18 seconds, platelet count <50×10 9 /L, prothrombin activity <40%). US-FNAB procedure for thyroid nodules US-FNAB was performed by the same pair of experienced diagnostic sonographers who were licensed to perform the procedure, in order to ensure similarity in techniques employed. Specifically, one was responsible for US guidance and the other for fine-needle aspiration (FNA). Under US guidance, the target nodule was punctured with a 24-G needle connected to a 5 mL syringe without local anesthesia. After confirmation of reaching the target nodule by the needle tip on US, the needle was moved forward and backward ten times under negative pressure to aspirate the sample. Then the needle was withdrawn, negative pressure was released and specimen within the needle was immediately transferred to a liquid-based cytology medium. 
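Returning briefly to the eligibility screen described earlier in this section, the coagulation-related exclusion thresholds lend themselves to a simple screening helper. The sketch below is for illustration only; the function and field names are ours, and only the numeric thresholds come from the text:

```python
# Minimal sketch of the coagulation-related exclusion check described above.
# Field names are ours; the thresholds are those quoted in the text.
# Illustration only, not clinical software.
def excluded_for_bleeding_risk(pt_seconds, platelets_per_L, prothrombin_activity_pct):
    return (pt_seconds > 18
            or platelets_per_L < 50e9
            or prothrombin_activity_pct < 40)

print(excluded_for_bleeding_risk(14, 220e9, 95))  # False -> eligible on this criterion
print(excluded_for_bleeding_risk(20, 220e9, 95))  # True  -> excluded
```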
Generally, only one puncture was needed to sample each individual nodule and a second attempt could be delivered in case of unsatisfactory gross tissue yield by the first attempt as judged by the sonographer performing aspiration as a precaution. Pathological review of thyroid nodules Cytological and histopathologic diagnoses were performed at the Ningbo Diagnostic Pathology Center by pathologists specialized in thyroid cancer. The cytological diagnoses were made by two independent specialized cytologists (Dr Jue Zhou and Dr Xian-fa Xu; kappa value =0.83), and release of the final report was subjected to the approval of a third senior cytologist (Dr Deng Pan), who was also in charge of making the final decision when there was discrepancy between the two. Cytological results were reported according to The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC) as follows: 16 I, nondiagnostic or unsatisfac-tory; II, benign; III, atypia of undetermined significance or follicular lesion of undetermined significance; IV, follicular neoplasm or suspicious for a follicular neoplasm; V, suspicious for malignancy and VI, malignant. Surgery was carried out in patients with US TIRADS categories 4-6 at their own choice with signed consents. The histological diagnoses were reported according to the WHO histological classification of thyroid tumors. 17 Statistical analyses Data were analyzed using the SPSS 22.0 software package (IBM Corporation, Armonk, NY, USA). Continuous variables were presented as mean ± SD. Categorical variables were presented as percentages. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were given as percentages. The diagnostic performance of US-FNAB was defined by the consistency between cytological and histopathologic results: US-FNAB was considered as accurate when the two results matched and vice versa. Data comparisons between groups were carried out with the Mann-Whitney U test. Chi-squared test or Fisher's exact test was used to compare categorical variables. The kappa statistic was used to measure the agreement between the US-FNAB and histopathology diagnoses. The kappa can range from −1 to +1: values ≤0 indicate no agreement and values 0.01-0.20 mean none to slight agreement, 0.21-0.40 mean fair agreement, 0.41-0.60 mean moderate agreement, 0.61-0.80 mean substantial agreement and 0.81-1.00 mean almost perfect agreement. A P-value <0.05 was considered statistically significant. Ethics approval and consent to participate Ethical approval was obtained from the Ethics Committee of Ningbo Medical Center Lihuili Eastern Hospital and Taipei Medical University Ningbo Medical Center (ethical approval no. DYLL2016001). Demographics, US and US-FNAB results A total of 822 thyroid nodules of TIRADS categories 4-6 from 797 patients (772 had 1 nodule and 25 had 2) were initially reviewed during the study period. There were 620 nodules in Group A and 202 nodules in Group B. Cytologically, 94 thyroid nodules were classified as TBSRTC I, III and IV, and were excluded from subsequent statistical analysis. 
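As a worked example of the agreement measure described above, a 2×2 cytology-versus-histology table can be reconstructed from counts stated elsewhere in the text (728 nodules, 92 benign cytology results, 21 false negatives, 10 false positives); Cohen's kappa computed from it reproduces the reported value of 0.797:

```python
# Worked example of the agreement statistic described above. The 2x2 table is reconstructed
# from counts stated in the text: 728 nodules analyzed, 92 with benign cytology (TBSRTC II),
# 21 false negatives and 10 false positives; hence TP = 636 - 10 = 626 and TN = 92 - 21 = 71.
tp, fp, fn, tn = 626, 10, 21, 71
n = tp + fp + fn + tn

po = (tp + tn) / n                                              # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(f"Cohen's kappa = {kappa:.3f}")   # ~0.797, matching the reported value
```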
The resultant 728 thyroid nodules from 721 patients (714 had 1 nodule and 7 had 2) were classified as TBSRTC II (n=92, 12.6%), V (n=168, 23.1%) and VI (n=468, 64.3%). Diagnostic performance of US-FNAB According to US-FNAB cytology, the percentages of nodules that fell in the diagnostic categories of V (suspicious for malignancy) and VI (malignant), which indicate malignant tendency, were 88.2% in Group A and 84.6% in Group B (P=0.202), whereas the histological malignant rates were 89.5% in Group A and 86.9% in Group B (P=0.330). There was a high agreement between the US-FNAB cytology and final histopathology (kappa value =0.797). There was no significant difference in sensitivity, specificity, accuracy, PPV and NPV of US-FNAB between the two groups (Table 1). The diagnostic performance of US-FNAB did not exhibit significant difference between the two groups either (P=0.533). Discussion Given its minimal invasiveness, technical simplicity and high concordance with histology results, US-FNAB has become a routine method for biopsy of many benign and malignant pathologies, including thyroid nodules. 18 The current recommendations on the size selection criteria for US-FNAB of thyroid nodules have been set at a cutoff value of 10 mm with suspicious sonographic features. 19,20 ATA guidelines recommend that nodules ≥10 mm with high to intermediate suspicious US pattern (or ≥15 mm with low suspicious pattern or ≥20 mm with very low suspicious pattern) be evaluated by US-FNAB. 5 Similarly, the Society of Radiologists in Ultrasound recommend performing US-FNAB on thyroid nodules that are >10 mm and show suspicious sonographic features. 21 On the other hand, for nodules <10 mm, ATA recommends further evaluation only for patients with clinical symptoms or associated lymphadenopathy and no routine sonographic follow-up for those with very low suspicious US pattern (weak recommendation, low-quality evidence). 22 Therefore, for nodules ≤10 mm that exhibit intermediate to highly suspicious US patterns, routine US follow-ups are justifiable. However, nodule size alone is not predictive of malignancy in patients with Bethesda category III, IV and V thyroid nodules, and due to the existence of thyroid microcarcinomas such as PTMC, early identification by US-FNAB and subsequent surgical intervention might provide clinical benefits for those selected patients. 23,24 Certain US patterns have been associated with potential malignancy and thus indicate the necessity of US-FNAB. For example, a recent meta-analysis suggested that US features such as microcalcification, a taller than wide shape, irregular margins and absence of elasticity are associated with higher risk of malignancy and have the most satisfactory diagnostic performances. 25 Similarly, the Revised Korean Society of Thyroid Radiology Consensus Statement and Recommendations suggest that for nodules >5 mm, FNAB should be performed if the target nodule is solid and hypoechoic, together with any of the following three suspicious US features: microcalcification, nonparallel orientation (taller than wide), spiculated or microlobulated margin. 26 According to TBSRTC, the malignancy risk is 60%-75% for category V (suspicious for malignancy) and 97%-99% for category VI (malignant). 16 On the other hand, there are only a limited number of reports that have evaluated the diagnostic efficacy of US-FNAB for thyroid nodules <10 mm. 13,27,28 Unal and Sezer found a concordance rate of 52.4% between US-FNAB cytology and histopathology in 21 nodules <10 mm in diameter.
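The pooled diagnostic indices follow directly from the same reconstructed table; the study itself reports these per group in Table 1, so the sketch below only illustrates the arithmetic:

```python
# Pooled diagnostic indices implied by the same reconstructed 2x2 table. The paper reports
# these per group in Table 1, so the figures below are only an illustration of the arithmetic.
tp, fp, fn, tn = 626, 10, 21, 71

metrics = {
    "sensitivity": tp / (tp + fn),
    "specificity": tn / (tn + fp),
    "PPV": tp / (tp + fp),
    "NPV": tn / (tn + fn),
    "accuracy": (tp + tn) / (tp + fp + fn + tn),
}
for name, value in metrics.items():
    print(f"{name}: {100 * value:.1f}%")
```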
Also, among those inconsistent cases (false negative: cytological category of benign, but histologically malignant), PTMC accounted for the majority of cases. 14 Rosario et al recently specifically addressed the FNA results for a subgroup of "highly suspicious" thyroid nodules (≤10 mm, highly suspicious US features, restricted to the thyroid) in patients who are candidates for active surveillance according to ATA. 29 They reviewed a total of 198 nodules and found a very high rate of malignancy on histology for nodules with suspicious/malignant (100%) and indeterminate (81.4%) cytology results. Zhong et al compared the efficacy of US-FNAB for 344 thyroid nodules with different sizes (≤5.0, 5.1-10.0 and >10.0 mm) and found similar diagnostic efficacy regardless of size. 30 The main novelty of the current study is the focus on TIRADS 4-6 thyroid nodules and enrolment of the largest sample size to date for comparing the diagnostic efficacy of US-FNAB for nodules that were either ≤10 or >10 mm. Besides, unlike previous studies where the benign nature of the nodule was verified by repetition of cytology or simply by clinical follow-up, all nodules in our study were surgically removed with definite histopathologic diagnoses. Our results showed that the inconsistency rate between US-FNAB and histology was much lower than what has been reported previously: only 21 cytologically "benign" nodules identified by the US-FNAB were later confirmed as PTCs after surgery (false negative) and 10 cytologically "malignant" nodules were later verified as benign in nature (false positive). Among the 21 nodules with false-negative US-FNAB results, 5 were >10 mm and 16 were ≤10 mm. This might be caused by sampling of normal thyroid tissue during the US-FNAB procedure. Besides, there were false-negative US-FNAB cases in both groups, indicating the need for follow-up of these patients with cytologically "benign" results. Given the slow-growing nature of PTC, there is still controversy over the clinical value of its early diagnosis by US-FNAB. It was reported that there was a high frequency of occult papillary microcarcinomas in an autopsy study (35.6%). 31 In our study, the number of nodules ≤10 mm was three times higher than those >10 mm, suggesting the importance of appropriate clinical management of these subcentimeter nodules. The 2015 ATA guidelines, however, do not recommend US-FNAB for subcentimeter nodules unless there is extrathyroidal extension or suspicious lymphadenopathy. Besides, ATA also favors active surveillance in selected low-risk PTMC, instead of immediate surgery. These recommendations are largely based on observational studies done by Japanese researchers that suggested the relatively indolent nature of certain subtypes of PTMC. [32][33][34] Nevertheless, perithyroidal lymph node metastasis is a feature seen in certain subtypes of PTMCs which are shown to have poor differentiation. 12,35,36 However, without US-FNAB, there is no better way to stratify patients with subcentimeter thyroid nodules of intermediate to highly suspicious features into either active surveillance or surgery. Nor is there any reliable method to predict which subsets of PTMC will be more aggressive and thus need to have more timely surgery. A recent study by Gweon et al suggested that nodules with suspicious US features (such as a taller-than-wide shape) might not be good candidates for active surveillance, as they are associated with a higher malignancy rate and aggressive biological behavior.
37 Besides, prolonged active surveillance itself is not without associated costs in terms of clinical, psychological and economic burdens on the health care system and patients. 38 Therefore, ongoing researches are needed to provide evidence for the optimal management of patients with subcentimeter PTMC. A detailed discussion of the natural history of PTMC is out of scope of the current work, 24,[39][40][41] and the strength of our results lies in demonstrating that US-FNAB is a convenient and reliable diagnostic procedure for these selected subcentimeter pathologies. However, our study also has several limitations. First, similar to previous work, 37 patients with nodules of TIRADS categories 4-6 were offered with US-FNAB and subsequent surgery. Although this provides valuable information on the concordance between cytology and histology results, some physicians would doubt the necessity of surgery for patients with benign US-FNAB cytological findings and would suggest follow-up instead. 42 This is at least partially because of the associated surgical complications. 43,44 However, in our practice, we did provide explanations to every patient regarding current ATA guidelines and related researches by Ito et al 33 and offered conservative options including active surveillance. 34 After thorough considerations, some patients opted to have US follow-ups, while others still had severe psychological stress and proceeded with surgeries despite our advice. Second, thyroid nodes with TBSRTC I, III and IV cytology (11% of the total cases) were excluded from the analysis because, according to TBSRTC guidelines, a repeated FNA is needed for category I and III nodules, while category IV (follicular or suspicious) is not the focus of the current study. Therefore, our study only focused on thyroid nodules with definite cytology results (either benign or malignant, but excluding follicular neoplasms) yielded by single biopsy attempt. However, this might also lead to selection bias to some extent. Third, we did not exclude patients with coexistence of diffuse thyroid disease, such as thyroiditis or diffuse goiter, and these thyroid conditions might influence the performance of the US-FNABs, especially for small nodules. 45 Last but not least, our study is retrospective and future prospective study will yield clinical evidence at a higher level to answer the controversy over the benefit of surgery on the prognostic advantage of PTMC. To conclude, US-FNAB is an effective diagnostic method for thyroid nodules with TIRADS categories >4 regardless of their size. Prospective studies are needed to determine the natural history of thyroid malignancy <10 mm and identify those at particular risk of exhibiting aggressive behavior, in order to further support the value of its early diagnosis and surgical intervention as well as for providing more optimal and personalized care. Consent for publication This manuscript does not contain any individual person's data in any form. Data sharing statement The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
2019-03-11T17:20:17.479Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "3e76ea3a089b47c95ebddc684e0b9b4c3acd20e3", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=47899", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b3bd5271a9a8186f471f60fc595a4aed200c33d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239666657
pes2o/s2orc
v3-fos-license
FOOD PREFERENCE OF Thyrinteina arnobia (STOLL, 1782) (LEPIDOPTERA: GEOMETRIDAE) ON NATIVE AND EXOTIC HOSTS One of the factors that may affect and limit the production in eucalypt plantations is the attack of defoliating insects. Among those, the brown eucalypt caterpillar, Thyrinteina arnobia (Stoll, 1782) (Lepidoptera: Geometridae), stands out for being the major defoliating pest of Eucalyptus spp. in Brazil. Thus, the present study aimed to investigate the food consumption of T. arnobia on its native host, guava (Psidium guajava L.), and on different E. urograndis clones (VE 41, I 144, TP 361 and VCC 865). To assess T. arnobia food consumption, choice and non-choice tests were carried out using the native and the exotic host, alone or in combination. In non-choice tests, a higher consumption was observed for the VE 41 clone and the native host (guava). The food consumption evaluation in choice tests indicated no food preference of T. arnobia between guava and the E. urograndis clones, with the exception of the TP 361 clone, which was significantly less consumed than guava. In choice tests between the different E. urograndis clones, the clone I 144 presented a tendency towards lower food preference, being consumed only after 48 hours. In addition, the leaf consumption was similar between the VE 41, I 144 and VCC 865 clones. In choice tests using the E. urograndis clones in pairs, the VE 41 clone was more consumed while the I 144 clone was less consumed when compared to the TP 361 clone. The obtained results provide basic information for the indication of eucalypt clones, and the understanding of the interaction and ecological relationships, assisting in the development of Forest Integrated Pest Management (Forest IPM) programs for the control of T. arnobia. 1. INTRODUCTION For purposes of forestry genetic improvement, the genotypic features of each genetic material may directly affect the insect-plant interaction. Thus, it is essential to identify and characterize possible resistance factors, contributing to the development of appropriate and efficient pest management strategies (Jesus et al., 2015). Among the insects native to Brazil, Thyrinteina arnobia stands out for being found all over the national territory and for having migrated from its native hosts, such as guava (Psidium guajava L.), to the exotic host Eucalyptus spp., becoming the main defoliator of eucalypt crops in the country and requiring the constant use of pest control techniques (Barreto and Mojena, 2014). Therefore, several management tactics are being developed to mitigate the yield losses caused by T. arnobia, including the selection of less susceptible species in the genus Eucalyptus, as well as the search for resistant genotypes or clones. According to Boiça Júnior et al. (2013), plant resistance is a feature depending on environmental and genetic factors, and for that reason, different genetic materials, such as species, hybrids and/or clones of Eucalyptus spp., may show variations in susceptibility to biological agents, including defoliating insects such as T. arnobia. Thus, the search for resistance features is essential, since the presence of morphological or chemical stimuli can affect herbivory and depress the consumption and food preference of insects, keeping the population density of the pests below the level of economic damage, with no or low adverse effect on the environment (Seifi et al., 2013). Nevertheless, studies related to the interaction of native and exotic host species and the food consumption of T.
arnobia, in the search for the characterization of non-preference resistance, are still rare. Jesus et al. (2015), in tests using species/genotypes of Eucalyptus spp., found that Eucalyptus dunnii and the hybrid Corymbia citriodora x Corymbia torelliana presented antibiosis and/or antixenosis for T. arnobia. Similarly, clone C10 (C. citriodora x C. torelliana) proved to be less preferred by T. arnobia, corroborating that T. arnobia exhibits a distinction between the genetic materials offered. Thus, the present study aimed to investigate the food consumption of T. arnobia on its native host guava and on different E. urograndis clones (VE 41, I 144, TP 361 and VCC 865) in order to contribute to the development of Integrated Forest Pest Management programs (Forest IPM). MATERIAL AND METHODS The research was conducted at the Laboratory of Agricultural and Forestry Entomology (LEAF) of the Engineering and Agricultural Sciences Campus, Federal University of Alagoas (CECA/UFAL) and at Embrapa Coastal Tablelands. T. arnobia field collection T. arnobia was obtained through manual collections in E. urograndis crops, varieties 1407 and 224, at Albuquerque Farm, municipality of Atalaia, Alagoas state, 9°30'27"S and 36°1'24"W. The biological forms, such as eggs, caterpillars, pupae and/or adults, were collected manually or with the aid of an entomological net and then transported to the Laboratory of Agricultural and Forestry Entomology (LEAF). T. arnobia rearing T. arnobia eggs were immersed in copper sulfate solution (CuSO4) for 10 seconds, then washed in distilled water in order to avoid contamination and placed in Petri dishes, with moistened filter paper, until the eggs hatched. After hatching, the caterpillars were placed in 20 L plastic buckets, with two side openings and a lid with holes covered by voile fabric. For feeding, leaves of E. urograndis or guava were offered, in branches, placed in 500 ml glass bottles with water, until the loss of turgor, when they were replaced by new branches. Pupae were sexed and coupled in 10 x 20 cm PVC tube cages, internally lined with sulfite paper, for adult emergence and oviposition. The adults were fed daily with a 10% honey solution. Eggs were collected daily. Guava leaves were collected from trees belonging to the Campus and the different E. urograndis clones (TP 361, VCC 865, I 144 and VE 41) were collected in the experimental clonal forest stand of the Engineering and Agricultural Sciences Campus (CECA). Then, the samples were labelled and taken to the Agricultural and Forestry Entomology Laboratory (LEAF), washed in running water and cut into 4 cm diameter discs (≈1378 mm 2 ). To determine the consumed area, the host leaves were previously drawn on bond paper and, at each evaluation period, the consumed area was marked in its respective outline, with different colours for each period. Thereafter, T. arnobia food consumption was determined using the Bioscientific Ltd ADC-AM-3000 leaf area meter. 2.3.1 Food consumption of T. arnobia on E. urograndis clones and guava in non-choice tests. The leaf discs of E. urograndis clones (VE 41, I 144, TP 361, VCC 865) and guava were individualized in arenas of 6.0 x 5.0 cm plastic pots, lined with filter paper and moistened with distilled water. Subsequently, one T. arnobia caterpillar was released in each arena and the leaf consumption was evaluated after 24 hours and 48 hours.
The experimental design was completely randomized, with 10 repetitions and 5 treatments: 4 clones of eucalypt (TP 361, VCC 865, I 144 and VE 41) and 1 of guava. The data obtained were subjected to analysis of variance (ANOVA) and the means compared by the Tukey test (P≤ 0.05), using the statistical package of the SAS version 9.0 program (SAS Institute, 2011). Graphics were created using the SigmaPlot Software version 11.0 (Systat software, 2006). Food consumption of T. arnobia in free choice tests using paired E. urograndis clones with guava. For the evaluation of T. arnobia food consumption in free choice tests with E. urograndis clones (TP 361, VCC 865, I 144 and VE 41) and guava, plastic pots (26 x 16 x 4 cm) with an orifice in the lid, covered with voile fabric, and lined with polyethylene foam, moistened with distilled water and covered with filter paper, were used as arenas. In each arena, the leaves of the different hosts were placed in pairs (E. urograndis clone x guava), equidistant from the center and from each other. One T. arnobia caterpillar was released in the center of the arena and, after 24 and 48 hours, the consumed area of each disc was evaluated. The experimental design was completely randomized, with five treatments (VE41 × Guava; I144 × Guava; TP361 × Guava; and VCC865 × Guava) and five repetitions for each treatment. The data were submitted to the Chi-squared test of Independence (P≤ 0.05), using the statistical package of the SAS version 9.0 program (SAS Institute, 2011). The graphics were created using the SigmaPlot Software version 11.0 (Systat software, 2006). Food consumption of T. arnobia in free choice tests among E. urograndis clones. For the evaluation of T. arnobia food consumption in free choice tests among E. urograndis clones, plastic pots (26 x 16 x 4 cm) with an orifice in the lid, covered with voile fabric, and lined with polyethylene foam, moistened with distilled water and covered with filter paper, were used as arenas. In each arena, the four clones (TP 361, VCC 865, I 144 and VE 41) were arranged concomitantly, in a circle, equidistant from the centre and from each other. One T. arnobia caterpillar was released in the centre of the arena and, after 30 min, 2, 4, 6, 24 and 48 hours, the leaf consumption was assessed. The experimental design was completely randomized, with four treatments (VE 41, I 144, TP 361 and VCC 865) and 10 repetitions. The data were submitted to analysis of variance (ANOVA) and the means compared by the Tukey test (P≤ 0.05), using the statistical package of the SAS version 9.0 program (SAS Institute, 2011). Graphics were created using the SigmaPlot Software version 11.0 (Systat software, 2006). Food consumption of T. arnobia in free choice tests using paired E. urograndis clones. For the evaluation of T. arnobia food consumption in free choice tests using paired E. urograndis clones, plastic pots (26 x 16 x 4 cm) with an orifice in the lid, covered with voile fabric, and lined with polyethylene foam, moistened with distilled water and covered with filter paper, were used as arenas. In each arena, the leaves of the different clones were placed in pairs, equidistant from the center and from each other. One T. arnobia caterpillar was released and, after 24 and 48 hours, the leaf consumption was evaluated. The data were submitted to the Chi-squared test of Independence (P≤ 0.05), using the statistical package of the SAS program version 9.0 (SAS Institute, 2011).
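The ANOVA-plus-Tukey comparisons described above were run in SAS; the sketch below mirrors the same workflow in Python on synthetic leaf-consumption data, so the printed statistics are placeholders rather than the study's results:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Sketch of the ANOVA + Tukey comparison described above (done in SAS 9.0 in the paper),
# run here on synthetic leaf-consumption data in mm^2 -- NOT the study's measurements.
rng = np.random.default_rng(0)
treatments = ["VE41", "I144", "TP361", "VCC865", "Guava"]
illustrative_means = [455, 150, 180, 200, 480]      # rough levels chosen for illustration only
values, labels = [], []
for trt, mu in zip(treatments, illustrative_means):
    values.extend(rng.normal(mu, 80, size=10))       # 10 replicates per treatment
    labels.extend([trt] * 10)

groups = [[v for v, l in zip(values, labels) if l == t] for t in treatments]
f_stat, p_value = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4g}")
print(pairwise_tukeyhsd(np.array(values), np.array(labels), alpha=0.05))
```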
Graphics were created using the SigmaPlot Software version 11.0 (Systat software, 2006). RESULTS 3.1 Food consumption of T. arnobia on E. urograndis clones and guava in non-choice tests. Data from the food consumption of T. arnobia on E. urograndis clones and guava in non-choice tests revealed significant differences between the evaluated treatments after 24h (F = 9.25; P <0.0001) and 48h (F = 10.33; P <0.0001), following the same pattern of leaf consumption in both periods. It was found that all treatments were consumed; however, greater consumption of leaf area was observed for the native host guava, with an average leaf consumption area of 480.6 ± 89.52 mm 2 and 841.1 ± 132.7 mm 2 , followed by the VE 41 clone with 455.6 ± 94.2 mm 2 and 560.1 ± 108.5 mm 2 after 24 and 48 hours, respectively, significantly differing from the other treatments (Figure 1). Food consumption of T. arnobia in free choice tests using paired E. urograndis clones with guava. The results obtained for the food consumption of T. arnobia on paired E. urograndis clones with guava (VE 41 x Guava; VCC 865 x Guava; I 144 x Guava) showed similar consumption for the hosts after 24h and 48h. Food consumption of T. arnobia in free choice tests among E. urograndis clones. Regarding the food consumption in free choice tests among the E. urograndis clones, it was observed that after 2h of evaluation, only the VE 41 (4.2 ± 4.2 mm 2 ) and TP 361 (7.0 ± 7.0 mm 2 ) clones were consumed, despite not differing from the other unconsumed clones (F = 1.0; P = 0.40). In general, the I 144 clone presented a lower food preference by T. arnobia, being consumed only after 48 hours of evaluation. In spite of this, statistical differences were not identified for this evaluation period between any of the tested clones, with average values of area consumed of 31.3 ± 21.8 mm 2 for I 144; 37.4 ± 18.02 mm 2 for TP 361; 90.7 ± 52.1 mm 2 for VE 41; and 94.0 ± 28.0 mm 2 for VCC 865 (F = 1.05; P = 0.38) (Figure 3). 3.4 The food consumption of T. arnobia in free choice tests using paired E. urograndis clones. For the food consumption in free choice tests using paired E. urograndis clones, a significant difference was observed in both evaluation periods (24 and 48h) for the pairing between the I 144 × TP 361 clones, showing lower consumption of the I 144 clone, with average values of 7.8 ± 10.7 mm 2 for I 144 and 94.4 ± 89.4 mm 2 for TP 361 after 24h (F = 69.52; P = 0.00012), as well as 98 ± 44.7 mm 2 for I 144 and 161.8 ± 85.6 mm 2 for TP 361 after 48h (F = 10.21; P = 0.004). For the other pairings, no significant differences were observed in any of the evaluated periods. DISCUSSION According to the results, it is possible to observe significant differences in the food consumption of T. arnobia among native and exotic hosts as well as among the tested genetic materials (clones). The VE 41 clone and the native host guava were more consumed than the other genetic materials tested in non-choice food consumption tests. This fact suggests that the least consumed clones (I 144, VCC 865 and TP 361) may possibly present chemical, physical or morphological stimuli able to reduce T. arnobia feeding (Lima et al., 2018). It is important to note that, although the native host guava was more consumed than the I 144, VCC 865 and TP 361 clones when offered in non-choice tests, the native host was not always the most preferred when T. arnobia had a chance to choose. These results suggest that T.
arnobia may present a similar attraction to the stimuli emitted by both the native host and the tested genotypes of the exotic host, with the exception of the TP 361 clone. This fact is quite notable, since the hybrid E. urograndis was developed in Brazil in the mid-70s (Faria et al., 2013), revealing that, in a relatively short period of time, T. arnobia no longer distinguishes between its native and exotic hosts. However, in spite of not presenting a remarkable preference between the native and exotic hosts, there is evidence in the literature that suggests differences in T. arnobia development (Marinho et al., 2008; Holtz et al., 2003b; Santos et al., 2000). Holtz et al. (2003c) reported that T. arnobia had a higher intrinsic population growth rate (rm) in E. cloesiana than in its native host guava. However, Santos et al. (2000), assessing the development of T. arnobia in E. urophylla and guava, demonstrated a better performance in the native host, with 5% larval mortality in guava against 46.5% in E. urophylla. In general, even though there is no consensus among the authors, these facts point to an adaptation of T. arnobia to the exotic host. According to West and Cunnengham (2002), outbreaks of T. arnobia populations in forest stands of Eucalyptus spp. are often considered larger than in the native host guava crops, indicating that this fact may be due to the production method and not only to the genetic features of the hosts, as the forest plantations of Eucalyptus spp. are commonly established as extensive and contiguous clonal monocultures, favouring the incidence of pests and demonstrating that the preference for a host may depend on many factors, such as the quality and amount of available food resources (West and Cunnengham, 2002). Among the possible causes for the succession of T. arnobia population outbreaks in Eucalyptus spp., Marinho et al. (2008) consider that the exotic host has not yet developed defensive mechanisms, which would have already happened with the native host through co-evolution processes; this is because eucalyptus plants produce more protease inhibitors than guava, but are more attacked by caterpillars of the genus Thyrinteina, which have possibly adapted to the protection of those plants by increasing the production of digestive enzymes. Futuyama (2008) argues that adaptations to introduced exotic genetic materials have been gradually occurring in several species of insects, increasing the host range and favouring the change and maintenance of the population level between native and exotic hosts, thus ensuring the survival of these individuals in the field. The lower consumption observed for the I 144 clone in the present study is supported by the fact that, in the state of Alagoas, Brazil, the same E. urograndis clone has shown a higher productivity when compared to the others, being the clone most used for the establishment of forest stands in the region. This higher productivity may be related to lower attack by pest insects, including T. arnobia, which defoliates Eucalyptus spp., directly affecting growth rates and, in the case of continuous attacks, possibly causing the death of its host (Pereira 2007; Moreira 2013). Nevertheless, when the E. urograndis clones were offered in pairs, the I 144 clone was only significantly less preferred when compared to clone TP 361, indicating that when there are few options to choose from, T. arnobia tends to make little distinction between the offered materials.
When it comes to food preference, the insect responses may vary according to the stimuli coming from the host plant, being of a chemical (allelochemical), physical (colour) or morphological nature (hairiness, texture, hardness, structure dimension, among others). For Eucalyptus spp., physical-chemical features, in addition to the presence of secondary compounds such as tannins, phenols, fats and essential oils, can provide phagostimulating or deterrent properties, influencing the herbivory process and showing a direct influence on host preference (Ohmart et al., 1985; Lara, 1991; Ohmart and Edwards, 1991). Although it was not possible to affirm the existence of non-preference resistance among the different clones of E. urograndis (VE 41, I 144, TP 361, VCC 865) and the native host guava for T. arnobia, the results of the present study suggest a higher consumption for the VE 41 clone and guava, as well as a low preference for the I 144 clone. In general, the investigation of the interactions of native and exotic hosts and T. arnobia is of fundamental importance for a better understanding of ecological relationships, assisting in the development, planning and use of appropriate methods for Forest Integrated Pest Management programs (Forest IPM). CONCLUSIONS There are differences in the food consumption of T. arnobia between its native and exotic hosts, in which the VE 41 clone and the native host guava were more consumed when compared to the other tested genetic materials. The TP 361 clone was less preferred when offered paired with the native guava host, and the I 144 clone presented less food preference when the genetic materials were offered together. The results obtained provide basic information for the indication of eucalypt clones and the understanding of the interaction and ecological relationships, assisting in the development of Forest Integrated Pest Management (Forest IPM) programs for the control of T. arnobia. AUTHOR CONTRIBUTIONS Almeida, C. A. C. and Breda, M. O. were responsible for the conception and design of the work, data collection, data analysis and interpretation, drafting the article, revision and final approval of the version to be published. Santos, J. M., Gonçalves, F. S. and Rodrigues, M. B. were responsible for the data collection.
2021-08-27T16:43:43.909Z
2021-07-02T00:00:00.000
{ "year": 2021, "sha1": "53dcf8d8de232e13720a8b881e4133ea5fc20bc1", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rarv/a/pfjRQgSZCQVhCNYc8q9LmfL/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6954d3e3d3b31a3ce1aa3ae914d34f204b596c1c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
119713540
pes2o/s2orc
v3-fos-license
Functoriality of motivic lifts of the canonical construction Let (G,X) be a Shimura datum and K a neat open compact subgroup of $G(\mathbb{A}_f)$. Under mild hypothesis on (G,X), the canonical construction associates a variation of Hodge structure on $\textrm{Sh}_K(G,X)(\mathbb{C})$ to a representation of G. It is conjectured that this should be of motivic origin. Specifically, there should be a lift of the canonical construction which takes values in relative Chow motives over $\textrm{Sh}_K(G,X)$ and is functorial in (G,X). Using the formalism of mixed Shimura varieties, we show that such a motivic lift exists on the full subcategory of representations of Hodge type {(-1,0),(0,-1)}. If (G,X) is equipped with a choice of PEL-datum, Ancona has defined a motivic lift for all representations of G. We show that this is independent of the choice of PEL-datum and give criteria for it to be compatible with base change. Introduction Let (G, X) be a Shimura datum. By design, there is a functor Rep(G) → VHS/X which assigns a Q-valued variation of Hodge structures on X to a representation of G. For any neat open compact K ≤ G(A f ), let S := Sh K (G, X) denote the corresponding Shimura variety, defined over its reflex field via canonical models. For well-behaved (G, X), the variations of Hodge structure constructed on X descend to S(C). We call the resulting functor Rep(G) → VHS/S(C) the canonical construction and denote it by µ H G . The canonical construction should be of motivic origin. Specifically, there should be a canonical ⊗-functor µ mot G : Rep(G) → CHM/S to the category of relative Chow motives over S, such that commutes up to canonical natural isomorphism. Here H • B denotes the relative Betti realisation enriched to take values in variations of Hodge structure. The functor µ mot G should also be well behaved under change of G etc. In particular, the canonical construction should produce variations of Hodge structure which arise from geometry. As an example, for the usual modular curve datum (GL 2 , H), if V denotes the standard representation of GL 2 , then µ H G (V ) is isomorphic to H 1 B (E → S) ∨ , where E → S is the universal elliptic curve. The obvious choice for µ mot G (V ) is then the relative Chow motive h 1 (E → S) ∨ (in the notation of Theorem 2.8). Let Rep(G) AV denote the full subcategory of Rep(G) whose objects are of Hodge type {(−1, 0), (0, −1)}, i.e. for any (h : S → G) ∈ X, the restriction of V to S is (z ⊕z)-isotypical. Alternatively, the objects of Rep(G) AV are those for which their image under µ H G is the dual of H 1 B (A → S) for some abelian variety A → S (see Lemma 5.5). The author is grateful to have been supported by an EPSRC Doctoral Prize Introduction The first aim of this paper is to show that µ mot G can be defined on Rep(G) AV where the vertical maps are base change by f . This is stated more precisely as Theorem 6.4. Note that the reflex field of (G ′ , X ′ ) is allowed to be strictly larger than that of (G, X). The method of proof is to use the formalism provided by mixed Shimura varieties. Mixed Shimura varieties, as defined by Pink, generalise the traditional definition by allowing for non-reductive algebraic groups. Crucially, objects such as universal elliptic curves fit into this framework, i.e. they are mixed Shimura varieties and their structure maps are given by functoriality of mixed Shimura data. Canonical constructions exist more generally than just the Hodge case. 
For example, the ℓadic étale canonical construction associates a lisse ℓ-adic sheaf on S (considered as defined over its reflex field via canonical models) to a representation of G. The functor µ mot G should lift every incarnation of the canonical construction. In Section 10, we show this is the case of the étale canonical construction. For PEL-type Shimura data much stronger results on lifting µ H G are known due to work of Ancona [Anc15]. For Shimura data with a fixed choice of PEL-datum, Ancona has been able to define a functor Anc G defined on all of Rep(G) (see Thm. 8.7). Unfortunately, it is not directly clear that Anc G commutes with pull back via a morphism of Shimura data. Moreover, it is not clear that Anc G is independent of the choice of PEL-datum (recall that a Shimura variety may admit multiple distinct PEL-data, see Example 7.8). In the latter part of this paper, we show that Anc G is independent of the choice of PEL-datum (Lemma 9.3 and Corollary 9.8) and in many cases commutes with morphisms of Shimura varieties. More precisely, call a morphism of Shimura data each with chosen (possibly unrelated) PEL-data f : (G ′ , X ′ ) → (G, X) admissible if f * V is a summand of V ′⊕k for some k, as G ′ -representations, where V ′ , V denote the representations given in the PEL-data on the source and target respectively. The motivation for this definition is that it ensures that we may use functoriality of mixed Shimura data to compare f * Anc G (V ) and Anc G ′ (V ′ ). THEOREM 1.2 Given f : (G ′ , X ′ ) → (G, X) an admissible morphism of PEL-type Shimura varieties each with chosen PEL-data, then the following diagram commutes: up to a specified natural isomorphism. Moreover, there is a prism analogous to that of Theorem 1.1. This is made precise in Lemma 9.7 and Corollary 9.8. Not all morphisms f are admissible (see Example 11.1), but in Corollary 11.3 we show that every f for which the source only has factors of symplectic type (see Lemma 7.5) is admissible. In any case, it is easy to decide if a given morphism is admissible. One application of results such as the above is in the theory of Euler systems. In this context it is often required to pullback classes lying in the cohomology of Shimura varieties under morphisms of Shimura data. It is also necessary to switch between various cohomology theories. For this reason it is desirable to be able to perform such operations at the motivic level. There has been significant recent progress in this direction due to Lemma's construction of motivic classes on Siegel threefolds [Lem17]. If functoriality results are available Lemma's classes have the potential to yield Euler systems for a multitude of different Shimura varieties (see for example [LSZ17], particularly Section 6). One other observation from practical applications is that it is desirable to have such results with F -coefficients for F/k a number field. For this reason we have phrased the following to allow for coefficients. Acknowledgements: I would like to especially thank David Loeffler for suggesting the research topic and for providing guidance throughout. I am also deeply indebted to Giuseppe Ancona for many helpful discussions and explaining his results to me. Relative motives We now recall some background on relative motives. NOTATION 2.1 Assume that k is a field of characteristic zero equipped with a fixed embedding into C. Given a k-variety Z, we write Z(C) for its complex points considered as a complex manifold. 
In this section, we fix S to be a smooth quasi-projective k-scheme. For simplicity, we shall assume that all components of S have the same dimension d S . DEFINITION 2.2 Following [DM91, Sec. 1], fix an adequate equivalence relation ∼ on all k-varieties and let X, Y be smooth projective S-schemes. Assume for simplicity that X, Y are equidimensional of dimensions d X , d Y respectively. We define the group of degree p correspondences from X to Y , up to equivalence by ∼, to be where A d ∼ (−) denotes the Q-vector space of codimension d cycles up to equivalence by ∼. Proceeding as in the classical case we obtain the category M ∼ /S of relative motives over S with respect to ∼, whose objects are triples (X, e, n) consisting of a variety X, an idempotent e ∈ Corr 0 S (X, X) and an integer n ∈ Z corresponding to Tate twists. The category M ∼ /S is a Q-linear ⊗-category, with the tensor structure being given by fibre product over S. We are mostly concerned with the case when ∼ is taken to be rational equivalence ∼ rat , in which case we denote M ∼ /S by CHM/S, or homological equivalence ∼ hom with respect to singular cohomology (or equivalently any choice of ℓ-adic cohomology), in which case we denote the resulting category by HomM/S. These categories are referred to as relative Chow motives over S and relative homological motives over S respectively 1 . Write H i B (Z(C), Q) for the singular cohomology of a variety Z/k. Since homological equivalence is coarser than rational equivalence we obtain a forgetful map CHM/S → HomM/S, which is full. If SmProjVar/S denotes the category of (not necessarily irreducible) smooth projective varieties over S, then there is a functor h : (SmProjVar/S) op → CHM/S which assigns to a variety X/S its motive (X, ∆ X , 0) where ∆ X is the diagonal cycle of X × S X. The same is also true of homological motives. For any adequate equivalence relation, the construction of M ∼ /S is compatible with change of S, i.e. given f : S ′ → S, we obtain pullback functors f * : M ∼ /S → M ∼ /S ′ by base changing triples in the obvious way. REMARK 2.3 This construction has been extended to the case when S → k is quasi-projective but not necessarily smooth by Corti-Hanamura [CH00]. DEFINITION 2.4 Let F/Q be a number field. We define (CHM/S) F to be the category with the same objects as CHM/S but for which Hom (CHM/S)F /S (A, B) = Hom CHM/S (A, B) ⊗ Q F . We then define CHM F /S to be the pseudo-abelianisation ((CHM/S) F ) ♮ of (CHM/S) F and refer to it as the category of relative Chow motives over S with coefficients in F . We shall frequently use that it is equivalent to think of a Chow motive with coefficients in F as an object M of ((CHM/S) F ) ♮ or as an object of CHM/S together with an inclusion F ֒→ End CHM/S (M ) (see [Del79, Sec. 2] or more details [AK02, Sec. 5], ). We define HomM F /S analogously. DEFINITION 2.5 Let AbVar/S denote the category of abelian varieties over S. We denote by CHM ab F /S, HomM ab F /S the smallest rigid linear symmetric tensor subcategories which contain the motives of abelian varieties and are closed under taking subobjects and Tate twists. THEOREM 2.6 There is a section I of the projection N : CHM ab F /S → HomM ab F /S which is a linear symmetric tensor functor, commutes with Tate twists and is such that 1 It may be better to refer to HomM/S as "naive homological motives". 
This is because, unlike in the case of S = k, our homological motives admit non-trivial maps between objects which should be considered to live in different cohomological degrees. As a result, they do not coincide with what we may reasonably expect of "relative numerical motives". Realisations Proof. This follows from work of O'Sullivan [O'S11, pf. of Thm. 6.1.1] (see also [Anc15,Thm. 7.1]). More precisely, O'Sullivan checks that any quotient of "Chow theory" by a proper ideal has a unique right inverse, which will then give an N as above. But cycles which are homologically equivalent to zero form a proper ideal within CHM/S. The same reasoning applies for motives with coefficients. The analogous statement for homological motives also holds but is automatic. The second condition ensures that the decomposition is compatible with change of A and S as well as applying any of the standard realisations. Another consequence is: is an isomorphism. Realisations NOTATION 3.1 Let S t → k be a smooth quasi-projective variety over a number field and VHS/S(C) denote the category of Q-valued variations of Hodge structure on S(C). For any finite field extension F/Q, we may define VHS F /S(C) analogously to Definition 2.4 (note we do not require F ⊂ R). The canonical construction This construction is spelt out in [Tor18, Cor. 4.5.7]. REMARK 3.3 In contrast to the case when S is a field, the relative Hodge realisation functors are not faithful in general. This is due to the presence of non-trivial morphisms between objects which are pure of different weights. In their work, Corti-Hanamura correct this by introducing a realisation functor taking values in a derived category. This is not necessary for our purposes as we shall only require faithfullness for elements of Hom HomMF /S (h i (X), h i (Y )) with X, Y abelian varieties, which is true of H • B (cf. [Tor18, Remark 4.5.8]). Note that for abelian varieties H i B (X(C)) = H • B (h i (X)), by Theorem 2.8. REMARK 3.4 In Lemma 3.2, by naturality in S we mean that given f : For an object X p → S this is given by the proper base All the above also holds in the étale case, which we now record for use in Section 10. NOTATION 3.5 Let ℓ be any prime and λ a prime of F dividing ℓ. Write F λ for the completion of F at λ. Let λ be a uniformiser of F λ . Given a scheme X, we writeÉt λ /S for the category of lisse λ-adic sheaves on X and F λ,X for the constant λ-adic sheaf on a scheme X with coefficient group F λ . LEMMA 3.6 There are relative étale realisation functors These are natural in S. The canonical construction NOTATION 4.1 For an algebraic group G/Q and a field F of characteristic zero let Rep F (G) denote the category of representations of G F over F . We shall usually consider an object V ∈ Rep F (G) as a representation V of G over Q together with a map F ֒→ End G (V ). We also set Rep(G) := Rep Q (G). NOTATION 4.2 Throughout (G, X) will denote a Shimura datum (which we often interchange with (G, h) for h ∈ X). We shall always assume that our Shimura data are such that the identity connected component of the centre of G is an almost-direct product of a Q-split torus and an R-anisotropic torus. This ensures that all real cocharacters of the centre are in fact defined over Q. Upon fixing a choice of neat open compact K ≤ G(A f ), we denote Sh K (G, X) by S, always considered to be defined over the reflex field. 
We follow a similar convention for ∈ Rep F (G), we may define a variation of Hodge structure on S(C) as follows: consider V as Q-representation of G together with an action of F . Then the underlying local system corresponds to the cover and as such may be given the Q-Hodge structure defined by the map . This is independent of the choice of representative and can be checked to define a variation of Hodge structure (this uses the almost-direct product condition on the centre of G, for a thorough treatment see [Pin90, Ch. 1]). This extends to a functor µ H G : Rep F (G) → VHS F /S(C) referred to as the canonical construction (where H stands for Hodge). CONSTRUCTION 4.4 Let V ∈ Rep F (G) and f be as above. There is a canonical isomorphism of local systems and this is also a morphism of variations of Hodge structure as it respects the Hodge structure on each fibre. The collection κ := (κ V ) V then defines a natural isomorphism: Mixed Shimura varieties Mixed Shimura data, as defined by Pink [Pin90], extend the traditional definition to not necessarily reductive algebraic groups. A mixed Shimura datum consists of a pair (P,X) with P/Q a connected algebraic group and a subspaceX ⊆ Hom(S C , P C ) satisfying various requirements (see [Pin90,Sec. 2.1] for the precise conditions 2 ). In the case that P is reductive, i.e. that P has trivial unipotent radical, we recover the classical definition of Shimura data, which we shall refer to as the pure case. For any neat open compact K ≤ P (A f ), there is an associated mixed Shimura variety Sh K (P,X), which is algebraic over its reflex field. A morphism of mixed Shimura data f : Any mixed Shimura datum (P,X) admits a map to the pure Shimura datum (G, X) where G is the quotient of P by its unipotent radical R u (P ) and X is given by postcomposing elements ofX with π : P → G. We shall always assume that our mixed Shimura varieties satisfy the stronger condition that: the centre of G = P/R u (P ) is an almost-direct product of a Q-split torus and a torus which is R-anisotropic (so the weight cocharacter π • h • w : G m,R → G R is rational for h ∈X). These ensure that there is a canonical construction for mixed Shimura varieties associating variations of mixed Hodge structure on Sh K (P,X) to representations of Rep(P ) (see [Pin90, Sec. 1.18]). Universal abelian varieties can be seen as instances of mixed Shimura varieties (see Example 5.6). In this section, we shall observe that the theory of mixed Shimura varieties automates the creation of certain abelian varieties over pure Shimura varieties in a functorial way. DEFINITION 5.1 Let (G, X) be a (pure) Shimura datum and V ∈ Rep F (G). We consider V as a Q-representation together with an F -structure F ֒→ End G (V ). For any choice of h x ∈ X, V ⊗ Q C decomposes as a direct sum of one dimensional C-subrepesentations upon each of which z ∈ S(R) = C × acts as multiplication by z −piz−qi for some p i , q i . We say that V has Hodge type given by set {(p 1 , q 1 ), (p 2 , q 2 ), ..., (p n , q n )} of (p i , q i ) occurring in the above decomposition. Since different choices of h x define isomorphic R-Hodge structures, this is independent of the choice of h x . The Hodge type of a representation V ∈ Rep(G) coincides with the Hodge type of µ H G (V ) as a variation of Hodge structure on S(C). Given V ∈ Rep F (G) AV , considering V as a representation over Q, we may form the semi-direct product V ⋊ G as an algebraic group over Q. 
Let p : V ⋊ G → G denote the projection map andX consist of the elements t ∈ Hom(S C , Proof. The unipotent radical of V ⋊ G is V . If, in the notation of [Pin90, Sec. 2.1], we set U = V , then it is easy to check the conditions directly. Alternatively, use that (V ⋊ G,X) is an instance of a unipotent extension in the sense of [Pin90,Prop. 2.17]. Note that we are assuming (G, X) has rational weight and the centre is an almost-direct product of a Q-split and R-anisotropic torus. The datum (V ⋊ G,X) then also satisfies the corresponding strengthened condition of a mixed Shimura variety. Mixed Shimura data of the form (V ⋊G,X) are the only non-pure data we shall need to consider. Proof. This is an easy exercise, for example see [Tor18, pf. of Lem. 4.7.4]. LEMMA 5.5 For any Shimura datum (G, X) and V, K, L as above, the map has the structure of an abelian variety. Proof. This is [Pin90, 3.22 a)] (the zero section is given by the Levi section ι : Moreover, this is functorial in the sense that given a homomorphism of representations f : respects the group structure. The existence of the projection and identity section maps force (V ⋊ G,X) to have the same reflex field as (G, X) [Pin90, Sec. 11.2(b)]. EXAMPLE 5.6 If a Shimura datum (G, X) has a PEL-datum with standard representation V (see Definition 7.2), then for any neat open compact K and K-stableẐ-lattice L of V (A f ) (we shall always take our lattices to be of full rank), Sh L⋊K (V ⋊ G,X) → Sh K (G, X) is isogeneous to the universal abelian variety defined by the PEL-datum. ii) Given a morphism of pure Shimura data f : where f * L is the lattice L considered as a K ′ -stableẐ-lattice. Proof. Both statements follow immediately from the characterisation of fibre products for mixed Shimura data given in [Pin90,Sec. 3.10]. CONSTRUCTION 5.8 We now define a functor µ mot For any α ∈ End G (V ), α(L) is aẐ-lattice and so there exists an n ∈ N such that n · α(L) ≤ L. In other words, with the first factor acting via functoriality of mixed Shimura varieties and the second by Q-linearity of CHM/S. This uses that the actions of Z as subring of T ∩ F (i.e. addition via the group law as an abelian variety) and as a subring of Q coincide, which follows from Theorem 2.8. In contrast, this would not be true of h(S K,V ) ∨ and this does not define an element of CHM F /S. where the first map is obtained by applying h 1 (−) ∨ to the dual of the map of abelian varieties π n : Sh nL⋊K (V ⋊ G,X) → Sh L⋊K (V ⋊ G,X) which is given by functoriality of mixed Shimura varieties, whilst the second is h 1 (−) ∨ of the map of mixed Shimura varieties induced by f . We then set µ mot G (f ) to be 1/n times the composite f * • (π ∨ m ) * . By construction the morphisms µ mot G (f ) will respect the F -action. PROPOSITION 5.9 Given a choice ofẐ-lattice for each V ∈ Rep F (G) AV as above, then the corresponding µ mot G is a well-defined ⊗-functor Rep F (G) AV → CHM F /S. The functor µ mot G is independent of the choice of lattice for each V , up to canonical natural isomorphism. Proof. We first remark that µ mot G (f ) is independent of the choice of n. This follows as the constructions for n and for nm differ by 1/m · (π ∨ m ) * • π m, * = 1/m · [m] * , but, for an abelian variety A/S, [m] acts on h 1 (A) ∨ by multiplication by m (Theorem 2.8). That µ mot G respects composition follows from the commutativity of the following diagram, for any f : and thus it is clear that µ mot G defines a functor. 
Given choices L 1 , L 2 for each V and corresponding functors µ mot G,1 , µ mot G,2 , define a natural transformation ψ : µ mot G,1 → µ mot G,2 by defining ψ V to be 1/n times the map for any n such that nL 1 ≤ L 2 . That this defines a natural transformation again follows from the commutativity of the above square. Moreover, for every V , as an isogeny (1) is invertible after applying h 1 (−) ∨ , we find that ψ defines a natural isomorphism. REMARK 5.10 If f : V → W is a non-zero homomorphism of representations of G over Q and we fix a neat open compact subgroup K of G and K-stableẐ-lattices is non-zero as a morphism of abelian varieties (for example, using the explicit description of the points over C). Together with Theorem 2.10 this demonstrates that µ mot G is faithful. NOTATION 5.11 Given V ∈ Rep F (G) AV , we shall denote the mixed Shimura variety Sh L⋊K (V ⋊ G,X) simply by S K,V . We use p : S K,V → S and ι : S → S K,V to denote the maps induced by the projection and Levi section as well as the induced maps on their analytifications. We continue accordingly for (G ′ , h ′ ). LEMMA 5.12 Given a morphism of Shimura data f : Proof. From Lemma 5.7 ii) and that the canonical projectors defining h i commute with pullback, we obtain isomorphisms The natural isomorphism is then given by taking these maps and possibly composing the maps defined in the proof of Proposition 5.9 if the lattice chosen for f * V is not f * L. Direct images for mixed Shimura varieties In this section, we check that µ mot G lifts the canonical construction and is compatible with base change. LEMMA 6.1 Given a Shimura datum (G, X) and V ∈ Rep F (G) AV , then there is a canonical identifi- Proof. The canonical construction can be extended to mixed Shimura varieties as we now recall. Let (P,X) be a mixed Shimura datum and Q ≤ P (A f ) a neat open compact subgroup. A representation W ∈ Rep F (P ), which we consider as a Q-representation ρ : P → GL(W ) together with an F -structure, defines a local system This is functorial in the sense that, given f : For the purposes of the lemma, the key fact is that pushforwards of sheaves arising via the canonical construction correspond to group cohomology. More specifically, in the notation of the lemma, the following diagram commutes: LEMMA 6.3 i) Let (G, h) be a Shimura datum and a neat open compact subgroup K ≤ G(A f ) and let α also denote the map (S K,V1 )(C) → (S K,V2 )(C). Then the following diagram commutes: For any V ∈ Rep F (G) AV , the following diagram commutes: We prove the first case, the other is similar. The strategy is to reduce to a group theoretic context via a Tannakian argument using work of Wildeshaus. Fix a connected component S 0 of S(C) and let S 0 K,Vi denote the connected component p −1 i (S 0 ). In [Wil97, Thm. II.2.2] it is checked that the canonical construction produces variations of Hodge structure which are admissible in the sense of [Kas86]. Since the V i are unipotent, objects in the image of µ H Vi⋊G (in the notation used in the proof of Lemma 6.1) admit a filtration by objects pulled back from S 0 . Let VHS ′ /S 0 denote the category of admissible variations of Hodge structure on S 0 and p i -UVar/S 0 K,Vi denote the full subcategory of VHS ′ /S 0 K,Vi whose objects admit a filtration for which the graded objects are pulled back from elements of VHS ′ /S 0 . The functors µ H G , µ H Vi⋊G take values in these categories. Fix y ∈ S 0 and for i = 1, 2 set x i = ι i (y), where ι i denotes the canonical Levi section. 
For i = 1, 2, let P i,xi denote the Tannaka dual of p i -UVar/S 0 K,Vi and G y the Tannaka dual of VHS ′ /S 0 all with the obvious fibre functors. The map P i,xi → G y induced by p * i is surjective (e.g. [DM82, Prop. 2.21a)]). Lastly, set V i,xi = ker(P i,xi → G y ). Consider the diagram: This does not commute, but there is an obvious natural transformation R j p 2, * =⇒ R j p 1, * α * . The calculation of higher direct images in p i -UVar/S K,Vi coincides with the usual higher direct image as elements of VHS F /S 0 K,Vi (cf. [Wil97, Sec. I.4]). The maps R j p i, * are not ⊗-functors, but we claim that when viewed in the Tannakian setting, the above triangle becomes: and the natural transformation becomes the usual map To see this, note that p * i corresponds to inflation from G y and has right adjoint p i, * , whilst (−) Vi,x i is right adjoint to inflation. Since the canonical construction is a ⊗-functor, after taking duals we obtain a diagram of short exact sequences: where t i is the dual of µ H Vi⋊G and r the dual of µ H G . Moreover, the left vertical map V i,xi → V i is an isomorphism [Wil97, p. 96] (this would not be true without restricting to admissible variations of Hodge structure). This shows that the following square commutes: as in the proof of Lemma 6.1. In the case of the trivial representation Q, this yields maps r * H 1 (V i , Q) → H 1 (V i,xi , Q) which are dual to ϕ Vi . Since the diagrams of (2) are compatible with α * , the squares of (3) form a prism: A purely group theoretic argument now checks that, consequently, there is a commutative square: Taking Tannaka and linear duals we now obtain the square in i). We are now able to prove Theorem 1.1 of the introduction. THEOREM 6.4 Let (G, h) be an arbitrary Shimura datum and K ≤ G(A f ) neat open compact. Denote by S the Shimura variety Sh K (G, h). Then the following diagram commutes, up to natural isomorphism given by ϕ : where ϕ is as in Notation 6.2). Moreover, under pullback by f : (G ′ , X ′ ) → (G, X), the triangles for (G, X), (G ′ , X ′ ) form a commutative prism: for which each face has a given natural transformation, all of which are compatible. Proof. That ϕ V defines a natural isomorphism for the first triangle is Lemma 6.3 i). The commutativity of the other individual faces in the prism is given by the natural isomorphisms: ψ of Lemma 5.12 for the rear face, κ of Construction 4.4 for the front left face, and ξ of Remark 3.4 for the front right. Due to O'Sullivan's Theorem 2.6 (cf. Remark 2.7), we need only prove the compatibility statement for homological motives. As a result, we reduce to showing that the two natural isomor- coincide, here κ is as defined in Construction 4.4 and ψ is as defined in Lemma 5.12. This follows from Lemma 6.3 ii). Classification of PEL-data In the case of PEL-type Shimura data, significantly stronger results than Theorem 6.4 are possible. In this section, we provide a classification of PEL-type Shimura data after base change to R. NOTATION 7.1 Given an algebra and finally a choice of R-algebra homomorphism h : (the first condition ensures that u, h(i)v is symmetric). Let G be the algebraic group whose R-points, for any Q-algebra R, are defined by Note G is connected if and only if G has no factors of "orthogonal type" (see Lemma 7.5). For z ∈ C × , we automatically have that h(z) ∈ G(R). We also denote by h the induced map S → G R . 
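To fix notation for the PEL setting just introduced, one standard way to write the conditions on h and the definition of the group G attached to a tuple (B, *, V, ⟨ , ⟩, h) as in Definition 7.7, following Kottwitz, is given in the LaTeX sketch below. This is a reconstruction under the usual conventions rather than a quotation: in particular the positive-definiteness requirement on ⟨u, h(i)v⟩ is the standard assumption and is supplied here.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Conditions on the R-algebra homomorphism h attached to (B,*,V,<,>):
% the first condition makes <u, h(i)v> symmetric, and positivity is the usual requirement.
\[
  h\colon \mathbb{C}\longrightarrow \operatorname{End}_{B_{\mathbb{R}}}(V_{\mathbb{R}}),\qquad
  \langle h(z)u, v\rangle = \langle u, h(\bar z)v\rangle,\qquad
  \langle u, h(i)v\rangle \ \text{positive definite}.
\]
% The similitude group G attached to the datum, on points of a Q-algebra R:
\[
  G(R)=\bigl\{\, g\in \operatorname{GL}_{B\otimes_{\mathbb{Q}}R}\!\bigl(V\otimes_{\mathbb{Q}}R\bigr)
  \;:\; \langle gu, gv\rangle = c(g)\,\langle u, v\rangle \ \text{for some } c(g)\in R^{\times} \,\bigr\}.
\]
\end{document}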
NOTATION 7.3 Any semisimple R-algebra with positive involution splits as a product of simple factors each of which is of one of the following types (see for example [Kot92,p. 386]): • linear: (M n (C), A →Ā t ), where (−) denotes coefficientwise complex conjugation. • orthogonal: In particular, all symplectic B R -modules split as an orthogonal direct sum of submodules only acted on non-trivially by a single simple factor of one of the above types, and G 1,R splits accordingly. NOTATION 7.4 Given an algebraic group G, we denote by G • the connected component of the identity. We define the following algebraic groups over R: • Let U a,b be the indefinite unitary group whose R-points consist of elements of M a+b (C) which preserve a Hermitian form of signature (a, b). There is an obvious isomorphism U a,b ∼ = U b,a and (U a,b ) C ∼ = GL a+b,C . • Set J = 0 −1 1 0 and let O * 2n be the algebraic group defined by The following is well-known, but we have been unable to reference explicitly in the literature. with each factor acting on the factor of V R for which the action of B R factors through the corresponding M n (R), M n (C) or M n (H). ii) If G 1,R has no factors isomorphic to U n,0 for n ≥ 2, then (G • , h) defines a Shimura datum. In particular, if G 1,R additionally has no factors of orthogonal type, then (G, h) is a Shimura datum. iii) In any case, the identity connected component of the centre of G • is an almost-direct product of a Q-split torus and an R-anisotropic torus. Proof. These properties are well-known, but we provide proofs of the statements we have been unable to find references for. In [Kot92,Lem. 4.1] it is shown that (G, h) satifies (1.5.1), (1.5.2) and (1.5.3) of [Del71], even without the assumption of ii). To show ii) it remains to show that G ad has no factors of compact type under the above assumption. This and iii) will be easy to deduce from i). In order to classify the factors of G 1,R which may arise it suffices to assume that B R is simple of each type appearing in Definition 7.3. Moreover, we are able to reduce to the case of B R isomorphic to R, C or H by an easy Morita equivalence argument. We shall make repeated use of the following result of Kottwitz: Let (C, * ) be an R-algebra with positive involution and (W, , , h), (W ′ , , ′ , h ′ ) be two triples that together with (C, * ) satisfy the conditions of Definition 7.2 with R in place of Q. Then if W and W ′ are isomorphic as C ⊗ R Cmodules, with C acting via h and h ′ respectively, then (W, , ) and (W ′ , ′ ) are isomorphic as symplectic (C, * )-modules [Kot92, Lemma 4.2]. First assume that (B R , * ) = (R, * = id). Then is a triple as above with corresponding B R ⊗ R C-module C. As a result, any symplectic (B R , * )module V R must split as an orthogonal direct sum of terms isomorphic to W . By definition, G 1 (R) for W ⊕n is Sp 2n . Now assume that (B R , * ) = (C, * = z →z). In this case, B R ⊗ R C ∼ = C × C has two irreducible modules. The triple given by (C, tr C/R (xiȳ), h(i) = i) (resp. (C, − tr C/R (xiȳ), h(i) = −i)) corresponds to the C ⊗ R C-summand on which the C actions agree (resp. disagree). So if we denote these modules by A and B respectively, then any (B R , * )-module is isomorphic to A ⊕a ⊕ B ⊕b . For such a module, G 1 (R) consists of elements of GL n (C) which also preserve a pairing of signature (b, a). In other words, G 1,R is the indefinite unitary group U b,a . Finally, in the quaternion case we shall assume that (B R , * ) = (H op , * ) (with H op an expositional choice). 
Then B R ⊗ R C ∼ = M 2 (C) has a unique non-trivial irreducible module, which is of R-dimension 4. This is realised by the triple (H, tr H/R (xjỹ), h(i) = j) where H op acts by right multiplication and y →ỹ is the (anti-)involution given by y = a+bi+cj +dij → a+bi−cj +dij. As such, all symplectic (H op , * )-modules are isomorphic to H ⊕n for some n. To deduce ii), note that G ad 1 ∼ = G ad (indeed, the cokernel of G ad 1 ֒→ G ad is a proper quotient of G m ). From the above calculations we find that the only possible factors of G ad 1 of compact type are U n,0 ∼ = U 0,n for n ≥ 2. For iii), first note that the largest anisotropic subtorus of Z(G • ) must be contained in Z(G • 1 ). But from the above calculation The factorisation of Lemma 7.5 i) justifies the naming convention of Definition 7.3. REMARK 7.6 In [Kot92] Kottwitz, allows Shimura data to have (not necessarily connected) reductive groups G of the form considered in Lemma 7.5 when G 1,R has no factors isomorphic to U n,0 for n ≥ 2. Ancona's results also hold in this generality and so ours will as well. DEFINITION 7.7 A Shimura datum (G, h) which arises as in Lemma 7.5 is said to be of PEL-type and the corresponding (B, * , V, , , h) is said to be a PEL-datum for (G, h). If we fix such a PEL-datum for (G, h), then we say that V ∈ Rep(G) is the standard representation of G. Shimura data with a fixed choice of PEL-datum have an explicit moduli interpretation (see [Mil17,Sec. 8]). EXAMPLE 7.8 From the proof of Lemma 7.5, it is easy to see that a Shimura datum may admit multiple distinct PEL-data due to Morita equivalence. As an explicit example, consider the PEL- , which corresponds to the usual modular curves Shimura datum (GL 2 , H). Ancona's construction In the case of PEL-type Shimura data, Ancona has described a lift of µ H G defined on all of Rep F (G) [Anc15]. But, as defined and it is not immediately clear that it is well behaved with respect to pullbacks or is even independent of the choice of PEL-datum (cf. Example 7.8). In this section we briefly recall Ancona's construction, but in the language of mixed Shimura varieties. commutes. Here, we have used the isomorphism ϕ VF : This also allows us to define, for any choice of idempotent e, the image of a direct summand e · ( V ⊗an ) for each W , then we can compatibly extend Anc G to all of Rep F (G). Finally, by composition with the section of Theorem 2.6, we obtain a functor Rep F (G) → CHM F /S, which we also denote Anc G . LEMMA 8.5 The construction of Anc G is, up to natural isomorphism, independent of all choices made. Proof. Fix W ∈ Rep F (G) and two summands isomorphic to W of a tensor space, . We must provide an isomorphism Given the compatibility of the Künneth formula with mixed Shimura varieties, we may assume that W is irreducible and there is a corresponding isomorphism e · (V ⊗a As before, it suffices to assume that b = b ′ = 0. For weight reasons, we must then have that a = a ′ . Finally, since Lemma 8.2 lifts all elements of End Rep F (G) (V ⊗a F ), we obtain a motivic lift of the isomorphism between the two tensor space representatives of W . This construction is natural, and so gives the desired natural isomorphism. REMARK 8.6 Let (G, X) be a Shimura datum with a chosen PEL-datum for which all objects of Rep(G) AV are direct summands of V ⊕n for varying n. Then the argument given above can be adapted to show that Anc G extends µ mot G up to natural isomorphism. 
If the PEL-datum only has factors of symplectic type in the sense of Definition 7.3, then this always holds (see Lemma 11.2). This can also be checked to hold much more generally. Proof. We describe the natural isomorphism. In the notation of Construction 8.4, write η G,V for where ϕ VF is as defined in Notation 6.2. That η G := (η G,V ) V defines a natural isomorphism now follows from Lemma 6.3 i). Compatibility with base change In this section, we give conditions to ensure Ancona's construction and Theorem 8.7 are compatible with base change, i.e. there is a commutative prism analogous to that of Theorem 6.4. Let f : (G ′ , h ′ ) → (G, h) be a morphism of Shimura data each with a chosen PEL-datum. Denote their standard representations by V ′ , V respectively. By Lemma 8.1, f * V ∼ = e · ( n (V ⊗an ⊗ V ∨⊗bn )). In order to show that Anc (−) is compatible with f , we would need to construct an isomorphism Unfortunately, such a morphism cannot be constructed using just functoriality of mixed Shimura varieties. For this reason we make the following restriction: DEFINITION 9.1 Let f : (G ′ , h ′ ) → (G, h) be a morphism of PEL-type Shimura data each with a choice of PEL-datum with standard representations V ′ , V . If (⋆) f * V ∼ = e · V ′⊕n for some n ∈ N and idempotent e ∈ End Rep(G ′ ) (V ′⊕n ), then we say that f is an admissible morphism of Shimura varieties with PEL-data. Note that if f is admissible, then f * V F ∼ = e F · V ′⊕n F for any F . Admissibility implies that there is exists a map (S K,V ) S ′ → n i=1 S ′ K ′ ,V ′ as abelian varieties over S ′ . Proof. Let V ′ , V be the standard representations of the source and target respectively and B ′ , B the chosen Q-algebras. It suffices to show that V R is a summand of some V ′⊕n R . It is a consequence of Lemma 7.5 i) that the pairs (B R , V R ) and (B ′ R , V ′ R ) may only differ up to Morita equivalence (given that they both correspond to G 1,R ). To be more explicit, say B R has a factor M a (H) with corresponding factor (H ⊕a ) ⊕n of V R , then B ′ R has a factor M b (H) with corresponding factor (H ⊕b ) ⊕n of V ′ R . The corresponding factor of G 1,R is then O * 2n acting in the obvious way. It is then clear that V R is a summand of some number of copies of V ′ R as G R -modules. EXAMPLE 9.4 In Example 7.8, we described two PEL-data for (GL 2 , H), one with standard representation V ′ = Q ⊕2 and the other with standard representation V = Q ⊕2 ⊕ Q ⊕2 . The identity map (GL 2 , H) → (GL 2 , H) is admissible for each of the two ways of assigning each (GL 2 , H) a distinct choice of the two PEL-data. Indeed, id * V ′ ∼ = (i 1 • π 1 ) · V and id * V ∼ = V ′⊕2 . Not all morphisms of Shimura data with chosen PEL-data are admissible (see Example 11.1), but in Section 11 we show that if the PEL-datum on the source has only factors of symplectic type then it is admissible. In any case, it is easy to check if a given morphism is admissible. We now assume f : (G ′ , h ′ ) → (G, h) is admissible and fix one such isomorphism as in (⋆). CONSTRUCTION 9.5 We now have canonical isomorphisms: by Lemma 5.7 i) and the Künneth formula 2.9. Write λ V for this composite. For V F , the base change to Rep F (G), there is an analogous λ VF . NOTATION 9.6 As functors on Rep F (G), we extend this to a putative natural isomorphism λ : f * • Anc G =⇒ Anc G ′ •f * as follows: Let W ∈ Rep F (G). 
Since the construction of Anc G ′ is independent of the choice of the θ ′ W ′ (Lemma 8.5), we are free to assume that, for is obtained from f * θ W by taking the tensor products and direct sums of (the base change of) the isomorphism of (⋆). In other words, whilst There is now an obvious choice for λ W given by taking sums and products of λ VF and its dual. we have commutativity of the individual faces. It remains to check that, as natural isomorphisms agree. This can be seen from the proof of Lemma 9.7 and Lemma 6.3. Étale canonical construction Canonical constructions arise more generally than just the Hodge realisation, and both µ mot G and Ancona's construction should also be lifts of any such construction. We sketch this for the étale realisation following [Wil97, Sec. II.4]. We use the notation for the étale realisation described in Lemma 3.6. NOTATION 10.1 Let (G, X) be a Shimura datum and K be a neat open compact subgroup of G(A f ). We consider S := Sh K (G, X) to be defined over its reflex field E/Q via canonical models. Let V ∈ Rep F (G) and L be a K-stable full rankẐ-sublattice of V (A f ). Recall from Section 5 that there is a mixed Shimura variety S K,V := Sh L⋊K (V ⋊ G,X) whose reflex field is the same as that of S. The projection and Levi section then define regular maps p : S K,V → S, ι : S → S K,V . CONSTRUCTION 10.2 Let (G, X) be a Shimura datum and K ≤ G(A f ) neat open compact. If K ′ ≤ K is an open normal subgroup, then there is a right action of K/K ′ on Sh K ′ (G, X). Since we are assuming that the centre of G is an almost-direct product of a Q-split and R-anisotropic torus, the action of K/K ′ is free on C-points and Sh K ′ (G, X) −→ Sh K (G, X) is an étale cover of smooth algebraic varieties with Galois group K/K ′ (see [Pin92,Prop. 3.3.3. and (3.4.1)]). Taking the inverse limit over K ′ ≤ K we obtain a pro-Galois covering of Sh K (G, X) with Galois group K. Let ℓ be a prime and λ a prime of F lying over ℓ. Write F λ for the completion of F at λ as before. Then any F λ -linear continuous representation of K will define a lisse λ-adic sheaf on Sh K (G, X). Given (G F ρ → GL(V )) ∈ Rep F (G), we obtain such a representation via This defines a functor µ ét G : Rep F (G) →Ét F λ /S, which we refer to as the étale canonical construction. LEMMA 10.3 Given a Shimura datum (G, X) and V ∈ Rep F (G) AV , then there is a canonical identification ϕ V,λ : H 1 λ (S K,V ) ∨ ∼ → µ ét G (V ). Proof. The étale canonical construction extends verbatim to mixed Shimura varieties. As in the Hodge case, the diagram ii) Given a morphism of Shimura data f : (G ′ , h ′ ) → (G, h), each of PEL-type with a fixed datum, which is admissible in the sense of Definition 9.1, then the triangles for (G, h) and for (G ′ , h ′ ) together with base change form a commutative prism as in Theorem 9.8. Each face has a prescribed natural isomorphism which altogether are compatible. Results on admissibility In this section, we give additional results on the admissibility of morphisms of Shimura data with chosen PEL-data. Firstly, not all such morphisms are admissible: EXAMPLE 11.1 Let (G ′ , h ′ ) be defined by the PEL-datum (Q(i), * , Q(i) ⊕2 , (− tr Q(i)/Q (xiȳ) ⊕ tr Q(i)/Q (xiȳ)), h ′ ) where h ′ : C → End R (C ⊕2 ) is the map which sends z to multiplication by (z,z). We write GU 1,1 for G ′ . Then (GU 1,1 ) R coincides with the usual generalised unitary group of complex matrices preserving, up to scaling, a Hermitian form of signature (1, −1). 
Let χ denote the two-dimensional representation of GU 1,1 given by a composition built from the determinant and the norm-one elements of Q(i). Here, the determinant is given by considering GU 1,1 ⊂ Aut Q(i) (Q(i) ⊕2 ), whilst U 1 denotes the norm one elements of Q(i) and the final map is given by the action of U 1 on Q(i) by multiplication. Note that the image of χ preserves the symmetric non-degenerate pairing tr Q(i)/Q (ab) and that, after base change to R, χ is trivial on the image of h ′ . Now let V ′ denote the standard representation of GU 1,1 and consider the representation GU 1,1 −→ GSp(V ′ ) × GO(Q(i)) −→ GSp(V ′ ⊗ Q Q(i)), where the second map is given by the tensor product. It can be checked that f * (V ′ ⊗ Q Q(i)) ∼ = V ′ ⊗ χ is not isomorphic to V ′⊕2 , for example by base changing to C, where GU 1,1 becomes isomorphic to G m × GL 2 . As a result, f is not admissible. In contrast, in the symplectic case there are no non-admissible morphisms. In particular, there do not exist non-trivial representations χ which are trivial on the image of h in the symplectic case. Proof. It suffices to show the analogous statement after base change to C. Let W be a C-representation of G C of Hodge type {(−1, 0), (0, −1)}. By Lemma 7.5, G 1,C ∼ = ∏ i Sp 2mi . Accordingly, W | G 1,C splits as a direct sum of irreducibles on which G 1,C acts via projection to some simple factor. CLAIM Let T be an irreducible representation of Sp 2n whose restriction to the subgroup U 1 ⊂ S, realised as the block matrices with rows (aI g , −bI g ) and (bI g , aI g ) for a 2 + b 2 = 1, is (z ⊕ z̄)-isotypical. Then T is isomorphic to the standard representation.
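To make the claim easier to parse, the subgroup in question and the isotypy condition can be written out as in the LaTeX sketch below. This is a restatement rather than a quotation; n is used throughout for the half-rank, identifying it with the g appearing in the block matrices above.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% The circle subgroup U_1 of S(R) = C^x, embedded in Sp_{2n}(R) by the block matrices
% appearing in the claim.
\[
  U_1 = \{\, a+bi \in \mathbb{C}^{\times} : a^{2}+b^{2}=1 \,\},\qquad
  a+bi \longmapsto
  \begin{pmatrix} aI_n & -bI_n\\ bI_n & aI_n \end{pmatrix}\in \operatorname{Sp}_{2n}(\mathbb{R}).
\]
% Claim (restated): if an irreducible representation T of Sp_{2n} restricts to this copy of
% U_1 as a direct sum of copies of the characters z and \bar{z} only, i.e. is
% (z \oplus \bar{z})-isotypical, then T is the standard 2n-dimensional representation.
\end{document}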
2018-12-20T15:16:26.000Z
2018-12-20T00:00:00.000
{ "year": 2019, "sha1": "7a38bcb24fdca6b66d741482d983c4b20ec5ac23", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00229-019-01150-9.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "7a38bcb24fdca6b66d741482d983c4b20ec5ac23", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
56551620
pes2o/s2orc
v3-fos-license
Retinal Morphometric Markers of Crystallized and Fluid Intelligence Among Adults With Overweight and Obesity Objective: To investigate the relationship between retinal morphometric measures and intellectual abilities among adults with overweight and obesity. Methods: Adults between 25 and 45 years (N = 55, 38 females) with overweight or obesity (BMI ≥ 25.0 kg/m2) underwent an optical coherence tomography (OCT) scan to assess retinal nerve fiber layer (RNFL) volume, ganglion cell layer (GCL) volume, macular volume, and central foveal thickness. Dual-Energy X-ray absorptiometry was used to assess whole-body adiposity (% Fat). The Kaufman Brief Intelligence Test-2 was used to assess general intelligence (IQ), fluid, and crystallized intelligence. Hierarchical linear regression analyses were performed to examine relationships between adiposity and intelligence measures following adjustment of relevant demographic characteristics and degree of adiposity (i.e., % Fat). Results: Although initial bivariate correlations indicated that % Fat was inversely related to fluid intelligence, this relationship was mitigated by inclusion of other demographic factors, including age, sex, and education level. Regression analyses for primary outcomes revealed that RNFL was positively related to IQ and fluid intelligence. However, only GCL was positively related to crystallized intelligence. Conclusion: This work provides novel data linking specific retinal morphometric measures – assessed using OCT – to intellectual abilities among adults with overweight and obesity. Clinical Trial Registration: www.clinicaltrials.gov, identifier NCT02740439. INTRODUCTION Obesity prevalence is a growing global public health issue (Swinburn et al., 2011). In 2015, there were 604 million adults with obesity worldwide, representing a greater than twofold increase in prevalence since the 1980s (GBD Obesity Collaborators et al., 2017). In the United States, obesity is estimated to affect approximately 40% of the adult population (Hales et al., 2017). Excess fat mass or adiposity is known to directly contribute to a wide range of metabolic disorders and chronic diseases including type 2 diabetes and cardiovascular disease (Malnick and Knobler, 2006). However, overweight and obesity are also related to mood disorders including anxiety and depression and increasing evidence suggests that the detrimental consequences of obesity also extend to cognitive function and brain health (Romain et al., 2018) including greater risk for dementia in older age (Gustafson, 2006;Luchsinger and Gustafson, 2009). While the underlying mechanisms remain unclear, evidence from magnetic resonance imaging (MRI) studies indicates that obesity is predictive of variations in brain structure and function that often accompany cognitive deficits including reduced synaptic plasticity (Erion et al., 2014), reduced processing speed (Sanz et al., 2013), and lower gray matter volume (Walther et al., 2009). Population-based studies have revealed that, akin to aging, increasing Body Mass Index (BMI) is longitudinally associated with declining gray matter volume (13 to 16% reduction per unit increase in BMI) in the temporal lobe . Similarly, obesity has also been linked to MRI measures of white matter including hyperintensities (Gustafson D. R et al., 2004;Jagust et al., 2005;Stanek et al., 2011). Therefore, conventional neuroimaging techniques, primarily MRI, have revealed links between gray matter and white matter outcomes and obesity. 
However, the use of MRI presents many practical challenges including high financial costs, contraindications, susceptibility to movement artifacts, technical expertise necessary for scan acquisition and analyses, and limited mobility or accessibility for populations. Therefore, there is increasing need for determining the efficacy of alternative neuroimaging techniques with the requisite sensitivity to cognitive abilities and brain health, particularly among individuals with overweight or obesity. Recent evidence indicates the morphometric measures of the human retina, studied using optical coherence tomography (OCT), have the potential to be utilized as markers of gray and white matter in the brain (Mutlu et al., 2017). Since the human retina is formed embryonically from neural tissue and is integrated into the neural system via the optic nerve, it is possible that structural abnormalities in brain tissue may be reflected in the retina (Chang et al., 2014;Mutlu et al., 2017). Additionally, imaging the retina, as proxy for brain, provides unique advantages since it can be visualized non-invasively at the cellular level due to its transparent nature, allowing for inexpensive testing of neurological biomarkers in clinical settings (Chang et al., 2014). OCT is a 3-dimensional retinal imaging technique that relies on low-coherence near infrared interferometry (Huang et al., 1991) to segment the various structural components of the retina including, but not limited to, the retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), and macular volume and thickness. Although OCT is often used in clinical settings to detect abnormalities in the eye and monitor the progression of ocular diseases, retinal neurodegeneration has been recently correlated with cerebral atrophy suggesting that neuronal damage may occur simultaneously in the retina and throughout the brain . Additionally, the thickness of different layers of the retina are related to specific brain subcomponents of brain matter. For example, RNFL is composed of axons and RNFL thickness has been related to cerebral white matter. On the other hand, neuronal cell bodies comprise the GCL and may be reflective of cerebral gray matter (Mutlu et al., 2017). The RNFL relationship to white matter has received further support from studies among patients with Multiple Sclerosis demonstrating that RNFL is correlated with white matter tracts that are functionally separated from the visual system (Scheel et al., 2014). Several studies involving adults with Alzheimer's have shown that these patients have reduced RNFL and GCL Thomson et al., 2015). Interestingly, thinner RNFL and GCL have also been associated with smaller temporal lobe structures including the hippocampus which is vital for memory and learning across the lifespan (Mutlu et al., 2017). Although emerging evidence points to the utility of OCT as a neuroimaging technique, only a limited number of studies have examined retinal morphometric measures and cognitive function. RNFL thickness has been positively associated with performance on the mini mental state examination among a large cohort of twins between 18 and 89 years (Jones-Odeh et al., 2016). Total macular volume and RNFL have also been associated with verbal intelligence and IQ among persons with MS (Ashtari et al., 2015). However, the extent to which different retinal layers correspond to aspects of intellectual abilities among individuals with overweight and obesity has not been directly examined. 
Intelligence represents a critical cognitive ability known to support vital cognitive processes such as executive function and the acquisition of knowledge and learning (Colom et al., 2010). Intelligence can be conceptualized as general intelligence (i.e., intelligence quotient [IQ]) or its separable components of crystallized intelligence and fluid intelligence. Studying specific constructs of intelligence is important given that fluid and crystallized intelligence exhibit differential susceptibility to factors such as aging (Craik and Bialystok, 2006;Park and Reuter-Lorenz, 2009). Crystallized intelligence reflects the ability to use previously acquired knowledge and is therefore amenable to learning, while fluid intelligence is thought to represent the ability to adapt to new situations (Cattell, 1963). In the context of obesity, studying these different measures of intelligence may provide insights into components of cognitive function that exhibit sensitivity to obesity-related cognitive impairments. However, to our knowledge, the relationship between retinal morphometric measures and intellectual ability among adults with overweight and obesity has not been previously studied. Accordingly, the present work aimed to utilize OCT to assess the relationship between retinal morphometric measures and different constructs of intelligence among adults with overweight or obesity. Given prior evidence indicating that thicker RNFL and GCL are related to greater gray matter and white matter volumes among older adults, we hypothesized that lower thickness in RNFL and GCL would be associated with poorer performance across all measures of intelligence (i.e., IQ, fluid, and crystallized). Participants Middle-aged adults (25-45 years) with overweight or obesity (BMI ≥ 25.0 kg/m 2 ) were recruited from an ongoing dietary intervention (National Clinical Trial identifier NCT02740439). The data presented here were collected prior to the intervention phase of the study. Participants were recruited from the East-Central region of Illinois through e-mail listservs and flyers posted in public buildings. This study was carried out in accordance with the recommendations of the Declaration of Helsinki. The protocol was approved by the University of Illinois Institutional Review Board and written informed consent was obtained from all participants. Following informed consent, participants completed medical and demographic questionnaires. From these questionnaires, participants were excluded if they had a history of ocular disease (e.g., age-related macular degeneration), uncorrected vision, neurological disease, and/or chronic metabolic disease. Complete data were available for 53 participants. Participant characteristics are summarized in Table 1 (data presented as mean ± SD unless indicated otherwise; retinal values based on the average of both left and right eyes, available for 53 subjects; RNFL, retinal nerve fiber layer; GCL, ganglion cell layer). Overall Procedure Data were collected over two visits to the laboratory. During the first session, following written informed consent and screening, trained researchers administered the Kaufman Brief Intelligence Test Second Edition (KBIT-2) (Naugle et al., 1993; Wang and Kaufman, 1993) and participants underwent OCT assessment in both eyes. During the second session, participants arrived following a 10-h fast and underwent a whole-body Dual-Energy X-ray Absorptiometry (DXA) scan.
Retinal Morphometry Assessment Retinal morphometric data were assessed using retinal images collected with the Heidelberg Engineering Spectralis Optical Coherence Tomography (SD-OCT; Heidelberg Engineering, Heidelberg, Germany). The principles of the SD-OCT technique have been previously discussed (Galetta et al., 2011). Briefly, the SD-OCT procedure relies on the interferometer to transmit lowcoherence infrared light through the pupil and the layers of the retina. This SD-OCT device utilizes a class one laser to emit infrared light at 870 nm through a super luminescence diode. Results were obtained using the central, inner, and outer rings centered around the fovea with respective diameters of 1, 2.22, and 3.45 mm. Macular, RNFL, and GCL volume were all obtained using the 3.45 diameter circle. Center Foveal Thickness was found by taking the thickness measurement at the center-most point at the foveal pit. Volume and thickness were assessed using Heidelberg software (version: 6.0.11.0). Each scan was manually segmented to account for blood vessels by trained researchers. Figures 1A,B illustrate the retinal layers examined. Data were collected on both eyes and there was a high degree of correlation between the thickness and volume measures between right and left eyes (r's between 0.71 and 0.97 all P's < 0.01). Therefore, average values of the left and right eyes were used in the statistical analyses. Intelligence Assessment Kaufman Brief Intelligence Test Second Edition has been nationally normed for ages 4-90 years to assess general intellectual abilities (IQ) (Wang and Kaufman, 1993 Statistical Analysis Pearson correlation analyses were conducted to determine the contribution of demographic and retinal morphometric measures to the intelligence outcomes. Stepwise hierarchical linear regression models were used to examine the contribution of retinal morphology measures to intelligence measures following adjustment for potential confounding variables. Age, sex, education, and % Fat were entered as step 1 control variables and morphometric measures were added at step 2 in the analyses. The significance of the change in the R 2 -value between the two steps was used to evaluate the improvement in the variance explained once retinal measures were included. The independent contribution of each retinal morphometric measure was assessed by studying the β weight and significance at step 2 when explaining variance in intelligence outcomes beyond that of the demographic variables and adiposity. Data were analyzed using SPSS (SPSS v. 24, Chicago, IL, United States) with an alpha threshold of p = 0.05. RESULTS The sample consisted of 55 participants, ages 25-45 (M = 34.33 ± 0.82 years) and was predominantly comprised of females (n = 38). Approximately half of the sample was comprised of individuals with a BMI between 25 and 29.9 kg/m 2 (49%) and the other half (51%) of the sample had a BMI ≥ 30 kg/m 2 . Majority of the study participants had obtained higher education or advanced college degrees (62%). Bivariate Correlations Preliminary Pearson bivariate correlations are summarized in Table 2. Sex (males coded as 1, females coded as 0) was negatively Regression Analyses A summary of the regression analyses for each measure of intelligence is provided in Table 3. DISCUSSION This study aimed to determine the relationship between retinal morphometric measures and intellectual abilities among adults with overweight and obesity. 
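Before turning to the findings, the two-step hierarchical structure of the models described under Statistical Analysis and summarized in Table 3 can be illustrated with a short sketch. The analyses reported here were run in SPSS; the Python code below, using pandas and statsmodels, is only an illustration of that structure, and the file name, column names, and helper function are hypothetical rather than the authors' code.

import pandas as pd
import statsmodels.api as sm

def hierarchical_r2_change(df, outcome, step1_vars, step2_vars):
    """Fit the step 1 model (covariates only) and the step 2 model (covariates plus
    retinal measures) by OLS, and return both fits together with the change in R^2."""
    y = df[outcome]
    X1 = sm.add_constant(df[step1_vars])
    X2 = sm.add_constant(df[step1_vars + step2_vars])
    fit1 = sm.OLS(y, X1, missing="drop").fit()
    fit2 = sm.OLS(y, X2, missing="drop").fit()
    return fit1, fit2, fit2.rsquared - fit1.rsquared

# Hypothetical usage mirroring the variables described in the text:
# df = pd.read_csv("oct_intelligence.csv")
# fit1, fit2, delta_r2 = hierarchical_r2_change(
#     df,
#     outcome="fluid_iq",
#     step1_vars=["age", "sex", "education", "percent_fat"],
#     step2_vars=["rnfl_volume"],  # averaged across left and right eyes
# )
# print(fit2.params["rnfl_volume"], fit2.pvalues["rnfl_volume"], delta_r2)
# When both fits use the same rows, fit2.compare_f_test(fit1) gives an F-test for the
# significance of the R^2 change at step 2.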
Consistent with our a priori hypothesis, we observed that RNFL and GCL volume were significantly related to higher intellectual ability. Interestingly, these relationships were selective in that RNFL and GCL were related to fluid and crystallized intelligence, respectively. On the other hand, we observed that greater macular volume and CFT were not significant predictors of any of our measures of intellectual abilities. However, the influence of CFT on IQ and crystallized intelligence approached statistical significance. Overall, these data indicate that OCT-derived retinal measures are related to intellectual abilities among adults with overweight and obesity. Obesity has been shown to be related to lower gray matter across several brain regions, including prefrontal cortex, temporal, occipital cortex, amygdala, and cerebellum, even after adjusting for obesity-related comorbidities (Kharabian Masouleh et al., 2016). Given that the retina shares developmental, physiological, and anatomical features with the brain, retinal imaging has emerged as an alternative approach to imaging the neural structures (de la Monte et al., 2009;Kamran Ikram et al., 2012;Ong et al., 2015). The efficacy for using OCT for neural imaging has gained particular empirical support from studies in neurodegenerative diseases. For example, histopathological and clinical studies have shown that patients with Alzheimer's disease have reduced GCL and RNFL thickness compared to controls (Coppola et al., 2015). Recent work has related RNFL and GCL thinning to global and regional cerebral atrophy using MRI among neurologically healthy adults Casaletto et al., 2017). Ong et al. (2015) studied a sample of 60-80-year-olds (N = 164) and observed that GCL thinning was selectively related to reduction in occipital and temporal lobe gray matter volume, while no relationships were observed with white matter. Additionally, in a large population based study (N = 2,124), Mutlu et al. (2017) observed that thinner RNFL and GCL were associated with poorer white-matter microstructure. The RNFL and GCL are known to be components of the ganglion cell complex with the RNFL comprising axons and the GCL signifying cell bodies. Therefore, it is possible the RNFL may correspond to cerebral white matter while the GCL may reflect cerebral gray matter integrity (Mutlu et al., 2017). However, to our knowledge, this is the first study to implicate thinner retinal morphometric measures in poorer intellectual ability among adults with overweight and obesity. Although a considerable body of literature has examined the influence of obesity on brain structure and cognitive function (Pannacciulli et al., 2006;Smith et al., 2011), the influence of obesity on measures of intelligence has received comparatively less attention. Studying neuroimaging markers of intellectual abilities is important because intelligence supports higher-order mental processes such as executive function (also known as cognitive control) as well as the acquisition of knowledge and learning across the lifespan (Colom et al., 2010). Data from the present work indicated increasing adiposity was inversely related to fluid intelligence. However, this relationship was no longer significant once other demographic factors were included in step 1 of the regression models. Fluid intelligence represents abstract reasoning and problem solving abilities and is an important predictor for lifetime trajectories of cognition and physical and mental health (Gottfredson, 1997). 
Importantly, the retinal morphometric measure of RNFL was the primary predictive variable for fluid intelligence. Previous neuroimaging work has shown that abnormalities in white matter influence a variety of cognitive functions, particularly under demyelinating diseases such as multiple sclerosis (Kail, 1998). White matter integrity, as indicated by fractional anisotropy, has also been shown to be related to general intellectual abilities and fluid intelligence (Yu et al., 2008;Haász et al., 2013). If the RNFL is reflective of white matter integrity, the findings of the current study are consistent with these aforementioned studies since we also observed a significant positive relationship between RNFL and fluid intelligence. On the other hand we observed that GCL volume was selectively related to crystallized intelligence. Crystalized intelligence is distinct from fluid intelligence because it refers to the ability to retrieve and use information that has been acquired throughout life (Horn and Cattell, 1968). Unlike fluid intelligence, crystallized intelligence does not exhibit susceptibility to aging (Park and Reuter-Lorenz, 2009). The implication of the finding from the current study is that GCL reflects intellectual abilities that are acquired through learning across the lifespan. Future studies are needed to determine whether changes in obesity and fat distribution differentially compromise particular intellectual abilities during development and aging. While GCL and RNFL were found to be predictive of intellectual abilities, we did not observe significant correlations between measures of macular volume and central foveal thickness and intelligence. It is worth noting that the association among foveal thickness, IQ, and crystallized intelligence approached statistical significance. It is possible that the influence of foveal thickness on intellectual ability is comparatively smaller, relative to GCL and RNFL, and our sample was not adequately powered to detect the relationships. Nevertheless, the patterns observed (i.e., potential relationships among foveal thickness, IQ, and crystallized intelligence) would be similar to those observed for GCL. Thus, greater foveal thickness may be protective of intellectual abilities thought to be acquired by learning through the lifespan. Given that previous work has shown that foveal thickness is associated with macular pigment optical density or the accumulation of macular carotenoids (Liew et al., 2006), future intervention trials are necessary to determine the susceptibility of the fovea to dietary intake and its implications for intellectual abilities. Although the present study provides novel data linking intellectual abilities to retinal morphometric measures assessed by OCT, there are several limitations worth considering. Longitudinal research studies are necessary to characterize changes in retinal measures and intellectual abilities over extended periods of time. Additionally, our study lacked a comparator group of individuals with a healthy weight status. Improving the heterogeneity of the sample by including individuals with varying weight status would provide more comprehensive insights into the relationship between obesity, intellectual abilities, and retinal measures. Finally, we did not account for genetic factors that may contribute to retinal morphometric measures. For example, Jones-Odeh et al. (2016) examined a large cohort of twins in the United Kingdom and learned that RNFL thickness was highly heritable (82%). 
Markers of vascular health in the retina have also been linked to neuropsychological functioning at midlife (Shalev et al., 2013) and should be accounted for in future research. Additionally, other lifestyle factors (e.g., diet and physical activity) have the potential to contribute to intellectual abilities and/or retinal morphology and warrant examination in future studies. In conclusion, these findings provide cross-sectional evidence supporting the utility of retinal morphometric measures, as assessed by OCT, for studying intellectual abilities among adults with overweight and obesity. Importantly, we were able to demonstrate these relationships in a sample of middle-aged adults, whereas previous work has predominantly focused on older adults and individuals with dementia. Selective relationships were observed between particular retinal measures and different intellectual abilities known to be differentially affected by aging. These data may set the stage for future research into the interaction between aging and weight status and their influence on gray and white matter and on different constructs of intelligence.

AUTHOR CONTRIBUTIONS
AJ, CR, CE, ST, and GR collected the data and contributed to the manuscript draft. AW interpreted the results and contributed to the manuscript development. HH and NK conceptualized the study, interpreted the results, and contributed to the manuscript draft.

FUNDING
This work was supported by the Department of Kinesiology and Community Health at the University of Illinois, the USDA National Institute of Food and Agriculture, Hatch project (Grant number 1009249), and the Hass Avocado Board.
An Unusual Case of Post-Traumatic Headache Complicated by Intracranial Hypotension We present a case of post-traumatic headache complicated by intracranial hypotension resulting in an acquired Chiari malformation and myelopathy with syringomyelia. This constellation of findings suggest a possible series of events that started with a traumatic cerebral spinal fluid (CSF) leak, followed by descent of the cerebellar tonsils and disruption of CSF circulation that caused spinal cord swelling and syrinx. This unusual presentation of post-traumatic headache highlights the varying presentations and the potential sequelae of intracranial hypotension. In addition, the delayed onset of upper motor neuron symptoms along with initially normal head computerized tomography scan (CT) findings, beg the question of whether or not a post-traumatic headache warrants earlier magnetic resonance imaging (MRI). A 27 year-old man, previously healthy and without history of headache, presented for evaluation of persistent headache seven months after suffering blunt trauma to the back of the head. There was no associated loss of consciousness. He developed a headache the day after injury, which was later accompanied by nausea and vomiting. The headache was described as dull occipital pain with intermittent severe throbbing. Head CT performed one week after the initial injury was unremarkable. At the time of presentation, he described his headache as daily, lasting three to four hours per day with partial pain relief using combination aspirin-acetaminophen-caffeine. His headache was alleviated by lying down and sleeping; and was exacerbated by coughing, lack of sleep and alcohol consumption. Prior to presentation to our clinic, the patient had unsuccessful treatment with amitriptyline, propranolol, sumatriptan, and hydrocodone-acetaminophen. Past medical and surgical histories were unremarkable. There was no family history of migraine, neurologic disease, or collagen vascular disease. He was a former smoker and reported infrequent alcohol consumption. Caffeine intake included two to three sodas daily. General and neurologic physical exam were unremarkable. Initial diagnosis was of chronic post-traumatic headache with a chronic migraine phenotype and possible component of medication overuse headache due to frequent use of combination analgesics. He was started on preventive treatment with venlafaxine and instructed to reduce his abortive medication use and caffeine intake. At three month follow up the patient reported modest improvement in headache symptoms. However, he had subsequently developed bilateral hand weakness and upper thoracic back pain. Physical exam was notable for bilateral finger flexor weakness, intrinsic hand muscle wasting, tongue fasciculations and ankle clonus. MRI of the brain and cervical cord with and without contrast revealed diffuse pachymeningeal enhancement, sagging brainstem, low lying cerebellar tonsils (10 mm descent) with crowding at the foramen magnum, and venous engorgement of the cervical epidural space ( Figure 1A,B). This constellation of findings was suggestive of intracranial hypotension. Additionally, there was diffuse spinal cord signal abnormality involving cervical and upper thoracic spinal cord to the level of T8-T9 with a small syrinx at T1-T2 and an epidural fluid collection in the mid-thoracic spine ( Figure 1C). Brain Sci. 
Upon reviewing the MRI findings with the patient, he confirmed that the headache had always been alleviated by lying supine. A CT-guided epidural blood patch was performed at L1/L2 and resulted in only modest improvement in headache pain. A second epidural blood patch was performed at T12/L1, again without significant clinical improvement. Repeat MRI of the brain and spine showed progression of the spinal cord signal abnormality from T9 to T10. A CT myelogram showed an epidural fluid collection along the right lateral thecal sac extending from T5 to T12, most prominent at T10 (Figure 2A). A CT-guided epidural blood patch directed at T9/T10 resulted in only ephemeral relief. A repeat CT myelogram was performed and a targeted interlaminar epidural blood patch was placed at T9/T10, combined with a right transforaminal blood patch at T10/T11. This, the fourth epidural blood patch, also only transiently alleviated the patient's symptoms. The patient was subsequently evaluated for possible surgical repair of the dural tear. Upon the recommendation of neurosurgery, a myelogram with rapid-sequence imaging was performed under biplanar fluoroscopy. This study showed CSF egress from the right T10 nerve root sleeve in the right T10-T11 foramen extending into the ventral epidural space (Figure 2B). A repeat combined interlaminar and transforaminal epidural blood patch ultimately resulted in resolution of the patient's headache.

The patient continues to have improvement of bilateral hand strength. His headaches, neck, and back pain all resolved over the following month. Repeat imaging performed approximately three months following the fifth and final therapeutic blood patch showed complete resolution of the pachymeningeal enhancement, tonsillar herniation, and cord edema, with only a tiny residual syrinx (Figure 3).

Discussion
This case presents several unusual complications of post-traumatic headache. Curiously, the patient developed delayed symptoms that were found to be the result of an upper thoracic syrinx and cord edema. The positional nature of the headache and its exacerbation by coughing were consistent with intracranial hypotension [1]. Etiologies of CSF volume depletion include trauma, shunt over-drainage, and spontaneous causes, including disorders of the connective tissue matrix [2]. The history of blunt trauma to the back of the head in this case makes trauma the most likely cause of CSF leakage. Intracranial hypotension resulting from CSF leakage has been described as a mechanism of acquired Chiari malformation.
As spinal fluid is lost, there is loss of brain buoyancy, resulting in brain settling and herniation of hindbrain structures through the foramen magnum [3,4]. Although the pathophysiology of syrinx formation remains somewhat elusive, several theories have been developed to explain its cause. One of these proposes that, in accordance with the Bernoulli theorem, the narrowed flow path created by the sagging cerebellar tonsils at the foramen magnum causes an increase in CSF velocity and a resultant low CSF pressure in the narrowed canal. This low CSF pressure creates a suction effect on the spinal cord that distends the cord during each systole, causing extracellular fluid to develop within the distended cord and enlarging the central canal to form a syrinx [5,6].
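As a minimal sketch of the idealized flow argument above (assuming steady, incompressible flow along a streamline, which pulsatile CSF flow only approximates), Bernoulli's relation reads

$$p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant},$$

so, at roughly constant height, an increase in CSF velocity v through the narrowed space at the foramen magnum implies a fall in the local pressure p, the proposed source of the suction effect on the cord.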
This can occur in the setting of Chiari malformation, and a similar mechanism may occur with trauma [5]. Trauma alone has been well described as a cause of syrinx formation [7-12]. In the unique case of our patient described here, however, the situation is more complex. It is possible that trauma alone was enough to cause syrinx formation. However, given headache symptoms consistent with intracranial hypotension and the persistent CSF leakage found on imaging, it is more likely that CSF leakage was the inciting factor. We propose that trauma caused a CSF leak that led to intracranial hypotension and sagging of the cerebellar tonsils, causing distention of the cord and culminating in syrinx formation. An alternative theory that may better explain the delayed onset of weakness is that the patient sustained a second traumatic event that enlarged an initially small dural leak into a larger tear. The larger tear may have resulted in more CSF leakage and caused occlusion of CSF flow at the foramen magnum with the descent of the cerebellar tonsils, ultimately resulting in cord edema and syrinx. A more rudimentary explanation may be that the events occurred semi-independently and not as the causal sequence we have described. Although the initial head CT did not show evidence of pathology, it is possible that the limited sensitivity of CT for detecting these processes concealed a pre-existing Chiari malformation and/or syrinx. In this scenario, a traumatic CSF leak would have disrupted CSF dynamics and resulted in intracranial hypotension. Known trauma was most likely the precipitating event in this presentation. However, in other cases of spontaneous CSF leak, connective tissue diseases leading to weakness of the dural sac may be considered a potential cause. Studies showing dural ectasia and meningeal diverticula to be common in connective tissue disorders suggest that this population may be predisposed to spontaneous CSF leaks [13,14]. A family history of spontaneous CSF leakage, a history of aortic aneurysm, or joint hypermobility on exam may warrant an investigation for collagen vascular disease.

Conclusions
This case report highlights the importance of correctly identifying headaches associated with a CSF leak. Our young, healthy man who presented with a seemingly straightforward case of post-traumatic headache developed delayed syrinx formation that resulted in intrinsic hand weakness. CSF leakage is not a benign condition. Potential sequelae, including syrinx formation, can cause profound debilitation and prove quite challenging to treat. Given the risk of the aforementioned complications, it is vital that a detailed history of the headache characteristics be obtained and a thorough neurologic examination performed. Should symptoms or signs consistent with intracranial hypotension be elicited, MR imaging may be warranted.

Author Contributions: Sara Siavoshi, Jessica Ailani, and Carrie Dougherty conceived and designed the case report. Carrie Dougherty, Jessica Ailani, and Frank Berkowitz contributed materials and analysis for the report. Sara Siavoshi wrote the paper. Carrie Dougherty and Jessica Ailani edited the paper.
Notch responds differently to Delta and Wingless in cultured Drosophila cells. Notch, a cell surface receptor, is required for producing different types of cells during development of Drosophila melanogaster. Notch activates expression of one set of genes in response to ligand Delta and another set of genes in response to ligand Wingless. The means by which Notch initiates these different intracellular activities was examined in this study. Cultured cells expressing Notch were treated with Delta or Wingless, and the effect on Notch was examined by Western blotting. Treatment of cells with Delta resulted in accumulation of approximately 120-kDa Notch intracellular domain molecules in the cytoplasmic fraction. This form of Notch did not accumulate in cells treated with Wingless, but the approximately 350-kDa full-length Notch molecules accumulated. These results indicate that N responds differently to binding by Delta and Wingless, and suggest that although the Delta signal is transduced by the Notch intracellular domain released from the plasma membrane, the Wingless signal is transduced by the Notch intracellular domain associated with the plasma membrane. Notch (N) 1 is required for the specification of different cell types during development of Drosophila melanogaster (1). It is a cell surface receptor, the intracellular activities of which are regulated by ligands binding to the extracellular domain. Delta (Dl) is the ligand for the well known N functions associated with lateral inhibition. During lateral inhibition, N and Dl produce the neuronal precursor cells that differentiate the nervous system and the epidermal precursor cells that differentiate the cuticle (1)(2)(3)(4). Wingless is the ligand for some N functions associated with differentiation of the cuticle (5-10). The full-length N binds both Dl and Wg, in vivo and in vitro (7). It regulates expression of the Enhancer of split Complex and wingless in response to Dl (1,(11)(12)(13)(14). In response to Wg, it regulates expression of patched, shaggy, and hairy, but not Enhancer of split Complex and wingless (7). Thus, the same receptor regulates different sets of genes in response to Dl and Wg. N intracellular signals in response to Dl are mediated by the Suppressor of Hairless (Su(H)) signal transduction pathway (11)(12)(13)(15)(16). The signaling pathway used by N to transduce the intracellular signals in response to Wg is unknown. Because Wg binding does not activate expression of Enhancer of split Complex and wingless (7), it is unlikely to be the same Su(H) pathway used with Dl. Furthermore, Wg-dependent functions of N during development are distinct from Dl-dependent N functions (5)(6)(7)(8)(9)(10). These observations indicate that fulllength N generates different intracellular signals in response to Dl and Wg. However, how can the same N receptor generate one signal after binding Dl and a different signal after binding Wg? We treated N-expressing cultured cells with Dl and Wg and found out that N responds differently to binding by these two ligands. This differential response likely initiates transduction of different signals to the nucleus. MATERIALS AND METHODS S2-N and S2-Dl cells are Schneider (S2) cells transfected with the Notch and Delta genes, respectively, for expression of their proteins under the control of heat shock promoter (17). 
S2-N ⌬EGF1-18 and S2-N ⌬EGF19 -36 cells are S2 cells transfected with the Notch gene designed to express N proteins without epidermal growth factor-like (EGF-like) repeats 1 to 18 and EGF-like repeats 19 to 36, respectively. S2 cells do not express N, Dl, or Wg (4,7,18). Clone-8 are imaginal disc cells that express N and Dl, but not Wg (18). 2 S2 and Wg media were prepared by growing heat-shocked S2 cells or S2-Wg cells in Shields and Sang's M3 media as described (7). 1-2 ϫ 10 6 heat-shocked S2-N, S2-N ⌬EGF1-18 , S2-N ⌬EGF19 -36 , or clone-8 cells were treated with 1.5-3 ϫ 10 6 heat-shocked S2 cells or heat-shocked S2-Dl cells in the indicated media and incubated with gentle shaking at 25°C. Siliconized multiwell Falcon plates were used for incubation. Aliquots of the same cell solutions or media were used for each experiment. Proteins were extracted with 0.75% Triton X-100 and 0.5% deoxycholate as described (7). Subcellular fractions were prepared as described (16). Protein content in extracts was equalized by using absorbance values at 280 nm and the Bio-Rad DC protein assay kit. 8% SDS-polyacrylamide gel electrophoresis was used for electrophoresis. Western blotting was performed as described (19), and the signals were detected with an ECL kit (Amersham Pharmacia Biotech). ␣NI antibody, made against the intracellular CDC10/ankyrin region (20), was used to detect N molecules. RESULTS Schneider cells expressing Notch (S2-N cells) treated with Dl for 1 h accumulated ϳ120-kDa N molecules (N120; Fig. 1a, lanes 1 and 2). Dl binds N in the extracellular region including EGF-like repeats 11 and 12 (21). S2 cells expressing N molecules lacking this region, N ⌬EGF1-18 , do not accumulate N120 molecules in response to treatment with Dl ( Fig. 1a, lanes 3 and 4). This indicates that N120 accumulated in response to Dl binding N. N120 is the complete intracellular domain and is similar to the ϳ120-kDa N intracellular domain molecule shown to accumulate in vivo in response to Dl (1,16,22,23). 3 N120 molecules did not accumulate in S2-N cells treated with Wg for 1, 2, or 5 h (Fig. 1a, lanes 5 and 6, 9 and 10, and 12 and 13). However, S2-N cells treated with Wg for 5 h accumulated ϳ350-kDa N molecules (N350) but not S2-N cells treated with Dl (Fig. 1a, lanes [11][12][13]. N350 is the full-length co-linear N molecule containing both the intracellular and extracellular domains (7,16). Wg binds N in the EGF-like repeats 19 -36 region (7). S2 cells expressing N molecules lacking this region, N ⌬EGF19 -36 , did not accumulate co-linear molecules when treated with Wg for 5 h (Fig. 1a, lanes 15 and 16). On the other hand, truncated, co-linear N ⌬EGF1-18 molecules containing the Wg binding sites accumulated upon treatment with Wg ( Fig. 1a, lanes 17 and 18). These results indicate that accumulation of N350 in S2-N cells was in response to Wg binding N. Accumulation of N350 molecules was also discernible in cells treated with Wg for 2 h when the blots are exposed to film for shorter periods. In contrast to Wg-treated cells, Dl-treated cells in the same blots always had lower levels of N350 compared with the levels in untreated cells (data not shown, see below). Accumulation of N350 molecules in Wg-treated cells is not due to activity of the endogenous Notch gene, which is rearranged in S2 cells (4). 
2 It is not due to a general increase or stabilization of all proteins in the cells: all N molecules do not accumulate, and the total protein levels in the three lanes are comparable (see N120 and other molecules marked with an asterisk in Fig. 1a, lanes 11-13, and the HSP 70 panels). It is also not due to a Wg effect that is unrelated to N binding but retards N processing for cell surface presentation (see Refs. 24 -26 for cell surface N processing). Otherwise, co-linear N ⌬EGF19 -36 would have also accumulated, but it did not (see Fig. 1a, lanes 15 and 16). Thus, whereas Dl binding full-length N results in accumulation of N120, Wg binding results in accumulation of the co-linear N350. Treatment of S2-N cells with Dl or Wg for 2 h also resulted in accumulation of ϳ55-kDa N molecules (N55; Fig. 1a, lanes 7-10). N55 contains only the amino terminus half of the intracellular domain, requires about 2 h to accumulate, and is variably recovered after about 3 h of treatment. 3 To determine whether the responses observed in S2 cells are general N responses to treatments with Dl and Wg, the experiments were repeated with clone-8 cells that express N endogenously (Fig. 1b). The results showed that N in clone-8 cells responded similarly to N in S2 cells. Treatment with Dl resulted in accumulation of N120 and not N350, whereas treatment with Wg resulted in accumulation of N350 and not N120; both Dl and Wg treatments resulted in accumulation of N55 molecules (Fig. 1b). The difference in levels of N350 between Dl-treated and Wg-treated cells is obvious here after just 2 h of treatment. Clone-8 cells express a higher level of N55 molecules in the absence of any treatment, presumably because they also express Dl endogenously. 2 When Dl binds N in vivo, the ϳ120-kDa N intracellular domain is released into the cytoplasm (1,15,16,22,23). To determine whether the N120 in our in vitro experiments with Dl also accumulated in the cytoplasm, S2-N cells were fractionated and analyzed following treatments with Dl and Wg. Following treatment with Dl, N120 molecules accumulated in the cytoplasmic fraction (Fig. 1c). In contrast, N350 molecules accumulated in the membrane fraction following treatment with Wg (Fig. 1c). N55 molecules are not consistently detected in these experiments as they are very unstable in this fractionation and extraction procedure (not shown). We do not know whether the N120 molecules that accumulate in the cytoplasm in response to Dl are the same as those present in the membranes (see Fig. 1c) or whether they are different molecules migrating in the same region of the gel. Membrane-tethered N intracellular domain (N intra ), untethered N intra , and N120 migrate alongside each other in these gels. 2 N120 molecules associated with the membranes or with the cytoplasm are probably the membrane-tethered or released N intracellular domain, respectively. Accumulation of N350 molecules in response to Wg is likely to be in the intracellular membranes associated with production of the heterodimeric cell surface receptor (see Refs. 24 -26). N55 is derived from N350 upon activation of Notch signaling by a ligand. 3 DISCUSSION In vivo, the complete N intracellular domain (ϳ120 kDa) is released from the plasma membrane in response to Dl. This domain translocates to the nucleus with Su(H) and activates expression of target genes (13, 15, 16, 22, 23). In our experiments, Dl treatment results in accumulation of ϳ120-kDa N intracellular domain molecules (N120) in the cytoplasm. 
N120 2 and 7 and 8) whereas S2-N cells treated with Wg accumulated the co-linear N350 and N55 molecules (lanes 5 and 6, 9 and 10, and 11-13). N120 molecules did not accumulate in S2-N ⌬EGF1-18 cells treated with Dl (lanes 3 and 4). Co-linear molecules also accumulated in S2-N ⌬EGF1-18 cells treated with Wg (lanes 17 and 18) but did not accumulate in N ⌬EGF19 -36 cells (lanes 15 and 16). S2-N cells treated with Dl was used in lane 14 for alignment of lanes 14 -18 with the other lanes. Lanes 1-13 can be aligned using minor N bands. Lanes 14 -16 and lanes 17 and 18 are from the same gel exposed to autoradiographic film for different periods due to different levels of N expression. The molecules marked by an asterisk are variably produced in these experiments. The blots containing lanes 11-18 were reprobed with an antiheat shock protein 70 antibody (Sigma) to show that the same amount of total proteins is present in these lanes. b, clone-8 cells show similar responses to treatments with Dl and Wg. c, N120 molecules accumulate in the cytoplasmic fraction whereas N350 molecules accumulate in the membrane fraction. Cells in b and c were treated for 2 h. Cells: N, N ⌬EGF1-18 and N ⌬EGF19 -36 ϭ S2 cells expressing these molecules; S2 ϭ untransfected S2 cells; Dl ϭ S2-Dl cells; Wg: Ϫ ϭ media conditioned by growth of S2 cells; ϩ ϭ media conditioned by growth of S2-Wg cells. Delta, Wingless, and Different Notch Molecules in Cells 9100 in our experiments and the ϳ120-kDa in vivo molecule described by others are likely the same molecules. These molecules themselves act as activators of genes responsive to Dl. In numerous experiments, treatment of S2-N or clone-8 cells with Wg never resulted in accumulation of N120. Thus, it appears that N120 is not the activator of genes responsive to Wg. The co-linear N350 molecules accumulate to higher levels in Wg-treated cells when compared with both untreated and Dltreated cells. Co-linear N molecules are proposed to be cut into separate extracellular and intracellular fragments that are noncovalently linked to produce the heterodimeric cell surface receptor (24 -26). N350 accumulates in Wg-treated cells possibly because they are converted into the heterodimeric cell surface receptors at a slower rate compared with the rate in untreated S2-N cells and Dl-treated S2-N cells. This slower rate may be due to non-processing of the intracellular domain of Wg-bound N receptors and availability of limited receptor sites in the plasma membrane. Because the N intracellular domain is barely detectable in the cytoplasmic fraction of Wgtreated cells and most remain associated with the membranes (Fig. 1c), genes responsive to Wg are likely to be activated by molecules that interact with the N intracellular domain rather than the N intracellular domain itself. N55 is unlikely to be involved in activation of Dl-or Wg-responsive genes because it is produced by both ligands. N55 is produced in response to signaling by the full-length N and is proposed to be the intracellular domain of the truncated N receptor found enriched in tissues developing after signaling by the full-length N. 3 N is a cell surface receptor, the activities of which are regulated by ligands binding its extracellular domain. Dl and Wg bind two different regions: Dl binds EGF-like repeats 11 and 12 (21), whereas Wg binds more than one site in the EGF-like repeats 19 -36 region (7). We propose that the N receptor is a "switch" for activation of different signaling pathways during development (Fig. 2). 
Dl binds the EGF-like repeats 11-12 region to shunt the N120⅐Su(H) complex into the nucleus for turning on the expression of Dl-related genes. Wg binds the EGF-like repeats 19 -36 region to send a transcriptional activator to the nucleus for turning on the expression of Wg-related genes. Our results also suggest that the set of molecules involved in transducing N intracellular signals in response to Dl is likely to be different from the set that transduces N intracellular signals in response to Wg. We have identified what may be the initial differences between these two different N intracellular signaling pathways, the ones likely to set in motion different intracellular events. Starting with these differences, it should be possible in the future to identify the molecules that are involved in each N intracellular signaling pathway. This will enable integration and a better understanding of the functions of N, Dl, and Wg during Drosophila development.
The role of $N^*(2120)$ nucleon resonance in $K\Lambda(1520)$ photon and hadronic productions The associate $K\Lambda(1520)$ photon and hadronic production in the $\gamma p \to K^+\Lambda(1520)$, $p p \to p K^+ \Lambda(1520)$ and $\pi^- p \to K^0 \Lambda(1520)$ reactions are investigated within the effective Lagrangian approach and the isobar model. We are interested in the contribution from the $N^*(2120)$ (previously called $N^*(2080)$) resonance, which has a significant coupling to the $K\Lambda(1520)$ channel. The theoretical results show that the current experimental data for the $\gamma p \to K^+\Lambda(1520)$ reaction favor the existence of the $N^*(2120)$ resonance, and that these measurements can be used to further constrain its properties. We present results, including the $N^*(2120)$ contribution, for total cross sections of the $\gamma p \to K^+\Lambda(1520)$, $\pi^- p \to K^0 \Lambda(1520)$, and $p p \to p K^+ \Lambda(1520)$ reactions. For this latter one, we also calculate invariant mass and Dalitz plot distributions. Introduction The investigation of the baryon spectrum and the baryon couplings from experimental data are two of the most important issues in hadronic physics and they are attracting much attention. Both on the experimental and theoretical sides, the nucleon excited states below 2.0 GeV have been extensively studied 1 . However, the current information for the properties of states around or above 2.0 GeV is scarce 1 . On the other hand in this region of energies, many theoretical approaches (constituent quark 2 and chiral unitary 3,4,5,6,7 ) predict predicted missing N * states, which have not been so far observed. Hence, the study of the possible role played by the 2.0 GeV region nucleon resonances in the available accurate data is timely and could shed light into the complicated dynamics that governs the high excited nucleon spectrum. The associate KΛ(1520) photon and hadronic production reactions might be adequate to study the N * resonances around 2.0 GeV, as long as they have sig-nificant couplings to the KΛ(1520) pair. This is because the KΛ(1520) is a pure isospin 1/2 channel and the threshold is about 2.0 GeV. Besides, these reactions require the creation of anss quark pair. Thus, a thorough and dedicated study of the strangeness production mechanism in these reactions has the potential to gain a deeper understanding of the interaction among strange hadrons and also on the nature of the nucleon resonances. Recently, there have been several measurements for the γp → K + Λ(1520) reaction 8,9,10,11 . These data suggest that there is a sizeable contribution of total and differential cross sections from the nucleon resonances with masses around 2.1 GeV. On the theoretical side, in addition to the contributions from K and K * exchange in the t−channel and the contact term, the contributions from the nucleon states, including the N * (2120), in the s−channel 12,13,14,15,16 and the Λ(1115) pole in the u−channel have been studied 16 . The theoretical results show that when the contributions from the N * (2120) resonance and the Λ(1115) are taken into account, the current experimental data 8,9,11 can be well described. Thus, it is becoming clear that the current experimental data for Λ(1520) photoproduction favor the existence of the N * (2120) resonance, and that these measurements can be used to further constrain its properties. 
On the other hand, based on the results for the γp → K+Λ(1520) reaction, the KΛ(1520) production in the pp → pK+Λ(1520) and π−p → K0Λ(1520) hadronic processes has also been studied, paying special attention to the contributions from the N*(2120) resonance 17. In the present work, we will review the main results from these theoretical studies.

2. Study on the γp → K+Λ(1520) reaction

For the γp → K+Λ(1520) reaction, the differential cross section dσ/d(cos θ_c.m.) in the center-of-mass (c.m.) frame, for an unpolarized photon beam, is expressed in terms of the photon and K+ meson c.m. three-momenta and of θ_c.m., the K+ polar scattering angle. The invariant scattering amplitudes T_i are built from u_µ and u, the dimensionless Rarita-Schwinger and Dirac spinors for the final Λ(1520) and the initial proton, respectively, and from ε_ν(k_1, λ), the photon polarization vector. Besides, s_p and s_Λ* are the baryon polarization variables. The sub-index i stands for the contact, t-channel K− exchange, s-channel nucleon pole and N* resonance terms (depicted in Fig. 1 of Ref. 13) and the u-channel Λ(1115) contribution (see Fig. 2 of Ref. 16).

In Fig. 1, the theoretical calculations of the differential cross section dσ/d(cos θ_c.m.) as a function of the photon beam energy E_γ are shown. These predictions are also compared to the experimental data taken from the LEPS collaboration 9. The contributions from the different mechanisms of the model are shown separately. We see that the bump structure in the differential cross section could be fairly well described thanks to a significant contribution from the N*(2120). It is worth mentioning that in this calculation the contribution from the u-channel Λ(1115) pole has been neglected, since its contribution is small at forward K+ angles.

Next, in Fig. 2, we show theoretical results for differential cross sections at large K+ angles and for the total cross section. Here, we also compare our predictions with the experimental data from Refs. 8,11. In this case, we pay attention not only to the contribution from the s-channel N*(2120) resonance but also to that from the u-channel Λ pole, since the u-channel contribution at backward angles could be important. As can be seen in Fig. 2, the theoretical calculation provides a fair description of these backward K+ angular data thanks to the contribution from the Λ pole term in the u-channel. Furthermore, for the total cross section, due to an important contribution from the photo-excitation of the N*(2120) resonance and its subsequent decay into a Λ(1520)K+ pair, the theoretical results describe the CLAS data 11 very well (see right panel of Fig. 2). This mechanism is also important for the bump structure in the LEPS differential cross section at forward K+ angles discussed in Fig. 1. Thus, one can definitely take advantage of the important role played by this resonant mechanism in the LEPS and CLAS data to better constrain some of the N*(2120) properties, starting from its mere existence.
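The γp → K+Λ(1520) and π−p → K0Λ(1520) cross sections used above and in the following section are both of the standard two-body c.m. form. As a generic sketch only (the exact prefactor depends on the spinor-normalization conventions adopted in the original references), for a 2 → 2 process one has

$$\frac{d\sigma}{d\cos\theta_{\rm c.m.}}=\frac{1}{32\pi s}\,\frac{|\vec{p}^{\;\rm c.m.}_{\rm out}|}{|\vec{p}^{\;\rm c.m.}_{\rm in}|}\;\overline{\sum_{\rm spins}}\,|T|^{2},$$

where s is the invariant mass squared of the reaction and the bar denotes the average over initial and the sum over final polarizations.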
3. Study on the π−p → K0Λ(1520) and pp → pK+Λ(1520) reactions

Since the N*(2120) resonance played an important role in the γp → K+Λ(1520) reaction as discussed above, it may also have important contributions to the π−p → K0Λ(1520) and pp → pK+Λ(1520) reactions, which have also been studied in Ref. 17. For the π−p → K0Λ(1520) reaction, the differential cross section in the c.m. frame takes the same generic two-body form, where θ now denotes the angle of the outgoing K0 relative to the beam direction in the c.m. frame, while p_1^c.m. and p_3^c.m. are the three-momenta of the initial π− and the final K0 mesons, respectively. The total invariant scattering amplitude T receives contributions from the s-channel nucleon pole, t-channel K* exchange, u-channel Σ+ and s-channel N*(2120) terms.

With the parameters for the N*(2120)KΛ(1520) strong couplings that were obtained from the γp → K+Λ(1520) reaction, the role of the N*(2120) resonance in the π−p → K0Λ(1520) reaction has been investigated. Theoretical results for the total π−p → K0Λ(1520) cross section are shown in Fig. 3 and compared with the data taken from Ref. 18. The solid lines represent the full results, while the contributions from the s-channel nucleon pole, t-channel K* exchange, u-channel Σ+ and s-channel N*(2120) terms are shown by the dashed, dotted, dot-dashed, and dash-dot-dotted lines, respectively. We can describe the experimental data for the total cross section quite well, with the s-channel nucleon pole, the N*(2120) resonance, and the u-channel Σ+ exchange giving the dominant contributions below √s = 2.4 GeV. The t-channel K* exchange diagram gives a minor contribution.

For the pp → pK+Λ(1520) reaction, the total cross section versus the beam momentum (p_lab) of the proton is calculated by using a Monte Carlo multi-particle phase space integration program. The results for beam momenta p_lab from just above the production threshold of 3.59 GeV up to 5.0 GeV are shown in Fig. 4. The dashed, dotted, and dash-dotted lines stand for the contributions from the nucleon pole, the Σ+ pole, and the N*(2120) resonance, respectively. The total contribution is shown by the solid line. From Fig. 4, we see that the contribution from the u-channel Σ+ exchange is dominant very close to threshold but, as the beam energy increases, the contributions from the s-channel nucleon pole and the N*(2120) resonance turn out to be very important. It is worth noting that our prediction for the total pp → pK+Λ(1520) cross section at p_lab = 3.65 GeV is 0.01 µb, which is 20 times smaller than the experimental upper limit of 0.2 µb measured by the COSY-ANKE Collaboration 19. This shows that our model predictions are consistent with the experimental results. Moreover, the total cross section of the pp → pK+Λ(1520) reaction has also been measured with HADES 20 at GSI for a kinetic energy T_p = 3.5 GeV (corresponding to p_lab = 4.34 GeV). The result is 5.6 ± 1.1 ± 0.4 (+1.1/−1.6) µb; as shown in Fig. 4, this is to be compared with our theoretical result of 11.5 µb. However, if we modify the cut-off parameters Λ_π and Λ*_π from 1.3 GeV to 1.0 GeV, we get σ = 5.45 µb, which would be in agreement with the experimental data. Since it does not make sense to fit only one data point, we keep Λ_π = Λ*_π = 1.3 GeV as used in many previous works 21. We should also mention that, in the present calculation, we did not include the Λ(1520)p final-state interaction (FSI), which can increase the results even by a factor of 10 in the very near threshold region, similarly to the important role played by the Λp FSI in the pp → pK+Λ reaction 22. We ignore this effect because there are no experimental data on this reaction and also very scarce information about the Λ(1520)p FSI.
Furthermore, the corresponding momentum distributions of the final proton and K + meson, the KΛ(1520) invariant mass spectrum, and also the Dalitz Plot for the pp → pK + Λ(1520) reaction at beam momentum p lab = 3.67 GeV, which is accessible for DISTO Collaboration 24 , are calculated and shown in Fig. 5(a), Fig. 5(b), Fig. 5(c), and Fig. 5(d), respectively. The dashed lines are just phase space distributions, while, the solid lines are full results from our model. From Fig. 5, we see that even at p lab = 3.67 GeV, there is a clear bump in the KΛ(1520) invariant mass distribution, which is produced by the contribution of the N * (2120) resonance. Summary We have reviewed the role of N * (2120) resonance in the associate KΛ(1520) photon and hadronic productions at low energies within an effective Lagrangian approach and the isobar model. In addition to the contact, t−channelK exchange, and s−channel nucleon pole contributions, the contributions from the u−channel Λ(1115) hyperon pole term and N * (2120) resonance are also considered. The results show that when the contributions from the N * (2120) resonance and the Λ(1115) are taken into account, both the new CLAS 11 and the previous LEPS data 8,9 for the γp → K + Λ(1520) reaction can be simultaneously described. Actually, we find an overall good description of the data, both at forward and backward K + angles, and for the whole range of measured γp invariant masses. The contribution of the u−channel Λ(1115) pole term produces an enhancement at backward angles, and it becomes more and more relevant as the photon energy increases, becoming essential above W ≥ 2.35 GeV and cos θ c.m. ≤ −0.5. On the other hand, the CLAS data, clearly support the existence of a spin-parity J P = 3/2 − nucleon resonance with a mass around 2.1 GeV, a width of at least 200 MeV and a large partial decay width into Λ(1520)K. These characteristics could be easily accommodated within the constituent quark model results of Simon Capstick and W. Roberts of Ref. 23 . Such resonance might be identified with the two stars PDG N * (2120) state, which would confirm previous claims 13,14 from the analysis of the bump structure in the LEPS differential cross section at forward K + angles discussed in Fig. 1. On the other hand, motivated by the study of the γp → K + Λ(1520) reaction, the role of N * (2120) has also been investigated in the π − p → K 0 Λ(1520) and pp → pK + Λ(1520) reactions. The results show that the contribution from the u−channel Σ + exchange is dominant in the very near threshold region, but, when the beam energy increases, the contributions from s−channel nucleon pole and N * (2120) resonance turn to be very important. Furthermore, the invariant mass distribution and the Dalitz Plot are also predicted which can be tested by the future experiments.
Influence of the Thin-Film Ag Electrode Deposition Thickness on the Current Characteristics of a CVD Diamond Radiation Detector Background: We investigated the current characteristics of a thin-film Ag electrode on a chemical vapor deposition (CVD) diamond. The CVD diamond is widely recognized as a radiation detection material because of its high tolerance against high radiation, stable response to various dose rates, and good sensitivity. Additionally, thin-film Ag has been widely used as an electrode with high electrical conductivity. Introduction Radiation is a flow of energy that has the ability to directly or indirectly ionize air from particle lines or electromagnetic waves. The types of radiation include alpha radiation, beta radiation, gamma radiation, neutron radiation, ultraviolet radiation, and X-rays. Beginning with Kozlov et al. [1] in 1966, the unique properties of diamond have been explored to detect and characterize radiation. At present, high-quality single crystal chemical vapor deposition (CVD) diamond is used as a radiation detector material because it is easily compacted, which facilitates ideal positioning, and exhibits resistance to radioactivity without generating a secondary fission [2]. CVD diamond has a wide bandgap of 5.5 eV, high resistivity, high thermal conductivity, high electron and hole mobilities, which enable faster charge collection, and radiation robustness [3]. Based on such properties, CVD diamond can be considered as an interesting candidate for use as a photodetector device that is able to detect extreme-to JRPR deep-UV radiation while also being transparent in the visible wavelength. Moreover, CVD diamond-based electronics can operate in high-temperature and harsh environments, e.g., under the condition of ionizing radiation, where the operation of conventional semiconductor devices is limited [4][5][6][7][8]. These unique characteristics have been realized in devices capable of replacing silicon-based devices, such as thermal and fast neutron detectors in nuclear fusion reactors [9]. Radiation detectors are also required in such applications, and CVD diamond-based devices can offer excellent performance with great optical efficiency and robustness in extreme environments. The thin-film electrode deposition of a CVD diamond radiation detector is the process of electrically connecting each element formed through various processes in order to complete the circuit of the detector [10]. Ag is used as the thinfilm electrode of a CVD diamond radiation detector because it possesses high electrical conductivity and the ability to establish good ohmic contact with low resistivity. Furthermore, since the conditions for hole and electron injections are better than those for other metals, the charge mobility is improved. In addition, when the bias voltage is increased, the leakage current remains constant and stable [11]. In order to confirm this, we conducted a comparison between the current leaked due to an increase in the bias voltage of the deposited Ag and that of Pt metal on CVD diamond. For example, to prepare good injecting contacts, several metals were used by Kozlov et al 1) ., for hole injection they used Ag, Au, Pt, or C deposition, aluminium or boron implantation, for electron injection P, Li, or C. Consequently, it was found that the deposited Ag sample had a lower leakage current than the Pt sample that steadily increased with increasing bias voltage. 
The results also demonstrated that Ag metallization is more suitable for a CVD diamond radiation detector than any other metal electrode. Additionally, thin-film Ag electrodes have been reported to retain high electrical conductivity, and have good charge mobility and low resistivity even with a film thickness below 100 nm [12,13]. In this study, the following experiment and analysis were carried out as based on the electrical characteristics of the Ag electrode metallized to a thickness of 100 nm, as was mentioned above. Leakage current changes under a bias voltage, and the photo-current under UV irradiation, were measured for various film thicknesses due to different thin-film Ag electrode deposition times. We also analyzed the surface and current characteristics of the deposited thin-film Ag electrode to achieve a highly efficient and sensitive CVD diamond radiation detector [14]. Materials and Methods The photodetector device used in this study is an Element Six product, which is a single-crystal high-purity CVD diamond with dimensions of 3.0 × 3.0 × 0.5 mm 3 . Ag was deposited onto the CVD diamond substrate by performing AC RF sputtering for durations of 20, 40, 60, and 80 seconds at 100 W. Silver paste was used to connect the MI (mineral-insulated) cable for the signal line to the deposited thin-film Ag electrode on the CVD diamond, as is shown in Figure 1. The MI cable is a mineral-insulated cable that demonstrates excellent long-term stability and enables high precision at high temperatures. Because of this characteristic, the MI cable JRPR can be used in extreme environments, such as in a nuclear reactor. The CVD radiation detector fabricated from this material was placed in a fixed jig to ensure that accurate leakage and photo-current measurements were obtained. To monitor the current characteristics, the MI cable was connected to measure the leakage current and photo-current with the Keithley 6487 pico-ammeter (Tektronix, Inc., Beaverton, OR). The leakage current measurement was performed under the conditions of an applied 50-V bias [15,16], and the photo-current was measured under 254-nm UV irradiation. This specific wavelength was chosen because of the characteristic response of the CVD diamond within a wavelength range of 200 to 300 nm [17,18]. To study the thickness and surface roughness of the de-posited thin-film Ag electrode, the thin-film Ag electrode was deposited onto a glass substrate over a period of 20, 40, 60, or 80 seconds, and was analyzed by using a scanning electron microscope (SEM) to confirm the cross-sectional thickness of the deposited electrode. Also, we analyzed the surface roughness of the electrode deposited onto the CVD diamond by using an atomic force microscope (AFM). Results and Discussion The cross-sectional thicknesses of the 20-, 40-, 60-, and 80-second deposited thin-film Ag electrodes were approximately 50, 98, 152, and 257 nm, respectively, as is shown in Figure 2. Note that the deposited thin-film thickness values JRPR shown in the SEM images of Figure 2 are averaged values. Figure 3 shows the surface roughness of the deposited thin-film Ag electrodes according to the deposition time. The surface roughness values (Rq) for the bare CVD diamond substrate with 20-, 40-, 60-, and 80-second deposited thinfilm Ag electrodes were 0.69, 1.78, 2.63, 4.55, and 1.63 nm, respectively. These results demonstrate that the surface roughness of the bare diamond substrate was low, and that a thicker Ag electrode corresponded to increased surface roughness. 
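As a rough arithmetic check on the values reported above (not part of the paper's analysis), the implied average deposition rates can be computed from the quoted sputtering times and SEM thicknesses:

```python
# Implied average sputter deposition rates from the reported deposition times
# and SEM cross-sectional thicknesses quoted in the text above.
times_s = [20, 40, 60, 80]
thickness_nm = [50, 98, 152, 257]
rates_nm_per_s = [t / s for t, s in zip(thickness_nm, times_s)]
print(rates_nm_per_s)  # ~[2.50, 2.45, 2.53, 3.21]: near-linear up to 60 s, faster at 80 s
```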
By comparing the three-dimensional (3D) AFM images, we observed that a 40-second deposition of the thin-film Ag electrode resulted in uniform deposition, i.e., constant surface roughness. However, for the 80-second deposited film, it was observed that the roughness value of the electrode decreased with increased film thickness. For the 20-second deposited film, the leakage current was 0.09 nA, and the photo-current was an unstable measurement of 39.85 nA, as is shown in Figure 4A. It was confirmed that the leakage current was small and the photo-current was large but unstable because of insufficient electrode thickness. When the deposition time was 40 seconds, the leakage current was 0.04 nA, and the photo-current was linearly detected as 27.77 nA, as is illustrated in Figure 4B. For an approximately 98-nm thick electrode, it was confirmed that leakage current was small; moreover, the thickness of the film resulted in a stable photo-current that was generated by the UV radiation. The 60-second deposited film resulted in a leakage current of 0.22 nA and a photo-current that was detected as 26.28 nA, as is shown in Figure 4C. When the deposition time was 80 seconds, the leakage current was 0.26 nA, and the photo-current was detected as 16.94 nA ( Figure 4D). The results are summarized in Table 1, which shows that the leakage current increased and the photo-current output signal was reduced as the deposition time and thickness of the thin-film Ag electrode increased. The study results show that the deposition time of a thinfilm Ag electrode strongly influences the surface conditions of the electrode, which thus affect the current characteristics of a radiation detector under UV irradiation. Additionally, it was found that, if the surface conditions of the UV-irradiated CVD diamond detector are not optimal, the detection signal is deteriorated. Thus, through analysis of the results, we have confirmed that thicker deposition and surface uniformity of the thin-film Ag electrode positively influence the leakage current and photo-current of a CVD radiation detector because it facilitates the excellent mobility of holes and electrons. As the thickness of the thin-film Ag electrode was reduced, the leakage current was observed to decrease, while the detected photo-current signal output became unstable. Conversely, an increase in electrode film thickness was observed to correspond to increased leakage current and decreased output of the photo-current signal. Additionally, the leakage current was found to be very low, and the photo-current output signal was observed as stable for a deposited film thickness of 98 nm; at this thickness, a uniform and constant surface roughness of the deposited thin-film Ag electrode helped to improve the current characteristics of the CVD diamond radiation detector. These results are in agreement with previous reports that state that high electrical conductivity and low resistivity occur when the thickness of the deposited thin-film Ag is approximately 100 nm [13]. Conclusion Through this study, it was found that the thickness of the thin-film Ag electrode implemented in a radiation detector significantly influences the output signal of the photo-current. Consequently, we have determined the optimal conditions for thin-film Ag electrodes purposed for radiation detectors. We plan to use the findings presented here to develop a stable and efficient detector that can be used in a nuclear reactor. 
Acknowledgments This work was supported by the National Research Foundation Grant (NRF-2013M2A8A1035822) from Ministry of
RELATIONSHIP BETWEEN MONEY VELOCITY AND INFLATION TO INCREASING STOCK INVESTMENT RETURN: EFFECTIVE STRATEGY BY JAKARTA AUTOMATED TRADING SYSTEM NEXT GENERATION (JATS-NG) PLATFORM

This study aims to formulate strategies to avoid macroeconomic risks when investing in the Indonesian capital market, so that more people will be able to trade the financial instruments listed on the Indonesian capital market using an integrated online transaction system. This research is quantitative descriptive, with a research population of 20 State-Owned Enterprises (BUMN) listed on the Indonesia Stock Exchange. The data are time series covering 2016 to 2020, collected through a documentation study of published annual financial statements, so that the target population for the research sample is 40 annual financial report data points (8 companies x 5 years). The data were analyzed using Moderated Regression Analysis (MRA) with the Smart PLS statistical software. The outputs of this research are publications in reputable international journals and international proceedings and an ISBN-certified reference book of research results at TKT level 3. Investing in the capital market and money market sectors remains highly attractive to investors, with standardized transaction mechanisms on stock exchanges around the world. The capital market is a business entity that brings together parties who need long-term funds with owners of capital. The parties that need capital are companies or issuers, while the owners of capital are investors (the public). Shares and bonds, apart from being bought, can also be sold. The specific purpose of this study is to formulate a strategy to minimize the impact of the velocity of money and other macroeconomic factors that can affect the rate of return on investment in the capital market, namely inflation, the Rupiah/US Dollar exchange rate, and currency turnover, which have significantly negative effects on stock returns. When the Rupiah exchange rate against the US Dollar weakens, it has a negative effect on the capital market because the market becomes unattractive, as investors tend to prefer to hold money in Dollars (Yuswandy Yoedy, 2013). The research stage will be accompanied by public discussion activities and special training on the application of the Jakarta Automated Trading System Next Generation (JATS-NG) platform. The Focus Group Discussion will be attended by stakeholders from institutions related to the Indonesia Stock Exchange, securities companies, and investors, while the training on integrated online trading transactions will invite financial technology analysts. Departing from the problems and theories above, the urgency of this research lies in examining the effect of velocity of money and inflation on investment returns in the Indonesian capital market with an integrated online transaction system; through the Jakarta Automated Trading System Next Generation (JATS-NG), it is hoped that investment interest in the Indonesian capital market will grow and that investors can manage investment risk properly in order to generate maximum profits.

RESEARCH QUESTIONS AND HYPOTHESES

1. Does the Inflation Rate affect the return on stock investments?
Inflation is an event that describes a situation in which the prices of goods have increased and the value of the currency has weakened (Lubis, 2012). Inflation is also defined as a tendency for the prices of products to increase as a whole, so that there is a decrease in the purchasing power of money (Tandelilin, 2010). Based on these expert opinions, it can be concluded that inflation is a process of continuously increasing prices that causes a decrease in the value of the currency and in people's purchasing power. In line with Muhammad's research (2016), inflation is found to have a significant negative effect on stock returns in banking companies. Hypothesis 1: The inflation rate has a negative and significant effect on stock returns. The formulation of the inflation rate hypothesis in this study refers to several previous studies.

2. Does the Velocity of Money affect the return on stock investments?

Liu & Tsyvinski (2018) conducted a study on electronic banking in Finland and its effect on the velocity of money. The purpose of that study was to see the impact of advances in banking technology in Finland on velocity. Under what economists call the quantity theory of money, velocity is a significant driver of token prices.

Markowitz Portfolio Theory. A company's risk is more related to changes in the micro-level conditions of the securities-issuing company. Portfolio management states that a company's risk can be minimized by diversifying assets in a portfolio, as Harry Markowitz introduced in 1952 in his theory of investors' estimation of risk and return expectations by combining assets into an efficient, diversified portfolio.

Classical Theory of Money Demand. According to the classical economic view, the function of money is only as a medium of exchange. Therefore, the quantity of money demanded is proportional to the level of output or income: if the level of output increases, the quantity of money demanded will increase. Irving Fisher explained the theory of the value of money called the Transaction Velocity Theory, complementing David Ricardo's theory, which did not pay attention to the velocity of money circulation. Fisher argued that the money supply and the velocity of circulation of money against goods and services are very important factors in measuring the value of money.

Payment and Transaction System Concept. The payment system implemented is part of Bank Indonesia's task of maintaining rupiah stability, as mandated in Law No. 23 of 1999 concerning Bank Indonesia. In general, the payment system aims to encourage the national economy and increase economic activity through a more conducive business environment, and to strengthen the competitiveness and image of the national economy so as to encourage foreign investors to enter Indonesia.

Return Theory. According to Dimson, Marsh, and Staunton (2015), return consists of yield and capital gain (loss). Yield is the cash flow paid periodically to investment holders, while capital gain (loss) is the difference between the price of an investment at the time of purchase and the price at the time of sale.

Descriptive Statistics

Descriptive statistics provide a general description of the research objects that were sampled; explaining the data through descriptive statistics is expected to provide an initial picture of the problem being studied. The descriptive statistics focus on the maximum, minimum, average (mean), and standard deviation values.
The complete descriptive statistics are presented in Table 4.1. The observations made for velocity of money in this study number 40. The lowest value of velocity of money in this study is 0.052338 (5.2338%) and the highest value is 2.555820 (255.582%). The average value is 0.707842 (70.7842%) with a standard deviation of 0.959751 (95.9751%). The standard deviation is greater than the average value, which indicates high fluctuation of velocity of money in the sample of this study. The observations made for inflation in this study also number 40. The lowest value of inflation in this study is 1.680000 (168%) and the highest value is 3.610000 (361%). The average value of inflation is 2.888000 (288.8%) with a standard deviation of 0.651480 (65.148%). The standard deviation is smaller than the average value, which shows that a small inflation rate in the sample will affect this research.

Normality Test

Based on the histogram in Figure 4.1, the probability value of the Jarque-Bera test is 0.045133, which is below the standard error tolerance value (5%). Therefore, it can be concluded that the data in this study are not normally distributed.

Multicollinearity Test

Based on Table 4.3, the model is free from multicollinearity, as the output (correlation) between the independent variables in the regression is less than 0.8.

Autocorrelation Test

The autocorrelation test in Table 4.4 is based on the Durbin-Watson value in this study, which is 1.804662. This value lies between the tolerance values of the autocorrelation test, namely -2 and 2. Therefore, it can be concluded that this study is free from autocorrelation symptoms, meaning that in this research model there is no disturbance from correlation between the time periods used in each variable.

Partial Test

Based on the test results using the Eviews 10 application, the t-count value for velocity of money is -0.594376 with a significance of 0.0567. The t-table value in this study, calculated with df = n - k, is 2.02439 at a significance level of 0.05. It can therefore be seen that velocity of money has a negative and significant effect on returns, as indicated by the t-count (-0.594376) < t-table (2.02439). Based on the same test results, the t-count value of inflation is 1.328703 with a significance of 0.0940. The t-table value, calculated with df = n - k, is 2.02439 at a significance of 0.05. It can thus be seen that inflation has a negative and insignificant effect on yields, as indicated by the t-count (1.328703) < t-table (2.02439) and a significance value of 0.0940 > 0.05. It can therefore be concluded that the inflation variable has a negative and insignificant effect on returns.

Simultaneous Test

The results showed that the variables velocity of money and inflation simultaneously affect yields. This is based on an F-count of 10.22377 with a significance level of 0.00001, while the F-table value in this study, calculated with df = n - k, is 3.24 at a significance of 0.05. Because the F-count of 10.22377 is greater than the F-table value of 3.24 and the significance probability of the F-count, 0.00001, is less than 0.05, it can be concluded that, taken together, the independent variables velocity of money and inflation have a negative and significant effect on the dependent variable, namely yield.
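The hypothesis-test decision rules applied above can be restated compactly in code. The sketch below is purely illustrative: it plugs in the t-counts, F-count, critical values, and Durbin-Watson statistic reported in the text and prints the corresponding decisions; it is not the original Eviews analysis.

```python
# Illustrative decision rules using the statistics reported above (not the Eviews output itself).
t_table, f_table, alpha = 2.02439, 3.24, 0.05

tests = {
    "velocity of money (t-test)": {"stat": -0.594376, "p": 0.0567, "crit": t_table},
    "inflation (t-test)":         {"stat": 1.328703,  "p": 0.0940, "crit": t_table},
    "joint model (F-test)":       {"stat": 10.22377,  "p": 0.00001, "crit": f_table},
}

for name, t in tests.items():
    # Reject the null only if the statistic exceeds its critical value and p < alpha
    significant = abs(t["stat"]) > t["crit"] and t["p"] < alpha
    print(f"{name}: statistic={t['stat']}, p={t['p']}, "
          f"{'significant' if significant else 'not significant'} at alpha={alpha}")

dw = 1.804662
print("Durbin-Watson within (-2, 2):", -2 < dw < 2)  # no autocorrelation by the rule of thumb used here
```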
Differential Diagnosis of Frontotemporal Dementia, Alzheimer's Disease, and Normal Aging Using a Multi-Scale Multi-Type Feature Generative Adversarial Deep Neural Network on Structural Magnetic Resonance Images

Background: Alzheimer's disease and Frontotemporal dementia are the first and third most common forms of dementia. Due to their similar clinical symptoms, they are easily misdiagnosed as each other even with sophisticated clinical guidelines. For disease-specific intervention and treatment, it is essential to develop a computer-aided system to improve the accuracy of their differential diagnosis. Recent advances in deep learning have delivered some of the best performance for medical image recognition tasks. However, its application to the differential diagnosis of AD and FTD pathology has not been explored. Approach: In this study, we proposed a novel deep learning based framework to distinguish between brain images of normal aging individuals and subjects with AD and FTD. Specifically, we combined multi-scale and multi-type MRI-based image features with a Generative Adversarial Network data augmentation technique to improve the differential diagnosis accuracy. Results: Each of the multi-scale, multi-type, and data augmentation methods improved the ability for differential diagnosis for both AD and FTD. A 10-fold cross validation experiment performed on a large sample of 1,954 images using the proposed framework achieved a high overall accuracy of 88.28%. Conclusions: The salient contributions of this study are three-fold: (1) our experiments demonstrate that the combination of multiple structural features extracted at different scales with our proposed deep neural network yields superior performance compared with individual features; (2) we show that the use of a Generative Adversarial Network for data augmentation could further improve the discriminant ability of the network regarding challenging tasks such as differentiating dementia sub-types; (3) and finally, we show that an ensemble classifier strategy could make the network more robust and stable.

INTRODUCTION

As the first and third most common forms of dementia, Alzheimer's disease (AD) (Association et al., 2011) and Frontotemporal dementia (FTD) (Bang et al., 2015) are often mistaken for each other. This is due to the similarities in their clinical presentation, impaired cognitive domains, brain atrophy, and progressive alterations in language ability, behavior, and personality (Neary et al., 2005; Alladi et al., 2007; Womack et al., 2011). Despite significant efforts spent on establishing sophisticated clinical guidelines for their differential diagnosis, the diagnostic accuracy is still not satisfactory. Specifically, when diagnosing with the NINCDS-ADRDA criteria (Neary et al., 1998), the sensitivity of distinguishing AD subjects from FTD patients can reach as high as 93%; however, the specificity for FTD recognition is only 23%, as most patients with FTD also fulfill the NINCDS-ADRDA criteria for AD (Varma et al., 1999). With the necessity of applying different symptomatic interventions and treatments for the various dementia subtypes in clinical practice (Pasquier, 2005), it is essential to develop a computer-aided diagnosis system to improve the accuracy of differential diagnosis between these two dementias.
Patterns of brain atrophy observed in T1-weighted Magnetic Resonance Imaging (MRI) have been successfully used to capture structural changes in the human brain (Du et al., 2007; Davatzikos et al., 2011), specifically for developing computational systems that can identify the type of dementia pathology in the brain. Computer-aided diagnosis systems based on MRI have been built for both AD and FTD (Suk et al., 2014; Jiskoot et al., 2018). In addition to binary classification against normal aging, T1-weighted MRIs have also been used for the differential diagnosis of AD and FTD by differentiating the atrophy patterns of these two types of dementia, such as the affected regions and rate of change (Raamana et al., 2014). Various structural biomarkers have been explored to distinguish between AD and FTD, such as gray matter (GM) volume loss (Rabinovici et al., 2008), cortical thinning (Du et al., 2007), high-dimensional features based on the GM and white matter (WM) volume distribution of the whole brain (Davatzikos et al., 2008), as well as atrophy and shape deformity of individual structures (Looi et al., 2010). Most previous studies on computer-aided diagnosis systems for dementia classification emphasized binary classification tasks, e.g., NC vs. FTD, NC vs. AD, or FTD vs. AD, with few direct multi-class dementia classification methods in the literature. Raamana et al. compared multiple structural features, such as volumes, Laplacian invariants, and surface displacements of the hippocampus and lateral ventricle, for the multi-class classification among NC, AD, and FTD subjects (Raamana et al., 2014). With PCA and a multi-class support vector machine (SVM) classifier, they achieved a 0.79 AUC. Tong et al. applied the RUSBoost algorithm (Seiffert et al., 2010) for the multi-class classification of subjective memory complaints, AD, frontotemporal lobe degeneration (FTLD), dementia with Lewy bodies, and vascular dementia (Tong et al., 2017). With volume and grading features as well as CSF measures and age, they achieved 75.2% overall accuracy with 0.8 sensitivity for AD and 0.63 sensitivity for FTLD. Recently, deep learning has been delivering astounding performance for many recognition tasks (Hinton and Salakhutdinov, 2006; Krizhevsky et al., 2012; Simonyan and Zisserman, 2014). Its application to computer-aided diagnosis has also drawn attention, and it has outperformed traditional classification methods for many clinical recognition tasks (Suk et al., 2014; Ronneberger et al., 2015; Litjens et al., 2017). However, to the best of our knowledge, no deep-learning-based approaches have yet been developed and published for the differential diagnosis of AD and FTD. In this study, we proposed a novel framework to combine multi-type and multi-scale image-based features from structural MRI scans. Local volume size and surface thickness features were extracted by segmenting the T1-weighted MRI images into patches of hierarchical size based on brain anatomy in a coarse-to-fine manner. A multi-scale and multi-type feature deep neural network (MMDNN) was developed to learn the latent representation across the individual features, along with a Generative Adversarial Network (GAN) technique for data augmentation and an ensemble classifier strategy to increase the robustness of the framework. A comprehensive validation experiment with 1,954 images demonstrates the superior performance of the proposed framework, with 88.28% accuracy.
METHODS

In the proposed framework, the original raw structural MRI images were first segmented into different anatomical structure regions of interest (ROIs) with FreeSurfer. Each ROI was further sub-clustered into smaller patches of super-pixels at multiple scales. The volume and cortical thickness at each level of patch were extracted as multi-scale multi-type features. Finally, a Generative Adversarial Network with multi-type and multi-scale features was trained to achieve differential diagnosis, identifying patients with AD and FTD from NC subjects. The Frontotemporal Lobar Degeneration Neuroimaging Initiative (FTLDNI), also referred to as NIFD, started in 2010 with the primary goals of identifying neuroimaging modalities and methods of analysis for tracking frontotemporal lobar degeneration (FTLD) and assessing the value of imaging vs. other biomarkers in diagnostic roles. More detailed information about FTLDNI can be found at 4rtni-ftldni.ini.usc.edu. Both the ADNI and FTLDNI databases contain longitudinal scans for each participant. Subjects whose diagnosis changed in any of their follow-up visits during the study period (i.e., MCI progressing to AD or reverting to NC) were excluded from the study to reduce the effect of potential misdiagnosis. A total of 1,954 structural MRI scans were included in this study, 1,114 of which were from the ADNI database and the remaining 840 from the NIFD database. Table 1 shows the demographic and clinical information of these subjects in both databases. The numbers in the brackets of the second row are the numbers of male and female subjects, while the number before each bracket is the total number of subjects belonging to that group. The numbers in the remaining three rows represent the mean and standard deviation of age, education, and MMSE, respectively.

Multi-Level Multi-Type Feature Extraction

For image recognition problems, convolutional neural networks (CNN) and their variants, such as VGG16 (Simonyan and Zisserman, 2014), ResNet (He et al., 2016), and Inception-ResNet (Szegedy et al., 2017), have achieved state-of-the-art performance in various tasks. However, those networks require a large number of labeled samples for their training. Especially with high-dimensional data, as used in this study (256 × 256 × 256 3D images), larger kernel sizes or more layers are necessary to learn the latent representation, resulting in a larger network that needs even more training samples. The dataset used in our study is considerably larger than those of many other studies in the neuroimaging context; it is, however, still relatively small in scale compared with most natural image recognition tasks. Therefore, to reduce the dimension of the input data and the size of the network, each MRI scan was segmented into small regions based on brain anatomy, which we denote as "patches" hereafter, and two types of primary structural features, volume size and cortical mantle thickness, were extracted for the differential diagnosis of NC, AD, and FTD. For MRI scan segmentation and volume size feature extraction, the following steps were applied: (1) structural ROI parcellation, (2) structure-wise patch cluster-based segmentation, and (3) feature extraction and normalization. Firstly, in the ROI segmentation step, the gray matter (both cortical and subcortical) of each T1 structural MRI image was segmented into 87 anatomical ROIs using FreeSurfer 5.3 (Dale AM, 1999).
For some ROIs, particularly larger ones such as the occipital cortex, the discriminant information for brain structural change could be localized to smaller focal locations within the ROI. Such localized differences could potentially provide important information to differentiate AD and FTD but could be lost when aggregating the features across the whole ROI. Therefore, each ROI was further subdivided into smaller patches in the second, patch parcellation step. Parcellation or subdivision of a FreeSurfer ROI was performed on a template MR image using a k-means clustering algorithm based on intensity similarity (Raamana et al., 2015). Following the k-means clustering step, a high-dimensional accurate non-rigid registration method, LDDMM (Beg et al., 2005), was applied to register each ROI of a target MRI to the corresponding ROI of the template. With the ROI-wise registration maps, the patch-wise segmentation of each template ROI was propagated back into the target space. Finally, in the feature extraction and normalization step, the volume of each patch was extracted as a primary feature for disease classification. The w-score, which represents the standardized residual of the chosen feature, was computed to remove the effect of covariates such as the field strength (1.5T or 3T), scanner type, scanning site, age, sex, and the size of the intra-cranial vault (ICV) of each individual (Ma et al., 2018; Popuri et al., 2020). The normalized features, as represented by the w-scores, were input into the classifier. The patch-wise cortical thickness features were extracted in a similar manner to the patch-wise volumetric features. The vertex coordinates in each of the 68 cortical ROIs were subdivided into smaller patches by grouping them with k-means clustering based on the pairwise Euclidean distance of their thicknesses in the template space (Raamana et al., 2015). The locally clustered cortical patches were then propagated back to each target space following the backward deformation field derived during the LDDMM non-rigid registration step (Beg et al., 2005). The average thickness of the mantle within each patch was computed as a feature, followed by w-score normalization (Ma et al., 2018; Popuri et al., 2020) to remove the confounding effect of covariates. To avoid losing discriminant information during data down-sampling, multiple scales of features were extracted in a coarse-to-fine manner. Each ROI was parcellated at three different scales of patch size: 500, 1,000, and 2,000 voxels per patch for the volume features and 500, 1,000, and 2,000 vertices per patch for the thickness features. Those sizes were predefined to retain enough detailed information while restraining the number of primary features with respect to the number of training samples to prevent overfitting. The subdivision of ROIs into these three scales resulted in a total of 1,488, 705, and 343 voxel patches for the gray matter volume feature, and a total of 527, 255, and 131 vertex patches for the cortical thickness feature, respectively. Together with the FreeSurfer ROIs providing volumes and thicknesses, this gives six feature sets containing 3,409 scalars that represent each brain MR image.
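As a concrete illustration of the w-score normalization described above, the sketch below fits a covariate model on the normal control group and converts a raw patch feature into a standardized residual. This is a generic re-implementation of the w-score idea under our own assumptions (ordinary least squares, a single feature, made-up covariates), not the exact procedure of Ma et al. (2018) or Popuri et al. (2020).

```python
import numpy as np

def fit_wscore_model(nc_features, nc_covariates):
    """Fit the normative model on NC subjects only.
    nc_features: (n,) raw values of one patch feature.
    nc_covariates: (n, p) covariates (e.g., age, sex, ICV, field strength)."""
    X = np.column_stack([np.ones(len(nc_features)), nc_covariates])  # add intercept
    beta, *_ = np.linalg.lstsq(X, nc_features, rcond=None)           # ordinary least squares
    resid_sd = np.std(nc_features - X @ beta, ddof=X.shape[1])       # SD of NC residuals
    return beta, resid_sd

def wscore(feature, covariates, beta, resid_sd):
    """Standardized residual of an individual's feature given their covariates."""
    x = np.concatenate([[1.0], covariates])
    return (feature - x @ beta) / resid_sd

# Toy usage with random data standing in for one volumetric patch feature
rng = np.random.default_rng(0)
nc_cov = rng.normal(size=(100, 4))
nc_feat = 2.0 + nc_cov @ np.array([0.5, -0.3, 0.1, 0.0]) + rng.normal(scale=0.2, size=100)
beta, sd = fit_wscore_model(nc_feat, nc_cov)
print(wscore(2.1, np.array([0.1, -0.2, 0.0, 0.3]), beta, sd))
```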
Deep Neural Network for Multi-Scale and Multi-Type Feature Combination

With the patch-wise volume size and surface thickness features extracted from the MRI images, a multi-scale and multi-type feature deep neural network (MMDNN) was constructed to learn the latent pattern from both types of features for the classification of NC, AD, and FTD pathology; this architecture achieved state-of-the-art binary classification of NC and AD subjects using both FDG-PET and MRI images in our previous study (Lu et al., 2018a). As displayed in Figure 1, the MMDNN consists of two stages with a total of seven Multilayer Perceptron (MLP) blocks. The first network stage consists of 6 MLP blocks, each corresponding to a single type of feature extracted at a single scale. These MLPs were trained independently in the first stage, and their outputs were concatenated as the input feature vector to train the final MLP block in the second stage. The parameters of the whole network were then fine-tuned together. For each image, the output was three probabilities, each corresponding to a subject group, i.e., NC, AD, and FTD, and the class with the highest probability was deemed the resulting classification. For each MLP, the number of units in each layer is displayed at its top left in Figure 1. If the dimension of the input feature is represented as N, the numbers of units in a single MLP were predefined as 3N, (3/4)N, and 50, to increase the chance of exploring a larger range of potential hidden correlations across different patches in the first layer and then gradually reduce the number of features in the following layers to avoid too many parameters (Lu et al., 2018a,b). To avoid overfitting, dropout layers (Srivastava et al., 2014) were added after each hidden layer. During the training stage, half of the units were randomly dropped to prevent complex co-adaptations on the training data as well as to reduce the amount of computation and improve the training speed. During the validation or testing stage, all units were retained to feed features to the next layer.

Data Augmentation With Generative Adversarial Network

In deep/machine learning, a common strategy to increase the number of training samples and prevent overfitting is data augmentation. Operations such as rotation, flipping, and zooming are commonly used for 2D image recognition. However, those operations can hardly be used on a 1D feature vector. GANs (Goodfellow et al., 2014) have emerged as a powerful tool to synthesize new data and have gained popularity in the generation of realistic natural images; they have also shown great potential as a data augmentation technique to synthesize image data with more variation and improve the generalizability of machine learning algorithms (Shi et al., 2018; Lata et al., 2019; Sandfort et al., 2019; Shao et al., 2019). Therefore, we investigated the possibility of applying a GAN for 1D structural brain feature augmentation to improve classification performance in this study. GANs consist of two parts, the Discriminator (D) and the Generator (G), as displayed in Figure 2. In the proposed framework, the MMDNN was used as the discriminator with an additional output channel for the recognition of data synthesized by the generator, denoted here as "fake," while the generator aimed to generate feature vectors that "fool" the Discriminator, i.e., are classified as NC, AD, or FTD by the discriminator. The input of the generator was a 1D random noise vector.
By finding the mapping from the random variables to the data distribution of interest, the generator outputs a feature vector with the same dimension as the real data samples. It is worth mentioning that the fourth output channel was only used during the optimization of the GAN. For each testing sample, only the output probabilities of the first three channels were used to determine which of the three groups a subject belongs to. To prevent potential problems due to vanishing gradients, the generator consists of two layers, a single hidden layer and an output layer. Both layers are fully connected layers with 512 and 3,449 units, respectively. The dimension of the random noise was set to 100, with each element following a normal distribution. The activation function for the first layer was a rectified linear unit (ReLU) to keep gradients from vanishing, while that for the second layer was a tanh function to squash the synthesized data into the same range as the real data.

Network Optimization

For optimization of the GAN, the loss function was defined as min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))], where x represents the input data and p_z(z) is the prior of the input noise variables. For the generator, −log(D(G(z))) was used instead of log(1 − D(G(z))) to avoid vanishing gradients and mode collapse. The E here stands for the weighted cross entropy function, which is defined as E = −(1/N) Σ_i Σ_j W_j 1[y_i = j] log h_j(x_i), where N is the number of input samples, j represents the class of samples, W_j stands for the weight of class j, computed as the inverse proportion of the number of subjects in the current class over the entire sample, x_i and y_i are the feature vector and label of sample i, and h represents the network function. For the training of the GAN, the discriminator and the generator were optimized alternately. During the optimization of the discriminator, the parameters of the generator were held constant, and when the generator was trained, the parameters of the discriminator were fixed. The minimax competition between G and D drives both networks toward better performance. Besides adding dropout layers, another strategy, early stopping, was applied during the training process to reduce overfitting. During the training of a deep neural network, iterative back propagation can drive the network to co-adapt to the training set; after a certain point, reducing the training error results in increasing the generalization error. Early stopping is therefore useful to provide guidance on the number of optimization iterations before overfitting. Part of the training data was randomly selected as the validation set and excluded from training. While the remaining data samples were used to train the network, the validation set was used to determine the early stopping time point: the iteration at which the network has the lowest generalization error on the validation set. In this study, optimization of the network was stopped when the generalization error on the validation set ceased to decrease for 20 consecutive epochs. Furthermore, due to the limited number of available data and the variation among different samples, there was still a chance that early stopping with a small validation set could result in classification biased toward the validation set, and the differential performance could be unstable with different splits of the training and validation sets. An ensemble classifier strategy (Lu et al., 2018b) was therefore used to improve the robustness, stability, and generalizability of the classifier.
Similar to the 10-fold cross validation, the training set was randomly divided into 10 subsets. In each fold of the training process, one subset was retained for validation while the remaining nine subsets were used for training. With 10 repetitions, each subset was used for validation once, resulting in 10 different networks. For each test sample, each network generates three probabilities corresponding to NC, AD, and FTD. The output probabilities of the 10 networks were averaged, followed by a softmax operation, to determine the final classification result. The proposed deep neural network was built with TensorFlow (Abadi et al., 2015), an open source deep learning toolbox provided by Google. For the optimization of the network in all experiments, Adaptive Moment Estimation (Adam) was used as the optimizer, the batch size was set to 100, and the learning rate was fixed at 5 × 10^-5.

(Figure 3 caption: comparison of the W-score feature distributions. (A) NC subjects in ADNI vs. NC subjects in FTLDNI, no significant difference, confirming that no database-specific biases remained in the normative group; (B) NC vs. AD in ADNI; (C) NC vs. FTD in FTLDNI; (D) AD vs. FTD, with significant differences in both volume-based and thickness-based features. Unpaired t-tests for each pairwise comparison, corrected for multiple comparisons with a false discovery rate (FDR) of 0.05.)

Performance Evaluation

To validate the discriminant ability of the proposed framework on NC, AD, and FTD pathology, 10-fold cross validation was performed on the 1,954 T1 MRI images. Because a single subject could have multiple scans at different visits, a split based on images could result in scans from the same subject being used for both training and testing. We therefore performed the split based on subjects to ensure complete separation between training and test samples. As mentioned in section 2.5, the training set was further subdivided into 10 subsets for each cross validation experiment, and 10 networks optimized with different training and validation sets were used to "vote" for the classification result of the testing samples. Such an experimental design ensures that the data samples in the training, validation, and testing sets were mutually exclusive at the subject level. The performance of classification was measured via accuracy and the sensitivity of correctly identifying the different groups, such as N(TrueNC)/N(NC) for the NC group, where N(·) denotes the number of data samples belonging to that group. In addition to the proposed deep-learning-based method, a standard classifier, the support vector machine (SVM), was also trained for comparison. A one-vs.-rest strategy was applied for this multi-class classification task. Principal component analysis (PCA) was used for the reduction of feature dimension, and the eigenvectors accounting for 95% of the total data variance were retained. A radial basis function (RBF) kernel was used for the SVM given its superior performance in classification tasks. The features extracted at different scales were concatenated as the input for the PCA+SVM classifier.
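A minimal sketch of the PCA+SVM baseline just described is given below, assuming scikit-learn. The feature matrix, labels, and any hyperparameters not stated in the text are placeholders of our own, not the settings used in the original experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = subjects, columns = concatenated multi-scale w-score features
X = np.random.randn(300, 3409)
y = np.random.randint(0, 3, size=300)  # 0 = NC, 1 = AD, 2 = FTD

baseline = make_pipeline(
    PCA(n_components=0.95),                  # keep components explaining 95% of the variance
    OneVsRestClassifier(SVC(kernel="rbf")),  # RBF-kernel SVM with a one-vs.-rest strategy
)

print(cross_val_score(baseline, X, y, cv=10).mean())  # 10-fold cross-validated accuracy
```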
In addition, to validate the effect of the patch-wise parcellation, we also trained the MLPs on FreeSurfer ROI-wise features, i.e., the surface thickness and volume size of each ROI based on the FreeSurfer segmentation. Figure 3 shows the comparison of the distributions of the entire concatenated multi-level multi-type W-score feature set between the different subgroups. First, no statistical difference was shown when comparing the W-scores of the healthy control subjects between ADNI and FTLDNI for either the volume-based or thickness-based features (Figure 3A), confirming that no database-specific biases remained in the input w-score features of the normative group. Similar levels of significant differences were shown when comparing the NC and AD subjects in the ADNI database (Figure 3B) and when comparing the NC and FTD subjects in the FTLDNI database (Figure 3C), indicating similarity between the AD and FTD groups. Finally, when comparing the FTD and AD groups alone, significant differences were observed in both the volume-based and thickness-based features, indicating a discrepancy between these two dementia subtypes that can be utilized to achieve potential differential diagnosis.

Cross Validation Experiment Results

The results of the 10-fold cross validation experiment are shown in Table 2. When comparing the mean accuracy across the 10 folds, the accuracy of PCA+SVM with both types of multi-scale features was only slightly higher (0.02%) than that of the multi-scale deep neural network (MDNN) with the surface thickness feature. The accuracy of the MDNN using the volume size feature was higher than the one using the surface thickness feature by 2.93%. The combination of both types of multi-scale features showed superior performance compared with the MDNN using a single type of feature, and it was further improved by 1.42% with data augmentation using the proposed GAN technique. Figure 4 shows the corresponding statistical comparison results among the different experimental setups for the overall accuracy as well as the sensitivity for each class group. When compared to the baseline method, PCA+SVM (multi-type), the proposed MMDNN method both with and without GAN showed significant improvement (indicated as O) in the overall accuracy (Figure 4A) as well as the sensitivity for AD (C) and FTD (D). Training with multi-type features showed improvement over training with only a single feature (either thickness, indicated as X, or volume, indicated as +) in terms of overall accuracy (Figure 4A). Finally, data augmentation using GAN further improved the overall accuracy (Figure 4A) as well as the sensitivity for the NC group (Figure 4B) and the FTD group (Figure 4D) (indicated as +). For detailed classification results, the confusion matrices of the experiments using the proposed multi-scale networks are displayed in Table 3. The four experiments presented show a similar pattern despite the differences in their accuracy and sensitivity. The networks had good performance for the task of distinguishing between AD and FTD pathology. The discrimination between NC and FTD showed the least accurate performance, leaving room for potential future improvement.

Discrimination With the Cortical Thickness Feature

The experimental performance with only the cortical thickness feature is displayed in Table 4. The MLP with only the ROI-wise cortical thickness feature showed the lowest accuracy (76.48%), while a better result was achieved with PCA+SVM using features extracted at all scales.
As expected, the classification performance was sensitive to changes in patch size, and a general reduction in overall accuracy was found with increasing patch size. The combination of multi-scale features with the MDNN yielded superior classification performance.

Discrimination With the Volume Size Feature

The experimental performance with the volume size feature is displayed in Table 5. Similar to the experiments with the cortical thickness feature, the MLP with only the ROI-wise feature had the worst performance (79.78%), and PCA+SVM using features extracted at all scales showed better accuracy (82.28%). Unlike the experiments with the cortical thickness feature, the MLP with a single scale of features showed better performance compared with PCA+SVM using features extracted at all scales. The combination of multi-scale features with the MDNN also had the highest accuracy, while no general reduction of accuracy was found with increasing patch size.

Ensemble Classifier

As described in section 2.5, the classification results presented in this study came from the "collective vote" of an ensemble of classifiers instead of a single network. The classification performance with and without ensemble classifiers for four different experiments, including the MDNN with cortical thickness, the MDNN with volume size, the MMDNN with multiple types of features, and the GAN with multiple types of features, is shown in Figure 5. The y axis represents the mean classification accuracy from the 10-fold cross validation experiment, while the x axis stands for the different classifiers. On the x axis, the numbers "1" to "10" represent the networks trained with different splits of the training and validation sets, while "ensemble" denotes the combined result of these 10 networks.

DISCUSSION

In this study, we proposed a novel deep-learning-based framework for the differential diagnosis of NC, AD, and FTD. The cross validation experiments indicate that the proposed network can learn the latent patterns representing the different dementias using multi-type and multi-scale features, which, in combination with GAN-based data augmentation, achieved a high accuracy of 88.28%. Based on the confusion matrix displayed in Table 3, there were only three cases of misdiagnosis between AD and FTD out of 891 samples, suggesting the excellent performance of the proposed framework in distinguishing these two dementias.

Differential Diagnosis Using MRI Biomarkers

Brain MRI is an imaging modality widely used for detecting various types of dementia, as the image contrast between different tissues can reveal pathology-induced brain morphology changes. Due to variations in pathogenesis and phenotypes, dementia can be further categorized into different subtypes, such as FTD, AD, mild cognitive impairment, vascular dementia, and dementia with Lewy bodies. Differentiating among dementia subtypes is crucial to providing appropriate healthcare and potential treatment, but it is challenging due to overlapping phenotypes and morphological heterogeneity within each subtype (Bruun et al., 2019), and accurate differential diagnosis requires an appropriate feature extraction technique combined with a powerful classification model.
Some recent studies attempted to differentiate dementia subtypes using different machine learning techniques, such as hierarchical classification (Kim et al., 2019) and statistical learning with feature selection based on the least absolute shrinkage and selection operator (LASSO) and support vector machines (SVM) (Zheng et al., 2019), but they are limited by either a constrained feature set (e.g., structural volume features) or a relatively small validation dataset for testing the robustness and generalizability of the classifiers. In our study, we proposed a framework to achieve accurate differential diagnosis by first building a multi-scale multi-type feature set, followed by a deep neural network aided by a generative adversarial data augmentation technique, which was validated on a large sample (1,954 images), demonstrating a consistently high overall accuracy.

Multi-Scale Classification

Based on the results presented in Table 4, the accuracy of the MLP decreased from 82.80% to 79.51% as the patch size increased from 500 voxels to 2,000 voxels, suggesting that the cortical thickness feature is sensitive to the choice of ROI patch size, while less variation in accuracy was found with the ROI volume feature (from 85.78% to 85.41%), as shown in Table 5. In contrast to our observations using the cortical thickness feature, the accuracy with the volume size feature showed a slight improvement when the patch size increased from 1,000 to 2,000 voxels, suggesting that the volume change caused by brain atrophy may affect a large brain region in a similar fashion. However, the combination of multi-scale features always resulted in better classification performance, indicating that the proposed MDNN is capable of learning the hidden pattern across small to large patch sizes regardless of the feature type. The optimal scale with the best performance would be a potential tunable hyperparameter in an optimization framework.

Volume Size, Surface Thickness, and Other Morphological Features

Two types of features, ROI volume and cortical thickness, were used for differential diagnosis in this study. Cross validation experiments showed that volume size has better discriminant ability than surface thickness regardless of the scale of the feature and the type of classifier, as presented in Tables 4, 5. In addition, the results in Table 2 show that, with the same classifier, the combination of these two features yields superior classification performance compared with a single type of feature, regardless of whether they are concatenated as a single input feature vector for the SVM or an MLP is used to learn the latent representation of each scale of feature first. In this study, we have explored the extraction of volume-based and cortical-thickness-based features as an effort to improve the power of differential diagnosis. Other additional image-based morphological features could potentially provide complementary information regarding brain pathology. Specifically, cortical folding has shown different aging-related patterns between healthy and diseased brains (Wang et al., 2016), including dementias such as AD (Cash et al., 2012). The combination of cortical folding with other shape-based descriptors, such as local cortical thickness, could potentially yield a better characterization of the cortical morphological changes induced by AD and other types of dementia (Awate et al., 2017).
Therefore, the proposed framework could potentially be further extended to integrate other brain morphological descriptors, such as cortical folding, into the multi-type input feature space to achieve better classification and differential diagnosis power. In the current study, the proposed network was trained using structural-MRI-based patch-wise volume size and surface thickness features created with a combination of FreeSurfer segmentation and k-means clustering, to balance the number of trainable parameters and the level of original image-based patterns that are preserved. A potential future direction is to learn the features directly from the raw structural image while maintaining a trainable number of network parameters, which still remains a challenge. This study, with patch-wise FreeSurfer-segmentation-based features, sets a baseline benchmark against which future deep-learning-based differential diagnosis studies with novel network-learned image-based features can be compared.

As displayed in Table 2, the classification accuracy was further improved by 1.42% when using GAN for data augmentation. The sensitivity for detecting AD and FTD pathology was increased by a large margin, with a slight decrease for detecting NC samples. Instead of log(1 − D(G(z))), we used −log(D(G(z))) in the generator loss to avoid vanishing gradients and mode collapse. Therefore, we did not specify what kind of data samples the generator should synthesize; we consider it a "success" for the generator as long as the generated feature vector is classified as one of the three categories, i.e., NC, AD, or FTD, by the Discriminator. It would be interesting to train one or three Generators to synthesize data samples corresponding to specific groups, although this is beyond the scope of this study, as our primary goal was to increase the differentiating accuracy. For the generator, we only have a single hidden layer because of the low dimension of our data and the potential gradient vanishing problem. Instance normalization or other kinds of normalization (Almahairi et al., 2018) were not performed because they caused mode collapse of the generator and resulted in synthetic data all close to 0. In contrast with many other studies using GANs, we found that the root mean square propagation (RMSprop) optimizer resulted in an 87.39% accuracy, which was lower than that obtained with the Adam optimizer.

Ensemble Classifier and Cross-Validation

As shown in Figure 5, there can be as much as a 3% difference in classification accuracy (the seventh and the tenth bar of the top left panel) across the individual classifiers trained with different subdivisions of the training and validation sets, suggesting unstable performance of each single classifier. In all four experiments, the ensemble classifier had the highest or close to the highest accuracy, suggesting that the ensemble strategy improves the robustness and generalizability of the classifier. It is worth mentioning that, with the GAN, the variation of classification accuracy across individual classifiers decreased to 0.49% (from 87.98 to 88.47%) while the accuracy of the ensemble classifier was 88.28%, suggesting that, when using GAN for data augmentation, the complex co-adaptations to the training or validation set were reduced. The ensemble classifier strategy, although still effective, could therefore be optional with the application of GAN in light of limitations of available computational resources.
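To make the generator configuration discussed above concrete, the sketch below shows one way it could be written in Keras: a 100-dimensional Gaussian noise input, a single 512-unit ReLU hidden layer, and a 3,449-unit tanh output. This is our own minimal re-implementation for illustration, not the authors' released code, and training details (losses, class weighting, optimizer settings) are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(noise_dim=100, feature_dim=3449):
    # Single ReLU hidden layer and tanh output, mirroring the generator
    # architecture described in the text (layer sizes as reported there).
    return tf.keras.Sequential([
        layers.Dense(512, activation="relu", input_shape=(noise_dim,)),
        layers.Dense(feature_dim, activation="tanh"),
    ])

generator = build_generator()
synthetic = generator(tf.random.normal((16, 100)))  # a batch of synthetic feature vectors
print(synthetic.shape)  # (16, 3449)
```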
On top of the combination of GAN-based data augmentation and the cross-validation-based ensemble classifier, an additional nested 10-fold cross validation was implemented to ensure the proposed method is properly validated. Nevertheless, it would be ideal to validate the proposed multi-class classifier on an entirely independent and well-homogenized dataset to best evaluate its generalizability toward unseen data (Yee et al., 2020).

CONCLUSION

In this study, a novel framework for accurate differential diagnosis among NC, AD, and FTD pathology has been proposed, leveraging multi-type and multi-scale feature fusion, an ensemble classifier, and a GAN strategy. The proposed framework achieved a high accuracy of 88.28%. The cross-validation experiments conducted on 1,954 MRI images demonstrate three salient observations. Firstly, the proposed network was able to learn the latent representation pattern across the different types of features (volumes and cortical thickness) extracted at coarse-to-fine scales. Secondly, using a Generative Adversarial Network for data augmentation could prevent overfitting and improve classification performance. Thirdly, the ensemble classifier strategy could result in a more robust and stable classifier, which has statistically better performance than an individual classifier. The promising high-accuracy results using the proposed framework, and the ability of deep networks to generalize to multiple classes, indicate that this approach can potentially be extended to the multi-class differential classification of brain images in other neurodegenerative dementias as well.

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS

DM conducted the experiment, performed the data processing and analysis, and wrote the manuscript. LW and MB designed and supervised the experiments and guided and revised the manuscript. KP performed the data processing and contributed to manuscript writing. DL conducted the experiment, designed the framework, performed the analysis, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
COVID-19 vaccine hesitancy worldwide and its associated factors: a systematic review and meta-analysis

Introduction: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has taken a toll on humans, and the development of effective vaccines has been a promising tool to end the pandemic. However, for a vaccination program to be successful, a considerable proportion of the community must be vaccinated. Hence, public acceptance of coronavirus disease 2019 (COVID-19) vaccines has become the key to controlling the pandemic. Recent studies have shown vaccine hesitancy increasing over time. This systematic review aims to evaluate the COVID-19 vaccine hesitancy rate and related factors in different communities. Method: A comprehensive search was performed in MEDLINE (via PubMed), Scopus, and Web of Science from January 1, 2019 to January 31, 2022. All relevant descriptive and observational studies (cross-sectional and longitudinal) on vaccine hesitancy and acceptance were included in this systematic review. In the meta-analysis, the odds ratio (OR) was used to assess the effects of population characteristics on vaccine hesitancy, and the event rate (acceptance rate) was the effect measure for overall acceptance. Publication bias was assessed using the funnel plot, Egger's test, and trim-and-fill methods. Result: A total of 135 out of 6,417 studies were included after screening. A meta-analysis of 114 studies, including 849,911 participants, showed an overall acceptance rate of 63.1%. In addition, men, married individuals, educated people, those with a history of flu vaccination, those with higher income levels, those with comorbidities, and people living in urban areas were less hesitant. Conclusion: Increasing public awareness of the importance of COVID-19 vaccines in overcoming the pandemic is crucial. Being a man, living in an urban region, being married or educated, having a history of influenza vaccination, having a higher level of income, and having a history of comorbidities are associated with higher COVID-19 vaccine acceptance.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) first emerged in December 2019 and soon became a global concern owing to its high transmissibility; it was announced as a pandemic by the World Health Organization (WHO) in March 2020. Various strategies have been considered to prevent further transmission. Obligatory wearing of face masks, social distancing, travel restrictions, lockdowns, and quarantine were the first steps taken by most countries worldwide. Despite being partially successful in limiting further disease transmission, these strategies resulted in tremendous economic devastation [1,2]. The constant emergence of new variants of SARS-CoV-2, which are either more transmissible or cause greater morbidity and mortality, further raised the need for a more cost-effective solution. Therefore, ever since vaccines became available and approved for use at the end of 2020, they have been considered the most effective and crucial strategy to fight coronavirus disease 2019 (COVID-19) [3][4][5].
Nevertheless, for a vaccination program to be successful, a considerable proportion of the community must be vaccinated. At least 70% of each community must be fully vaccinated to achieve herd immunity, and this number may be greater based on the vaccine type and transmissibility of the circulating variants [4,6,7]. Hence, public acceptance of COVID-19 vaccines has become the key to controlling the pandemic. It is therefore important to ensure that maximum vaccine coverage is reached by making vaccines accessible and affordable and increasing public awareness to achieve maximum vaccine acceptance [8]. However, studies have shown that the rate of vaccine hesitancy has increased over time, making it the most important concern in the fight against COVID-19. The WHO declared vaccine hesitancy, defined as the refusal to get vaccinated despite the availability of vaccines, as one of the top 10 global health threats in 2019. This growing hesitancy may be because of an altered perception of the disease risk, uncertainty about available vaccines, fear of side effects, misinformation, and the spread of fake news [3,8,9]. Perception of health risk is strongly associated with vaccine hesitancy. Consequently, for public health to improve people's knowledge and attitudes toward vaccination, it is necessary to first understand this phenomenon's social, demographic, and psychological determinants. Furthermore, the language and communication strategies or media used to convey a health message influence how the vaccine is received. To accomplish this, all authorities involved in health communication must work together to produce clear and coherent messages [10]. With vaccines being the most important and effective weapon in the battle against COVID-19, it is essential to address the factors contributing to vaccine hesitancy and attempt to increase the rate of vaccine acceptance in the community. Herein, a systematic review was performed to detect the intention to receive COVID-19 vaccines among different communities and identify different population characteristics and factors associated with COVID-19 vaccine hesitancy.

Eligibility criteria

This systematic review included all relevant descriptive and observational studies (cross-sectional and longitudinal) on vaccine hesitancy and acceptance. No time constraints for studying or publishing articles nor restrictions on the population were imposed. Non-English studies, studies without full-text access, and those not relevant to vaccine hesitancy or acceptance were excluded. Narrative reviews, systematic reviews, meta-analyses, editorials, commentaries, letters to the editor, unpublished data, books, and conference papers were also excluded.

Study selection

After searching the databases, all retrieved records were screened for inclusion by reviewing the title/abstract and full text based on the eligibility criteria. Six authors (MB, FF, FG, HB, RR, and FS) performed both title/abstract and full-text screening, such that every article was reviewed by two independent reviewers. They resolved any disagreements by consulting a third reviewer (AL, NS, or MA).
Data extraction and analysis One reviewer (FG, HB, RR, or FS) extracted the relevant data from the included papers, which were then rechecked and confirmed by another reviewer (AL, NS, or MA). The following data were extracted for each study: title; first author's name; date of study (year and month); study design; number of respondents/participants; age groups; gender; race/ethnicity; religion; marital status; country; metropolitan classification (rural or urban); income; insurance status; education; occupation/employment status; work setting (high-risk or non-high-risk); presence of any disease/chronic condition/history of comorbidities (physical/psychiatric); ongoing treatments; smoking status/alcohol consumption; mistrust in the government/healthcare system; training received on COVID-19 prevention; contact with confirmed/suspected COVID-19 patients; history of COVID-19 diagnosis; loss of someone to COVID-19; health beliefs about COVID-19 (perceived susceptibility, severity, benefits, barriers, cues to action, etc.); being informed about COVID-19 vaccines; vaccination-related intentions; parents' willingness and hesitancy toward children's vaccination; COVID-19 vaccination status (not vaccinated, 1 dose, etc.); vaccine hesitancy; willingness to pay for vaccination; participants' attitudes and beliefs toward COVID-19 vaccines; fear of vaccination's adverse effects; and acceptance of other vaccines. WebPlotDigitizer version 4.5 (Pacifica, California, USA) was used to extract data from the figures [11].

Hesitancy, the primary outcome of the study, was defined as any reluctance, delay, or doubt in acceptance, as well as refusal, of the COVID-19 vaccines. Acceptance was defined as being already vaccinated or willing to accept COVID-19 vaccines in the future without any doubt. For studies in which only hesitancy or acceptance was reported, the other was calculated by subtracting the reported count from the total number of respondents. If a study reported hesitancy and acceptance as two different variables (i.e., measured using two different questionnaires), acceptance was calculated by subtracting the number of hesitant respondents from the total respondents. To measure the effects of age and income on vaccine hesitancy, the highest category of each variable reported in the included studies was compared with the lowest category. The odds ratio (OR) was used to assess the effects of population characteristics on vaccine hesitancy, and the event rate (acceptance rate) was the effect measure for overall acceptance. Additionally, 95% confidence intervals (CIs) were calculated for both measures. Crude data were extracted when available; otherwise, ORs were calculated. Owing to the heterogeneity in the variables included in the regression models of different studies, only univariate ORs were extracted for use in the meta-analysis. The random-effects model was used when heterogeneity was more than 50% (I² > 50%). Publication bias was visually assessed using funnel plots and Egger's test. The trim-and-fill method was used to impute the missing studies and adjust for the effects of publication bias.

All meta-analyses were performed using Comprehensive Meta-Analysis, Version 2.2 (CMA; Biostat Inc., Englewood, NJ, USA). The acceptance rate was plotted on a world map using Python version 3. The studies were ordered alphabetically in all forest plots.

Fig. 2. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between women and men.
Fig. 3. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between older and younger people.
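The pooling and bias checks just described can be made concrete with a small sketch. The following is a hypothetical Python illustration of DerSimonian-Laird random-effects pooling of study-level odds ratios and of Egger's intercept test; it mirrors the generic methods rather than the exact CMA implementation, and all counts are invented:

```python
# Illustrative sketch (not the authors' CMA workflow): DerSimonian-Laird
# random-effects pooling of study-level odds ratios, plus Egger's
# regression test for funnel-plot asymmetry. All numbers are invented.
import numpy as np
from scipy import stats

def pool_random_effects(counts):
    """counts: (k, 4) array of 2x2 cells (a, b, c, d) per study."""
    counts = np.asarray(counts, dtype=float) + 0.5   # continuity correction
    a, b, c, d = counts.T
    y = np.log((a * d) / (b * c))                    # study log odds ratios
    v = 1/a + 1/b + 1/c + 1/d                        # within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
    df = len(y) - 1
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0   # I^2 (%)
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                            # random-effects weights
    mu, se = np.sum(w_re * y) / np.sum(w_re), np.sqrt(1 / np.sum(w_re))
    return np.exp(mu), np.exp(mu - 1.96*se), np.exp(mu + 1.96*se), i2, y, np.sqrt(v)

def egger_test(effects, ses):
    """Regress standardized effects on precision; intercept != 0 suggests bias."""
    z, precision = np.asarray(effects) / np.asarray(ses), 1 / np.asarray(ses)
    res = stats.linregress(precision, z)
    t = res.intercept / res.intercept_stderr
    return res.intercept, 2 * stats.t.sf(abs(t), len(z) - 2)

or_, lo, hi, i2, y, se = pool_random_effects([(40, 60, 55, 45),
                                              (30, 70, 50, 50),
                                              (25, 75, 45, 55),
                                              (90, 110, 120, 80)])
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
print("Egger intercept = %.2f, p = %.3f" % egger_test(y, se))
```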
Study characteristics A total of 6,417 articles (PubMed = 1,548, Web of Science = 1,077, and Scopus = 3,792) were retrieved from the database search. After removing 2,369 duplicated records, 4,048 remained, which were assessed for eligibility using title/abstract and full-text screening. Finally, 135 studies were included in this analysis. Fig. 1 shows the identification process of the included studies according to the PRISMA 2009 flow diagram [12]. Table 1 shows a summary of the included studies.

A meta-analysis of 114 studies encompassing 849,911 participants showed an overall acceptance rate of 63.1% (59.3-66.7%; Fig. 12). Moreover, Fig. 13 shows the acceptance rate by country, and Fig. 14 shows a map of the acceptance rate worldwide. All analyses were performed using a random-effects model due to significant differences observed in the design, setting, and population of the included studies. In addition, I² for all analyses was greater than 50%, confirming heterogeneity among the included studies. Furthermore, applying Duval and Tweedie's trim-and-fill method altered the results for age (OR = 0.90; 95% CI = 0.73-1.10) and healthcare workers (OR = 0.95; 95% CI = 0.82-1.11). Funnel plots and other publication bias tests are shown in the Supplementary Material (Figs. S1-S11 and Table S1).

Discussion Evaluation of the attitudes toward and acceptance rates of COVID-19 vaccines can aid in launching much-needed communication initiatives to boost public trust in health authorities. Using the results of several COVID-19 vaccine surveys conducted worldwide, this systematic review analyzed the prevalence of and factors influencing COVID-19 vaccine acceptance, intention, and hesitancy. The vaccine uptake rate plays a significant role in achieving herd immunity against COVID-19. The basic reproductive number of an infectious disease is used to calculate the level of population immunity required to limit its spread [141]. According to the most recent COVID-19 estimates, a population of 60-75% immune individuals is necessary to prevent the virus from spreading further and infecting the community [142][143][144].

Three factors influence vaccination acceptance: complacency, confidence, and convenience [145]. Complacency refers to the belief that the risk of developing a specific disease is low, making immunization unnecessary and avoidable [146,147]. The level of faith and trust in the safety and effectiveness of vaccination is referred to as confidence. The comfort afforded to the population in terms of vaccine accessibility, price, and availability is referred to as convenience [146].

The findings showed that reasons for hesitation are most frequently associated with distrust of medical authorities and of vaccine safety. Other factors related to the perception of health risks, such as fear of consequences and lack of information, are also important in vaccine hesitancy. Hence, future vaccination campaigns should emphasize the importance of the individual and include activities aimed at increasing their health knowledge. These actions should be performed at all levels of the healthcare system to increase awareness and trust.

The rapid development of effective and safe COVID-19 vaccines was unprecedented [148][149][150][151]. Nonetheless, COVID-19 vaccine apprehension could be a stumbling block in worldwide attempts to contain the pandemic's harmful health and socioeconomic consequences [152][153][154]. The cost, effectiveness, and duration of protection provided by vaccines appear to be important factors in achieving this goal [150,155,156]; however, vaccine reluctance could be a major obstacle to successfully controlling the COVID-19 outbreak [35].
Consequently, estimates of vaccine acceptance rates can help plan actions and intervention measures to raise public awareness and reassure people about the safety and benefits of vaccines, which can help control the virus' spread and mitigate the negative effects of this unprecedented pandemic [157,158]. Pogue et al. found that income did not affect vaccination attitudes, and that participants with a low educational level had a lower acceptance rate [153]. These findings are partially in accordance with those of Danis et al., who found that economic hardship was a driver of vaccination reluctance; however, no link was found between financial hardship and vaccine rejection [159]. By contrast, parental education was a valid predictor of vaccination refusal in both mothers and fathers, whereas reluctance appeared to be unaffected by parental education. In addition, Black and African populations had lower acceptance rates in our study, a finding consistent with another study that found a higher level of skepticism and anxiety regarding the flu vaccine among African Americans [159]. In contrast, our analysis indicates that income does have an impact on vaccination attitudes, with the high-income population showing lower COVID-19 vaccine hesitancy than the low-income population.

Vaccine acceptability among healthcare personnel yielded mixed findings. In general, healthcare workers had higher acceptance; however, Dror et al. found no significant differences in vaccine acceptance between healthcare and non-healthcare personnel in their study, and Barello et al. found no significant differences between healthcare and non-healthcare students [160,161]. Our analysis found a statistically significant difference in COVID-19 vaccine hesitancy between healthcare and non-healthcare workers, with healthcare workers showing less hesitancy than non-healthcare workers. The impact of political ideology on vaccination acceptance or rejection has been one of the most intriguing aspects of some studies; for instance, Kennedy et al. conducted a study focusing on populist parties, finding that, at least in the Western European setting, populist party support might be used as a proxy for vaccination reluctance [162].

The constant advancement of technology suggests that the future of healthcare will be integrated with technology. Therefore, to combat vaccine hesitancy, it is critical to promote population-based communication and information strategies. These strategies include forging multidisciplinary alliances among healthcare providers, providing medical and scientific communications on vaccination, sharing recent data and evidence via virtual media or brochures, and increasing opportunities for dialogue and counseling regarding vaccination [10,163].
Finally, when the effects of gender and age on COVID-19 vaccination apprehension were examined, it was found that men were more likely to be immunized against COVID-19. This may be because of their stronger perception of COVID-19 hazards and weaker beliefs in disease-related conspiracies [164][165][166]. As sampling bias, particularly in gender distribution, might alter the reported rates, these variables should be addressed for the appropriate interpretation of COVID-19 acceptance rates. According to our review, men have a higher acceptance rate than women. This finding is consistent with previous research, which indicated that a substantial percentage of women are concerned about vaccination safety and have little faith in the quality and impartiality of the information supplied by healthcare experts [167]. Furthermore, according to our analysis, age was not associated with vaccine acceptance. This result is inconsistent with prior research, which demonstrated that the COVID-19 vaccine acceptance rate increased with age [168]. Similar to our results, another study conducted by Salibi et al. among Syrian refugees showed that vaccine rejection did not differ with age [169]. We also analyzed two other sociodemographic factors in our review: marital status and place of residence. According to our results, married people had a lower level of vaccine hesitancy than single people. Moreover, rural people showed a higher rate of vaccine hesitancy than those living in urban areas.

We also analyzed the distribution of vaccine hesitancy and acceptance rates among different countries. The COVID-19 vaccination uptake rates in the Middle East were among the lowest worldwide, with Kuwait (23.6%), Jordan (28.4%), and Saudi Arabia (64.7%) having the lowest acceptance rates [164,170]. Such low rates could be attributed to the region's broad adoption of conspiracy views as well as its consequent anti-vaccination attitude [165,166,171,172]. Nevertheless, a few nations in the area (such as Israel and the United Arab Emirates) were able to attain vaccination coverage rates that were among the highest in the world, which was ascribed to major efforts to increase vaccine trust [173,174]. The vaccine acceptance rates were relatively high in Latin America, with results from Brazil and Ecuador reporting acceptance rates above 70% [175,176]. This was also observed in a survey in Mexico, with a vaccine acceptance rate of 76.3% [175]. Urrunaga-Pastor et al. attributed this to the fact that the region was one of the most affected by the pandemic internationally, with high mortality rates, which might have contributed to lower levels of complacency [177,178]. High rates of COVID-19 vaccination hesitancy have been reported in Western and Central Europe, with some European countries (Ireland, Italy, Norway, and the United Kingdom), Canada, and the United States having a better outlook. According to a recent study on COVID-19 vaccine hesitancy in the United States, geographic disparity in vaccine hesitancy is closely linked to socioeconomic variables such as race and income. The authors argue that policymakers, community groups, and religious leaders play important roles in building public trust and reducing vaccine-related hesitancy [179].

Fig. 7. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between high- and low-income people.
Furthermore, data from African nations revealed significant rates of COVID-19 vaccination apprehension, particularly in Cameroon (15%) and Senegal (21%); this was mostly attributed to a lack of confidence in the vaccines.

This study has some limitations. First, the studies included in this analysis varied in population, making comparison of the results challenging. Second, most studies relied on self-reported surveys, which increased the risk of response bias. Third, it is crucial to consider that people's views on vaccines may change as real-world data become available. The studies included in our review captured public opinions during the peak of the COVID-19 pandemic, a time when information related to vaccines was still emerging and often scarce.

Further research is required to confirm this hypothesis. Longitudinal studies that follow individuals over time could be valuable for understanding how attitudes evolve with new developments. Directly conducting interviews and focus groups with individuals can provide insights into beliefs and concerns that surveys may overlook.

Conclusion Being a man, living in an urban region, being married or educated, having a history of influenza vaccination, having a higher income level, and having a history of comorbidities were associated with higher COVID-19 vaccine acceptance. In contrast, older age, a history of prior COVID-19 infection, and being a healthcare worker did not significantly change the COVID-19 vaccine acceptance rate.

Fig. 5. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between married and single people.
Fig. 6. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between educated and non-educated people.
Fig. 8. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between people with and without a history of COVID-19 infection.
Fig. 9. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between people with and without a history of influenza vaccination.
Fig. 10. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between healthcare and non-healthcare workers.
Fig. 11. Forest plot displaying the comparison of COVID-19 vaccine hesitancy between people with and without comorbidities.
Fig. 12. Forest plot displaying the overall acceptance rate of the included studies.
Fig. 13. Forest plot displaying the acceptance rate of the included studies categorized by country.
Table 1. Summary of the included articles.
2023-11-17T16:25:45.245Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "973665743e98dd91514a3a7358fc8e482b8112a9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.soh.2023.100048", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f577c6468f8fc49c890c115486754f6650e46ee9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237467109
pes2o/s2orc
v3-fos-license
Crystal structure, Hirshfeld and electronic transition analysis of 2-[(1H-benzimidazol-1-yl)methyl]benzoic acid In the title compound, the benzimidazole ring system is inclined to the benzene ring by 78.04 (10)°. The crystal structure features O—H⋯N and C—H⋯O hydrogen bonding and C—H⋯π and π–π interactions.

Chemical context Benzimidazole is a naturally occurring compound, being present in vitamin B12 (Crofts et al., 2014), and may also be synthesized from benzoic acid and o-phenylenediamine in the presence of an excess of acid. Benzimidazole and its derivatives show biological activities such as antibacterial, antifungal (Yadav et al., 2015), antimicrobial (Shruthi et al., 2016), and anticancer (Kalalbandi et al., 2015) effects. Cyanobenzyl compounds are used as intermediates in the synthesis of species that possess significant pharmaceutical properties. Compounds bearing a carboxylic acid functional group have shown chelating properties and thus have potential applications in the field of biology. Such groups are also helpful in building metal-organic frameworks, which usually form supramolecular networks due to extensive hydrogen bonding and weak interactions. For example, an (imidazol-1-yl)methyl-substituted benzoic acid has been used to construct coordination polymers with different metal ions (Ahmad et al., 2013). Herein, we report the title compound, 2-[(1H-benzimidazol-1-yl)methyl]benzoic acid, which was synthesized by a condensation reaction of benzimidazole and 2-(bromomethyl)benzonitrile in acetonitrile followed by a hydrolysis step.

Hirshfeld surface analysis A Hirshfeld surface analysis was performed and the two-dimensional fingerprint plots were generated (McKinnon et al., 2007; Spackman & Jayatilaka, 2009) using CrystalExplorer17 (Turner et al., 2017). The Hirshfeld surface was mapped over d_norm, colour-mapped from red (shorter distance than the sum of the van der Waals radii) through white to blue (longer distance than the sum of the van der Waals radii). The principal weak interactions are clearly visible. The surface coverage corresponding to the O—H⋯N and C—H⋯O interactions is 9% and 11.8%, respectively. The dark-red spot indicates significant hydrogen bonding. The two-dimensional fingerprint plots are given in Fig. 4.

Table 1. Hydrogen-bond geometry (Å, °).
Figure 1. Asymmetric unit of the title compound, with atom labelling; displacement ellipsoids are drawn at the 50% probability level.
Figure 2. View of the crystal packing along the a axis, showing O—H⋯N and C—H⋯O hydrogen-bonding interactions forming a one-dimensional chain.
Figure 3. The hydrogen bonding and C—H⋯π and π–π interactions form zigzag chains, giving a supramolecular structure along the bc plane.

Electronic transition analysis Electro-conducting materials synthesized from conjugated organic compounds show promising electronic properties due to the availability of delocalized electrons, in contrast to semiconducting materials such as TiO2, ZnO and other metal-oxide nano-materials, which are electro-conducting in themselves (Odziomek et al., 2017). The electronic properties of organic compounds depend on the electronic transition between the highest occupied molecular orbital (HOMO), or valence band, and the lowest unoccupied molecular orbital (LUMO), or conduction band. In a simple method, the energy band gap (Eg) of an organic molecule is determined by a Tauc plot from the absorption spectra (λmax = 245 nm, in this case). The band gap energy, Eg = 4.6 eV, of the title compound is very large (Fig. 5).
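As a rough illustration of the Tauc procedure just described, the sketch below estimates Eg from a made-up absorbance curve, assuming a direct allowed transition (so (αhν)² is plotted against photon energy and the linear absorption edge is extrapolated to zero); the wavelength grid, absorbance values, and edge-selection thresholds are all assumptions for demonstration, not the authors' data:

```python
# Hedged sketch of a Tauc-plot band-gap estimate; all data are invented.
import numpy as np

wavelength_nm = np.array([200, 215, 230, 245, 260, 275, 290])   # hypothetical
absorbance = np.array([1.9, 1.7, 1.3, 0.9, 0.45, 0.15, 0.05])   # hypothetical

energy_eV = 1239.84 / wavelength_nm        # photon energy E = hc / lambda
tauc = (absorbance * energy_eV) ** 2       # (alpha*h*nu)^2, alpha ~ absorbance

# Fit the steep linear region of the absorption edge; the x-intercept
# of the extrapolated line estimates the optical band gap Eg.
edge = (tauc > 0.2 * tauc.max()) & (tauc < 0.9 * tauc.max())
slope, intercept = np.polyfit(energy_eV[edge], tauc[edge], 1)
Eg = -intercept / slope
print(f"estimated band gap Eg ~ {Eg:.1f} eV")
```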
This large band gap arises due to the high π-conjugation or polarization in the title molecular system. The title molecule could be useful for developing or enhancing the organic electronic properties of conducting materials such as metal-organic frameworks.

Synthesis and crystallization In an equimolar ratio, benzimidazole (2 g, 16.9 mmol) and dry K2CO3 (4.66 g, 33.85 mmol) were mixed in a round-bottom flask in acetonitrile (MeCN, 60 ml) under an inert atmosphere. The mixture was stirred for 60 min at 363 K, then treated with 2-(bromomethyl)benzonitrile (3.31 g, 16.9 mmol), and the resulting solution was refluxed for 24 h. After completion of this step, the solution was allowed to cool to room temperature and the mixture was poured slowly onto ice-water (100 ml) under constant stirring. A greenish muddy crystalline precipitate was obtained, and it was left to stand at 293 K for two days. After two days, a crystalline powder of 2-[(1H-benzo[d]imidazol-1-yl)methyl]benzonitrile was obtained (Ahmad et al., 2013). The title compound was synthesized by hydrolysis of 2-[(1H-benzo[d]imidazol-1-yl)methyl]benzonitrile: 2 g was mixed with 20 molar equivalents of potassium hydroxide (6.86 g, 8.58 mmol) in water. The solution was refluxed at 373 K for 36 h; the resultant solution was then allowed to cool to room temperature, poured onto ice-water, and then acidified using 6 N HCl for protonation. The protonated solution was kept for slow evaporation. After two weeks, pale-yellow cubic crystals suitable for data collection were obtained in good yield. The reaction scheme is shown in Fig. 6.
2021-09-01T15:02:44.495Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "01f6c64f154d54aa81adb68c2f031519070a0b6b", "oa_license": "CCBY", "oa_url": "https://journals.iucr.org/e/issues/2021/07/00/ex2044/ex2044.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a1259161bf52fde64d45fba8d33a2a9c7ca0d74", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
260427179
pes2o/s2orc
v3-fos-license
Learning dynamical systems from data: A simple cross-validation perspective, part III: Irregularly-Sampled Time Series A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF) [34], which uses gradient-based optimization to learn a kernel based on the premise that a kernel is good if there is no significant loss in accuracy when half of the data is used for interpolation. Despite its previous successes, this strategy (based on interpolating the vector field driving the dynamical system) breaks down when the observed time series is not regularly sampled in time. In this work, we propose to address this problem by directly approximating the vector field of the dynamical system and incorporating the time differences between observations in the (KF) data-adapted kernels. We compare our approach with the classical one on different benchmark dynamical systems and show that it significantly improves the forecasting accuracy while remaining simple, fast, and robust.

Introduction The ubiquity of time series in many domains of science has led to the development of diverse statistical and machine learning forecasting methods. Examples include ARIMA [10], GARCH [5] and LSTM [39]. Most of these methods require the time series to be regularly sampled in time. Yet, this requirement is not met in many applications. Indeed, irregularly sampled time series commonly arise in healthcare [29], finance [16] and physics [40], among other fields. While adaptations have been proposed, these workarounds tend to treat the irregular sampling issue as a missing-values problem, leading to poor performance when the resulting missing rate is very high. Such approaches include (1) the imputation of the missing values (e.g., with exponential smoothing [23,42] or with a Kalman filter [17]), and (2) fast Fourier transforms or Lomb-Scargle periodograms [16,2]. This issue has motivated the development of several recent deep learning-based algorithms such as VS-GRU [27], GRU-ODE-Bayes [28,15] and ODE-RNN [18]. Amongst various learning-based approaches, kernel-based methods hold potential for considerable advantages in terms of theoretical analysis, numerical implementation, regularization, guaranteed convergence, automatization, and interpretability [11,32]. Indeed, reproducing kernel Hilbert spaces (RKHS) [14] have provided strong mathematical foundations for analyzing dynamical systems [6,21,19,20,4,24,25,1,26,7,8,9] and surrogate modeling (we refer the reader to [38] for a survey). Yet, the accuracy of these emulators depends on the kernel, and the problem of selecting a good kernel has received less attention. Recently, the experiments by Hamzi and Owhadi [22] showed that when the time series is regularly sampled, Kernel Flows (KF) [34] (an RKHS technique) can successfully reconstruct the dynamics of some prototypical chaotic dynamical systems. KFs have subsequently been applied to complex large-scale systems, including climate data [30,41]. The nonparametric version of KFs has been extended to dynamical systems in [35]. A KF version for SDEs can be found in [36]. Despite its recent successes, we show in this paper that this strategy (based on approximating the vector field of the dynamical system) cannot directly be applied to irregularly sampled time series.
Instead, we propose a simple adaptation to the original method that significantly improves forecasting performance when the sampling is irregular. The adaptation approximates the vector field directly and reduces to adding the time gaps between observations to the delay embedding used to feed the method. We demonstrate the benefits of our approach on three prototypical chaotic dynamical systems: the Hénon map, the Van der Pol oscillator, and the Lorenz map. For all three, our approach shows significantly improved forecasting accuracy (compared to the original approach). Specifically, our contributions are as follows: • We show that learning the kernel in kernel ridge regression using our modified approach significantly improves the prediction performance for irregular time series of dynamical systems. • Using a delay embedding, we adapt the KF-adapted kernel method algorithm to make multistep predictions. The outline of this paper is as follows. In Section 2, we review kernel methods for regularly sampled time series and propose an extension of Kernel Flows to irregularly sampled time series. Section 3 contains a description of our experiments with the Hénon, Van der Pol, and Lorenz systems and a discussion. The appendix provides a summary of the theory of reproducing kernel Hilbert spaces (RKHS).

Statement of the problem and proposed solution 2.1. The problem. Let x_1, x_2, ..., x_n be observations from a deterministic dynamical system in R^d, along with a vector t = (t_1, ..., t_n) containing the times of observation. That is, the observation x_k is observed at time t_k. Importantly, the time differences between observations, t_{k+1} − t_k, are not necessarily regular. Our goal is to predict x_{n+1}, x_{n+2}, ... given the future sampling times t_{n+1}, t_{n+2}, ... and the history of the irregularly observed time series (x_1, ..., x_n and t_1, ..., t_n).

2.2. A reminder on kernel methods for regularly sampled time series. The simplest approach to forecasting the time series (employed in [22]) is to assume that x_1, x_2, ... is the solution of a discrete dynamical system of the form x_{k+1} = f†(x_k, ..., x_{k−τ†+1}), (1) with an unknown vector field f† and time delay τ† ∈ N* (which we will call the delay or delay embedding), to approximate f† with a kernel interpolant f of the past data (a kernel ridge regression model [13]), and to use the resulting surrogate model x_{k+1} = f(x_k, ..., x_{k−τ†+1}) to predict future states. Given τ ∈ N* (see [22] for how τ can be learned in practice), the approximation of the dynamical system can then be recast as that of interpolating f† from the pointwise measurements f†(X_k) = Y_k, k = 1, ..., N, (2) with X_k := (x_k, ..., x_{k+τ−1}), Y_k := x_{k+τ} and N = n − τ. Given a reproducing kernel Hilbert space of candidates H for f†, and using the relative error in the RKHS norm ‖·‖_H as a loss, the regression of the data (X_k, Y_k) with the kernel K associated with H provides a minimax optimal approximation [33] of f† in H. This regressor (in the presence of measurement noise of variance λ) is f(x) = k(x, X)(k(X, X) + λI)^{−1}Y, (3) where X = (X_1, ..., X_N), Y = (Y_1, ..., Y_N), k(X, X) is the N × N matrix with entries k(X_i, X_j), k(x, X) is the N-vector with entries k(x, X_i), and I is the identity matrix. This regressor also has a natural interpretation in the setting of Gaussian process (GP) regression: (3) is the conditional mean of the centered GP ξ ∼ N(0, K) with covariance function K conditioned on ξ(X_k) + √λ Z_k = Y_k, where the Z_k are centered i.i.d. normal random variables of unit variance.
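To make (3) concrete, here is a minimal sketch assuming a Gaussian (RBF) kernel; the helper names and toy data are ours, not from the paper:

```python
# Minimal sketch of the kernel regressor (3) with a Gaussian kernel.
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_regressor(X, Y, gamma=1.0, lam=1e-6):
    K = rbf(X, X, gamma)
    coef = np.linalg.solve(K + lam * np.eye(len(X)), Y)   # (k(X,X)+lam I)^{-1} Y
    return lambda x: rbf(np.atleast_2d(x), X, gamma) @ coef   # k(x,X) coef

# toy usage: learn x -> sin(x) from noisy samples
X = np.linspace(0, 3, 30)[:, None]
Y = np.sin(X[:, 0]) + 0.01 * np.random.randn(30)
f = fit_kernel_regressor(X, Y, gamma=4.0)
print(f(np.array([1.0])))   # close to sin(1.0) ~ 0.84
```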
2.3. A reminder on the Kernel Flows (KF) algorithm. The accuracy of any kernel-based method depends on the kernel K, and [22] proposed (in the setting of Subsec. 2.2) to also learn that kernel from the data (X_k, Y_k) with the Kernel Flows (KF) algorithm [34,44,12], which we now recall. To describe this algorithm, let K_θ(x, x′) be a family of kernels parameterized by θ. Using the notations of Subsection 2.2, the interpolant of the data (X, Y) (X = (X_1, ..., X_N) and Y = (Y_1, ..., Y_N)) obtained with the kernel K_θ (and a nugget λ > 0) admits the representer formula f_θ(x) = k_θ(x, X)(k_θ(X, X) + λI)^{−1}Y. (4) A fundamental question is then: which θ should be chosen in (4)? KF answers that question by learning θ from data based on the simple premise that a kernel (K_θ) is good if the interpolant (4) does not change much under subsampling of the data. This simple cross-validation concept is turned into an iterative algorithm as follows: at each iteration, a random half of a mini-batch of the data is used to form a subsampled interpolant, the relative error ρ(θ) = ‖v* − v^s‖² / ‖v*‖² (5) between the interpolant v* obtained from the batch and the interpolant v^s obtained from half of it (both computed with K_θ; see also (16) in the appendix) is evaluated, and θ is updated with a stochastic gradient descent step on ρ.

2.4. The problem with irregularly sampled time series. The model (1) fails to be accurate for irregularly sampled series because it discards the information contained in the t_k. When the x_k are obtained by sampling a continuous dynamical system, one could consider the following alternative model: x_{k+1} = x_k + (t_{k+1} − t_k) f†(x_k, ..., x_{k−τ+1}). (6) While this approach may succeed if the time intervals t_{k+1} − t_k are small enough, it will also break down as these time intervals get larger. In our experiments section, we refer to this approach as the Euler approach, as it consists in learning the Euler discretization of the vector field.

2.5. The proposed solution. To address this issue, we consider the model x_{k+1} = f†(x_k, ∆_k, ..., x_{k−τ+1}, ∆_{k−τ+1}), (7) which incorporates the time differences ∆_k = t_{k+1} − t_k between observations. That is, we employ a time-aware time-series representation obtained by interleaving observations and time differences. The proposed strategy is then to construct a surrogate model of (7) by regressing f† from past data with a kernel K_θ learned with Kernel Flows as described in Subsec. 2.3. Note that the past data take the form (2) with X_k := (x_k, ∆_k, ..., x_{k+τ−1}, ∆_{k+τ−1}), Y_k := x_{k+τ} and N = n − τ.

Experiments We conduct numerical experiments on three well-known dynamical systems: the Hénon map, the Van der Pol oscillator, and the Lorenz map. We generate irregularly sampled time series from these dynamical systems using numerical integration and subsequently split the time series into training and test subsets. The time series are irregularly sampled according to the following scheme: the time interval between observations, ∆_k, is taken to be a multiple of the smallest integration step δ_t used to generate the data. That is, ∆_k = α_k δ_t, where α_k is a random integer between 1 and α. We train the kernel on the training part of the time series and evaluate the forecasting performance of the model. We report both the mean squared error (MSE) and the coefficient of determination (R²). Given test samples x_{n+1}, x_{n+2}, ..., x_N and predictions x̂_{n+1}, x̂_{n+2}, ..., x̂_N, the MSE and the coefficient of determination are computed as follows: MSE = (1/(N − n)) Σ_{i=n+1}^{N} ‖x̂_i − x_i‖² and R² = 1 − Σ_{i=n+1}^{N} ‖x̂_i − x_i‖² / Σ_{i=n+1}^{N} ‖x_i − x̄‖², where x̄ is the mean of the test samples. The MSE should then be as low as possible and the R² as high as possible. We note that it is possible to have a negative R² if the predictor performs worse than the average of the samples.
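To make the time-aware embedding of model (7) and the sampling scheme concrete, here is a hypothetical helper (not taken from the authors' repository) that interleaves observations with the gaps ∆_k and builds the training pairs (X_k, Y_k):

```python
# Sketch: build time-aware training pairs for model (7).
import numpy as np

def time_aware_pairs(x, t, tau):
    """x: (n, d) observations; t: (n,) observation times; tau: delay."""
    dt = np.diff(t)                          # Delta_k = t_{k+1} - t_k
    X, Y = [], []
    for k in range(len(x) - tau):
        window = []
        for j in range(k, k + tau):          # interleave x_j and Delta_j
            window.extend(x[j])
            window.append(dt[j])
        X.append(window)
        Y.append(x[k + tau])                 # target: next observation
    return np.array(X), np.array(Y)

# irregular sampling as in the experiments: Delta_k = alpha_k * delta_t,
# with alpha_k a random integer in {1, ..., alpha}
rng = np.random.default_rng(0)
delta_t, alpha, n = 0.01, 5, 200
t = np.cumsum(rng.integers(1, alpha + 1, size=n) * delta_t)
x = np.sin(t)[:, None]                       # stand-in trajectory
X, Y = time_aware_pairs(x, t, tau=2)
print(X.shape, Y.shape)                      # (198, 4) (198, 1)
```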
To showcase the importance of learning the kernel parameters and of including the time differences between subsequent observations, we proceed in three stages. We first report the results of our method when the parameters of the kernel are not learned but rather sampled at random from a uniform (U(0,1)) distribution and when the time delays are not encoded in the input data. In this setup, we distinguish the original KF case and the Euler version, as discussed in Subsection 2.4. Second, to assess the importance of learning the kernel parameters, we report the model performance when the parameters are learned but the time delays are not encoded in the input data. Lastly, we report the performance of our approach when we both learn the kernel parameters and include the time delays. For all model variants and dynamical systems, we use the training procedure described in [22], with a mini-batch size of 100 temporal observations, and minimize ρ(θ) as in Equation (5) using stochastic gradient descent. To allow for a notion of uncertainty in the reported metrics, all our experiments use a five-repetition approach in which five different kernel initializations are randomly chosen. In all of our examples, we used a kernel that is a linear combination of the triangular, Gaussian, Laplace, locally periodic, and quadratic kernels.

Table 1. Test performance on the different datasets. We report the means along with standard deviations of the mean squared error (MSE) and coefficient of determination (R²) on the forecasting task. As Hénon is not a time-continuous map, the Euler version of KF is not applicable in this case. For readability, we abstain from reporting the exact numbers when the MSE is larger than one and the R² is lower than zero.

Multistep prediction method. For a horizon h and a delay embedding with delay d, we split the test time series into chunks of length h + d. For each of these chunks, we use the d first samples as input to our model and predict over the h remaining samples in the chunk. We eventually aggregate the predictions over all chunks together to compute the reported metrics.

Overview. Recapping, we will compare five approaches: (A) Regressing model (7) with a kernel learned using KF (which we call irregular KF). (B) Regressing model (1) with a kernel learned using KF (which we call regular KF). (C) Regressing model (6) with a kernel learned using KF (which we call the Euler version). (D) Regressing model (6) without learning the kernel. (E) Regressing model (1) without learning the kernel. Table 1 summarizes the results obtained in the following sections.

Hénon map. Consider the Hénon map with a = 1.4, b = 0.3: x_{n+1} = 1 − a x_n² + y_n, y_{n+1} = b x_n. We have repeated our experiments five times with a delay embedding of 1, a learning rate η of 0.1, a prediction horizon h of 5, and a maximum time difference α of 3, and have trained the model on 600 points to predict the next 400 points. Fig. 1i shows that approach (E) cannot reconstruct the attractor because it makes no attempt at learning the kernel and ignores time differences in the sampling. Fig. 1ii shows that embedding the time delay in the kernel (approach (A)) significantly improves the reconstruction of the attractor of the Hénon map. Table 1 displays the forecasting performance of the different methods. We observe that if the kernel is not learned (if the kernel is not data-adapted), then the underlying method is unable to learn an accurate representation of the dynamical system. However, if the parameters of the kernel are learned, then our proposed approach (A) clearly outperforms the regular KF (approach (B)). As for the Euler version, it is not applicable in this case as Hénon is not a continuous map.
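For reference, a short sketch that generates the Hénon trajectory used here and thins it irregularly; because Hénon is a discrete map with no integration step, the random-step thinning below is our stand-in for the sampling scheme of the continuous-time experiments:

```python
# Sketch: Henon map (a = 1.4, b = 0.3) with irregular subsampling.
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    xs, ys = [x0], [y0]
    for _ in range(n - 1):
        x, y = xs[-1], ys[-1]
        xs.append(1 - a * x**2 + y)
        ys.append(b * x)
    return np.column_stack([xs, ys])

rng = np.random.default_rng(1)
traj = henon(5000)
steps = rng.integers(1, 4, size=1500)        # alpha_k in {1, 2, 3} (alpha = 3)
idx = np.cumsum(steps)
idx = idx[idx < len(traj)]
obs, times = traj[idx], idx.astype(float)    # irregular observations and times
print(obs.shape)
```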
Van der Pol oscillator. Here, we have used a prediction horizon h of 10, a learning rate η of 0.01, a maximum time difference α of 5, and a delay embedding of 1. As evident from Table 1, including the time differences again improves the forecasting accuracy.

Lorenz. Our third example is the Lorenz system, described by the following system of differential equations: ẋ = σ(y − x), ẏ = x(ρ − z) − y, ż = xy − βz, with standard parameter values σ = 10, ρ = 28, β = 8/3. Our parameters include a delay embedding of 2, a learning rate η = 0.01, a prediction horizon h = 20, a maximum time difference α = 5, and 5,000 points used for training and 5,000 for testing. Figs. 7, 8 and 9 show that not learning the kernel or not including time differences leads to poor reconstructions of the attractor of the Lorenz system, even when the time horizon is 1. However, as observed in Table 1, the Euler version of KF leads to satisfying results, close to (but not as good as) the ones obtained with our proposed approach (A).

Remark (Real-time learning and the Newton basis): It is possible to include new measurements when approximating the dynamics from data without repeating the learning process. This can be done by working in the Newton basis as in [37] (see also Section 4 of [38]). The Newton basis is just another basis for the space spanned by the kernel on the points, i.e., span{k(·, x_1), ..., k(·, x_N)}, that is orthonormal in the RKHS inner product. If we add new points x_{N+1}, ..., x_{N+m}, we will have corresponding elements v_{N+1}, ..., v_{N+m} of the Newton basis, still orthonormal to the previous ones. So we will have a new interpolant f_new(x) = Σ_{i=1}^{N+m} b_i v_i(x) that can be rewritten in terms of the old interpolant as f_new(x) = f′(x) + Σ_{i=N+1}^{N+m} b_i v_i(x), where f′ can still be written in terms of the basis K(·, x_i), but with different coefficients c′_i. If A is the kernel matrix on the first N points, one can compute a Cholesky factorization A = LL^T with L lower triangular. Let B := L^{−T}; then v_j(x) = Σ_{i=1}^{N} B_{ij} K(x, x_i). When we add new points, we have an updated kernel matrix A′, and the Cholesky factor of A can be easily updated to the one of A′.

Conclusion Our numerical experiments demonstrate that embedding the time differences between the observations in the kernel considerably improves the forecasting accuracy with irregular time series. Though we have focused on a few examples, the success of our proposed approach (A) has raised the question of whether it can be extended to other systems, including those described by partial and stochastic differential equations, as well as complex real-world data.

Reproducing Kernel Hilbert Spaces (RKHS). We give a brief overview of reproducing kernel Hilbert spaces as used in statistical learning theory [14]. Early work developing the theory of RKHS was undertaken by N. Aronszajn [3]. Definition 5.1. Let H be a Hilbert space of functions on a set X. Denote by ⟨f, g⟩ the inner product on H and let ‖f‖ = ⟨f, f⟩^{1/2} be the norm in H, for f and g ∈ H. We say that H is a reproducing kernel Hilbert space (RKHS) if there exists a function K : X × X → R such that i. K_x := K(x, ·) ∈ H for all x ∈ X. ii. K spans H: H = span{K_x | x ∈ X}. iii. K has the reproducing property: ∀f ∈ H, f(x) = ⟨f, K_x⟩. K will be called a reproducing kernel of H. H_K will denote the RKHS H with reproducing kernel K where it is convenient to explicitly note this dependence. The important properties of reproducing kernels are summarized in the following proposition. Proposition 5.1. If K is a reproducing kernel of a Hilbert space H, then i. K(x, y) is unique. ii. ∀x, y ∈ X, K(x, y) = K(y, x) (symmetry). iii. K(x, y) is positive definite. Theorem 5.1. Let K : X × X → R be a symmetric and positive definite function.
Then there exists a Hilbert space of functions H defined on X admitting K as a reproducing kernel. Conversely, let H be a Hilbert space of functions f : X → R satisfying: ∀x ∈ X, ∃κ_x > 0 such that |f(x)| ≤ κ_x ‖f‖_H for all f ∈ H. Then H has a reproducing kernel K. Theorem 5.2. Let K(x, y) be a positive definite kernel on a compact domain or a manifold X. Then there exists a Hilbert space F and a function Φ : X → F such that K(x, y) = ⟨Φ(x), Φ(y)⟩_F for x, y ∈ X. Φ is called a feature map, and F a feature space.

Function Approximation in RKHSs: An Optimal Recovery Viewpoint. In this section, we review function approximation in RKHSs from the point of view of optimal recovery, as discussed in [33]. Problem P. Given input/output data (x_1, y_1), ..., (x_N, y_N) ∈ X × R, recover an unknown function u* mapping X to R such that u*(x_i) = y_i for i ∈ {1, ..., N}. In the setting of optimal recovery [33], Problem P can be turned into a well-posed problem by restricting candidates for u to belong to a Banach space of functions B endowed with a norm ‖·‖ and identifying the optimal recovery as the minimizer of the relative error min_v max_u ‖u − v‖² / ‖u‖², (12) where the max is taken over u ∈ B and the min is taken over candidates v ∈ B such that v(x_i) = u(x_i) = y_i. For the validity of the constraints u(x_i) = y_i, B*, the dual space of B, must contain the Dirac delta functions φ_i(·) = δ(· − x_i). This problem can be stated as a minimax game between Players I and II. If ‖·‖ is quadratic, i.e., ‖u‖² = [Q^{−1}u, u], where [φ, u] stands for the duality product between φ ∈ B* and u ∈ B and Q : B* → B is a positive symmetric linear bijection (i.e., such that [φ, Qφ] ≥ 0 and [ψ, Qφ] = [φ, Qψ] for φ, ψ ∈ B*), then the optimal solution of (12) has the explicit form v* = Σ_{i,j=1}^{N} y_i A_{ij} Qφ_j, (14) where A = Θ^{−1} and Θ ∈ R^{N×N} is a Gram matrix with entries Θ_{i,j} = [φ_i, Qφ_j]. To recover the classical representer theorem, one defines the reproducing kernel K as K(x, y) = [δ(· − x), Qδ(· − y)]. In this case, (B, ‖·‖) can be seen as an RKHS with kernel K, and (14) corresponds to the classical representer theorem v*(·) = y^T A K(x, ·), (15) using the vectorial notation y^T A K(x, ·) = Σ_{i,j=1}^{N} y_i A_{i,j} K(x_j, ·) with y_i = u(x_i), A = Θ^{−1} and Θ_{i,j} = K(x_i, x_j).

Now, let us consider the problem of learning the kernel from data. As introduced in [34], the method of KFs is based on the premise that a kernel is good if there is no significant loss in accuracy in the prediction error if the number of data points is halved. This led to the introduction of ρ = ‖v* − v^s‖² / ‖v*‖², (16) which is the relative error between v*, the optimal recovery (15) of u* based on the full dataset X = {(x_1, y_1), ..., (x_N, y_N)}, and v^s, the optimal recovery of both u* and v* based on half of the dataset X^s = {(x_i, y_i) | i ∈ S} (Card(S) = N/2), which admits the representation v^s = (y^s)^T A^s K(x^s, ·) with y^s = {y_i | i ∈ S}, x^s = {x_i | i ∈ S}, A^s = (Θ^s)^{−1}, Θ^s_{i,j} = K(x^s_i, x^s_j). This quantity ρ is directly related to the minimax game above, where one minimizes the relative error of v* versus v^s. Instead of using the entire dataset X, one may use random subsets X^{s1} (of X) for v* and random subsets X^{s2} (of X^{s1}) for v^s. Writing σ²(x) = K(x, x) − K(x, X^f)K(X^f, X^f)^{−1}K(X^f, x), we have the pointwise error bound |u*(x) − v*(x)| ≤ σ(x) ‖u*‖_H. (18) Local error estimates such as (18) are classical in kriging [43] (see also [31][Thm. 5.1] for applications to PDEs).
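In practice, ρ is evaluated through kernel matrices alone, via the computable form ρ = 1 − (y^{s,T} K(x^s, x^s)^{−1} y^s) / (y^T K(x, x)^{−1} y) noted in [34]; the sketch below illustrates this with a Gaussian kernel whose width θ stands in for the learned parameters, evaluated on a crude grid rather than by the stochastic gradient descent used in the paper:

```python
# Sketch of the Kernel Flows loss rho in its computable form, with a
# Gaussian kernel of width theta; a small nugget stabilizes the solves.
import numpy as np

def kf_rho(theta, X, Y, rng):
    K = np.exp(-theta * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    half = rng.choice(len(X), size=len(X) // 2, replace=False)
    Ks = K[np.ix_(half, half)]
    num = Y[half] @ np.linalg.solve(Ks + 1e-8 * np.eye(len(half)), Y[half])
    den = Y @ np.linalg.solve(K + 1e-8 * np.eye(len(X)), Y)
    return 1.0 - num / den

rng = np.random.default_rng(3)
X = rng.uniform(0, 3, size=(60, 1))
Y = np.sin(2 * X[:, 0])
for theta in (0.1, 1.0, 10.0):           # crude grid in place of SGD on rho
    print(theta, kf_rho(theta, X, Y, rng))
```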
Moreover, ‖u*‖²_H is bounded from below by (and, with sufficient data, can be approximated by) Y^{f,T} K(X^f, X^f)^{−1} Y^f, i.e., the squared RKHS norm of the interpolant v*.

Code. All the relevant code for the experiments can be found at: https://github.com/jlee1998/Kernel-Flows-for-Irregular-Time-Series
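Returning to the real-time learning remark in Section 3, the block Cholesky update it mentions can be sketched as follows; the blocking and helper names are our own illustration of the standard update, not code from the linked repository:

```python
# Sketch: extend the Cholesky factor L of the kernel matrix A when m
# new points arrive, without refactorizing from scratch.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def update_cholesky(L, B, C):
    """L: (N,N) lower Cholesky factor of A; B: (N,m) cross-kernel block
    k(old, new); C: (m,m) kernel block on the new points."""
    # A' = [[A, B], [B^T, C]] has lower factor [[L, 0], [M, S]]
    M = solve_triangular(L, B, lower=True).T           # M = B^T L^{-T}
    S = cholesky(C - M @ M.T, lower=True)              # Schur complement factor
    N, m = L.shape[0], C.shape[0]
    L_new = np.zeros((N + m, N + m))
    L_new[:N, :N], L_new[N:, :N], L_new[N:, N:] = L, M, S
    return L_new

# sanity check on a random positive-definite matrix
rng = np.random.default_rng(2)
P = rng.normal(size=(7, 7))
A_full = P @ P.T + 7 * np.eye(7)
L = cholesky(A_full[:5, :5], lower=True)
L_new = update_cholesky(L, A_full[:5, 5:], A_full[5:, 5:])
print(np.allclose(L_new @ L_new.T, A_full))            # True
```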
2021-11-29T02:15:52.084Z
2021-11-25T00:00:00.000
{ "year": 2021, "sha1": "06920c730dc5e2af5864323a7fba1c30f6e8df5a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physd.2022.133546", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "06920c730dc5e2af5864323a7fba1c30f6e8df5a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
198495113
pes2o/s2orc
v3-fos-license
Clinical characteristics and disease-specific prognostic nomogram for primary gliosarcoma: a SEER population-based analysis Because the population of patients with gliosarcoma (GSM) is small, understanding of this disease is insufficient. In this study, the authors aimed to determine the clinical characteristics and independent prognostic factors influencing the prognosis of GSM patients and to develop a nomogram to predict the prognosis of GSM patients after craniotomy. A total of 498 patients diagnosed with primary GSM between 2004 and 2015 were extracted from the 18 Registries Research Data of the Surveillance, Epidemiology, and End Results (SEER) database. The median disease-specific survival (DSS) was 12.0 months, and the postoperative 0.5-, 1-, and 3-year DSS rates were 71.4%, 46.4% and 9.8%, respectively. We applied both the Cox proportional hazards model and the decision tree model to determine the prognostic factors of primary GSM. The Cox proportional hazards model demonstrated that age at presentation, tumour size, metastasis state and adjuvant chemotherapy (CT) were independent prognostic factors for DSS. The decision tree model suggested that age <71 years and adjuvant CT were associated with a better prognosis for GSM patients. The nomogram generated via the Cox proportional hazards model was developed by applying the rms package in R version 3.5.0. The C-index of internal validation for DSS prediction was 0.67 (95% confidence interval (CI), 0.63 to 0.70). The calibration curve at one year suggested that there was good consistency between the predicted DSS and the actual DSS probability. This study was the first to develop a disease-specific nomogram for predicting the prognosis of primary GSM patients after craniotomy, which can help clinicians immediately and accurately predict patient prognosis and conduct further treatment.

In this study, retrospective data on a total of 498 patients who underwent craniotomy between 2004 and 2015 were reviewed from the Surveillance, Epidemiology, and End Results (SEER) database. The clinical characteristics and independent prognostic factors were analysed in a large patient cohort. A prognostic disease-specific nomogram was constructed and validated based on the retrospective patient data from the SEER database. The nomogram is a multivariate visualization prediction model that can incorporate different variables affecting prognosis 12. Recently, nomograms have been widely used to predict the prognosis of patients with malignant tumours [13][14][15][16]. However, to our knowledge, no published literature has proposed a nomogram to predict the prognosis of primary GSM patients after craniotomy. Therefore, our study intended to develop a nomogram that can be applied to individually assess the survival time of patients with primary GSM after craniotomy and to discuss the different factors influencing the prognosis of GSM patients.

Results Patients' clinicopathologic characteristics. The study population consisted of 498 patients diagnosed with primary GSM who received craniotomy. A flowchart of the case selection criteria is shown in Fig. 1. Patient, tumour and surgical characteristics, including sex, age, race, marital status, surgical procedures, site of the tumour, tumour size, metastasis, chemotherapy and radiotherapy information, are displayed in Fig. 2. Most of the patients were male (315, 63.3%). Tumour metastasis was rare (12, 2.4%). The temporal lobe was more susceptible to tumours than other lobes (196, 39.4%).
The univariate analysis is shown in Fig. 2. The results demonstrated that age at presentation, site of the tumour, tumour size, metastasis state, adjuvant chemotherapy (CT) and adjuvant radiotherapy (RT) were significantly associated with GSM patient survival. There was no significant difference regarding sex, race, marital status or surgical procedure.

Prognostic nomogram for DSS and the decision tree model. The nomogram generated via the Cox proportional hazards model included four independent prognostic factors influencing DSS after optimization by the Akaike information criterion (AIC) protocol, as shown in Fig. 5. The C-index of internal validation for DSS prediction was 0.67 (95% CI, 0.63 to 0.70). The calibration curve for the probability of postoperative DSS at 1 year suggested that there was good consistency between the predicted DSS probability and the actual DSS probability in the dataset (Fig. 5). The receiver operating characteristic (ROC) curve and area under the curve (AUC) are displayed in Fig. 5. The AUC (0.67) indicated good accuracy of the one-year prognosis prediction of this model. Figure 6 displays the decision tree model and the two significant parameters influencing GSM survival (age and adjuvant CT); other variables were not retained as split nodes in the tree.

Discussion Due to the rarity of this disease, data concerning the patient characteristics of primary GSM are lacking. Most previous studies have been based on single-institution experience, and their results may not represent the actual situation. Our results showed that the median age at diagnosis was over 60 years (61 years), and most of the patients were male (63.3%). The results were consistent with those from three other studies whose sample sizes included more than 50 patients 2,17,18. The temporal lobe was more susceptible to tumours than other lobes (196, 39.4%). Most other previous studies also reported a tendency of temporal lobe involvement by GSM 2,17,19,20. Ma R et al. 8 reported that the tumours were most likely to involve the frontal and parietal lobes; however, there were only 33 patients in this study. The multivariate analysis demonstrated that age at presentation, tumour size, metastasis state, and adjuvant CT were independent prognostic factors for DSS. Several other studies have also concluded that age at diagnosis was a significant prognostic factor and that a younger age was associated with a better prognosis 2,8,18. To our knowledge, there have been no previous studies suggesting that tumour size is a prognostic factor. Our results showed that smaller tumours implied a better prognosis. Regarding the metastatic state, our study suggested that patients with tumour metastasis had a worse prognosis, which was verified in another study 21. There are no standardized management protocols for GSM. Generally, maximal surgical resection and adjuvant therapy are recommended 22. Kozak et al. suggested that biopsy alone resulted in worse survival than either subtotal resection or gross total resection (GTR) 2. Another study found that GTR resulted in better survival than subtotal resection or biopsy in GSM patients 18. Our series did not find a significant difference in prognosis based on the surgical procedure.
Regarding adjuvant therapy, trimodality therapy is considered the most effective method for GBM 7. For low-grade gliomas (LGGs), the effects of adjuvant chemotherapy or radiotherapy alone have been compared, and one study suggested that CT alone was associated with better survival than RT alone in patients with LGGs who received craniotomy 23. Concerning GSM, previous studies have reached different conclusions. Some studies have concluded that chemotherapy is a prognostic factor 10,11,24, while some have demonstrated that radiotherapy affects prognosis 2,17, and others have indicated that trimodality therapy is the most beneficial for prognosis 8,9,18. Our series found a significant correlation between chemotherapy and patient prognosis. We summarize several studies discussing the prognostic factors of GSM patients in Table 1.

Although it is generally believed that the prognosis of GSM patients is poor, there are still reports of GSM patients with a relatively good prognosis. Huo Z et al. 25 reported two cases of primary GSM with prolonged survival (130 months and 48 months). Both patients received complete tumour resection and postoperative adjuvant therapy without any evidence of tumour recurrence or metastasis. Another case report presented a female GSM patient who was in stable condition at 31 months after the initial diagnosis 26. Tumour resection and concomitant adjuvant therapy were performed after the initial diagnosis. Another surgery and second-line chemotherapy (ifosfamide, carboplatin, and etoposide) were conducted after tumour recurrence at 8 months. The authors discussed the feasibility of unconventional chemotherapy in the treatment of GSM.

Many prognostic models have been reported for different types of tumours. Breast cancer is the most common tumour in women, and its prognosis varies greatly. Phung MT et al. 27 reviewed studies discussing prognostic models of breast cancer and identified 58 relevant models published between 1982 and 2016. Within these 58 models, many methods of model development were applied. The most commonly used method was Cox proportional hazards regression (n = 32). Other methods included artificial neural networks (n = 6), decision trees (n = 4), logistic regression (n = 3), Bayesian methods (n = 3), multistate models (n = 2), support vector machines (n = 2) and others (n = 6). Four models applied a nomogram as the presentation form. When assessing discrimination ability, the C-index/AUC was the most commonly used measure. Another systematic review of predictive models for resectable pancreatic cancer reported that of the 16 developed models, 11 used the Cox regression method 28. There are also reports of the application of machine learning in the development of clinical prognostic models 29,30. However, the Cox proportional hazards regression method is still the most widely used method for establishing prognostic models. In this study, we applied both the Cox proportional hazards model and the decision tree model to determine the prognostic factors of primary GSM and developed a prognostic DSS nomogram. By applying this nomogram, clinicians can immediately and accurately predict patient prognosis, which can help guide further treatment after craniotomy. Regarding the decision tree model, we found that age and chemotherapy were important nodes for prognostic judgement. A younger age and adjuvant chemotherapy were associated with better survival for GSM patients.
We know that the ability to accurately predict patient outcomes is important, yet the statistical methodology to assess the accuracy of these predictive models seems to be insufficient. Schumacher M et al. 31 illustrated that the Brier score and the prediction error curves based on it are valuable for assessing the predictive performance of prognostic classification schemes, through the analysis of two studies on node-positive breast cancer patients. The same study provided a more comprehensive perspective for clinical researchers conducting prognosis prediction studies and could help researchers select more appropriate statistical models based on the prediction error curves. The authors compared the predictive ability of different statistical methods (fuzzy inference, logistic regression, classification and regression trees) in another study 32.

There are several limitations to our study. As a retrospective study, selection bias was unavoidable. The use of open-access data from the SEER database provided a large number of patients and surgical information, but several important factors affecting patient prognosis, including molecular/pathological information, were not available through this database. It is generally recognized that molecular pathological data, such as MGMT status, are also associated with patient prognosis 18,33. Thus, the prognostic factors we analysed based on the SEER database were not complete. The prognostic disease-specific nomogram developed in this study should undergo further improvement after the addition of these relevant data. Additionally, due to the rarity of this disease, we could not find sufficient clinical data to externally validate this nomogram.

Material and Methods Patients and study design. A total of 498 patients receiving craniotomy between 2004 and 2015 were extracted from the 18 Registries Research Data of the SEER database. All patients were diagnosed with GSM by histopathological examination. The variables included sex, age at diagnosis, race, marital status at diagnosis, surgical procedures, tumour size, primary site, metastasis state, and adjuvant therapy. The end of follow-up was December 2015, and the primary endpoint was cause-specific death. The exclusion criteria were as follows: (a) survival was less than 1 month or unknown (according to clinical practice, patients who die within one month after craniotomy usually die of surgical complications; therefore, it may not be appropriate to incorporate these patients into a prognostic analysis); (b) tumour size was missing; (c) the GSM was not in the brain (one patient); and (d) another variable was unknown or missing. The exclusion process is shown in Fig. 1.

Statistical analysis. The continuous variables were transformed into categorical variables to match the nomogram. The best cut-off points of continuous variables were identified with X-tile 34. Categorical variables were grouped according to clinical reality. The DSS rate and the median DSS were calculated with the life-table method. Both univariate and multivariate Cox proportional hazards models were applied to calculate the hazard ratios (HRs) and their 95% CIs to analyse different prognostic variables associated with DSS 35. Variables were included in the multivariate analysis if they reached a p value of ≤0.20 on the univariate analysis. These prognostic factors were screened with a Cox proportional hazards model adopting the bidirectional elimination method and were optimized with the AIC protocol 36.
The risk scores were then calculated according to the following formula: risk score = β1X1 + β2X2 + … + βnXn (β, regression coefficient; X, prognostic factor). Kaplan-Meier curves were plotted to compare DSS according to the different prognostic factors. A nomogram was developed based on the independent prognostic factors and by using the rms package in R version 3.5.0 (http://www.r-project.org/). The discrimination of the nomogram was assessed by Harrell's C-index, which estimates the concordance between the observed and predicted DSS 37. A random resampling procedure (bootstrapping) with 1,000 resamples was used for internal validation. The ROC curve and the AUC were evaluated using the survivalROC package in R version 3.5.0 to assess the accuracy of one-year prognosis prediction. We also performed the decision tree model by using the party package in R version 3.5.0 to analyse the prognostic factors from other perspectives. P < 0.05 was considered statistically significant. Ethical declaration. This article does not contain any experiments on humans or animals, or the use of human tissue samples, performed by any of the authors. Conclusion Our study was the first to develop a disease-specific nomogram to predict the prognosis of primary GSM patients after craniotomy based on retrospective patient data from the SEER database. This predictive model included four independent prognostic factors influencing DSS: age at presentation, tumour size, metastasis state, and adjuvant chemotherapy. Further research is needed to improve this nomogram by analysing more comprehensive prognostic data, and the effectiveness of this model should be evaluated in future clinical applications. Apart from the Cox proportional hazard model, we also performed the decision tree model to analyse the prognostic factors and determined that age and adjuvant CT were important prognostic factors.
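As a companion to the Methods above, the following is a minimal, illustrative R sketch of the nomogram construction, internal validation, one-year ROC analysis, and decision tree; it assumes the same hypothetical seer data frame and variable names used earlier and is not the authors' original code.

```r
# Nomogram, bootstrap validation, one-year ROC and decision tree, using the
# packages named in the Methods; variable names are illustrative placeholders.
library(rms); library(survivalROC); library(party)

dd <- datadist(seer); options(datadist = "dd")

# Refit the final Cox model with rms::cph so that a nomogram can be drawn
fit <- cph(Surv(dss_months, dss_event) ~ age_group + tumour_size +
             metastasis + chemotherapy,
           data = seer, x = TRUE, y = TRUE, surv = TRUE)

# Disease-specific survival nomogram (1-, 3- and 5-year probabilities)
surv <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) surv(12, x),
                           function(x) surv(36, x),
                           function(x) surv(60, x)),
                funlabel = c("1-year DSS", "3-year DSS", "5-year DSS"))
plot(nom)

# Discrimination: bootstrap-corrected concordance from 1,000 resamples
val <- validate(fit, method = "boot", B = 1000)  # C-index = Dxy/2 + 0.5

# One-year ROC/AUC from the linear predictor (risk score)
roc <- survivalROC(Stime = seer$dss_months, status = seer$dss_event,
                   marker = predict(fit), predict.time = 12, method = "KM")
roc$AUC

# Conditional-inference decision tree as an alternative view of the factors
tree <- ctree(Surv(dss_months, dss_event) ~ age_group + tumour_size +
                metastasis + chemotherapy, data = seer)
plot(tree)
```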
An overview of Cleft care in Nigeria In 2006, a funded multicentre cleft repair initiative became available in Nigeria through a non-governmental charity organisation (NGO) in the USA. Prior to 2006, only a few cases of cleft lip and palate were surgically repaired. This is mainly due to financial constraints on the affected family and, in some cases, due to lack of awareness or limited knowledge of available treatment options 1. A report from a centre in Nigeria shows that 30% of new cases of cleft lip and palate deformities are in adults and children above six years of age 1. This trend is largely due to the gap created by unmet surgical needs prior to 2006. Although the free cleft care in Nigeria supported by the NGO started essentially in three centres (Maiduguri, Zaria and Enugu), there was still an unmet need considering the population of Nigeria and the geographical spread of people. In an attempt to stimulate interest in cleft care and provide access to standard practice through mentorship and workshops, the NGO organised the first (2006) and second (2007) Pan African Conference for Cleft Lip and Palate (PACCLIP) in Ibadan, Nigeria. During and after these conferences, centres that met the requirement for cleft repairs were shortlisted and subsequently offered treatment grants. Since then, several other centres in Nigeria have received the free cleft treatment grant, and about 3000 cases of cleft lip and palate have been repaired 2. For instance, since the Lagos University Teaching Hospital received a treatment grant in 2007, over 200 cases of cleft lip and palate have been surgically repaired, and some other centres have done even more (personal communication). This is an overview of the present state of cleft lip and palate care in Nigeria. The aim is to stimulate further discussions on the need to improve the standard of care and quality of life of patients with cleft lip and palate deformities. The need to incorporate the philosophy of a multidisciplinary approach in the management of cleft lip and palate is also advocated.
Treatment outcomes As the number of cleft surgeries and surgeons involved in cleft repairs across Nigeria increases over the years, it has become imperative to assess the quality of surgery and quality of cleft care. Following similar concerns by the Royal College of Surgeons of England about the care of patients with cleft deformities in England and Wales, Williams et al 3 investigated the number of cleft surgeons and the number of cleft surgeries in the UK. They reported that 67 surgeons in 45 centres were involved in cleft repairs, with one third of them carrying out less than five primary cleft repairs per year 3. In comparison with the gold standard in Northern Europe, this was considered to be low 4. An expert working group at the Department of Health, UK recommended 40-50 cleft repairs each year as desirable for a surgeon to maintain his skills 5. In Nigeria, the majority (68.4%) of cleft surgeons are "low volume operators" undertaking 10 or fewer new cleft repairs annually 6. Although this number is expected to rise (and is in fact rising due to the availability of the free treatment grant provided by the NGO in some centres in Nigeria), there may be some centres that are still not able to attain a caseload of 40-50 cases per year. For quality control and assessment of treatment outcome in cleft surgery, there is a need for a national survey of cleft centres to ascertain the workload and possibly formulate a policy on quality control.
Multidisciplinary approach to care Cleft care requires multidisciplinary surgical and non-surgical care from birth until adulthood to restore function and aesthetics 7. The American Cleft Palate Association (ACPA) recommended that, in the management of cleft lip and palate patients, the interdisciplinary team should be responsible for providing care or making appropriate referrals for audiologic and otolaryngologic care, surgical intervention, dental care, speech-language pathology services, genetic evaluations and counseling, psychological and social services, nursing care, and paediatric care 7. An expert working group on orthodontics in the National Health Service (NHS), UK, stated that good care of patients with cleft lip or palate in the first 10 years of life was especially important and that there was considerable evidence that initial care had a profound influence on the complexity and duration of later treatment 5. Williams et al 3 concluded that when the treatment of clefts is inexpert and uncoordinated, outcomes may be seriously substandard. In addition, they stated that poor services are more costly because surgical procedures have to be repeated and ancillary care such as speech therapy and orthodontics is protracted 3. Presently in the UK, fewer surgeons treat cleft cases, under a managed clinical network that includes other cleft disciplines. These surgeons possess fellowship training in the management of cleft lip/palate and craniofacial deformities, and continuing education in cleft lip/palate and craniofacial deformities is mandatory for them. Available evidence shows that most centres around Nigeria have limited facilities for multidisciplinary cleft care. Only a few centres have facilities for interdisciplinary managed care 6. A recent study reported that speech pathologists and orthodontists were less represented (20% and 36.7% respectively) compared to Oral and Maxillofacial Surgeons and Plastic Surgeons (70% and 63.3% respectively) in cleft teams in Nigeria 8. Another study by Olasoji et al 6 revealed that an interdisciplinary approach to cleft care is being practised by about 20% of Nigerian cleft lip and palate care providers. These findings suggest that interdisciplinary care for the cleft patient does not appear to have been fully embraced in Nigeria. Factors militating against multidisciplinary care in Nigeria include financial constraints and a lack of facilities and trained personnel, among others. In many cleft care centres across Nigeria, the full complement of personnel required for total (multidisciplinary) care of cleft patients is not yet available. A study examining whether children with orofacial clefts received more comprehensive care, and whether their parents perceived better outcomes, if the care was delivered by interdisciplinary teams compared with individual providers found that those who were treated by interdisciplinary teams received more recommended care than those who were treated by individual providers 9. Overall, receipt of recommended age-appropriate care tended to be higher among those with team care in comparison with those without team care, and several of those differences were statistically significant 9.
Quantity and quality of care With the provision of free treatment grants to many centres across Nigeria, emphasis has been on the turnover of surgeries for cleft repairs while little or no attention/focus is paid to outcome/quality of life. It is expected that as the number of repaired cleft lip/palate cases increases, more patients will require secondary repair, speech therapy, orthodontic therapy and orthognathic surgery. Hence, the need for a multidisciplinary approach to cleft care cannot be over-emphasised. A multidisciplinary approach has been reported to be cost-effective, and also to prevent delay in ancillary care such as orthodontics and speech therapy 3. It is important to appreciate the funding support of international NGOs. However, it is also important to request surgical specialities to show leadership towards attaining quality in cleft repairs, while recognising the difficulties that may follow a request for colleagues to relinquish interest. Public-Private Partnership Several reports in the literature suggest that team care provides the best approach to treatment of children with orofacial clefts 6,9. However, available evidence suggests that multidisciplinary cleft care is not well established in Nigeria. Health authorities and planners in Nigeria must demonstrate greater commitment towards the realisation of multidisciplinary cleft care. This can be facilitated by improving the surgical infrastructure, establishing/designating and building capacity for caregivers, and providing access to patients seeking these services. Public-private partnerships should be encouraged, where charity organisations and philanthropic individuals continue the funding of cleft repairs in these centres, while government maintains the infrastructure and provides the required expertise for multidisciplinary care. Local NGOs should also be encouraged to support free cleft care initiatives, especially in areas not presently covered by the available grants, such as speech therapy, orthodontics and orthognathic surgery. Fellowship for cleft care givers and providers There is a need to establish fellowship training in cleft care in Nigeria. In Europe and many parts of the American continent, cleft care is so specialised that the management is referred and restricted only to cleft care centres and delivered by those who have specialised training (fellowship training) in cleft lip/palate and craniofacial defects. This example, in our opinion, should be encouraged in Nigeria. Designated centres with assigned cleft surgeons and other care providers should be allowed to treat all cleft cases in each of the geo-political zones of Nigeria, and strategies to institute a cleft fellowship should be adopted. Conclusion With the provision of free treatment grants for cleft lip and palate surgery to many centres across Nigeria, the number of surgeries is increasing while little or no attention/focus is paid to outcome/quality of life. There is a need to improve the standard of care for patients with cleft lip and palate deformities. In addition, there is a need to incorporate the philosophy of a multidisciplinary approach in the management of cleft lip and palate in order to improve the quality of life in patients with cleft deformities.
Summary This is an overview of the present state of cleft lip and palate care in Nigeria. The aim is to stimulate further discussions on the need to improve the standard of care and quality of life in patients with cleft lip and palate deformities. The number of cleft surgeries and surgeons involved in cleft repairs across Nigeria is increasing due to the availability of free treatment grants provided by a non-governmental organisation; therefore, it has become imperative to assess the quality of surgery and quality of cleft care. It is expected that as the number of repaired cleft lip/palate cases increases, more patients will require secondary repair, speech therapy, orthodontic therapy and orthognathic surgery. The following recommendations are made to improve the standard of cleft care in Nigeria: establishment of a multidisciplinary team approach, formulation of a policy on quality control, establishment of fellowship training in cleft care and establishment of regional specialised cleft care centres.
A PTPmu Biomarker is Associated with Increased Survival in Gliomas An integrated approach has been adopted by the World Health Organization (WHO) for diagnosing brain tumors. This approach relies on the molecular characterization of biopsied tissue in conjunction with standard histology. Diffuse gliomas (grade II to grade IV malignant brain tumors) have a wide range in overall survival, from months for the worst cases of glioblastoma (GBM) to years for lower grade astrocytic and oligodendroglial tumors. We previously identified a change in the cell adhesion molecule PTPmu in brain tumors that results in the generation of proteolytic fragments. We developed agents to detect this cell surface-associated biomarker of the tumor microenvironment. In the current study, we evaluated the PTPmu biomarker in tissue microarrays and individual tumor samples of adolescent and young adult (n = 25) and adult (n = 69) glioma populations using a fluorescent histochemical reagent, SBK4-TR, that recognizes the PTPmu biomarker. We correlated staining with clinical data and found that high levels of the PTPmu biomarker correlate with increased survival of glioma patients, including those with GBM. Patients with high PTPmu live for 48 months on average, whereas PTPmu low patients live only 22 months. PTPmu high staining indicates a doubling of patient survival. Use of the agent to detect the PTPmu biomarker would allow differentiation of glioma patients with distinct survival outcomes and would complement current molecular approaches used in glioma prognosis. Introduction Diffuse gliomas are malignant brain tumors that consist of astrocytomas, oligodendrogliomas, and glioblastomas (GBM). They represent approximately 70% of all malignant brain tumors [1] and are categorized into either low or high grade. Low grade tumors include World Health Organization (WHO) grade II astrocytomas and oligodendrogliomas. High grade tumors include WHO grade III astrocytomas and oligodendrogliomas, as well as GBM (WHO grade IV). Low grade diffuse gliomas have the best prognosis with an overall survival of 11 years [2], whereas the overall survival for the most aggressive glioma, type IV GBM, is only 15 months [3]. In addition to tumor grade, other important prognostic factors include age at diagnosis and gender [4][5][6]. Females and younger age at the time of diagnosis are associated with longer survival in glioma patients. Treatment of low grade glioma generally consists of debulking surgery, radiation therapy, and sometimes adjuvant chemotherapy [7]. Treatment of grade III and IV gliomas consists of surgical resection followed by radiation and chemotherapy [3], with the extent of surgical resection being a major predictor of disease outcome [8]. Conventional surgery has relied heavily upon the neurosurgeon's professional experience to recognize tumor from normal brain tissue; now, more sophisticated approaches using magnetic resonance imaging (MRI) and/or fluorescent agents to identify tumor tissue are in use or in development [9]. Currently, one fluorescent agent, 5-aminolevulinic acid (5-ALA), is Food and Drug Administration (FDA) approved to aid in the surgical resection of high grade glioma. It is a non-specific agent that is metabolized preferentially but not exclusively in glioma tissue, and while helpful at improving the extent of surgical resection [10,11], it lacks specificity in identifying tumor margins [10,12,13]. We are interested in the roles that changes in cell adhesion play in brain tumor progression. 
In normal cells, the full-length receptor protein tyrosine phosphatase PTPµ mediates cell-cell adhesion through homophilic binding [14]. We discovered that full-length PTPµ is proteolyzed into fragments in GBM [15,16], whereas differential levels of full-length PTPµ are expressed in low grade astrocytoma tissue [17]. The PTPµ extracellular fragment generates a biomarker in the tumor microenvironment that can be utilized for specific molecular recognition of cancer [18]. We identified peptides, known as the SBK peptides, derived from the extracellular portion of PTPµ that bind homophilically to the extracellular PTPµ biomarker in vitro and/or in vivo [18]. Conjugating these SBK peptides to fluorophores created targeted PTPµ biomarker-directed agents that specifically bind tumor cells, including GBM, but not normal tissues [18]. Remarkably, use of these fluorescent agents in vivo with brain tumor models revealed their ability to detect the primary tumor and glioma cells that had migrated several millimeters away from the main tumor mass [19]. We have proposed developing these agents as tools for fluorescence-guided surgical resection of GBM [9]. The utility of SBK-targeted agents for imaging gliomas has also been demonstrated using preclinical MRI. For that application, the SBK peptide was conjugated to the macrocyclic molecule, 1,4,7,10-Tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA), and complexed with gadolinium [20,21]. PTPµ biomarker targeted MRI agents showed more sustained binding to and enhancement of tumors compared to untargeted, conventional gadolinium-containing contrast agents in brain tumor models [20,21] and might be advantageous in standard MRI or intraoperative MRI. The WHO Classification of Tumors of the Central Nervous System introduced a new "integrated" scheme using molecular markers alongside traditional histopathology to classify diffuse gliomas [22]. These recommendations include tests for mutated isocitrate dehydrogenase 1 (IDH1) status to differentiate tumors. Presence of IDH1 mutation correlates with more favorable patient survival outcomes [22]. In the present study, we characterized the staining of the PTPµ biomarker in human glioma tissue microarrays or in individual tumor biopsy samples using the SBK4 peptide conjugated to Texas Red, SBK4-TR, and correlated this with clinical and pathologic features, including survival outcomes. Our findings indicate that PTPµ high biomarker levels are predictive of longer survival time for all glioma subtypes. Even when adjusted for age, sex, and IDH1 mutation status, PTPµ high biomarker levels correlate with increased survival in GBM patients. These data provide evidence that the PTPµ biomarker may predict survival for various gliomas and support the use of the SBK agents for prognosis and imaging. Staining of Glioma Sections with the PTPµ Biomarker Human glioma tissue microarrays (TMAs) or individual glioma tumor samples were obtained from 94 patients including 25 adolescent and young adult (AYA) and 69 adults with astrocytomas (n = 12), oligodendroglioma (n = 14), oligoastrocytoma (n = 7), and GBM (n = 61). The clinicopathological characteristics of the patients combined with PTPµ biomarker staining results are summarized in Table 1. The 94 patients were fairly equally divided between those with PTPµ low (52%) and those with PTPµ high biomarker levels (48%; Table 1). 
Significantly more patients with PTPµ high were alive at the end of the follow-up period (22 patients) compared to those with PTPµ low biomarker levels (eight patients, p < 0.002). These PTPµ high biomarker patients also had a significantly longer mean overall survival time of 48 months compared to the mean overall survival time of 22.4 months for the PTPµ low patients (p < 0.001). Survival times shown in Table 1 represent mean survival times for all patients in a group, both for those where death was recorded and for those alive at the conclusion of the follow-up period. The mean time to recurrence shown in Table 1 for each group was calculated only from patients who experienced a recurrence. The samples were stained for PTPµ with SBK4-TR, and a subset of those patient samples are shown in Figure 1. Histology was visualized by staining with hematoxylin and eosin (H&E; Figure 1a,d,f). The SBK4-TR staining was visualized with a fluorescent microscope. There was variable PTPµ staining of the tumor samples (Figure 1c,e,f). The amount of fluorescence was divided into two categories, PTPµ low and PTPµ high, to reflect the biphasic nature of the results. As examples, A1 and A2 were classified as PTPµ low, while E7 and E8 illustrate PTPµ high expressing samples (Figure 1c). The TMAs were also stained for mutant IDH1, as shown in Figure 1b,f, and scored as positive or negative to replicate scoring by pathologists. A different TMA is shown in Figure 1f with samples illustrating the range of PTPµ low and PTPµ high as well as wild-type and mutant IDH1 samples. Analysis of Clinical Variables in Comparison to the PTPµ Biomarker Kaplan Meier survival plots demonstrate that patients with PTPµ high biomarker staining have significantly increased survival relative to patients with PTPµ low biomarker staining. The last outcome recorded for a patient (i.e., living or deceased) at the end of the follow up period was carried forward to generate these survival plots. The survival of all glioma patients with PTPµ high and PTPµ low is plotted either unadjusted (Figure 2a) or adjusted (Figure 2b) by gender, grade, age group, and IDH1 mutation status. Median survival times are shown below each plot. As shown in Figure 2a, the median survival of all glioma patients with PTPµ low was 13.3 months compared to 57.8 months for those with PTPµ high. After adjusting for sex, tumor grade, age group, and IDH1 mutation status, the median survival of all patients with PTPµ low was about half as long, 18.6 months, as those with PTPµ high staining, where the median survival was 38.2 months (Figure 2b).
Multivariable Cox proportional hazards regression survival models were generated to investigate the effects of PTPµ, sex, age, grade, IDH1 mutation status, and other parameters on overall survival. Results of the final model are summarized in the Forest Plot shown in Figure 3. Sex, age, WHO tumor grade, and IDH1 mutation status were all included in the final model since all four characteristics are well validated prognostic factors in glioma as mentioned above [4][5][6][22]. PTPµ high staining resulted in a significantly decreased hazard compared to PTPµ low staining (Figure 3). Males showed a slightly increased hazard compared to females, but this difference was not significant. Similarly, there were no significant differences in the hazard of death among patients in the different age groups. Patients with grade IV tumors had significantly increased hazard ratios relative to patients with lower grade tumors. Consistent with previous studies, patients with mutant IDH1 had a significantly reduced hazard of death relative to wild-type IDH1 glioma patients (Figure 3).
To better visualize overall survival among patients with PTPµ low and PTPµ high staining in different age categories, survival data for patients in each PTPµ category were plotted, as shown in Figure 4. Unadjusted Kaplan Meier survival plots were calculated for glioma patients with PTPµ low (Figure 4a) and PTPµ high (Figure 4b). Too few patients were available in each category to make meaningful Kaplan Meier survival plots that adjusted for sex, grade, and IDH1 mutation. Of the 49 patients with PTPµ low staining, six were AYA, 12 were patients aged 40 to 60, and 31 were patients aged 60 and over. In the PTPµ low group, younger patients had longer median survival times than older patients (Figure 4a). Patients aged 60 and over with low levels of PTPµ staining had a median survival time of 5.3 months, patients 40-60 years old had a median survival of 20.6 months, and the AYA patients had an almost four-fold longer median survival of 80.3 months (Figure 4a). The distribution of age groups was different for patients with PTPµ high staining; of the 45 patients with high staining, 19 were AYA, 18 were aged 40 to 60, and only eight were aged 60 and over. As with PTPµ low biomarker staining, the unadjusted overall survival for patients in the 60 and over group was worse than that of the other two age categories for the PTPµ high biomarker (Figure 4b). AYA patients with high levels of PTPµ had longer survival compared to the other age groups (Figure 4b), but median survival time could not be determined because only six deaths were recorded among the 19 AYA patients. Comparison of PTPµ low and PTPµ high biomarker staining within a given age group reveals some interesting observations (Figure 4). For instance, patients in the oldest age group with PTPµ high staining had a significantly longer median overall survival of 18.9 months (Figure 4b) compared to 5.3 months for those patients 60 and over in the PTPµ low group (Figure 4a; log rank p-value = 0.025). The trend was similar although not statistically significant for the other two age groups. In the 40 to 60 age category, the median survival was 30.3 months for the PTPµ high versus 20.6 months for PTPµ low patients. The median survival time for AYA patients with PTPµ high could not be determined and cannot be compared to that of AYA patients with PTPµ low in this study due to the length of the follow-up period.
Of note, 13 of 19 PTPµ high AYA patients survived through the follow-up period compared to three of six in the PTPµ low group. Next, the 61 patients with GBM were analyzed separately to better examine the relationship between survival and PTPµ staining in these patients. Kaplan Meier survival plots for overall survival are shown unadjusted or adjusted for sex, age group, and IDH1 mutation status (Figure 5). GBM patients with PTPµ high staining showed significantly better survival compared to those with PTPµ low staining in both unadjusted (Figure 5a) and adjusted plots (Figure 5b). In contrast, no significant differences were detected between GBM patients with PTPµ low and PTPµ high staining in terms of recurrence-free survival (Figure 5c,d). Finally, we examined the 33 remaining patients with lower grade gliomas (non-GBM), including astrocytomas (grade II and III), oligoastrocytomas, and oligodendrogliomas, to determine whether PTPµ staining correlated with overall survival (Figure 6a,b) or recurrence-free survival (Figure 6c,d). As with GBM patients, patients with lower grade tumors but PTPµ high levels had longer overall survival than those with PTPµ low levels, although this difference was only significant after adjusting for sex, age group, and IDH1 mutation status (Figure 6b).
There was no difference in the unadjusted recurrence-free survival for glioma patients with non-GBM tumors with high and low PTPµ biomarker staining (Figure 6c). However, after adjusting for sex, age group, and IDH1 mutation status, the PTPµ high non-GBM glioma patients had significantly longer recurrence free survival times than the PTPµ low non-GBM patients, 34.1 versus 11.8 months, respectively (Figure 6d). Discussion The most recent WHO Classification of Tumors of the Central Nervous System recommendations combine basic histology with either immunohistochemical or genetic tests for mutated IDH1 status, transcriptional regulator (ATRX) loss, and TP53 mutation or 1p/19q chromosomal deletion status to differentiate tumors [22]. Using these molecular markers, gliomas can be more accurately classified as diffuse astrocytoma, oligodendroglioma, oligoastrocytoma, or the glioma with the worst overall prognosis, GBM [22]. Of particular interest was the recommendation that molecular data and genotype overrule histology when discordant results arise [22]. Sequencing studies by The Cancer Genome Atlas (TCGA) identified the common mutation of IDH1 in GBM, with an observation that ~10% of GBM patients harbored IDH1 mutations [23]. IDH1 mutations were associated with increased overall survival of GBM patients and occurred preferentially in young patients and those with secondary GBM [23], that is GBM progressing from a lower grade glioma as opposed to GBMs that arise de novo, i.e., primary GBM. A further refinement of glioma subtypes was accepted in the 2016 WHO guidelines by adding ATRX and TP53 mutational analysis alongside evaluation of 1p/19q chromosomal co-deletion [22]. GBMs and astrocytomas are classified as IDH mutant or wild-type [24]. Oligodendrogliomas can be distinguished from astrocytomas based on ATRX and TP53 mutations (observed in astrocytomas only) versus 1p/19q co-deletion (observed in oligodendrogliomas only along with IDH1 mutation) [24]. The use of immunohistochemistry for both IDH1 and ATRX mutation analysis should simplify the adoption of molecular diagnostics in the neurohistological setting [25]. Based on molecular findings, new predictions for disease outcome can also be determined. For example, the presence of IDH1 mutations and 1p/19q co-deletions are associated with better survival outcomes for grade II, III, and IV gliomas [7,23,26,27], which may be relevant for determining treatment options for lower grade glioma patients with worse prognoses [7,26]. The data presented here suggest that high levels of PTPµ staining correlate with longer overall survival (anywhere from one and a half to three times longer) for patients of similar age. Since high PTPµ staining is correlated with improved survival of all age groups, the PTPµ biomarker may be an important prognostic marker.
Unlike the markers discussed above, the changes observed in PTPµ in glioma are all post-translational in nature and not at the level of DNA. There is little evidence of PTPµ changes at either the DNA or the RNA level in brain tumors in the literature. The TCGA database indicates 13 mutations in the PTPµ gene (PTPRM) coding region, most of them low impact mutations. We previously observed differences in full-length PTPµ and proteolytic fragments of PTPµ in different glioma types, including GBM by immunoblot [17]. When full-length PTPµ protein was added back to the invasive glioma LN-229 cell line, a cell line characterized by low amounts of full length PTPµ and high amounts of PTPµ fragments, cell migration was reduced [15]. We found that PTPµ fragment expression was essential for promoting cell migration and cell survival in this cell line [15]. Based on these results, we hypothesized that proteolytic cleavage of PTPµ impacts adhesion between adjacent cells, leading to a loss of contact inhibition of growth and promotion of cancer cell migration and invasion [28]. PTPµ high biomarker staining and IDH1 mutation both substantially reduced the hazard ratio of death, as shown in Figure 3. Additional studies with more patients in each age group are needed to determine whether these biomarkers are involved in one or more common pathways leading to oncogenesis and/or prolonged survival. Current practice is to utilize Clinical Laboratory Improvements Amendments (CLIA)-approved and commercially available monoclonal antibodies (mAbs) for the most common mutation of IDH1 and ATRX for routine grading of gliomas [25]. If validated by additional studies, the SBK4-TR agent could be used in a similar setting and would allow quick and convenient one-step staining, as the Texas Red fluorophore is already conjugated to the SBK4 peptide. In future studies, we will use this same reagent to validate our results in an independent dataset of patient tissue. In addition to using the SBK agents to predict patient outcomes, these agents could also be used in fluorescence-guided surgical resection of glioma [9] for patients whose biopsy is positive for the PTPµ biomarker. 5-ALA is currently approved to be used in GBM surgery as it distinguishes tumor tissue from normal tissue by the preferential conversion of 5-ALA to fluorescent porphyrins (PpIX) in the heme biosynthesis cycle, which occurs at a higher rate in epithelial and tumor tissue [29]. PpIX fluoresces under 400-410 nm wavelength excitation and emits at 635-705 nm and can be visualized with a fluorescent surgical microscope. It is very effective at delineating the main GBM tissue mass, with 92% positive predictive value, 77% specificity, and 79% negative predictive value [12], and was recently approved by the U.S. FDA for this purpose. However, it is less effective at labeling the dispersive GBM tumor border [13,30]. The use of 5-ALA in the surgical resection of GBM results in a significant improvement in the extent of tumor resection (65% versus 36% with white light alone) and yields an improvement in six month progression-free survival [11]. From previous studies, we know that the SBK agents are very effective at labeling the main tumor mass and the tumor's invasive edge [19]. The SBK agents can be conjugated to any fluorophore, including those in the near-infrared range to minimize tissue interference. 
Since the PTPµ biomarker signifies cancer and the data presented demonstrate that the SBK4-TR agent labels both low-and high-grade glioma tissue, SBK4-TR could be useful for fluorescence-guided surgical resection of gliomas, either alone or multiplexed with 5-ALA for double labeling. Brain tumors are the third most common malignancy in AYA patients between 15 and 39 years old [31]. Until recently, AYA populations have been lumped with either pediatric or adult patients, and their treatment has varied between following pediatric or adult guidelines, neither of which may be appropriate for this disease [32]. With the advent of molecular characterization of malignancy, fundamental differences between AYA patients and other age groups have been identified, clarifying that separate disease mechanisms are at play. In the case of glioma, pediatric, AYA, and adult patients have molecular distinctions between and within different glioma grades [32]. In GBM, for example, TP53 and IDH1 mutations and phosphatase and tensin homolog (PTEN) deletion are frequently observed in patients under 40 [23,33], as is hypermethylation of the CpG island methylator (C-GIMP) phenotype [34]. All are correlated with better prognosis. In this data set, there are higher levels of the PTPµ biomarker in AYA patients and in the 40 to 60 adult glioma patients, while patients 60 years and over tend to have low PTPµ staining. In older GBM patients, epidermal growth factor receptor (EGFR) amplification and PTEN deletions are observed in a majority of cases, and IDH1 mutations are rarely observed [33]. Of note, our data included a subset of GBM patients over 60 with PTPµ high staining and median survival times more than three times that of GBM patients in the same age group with PTPµ low staining. As with the IDH1 mutation, PTPµ high staining correlates with improved survival. Unlike IDH1 mutation status, the PTPµ biomarker may be a relevant prognosis marker for all age groups. In summary, the data presented here indicate the exciting possibility that the staining of the PTPµ biomarker may be used to predict clinical outcomes of glioma patients. Study Ethics and Patient Information Glioma patients were identified and prospectively consented to the Ohio Brain Tumor Study (OBTS, PIs: Barnholtz-Sloan and Sloan) under approval from the University Hospitals Institutional Review Board (IRB Number CC296; Approval 24 May 2018). Clinical and pathological data were gathered for each patient and included age at diagnosis, sex, race, WHO grade, histological type, overall survival, overall vital status, recurrent status, and recurrence-free survival. The IDH1 mutation status was obtained for some patient samples through the TCGA database (21 samples) or as part of the medical record (30 samples). The remaining samples were stained for IDH1 mutation as described below. The last outcome recorded (i.e., living or recurrence-free) at the conclusion of the follow-up period was carried forward to generate the Kaplan Meier plots used to illustrate overall survival or recurrence-free survival. No recurrence data were available for four patients (three in the PTPµ low group and one in the PTPµ high group), thus these were excluded from analyses of recurrence-free survival. The mean time to recurrence shown in Table 1 for each group was calculated only from patients who experienced a recurrence. Reagents The SBK4 peptide used for tissue staining was synthesized as described [18]. 
The N-terminal glycine of SBK4 peptide was coupled to Texas Red (TR; Molecular Probes Inc, Eugene, OR, USA) as described [18] to make the fluorescent agent. Anti-IDH1 R132H Monoclonal Antibody clone H09 [American Research Products (Dianova GmbH), Waltham, MA, USA] reacts specifically with the isocitrate dehydrogenase 1 (IDH1) R132H point mutation in tissue sections from formalin-fixed brain tumor specimens. Biomarker Labeling of Human Glioma Tissue All tumor samples used for this study were obtained from the OBTS, which makes patient samples available to researchers. The OBTS generated three TMAs to facilitate screening a large number of patient biopsy tissues, and these were stained for the PTPµ biomarker. To supplement these samples, additional individual biopsy samples were also screened. Together, these TMAs and slides represented samples from 94 glioma patients (25 adolescent and young adult and 69 adult) with astrocytomas (n = 12), oligodendroglioma (n = 14), oligoastrocytoma (n = 7), and GBM (n = 61). Tissue staining with SBK4-TR was described [18]. Positive controls (GBM) and negative controls (Epilepsy) were tested with the TMAs or individual slides. Tumor samples were obtained formalin-fixed and paraffin-embedded. Prior to staining, the TMAs or slides were deparaffinized and blocked with 2% goat serum in phosphate buffered saline (PBS) for 20 min at room temperature (RT). The samples were then incubated with SBK4-TR agent diluted in 2% goat serum in PBS at RT for 1 hr in the dark. Following a PBS rinse, the TMAs or slides were coverslipped with Vectashield Hard Set Mounting Medium (Vector Laboratories, Inc., Burlingame, CA, USA) and imaged on a Hamamatsu Nanozoomer S60 slide scanner (Bridgewater, NJ, USA). Some samples were also stained for the IDH1 mutation. For IDH1 staining, antigen retrieval was performed in a citrate buffer. Antibody binding was detected using MACH4 horseradish peroxidase (Biocare, Pacheco, CA, USA), and diaminobenzidine was used as a chromogenic substrate. The sections were counterstained with hematoxylin and eosin (H&E) and mounted with Ecomount (Biocare, Pacheco, CA, USA) and imaged. Tissue staining for both PTPµ and IDH1 biomarkers was quantified by blinded observers. For PTPµ, an initial scoring system of one to four was used to capture staining intensity information about the samples with a staining level of one indicating low fluorescence and level four indicating high fluorescence. After reviewing all of the results, the PTPµ biomarker was dichotomized as either low or high to better reflect the biphasic nature of the staining pattern. For IDH1, samples were scored as either positive for the mutation or negative as it is done clinically. Statistical Analysis Data were analyzed using version 3.5.1 of the R software. Summaries of PTPµ staining in comparison to the indicated clinicopathological characteristics were performed using the "tableone" package. Numbers and percentages of categorical variables were compared using the Chi square test. For continuous variables, means and standard deviations were calculated and compared using a t test. Survival analyses were performed using the "survival" and "survminer" packages in R. The Kaplan Meier method and log-rank tests were used for generating unadjusted survival curves and testing for significance as indicated. Multivariable models using Cox proportional hazards regression were generated to incorporate the possible contribution of additional clinicopathological features to overall survival. 
The final model selected for all patient data was adjusted for sex, age group, tumor grade, and IDH1 mutation. The global log-rank p-values are shown for the survival curves with the three age groups indicated. In all cases, p-values <0.05 were considered statistically significant.
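A minimal R sketch of this pipeline is shown below, using the packages named above; the glioma data frame and its columns (ptpmu, os_months, death, age_group, grade, idh1) are hypothetical stand-ins, since the OBTS patient-level data are not distributed with the article.

```r
# Baseline table, unadjusted Kaplan-Meier curves, and an adjusted Cox model,
# mirroring the analysis described in the Statistical Analysis section.
library(survival); library(survminer); library(tableone)

# Summaries of clinicopathological characteristics by PTPmu staining group
print(CreateTableOne(vars = c("age_group", "sex", "grade", "idh1"),
                     strata = "ptpmu", data = glioma))

# Unadjusted Kaplan-Meier curves with a log-rank test
km <- survfit(Surv(os_months, death) ~ ptpmu, data = glioma)
ggsurvplot(km, data = glioma, pval = TRUE, risk.table = TRUE)

# Multivariable Cox model adjusting for sex, age group, grade and IDH1 status
cox <- coxph(Surv(os_months, death) ~ ptpmu + sex + age_group + grade + idh1,
             data = glioma)
summary(cox)                 # hazard ratios, as summarised in the Forest plot
ggforest(cox, data = glioma)
```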
Mischief: A Simple Black-Box Attack Against Transformer Architectures We introduce Mischief, a simple and lightweight method to produce a class of human-readable, realistic adversarial examples for language models. We perform exhaustive experimentation with our algorithm on four transformer-based architectures, across a variety of downstream tasks, as well as under varying concentrations of said examples. Our findings show that the presence of Mischief-generated adversarial samples in the test set significantly degrades (by up to 20%) the performance of these models with respect to their reported baselines. Nonetheless, we also demonstrate that, by including similar examples in the training set, it is possible to restore the baseline scores on the adversarial test set. Moreover, for certain tasks, the models trained with the Mischief set show a modest increase in performance with respect to their original, non-adversarial baseline. Introduction An adversarial attack on deep learning systems, as introduced by Szegedy et al. (2014) and Goodfellow et al. (2015), consists of any input that may be designed explicitly to cause poor performance on a model. They are traditionally split in two major categories: white-box and black-box. In the former, the adversarial inputs are found-or rather, learned-through a perturbation of the gradient. The latter, on the other hand, assumes that there is no access to the model's gradient, and thus said adversarial examples are often found by trial-and-error. In computer vision, such attacks typically involve injecting learned noise into small areas of the input image. This noise is unnoticeable to a human user, but just about complex enough to cause the network to fail to perform as expected. In contrast, for text-based systems, this noticeability-versus-failure tradeoff is not as clear. Since machine learning-based language models embed an input string into a vector space for further processing in other tasks, from a feasibility point of view it would be more realistic to determine which changes in the string, and not the vector, are more likely to negatively affect the model. In an attempt to make our work more readily applicable to existing systems, we concentrate solely on such black-box attacks. Moreover, we focus mostly on architectures that leverage the transformer layer from Vaswani et al. (2017), such as BERT (Devlin et al., 2018), as their high performance in multiple language modeling tasks makes them ubiquitous in both research and production pipelines. In this work we present Mischief, a procedure that generates a class of such adversarial examples. In order to remain within our constraints, Mischief leverages a well-known phenomenon from psycholinguistics first described by Rawlinson (1976). We characterize the impact of our algorithm on the performance of four selected transformer-based architectures, by carrying out exhaustive experiments across a variety of tasks and concentrations of adversarial examples. Our experimentation shows that the presence of Mischief-generated examples is able to significantly downgrade the performance of the language models evaluated. However, we also demonstrate that, at least for the architectures evaluated, including Mischief-generated examples into the training process allows the models to regain, and sometimes increase, their baseline performance in a variety of downstream tasks. Related Work Adversarial attacks in the context of learning theory were perhaps first described by Kearns and Li (1993).
However, such examples per se predate machine learning (e.g., techniques to circumvent spam filters) by a wide margin (Ollman, 2007). On the other hand, the study of adversarial attacks on deep neural networks, albeit relatively recent, fields a large number of important contributions in addition to the ones mentioned in the introduction, and it is hard to name them all. However, an excellent introduction to this topic, along with historical notes, can be found in Biggio and Roli (2018). In the context of Natural Language Understanding (NLU), the work by Jia and Liang (2017) was arguably the first where such notions were formally applied to the intersection of language modeling and deep learning. Moreover, the well-known research by Ebrahimi et al. (2018a), Belinkov and Bisk (2018), and Minervini and Riedel (2018) showed that the large majority of existing language models are extremely vulnerable to both black-box and white-box attacks. Indeed, the Mischief algorithm is similar to that of Belinkov and Bisk (2018), with variations on noise levels, and applied over a wider range of natural language tasks. Nonetheless, the large majority of the procedures presented in such papers were often considered to be unrealistic (Zhang et al., 2019), and it wasn't until Pruthi et al. (2019) and Ebrahimi et al. (2018b) that more practical attack and defense mechanisms were introduced. This paper is more closely aligned to theirs, although it differs in key aspects regarding contributions and methodology. Regardless, our work is meant to add to this body of research, with a specific focus on black box-based sentence level attacks for transformer architectures. The interested reader can find a more comprehensive compendium of the history of adversarial attacks for NLU in the survey by Zhang et al. (2019). We elaborate on a few studies of the research first reported by Graham Rawlinson (Rawlinson, 2007) in Section 3. In addition to these papers, it is important to point out that there are quite a few papers around this phenomenon. For example, the work by Perea and Lupker (2003) and Perea and Lupker (2004) expanded upon said research by exploring other types of permutations; while Gomez et al. (2008) and Norris (2006) attempted to explain this phenomenon from a statistical perspective, and an analysis of some of the leading theories around positional encoding can be found in the articles by Davis and Bowers (2006) and Whitney (2008). Finally, a compilation of the works around this effect can be found in Davis (2003). Background Graham Rawlinson described in his doctoral thesis (1976) a phenomenon where permuting the middle characters in a word, but leaving the first and last intact, had little to no effect on the ability of a human reader to understand it. It was shown in a few other studies that said permutation does tend to slow down readability (Rayner et al., 2006), and that the type of permutation (i.e., the position of the permuted substring) is relevant to the comprehension of the text (Schoonbaert and Grainger, 2004), as well as any context (Pelli et al., 2003) added. It could be argued that the act of shuffling the characters in an input word will have a naturally detrimental effect on any language model. Most models rely on a tokenizer and a vocabulary to parse the input string; thus, the presence of an adversarial example as an input to a pretrained model implies that the input will very likely be mapped to a low-information, or even incorrect, vocabulary element.
On the other hand, the attention mechanism that lies at the heart of the transformer architecture does not have a concept of word order (Vaswani et al., 2017), and relies on statistical methods to learn syntax (Peters et al., 2018). It has also been shown to prefer, in some architectures, certain specific tokens and other coreferent objects (Clark et al., 2019; Kovaleva et al., 2019). This suggests that, although models relying on these artifacts may be resilient to slight changes in the input, the right concentration of permuted words may lead to a degradation in performance, all while remaining understandable by a human reader. Note that we do not alter the order of the words in the sentence, as that may risk losing a significant amount of semantic and lexical information, and thus it would no longer be considered a practical adversarial attack. The Mischief Algorithm We define the Generalized Rawlinson adversarial example (GRA) as a permutation σ on a word w = w 1 , w 2 , . . . , w n , where n > 3, such that GRA(w) = w 1 , σ(w 2 ), . . . , σ(w n−1 ), w n . The algorithm that we use to generate such examples, which we call Mischief, is a function that acts on a text corpus and takes in two parameters p, r ∈ [0, 1]. Here, p denotes the proportion of the dataset to perform "Mischief" on, and r is the probability of a word w in a given line to be randomized; that is, to perform GRA(w). An implementation of Mischief can be seen in Algorithm 1:

Algorithm 1 (Mischief)
Input: dataset D, proportion p, probability r
for sentence s ∈ D do
    Draw probability π_s ∼ P
    if π_s ≤ p then
        for word w ∈ s do
            Draw probability π_r ∼ P
            if π_r ≤ r ∧ |w| > 3 then
                w ← GRA(w)

Experimentation We evaluate our approach with four transformer-based models: BERT (large, cased) (Devlin et al., 2018), RoBERTa (large) (Liu et al., 2019), XLM (2048-en) (Lample and Conneau, 2019), and XLNet (large) (Yang et al., 2019). All of them were selected due to their high performance on the General Language Understanding Evaluation (GLUE) benchmarks, a set of ten distinct NLU tasks designed to showcase the candidate's ability to model and generalize language (Wang et al., 2018). For every model and task, we apply four different concentrations r = {25%, 50%, 75%, 100%} of Mischief on the training set; additionally, for each of these concentrations we test different combinations of Mischief / no Mischief on the test and training set, totaling 640 distinct experiments. Due to the large complexity of the task, we maintain p = 1 across all experiments. A summary of our findings and general experimental setup is described in Table 1. Methodology In order to obtain the performance of a model on a test set, an experimenter must first upload the raw predictions to the GLUE website. Due to the number of experiments we performed, along with our need to modify the test sets, we opted out from evaluating every result on the website. Instead, we treated the provided validation set as a test set, and generated a small validation set from a 90%-10% split of the training set. [Displaced figure caption: The results are averaged out across all values of r, and measured in terms of F1, Pearson correlation (p in the plot), Spearman-ρ (s in the plot), and accuracy (not indicated). Note the high variance in RTE, which we attribute to the small size of the dataset.] It is well-known that the language models we tested are highly sensitive to initialization.
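To make the procedure concrete, the following is a minimal Python sketch of the GRA permutation and of the corpus-level pass described in Algorithm 1. The function names (gra, mischief), the naive whitespace tokenization, and the use of Python's random module are illustrative choices of this summary rather than the authors' implementation.

import random

def gra(word, rng=random):
    # Generalized Rawlinson adversarial example: keep the first and last
    # characters fixed and shuffle the interior ones; words with fewer
    # than four characters are returned unchanged.
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def mischief(sentences, p, r, rng=random):
    # For a proportion p of the sentences, apply GRA to each word with
    # probability r; the remaining sentences are left untouched.
    perturbed = []
    for sentence in sentences:
        if rng.random() <= p:
            words = [gra(w, rng) if rng.random() <= r else w for w in sentence.split()]
            perturbed.append(" ".join(words))
        else:
            perturbed.append(sentence)
    return perturbed

print(mischief(["the model should still understand this sentence"], p=1.0, r=1.0))

With p = 1, as in the experiments reported below, every sentence is visited, and r controls the fraction of words that end up permuted.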
Given that in most tasks we observed significant variation of results accross multiple experiments, we report the average result for ten random seeds, bringing up the total number of experiments to 6,400. However, as done in Kovaleva et al. (2019), we opt to not report the CoLA or WNLI benchmarks, as their small training set size made them remarkably sensitive to variations in the experimentation, and their inclusion could bias our summary results for the following sections. Effects of Mischief in the Test Set Our first set of experiments, corresponding to the first column of Table 1, involved exploring the effects of a Mischief-generated adversarial test set, as well as a simple defense schema. To simulate an adversarial attack, on our first setup we fine-tuned the models on each GLUE task as described on their original papers, and evaluated them on test sets with Mischief. Then, we simulated a simple defense by applying Mischief to the training sets, and subsequently fine-tuning and evaluating the models. The results can be seen in Figure 1. We found that models trained without Mischief are vulnerable to adversarial attacks, with performance drops averaging almost 20% on the case where r = 25%. However, such degradation can be easily recovered by training with Mischief, as well as performing minor hyperparameter tuning to compensate for the variations in the new training set. A plot of the mean task degradation observed by varying r can be seen in Figure 2. We conjecture that the "dip" at r = 25% can be explained by the fact that the concentration of GRA examples is enough to degrade the performance of the tested models, but not sufficient to allow for learning. Effects of Mischief in the Training Set Our second set of experiments involve evaluating the performance on the unmodified test dataset, after training the models with Mischief. The results for every proportion r can be seen in Figure 3. We observed increased performances on various tasks. However, it does appear that the size of the dataset, as well as the objective of the task, have an important influence as to whether Mischief-trained models can have such performance increase. This is to be expected, as certain tasks do rely more heavily on "masking" certain tokens. For example, MRPC (the Microsoft Research Paraphrase Corpus, by Dolan and Brockett (2005)), is, as it name indicates, a classification task where the model must determine whether two sentences p, q are paraphrases of one another. A paraphrase of p would normally retain most of the semantic content, while altering the lexical relations as much as possible, in which Mischief clearly allows for a more fine-grained data expansion at the tokenizer level. It is also important to point out that MRPC is a relatively large dataset, with 5801 sentence pairs. On the other hand, some other tasks would actually be harmed by the unintentional "masking" induced by Mischief. As an example, RTE (Recognizing Textual Entailment) is a dataset merged by Wang et al. (2018) from the corpora by Dagan et al. (2005), Bar-Haim et al. (2006), Giampiccolo et al. (2007), and Bentivogli et al. (2009). Its objective is to determine whether a pair of sentences p, q have the relation p =⇒ q. It could be argued that such a task cannot benefit from Mischief, as it would lose critical lexical information and simply obfuscate the dataset further. However, MNLI (the Multi-Genre NLI corpus, by Williams et al. 
(2018)) also involves textual entailment, but it is significantly larger than RTE: the latter is the smallest dataset presented in this paper, with 2,769 examples in total, while the former is nearly 150 times larger, topping about 433,000 sentence pairs. [Figure 3 caption: Resulting performance change, across all tasks, for every r. In blue we report the best performance of a model trained with Mischief, and evaluated on its original test set. In red, we report the baseline. In general, large corpora consistently benefit from Mischief-based training.] Discussion Mischief as an adversarial attack is remarkably effective, although its ability to degrade the performance of a language model is, fortunately, easily lost if the model has been exposed to other GRA samples before. We hypothesize that this is, as mentioned in Section 3.1, due to the way these models construct their vocabulary. The models tested employ a byte pair encoding (BPE) preprocessing step (Gage, 1994), which segments subwords iteratively and stores them in the vocabulary based on their frequency. It follows that any model trained on Mischief-generated samples will become more robust to the perturbations induced by this algorithm. Moreover, the models tested have large parameter sizes, which translates into a much stronger ability to memorize, and thus be resilient to, new input examples. This can also help partially explain the results observed in Section 4.3: let w i , w j be two words occurring in different parts of the dataset, where w i = w j (i.e., two occurrences of the same word) and |w i | = |w j | := n. For n ≥ 3, and assuming a uniform distribution over the possible permutations, the probability that these two words are transformed the same way is

Pr[GRA(w i ) = GRA(w j )] = 1 / (n − 2)!    (1)

However, Equation 1 does not account for the fact that some tasks do not benefit from Mischief. For example, the QQP (Quora Question Pairs) dataset attempts to relate a pair of questions semantically, and Mischief-based models consistently underperformed in spite of the fact that this corpus has nearly 400,000 lines. Given the scores in STS-B (Cer et al., 2017) and SST-2 (Socher et al., 2013), it appears that, generally speaking, tasks where semantic similarity is the primary measurement are more likely to be impacted negatively. There were some exceptions to the rule, however, as some models did outperform their baseline, for example, BERT in STS-B for r = 100% and XLNet in SST-2 for r = 75% and r = 100%. Conclusion We presented Mischief, a simple algorithm that allows us to construct a class of human-readable adversarial examples, and showed that the injection of such examples in the dataset is capable of significantly degrading the performance of transformer-based models. Such models can be made resistant to Mischief-based attacks by simply training with similar examples, and without relying on other components (e.g., a spell-checker). However, Mischief also has value as a data augmentation technique, as we saw that certain NLU tasks benefit from the inclusion of such examples. It is important to point out that, in general, adversarial attacks are architecture-independent (Szegedy et al., 2014). Although we attempted to provide an in-depth analysis of select transformer-based architectures, it remains an open problem as to whether the results of this paper are applicable to other families of models.
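Equation 1 can be checked empirically. The snippet below is a sketch under the same uniform-shuffle assumption; note that the closed form 1/(n − 2)! presumes that the middle characters of the word are pairwise distinct (repeated letters make collisions more likely), so a short word with distinct interior letters is used here.

import math
import random

def gra(word, rng):
    # Shuffle the interior characters, keeping the first and last fixed.
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

rng = random.Random(0)
word = "words"                      # n = 5, all middle letters distinct
n = len(word)
trials = 100_000
hits = sum(gra(word, rng) == gra(word, rng) for _ in range(trials))
print(hits / trials)                # empirical collision rate, roughly 0.167
print(1 / math.factorial(n - 2))    # predicted 1/(n-2)! = 0.1666...

The agreement between the two printed values also illustrates why repeated occurrences of the same word rarely receive identical perturbations once n grows: for an eight-letter word the predicted collision probability already drops to 1/720.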
We conjecture that, as long as their tokenizer operates in a similar fashion to the WordPiece tokenizer from Schuster and Nakajima (2012), and their parameter size is large enough, the effects from this study extend to them. In the case of smaller-capacity models or other word-segmentation techniques where out-of-vocabulary words are frequently mapped to the same token, the outcome of a Mischief-based attack can only be more detrimental. Finally, one area we did not pursue in this paper is synonym injection. We argue that synonym injection is far more impactful in terms of supplying strong adversarial examples, and a Mischief-based approach to training with such examples may also increase performance in the tasks where Mischief did not show an improvement. However, given how sensitive the meaning of a word, let alone its synonyms, is to context, such a process cannot be carried out in an automated fashion without investing expert knowledge.
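The conjecture about subword segmentation can be probed directly: a WordPiece-style tokenizer typically splits a GRA-permuted word into several pieces instead of mapping it to its original vocabulary entry. The sketch below assumes that the Hugging Face transformers package is installed and that the pretrained bert-base-cased vocabulary can be downloaded; it is an illustration added here, not part of the original study.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")

# Compare the segmentation of an original word with that of a GRA-style
# permutation of it; the permuted form usually falls outside the vocabulary
# and is broken into several subword pieces.
for word in ["research", "rsaecerh"]:
    print(word, "->", tok.tokenize(word))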
2020-10-19T01:00:19.346Z
2020-10-16T00:00:00.000
{ "year": 2020, "sha1": "f9b5b51d9871046caeaeb7a2ca60632dac011c39", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f9b5b51d9871046caeaeb7a2ca60632dac011c39", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
260714159
pes2o/s2orc
v3-fos-license
Beyond-mean-field description of octupolarity in dysprosium isotopes with the Gogny-D1M energy density functional The emergence and stability of (static) octupole deformation effects in Dy isotopes from dripline to dripline ($72 \le N \le 142$) is analyzed in this paper using mean-field and beyond-mean-field techniques often used for this purpose. We find static octupole deformations at the Hartree-Fock-Bogoliubov (HFB) level with the Gogny D1M force for $N \approx 134$ isotopes, while nuclei with $N \approx 88$ exhibit reflection-symmetric ground states. It is shown that, given the softness found in the mean-field and parity-projected potential energy surfaces along the octupole direction, neither of these two levels of approximation is sufficient to extract conclusions about the (permanent and/or vibrational) nature of octupole dynamics in Dy isotopes. From the analysis of the collective wave functions as well as the excitation energies of the first negative-parity states and $B(E3)$ strengths, obtained within the framework of a two-dimensional symmetry-conserving generator coordinate method (2D-GCM), it is concluded that the increased octupole collectivity in Dy isotopes with $N \approx 88$ and $N \approx 134$ is a vibrational-like effect that is not directly related to permanent mean-field octupole deformation in the considered nuclei. A pronounced suppression of the $B(E1)$ strengths is predicted for isotopes with $N \approx 82$ and $N \approx 126$. The comparison of results obtained with other parametrizations shows the robustness of the predicted trends with respect to the underlying Gogny energy density functional. The majority of the spherical and/or quadrupole-deformed nuclear ground states are reflection-symmetric. However, due to the mean-field spontaneous symmetry breaking mechanism [1], reflection-asymmetric ground states tend to be favored energetically in certain regions of the nuclear chart [2].
Those regions are usually associated with the neutron/proton numbers N/Z = 34, 56, 88 and 134 where the coupling between intruder (N + 1, l + 3, j + 3) and normal-parity (N, l, j) states is more effective in developing octupole deformed ground states. Octupole-related features have been studied around the already mentioned neutron/proton numbers, however, the search for new islands of reflectionasymmetric shapes, all over the nuclear chart, still represents one of the frontiers in nuclear structure physics nowadays. Within this context, a better understanding of the permanent and/or vibrational nature of octupole dynamic in atomic nuclei still represents a major challenge that cannot be resolved with plain mean field calculations. Octupolarity along the Dy isotopic chain has been the subject of experimental studies. For example, bands associated with parity doublets have been studied in 157 Dy using the JUROGAM II array [3]. A rotational band, built on an octupole vibration, has been identified in 152 Dy [4]. The E1 transitions between opposite parity bands, have been studied in 154 Dy [5]. Negative parity bands have also been investigated in both 156 Dy and 162 Dy [6,7]. The experimental findings [3][4][5][6][7] raise questions about the impact of octupole correlations in the structural evolution along the Dy isotopic chain as well as on the (permanent and/or vibrational) nature of octupole deformation effects in those isotopes. Recently, relativistic mean-field calculations have been carried out for Dy nuclei [8]. On the basis of plain mean-field results, it has been concluded, that N ≈ 88 and N ≈ 134 Dy isotopes exhibit permanent octupole deformation. The conclusion extends to isotopes where the octupole minima found in the calculations are very shallow and the corresponding potential energy surfaces exhibit a rather soft behavior along the octupole direction. The conclusions of Ref. [8] are at variance with previous macroscopicmicroscopic (Mac-Mic) results [9] as well as with the ones extracted from this microscopic study in which, the relevance of beyond-mean-field octupole dynamics in Dy isotopes is considered with the Gogny energy density functional (EDF) [10], using the models already introduced in Refs. [11][12][13][14] and used to describe octupole dynamics in other regions of the nuclear chart. In particular, we address in the present study the stability of (static) meanfield octupole deformation effects once beyond-mean-field symmetry restoration and/or configuration mixing (dynamical) effects are taken into account. To this end, calculations have been carried out along the Dy isotopic chain from proton to neutron dripline (72 ≤ N ≤ 142). A lot of effort has been devoted to better understand basic fingerprints of octupole correlations (see, for example, Refs. [15][16][17][18][19][20][21][22][23][24][25] and references therein). Previous experiments have found evidence for octupole deformed ground states in 144,146 Ba [21,22] and 222,224 Ra [24,25]. The measured low-lying states in 224,226 Rn suggest that those isotopes should be characterized as octupole vibrations [26]. Furthermore, fingerprints of octupole correlations have also been found in the case of 228 Ra and 228 Th [24,27]. Here, one should keep in mind that renewed interest in octupole correlations also comes from the need to improve the description of fission paths in heavy and super-heavy nuclei. 
In particular octupole correlations are well known to affect the outer sectors of the fission paths in those nuclei (see, for example, Refs. [28][29][30][31] and references therein). Octupole deformation is also one of the collective coordinates at play in the case of cluster radioactivity [32]. In the case of the Gogny energy density functional (EDF) [10], the models of Refs. [11][12][13][14] have already been employed to study the quadrupole-octupole coupling in regions of the nuclear chart such as the Sm and Gd isotopes with 84 ≤ N ≤ 92 [11], actinide nuclei with neutron number N ≈ 134 [12], Rn, Ra and Th isotopes [13] as well as neutron-rich actinides and super-heavy nuclei [14]. First, the quadrupole Q 20 and octupole Q 30 deformation parameters have been considered simultaneously within the constrained Hartree-Fock-Bogoliubov (HFB) framework [1] to build the corresponding (Q 20 , Q 30 ) mean-field potential energy surfaces (MF-PESs). Second, the changes induced in the MFPESs by the restoration of the reflection symmetry have been considered by projecting the (Q 20 , Q 30 )-constrained intrinsic HFB states onto a good parity. Third, the quadrupoleoctupole coupling has been taken into account using a two-dimensional symmetry-conserving Generator Coordinate Method (2D-GCM) ansatz [11][12][13][14]. The key lesson extracted from the studies mentioned above [11][12][13][14] is that, for the considered nuclei, 2D-GCM zero-point quantum fluctuations are essential to obtain a systematic of the B(E1) and B(E3) strengths as well as of the excitation energies of the lowest negative-parity states that accounts reasonably well for the available experimental data. Moreover, it has also been shown that such 2D-GCM quantum fluctuations can lead to an enhanced octupolarity as well as to a weaker dependence of the correlation energies with neutron number. In this respect, we also refer the reader to previous large scale surveys, using the octupole degree of freedom as a single generating coordinate [64,65]. The main aim of this paper is to address, the stability of mean-field octupole deformation effects as well as the impact of beyond-mean-field (dynamical) correlations in dripline-to-dripline calculations for Dy isotopes. Our results reexamine the conclusions of relativistic mean-field [8] studies around both N = 88 and N = 134 pointing to permanent octupole deformation effects in those regions. In order to disentangle the role of static octupole deformation, we have first obtained a set of (Q 20 , Q 30 )constrained Gogny-HFB wave functions for the even-even isotopes 138−208 Dy. The energies corresponding to each of these mean-field states are then used to build the MF-PESs as functions of the quadrupole Q 20 and octupole Q 30 deformations. Note, that the considered range of neutron numbers, i.e., 72 ≤ N ≤ 142, includes the octupole magic number N = 88 and extends up to a very neutron-rich sector to also include the octupole magic number N = 134. Therefore, the Gogny-HFB calculations allow us to examine the emergence and evolution of static ground state reflection-asymmetric shapes along the Dy isotopic chain and, in particular, to compare with Mac-Mic [9] and relativistic mean-field [8] predictions around both N = 88 and N = 134. As will be shown later on in the paper, for the studied isotopes, the MFPESs often are rather soft along the Q 30 -direction and/or the mean-field octupole correlation energies E CORR,HF B [see, Eq.(8)], are rather small. 
Moreover, in some cases the MFPESs exhibit a transitional behavior along the Q 20 -direction. Taking into account the experience obtained in previous works [11][12][13][14] on the role of dynamical correlations in such scenarios, as well as the mean-field results already mentioned, we have then studied the impact of beyond-mean-field zero-point quantum fluctuations in 138−208 Dy. To this end, we have resorted to both parity symmetry restoration and symmetry-conserving 2D-GCM quadrupole-octupole configuration mixing [11][12][13][14]. The results discussed in this paper, at the three levels of approximation employed, have been obtained with the parametrization D1M [66] of the Gogny-EDF. The parametrization D1M has already been shown to provide a reasonable description of octupole-related features in previous studies [11][12][13][14]. However, in some instances, we will also discuss results obtained with the parametrizations D1S [67] and D1M * [68] in order to illustrate the robustness of the predictions with respect to the underlying Gogny-EDF. The paper is organized as follows. The HFB and beyond-mean-field approximations employed in this study are briefly outlined in Secs. II A and II B. The results obtained with the corresponding approach will be discussed in each section. The HFB results will be discussed in Sec. II A, while dynamical beyond-mean-field correlations are considered in Sec. II B. In particular, parity symmetry restoration is considered in Sec. II B 1, while symmetry-conserving 2D-GCM quadrupole-octupole configuration mixing is discussed in Sec. II B 2. In this Sec. II B 2, the excitation energies of the lowest negative-parity states as well as B(E1) and B(E3) strengths obtained for 138−208 Dy will be discussed and compared with the available experimental data [69]. Furthermore, we will also illustrate the robustness of the 2D-GCM predictions with respect to the underlying Gogny-EDF. Finally, Sec. III is devoted to the concluding remarks. II. RESULTS In this work we study the emergence and stability of octupole deformation effects in the isotopic chain 138−208 Dy from a microscopic point of view using the density-dependent Gogny-D1M EDF. To this end, the HFB approach [1], with constraints on the axially symmetric quadrupole Q̂ 20 and octupole Q̂ 30 operators, is employed as a first step. On the other hand, dynamical beyond-mean-field correlations are considered via parity projection of the intrinsic HFB states and/or symmetry-conserving 2D-GCM quadrupole-octupole configuration mixing. In this section, we briefly outline these approaches [11][12][13][14] and discuss the results obtained with each of them. A. Hartree-Fock-Bogoliubov We have first performed (Q 20 , Q 30 )-constrained Gogny-D1M HFB calculations for 138−208 Dy. In the calculations the HFB equation has been solved with constraints on the axially symmetric quadrupole Q̂ 20 and octupole Q̂ 30 operators, using an approximate second-order gradient method [70]. A constraint on the operator Q̂ 10 has also been used to fix the center of mass at the origin [11]. The HFB quasiparticle operators [1] have been expanded in a (deformed) axially symmetric harmonic oscillator (HO) basis containing 15 major shells. Axial symmetry has been kept as a self-consistent symmetry in order to alleviate the computational effort.
For each of the intrinsic states |Φ(Q 20 , Q 30 )⟩ obtained in the constrained Gogny-HFB calculations, the quadrupole Q 20 and octupole Q 30 deformations are defined as the mean values of the operators Q̂ 20 and Q̂ 30 in those states. The corresponding deformation parameters β λ (λ = 2, 3) are then defined in terms of Q λ0 , the mass number A and R 0 = 1.2A 1/3 (a schematic form of these relations is sketched after this passage). For example, for A = 150 a quadrupole deformation Q 20 = 5 b is equivalent to β 2 = 0.217, whereas for A = 200 an octupole deformation Q 30 = 2.5 b 3/2 is equivalent to β 3 = 0.113. The Gogny-HFB MFPESs are depicted in Fig. 1 for a selected set of Dy isotopes, as illustrative examples. Those MFPESs are nothing else than the HFB energies corresponding to each of the intrinsic states |Φ(Q 20 , Q 30 )⟩. The HFB energies are invariant under the exchange of Q 30 into −Q 30 , a property associated with the parity symmetry of the interaction. As a consequence of this invariance, only the energies corresponding to Q 30 ≥ 0 values are included in Fig. 1. Along the Q 20 -direction there is a shape/phase transition from a prolate ( 138,140 Dy) to an oblate ( 142,144 Dy) ground state, followed by spherical ground states in 146−150 Dy, reflecting the proximity to the neutron shell closure N = 82. With increasing neutron number, the ground state quadrupole deformations increase, reaching values of Q 20 = 8 − 9 b for 92 ≤ N ≤ 112. This is, once more, followed by shape/phase transitions to oblate ground states in 182−188 Dy and then to spherical ground states in 190−196 Dy, associated with the proximity to the neutron shell closure N = 126. For larger neutron numbers, the ground state quadrupole deformations exhibit a pronounced increase, reaching the value Q 20 = 12 b for 208 Dy. For the considered Dy isotopes, the ground state quadrupole deformations obtained with Gogny-D1M, as well as the ones obtained with the D1S and D1M * parametrizations, agree well with previous Mac-Mic [9] and reflection-asymmetric relativistic mean-field [8] results. Note that, for some of the considered isotopes, the MFPESs depicted in Fig. 1 exhibit transitional features along the Q 20 -direction. As can be seen from Fig. 1 and from panels (c) and (d) of Fig. 2, static Gogny-D1M ground state octupole deformations are only predicted for 198−202 Dy, i.e., for very neutron-rich isotopes around N = 134; the corresponding ground state octupole deformations can be read off those panels. Octupole-deformed neutron-rich nuclei have already been predicted, in this [8,9] and other regions of the nuclear chart [14,36,50,[61][62][63]. The soft behavior of the Gogny-D1M MFPESs along the Q 30 -direction, as one approaches the neutron number N = 134, becomes apparent from Fig. 1. Nevertheless, even in the case of nuclei with octupole deformed mean-field ground states (i.e., 198−202 Dy), the HFB energy gained by breaking reflection symmetry, defined as the difference between the HFB energy corresponding to the absolute minimum obtained in reflection-symmetric calculations and the energy corresponding to the absolute minimum of the (Q 20 , Q 30 )-MFPES, is rather small (188, 266 and 70 keV for 198−202 Dy, respectively). The MFPESs shown in Fig. 1 also become softer along the octupole direction as one approaches 154 Dy, i.e., the neutron octupole magic number N = 88. In our calculations as well as in previous Mac-Mic ones [9], there is no static octupole deformation in this region.
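The explicit definitions of the deformations and of the β λ parameters did not survive the extraction. A schematic form, with the prefactor inferred so as to reproduce the numerical examples quoted above, reads

Q_{\lambda 0} = \langle \Phi(Q_{20},Q_{30}) | \hat{Q}_{\lambda 0} | \Phi(Q_{20},Q_{30}) \rangle , \qquad \lambda = 2, 3,

\beta_{\lambda} = \frac{\sqrt{4\pi (2\lambda + 1)}}{3 A R_{0}^{\lambda}} \, Q_{\lambda 0} , \qquad R_{0} = 1.2\, A^{1/3}\ \mathrm{fm}.

For A = 150 and Q_{20} = 5 b one has R_0 ≈ 6.38 fm, i.e., R_0^2 ≈ 0.41 b, and the expression above gives β_2 ≈ 0.22, in line with the value quoted in the text; the octupole example (A = 200, Q_{30} = 2.5 b^{3/2}, β_3 ≈ 0.11) is reproduced in the same way.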
This is at variance with recent relativistic mean-field results [8] that predict octupole-deformed Dy isotopes with N ≈ 88. However, for both N ≈ 88 and N ≈ 134 Dy isotopes, the softness displayed by the Gogny-D1M MFPESs along the Q 30 -direction (see, also Fig.4 of Ref. [8]), points towards the key role of dynamical beyond-mean-field correlations, i.e., symmetry restoration and/or quadrupole-octupole configuration mixing in the properties of the ground state and collective negative parity states in the studied nuclei. At this level, and at variance with Ref. [8], we conclude that the plain mean-field framework is not sufficient to extract conclusions about permanent octupole deformation effects in the considered nuclei. Therefore, we turn our attention to beyond-mean-field correlations in the next Sec. II B. B. Dynamical beyond-mean-field correlations In this section, we turn our attention to the impact of beyond-mean-field correlations in different low energy properties of the Dy isotopes considered. First, parity projection (after variation) calculations are discussed in Sec. II B 1. As shown, not only the MFPESs in Fig. 1, but also the parity projected potential energy surfaces obtained for some of the considered nuclei, exhibit a rather soft behavior along the octupole direction with a pronounced competition between reflection-symmetric and reflection-asymmetric configurations. As a result, not only symmetry restoration but also fluctuations in the collective coordinates should be considered for the studied nuclei. This is done in Sec. II B 2 within the frame- work of the symmetry-conserving 2D-GCM framework [11][12][13][14]. Since the octupole is the softest mode, the spatial reflection symmetry is the most important invariance to be restored. The simultaneous restoration of other symmetries, such as the rotational and particle number symmetries [22,23], is out of the scope of the present survey for technical reasons such as the large number of HO shells used and/or the number of degrees of freedom required in the 2D-GCM ansatz. Parity symmetry restoration Once the intrinsic HFB states |Φ(Q 20 , Q 30 )⟩, discussed in the previous Sec. II A, are obtained the spatial reflection symmetry in each of those states is restored by means of parity projection after variation. In what follows, and for the sake of simplicity, we will use the notation Q = (Q 20 , Q 30 ) for the pair of quadrupole and octupole deformation parameters that label each of the intrinsic HFB states, i.e., |Φ(Q 20 , Q 30 )⟩ = |Φ(Q)⟩. The projected states read where the projection operatorP π is written in terms of the desired parity quantum number π = ±1 and the parity operatorΠ. In the case of the density dependent Gogny-EDF, the projected energies associated with the parity-projected states |Φ π (Q)⟩ (9), have been computed using a mixed-density prescription in the density-dependent term of the EDF to avoid the pathologies found in the restoration of spatial symmetries [71][72][73][74][75]. We have also introduced first-order corrections in Eq.(10) to account for the fact that the parityprojected mean value of proton and neutron numbers, usually differ from the nucleus' proton and neutron numbers [11,12,14]. The π = +1 and π = −1 parityprojected potential energy surfaces (PPPESs), depicted in Figs. 3 and 4 for a selected set of Dy isotopes as illustrative examples, are nothing else than the energies E π (Q), as functions of the quadrupole Q 20 and octupole Q 30 deformations of the intrinsic states. 
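The parity-projected states and energies referred to as Eqs. (9) and (10) are not reproduced in this extraction. A standard schematic form, consistent with the discussion above and leaving aside the mixed-density prescription and the first-order particle-number corrections mentioned in the text, is

|\Phi^{\pi}(Q)\rangle = \hat{P}^{\pi} |\Phi(Q)\rangle , \qquad \hat{P}^{\pi} = \frac{1}{2} \left( 1 + \pi \hat{\Pi} \right),

E^{\pi}(Q) = \frac{\langle \Phi(Q)|\hat{H}|\Phi(Q)\rangle + \pi \langle \Phi(Q)|\hat{H}\hat{\Pi}|\Phi(Q)\rangle}{1 + \pi \langle \Phi(Q)|\hat{\Pi}|\Phi(Q)\rangle}.

In particular, for π = −1 both the numerator and the denominator vanish as Q_{30} → 0, which is the zero-over-zero indeterminacy invoked below to justify omitting the Q_{30} = 0 line from the negative-parity surfaces.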
As in previous studies, in Fig. 4 we have omitted the Q 30 = 0 line, as the evaluation of E π=−1 requires the non-trivial task of numerically resolving a zero-over-zero indeterminacy. Fortunately, the negative parity projected energy increases rapidly as the Q 30 = 0 line is approached (see Fig. 5) and its limiting value [53] is high enough not to play a significant role in the discussion of the π = −1 PPPESs [11]. The comparison between the PPPESs and the MFPESs in Fig. 1 reveals that the quadrupole deformations corresponding to their absolute minima are close to each other. Moreover, from the comparison between the MFPESs and π = +1 PPPESs one realizes that, in spite of the changes in topography along the Q 30 -direction, the latter are also rather octupole-soft and/or display a pronounced competition between reflection-symmetric and reflection-asymmetric configurations. This is illustrated in panels (a) and (b) of Fig. 5, where the π = +1 parity-projected energies obtained for 154 Dy and 202 Dy are plotted, as functions of Q 30 , for fixed values of the quadrupole moment. At the HFB level, the ground state of 154 Dy is reflection symmetric whereas the one of 202 Dy shows a non-zero octupole moment. However, for both isotopes the π = +1 parity-projected curves in Fig. 5 display a minimum with a pocket around Q 30 = 1 b 3/2 . In both cases, such an octupole-deformed minimum is less than 1.3 MeV deeper than the reflection-symmetric configuration, indicating that, in addition to parity symmetry restoration, fluctuations in the collective coordinates (in particular, the octupole coordinate, which represents the softest mode) should be taken into account for the studied nuclei. On the other hand, the π = −1 PPPESs shown in Fig. 4 [see also panels (a) and (b) of Fig. 5] exhibit in all the cases absolute minima with octupole deformations larger than the ones in the MFPESs and π = +1 PPPESs. Symmetry-conserving 2D-GCM quadrupole-octupole configuration mixing The results discussed in Secs. II A and II B 1 indicate that not only parity symmetry restoration but also symmetry-projected quadrupole-octupole configuration mixing is required to disentangle the stability of octupole deformation effects in the studied Dy isotopes. To this end, we consider the 2D-GCM linear superposition of the HFB states |Φ(Q)⟩ given in Eq. (11) (a schematic form is sketched after this passage), in which both positive and negative octupole moments are included in the integration domain D. The 2D-GCM ansatz |Ψ π σ ⟩ accounts for both reflection symmetry restoration and (Q 20 , Q 30 )-fluctuations [11][12][13][14]. In Eq. (11) π = ±1 represents the parity quantum number, while the index σ numbers the different GCM solutions. The amplitudes f π σ (Q) should be determined dynamically via the solution of the corresponding Griffin-Hill-Wheeler (GHW) equation [1,11,12,14], written in terms of the non-diagonal norm N (Q, Q ′ ) = ⟨Φ(Q)|Φ(Q ′ )⟩ and Hamiltonian H(Q, Q ′ ) = ⟨Φ(Q)|Ĥ|Φ(Q ′ )⟩ overlaps. In the evaluation of the Hamiltonian overlap one has to pay special attention to avoid the use of non-equivalent bases in the left and right HFB states [76]. In our case, this is accomplished by using the same oscillator lengths for all HFB states considered in the GCM mixing [77,78]. For the evaluation of the density-dependent contribution of the Gogny-EDF to the Hamiltonian overlap we have considered a mixed-density prescription in the density-dependent term of the EDF [11,12,14,75]. Finally, perturbative first-order corrections in both the mean value of proton and neutron numbers have been considered [11,12,14,75].
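A schematic form of the 2D-GCM ansatz and of the GHW equation referred to above, reconstructed here since the corresponding displayed equations did not survive the extraction, reads

|\Psi^{\pi}_{\sigma}\rangle = \int_{D} dQ \, f^{\pi}_{\sigma}(Q) \, |\Phi(Q)\rangle , \qquad \int_{D} dQ' \left[ H(Q,Q') - E^{\pi}_{\sigma} \, N(Q,Q') \right] f^{\pi}_{\sigma}(Q') = 0,

with the norm and Hamiltonian overlaps N (Q, Q ′ ) and H(Q, Q ′ ) defined as above and E π σ the energy of the GCM state |Ψ π σ ⟩.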
The solution of the GHW equation provides the dynamical amplitudes f π σ (Q). Nevertheless, in the case of a non-orthogonal basis of HFB states |Φ(Q)⟩, i.e., ⟨Φ(Q)|Φ(Q ′ )⟩ ≠ δ(Q − Q ′ ), such amplitudes f π σ (Q) cannot be assigned a quantum mechanical probabilistic interpretation [1]. One then introduces the collective wave functions [1,11,75], written in terms of the amplitudes f π σ (Q) of Eq. (11) and the operatorial square root N 1/2 of the norm overlap (a schematic form is sketched after this passage). The reduced transition probabilities B(E1, 1 − → 0 + ) and B(E3, 3 − → 0 + ) have been computed using the rotational model approximation for K = 0 bands, with σ corresponding to the first 2D-GCM excited negative-parity state. The electromagnetic transition operators Ô 1 and Ô 3 represent the dipole moment operator and the proton component Q̂ 30,prot of the octupole operator, respectively. The overlaps ⟨Ψ π σ |Ô λ |Ψ π ′ σ ′ ⟩ have been evaluated using the expressions given in Ref. [11]. The collective wave functions Eq. (12) corresponding to the ground and lowest negative-parity states of the nuclei 198 Dy, 202 Dy and 206 Dy are depicted in Fig. 6, as illustrative examples. Similar results have been obtained for other Dy isotopes. Note that at the HFB level 198 Dy and 202 Dy ( 206 Dy) exhibit reflection-asymmetric (reflection-symmetric) ground states. The values obtained for the average quadrupole moments corresponding to the 2D-GCM ground states (Q 20 ) π=+1 σ=1 display a pattern similar to the one obtained at the mean-field level [see panel (a) of Fig. 2]. The pattern followed by (Q 20 ) π=+1 σ=1 , as well as the one followed by the average quadrupole moments corresponding to the first negative-parity states (Q 20 ) π=−1 σ , clearly reflects the impact of the neutron shell closures N = 82 and N = 126 on the evolution of the quadrupole properties along the considered isotopic chain. The ground state collective wave functions G π=+1 σ=1 (Q), shown in the bottom panels of Fig. 6 for the N ≈ 134 isotopes 198 Dy, 202 Dy and 206 Dy, exhibit a large spreading along the Q 30 -direction. This is also the case for the G π=+1 σ=1 (Q) amplitudes corresponding to N ≈ 88 Dy isotopes. This reflects the octupole-soft character of the Gogny-D1M 2D-GCM ground states in the case of N ≈ 88 and N ≈ 134 Dy isotopes. However, for all the nuclei studied in this paper, the G π=+1 σ=1 (Q) amplitudes exhibit peaks around Q 30 = 0, pointing to an octupole-vibrational character. In order to assess dynamical octupole deformation effects at a more quantitative level, we have computed the average octupole moments [11,12,14] and obtained that, for all the considered nuclei, the ground state values (Q 30 ) π=+1 σ=1 are significantly smaller than the (static) HFB ground state octupole deformations. Thus, to a large extent, even the static octupole deformation effects predicted at the Gogny-HFB level around N = 134 are washed out once symmetry-conserving quadrupole-octupole configuration mixing is taken into account. The previous results point towards octupole-vibrational features in the Dy chain, and raise questions about the conclusions extracted in Ref. [8] from the results of a plain mean-field calculation. In this reference the existence of permanent octupole deformations in N = 88 and N = 134 Dy isotopes is concluded.
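The collective wave functions referred to as Eq. (12) are likewise not reproduced in the extraction; a standard schematic form, consistent with the definitions above, is

G^{\pi}_{\sigma}(Q) = \int dQ' \, N^{1/2}(Q,Q') \, f^{\pi}_{\sigma}(Q'),

where N^{1/2} denotes the operatorial square root of the norm overlap. Unlike the f^{\pi}_{\sigma}(Q), the amplitudes G^{\pi}_{\sigma}(Q) are orthonormal, \int dQ \, G^{\pi *}_{\sigma}(Q) G^{\pi}_{\sigma'}(Q) = \delta_{\sigma \sigma'}, and can therefore be given the usual probabilistic interpretation.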
Let us stress, that results (not shown) similar to the ones already discussed have also been obtained in the present study with other parametrizations of the Gogny-EDF, The 2D-GCM excitation energies ∆E neg−par of the first negative-parity states as well as the reduced transitions probabilities B(E1) and B(E3) obtained for the considered Dy isotopes, are plotted in panels (a), (b) and (c) of Fig. 7, as functions of the neutron number. Additional results for the parametrizations D1S and D1M * are also included in the figure. As can be seen, with minor exceptions, the results obtained with different parametrizations are rather similar. This points towards the robustness of the predicted trends with respect to the underlying Gogny-EDF. As functions of the neutron number, the 2D-GCM en-ergies ∆E neg−par display two pronounced minima, one at N = 88 and the other at N = 134. These Gogny 2D-GCM results indicate that, at a dynamical beyondmean-field level, both N = 88 and N = 134 represent (on the average) octupole magic numbers along the studied isotopic chain. Let us stress that no static octupole deformation is obtained for N ≈ 88 isotopes at the meanfield level. Moreover, the ground state collective wave functions for those isotopes are peaked around Q 30 = 0. These results suggest that in the case of Dy isotopes, the octupole collectivity around N = 88 is more vibrationallike in character than suggested in Ref. [8] on the base of plain mean-field calculations. Static octupole deformations have been obtained at the Gogny-HFB level for N ≈ 134 isotopes. However, as already mentioned, their ground state collective wave functions are also peaked around Q 30 = 0, while the corresponding mean-field deformation effects are reduced to more than half once 2D-GCM zero-point fluctuations are included. This suggests that the prominent minimum observed in panel (a) of the figure at N = 134, should also be associated with a vibrational character of the excitation instead of permanent octupole deformation effects. Regarding the comparison with the still scarce data [69], the predicted ∆E neg−par values reproduce reasonably well the experimental trend in the immediate neighborhood of N = 88, while they overestimate considerably the available experimental values as one moves away from this neutron number. The B(E1) strengths shown in panel (b) of the same figure exhibit two minima, one at N ≈ 82 and the other at N ≈ 126. From a dynamical point of view, it is precisely around these neutron numbers where the overlap ⟨Ψ π=−1 σ |Ô 1 |Ψ π=+1 σ=1 ⟩ (withÔ 1 being the dipole moment operator) reaches its minimum. Here, one should keep in mind, that the behavior of the B(E1) strengths is not directly related with the one observed in the ∆E neg−par energies and/or the B(E3) reduced transition probabilities (see, below). In fact, via the strong dependence of the dipole moment on the underlying single-particle structure, the B(E1) values might display strong suppression for some specific neutron numbers [11][12][13][14]53], specially around neutron shell closures. The trend observed in the predicted B(E3) values correlates well with the one in the ∆E neg−par energies, i.e., as functions of the neutron number the B(E3) strengths exhibit two pronounced maxima at N = 88 and N = 134 where the ∆E neg−par energies display two pronounced minima. 
The comparison with the available experimental data [69] reveals that, in spite of the quantitative differences, the predicted E3-trend reproduces the increased octupole collectivity around N = 88 as well as its sudden decrease with increasing neutron number. We stress, that the E3 collectivity around N = 88 and N = 134 is not the result of permanent mean-field octupolarity around those neutron numbers, as concluded in Ref. [8], but directly reflects the key role played by dynamical fluctuations. In fact, via the structure of the corresponding collective wave functions, the 2D-GCM overlap values) obtained as one approaches both N = 88 and N = 134 that leads to a reduction of the difference |(Q 30 ) π=+1 σ=1 − (Q 30 ) π=−1 σ | and, therefore, to larger B(E3) strengths as compared with the ones obtained as we move away from these two neutron octupole magic numbers. III. SUMMARY AND CONCLUSIONS In this paper we have carried out calculations, both at the mean-field level and beyond, to address the emergence and stability of (static) mean-field octupole deformation effects in Dy isotopes from dripline to dripline.To this end, we have resorted to the models already employed in Refs. [11][12][13][14] in other regions of the nuclear chart. Contrary to recent reflection-asymmetric relativistic mean-field [8] but in agreement with previous Mac-Mic [9] results, at the Gogny-HFB level static octupole deformations have been found only for N ≈ 134 isotopes, while nuclei with N ≈ 88 exhibit reflection-symmetric ground states. Moreover, even in the case of nuclei with octupole deformed Gogny-D1M mean-field ground states (i.e., 198−202 Dy), the HFB octupole correlation energies Eq.(8) are always smaller than 300 keV. This, as well as the octupole-softness of the corresponding MFPESs, indicate that the plain mean-field framework is not sufficient to extract conclusions about permanent octupole deformation effects in Dy isotopes. The results obtained in this paper, together with previous studies of the octupole dynamics in other regions of the nuclear chart [11-14, 64, 65], represent a warning to the use of the mean-field approach to extract con-clusions on the permanent and/or vibrational nature of octupolarity in atomic nuclei with shallow octupole minima and/or octupole-soft MFPESs. Furthermore, it has been shown that the octupole-softness found in the MF-PESs, especially around the neutron numbers N = 88 and N = 134, also extends to the parity-projected potential energy surfaces, pointing towards the key role of 2D-GCM symmetry-conserving configuration mixing in the studied nuclei. At the 2D-GCM level, zero-point quantum fluctuations associated with the restoration of reflection symmetry and fluctuations in the collective (Q 20 , Q 30 ) coordinates, lead to an enhanced octupolarity for all the considered isotopes, albeit with dynamical deformations less than half of the largest values obtained at the mean-field level. Therefore, to a large extent, the (static) mean-field octupole deformation effects are washed out in Dy nuclei once 2D-GCM fluctuations are taking into account. Our analysis of the 2D-GCM collective wave functions as well as the trends of the predicted ∆E neg−par excitation energies and B(E3) strengths, corroborate an increased octupole collectivity in Dy isotopes with N ≈ 88 and N ≈ 134. However, we stress that such increased octupolarity is a (dynamical) vibrational-like effect that is not directly related to permanent mean-field octupole de-formation in the considered nuclei. 
The predicted ∆E neg−par values reproduce reasonably well the available experimental data in the immediate neighborhood of N = 88, while in the B(E3) case the calculations account qualitatively for the increased octupole collectivity around N = 88 as well as its sudden decrease with increasing neutron number. The predicted B(E1) reduced transition probabilities display strong suppression around N ≈ 82 and N ≈ 126. Furthermore, the D1S, D1M * and D1M parameter sets provide rather similar results, pointing towards the robustness of the predicted trends with respect to the underlying Gogny-EDF.
2023-08-09T15:13:47.012Z
2023-08-07T00:00:00.000
{ "year": 2023, "sha1": "1ecc1d617b30559ac0ba789b77eb18bcf90162f3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "51fb2fbb23af0f81d8ed08f66d01d726b14b3df7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
247016894
pes2o/s2orc
v3-fos-license
Developing Dissertation Support in the Writing Centre Abstract The uneven levels of writing support that dissertation writers receive throughout each stage of their PhDs have contributed to low completion rates and general dissatisfaction with the doctoral process. By offering both collective and individual assistance, the Café and one-to-one writing programming described herein centres students in their own work by examining the often-unspoken expectations that structure their PhDs. This programming establishes honest communication with dissertation writers in order to promote self-advocacy and intrinsic motivation so that they may take greater control over their projects. By modeling a reflective understanding of dissertation writing, this programming enhances longer term productive processes that enable higher completion rates and a greater sense of fulfillment with the PhD. Introduction In the balancing of writing centre resource distribution, graduate writing support is often a lower priority, particularly when compared with the wide extent and variety of programming available to undergraduate students, whose writing skills are typically assumed to be lagging. The opportunities for graduate writing support are often limited in scope when depicted as an impediment to degree completion (Aitchison & Lee, 2006). This gap leaves the heavy lifting of meeting graduate writing needs on the shoulders of an increasingly stretched supervisory faculty. The gap in support may be explained by the assumption that dissertation writers already know how to write (Turner & Edwards, 2006), or that the dissertation, as a highly individualized project, necessitates "solitary activity" (Mullen, 2006). Perhaps unwittingly, this translates into a downplaying, by policy makers, of the complex challenges of dissertation writing and of their impact on mental health issues in graduate education (Evans, 2018). Given these assumptions, failure to strive is often interpreted by the student as the individual student's shortcoming (Le Feuvre, 2010; Webb et al., 2013), despite ample evidence that highlights the numerous benefits of supporting student self-development (Lindsay, 2015) and the impact of anxiety on a dissertation writer's efficacy (Huerta et al., 2017). Operating within this context, in 2018 the York University Writing Centre began a program of writing support geared toward building writing momentum through a process designed to boost dissertation writers' confidence in their own abilities, particularly through self-advocacy as a means to achieving dissertation writers' goals.
Working under the principle that the more attuned students are to their goals, the more they are empowered to establish positive academic relationships, for instance by locating their own desired trajectory in the context of their academic program's expectations (Hoskings and Goldberg, 2005), this support aimed at activating the students' focus on the intrinsic value of their own production. In what follows, I will present and examine these support services. While this is not a one-size-fits-all approach, the practical and pedagogical processes sketched out below offer a way of approaching and implementing a student-centred model of writing and motivational support. The programming is defined by two separate yet connected supports: 1) a weekly Café writing group and 2) four one-to-one appointments with a graduate writing specialist. Demonstrating Needs and Identifying Problems In developing our programming, we began by identifying best practices in the literature of graduate writing support, and investigating what opportunities already existed on campus outside of department-specific offerings, among which the quality and quantity of support varied dramatically. A survey of non-writing centre programs aimed at supporting graduate students and decreasing time-to-completion rates revealed a trend toward establishing benchmarks and setting arbitrary deadlines, the purpose of which was largely unclear to students. While initially designed as a motivating tool, such measures often interpellate dissertation writers as in some way deficient when they experience challenges in degree completion. A reason for this may be that, as Madden (2016) notes, "faculty often bear the incorrect assumption that students are already socialized as expert communicators for their disciplines by the time they enter their graduate programs" (p. 1). It is not uncommon for students to internalize this assumption which, in turn, may contribute to conceptions of self defined by inadequacy, thus increasing the potential for detrimental effects on graduate students' mental health. Likewise, external measurements of success can negatively impact a writer's progress. Measuring success in ways contrary to how the student conceives of and situates their project, by placing the burden of motivation on "extrinsic" factors, may have detrimental effects on writing outcomes (Bansel, 2011). The resulting work "produces less satisfaction and lowers self-esteem" (Fegan, p. 24), and this is heightened by an excessive focus on the product over the process and by the expectation that projects must be intelligible before they are fully formed in the writer's mind (Rath, 2018). While this limited view of achievement is felt unevenly, it may disadvantage those who feel unable to compete (Burford, 2017). In response, a key purpose guiding York's Writing Cafés has been to configure a support infrastructure that builds intrinsic motivation by offering a productive and collaborative space where students share their work with others. The benefits of communal writing groups have been well established in the literature, and in this setting we hoped to mobilize what Carr et al. (2010) refer to as "nourished scholarship" as a way to help students navigate towards completion. In this way, the writing space reinforces "the value of regular peer group communication and connectedness for developing a sense of belonging" (Hutchings, 2017, p. 11).
Through promoting this sense of belonging and shared experience in a community of interest, we were able to address any negative emotional attachments that students may have formed with their projects. This has been best achieved through the sharing of and learning from students' individual experiences of the dissertation writing process in various disciplinary settings. Breaking down the Programming: What is involved? Our aim was to foreground peer-based structured conversation and individualized support that would centre the learner's active participation through a dialogic student-led process (Nordlof, 2014). An explicit goal was to promote a culture of self-advocacy that locates within each writer the ability to take a firmer grasp of the reins of their studies. A desired and necessary effect of nurturing intrinsic motivation is that the writer develops a clearer self-orientation that allows them to act in accordance with their own devised path. The strategy was to promote an explicit understanding of the complex practices that define dissertation writing and locate the ways these filter down to a student's particular circumstances. Through developing a more nuanced understanding of the larger context in which they find themselves, students are better positioned to fulfill their intrinsic expectations and reconnect with the authentic motivations that drove them to pursue a doctoral program. Cafés: Composition and Numbers The Café is run weekly over the course of a term for a total of eleven weeks. Each weekly meeting runs for three hours. While initial iterations included up to fifteen students, attendance in subsequent meetings was reduced to ten in order to increase individual involvement. Due to our limited capacity to meet increasing demand, several criteria were established to narrow eligibility, including limiting participation to registered dissertation students in the Faculty of Liberal Arts and Professional Studies and offering spots to students beyond course work and comprehensive examination stages, and therefore, already working on Proposals or their dissertation projects. Special attention was given to having a broad representation across the different departments and disciplinary fields. The rationale was to lessen any potential competition between graduate students working on similar projects, foregrounding a "sense of collaboration rather than competition" (Cuthbert and Spark, 2008, p. 86). While there is no inherent competitive aspect to discipline-specific writing groups, diversity helps reduce any potential issues that may arise and exposes students to contrasting expectations and procedures among departments, programs, and disciplines. Café Structure The graduate writing specialist normally begins each weekly Café writing session with an informal 30 minute "check-in." This discussion centres on a topic that is either brought up by a student in advance, or one that the instructor introduces based on what the research and experience show are common problem points or other habitually voiced concerns of dissertation writers. This is followed by "pomodoro" writing periods of twenty-five minutes with five-minute breaks in-between. The "Pomodoro Technique" is a time-management strategy that helps to break-down study or work tasks into smaller units of time in order to increase efficiency. During each pomodoro, students have the opportunity to engage the instructor if they feel stuck in their writing or have a specific problem they wish to discuss. 
The Café ends with a "check-out," wherein students are encouraged to speak to that day's writing experience. While the "check-in" invites students to view their particular experiences in a larger light, the "check-out" allows for a deeper dive into their individual process. The first few weeks tend to be defined by a reticence to share, but as the Café progresses, the willingness to speak openly becomes almost ineluctable. Much of the success of the Café may be ascribed to its participatory nature. In order to deepen a student's belief in their own ability to become an active participant in their formative process, students collectively determine the timing of particular discussions at any given week of the Café, and may initiate discussions that they feel are pressing. Likewise, while the graduate writing specialist facilitates discussions, the objective is to progressively step back and have the group members autonomously problematize, empathize, and strategize with each other. Building new capacities and reaffirming those that students already possess is essential to the program's success during the life cycle of the Café and throughout the dissertation writer's career. Without student buy-in, the Café would not register a sufficient level of engagement and recognition of the struggles involved in writing a dissertation. We are working towards arranging a welcoming and productive environment that promotes cooperation and prioritizes each student's conception of progress. This spirit of cooperation and diffuse learning allows the facilitator to bring in research on pertinent issues and supplement organic conversations driven by student engagement. The more students learn to direct and guide the conversation, the more able they are to integrate this reflective practice into their dissertation writing process. Prompts delivered during the "check-in" process are meant to enable, to the greatest degree possible, the agency of all assembled. For instance, in dealing with the question of missed deadlines and lessons learned from that experience, a sample prompt would read as follows: "Over the course of longer-term projects it's not uncommon to fall behind on our goals. Recognition of feelings of having 'fallen short' can leave us with negative emotions about ourselves, our abilities, and our projects. What ways have helped in recalibrating expectations to recover momentum again? What ways could help? What doesn't help?" In another discussion on the genre expectations of chapter writing (a discussion normally defined by both initial shyness and intense interest), students learn to recognize their considerable agency in structuring their work and feel better prepared to negotiate the structures, norms, and requirements within which they are operating. The prompt reads as follows: "Often, dissertation writers embark on projects not fully cognizant of what's involved in writing a chapter. What are the chapter writing genre expectations in your field? What are the expectations around length, tone, purpose, etc.? How long is a chapter draft, for instance? Do you conceive of chapters as fully integrated or (mostly) stand-alone works?" When topics of an especially emotive nature are broached, they are reserved for the end of the Café. For instance, a particularly complex issue is navigating supervisory relationships.
Often, the group consensus will reinforce a student's ability to marshal their self-advocating practices and move towards establishing a supervisory relationship grounded "in an understanding of the doctoral writer's own approach to writing" (Cayley, 2020, p. 8). One-to-One Sessions With the weekly Cafés, students are automatically enrolled in four one-to-one appointments with the Café facilitator spaced out across the term. Initial discussions are largely focused around onboarding, getting to know the student's project, and discussing any writing issues that the student feels comfortable enough addressing. The final three discussions are pronouncedly student-driven, and often include goal setting and project management, working through recurring writerly problems, and in-depth writing or rhetorical analysis of dissertation proposals or chapters. During these later appointments, students generally chart significant progress in their proposals and dissertations, in no small part due to the structure of accountability established throughout the Café meetings and the opportunity that the one-to-one meetings afford the writer to see how their specific writing issues often relate to those of their colleagues. The combination of these two forms of support reduces the sense of isolation dissertation writers often experience while also challenging burdensome and debilitating misconceptions about the necessarily "solitary nature" of the writing process. When thus challenged, students exercise more agency over their own work, and are subsequently better equipped to have their writing needs met in a manner that works for them. Writing centre efficacy is tied to the potential of one-to-one mentorship to address motivational issues and learner attributes that correlate with learning, such as attitudes toward study skills, writing, self-efficacy, and the institution itself (Babcock, Day, and Thonus, 2012). Studies of writing centre impact on student performance (dating back to early iterations of writing centre pedagogy and often viewed through the admittedly not unproblematic prism of course grades) show a clear correlation between one-to-one mentorship and enhanced student performance, over and above the impact of writing courses and other forms of group instruction (Tiruchittampalam, Ross, Whitehouse, and Nicholson, 2018). As part of the writing centre's pedagogical orientation, we, as writing mentors, aim at building affirming relationships with dissertation writers, "supporting…students to develop themselves" (Lindsay, 2015, p. 185), relations that heighten their sense of agency over their projects and their ability to articulate and advocate their cause when gaps in support occur. One-to-one mentorship seeks to support writers who may need assistance in finding order or ideational coherence in their draft, and even recognition and affirmation in the complex writing process of long-form work. The goal is to develop skills for self-advocacy and diagnosis, engendering a sense of capability in weathering the ups and downs of their writing projects. Conclusion While there may be some hesitancy to openly share in the Café setting, this tends to dissipate when students begin to feel more comfortable engaging with their peers in structured conversations.
The one-to-one session often bears the stigma of a remedial pedagogy (Schrecker, 2008), particularly for graduate students who may have been identified as "struggling" and pointed in the direction of the Centre. This is generally a problem quickly overcome when students are assured of their place as the main drivers of their own academic formation. The Café gives them the opportunity to voice the anxieties that occupy the day-to-day world of doctoral life and to better distinguish between external pressures and their self-fulfillment expectations. For graduate student writers working under the pressures of high-level programs, the writing centre can serve as a non-intimidating learning space because of the distance from evaluation. Outside of the supervisory committee, writing specialists are well positioned to provide constructive reader-response feedback, helping graduate writers identify potentially overlooked perspectives and opportunities to refine and emphasize their intellectual contributions outside of the sometimes-fraught supervisory relationship. A writing mentorship can be particularly helpful between meetings with supervisors, especially if supervisors are difficult to reach or reluctant to read anything but completed chapters. Writing specialists, however, understand well their role as mentors rather than as supervisors of student work; their teaching strategies place the student writer in control of all authorial decisions. While they defer to the authority of supervisory committee members and program requirements, writing specialists work with students to bridge the gaps between the perceived and real needs that graduate students express.
A case of severe increase of liver enzymes in an ATTRv patient after one year of inotersen treatment Background Inotersen is an antisense oligonucleotide used to treat hereditary transthyretin amyloidosis (ATTRv). The most common drug-related adverse effects (AEs) include thrombocytopenia and glomerulonephritis. Hepatic damage is rare, but liver enzyme monitoring is mandatory. Case report A 70-year-old man with ATTRv (Val30Met) treated with inotersen developed a severe increase of transaminases, with normal bilirubin and cholinesterase levels, that forced us to stop therapy. At the same time, other causes of acquired hepatitis were excluded, and the hypothesis of an inotersen-related hepatic toxicity was supported by the normalization of liver enzymes after 40 days from the drug interruption. Discussion Our case showed that 1-year inotersen treatment can stabilize neurological impairment and even improve quality of life, and suggests that liver enzymes should be carefully monitored in order to avoid inotersen-related hepatic dysfunction. Background Hereditary transthyretin amyloidosis (ATTRv) is a progressive and life-threatening disease due to deposition of amyloid fibrils of mutated transthyretin (TTR). Amyloid deposits occur above all in the peripheral nervous system (PNS) and heart [1,2]. In recent years, TTR gene-silencer molecules have drastically changed the natural history of ATTRv [3]. Inotersen, an antisense oligonucleotide (ASO) inhibitor of the hepatic production of transthyretin protein, is effective in halting progressive disability [4]. The most common drug-related adverse effects (AEs) include thrombocytopenia and glomerulonephritis, which require monitoring of the platelet count every 2 weeks and of renal function every 3 months. Other frequently reported adverse effects are nausea, urinary tract infection, vomiting, diarrhea, fatigue, chills, falls, peripheral edema, and injection site-related pain and reactions [5]. Like other ASOs, inotersen accumulates in hepatic tissue, so liver damage, although uncommon, is possible and makes liver enzyme monitoring mandatory 4 months after the start of treatment and then every year. We report a case of severe increase of liver enzymes in an ATTRv patient treated with inotersen. Case report A 71-year-old man complained of a 1-year history of progressive imbalance during walking associated with unintentional weight loss. No comorbidity was present with the exception of a duodenal ulcer several years before. Nerve conduction study showed a sensory-motor axonal polyneuropathy and TTR genetic testing was positive for the Val30Met mutation. Multidisciplinary evaluation showed hypertrophic cardiomyopathy with positive bone scintigraphy (Perugini score 3), and no laboratory findings of kidney involvement were present. His baseline evaluation showed familial amyloid polyneuropathy (FAP) stage I, Polyneuropathy Disability (PND) score II, total Neuropathy Impairment Score (NIS) equal to 50.75, and Norfolk quality of life questionnaire (Norfolk QOL-DN) equal to 70 points. Before starting therapy with inotersen, blood and urinary examinations were performed in order to exclude any possible contraindication to treatment. Basal platelet count was 201,000/µL (required value > 100,000/µL). Estimated glomerular filtration rate (eGFR) and urinary protein to creatinine ratio (UPCR), the most important renal function tests to perform according to current recommendations, were respectively 95 mL/min/m² (required value > 45 mL/min/m²) and 0.18 g/g (required value < 1 g/g).
Liver enzymes (AST = 12 U/L; ALT = 58 U/L; GGT = 27 U/L) were normal, and a severe hepatic impairment was excluded. The patient started treatment with inotersen in November 2020, and follow-up was performed with neurological evaluations every 6 months and laboratory examinations consisting of platelet count every 2 weeks, UPCR and eGFR every month, and liver enzymes 4 months after the start of therapy and then every 6 months. At neurological evaluation, the NIS score appeared unchanged at the 12-month visit (50.7). FAP stage and PND remained unchanged. Norfolk QOL-DN showed a significant improvement of quality of life after 12 months of treatment (Fig. 1A). Nerve conduction studies showed unremarkable changes at follow-up. Concerning laboratory analyses, the serum transthyretin level soon became suppressed (0.08 g/L) with respect to baseline (0.39 g/L; normal value > 0.20 g/L) (Fig. 1B). A non-significant reduction of platelet count (remaining > 100,000/µL) occurred during treatment that did not necessitate any drug reduction/discontinuation (Fig. 1C), and renal function (UPCR and eGFR) remained stable over time. Liver enzymes slightly increased at the 6-month follow-up during therapy (AST = 35 U/L; ALT = 65 U/L; GGT = 49 U/L), without any signs or symptoms of hepatic damage; however, at 12 months, a severe increase of liver enzymes (AST = 833 U/L; ALT = 665 U/L; GGT = 135 U/L) was observed (Fig. 1D). Bilirubin and cholinesterase levels were both normal. Therefore, since the patient denied taking other liver-harming substances (e.g., alcohol), inotersen therapy was immediately stopped considering a possible drug-related adverse effect. Moreover, on the advice of the gastroenterology consultant, N-acetylcysteine was administered. At the same time, other causes of acquired hepatitis were excluded. Liver echography showed only mild-grade steatosis, and the biliary tract appeared normal. Blood tests for acute hepatic infections (HAV, HBV, HCV, HIV, CMV, EBV, HSV, VZV, toxoplasma, SARS-CoV-2) and autoimmune hepatitis (ANA, ENA, AMA, LKM1) were unremarkable. Eventually, the hypothesis of an inotersen-related hepatic toxicity was supported by the normalization of liver enzymes (AST = 25 U/L; ALT = 26 U/L; GGT = 66 U/L) 45 days after drug discontinuation. At that time, the transthyretin concentration remained suppressed (0.14 g/L), and his neurological conditions were stable. Therefore, to avoid new events of liver damage, our patient began therapy with patisiran, and 6 months later his clinical condition was still stable (FAP stage I, PND score II, total NIS equal to 50.2) with normal hepatic enzymes. Discussion Inotersen is an antisense oligonucleotide, administrable in the first stages (FAP I and II) of ATTRv, which suppresses the hepatic production of transthyretin, halts the progression of PNS damage, and can even improve disease disability and quality of life [4][5][6]. In line with this, our case showed that 1 year of treatment with inotersen was able to keep our patient's neurological condition stable, as demonstrated by the stability of NIS and nerve conduction studies compared to baseline values. Moreover, quality of life appeared to be improved by pharmacological therapy, as documented by the decrease of the Norfolk QOL-DN score. Nevertheless, our patient developed a drug-related severe increase of liver enzymes, so that he had to discontinue inotersen.
During inotersen treatment, our patient did not experience a significant platelet count reduction and had no laboratory findings of glomerulonephritis, which is reassuring with regard to the safety profile of inotersen [6]. Inotersen-related hepatitis is rare but possible, as animal models showed that inotersen dose-dependently accumulates in hypertrophied Kupffer cells of the liver, where, in line with other ASOs, it can elicit an immunological and pro-inflammatory effect and promote histological abnormalities (sinusoidal dilatation, bile duct hyperplasia, individual hepatocellular necrosis, and oval cell hyperplasia) [7]. In keeping with these observations, data from the pivotal phase II/III study (ISIS 420915-CS2) showed an increase in transaminase levels in just six subjects. Among them, one subject had a diagnosis of Gilbert's disease, thought to be causal, while four subjects showed increases which occurred on single occasions and resolved in a short period of time while inotersen was continued. The last patient had a gradual increase of ALT, AST, and ALP during inotersen treatment, lacking alternative explanations, so it was considered probably drug related [4]. Therefore, inotersen is contraindicated in people suffering from severe hepatic impairment. In conclusion, we confirmed inotersen's efficacy in halting disease disability and improving quality of life, and its good safety profile with regard to thrombocytopenia and glomerulonephritis. Unfortunately, our patient developed a severe drug-related liver enzyme increase, which normalized after therapy discontinuation. Interestingly, the serum TTR level remained suppressed 45 days after inotersen interruption. Therefore, we decided to shift to the other TTR gene-silencer therapy, patisiran, since no drug-related liver toxicity has been reported for it, as recently confirmed by the real-life use of patisiran in an Italian cohort of ATTRv patients [8]. The clinical condition after 6 months of follow-up was still stable. Our case highlighted that inotersen-related liver damage, although uncommon as reported in the scientific literature, is a possible event, and liver function should be carefully monitored in order to avoid drug-related hepatic dysfunction. Data availability All data generated or analysed during this study are included in this published article (and its supplementary information files). Declarations Ethical approval Ethical approval was waived by the local Ethics Committee in view of the retrospective nature of the study and because all the procedures performed were part of routine care. Informed consent Informed consent for publication was collected from the patient.
Surgical technique of concomitant laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy Background Laparoscopically assisted vaginal hysterectomy is one of the most frequently performed gynecologic operations, and numerous authors have demonstrated its safety and feasibility. Case presentation We practiced in some selected cases simultaneous laparoscopically assisted vaginal total hysterectomy with bilateral adnexectomy and laparoscopic cholecystectomy using 5 trocars without a uterine manipulator. Previous examinations included abdominal ultrasound, cervix biopsy and CT of the abdomen and pelvis. Our aim was to evaluate the surgical technique of our initial experiences with combined laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy. Conclusions Laparoscopic hysterectomy had a number of advantages over the conventional technique given the underlying associated diseases, postoperative pain, rapid recovery and aesthetic benefits. Since laparoscopically assisted vaginal hysterectomy (LAVH) was first introduced in 1989 by Reich et al., various forms of laparoscopic hysterectomy (LH) such as laparoscopic supracervical hysterectomy or classic intrafascial supracervical hysterectomy, LAVH, and total LH have been performed [1]. Over the last period of time, minimally invasive surgery in the field of gynecologic surgery has moved from an experimental technique to safe and feasible procedures in the hands of highly skilled specialists and to an approach that many would consider standard and preferable for the treatment of many benign gynecological pathologies and selected early-stage gynecological malignancies. Benign pathology (myomas, uterine prolapse) represents over 70% of all hysterectomies [2]. LAVH has become more widely used compared to open abdominal hysterectomy in recent years. It increases operative time but is potentially more cost-effective due to reduced hospital stay. The four-port method with various port-placement systems is used in most LHs. LH without a uterine manipulator is a feasible technique, which in the early stages of cervical cancer prevents tumor cell dissemination [3]. Our aim was to evaluate the surgical technique of our initial experiences with combined laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy. LH offers earlier recovery, less postoperative pain, and a cosmetic advantage when compared to conventional abdominal hysterectomy. Compared with the vaginal access, laparoscopy allows concomitant interventions (appendectomy, cholecystectomy), provides a better anatomical view, permits concomitant procedures such as excision of endometriosis, and allows a wide inspection of the peritoneal cavity in search of other pathologies [4]. Case Presentation Our 52-year-old patient, without previous abdominal surgery, was admitted to our clinic for metrorrhagia during menopause and colicky pain in the right hypochondrium. She underwent preoperative assessment that included a detailed medical history, abdominal and pelvic clinical examination, abdominal and pelvic ultrasonography and computed tomography (CT), Pap smear, and a conization of the uterine cervix with endometrial biopsy.
There was some documented moderate cardiopulmonary morbidity as relative contraindication to laparoscopic surgery, such as: high risk essential arterial hypertension stage II, permanent atrial fibrillation with medium ventricular rate, mitral valve insufficiency grade 2, tricuspid valve insufficiency grade 3, moderate secondary pulmonary hypertension, right major bundle branch block, large mitral stenosis and previous surgery for left breast cancer. After the consultation with a senior member of the anesthesiology team we decided to operate the patient by laparoscopy, after obtaining informed consent from the patient. There was no severe cardiopulmonary disease which contraindicates laparoscopy, defined as a history of cardiac failure, myocardial infarction, unstable angina or pulmonary obstructive disease poorly controlled or contraindicating prolonged Trendelenburg position [5]. The previous cervix biopsy highlighted an evolving low to high dysplasia of the exocervix, squamous metaplasia and high dysplasia on the surface epithelium of the endocervix, chronic ulcerative cervicitis and Papilloma virus infection. Abdominal ultrasound identified a malformation of the gallbladder with multiple hyperechoic images up to 30 mm diameter. Abdomen and pelvis CT was normal. The patient underwent general anesthesia with endotracheal intubation. A Foley catheter was inserted to provide bladder drainage throughout the operation. With the patient in gynecological position, after the pneumoperitoneum was insufflated to a pressure of 12 to 14 mmHg, we inserted 5 trocars: 11 mm optical umbilical trocar, 11 mm suprapubic trocar, 5.5 mm in lateral border of the right rectus abdominis, 11 mm in the same position on the left side for the Ligasure forceps and 5.5 mm under the right costal margin on the medioclavicular line ( Figure 1). The patient was positioned in anti Trendelenburg position and we performed the inspection of the peritoneal cavity. The laparoscope was positioned in left side 11 mm trocar and we used for dissection the 11 mm umbilical trocar, and for the gallbladder exposure we used the 5.5 mm trocar under the right costal margin on the medioclavicular line and the 5.5 mm in lateral border of the right rectus abdominis. We started with the retrograde laparoscopic cholecystectomy (LC) and sub hepatic drainage, then the gallbladder was inserted in an endobag and abandoned near the liver. The patient was then repositioned in Trendelenburg position. We started on the left part, sectioning the adhesions between the sigmoid colon and the utero-ovarian ligament, exposing the round ligament. The uterus is maintained cranially and anteriorly, so as to be opposite the side that will be operated. The LH, without using uterine manipulator, started with the progressive sectioning of the round ligaments, plane to plane, with the Ligasure forceps at about 3 cm from the pelvic wall. It is important to avoid the coagulation of the round ligament near the uterus because of higher bleeding. The ureters were visualized transperitoneally [6] ( Figure 2). In order to preserve the adnexa, the coagulation and section is performed proximal to the fallopian tubes and the utero-ovarian ligament. The dissection continues posteriorly on the broad ligament, taking care not to cut the uterine pedicle's vessels [7]. The visualization of a blue-gray color in the peritoneal leaflet indicates that there is an avascular structure without any anatomical elements behind. 
After cutting the posterior leaflet of the broad ligament, the adnexa remains pedunculated and the ureter is kept away, since it is mobilized along with the peritoneum. The first assistant should secure the adnexa and apply traction in a direction opposite to the lomboovarian ligament [8]. The peritoneum is sectioned with the Ligasure forceps to the utero-sacral ligaments. Then the uterine pedicle is treated also with Ligasure forceps. We repeated the previous steps in the same manner on both sides. The cranial and posterior traction of the uterus was performed in order to expose the bottom of the vesicaluterine sac. With an atraumatic 5.5 mm forceps the assistant gently elevates the peritoneum with the bladder, in order to avoid lesions while dissecting the vesical-uterine space, allowing to open the vesical-vaginal plane and sectioning of the vesical-uterine ligaments [9] (Figure 3). We used the 10 mm Ligasure forceps to coagulate the uterine pedicles, near the uterus. After the identification of the cervix we dissected the proximal third of the vagina in the anatomical space between the bladder and vagina and performed the incision of the anterior and posterior part of the vagina with the electrocautery hook [10]. Before the loss of pneumoperitoneum, a laparoscopic Babcock forceps is inserted into the vagina to extract the uterus with the ovaries and the endobag with the gallbladder. At this moment we ensured the hemostasis and we performed the vaginal suture with separate 0 absorbable sutures, in two layers muco-mucous and sero-serous, through the vaginal route. A laparoscopic control view was conducted after the pneumoperitoneum was recreated and we used the drainage of the Douglas space. The operative time was 125 minutes from the Calot triangle dissection to the vaginal cuff suture. There were no intra or postoperative complications. The patient received prophylactic antibiotherapy after the intervention and had antithrombotic prophylaxis with low-molecular-weight heparin for 1 week beginning from the day of surgery and then with oral anticoagulants and painkillers. The postoperative evolution was uneventful with treatment. The patient was discharged at 7 days after the surgical intervention. The histopathology examination result was "in situ" cervix carcinoma with intraglandular extension, without micro invasion aspects, but with the presence of a breast cancer metastasis and chronic ulcerative lithiasic cholecystitis. Discussion The complications directly related to laparoscopic access include the lesions caused by the insertion of the Veress needle and the trocars (bleeding, intestinal lesion), those related to pneumoperitoneum, incisional hernia of the orifices of the trocars, and the need to convert to conventional surgery. The complications of LH are the same as in case of the conventional hysterectomy [11]. The VALUE and eVALuate study found that the LH doubled the risk of operative complications compared with abdominal hysterectomy. The eVALuate study also compared abdominal hysterectomy (laparoscopic or conventional) and a vaginal hysterectomy, and observed that laparoscopy permitted a higher detection of unexpected pathologies such myomas, endometriosis, and adhesions, when compared with vaginal or abdominal access. The study confirmed some advantages of laparoscopy such as less pain, shorter hospitalization, a faster post-operative recovery, and a better short-term quality of life when compared with laparotomy. 
Downsides included longer surgical time and a higher rate of urinary tract lesions [12,13]. In the literature, a meta-analysis found that LH carried a higher risk of lesions of the bladder and ureters compared with conventional hysterectomy. LH was associated with fewer infections, fewer episodes of fever, less blood loss and a smaller drop of hemoglobin values when compared with conventional hysterectomy. When comparing vaginal and abdominal hysterectomy, the meta-analysis found the same risks. There was no difference in the frequency of fistulas, urinary or sexual dysfunction when comparing the route of access for the hysterectomy. There were no differences in blood loss, the occurrence of pelvic hematoma, vaginal vault infection, urinary tract infection, or thromboembolic events [14]. A study by Donnez et al., including 3190 LHs, showed that there is no increase in the frequency of major complications during LHs performed by surgeons who have passed the learning curve of the procedure. No difference was found in the frequency of ureteral lesions after vaginal hysterectomy (0.33%) and LH (0.25%). Bladder lesions occur in 0.44% of women who underwent a vaginal hysterectomy and in 0.31% of those who underwent an LH [15]. One study that reviewed 7286 hysterectomies regarding the frequency of dehiscence of the vaginal wall revealed a percentage of 4.93% after total LHs, 0.29% in case of vaginal hysterectomies, and 0.12% after abdominal hysterectomies. LH was also associated with decreased postoperative adhesion formation [16]. The American College of Obstetricians and Gynecologists Committee Opinion listed in 2005 the indications for the use of LAVH: adhesiolysis, endometriosis treatment, treatment of leiomyomas, ligation of the infundibulopelvic ligaments to facilitate the excision of ovaries, and the evaluation of the abdominopelvic cavity before the hysterectomy [17]. Korolija et al. have reported that quality of life improves earlier after laparoscopic than open surgery for a number of conditions, including cholelithiasis and uterine disorders that require hysterectomy [18]. A review of 11,662 patients found that LC and LH are associated with statistically significantly lower risks of infection in comparison to conventional surgery [19]. We must remark on our particular surgical solution, a concomitant laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy, which avoided repeated and prolonged conventional surgery that would have carried possible important complications in a patient with many associated diseases. This procedure reduced the length of surgery, hospital stay, and recovery time as well as pain and complications, and represents a major advancement in women's health care. LAVH should be considered a specific surgical approach with its own distinctive indications: cases of vaginal hysterectomy with expected adhesions or endometriosis hindering vaginal surgery, or with planned accompanying adnexal surgery [20]. Conclusions Concomitant laparoscopically assisted vaginal hysterectomy and cholecystectomy, in selected cases, had a series of advantages over conventional surgery regarding the possibility of exploring the abdominal and pelvic cavity, the associated comorbidities, postoperative pain, quick recovery and the aesthetic advantages. Apart from the benefit to the patient, it also appears to be cost-effective both for patients and for hospital services because it decreases morbidity and hospital stay.
Using new vessel-sealing technologies in laparoscopic hysterectomy and laparoscopic cholecystectomy seems to be a time-saving technique and can be safely used in single-session surgery.
Current and Future Applications of Computational Fluid Dynamics in Coronary Artery Disease Hemodynamics interacts with the cellular components of human vessels, influencing function and healthy status. Locally acting hemodynamic forces have been associated—by a steadily increasing amount of scientific evidence—with nucleation and evolution of atherosclerotic plaques in several vascular regions, resulting in the formulation of the ‘hemodynamic risk hypothesis’ of the atherogenesis. At the level of coronary arteries, however, the complexity of both anatomy and physiology made the study of this vascular region particularly difficult for researchers. Developments in computational fluid dynamics (CFD) have recently allowed an accurate modelling of the intracoronary hemodynamics, thus offering physicians a unique tool for the investigation of this crucial human system by means of advanced mathematical simulations. The present review of CFD applications in coronary artery disease was set to concisely offer the medical reader the theoretical foundations of quantitative intravascular hemodynamics—reasoned schematically in the text in its basic (i.e., pressure and velocity) and derived quantities (e.g., fractional flow reserve, wall shear stress and helicity)—along with its current implications in clinical research. Moreover, attention was paid in classifying computational modelling derived from invasive and non-invasive imaging modalities with unbiased remarks on the advantages and limitations of each procedure. Finally, an extensive description—aided by explanatory figures and cross references to recent clinical findings—was presented on the role of near-wall hemodynamics, in terms of shear stress, and of intravascular flow complexity, in terms of helical flow. Introduction Following nucleation, coronary atherosclerotic plaques differentiate into several clinical phenotypes.Whilst most of the plaques will remain uneventful lifelong, a proportion of them will progress into flow-limiting lesions or become unstable, rupture and provoke acute coronary syndromes [1,2].For its epidemiological impact, the understanding of the mechanisms underlying coronary atherosclerotic plaque onset, progression and rupture is of clinical significance. 
Despite extensive scientific efforts, prediction of plaque formation, evolution and vulnerability remains equivocal. Firstly, despite the arguably systemic distribution of vascular inflammation and the systemic effect of cardiovascular risk factors, plaque nucleation appears to be a local phenomenon. In fact, atherosclerotic plaques cluster in preferential anatomic regions (e.g., coronary, carotid or lower-limb arteries) and at preferential vascular sites (e.g., inner curvatures, bifurcations and T-junctions) [3,4]. Secondly, several studies linked plaque composition and inflammatory plaque infiltration footprints with a vulnerable phenotype (see e.g., [2,5,6]). However, the registered elevated senescence rate of those lesions identified as vulnerable has failed so far to justify pre-emptive therapeutic interventions aiming at stabilizing the plaque with an improvement of patients' long-term outcome [6]. Thirdly, increased transcoronary pressure gradients were associated not only with myocardial flow impairment [7] but also with plaque destabilization [8], thus suggesting a harmful role of trans-stenotic forces acting across flow-impairing plaques [3]. Lastly, coronary intervention targeting myocardial perfusion deficits failed to reduce the occurrence of major adverse cardiac events compared to optimal medical treatment [9], indicating the prevention of acute coronary events, rather than the sole treatment of myocardial ischemia, as the more relevant therapeutic target to impact patient outcome. Coronary atherosclerotic plaques experience complex biomechanical forces during each cardiac cycle as the result of the interaction between the pulsatile blood flow and the moving artery geometry [3]. The role of local blood flow-vessel interaction has gained scientific momentum, becoming subject to extensive investigation, especially in relation to vessel remodeling and atherosclerotic plaque evolution in the coronary vascular bed. Given the impossibility of a direct in vivo measurement of those flow-related quantities acting as local biomechanical stimuli at the blood-endothelium interface, increasingly refined and personalized computer models able to realistically capture cardiovascular flows have been developed [10] and applied to study intracoronary hemodynamics [11]. Consequently, hemodynamic factors influencing vascular homeostasis as well as atherosclerotic lesion development have been proposed, hence providing evidence for the so-called 'hemodynamic risk hypothesis' of atherosclerosis [3,12]. According to this hypothesis, local onset and progression of atherosclerosis can be promoted by local blood flow disturbances. However, the integration of computer model-based intracoronary hemodynamic data within clinical practice is mainly hampered by the demanding computational cost of running simulations, especially when compared to current diagnostic imaging acquisitions. This has prevented the use of computational hemodynamics in large clinical studies, which in turn would be required to prove the utility of computer-based hemodynamic modelling, setting up a vicious cycle. Moreover, computer-based hemodynamic modelling is perceived by cardiologists as a technology for which most of them have never been trained, and this represents a barrier to its adoption.
Aims and Structure of the Present Writing The present review of the literature aims to broaden the understanding of computer based hemodynamic modelling and to highlight the opportunities opened by its clinical application in cardiology.More specifically, it offers the non-technical, medical reader (i) a simplified but rigorous explanation of coronary artery hemodynamics, (ii) a broad overview on the applications of computational fluid dynamics (CFD) based modelling to coronary artery hemodynamics, and (iii) the current level of scientific evidence and of implementation of CFD in the clinical practice. After a brief overview on the complexity of coronary artery hemodynamics in Section 3, the principles of CFD application to the human coronary system and the generation of flow simulations are presented in a step-by-step fashion in Section 4. From here, a detailed description of CFD applications concerning the assessment of intracoronary pressure is presented in Section 5, with distinction between methods based on invasive and non-invasive imaging modalities.In this part of the manuscript, ample space is dedicated to the discussion of various existing CFD based tools and their clinical role.Near-wall and intravascular flow patterns will be the main focus of Section 6, where preclinical and early clinical applications will be presented.Finally, limitations of the CFD-based current methodology and future perspectives (including artificial intelligence) will be discussed in Section 7. Features of Intracoronary Hemodynamics Coronary artery hemodynamics can be seen as a system characterized by a remarkable level of complexity.A main source of complexity is the anatomy of the coronary tree, which presents a pronounced tapering (especially in the left coronary vasculature) and follows an asymmetrical fractal dichotomizing pattern [13], where flow distribution at bifurcations is not equal among the two daughter ramifications, namely distal main vessel and side branch [14].Acting as a flow divider, the presence of the coronary bifurcation carina literally splits the incoming flow rate into two asymmetric flows with velocity profiles modelled by the local geometry [15].In this region, the sudden changes in velocity direction and magnitude of the flowing blood lead to complex patterns usually characterized by flow separation and reattachment, with direct effect on endothelial cell distribution, shape and function [16] as well as on circulating cell prolonging their adhesion time to the endothelium [17].In addition, variability in the distribution of diagonal and marginal branches is commonly observed.Tortuous and ectatic vascular segments are frequently encountered [18].Another source of complexity is represented by the dynamic vasomotion autoregulation characterizing both epicardial coronary arteries and smaller arteriolae (i.e., with a cross-sectional diameter <400 µm).In fact, vascular smooth muscle cell contraction is finely tuned by circulating and endothelial-derived vasoactive substances (e.g., nitric oxide and adenosine diphosphate) released in case of changes in metabolic demands or perfusion [3].A further element of complexity is represented by the myocardial mechanics, where the systolic myocardial contraction interacts on coronary vessels causing (i) pulsatile and complex flow patterns with a prominent diastolic component [10], (ii) a cyclic longitudinal vessel shrinkage (which adds on the natural tortuosity of the epicardial vessels), and (iii) a cyclic transversal compression of the 
intramural segments of epicardial arteries [19].Finally, coronary driving pressure strictly depends on the systemic filling status and the cardiac function [20]. Hence, capturing the complexity of coronary artery physiology into a virtual environment for blood flow simulation represents for sure a singular challenge. Basics of Computational Fluid Dynamics Initially developed in the middle of the last century to solve complex engineering problems through the execution of numerical simulations, CFD solves numerically in space and time the physics equations governing fluid motion, thus allowing to mathematically describe and analyze flow fields also in complex geometries [21].To be clearer, the nature of the governing equations describing the time-varying motion of fluids, namely the Navier-Stokes equations (expressing the conservation laws of fluid dynamics), prevents their analytical resolution in case of complex 3D fluid domains.Thus, numerical schemes, typically based on the finite volume or finite element method, are adopted to solve the equations in their discretized form [21]. Applied with high spatial and temporal resolution to the simulation of blood flow patterns, the combination of CFD with clinical imaging rep- resents for cardiologists a powerful technology to quantitatively assess hemodynamic forces acting locally on the endothelium. To obtain robust results, CFD tools require several steps to be appropriately executed, including vascular geometry reconstruction, boundary conditions (BCs) definition, and material properties setting; all these steps concur to determine the reliability of the simulation results [22,23].Fig. 1 summarizes the main steps of patient-specific CFD simulations for the analysis of the coronary artery hemodynamics. Firstly, the patient-specific 3D coronary artery geometry is reconstructed from conventional invasive coronary angiography (ICA), computed tomography coronary angiography (CTCA), or from the fusion of one of the previous imaging modalities with intravascular imaging techniques (i.e., intravascular ultrasound -IVUS or optical coherence tomography -OCT).Clinical imaging is used to obtain information about the vascular segments of interest with resolutions close to 1 mm or even lower, which is of considerable importance for the accurate characterization of local coronary hemodynamics [23].This information will be used to create the CFD model, defining the fluid domain of interest (Fig. 1). Secondly, the so-obtained 3D fluid domain of interest is subdivided into smaller sub-domains called 'elements' (i.e., outputs of the discretization process, also known as meshing process), where the equations of fluid motion are solved in their discrete form.The discretization of the Navier-Stokes equations is necessary since their resolution in complex 3D fluid domains cannot be analytically obtained.By that, a system of non-linear partial differential equations is transformed into a system of algebraic equations that can be solved numerically.Finer grid spacing (i.e., smaller element size) is usually required for complex vascular regions, where larger variation in velocity and/or pressure profiles are expected.On the contrary, larger element size might be used in vascular regions where low spatial variability of the hemodynamic quantities is expected.High spatial resolutions imply computationally ex-pensive simulations, usually requiring the adoption of highperformance computing technology. 
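For reference, the governing equations being discretized in the meshing step above are, under the incompressible assumption commonly adopted for blood, the continuity and Navier-Stokes momentum equations. The statement below is the standard textbook form, not the specific formulation implemented by any particular solver discussed in this review:

\[
\nabla \cdot \mathbf{u} = 0, \qquad
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \nabla \cdot \boldsymbol{\tau},
\]

where \( \mathbf{u} \) is the blood velocity field, \( p \) the pressure, \( \rho \) the (constant) blood density and \( \boldsymbol{\tau} \) the viscous stress tensor, which for a Newtonian fluid with constant viscosity \( \mu \) reduces the viscous term to \( \mu \nabla^{2} \mathbf{u} \).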
Thirdly, the CFD simulation is set up by defining a priori the physical model, the blood material properties (in terms of blood density and viscosity), the initial conditions and BCs contextualizing as much as possible the physical phenomenon, and the solver numerical settings. CFD simulations can be carried out under constant (steady-state) or pulsatile (unsteady-state) flow conditions, depending on the quantities we are interested in (e.g., pressure rather than shear stress profiles). Blood is assumed to be a homogeneous, incompressible fluid with constant density. In most cases, blood viscosity is described through non-Newtonian rheological models able to replicate its shear-thinning behavior (e.g., Carreau or Quemada models) [24]. The use of the Newtonian model is also accepted, since it proved to be likely appropriate for hemodynamics simulation in arterial domains characterized by high shear rates (>50 s⁻¹) and low particle residence time [25]. A proper description of the hemodynamic conditions at the inlet/outlet boundaries of the model, in terms of prescribed values of velocity or pressure, is required for the resolution of the governing equations of fluids (Fig. 2). Inlet/outlet BCs of the coronary artery model can be extracted using subject-specific clinical data. In detail, velocity and/or flow rate data are usually obtained from clinical imaging techniques, such as angiography-based thrombolysis in myocardial infarction (TIMI) frame count [26], as well as in vivo measurement techniques, such as intracoronary Doppler ultrasound [27] and intracoronary continuous thermodilution [28]. Subject-specific pressure data can be derived from in vivo pressure wire measurements [29]. If such data are not available, generic flow/pressure references from the literature can be prescribed. The latter makes the CFD model only weakly tailored to the specific subject, but that does not necessarily imply less reliable simulation results (it depends on the simulated quantities of interest). An alternative strategy to define BCs consists in coupling the vessel inlet and outlets to lumped parameter circuit models (e.g., the Windkessel model), which mimic aortic driving forces and peripheral resistances and compliances, respectively (Fig. 2) [30]. The coronary artery wall can be considered as a deformable structure, by simulating vessel compliance and myocardial contraction-induced vessel deformation during the cardiac cycle (e.g., [31]), or as a rigid structure (e.g., [32,33]). Usually, the latter option is adopted, as it has been demonstrated that cycle-average hemodynamic quantities are less impacted by vessel compliance and deformation [34,35]. Fourthly, once properly set, the CFD simulation is run. The discretized governing equations of fluid motion are iteratively solved to reach a solution for which residual errors in velocity and pressure fall below a certain threshold (preselected by the user based on the accuracy that is considered adequate for numerically solving the equations).
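As a concrete illustration of the shear-thinning rheology mentioned in the set-up step above, the short Python sketch below evaluates a Carreau-type viscosity law over a range of shear rates. The numerical parameters are commonly quoted literature estimates for blood and are used here purely for illustration; they are not taken from, and do not reproduce, the models used in the cited studies.

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau viscosity model, returning viscosity in Pa*s.

    mu0 / mu_inf : zero-shear and infinite-shear viscosities (Pa*s)
    lam          : relaxation time (s)
    n            : power-law index
    All parameter values are illustrative literature estimates for blood,
    not patient-specific or study-specific data.
    """
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# The apparent viscosity plateaus towards mu_inf at the high shear rates
# (>50 1/s) at which the Newtonian approximation is usually considered acceptable.
for gamma_dot in (1.0, 10.0, 50.0, 100.0, 500.0):  # shear rates in 1/s
    print(f"{gamma_dot:6.1f} 1/s -> {carreau_viscosity(gamma_dot) * 1e3:5.2f} mPa*s")
```

In a full simulation this constitutive relation would simply be handed to the solver as the fluid's viscosity law; evaluated on its own, it shows why the constant-viscosity (Newtonian) simplification becomes defensible once shear rates are high.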
At last, simulation results are post-processed to extract the hemodynamic quantities and indices of interest. Both intracoronary pressures and flows can be quantified, describing their behavior within the streaming medium and along the blood-vessel interface (i.e., near-wall hemodynamic quantities). Accordingly, in the following sections clinical applications of CFD will be addressed separately for the computational simulation of coronary pressure (Section 5) and coronary flow patterns (Section 6). Additionally, the main limitations of CFD application in coronary arteries will be discussed in Section 7. CFD Based Intracoronary Pressure Distribution Evaluation Plaque infiltration and the resulting inward vascular remodeling (according to Glagov's hypothesis [36]) impact vessel conductance and generate intravascular pressure gradients [3,37]. In turn, increased vascular resistance impairs coronary flow downstream of the stenosis. This can be quantitatively assessed through invasive measurement in terms of fractional flow reserve (FFR), the ratio between hyperemic distal coronary and aortic pressures: a flow impairment higher than 20% during hyperemia (which translates into an FFR value lower than 0.80) was associated with myocardial ischemia and with adverse clinical outcomes, thus justifying coronary interventions aiming at resolving flow-impairing coronary lesions [8,38]. The (assumed) linear relationship between coronary flow and pressure under hyperemic conditions was empirically verified by the evidence of constant microvascular resistance during maximal pharmacological hyperemia, thus allowing the measurement of epicardial pressure gradients without the interference of variations in microvascular pressure [39,40]. More recently, non-hyperemic pressure ratio (NHPR) indices have been developed and successfully validated against FFR [41]. Similar to FFR, NHPRs measure the status of epicardial vessel conductance, but without the need for administration of hyperemic agents. This is possible given the phasic behavior of microvascular resistances along the cardiac cycle and their stabilization in specific phases of diastole [42,43]. Both FFR and NHPRs represent valid solutions to assess coronary perfusion in relationship to the status of epicardial impedance and are highly recommended by international guidelines for the functional assessment of intermediate-grade coronary lesions (typically around 40-90% stenosis) [44]. However, given their invasiveness and the perceived additional procedural time and costs, clinical uptake of intracoronary pressure measurement remains low (<15%) [45] and highly variable among healthcare systems [46]. To overcome the limitations hampering the diffusion of intravascular measurements, alternative solutions exploring the derivation of intracoronary pressure profiles in a pressure-wire-free manner (e.g., from non-invasive imaging modalities or from the integration of coronary imaging with CFD) have been proposed. Intracoronary Pressure Evaluation Based on Invasive Coronary Angiography 3D vessel reconstructions based on two or more orthogonal coronary angiograms were implemented to compute the so-called 'virtual' FFR (vFFR) [47]. Pioneering the field, Morris and colleagues developed and validated an effective CFD solution for angiography-based vFFR called VIRTUheart™ [48]. This CFD solution follows the general workflow summarized in Fig. 1.
More in detail, firstly a 3D geometry of the diseased coronary artery is reconstructed from two angiograms acquired as close to 90 degrees apart as possible. Secondly, the vessel geometry is discretized and the CFD model is set up within the commercial software CFX (Ansys Inc, Canonsburg, PA, USA) by applying generic BCs. In this regard, a population-based, generalized pulsatile pressure waveform is prescribed at the inlet. Windkessel models, with values of resistances and compliance averaged over the available patients' data, are applied at the outlets. Lastly, CFD simulations are run and the vFFR is quantified. The VIRTUheart™ CFD solver reproduced physiological lesion significance with excellent accuracy (>90%) [48,49]. However, the transient CFD simulations of this tool resulted in long processing times (>24 hours). Hence, 'faster' solutions based on steady-state CFD simulations for the identification of the parameters of simplified fluid dynamics mathematical models (i.e., lumped parameter models) were developed, with a significant reduction of computational time (<4 min) [50]. Furthermore, a recent update to the software allowed the virtual simulation of stenting and accurate post-stenting FFR prediction (Fig. 3) [49]. The software currently remains for research use only [51]. Differently from the time-consuming CFD approach, several methods rely on a simplification of the governing equations of fluid motion to describe the hemodynamic features within the coronary artery. The analytical solution of those simpler equations (e.g., Bernoulli's and Poiseuille's equations) may provide the needed hemodynamic quantities for fast vFFR computation. Three software solutions based on these methods are currently commercially available in Europe, namely the Cardiovascular Angiographic Analysis System for Vessel CAAS-vFFR (Pie Medical, Maastricht, The Netherlands), the quantitative flow reserve QFR (Medis Medical Imaging, Leiden, The Netherlands and Pulse Medical Technology Inc., Shanghai, China) and the FFRangio (CathWorks Ltd., Kfar-saba, Israel) [52][53][54]. In addition to the CE mark, QFR has also received approval from the US Food and Drug Administration (FDA). Table 1 (Ref. [52][53][54][55][56][57][58][59][60][61][62][63][64][65][66]) summarizes available clinical evidence for the software mentioned above. From the technical viewpoint, CAAS-vFFR uses angiography-based 3D vascular models without reconstruction of side branches [53]. The user is requested to provide the invasively measured aortic root pressure [53]. Next, to compute the CAAS-vFFR, the pressure drop along the vessel segment of interest under hyperemic conditions is instantaneously calculated by solving a simplified fluid dynamics equation accounting for pressure losses due to viscous friction of the blood flowing through the narrowed vessel and pressure losses due to flow separation downstream from the narrowing, with empirically determined coefficients [67].
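The general shape of such reduced-order formulations can be conveyed with a deliberately simplified sketch. Below, the trans-stenotic pressure drop is written as the sum of a term linear in velocity (viscous losses) and a term quadratic in velocity (flow-separation losses), and converted into a virtual distal-to-aortic pressure ratio. The coefficients a and b are arbitrary illustrative numbers chosen only to show the mechanics of the calculation; they are not the validated, geometry-dependent coefficients used by CAAS-vFFR, QFR or any other product discussed here.

```python
def pressure_drop_mmhg(v, a=8.0, b=18.0):
    """Toy reduced-order stenosis model: dP = a*v + b*v**2, in mmHg.

    v    : mean hyperemic flow velocity (m/s)
    a, b : illustrative loss coefficients; in a real tool they would be
           derived from the reconstructed 3D stenosis geometry.
    """
    return a * v + b * v ** 2

def vffr_estimate(pa_mmhg, v):
    """Virtual FFR as the distal-to-aortic pressure ratio, Pd/Pa = (Pa - dP)/Pa."""
    return (pa_mmhg - pressure_drop_mmhg(v)) / pa_mmhg

pa = 95.0  # mean aortic pressure, mmHg (illustrative)
for v in (0.35, 0.70, 1.00):  # assumed hyperemic velocities, m/s
    print(f"v = {v:.2f} m/s -> vFFR ~ {vffr_estimate(pa, v):.2f}")
```

With these made-up coefficients the same anatomical lesion crosses the conventional 0.80 threshold only at the highest assumed velocity, which is one way of seeing why the flow (or velocity) estimate fed into these models, whether fixed, contrast-derived or adenosine-derived, matters for the final value.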
Similarly, 3D side branch-free vessel reconstructions are employed by QFR for the vFFR computation [57].The algorithm automatically divides the reconstructed vessel into equally spaced consecutive segments and estimate the pressure drop for each segment as a quadratic function of the hyperemic flow velocity, with coefficients dependent on the stenosis geometry.By assuming a fixed mean hyperemic coronary flow velocity of 0.35 m/s [68], the algorithm generates an initial output, called 'fixed-QFR' (fQFR).To improve patient-specificity, the software allows applying the TIMI frame counting analysis-as related to vessel flow velocity [69]-and to obtain the contrast-QFR (cQFR) at non-hyperemic conditions or the adenosine-QFR (aQFR) after intravenous administration of adenosine [68].cQFR was shown to be superior to both fQFR and aQFR [57]. Differently from CAAS-vFFR and QFR, vessel geometrical reconstructions for FFRangio include bifurcations with side branches with diameter ≥0.5 mm [52].The coronary tree is generated rapidly thanks to automatic vessel and lesion detection combined with correction feedback from the user.Based upon Poiseuille's law, flow analysis is executed at each coronary segment and junction, and the overall resistance of the generated arterial network is determined.Hence, FFRangio values are inferred as the contribution of each narrowing to the total resistance [52]. Intracoronary Pressure Evaluation Based on Non-Invasive Imaging Modalities The application of CFD to non-invasive imaging modalities to predict blood flow and lesion-specific FFR preceded the development of angiography-derived FFR software (see Table 1).Taylor and colleagues [70] provided the first example of virtual fractional flow reserve derivation from CTCA, the so-called FFR CT (HeartFlow, Mountain View, CA, USA).This tool has received both the CE mark and FDA approval, and it is currently commercially available in Japan.The software solution is based on volumetric CTCA data, morphometric laws and CFD analysis, as detailed in [70,71].In short, a patient-specific 3D model of the aortic root and coronary tree reconstructed from CTCA is coupled with lumped parameter models representing heart, systemic circulation, and coronary microcirculation.To define the flow-split between the coronary branches, firstly the total coronary flow under resting con-dition is derived from the myocardial volume, estimated from CTCA.Secondly, the total coronary resistance is computed considering the total coronary flow and the mean aortic pressure.Lastly, unique resistance values are prescribed to the lumped parameter models of coronary microcirculation downstream of the epicardial arteries relying on vessel diameter-based morphometric laws (e.g., Murray's law [72], according to which the resistance to flow of a coronary branch is inversely related to the coronary artery diameter).To simulate hyperemic condition, the effect of adenosine on reducing the peripheral resistance of the coronary microcirculation is modelled by setting the total coronary resistance as 24% of the resting value [73] and assuming that the hyperemic microcirculatory resistance distal to a stenosis is the same as that of a healthy coronary artery [74].The CFD simulation is performed centrally by the company and FFR CT results are generated with a supercomputer within few hours (Fig. 4). 
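Two of the building blocks mentioned above, Poiseuille-type segment resistances and diameter-based morphometric flow splitting at bifurcations, can be sketched in a few lines. This is a didactic simplification with illustrative vessel dimensions and an exponent of 3 (the classical Murray's law value); it is not the proprietary FFRangio or FFR CT pipeline.

```python
import math

def poiseuille_resistance(mu, length, radius):
    """Hydraulic resistance of a straight cylindrical segment: R = 8*mu*L/(pi*r**4)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

def morphometric_split(q_parent, d1, d2, exponent=3.0):
    """Split parent flow between two daughter branches in proportion to d**exponent."""
    w1, w2 = d1 ** exponent, d2 ** exponent
    return q_parent * w1 / (w1 + w2), q_parent * w2 / (w1 + w2)

mu = 3.5e-3                    # blood viscosity, Pa*s (illustrative)
radius, length = 1.5e-3, 0.02  # segment radius and length, m (illustrative)
print(f"segment resistance ~ {poiseuille_resistance(mu, length, radius):.2e} Pa*s/m^3")

q_parent = 3.0e-6              # parent vessel flow, m^3/s (3 mL/s, illustrative)
q1, q2 = morphometric_split(q_parent, d1=3.5e-3, d2=3.0e-3)  # daughter diameters, m
print(f"daughter flows ~ {q1 * 1e6:.2f} and {q2 * 1e6:.2f} mL/s")
```

Chaining such segment resistances along a reconstructed tree, and distributing outlet flows with morphometric rules of this kind, is conceptually how lumped descriptions of the coronary network are assembled before, or instead of, a full 3D simulation.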
Clinical evidence proved a higher per-vessel diagnostic performance for FFRCT in direct comparison with CTCA, single-photon emission computed tomography (SPECT), and positron emission tomography (PET) for ischemia diagnosis (AUC 0.94, 0.83, 0.70, 0.87, respectively; p < 0.01 in all cases) [75]. In a large multicentric real-world patient cohort, the implementation of FFRCT led to a modified treatment recommendation in two-thirds of subjects as compared to CCTA alone and was associated with fewer negative findings at ICA [64]. Furthermore, at one year, a baseline FFRCT below 0.80 showed a trend (p = 0.06) towards a higher occurrence of adverse cardiovascular events [65]. In patients with an intermediate pre-test probability of CAD, FFRCT-guided care decision making resulted in lower financial costs, while holding similar clinical outcomes and quality-of-life indices [63]. Finally, an FFRCT-based PCI planner with simulation of the (predicted) post-PCI FFR was recently clinically validated against invasive post-PCI FFR, showing a high level of agreement (mean difference: 0.02 ± 0.07 FFR units) [66].

More recently, alternative solutions based on magnetic resonance angiography (MRA) have also been proposed. Contrast-enhanced, ECG-gated 3T magnetic resonance scanners were used to produce 3D coronary images with a resolution of 0.64 × 0.64 × 0.75 mm³ [76]. Moreover, phase-contrast magnetic resonance imaging (PC-MRI) allowed the determination of coronary flow waveforms under rest and stress conditions [77], while self-gating principles improved vessel recognition by correcting for physiologic motion [78]. The obtained patient-specific coronary flow values were applied as inflow BCs to determine FFR based on CFD simulations [79]. This technology is currently undergoing further clinical investigation.
Blood flow velocity relates to general physical laws governing the balance among fluid forces. Ideally, undisturbed blood flow crossing a straight coronary segment presents the general characteristics of laminarity, with co-axial flow velocity vectors pointing in the same direction and with magnitude decreasing towards the vessel walls, where the blood interacts with the endothelial surface. In this condition, the blood velocity profile is axial-symmetric at each vessel cross-section. However, any deviation from a straight vessel geometry markedly impacts coronary flow patterns. In particular, the presence of curvature displaces the location of the maximum velocity within the vessel, as a consequence of the centrifugal force that the vessel curvature generates on the streaming blood. The deflection of the peak velocity from the centerline (as in the classical Poiseuille flow) towards the outer side of the curved vessel results from the balance between the centrifugal force, the viscous forces exchanged between the wall and the blood, and the pressure gradient that arises radially over the vessel cross-section; this balance leads to the establishment of so-called secondary flows on the vessel cross-section. The composition of the two blood flow components, the one along the main flow direction (through-plane component) and the secondary flows (in-plane component), produces fully 3D blood flow patterns characterized by helical motion. The described phenomenon is exacerbated by the presence of bifurcations and side branches. As a result, flow disturbances are generated close to the internal and external vascular walls facing the carina [80], where blood flow separation and reattachment to the vessel wall, stagnation and recirculation may occur. Such flow disturbances are recognized as aggravating flow events related to atherosclerotic disease onset and development [81,82]. In other cases, vascular remodeling may occur, disrupting the smooth interface between blood flow and endothelium. Typically represented by coronary atherosclerotic plaques, these anatomical elements shape the local hemodynamics, imparting multidirectionality and flow disturbances. Depending on the level of luminal protrusion, the local hemodynamics may be altered not only near the wall but also in the bulk region of the vessel.

CFD Based Intracoronary Flow Patterns

Differently from intracoronary pressure gradients, an invasive assessment of the velocity vector fields and of the shear forces generated by the interaction between the viscous flowing blood and the coronary artery wall is at the moment elusive [23]. Personalized computational simulations have the potential to bridge this gap, providing a reliable quantification of the velocity field and shear forces after verification, validation, and uncertainty quantification of the coronary models [83]. Currently, the application of CFD simulations for the characterization of flow patterns in human coronary arteries remains a subject of research. Commercial software solutions for clinical use are still not available.

In Sections 6.1 and 6.2, computationally derived biomechanical quantities describing the near-wall and intravascular flow patterns are discussed. Focus is centered on their role in the understanding of atherosclerotic pathophysiology and on their clinical application.

Near-Wall Flow Patterns

The interaction between the viscous blood and the vessel wall imparts at the blood-endothelium interface a state of stress, i.e., a force per unit surface. Analytically, the vector resultant of those frictional forces applied to a given endothelial unit area, with orientation tangential to the luminal surface, is defined as the wall shear stress (WSS), measured in N/m², dyn/cm² or, most commonly, in Pascal (Pa; 1 Pa = 1 N/m² = 10 dyn/cm²). Although several orders of magnitude lower than the tensile forces exerted on vascular structures by the pulsatile blood (in the kPa range) [3], WSS has a valuable biological significance [23,84,85], triggering the endothelial mechanosensory machinery that regulates endothelial function and homeostasis [12,86].

In regions of disturbed shear forces, such as near arterial bifurcations, long-term exposure to low WSS values (typically <1 Pa) [23] has been associated with activation of the proinflammatory cellular cascade as well as with enhanced lipid and macrophage infiltration [87], hence leading to wall remodeling, fibrous cap thinning, and subintimal ischemia, which stimulates the local proliferation of the vasa vasorum, with risk of intraplaque hemorrhage [88]. Clinically, luminal areas exposed to low WSS have been associated with regional endothelial dysfunction [89] and with plaque progression requiring revascularization (PREDICTION study) [6]. Moreover, low WSS has provided incremental risk stratification of untreated coronary lesions beyond measures of plaque burden, luminal surface area and plaque morphology (PROSPECT study) [32].
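At its core, the WSS defined above is the viscous tangential traction mu * du/dn evaluated at the wall. The toy snippet below estimates it from synthetic near-wall velocity samples; in a real CFD post-processing step the velocity gradient comes from the discretized solution rather than from a hand-made profile.

```python
# Bare-bones illustration of the WSS definition above: the tangential
# viscous traction mu * du/dn at the wall, estimated here from synthetic
# near-wall velocity samples along the wall normal.
import numpy as np

mu = 0.0035                                      # dynamic viscosity [Pa*s]
dn = np.array([0.0, 10e-6, 20e-6, 30e-6])        # distance from wall [m]
u_t = np.array([0.0, 0.004, 0.0079, 0.0117])     # tangential velocity [m/s]

# One-sided slope at the wall from a linear fit of the near-wall samples.
dudn = np.polyfit(dn, u_t, 1)[0]
wss = mu * dudn
print(f"WSS ~ {wss:.2f} Pa ({wss * 10:.1f} dyn/cm^2)")
```

The result, on the order of 1 Pa, falls in the physiological coronary range quoted in the text.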
Conversely, high WSS magnitude values (typically >5 Pa) have been linked with plaque vulnerability and rupture [33,90,91]. Longitudinal and cross-sectional studies on human coronary arteries based on vessel-specific CFD simulations have reported increases in plaque necrotic core and calcium, increased strain, development of expansive remodeling, and the presence of intraplaque hemorrhage, large necrotic core and the napkin-ring sign in areas exposed to high WSS [88], as well as an incremental value of high WSS for predicting myocardial infarction over FFR alone [33,91].

Notably, areas of low and high WSS may be contiguous. Low WSS surface areas are typically located at inner curvatures, at the waist of bifurcations or downstream of a stenosis (Fig. 5A). Conversely, high WSS surface areas are located at outer curvatures, at the flow divider of bifurcations, and upstream of or at the lesion throat (Fig. 5A) [6,12]. For this reason, the interpretation of WSS values in absolute terms only could be misleading, and their contextualization in a proper physiological context is mandatory. As a consequence, in addition to the traditional time-averaged WSS (TAWSS, namely the WSS magnitude averaged along the cardiac cycle) [23], several WSS-based quantities have been introduced and tested, aiming at quantifying different features of the WSS profile, with particular attention to its multidirectionality and to the variability of its magnitude along the cardiac cycle (Fig. 6) [92-94]. For instance, WSS-based quantities were proposed describing (i) the degree of flow reversal (oscillatory shear index, OSI) [95], (ii) the near-wall solute residence time (relative residence time, RRT) [96], (iii) the multidirectional character of the disturbed blood flow, through the quantification of the cycle-averaged WSS component orthogonal to the mean WSS vector direction (transverse WSS, transWSS) [97], or (iv) the variability of the contraction/expansion action of the endothelial shear forces along the cardiac cycle (topological shear variation index, TSVI) [98] (Fig. 5B). High OSI (≥0.15) was associated with a vulnerable plaque phenotype with lipid accumulation and inflammatory cell infiltration [99]. A positive relation emerged between RRT and atherosclerotic plaque calcification and necrosis [100]. TransWSS was related to changes of plaque composition over time in human coronary arteries [100]. Finally, high TSVI (>40.5 m⁻¹) identified mild coronary lesions that were future sites of myocardial infarction within 5 years [90]. Mechanistically, this may be linked to the altered shrinkage and widening of intercellular gaps in the case of an amplified contraction/expansion action on the endothelium [101], as well as to higher fibrous cap fragility, accelerated disease progression, and plaque rupture [102].
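The descriptors listed above reduce, at each luminal point, to simple integrals of the WSS vector history over the cardiac cycle. The sketch below computes TAWSS, OSI, RRT and transWSS for one point using the standard literature formulas; the WSS time series is synthetic, and TSVI is omitted because it additionally requires the divergence of the normalized WSS field over the whole luminal surface.

```python
# Minimal numpy sketch of the classical near-wall descriptors named above,
# computed at one luminal point from a time series of WSS vectors over a
# cardiac cycle. The WSS history is synthetic, standing in for CFD output.
import numpy as np

T = 0.8                                    # cardiac cycle length [s]
t = np.linspace(0.0, T, 200, endpoint=False)

# Synthetic pulsatile WSS vector [Pa] at one wall point (illustrative).
wss = np.stack([1.2 + 0.9 * np.sin(2 * np.pi * t / T),   # axial component
                0.3 * np.sin(4 * np.pi * t / T),          # secondary comp.
                np.zeros_like(t)], axis=1)

mag = np.linalg.norm(wss, axis=1)
tawss = np.trapz(mag, t) / T                             # time-avg magnitude
mean_vec = np.trapz(wss, t, axis=0) / T                  # time-avg vector
osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / tawss)     # flow reversal
rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                  # residence time

# transWSS: cycle-average of the WSS component orthogonal to the mean WSS
# direction in the tangent plane; the wall normal n is assumed to be +z.
n = np.array([0.0, 0.0, 1.0])
p = np.cross(n, mean_vec / np.linalg.norm(mean_vec))
trans_wss = np.trapz(np.abs(wss @ p), t) / T

print(f"TAWSS={tawss:.2f} Pa  OSI={osi:.3f}  RRT={rrt:.2f} 1/Pa  "
      f"transWSS={trans_wss:.2f} Pa")
```

A purely unidirectional WSS history gives OSI = 0 and transWSS = 0; full reversal drives OSI towards its upper bound of 0.5 and RRT towards infinity, which is why RRT is read as a marker of near-wall stagnation.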
Intravascular Flow Patterns

Besides the role of WSS, distinguishable intravascular flow features have also been suggested to markedly impact the natural history of atherosclerotic disease. Previous studies have clearly revealed that (i) arterial blood flow, under physiological conditions, is helical and (ii) the associated helicity intensity is instrumental in suppressing arterial flow disturbances in ostensibly healthy arteries, thereby being potentially protective against atherosclerotic lesions at the early stage [103-109].

The analysis of arterial helical flow patterns can be provided by using the local normalized helicity (LNH) [108]. This hemodynamic quantity, defined as the cosine of the angle between the local velocity and vorticity vectors, allows for the identification of the rotating direction of helical fluid structures based on its sign (positive: right-handed; negative: left-handed) (Figs. 5C, 7). Recent evidence, based on the visualization of intravascular LNH iso-surfaces, has pointed out that helical flow is a feature characterizing the physiological intravascular hemodynamics of healthy coronary arteries [103,110]. The topology of the coronary helical flow structures strongly depends on the vessel geometry (i.e., curvature, torsion, bifurcations, presence of stenosis), which may affect their generation, transport, and intensity along the arterial length [111].

A quantitative characterization of helical flow in terms of strength, size and relative rotational direction can be obtained through several helicity-based descriptors, known as h-indices (Fig. 7) [92-94,105]. In detail, the cycle-averaged helicity (h1) and the helicity intensity (h2) quantify the net amount and the intensity of helical flow, respectively, while the signed (h3) and unsigned (h4) helical rotation balance measure the prevalence (through the sign of h3) or only the strength of the relative rotations of helical flow structures, respectively. In particular, among the helicity-based descriptors, h2 emerged as instrumental in stabilizing blood flow in coronary arteries, imparting low WSS multidirectionality and minimizing the endothelial surface exposed to low, atherogenic WSS [103]. More specifically, a non-linear decreasing trend relating h2 to the coronary luminal surface exposed to low WSS was found, indicating that the higher the helicity intensity, the smaller the coronary endothelial region facing proatherogenic WSS [103]. As confirmation, recent findings revealed the existence of a clear association between helical flow intensity and coronary atherosclerotic plaque initiation and growth [104]. The latter study (i) confirmed the role of helical blood flow features in conditioning the WSS luminal distribution, which in turn interacts with the pathophysiology of atherosclerotic plaque formation, and (ii) suggested that helical flow intensity is protective against coronary atherosclerotic plaque onset/progression and may serve as a biomechanical predictor of it [104]. The evidence of the physiological significance of helical blood flow, which already emerged from CFD studies on swine coronary arteries [103,104], is expected to translate directly to human coronary disease, given the demonstrated applicability of swine-specific computational models to investigate the hemodynamics-related risk of coronary atherosclerosis in humans [110].
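For concreteness, the snippet below evaluates LNH and the four h-indices on synthetic velocity and vorticity samples, using uniform-weight averages as a stand-in for the proper time/volume integrals of a CFD solution. The field values are random and purely illustrative.

```python
# Compact sketch of the helicity-based quantities defined above, evaluated
# on synthetic sample points standing in for the velocity and vorticity
# fields of a CFD solution (uniform weights replace the volume/time
# integrals of the formal definitions).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
v = rng.normal(size=(n, 3))                  # velocity samples (synthetic)
omega = v + 0.5 * rng.normal(size=(n, 3))    # vorticity, partly aligned

hk = np.einsum("ij,ij->i", v, omega)         # pointwise helicity v . omega

# Local normalized helicity: cosine of the angle between v and omega.
lnh = hk / (np.linalg.norm(v, axis=1) * np.linalg.norm(omega, axis=1))

h1 = hk.mean()            # cycle/volume-averaged helicity (net amount)
h2 = np.abs(hk).mean()    # helicity intensity
h3 = h1 / h2              # signed rotation balance, in [-1, 1]
h4 = abs(h1) / h2         # unsigned rotation balance, in [0, 1]

right = (lnh > 0).mean()  # fraction of right-handed rotation
print(f"h1={h1:.3f}  h2={h2:.3f}  h3={h3:.3f}  h4={h4:.3f}  "
      f"right-handed fraction={right:.2f}")
```

Because the synthetic vorticity is constructed to be partly aligned with the velocity, h3 comes out positive, mimicking a net right-handed rotation; balanced counter-rotating structures would drive h3 towards zero while h2 stays finite.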
All these aspects, together with the clinical feasibility of helical pattern quantification, at least in large arteries, by means of four-dimensional (4D) flow PC-MRI, have stimulated interest in the use of helical flow as a potential surrogate marker of atherosclerotic risk at the early stage. The in vivo measurement of intravascular fluid quantities such as helical flow, which are less sensitive to noise, lumen edge definition, and spatial and temporal resolution than in vivo WSS assessment [112,113], could provide a novel surrogate determinant of plaque vulnerability. In the near future, advances in clinical imaging (e.g., applying 4D flow PC-MRI sequences properly developed to measure coronary blood flow) [114] and online CFD analysis are indeed expected to allow the non-invasive, in vivo prediction of coronary atherosclerosis or plaque rupture risk based upon helicity-based descriptors [12,115].

Limitations of Current CFD Simulations and Future Perspectives

Despite recent developments, intracoronary computational hemodynamics simulations still present several criticalities hampering their clinical usability.

Firstly, considering that 'vessel geometry shapes the flow' [12], a reliable personalized CFD simulation requires an accurate 3D reconstruction of the coronary artery lumen. Hence, inaccuracies in the vascular tracing, inadequate spatial resolution or blooming artifacts (especially for CTCA-based modalities) might affect the reconstructed vascular geometry [116]. This is even more critical in the case of bifurcations, where the daughter vessels lie on different spatial planes and geometrical reconstructions based on two ICA projections could therefore be inaccurate [117]. Moreover, additional manual corrections are often required for the contouring of the polygon of confluence of coronary bifurcations [118]. Ideally, an accurate 3D vessel reconstruction could be achieved using intravascular imaging techniques, such as IVUS or OCT. However, invasiveness, the limitation of measuring one vessel at a time, and costs have motivated the exploration of alternative imaging modalities, namely CCTA and ICA, to perform CFD simulations for clinical applications. While adopted for intracoronary pressure gradient evaluation, the use of CCTA in CFD modelling for the characterization of flow patterns and shear forces is limited because of the low image resolution and the presence of artifacts. Nevertheless, the potential utility of CCTA-derived CFD for the identification of high-risk plaques was successfully validated in the EMERALD study [119]. Recently, angiography-based CFD simulations were also applied in human coronary arteries, with promising results for the quantification of WSS patterns [90,91]. Validation against IVUS and OCT, used as ground truth, is in progress [116,120].
Secondly, the definition of the BCs, which highly impacts the final results of the CFD simulations, is challenging and presents a high degree of uncertainty, because intracoronary flow measurements are seldom executed in the clinical routine and are often characterized by low accuracy and repeatability, thus requiring the use of theoretical assumptions and/or idealizations [121-124]. When clinical measurements are available, a patient-specific approach to defining the inlet/outlet BCs should be preferred to generalized/estimated ones, which might result in an unrealistic profiling of the flow disturbances, especially near side branches and curvatures, where atherosclerotic plaques preferentially develop [123,124]. Furthermore, the heterogeneity in their definition can often preclude the comparison of results from different studies.

Thirdly, the computational time needed to execute CFD simulations varies according to model complexity, spatio-temporal discretization, tracing length and computer characteristics, precluding in most cases the 'online' execution of a CFD simulation within the time window of a diagnostic coronary angiogram. Of note, the computational time adds up to the time needed to upload the imaging data and to reconstruct the 3D coronary artery model (e.g., in the case of angiographic data, to upload two angiographic projections, complete the vessel tracing and obtain the 3D vessel model). Therefore, a higher level of automation is needed to move CFD simulations from the lab to clinical practice. Next-generation CFD software is expected to produce reliable coronary hemodynamics simulations within a few minutes (or even instantaneously) and with minimal operator interference. In this context, a recent study has shown the clinical use of a prototype commercial software (CAAS Workstation WSS tool, Pie Medical, Maastricht, The Netherlands) able to provide transient hemodynamic results for mild coronary artery lesions in terms of WSS-based descriptors, using angiographic data and CFD modelling, in less than 15 minutes [90]. Furthermore, ad hoc programmed artificial intelligence, and in particular machine learning algorithms, can be trained to predict flow components directly from the coronary imaging and vessel geometry [125-127], hence bypassing the time-demanding computation of instantaneous intracoronary flow and pressure. This task can be achieved by adopting several different strategies: among them, we mention the use of physics-informed neural networks which, by integrating the mathematical equations governing blood flow with very few patient-specific measurement points within a flexible deep learning framework, have already been demonstrated to improve WSS quantification in diseased arterial flows [128]. Moreover, cloud CFD applications may diminish the computational time by allowing the remote use of high-performance computing clusters and, if associated with a centralized core laboratory, could favor the quality of the analysis while reducing inter-operator variability. On the one hand, all this will facilitate the clinical application of CFD simulations. On the other hand, it will push the boundaries of intracoronary biomechanics simulations even further. In fact, recent modelling strategies are combining plaque structural stress and strain with hemodynamic shear stress, thus providing a more comprehensive analysis of the local biomechanics exerted on plaques or vascular components, essential for understanding plaque vulnerability and for predicting the results of coronary interventions [129].
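As a sanity-check illustration of the physics-informed idea mentioned above, the toy sketch below trains a small network (PyTorch assumed available) to recover the steady Poiseuille velocity profile in a straight tube by penalizing the residual of the axisymmetric momentum equation together with the no-slip condition. This is a deliberately simplified 1D stand-in for the full Navier-Stokes problem, and the training hyperparameters may need tuning.

```python
# Highly simplified physics-informed neural network (PINN) sketch: a tiny
# network learns the steady axial velocity u(r) of Poiseuille pipe flow by
# minimizing the momentum-equation residual plus a no-slip boundary term.
# A toy 1D illustration only; real coronary PINNs solve Navier-Stokes.
import torch

MU, DPDX, R = 0.0035, -500.0, 1.5e-3   # viscosity, pressure grad., radius

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    r = torch.rand(128, 1) * R
    r.requires_grad_(True)
    u = net(r / R)                                  # radius scaled to [0, 1]
    du = torch.autograd.grad(u.sum(), r, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), r, create_graph=True)[0]
    # Axisymmetric momentum residual: mu * (u'' + u'/r) - dp/dx = 0
    res = MU * (d2u + du / (r + 1e-9)) - DPDX
    wall = net(torch.ones(1, 1))                    # no-slip: u(R) = 0
    loss = (res ** 2).mean() + 1e4 * (wall ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

u0 = net(torch.zeros(1, 1)).item()
u_exact = -DPDX * R ** 2 / (4 * MU)                 # analytic centerline value
print(f"centerline u: PINN {u0:.3f} vs exact {u_exact:.3f} m/s")
```

The appeal for the clinical setting is that, once trained, such surrogates evaluate in milliseconds, replacing the per-patient cost of a full CFD solve with a single forward pass.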
Lastly, in order to justify the routine clinical application of CFD simulations in catheterization laboratories, more robust clinical evidence for CFD results is advocated. To this aim, the execution of randomized trials is required to confirm the relationship between CFD results (with particular reference to the near-wall and intravascular hemodynamic quantities) and clinical outcomes, and ultimately to define the role of CFD simulations in clinical practice. Additional technologies, such as augmented reality and more immersive user interfaces, might also play a role towards the clinical use of these modalities, offering physicians a more intuitive reading of intracoronary flow specifics [130].

Conclusions

CFD models of coronary hemodynamics allow a far deeper understanding of the critical relationship between intracoronary flow, vascular anatomy and plaque composition. In fact, the interplay between biology (patient risk profile, genetics and congenital vascular anatomy), intravascular pressure gradients and specific flow patterns has proven effects on atherogenesis, plaque composition and destabilization, as outlined in the 'hemodynamic risk hypothesis' [3,12]. This basic knowledge has stimulated CFD-based clinical applications, providing physicians with reliable non-invasive tools for the estimation of intracoronary pressure gradients as well as for the quantitative assessment of the intracoronary shear forces acting on the endothelium and of their link with functional plaque characterization and vulnerability assessment, to be used for predictive purposes. Moreover, CFD applications might also entail procedural planning (e.g., post-PCI FFRCT) and stent scaffold design.

Overcoming the current technical challenges with modern technologies will allow for quicker and more reliable computational solutions which, validated in the proper clinical settings, will ultimately favor a wider use of physiology-based lesion evaluation in clinical practice, with expected benefits for patients and financial gains.

Fig. 2. Explanatory strategies of computational fluid dynamics (CFD) boundary conditions (BCs) that can be prescribed to a diseased right coronary artery model. (A) In/out flow direction panel: the dark blue arrows display the direction of blood flow at each inlet/outlet boundary cross-section of the vessel model. (B) Measured flow rates panel: blood flow rate waveforms extracted from imaging or in vivo measurement techniques are prescribed at each inlet/outlet cross-section of the vessel model. The measured blood flow rates applied as BCs are shown. (C) Measured inflow + lumped models panel: BCs are defined by coupling measured clinical data, available at the inflow section, with lumped parameter circuit models describing the peripheral vascular resistance and compliance. The diseased right coronary artery belongs to a patient recruited during the RELATE clinical trial (ClinicalTrials.gov Identifier: NCT04048005).
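Panel (C) of Fig. 2 refers to lumped parameter (Windkessel-type) outlet models; the toy integration of a three-element RCR circuit below shows the basic mechanics. Parameter values are generic order-of-magnitude assumptions, not data from the RELATE trial.

```python
# Toy sketch of a three-element Windkessel (RCR) outlet model: proximal
# resistance Rp, compliance C and distal resistance Rd, integrated with
# forward Euler against a prescribed pulsatile inflow. Values are generic.
import math

Rp, C, Rd = 8.0e8, 1.0e-10, 1.2e10   # [Pa*s/m^3], [m^3/Pa], [Pa*s/m^3]
T, dt = 0.8, 1e-4                    # cardiac cycle length and step [s]

def inflow(t):
    """Half-sine systolic inflow pulse (illustrative), in m^3/s."""
    tc = t % T
    return 4e-6 * math.sin(math.pi * tc / 0.3) if tc < 0.3 else 0.0

p_node = 0.0     # pressure at the C/Rd node, relative to venous [Pa]
p_trace = []
for i in range(int(10 * T / dt)):    # run 10 cycles to wash out transients
    q = inflow(i * dt)
    # Node balance: C * dp/dt = q_in - p/Rd   (forward Euler step)
    p_node += dt / C * (q - p_node / Rd)
    p_trace.append(p_node + q * Rp)  # inlet pressure adds the drop over Rp

print(f"peak inlet pressure ~ {max(p_trace) / 133.322:.0f} mmHg (relative)")
```

With these assumed parameters the mean pressure settles near the physiological coronary range, which is exactly the behaviour such outlet models are tuned to reproduce when coupled to a 3D CFD domain.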
Fig. 3. Explanatory case showing the typical output obtained through the VIRTUheart™ system for the computation of the vFFR. (A) A 66-year-old man presented with chronic stable angina. The left anterior descending (LAD) coronary artery had a severe mid-vessel stenosis (arrow). The measured FFR between the proximal and distal points (dashed line) was 0.77. (B) Angiograms were used to model the vFFR by using the VIRTUheart™ system; the vFFR was calculated to be 0.75 over the same vessel segment. This is displayed in false-color yellow, the straight yellow line connecting the same 2 points between which the vFFR was calculated, exactly matching the 2 spots marked by the dashed line in (A). (C) After implantation of a 2.75 × 18 mm stent at the stenosis, the measured FFR was 0.88 over the same segment. (D) Virtual coronary intervention using the VIRTUheart system was then used to implant a virtual 2.75 × 18 mm stent, and the recalculated vFFR was 0.88, corresponding to the green line connecting the 2 points. Reprinted with permission from Gosling RC, Morris PD, Silva Soto DA, Lawford PV, Hose DR, Gunn JP. Virtual Coronary Intervention: A Treatment Planning Tool Based Upon the Angiogram. JACC Cardiovasc Imaging. 2019; 12(5): 865-872. doi: 10.1016/j.jcmg.2018.01.019 [49] (http://creativecommons.org/licenses/by/4.0/).

Fig. 4. Two case examples showing the results of the HeartFlow CFD-based tool for the computation of the virtual fractional flow reserve from CTCA (i.e., the FFRCT). The examples highlight the benefit of FFRCT in differentiating the functional significance of coronary vessels with anatomically obstructive stenoses. (A) CCTA demonstrated significant coronary artery disease with a stenosis >50% in the left anterior descending (LAD) artery. This was confirmed by quantitative angiography with a stenosis of 57%. The CFD model based on the CTCA revealed a hemodynamically significant lesion with an FFRCT in the distal LAD of 0.62. The measured FFR during invasive angiography was 0.65. (B) CCTA demonstrated a stenosis >50% in the mid right coronary artery (RCA). The stenosis was confirmed by quantitative angiography with a stenosis of 62%. The computed FFRCT was 0.87, indicating a non-functionally significant stenosis. This was confirmed by a measured FFR of 0.86. Reprinted with permission from Zarins CK, Taylor CA, Min JK. Computed fractional flow reserve (FFRCT) derived from coronary CT angiography. Journal of Cardiovascular Translational Research. 2013; 6(5): 708-714. doi: 10.1007/s12265-013-9498-4 [71] (http://creativecommons.org/licenses/by/4.0/).
Fig. 5. Luminal maps of (A) time-averaged wall shear stress (TAWSS), (B) topological shear variation index (TSVI) and (C) cycle-averaged local normalized helicity (LNH) for an explanatory diseased right coronary artery model. As expected, high TAWSS values characterize the stenotic region of the coronary artery, while low TAWSS values are present downstream of the stenosis (panel A). As for the TSVI, a high variability in the WSS contraction/expansion action at the endothelium during the cardiac cycle clearly emerges downstream of the stenosis, at the bifurcation region and at the side branch (panel B). Counter-rotating helical flow structures develop in the intravascular region of the coronary model reported here (panel C). Right-/left-handed helical blood patterns are identified by positive/negative LNH values and displayed in red/blue, respectively. The diseased right coronary artery belongs to a patient recruited during the RELATE clinical trial (ClinicalTrials.gov Identifier: NCT04048005).

Fig. 6. Near-wall hemodynamic descriptors. (A) Example of the WSS vector acting on a generic point of the luminal surface (black arrow) of a diseased right coronary artery. At the same point, the unit vector n normal to the vessel wall is reported (orange arrow). (B) Explanatory maps of the WSS vector field (black arrows) with the identified contraction/expansion regions at the luminal surface of the same artery coloured in blue/red, respectively. The diseased right coronary artery belongs to a patient recruited during the RELATE clinical trial (ClinicalTrials.gov Identifier: NCT04048005). The table at the bottom reports the WSS-based descriptors of disturbed flow. For each descriptor, a short caption together with the mathematical formulation is reported. T is the cardiac cycle; WSSu is the normalized WSS vector field.

Fig. 7. Intravascular hemodynamic descriptors. Example of the helical-shaped trajectory described by an element of blood moving within an explanatory model of a right coronary artery. This diseased artery belongs to a patient recruited during the RELATE clinical trial (ClinicalTrials.gov Identifier: NCT04048005). γ is the angle between the local velocity (v) and vorticity (ω) vectors (black arrows). The table at the bottom reports the helical flow-based descriptors commonly used to characterize intracoronary hemodynamics. For each descriptor, a short caption together with the mathematical formulation is reported. T is the cardiac cycle; V is the whole arterial volume.

Table 1. Continued. 3DQCA, Three-dimensional Quantitative Coronary Angiography; ADVANCE, Assessing Diagnostic Value of Non-invasive FFRCT in Coronary Care; AUC, Area Under the Curve; BA, Bland-Altman analysis; BC, boundary condition; CCS, Chronic Coronary Syndrome; CFD, Computational Fluid Dynamics; CAAS-vFFR, Cardiovascular Angiographic Analysis System for vessel FFR; CAD, Coronary Artery Disease; cQFR, Contrast Quantitative Flow Ratio; CTCA, Computed Tomography Coronary Angiography; DeFACTO, DEtermination of Fractional flow reserve by Anatomic Computed TOmographic Angiography; DISCOVER-FLOW, Diagnosis of Ischemia-Causing Stenoses Obtained Via Noninvasive Fractional Flow Reserve; DS, Diameter Stenosis; FAST, Fast Assessment of STenosis Severity; FAST-FFR, FFRangio Accuracy versus Standard FFR; FAVOR, Functional Assessment by Various Flow Reconstructions; FFR, Fractional Flow Reserve; FFRCT, Computed Tomography-derived Fractional Flow Reserve; HR, Hazard Ratio; ICA, Invasive Coronary Angiography; NPV, Negative Predictive Value; PLATFORM, Prospective LongitudinAl Trial of FFRct, Outcome and Resource IMpacts; QFR, Quantitative Flow Ratio; QoL, Quality of Life; RCT, Randomized Controlled Trial; vFFR, virtual Fractional Flow Reserve.
Effect of a Combination of Moderate-Temperature Heat Treatment and Subsequent Wax Impregnation on Wood Hygroscopicity, Dimensional Stability, and Mechanical Properties

Wood is an environmentally friendly material, but some of its natural properties limit its wide application. To study the effect of a combination of heat treatment (HT) and wax impregnation (WI) on wood hygroscopicity, dimensional stability, and mechanical properties, samples of Pterocarpus macrocarpus Kurz wood were subjected to HT at a moderate temperature of 120 °C and at a high temperature of 180 °C, for a 4 h duration. Subsequently, half of the 120 °C HT samples were treated with WI at 90 °C. The results showed that 180 °C HT and WI significantly decreased the capacity for adsorption and liquid water uptake and the associated swelling of the wood, with WI producing the largest reduction. The effect of 120 °C HT was significant only in decreasing the adsorption capacity and the swelling upon liquid water uptake. The bending strength (MOR) of the wood decreased only after 180 °C HT; 120 °C/4h HT and WI had no significant influence on MOR. The bending stiffness (MOE) increased significantly after 180 °C HT and WI, while 120 °C/4h HT had no significant influence on MOE. Therefore, the combination with moderate-temperature HT can act synergistically in the improvement of certain aspects of wood properties, such as the capacity for water adsorption and liquid water uptake. WI effectively improved wood hygroscopicity, dimensional stability, and mechanical properties.

Introduction

Wood is widely used in buildings and wood products due to its special properties, such as a high strength-to-weight ratio, environmental sustainability, low production energy, and renewability. However, there are also undesirable properties, such as poor durability and low dimensional stability, which limit its utilization and reduce its service life and value [1,2]. These disadvantages of wood are all associated with the water present in wood cells [3,4]. In order to overcome wood's natural shortcomings, modifications are performed to improve its comprehensive quality [5].

Hygroscopicity is the capacity of wood to react to the moisture content of the air by absorbing or releasing water vapor. The hygroscopicity of wood is mainly attributed to the hemicelluloses, which are amorphous and readily hydrolyzed by hydroxyl groups [6,7]. The durability, dimensional stability, and hydrophobicity of wood can therefore be improved by reducing wood hygroscopicity.

Raw Materials

Air-dried P. macrocarpus Kurz boards measuring 1000 mm (L) × 100 mm (T) × 25 mm (R) were collected from Degoo Furniture Co., Ltd., Xianyou, China. They were all heartwood boards with an average moisture content (MC) of 10 ± 1% (GB/T 1931-2009 national standard) [32]. The dimensions of the test specimens were 300 mm (L) × 20 mm (T) × 20 mm (R) and 20 mm (L) × 20 mm (T) × 20 mm (R). Each dimension comprised 28 specimens that were free of knots and other defects. Commercial microcrystalline wax was used for the impregnation test, with a melting point of about 60 °C, molecular weight of 500-800 g/mol, refractive index of 1.435-1.445, kinematic viscosity (99 °C) of 9.2-25.0 mm²/s, and density of 0.80-0.92 g/mL.

Heat Treatment and Wax Impregnation

All test specimens (300 mm (L) and 20 mm (L)) were first oven-dried at 103 ± 2 °C to constant mass and then randomly divided into two HT groups and one control group.
For the 120 °C HT, there were 14 specimens of each dimension; for the 180 °C HT and the control group, there were 7 specimens of each dimension. HT under vacuum pressure has some advantages, as reported in a previous study [33]. In this study, HT was therefore applied at two temperature levels (120 and 180 °C) under 13.4 kPa in a vacuum heating chamber (HJ-ZK60, Dongguan Hengjun Instrument Equipment Co., Ltd., Dongguan, China). As shown in Table 1, the HT processes were composed of preheating, HT, and cooling phases. After cooling, half of the 120 °C HT specimens were fully impregnated in a steel tank using liquefied wax at 90 °C until the weight became constant after 48 h. After impregnation, the remaining wax was wiped off and the specimens were put into sealed bags and cooled at a constant temperature of 30 °C for 1 h.

Mass Loss and Weight Percentage Gain

The specimens of 20 mm (L) × 20 mm (T) × 20 mm (R) were used for the measurement of the mass loss (ML) owing to HT and the weight percentage gain (WPG) due to WI. They were calculated using Equations (1) and (2), and each value is the average of seven replicates. The mass was measured using an electronic balance (JA21002, Shanghai Liangping Instrument and Meter Co., Ltd., Shanghai, China; 1200 g/1 mg):

ML (%) = (M_o - M_h)/M_o × 100    (1)

WPG (%) = (M_w - M_h)/M_h × 100    (2)

where M_o is the oven-dried mass of the specimens before HT, M_h is the oven-dried mass of the specimens after HT, and M_w is the mass of the specimens after WI.

Moisture Adsorption and Liquid Water Uptake

A total of 28 specimens measuring 20 mm (L) × 20 mm (T) × 20 mm (R) were used for the moisture adsorption and liquid water uptake tests. For the measurement of 120 °C HT, 120 °C HT + WI, 180 °C HT, and control, each group contained seven replicates. The specimens were first oven-dried at 103 ± 2 °C to constant mass and then conditioned in a climate chamber at 20 °C and 65% RH to reach the equilibrium moisture content (EMC) according to the GB/T 1931-2009 national standard. After the moisture adsorption test, the same specimens were oven-dried at 103 ± 2 °C to constant mass, weighed again, and then immersed in distilled water for the liquid water uptake test until constant weight. The results of the moisture adsorption and liquid water uptake tests are presented as the moisture content, calculated using Equation (3), which is used to assess the capacity of the wood for adsorption and liquid water uptake:

MC_e (%) = (M_e - M_o)/M_o × 100    (3)

where MC_e is the moisture content of the specimens after the conditioning or water uptake tests, M_e represents the mass of the specimens after conditioning or water uptake, and M_o denotes the mass of the oven-dried specimens.

Dimensional Stability Measurement

The dimensional stability was estimated based on swelling tests according to the GB/T 1931-2009 national standard. The swelling of the 20 mm (L) × 20 mm (T) × 20 mm (R) specimens was measured using a digital caliper (CD-20CPX, Mitutoyo, Japan, 0-200 mm/0.01 mm) during the moisture adsorption and liquid water uptake tests. For the measurement of 120 °C HT, 120 °C HT + WI, 180 °C HT, and control, each group contained seven replicates. The dimensions and weights of the treated and control groups were measured before and after the tests. The swelling was calculated using Equation (4):

S (%) = (L_w - L_o)/L_o × 100    (4)

where S is the swelling in the tangential or radial direction, L_w is the dimension after moisture adsorption or liquid water uptake, and L_o represents the dimension in the oven-dry state.
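The four quantities in Equations (1)-(4) are simple ratio measures and are straightforward to compute; a short Python sketch with made-up specimen values is given below for clarity.

```python
# Straightforward sketch of the mass- and dimension-based metrics in
# Equations (1)-(4), computed for one specimen; all values are illustrative.
def mass_loss(m_oven_before, m_oven_after):
    """ML [%] due to heat treatment, Eq. (1)."""
    return (m_oven_before - m_oven_after) / m_oven_before * 100.0

def weight_percentage_gain(m_after_ht, m_after_wax):
    """WPG [%] due to wax impregnation, Eq. (2)."""
    return (m_after_wax - m_after_ht) / m_after_ht * 100.0

def moisture_content(m_conditioned, m_oven_dry):
    """MC_e [%] after conditioning or water uptake, Eq. (3)."""
    return (m_conditioned - m_oven_dry) / m_oven_dry * 100.0

def swelling(dim_wet, dim_oven_dry):
    """Tangential or radial swelling S [%], Eq. (4)."""
    return (dim_wet - dim_oven_dry) / dim_oven_dry * 100.0

# Example specimen (masses in g, dimensions in mm; assumed numbers):
print(f"ML   = {mass_loss(6.40, 6.33):.2f} %")
print(f"WPG  = {weight_percentage_gain(6.33, 6.89):.2f} %")
print(f"MC_e = {moisture_content(7.02, 6.33):.2f} %")
print(f"S_T  = {swelling(20.61, 20.00):.2f} %")
```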
Mechanical Property Testing

Bending Strength and Modulus of Elasticity

The mechanical properties of the treated and control wood were determined according to the GB/T 1936.1-2009 national standard [34]. Seven replicates of each group were tested for bending strength, or modulus of rupture (MOR), and modulus of elasticity (MOE) using a 3-point bending test machine (Shimadzu, Japan). Prior to the test, all specimens of 300 mm (L) × 20 mm (T) × 20 mm (R) were conditioned to constant weight in a climate chamber at 20 °C and 65% RH. The average value of the seven replicates was used for comparison.

Statistical Analysis

Data were analyzed using analysis of variance (ANOVA) in SPSS to assess the effects of the various treatments on the hygroscopicity, dimensional stability, and mechanical properties of the treated wood. The differences between the mean values of each treatment level were further separated using Duncan's multiple range test at p < 0.05.
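A hedged sketch of this statistical workflow in Python is shown below. SciPy does not ship Duncan's multiple range test (commonly run in SPSS or R), so recent SciPy's Tukey HSD is used here as a stand-in post-hoc test, and all group values are fabricated for illustration only.

```python
# Hedged sketch of the statistical workflow described above: a one-way
# ANOVA across treatment groups followed by a pairwise post-hoc test.
# Tukey's HSD stands in for Duncan's multiple range test, which SciPy
# does not provide. The MC_e values below are fabricated for illustration.
from scipy import stats

control   = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3]
ht_120    = [10.4, 10.6, 10.1, 10.5, 10.2, 10.7, 10.3]
ht_180    = [ 8.5,  8.2,  8.7,  8.4,  8.6,  8.3,  8.5]
ht_120_wi = [ 4.3,  4.5,  4.2,  4.4,  4.6,  4.1,  4.4]

f_stat, p = stats.f_oneway(control, ht_120, ht_180, ht_120_wi)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2e}")

post_hoc = stats.tukey_hsd(control, ht_120, ht_180, ht_120_wi)
print(post_hoc)   # pairwise group differences at the 95% level
```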
Mass Loss, Weight Percentage Gain, and Moisture Content of Conditioned Wood

The ML of the specimens after heat treatment, along with the WPG of the 120 °C HT group after wax impregnation, are shown in Table 2. The ML was slight after the moderate-temperature HT and became severe when the temperature rose to 180 °C. The ML of the 180 °C HT group was 2.7 times that of the 120 °C HT group, which indicates that the ML was influenced significantly by temperature. After 48 h of wax impregnation, the weight of the 120 °C HT group had increased by 8.82%, which demonstrates that wax was successfully impregnated into the wood.

Moisture Adsorption and Liquid Water Uptake

To present the effects of HT and WI on the hygroscopicity of wood, the MC_e of the specimens after long-term water vapor sorption at 20 °C and 65% RH was determined, which represents the EMC of the wood. Meanwhile, the MC_e of the specimens after long-term water immersion was also determined. The MC_e values of the control and treated groups are shown in Figure 1 and Table 2. Figure 1a indicates that the MC_e of the samples after long-term exposure decreased significantly after HT and HT + WI. Compared with the control group, the MC_e of the samples after 120 °C/4h HT, 180 °C/4h HT, and 120 °C/4h HT + WI decreased by 14.3%, 30.2%, and 64.1%, respectively. HT decreased the MC_e, and the reductions in MC_e were greater for the more severe heat treatment, which is similar to previous reports [35]. However, the 120 °C/4h HT + WI samples exhibited the lowest MC_e, which decreased by almost half as compared to the samples after 180 °C/4h HT. This indicates that wax impregnation can further decrease the moisture adsorption capacity. In contrast to the control group, the contribution ratios of 120 °C/4h HT and of WI to the reduction of MC_e were 14.3% and 49.8%, respectively. The effect of wax impregnation on the reduction of moisture adsorption capacity was thus 3.5 times that of 120 °C/4h HT. This shows that WI played a significant role in further decreasing the moisture adsorption capacity of the wood. The reduction of MC_e in the current work was clearly larger than that of a previous report [19]; the main reason is quite likely the impregnation of a large quantity of wax into the wood.

After water immersion, the moisture content of the treated groups showed a similar tendency. The MC_e decreased significantly, except in the 120 °C/4h HT group. Compared to the control group, the MC_e of 180 °C/4h HT and 120 °C/4h HT + WI decreased by 13.6% and 21.8%, respectively. Although the MC_e after 120 °C/4h HT decreased only slightly, the 120 °C/4h HT + WI group showed a remarkably lower moisture content than the control group and the 120 °C/4h HT group (Figure 1b). These findings suggest that wax impregnation further decreased the capacity of the wood for liquid water uptake. The results may be attributed to the impregnation of wax. On the one hand, the wax solution first fills the cell lumens fully or partly, which reduces the space available in the impregnated wood structure for water uptake. On the other hand, the wax impregnates the wood and forms a coating attached to the wood cell walls, which blocks the migration path of the free water owing to the hydrophobic property of wax [30]. (Figure 1 caption note: bars with different letters present a significant difference (p < 0.05) according to Duncan's multiple range tests; there are no significant differences between groups containing the same letter, and a bar labeled ab does not differ significantly from groups labeled a or b.)
Effect of Heat Treatment and Wax Impregnation on Swelling of Wood

The swelling of the tangential and radial dimensions after adsorption and liquid water uptake is shown in Figure 2. After water vapor sorption and immersion, the tangential and radial swelling of the treated groups was less than that of the control group and presented a similar tendency. The adsorption swelling (Figure 2a) of the treated groups presented a significant difference (p < 0.05) compared to the control group, except for the 120 °C/4h HT group. These results suggest that moderate-temperature HT provides little improvement in the adsorption dimensional stability of wood; however, high temperature or WI improves the adsorption dimensional stability greatly. The adsorption swelling of the wood after WI was the smallest of all treated groups, indicating that this group has the best dimensional stability. This shows that the effect of WI on the improvement of dimensional stability was greater than that of 180 °C/4h HT and indicates that WI is a suitable treatment for dimensional stability modification. Wood shows obvious anisotropy in its swelling properties, and tangential swelling is generally twice as great as radial swelling. The ratio of adsorption swelling between the tangential (T) and radial (R) directions presents a decreasing trend after 180 °C/4h HT and 120 °C/4h HT + WI, and the T and R swelling became almost the same for the wood after 120 °C/4h HT + WI. These results indicate that WI strongly impacts the swelling properties, resulting in improved dimensional stability.

In contrast to the adsorption swelling, the water uptake (immersion) swelling of all treated groups decreased significantly (p < 0.05) compared with the control group. The immersion swelling of the 120 °C/4h HT + WI group was the lowest of all treated groups. This suggests that both HT and WI improved the immersion dimensional stability of the wood. Comparing the swelling of 120 °C/4h HT and 120 °C/4h HT + WI in Figure 2b, the swelling was greatly reduced after WI, showing that WI has a significant effect on dimensional stability. The reduction in swelling after WI is closely associated with the long hydrophobic chains of the wax and with the bulking effect of the wax [35]. The water uptake capacity of the wood was weakened after both HT and WI treatment, resulting in an improved deformation resistance. However, the ratios of immersion swelling between the T and R directions did not change after HT and WI treatment, indicating that the reduction of immersion swelling was the same in both the tangential and radial directions. (Figure 2 caption note: bars with different letters present a significant difference (p < 0.05) according to Duncan's multiple range tests; groups containing the same letter do not differ significantly, a bar labeled ab does not differ from groups labeled a or b, and a bar labeled AB does not differ from groups labeled A or B.)
Effect of Heat Treatment and Wax Impregnation on Bending Strength and Bending Stiffness

The bending strength (MOR) and bending stiffness (MOE) of the control and treated groups are shown in Figure 3. No significant difference (p < 0.05) among the control, 120 °C/4h HT, and 120 °C/4h HT + WI specimens was determined. Only the high-temperature (180 °C) HT decreased the MOR in the current work, which is in agreement with previous studies [18,27,31,33]. These results demonstrate that moderate-temperature HT and moderate-temperature HT + WI had little influence on the bending strength of the wood. However, the MOE of the treated groups increased significantly, except for the 120 °C/4h HT group. Several previous studies [36,37] showed that the MOE of heat-treated wood under lower treatment severity was higher than that of the control, which is in agreement with the current study.
The reduction of MOR can be attributed to the degradation of hemicellulose and to the evaporation of extractives during heat treatment [38], while MOE loss is highly dependent on the density of the treated wood. The increase of MOE shown by the WI group may be attributed to the impregnation of wax: the wax fills the cell lumens fully or partly, increasing the density of the treated wood, and therefore the MOE of the wood after wax impregnation was improved. (Figure 3 caption note: bars with different letters indicate a significant difference (p < 0.05) according to Duncan's multiple range tests; groups containing the same letter do not differ significantly, and a bar labeled ac does not differ from groups containing a or c.)

Conclusions

Certain properties of wood were improved via the synergistic combination of moderate-temperature HT and WI. WI effectively improved the hygroscopicity, dimensional stability, and mechanical properties as compared with the moderate- and high-temperature HT, as well as with the combination of the two. The capacity and swelling of the wood under adsorption and liquid water uptake conditions decreased significantly after 180 °C HT and WI, with the largest reductions for the WI wood. The 120 °C HT group exhibited significant decreases only in the capacity for adsorption and the swelling upon liquid water uptake.
The ratio of adsorption swelling between the tangential (T) and radial (R) directions decreased after 180 °C/4h HT and 120 °C/4h HT + WI, while the ratio of immersion swelling remained almost constant for all treated wood. There was no significant influence of 120 °C/4h HT or WI on the bending strength (MOR) of the wood, which decreased significantly only after 180 °C HT; 180 °C HT and WI improved the bending stiffness (MOE) significantly, while no significant influence of 120 °C HT on MOE was observed. Therefore, WI played a significant role in the improvement of wood hygroscopicity, dimensional stability, and mechanical properties. The combination with moderate-temperature HT acted synergistically only in certain aspects of the wood properties, such as the capacity for water adsorption and liquid water uptake. The combination of moderate-temperature heat treatment and wax impregnation is therefore well suited to the modification of indoor wood products.
Overexpression of the Synthetic Chimeric Native-T-phylloplanin-GFP Genes Optimized for Monocot and Dicot Plants Renders Enhanced Resistance to Blue Mold Disease in Tobacco (N. tabacum L.)

To enhance the natural plant resistance and to evaluate the antimicrobial properties of phylloplanin against blue mold, we have expressed a synthetic chimeric native-phylloplanin-GFP protein fusion in transgenic Nicotiana tabacum cv. KY14, a cultivar that is highly susceptible to infection by Peronospora tabacina. The coding sequence of the tobacco phylloplanin gene, along with its native signal peptide, was fused with GFP at the carboxy terminus. The synthetic chimeric gene (native-phylloplanin-GFP) was placed between the modified Mirabilis mosaic virus full-length transcript promoter with duplicated enhancer domains and the terminator sequence from the rbcSE9 gene. The chimeric gene, expressed in transgenic tobacco, was stably inherited in successive plant generations, as shown by molecular characterization, GFP quantification, and confocal fluorescence microscopy. Transgenic plants were morphologically similar to wild-type plants and showed no deleterious effects due to transgene expression. Blue mold-sensitivity assays of the tobacco lines were performed by applying P. tabacina sporangia to the upper leaf surface. Transgenic lines expressing the fused synthetic native-phylloplanin-GFP gene in the leaf apoplast showed resistance to infection. Our results demonstrate that in vivo expression of a synthetic fused native-phylloplanin-GFP gene in plants can potentially achieve natural protection against microbial plant pathogens, including P. tabacina in tobacco.

Introduction

Downy mildew disease of cultivated tobacco (Nicotiana tabacum L.), commonly known as blue mold, is caused by the obligately biotrophic oomycete pathogen Peronospora tabacina D.B. Adam. Blue mold was first reported in tobacco-growing areas around the end of the 19th century in Australia and Argentina [1]. In 1921, it was first seen in tobacco seedbeds in the United States, in the state of Georgia. In 1979, blue mold epidemics resulted in annual crop losses exceeding $250 million in the eastern United States and Canada [1]. During periods of cool and wet weather, P. tabacina can complete its lifecycle in less than 10 days, and the disease becomes polycyclic, resulting in a continuous production of infective asexual sporangia (up to 10⁶/cm² of infected leaf tissue), which can cause widespread blue mold epidemics [1]. Pathogenic fungi and fungal-like organisms cause about 20% of annual crop losses worldwide [3]. To control plant diseases, large amounts of chemically synthesized fungicides are used at present in agriculture, in both developed and developing countries. Chemical treatment using the systemic fungicide metalaxyl can effectively control blue mold disease on tobacco; however, the long-term use of chemical pesticides can result in the development of resistance in pathogens and can also have an adverse impact on human health and the environment. It has been reported that North and Central American isolates of P. tabacina have developed resistance to metalaxyl [4]. Such agricultural practice causes environmental pollution and hazardous effects on animal and human health. To avoid or minimize the use of chemically synthesized fungicides, we need to develop alternative fungicides that are benign to the environment and to human health.
Hence, the employment of host plant resistance is the most economic and environmentally sustainable means for controlling blue mold. Several Nicotiana species of Australian origin, such as N. debneyi, N. exigua, N. goodspeedii, N. maritime, N. megalosiphon, N. rotundifolia, a noncultivated tobacco species N. megalosiphon from Cuba, and the wild species N. langsdorffii from South America possess genetic resistance to blue mold [1,5]. Although host resistance to P. tabacina infection is low in N. tabacum, the species still possesses several defense responses against P. tabacina [6,7]. Identification and isolation of resistance gene(s) from these species will provide valuable tools for developing transgene-mediated cultivars of tobacco that can resist blue mold infection. Different types of plant proteins possessing antifungal properties have been identified and studied extensively in the past two decades. Examples are chitinases and chitinase-like proteins with antifungal activity towards Fusarium oxysporum and Rhizoctonia solani [8,9], cyclophilin-like proteins with antifungal activities [10,11], defensins and defensinlike peptides that inhibit growth of F. oxysporum and M. arachidicola [12], Asparagus deoxyribonuclease that exhibits antifungal activity against Botrytis cinerea [13], ginkbilobin that has strong antifungal action against B. cinerea, Coprinus comatus, F. oxysporum, and R. solani [14], and glucanases that are active against Alternaria longipes, Rhizoctonia cerealis, V. dahlia, and Fusarium oxysporum [15]. Topical application of certain antifungal peptides/proteins on plants appears to provide a first-line-of-defense/resistance towards a number of pathogenic fungi and fungal-like organisms. Plants producing such antifungal peptides/proteins in surface tissues (intracellular or intercellular spaces or the apoplast) might provide endogenous resistance or tolerance to invading fungi. About 30% of vascular plants possess glandular secreting trichomes; these can include tall glandular secreting trichomes (TGSTs) and short glandular trichomes (SGTs). Glandular head cells of TGSTs have been shown to synthesize diterpenoids and sugar esters [16]. In tobacco, short glandular trichomes (SGTs) synthesize unique proteins known as Tphylloplanins [17,18]. T-phylloplanin proteins secreted onto the surfaces of tobacco leaves have antimicrobial properties and have been shown to inhibit blue mold disease caused by P. tabacina [17][18][19]. Application of tobacco phylloplanin to turfgrass also inhibits gray leaf and brown patch diseases caused by the ascomycete Pyricularia oryzae and the basidiomycete Rhizoctonia solani [20]. Recently, it has been demonstrated that the mature tobacco phylloplanin gene without its own signal peptide fused with GFP and targeted to the apoplasm increases resistance to blue mold disease in tobacco [21]. In the present study, we generated chimeric gene constructs of synthetic native T-phylloplanin fused to GFP (nat-T-phyllo-GFP) to differentiate from the endogenous phylloplanin gene products. The codons of the native Tphylloplanin-GFP gene fusion constructs were optimized separately for dicot and monocot plants. This synthetic gene has minimal sequence homology with the endogenous gene and is expected to be less susceptible to posttranscriptional gene silencing in vivo. We report here the overexpression of the synthetic native T-phylloplanin (with its native signal peptide) fused GFP gene in tobacco cultivar KY14. 
Transgenic lines expressing the fused synthetic chimeric nat-T-phyllo-GFP gene showed resistance against blue mold infection. Chemicals and Enzymes. All chemicals and reagents used were of analytical grade or higher and were obtained from Sigma-Aldrich, Fisher Scientific, and BDH, as applicable. DNA modifying enzymes and restriction enzymes were purchased from Invitrogen Life Technologies (USA). Nitrocellulose membranes for western blot analysis were obtained from Schleicher & Schuell (Keene, NH, USA). Construction of Plant Expression Vectors pKM24-ibm8 and pKM24-ibm10. The chimeric gene constructs were designed using the tobacco native phylloplanin gene (Gen-Bank accession no. AY705384) fused with GFP; codon choices were optimized for the dicot species tobacco (Nicotiana tabacum) and the monocot species creeping bentgrass (Agrostis stolonifera). A translational enhancer sequence (5 amv), the 35-nt long 5 -untranslated region of AlMV RNA 4, was fused with the chimeric phylloplanin gene. The apoplast targeting sequence (aTP) of the Arabidopsis 2S2 protein gene was fused with the coding sequence of phylloplanin containing its native signal peptide (SP) fused with GFP [22]. The fused synthetic chimeric native T-phylloplanin-GFP (nat-T-phyllo-GFP) genes were synthesized by GeneArt (Invitrogen, Life technologies, USA, http://www.lifetechnologies.com/GeneArt/). Each modified synthetic nat-T-phyllo-GFP gene fragment that was codon optimized for either monocots or dicots was cloned separately into the XhoI/SstI sites of Bluescript (KS+) to generate plasmids pBibm8 and pBibm10, respectively. Before use, the sequences of both fragments were confirmed. The 5 -Xho1-SstI-3 fragments were gel purified and cloned into the corresponding sites in the binary vector pKM24KH (GenBank accession HM036220) to generate the plasmids pKM24-ibm8 (GenBank accession KF951257) and pKM24-ibm10 (GenBank accession KF951258). The resulting plasmids have the following general structure: 5 -EcoR1-M24-promoter-HindIII-XhoI-5 amv-aTP-SP-phyllo-GFP-SstI-3 ( Figure 1). The modified full-length transcript promoter (M24) of the Mirabilis mosaic virus [2,21,23,24] directs expression of the coding sequences of the nat-T-phyllo-GFP gene fusions. Figure 1: Schematic map of the plant expression vector constructs pKM24-ibm8 and pKM24-ibm10 containing the synthetic tobacco Phylloplanin gene (GenBank accession no. AY705384) fused in-frame with GFP. Two genes (native T-phylloplanin and GFP) were fused inframe in constructs pKM24-ibm8 and pKM24-ibm10 with linkers of 3 and 16 amino acids, respectively. The modified full-length transcript promoter (M24) of Mirabilis mosaic virus [2] directs the coding sequences of the respective native phylloplanin-GFP gene fusions. The chimeric gene sequence native T-phylloplanin-GFP was codon-optimized for both the dicot tobacco (pKM24ibm8; GenBank accession KF951257) and the monocot bent grass (pKM24ibm10; GenBank accession KF951258). A translational enhancer sequence (5 amv), the 35-nt long 5 -untranslated region of AlMV RNA 4, was fused with the gene. The apoplast targeting sequence (aTP) of the Arabidopsis 2S2 protein gene was fused in-frame with the coding sequence of native T-phylloplanin fused with GFP in the constructs. 
LT, left T-DNA border; RT, right T-DNA border; KanR, neomycin phosphotransferase II marker gene, and hygromycin resistance (HgR) directed by the nopaline synthase promoter (NosP), the 3 -terminator sequences (terminators) of the ribulose bisphosphate carboxylase small subunit (3 RbcS) and nopaline synthase (3 Nos) genes are also shown. The EcoRI, XhoI, SstI, NcoI, and ClaI restriction sites used to assemble these expression vectors are shown. Plant Material and pKM24-ibm8 and pKM24-ibm10, was introduced into the Agrobacterium tumefaciens strain C58C1 : pGV3850 by the freeze thaw method [25], and Agrobacterium tumefaciensmediated tobacco transformation was performed as described previously [26]. Ten independent plant lines (R 0 lines, 1st generation progeny) were generated for each construct. Regenerated kanamycin-resistant plants were grown in the greenhouse [27]; seeds were collected from self-pollinated primary transformants. Transgenic tobacco seeds (R 1 ) were germinated in the presence of kanamycin (300 mg/L). Transgenic lines (R 1 progeny, 2nd generation) with KanR/KanS segregation ratios of 3 : 1 were selected for further analysis. Polymerase Chain Reaction (PCR). Integration and transcription of the fused nat-T-phyllo-GFP constructs pKM24-ibm8 and pKM24-ibm10 in transgenic plants (T 1 and T 2 ) were analyzed by PCR, RT-PCR, and real-time qRT-PCR assays using appropriately-designed gene-specific primers Table 1: DNA sequences of oligonucleotide primers used for RT-PCR and qRT-PCR analysis of the T-phyllo-GFP gene in transgenic plants. RNA Isolation, Real-Time RT-PCR. Total cellular RNA from transgenic tobacco seedlings generated with the constructs pKM24-ibm8 and pKM24-ibm10 was isolated using the RNeasy Plant Mini kit (Qiagen, Chatsworth, USA) as described in [24]. Total RNA (2 g samples) was treated with RNase-free DNase (Sigma, USA) per the manufacturer's instructions and was used for synthesis of first-strand cDNA with the iScript cDNA synthesis kit (Bio-Rad, USA) in a total volume of 20 L following the manufacturer's instructions. For the no-reverse-transcriptase control, an individual reaction was performed in parallel without the addition of reverse transcriptase. One twentieth (1 L) of the RT reaction was used in the subsequent PCR reaction with gene-specific primers for nat-T-phyllo-GFP (#1 and #2) to detect the nat-T-phyllo-GFP-specific mRNA. As a negative control, each primer pair was tested against DNase-treated RNA to confirm cDNA dependence on the amplification. PCR products were examined on an ethidium bromide-stained agarose gel. Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR). The qRT-PCR reactions were performed in three biological replicates using total RNA samples extracted from three independent plants grown under identical conditions. The expression level of nat-T-phyllo-GFP mRNA in transgenic plants was evaluated by real-time quantitative RT-PCR. For qRT-PCR, gene-specific primers for native Tphylloplanin (#1 and #3) were used to evaluate T-phyllo-GFP transcript levels. The qRT-PCR assays were performed using iTaq SYBR Green Supermix with ROX (Bio-Rad, USA) according to the manufacturer's instructions. The tobacco tubulin gene (primers #4 and #5) was used as an internal control to normalize the expression of T-phyllo-GFP. The comparative Ct threshold cycle method (Applied Biosystems bulletin) was used to evaluate the relative expression levels of the transcripts. 
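As a rough illustration of the comparative Ct arithmetic mentioned above (not the authors' own analysis script), the following sketch computes a 2^-ΔΔCt fold change using the tobacco tubulin gene as internal reference and a low-expressing calibrator sample; all Ct values are invented for the example.

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target transcript relative to the calibrator sample (2^-ddCt)."""
    d_ct_sample = ct_target - ct_reference              # normalize target to tubulin
    d_ct_calibrator = ct_target_cal - ct_reference_cal  # same normalization for the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Invented Ct values for one transgenic line versus a low-expressing calibrator:
fold = relative_expression(ct_target=21.4, ct_reference=18.0,
                           ct_target_cal=29.6, ct_reference_cal=18.2)
print(f"relative T-phyllo-GFP expression: {fold:.0f}-fold")
```

With these invented numbers the snippet reports a 256-fold relative expression; the fold ranges reported for the actual transgenic lines are given in the Results.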
The threshold cycle was automatically determined for each reaction using default parameters (Step One Real-Time PCR System, Applied Biosystems). The PCR specificity was determined by melt curve analysis of the amplified products using the standard method installed in the System (Step One Real-Time PCR System, Applied Biosystems). Extracellular Fluid Extraction from Leaves. Extracellular fluid (EF) was extracted from leaves of transgenic and control plants grown in the greenhouse for 8 weeks as described earlier [21]. Total soluble leaf extract (TP), extracellular fluid extract (EF), and soluble postextracellular fluid (PEF) extracts were obtained from leaves of transgenic and control plants as described earlier [21] to estimate the amount of expressed GFP present. Green Fluorescent Protein (GFP) Assay. Eight-week-old transgenic and control plants grown in the greenhouse were sampled for protein extraction. To obtain total soluble leaf extract (TP) and soluble postextracellular fluid extract (PEF), the leaf was homogenized in 5 mM Hepes/NaOH, pH 6.3, containing 50 mM NaCl [21,28] before and after collection of extracellular fluid, respectively, and centrifuged at 10,000 ×g for 20 min at 4 ∘ C to collect the soluble fraction. The protein contents of the plant extracts were determined according to the Bradford method using BSA as the standard [29]. TP, EF, and PEF samples were further diluted with 0.1 M Na 2 CO 3 , pH 9.6, to estimate GFP concentrations [30]. Fluorometric quantification of GFP was done by following the method described earlier [30]. The GFP concentration (expressed in g GFP per mg protein) was measured in leaf protein extracts from transgenic and control plants with the Turner Biosystems Luminometer using the GFP-UV module. The results were expressed as means ± standard deviation of readings from ten different plants of the same line (three readings were taken per plant). Western Blot Analysis. Crude extracts of control and transgenic tobacco plants were prepared as described earlier [23,24]. Whole seedlings were homogenized in a mortar and pestle at 4 ∘ C in a protein extraction buffer (0.3 M NaCl, 0.1 M Tris-HCl pH 8.0, 5 mM PMSF, 10 g/mL each benzamidine, trypsin inhibitor, bacitracin, leupeptin, pepstatin A). Extracts were centrifuged at 10,000 ×g for 15 min at 4 ∘ C to obtain crude extract supernatants. Protein contents were determined by the Bradford method using BSA as standard [29]. Proteins (50 g per lane) were separated by SDS-polyacrylamide gel electrophoresis [31], transferred to a nitrocellulose membrane (Bio-Rad) and subjected to Western blot analysis as described earlier [23]. For nat-T-phyllo-GFP detection, the membrane was incubated with primary anti-GFP polyclonal antibody (1 : 5000), then with horseradish peroxidase-conjugated antirabbit secondary antibody (1 : 5000), and was developed by using a chemiluminescent reagent (Pierce, Supersignal West Pico Chemiluminescent substrate). Fluorescence) 1.8.1 build 1390 software. We used the PL FLUOTAR objective (10.0X/N.A.0.3 DRY) with confocal pinhole set at Airy 1 and 1x zoom factor for improved resolution with eight bits. GFP expressed in transgenic plants was excited with an argon laser (30%) with AOTF for 488 nm (at 40%) [32], and the fluorescence emissions were collected between 501 and 580 nm with the photomultiplier tube (PMT) detector gain set at 1150V. GFP Visualization by Confocal Laser Scanning 2.9. Plant Inoculations and P. tabacina Infection Assays. The P. tabacina isolate KY 79 was used in this study. 
Isolate KY 79 was originally collected from a tobacco field near Georgetown, KY, in 1979. The pathogen was maintained by weekly serial passage on N. tabacum cv. KY 14 plants (7-12-weekold) as described earlier [33]. A water suspension containing fresh sporangiospores of KY79 (10 5 sporangiospores/mL) was used for challenge inoculations by drop inoculation as described earlier [1]. In brief, leaves of control and transgenic N. tabacum cv. KY 14 plants (six to seven weeks old) were inoculated by applying a 3 L drop of the sporangial suspension directly onto the adaxial surface of the leaf panels (8-10 sites/leaf, three leaves/plant). Inoculated plants were placed in sealed premoistened plastic tubs in the dark and kept overnight before being transferred to a growth chamber specifically designed for blue mold containment. One week after inoculation, plant reaction to blue mold infection was evaluated. Any leaves that showed signs of blue mold infection were collected and incubated overnight in a humid chamber in the dark to induce sporulation. Zones of sporulation were measured and the spore concentrations per leaf were calculated for both control and transgenic plants. Molecular Analysis of Transgenic Plants Carrying the Fused Synthetic Chimeric T-phylloplanin-GFP Gene. The chimeric nat-T-phyllo-GFP gene (Figure 1) was introduced into tobacco (Nicotiana tabacum cv. KY14) plants by Agrobacterium-mediated transformation. We developed ten independent transgenic lines from each of the two constructs, pKM24-ibm8 and pKM24-ibm10. The independent primary transgenic lines (R 0 plants) were assayed for gene integration by PCR analysis (data not shown). Reverse transcriptase-PCR (RT-PCR) analysis of transgenic R 1 and R 2 progeny gave the expected 1304 bp fragments derived from nat-Tphyllo-GFP, showing the stable integration and expression of the nat-T-phyllo-GFP gene in the transgenic KY 14 tobacco genome (data presented only for R 1 progeny; Figure 2(a)). Examination of the nat-T-phyllo-GFP protein load in transgenic plants was estimated via Western blot assay against the C-terminal fused GFP using primary anti-GFP antibodies (Figure 2(b)). To determine the degree of nat-T-phyllo-GFP expression in the transgenic plants among the independent lines, we examined transcript abundance by real-time qRT-PCR (Figure 2(c)). As is typically observed, the relative level of the fused synthetic chimeric nat-T-phyllo-GFP-specific mRNA varied by approximately 136-to 458-fold in group-I transgenic plants with tobacco-optimized phylloplanin and by approximately 22-to 121-fold in group-II transgenic lines with grass optimized phylloplanin (Figure 2(b)). Transgenic lines expressing the fused synthetic chimeric nat-T-phyllo-GFP gene were also evaluated by quantifying GFP fluorescence using a spectrofluorometric assay [30] (Figures 3(a) and 3(b)) and by fluorescent laser confocal microscopy ( Figure 4). GFP was quantified in total soluble protein (TP), extracellular fluid (EF), and postextracellular fluid (PEF) fractions from leaves of transgenic plants generated with pKM24-ibm8 and pKM24-ibm10. A higher GFP concentration was found in the EF as compared to the TP and PEF fractions (Figure 3(b)). Malate dehydrogenase (MDH) activity was measured in the TP, EF, and PEF fractions, and there was no detectable MDH activity in the EF fraction (Figure 3(c), Table 2). Confocal microscopy also showed GFP fluorescence in the apoplast region ( Figure 4). 
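For the fluorometric GFP quantification described above, the reported figure is the mean ± standard deviation of µg GFP per mg protein within a line and fraction. A minimal sketch of that summary statistic is given below; the readings are fabricated and do not correspond to the values in Figure 3.

```python
from statistics import mean, stdev

# Fabricated readings in µg GFP per mg protein for one transgenic line,
# grouped by extract fraction (total protein, extracellular fluid, post-EF).
readings = {
    "TP":  [4.1, 4.3, 3.9, 4.0, 4.2, 4.4],
    "EF":  [7.8, 8.1, 7.5, 7.9, 8.0, 7.7],
    "PEF": [2.0, 2.2, 1.9, 2.1, 2.0, 2.3],
}

for fraction, values in readings.items():
    print(f"{fraction}: {mean(values):.2f} ± {stdev(values):.2f} µg GFP per mg protein")
```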
Independent transgenic lines showing good expression were selected for further analysis. In this study, wild-type untransformed plants, the plants transformed with the empty vectors, and the vector-GFP construct were used as controls. Plants transformed with the empty vector and the vector-GFP construct behaved the same as the untransformed wild-type plants (data not shown). Overexpression of T-phyllo-GFP Inhibits P. tabacina Spore Germination and Leaf Infection. Due to over expression of nat-T-phyllo-GFP, blue mold infection (as determined by lesion formation) was dramatically decreased from 87-100%, with a marked reduction in spore count of 99-100% and a decrease in the infected area between 86 and 100% in transgenic KY 14 tobacco plants (codon optimized for tobacco) as compared to wild KY14 plants (Figures 5 and 6). However, in transgenic KY 14 tobacco plants expressing the nat-T-phyllo-GFP construct codon optimized for creeping bentgrass, lesion formation was inhibited from 40-66%, with a 79-87% reduction in spore count and a 79-88% decrease in lesion area in comparison to wild-type KY 14 tobacco plants ( Figures 5 and 6). Discussion Genes encoding defense proteins have already been employed to boost plant resistance against fungal and bacterial phytopathogens [34,35]. A significant effort has been directed toward the identification and characterization of antifungal proteins and their expression in transgenic plants [36]. For instance, the expression of the defense-related gene ch5B encoding a chitinase reduced disease symptoms of Botrytis cinerea in strawberry [37], transgenic orange plants expressing a tomato thaumatin-like protein exhibited better tolerance toward Phytophthora citrophthora [38], constitutive overexpression of an antimicrobial protein gene, Ace-AMP1, from Allium cepa in Oryza sativa increased resistance against major rice pathogens like Magnaporthe grisea, R. solani, and Xanthomonas oryzae [39], and potato plants expressing the snakin-defensin hybrid protein exhibited no above-ground or tuber symptoms of potato ring rot disease caused by the bacterium Clavibacter michiganensis [40]. Also, overexpression of CaAMP1 (Capsicum annuum ANTIMI-CROBIAL PROTEIN1) in Arabidopsis thaliana conferred broad-spectrum resistance to the hemibiotrophic bacterial pathogen Pseudomonas syringae, the biotrophic oomycete Hyaloperonospora parasitica, and the fungal necrotrophic pathogens Fusarium oxysporum and Alternaria brassicicola [41]. Transgenic tobacco expressing different antimicrobial proteins also exhibit enhanced tolerance against fungal and bacterial phytopathogens. A hybrid of the cysteinerich antimicrobial proteins snakin-1 (SN1) and defensin-1 (PTH1) expressed in tobacco protects the plants from severe anthracnose symptoms caused by the fungus C. coccoides [40]. Another study in transgenic tobacco demonstrated that overexpression of a novel small antimicrobial protein LJAMP1 significantly enhanced the resistance of tobacco against not only the fungal pathogen A. alternate, but also against the bacterial pathogen Ralstonia solanacearum, with no visible alteration in plant growth and development observed [42]. Studies by Alexander et al. [43] demonstrated that although the pathogenesis-related protein 1a (PR 1a) does not have a measurable effect on diseases caused by tobacco mosaic virus or potato virus Y, it significantly reduces the disease severity caused by infection with the oomycete pathogens Peronospora tabacina and Phytophthora parasitica var. nicotianae. 
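The percentage reductions in lesion formation, spore count, and infected area quoted above follow from a simple comparison against the wild-type control. The sketch below shows that calculation with invented per-leaf measurements; it is not the data behind Figures 5 and 6.

```python
def percent_reduction(control: float, transgenic: float) -> float:
    """Reduction of a measurement in the transgenic line relative to control, in %."""
    return 100.0 * (control - transgenic) / control

# Hypothetical per-leaf measurements (placeholders only):
control   = {"lesions": 9.0, "spores_per_leaf": 4.0e5, "lesion_area_mm2": 120.0}
line_ibm8 = {"lesions": 0.5, "spores_per_leaf": 1.0e3, "lesion_area_mm2": 5.0}

for key in control:
    print(f"{key}: {percent_reduction(control[key], line_ibm8[key]):.1f}% reduction")
```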
Recently, it has been reported that the apoplast-directed native mature T-phylloplanin protein fused to GFP confers resistance against blue mold better than it does when targeted to the cytoplasm [21]. Hence, in the present study, we used transgenic tobacco (Nicotiana tabacum cv. KY14) plants overexpressing a synthetic native T-phylloplanin-GFP fusion (nat-T-phyllo-GFP) protein (with its own native signal peptide) directed to the apoplast to evaluate the antimicrobial activity of the fusion protein. The coding sequences of the chimeric gene constructs that were codon optimized for either a dicot (tobacco) or a monocot (creeping bentgrass) were placed between the heterologous M24 promoter of the Mirabilis mosaic virus [2,21,23,24] and the terminator sequence from the rbcSE9 gene ( Figure 1). It has been documented that the Mirabilis mosaic virus full-length transcript promoter is constitutive in nature and is 25fold stronger than the CaMV35S promoter in transgenic tobacco plants [2,27,32]. Regenerated plants were screened for kanamycin resistance and then by further molecular analysis [24]. The nat-T-phyllo-GFP-positive, kanamycinresistant plantlets were obtained and grown in the greenhouse. The transgene insert copy numbers in the independent transformant lines (R 0 ) were estimated by examining the segregation of kanamycin resistance in the self-pollinated progeny resulting (R 1 ). Ten transgenic lines that showed Mendelian inheritance ratios of 3 : 1 (Kan R : Kan S ) in the progeny were considered to be carrying a single copy of the transgene. Then the isolation of homozygous transgenic plants with a single insert of the transgene was conducted in the progeny self-fertilized from R 1 detected by nat-Tphyllo-GFP analysis. We randomly selected six transgenic R 2 lines that carried a single insert of the transgene for subsequent molecular analyses. Quantitative RT-PCR and Western blot analyses showed different levels of accumulation of nat-T-phylloplanin-specific transcripts and the synthetic nat-T-phylloplanin-GFP fusion protein in transgenic plants ( Figure 2). Strong expression of the synthetic nat-T-phyllo-GFP gene was observed in the transgenic ibm8-(1. 2) also exhibited a significant increase of disease resistance compared with the control (Figures 5 and 6). The disease resistant transgenic ibm8 (1.3) and ibm10 (2.1) lines were further extensively evaluated by confocal microscopy, which again revealed the presence of the chimeric nat-T-phylloplanin-GFP fusion protein in the apoplast region. MDH activity in extracellular fluid (EF) from leaf samples of transgenic ibm 8 (1.3) and ibm10 (2.1) lines was undetectable, which again shows that EF was not contaminated with cytosolic proteins and has a higher concentration of phylloplanin fused to GFP. Our data indicate that transgenic plants display resistance only if they express the antifungal phylloplanin-GFP protein fusion at levels over 6 to 8 g per mg of extracellular or apoplastic protein. Overexpression of the chimeric native T-phylloplanin-GFP gene in transgenic tobacco plants resulted in dramatically decreased lesion formation and spore count, as well as a reduction in the size of the infected area, in transgenic KY14 tobacco plant lines generated with both the tobacco-optimized and bentgrass-optimized genes, in comparison to the KY14 control. However, plants of transgenic KY14 line ibm10T2 (codon-optimized for bentgrass) showed less blue mold resistance in comparison with plants of line ibm8 (codon-optimized for tobacco). 
These results demonstrate the important role that phylloplanin plays in controlling blue mold infection; however, the significance of codon optimization for tobacco cannot be ruled out. P. tabacina is an oomycete pathogen that reproduces by airborne vegetative sporangia, and the initial host contact and spore deposition occur at the phylloplane [17,18]. Our data strongly suggest a plant protective role for phylloplanin protein against pathogen infection, and it can be concluded that expression of phylloplanin increases disease resistance against P. tabacina. In a similar study, it has been shown that high-level expression of PR-la in transgenic tobacco results in tolerance to infection by P. tabacina, again demonstrating that host plants overexpressing a defense protein have increased tolerance against pathogenic organisms [43]. It has been suggested that tobacco has two surfacedisposed mechanisms for inhibiting P. tabacina disease: the first is the SGT-produced T-phylloplanins and the second is the abundant diterpenes and T-phylloplanins produced by tall trichomes on older leaves [19]. Microarray transcriptome analysis of different gene expression levels in tobacco leaf trichomes showed a 22-fold enrichment of T-phylloplanin in trichomes [44]. It has been reported that phylloplanins are not unique to tobacco but are also present on the leaf surfaces of other plants. The phylloplanin levels are very high, moderate-to-high, moderate, and low in tobacco, jimson weed, sunflower, and soybean, respectively. However, the relative abilities of leaf water washes (LWWs) from these plants to inhibit P. tabacina spore germination and leaf infection were found to be sunflower > tobacco > jimson weed, with no activity from soybean [19]. It has been reported that tobacco (Nicotiana tabacum cv. KY14) contains less phylloplanin I to IV than other varieties [17,18]. Therefore, in the present study we increased the blue mold resistance of KY14 by overexpressing T-phylloplanin. It has been shown that LWWs containing phylloplanin immediately arrest the germination of sporangia and also tube growth and development in P. tabacina [17,18]. Furthermore, studies using GUS/GFP reporter genes and the T-phylloplanin promoter demonstrate that T-phylloplanins are produced locally in SGTs and are secreted onto the leaf surface, where they dissolve in TGST exudate and are dispersed widely on the leaf surface as a result of exudate flow [17,18]. RNAi-mediated knockdown of the Tphylloplanin gene results in increased plant susceptibility to P. tabacina infection [45]. In the present study, using an apoplast-targeting sequence, we could overexpress synthetic nat-T-phyllo-GFP in the apoplast region where it strengthens the host-defense system and inhibits P. tabacina spore germination and leaf infection. It is noteworthy that a combination of T-phylloplanins and high TGST exudates may provide maximal inhibition of spore germination as demonstrated by overexpressing the native mature tobacco phylloplanin protein without its signal peptide fused to GFP [21] or the synthetic native phylloplanin with its own native signal peptide fused to GFP (present study). The engineered secretion of candidate defense proteins on leaf surfaces might enhance disease resistance [46], and in the current study the overexpression of synthetic nat-T-phylloplanin in the blue mold-susceptible tobacco variety KY 14 has been shown to increase plant resistance to P. tabacina. 
Such strategies could also serve as a valuable tool for eliminating susceptible individuals and lines during the early stages of plant breeding programs.
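As a toy illustration of the expression threshold noted in the Discussion (resistance was observed only above roughly 6-8 µg of the fusion protein per mg of apoplastic protein), the sketch below classifies hypothetical lines against that cutoff. The line names echo those used in the text, but the expression values are invented.

```python
# Lower end of the 6-8 µg/mg range quoted in the Discussion; a simplification,
# since the paper gives a range rather than a single cutoff.
THRESHOLD_UG_PER_MG = 6.0

# Hypothetical apoplastic expression levels (µg fusion protein per mg protein):
lines = {"ibm8-1.3": 8.2, "ibm10-2.1": 6.5, "ibm10-2.4": 3.1}

for name, level in lines.items():
    verdict = "likely resistant" if level >= THRESHOLD_UG_PER_MG else "likely susceptible"
    print(f"{name}: {level} µg/mg -> {verdict}")
```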
2018-04-03T06:03:43.930Z
2014-03-20T00:00:00.000
{ "year": 2014, "sha1": "6f81680c8da616e172cfa2e58a2f2eecfd9e1f58", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2014/601314.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a89f742c7c9b740088c09cc9b1c66572258c747", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269127294
pes2o/s2orc
v3-fos-license
Diagnostic method for a piezoelectric injector using the Newton-Cotes formula The article presents a method for regenerating common rail injectors, which involves extending the standard diagnostic procedure with a phase of analytical calculations. The closed-type Newton-Cotes formula (referred to as the trapezoidal rule) was employed to estimate the resulting fuel spray patterns and compare them with the manufacturer's reference. In the discussed example, it was demonstrated that the proposed solution is particularly useful in challenging situations where a definitive assessment of the injector's technical condition remains difficult. Several other advantages were also highlighted, making this method suitable for application in laboratory and workshop conditions. Introduction In recent years, there has been an increased demand for the regeneration of common rail fuel injectors, as this process allows for the restoration of factory performance parameters.Consequently, there is a need for precise and reliable monitoring of their technical condition, which is ensured by diagnostic tests on dedicated test benches.In the vast majority of cases, standard procedures are sufficient, primarily based on determining the fuel dosing at selected base points [8,13,21].Unfortunately, problematic situations arise where improper injector operation is observed on the engine, despite meeting rigorous manufacturer criteria during testing.A solution can be extended diagnostics, involving the creation of a fuel delivery characteristic across the full range of working pressures and injection times (nozzle opening) [6,9,17].It is essential to emphasize that this is a significantly more time-consuming process and is only available on advanced test benches, such as STPiW3 [12].There is also the possibility of employing strictly scientific methods, the use of which is not always feasible in typical workshop conditions [3,14,18]. The above reasons led to the proposal of a technique based on the closed Newton-Cotes formula.It is widely known for numerical integration and is used to determine the surface area bounded by a function's graph [4,25].However, the mathematical algorithm itself can be easily implemented in the environment of any spreadsheet, allowing for quick calculations, verification of their accuracy, and ultimately the assessment of the technical condition of the injector based on the results of the standard test procedure.Such an approach is practical, as it does not increase the number of measurements in the experimental phase.As a result, costs and labor remain unchanged.As an example, a piezoelectric injector was chosen for which repair technology has been made available, along with a complete set of spare parts (excluding the crystal stack). Test beds The research was conducted on a test bench equipped with a Bosch EPS 205-type table (Fig. 1).This is a versatile diagnostic device with a single measurement tower, allowing for automatic testing, coding, and internal cleaning of injectors.The results are compared with the manufacturer's database, which is accessible from the control screen and subsequently saved and printed as a final report.The functionality of the tester has been significantly enhanced compared to previous versions (EPS 200/200A), as the manufacturer has installed sets (attachments, adapters, connectors) and software adapted for testing piezoelectric injectors.Due to its compact size and affordable price, the device is very popular in the service market. 
During the regeneration process, additional equipment and specialized tools are also used, with the most important ones being: -Yizhan 13MP HDMI VGA industrial camera -Bene YesWeCan 3L ultrasonic cleaner Diagnostic method for a piezoelectric injector using the Newton-Cotes formula -Facom E.316A200S torque wrench clamps, holders, and workshop tools -PC-class computer. Test object The research was conducted on a piezoelectric injector from Bosch, which was removed from the N47 D20 engine of a BMW X3 vehicle with a mileage of 268,000 kilometers.Injectors of this type are classified as third-generation systems, operating at maximum pressures up to 180 MPa [15].Figure 2 illustrates the internal structure, detailing the components of the hydraulic amplifier group, control valve, and nozzle.A characteristic feature is the absence of a lateral fuel supply channel in the nozzle, resulting in the central delivery of fuel in the spaces around the needle.Consequently, this component has a triangular cross-section in the guiding part, which is essentially unique among other manufacturers.The sealing of the tip is achieved solely through the valve assembly [11]. Research plan Table 1 presents the research plan, which in the experimental and operational part closely aligned with the manufacturer's procedure [23].The exception was the additional phase of analytical calculations conducted in a spreadsheet on a workstation with a computer.Typically, it is also used for visualization and recording of images generated from a microscopic industrial camera [24].It should be noted that the scope of service-diagnostic activities may change depending on the type of identified malfunctions or the initial condition of the injector.For example, in the case of severe coking of the nozzle (sprayer), the manufacturer recommends performing preliminary cleaning with an ultrasonic cleaner.This process requires placing the injector vertically in the device's basket, eliminating the possibility of damaging its electrical components.Its purpose is to ensure the openness of the outlet holes before conducting a flow test on the test bench. Newton-Cotes formula In the Cartesian coordinate system, points corresponding to the fuel doses of the reference injector were located. By connecting them, a non-rectangular quadrilateral with vertices 1-2-3-4 was obtained (Fig. 4).Subsequently, trapezoids were extracted, and their surface areas were calculated using the closed Newton-Cotes formula [2]: To simplify the calculations, formula (1) was replicated in spreadsheet cells.The total area of the shape was determined using the relationship [7]: This way, a reference base (a benchmark) for the results was established, which was estimated based on the preliminary and main tests of the regenerated injector.The Newton-Cotes formula has not been applied in the diagnosis of common rail fuel injectors so far, hence the presented calculations are not reflected in the literature on the subject.However, the presented example indicates that it can be effectively applied in scientific and engineering practice.To automate the computational process, a spreadsheet was used, enabling rapid results at each stage of the conducted operations. 
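Since Tables 2-4 are not reproduced here, the spreadsheet calculation described above can be illustrated with placeholder numbers. The sketch below sums trapezoids over the four dose points (closed Newton-Cotes / trapezoidal rule) and compares the resulting area with a reference; one plausible reading of the quadrilateral is the area under the polyline connecting the dose points, which is what is computed here, though the authors' figure may be closed differently.

```python
def trapezoid_area(x1, y1, x2, y2):
    """Area of one trapezoid under the segment (x1, y1)-(x2, y2) down to the x-axis."""
    return 0.5 * (y1 + y2) * (x2 - x1)

def dose_area(points):
    """Total area under a polyline of (x, dose) points with x ascending."""
    return sum(trapezoid_area(*p1, *p2) for p1, p2 in zip(points, points[1:]))

# Hypothetical reference and test-injector dose points (x = base point index,
# y = fuel dose); these are placeholders, not the values from Tables 2-4.
reference = [(1, 2.0), (2, 12.0), (3, 35.0), (4, 55.0)]
tested    = [(1, 1.8), (2, 10.0), (3, 28.0), (4, 50.0)]

a_ref, a_test = dose_area(reference), dose_area(tested)
deficit = 100.0 * (a_ref - a_test) / a_ref
print(f"reference area: {a_ref:.1f}, tested area: {a_test:.1f}, deficit: {deficit:.1f}%")
```

The same few lines can be pasted into a spreadsheet column by column, which is the workflow the text describes.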
Preliminary tests According to the standard procedure, the injector was mounted on the test bench, where it underwent internal cleaning, a leakage test, and testing of basic electrical parameters.In the latter case, crystal stack failure was ruled out, as values consistent with the manufacturer's specifications were obtained, i.e., capacitance C = 2.3 μF (1.5-3.3 μF) and resistance R = 186 kΩ (150-210 kΩ).There was also no evidence of insulation damage between the actuator and the main body (R I = ∞).Similar conclusions were drawn after preliminary flow tests (IVM), as all fuel doses fell within the specified ranges (Table 2).Unfortunately, during the final acceptance on the vehicle's dashboard, the warning light would illuminate, and the engine would enter what is known as 'limp mode'.As a result, the injector was disassembled once again, and the proposed calculation method was implemented. Calculation stage From the data presented in Tables 3 and 4, it can be observed that the injection process disturbance caused a displacement of the quadrilateral 1`-2`-3`-4`.Although the fuel doses met the requirements of the test procedure, their values were significantly underestimated compared to the reference.This was especially true for points 2` and 3`, which corresponded to engine operating conditions at half and full load.As a result, the surface area for the tested injector was 18.5% smaller (Fig. 5).The cause should be attributed to the improper functioning of the valve group, which should be replaced after disassembly.A similar decision was made regarding the precision pair (nozzle, needle).In this regard, the decisive factor was the relatively high mileage of the vehicle rather than the nature of any dysfunction. Microscopic examination and injector assembly The microscopic examination was preceded by disassembling the injector into its components and cleaning them with an ultrasonic cleaner. During the inspection under high magnification, corrosion was observed on most components that had direct contact with fuel, such as the needle, valve plate, throttle, and the hydraulic amplifier body (Fig. 6).This process had an adverse effect on the dynamics of individual assemblies, and the resulting contaminants further polluted the interior of the injector, leading to accelerated wear of interacting surfaces.It should be noted that corrosion intensification is a commonly encountered phenomenon in injectors operating at such high working pressures.This occurs despite the structural and material modifications employed by manufacturers [1,5,10].Fig. 6.Observation of corrosion on the hydraulic amplifier body Consequently, it was decided that restoring full functionality would only be possible by replacing all executive and control groups, except the piezoelectric actuator. During the injector assembly process, it is essential to purge the hydraulic amplifier by assembling its components in diesel oil and then compressing them using a specially dedicated press.This step is of fundamental importance for ensuring accurate fuel delivery during primary investigations, as it eliminates the possibility of result distortion, namely, zero doses.Subsequently, the nozzle assembly (nozzle, needle, spring, washer) is assembled and securely fastened to the main body using a nut.Throughout this operational procedure, strict adherence to the manufacturer's guidelines is crucial, utilizing a torque wrench for this purpose (Fig. 7). 
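The preliminary tests described above amount to checking measured electrical parameters and fuel doses against manufacturer tolerance windows. A minimal pass/fail sketch is shown below; the capacitance and resistance limits are those quoted in the text, while the fuel-dose window is invented because Table 2 is not reproduced here.

```python
SPEC = {
    "capacitance_uF": (1.5, 3.3),
    "resistance_kOhm": (150.0, 210.0),
    "full_load_dose_mm3": (48.0, 58.0),   # hypothetical tolerance window
}

measured = {
    "capacitance_uF": 2.3,
    "resistance_kOhm": 186.0,
    "full_load_dose_mm3": 50.0,
}

for name, (low, high) in SPEC.items():
    value = measured[name]
    status = "OK" if low <= value <= high else "OUT OF SPEC"
    print(f"{name}: {value} (allowed {low}-{high}) -> {status}")
```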
Main tests In Table 5, the results of the main flow tests are presented, while Table 6 compiles the results of calculations conducted after the regeneration process.It can be observed that the replacement of key components yielded favorable results.There was an increase in the values of all fuel doses compared to the preliminary tests.As a result, the surface area of the quadrilateral 1``-2``-3``-4`` almost completely overlapped with the reference, with a difference of only 2.8% (Fig. 8).This suggests that the factory settings of the tested injector have been restored.Of course, new codes were assigned before installation in the engine.This process was carried out automatically on the same test bench where fuel dosing measurements were taken. Validation of calculations In authorial publications [20,22], Gauss formulas were employed for the computation of surface areas of polynomials localized in Cartesian coordinate systems.In the context of the considered case, these formulas can be expressed in the following form: and The results presented in Table 7 unequivocally indicate that the formulas (3) and ( 4) can be successfully applied to verify the accuracy of calculations conducted using the Newton-Cotes method.Their implementation in a spreadsheet environment proceeds in a similar manner and does not pose significant difficulties.Simultaneously, the formulas introduced into the appropriate cells of the program can be reused in their unchanged form, providing the user with a ready computational tool for future research. Conclusions The proposed method enables the application of extended diagnostics for common rail injectors that operate incorrectly despite meeting the criteria specified by the manufacturer.Among its most significant advantages are: 1.The use of the standard procedure's baseline points does not require additional measurements in the experimental phase.This eliminates the need to create a complete fuel dosing characteristic, which is only available on selected test benches.2. There is no need to modify the software used by the tester.3. Transferring the analytical process to a spreadsheet environment does not increase the regeneration costs.The formulas can be successfully applied in the examination of injectors of various types or generations (including electromagnetic solutions).4. 
The position of the vertices of the analyzed figures indicates possible causes of malfunction.However, in laboratory workshop conditions, there is no need for a graphical interpretation of the results.Therefore, the drawings presented in this article are purely illustrative.5.The accuracy of the conducted calculations can be easily verified using alternative mathematical methods, such as Gauss's formulas.Since these studies were conducted on injectors from different manufacturers and on distinct test benches, the presented conclusions and observations have a more general nature.Simultaneously, the choice of computational technique has no impact on the final results and depends on individual preferences.It should be emphasized that the resulting dosage areas presented in the form of polygons in the Cartesian coordinate system should be treated purely hypothetically.This is because they do not accurately reflect the actual fuel injection method at intermediate points, i.e., beyond the vertices of the generated figures.Nevertheless, the proposed method allows for the assessment of the technical condition of common rail injectors in problematic situations, as demonstrated in a specific example.In this way, it constitutes an effective solution that has been addressing the needs reported by service companies for years. Acknowledgments The research was conducted at the service company AUTO NEXT SERWIS, located in Szczecin, which provided a complete set of measurement equipment and specialized tools necessary for the injector regeneration process. Fig. 3 . Fig. 3. General view of the tool station Fig. 4 . Fig. 4. Graphical interpretation of the trapezoidal method for the discussed example Fig. 5 . Fig. 5. Graphical interpretation of the results of preliminary tests Fig. 7 . Fig. 7. Tightening the nut with a torque wrench Table 1 . Research plan with division into stages and workstations Table 2 . Results of preliminary IVM flow tests Table 3 . Results of surface area calculations for the reference figure Table 5 . Results of main IVM flow tests
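The validation step refers to Gauss formulas for polygon areas. Since expressions (3) and (4) are not shown explicitly above, the sketch below uses the generic Gauss (shoelace) formula, which may differ in form from the authors' exact expressions; the vertices are the same placeholder reference points as in the earlier trapezoid sketch, closed down to the baseline, so both methods return the same area (75.5 in this toy case).

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its ordered vertices (Gauss shoelace formula)."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Placeholder reference dose points closed down to the x-axis:
quad = [(1, 0.0), (1, 2.0), (2, 12.0), (3, 35.0), (4, 55.0), (4, 0.0)]
print(f"shoelace area: {shoelace_area(quad):.1f}")
```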
2024-04-14T15:46:00.793Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "6e38452a3ceb8bad503086a94602b53743599b38", "oa_license": "CCBY", "oa_url": "http://www.combustion-engines.eu/pdf-177132-98249?filename=Diagnostic%20method%20for%20a.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "302b81ba6bf787e6c0b5db33d024b9f3bc558b66", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
119202634
pes2o/s2orc
v3-fos-license
Charge-dependent correlations from event-by-event anomalous hydrodynamics We report on our recent attempt of quantitative modeling of the Chiral Magnetic Effect (CME) in heavy-ion collisions. We perform 3+1 dimensional anomalous hydrodynamic simulations on an event-by-event basis, with constitutive equations that contain the anomaly-induced effects. We also develop a model of the initial condition for the axial charge density that captures the statistical nature of random chirality imbalances created by the color flux tubes. Basing on the event-by-event hydrodynamic simulations for hundreds of thousands of collisions, we calculate the correlation functions that are measured in experiments, and discuss how the anomalous transport affects these observables. Introduction The Chiral Magnetic Effect (CME) [1,2,3,4] has received considerable attention in recent years, particularly in the context of heavy-ion collisions. The anomaly-induced transport effects like the CME are macroscopic and are incorporated into hydrodynamic equations giving rise to "anomalous hydrodynamics" [5]. Theoretically, the CME is expected to occur in heavy-ion collision experiments. The data reported by STAR [6,7] and PHENIX [8] collaborations at RHIC and ALICE collaboration [9] at the LHC show a behavior consistent with the CME, but the quantitative understanding is still lacking. In order to reach a definitive conclusion, a reliable theoretical tool that can describe the charge-dependent observables is indispensable. In this work [10], we quantitatively evaluate the observables to detect the anomalous transport, basing on event-by-event simulations of anomalous hydrodynamics. The observable of interest in this talk is a charge-dependent two-particle correlation [11], where φ i α is the azimuthal angle of i−th particle (i = 1, 2) with charge α ∈ {+, −}, and Ψ RP is the reaction plane angle for v 2 . Physical meaning of this observable is evident if we decompose γ αβ as where v α 1 (a α 1 ) is the directed flow which is parallel (perpendicular) to Ψ RP , respectively. Let us see how a 1 's behave in the presence of anomalous effects. In off-central collisions, the magnetic fields perpendicular to Ψ RP (on average) are created. If the CME occurs, a current should be generated along the magnetic field, which would result in finite a + 1 and a − 1 . The direction of the current depends on the sign of the initial axial charge, which is basically random, so the signs of a 1 s are also random. However, the signs of a + 1 and a − 1 tend to be opposite. Thus, the CME expectations are the following: (1) a + 1 = a − 1 = 0, because the sign of initial axial charge is random; (2) a α 1 2 becomes larger in the presence of the CME currents; (3) a + 1 a − 1 < 0, which indicates the anti-correlation between a + 1 and a − 1 . Event-by-event anomalous hydrodynamic model for heavy-ion collisions The model consists of three parts: (i) anomalous-hydro evolution, (ii) hadronization via Cooper-Frye formula, and (iii) calculation of the observables. For the hydro part, we solve the equations of motion for a dissipationless anomalous fluid, The energy-momentum tensor and currents are written as where ε is the energy density, p is the hydrodynamic pressure, n and n 5 are electric and axial charge densities, eκ B ≡ Cµ 5 [1 − µn/(ε + p)] and eξ B ≡ Cµ[1 − µ 5 n 5 /(ε + p)] are transport coefficients for chiral magnetic/separation effects (CME/CSE), and η µν ≡ diag{1, −1, −1, −1} is the Minkowski metric. 
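The two-particle correlator defined above is conventionally written as gamma_{alpha beta} = <cos(phi_1 + phi_2 - 2 Psi_RP)>. As a rough sketch, not the authors' analysis code, the snippet below evaluates that pair average from lists of azimuthal angles in a single event; the particle sample is uniform random toy data, so the result is consistent with zero.

```python
import math
import random

def gamma(phis_a, phis_b, psi_rp, same_charge):
    """Pair-averaged cos(phi1 + phi2 - 2*Psi_RP) for two charge selections."""
    total, npairs = 0.0, 0
    for i, p1 in enumerate(phis_a):
        for j, p2 in enumerate(phis_b):
            if same_charge and j <= i:      # count each unordered pair once, skip self-pairs
                continue
            total += math.cos(p1 + p2 - 2.0 * psi_rp)
            npairs += 1
    return total / npairs if npairs else float("nan")

random.seed(0)
pos = [random.uniform(0, 2 * math.pi) for _ in range(200)]   # toy positive particles
neg = [random.uniform(0, 2 * math.pi) for _ in range(200)]   # toy negative particles
psi = 0.0                                                     # reaction plane angle

print("gamma(++):", gamma(pos, pos, psi, same_charge=True))
print("gamma(+-):", gamma(pos, neg, psi, same_charge=False))
```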
In this work, the electromagnetic fields are not dynamical and treated as background fields. As for the equation of state (EOS), we use that of an ideal gas of quarks and gluons. Let us specify the electromagnetic field configurations used to get the results shown later. We take B y to be (x-axis is chosen to be the reaction plane angle Ψ RP ) where σ x , σ y , and σ η s are the widths of the field in x, y, and η s (space-time rapidity) directions, τ B is the duration time of the magnetic field, R = 6.38 fm is the radius of a gold nucleus, and b is the impact parameter. Other elements of B and E are set to zero. The widths are taken so that the fields are applied only in the region where matter exists as σ x = 0.8 R − b 2 , σ y = 0.8 R 2 − (b/2) 2 , and σ η = √ 2. We set other parameters as τ B = 3 fm and eB 0 = 0.5GeV 2 in following calculations, which is equivalent to eB y (τ in , 0, 0) ∼ (3m π ) 2 . By solving the hydrodynamic equations, we obtain a particle distribution via the Cooper-Frye formula with freezeout temperature T fo = 160 MeV. We produce the hadrons by the Monte-Carlo sampling based on that distribution. Thus, one random initial condition results in the particles in an event. We repeat this procedure many times and store the data of many events, that are later used to calculate the charge-dependent correlation functions. We calculate fluctuations of v 1 and a 1 separately, with the following expressions, v α (4) for the same-charge correlation, where M is the number of produced particles, M P 2 = M(M − 1), <i, j> indicates the sum over all the pairs, and outer bracket means averaging over events. Similar expression is used for the opposite-charge correlation. It is an important issue to estimate the amount of axial charges at the beginning of hydro evolutions. The major sources of the initial chiralities are color flux tubes in heavy-ion collisions. When two nuclei collide, numerous color flux tubes are spanned between them. The anomaly equation, ∂ µ j µ 5 = CE a · B a , determines the rate of the axial charge generation, so the rate is determined by the value of E a · B a . There is no preferred sign of E a · B a and it can be positive or negative for different color flux tubes. In order to incorporate this feature, we have made an extension to the so-called MC-Glauber model. For each binary collision, we assign ±1 randomly. Each sign indicates to the sign of color E a · B a of the flux tube. Then, we initialize the axial chemical potential as where X j are the signs of E a · B a randomly assigned to binary collisions, and C µ figure), v + 1 v − 1 , and a + 1 a − 1 (lower figure) for anomalous and non-anomalous cases at b = 7.2 fm. Those quantities are calculated from the data of 10, 000 events for both of the anomalous and non-anomalous cases. Calculated observables The values of the observables are shown in Fig. 1. The data from 10, 000 events are used to calculate those observables for each of anomalous and non-anomalous case. Impact parameter is set to 7.2 fm. The upper figure of Fig. 1 shows the values of v − In the lower figure of Fig. 1, we show the values of v + 1 v − 1 , and a + 1 a − 1 . In the absence of anomaly, they take similar positive values, but once we turn on the anomaly, a + 1 a − 1 becomes negative. This is the indication of the anti-correlation between a + 1 and a − 1 and is consistent with the CME expectations. 
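The initialization formula for the axial chemical potential is not reproduced above, so the sketch below only captures the statistical core of the model: one random sign of E·B per binary collision, giving a net initial chirality that averages to zero over events but fluctuates event by event. The spatial smearing and the overall constant used in the actual model are omitted.

```python
import random
import statistics

def event_net_chirality(n_binary_collisions: int) -> int:
    """Sum of random +/-1 flux-tube signs, one per binary collision (one 'event')."""
    return sum(random.choice((-1, 1)) for _ in range(n_binary_collisions))

random.seed(1)
events = [event_net_chirality(300) for _ in range(10000)]

print("mean net sign:", statistics.mean(events))    # ~0: no preferred sign of E.B
print("rms net sign :", statistics.pstdev(events))  # ~sqrt(300): event-by-event fluctuation
```

This zero-mean, nonzero-variance structure is what drives the expectation that <a_1> vanishes while <a_1^2> does not.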
It has been discussed that the observed values of γ αβ might be reproduced by other effects unrelated to the CME, including transverse momentum conservation [12,13], charge conservation [14], or cluster particle correlations [15]. Such effects are absent in the calculations here, because the particles are sampled based on the Cooper-Frye formula, which is a one-particle distribution, whereas all of the background effects arise from multi-particle correlations. Thus, the difference between the anomalous and non-anomalous calculations originates purely from the CME and CSE. The contribution from transverse momentum conservation to the CME signal was recently estimated in Ref. [16], in which the charge deformations are treated as linear perturbations on the bulk evolutions in 2+1D.

Conclusions and outlook We reported the results of event-by-event simulations of an anomalous hydrodynamic model for heavy-ion collisions. We solved the hydrodynamic equations including anomalous transport effects (CME and CSE) in 3+1D and calculated the values of the observables. We also developed a model of the initial axial charges created from the color flux tubes. The calculated values indicate that the observable works as expected, and the order of magnitude is comparable to experimentally measured values. The largest uncertainty arises from the choice of the lifetime of the magnetic fields. The existence of conducting matter affects the duration of the magnetic fields. We thus have to solve the hydrodynamic equations together with the Maxwell equations; this work is deferred to the future.
2016-01-05T16:14:54.000Z
2016-01-05T00:00:00.000
{ "year": 2016, "sha1": "765ca6da73d8bf170f05f90ddba8f17e6fe1ccb2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1601.00887", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "765ca6da73d8bf170f05f90ddba8f17e6fe1ccb2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219138972
pes2o/s2orc
v3-fos-license
A low‐cost environmental chamber to simulate warm climatic conditions Environmental chambers are used for a variety of experiments in multiple disciplines but are often prohibitively expensive. In this study, we developed an environmental chamber that allows reliable regulation of temperature and relative humidity in a range typical for warm climatic conditions. As we have only used consumer products, which are readily available off the shelf, the device is affordable (<€900) and easy to replicate. The presented chamber has inner dimensions of 1,790 × 970 × 520 mm (height × width × depth). It is heated with two infrared lamps, and for moistening, an ultrasonic mister is used. Air dehumidification and cooling down to ambient temperature are realized with inflowing compressed laboratory air. Additionally, we installed a Peltier element cooling system to enable temperatures below the ambient laboratory temperature. The chamber works in a temperature and humidity range of 15–50 °C and 10–95%, respectively. 2016) , particularly for laboratories that do not use them on a routine basis. Consequently, scientists tend to improvise (Michelsen et al., 2018;Schulz et al., 2015), and although improvisation has helped in these cases, a fully functional but affordable environmental chamber would be a useful addition to the researcher's toolbox. Some researchers have designed affordable environmental chambers, but often they only propose solutions for the control of temperature (Greenspan et al., 2016;Song et al., 2014). Others have constructed working models for the control of both temperature and humidity, yet they mostly focused on plant growth chambers simulating moderate environmental conditions (Bernard, Pitz, Chang, & Szlavecz, 2015;Katagiri et al., 2015). Katagiri et al. (2015), for instance, aimed for a temperature of 22 • C and a relative humidity of 75%, and they generally assumed that the temperature in their chamber would be within 8 • C of the ambient temperature. In view of much harsher conditions prevailing in nature, we developed a robust climate chamber that can mimic a relatively wide range of warm environmental conditions, in terms of temperature and humidity. We describe a simple, affordable (<€900) do-it-yourself (DIY) chamber that mainly consists of off-the-shelf equipment. After providing an overview of the design, we present the results of a rigorous multiweek test of the device. A bill of materials, including suppliers and prices, a construction manual, and a comparison with commercially available environmental chambers is given in Supplemental Materials 1 and 3, respectively. DESIGN Aiming at a versatile tool for multiple applications such as larger column experiments (Schulz et al., 2015) or tests of equipment (Michelsen et al., 2018), we designed an environmental chamber with inner dimensions of 1,790 × 970 × 520 mm (height × width × depth). Although we generally acknowledge the usefulness of microcontrollers, we deliberately avoided this tool (and the associated coding) and entirely relied on readily available consumer products to allow for easy replication and operation. The chamber itself consists of 16-mm polycarbonate twinwall sheets framed by a heavy-duty shelf. The front has an opening of about 900 × 400 mm, which is covered by a transparent acrylic glass panel (Figure 1a). To guarantee a homogeneous temperature and humidity distribution, an air circulation system with a fan ensures a gentle but constant air flow through the chamber (Figure 1b). 
The temperature is controlled by a dual-relay thermostat. If the temperature in the chamber falls below a defined threshold, two infrared lamps (2 × 150 W or 2 × 300 W) are switched on. If the temperature exceeds a threshold, the lamps are switched off and a valve opens so that cool and dry air, provided by the compressed air system in the laboratory, flows into the chamber. Here, a pressure-reducing valve, set to 0.5 bar, maintains a constant flow rate of 0.14 L s −1 . Our compressed laboratory air is dehumidified and has a constant temperature of 21-22 • C, which in turn constitutes the minimum temperature that can be reached in this setup. For most experiments targeting warm arid conditions, this would be cold enough. Nevertheless, we installed an additional cooling system, which replaces the previously described one, if temperatures below the compressed air temperature must be reached. This system is based on eight 60-W Peltier elements that are installed into the back wall of the chamber. On top of the cooling side of the Peltier elements (inside of the chamber), a fan is mounted for the cold air distribution. On the warming side (outside of the chamber), a water-cooling block is attached. Nonetheless, for the simulation of warm arid conditions, the cooling system using compressed laboratory air should be sufficient. Hence, the cooling method based on Peltier elements can be considered as an optional add-on. Core Ideas • We develop a low-cost environmental chamber to simulate warm climatic conditions. • It allows the regulation of temperature (15-50 • C) and relative humidity (10-95%). • All parts are available off the shelf. • It is affordable (<€900) and easy to replicate. Simultaneously to temperature, relative humidity is controlled by a dual-relay hygrostat. When the relative humidity falls below or exceeds a threshold, the air inside the chamber will be moistened or dehumidified, respectively. Air moistening is realized with a fan-driven air circuit passing an external humidifier ( Figure 1a). This humidifier consists of a polypropylene container with deionized water and an ultrasonic mister (Hannusch, 1995;Katagiri et al., 2015). Air dehumidification works the same way as the firstly described cooling method (i.e., via inflow of cool and dry compressed air). For both cooling and drying, the compressed air flows into the internal air tubing system for quick distribution. The thermostat and the hygrostat are simple plug and play devices, which allow intuitive programming and, in case of the thermostat, even for different time windows per day. To avoid a permanent on-and-off switching of the coolingheating or moistening-drying systems, a buffer can be applied to the target values for temperature and humidity. TEST To explore the chamber's capabilities and limits, it was tested with various temperature and relative humidity settings for several weeks. For monitoring purposes during this test, we additionally installed a CS215 temperature and relative humidity probe (Campbell Scientific) next to the sensors of the thermostat and the hygrostat, which are located in the center of the chamber about 60 cm below the infrared lamps. Measurements were recorded with a CR800 datalogger (Campbell Scientific) with a 1-min logging interval. The laboratory in which the test was performed has an ambient temperature of 21-22 • C and a relative humidity between 50 and 60%. 
The laboratory in which the test was performed has an ambient temperature of 21–22 °C and a relative humidity between 50 and 60%. These conditions correspond approximately to the test standard (i.e., most commercial chambers are tested at 23 °C ambient temperature and about 65% relative humidity; ESPEC CORP, 2019a, 2019b; Memmert, 2019; Russells Technical Products, 2019). In a series of pre-tests, we found that, depending on the target temperature, different hardware configurations of the environmental chamber are required. For target temperatures of 25–35 °C, two 150-W infrared lamps are suitable. For higher target temperatures of up to 50 °C, the 150-W lamps should be replaced by two 300-W bulbs. For both temperature ranges, the cooling is realized with inflowing compressed laboratory air. In the case of lower desired temperatures, ranging from ambient laboratory temperature down to 15 °C, the compressed air cooling system has to be replaced by the Peltier element cooling system (see above).

The test was conducted with target temperatures of 15, 20, 25, 30, 35, 40, 45, and 50 °C. For each temperature setting, the relative humidity was initially set to 5% and subsequently increased to 10, 30, 50%, etc., until a temperature-dependent limit was reached. As soon as the feasible relative humidity setting was exceeded (indicated by unstable relative humidity records and condensed water on the acrylic glass), we gradually lowered the target humidity until the values stabilized (Figure 2). The buffers for the thermostat and the hygrostat (see the section above) were set to ±0.5 °C and ±1%, respectively. Each selected temperature-humidity combination was tested for 24 h, resulting in a total test period of 48 d.

The test yielded several findings. First, it showed how long it takes to reach a given target value (lag time). Generally, temperature increases of 5 °C are associated with lag times of <10 min. In the case of relative humidity, lag times are not as constant. For the increases from 10 to 30%, 30 to 50%, and 50 to 70%, lag times of about 30, 60, and 120 min were noted, respectively. The strong decrease in relative humidity at the start of a new temperature setting lasted even longer. In the case of the transition from 90 to 5% (at 25–30 °C), it took about 8 h. This is caused by the condensation of water drops and the relatively long time required for their evaporation. As soon as a target setting was reached, temperature and relative humidity records were rather stable. Small fluctuations are usually within the buffer range of the thermostat and hygrostat. An exception is the period in which the Peltier elements were used for cooling. Here, stronger fluctuations were noticed (Figure 2).

The test of the environmental chamber showed that the temperature as well as the humidity measurements of the thermostat and hygrostat differ from those recorded by the independent CS215 probe (Figure 2). The latter is a tested device with a temperature accuracy of ±0.4 °C (5–40 °C), a relative humidity accuracy of ±2% (10–90%), and a temperature dependency of ±2% (20–60%) for the relative humidity sensor (Campbell Scientific, 2016). For our purposes, these accuracies are satisfactory. Hence, we consider the measurements of the CS215 probe as true reference values. For future applications of the environmental chamber, we would like to be able to omit the somewhat expensive reference probe and datalogger. Therefore, we developed correction functions for the measurements of the thermostat and the hygrostat.
Figure 2. Target and measured temperature (left axis) and relative humidity (right axis) over the 48-d testing period.

The temperature sensor deviation is independent of humidity and depends only on the temperature itself. The relation between true temperature and target temperature can be described by a simple linear regression model (Equation 1), with n = 8 and R² = 1.00, where ϑ is the true temperature (°C) and ϑ_tar is the target temperature (°C). The relative humidity sensor drift depends on both temperature and relative humidity. The relation between true relative humidity and the target temperature and target relative humidity can be described by a multivariate linear regression model:

φ = 1.008 φ_tar − 0.175 ϑ_tar + 8.957    (2)

with n = 42 and R² = .99, where φ is the true relative humidity (%) and φ_tar is the target relative humidity (%).

Finally, this test shows the range of feasible temperature and relative humidity settings. Although the temperature range seems to be independent of the relative humidity, the range of the relative humidity is very sensitive to the temperature. The presented environmental chamber can simulate a temperature window from 15 to ∼50 °C. A low relative humidity of 10–60% is possible over the entire temperature range. However, a higher relative humidity of up to 95% is only feasible at moderate temperatures around 25 °C. From this optimum, the maximum possible relative humidity continuously decreases to 60% towards the minimum and maximum temperature limits (Figure 3). For high temperatures, this can be explained by increasing gradients to the ambient laboratory temperature. The higher the gradient, the lower the relative humidity at which condensation (i.e., reaching of the dew point) occurs at the inner walls.

In addition, we performed a second series of tests to analyze the homogeneity of the temperature distribution within the chamber for the target temperatures of 25, 30, 35, 40, 45, and 50 °C. To analyze the vertical temperature distribution, we installed eight temperature sensors in two arrays below the infrared lamps at different levels. It is not surprising that the upper sensors, which were closest to the lamps, showed the highest temperatures, whereas the lowest sensors showed the lowest temperatures. Here, the largest difference (4.3 °C) was recorded for the target temperature of 50 °C (Supplemental Figure S2.2). We then repeated the test with the same target temperatures but placed 21 sensors aligned on a horizontal plane 80 cm above the bottom of the chamber to map out the lateral homogeneity. In general, the recorded temperatures differ less with this sensor arrangement. Again, the largest difference (1.7 °C) was recorded for the highest temperature setting of 50 °C (Supplemental Figure S2.4). A more detailed description of this test can be found in Supplemental Material 2.

We also point out that there might be a difference between air temperature and sample temperature (caused by the infrared radiation emitted by the heating system), which implies that researchers have to decide where to place the sensors of the thermostat and hygrostat. Using the example of a soil column experiment, one has to consider which temperature is relevant: air or soil temperature. Accordingly, one has to place the sensor in the air or on the soil surface. For a simple test case, we observed the surface of a soil column (fine quartz sand, 10-cm diam., 20-cm height), which was ∼1 °C warmer than the air temperature (shielded sensor, 40 °C).
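For convenience, the humidity correction of Equation 2 can be wrapped in a small helper so that target settings can be translated into expected true values without the reference probe. The sketch below is only an illustration using the coefficients of Equation 2 as printed above; since the coefficients of Equation 1 are not reproduced in the text, the corresponding temperature correction is omitted here.

def true_relative_humidity(target_rh, target_temp_c):
    """Expected true relative humidity (%) for given target settings,
    using the multivariate regression of Equation 2 (n = 42, R^2 = 0.99)."""
    return 1.008 * target_rh - 0.175 * target_temp_c + 8.957

# Example: a target setting of 50% RH at a target temperature of 40 degC
print(round(true_relative_humidity(50.0, 40.0), 1))  # -> 52.4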
Figure 3. Temperature and relative humidity ranges that can be simulated with the environmental chamber (gray box).

CONCLUDING REMARKS The presented environmental chamber has proven to be a reliable and affordable tool to mimic warm arid conditions. All necessary parts are consumer products and readily available off the shelf. Moreover, the setup does not require microcontroller coding or difficult wiring, enabling replication with reasonable effort. Nevertheless, the test of the chamber also revealed some limitations. The simulation of temperatures below ambient laboratory temperature requires a hardware modification (i.e., Peltier elements have to be used for cooling instead of compressed air). In our case, this enables only a minor temperature decrease of 6–7 °C below ambient temperature. Moreover, higher relative humidities are only possible at moderate temperatures. For example, 80% is only feasible in the range of 20–35 °C. In most of the tested cases, the target settings are reached relatively quickly, which allows diurnal cycles to be simulated. However, a strong decrease of relative humidity from its temperature-dependent maximum down to the minimum takes several hours. For some applications (e.g., short-term humidity fluctuations during the simulation of diurnal cycles), these long equilibration times might not be acceptable. In such cases, the total possible humidity range cannot be used (i.e., one must stay below the upper humidity limit to prevent condensation at the inner chamber walls). A further limitation is the slightly inhomogeneous temperature distribution, which is due to the small number of lamps acting as point heat sources.

Given these limitations, commercial environmental chambers may still be the better choice for some applications. For example, various commercial chambers can simulate temperatures that exceed our 50 °C limit while maintaining a relative humidity of up to almost 100%. However, many of these are unable to simulate low temperatures (<30 °C) and low relative humidities (<20%; Supplemental Figure S3.1). Others cover a wide temperature and humidity range but may not provide sufficient space for the planned experiment (Supplemental Table S3.1). Ultimately, the right choice depends on individual, demand-specific needs. To check whether our DIY chamber is suitable for one's purpose, or which commercial chamber would be an alternative, we provide a comparison of selected environmental chambers in Supplemental Material 3. For the simulation of warm arid conditions, the presented environmental chamber shows satisfactory performance. Moreover, its design leaves room for demand-specific adaptations (e.g., in terms of [a] chamber size, [b] the number, distribution and power of cooling or heating units, or [c] improved insulation). Such improved insulation could potentially remedy some limitations and enhance cooling capabilities or increase relative humidity limits for higher temperatures, if necessary.
2020-04-16T09:07:33.459Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "de478fb4fd40d4ef440d1cf11df54d1e103de7f0", "oa_license": "CCBY", "oa_url": "https://acsess.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/vzj2.20023", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "c2439459f0acc05bf0a1c178ead433d6ff089c57", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
55571280
pes2o/s2orc
v3-fos-license
Antidiabetic potential of liquid-liquid partition fractions of ethanolic seed extract of Corchorus olitorius

The Corchorus olitorius seeds were pulverized (ground) to powder. The powdered seed (200 g) was extracted with 500 ml of ethanol (99.9%) within a period of 24 h and the procedure repeated 3 times using the same powdered seed. Extraction and fractionation were carried out with some modification in the choice of primary solvent (water) and partitioning (separating) solvents (hexane, chloroform, ethyl acetate and butanol). The fractions obtained (hexane, chloroform, ethyl acetate, saturated butanol and last remaining aqueous) were tested for antidiabetic and phytochemical properties. Two doses were employed while testing in diabetic rats, 500 and 250 mg/kg body weight. Diabetes was induced by a single intraperitoneal injection of 150 mg/kg body wt alloxan (Sigma) in saline. Animals with a blood glucose level ≥ 150 mg/dl were considered diabetic. All the fractions had some bioactivity in alloxan induced diabetic rats, the activity being better with the 500 mg than the 250 mg doses. Statistical significance (p < 0.05) in bioactivity (blood sugar change) was only seen with the aqueous fraction (1 h post treatment), the chloroform fraction (1, 3 and 4 h post treatment) and the ethyl acetate fraction (2 and 3 h post treatment). The action of the seed extract can be attributed to the phytochemical content of the extract. Of these phytochemicals, flavonoids, alkaloids and saponins have been reported to have a hypoglycaemic effect.

INTRODUCTION The therapeutic cure for diabetes mellitus has remained elusive despite the discovery of an array of medications that can ameliorate the outcome of the disease (Holman, 2013). Plants have remained a veritable source for drug discovery the world over (Etuk, 2006). The leaf extract of Corchorus olitorius (CO) has been reported to possess a hypoglycaemic effect (Abo et al., 2008) and high antibacterial activity (Adegoke and Adebayo, 2009). The crude ethanolic extract of the seed has been evaluated in our laboratory for antidiabetic properties in experimental animals (In Press). The current effort is aimed at fractionating the ethanolic seed extract of the plant and assessing the antidiabetic effect of each fraction in alloxan induced diabetic rats. The outcome may stimulate the development of an antidiabetic drug from the plant extract.

The experimental model of a disease aids not only the understanding of the pathophysiology of the disease but also the development of drugs for its treatment (Etuk, 2010). Alloxan is a well known diabetogenic agent widely used to induce type II diabetes mellitus (DM) in animals (Viana et al., 2004). Alloxan causes selective necrosis of pancreatic islet β-cells, producing different grades of severity of DM depending on the dose used. The simplistic argument made against the use of alloxan to induce type II DM is that alloxan produces β-cell damage, thus leading to type I rather than type II DM. However, studies showed that there is no differential response to hypoglycemic agents between alloxan-induced and glucose-loaded hyperglycemic (with intact pancreatic cells) rats (Etuk, 2010). The best known drug-induced DM is the alloxan induced, capable of inducing both type I and type II DM with proper dosage selection (Etuk, 2010). This justifies its use in this study.
The prevalence of diabetes for all age-groups worldwide was estimated to be 2.8% in 2000 and 4.4% in 2030.The total number of people with diabetes is projected to rise from 171 million in 2000 to 366 million in 2030.The prevalence of diabetes is higher in men than women, but there are more women with diabetes than men.The urban population in developing countries is projected to double between 2000 and 2030 (Sarah et al., 2004).In Africa, the prevalence of DM is estimated at about 2.4%, in Nigeria, at about 3.1% (Gill et al., 2009). Laboratory animals Male albino rats from the Biological Sciences Department of Usmanu Danfodiyo University (UDUS) were used for the study.The rats were housed in metal cages in the laboratory at temperature between 30 to 37°C; 12 h/12 h light/dark cycle and maintained with free access to standard rat feeds and water, for 7 days before experimentation.12 h before experimentation, food was withdrawn but water available ad libitum. Extraction and fractionation procedure Extraction and fractionation were according to Gandhi et al. (2003) and Leila et al. (2007) with some modification in the choice of primary solvent (water) and partitioning (separating) solvents (hexane, chloroform, ethyl acetate and butanol).The powdered seed (200 g) was extracted with 500 ml of ethanol (99.9%) within a period of 24 h and the procedure repeated 3 times using the same powdered extract.The solvent was removed at 45°C under vacuum.The ethanol extract residue obtained was dissolved in water (500 ml) and exhaustively extracted by consecutive liquid/liquid partition with hexane (500 ml), chloroform (500 ml), ethyl acetate (500 ml) and saturated butanol (500 ml) using a separating funnel (1000 ml).The hexane, chloroform, ethyl acetate, saturated butanol and last remaining aqueous fractions was evaporated to obtain fractions (Gandhi et al., 2003).The fractions obtained (hexane, chloroform, ethyl acetate, saturated butanol and last remaining aqueous) were tested to evaluate the antidiabetic and phytochemical properties. Phytochemical analysis The phytochemical constituents of the CO fractions were conducted using methods outlined by Odebiyi and Sofowora (1979). Induction of diabetes in rats Diabetes was induced by a single intraperitoneal injection of 150 Egua et al. 5 mg/kg body wt alloxan (Sigma) in saline.Animals with a blood glucose level ≥ 150 mg/dl were considered diabetic (Diniz et al., 2008). The normal control was injected intraperitoneally with normal saline (2 ml/1 kg).A commercial available Glucometer (Accu Chek Active, Roche Diagnostics GmbH, D-68298 Germany) was used to determine blood glucose level in the animals (Glucose dye oxidoreductase mediator reaction method).Blood glucose was measured through tail tipping blood technique (Karl-Heinz et al., 2001). 
Hypoglycemic activity in alloxan induced diabetic rats In this experiment, seven major groups of rats consisted of 5 alloxan induced diabetic rats each. One group received no treatment other than 10% tween 20 in normal saline, which was used as diluent in the treatment groups (Gowthamarajan and Sachin, 2010). Another consisted of alloxan induced diabetic rats administered 0.2 mg/kg glibenclamide orally, and groups 3 to 7 consisted of 2 dosage groups (250 and 500 mg/kg), each with 5 alloxan induced diabetic rats, administered the fractions (hexane, chloroform, ethyl acetate, butanol and aqueous fractions). Glucose levels were measured just prior to and 1, 2 and 4 h after extract/drug administration (t = 0 min). Results were calculated as percentage decrease of the initial value (by the difference between the glucose level at time t = 0 min and at the respective hours) (Cunha et al., 2008).

Phytochemicals of fractions The phytochemicals isolated in the raw powdered seed were also seen in the ethanol extract, with the exception of anthraquinone, which was absent in the ethanol extract. All the fractions of the ethanolic seed extract were noticed to have volatile oil present. Also, all the fractions except the aqueous fraction contained alkaloids and cardiac glycosides. All the fractions except the hexane fraction contained glycosides. The fractions were noticed to have phytochemicals in different combinations and proportions. While the aqueous fraction had the least, containing 3 (glycosides, volatile oil and balsams), the hexane fraction contained 4 (alkaloids, tannins, cardiac glycosides and volatile oil), ethyl acetate had 5 (alkaloids, flavonoids, glycosides, cardiac glycoside and volatile oil), chloroform had 7 (alkaloids, tannins, flavonoids, glycosides, saponins, cardiac glycoside and volatile oil) and the butanolic fraction had the highest, 8 (alkaloids, tannins, flavonoids, glycosides, saponins, cardiac glycoside, volatile oil and balsams). All the fractions lacked the steroids and anthraquinone present in the powdered seed (Table 1).

Bioassay of fractions in alloxan induced diabetic rats The bioassays were carried out using two doses, 500 and 250 mg/kg, and these showed that the fractions all had some bioactivity in alloxan induced diabetic rats (Tables 2 to 4; values are mean ± SD, n = 5; *significant difference (p < 0.05) with respect to control; #significant difference (p < 0.05) with respect to glibenclamide; ## p < 0.01). The activity was better with the 500 mg doses than the 250 mg. Statistical significance in bioactivity (blood sugar change) was only seen with the aqueous fraction (1 h post treatment), the chloroform fraction (1, 3 and 4 h post treatment) and the ethyl acetate fraction (2 and 3 h post treatment) (Table 3). The calculated percentage reduction in blood sugar due to fractions (Table 4) showed that the aqueous fraction had the best bioactivity, followed by the chloroform, butanol, ethyl acetate and hexane fractions, in that order.
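The percentage reduction reported in Tables 2 and 4 follows directly from the definition given above. The short sketch below illustrates the calculation in Python with hypothetical glucose readings; the numbers are invented for the example and are not the study's data.

def percent_reduction(glucose_t0, glucose_t):
    """Percentage decrease in blood glucose relative to the value at t = 0 min."""
    return (glucose_t0 - glucose_t) / glucose_t0 * 100.0

# Hypothetical readings (mg/dl) at 0, 1, 2 and 4 h after administration
readings = {0: 210.0, 1: 176.0, 2: 158.0, 4: 149.0}
for hour in (1, 2, 4):
    print(hour, "h:", round(percent_reduction(readings[0], readings[hour]), 1), "%")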
DISCUSSION In diabetic rats, the bioassay of the fractions was carried out using two doses, 500 and 250 mg/kg, and this showed that the fractions all had some bioactivity in alloxan induced diabetic rats (Tables 2 to 4). The activity was better with the 500 mg doses than the 250 mg. Statistical significance (p < 0.05) in bioactivity (blood sugar change) was only seen with the aqueous fraction (1 h post treatment), the chloroform fraction (1, 3 and 4 h post treatment) and the ethyl acetate fraction (2 and 3 h post treatment) (Table 3). The calculated percentage reduction in blood sugar due to the fractions (Table 4) showed the aqueous fraction having the best bioactivity, followed by the chloroform, butanol, ethyl acetate and hexane fractions in that order. Using the calculated percentage reduction in blood sugar (Table 4), in the 1st hour all the fractions were noticed to have better sugar control than glibenclamide, in the following order: aqueous fraction, ethyl acetate, hexane, chloroform and butanol fractions. In the 2nd hour, the fractions had better control than glibenclamide in this order: aqueous, ethyl acetate, hexane, butanol and chloroform. In the 3rd hour, the order was aqueous, chloroform, butanol, ethyl acetate, hexane and lastly glibenclamide. In the 4th hour, the order was aqueous, chloroform, butanol, glibenclamide, ethyl acetate and hexane. These findings suggested that the different liquid-liquid partition fractions of the ethanolic seed extract of Corchorus olitorius had different efficacy, onset of action and period of action as antidiabetics.

There are a number of other plants with acclaimed antidiabetic activity. Among these are Treculia africana and Bryophyllum pinnatum in the management of diabetes and heart disease (Ogbonnia et al., 2008); there is also a report that the ethanol leaf extract of Cissampelos mucronata possesses hypoglycemic activity in streptozocin induced diabetic rats. Gynostemma pentaphyllum tea was found to improve insulin sensitivity in type 2 diabetic patients (Huyen et al., 2013). Aqueous extract of Ganoderma lucidum has shown significant hypoglycemic effects in alloxan induced diabetic Wistar rats (Mohammed et al., 2007). Aerial parts of Phyllanthus niruri have great potential as an anti-diabetic remedy (Nwanjo, 2007). Aqueous extract of Ficus religiosa bark possesses significant antidiabetic activity (Rucha et al., 2010). Oral administration of Boerhaavia diffusa and Ocimum sanctum possesses anti-hyperglycemic activity (Dwividendra et al., 2013).
The hypoglycemic activity of Fumaria parviflora in the treatment of diabetes has been validated (Fatemeh et al., 2013). The action of the liquid-liquid partition fractions of the seed extract can be attributed to the phytochemical content of the extract. Of these phytochemicals, flavonoids (Taoying et al., 2009; Kaku et al., 2004), alkaloids (Day et al., 1990) and saponins (George et al., 2002) have been reported to have a hypoglycaemic effect. Several researchers have reported plant extracts (hypoglycaemic agents) with several combinations of phytochemicals to which the reported phytochemicals (Table 1) belong (Ahad et al., 2011; Ocho-Anin et al., 2010; Atangwho et al., 2009). Adeneye and Adeyemi (2009) reported the phytochemicals alkaloids, flavonoids, tannins and glycosides of Hunteria umbellata to have hypoglycaemic effects in normoglycaemic, glucose- and nicotine-induced hyperglycaemic rats. This would therefore mean that the hypoglycaemic action of the fractions of the seed extract of CO could be due to the phytochemicals present, singly or in combination. This study has stimulated further research (ongoing) on the most active fraction in a bid to isolate and structurally elucidate the active antidiabetic agent(s).

Table 2. Effect of CO on blood glucose level of alloxan-induced diabetic rats / reduction (%) in blood sugar.
Table 3. Blood sugar change due to treatment with CO fractions.
Table 4. Calculated percentage reduction in blood sugar due to fractions.
2018-12-11T03:33:53.991Z
2014-01-31T00:00:00.000
{ "year": 2014, "sha1": "7a916753ff78a58dbe869321c2bffbc22392eaa7", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/JPP/article-full-text-pdf/E9EF95A42489.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7a916753ff78a58dbe869321c2bffbc22392eaa7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
2230649
pes2o/s2orc
v3-fos-license
Gross hematuria as the presentation of an inguinoscrotal hernia: a case report Introduction Several complications have been reported with inguinal hernias. Although hematuria and flank pain, either as the presentation or as a complication of inguinal hernia, are infrequent, this condition may lead to the development of obstructive uropathy, which can have diverse manifestations. Case presentation A 71-year-old Iranian man with Persian ethnicity presented with new onset episodes of gross hematuria and left-sided flank pain. A physical examination revealed a large and non-tender inguinal hernia on his left side. An initial workup included an abdominal ultrasound, an intravenous pyelogram and cystoscopy, which showed left hydronephrosis and a bulging on the left-side of his bladder wall. On further evaluation, computed tomography confirmed that his sigmoid colon was the source of the pressure effect on his bladder, resulting in hydroureteronephrosis and hematuria. No tumoral lesion was evident. Herniorrhaphy led to the resolution of his signs and symptoms. Conclusion Our case illustrates a rare presentation of inguinal hernia responsible for gross hematuria and unilateral hydronephrosis. Urologic signs and symptoms can be caused by the content of inguinal hernias. They can also present as complications of inguinal hernias. Introduction Hematuria may reflect either significant nephrological or urological disease. Hematuria of nephrological origin is frequently associated with casts in the urine and almost always associated with significant proteinuria [1]. Isolated hematuria without proteinuria, other cells or casts is often indicative of bleeding from the urinary tract. Hematuria is defined as two to five red blood cells (RBCs) per high-power field (HPF). Common causes of isolated hematuria include stones, neoplasms, tuberculosis, trauma and prostatitis. Gross hematuria with blood clots almost never has a glomerular basis;rather, it suggests a postrenal source in the urinary collecting system. The likelihood of urogenital neoplasms in patients with isolated painless hematuria (nondysmorphic RBCs) increases with age. Hematuria with pyuria and bacteriuria is a typical presentation for infection. Acute cystitis or urethritis in women can cause gross hematuria. Hypercalciuria and hyperuricosuria are also risk factors for unexplained isolated hematuria in both children and adults [2]. Herein, we present a patient with a left inguinal hernia resulted in flank pain, hydronephrosis and hematuria. Case presentation A 71-year-old Iranian man with Persian ethnicity presented to our urology clinic complaining of recurrent episodes of gross hematuria and left-sided flank pain of one week's duration. He mentioned no history of trauma, prior disease, medication usage or significant family disorder. On physical examination, he was fully conscious with a blood pressure of 110/70 mmHg, pulse rate of 120 beats/min and temperature of 37°C. His bowel sounds were normoactive on auscultation. There were no remarkable findings on abdominal palpation, except for a non-tender, large left inguinal hernia with extension to his scrotum. Other related examinations, including a digital rectal examination, were normal. A urine analysis showed pH 5, with 10 to 15 RBCs per HPF, 10 white blood cells per HPF, with rare bacteria and yeast. A urine culture was negative. Genitourinary ultrasonography reported a grade 2 hydroureteronephrosis on his left side. 
An intravenous pyelogram also revealed left hydroureteronephrosis associated with an ill-defined filling defect on the left side of his urinary bladder (Figure 1). In order to rule out any intravesical lesion, a cystoscopy was performed, which showed a bulging on the left side of his bladder wall due to extravesical pressure. The mucosal lining of his bladder was normal. Intravenous and oral contrast-enhanced abdominal computed tomography (spiral multislice thin-section scan) showed an entrapped sigmoid colon that had herniated through his left inguinal canal. Anteromedial displacement of his bladder and left ureter was also evident due to the pressure of his sigmoid colon. His left ureter was dilated due to distal obstruction (Figure 2). A diagnosis was made of a large inguinal hernia with pressure effects on the urinary system, resulting in hematuria and obstructive hydroureteronephrosis; the abdominal wall was thus opened with a classic inguinal incision. The contents of his hernial sac, including his sigmoid colon and its mesentery, adhered to the surrounding tissues. An attempt to reduce the content of his hernial sac was unsuccessful, so a low midline incision was made for better exposure and reduction. There was no intra-abdominal mass. A redundant sigmoid colon was found fixed at the internal ring due to severe and chronic adhesion. His proximal sigmoid colon had compressed his bladder and distal ureter at the vesicoureteral junction. After reduction of his hernial sac content, our patient underwent a successful hernia repair with mesh, leading to a quick and uneventful postoperative recovery. Our patient's signs and symptoms subsided following surgery. On a postoperative cystogram, all signs had disappeared (Figure 3). Our patient was initially followed up with monthly visits for the next six months, and then every six months. He remained symptom free during postoperative follow-up.

Discussion Although hematuria presents with many urologic diseases, the acute onset of gross hematuria is almost always indicative of a postrenal pathology. This includes stones, infection, trauma and tumor. Inguinal hernias present along a spectrum of scenarios. These range from incidental findings to surgical emergencies such as incarceration and strangulation of the hernia sac contents. Asymptomatic inguinal hernias are frequently diagnosed incidentally on physical examination or may be brought to the patient's attention as an abnormal pain-free bulge. In addition, these hernias can be identified intra-abdominally during laparoscopy [3]. Patients who present with a symptomatic inguinal hernia will frequently present with groin pain. Less commonly, patients will present with extra-inguinal symptoms such as a change in bowel habits or urinary symptoms. Regardless of size, an inguinal hernia may impart pressure onto proximal nerves, leading to a range of symptoms. These include generalized pressure, local sharp pains and referred pain. Lastly, neurogenic pains may be referred to the scrotum, testicle or inner thigh. A change in bowel habits or urinary symptoms may indicate a sliding hernia consisting of intestinal contents or involvement of the bladder within the hernia sac [3]. The presence of urological symptoms and signs such as hematuria, flank pain and hydroureteronephrosis may be seen with an inguinal hernia, but they generally occur when there is associated bladder herniation [4-6].
A broad literature search revealed that the presence of the bladder within an inguinal hernia occurs in approximately 1% to 4% of all adult hernia cases. Ureteroinguinal herniation was also reported and seems to be an even more rare entity [7]. Moreover, although urological symptoms and signs are often the predominant clinical symptoms when herniation of urinary organs occurs, our case revealed that urological features could be present even in the absence of urinary organ herniation. We think that, given the resultant urinary retention, high intraluminal pressure and consequent urothelial injury in the ureter and upper urinary system, the hematuria occurred as a rare presentation of an inguinal hernia. So, in this clinical pattern, care must be taken to prevent probable morbidities through either diagnostic or therapeutic procedures. Ruling out urinary organ herniation during preoperative work-up is helpful to avoid surgical complications, the most common being ureteral injury. Ultrasound may be helpful in diagnosis, with its ability to demonstrate hydronephrosis and, occasionally, the herniated urethra in the inguinoscrotal region, but it has limited value in showing the herniated ureter and a high index of suspicion is required. Nevertheless, in this case, ultrasound could be the first step in diagnostic imaging since urinary symptoms were present. Intravenous pyelography may aid the diagnosis by showing an intact non-herniated ureter. Computed tomography is the preferred imaging modality with high spatial resolution, not only to detect herniation, but also to show associated pathologies. Conclusion Our case illustrates a rare presentation of inguinal hernia, responsible for gross hematuria and unilateral hydroureteronephrosis. Urologic signs and symptoms can be induced by the contents of an inguinal hernia. They can also be present as complications of an inguinal hernia. Consent Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for Review by the Editor-in-Chief of this journal.
2017-06-25T10:25:09.514Z
2011-12-04T00:00:00.000
{ "year": 2011, "sha1": "020743c6d2b622f8afb20dbe07d87b36af199412", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-5-561", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7bd9a6f66eef086d4b2e80ed5f4e0762b38a4e73", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258543182
pes2o/s2orc
v3-fos-license
Robot’s Inner Speech Effects on Human Trust and Anthropomorphism Inner Speech is an essential but also elusive human psychological process that refers to an everyday covert internal conversation with oneself. We argued that programming a robot with an overt self-talk system that simulates human inner speech could enhance both human trust and users’ perception of robot’s anthropomorphism, animacy, likeability, intelligence and safety. For this reason, we planned a pre-test/post-test control group design. Participants were divided in two different groups, one experimental group and one control group. Participants in the experimental group interacted with the robot Pepper equipped with an over inner speech system whereas participants in the control group interacted with the robot that produces only outer speech. Before and after the interaction, both groups of participants were requested to complete some questionnaires about inner speech and trust. Results showed differences between participants’ pretest and post-test assessment responses, suggesting that the robot’s inner speech influences in participants of experimental group the perceptions of animacy and intelligence in robot. Implications for these results are discussed. between a child and a caregiver that instructs the child to solve simple tasks. Inner speech arises in a developmental fashion because first it figures out as social speech, that is the set of instructions the caregiver explains to the child. Then, it comes the egocentric speech of the children who repeats these instructions and progressively internalizes them, taking the form of covert self-directed speech. After the internalization process, inner speech is formed. In time, the child gradually becomes more autonomous and gains the ability of self-regulation. Vygotsky claimed that "...inner speech is speech for oneself: external speech is for others". Inner speech consists of predicates and is highly abbreviated. Scholars have used different terms when referring to inner speech (e.g. covert speech, self-talk, private speech). However, it is generally defined as the subjective experience of language in the absence of an audible articulation [2]. There is some evidence that inner speech plays an important role for human psychological balance as it is linked to self-awareness [3], self-regulation [4], problem-solving [5], and adaptive functioning [2]. Recently, an innovative computational model has been developed which pave the way to a new frontier in the field of artificial intelligence: implementing inner speech in robot [6] in order to improve human-robot interaction. More specif-ically, since inner speech is a covert speech that cannot be heard from the outside, robot's inner speech is reproduced using overt self-talk. The same architecture was used for demonstrating how robot inner speech improves the robustness and the transparency during cooperation, meeting the standard requirements for collaborative robots [7]. Suggestive results were also obtained in passing the mirror test: inner speech enables a conceptual reasoning for inferring the identity of the reflected entity in a mirror, and robot becomes able to recognize itself [8]. In a previous paper [9], we argued that robot's inner speech might act as a facilitator for human understanding and predicting the robot behaviors, as they form adequate mental representation of the robot. As a matter of fact, mind perceptions consist of two core dimensions: (1) agency, e.g. 
self-control, memory, planning and communication; (2) experience, e.g. pain, pleasure, desire, joy, consciousness [10]. Thus, such system, which simulates a human psychological functioning, would improve humanrobot interaction by facilitating users' attribution of human qualities to the robot, and by enhancing human-robot trust. As a matter of fact, a recent study [11] demonstrated that, in a human-robot collaborative environment, the robot ability to explain its choices and decision making increased trust and the perceptions of robot animacy, likeability and perceived intelligence. Both human-robot trust and users' attribution of human qualities to the robot are very important aspects of humanrobot interaction. Trust is a multifaceted psychological construct for which there is no universal definition and many different disciplines have contributed to its study. From human-human trust studies in psychology, there are two main perspectives on trust: on the one hand, trust is considered a stable trait, shaped by early trust experiences in human life, which highlights a dispositional tendency to trust others [12,13]. On the other hand, trust is described as a changing state influenced by cognitive, emotional, and social processes [14,15]. More generally, scholars agree that trust involves two main characteristics: the positive attitude and expectations of the trust giver [16] and the willingness to be vulnerable and accept risks [17]. Trust has also a function of saving cognitive resources, since the creation of beliefs and expectations about others reduces the complexity of the social environment which otherwise require an active search and process for information [15,18]. However, the same elements that typify the human-human trust, may not be applied when a human interacts with an automation [19]. As a matter of fact, in human-human interaction, trust is affected by cognitive and affective processes [15,17], on the contrary, in human-robot interaction, trust might be affected predominantly by cognitive aspects since robots are expected to reach standard performances [20][21][22]. In the past years, trust became one of the leading research topic in the field of human-machine interaction, since artificial systems development and implementation have increased exponentially in every context, leading to growing interactions with humans [21]. In particular, robots are now used in different contexts such as military, security, medical, domestic, and entertainment [23]. Despite some robots are completely human operated or teleoperated, other robots are designed to be self-governed to some extent, in order to respond to situations that were not pre-arranged [22]. In this case, the greater the complexity of robots the higher the importance of trust in human-robot interaction. For these reasons, in the context of human-robot interaction studies, trust became a key factor in human reliance on robot partner [15,24] and it has been defined as an "attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability" [24]. Trust is an important factor for humans and robots to fully cooperate as a team [24,25] and humans tend to rely on the robot they trust compared to the one they do not [24,26] by willingly accept and use robot's instructions and suggestions [11,27]. 
Therefore, if human trust in robot is "misplaced" and not well calibrated the inevitable outcomes will be robot misuses or disuse leading to some negative or even catastrophic consequences [24,28]. Trust is closely related also to users' attribution of human qualities to the robot. Indeed, HRI studies supported the idea that human-robot trust dynamically emerges from the interaction among human-related factors (e.g. personality traits, emotional and cognitive processes), environment-related factors (e.g. competitive/collaborative context, culture, physical environment) and robot related factors (e.g. intelligence, transparency, anthropomorphism) [27,29]. Among robot related factors, an important role is definitely the perceived anthropomorphism, since studies have shown that, in the social-based HRI, people tend to trust more to robots that look (i.e. head, body, face, voice) and behave (e.g. nonverbal elements, dyadic and social gestures) like humans [30][31][32][33][34][35][36][37][38]. Other empirical evidences shows that trust is enhanced when people have a clear understanding of why, when and how a robot operates [39], that's because a system transparency help humans to form a precise mental model of robot capabilities [39]. It is critical for humans to understand exactly how and why a robot works, because trust can be compromised if the robot's capabilities cannot be understood [40]. Consequently, new automation systems should be developed with such insights from empirical research in mind to facilitate human-robot collaboration. Taking all this into account, this study aims to investigate if the robot's inner speech improves humans' trust levels and the perceptions of the robot features (anthropomorphism, animacy, likeability intelligence and safety) when the human and the robot interact for reaching a common goal. In addition, we examined also if the effects of inner speech were less or more related to participants' use of inner speech in daily life. In particular, our hypotheses were that: • H1: participants interacting with a robot equipped with inner speech system would have improved their trust levels more than participants interacting with a robot not equipped with inner speech system. • H2: participants interacting with a robot equipped with inner speech system would have improved their perception of robots' anthropomorphism, animacy, likeability intelligence and safety more than participants interacting with a robot not equipped with inner speech system. • H3: participants using inner speech in everyday life would show a higher effect of inner speech in experimental condition. • H4: independently from the use of inner speech, we expected also to find an increasing of trust towards robots and perception of robot features in all participants after the interaction with the robot. Method We planned a pre-test/post-test control group design. Participants were divided in two different groups, one experimental group and one control group. Participants in the experimental group interacted with the robot equipped with inner speech (independent variable/experimental treatment) whereas participants in the control group interacted with the robot that produces only outer speech. The choice of including a control group in the research design is to establish a baseline for comparison, by ensuring that the independent variable (inner speech) is the one responsible for changes in the dependent variable (trust and perceptions of robot features), and ultimately for experimental results. 
Without a control group, it is difficult to determine the effects of the independent variable (robot inner speech) on the dependent variable (perception of robot features). In addition, before and after the interaction, both groups of participants were requested to complete some questionnaires about inner speech and trust (see Subsection 2.2) in order to detect differences between experimental and control groups and also between pre-test and post-test sessions. Participants The sample is composed of 51 participants (29 males, 22 females) with a mean age of 25.04 (SD 9.53) that were randomly assigned to the experimental and to the Control condition. Experimental group consists of 33 participants (16 males, 17 females) with a mean age of 26.79 (SD 9.34), whereas control group consists of 18 participants (13 males, 5 females) with a mean age of 21.83 (SD 9.26). Difference in groups' size is due to many dropout in the control group after pre-test phase. Most of participants are students from engineering and psychology courses at the University of Palermo and participated voluntarily. All of them completed the informed consent and COVID-19 protocol before starting the experiment. Prior to this study, none of the participants had ever seen or interacted live with a robot. Materials and Procedures Questionnaires described below have been administered to all participants through online platform both in pre-test (Research Protocol A) and post-test (Research Protocol B) sessions. Research Protocol B has been administered after 15 days from Research Protocol A. The interaction session took place in the Robotics Lab of the University of Palermo. Questionnaires included in the research protocols were: • Trust Perception Scale-HRI [41] that assesses human perception of trust in robots. The shortened version of the scale, consisting of a 15 item scored on a 0-100 scale • GODSPEED Questionnaire [42] that assesses human perceptions and impressions of a robot. It is one of the most used measurement tool to assess perceptions of robot [43]. It is a 24 item rating scale, that consists of a set of bipolar pair of adjectives rated on a 5-point scale. The scale measures human perceptions of five robot features: Anthropomorphism (5 items), Animacy (6 items), Likeability (5 items), Perceived Intelligence (5 items), and Perceived Safety (3 items). The total score ranges from 1 to 5; • Self-Talk Scale [44] that measures how frequently participants use inner speech in everyday life. It consists of 16 items scored on a 5-point Likert-type scale (from 0 = Never, to 4 = Very Often). The scale also measures four different dimensions of inner speech from 4 item each: Self-Criticism, Self-Reinforcement, Self-Management, Social Assessment. The total score ranges from 0 to 64. This scale was used only in pre-test session (Research Protocol A). The Scenario A simple scenario was defined in which participants have to cooperate with robot in order to achieve a common goal. The scenario foresees the setting up of a virtual table with the robot, following an etiquette schema. The schema defines the set of rules according to which the utensils have to be arranged in the table. With the aim to not affect the interactions and the evaluations by the participants of the robot's behavior, the etiquette schema is not shown to the participants before the interactive sessions. In this way, the participants could question their own knowledge about the positions of the utensils, and possibly act affected by the robot's speech. 
The schema is shown at the end of the interaction, when the robot lists the objects correctly placed on the table, for mere knowledge. Figure 1 shows the etiquette schema used in the experiments. If a utensil is finally placed on a different position than the expected one according to the schema, the etiquette rule for that utensil is infringed. The virtual table is implemented on a tablet surface, where the participant can drag and drop the utensils, can make requests to the robot, and can see the robot's actions. The choice of that scenario enabled the possibility to analyze the cues in particular situations which occur during human-robot cooperation, that are: • The etiquette infringement, representing a conflicting situation, that is the participant places the utensils in an incorrect final position, or he/she asks to the robot to place an object in a position which infringes the etiquette; the conflict arises because the action is not allowed, and the human and the robot have to decide how to continue. In some cases, the human can decide to infringe the rule, or to repeat the action to be compliant with the schema. • The discrepancy situation, that is the participant asks the robot to pick an object already on the table. When humans and robots work together to set the table, an important aspect was to define the type of dialogue the robot engages in, including inner and external turns of phrase. The linguistic form of the sentences in the turns was distinguished for inner and outer speech in order to evaluate the impact of inner speech when it is activated in the experimental session, compared to the control session when inner speech is not activated. In this way, the impact of the robot's inner speech on the cues in the human-robot interaction can be analysed. Because of the COVID pandemic, we were forced to take some special hygienic safety precautions. We had to ensure the least possible contact between people and things in the laboratory. To allow people to interact with the robot and share the common goal of a laid table, we developed an application that recreates the table with all available cutlery, plates and so on in a virtual environment. The virtual environment for setting a table was implemented by an Android app running on a 15" tablet, designed and built by means of the MIT App Inventor platform by the Massachusetts Institute of Technology. The app was designed and developed with some specific features allowing us not to lose the sense of the interaction that we intended in the experiment. In particular, we have focused on: • The event detection strategy-this is the requirement analyzed and implemented for capturing the actions executed by the participant. From the point of view of the user, this feature let him evaluate the final location in which he places the utensils, or the request he makes to the robot using the checkbox list; • The action execution strategy-this feature allows the robot to place utensils on the tablet according to the participant's request or based on its autonomous choices. In simple terms, it reproduces the outcome of the robot decision process in a way that is easy to understand and to detect from the users. The app was integrated with typical robot routines to enable the robot to detect events on the virtual table and perform virtual actions. Figure 2 shows the app interface that includes a main canvas with the table cloth and the utensils representation, and a lateral bar containing the list of checkbox for the requests to the robot. 
Moreover, the lateral bar includes the stop button for ensuring the participants to stop at any time they desire. Fig. 3 The detail of the checkbox list in the lateral bar of the app interface. By selecting an option, the participant can make a request to the robot. Given all the participants are from Italy, the requests are in Italian. For example, some requests are, in order: "Place the plane plate on the tablecloth", "Place the fork at the right", "Place the napkin", "Place the water glass top right", and so on At the start of the experimental session, the utensils are sparse on the table, and they have to be placed on the table cloth. The table cloth was marked by black dots, for highlighting the correct final locations. In this way, the participant has just the burden to select which objects to place in which dot, reducing the degrees of freedom. The Fig. 3 shows the list of checkbox in the lateral bar with the possible options the participant can select. Begin the participants from Italy, the options are in Italian. The figure's caption contains the English translation of some options with the aim to show the kinds of requests. By selecting an option, each participant can ask the robot the same questions, enabling the same observations for all participants. All these implementation features are detailed in the Sect. 2.4. Resorting to the virtual environment did not affect the experimental results. Instead of using and moving real The communication between the robot and the app was implemented by a hybrid client-server architecture. Figure 4 shows the whole platform. The central node, represented by a computer, handles synchronous network requests. The node is hybrid because it runs as a client or a server according to the item with which it interfaces. In particular, the node will be: • The client, when it requests to the robot to do something (to speech, to execute a virtual action, to track the participant, and so on). In this case, the server is the proxy of the robot, implemented by the Aldebaran library 1 (ALProxy), which switches the client's request to the typical robot's services (Speech, Track, Leds, and so on) implemented by the same library, and enabling the robot to take the corresponding actions (speech, track the participant, turn on and off its LEDs with different colors); • The server, when it receives request by the app, that will be in turn switched to the robot's proxy. The robot-app communication involves the following use cases with corresponding kinds of requests: • The robot has to execute a virtual action: when the participant selects a command in the lateral bar and clicks the Send Command button, the robot should execute the specific action (it should to move an utensil on the tablet). In this case, the app sends to the node the request specifying the action to take, and the node forwards it to the robot. The request to the proxy will involve the aforementioned service, and the robot could dialogue with itself, or with the participant, or execute the action by answering to the node. • The participant executes an action: when the participant drags and drops an utensil on the tablet screen, and finally he/she touches up the utensil, the final position could be on a correct dot, or not. The app detects such an event and sends to the node the information of correct or incorrect final location. The node forwards the message to the robot's proxy, and it calls one of the aforementioned services. 
Specific events during the interaction trigger the situation in which the robot decides to do something (for example, it refuses to execute the participant's request, or it decides to give to the partner the suggestion to do something else). Implementing Inner Speech in the Robot In order to present the same stimuli in both experimental and control groups the structure of robot outer and inner speech was defined prior to the experiments (Table 1). Participants can set up the table either moving objects on their own or asking the robot to do it. Either way, the robot will produce a vocal response in the form of outer speech followed by the inner speech only in the experimental condition. Outer speech follows the typical language that is expected by an artificial agent, as it uses formal language and it only gives objective feedback based on the participant's performance and actions. On the contrary, inner speech traces a human-based language, since it expresses robot values, personal statements and comments on participant's performance and actions using a friendly and colloquial form. The robot's inner speech is implemented by the cognitive architecture proposed by some of the authors [6]. An outline of the architecture is shown in Fig. 5 The core of the architecture is the working memory: it decodes input signals from the environment, perceived by the sensory-motor block, and associates to them symbolic information (labels). Generally, this process is the output of typical routines, as speech-to-text routines which decode audio in sequences of words, or neural networks which extract the content of an image and associates to each recognized entity the corresponding word. The declarative memory represents the domain knowledge, that is a semantic net of concepts. Given a concept, the relationships between it and other concepts in the net allow exploring correlated concepts. Once the working memory decodes a signal, it recalls from the declarative memory the concepts corresponding to the labels, and new related concepts could emerge. These concepts are in turn decoded by the working memory, as they were perceived from the environment, and are processed as the labels. At this point the rehearsal loop starts. The recalled concepts are processed one at a time, and for each of them the described process is repeated until no further concepts emerge. Inner speech is that rehearsal loop that enables the emergence of other concepts and themes in the working memory. It is a sequence of turns, that are the concepts emerging in each iteration. The recall from the declarative memory, the production of the recalled concepts and the rehearsal of them is a single turn, that is the equivalent of a thought. During the process, the robot "thinks aloud", because it vocally reproduces the recalled concepts. to highlight the differences when the robot thinks aloud and talks to the partner, the voice's parameters (establishing speed, tone, double voice effect) are set differently for the two cases. For the same reason, the color of the robot's LEDs, that are in the eyes and in the shoulders of the robot, is rainbow when the robot thinks aloud, while it is set to the standard white when the robot talks to the partner. The robot does not have gestures during inner speech, while it uses animated speech when talking to the partner. In the proposed scenario, the inner speech is a bit differently implemented within the cognitive architecture, with the aim to enable the observations of the specific cues. 
In particular, to analyze the cues under the same conditions for each participant, the robot's inner and outer dialogue has to involve the same turns for the same events. In this way, the participants' evaluations of the interaction depend on the same variables and parameters, and the evaluations can be compared in order to abstract a general effect of inner speech on the interaction. For this reason, the functioning of the inner speech cognitive architecture was simplified with respect to the complete version described above. Table 2 shows the differences between the general architecture and the one used in the proposed experiments: for each cognitive process, the table reports how the process is implemented in the general architecture and in the one used here. The main differences concern the decoding of perception and the emergence of the semantic content of the dialogue.

In the experiments, the environment is virtual and perception concerns only the actions the participant performs on the tablet surface. Each action executed by the participant corresponds to an event that is detected by the robot (the robot perceives the event). The event may involve a correct or an incorrect action with respect to the etiquette rules. In the cognitive architecture, the event is decoded by the working memory as described above. Whereas in the original version the working memory decodes environmental signals by assigning labels to them (as output by the typical decoding routines mentioned above, such as speech-to-text for verbal commands, or classifiers for entities in images or video), the working memory now assigns to each event of the interactive session, detected by the app interface, a numerical symbol that uniquely identifies that event. For example, if the participant drags and drops the plane plate, three events are involved: (i) the touch down on the plane plate, (ii) the drop, and (iii) the touch up. Each of these events corresponds to a unique symbol. In general, there are three different symbols for each utensil, encoding the three events that occur when the participant moves that utensil in the app interface. Moreover, there is a distinct symbol for each request to the robot.

Each symbol corresponds to a sentence in the declarative memory, and that sentence becomes a turn of the dialogue. In short, the declarative memory works as a vocabulary of turns, returning the turn that corresponds to the input symbol. Only the turn corresponding to the specific event is retrieved from the declarative memory. The rehearsal loop consists of producing and listening to the current turn, after which the next turn of the dialogue is retrieved from the declarative memory as if it were a symbol. That is, when the input to the declarative memory is a symbol, the memory returns the corresponding sentence (the recall function); when the input is a previously produced turn, the memory returns the next turn (the rehearsal function). A minimal sketch of this vocabulary-like memory is given below. The declarative memory is another difference with respect to the original version of the cognitive architecture, where it was a semantic net of concepts. Here, it is a kind of vocabulary that contains the correspondences between symbols and sentences, and between sentences and the next sentences in the dialogue.
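The vocabulary-like declarative memory can be sketched as two lookup tables, one for the recall function (event symbol to first turn) and one for the rehearsal function (turn to next turn). The symbols, sentence texts, and function names below are illustrative placeholders that anticipate the knife example discussed in the next paragraph; they are not the study's actual tables.

```python
# Hypothetical event symbol: the participant asks the robot to place the knife
# in a wrong position (each utensil event and each request has its own symbol).
KNIFE_WRONG_REQUEST = 101

# Recall table: event symbol -> first turn ('I' = inner sentence, 'O' = outer sentence).
RECALL = {
    KNIFE_WRONG_REQUEST: ("I", "Bill may not know the knife goes on the right, "
                               "or he wants to test me."),
}

# Rehearsal table: a produced turn -> the next turn of the dialogue.
REHEARSAL = {
    "Bill may not know the knife goes on the right, or he wants to test me.":
        ("I", "Should I put the knife to the left of the plate? But it goes on the right!"),
    "Should I put the knife to the left of the plate? But it goes on the right!":
        ("O", "Bill, do you really want to infringe the etiquette rule for the knife?"),
}

def recall(symbol):
    """Recall function: map a perceived event symbol to the first turn."""
    return RECALL.get(symbol)

def rehearse(turn_text):
    """Rehearsal function: map the last produced turn to the next one, if any."""
    return REHEARSAL.get(turn_text)

def dialogue_for(symbol):
    """Unfold the whole scripted dialogue triggered by one event."""
    turn = recall(symbol)
    while turn is not None:
        mode, text = turn
        yield mode, text
        turn = rehearse(text)

for mode, text in dialogue_for(KNIFE_WRONG_REQUEST):
    print(mode + ": " + text)
```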
In the original version, the recall function involves concepts of the semantic net; in this version it involves the turns corresponding to symbols and to rehearsed turns. In the original version, the robot produced the labels of the concepts recalled from the declarative memory; in this version it produces the turns as they emerge from the declarative memory. In this way, the same dialogue emerges for the same event and the same sentences, reducing the parameters and variables affecting the observation, as discussed. The turns involved in the loop, recalled and retrieved from the declarative memory, may be inner or outer sentences produced according to a specific protocol, as described in the first part of this section. This protocol aims to define typical turns in the interaction that correspond to the participant's expectations. For example, the participant always expects vocal feedback from the robot, so the robot always produces one or more outer sentences. By contrast, the participant does not always attend to the inner speech, and the inner dialogue is not always produced by the robot. Obviously, the turns involved have a specific meaning that is semantically related to the event or to the previously rehearsed sentence. They are retrieved from the declarative memory in the order described above, and no disambiguation strategy was necessary.

For example, suppose the participant (named Bill) asks the robot to place the knife in a wrong location on the table, that is, to the left of the plate, whereas it should go to the right. In this case, the event is a request to the robot to infringe the etiquette. The robot perceives that event, and the working memory associates the numerical identifier with it. It recalls from the declarative memory the first sentence of the dialogue, and the loop starts, recalling the other sentences in turn (I stands for an inner sentence, O for an outer one):

I: "To make this request, Bill does not know that the knife should not be placed in that position, or he wants to test me."
I: "Should I put the knife to the left of the plate? But it goes on the right!"
O: "Bill, do you really want to infringe the etiquette rule for the knife?"

CASE 1: Bill answers yes
Bill: "Yes, I do!"
I: "I don't want to disappoint him..."
O: "Ok Bill, I will place the knife to the left of the plate, as you want."

CASE 2: Bill answers no
Bill: "No!"
O: "Great! I will place the knife in the position expected for it!"
I: "I must pay attention; the knife is dangerous!"
I: "But I'm a robot, the knife can never hurt me."
O: "Knife moved to the right of the plate!"

The participant listens to all the turns of the dialogue, which are generated with different parameters for inner and outer sentences. In this way, the participant is able to distinguish the robot's dialogue with itself from its dialogue with the participant, and can assess the potential of inner speech during the interaction. In particular, the parameters include the melody and volume of the voice, the colour of the robot's LEDs, and the double-voice effect that is activated during the production of inner sentences to create a "mentalizing" effect of the voice. Moreover, the robot uses animated speech when talking to the partner, and it remains motionless when it thinks aloud.

Results

Data were analyzed through descriptive statistics and a series of 2 × 2 factorial ANOVAs and ANCOVAs, used specifically to test the research hypotheses; an illustrative sketch of this kind of analysis is given below. Table 3 presents the descriptive statistics for all the scales.
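As an illustration of how such an analysis can be run, the sketch below uses the pingouin library on a long-format data frame with one row per participant and time point. The file name, column names, and the post-test ANCOVA shown are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
import pandas as pd
import pingouin as pg

# Long-format data (assumed columns): subject, group, time, trust, self_talk
df = pd.read_csv("trust_scores.csv")

# 2 x 2 mixed-design ANOVA: Group (between) x Time (within) on trust scores.
aov = pg.mixed_anova(data=df, dv="trust", within="time",
                     between="group", subject="subject")
print(aov)

# One possible way to include the self-talk covariate: an ANCOVA on post-test scores.
post = df[df["time"] == "post"]
anc = pg.ancova(data=post, dv="trust", between="group", covar="self_talk")
print(anc)
```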
Skewness and kurtosis values range below ±1, indicating a nearly normal distribution. Table 4 presents the descriptive statistics of all the variables measured at pre-test and post-test for the experimental and control groups, and Table 5 reports the results of the 2 × 2 factorial ANOVAs and ANCOVAs with repeated measures performed on the scores of the Trust and Godspeed questionnaires (anthropomorphism, animacy, likeability, perceived intelligence, perceived safety) collected during the pre-test and post-test phases in both groups. Both factors, Group and Time, had two levels (Group: experimental and control; Time: pre-test, post-test). In the ANCOVAs, individuals' scores on the self-talk questionnaire were used as a covariate in order to examine to what extent the participants' everyday use of self-talk influenced the effect of robot inner speech on trust. These results indicate that all participants in both groups improved their trust in the robot from the pre-test to the post-test session, but that there is no difference between the experimental and control groups in the size of this effect. The ANCOVA also revealed that participants' rate of everyday self-talk has no influence on the effect of robot inner speech on trust.

Discussion

This research aimed to investigate whether the interaction with a robot equipped with an inner speech system during the execution of a cooperative task improves human trust levels and the perception of the robot's anthropomorphic features. In addition, the possible influence of humans' everyday use of self-talk on the perception of the robot's inner speech was investigated. Concerning trust, the results showed that all participants' trust scores significantly improved from pre-test to post-test, demonstrating that the interaction with the robot produced an increase in their trust levels. However, no Group × Time differences were found, indicating that the use of inner speech did not specifically influence the level of trust toward the robot in participants in the experimental group. Since the participants had never met a social robot face to face before, it is possible to attribute this result to a sort of "novelty effect": the simple interaction with a human-like robot increased trust in participants who had never encountered this kind of robot before. This is consistent with studies [45,46] demonstrating that trust is also shaped by history-based interaction: interacting with the robot changes the way humans perceive and trust it, and this is particularly true in HRI with social robots that, like Pepper, look and behave like humans [30][31][32][33][34][35][36][37][38]. By contrast, the results on users' perception of the robot revealed that only participants in the experimental group, who interacted with the robot equipped with inner speech, improved their perception of the robot's animacy and perceived intelligence from pre-test to post-test, while there were no pre-/post-test differences in the control group. Even in this case, results were not influenced by individuals' use of self-talk. These results confirmed our hypothesis and support studies showing that the robot Pepper, when exhibiting human-like behaviors [30,35,45], is perceived as livelier and more intelligent than when it does not show such behaviors. In our experiment, through the overt inner speech system, Pepper shared its thoughts and emotions with the participants, often addressing ironic and sarcastic comments to users. This particular interaction evidently led users to perceive Pepper as more animated and intelligent.
It is also possible that the robot's ability to openly speak its mind made it easier for participants to understand its behaviors by forming a sort of mental representation of the robot. We found no effect of individuals' use of inner speech on the examined variables, indicating that participants' personal use of inner speech in everyday situations did not influence the interaction with a robot equipped with an inner speech system.

Conclusions and Future Works

In conclusion, our study yielded two main findings. First, the findings support the idea that, in social HRI, the more human-like functioning a robot shows, the more positively humans perceive it. A robot equipped with an inner speech system, which expresses its "thoughts" and explains its behaviors through overt self-talk, is perceived as animated and intelligent. Second, interaction with social robots, independently of the use of inner speech systems, increased trust in all participants in the experiment. Thus, in this case, inner speech does not play a specific role in improving users' trust. This result may be due to different reasons, as follows: (1) involvement of novice participants: as already noted, all participants were interacting with Pepper for the first time, and the general novelty effect of this first experience could have overridden and reduced the perception of the slight differences between the inner speech/no inner speech conditions; (2) type of interaction: the proposed task did not represent an at-risk situation for participants. In the future, a new task integrating a competitive environment together with a cooperative one could probably elicit more trust towards robots. On the other hand, to the best of our knowledge, this is the first study attempting to investigate whether humans trust more a robot that shows, albeit rudimentary, inner speech. Future studies may allow further investigation of the effects of this new robot feature.

Data Availability All data generated and/or analyzed during this study are available from the corresponding author on reasonable request.

Conflict of interest The authors declare no competing interests.

Ethical Approval The study was approved by the Office for Human Research Protections (OHRP) with the Federalwide Assurance for the Protection of Human Subjects with IRB number IRB00008110, and by the Ethics Committee of the University of Palermo.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Alessandro Geraci is a Ph.D. student in Health Promotion and Cognitive Sciences at the Department of Psychology, Educational Science and Human Movement, University of Palermo. His research focuses on emotional intelligence, school psychology, and human-robot interaction.
Antonella D'Amico, Ph.D., is Associate Professor in Developmental and Educational Psychology. Her research focuses on learning and emotions, and she has authored many publications in the areas of emotional intelligence, learning disabilities, and new technologies for learning. Valeria Seidita is Assistant Professor at the University of Palermo; she received her Ph.D. in Computer Science in 2008. Her main interest is in software engineering applied to robotics. Antonio Chella is a Professor of Robotics at the University of Palermo, Italy, and the Director of the Robotics Lab at the Department of Engineering of the same University. He is a former Director of the Department of Computer Engineering and of the Interdepartmental Center for Knowledge Engineering. The primary research expertise of Prof. Chella concerns Machine Consciousness, Artificial Intelligence and Cognitive Robotics. He is a fellow of the Italian National Academy of Science, Humanities, and Arts. He received the James S. Albus Medal award of the Biologically Inspired Cognitive Architectures (BICA) Society for the outstanding contribution to the science of BICA and for support and scientific achievement of the BICA Society. He is a founder and Editor in Chief of the Journal of Artificial Intelligence and Consciousness and of the Book Series on Machine Consciousness by World Scientific.
Red Collar Crime Traditional viewpoints held by academic and non-academic professional groups of the white-collar crime offender profile(s) are that they are non-violent. Yet research has begun to unveil a sub-group of white-collar offenders who are violent, referred to as red-collar criminals, in that their motive is to prevent the detection and or disclosure of their fraud schemes through violence. This article is the first to discuss the origin of the red-collar crime concept developed by this author coupled with debunking white-collar offender profile misperceptions that have persisted for decades by offering current research on the anti-social qualities displayed by this offender group that predates their violence. Secondly, the article applies behavioral risk factors, such as narcissism and psychopathy, which contributes to our understanding of why some white-collar offenders may resort to violence while other white-collar offenders do not. Case analysis also draws upon gender distinctions, workplace violence and homicide methods used to illustrate that red-collar criminals are not an anomaly to ignore simply because they may not reflect the street-level homicides typically observed by society, investigated by law enforcement and studied by academia. In 2005, the author of this article participated in a homicide trial as counsel to the accused who was charged with murdering his business partner (Hansen, 2004).During the homicide investigation, a motive offered was that the murder resulted from fraud detection by the deceased partner upon learning that her business partner committed financial fraud on their business.The accused, in order to avoid detection and or disclosure of the fraud silenced the partner by committing murder through blunt trauma force to the head with a hammer.Subsequent to the trial and conviction of the accused, the author explored, via internet searches, whether this author's case facts reflected a statistical anomaly or whether violent white-collar offenders constituted an offender group researchers inadvertently overlooked.Through this method the author located homicide cases with similar motives within legal documents, such as court decisions, and homicide trials disclosed within various periodicals, where a white-collar offender was found guilty of homicide or those homicides that were attempted, but failed. In addition, the author discovered that extant academic research on violent white-collar criminals, their potential motive(s) and behavioral profile was virtually non-existent with the majority of commentary indicating that white-collar offenders are non-violent.Unfortunately, a consequence of such a void in scholarship is that offender profile misperceptions prevail ultimately reinforced by academia, law enforcement and the criminal justice system.Moreover, because white-collar crime is classified as non-violent, the offender is assumed to be non-violent by nature; assumptions repeated with enough regularity that they are accepted as fact.A review of the researched cases revealed that violent white-collar criminals were not an anomaly and as illustrated within this article, they harbor behavioral risk factors that facilitate their use of violence as a solution to a perceived problem no differently than non-white-collar offenders that resort to violence, albeit for different motives. 
From this author's involvement with the above mentioned homicide case together with a review of the additional homicide cases collected, this author coined the term red-collar crime to describe white-collar criminals who turn violent.Red-collar criminals, however, should not be confused with white-collar criminals who have a mixed criminal history of white-collar and non-white-collar crime that might include violent criminal histories.It is the motive behind the violence that determines the offender sub-group as a red-collar crime.Red-collar criminals engage in violence to silence those, as stated above, who are in a position to detect and/or disclose their fraud schemes: hence the name fraud detection homicide describing the motive to classify the murder.The instrumental, planned nature of these disclosed red-collar homicides together with those attempted were derived from case facts in written judicial opinions, investigative and prosecutorial disclosures and review of case facts that support conclusions as to the underlying motive. In reviewing the various classifications of crime and in particular homicide, the author found that different types of homicides, such as sexual homicides, parricides, contract killings, domestic homicides to name a few, are counted, classified and studied for behavioral patterns and offender characteristics facilitating statistical analysis (Douglas et al., 1992).To date for example, fraud detection homicides are not tracked and studied in the same way, limiting the amount of research to support this paper.Thus there are no longitudinal studies as to how many red-collar crimes have been committed limiting the amount of knowledge available as to not only descriptive statistics such as age, gender, race, types of frauds that preceded the homicide, and victimology, but also as to their behavioral make-up.It is the author's belief, however, that it is more advantageous to begin the conversation by offering a template to refine and clarify their profile rather than ignore this lethal group simply because information may be incomplete at the moment. 
Section two of this article covers several important areas including white-collar offender profile misperceptions and the consequences of not understanding the anti-social qualities that are antecedents to their violence.However to understand red-collar criminals, it is necessary to examine the underlying criminal and behavioral traits of white-collar criminals that may facilitate the decision to resort to violence.Unfortunately, despite the enormity of economic damage committed by this offender group (Freidrichs, 2007), be it financial, emotional or physical (Pleyte, 2003), the characteristics of white-collar offending and their behavioral characteristics have been understudied to society's detriment even though this offense, at times, is perceived as more serious than traditional street-level crimes (Harel, 2015).There exists a bias, whether conscious or not, that there is something inherently different about the white-collar offender profile that prompts a different analysis than non-white-collar offenders instead of examining the similarities in their criminal attitudes (Brody, Melendy, & Perri, 2012).Furthermore, white-collar crime scholarship historically and behavioral traits associated with such offenders remains sparse (Simpson, 2013); however, this does not mean they may not harbor traits that facilitate criminal decision making (Alalehto, 2003).The fact that these offenders aggress against others in a manner that is not reflective of traditional notions of criminality or in ways that are easily recognizable by the general public does not mean that they are not capable of inflicting harm to others.Research has begun to reveal that white-collar offenders manifest their aggression in a different form against others and/or organizations to satisfy a motive no differently than non-white-collar offenders creating victims nonetheless (Perri, Brody, & Paperny, 2014).Yet, scholars have virtually ignored researching violent white-collar offenders (Brody & Kiehl, 2010).If behavioral traits are important risk factors for common forms of crime, their potential application to white-collar crime are a logical extension and an important issue to explore in the context of these offenders (Listwan, Piquero, & Van Voorhis, 2010). Sections three through seven examines the transition from offenders that engage in white-collar crime to those that resort to violence by exploring the behavioral traits that may serve as violence risk factors coupled with reviewing red-collar cases displaying various behavioral traits.Although not a comprehensive explanation for white-collar crime, the author agrees with white-collar scholars Benson and Simpson in that if individuals harbor identifiable behavioral traits that can be considered risk factors that increase the probability of engaging in anti-social behaviors "we should not be particularly surprised" when they commit white-collar crime (2009, p. 
51) or when they resort to violence (Perri, 2011c).Though only committed by a subgroup of white-collar criminals, a tendency towards violence is not surprising considering many white-collar criminals harbor the same deviant personality traits as conventional street-level criminals.Consequently, although fraud and murder are two distinct crimes, the behavioral traits of the offenders may be the driving force behind both crimes (Ablow, 2008).For example, research confirms that there is a relationship between the behavioral traits of narcissism and psychopathy, creating a negative synergy when they combine with criminal thinking patterns increasing the risk of both non-white-collar crime, white-collar crime and eventually violence (Perri, 2013). The article also examines female participation in white-collar crime that eventually turns to lethal violence by examining gender distinctions and similarities.In addition, this author illustrates through case examples that red-collar crime permeates not only the workplace arena, but also unexpected venues such as families where one family member is exploiting other family members through fraud schemes that eventually ends with members being murdered or with attempts on their lives being made. This article concludes on a cautious note.At times relying on erroneous preconceived notions of what the face of criminality should resemble may expose one to risks that are real and lethal, but ignored.It is the author's hope that scholars from diverse disciplines will devote their talents to researching this understudied offender group so that offender profile misperceptions can be neutralized, behavioral risk factors can be refined, violence investigations can examine homicide methods and motives that are apparent but not necessarily considered coupled with understanding this crime's victimology. White-Collar Offender Profile Misperceptions Although Edwin Sutherland popularized the use of the term white-collar crime, for purposes of this paper, the socio-economic status of the offender important to Sutherland's understanding of white-collar crime is not relevant for purposes of this paper.The socio-economic status of an offender may be important for profiling and statistical purposes, however, it is not relevant in classifying red-collar crimes because the offender's socio-economic status does not alter the definition of what constitutes a homicide.Homicides are committed by a wide spectrum of individuals on the socio-economic continuum from those that are wealthy to those that are poor.Thus, white-collar crime is considered a broad category reflecting high-level corporate misconduct that reflected Sutherland's perception of its offenders (Hasnas, 2003), occupational fraud schemes by middle-class citizens (Weisburd et al., 1991), as well as predatory offenders (Bucy et al., 2008). White-collar criminals are often thought as unlikely to be processed in the criminal justice system following an initial brush with the law in addition to being "neither violent [nor] anti-social" (Mauer, 1974, p. 
152).A common misperception is that white-collar crimes represent out of character offenses because white-collar criminals, who are generally educated, employed, and considered law-abiding, exhibit ethical behavior in other facets of their lives and are therefore less apt to engage in crime despite the magnitude of their harm (Brody, Melendy, & Perri, 2012).This misperception persisted for many decades because scholars in the various social and behavioral sciences failed to apply the criminal thinking traits to white-collar offenders as contrasted to non-white-collar offenders.Criminological scholarship focused more on conventional crimes such as violence, narcotics and property related offenses (Lilly, Cullen, & Ball, 2011) as well as examining social processes within an organization that might serve as risk factors for fraud to flourish (Sutherland, 1949) while rejecting individual personality traits as potential fraud offender risk factors (Perri, Lichtenwald, & Mieczkowska, 2014).Both non-white-collar and white-collar offenders display consistent criminal thought patterns and attitudes about others and/or situations to exploit and these thought patterns apply to considering violence as a solution to a perceived problem regardless of one's socio-economic standing (Samenow, 1984). Understanding criminological thought patterns of white-collar offenders requires debunking myths surrounding the white-collar offender profile in that these offenders do not represent a homogenous offender group, which is often at the root of the misperception.The degrees of deviancy and criminal histories they represent is no different than non-white-collar offenders (Walters & Geyer, 2004) together with the fact that criminal thinking patterns coupled with behavioral traits attributable to white-collar offenders can no longer be considered anomalies to ignore (Ragatz & Fremouw, 2010).Not understanding the criminal thinking that supports offender attitudes exacerbates erroneous character assumptions individuals rely upon to form opinions about this offender group that may expose them to financial exploitation risks to being targets of violence by the same individuals they believed would not have the capacity to resort to violence.This author agrees with the comments by white-collar crime scholars Schlegel and Weisburd who state that "attention to white-collar crime will best be served in the future by studying the similarities and differences between white-collar crimes and those referred to as common crimes" (1992, p. 4). Offender Criminal Thinking Patterns and Attitudes Forensic criminal psychologist Dr. 
Stanton Samenow cautions against the premise that a crime may be out of character for an offender because of no history of prior offenses, has an excellent employment history, and appears to be an upstanding member of the community (Samenow, 2010a) and this premise holds true for white-collar offenders just as it does for non-white-collar offenders (Perri, 2013).Some academicians argue that offender character evidence is irrelevant as they relate to criminal behavior (Heath, 2008), however the issue is whether character and attitudes are revealed when a decision is made to engage in anti-social activity.During Samenow's over 40 years of research, evaluation, and treatment of criminals, he has yet to find an individual who did something not within his or her character (Samenow, 2010b).He further states that often there is a lack of information about particular aspects of a person's behavior, thought processes and thinking patterns pre-dating their offense that have long been present, yet expressing themselves at a moment of opportunity (Samenow, 2010a).The fact than an offender chooses to engage in an act that was not within his or her ordinary lifestyle choice, or that the offender would have preferred an alternative path rather than having to commit a crime to satisfy a motive does not mean it was not within one's character or that anti-social qualities were not revealed (Perri et al., 2014). Offenders engage in a cost-benefit analysis or some type of risk assessment and decide whether it is more advantageous to move forward with a crime or not (Shover & Wright, 2001).Criminal, anti-social attitudes permeate all socio-economic levels and Edwin Sutherland was one of the first to begin to apply criminal thinking attributes to white-collar offenders (Bodeszek & Hyland, 2012).Anti-social/criminal thinking has been conceptualized as distorted or concentrated thought patterns involving attitudes and values that support a criminal lifestyle by rationalizing and justifying law-breaking behavior (Taxman, Rhodes, & Dumenci, 2011): in other words, thinking that says it is alright to violate others and/or the property of others.Criminal traits displayed include but are not limited to rationalizations, exploitations, entitlement, power orientation, lack of empathy, and a disregard for rules, norms, and social boundaries (Walters, 1995).These anti-social and criminal thinking traits apply to white and red-collar offenders (Walters, 2002;Walters & Geyer, 2004;Perri & Lichtenwald, 2007). Adults convicted of white-collar crimes are often repeat offenders no differently than non-white-collar offenders (Weisburd, Warring, & Chaye, 2001), countering the belief that white-collar offenders "do not have a commitment to crime as a way of life" because the loss of "social status, respectability, money, a job, and a comfortable home and family" deters them from a criminal lifestyle unlike street-level offenders who have no concern about how criminality affects their future or status (Shover & Wright, 2001, p. 369).However, studies have shown that "[e]ven though fraud and larceny offenders have lower recidivism rates" for first-time offenders, for offenders with a criminal history, "the recidivism rates of these offenses exceeds 50 percent", which is comparable with the recidivism rates for robbery and firearm offenders (Weissmann & Block, 2007, p. 
290).Walters and Geyer (2004) found that "white collar offenders do not form a homogenous group with respect to their pattern of offending, level of deviance, attitudes toward crime, or social identity" (p.280), coupled with histories of violence, property offenses and substance abuse that are traditionally thought to be attributed mainly to uneducated, street-level offenders (Harel, 2015).There are white-collar criminals whose criminal deviancy and criminal thinking traits are indistinguishable from non-white-collar criminals, especially those that are chronic re-offenders (Walters & Geyer, 2004).Moreover, the complexion of white-collar criminals starts to change when there is evidence of a continuum of fraudulent activities, and they are considered pathological offenders or "predators" (Dorminey et al., 2010).For example, chronic white-collar offender Barry Webne stated, "if you put me in a position of trust again, chances are that I am going to violate that trust" (Patterson, 2011, para. 8). Consequences of Offender Profile Misperceptions Unsupported assumptions with respect to the underlying character aspects of fraud offenders and ultimately red-collar criminals invite interpretations about their offender profiles that are not grounded in thoughtful analysis, but reflective of personal biases of what we wish these offenders represent or do not represent.Thus it is not uncommon to hear that offenders are just "ordinary people who made a mistake" (Goodman, 2010, para. 8), "really nice, everyday people… [T]hey could be anyone walking down the street" (Weigel, 2013, para. 4).Some academicians have tried to label some fraud offenders as "accidental offenders" (Dorminey et al., 2010) which is rather contradictory given that fraud requires an intentional, knowing or reckless state of mind (Perri & Mieczkowska, 2015).Perhaps a more accurate description may be the "unexpected offender" reflecting offender traits that one would normally not equate with criminality because of the appearance of respectability (Perri et al., 2014). Consider how white-collar criminals are perceived to be non-violent by academicians: "There are some notable differences involved [with] white-collar criminals compared with…criminals on the lower rungs of the offense ladder.For one thing, white-collar criminals pose no physical danger…Violence is not their thing" (Hobbs & Wright, 2006, p. 79).From the criminal justice arena, one United States federal judge stated, "White-collar criminals are not people who are threatening the lives of others; they are not violent people" (Wheeler, Mann, & Sarat, 1988, p. 63).Yet, consider the actions of former president of the AFG Financial Group, Alan Hand, who orchestrated a $100 million mortgage fraud scheme who personally wanted to kill the witness that had disclosed the fraud scheme to the authorities but could not because he was incarcerated.He attempted to hire contract killers stating, "I wish I was there to watch him suffer" (Rudolf, 2012, para. 3) and "kill the man's wife and children if they were home" (AP, 2012, para.9). 
Part of the reason why there may be a disconnect between what traits white-collar offenders harbor and how they are perceived is because professionals may not have had the education and/or training to understand what comprises the criminal personality and behavioral traits harbored by such offenders.In addition, white-collar crime was not always perceived as a serious offense, thus further perpetuating offender profile misperceptions as somehow being less criminally inclined when contrasted to non-white-collar offenders.This is due to the fact that public perceptions of what crime entails disproportionately fueled the attention of researchers and criminal justice agencies towards violent crimes while ignoring white-collar offender scholarship for years and its victims in criminological surveys (Simpson, 2013;McGurrin et al., 2013). Sutherland warned of being seduced by offender appearances and attributing a value system to the white-collar offender based on some of the descriptions mentioned above (Lilly et al., 2011) especially in light of the fact that they can harbor predatory traits no differently than non-white-collar offenders given their fraud schemes can last for months, even years (Freiberg, 2000).For example, Sutherland (1949) illustrates this bias of the criminal justice system that inures character traits to these offenders as being, refined, cultured with excellent reputations in their communities (p.8) and this tradition still applies today (Moore, Gilman, & Kethledge, 2013).Yet, Brody and colleagues (2012) challenge unsupported assumptions of offender character traits perpetuated by academic, practitioner circles and offenders themselves.Is there a psychological explanation for the variance between what fraud offenders may actually represent and how they are perceived?One explanation advanced posits that professionals engage in projection bias to fill the void that is created when a framework does not exist to understand this offender profile because of a lack of research to neutralize such biases about offender characteristics.Projection bias is a psychological defense mechanism to reduce personal anxiety where an individual transfers his or her own attributes, values, thoughts, feelings, and emotions, usually to other people, given a set of circumstances. It is the inclination to assume that others share similar values and beliefs with one's own when there may not be a competing paradigm to offer more accurate information to neutralize these erroneous assumptions.Thus it may be surprising to learn that individuals, who are similar to us in terms of being educated, considered being trustworthy employees or employers, engage in white-collar crime.Furthermore the more similar in various characteristics we perceive others to be when compared to ourselves, such as religion and educational levels as examples, the more we tend to believe that a person harbors a similar value system, are perceived to be more trustworthy even though there is no evidence that similarity is a guarantee of parallel values because we can identify with them (Burgoon, Dunbar, & Segrin, 2002).Part of the challenge involves debunking the belief that because white-collar offenders do not harbor the same optics of criminality associated with the image of street-level criminals, then one is not truly criminal at heart (Kanazawa, 2011). 
In essence, one has to look like and display criminal lifestyle characteristics to truly be a criminal that parallels Weisburd and colleagues (1991) in that white-collar and street offenders were drawn from distinctly different sectors of the American population.Ironically, even white-collar offenders distance themselves from stereotypes in order to appear more benign to authorities and the general public.For example, one offender comparing himself to street-level offenders stated, "I am not at all similar.I don't look the same, talk the same, act the same" (Hare, 1999, p. 122).A securities fraud offender who spent time in prison stated, "I felt different from most of the men around me.My background was too different.They had tattoos, meth teeth, and they could hardly string together two grammatically correct sentences.We think our education and background separates us from the other criminals around us" (Perri, Brody, & Paperny, 2014, p. 39). Red-Collar Offender Behavioral Risk Factors Edwin Sutherland, albeit erroneously, rejected the notion that individual behavioral proclivities and personalities contributed to the understanding of criminality, focusing instead on group dynamics to explain why individuals succumbed to crime and white-collar crime (Perri, Lichtenwald, & Mieczkowska, 2014).Sutherland's position is ironic in light of the fact that he believed white-collar offenders "are by far the most dangerous to society of any type of criminals from the point of view of effects on private property and social institutions" (Sutherland, 1934, p. 32).This is due in part to the antagonistic attitude that sociologist and criminologists, such as Sutherland, displayed toward other disciplines, such as psychiatry and psychology that attempt to understand how personality correlates with criminal propensities.The impact of Sutherland's position influenced other scholars in rejecting personality traits as a factor to the detriment of developing and refining white-collar criminal profile(s) that would actually assist in understanding harmful risk factors they pose toward society. As a result, decades were lost in not developing white-collar crime behavioral profiles that may have assisted both academia and non-academic professional groups.For example, scholars Shover and Grabosky state, "[W]e are not interested in the reasons why some individuals and organizations commit white-collar crime more often than others" (2010, p. 430).Consider white-collar crime scholar James Coleman who states "[it] is generally agreed that personal pathology plays no significant role in the genesis of white-collar crime" (Coleman, 2002, p. 184); in fact, this conclusion has been so widely accepted that only a few empirical studies on the issue have actually been done" (Coleman, 2002, p. 185).In reference to white-collar offenders, Heath (2008) states, "Why do psychologically normal individuals, who share the conventional value-consensus of the society in which they live, sometimes take advantage of opportunities to engage in criminal conduct (p.602)?" Further, consider Alalheto (2015) stating "[T]he majority of white collar offenders do not suffer from psychological disorders" (p.32). 
Yet, the modern approach to studying white-collar crime incorporates the offender's behavioral traits as a risk factor in the decision to commit crime, even though there are legitimate debates on how important behavioral traits may be and which specific traits are common among offenders (Ramamoorti, 2008). Myths surrounding this offender behavioral profile are being dismantled, and behavioral research is beginning to shed light on this offender group in a way that is more accurate and not based on conjecture. What is becoming increasingly clear is that white-collar offenders manifest their aggression in different forms in order to satisfy their motive, which at times involves using violence as a solution to a problem. Contemporary research suggests that behavioral and personality traits should not be ignored as anomalies because they may at times be symptomatic of potential white-collar criminal behavior, especially when criminal thinking traits are present (Ragatz, Fremouw, & Baker, 2012). What is problematic when characteristics of white-collar offenders are ignored is that important factors in their offending patterns may be overlooked. According to forensic psychologist Dr. Robert Hare, white-collar criminals' fraudulent activities may reflect a virulent mix of criminal thinking and behavioral traits, including a sense of entitlement, a propensity to deceive, cheat, and manipulate, and a lack of empathy and remorse, viewing others merely as resources to be exploited, callously and without regret (Carozza, 2008, p. 38). Research confirms that there is a relationship between anti-social dispositions and evidence of narcissism and/or psychopathy, creating a negative synergy when they combine with criminal thinking patterns and increasing the risk of white-collar criminal behavior, including among those offenders who turn violent (Perri & Brody, 2012). Moreover, what do their behavioral traits tell us about the type of violence they prefer to engage in? Do red-collar offenders engage in violence in a more reactive manner, or do they take their time, in an instrumental manner, to think through how to execute their homicidal plans? The author cautions that harboring the behavioral traits discussed below should not be interpreted as the cause of criminal behavior; rather, their correlation is considered a red-collar crime offender risk factor.

Type of Violence: Instrumental or Reactive

In order to classify the type of violence displayed in red-collar crime, the author used the template offered by Woodworth and Porter (2002). For a homicide to be rated as instrumental, the offense had to be goal-oriented in nature with no evidence of an immediate emotional or situational provocation. Instrumental violence, in essence, is a means to an end; it is violence committed to further some other motive (Hart & Dempster, 1997). If there was "a cooling off period or a discernible gap in time between the provocation/frustration and the homicide", the homicide was classified as instrumental (Woodworth & Porter, 2002, p. 439). In contrast, for reactive violence to be present there must be strong evidence of a high level of spontaneity and a lack of planning surrounding the commission of the offense. Thus there is a rapid affective reaction prior to the act, with no apparent goal other than to harm the victim immediately following a provocation and/or conflict.
Reactive violence put another way, is the end in itself (Hart & Dempster, 1997).Reactive violence is more illustrative between family members and acquaintances while instrumental violence is more illustrative of violence between strangers.Yet what is factually interesting in red-collar crime cases is that the exact opposite holds true for the majority of cases; the offender knew the victims, reflective of reactive violence scenarios but instead very instrumental in nature.The author reviewed the available facts of all the cases listed in Table 1 which revealed that the offender did not display reactive violence, but rather planned the homicide upon believing that the fraud scheme had been detected and or disclosed to the authorities.Case review revealed that there was a discernible time gap between the defendant's belief that his fraud scheme was detected and or disclosed and the execution of the homicide.If the facts were ambiguous as to the type of violence displayed, the case was coded as "unknown".Given article space limitation, only a few red-collar crime factual scenarios can be reviewed within this article. The Robert Petrick Case Janine Sutphen became aware of her husband, Robert Petrick's schemes when she detected fraudulent transactions affecting her bank account.According to the prosecution, Petrick killed his wife after Sutphen detected his fraud schemes (Petrick, 2007;Lewis, 2005b).Janine Sutphen's body was found near her home, wrapped in a tarp, sleeping bag, blankets, and chains, floating in a nearby lake.The prosecution offered evidence of a murder plan recovered from the defendant's computer searches several weeks prior to the murder.The defendant had searched under 22 ways to kill a man with your bare hands, "neck", "snap", and "break" (Jones, 2005, para.2), together with searches regarding the water level in the lake where Sutphen's body was found.Petrick offered no explanation for searches on the topic of "body decomposition, rigor mortis", or how the human body deteriorates (Lewis, 2005c, para. 7).During the period of time that his wife was allegedly missing, one witness recalled that when asked about his wife, Petrick appeared upset and indicated that she died of cancer (Lewis, 2005a).Another female witness testified that she and Robert had been going through pre-marital counseling and had set a wedding date-even before he killed Janine (Lewis, 2005b). Narcissism, White-Collar Crime and Violence Narcissism has been found to be a fraud offender risk factor (Blickle, Schelgel, & Fassbender, 2006;Bucy et al., 2008), and also a risk factor for white-collar offenders to commit murder (Perri, 2011c).Some of the underlying traits of narcissism include a pervasive pattern of grandiosity, a sense of entitlement to resources regardless of the imposition it places on others, exploitative, lacking in empathy, at times a sense of vulnerability, a belief that one is superior, unique, and "chosen", together with inflated views of their own accomplishments and abilities. Consider that there may be adaptive qualities to narcissism such as ambition and motivations to succeed (Pincus & Lukowitsky, 2009).Those deemed to be pathological exhibit defective self-regulation of their emotional states displaying maladaptive strategies to cope with perceived threats to their self-image (Pincus & Lukowitsky, 2009). 
In order to restore homeostasis, they may exhibit interpersonal aggression to right when they believe they were wronged; hence resorting to revenge as a strategy to restore their self-image, grandiosity, sense of entitlement and superiority (Brown, 2004). A narcissistic sense of entitlement can drive an individual to manipulate circumstances to satisfy their motives, whether the result is fraud, murder, or both (Ablow, 2008).Fraud offenders exhibiting narcissistic traits of extreme entitlement may not be deterred from committing fraud because they may not "fear being caught or what punishments may come their way" (Bucy et al., 2008, p. 417).In addition, their narcissism may not allow them to fully appreciate how their actions play themselves out because their sense of entitlement requires a need for gratification, and the use of deception to achieve fraud does not create a moral dilemma for them to resolve (Barnard, 2008).Narcissism is best understood as a risk factor that has been empirically linked to violent aggression (Bushman & Baumeister, 1998) especially when the offender's inflated view of self is wounded by criticism or interference with a plan constituting a major threat because it signifies that one is not the omnipotent person they perceived themselves to be (Baumeister, 2001).Narcissists often target those they perceive to be a threat to their sense of grandiosity and egocentricity (Baumeister, Bushman, & Campbell, 2000).Moreover, recent scholarship has identified that narcissists who displayed traits of extreme entitlement and exploitation of others to achieve their goals were more likely to resort to extreme forms of aggression and deleterious violence against innocent people even in the absence of provocation representing some of the most maladaptive narcissistic traits (Reidy, Zeichner, Foster, & Martinez, 2008b). In addition, Russ and colleagues (2008) found that malignant narcissists who display a history of interpersonal conflicts, criminal behavior, abuse, intense anger, blame externalization, entitlement, a lack of empathy, disdain for others, and arrogance are prone to violence.Further, narcissism is linked to revenge, increasing the risk of retaliation (Brown, 2004) by resorting to brutal forms of violence against those they perceive as interfering with their schemes (Reidy, Zeichner, & Martinez, 2008a).Even in the absence of provocation or criticisms, narcissists aggress against innocent individuals who might be viewed as potential threats foregoing an escalation in aggression, such as verbal aggression, and resort to intense aggressive acts as their initial method of resolving ego threats and satisfying their sense of entitlement (Martinez et al., 2008).Their grandiose, omnipotent nature produces an overconfident perception of their ability to avoid detection which is referred to as narcissistic immunity.For example a female colleague from the above mentioned case, Robert Petrick, testified that while visiting him in jail, he became aware that the police were searching a small lake near his home for Sutphen's body.She says Petrick stated with "great disdain and arrogance, 'they'll never find her there'" (Lewis, 2005c, para. 13). 
The Eric Hanson Case

Eric Hanson was found guilty of a quadruple homicide, murdering his mother, father, sister, and brother-in-law (Hanson, 2010). According to the prosecution, the defendant was responsible for the theft of thousands of dollars from his parents through forgery, mail fraud, credit card fraud, and identity theft schemes (Golz, 2008a), continuing to use their credit cards even after their murders (Perri et al., 2008c). The prosecutor stated, "Eric Hanson in a cold, calculated and premeditated manner committed the execution-style murders out of greed and fear of having his fraudulent scheme discovered" (Rozek, 2005, p. 3). The deceased sister, Kate Hanson, confronted her brother Eric about the fraud, and Eric threatened to kill Kate if she disclosed the fraud schemes (Gregory, 2008). Eric denied the threat; however, in a recovered letter he admitted to it (Golz, 2008b). Several weeks passed between the fraud detection and the murders. Interestingly, Eric's mother attempted to find a way to pay off his fraudulently obtained money by taking out loans in the tens of thousands of dollars (Gutowski, 2008a). Dr. Marva Dawkins, a clinical psychologist, evaluated Eric and concluded that he exhibited narcissistic personality disorder coupled with anti-social features, with no evidence of psychotic disorders or abuse (Gutowski, 2008b; Barnum, 2008a), and with an inability to bond or feel empathy for others (Gutowski, 2008c). Some of Eric's anti-social features included a history of domestic violence (Gregory & Barnum, 2008b), home invasion (Gutowski, 2008c), and watching videos of animals being tortured (Perri et al., 2008c). Eric exhibited a parasitic lifestyle, pathological lying, and juvenile delinquency that caused chronic family turmoil (Gutowski, 2008d), coupled with impulsive, irresponsible financial habits (Barnum, 2008b). Interestingly, a psychologist who evaluated Eric as an adolescent indicated that he "wasn't a threat to commit more violence" (Gregory & Barnum, 2008a, para. 6).

Psychopathy, White-Collar Crime and Violence

The concept of psychopathy refers to a specific cluster of traits and behaviors used to describe an individual in terms of pervasive, dominating personality traits (Hare, 1999); however, there are debates about which personality traits should reflect the construct and how psychopathy should be measured (Skeem et al., 2011). Signature traits of psychopaths are their self-centeredness, pathological lying, lack of empathy, lack of conscience, exploitativeness, parasitic lifestyle, impulsivity, narcissism, thrill-seeking activities, irresponsibility, display of antisocial traits, and the pursuit of their desires above all others in a way that disregards the rights or feelings of others (Cleckley, 1941, 1976; Hare, 1991, 1999). Dr. Hare further states, "[I]t is possible to have people who are so emotionally disconnected that they can function as if other people are objects to be manipulated and destroyed without any concern" (Chivers, 2014, para.
3).Lacking in feelings for others, they take what they want doing as they please, violating social norms and expectations without the slightest sense of guilt or regret (Hare, 1999;Burkley, 2010).Mental illness and psychopathy can co-occur (Murphy & Vess, 2003); they are not disoriented or out of touch with reality, nor do they experience the delusions or hallucinations, that characterizes most other mental disorders (Meloy, 2002).Moreover most psychopaths are capable of appreciating the criminality of their actions and can be rather methodical and strategic regarding their crimes even though they may display an impulsive lifestyle (Hanlon, 2010). Psychopathy is not synonymous with criminality, however those that have psychopathic traits are more at risk for committing crime and acting out violently (Herve & Yuille, 2007) coupled with a diminished capacity to learn from self-destructive behaviors (Cleckley, 1941).This may be due to Gacono and Meloy (2012) observation that psychopaths "remain prisoners of the present, unable to project into the future and foresee the consequences of their actions, and lacking a capacity to reflect upon the past in any meaningful way" (p.49).Furthermore, not all criminal psychopaths are violent and incarcerated criminals; some are unethical and predatory business associates (Walsh & Hemmens, 2008).Psychopathic behavior is a social problem that cannot be ignored especially its link to white-collar offenders (Bromberg, 1948). A question that often arises is if there is an absence of or blunted emotions, lack of conscience and empathy coupled with the inability to form attachments to others, what replaces these human qualities?According to psychologist Dr. Liane Leedom, the inability to have emotions is replaced by the motivation for dominance, control or power; to them, having power over another is the pleasure (Leedom, 2006).For those psychopaths who view homicide as an acceptable and ultimate solution to controlling others, Dr. Leedom's views are accurate considering that homicide is the ultimate control over another person.Another way to think about what replaces these human qualities is to consider, psychologist, Dr. Martha Stout's assessment when she states that life, in essence, is reduced to a contest and human beings are nothing more than game pieces to be moved about, used as shields or destroyed-it's about winning to satisfy an intrapsychic need (Stout, 2005). Thus it is not surprising that psychopathic offenders search for vulnerability in other people to exploit (Hakkanen-Nyhom & Nyholm, 2012) supporting psychology professor, Dr. Robert Rieber statement "[F]or psychopaths, power can be experienced only in the context of victimization: If they are to be strong, someone else must pay.There is no such thing, in the psychopathic universe, as merely the weak; whoever is weak is also a sucker, that is, someone who demands to be exploited" (Rieber, 1997, p. 
47). Psychopaths have a strong need for psychological and/or physical control to reinforce their authority, especially if they perceive threats from others (Martens, 2003). Psychopathy is one of the strongest predictors of aggression and violence, and the distinct psychopathic traits of lack of empathy and lack of remorse are the best indicators of aggression, especially unprovoked aggression (Reidy, Zeichner, & Martinez, 2008a). Expanding on Martens (2003), research has shed light on the fact that the narcissistic sub-dimension of psychopathy is linked to the probability that a psychopath will resort to violence (Cale & Lilienfeld, 2006) to protect their self-image (Pincus & Lukowitsky, 2009).

While several experts in the field allude to the idea of psychopathy and its influence on white-collar criminality, empirical research is sparse (Lesha & Lesha, 2012), as is research on the behavioral profile of these offenders (Ragatz et al., 2010), even though individuals of professional status who would be in a position to commit white-collar crime do exhibit psychopathic traits (Mullins-Sweat et al., 2010). Although psychopathy has become a highly researched personality disorder predicting criminal behavior, "there is little understanding as yet how psychopathy contributes causally and under what circumstances" to criminal behavior in general (Skeem et al., 2011, p. 126). There appears to be anecdotal evidence of a relationship between the personality traits of psychopathy and criminal thinking patterns which, when combined, create a negative synergy increasing the risk of white-collar criminal behavior (Perri, 2013). However, its application to white-collar criminality cannot simply be based on anecdotal evidence of an expression of psychopathy (Smith & Lilienfeld, 2012).

Researchers suggest that some psychopaths are more capable of engaging in white-collar crime because their executive and cognitive abilities allow them not to act impulsively and instead to focus in a conscientious manner (Glenn & Raine, 2009), while rarely relying on violence due to their intelligence (Herve & Yuille, 2007). Ray and Jones (2011) examined the relationship between psychopathy and attitudes towards white-collar crime. They found that self-centeredness entailing blaming others for one's own mistakes, manipulative behaviors and a disregard for norms, together with cold-heartedness, were positively associated with white-collar criminal attitudes and intentions to commit white-collar crime. Research also reveals traits reflecting egocentricity, manipulation, exploitation, and a Machiavellian attitude where the means justify the ends regardless of their criminal nature (Ray, 2007). Psychopathic white-collar offenders high in conscientiousness prefer planned rather than spontaneous behavior, effectively regulating their impulses by keeping their behavior in check to prevent detection. This observation is interesting in that it parallels red-collar violence, which reflects an overwhelmingly instrumental act rather than an impulsive reaction. Ragatz and colleagues (2012) found that white-collar offenders had lower scores on lifestyle criminality but scored higher on some measures of psychopathic traits compared with non-white-collar offenders, while white-collar versatile offenders who committed both white-collar and non-white-collar offenses were highest in displaying criminal thinking traits. Psychopathic white-collar offenders displayed more narcissism and attitudes of entitlement when compared to non-white-collar psychopathic offenders.
Psychopathy, Reactive and Instrumental Violence

Although psychopaths do engage in reactive violence, they also engage in violence, especially homicide, in a more predatory, planned, and instrumental manner as contrasted with non-psychopathic homicidal offenders, by roughly a two-to-one margin (Woodworth & Porter, 2002, 2007). It has been theorized that the absence of emotion actually assists them in planning the homicide because they can, with coolness, think through a plan as opposed to reacting impulsively, where emotions dictate a violent outburst that is contemporaneous with the provocation (Cleckley, 1976; Meloy, 2002). Psychopathic offenders are more apt to view murder as a means to an end (Porter & Woodworth, 2006), not an unpleasant act (Snowden et al., 2004), where the "end" may be the pleasure gained from the violent act itself (Warren, 2009), coupled with the fact that they do not see a difference from other instrumental actions simply because violence is involved (Porter & Woodworth, 2006). Psychopathic homicidal offenders are more likely to claim that their actions were reactive and not instrumental (Porter & Woodworth, 2007).

Psychopaths do not display a state of heightened emotional arousal at the time of the murder, as contrasted with non-psychopaths whose murders exhibited an emotional discharge such as jealousy, rage or a heated argument during the offense (Porter & Woodworth, 2007). The rage displayed by a psychopath, in the context of instrumental violence, should not be confused with emotionally driven rage. Psychopathic rage represents a dispassionate expression of their devaluation of others, where murder is a viable solution to satisfy their motives (Perri, 2011a). However, their dispassion should not be interpreted to mean that they might not experience gratification, a smug satisfaction from their violence, due to their belief that they have fulfilled their motive through dominance and control (Murphy & Vess, 2003; Perri & Lichtenwald, 2010). Furthermore, their rage may be invisible to an observer because it is disguised as silence or feigned indifference; however, the thinking behind the placid, stoic exterior may be shockingly sadistic, retaliating at a time least expected and in a manner totally unanticipated (Samenow, 1984).

What is confusing and may appear contradictory is how psychopathic violence can be instrumental when part of the psychopathic construct is that they are impulsive. Psychopathic impulsivity can have multiple definitions, which explains the confusion (Hart & Dempster, 1997). For example, psychopathic impulsivity can refer to a tendency to commit harmful acts without planning, or to general "lifestyle impulsivity" that may reflect parasitic behaviors, irresponsible dispositions and lack of goals. Another usage is "impulsive aggression", referring to a tendency to perceive environmental stimuli as threatening and to respond in an aggressive manner. The tendency toward impulsive aggression may reflect the fact that psychopaths see hostile intent in the actions of others and are quick to react with a "preemptive strike" toward others, be they family or non-family members (Hart & Dempster, 1997, p. 223).
Although psychopathic impulsivity can mean "unpremeditated", "acting before thinking" or "spur of the moment" behavior, one should not extrapolate this to mean that psychopathic aggression is somehow random, lacking in reflection, risk assessment or planning; acting without fully considering the consequences may simply reflect opportunistic behavior, a readiness to exploit a situation for immediate gratification. Clinical experience assessing psychopathic homicidal offenders supports the view that their violence can be methodical and strategic (Hanlon, 2010). The result is an individual who "appears impulsive, rash, irrational, and/or reactive to an observer although in reality, his or her plan came about in a calm, methodical, and instrumental fashion" (Herve & Yuille, 2007, p. 434). So what is meant by psychopathic impulsivity, and what is its link to instrumental and reactive violence?

At the moment, research debates still exist as to whether psychopathy is more related to instrumental violence, and the exact reasons for this association are not entirely clear despite numerous attempts to refine the link, because psychopaths also engage in reactive violence (Blais, Solodukhin, & Forth, 2014). Perhaps, as Porter and Woodworth (2002) state, some psychopaths exhibit "selective impulsivity" in that the more serious the type of violence they engage in, the more willing psychopaths are to take an instrumental rather than a reactive approach to a homicide, and when convenient they exhibit reactive violence (p. 443). More research is also revealing that psychopathy comes in different variations, producing different expressions of the psychopathic construct depending on how the impulsivity factor is manifested (Skeem et al., 2011). Thus, "there may be subtypes of psychopathic offenders who engage in more instrumental and severe violence than others" (Laurell, Belfrage, & Hellstrom, 2014, p. 292).

For example, some psychopaths are considered more impulsive, interpersonally hostile, anxious, aggressive, and more apt to engage in reactive violence, while other psychopaths exhibit less anxiety, greater traits of emotional detachment and narcissism, and are more instrumental in their violence, where their goal is to control and dominate (Skeem et al., 2007, p. 406). Some researchers posit that psychopaths are "impulsively instrumental-that is, that they commit goal-directed violence with little planning or forethought", producing a flawed risk assessment (Hart & Dempster, 1997, p. 226). In other words, how psychopaths engage in risk assessment may differ from non-psychopaths for various reasons. For example, psychopaths tend to be more immune to the thought of punishment as a deterrent to engaging in crime, as opposed to a non-psychopath, who may experience fearfulness or anxiety at the thought of actually engaging in a criminal act and subsequently refrain from acting upon impulses (Skeem et al., 2011). In addition, narcissistic immunity may distort their risk assessment because their grandiosity produces the belief that they are above apprehension due to their superior planning skills (Perri & Lichtenwald, 2008b). This observation makes sense in that, if white-collar offenders are capable of successfully engaging in fraud schemes, then it is plausible that their narcissistic grandiosity would lead them to believe they are capable of superimposing their fraud scheme skills to successfully plan a homicide.
The Christopher Porco Case

During the early morning hours of November 15, 2004, Christopher Porco, then 21 years old, entered his family home and brutally murdered his father, Peter Porco, and attempted to murder his mother, Joan Porco, with an ax while they were sleeping. Prior to the homicide, Christopher fraudulently obtained loans in the thousands of dollars using his parents' personal and tax information (Lyons, 2006a) while accumulating thousands of dollars of debt from lavish spending and internet gambling (McNiff & Cuomo, 2006). The parents eventually confronted Christopher about his fraudulent behavior and threatened to go to the authorities to take action against him (McNiff & Cuomo, 2006). Within two weeks from the time the father warned his son, Christopher executed his plan to neutralize the threat. Behavioral data yielded warning signs of Christopher's psychopathic qualities in a trail of deceitful behavior such as falsifying college grades (Karlin, 2006), staging burglaries at his parents' home (Lyons, 2006a), breaking into his former employer's place of work and stealing equipment (Lyons, 2006b), and lying to friends about a fictitious inheritance from his grandmother worth millions.

Several psychologists familiar with the case stated that Christopher fits the profile of a psychopath, focusing on a continued pattern of lying and deceitful behavior. As psychologist Dr. Wulfert Edelgard stated, "There's an overlap between psychopathic and narcissistic tendencies… He (Christopher) believes that the rules do not apply to him and he has a need to show off in front of people" (Grondahl, 2006, para. 23). Moreover, post-offense behavior is an important indicator of whether psychopathic traits are present because these offenders are less distraught and immobilized with fear (Hakkanen-Nyholm & Hare, 2009). For example, Christopher displayed no grief during the interview with detectives within hours of the homicide (Perri, 2011a). Additionally, during a hospital visit, Christopher stated, "I saw her…I burst into tears. I fell on the floor right there" (Bell, 2007, para. 2). Yet a colleague who went to the hospital claimed to be "struck by Porco's odd behavior because he did not seem to exhibit any grief" (Bell, 2007, para. 2). Psychopathic offenders engage in impression management by attempting to read a situation to determine the appropriate emotional response others want to hear, in order to enhance their credibility to outsiders (Hakkanen-Nyholm & Hare, 2009).
Comparison of the Christopher Porco and Eric Hanson Profiles

Comparing the Porco and Hanson homicide cases for profile similarities and dissimilarities, displayed in Table 2, raises some questions. For example, what can be a plausible explanation for why these red-collar offenders decided to kill their mothers even when the mothers tried to help their sons? Consider the previously mentioned research that, even in the absence of provocation, the narcissistic aspect of red-collar offenders produces aggression against innocent individuals viewed as potential threats, with the offenders resorting to intense aggressive acts to satisfy their sense of entitlement. In addition, their controlling, psychopathic nature may predispose them to seeing hostile intent in the actions of others, resorting to a preemptive strike toward family members (Hart & Dempster, 1997). In these cases, their mothers too represented potential witnesses to their fraud schemes and to the murders of their husbands; thus, in cold blood, they needed to be eliminated as potential witnesses. These offenders display the criminal thinking trait of power orientation, exemplifying the need to exert control over a person who appears to be interfering with the offender's goal, which is the perpetuation of fraud (Walters, 2002, p. 57).

Red-Collar Crime and the Female Offender

Common and legitimate explanations used to rationalize homicides committed by females include killing because of mental illness, coercion, abuse, or self-defense (Follingstad et al., 1989; Vronsky, 2007). However, women have been perceived to be capable of committing only reactive or "expressive" violence, an uncontrollable release of pent-up rage or fear, and to murder unwillingly and without premeditation. Many in the social and behavioral sciences communities were unwilling to accept that women could be intentionally violent (Beckner, 2005). However, as for their criminal inclinations, "Women hurt others. They abuse, kill, inflict harm on the human spirit, and dominate others through pain and intimidation.... Violence is not limited to men" (Jack, 1999, p. 11). Nancy Siegel, for example, murdered her fiancé by strangulation to prevent the detection of her fraud schemes against him (Siegel, 2008). It is interesting to compare the Robert Petrick case mentioned above to the Siegel case in that both offenders were courting future partners for marriage while planning the murder of their current spouse or fiancé.

Female Behavioral Traits

Women exhibit anti-social behaviors (Dolan & Vollm, 2009) coupled with personality disorders (Warren & South, 2006), suggesting that these traits serve as red-collar crime risk factors (Perri & Lichtenwald, 2010). Psychopathy, for example, is displayed by both genders (Cleckley, 1976), although it has been studied in reference to men more than to women (Skeem et al., 2011). According to Dr. Robert Hare, there are many clinical accounts of female psychopaths, but relatively little empirical research (Carozza, 2008). Reasons for the neglect of research on female psychopathy include the persistence of rigid sex role stereotypes in society and the fact that the diagnosis of personality disorders is, to a large extent, influenced by sex role expectations (Widom, 1978; Brown, 1996). Female offenders diagnosed with narcissistic personality disorder (Warren et al., 2002) and psychopathy have engaged in violence, homicide and white-collar crimes (Warren & South, 2006; McKee, 2006).
The available evidence suggests that male and female psychopaths share similar interpersonal and affective features, including egocentricity, deceptiveness, shallow emotions, and lack of empathy (Carozza, 2008), resorting to brutal violence, including instrumental violence, and engaging in fraud (Wynn, Hoiseth, & Pettersen, 2012). Violence is a solution available to them just as other means of controlling someone are, such as deceit, manipulation, and charm. Female psychopaths were comparable to psychopathic males in terms of irresponsible lifestyles (Rogers, Jordan, & Harrison, 2007), higher unemployment rates and relationship instability (Salekin et al., 1998), more promiscuous behavior (Grann, 2000), and reliance on manipulation to achieve goals (Nicholls & Petrila, 2005). Regardless of gender, they do not value traditional social norms or close relationships, can be vengeful and physically violent, and victimize others for personal gain (O'Connor, 2002). Homicide and psychopathy have been linked to female offenders (Hicks, Vaidyanathan, & Patrick, 2010; Weizmann-Henelius, Viemero, & Eronen, 2003). Consider white-collar criminal turned serial killer Velma Barfield, who poisoned her fraud victims because she believed they would detect and disclose her fraud (Barfield, 1983). Harvard professor and forensic clinical psychologist Dr. Ellsworth Fersch concluded that Barfield was a psychopathic criminal, exhibiting manipulation skills and antisocial behavior, coupled with no remorse (Fersch, 2006).

The Sante Kimes Case

After the victim, David Kazdin, detected that his colleague Sante Kimes had committed mortgage fraud in which she obtained a $280,000 loan in his name, he began receiving threatening telephone calls from Sante demanding that he cooperate with the fraud scheme. Sante's son, Kenny Kimes, indicated that his mother was concerned about Kazdin's detection of the fraud scheme, with Sante stating "we're going to have to kill him" (Grace, 2004, para. 7). Kenny went to Kazdin's home and shot him in the back of the head. In other homicide cases, Kenny testified that he and his mother drugged and killed a 55-year-old male banker by holding his head under water in a bathtub (AP, 2004). Sante and Kenny were also convicted of murdering 80-year-old Irene Silverman with the motive of fraudulently obtaining her residence, with the sentencing judge stating, "It is clear that Ms. Kimes has spent virtually all her life plotting and scheming, exploiting, manipulating and preying upon the vulnerable and the gullible at every opportunity" (King, 2002, p. 279).

Forensic psychologist Dr. Arthur Weider stated that Sante displayed psychopathic personality features with "no guilt, conscience, remorse or empathy," adding that Sante was "socially charming, arrogant, full of herself [and] egocentric coupled with a superiority complex" (King, 2002, p. 266). Psychiatrist Dr. William O'Gorman "found Sante to have poor insight and impulsive" and lacking in reflective judgement (King, 2002, p. 252). Despite the extraordinary amount of planning that Sante engaged in, she left behind incriminating evidence, perhaps reflecting the "impulsive instrumentality" previously mentioned. For example, Sante maintained voluminous notes of her criminal plans, coupled with keeping evidence from the crime scene, which prompted the sentencing judge to state that Sante "grossly over estimated her own cleverness" and to note "the staggering stupidity of a criminal keeping a detailed to-do list" (King, 2002, p.
279): comments reflective of the previously mentioned narcissistic immunity.

Red-Collar Crime and Murder-For-Hire

Murder-for-hire appeals to some red-collar criminals because of the belief, albeit erroneous, that it offers an airtight alibi at the time of the killing for the person who takes out the contract, known as the solicitor. A contract to kill begins in the mind of the solicitor, who experiences some insurmountable problem that can best be solved by having someone else kill the target. The killing is referred to as a "hit" and the person being killed is referred to as the "target". According to Professor James Black, most solicitors do not see themselves as killers; "they want to get rid of a problem and go on with their lives…[T]hey see themselves protecting a way of life or restoring a way of life" by avoiding personal responsibility (Piper, 1999, para. 8). While murder-for-hire may appear to outsiders to be an impulsive act, such killings are the product of considerable reflection and planning (Black & Cravens, 2000). In addition, those who commit the homicide come in different skill levels: some are professional and others may be classified as amateurish (Schlesinger, 2001). Moreover, international red-collar cases also reflect murder-for-hire schemes (Perri & Lichtenwald, 2008b).

The Irwin Margolies Case

Executive Irwin Margolies was found guilty of the deaths of his controller, Margaret Barbera, and her co-worker Jenny Chin (Raab, 1983b). According to Margaret, Irwin generated fictitious invoices to create the appearance of revenue in order to obtain advance payments from a financing company, with the fraudulently obtained advance payments laundered to foreign countries (Raab, 1982a). Margaret agreed to testify against Margolies in a fraud inquiry involving the company (Raab, 1982b). According to the prosecutor, Irwin conceived the scheme to have Margaret and Jenny killed in order to silence witnesses who were disclosing his fraud crimes (Raab, 1983a). Irwin wanted Margaret killed first because she had the records that showed the fraud he committed (Chambers, 1984b). Irwin's attorney paid Donald Nash $2,000 to kill Margaret (Chambers, 1984a). Prior to the murder, Nash stalked the federal witness for four months to learn her daily routine (Perri & Lichtenwald, 2008b).

The Fredric Tokars Case

Attorney Fredric Tokars was found guilty of murdering his wife, Sara Tokars, in a murder-for-hire scheme because Sara had discovered documents revealing his involvement in money laundering and tax evasion activities; he stated that she "knows too much…I'm going to have to have her taken care of" (McDonald, 1998, p. 176). Fredric contracted with his associate, Ed Lawrence, to have his wife killed for $25,000 (McDonald, 1998); however, Mr. Lawrence sub-contracted the killing to a third party for $5,000 (McDonald, 1993). Lawrence did mention to Fredric that his two boys would be without a mother, and Fredric's response was, "They'll be all right…They're young, they'll get over it" (McDonald, 1998, p. 178). While Tokars was in prison, a psychiatrist stated, "Regarding his personality structure it seems apparent that he has been dealt many narcissistic blows. He has a long history of manipulating and coercing people. He did not talk of his crimes at all and he does not seem to have any remorse for his crimes" (Tokars, 2008, para. 17).
Red-Collar Crime and Workplace Violence

Workplace violence is any physical assault, threatening behavior or verbal abuse occurring at or outside the workplace, and it includes homicide, one of the leading causes of job-related deaths (Perri & Brody, 2011). Typical examples of employment situations that may pose higher risks for violence include duties that involve mobile workplace assignments, working alone, and working with volatile persons (Perri, 2011b). Although there are workplace risks, biases about what white-collar criminals are capable of in terms of aggression may cloud one's judgment, so that those risk factors are not incorporated when performing professional duties.

The Michael Howell Case

State insurance auditor Sallie Rohrbach was killed by insurance agency owner Michael Howell (Perri & Brody, 2011; Lowe, 2009) because Sallie detected evidence of his insurance fraud (Boudin, 2009; Wright, 2009). According to Howell's wife, Howell became aggressive with Sallie as she questioned him about his finances, eventually striking her with a computer stand while she was at his agency (Wright, 2009). One colleague stated, "[W]e just don't expect our people in the field to be put in this kind of danger" (Boudin, 2008, para. 17). Ms. Rohrbach's husband indicated that it was his belief Howell "snapped" and did not plan to murder Sallie (APA, 2008, para. 7). Burton and Stewart (2008) debunk the idea that a person just snaps and commits workplace homicide; such homicides are planned in advance, targeting specific individuals.

Attempted Red-Collar Crime

Attempted murder is the incomplete, unsuccessful act of killing someone. The cases listed in Table 3 are reflective of attempted red-collar crime. Interestingly, these cases also reflect murder-for-hire schemes displaying planned, instrumental violence. In the Paul Kruse case, after an employee disclosed the securities fraud to the Federal Bureau of Investigation (FBI), Kruse hired hitmen to murder the former employee to prevent her from testifying for the government (USDOJ, 2013), while Paul's brother, who was an accomplice, committed suicide prior to the resolution of the case. There may be times when an individual commits suicide because their fraud was discovered; however, such an act does not constitute red-collar crime because suicide is not considered a crime. For investigative purposes, caution is warranted because a perceived suicide may reflect a red-collar crime, given that a homicide can be staged to look like a suicide (Geberth, 2013).

The Randy Nowak Case

In 2008, Randy Nowak was found guilty of the attempted murder of an IRS agent (Smith, 2008). The prosecution argued that Nowak's motive for the murder revolved around his fear that the agent would disclose his tax fraud and money laundering schemes (Geary, 2009), coupled with his not having paid taxes for several years (Pera & Geary, 2008). Evidence consisted of recorded conversations between Nowak and an undercover FBI agent posing as a hit man (Jones, 2008). Nowak paid him $10,000 to kill the agent (Jones, 2008), plus another $10,000 to burn down the local IRS office so that any documents linked to his fraud would be destroyed (Geary, 2009).
Conclusion

For many decades, misperceptions prevailed about the white-collar offender profile based on projection bias, because academia failed to devote some of its energies to understanding this offender class and to producing a more refined and accurate behavioral profile of white-collar offenders in general, and specifically to researching those who display violent tendencies. As we have observed, red-collar criminals are not an anomaly to be ignored; they harbor anti-social behavioral traits no different from those of street-level homicide offenders. As indicated at the beginning of this article, scholars from diverse disciplines devoting some of their time and resources to this problem would greatly assist in refining our understanding of this lethal, understudied offender group.

Table 1. Red-collar crime cases

Table 2. Offender profile trait comparison

Table 3. Attempted red-collar crime cases
The fully differential hadronic production of a Higgs boson through bottom-quark fusion at NNLO

The fully differential computation of the hadronic production cross section of a Higgs boson via bottom quarks is presented at NNLO in QCD. Several differential distributions with their corresponding scale uncertainties are presented for the 8 TeV LHC. This is the first application of the method of non-linear mappings for NNLO differential calculations at hadron colliders.

Introduction

The Large Hadron Collider is now in its third year of successful operation and both ATLAS and CMS report tantalizing hints of a Higgs boson at about 125 GeV. By the end of the 2012 run the experiments are likely to be able to either confirm those hints as a firm discovery or else exclude any Standard Model (SM) Higgs boson. In the event of a firm discovery, further detailed examination of various production and decay channels will be necessary to determine the nature of the Higgs sector. The dominant production channel in the SM, but also in all non-fermiophobic models of new physics, is single Higgs hadroproduction. Within the SM the production mechanism is dominated by gluon fusion, since the alternative mechanism of quark annihilation is severely suppressed by the small Yukawa coupling of bottom and light quarks to the Higgs boson. However, if the Higgs sector is non-minimal, as is the case in any two-Higgs-doublet model, the bottom-quark Yukawa coupling can be enhanced and bottom-quark annihilation can become an important production mechanism.

In this paper we present the fully differential NNLO QCD cross section for bb → H in the 5FS within the SM. NLO computations are currently performed with very well automated methods. Obtaining fully differential cross sections and decay rates at one order higher in the perturbative expansion requires the solution of new, challenging problems. Regarding the treatment of the real emissions, pioneered for NLO computations in [26, 27], rapid progress has been made in the last decade, mainly focusing on the treatment of the double real emission (a variety of methods has been proposed, covering the range from fully orthodox to outright heretic), which resulted in the fully differential calculations of Higgs production via gluon fusion [54-57], Drell-Yan [58-62], associated Higgs production with a vector boson [63], three-jet production from e+e− [64-67] and diphoton production [68]. However, further development of methods and new ideas are necessary for efficient cancellations of infrared singularities and evaluations of novel two-loop amplitudes in more complicated LHC processes. With this paper we also take the opportunity to complete the second NNLO application, after the fully differential decay H → bb [69], using the method of nonlinear mappings to factorize singularities in the double real corrections [70]. The double real contributions have often been regarded as the bottleneck of NNLO, and this paper therefore also demonstrates the validity of the approach as a method for NNLO corrections in hadronic collisions.

The paper is organized as follows: in section 2 we set up the notation and describe the main components of the calculation. In section 3 we provide some detail about the treatment of the separation of soft and hard contributions. In section 4 we describe the treatment of the double real and the real-virtual components. In section 5 we present the way we perform the (non-trivial at NNLO) convolutions for the collinear subtraction terms in mass factorization.
In section 6 we provide various numerical results on jet rates, p_T and rapidity distributions, demonstrate the completely differential nature of our calculation, and provide typical results for the case in which the Higgs boson decays to two photons, including standard experimental cuts on photon momenta and isolation.

Fully differential calculations

One of the merits of fully differential calculations is the possibility to arrive at theoretical predictions for observables in the presence of final state phase-space cuts, like those used in experimental analyses, under the precondition that the observable defined is infra-red safe. Throughout this article the dependence on such arbitrary phase-space constraints will be contained in the jet-function J({p}_f), where {p}_f denotes the set of final state momenta in the lab frame. We will refer to the fully differential cross section as σ[J], which we schematically define as a sum over all final states f of the corresponding exclusive cross sections weighted by the jet function.

The usual role of the jet function J is to apply arbitrary final state phase-space cuts while ensuring infra-red safety. Here we promote it to a further task, which is to keep track of the bin-integrated cross section for any given differential observable, with or without applying phase-space cuts. This can be achieved simply at the level of Monte Carlo integration by passing to J not only the set of final state momenta but also the weight of the given event. The role of the jet function becomes crucial in all amplitudes that have soft and collinear singularities which are regulated by counter terms. In such cases the jet function keeps track of the kinematics of every subtraction term.

Hadronic cross section

We consider the hadronic process P_1 + P_2 → H + X, where P_1, P_2 are the incoming hadrons, H denotes the Higgs boson and X generically denotes surplus QCD radiation in the final state. The Higgs boson is assumed to couple only to bottom quarks via the SM Yukawa interaction. Assuming the usual factorization, the fully exclusive hadronic cross section can be written as a convolution of the bare (unrenormalized) parton distribution functions (PDFs) f_i(x) in the 5FS with the partonic cross sections, where x_1 and x_2 are the usual Bjorken-x momentum fractions of the partons i_1 and i_2 respectively, and τ = m_H^2/S, where m_H^2 is the squared (on-shell) mass of the Higgs boson and S is the square of the total center of mass (CoM) energy of the colliding hadrons. By σ_{i_1 i_2 → HX} we denote the partonic cross section for the corresponding partonic processes. The PDFs we have inserted in eq. (2.3) are bare and we still have to rewrite them in terms of the renormalized PDFs. This step will introduce collinear counter terms that cancel the initial state collinear singularities of the partonic cross section, which remain after all real and virtual corrections are added together. This cancellation is achieved fully numerically in our calculation. We outline the way collinear counter terms can be computed process-independently in section 5.

Partonic cross sections

Expanding the partonic cross section to NNLO in QCD, ultraviolet divergences are absorbed into the renormalized couplings, see e.g. [19, 69] or any textbook on QCD. In contrast, infra-red divergences cancel only after summing all real and virtual corrections contributing to a given infra-red safe observable.
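The explicit form of the factorized cross section referred to as eq. (2.3) did not survive extraction. For orientation only, a standard hadron-collider factorization formula of the kind described in the text reads (the exact normalization, integration limits and arguments used in the paper may differ):

```latex
\sigma[J] \;=\; \sum_{i_1, i_2} \int_0^1 \! dx_1 \int_0^1 \! dx_2 \;
  f_{i_1}(x_1)\, f_{i_2}(x_2)\;
  \sigma_{i_1 i_2 \to H X}\!\big(x_1 x_2 S\big)[J],
\qquad
\tau = \frac{m_H^2}{S}.
```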
Factorized singularities on the unit hypercube may be dealt with using the plus-distribution expansion, eq. (2.6), where D_m(x) = [ln^m(x)/x]_+ and the plus-distribution is defined through its action on a test function, with the value of the test function at the singular endpoint subtracted. Beyond NLO a factorization of singularities is highly non-trivial. In this work it is achieved systematically using the method of nonlinear mappings [70]. Care must be taken when dealing with infrared singularities of real emission amplitudes: the plus-distribution, eq. (2.6), also acts on the jet function, such that cancellations happen also at the differential level and are therefore fully local.

We now give a brief overview of the matrix elements and phase-space measures required for the computation of the partonic cross section. Here we assume the amplitudes to be color and spin summed. Averaging and phase-space symmetry factors will be explicitly factored out.

i) Purely virtual corrections: The purely virtual corrections include the Born, virtual and double virtual contributions, where s_12 = (p_1 + p_2)^2 is the partonic CoM energy and N_bb = 1/36. The corresponding phase-space volume element is trivially given by dΦ_{2→1} = 2π δ(s_12 − m_H^2), constraining s_12 = m_H^2 = p_H^2. Regarding the computation of amplitudes we refer the reader to [69], where the full virtual matrix elements can be found. Explicit expressions for these contributions are given in section 4.4.

ii) Single real emissions: The single real emissions include real and real-virtual corrections. Further details are given in section 4.1.

iii) Double real emissions: Further details are given in section 4.3.

Separation of soft and hard

Since in the soft limit of a 2 → 1 process the produced particle is at rest in the partonic center of mass frame, there is no difference between the soft piece of a fully differential partonic cross section and that of a fully inclusive partonic cross section. It is therefore very convenient to isolate the soft contribution (σ_S) to the partonic cross section (σ) from the hard one (σ_H). This allows for a fully analytic treatment of σ_S, while σ_H must, as far as external kinematics are concerned, be treated numerically. Let us introduce the variable z = m_H^2/s_12. Then the soft limit of all real emission amplitudes corresponds to z → 1, which identifies the production threshold. Given that infrared singularities are of logarithmic nature, the divergence at z = 1 can be exposed explicitly, where σ_V denotes the purely virtual correction, while σ_R^(n)(z, ε) denotes the real corrections collectively (at NNLO this includes both real-virtual as well as double real corrections). Separation into soft and hard parts can now be achieved by adding and subtracting the soft limit from the real-emission term, yielding a decomposition such that σ_H is integrable in the range z ∈ [τ, 1]. Of course this decomposition of the partonic cross section into its soft and hard components is not unique: one could use any other subtraction term with the correct limit, thereby including, for example, the luminosity function. Our choice, however, has the nice property that the soft part σ_S can be expanded purely in terms of δ- and plus-distributions via eq. (2.6). Thereby all threshold divergences between σ_V and σ_R^(n) are canceled analytically, leaving only a finite threshold contribution. Furthermore this framework provides a natural way to incorporate threshold re-summation in fully differential calculations.
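The expansion referred to as eq. (2.6) and the definition of the plus-distribution were lost in extraction. The standard forms, consistent with the surrounding description, are (the paper's exact normalization may differ):

```latex
x^{-1+\epsilon} \;=\; \frac{1}{\epsilon}\,\delta(x)
  \;+\; \sum_{m=0}^{\infty} \frac{\epsilon^{m}}{m!}\, \mathcal{D}_m(x),
\qquad
\mathcal{D}_m(x) = \left[\frac{\ln^m x}{x}\right]_+ ,
\qquad
\int_0^1 dx\, \big[g(x)\big]_+ f(x) = \int_0^1 dx\, g(x)\,\big(f(x)-f(0)\big).
```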
The single real

The single real partonic cross section is built from the single real emission matrix elements integrated over the corresponding phase space. We define d Φ_2 to be the conventional phase-space volume dΦ_2 up to some renormalization constants. Here we have to consider six separate channels; the corresponding amplitudes may all be found in [69]. A convenient phase space parametrization is given in terms of a variable λ ∈ [0, 1], in which the Lorentz invariants s_13 and s_23 take a simple form. Note that the singularities of s_13 and s_23 are factorized in λ, (1 − λ) and (1 − z), which allows for a simple subtraction of the poles using eq. (2.6). This also allows us to identify the soft contribution explicitly. The calculation of the hard part then trivially follows from eq. (3.4).

The real-virtual

The real-virtual partonic cross section may be expressed in the same form, where we have taken the liberty to define d Φ_2 to equal eq. (4.2) up to some renormalization constants. The real-virtual amplitude can be obtained from the corresponding one for the decay process H → bb published in [69] by crossing particles to the initial state. The box integrals we encounter in this amplitude are entirely expressible in terms of Gauss' hypergeometric function, where the argument z can lie in any of the three sets S_fine, S_inv and S_nl. When attempting a direct subtraction of the singularities created by the real emission, the points of subtraction overlap with singular points of the hypergeometric functions in the box integrals. It was found in [69] that one can apply transformations on the argument of the functions to circumvent this difficulty. Since here we are no longer in the euclidean regime of this amplitude, the required transformations are different from those in [69]. Analyzing integral representations, we find that we have to apply the following identities:

• If z ∈ S_fine the soft-collinear limits are well defined and no transformation is needed.
• If z ∈ S_nl we apply a transformation of the hypergeometric argument.
• If z ∈ S_inv we employ the argument inversion.

After these transformations are applied, the singularities corresponding to the real emission are factorized in λ, (1 − λ) and (1 − z). The soft singularity structure of the real-virtual may then be extracted; in the soft limit only the m = 2, 4 coefficients survive and the integration over λ can be done analytically. The explicit expressions for the soft limit can be found in appendix A. The computation of the hard part then follows from eq. (3.4). While the structure is more complicated than in the case of the single real, a direct subtraction via eq. (2.6) can still be achieved in a straightforward manner. In order to obtain the final Laurent expansion in ε we employ the library HypExp [71] to expand the hypergeometric functions in terms of polylogarithms.
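The explicit transformation identities quoted after the bullet list did not survive extraction. For orientation, the standard Pfaff transformation and the standard argument-inversion formula for the Gauss hypergeometric function, which are the usual tools for moving singular points of the argument, read (the exact variants used in the paper may differ):

```latex
{}_2F_1(a,b;c;z) \;=\; (1-z)^{-b}\;{}_2F_1\!\left(c-a,\,b;\,c;\,\frac{z}{z-1}\right),
```

```latex
{}_2F_1(a,b;c;z) \;=\;
 \frac{\Gamma(c)\,\Gamma(b-a)}{\Gamma(b)\,\Gamma(c-a)}\,(-z)^{-a}\,
 {}_2F_1\!\left(a,\,a-c+1;\,a-b+1;\,\frac{1}{z}\right)
 \;+\; \big(a \leftrightarrow b\big).
```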
In order to deal with the intricate singularities, their factorization and subtraction, we refer the reader to the methods developed in [70], which we have implemented faithfully. As in the single real emissions, the double soft singularity occurs at the threshold. Its structure may be identified as (4.9) The soft Let us expand σ S in the strong coupling where The NLO correction ∆ S,NLO may be expressed as while the NNLO correction ∆ S,NNLO can be expanded as follows Here n f is the number of light flavors, ζ n are the usual Riemann zeta values and The explicit soft limits of the real-virtual and double real pieces that are included in σ S can be found separately, and with their full color-dependence, in appendix A. Collinear factorization Parton distribution functions are renormalized to absorb initial state collinear singularities viaf where µ is the factorization scale and f i are the bare parton densities. In the following discussion summation over indices will always be assumed unless explicitly stated. We will also need the convolution integral, which is defined as The kernel Γ ij is defined in the MS scheme by where the coefficients of the expansion in the strong coupling involve the Altarelli-Parisi splitting functions P n ij . Specifically, with β 0 = 11 4 − 1 6 N F . Let us define the inverse of the kernel Γ ij as JHEP07(2012)115 such that it satisfies the condition (Γ ik ⊗ ∆ kj ) (z) = δ ij δ(1 − z). Solving for the coefficients yields The strong coupling expansion of the bare PDFs then reads In evaluating the collinear counter terms we encounter convolutions of the type (f ⊗ ∆)(x), where the function f is regular and ∆(x) can in general be written as Care has to be taken with convolutions over D n . Since the integration does not start at zero, a boundary term must be included Because of the downward sloping shape of all parton distribution functions, a quadratic remapping of the integration variable y was found to optimize the convergence behavior, i.e. we parametrized the integral like with z uniformly distributed between 0 and 1. In our code, this integration is carried out numerically. The integration is onedimensional, which makes a simple deterministic trapezium integration with about 50.000 points the simplest option. The result of the integration is accurate to at least 5 digits, which is usually below the precision of the Monte Carlo integration. The precision of the JHEP07(2012)115 integration can be arbitrarily increased by increasing the number of points used. For every bare PDF used, we construct a one-dimensional grid in the Bjorken-x variable and interpolate from it during runtime. An alternative to constructing a grid is to perform the integration numerically along with the phase space ones, thereby increasing the dimensionality of the Monte Carlo integration by one (or by two in the case of double NLO kernels convoluted with the Born). We have implemented this as well and found that it yields the same results as the grid approach. This procedure allows us to expand the (singular) bare PDFs via eq. (5.10) order by order in the dimensional regulator and substitute them directly in eq. (2.3). The singularities in the resulting convolutions, appearing as poles in the -expansion, cancel the initial state collinear singularities of the partonic cross section. This cancellation is achieved numerically in our calculation and can be observed bin by bin in e.g. the rapidity distribution of the Higgs boson. 
One can achieve this cancellation in each initial state channel separately, at the cost of separating the convolution integrals depending on the initial state parton in the convolution, i.e. by not performing the implicit j-summation in eq. (5.11). It is worth pointing out that the procedure described here is entirely generic, i.e. it provides the collinear counter terms for any NNLO process numerically. Moreover, we thereby circumvent the usual insertion of eq. (5.1) in the equivalent of eq. (2.3) for renormalized quantities and the resulting cumbersome and process specific analytic treatment of the convolutions. Numerical results We have performed a number of tests to ensure that our results are consistent with each other and with results available in the literature: • We have implemented the entire calculation in two different computer codes, one in Fortran and one in C++, and all results agree within their respective Monte Carlo errors, both inclusively and differentially. • The coefficients of all poles in the -expansion of all cross sections cancel both inclusively and differentially for the entire process and also for all individual initial state channels. • The inclusive cross section agrees with the one available in [19] and from ihixs [73] and so does the inclusive cross section per initial state channel. This is the first independent check of the inclusive cross section published in [19] and adopted in [73]. • The soft limit of both real-virtual and double real contributions were computed both numerically (as a limiting case of the generic matrix elements) and analytically. Moreover the integrated double real contributions were found to agree with an analytic computation provided by [72]. • The subtraction process for every double real integral was implemented in two different ways and were found in complete agreement. We present results for the LHC with a center of mass energy of 8 TeV. We fix the mass of the Higgs boson at 125 GeV. We have used the MSTW2008 (68%CL) PDFs for all results presented here. The value of α s at m Z that we use is the best-fit value of the PDF set at the corresponding order. We use µ R = m H as the central renormalization scale. The value of α s used is run from m Z to µ R through NNLO in QCD. The mass of the bottom quarks is set to zero in all matrix elements, consistently with the 5FS choice. The bottom Yukawa coupling, however, depends on the mass of the bottom. The Yukawa coupling at µ R is obtained from the Yukawa coupling at µ * = 10 GeV, using m b (µ * ) = 3.63 GeV. JHEP07(2012)115 We do not vary µ R in what follows, since the µ R scale dependence of the total cross section has been found to be very mild. We have also checked that the µ R -dependence of differential distributions is very small. Previous studies have shown that the inclusive cross section is very sensitive to the choice of factorization scale. Arguments related to the validity of the 5FS approximation with respect to the collinearity of final state b-quarks, as well as to the matching to the 4FS calculation or to the need for a smoother perturbative expansion, point to factorization scales that are much lower than the Higgs boson mass. We adopt the choice µ F = m H 4 as a central scale and vary it in the range [ m H 8 , m H 2 ] to estimate the related uncertainty. All Monte Carlo integrations was performed with the Cuba [74] implementation of the Vegas algorithm. The rapidity distribution of the Higgs boson is shown at figure 4. 
As expected, the perturbative expansion converges smoothly for this choice of central µ_F and the NNLO uncertainty band is entirely engulfed by the NLO one. The transverse momentum distribution of the Higgs boson is shown in figure 5. This observable starts at NLO in QCD in the 5FS, and the fixed order prediction fails, as usual, to describe the very low p_T spectrum due to the related large logarithms. In the large p_T range we see that the NNLO calculation leads to a harder spectrum than the NLO one, and the NLO scale uncertainty fails to capture this feature. This implies that great care should be taken when relying on NLO predictions for observables that are highly exclusive in the transverse momentum of the Higgs boson. The differential distribution in both the rapidity and the p_T of the Higgs is shown in figure 6, both in a three-dimensional lego plot and in a density plot. We see that the bulk of the events are produced centrally (with |y| < 2.5) and at relatively low p_T (35−50 GeV). In figure 7 we show the cumulative distribution of the Higgs transverse momentum. This observable is equivalent to the cross section in the presence of a jet veto at NLO, but is only related to it indirectly at NNLO. In figure 8 we present the cross section in the presence of a jet veto. We see again that the perturbative description for high p_T cut-offs is satisfactory (despite the discrepancy at high p_T between NLO and NNLO, which is, in absolute terms, unimportant), while for cut-offs lower than 20 GeV the NLO description does not coincide with the NNLO one. The vanishing of the uncertainty around 15 GeV (which in the case of the jet veto takes place at a slightly lower p_T-veto value) is a feature reminiscent of a similar situation in Higgs production via gluon fusion [75]. The fixed order prediction in this region is very stable under varying the factorization scale, and any residual uncertainty in quantities like the acceptance in the presence of a veto is driven by the uncertainty in the total cross section. Various approaches to assign a larger uncertainty to similar observables involving re-summation exist, see for example [76].

An important observable in bb → H is the cross section for zero, one and two jets. We use the anti-k_T algorithm [77] for jet clustering, with a cone in the y − φ plane of radius R = 0.4. We show in figure 10 the jet rates as a function of the jet p_T^max used to define them. Here we do not distinguish between b-jets and light jets. We find the jet rates for p_T^max = 20 GeV to be in agreement with those published in [24].

A wealth of information can be derived from examining the contribution of the different initial state channels to differential distributions. The six initial state channels that contribute to our NNLO calculation have singularities in various collinear regions that are canceled against the collinear counter terms from mass factorization. In order to make the cross section per channel finite one has to use collinear counter terms that include Γ^(m)_ij kernels involving only the initial state partons of the channel considered. Since we calculate the collinear counter terms numerically, this modification was relatively easy to achieve. Initial state channel contributions to differential distributions have a strong dependence on the factorization scale, as do initial state channel contributions to the inclusive cross section.
In figure 11 we see the contributions to the Higgs boson p_T distribution from each channel, for various factorization scales ranging from m_H/16 to 2m_H. Within the 5FS, the factorization scale regularizes the collinear singularities which in the 4FS are regularized by the bottom mass. At NNLO, three initial state channels, bb, bg and gg, share common collinear configurations whose leading logarithms cancel each other in different bins of the Higgs p_T distribution. In the zero p_T bin, in particular, squared logarithms from the double collinear limit of the gg channel cancel against the single collinear limit of the bg channel and the Born contribution of the bb channel. Moreover, at NNLO one also sees sub-leading (single) logarithms canceling each other between the single collinear configurations of the gg channel and the regular contributions to the bg channel, a cancellation that appears in non-zero p_T bins as well. The magnitude of those logarithmic cancellations is regulated by the value of the factorization scale. The factorization scale dependence is an artifact of the truncation of the perturbative series, so one would naively choose the scale in a way that minimizes the cross-channel logarithmic cancellations; however, this should be done within the region where the collinear approximation implicit in the 5FS is still reasonable, which is at m_H/4 or lower. Corroborative evidence for such a choice comes from the behavior of the average transverse momentum and of the average rapidity of the Higgs boson as a function of the factorization scale choice, shown in figure 9. These features are also seen in the rapidity distribution of the Higgs boson per initial state channel, shown in figure 12 for various values of µ_F. There it is clearly seen that a scale like µ_F = m_H/4 eliminates the cross-channel cancellations, but a lower scale µ_F = m_H/16 leads to a reduced, bg-dominated prediction.

We turn now to more exclusive observables. In large tan β models, where Higgs boson production gets a significant contribution from the bottom quark annihilation process, one would like to examine differential distributions involving decay products of the Higgs boson, with the cuts necessary in the experimental analyses. We focus here, for demonstration purposes, on the case where the Higgs boson decays to two photons. In such an analysis the minimal cuts used by CMS and ATLAS include:

• A cut on the p_T of the leading photon: p_T;1 > 40 GeV.
• A cut on the p_T of the trailing photon: p_T;2 > 25 GeV.
• An isolation cut on the photons: no jet with p_T > 15 GeV is allowed in a cone of radius 0.4 around either of the two photons.

We treat the Higgs boson in the zero width approximation in this article. We defer a more realistic treatment of the Higgs propagator to future work. Within this setup we show in figure 13 the distribution of the average transverse momentum of the two photons and the distribution of Y* = ½|y_1 − y_2|, half of the absolute difference in pseudo-rapidity between the two photons.
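As an illustration of how the three selection criteria listed above act on a partonic event, the following is a minimal, hypothetical Python sketch. The event representation (simple four-momentum tuples) and the jet list are assumptions for illustration only, not the authors' analysis code.

```python
import math

def pt(p):
    """Transverse momentum of a four-momentum (E, px, py, pz)."""
    _, px, py, _ = p
    return math.hypot(px, py)

def rapidity(p):
    """Rapidity; for massless photons this equals the pseudo-rapidity."""
    E, _, _, pz = p
    return 0.5 * math.log((E + pz) / (E - pz))

def phi(p):
    _, px, py, _ = p
    return math.atan2(py, px)

def delta_r(p1, p2):
    """Distance in the (y, phi) plane."""
    dy = rapidity(p1) - rapidity(p2)
    dphi = abs(phi(p1) - phi(p2))
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(dy, dphi)

def passes_diphoton_cuts(photons, jets):
    """Apply the leading/trailing photon pT cuts and the jet isolation cut.

    photons: the two photon four-momenta; jets: list of jet four-momenta
    (e.g. anti-kT jets with R = 0.4).  Thresholds follow the text:
    40 GeV, 25 GeV, and a 15 GeV jet veto within R < 0.4 of either photon.
    """
    p_lead, p_trail = sorted(photons, key=pt, reverse=True)
    if pt(p_lead) <= 40.0 or pt(p_trail) <= 25.0:
        return False
    for jet in jets:
        if pt(jet) > 15.0 and min(delta_r(jet, p_lead), delta_r(jet, p_trail)) < 0.4:
            return False
    return True
```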
We have presented a variety of differential distributions for Higgs production that can only be obtained with a fully differential calculation and are useful for assessing the quality of the perturbative expansion and the level under which several features are under control at a fully differential level. We have also presented predictions for fully exclusive observables for the bb → H → γγ process in the presence of tight cuts on the final state photons including isolation cuts, demonstrating that our calculation can fully simulate any experimental setup at the partonic level. This is the second application of our approach to treat real emission singular amplitudes at NNLO [70]. It is the first application for the more complicated case of a hadron collider process. We find the approach particularly beneficial, both in terms of automatization and in terms of performance of the resulting numerical code. We find that the improvement in performance compared to the sector decomposition approach is significant. We intend to release the computer code in the near future and we defer for then any detailed comments on performance issues. A study of significantly wider scope, including the production via gluon fusion in models with enhanced bottom Yukawa couplings, as well as the decay of Higgs to bottom quarks or tau leptons would vastly benefit the experimental searches. We defer such a study for a future publication. B Scale separation The renormalization and factorization scales, µ R and µ F , can be conveniently separated by first setting µ = µ F and then applying the following relations
Outcome of health education on HIV/AIDS knowledge, attitude and risky sexual behavior among commercial motorcyclists in Osogbo, Nigeria

Acquired Immune Deficiency Syndrome (AIDS) is a global health problem. AIDS is the fourth largest cause of death globally and the leading cause of death in Africa. 1 Nigeria carries the second heaviest burden of HIV in Africa. AIDS was first reported in Nigeria in 1986, and as of December 2011, 3,459,363 people were living with HIV, of whom an estimated 1,449,166 required ARV; 388,864 new infections occurred in the year ended 2011 and records show 217,148 AIDS-related deaths. 2 In addition, about 2,193,745 orphans are living with HIV/AIDS. 3 A survey among commercial motorcyclists in 2003 found the baseline prevalence of HIV-1 to be 14% in Nigeria. 4 Most commercial motorcycle riders are young adult males aged 20-59 years, who largely represent the economically productive segment of the Nigerian population. 5 This age group is at greatest risk of HIV infection. 5 An in-depth assessment of out-of-school youths identified motorcyclists as a high-risk group for HIV infection. 6 It is important to mention that these individuals often offer free rides in exchange for sexual favors, and their trust of sexual partners was cited as the most pertinent reason for non-use of condoms. They sometimes patronize commercial sex workers, who have a high prevalence (30-40%) of HIV/AIDS. 7 Commercial motorcyclists serve as an interface between this high-risk group and the community. Since 80% of the infection is acquired heterosexually in Africa, changing sexual behavior among this group will have a significant impact on the control of infection. In addition, a great number of these commercial riders are illiterate, which limits their access to correct information on HIV, their understanding of health messages and their consequent use of preventive measures. 5 This low literacy rate and limited access to information also contribute to low utilization of the sexually transmitted infection treatment services within the health facilities. Likewise, this occupational group often operates under the influence of substances such as alcohol, tobacco and marijuana while at work. 8 It has been documented that excessive consumption of these substances is independently associated with sexual behavior involving greater risk of HIV infection. 8 The use of substances makes it psychologically easier to engage in prostitution. Motorcyclists also have access to disposable income at a young age, which is often used to engage in high sexual activity. However, previous studies done in Benin City, South Western Nigeria seem to show suboptimal information on modes of HIV transmission and prevention among this new group of transporters in Nigeria. 8 Sexuality education among this professional group will help to influence important risky sexual behaviors, such as delaying sexual initiation, reducing the number of sexual partners and increasing condom use. It will also help to improve their knowledge of the causation and prevention of HIV/AIDS and STDs, which will assist in reducing the incidence of HIV/AIDS and other STDs among this group within our society, thereby necessitating this study.

Study area

The study was conducted in Osogbo town. Osogbo is presently the capital of Osun state, Nigeria and was carved out from the old Oyo state in 1991.
It is situated on latitude 7.47 North of the equator and longitude 4.33 East of the Greenwich meridian. The city is located at the geographical center of Osun state and is about 48 km from Ife, 32 km from Ilesa, 48 km from Iwo and about 48 km from Ikire. Osogbo has a network of motorable roads which makes it accessible from all the towns mentioned as well as other towns and villages. Commercial motorcyclists were located in 6 motorcycle parks, namely Oja Oba, Isale Osun, Oke Baale, Dada Estate, Okefia and Olaiya garage. The control group was commercial motorcyclists located in Ede. The town is located 20 kilometers away from Osogbo and has a geographical area of about 40 square kilometers and a population of about 490,000, with an annual rainfall of about 55-57 inches. The majority are Muslim, but some are Christians while others practice traditional religion. The town is similar in many socio-demographic and cultural aspects to Osogbo town. Commercial motorcyclists were located at Akala Park, Orita Oloki Park, Olukolo Park and Sekona Park.

Study population

The study population comprises all registered commercial motorcyclists across 12 terminals in Osogbo and 10 terminals in Ede Township.

Study design

A cross-sectional descriptive survey with an intervention stage (a health education component) given to the study group.

Sample size estimation

The Corlien method 9 (for comparative studies) was used to calculate the minimum sample size. However, 150 commercial motorcyclists were deliberately recruited for the study group and an equal number of commercial motorcyclists were recruited for the control group.

Sampling technique

A multi-stage sampling technique was used to sample commercial motorcyclists registered with the central union, after consent was obtained. At the time of the study there were 2,100 and 1,595 registered members distributed among the 22 commercial motorcyclists' terminals that constituted the study population (12 terminals in Osogbo and 10 terminals in Ede). The list of the terminals formed the primary sampling unit, while the registered motorcyclists formed the secondary sampling unit. Based on this, 6 terminals were chosen at random for the study group, while 5 terminals were chosen at random for the control group. The list of registered motorcyclists in their respective terminals was used as the sampling frame to select the sample size. Respondents were selected for both the study and control groups using a sampling interval. In each of the selected terminals, structured, closed-ended questionnaires were used to collect information on socio-demographic characteristics, knowledge, perception and risky sexual behavior regarding HIV/AIDS. Interviewers were trained before administering the questionnaire. The questionnaire was pre-tested among commercial motorcyclists in Ilesa, a town similar to Osogbo. This was done to ensure the validity and reliability of the instrument. The pre-tested questionnaire was analyzed and necessary modifications effected. The questionnaires were both interviewer- and self-administered: self-administration was used for those who could read and write, while interviewer administration, using six recruited and trained research assistants (final-year medical students who had just completed their final posting in community medicine), was used for those who could neither read nor write. Before each motorcyclist completed the questionnaire, the purpose of the study was explained collectively and individually.
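The interval-based selection can be made concrete with a short sketch; the random start, the rounding rule and the register names below are illustrative assumptions rather than details reported by the study.

import random

def systematic_sample(frame, n):
    """Select n respondents from a sampling frame using a fixed sampling interval."""
    k = max(1, len(frame) // n)          # sampling interval
    start = random.randrange(k)          # random start within the first interval
    return [frame[i] for i in range(start, len(frame), k)][:n]

# e.g. selecting 150 riders from a register of 2,100 names (interval k = 14)
riders = ["rider_%04d" % i for i in range(2100)]
selected = systematic_sample(riders, 150)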
To ensure confidentiality, no names were recorded. Permission was also sought and obtained from the chairmen of the association of commercial motorcyclists at all the selected terminals. Baseline information was collected from both study and control groups in the first two weeks and last two weeks of November 2007 on 300 commercial motorcyclists, 150 in Osogbo (study group) and 150 in Ede (control group). Based on the baseline findings, areas of misconception, knowledge gaps, attitudes and risky sexual practices were noted and used in constructing the health education messages. Six weekly health education sessions were implemented among commercial motorcyclists in Osogbo by the main investigator (a community physician), assisted by 6 recruited and trained research assistants in conjunction with a health education specialist, at Osogbo town hall. Topics to be discussed were assigned beforehand to each person to allow adequate preparation, and the same topics were handled by the same individuals during all the sessions to ensure uniformity of messages. Materials used for the health education sessions included pamphlets (previously designed and tested by the United Nations Fund for Population Activities and relevant to HIV education among at-risk groups in the society), relevant posters (obtained from the Osun state ministry of health and UNFPA, which were pasted on the walls) and PowerPoint projections with relevant pictures and diagrams (referred to for illustrations). The time schedule for health education was drawn up in collaboration with the chairmen of the commercial motorcyclist association in the respective terminals and the selected members. A day of the week from Monday to Saturday was assigned for training each group. Participants were grouped into 4 sets based on the number selected from each terminal, and six days in a week were used in training the sets. Each training session lasted about 30 minutes and was conducted in the morning between 10:30 am and 11:00 am. Each set had six health education sessions. This made the health education activities convenient for the participants and the facilitators and aided better understanding. All 150 invited participants took part in the training. The names of participants, with their respective terminals and phone numbers, were taken for proper follow-up. Evaluation was carried out 6 months later by the researcher, using the same research tool as during the pre-intervention stage, to determine the effect of the health education intervention on indulgence in risky behaviors, knowledge of HIV transmission and prevention, and perception of HIV/AIDS. For ethical reasons, the control group also benefited from the health education after collection of the post-intervention data in July 2008, using the same education materials. Feedback on the results of the study was discussed with the chairman of the association. Four commercial motorcyclists (2%) were lost to follow-up; therefore 148 commercial motorcyclists in both study and control groups filled in the questionnaire at post-evaluation. Baseline and end-line data were evaluated using the Statistical Package for Social Sciences (SPSS) version 14.
Frequency distribution tables were generated, and cross-tabulations and test statistics, namely the t-test (for testing the significance of an observed difference between the means of two groups), Pearson's chi-square test (for comparing proportions of events occurring in two or more groups of categorical data) and McNemar's chi-square test (for comparing proportions of paired observations), were applied to detect statistical associations. P values were generated and the significance level was set at less than 0.05 (P < 0.05).

Baseline comparison

At baseline, the control and experimental groups were matched.

Demographic data

There was no statistical difference in the mean age, tribe, religion, educational status or marital status of respondents in the experimental and control groups. For example, the mean age of respondents was 33.1 ± 9.6 years, with a range of 18-56 years, in the study group and 32.3 ± 9.0 years, with a range of 17-57 years, in the control group. The major ethnic group among the respondents in both the study and control groups was Yoruba, with 140 (93.3%) in the study and 141 (94.0%) in the control group. Most of the respondents, 99 (66%) of the study and 114 (76%) of the control group, were Muslims, while 51 (34.0%) of the study group and 36 (24.0%) of the control group were Christians. Also, most of the respondents, 79 (52.7%) of the study group and 167 (55.7%) of the control group, had secondary education. Most respondents, 110 (73.3%) in each of the study and control groups, were married, while 40 (26.7%) in each group were single (Table 1).

Knowledge on HIV symptoms

There was also no significant difference in knowledge on HIV symptoms between the two groups (P>0.05). The majority of respondents in the study and control groups were aware of weight loss as a major symptom of someone infected with HIV/AIDS, as mentioned by 105 (70%) of the study and 126 (84%) of the control group. There was also no statistical difference between the study and control groups in knowledge of other symptoms such as chronic fever, skin rashes, headache, weakness and chronic diarrhea (P>0.05) (Table 2).

Knowledge on preventive measures

Of the 150 respondents in each of the study and control groups, 42 (28%) and 44 (29.3%) were aware of abstinence as a preventive measure, 66 (44%) and 71 (47.3%) of faithfulness to one's partner, 57 (38%) and 71 (47.3%) of condom use, 3 (2%) and 5 (3.3%) of blood screening, and 26 (17.3%) and 40 (26.7%) of sterilizing clippers, in the study and control groups respectively. One respondent (0.7%) in the control group regarded isolation of affected patients as a preventive measure, while 2 (1.3%) of the study group believed that voluntary counseling and testing before marriage is a means of preventing HIV/AIDS. There was no statistical difference in the modes of prevention between the study and control groups (P>0.05) (Table 3).

Attitude towards HIV/AIDS

Of the 150 respondents in each of the study and control groups, 51 (34%) and 53 (35.3%) could not carry an infected male friend on the motorcycle, 37 (24.7%) and 114 (76.0%) could not buy food from an infected food seller, 105 (70%) and 99 (66%) could not eat with an infected person, 84 (56%) and 78 (52%) could not sleep in the same room as an infected person, and 49 (32.7%) and 59 (39.3%) could not continue friendship with an infected partner, in the study and control groups respectively. There was no statistical difference between the groups (P>0.05) (Table 4).
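For the paired pre/post comparisons, McNemar's test operates on the discordant pairs of observations; a minimal sketch with hypothetical counts (the published tables report percentages only, so the 2x2 entries below are illustrative):

from statsmodels.stats.contingency_tables import mcnemar

# Rows: pre-intervention (risky / not risky); columns: post-intervention.
table = [[3, 9],     # risky -> risky, risky -> not risky
         [1, 135]]   # not risky -> risky, not risky -> not risky

result = mcnemar(table, exact=True)   # exact binomial test for small counts
print("statistic =", result.statistic, "p-value =", round(result.pvalue, 4))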
Sexual behavior

There was no statistical difference in the patronage of commercial partners between the study and control groups, as shown in Figure 1 (P=0.5).

Knowledge on HIV symptoms

In the study group, levels of knowledge of HIV symptoms increased after intervention, and the differences were statistically significant (P<0.05).

Knowledge on preventive measures

Post-intervention, there was an improvement in the knowledge of respondents in the study group about the ways in which HIV/AIDS can be prevented; when compared, the differences were found to be statistically significant (P<0.05) (Table 6).

Attitude towards HIV/AIDS

There was a statistically significant improvement in the attitude of respondents in the study group towards people living with HIV/AIDS (P<0.05), while the control group showed no statistically significant change in attitude (P>0.05) (Table 7).

Sexual behavioral practices

Among the respondents in the study group, the patronage of commercial partners reduced from 8% to 2%. This reduction was statistically significant (P<0.05), while in the control group there was an increase from 6.0% to 9.5%, with no significant difference (P>0.05) (Table 8).

DISCUSSION

Commercial motorcyclists are among the groups vulnerable to HIV/AIDS infection in Nigeria because they are male, much younger, abuse substances and have access to disposable income, which they often use to engage in high-risk sexual behavior. However, they are more likely to change their behavior if they know what to do to avoid the disease. Hence the message "education is the only vaccine against AIDS" that was commonly aired during the early years of efforts to control the epidemic, which is also applicable to this group. The respondents in both study and control groups were similar in their socio-demographic characteristics (as evidenced by similarity in age group, religion, ethnicity, education and marital status) as well as in knowledge and attitude on HIV/AIDS and high-risk behaviors before the health education intervention (P>0.05; Table 1). All were males, with more than two thirds in the age range of 25-44 years; this age group represents largely the economically productive segment of Nigerian society, and also the group at greatest risk of HIV/AIDS infection. A similar age bracket was observed among motorcycle riders in the Songkhla urban area, Thailand. 10 Following the educational intervention, knowledge on symptoms and methods of prevention improved, as did attitudes to the disease and to individuals with the disease. Comparison of responses between baseline and end-line in the intervention group also shows remarkable improvement, while in the control group there was no improvement in responses. This improvement is attributed to the health education. Similar improvement was also found by other authors in a comparable study among secondary school students. 10,11 High-risk sexual behavior was recorded in this study. Many of the respondents had multiple sexual partners (24.7% in the study and 14.2% in the control group) in the last 6 months before the onset of the study (Table 8). Similar findings were reported in Thailand, where six out of ten admitted having had a sexual partner other than their regular spouse during their lifetime and one out of five (22.7%) had this practice during the last 6 months, and in Nigeria. 10
A higher percentage was observed in a study in Benin, Nigeria, where 66% of the commercial motorcyclists studied had multiple sexual partners, 5 and in another study in North West Nigeria, where 31% of the commercial motorcyclists interviewed had girlfriends. 12 Post-intervention, the proportion of respondents having multiple sexual partners decreased from 24.7% pre-intervention to 14.2% six months post-intervention (Table 8). The significant difference in sexual behavior in this group post-intervention corroborates earlier studies showing that theory-based behavior change interventions can succeed in achieving better results, even in the area of HIV/AIDS prevention and control. 13 A 2005 Nigerian study proposed an integrated model for addressing HIV/AIDS in Sub-Saharan Africa. 14 Within the social context of Africa, the model was based on the convergence of three theories: social learning, diffusion of innovation, and social networks. Researchers can look into this and devise behavior change interventions that have more effect on the behavior of commercial motorcyclists.

CONCLUSION

The findings in this study showed a high level of general awareness of the existence of HIV/AIDS, but comprehensive knowledge of HIV/AIDS remains low. Many of the respondents have poor attitudes towards people living with HIV/AIDS. High-risk behaviors that can predispose to HIV/AIDS are still predominant among respondents. However, the study confirms that health education is effective in improving HIV/AIDS knowledge among commercial motorcyclists and in appropriately shaping their attitudes and high-risk behaviors. Based on the findings of this study, it is recommended that continuous health education programmes and seminars for these workers be organized by governmental and non-governmental organizations, so as to keep them informed and equip them with skills to propagate the anti-AIDS message.
Heat Transfer Enhancement in Microchannel Flow: Presence of Microparticles in a Fluid

In the present study, a numerical model was developed for laminar flow in a microchannel with a suspension of microsized phase change material (PCM) particles. In the model, the carrier fluid and the particles are simultaneously present, and the mass, momentum, and energy equations are solved for both the fluid and particles. The particles are injected into the fluid at the inlet at a temperature equal to the temperature of the carrier fluid. A constant heat flux is applied at the bottom wall. The temperature distribution and pressure drop in the microchannel flow were predicted for lauric acid microparticles in water with volume fractions ranging from 0 to 8%. The particles show heat transfer enhancement, decreasing the temperature rise in the working fluid by 39% in a 1 mm long channel. Meanwhile, particle blockage in the flow passage was found to have a negligible effect on pressure drop in the range of volume fractions studied. This work is a first step towards providing insight into increasing heat transfer rates with phase-change-based microparticles for applications in microchannel cooling and solar thermal systems.

INTRODUCTION

Heat transfer enhancement is important for a variety of applications, including microchannel cooling and solar thermal energy conversion, among others. A promising method to enhance heat transfer rates is to introduce a fluid with particles that undergo phase change in a flow system, such as a microchannel flow. In this case, the momentum and energy exchange between the fluid and particles determines the heat transfer enhancement. Therefore, an investigation to quantitatively determine the heat transfer increase in a microchannel flow with particles becomes essential. Significant efforts have focused on using PCMs for heat transfer enhancement in the past decade [1-7]. In particular, numerical studies have used different modeling frameworks to examine heat transfer performance. Kondle numerically studied the heat transfer characteristics of PCMs in a laminar flow for circular and rectangular microchannels [1]. The carrier fluid and particles were modeled as a bulk fluid and a specific heat model was used for the phase change of particles. The Nusselt number was found to be higher for the constant axial heat flux with constant peripheral temperature boundary condition
compared to that corresponding to the constant heat flux with variable peripheral temperature. A similar specific heat model was used to model the phase change of particles by Hu and Zhang [2] for the flow of PCM slurries in a circular tube with constant heat flux. Hao and Tao evaluated the performance of liquid flow with PCM particles in circular microchannels [3]. The conservation equations for the particle and liquid phases were solved separately while considering the effects of particle-particle interaction and the particle depletion boundary near the wall. A particular Reynolds number and wall heat flux were found to achieve maximum heat transfer enhancement with PCM particles. Al-Hallaj performed a three-dimensional numerical study on the performance of microchannel heat sinks using micro-encapsulated PCMs, considering the thermal resistance of the heat sink walls and taking temperature-dependent physical properties for the PCM slurry [4]. In addition to the modeling efforts, significant experimental work has also been performed to examine heat transfer enhancement using PCM particles. Wang studied heat transfer in a microencapsulated PCM suspension for laminar flow through a circular tube under a constant heat flux, where a new expression for the Stefan number was developed [5]. Niu [6] studied micro-encapsulated slurries in a horizontal circular tube, varying the mass fraction of the slurries from 5% to 27.6%. A new correlation for heat transfer coefficients for laminar slurry flow in a horizontal circular tube was developed. Yamagishi [7] studied heat transfer enhancement using a micro-encapsulated PCM slurry in a circular tube with uniform heat flux for both laminar and turbulent flows. As the particle volume fraction was increased, the flow regime changed from turbulent to laminar. While significant studies have been performed in the past, a model is needed that provides insight into how various parameters affect heat transfer performance and that will also guide future experiments. We initiate these efforts by developing a numerical model using an Euler-Lagrangian approach to investigate the effect of increasing PCM particle volume fraction in a microchannel. We use water as the carrier fluid and lauric acid as the PCM particles, with volume concentrations ranging from 0 to 8%. The thermophysical properties of both the carrier fluid and the PCM were assumed to be constant during the simulations and are given in Table 1. The model currently neglects particle-particle interaction effects.
MODEL FRAMEWORK

A schematic of the microchannel for the model is shown in Fig. 1. The flow is assumed to be steady and incompressible with constant properties. The equations governing laminar flow of the carrier fluid are the continuity equation, ∇·u = 0, the momentum equation, ρ(u·∇)u = −∇p + µ∇²u + S_m, and the energy equation, ρc(u·∇)T = k∇²T + S_e, where S_m and S_e are the momentum and energy exchange terms with the particles described below.

The particles are injected uniformly at the inlet of the microchannel. The trajectory of the particles in the flow domain is obtained by integrating the force balance on the particle. The forces acting on the particle are the drag force and the virtual mass force [9]. The force balance equating the inertia of the particle to the forces in the x-direction (in Cartesian coordinates) is

du_p/dt = F_D (u − u_p) + F_x,

where F_D is the drag force per unit particle mass and F_x is the virtual mass force per unit mass acting on the particle, given respectively as

F_D = 18µ / (ρ_p d_p² C_c),   F_x = (1/2)(ρ/ρ_p) d(u − u_p)/dt,

with the Cunningham correction factor

C_c = 1 + (2λ/d_p) [1.257 + 0.4 exp(−1.1 d_p/(2λ))],

where λ is the molecular mean free path. The Cunningham correction factor C_c is taken as unity for continuum flows. The momentum transfer from the fluid to the particles is obtained by computing the change in momentum of each particle as it passes through each control volume, S_m = Σ ṁ_p Δu_p, and appears as a sink/source in the momentum equation for the carrier fluid. The heat transfer from the carrier fluid to the particles is obtained by computing the change in thermal energy of a particle as it passes through each control volume, S_e = Σ ṁ_p c_p ΔT_p, and appears as a source or sink term in the energy equation for the carrier fluid.

In order to account for the phase change of the particles, a specific heat model is used [1]. In the specific heat model, the phase change of the particles is modeled by varying the specific heat capacity of the particles between the solidus and liquidus temperatures. The melting temperature of lauric acid is 317.2 K. In the present study, the melting range of the PCM particles is assumed to be 317-320 K: the specific heat capacity takes its solid value below 317 K, its liquid value above 320 K, and is augmented by the latent heat spread over the melting range, L/(T_liquidus − T_solidus), in between.

The flow within the microchannel is assumed to be hydrodynamically fully developed. In order to obtain the developed flow profile at the inlet, repeated simulations are performed without particles and heat transfer. The fully developed velocity profile is extracted at the outlet and then subsequently applied as the inlet boundary condition. An average velocity of 0.1 m/s for both the carrier fluid and the particles is prescribed at the inlet (Re = 7.65). Thus, the mass flow rate of the particles is calculated based on the particle volume fraction and the inlet average velocity. The temperature of the particles and carrier fluid is 315 K at the inlet, which is initially less than the melting temperature of the particles. A constant heat flux of 120 W/cm² is applied at the bottom wall of the microchannel.
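A short numerical sketch of two of these ingredients, the Cunningham-corrected Stokes drag coefficient and the effective specific heat across the melting range, follows; the property values for water and lauric acid are assumptions for illustration and are not taken from Table 1.

import math

def cunningham(d_p, mfp=68e-9):
    """Cunningham slip correction C_c; mfp is the molecular mean free path (m).
    For the 1 um particles considered here C_c is close to unity (continuum flow)."""
    kn2 = 2.0 * mfp / d_p                      # 2*lambda/d_p
    return 1.0 + kn2 * (1.257 + 0.4 * math.exp(-1.1 / kn2))

def drag_coefficient(mu_f, rho_p, d_p):
    """Stokes drag per unit particle mass, F_D = 18*mu/(rho_p*d_p^2*C_c), in 1/s."""
    return 18.0 * mu_f / (rho_p * d_p**2 * cunningham(d_p))

def cp_effective(T, cp_solid, cp_liquid, latent_heat, T_s=317.0, T_l=320.0):
    """Effective specific heat: latent heat spread over the melting range [T_s, T_l]."""
    if T < T_s:
        return cp_solid
    if T > T_l:
        return cp_liquid
    return 0.5 * (cp_solid + cp_liquid) + latent_heat / (T_l - T_s)

# Illustrative (assumed) values: water at ~315 K around 1 um lauric acid particles.
F_D = drag_coefficient(mu_f=6.0e-4, rho_p=940.0, d_p=1.0e-6)
print("F_D = %.3e 1/s, particle relaxation time ~ %.2e s" % (F_D, 1.0 / F_D))
print("cp_eff(318 K) =", cp_effective(318.0, 2200.0, 2400.0, 1.87e5), "J/(kg K)")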
At the outlet, an outflow boundary condition is considered. The outflow boundary condition assumes zero diffusion fluxes in the direction normal to the exit plane for all flow variables (i.e., the velocity and temperature) except pressure. The outflow boundary conditions incorporated in FLUENT are used to model flow exits where the details of the flow velocity and pressure are not known prior to the solution of the flow problem [9]. In this case, the zero diffusion flux condition applied at outflow cells means that the conditions at the outflow plane are extrapolated from within the domain and have no impact on the upstream flow [9]. The zero diffusion flux condition can be used when the velocity and temperature profiles are fully developed. In this work, the temperature profiles become fully developed at approximately x = 0.8 mm. While the outflow boundary condition does not appear to have a significant effect on the upstream temperature profiles, a convective boundary condition would be most appropriate for the temperature at the exit of the channel when the total channel length considered in this study is taken into account.

NUMERICAL SOLUTION

A square grid with an aspect ratio of 1 is defined for the flow domain. The control volume approach is used in the numerical scheme. All variables are computed at each grid point except the velocities, which are determined midway between the grid points. A staggered grid arrangement is used in the present study, which links the pressure through the continuity equation; the resulting scheme is known as the SIMPLE algorithm [8]. This iterative process is repeated until convergence. The pressure relationship between continuity and momentum is established by transforming the continuity equation into a Poisson equation for pressure. The convergence criterion for the scaled residuals is set to 10^-6. We compare the results of 2D and 3D simulations with a microchannel width of 2 mm in Table 2. The results show a notable difference between the normalized values of pressure and temperature for the 2D and 3D cases. These results motivate performing the simulations in 3D despite the significantly longer computation times.

RESULTS AND DISCUSSION

Figure 2 shows the pressure drop along the mid-line (Fig. 1) for 0-8% volume concentrations of PCM particles. First, a fully developed flow profile was obtained by iteration, where the flow outlet solution was input into the flow cell five times with no heating. Once the flow profile became fully developed, the heating was initiated. The pressure drop along the length of the microchannel is a result of frictional losses from the wall shear stress and the drag forces due to the interactions between the particles and the carrier fluid in the flow field. The particles in the working fluid influence the momentum as a result of the drag force and local acceleration. However, in the modeled case, the size of the particles is small (1 µm) and the channel length is short, such that the drag and particle acceleration have a negligible effect on the pressure drop, as shown in Fig. 2. Figures 3 and 4 show temperature contours along the channel length, while Figs. 5 and 6 show temperature contours across the channel cross-section, for the cases without and with the presence of PCM particles in the carrier fluid, respectively. The temperature increases towards the channel exit due to heating from the channel bottom wall.
Although friction in the shear layer contributes to the temperature rise, the temperature increase in the channel is primarily a result of the external heating of the working fluid; the major contributor to the temperature increase is the external heat input to the channel. Moreover, increasing the concentration of PCM particles decreases the temperature in the channel towards the channel exit, which is attributed to the latent heat associated with the phase change of the PCM particles. This effect is more significant at higher concentrations of PCM particles, as shown in Fig. 7. These results indicate that improvements in the thermal storage capacity of the working fluid are possible by increasing the PCM particle concentration, with minimal pumping power losses.

A microchannel of constant height (50 µm, H), length (1000 µm, A) and width (2000 µm, B) was defined in the FLUENT simulations. The carrier fluid with micron-sized particles of 1 µm diameter enters the microchannel at a temperature just below the melting temperature of the particles. A constant heat flux is applied at the bottom wall, which heats up the carrier fluid and particles. After traversing a certain length of the microchannel, the particles undergo phase change. The phase change of the particles plays an important role in decreasing the bulk temperature change of the suspension as compared to the case with no phase change, and thereby increases the thermal storage capacity of the suspension and the effective heat transfer coefficient.

Figure 1. Geometry and flow direction in the microchannel used in the FLUENT simulations.
Figure 3. Temperature contour along the channel length for 0% volume fraction of particles.
Figure 4. Temperature contour along the channel length for 8% volume fraction of particles with phase change.
Figure 5. Temperature contour across the channel cross-section for 0% volume fraction of particles.
Figure 6. Temperature contour across the channel cross-section for 8% volume fraction of particles with phase change.
Table 1. Thermophysical properties of carrier fluid and PCM particles.

Such a model can provide design guidelines for PCMs in microchannels based on PCM particle volume fractions, different PCM particle and carrier fluid properties, and various microchannel geometries.
Reliability of complete gravitational waveform models for compact binary coalescences

With recent advances in post-Newtonian (PN) theory and numerical relativity (NR) it has become possible to construct inspiral-merger-ringdown waveforms by combining both descriptions into one hybrid signal. While addressing the reliability of such waveforms, previous studies have identified the PN contribution as the dominant source of error, which can be reduced by incorporating longer NR simulations. Here we overcome the two outstanding issues that make it difficult to determine the minimum NR simulation length necessary to produce suitably accurate hybrids: (1) the criterion for a GW search is the mismatch between the true waveform and a set of model waveforms, optimized over all waveforms in the model, but for discrete hybrids this optimization was not yet possible; (2) such calculations typically require that numerical waveforms already exist, whereas we develop an algorithm to estimate hybrid mismatch errors without numerical data. Our procedure relies on combining supposedly equivalent PN models at the highest available order with common data in the NR regime, and their difference serves as a measure of the uncertainty assumed in each waveform. Contrary to some earlier studies, we estimate that ~10 NR orbits before merger should allow for the construction of waveform families that are accurate enough for detection in a broad range of parameters, only excluding highly spinning, unequal-mass systems. Nonspinning systems, even with high mass-ratio (q ≥ 20), are well modeled for astrophysically reasonable component masses. The parameter bias is only of the order of 1% for total mass and symmetric mass-ratio, and less than 0.1 for the dimensionless spin magnitude. We take the view that similar NR waveform lengths will remain the state of the art in the advanced detector era, and begin to assess the limits of the science that can be done with them.

I. INTRODUCTION

A network of gravitational-wave (GW) detectors is preparing to achieve a remarkable scientific goal: the first direct detection of GWs. This will not only test the predictions of Einstein's general theory of relativity, it will also open a new window on the universe, revealing details of the population, composition and formation history of various astrophysical objects [1]. One particularly interesting and promising source of detectable GWs is the inspiral, merger and ringdown of compact objects, such as black holes or neutron stars. An important contribution to the effort of detecting the signature of coalescing compact binaries in the noise-dominated spectrum of a GW interferometer is the accurate modeling of the expected signals. Only with an entire family of these theoretically predicted template signals is it possible to filter large amounts of data taken from the interferometers. In a "matched-filter" search (see, e.g., [2]), these data are convolved with the model signals, and if the agreement exceeds some predefined threshold one claims detection and further exploits the theoretical predictions to estimate physical parameters of the binary system, such as component masses and spins. In the case of a binary black hole (BBH) with comparable masses, at least two different approaches are needed to describe the full motion and radiated GW content of the system.
Post-Newtonian (PN) theory is an asymptotic weak-field approximation that treats black holes as point particles with a relative velocity v that is small with respect to the speed of light c (for details, see, e.g., [3] and references therein). The standard PN formulation is based on expanding the relevant quantities (such as energy and GW flux) in terms of the small parameter v/c. Depending on the details of the expansion, resummation and integration of the resulting differential equations, different waveform models for the early inspiral are known, commonly denoted by TaylorTn (with n = 1, ..., 4) [3-8], TaylorF2 [9-12] and TaylorEt [13,14]. A further inspiral waveform family is obtained by mapping the two-body problem to an effective-one-body (EOB) system with the appropriate potential [5,15-17]. All these analytical approximations break down in the strong-gravity regime, and one has to perform computationally expensive numerical calculations in full general relativity to describe the complete dynamics. Since 2005 [18-20], stable numerical-relativity (NR) simulations of a few orbits plus merger and ringdown to a final Kerr black hole have become a standard tool to consistently predict the last stages of a BBH coalescence [21]. The exploration of the whole parameter space, however, has just begun and, for instance, long simulations of systems with mass-ratios higher than q = m_2/m_1 ∼ 5 are still exceptionally time-consuming. For current overviews of the field see [22-25]. An obvious goal is to combine PN and NR results to produce "complete" waveform models. Such signals contain physical information up to frequencies higher than pure PN templates, which becomes increasingly important as the total mass of the system increases. According to recent studies, binary neutron stars as well as mixed black hole/neutron star binaries can be detected well by point-particle PN templates [26,27], assuming the current and anticipated performance of the Laser Interferometer Gravitational-wave Observatory (LIGO) [28]. We therefore focus on complete waveform models for BBH coalescences in this paper. By including systems with small total masses in our analysis, however, we effectively consider the detection problem for a broad range of possible compact binary systems, although the extraction of all physical effects requires further modeling in the neutron star case.
All these procedures are subject to ambiguities and errors that limit the applicability of the final waveforms. Here we will focus on the error due to the PN contributions to the hybrids only. Previous work has shown that the uncertainties in the NR waveforms and in the hybridization procedure make a negligible contribution to the overall hybrid waveform error budget [41][42][43], and as such we will estimate modeling errors on the basis of the dominating inspiral part of the waveform. In the absence of a well-defined notion of the PN error, however, we have to account for it simply by considering two different PN descriptions of the inspiral signal, which are equivalent to all known orders of their Taylor expansion. We then quantify the effect of this ambiguous part of the waveform by calculating an appropriately defined inner product ("match") between both choices. This data analysis-motivated measure leads directly to conclusions about how useful hybrid waveforms are in the presence of an ambiguous PN part, or conversely, what requirements have to be posed in order to model waveforms accurately enough. As an important application of this procedure we started in a previous paper [42] addressing the question of how long numerical waveforms have to be in order to fulfill the accuracy requirements for a PN/NR hybridization. Our analysis of nonspinning binaries with mass-ratio q ∈ [1,4] and equalmass binaries with spins (anti-)aligned to the orbital angular momentum (with χ i = S i /m 2 2 ≤ 0.5) lead to the conclusion that NR simulations of such systems should cover 5 to 10 orbits to be used in hybrids that satisfy the minimal accuracy requirement for signal detection. For larger mass-ratios and larger spins, our results suggested that far longer numerical waveforms were required. However, that study was limited due to the following restriction: The efficacy of a model in a search is determined by the best match between the true waveform and any waveform in the search model. This best match (called the "fitting factor") should be calculated not only by comparing two candidates but by maximizing the match over all of the physical parameters of the model. With access to hybrids from discrete points in the parameter space, we were only able to maximize the match over the total mass of the binary, and so our results were a (possibly very) conservative estimate of waveform length requirements. Even stronger requirements were presented in a number of other studies, where no maximization was performed at all (except over the initial phase and time-of-arrival of the signal), with the intention of determining the waveform length requirements not just for detection, but also for parameter estimation. With these more stringent requirements, MacDonald et al. [43] as well as Boyle [44], concluded that NR waveforms generally have to be much longer than currently possible to produce hybrids sufficiently accurate for both detection and parameter estimation. In addition, Damour et al. [45] presented a detailed comparison of phenomenological waveform models [40,41] and a recent member of the EOBNR family [33]. As part of their approach they find that in particular systems with higher mass-ratio (q 10) can be combined accurately with a standard PN approximant in the frequency domain only if the NR waveform contains hundreds of orbits. In this paper, we study hybrid accuracy and NR waveform length requirements in the context of fully optimized mismatches, i.e., fitting factors. 
Put differently, instead of quantifying the reliability of a single waveform with fixed physical parameters, we ask how accurate the induced waveform family is at that point in the parameter space. To do this, we first simplify the nonoptimized match calculation, showing that it can be performed without full numerical waveforms. We then generalize our procedure to optimize the match with respect to physical parameters, and thereby calculate the fitting factor that is necessary to make estimates of NR waveform length requirements that are meaningful for GW searches. By looking at the parameter bias between the best-match waveform and the target signal, we also gain some insight into the parameter estimation errors due to the uncertainties in the waveform modeling process. In the following sections, we will develop this procedure step by step, starting with the stringent assumptions previous results were based on and subsequently relaxing them until we reach the final result. After providing a mathematical definition of the (mis)match as our notion of error (Sec. II), we show in Sec. III that the mismatch between two hybrids is determined by the PN uncertainty and the relative power between the NR and PN parts of the signals. Thus, our accuracy estimate requires only amplitude information in the NR regime, and we derive how this can be incorporated, including the effect of possible time and phase shifts of the entire waveforms. (This is similar to the procedure developed by Boyle in [44], where instead of NR data, EOBNR signals are taken as "ersatz" waveforms. We use a slightly more general approach by incorporating only amplitude information from the phenomenological model [41].) Along the way we compare with previous results in the literature, showing that we fully agree on nonoptimized mismatch errors. When we finally optimize these mismatches with respect to physical parameters in Sec. IV, we find that the corresponding errors for waveform families are much smaller than assumed so far. In particular, based on our estimates (which are mainly limited by the choice of PN families compared to each other), we conclude that NR simulations that cover ∼10 orbits are probably acceptable for most astrophysical applications during the Advanced detector era. This includes nonspinning binaries (for which significant improvements in PN approximants are less likely in the next five years), where we explicitly show that this relatively small number of NR orbits is sufficient up to at least q = 10, and with astrophysically reasonable restrictions even for q = 20 and above. We also adopt the view that, since typical simulations will be of comparable lengths over the next five years, our focus here and in future work should not be on prescribing ideal (and unrealistic) waveform lengths, but on determining the limits of the science that we can do with them.

II. PRELIMINARY CONSIDERATIONS

We shall address the question of the accuracy of BBH hybrid waveforms in the following sense. Let

h = h_+ + i h_×   (1)

be the complex GW strain that combines the plus and cross polarizations of the GW as the real and imaginary part, respectively. It is constructed from its PN description h_PN and the NR part h_NR. We assume that the transition from h_PN to h_NR is enforced at a single frequency,

h̃(f) = h̃_PN(f) for f ≤ f_m,   h̃(f) = h̃_NR(f) for f > f_m,   (2)

where h̃ denotes the Fourier transform of h and f_m is the matching frequency. Such a procedure can be employed in a direct Fourier-domain construction of the hybrid [41], but it is also approximately true for time-domain hybrid constructions.
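A minimal numerical sketch of the Fourier-domain construction (2) follows; the function and variable names are hypothetical, and the constant phase rotation used to align the two pieces at f_m is a simplification of the full time-and-phase alignment described in [41].

import numpy as np

def hybridize(freqs, h_pn, h_nr, f_m):
    """Stitch PN (f <= f_m) and NR (f > f_m) Fourier-domain strains, Eq. (2).

    h_pn, h_nr: complex arrays on the common frequency grid `freqs`.
    Only a constant phase rotation is applied at f_m; a full construction
    would also fit a relative time shift (a phase term linear in f).
    """
    i_m = np.searchsorted(freqs, f_m)
    dphi = np.angle(h_pn[i_m]) - np.angle(h_nr[i_m])   # align phases at f_m
    return np.concatenate([h_pn[:i_m], h_nr[i_m:] * np.exp(1j * dphi)])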
In the time-domain case, the transition is carried out at a time t_m where the instantaneous frequency satisfies ω(t_m) = (d arg h/dt)|_{t_m} = 2π f_m. Then, for (2) to be true, we have to assume that

1. the transition frequency in the Fourier domain is equal to the instantaneous matching frequency calculated in the time domain;
2. the signal at times t < t_m only significantly affects the Fourier domain for f < f_m, and t > t_m correspondingly determines the wave for f > f_m.

These assumptions are not trivial, since the Fourier integral is a "global" transformation. However, it was shown that assuming such stationarity is reasonable in a regime where both PN and NR are valid [41], and time- and frequency-domain construction methods lead to very similar results [42]. The final hybrid waveform is subject to several errors, and we account for these errors here simply through the fact that one could have taken slightly different ingredients h_PN and h_NR for the same physical scenario. These could be different post-Newtonian approximants and numerical data from different codes or different resolutions. Denoting the different waveform models by h_1 and h_2, we calculate the mismatch via the inner product

⟨h_1, h_2⟩ = 4 Re ∫_{f_1}^{f_2} [h̃_1(f) h̃_2*(f) / S_n(f)] df   (3)

as

M(h_1, h_2) = 1 − max_{t_0, φ_0} ⟨h_1, h_2 e^{i(2π f t_0 + φ_0)}⟩ / (‖h_1‖ ‖h_2‖),   (4)

where φ_0 and t_0 are relative phase and time shifts between the waveforms and ‖h‖² = ⟨h, h⟩. S_n is the noise spectral density of the assumed detector, * indicates complex conjugation and (f_1, f_2) is a suitable integration range. The maximized quantity O = 1 − M is called the overlap (or match) of the two waveforms. Throughout this paper, we will follow the choices of our preceding work [42], i.e., f_1 = 20 Hz and S_n given by the analytic fit of the design sensitivity of Advanced LIGO [39]. The upper integration bound f_2 is given by our waveform model, and we use f_2 = 0.15/M, although the results do not depend sensitively on this value (M is the total mass of the binary). Broadly speaking, the mismatch indicates how "close" h_1 and h_2 are. Smaller values of M represent smaller errors in the waveform model, given that h_1 and h_2 are approximations of the same signal. Direct conclusions can be drawn from calculating the mismatch: if M is less than some threshold, we regard the final hybrid as accurate enough for the purpose in question. For a maximum loss of 10% of the signals in the detection process, we can accept a mismatch of ≈3%, disregarding the addition from a discrete template spacing. If we account for the latter, one may decrease the accepted mismatch in the waveform modeling to 1.5% (see a similar discussion in [42]) or even 0.5% as suggested in [46]. A generally more stringent requirement is that the uncertainty we have in the modeling be indistinguishable by the detector. Such a statement obviously depends on how "loud" the signal is in the detector. As discussed in [46] and further detailed in [45,47], we can write the indistinguishability criterion as

‖h_1 − h_2‖ < ε,   (5)

where the waveforms are optimally aligned in the sense of (4) and ε parametrizes the effective noise increase due to model uncertainties. The minimal requirement for h_1 and h_2 to be indistinguishable is ε = 1, although [45] argues that ε ∼ 1/2 and probably less are more reasonable thresholds. Manipulating (5) under the assumption of equal norms leads to the equivalent inequality (see the calculation in [48])

M < 1 / (2 ρ_eff²),   (6)

where ρ_eff = ‖h‖/ε is the effective signal-to-noise ratio (SNR) of the signal. When we later calculate M as a measure of the error in hybrid waveforms, we can set various thresholds based on M < M_max or Eq. (6) to evaluate the reliability of current models.
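The calculation behind the step from (5) to (6), given in [48], is short; as a sketch, under the equal-norm assumption ‖h_1‖ = ‖h_2‖ = ‖h‖ and with optimally aligned waveforms,

‖h_1 − h_2‖² = ‖h_1‖² + ‖h_2‖² − 2⟨h_1, h_2⟩ = 2‖h‖² (1 − O) = 2‖h‖² M < ε²,

which immediately gives M < ε²/(2‖h‖²) = 1/(2ρ_eff²).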
A potentially very useful application is then to determine which matching frequency is needed (i.e., how long the numerical waveforms have to be) to ensure the desired accuracy.

III. HYBRID MISMATCHES

Having introduced the mismatch between supposedly equivalent waveform models as our notion of error, we shall devote this section to simplifying the mismatch calculation for two hybrid waveforms, optimized only with respect to a relative time and phase shift. As we have pointed out in the introduction, this is not the complete procedure for assessing the model accuracy in terms of signal detection, because the optimization with respect to physical parameters is not considered. However, we first need to develop some insight into this simpler procedure in order to eventually generalize it in the next section. The results presented in Sec. III C therefore have a mainly illustrative character, showing that our simplified approach fully agrees with previously published results, but that it leads to overly conservative requirements, e.g., for the length of NR waveforms.

A. General procedure

Before we calculate mismatches for many different scenarios, we establish a few more assumptions to gain some insight into the structure of Eq. (4). These will allow us to propose an approximation to the mismatch between two hybrid waveforms that can be calculated without the need for any NR data. In addition to (2), we further assume:

3. following [21,41], we regard the error on the NR side as small, negligible compared to the uncertainties PN introduces up to currently practical matching frequencies;
4. independent of the PN approximant that is used, the norms of the waveforms are to high accuracy the same (i.e., only the phase is affected).

The latter is reasonable to take as a good approximation because the amplitude description in PN is usually formulated as a function of the orbital frequency [49-51] (which we again identify with the content on the Fourier side as well) and the mismatch is much more sensitive to phase differences than to amplitude discrepancies. Let us now consider a BBH system with fixed physical parameters. Our error measurement assumes the construction of two hybrid waveforms that differ in the PN part only. Their overlap reads

O(h_1, h_2) = max_{t_0, φ_0} (4 / (‖h_1‖ ‖h_2‖)) Re ∫_{f_1}^{f_2} [A_1(f) A_2(f) / S_n(f)] e^{i(φ_1(f) − φ_2(f))} e^{−i(2π f t_0 + φ_0)} df,   (7)

where A_i = |h̃_i| and φ_i = arg h̃_i. The effect of a time and phase shift of one waveform with respect to the other is explicitly written out in the second exponential term. Assuming two PN models (PN1 and PN2) combined with the same NR waveform, we trivially obtain the phase difference

φ_1(f) − φ_2(f) = φ_PN1(f) − φ_PN2(f) for f ≤ f_m,   φ_1(f) − φ_2(f) = 0 for f > f_m.   (8)

Note that (8) is only true for one particular alignment of the two waveforms; any other relative shift in time or phase leads to an additional dephasing, also beyond f_m. Since we have separated this effect explicitly in (7), we are, however, free to write φ_1 − φ_2 as in (8). The open question is the functional form of the PN phase difference (or simply the PN phase error) in the case where the NR parts of h_1 and h_2 are perfectly aligned. Here we have to apply an actual matching procedure, although we can use any preferred method without having NR data at hand. The key property of (8) that we are exploiting is that only PN-PN differences are taken into account, and a direct PN-NR comparison is not necessary. The only input we need from NR simulations is the amplitude |h̃| = A_1 = A_2 for f > f_m. A good estimate for that can be taken from phenomenological models, such as [40] or [41], where the Fourier-domain amplitude is approximated by a closed-form analytic description.
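The maximization in (7) over t_0 and φ_0 can be performed for all time shifts at once with a single inverse FFT, as the next paragraph describes; a minimal sketch (the array names and the uniform-grid assumption are ours):

import numpy as np

def overlap(h1, h2, psd):
    """Overlap of Eq. (7), maximized over relative time and phase shifts.

    h1, h2: complex Fourier-domain strains on a uniform frequency grid;
    psd: S_n(f) on the same grid. The inverse FFT evaluates the integral
    for every discrete time shift t_0 at once; taking the magnitude
    maximizes over the phase shift phi_0 analytically.
    """
    integrand = h1 * np.conj(h2) / psd
    corr = np.fft.ifft(integrand) * len(integrand)   # one value per time shift
    norm1 = np.sqrt(np.sum(np.abs(h1) ** 2 / psd))
    norm2 = np.sqrt(np.sum(np.abs(h2) ** 2 / psd))
    return np.abs(corr).max() / (norm1 * norm2)

def mismatch(h1, h2, psd):
    return 1.0 - overlap(h1, h2, psd)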
A similar approach was recently suggested by Boyle [44], who realized that it is sufficient to combine PN approximants with ersatz NR data, which he takes from the EOBNR model [15,16,30,34]. We independently derive an algorithm here that is based on the same perceptions but highlights that no NR phase information at all is needed. The maximization over the global time and phase shift in (7) reduces simply to a (phase-shifted) inverse Fourier transform of the remaining integrand; its maximal real part is obtained by choosing φ_0 (for any t_0) such that the generally complex number lies on the real axis. Based on that, our final algorithm for estimating hybrid mismatch errors caused by the uncertainty in the PN model is the following:

1. Calculate the two different PN waveforms expressing the uncertainty to be quantified.
2. Attach a common amplitude for f ≥ f_m, e.g., from [40,41] or from a short NR simulation. Set the phase in this regime to 0 (or any other function, but equal for both h̃_1 and h̃_2).
3. Calculate the overlap of h̃_1 and h̃_2 by maximizing the magnitude of the inverse Fourier transformation.

To test the efficacy of our approach, we compare our estimate with the mismatch of actual hybrids consisting of either the TaylorT1 or TaylorT4 approximant (in the form detailed in [41]; see also Sec. III C) and the numerical data from the SpEC equal-mass run [52,53]. The matching frequencies are chosen as Mω_m = 2πM f_m ∈ {0.04, 0.06, 0.08}, and the stitching procedure is carried out in the Fourier domain as explained in [41]. The agreement illustrated in Fig. 1 is excellent in all cases. As expected from the relatively small effect of the amplitude on the mismatch calculation, our method proves to be fairly robust with respect to the chosen amplitude description in the NR regime. In fact, the dashed lines in Fig. 1 use the phenomenological model detailed by Santamaría et al. in [41], but there is no noticeable difference when we use the model presented by Ajith et al. in [40].

B. Mismatch contributions

The method presented above can readily be applied to estimate the uncertainty of hybrids, with the caveats mentioned at the beginning of Sec. III, and we shall do so in Sec. III C. For now, however, let us manipulate the mismatch (4) further to separate the various contributions to it. We make this important aside to point out that, although only the PN contribution is considered ambiguous in our approach, its influence on the final waveform error is twofold: directly through the (power-weighted) PN mismatch and in terms of an additional dephasing, also of the "exact" high-frequency part. We can see these two effects separately through the following instructive lower bound on M, which is obtained under the assumptions detailed above:

M(h_1, h_2) = 1 − max_{t_0, φ_0} [⟨h_1, h_2 e^{i(2π f t_0 + φ_0)}⟩_{(f_1, f_m)} + ⟨h_1, h_2 e^{i(2π f t_0 + φ_0)}⟩_{(f_m, f_2)}] / ‖h‖²
            ≥ 1 − [O_PN ‖h‖²_{(f_1, f_m)} + ‖h‖²_{(f_m, f_2)}] / ‖h‖²
            = M_PN ‖h‖²_{(f_1, f_m)} / ‖h‖².   (9)

Here we introduced the notation ‖h‖²_{(a,b)} to specify the integration range. M_PN is the mismatch of the PN part only, restricted to f < f_m. In the first line of (9) we use the fact that the amplitudes agree (in fact, we do not require pointwise agreement; only the norm is assumed to be the same) and that h_1 = h_2 for f > f_m. The second line is a lower estimate because the maximization was originally carried out by shifting the entire waveforms relative to each other, whereas now we allow the maximization over the PN part alone. The final step involves the obvious relation ‖h‖²_{(f_1, f_m)} + ‖h‖²_{(f_m, f_2)} = ‖h‖².
The interpretation of (9) is straightforward: the mismatch of hybrids is determined by the uncertainty of PN [restricted to the frequency range ( f 1 , f m )] multiplied by the fraction of power that is coming from the PN part of the wave signal. This fundamental error, independent of the actual PN/NR fitting, is directly inherited from the differences of standard PN approximants and any PN/NR matching cannot be better than the result of (9). Therefore, one might think that analyzing the overlaps or fitting factors (or whatever strategy is appropriate) of different post-Newtonian approximants directly leads to conclusions of how reliable the hybrid is for a particular choice of f m . When we compare, however, the mismatch of actual hybrid waveforms with the estimate (9) we find that the latter is considerably less than M . An illustration of that is included in Fig. 1, where we show the lower bound (9) in comparison with the actual (and accurately estimated) mismatches. Why is the hybrid disagreement that much greater than what is expected from PN in the given frequency range? The reason can be identified from the derivation of (9), where we effectively allow an optimal alignment (for each M) of both PN models while independently keeping the NR part perfectly aligned. In a true hybrid mismatch calculation, one the other hand, a time and/or phase shift always affects the entire PN+NR hybrid, and an optimal alignment of one part leads to a dephasing of the other. This effect is not caused by an erroneous matching, but an illustration of the fact that the optimal choice of t 0 and φ 0 in the sense of Eq. (4) is mass (frequency)dependent for the PN models we consider. Finally, by considering the obvious generalization of (9), we can identify the three main contributions to the hybrid uncertainty: The PN and NR error, each weighted by the power they contribute to the signal and the misalignment caused by the fact that in the hybridization procedure the PN wave is aligned at high frequency which is potentially different from the optimal alignment for lower frequencies. The procedure introduced in Sec. III A automatically takes the dominant PN error and possible misalignments (also of the NR part) into account. C. Application Now that we have established an algorithm to predict the full waveform mismatches, we can exploit the computationally cheap procedure and calculate M for many different physical scenarios. Our aim is to show how "reliable" the final combination of PN and NR waveforms is in different points of the parameter space, assuming that the physical parameters are fixed from the outset. First, let us highlight again that ideally, we are interested in the mismatch of the approximate waveform model to the true one. Since we cannot calculate the latter (which would also make the whole discussion pointless), we estimate the PN uncertainty by calculating the mismatch between different approximants. This can certainly be no more than a rough estimate since we are not aware of any principle that would guide us to which approximants at which PN order should be compared in order to obtain a well-defined notion of the PN error. To still reach some understanding of the uncertainty in currently used high-order PN models we present the anticipated hybrid mismatches when approximants commonly denoted by TaylorT1, TaylorT4 and TaylorF2 are used. 
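As a minimal sketch of this computationally cheap procedure (the three steps of Sec. III A), one can combine two PN phases with a common amplitude model and reuse the `mismatch` helper sketched earlier; the simple phase alignment at the matching frequency below stands in for the Fourier-domain stitching of [41], and the uniform frequency grid is an assumption of the sketch.

```python
import numpy as np

def hybrid_mismatch_estimate(f, phi_pn1, phi_pn2, amp, psd, f_match):
    """Estimate the mismatch of two hybrids that share the same NR data but use
    different PN approximants below f_match.  Only the PN phase difference below
    f_match enters; above it both phases are set to a common value (zero), so no
    NR phase information is required, only an amplitude model `amp`.
    Assumes a uniform grid f with f[0] <= f_match <= f[-1]."""
    df = f[1] - f[0]
    k = np.searchsorted(f, f_match)
    below = f < f_match
    # Steps 1 and 2: keep each PN phase below f_match, shifted so that it joins
    # the common (zero) phase continuously at f_match; the residual global time
    # and phase shift is absorbed by the maximization inside `mismatch`.
    phi1 = np.where(below, phi_pn1 - phi_pn1[k], 0.0)
    phi2 = np.where(below, phi_pn2 - phi_pn2[k], 0.0)
    # Step 3: overlap via the |inverse FFT| maximization of the earlier sketch.
    return mismatch(amp, phi1, phi2, psd, df)
```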
TaylorT1 and T4 are solutions of ordinary differential equations in the time domain describing the adiabatic inspiral of the BBH on quasicircular orbits, whereas TaylorF2 is a frequency-domain description based on the stationary phase approximation. Details on these approximants can be found, e.g., in [7,8] and references therein. We mainly employ the equations presented in [41], but with an updated 2PN spin-spin contribution from [51], see [54] for a collection of explicit expressions. Throughout this paper, we always employ the highest currently determined PN order, i.e., 3.5PN accurate phasing with spin contributions up to 2.5PN (and incomplete terms at higher order) and the 3PN amplitude expansion [50] including up to 2PN spinning corrections [51]. As in the construction of phenomenological models, we restrict the parameter space to black holes with comparable masses and spins aligned or antialigned with the orbital angular momentum of the binary L L L (with its unit vector denoted bŷ L L L). Then, each spin can be parameterized by just one dimensionless quantity, where m i and S S S i are mass and spin of the individual black hole, respectively. By exploiting a degeneracy in the spins, as observed in [55,56], the parameter space can be further reduced, and we only use the mass-weighted total spin and the symmetric mass-ratio to label the different physical setups. (In fact, in the following analyses, each point with fixed χ is represented by To assess how the accuracy of currently feasible hybrid waveforms varies in the parameter space, we apply the algorithm outlined in Sec. III A for different mass-ratios ranging from equal masses to 4:1, with spin magnitudes from −0.9 to 0.9 in each case. For every pair (η, χ) one obtains massdependent mismatches in the form of Fig. 1 that generally increase with increasing matching frequency Mω m . Several plots illustrating this behavior can already be found in the literature. Contour plots of the mismatch as a function of mass and matching frequency are the main result of Boyle [44], and we obtain similar results by continuously varying Mω m , e.g., in Fig. 1. Taking the maximum mismatch with respect to the total mass instead (i.e., only considering the peaks in Fig. 1), Fig. 4 by Damour, Nagar and Trias in [45] shows the inaccuracy of TaylorF2 hybrids compared to EOBNR as a function of the matching frequency. Fig. 11 by MacDonald, Nissanke and Pfeiffer in [43] presents a similar study with Taylor approximants and actual NR data. Given some slightly different choices in our approaches (especially lower cutoff frequency and detector noise curve) the results we obtain are fully consistent with the numbers presented in the articles mentioned. Generally, the conclusions [43][44][45] draw are sobering regarding GW detections and parameter estimation. The mismatches found are too high, current numerical relativity waveforms are by far too short and hybrids are consequently too inaccurate. In the following, we illustrate the basis of these statements and expand the existing knowledge by exploring the parameter space. To reduce the dimensionality of the problem, we calculate the maximum of the mismatch with respect to the total mass and fix Mω m = 0.06 (which corresponds to 10 GW cycles before the maximum of |h(t)| in the equal-mass case). In Fig. 2 we show contour plots that compare either Tay-lorT1 with TaylorT4 hybrids or TaylorT1 with TaylorF2 hybrids. The matching frequency is fixed at Mω m = 0.06. 
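The explicit expressions behind this parameterization are not reproduced above; presumably they are the standard ones, which the short sketch below assumes (component masses in arbitrary common units, spins aligned or antialigned with the orbital angular momentum).

```python
def dimensionless_spin(m, s):
    """chi_i = S_i / m_i^2 for a spin (anti)aligned with the orbital angular momentum."""
    return s / m**2

def symmetric_mass_ratio(m1, m2):
    """eta = m1*m2 / (m1 + m2)^2, equal to 0.25 for equal masses."""
    return m1 * m2 / (m1 + m2)**2

def mass_weighted_spin(m1, m2, chi1, chi2):
    """Mass-weighted total aligned spin, chi = (m1*chi1 + m2*chi2) / (m1 + m2)."""
    return (m1 * chi1 + m2 * chi2) / (m1 + m2)

# Example: a 4:1 binary with both dimensionless spins equal to 0.5
eta = symmetric_mass_ratio(4.0, 1.0)          # 0.16
chi = mass_weighted_spin(4.0, 1.0, 0.5, 0.5)  # 0.5
```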
Certainly, we could include many more variants of PN approximants (including different versions of EOBNR), but we find it sufficient to present some general conclusions that become already clear from the examples chosen here. As reported before [42,45] we see that deviating from equal-mass cases, the disagreement generally becomes larger. This effect is even more pronounced when increasing spin magnitudes are considered. Heuristically we can understand the worse performance for increasing spins by the simple fact that spin contributions are only included up to 2.5PN order, whereas nonspinning terms are known up to relative 3.5PN order. Surprisingly, the 'island' or 'band' of minimal mismatch does not occur strictly around vanishing spin magnitudes, indicating that different approximants can by chance agree extremely well in some portions of the parameter space. For completeness, let us report that the TaylorT4/TaylorF2 mismatch yields a pattern similar to the right panel of Fig. 2 but with minimal values moved to weakly positive spins. The conclusions suggested by Fig. 2 and results from previous work [41,43,44] are indeed disappointing. If the mismatches caused by different PN approximants actually represent a reasonable estimate for the uncertainty in currently practical hybrid waveforms, then values up to M ≈ 50% are certainly unacceptable. Reducing the matching frequency, thereby demanding longer NR waveforms, does reduce the mismatch everywhere, but it leads to unrealistic requirements in many portions of the parameter space. To illustrate this, Table I addresses two important questions by analyzing the TaylorT1/TaylorF2 hybrid mismatches in selected points in the parameter space. First, what is the required matching frequency if a desired accuracy has to be fulfilled? Note that due to our algorithm we overcome the restriction of currently available NR waveform lengths that the authors in [41,43] were facing. We also do not rely on assuming a particularly promising "candidate waveform" to act as a long NR waveform as was done in [42,44]. In fact, phase information above Mω m is not required and does not enter the result; we can simply apply our algorithm to arbitrarily small matching frequencies. For each set of parameters we maximize the mismatch with respect to the total mass M (which we, however, restrict to M ≥ 5M for computational reasons) and thus obtain the monotonically increasing function max M M (Mω m ). By demanding either M < 3% as the most relaxed requirement or the more stringent case of indistinguishable differences for effective SNRs of at most 20 [see (6)] we obtain the values given in Table I. In parentheses we also give the number of gravitational-wave cycles from dφ GW /dt = ω m to the maximum of |h(t)| as predicted by the phenomenological waveform model [41]. It is unlikely that the typical length of "long" numerical waveforms will change by an order of magnitude before the advent of Advanced LIGO, and so a more practical question is: given a currently achievable NR waveform length, in which mass-range is the PN+NR hybrid accurate enough? As an example we assume again a matching frequency of Mω m = 0.06 and show on the right-hand side of Table I the minimal masses the hybrid is accurate for in the sense detailed above. For comparison, the pure NR part occupies the entire frequency band down to 20Hz for masses M ≥ 97M . 
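Two small helpers illustrate the quantities entering Table I. The indistinguishability criterion is written here in the standard form M < 1/(2ρ_eff²), which is an assumption about Eq. (6) but is consistent with the 0.5% (SNR 10) and 0.2% (SNR 16) figures quoted further below, and the Newtonian chirp estimate of the cycle count is only a rough stand-in for the phenomenological-model count used in the paper.

```python
import numpy as np

def indistinguishable_mismatch(snr_eff):
    """Mismatch below which two waveforms cannot be told apart at a given
    effective SNR (assumed form of Eq. (6): M < 1 / (2 * rho_eff^2))."""
    return 1.0 / (2.0 * snr_eff**2)

def gw_cycles_to_merger(m_omega, eta):
    """Leading-order (Newtonian chirp) number of GW cycles accumulated from the
    matching frequency M*omega_m = 2*pi*M*f_m up to coalescence."""
    v = (0.5 * m_omega) ** (1.0 / 3.0)   # PN velocity parameter at the matching point
    return v**-5 / (32.0 * np.pi * eta)

print(indistinguishable_mismatch(20.0))   # 0.00125, i.e. 0.125 %
print(gw_cycles_to_merger(0.06, 0.25))    # ~14, roughly consistent with the ~10 cycles quoted
```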
Note that, distinct from [45], we do not consider errors above Mω m since we are concerned with hybrids and not possibly fitted closed-form waveform models that introduce additional errors. Therefore, our values for M min are less than the corresponding results in [45] that are based on the comparison of EOBNR and the phenomenological model of [40]. The obvious message from Table I is that in general extremely long NR simulations would be needed to overcome the intrinsic uncertainty in standard PN formulations for given physical parameters. For NR waveforms containing so many cycles our assumption that their intrinsic error can be neglected is possibly no longer valid, which would lead to even higher modeling errors. Anyway, the numbers presented are only an "order of magnitude" estimate in this most conservative approach. The reader should always keep in mind that our notion of error is based on comparing different, at highest available order consistent PN descriptions and especially concrete statements for particular points in parameter space may be spoiled by an (un)fortunate choice of approximants (see a similar discussion in [42]). More importantly, as we shall show in the next section, fixing the physical parameters of the waveforms from the outset greatly overestimates the uncertainty for signal detection. A. General The accuracy assessment presented in Sec. III only allows for very limited conclusions about the actual utility of hybrid waveforms in various applications. Apart from the restrictions coming from our limited understanding of the PN error there is also an important fact we have neglected so far: in astrophysically relevant applications the knowledge of physical parameters like total mass, mass ratio and spin is never exact. If a set of hybrid waveforms constitutes a waveform family which is used to extract information from an unknown signal, then the standard matched-filter procedures rely on varying (and maximizing with respect to) such parameters. The accuracy of the predicted "best-fit" parameters is once again limited by the detector noise and the modeling error and even if the latter exceeds the first, one may still argue that a tolerated bias does not significantly reduce the scientific output from GW detections. In this section we shall therefore consider combinations of NR data with a particular PN approximant as the ingredients of an entire manifold of waveforms, parametrized by an absolute time and phase scale (t 0 and φ 0 ) as well as the physical parameters introduced before: M (total mass), η [symmetric mass-ratio (13)] and χ [spin combination (12)]. The efficiency of detecting a signal defined by t 0 , φ 0 , M, η and χ is properly quantified through the fitting factor Note that the maximization with respect to t 0 and φ 0 is already included in the definition of the overlap O, see (7). The accuracy threshold for detection we quoted before is indeed defined including this additional maximization, i.e., in terms of If a waveform family {h 1 } satisfies M FF (h 1 , h 2 ) < M max (with sufficiently small M max ) then it is said to be effectual in the detection of the target signal h 2 [5]. The results in Sec. III are only a lower bound on this effectualness. The accuracy requirements for parameter estimation are naturally more demanding than those for detection. 
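Written out, the fitting factor and the associated mismatch introduced above are presumably of the standard form (a sketch; the explicit definitions (14) and (15) are not reproduced here):

\[
\mathrm{FF}(h_2) \;=\; \max_{M,\,\eta,\,\chi}\; \mathcal{O}\bigl(h_1(M,\eta,\chi),\, h_2\bigr),
\qquad
\mathcal{M}_{\mathrm{FF}}(h_1, h_2) \;=\; 1 - \mathrm{FF}(h_2),
\]

with the maximization over t0 and φ0 already contained in the overlap O, as stated above.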
In the recent literature [41,43,45,46] the faithfulness of waveforms was usually defined by the criterion (5) (without optimization with respect to physical parameters), thereby demanding that the maximal information can be extracted from the data without being restricted by the model itself. Here, however, we want to understand faithfulness in the original sense introduced in [5] that is based on the difference of the target waveform parameter λ with the recovered model parameterλ for which (14) is maximal. If this bias ∆λ =λ − λ is small enough, we can still accept the waveform model family as sufficiently accurate, even for parameter estimation. Therefore, by analyzing M FF and the corresponding parameters we can sensibly make analogous conclusions as before, but based on the actual optimization strategy that is employed in current template-based GW searches. Because of the additional freedom of varying physical parameters we now have to calculate the ambiguity function between hybrids constructed from the same set of NR waveforms but members of different PN approximants. It depends on the parameters of the waveforms, λ λ λ and λ λ λ , as well as the waveform models themselves. Since the phase difference above Mω m in the overlap integral (7) does not vanish generally for λ λ λ = λ λ λ , we have to slightly modify the algorithm presented in Sec. III A. In particular, we now need an estimate of how small changes in physical parameters affect the phase difference in the assumed NR regime. (The PN regime is affected as well, but there is no qualitative difference to the PN comparison incorporated before.) One possible strategy to quantify phase changes along variable physical parameters is to perform a number of numerical simulations and interpolate between the data obtained. Depending on the density of samples in the η and χ directions (the scaling with M is given trivially by a single simulation), such a procedure can be very time-and resource-consuming. However, the phenomenological fittings performed in [38][39][40][41] have utilized exactly this type of interpolation, and we conveniently use the result of [41] here because the fitting there is localized to frequencies close to and in the NR regime. Finally, to ensure the proper relative alignment, our algorithm to calculate A for arbitrary (in practice small) variations in all parameters is to match different PN approximants to a phenomenological waveform (phase and amplitude) that is used above Mω m resulting in a hybridh( f ; M, η, χ,t 0 , φ 0 ). Let us highlight that although we are now building PN+phenomenological hybrids our analysis is not assessing how accurate individual waveforms describe the entire coalescence process. Note for instance that we could have introduced this hybridization concept already in the previous section, but, as we have shown, the phase above the matching frequency did not enter the overlap calculation. Similarly now, we use the phenomenological phase description merely to model the M-, η-and χ-dependence at higher frequencies. In Sec. III we only exploited A = 1 for λ λ λ = λ λ λ whereas now we need an estimate of the shape of A also for λ λ λ = λ λ λ (although for small |λ λ λ − λ λ λ |). We can make two immediate observations from Fig. 3. 
Especially for small masses we see that relatively small changes in, for instance, symmetric mass-ratio or total mass (the other parameters are kept constant, respectively) modify the waveform considerably, so that the high mismatches for equal parameters (reported, e.g., in Figs. 1 and 2) could potentially be reduced drastically by only small variations in the physical parameters of one model waveform. Although the formal criterion (5) for faithfulness (or better indistinguishability) failed, the fitting factor could still be extremely close to unity with a minimal bias in the parameters. The second interesting observation from Fig. 3 is that the width around the maximum of the ambiguity function increases towards higher masses so that a comparison of two waveforms is increasingly insensitive to parameter changes at higher frequencies. This in turn endorses our assumption that the fitting factors and biases we shall calculate are dominated by PN effects (and not the choice of data above Mω m ) for small masses, where the accuracy requirements turned out to be hardest to satisfy. B. Comparison with previous results Before exploring fitting factors across the parameter space, let us present two examples that illustrate the general conclusions we shall draw in this paper. We first come back to the canonical equal-mass, nonspinning case and the Tay-lorT1/TaylorT4 comparison that was employed before (see Fig. 1 and [42]). To test the validity of our approach we again compare our estimate to hybrids constructed with actual NR data (matched at Mω m ∈ {0.04, 0.06, 0.08}, respectively). Because of the unavailability of NR data with arbitrary η and χ, we for now only maximize with respect to the total mass M. Note that the results shown in Fig. 4 fully agree with the analysis of Hannam et al. [42] (see Fig. 6 therein). They not only confirm that our combination of PN and phenomenological data accurately predicts the disagreement of the "true" PN+NR hybrids, one can also observe the striking improvement when the additional maximization with respect to M is taken into account. The peak mismatch without optimization was approximately 8.8%, 4.5% or 2.2%, depending on Mω m . With mass optimization we instead find M FF < 3.2%, 2.0% and 1.5%, respectively. The relative bias in the total mass, (M − M)/M, is always less than 0.8% and the earlier the matching is performed the smaller the bias becomes. A subsequent question that has not been answered so far is to what extent further optimizations, say along the symmetric mass-ratio and the spin(s) of the model system, improve the agreement between the waveform families even more. Full fitting factor calculations are commonly used to compare waveform models (see, e.g., [8,45]), but they have not been employed in the context of hybrid waveforms and studies of the required length of numerical waveforms. Reference [42] only applied a crude estimation of the effect an additional massratio optimization has, and concluded that a (total) massoptimization alone serves as a sufficient assessment of the full fitting factor. We now find that this conclusion was incorrect. We illustrate the effect of further optimizations through the comparison of TaylorT1-and TaylorF2-based waveforms (matched at Mω m = 0.06) in Fig. 5. The TaylorT1 target signal is fixed as a system with mass-ratio 4:1 and spin χ = 0.5, a point in parameter space that clearly fails all accuracy require- ments when looking at Fig. 2. 
By maximizing with respect to M, however, the maximal mismatch drops from 32.2% to 10.4%. Varying all three considered physical parameters finally yields a curve with M FF ≈ 1.6% at maximum, making the TaylorF2-based family accurate enough for detection. The relative biases in the parameters are less than 1% for M, of the order of 1% for η, and 10% for χ. Note that a faithfulness analysis, as in Sec. III and [43,44], would conclude that NR waveforms with many hundreds of cycles are necessary to produce hybrids (and consequently waveform models) that are sufficient for parameter estimation purposes. Here we see that waveforms that we might at first sight regard as far too inaccurate may in fact yield relatively small parameter biases when embedded in a waveform family. The optimization algorithm with respect to physical parameters is computationally more challenging than maximizing the inner product with respect to t0 and φ0 only. For each set of test parameters (η, χ) we have to construct a new waveform. Since TaylorF2 is an analytical closed-form PN description that is fast to evaluate, and our matching to the phenomenological model is performed directly in Fourier space [41], we only consider TaylorF2 hybrids as test waveforms h1. For the fixed target waveforms h2 we chose to employ the TaylorT1 approximant, because it was shown in [57] that its (dis)agreement with premerger NR data is most robust over the considered parameter space, and [44] noted that a maximal uncertainty estimate involves comparing to TaylorT1 inspirals. Starting with equal parameters for template and target, we search for the nearest local maximum of the overlap O(h1, h2) by varying the template parameters along the gradient of the overlap. Thus, we ensure a quickly converging improvement after a relatively small number of iterations. The results we present, however, do not take into account the entire distribution of the ambiguity function and are still only a lower bound on the fitting factor. Given the tremendous decrease in mismatch for relatively small changes in physical parameters, we argue nevertheless that this local extremum should serve as a reasonable estimate of the error one has to assume in terms of the fitting factor. We repeated the exploration of the parameter space with a study similar to the one presented in Fig. 2. The matching frequency is again fixed at Mωm = 0.06 and we calculate M FF, Eq. (15), for masses 5 M⊙ ≤ M ≤ 20 M⊙. We checked that the mismatch decreases towards the boundaries of this interval, so that the enclosed maximum can indeed be regarded as the global extremum. After performing this maximization of the mismatch with respect to M for fixed (η, χ), we present our results as a contour plot in Fig. 6.
FIG. 6. The maximum of the optimized mismatch (in %) for hybrids constructed either with TaylorT1 (target signal) or TaylorF2 (template signal) and a matching frequency of Mωm = 0.06.
The structure is very similar to the pattern of the nonoptimized mismatch, cf. the right panel of Fig. 2. The obvious difference is, however, that calculating the detection-relevant quantity M FF instead of the diagonal mismatch 1 − A(λ, λ) results in numbers that are ∼ 10 times smaller than what was considered before as error estimates. This allows for very different conclusions: even a moderate matching frequency like the one considered here leads to hybrids that are accurate enough for detection in a large portion of the parameter space.
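A minimal sketch of the gradient-based local search used for these fitting factors is given below; `overlap_fn` is a hypothetical callable that builds the template hybrid for a parameter triple (M, η, χ) and returns its overlap (already maximized over t0 and φ0) with the fixed target, and SciPy's bounded quasi-Newton optimizer, which estimates gradients numerically, stands in for whatever gradient scheme is actually used.

```python
import numpy as np
from scipy.optimize import minimize

def local_fitting_factor(overlap_fn, target_params, bounds):
    """Search for the nearest local maximum of the overlap, starting from the
    target parameters lam = (M, eta, chi); returns the local fitting factor and
    the recovered (biased) parameters."""
    res = minimize(lambda lam: 1.0 - overlap_fn(lam),
                   x0=np.asarray(target_params, dtype=float),
                   method="L-BFGS-B", bounds=bounds)
    return 1.0 - res.fun, res.x

# Hypothetical usage for the 4:1, chi = 0.5 example discussed above:
# ff, recovered = local_fitting_factor(my_taylorf2_overlap,
#                                      target_params=(20.0, 0.16, 0.5),
#                                      bounds=[(5.0, 100.0), (0.05, 0.25), (-0.9, 0.9)])
# bias = recovered - np.array([20.0, 0.16, 0.5])
```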
Simulating NR waveforms with few (< 10) orbits should hence be good enough for many applications considering systems with moderate spins and massratios. Although this is a very broad statement, it is clearly distinct from previous analyses [43][44][45] that concluded much longer NR waveforms are needed to sensibly connect them to standard PN approximants. Of course, Fig. 6 only shows the optimal agreement between the two considered waveform families and one might fear that the difference between simulated and recovered parameters is large in some parts of the parameter space. However, as anticipated by Fig. 5, the bias in total mass and symmetric mass-ratio are small, approximately ±1% and ±1.5% at most, respectively. The spin parameter χ is uncertain by −0.15 ≤ ∆χ ≤ 0.05. A deeper analysis of these biases is beyond the scope of this paper and results are likely more modeldependent than the general conclusions we present here. For completeness, we note that for increasing values of the simulated spin, ∆η and ∆χ generally decrease from positive to negative values, ∆M increases at the same time. This correlation is expected from the form of the PN expansion, where modifications of M can be compensated at lowest order by changing η inversely. Studies of PN approximants in [8] show similar tendencies, although the biases reported there are considerably higher due to the absence of a common NR part at high frequencies. The same holds for the comparison of complete models (including uncertainties in the NR regime) [45]. The modeling biases we find should be compared to statistical errors of full waveform families. In the case of the nonspinning phenomenological model [39] a Fisher matrix study as well as Monte-Carlo simulations were presented in [58], and the uncertainties found for Advanced LIGO and signals of SNR 10 are ∆M/M 3% and ∆η/η 8% (M < 100M ). These values are of the same order of magnitude as our results, and we take this as an indication that modeling errors do not vastly dominate the parameter estimation uncertainty. However, further studies are underway [59] to determine statistical errors for spinning waveform models. C. Model accuracy for spinning systems These new results constitute much brighter prospects for currently feasible NR simulations than the conclusions drawn in Sec. III and [42][43][44][45]. In certain parts of the parameter space, however, the mismatch error presented in Fig. 6 is still too high, particularly if one keeps in mind that gaining sensitivity of GW detectors is extremely difficult on the hardware side and theoretical considerations should reduce this sensitivity as little as possible [45]. Therefore, M FF > 3% for highly spinning systems should be improved by considering lower matching frequencies. Equally important is the question of whether numerical simulations for systems with moderate spins and mass ratios can be considerably shorter than Mω m = 0.06 which we assumed so far. In Fig. 7 we analyze the dependence of the mismatch error by showing the maximum of M FF as a function of Mω m . We consider equal masses and mass-ratio 4:1 with spins χ ∈ {0, 0.2, 0.4, 0.6, 0.8} in each case. Note that we do not include negative values of χ here, because the fact that the mismatch error for χ < 0 is smaller and not monotonic in χ (see Fig. 6) is likely an artifact of our choice of PN approximants (recall the obvious differences in Fig. 2). As expected, Fig. 
7 illustrates that reducing the matching frequency, e.g., from Mω m = 0.08 to Mω m = 0.02, leads to an improvement in mismatch by a factor of 2 to 10, depending on the spin. Larger values of the spin generally yield larger mismatches which in turn leads to stronger requirements for Mω m , assuming a given accuracy goal. This is unfortunate because the orbital hangup configuration of positive aligned spins decelerates the frequency evolution in the inspiral of the binary, demanding even longer simulations for a given frequency range. As such extremely long NR waveforms may not be available in the near future (including the Advanced LIGO era), we continue with a slightly different application of our results: How reliable is a set of complete waveforms constructed with standard PN approximants and NR simulations covering 5 (10,20) orbits before merger (i.e., 10, 20 or 40 GW cycles prior to the maximum of |h(t)|)? To quantify these uncertainties we have to combine an estimate of the minimal matching frequency allowed by such NR waveforms with the resulting mismatch error from Fig. 7. We calculate the first from the inverse Fourier transform of the phenomenological model [41] This spin-and η-dependent value is then taken into the results presented with Fig. 7 to estimate M FF for each configuration. Note that we use a more pessimistic error estimate for antialigned spins (χ < 0) by assuming the mismatches of |χ| due to the reasons discussed above. One kind of possible conclusion one can then draw is summarized in Table II for equal masses and mass-ratio 4:1. Given an accuracy goal (which we take as either 3%, 1.5%, 0.5% or 0.2%) we provide the range of spins in which hybrids with the specified number of NR orbits fulfill this goal. Note that the asymmetry in the spin parameter is only caused by the different matching frequencies waveforms with constant length permit. Again, we can very clearly see that even relatively short waveform are good enough for detection. In fact, mismatches of 0.5% are below the noise level for SNR 10, and differences of 0.2% are indistinguishable for SNR 16 according to Eq. (6). However, one can also see from Table II that doubling the number of orbits does not enlarge the accuracy range dramatically in many cases, although such simulations would take far more computer power and time. D. Nonspinning unequal-mass systems So far, we refrained from explicitly calculating mismatches for mass-ratios > 4:1 here because our underlying phenomenological model was only calibrated to numerical simulations with mass-ratios ≤ 4:1. Pushing the model beyond these values would add another uncertainty in addition to the way we estimate PN errors already, and more elaborate studies (possible including different models such as [40] and variants of EOBNR) are needed to reach sound conclusions. Nevertheless, numerical simulations of higher mass ratios are potentially interesting, and we shall try to estimate their reliability on the basis of our (extrapolated) knowledge here. We restrict this study, however, to nonspinning target signals. These are the systems where we do not expect the PN errors to drop significantly on the timescale of Advanced LIGO (in contrast to spinning binaries, where higher-order PN terms may well be calculated in the next few years). We find that the agreement between TaylorT1-and TaylorF2-based hybrids is exceptionally good along χ = 0 (see Figs. 6 and 8). 
In contrast, the TaylorT4/TaylorF2 uncertainty increases towards higher mass-ratios (smaller values of η) as we would expect from the form of the PN expansion. Therefore, we shall conservatively base our statements on comparing TaylorT4 and F2 approximants in this section. To illustrate our argument, we plot in Fig. 8 the maximum of the fully optimized (i.e., with respect to M, η and χ) mismatches between TaylorF2 and either TaylorT1 or TaylorT4 hybrids, all matched to fictitious NR data at Mω m = 0.06. The fixed target parameters are chosen as χ = 0 with the mass ratio q varying from 1 to 4 in steps of 0.5 as well as q = 10 and q = 20. While the comparison with TaylorT1 yields weakly η-dependent mismatches below 0.3%, TaylorT4 target signals exhibit a steeply increasing divergence from the model signals towards higher mass ratios. Its approximately exponential behavior is well described by the following fitting formula log 10 M FF ≈ −0.29 − 14.1η (17) which is included as a straight line in Fig. 8. A conservative estimate of the general model uncertainty would be the maximum of both data series for each η, i.e., (17) for small η and roughly constant M FF ≈ 0.12% for η > 0.1866 (q < 3). Evidently, a matching frequency of Mω m = 0.06 is only good enough for η > 0.081 (q < 10.2) if a mismatch of at most 3% is tolerated. Again, reducing the matching frequency helps to increase the accuracy of the final waveform, and we systematically analyze how useful numerical simulations of 5, 10 or 20 orbits before merger are in the nonspinning unequalmass regime. For that, we calculate max M M FF as a function of the matching frequency and the symmetric mass ratio, similar to what was done for Fig. 7. The matching frequency is then converted to orbits before merger as explained in the previous section. In Table III we present our results in analogy to Table II, where we provided the range of the spin parameter χ in which the waveform model meets certain accuracy requirements. Now we complement the picture by restricting ourselves to the nonspinning case; our error estimates are based on optimized TaylorT4/TaylorF2 hybrid mismatches, and we present the accuracy range in terms of the mass ratio. Note that, although only five orbits of NR data before merger are sufficient for detection for most of today's standard simulations (q 6), even the computationally very challenging goal of 20 orbits before merger is not enough to reliably model mass ratios as high as 15 or more for arbitrary total masses of the binary. It should be pointed out, however, that we report the worst disagreement between the considered hybrids in the left column of Table III, i.e., we demand that the assumed accuracy requirement is satisfied for all values of the total mass. As discussed in [44] already, one should rather understand the mismatch error and the accuracy requirement as functions of the total mass. After all, binaries with larger total mass have higher SNR in the detector (for constant distance of the source). More important for us here is that some of the considered astrophysical scenarios may not even exist or be extremely unlikely, and if the modeling error exceeds accuracy thresholds in these regions, we do not have to bother. 
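The quoted fit can be evaluated directly; the sketch below (with the standard conversion from mass ratio q to η) reproduces the steep growth of the mismatch towards small η that motivates the astrophysical argument developed next.

```python
def eta_from_q(q):
    """Symmetric mass ratio for a mass ratio q = m1/m2 >= 1."""
    return q / (1.0 + q)**2

def mff_fit(eta):
    """Fitted TaylorT4/TaylorF2 optimized mismatch along chi = 0 at
    M*omega_m = 0.06, Eq. (17): log10(M_FF) ~ -0.29 - 14.1 * eta."""
    return 10.0**(-0.29 - 14.1 * eta)

for q in (1, 4, 10, 20):
    eta = eta_from_q(q)
    print(f"q = {q:2d}: eta = {eta:.4f}, M_FF ~ {100.0 * mff_fit(eta):.2f} %")
```

For the near-equal-mass end of the range, the roughly constant TaylorT1/TaylorF2 value of about 0.12% quoted above takes over as the conservative estimate.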
We illustrate this argument with a concrete example: the (fictitious) waveform of a binary with mass-ratio 20:1 exhibits the largest uncertainty at total masses less than 20 M⊙, depending on the matching frequency (the values for NR simulations covering 5, 10 or 20 orbits before merger are given in parentheses in the right column of Table III). If we only consider black holes as objects in the binary and follow observational [60] and theoretical [61] evidence that their individual masses are > 3 M⊙, then the lowest total mass to consider in our error analysis is instead 63 M⊙. With our idealized assumptions, this is a regime where the mismatch drops monotonically with increasing total mass (due to the dominating amount of exact high-frequency data), and the maximal uncertainty at 63 M⊙ proves to be more than sufficient for detection purposes, even with only a few NR orbits; see Table III. In this sense, modeling higher mass ratios is more accurate than modeling comparable masses, as [44] already noted for diagonal (nonoptimized) mismatches. On the other hand, one could argue that the smaller object in the 20:1 binary could also be a neutron star. If the companion is a much heavier black hole, tidal effects are extremely weak [27] and the plunge is hardly affected by finite-size effects of the neutron star [62]. Thus, we may hope to accurately capture these systems with a BBH template family as well, and smaller total masses have to be considered. According to [63], (proto)neutron stars are expected to have masses > 1 M⊙, which is in agreement with current observations (see [64] for an overview). Assuming the lower bound of 1 M⊙ for the mass of a single compact object, we consequently have to consider total masses down to 21 M⊙ (for q = 20), which leads to higher modeling uncertainties in the waveform. However, as Table III shows, 10 NR orbits before merger would be virtually good enough for detection purposes, and 20 orbits already yield a mismatch of only 0.8% at 21 M⊙. Hence, even the theoretically and numerically difficult unequal-mass regime may well be modeled with only a few NR orbits, given the astrophysically expected properties of such systems. Of course, these astrophysical limitations are highly uncertain, and the conservative error analyses are the ones presented in Table II and the left column of Table III. However, given that caveat, we conclude that currently feasible numerical simulations are potentially good enough to model, in combination with PN approximants, an important fraction of the parameter space.
V. DISCUSSION
Predicting the GW signature of an inspiraling and merging BBH in General Relativity is inevitably associated with analytical or numerical approximations to the full theory, which introduce errors in the final result h(t) or h̃(f). In this paper we estimated these errors by the distance between two approximate solutions for each physical configuration. While neglecting uncertainties on the NR side, we assumed different standard PN approximants in a frequency range up to the point where the waveform is matched to an NR-based merger and ringdown model. We quantified the uncertainties by comparing the currently available 3.5PN (spinning contributions up to 2.5PN) versions of the TaylorT1, TaylorT4 and TaylorF2 approximants.
Introducing a simple algorithm that only requires amplitude information beyond the matching frequency, we first confirmed previous studies [42][43][44] that found that the mismatch error for fixed physical parameters greatly exceeds reasonable accuracy requirements, assuming typical NR waveform lengths. Instead of demanding extremely long numerical simulations to overcome this uncertainty in the modeling process, we refined the understanding of the waveform error by adopting the actual data analysis strategy of detecting an unknown signal in noise-dominated interferometer data. In particular, assuming waveform families instead of individual waveforms naturally redefines the concept of distance by allowing physical parameters to be varied in the mismatch calculation. The results presented in Sec. IV then indicate that the GW signatures for many astrophysically relevant systems can in fact be well modeled by straightforward combinations of standard PN approximants and currently feasible NR simulations, covering < 10 orbits before merger. The accuracy has not yet reached a level such that detection and parameter estimation errors are limited only by the detector noise for high-SNR events, and the intrinsic uncertainty of BBH models may in some cases exceed the anticipated deviations caused by non-black holes, making it impossible to identify them as such. Nevertheless, the reported disagreement among different BBH models and the biases in the parameters are certainly tolerable for the first GW detections, which are likely to have low SNRs (∼ 10). While this is true for systems with moderate spins, one has to keep in mind that even our idealized setting yields mismatch errors for high values of spins that are of the order of a few percent, which increase for higher mass ratios. Reducing the matching frequency poses unrealistic challenges for current NR codes, and either fundamentally different numerical approaches or advances in PN are needed to fully control the entire parameter space. While the next spin contributions in PN theory may become available in the near future to further improve the modeling of spinning systems (see the recent calculations of higher-order spin-orbit contributions [65][66][67]), unequal-mass nonspinning contributions at 4PN order are unlikely to be calculated with established techniques soon. However, as we discussed for a binary with mass-ratio 20:1, astrophysical expectations are that such systems only form with a high total mass, thereby reducing the impact of PN uncertainties. Even for 20:1 binaries, our results suggest that NR simulations of less than 10 orbits are sufficient. In summary, we found that it is not single hybrid waveforms but rather their embedding in the waveform manifold that results in templates accurate enough for detection, even with today's limited number of NR orbits. The uncertainty in physical parameters we had to accept for this tremendous increase in overlap is rather small, ∼ 1% in mass and symmetric mass-ratio, and ∼ 0.1 at most for the spin parameter χ. For nearly equal-mass systems, the individual masses of the constituents are then only reliable to a correspondingly limited accuracy, and it has to be decided whether this is good enough for astrophysical studies. Of course, our results rely on a number of assumptions that are reasonable in the range where we apply them, but we shall collect and discuss their generalizations and limitations below. First of all, our analyses are meant to provide a general concept of how to deal with modeling errors, instead of giving final answers.
In particular, as we emphasized throughout the paper, we do not address the question of how accurate a particular waveform model is. The statements formulated here are based on selecting PN approximants that are compared with each other, and our choices were made to illustrate the order of magnitude one generally has to assume for our notion of error. This can be taken as a conservative estimate for all currently existing combinations of analytical and numerical relativity, because even a remarkable agreement in the overlapping region of both approaches does not necessarily diminish the uncertainty of many ambiguous choices that enter the modeling of (up to thousands of) GW cycles in the inspiral waveform. Nevertheless, one should keep in mind that a particular PN (or EOB)+NR combination can be much closer to the real waveform than estimated here, as well as the possibility that the PN ambiguity at consistent 3.5PN order generally underestimates the true error in the signal description. Two further essential assumptions should be noted: we neglect both the error of the hybridization procedure and any uncertainties beyond the matching frequency. Both assumptions are well motivated by previous studies [21,41-43], but care has to be taken when generalizing their validity. For instance, from Fig. 6 or Table II one might be tempted to conclude that actually very short NR waveforms are enough for modeling equal-mass, weakly spinning systems. This is certainly true from our results if the matching to PN can be done unambiguously. However, if there are too few cycles to align PN and NR signals properly, different matching procedures may lead to very different results. This aspect was not treated here as it can be checked separately, and it should only affect the resulting waveform for very short (< 5 orbits) NR simulations. The other key assumption, the presence of exact high-frequency data, has another important implication for our results. Not only do we say that the error of the NR part of the wave is negligible (an assumption that could easily be dropped if the NR mismatch becomes significant), we also use waveform families that directly resemble PN/NR hybrids. In other words, the additional error that is introduced in the phenomenological fitting and interpolation process is not taken into account here. Again, this is an error that can be quantified separately, but it has to be taken into account when interpreting the comparison of different complete waveform models, as was done in [45]. We merely state the fact here that in principle PN+NR combinations constitute sufficiently accurate target waveforms for the construction of template families. This work can be complemented in many different ways. One obvious, yet involved, extension is the completion of the parameter space by allowing arbitrary spin orientations that cause additional precession dynamics. Some steps towards building such hybrids have already been taken [68,69], but a deeper understanding of the waveform structure has to be gained before an extensive error analysis like the present one can be performed. Similarly, this study was restricted to the dominant spherical harmonic mode, as it is crucial to understand and quantify the errors here first. Nevertheless, a final waveform model would have to include higher modes as well, and the algorithm we presented should be easily adaptable to these cases.
Implementing more PN approximants and repeating our analysis with pairwise comparisons of various flavors of PN and EOB will help to fully understand the spread of equivalent descriptions of the inspiral process. When more contributions to PN expansions become available the present analysis has to be repeated, hopefully reflecting the enhanced knowledge of the analytical approximation. This is especially true for spinning binaries, where calculations of higher-order PN contributions are expected in the next few years. Also, work is already underway [59] to extend previous work [58] and relate the parameter uncertainties found in this study to statistical errors that are inevitably present for signals with a given SNR in the detector. Only these results will allow for statements about how useful current waveform constructions are for parameter estimation and if the uncertainty in recovered parameters is dominated by the detector noise or the waveform model itself. Finally and most interestingly, one should address the question of what kind of physics can be achieved given a certain performance of complete waveform models and, of course, given real GW detections with the upcoming generation of interferometers. It will be particularly important to analyze whether a certain disagreement between signal and model can be entirely explained by model uncertainties or if possibly unknown physical effects are the cause. This study serves as a first step to prepare for those kinds of questions.
Root Tonics and Resilience: Building Strength, Health, and Heritage in Jamaica
Jamaican root tonics are fermented beverages made with the roots, bark, vines (and dried leaves) of several plant species, many of which are wild-harvested in forest areas of this Caribbean island. These tonics are popular across Jamaica, and also appreciated among the Jamaican diaspora in the United States, Canada, and the United Kingdom. Although plants are the focal point of the ethnobotany of root tonics, interviews with 99 knowledgeable Jamaicans across five parishes of the island, with the goal of documenting their knowledge, perceptions, beliefs, and oral histories, showed that studying these tonics solely from a natural sciences perspective would serve as an injustice to the important sociocultural dimensions and symbolism that surround their use. Jamaican explanations about root tonics are filled with metaphorical expressions about the reciprocity between the qualities of “nature” and the strength of the human body. Furthermore, testimonies about the perceived cultural origins, and reasons for using root tonics, provided valuable insights into the extent of human hardship endured historically during slavery, and the continued struggle experienced by many Jamaicans living a subsistence lifestyle today. On the other hand, the popularity of root tonics is also indicative of the resilience of hard-working Jamaicans, and their quest for bodily and mental strength and health in dealing with socioeconomic and other societal challenges. Half of all study participants considered Rastafari the present-day knowledge holders of Jamaican root tonics. Even though these tonics represent a powerful informal symbol of Jamaican biocultural heritage, they lack official recognition and development for the benefit of local producers and vendors. We therefore used a sustainable development conceptual framework consisting of social, cultural, economic, and ecological pillars, to design a road map for a cottage industry for these artisanal producers. The four steps of this road map (growing production, growing alliances, transitioning into the formal economy, and safeguarding ecological sustainability) provide a starting point for future research and applied projects to promote this biocultural heritage product prepared with Neglected and Underutilized Species (NUS) of plants.
Keywords: ethnobiology, biodiversity, neglected and underutilized species, wildcrafting, Caribbean, Jamaica, intangible cultural heritage, sustainable development
INTRODUCTION
Traditional and indigenous fermented plant mixtures, multicomponent alcohol infusions, and bitter tonics, consisting of roots, bark (and other parts) of wildcrafted species, are prepared and drunk as beverages, medicines, or for sociocultural purposes around the world, e.g., kaojiuqian in Shui villages in China (Hong et al., 2015); garrafadas in Brazil (Barros dos Passos et al., 2018); mahuli (country liquor) in India (Kumari et al., 2015); and bita in French Guiana (Tareau et al., 2019). The preparation and use of alcohol-based or fermented plant mixtures made with roots and bark of wild and cultivated species has also been recorded both in Africa and in countries across the Atlantic Ocean with a significant Afro-descendant population, such as in several Caribbean islands and the wider Caribbean region, especially as aphrodisiacs and for treating sexually transmitted infections (Cano and Volpato, 2004; Payne-Jackson and Alleyne, 2004; Vandebroek et al., 2010; van Andel et al., 2012). In Jamaica, artisanal fermented decoctions that include several wild-harvested and forest plants are known as root tonics. These tonics play a dual role as food and medicine, and have been recognized as a product made with Neglected and Underutilized Species of plants (NUS) that shows potential for income-generation, empowerment of local communities, and reaffirmation of their cultural identity (Padulosi et al., 2013). Jamaican root tonics are commonly produced and consumed at home, or sold locally in the informal economy, and are widely appreciated by Jamaicans as an energizer, aphrodisiac, for blood purification, and for the promotion and maintenance of good health (Sobo, 1993). Although root tonics are inherently a Jamaican product, their impact reaches beyond this Caribbean island, as their commercialization by a handful of producers in Jamaica and overseas has followed the Jamaican diaspora to London, Toronto, and New York City (Dickerson, 2004;.
The popularity of root tonics as a symbol of Jamaican biological and cultural heritage (in short "biocultural heritage") stands in stark contrast to the breadth and depth of their scientific study. So far, one paper has reviewed the plant diversity of root tonics, from a study that used data from labels of listed ingredients on commercial products (Mitchell, 2011). In addition, the same paper contributed to a comparison of plant mixtures used as aphrodisiacs across the Caribbean and Africa (van Andel et al., 2012). Data is also lacking about the history and cultural context of their use, as well as levels of consumption, domestic production, and sales of artisanal root tonics across Jamaica, and how artisanal root tonics differ from commercial products. The diverse biological, medical, historical, and cultural dimensions of Jamaican root tonics invite several important research questions, including related to the botanical identity of the plant diversity found in recipes, the illnesses treated and purported health boosting properties, their historical origin and present-day cultural importance, and their potential for sustainable heritage development for the benefit of small-scale Jamaican producers. The term "sustainable development" is widely used with varying definitions based on the context and purpose of use, but was first coined by the World Commission on Environment and Development in 1987. Common pillar structures found in discussions about sustainable development touch on its economic, social, and ecological dimensions. For the purpose of this paper, we are using the term "sustainable heritage development" (Keahey, 2019) and incorporate a fourth pillar, namely cultural sustainability, which seeks to recover and protect cultural identities through a celebration of local and regional histories and the passing down of cultural values to future generations (Farsani et al., 2012). The cultural pillar of sustainability exists in parallel to ecological, social, and economic sustainability, and stresses the relation of heritage to social cohesion and local identity (Soini and Birkeland, 2014). In this paper, we focus on the intangible cultural aspects of Jamaican root tonics, using information from ethnobotany research and oral history testimonies as a lens to explore the potential for development of an equitable Jamaican cottage industry for artisanal root tonic producers. Our primary goal was to conduct ethnobotanical research to increase the scientific knowledge base about root tonics. Our secondary goal was to move beyond research and make this data applicable and relevant to local communities. Specifically, this paper uses a mixed methods approach based on ethnobotany and oral history research, and a contextual analysis of the production market, to understand the "emic" (insider's or community) perspective of root tonics (Gros-Balthazard et al., 2020), addressing the following questions: (1) What are Jamaican root tonics? (2) Why do Jamaicans drink root tonics? (3) Where did the tradition of making root tonics come from (who developed this tradition)? (4) Who is especially knowledgeable about root tonics (5) What is the profile of an artisanal root tonic producer? and (6) What should a roadmap to a socially just Jamaican root tonics cottage industry look like? collaborators to the success of this project, they are co-authors on this paper. Before each interview, we explained the goals of the project and asked for the participant's verbal, free and prior-informed consent (FPIC). 
Written informed consent was obtained for Figure 2. We conducted face-to-face interviews in Jamaican Patois, with the interviewer asking survey questions and recording the answers on paper or a laptop. To protect their identity, study participants received a number, nickname, or initials on the questionnaire, unless they explicitly gave their permission to be acknowledged for their participation in the project. For the purpose of this scientific paper, data of all study participants was anonymized. Our questionnaire contained 23 questions that pertained to five sections: (1) Definition and use-patterns of root tonics; (2) free-listing of plant ingredients of root tonics; (3) preparation of root tonics; (4) opinions about root tonics; (5) socio-demographic information of participants. In total, between February 2018 and May 2019, we interviewed 99 people, 88 men and 11 women, across five parishes (Figure 1). The lower number of women reflects the gendered nature of plant collectors and root tonic producers that is skewed toward men. The age of study participants varied from 26 to 88 years, with an average (±STDEV) of 59 ± 13 years. Most people (63) were farmers, seven persons were retired; other professions included vendor (7), herbalist (6), mason or construction worker (6), "roots man" who prepares and sells roots (5), while one or two people reported other occupations, such as fisherman, cane cutter, artist, steelworker, higgler, shoemaker, dressmaker, musician, artist, security guard, or taxi driver. Data Analysis of Ethnobotanical Interviews Interview answers from all 99 participants were entered and organized in an Excel spreadsheet. Column headings consisted of variables (gender, age, occupation, religion, number of plants reported...) or survey questions, while rows and cells contained individual answers from participants (see Supplementary File). The formatted Excel spreadsheet was imported into Atlas.ti, and four central interview questions were coded for qualitative analysis: Q1-Definition (what is a root tonic?), Q2-Motivation (why do Jamaicans drink root tonics?), Q3-Origin (where does this tradition come from?) and Q4-Knowledge keepers (who is especially knowledgeable about root tonics?). After several rounds of careful reading through all interview answers, we identified and assigned 23 codes, based on the recurrence of verbatim terms that were expressed in answers from interviewees to these open-ended questions (Table 1). Next, Atlas.ti 8.4 was used to explore relationships between these codes through cooccurrence tables and visual networks. Creating a Road Map for the Sustainable Development of Root Tonics Based on the responses from interviews, we developed four central questions to assist in creating a road map for the sustainable development of a cottage industry for root tonics, as follows: (1) What is our definition of sustainable development? (2) What are the steps that a traditional, small-scale root tonic producer can take to develop and scale-up their production in the informal and formal sectors? (3) How can a traditional root tonic be improved upon for sale to the general local population? (4) How feasible is it to suggest a cottage industry; what would FIGURE 2 | Collecting roots and lianas (called "wiss") of various wild-harvested plant species to prepare root tonics in St. Thomas parish, at the fringes of the John Crow Mountains, 3 days before the full moon (photo credit IV). 
be the ideal socio-economic situation for traditional root tonic producers in Jamaica, and how might this scenario be realized in the future? After developing further sub-questions and grouping these thematically, we determined that the major topics to be researched further were the current industry environment, marketing, traditional knowledge, culture, and health. We based the definition of sustainable heritage development used in this paper on the results of a review of the literature. To find journal articles, we searched Google Scholar and EBSCOhost using the keywords "ethnobotany," "ethnobiology," "culture," "rooibos," or "traditional knowledge" and "sustainable development," as well as "cultural sustainability." Rooibos was used as a search term as an example of a plant species that has specific geographical origins and is used in a beverage with established cultural significance to the people of the region in which it is cultivated. We then defined the goals of the roadmap and used interview responses, direct (participant) observation of Jamaican society, its culture and economy, and research into the resources available to informal micro enterprises to identify barriers that artisanal root tonic producers might face. Internet searches were performed to identify the relevant public and private sector authorities and resource-providers in the areas where support is needed, and a review of each of the relevant entities' websites was conducted to identify what resources, publications, training, and support are being offered to the micro, small and medium enterprise (MSME) sector, particularly for micro enterprises in the agro-processing sector. The level of production and the sales environment gleaned from the survey results were used to determine the assumed starting point for the root tonic producer: a home brewer with sales scattered throughout the year, whose production is usually limited to a 5-gallon batch of tonic sold over several weeks, mostly to people within the producer's social network. Based on this assumption, we determined what resources would be available, and sought attainable strategies to improve this producer's situation, with steps that can be taken within the informal economy until the producer feels empowered to formalize their root tonic business. The identified barriers were used to create a road map that would seem manageable, and culturally acceptable, to the average producer. Currency conversions to USD in this paper use a conversion rate of $1 USD to $148.74 JMD, the rate available on August 12, 2020.

RESULTS

Individual interview answers to a selection of the survey questions and psychosocial data can be found in the Supplementary File.

What Is a Root Tonic (Q1-Definition)?

In their answers to this question, Jamaican participants emphasized a root tonic's strength-building quality as a drink made of a combination of plants that supports and cleanses the body, cures sickness, provides energy, and settles the nerves. Table 2 shows the recurrent use of these terms by their counts in quotations, as well as their associations with the four questions (Q1 to Q4) through the C-coefficient, which varies between 0 (no association) and 1 (perfect association). The number of plant species used in root tonics varied between 4 and 55, with an average of 15 ± 8 (STDEV) plants. Persons who prepared root tonics used the roots, bark and whole chopped liana parts of these species, and for some also the leaves, all of which needed to be dried before use.
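To make the co-occurrence analysis described above more concrete, the following minimal sketch computes a Jaccard-style co-occurrence coefficient between two codes across a set of coded quotations. The coded answers shown here are hypothetical, and the formula c = n12 / (n1 + n2 - n12) is assumed to correspond to the C-coefficient reported by Atlas.ti; the paper's actual codes and counts are given in Table 1, Table 2, and the Supplementary File.

```python
# Minimal sketch (hypothetical data): Jaccard-style co-occurrence coefficient
# c = n12 / (n1 + n2 - n12), where n1 and n2 count the quotations carrying each
# code and n12 counts the quotations carrying both. Values range from
# 0 (no association) to 1 (perfect association).

# Hypothetical coded answers: each quotation maps to the set of codes assigned to it.
coded_answers = {
    "P01_Q1": {"strength", "cleanse"},
    "P02_Q1": {"strength", "energy"},
    "P03_Q2": {"strength", "aphrodisiac"},
    "P04_Q2": {"aphrodisiac"},
    "P05_Q1": {"cleanse", "energy"},
}

def c_coefficient(code_a: str, code_b: str, answers: dict) -> float:
    """Return the co-occurrence coefficient between two codes across all quotations."""
    n1 = sum(code_a in codes for codes in answers.values())
    n2 = sum(code_b in codes for codes in answers.values())
    n12 = sum(code_a in codes and code_b in codes for codes in answers.values())
    if n1 + n2 - n12 == 0:
        return 0.0
    return n12 / (n1 + n2 - n12)

print(c_coefficient("strength", "energy", coded_answers))       # 0.25
print(c_coefficient("strength", "aphrodisiac", coded_answers))  # 0.25
```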
Several producers stated that it was important to work with plant parts that were fully dried, or that otherwise the tonic would spoil. In colloquial language, a root tonic is often referred to as "roots." According to Table 2, root tonics are not considered bitters, with only three people mentioning this term, of which one person explicitly clarified that "bitters is not a roots" (MT3, male, age 60). In addition, the difference between a tea and a root tonic was also explained as follows: "[It depends on the] amount of different things you put in it, for a tea [you] just [put a plant like] sarsaparilla, ramoon, chainey root. For a tonic you put more things, 20 different something, bark and roots" (Windsor Forest-1, male, age 62). The preparation of a root tonic is a time-consuming process that involves the collection, drying, and boiling of various plant ingredients in water, after which the decoction is cooled, strained, and bottled. The whole process from collection to finish can take several weeks, or even months. Most participants reported collecting plants during a specific moon phase, often three days before or three days after the full moon, when the moon is considered strongest (Figure 2). Important plant species used in root tonics, notably vines and roots, are wild-harvested in forests and other remote ecosystems that are difficult to reach and require long collection trips on foot. The botanical diversity of root tonics falls outside the scope of this paper and will be addressed elsewhere, but two of the most popular species across the five study areas were lianas of the genus Smilax, belonging to the Smilacaceae: Chainey root (Smilax canellifolia Mill., illegitimate synonym Smilax balbisiana Griseb.), and sarsaparilla (Smilax ornata Lem., synonym Smilax regelii Killip and C.V.Morton). The plants are usually dried naturally in direct sun or shade, over several days or weeks (Figure 3). Each person has their own specific recipe, which we did not record during interviews, out of respect for, and to protect, their intellectual property rights (IPR). The general process for preparing roots involves boiling the plant mixture over several hours, traditionally over firewood, after which the liquid of the "first boil" may be decanted and either finished at this stage, or new water is added, and the whole process repeated (Figure 4). Then this liquid is added to the previous, and the preparation is left to cool. Next, it is bottled ( Figure 5) and put down in a cool place for a month or longer, which is described as "curing." Several persons noted that in the past, the bottles were often buried under the earth to keep them cool and slow down the fermentation process which prevents the glass from bursting. Ingredients that can be added as a preservative to keep a root tonic from spoiling included burned sugar, molasses, honey, wine, or rum. A root tonic can be consumed directly as a shot from the bottle, or as a punch by combination with some of the following ingredients, such as condensed milk (or coconut milk sweetened with honey), Irish moss, rum, Dragon Stout, or Guinness beer. Jamaicans tend to consume root tonics in a shot glass in the morning and/or evening, especially when they need energy, or to relax the mind. 
When asked whether men, women, and children all drink root tonics, 84 people (85 percent) answered "everyone," whereas 11 people said "only adults," 2 people "mostly men," one person said it depended on the type of root tonic, and another person did not answer the question. However, 30 people specified that children should only drink a small amount, measured as one or two spoonfuls, or that their root tonic should be diluted with water. Three people added that root tonics should not be consumed by pregnant women.

Why Do Jamaicans Drink Root Tonics (Q2-Motivation)?

It was not until interview participants were asked about reasons for drinking root tonics that their role as an aphrodisiac beverage came to the forefront. Other important functions, such as strengthening, building and cleansing the body and blood, curing sickness, providing energy, and settling the nerves had already been emphasized previously in response to the question "What is a root tonic?" The strength-building capacity of root tonics was associated with working hard (9 answers). We identified at least seven functions of root tonics from the interviews, which complement and/or overlap each other (Table 3). One study participant described the multifunctionality of root tonics as follows: "It has a lot of meaning[s]-a product that can help sickness, like a medicine. It has a lot of different things, substance" (St. Thomas-6, male, 56 years). When Jamaicans mentioned the word "bush" during interviews, they referred to several possible meanings: any plant, a specific plant species, a specific natural area or forest known to both parties in the conversation, or any (unspecified) wild natural place.

Where Did the Tradition of Making Root Tonics Come From (Q3-Origin)?

In their answers to this question, study participants emphasized the Africa connection (39 quotations, Table 2). They described root tonics as a tradition with deep spiritual and natural connotations passed on by African elders, who endured the brutal hardships of slavery. Several people referred to Creation, God, visions, or a spiritual origin. Someone said: "Older head people, them maybe learn it from the Spirit. Some people recognize it spiritually, the bush become[s] like a spiritual thing, a living soul, them [plants] have their own purpose" (Windsor Forest-7, male, age 66). Another participant stated: "African[s] -our history, they come here, a lot of beating and harassment that we [were] getting, we just go [in the] woods and find some bush to keep strong" (South Manchester-9, male, age 56). A third person said: "That come from slavery when the white man take away the medicine, and they [the Africans] have to seek their own medicine to stay alive. They try it out and feel nice, and then tell them bredren [friends]" (Windsor Forest-13, male, age 64). Someone else explained: "Slaves, they never got good food from white slave masters so they consumed the roots" (Kingston-9, male, age 62). Interview answers showed African agency as an act of resistance to slavery instead of passive endurance, with several people associating root tonics with the Maroons, who freed themselves from enslavement, engaged in sophisticated guerrilla warfare and revolts against the British colonizers to maintain their freedom, and thrived while living deep in the Jamaican mountains, as the following three quotes illustrate: "The Maroons are the first to release themselves and go into the hills.
I always said my ancestors is from that group of people and that is where I get that nature from" (St. Thomas-8, male, age 64). "The Maroons ran away and survived in the forests and they started the tradition [of boiling root tonics]. But their knowledge originated out of Africa. They tapped into the knowledge in order to survive" (St. Thomas-10, male, age 61). "Maybe [it came from] the Maroons -them [are] a rough people, they fight wicked" (Windsor Forest-16, male, age 53).

FIGURE 4 | Preparation of a root tonic showing boiled plant ingredients after decantation of the liquid. Often, depending on the producer, new water is added for a "second boil." This process can take several hours to an entire day (photo credit IV).

Eight people also referred to Amerindians, postulating that the original inhabitants of the Caribbean islands may have exchanged their knowledge with Africans: "It start[s] from how you learn it, the traditional people, from the Tainos them, the Indians them, they was here first. From Cuba most of them spring from. Because we [Africans] come after the Taino, find out the knowledge, and pass it on the same way" (Maroon Town-9, male, age 67). "It is an African tradition. [The] Arawak (Taino) [are the] medical experts in South America and teach the Africans as a slave. All top herbalists are Indian people (Arawak). After Africans become enslaved, them mix together" (Windsor Forest-8, male, age 45). Several of these and other answers also highlight the continuing relationship that exists between the use of root tonics and resilience as a people, dynamically finding and employing solutions to respond to, and overcome, adversity and stress, from the past into the present: "In Jamaica, when people escape [...]".

Who Is Especially Knowledgeable About Root Tonics (Q4-Knowledge Keepers)?

Half of all study participants (50 people) mentioned Rastafari as persons who are especially knowledgeable about root tonics (Supplementary File), and 43% self-identified as Rastafari when asked about their religion. The following quote links "roots," a term that represents both the physical roots of plants that grow in the earth and the cultural roots from the ancestors, to Rastafari and other Jamaicans who believe in nature and culture as a way of life: "It [knowledge about root tonics] is coming from the ground (roots, ancestors) [...]". On the other hand, ten people also replied that not one specific group was especially knowledgeable and said that root tonics are prepared by Jamaicans across Jamaica.

FIGURE 5 | The final artisanal product is stored in recycled bottles of rum (or occasionally other types of bottles, such as wine or Campari), often with their original labels (photo credit IV).

What Is the Profile of an Artisanal Root Tonic Producer?

Almost everyone (97 of 99 people interviewed) drank root tonics, whereas the number of people who reported preparing, collecting plants, and selling root tonics was 87, 84, and 61, respectively. Six persons who sold root tonics (10%) did not collect the plant ingredients themselves; they were all vendors in the capital, Kingston. However, everyone who sold root tonics also prepared them. Root tonic makers and vendors self-identified predominantly as Rastafari, adhering to a natural lifestyle (40 people), whereas 25 people reported being Christian, 19 stated no religion, two did not want to answer this question, and one person identified as Zionist.
These producers and vendors were predominantly male (78 of 87 people), middle-aged to senior (average age of 58 ± 13 years), and embedded in a social network of family and friends who follow the tradition of "boiling roots." However, six study participants who sold root tonics were younger than 40, with the youngest being 26 years. Producers learned about root tonics from multiple, complementary, and overlapping sources, including: elders in the community and other relatives (36 people), parents (32 people), grandparents and great-grandparents (28 people), traditional specialists such as roots men, herbalists, bush doctors, and Maroon mothers (11 people), God, visions, and spirituality (5 people), friends, referred to as "bredren" (5 people), books (5 people), experimentation (4 people), health stores (1 person), and the internet (1 person). The occupation of root tonic producers consisted of farmers who "trot the hills" (in rural areas; 53 of 68 people), market or street vendors (in the capital; 7 of 19 people), and sometimes herbalists or "roots doctors" (11 of 87 producers). They were local producers who operated in the informal sector, without established businesses or products packaged for commercial sale. Root tonics were sold directly from the producer's home, roadside stalls, small community shops, or more structured market stalls. Occasionally, the artisanal producer will travel to deliver products directly to consumers or to sell their product at festivals, other events, or on the street. The majority of vendors reported selling their product to locals (56 people). Of these, less than half (22 people) also sold to tourists, foreigners, and visitors. Just four persons said they only sold to the latter group.

The Future of Roots: Toward Developing a Road Map for a Root Tonics Cottage Industry

Using the four pillars (economic, ecological, cultural, and social) of sustainable heritage development, we identified the following key considerations and action points in preparing a road map for a root tonics cottage industry, while also pinpointing potential barriers that producers and this cottage industry might face (Figure 6).

FIGURE 6 | Identification of key elements for the creation of a road map for sustainable development of root tonics (road map shown in Figure 7), based on four pillars (economic, ecological, cultural, and social). The arrows and diagonal dotted lines indicate and delineate the barriers, key considerations, and action points that are associated with each pillar. For example, for the ecological pillar, one of the barriers is that knowledge about the plant species used in root tonics (botanical identity and plant diversity) is incomplete, whereas a key consideration is that the production of root tonics is based on the use of local, Neglected and Underutilized Species (NUS) of plants; an action point for the development of a road map is to distinguish between sustainable production at a smaller, cottage industry scale, compared to a larger and more ecologically impactful industrial production line.

Economic Pillar

The main consideration identified under the economic pillar is that traditional root tonic producers need to be able to bring in a reasonable level of income for a fairly priced product over the long term. The size of repurposed bottles is generally either 200 ml or, more commonly, 1 liter. Prices per one-liter bottle range from $1,500 to 2,000 JMD ($10.09 USD and $13.45 USD, respectively) in Kingston and $1,000 to 1,500 JMD ($6.72 USD and $10.09 USD, respectively) in rural areas at the time of research, with $1,000 JMD ($6.72 USD) for a one-liter bottle being the modal price for those producers who reported pricing. Artisanal producers who commented on batch volume generally indicated that a regular batch was around 5 gallons. Several of the producers interviewed said that they produced batches of root tonics only sporadically throughout the year, or to order for customers, and did not have a steady supply that was ready for sale. While the goal is to ultimately have a cottage industry that operates in the formal sector, a road map will need to meet producers where they currently are in the informal sector. It will need to help them to use their own local knowledge paired with formalized training via public resources and industry tools that equip producers with what they need to enter the formal sector. There exist several economic barriers to traditional producers developing their own production lines, the most significant of which is the lack of, or limited access to, resources such as business training and financial capital for producers operating outside of the formal sector. Other barriers include a low production capacity with a lengthy timeline, lack of IPR recognition, food safety concerns, a general underappreciation for traditional products, limited marketing and distribution options and strategies, and limited infrastructure.

Ecological Pillar

A key consideration here is that root tonics are biodiversity-based products, which depend on the integrity of ecosystems. In order for a road map to help ensure that an increase in production of traditional root tonics will continue to support ecological sustainability, there should be a market differentiation between commercial and artisanal products. Major ecological hurdles that artisanal producers of traditional root tonics face include incomplete botanical knowledge of the plant species used, including about their diversity and ecology; lack of evidence-based knowledge on their health benefits; hard-to-reach locations of the wild plant species used; and a high potential for unsustainable harvesting methods with a significant increase in production.

Cultural Pillar

Given the cultural importance of root tonics, a road map will need to ensure that the cottage industry development is culturally acceptable, both for traditional producers and for consumers. In order for this to be most effective, a deeper understanding of the benefits of root tonics as a cultural heritage product will need to be developed within the local market. Current barriers to continuing and growing root tonics' acceptance as a cultural product include the general misconception that it is only an aphrodisiac, a loss of traditional knowledge on the beverage's production as compared to past generations, and local stigmas against farming and bush remedies.

Social Pillar

One of the central social considerations in developing a sustainable road map for a root tonics cottage industry is the empowerment of local communities.
Currently, there exists a significant divide between locals and the "establishment," and a general mistrust on the part of locals toward each other, government, formal institutions, and capitalistic ideas of development and progress. Systemic social barriers for root tonic producers include racism, classism, and gender inequality. These co-exist with more specific social barriers, such as a lack of mentorship, a lack of interest from the younger generation, and a lack of protection by any social classification such as "indigenous" that might bring with it the right to maintain traditional knowledge and traditional cultural expressions as intellectual property (IP). A road map should consider whether it is desirable to encourage collaboration among producers, and how this should be facilitated. Also, it is necessary to reflect on how an industry marketing strategy might boost the image of traditional root tonics to policy makers, educators, gastronomes, and other key influencers who could help to promote more widespread consumption. During interviews, multiple artisanal producers commented on competition from other artisanal producers, as well as from commercial products that are available nationwide. In Jamaica, we found at least eight commercial products that are being sold in supermarkets and labeled as roots, root tonics or tonic wine, namely "Baba Roots Herbal Drink," "Put It Een Roots Tonic Wine," "Pure Roots 100% Herbal Tonic," "Pump It Up Roots Tonic Wine," "Mandingo Roots Tonic Wine," "Daniel's Roots Drink," "Power Man Roots Drink" and "Hard Driver Roots Drink." These commercial root tonics are packaged in bottles ranging from 148 ml to 1 liter, with the more common size being single-serving bottles of 148 ml. Single-serving commercial root tonic bottles are generally sold for $350 JMD ($2.35 USD) each, a price per milliliter that is $1.36 JMD higher (2.4 times more expensive) than the modal artisanal product with reported pricing in this study. However, none of these beverages are considered as authentic as those of the real "roots man" who sells a cultural product based on tradition. One person alluded to this during interviews, by stating that "commercial root tonics have lost their purpose [to help] cure sickness out of your body" (South Manchester-12, male, age 52).

Jamaican Root Tonics Have a Deep Socio-Cultural History and Their Use Tells the Story of Survival, Resistance, and Resilience

The preparation and use of complex plant mixtures made with roots and bark in the Caribbean is not restricted to Jamaica, but has been reported in the scientific literature for other islands, for example in the Dominican Republic and Cuba, where they are popularly known as botellas and galones (Cano and Volpato, 2004; Vandebroek et al., 2010; van Andel et al., 2012). However, beyond the use of these beverages as aphrodisiacs and medicines, there is little scholarly information about their origin, perceived functions, and meanings. This may be due to a generalized lack of mixed methods approaches that combine ethnobotany with archival and/or oral history research. A study of pru, a fermented beverage characteristic of Eastern Cuba known there as "root champagne," asked pru producers and herbal medicine merchants questions, similar to those in this study, about the drink's production, consumption, origin, and history (Volpato and Godinez, 2004). However, the authors noted that the literature did not offer conclusive evidence about the drink's origin or its development over time.
Interestingly, one of the plant components in pru was Smilax domingensis Willd., a species closely related to the two Smilax species found in Jamaican root tonics, and Cubans also considered pru a blood purifier (Volpato and Godinez, 2004). The scientific literature, as well as advertisements and consumer views of commercial root tonics, have popularly described them as aphrodisiacs or bitter medicines (van Andel et al., 2012). However, this view may be too limited, since according to our study which was grounded in the informal economy, Jamaican root tonics are fermented beverages without a bitter taste profile that are consumed to sustain, strengthen, and treat the whole body, including the mind. In addition, oral history data associated with these beverages tells a complex story of survival and resistance that is deeply anchored in Jamaica's socio-cultural past and present. Dating back to the Transatlantic slave trade and gruesome forced labor on Caribbean plantations, Africans turned to nature and herbal medicines to fend off illness and to provide much needed energy, as well as physical and mental strength to survive. Today, according to their oral testimonies, rural Jamaican farmers, facing economic hardship and carrying out demanding manual labor without much help or technological tools (Sander and Vandebroek, 2016), continue to turn to root tonics to cope with hardship. Importantly, root tonics have complex and layered metaphorical meanings that go beyond the notion of survival. There exist parallel narratives of resilience, and of returning to, believing in, and recognizing one's cultural roots, referring to the traditions from the past and the African continent. This paper follows the definition of resilience as "the capacity and dynamic process of adaptively overcoming stress and adversity while maintaining normal psychological and physical functioning" (Wu et al., 2013). In the case of root tonics, oral testimonies from Jamaicans described how using root tonics kept older generations alive, strong, and healthy during and after escaping from enslavement. Today, root tonics are still regarded as a product of self-reliance. Furthermore, root tonics are prepared with plant parts, including roots, for a beverage that is "rooted in tradition" (Sobo, 1993), and directly linked to cultural heritage and the ancestors. Since these tonics are considered an "all in one" by people for either obtaining or maintaining an optimal status of well-being, their use is embedded in a holistic framework of health. This framework also considers the human body as an element within the larger natural environment, characterized by a symbolic transfer of strength from plants to humans. Study participants described some of the plants used in root tonics as particularly resistant and difficult to harvest, and they believed that these plants subsequently transferred their quality of strength to the human body when they were prepared and ingested. In addition, consumption of root tonics also represented a double symbolism of using elements of nature (the earth) to sustain sexual nature and to guarantee human procreation (Sobo, 1993). Unraveling narratives such as these offer a much deeper insight into the cultural importance of plants for people than that offered simply by an ethnobotanical inventory or tallying of plant use-reports. Who Developed Root Tonics in Jamaica and Who Continues the Tradition? 
According to the oral history data presented here, the preparation and consumption of root tonics is primarily an African tradition, which is in agreement with the literature (Volpato and Godinez, 2004;van Andel et al., 2012). In the case of pru in Cuba, the authors postulated that the beverage was either "invented and developed locally, " or "a tradition brought to Cuba" in the eighteenth and nineteenth century by Haitians, Jamaicans, and Dominicans, who worked in coffee and sugarcane plantations in Eastern Cuba (Volpato and Godinez, 2004). However, these authors also considered an Amerindian origin of pru, based on testimonies from pru producers, literature reports that the indigenous population in the Caribbean made fermented drinks of pineapple, and the observation that the name "bejuco de indio" [Gouania lupuloides (L.) Urb.], a plant component of pru, refers to Amerindian people (Volpato and Godinez, 2004). In Jamaica, Higman (2008) described how during slavery (African) sugar workers on plantations were sometimes allowed to drink sugarcane juice or "cane liquor" fermented with Gouania lupuloides, which is called "chewstick" (or in earlier texts "chawstick") there, to produce a "tolerable beer." Several participants in our study suggested a shared African-Amerindian origin of Jamaican root tonics, recalling cultural memories that both groups lived together in Jamaica in the past. The island's original Amerindian inhabitants, and later the Africans, have endured two waves of European colonization, first the Spanish (1509-1660), followed by the British (1655 until independence in 1962) . During the latter occupation, Anglo-Irish naturalist and physician Hans Sloane (1707-1725) wrote: "The Indians are not the natives of the island [of Jamaica], they being all destroy'd by the Spaniards, [....] but are usually brought by surprise from the Musquitos [sic] or Florida, or such as were slaves to the Spaniards, and taken from them by the English". He specified further that the Mosquitos (also known as the Miskitos) were "an indian people near the Provinces of Nicaragua, Honduras and Costa Rica". However, others have pointed out that British writers who held deep Eurocentric views and hardly ventured beyond the coastal plantations and the edge of mountains were likely simply unaware of the existence of surviving Taino or other Amerindian peoples living in Jamaica's remote interior mountain areas (Craton, 1982;Fuller and Benn Torres, 2018). The Amerindian influence on Jamaica's Traditional Knowledge Systems (TKS) has received very little attention thus far (see, for example, Payne-Jackson and Alleyne, 2004), and shared African-Amerindian ancestry has been a standing topic of debate and contention in Jamaica (Fuller and Benn Torres, 2018). On the other hand, one school of thought is that Maroon communities, who settled in the almost inaccessible mountains of the island's interior, where they successfully fought for independence from the British, were in touch or coexisted with surviving Taino Amerindians, who had also fled to these mountains since Spanish occupation (Payne-Jackson and Alleyne, 2004;Fuller and Benn Torres, 2018). What remains unclear, however, is whether in the past Amerindians prepared root tonic beverages from the Smilax species that they collected and sold to European colonizers throughout the larger Caribbean region, including what is now Central and South America. 
Sloane wrote: "I was informed that Sarsaparilla is very frequent and cheap up Rio San Pedro in the Bay of Honduras where are several Indian towns. There is brought into Jamaica great quantities of sarsaparilla, by trade with the Bay of Honduras, New Spain and Peru. It grows in all these places on the banks of the rivers, and in moist ground. The Spaniards think it makes the water of those rivers, where it grows wholesome" (Sloane, 1707-1725). However, there does not seem to exist immediate confirmation that root tonics are an Amerindian tradition; instead these tonics seem to be associated with Afro-descendant communities, as is the case in Cuba, the Dominican Republic, Suriname, and the Guianas. In French Guiana, Afro-Guyanese soak roots and bark in rum or vermouth (Guillaume Odonne, personal communication). Also, in Suriname (and Northwest Guyana), Smilax roots are soaked in alcohol in bottled mixtures together with bitter plants, or boiled in water and drunk as a tea (van Andel, 2000; van Andel and Ruysschaert, 2011). Although these bottled mixtures of wood and bark are known as "Black man's medicine," the Amerindian population in Guyana collects the plant ingredients that are used by Afro-Guyanese people (Tinde van Andel, personal communication). Literature records of Jamaican root tonics are scarce, and archival evidence about them seems non-existent. According to Sloane (1707-1725), "[The Africans] use very few decoctions of herbs, no distillations, nor infusions, but usually take the herbs in substance". Thus, either root tonics were not yet prepared in the eighteenth century, or Sloane was not privy to their preparation and use. Sloane did mention several fermented beverages, which he described as "cool drinks" or "diet-drinks," including "China drink" made with the two species of the genus Smilax that our study identified as important components of root tonics, the first a plant called China root (nowadays chainey root), Smilax canellifolia, and the second being sarsaparilla (Smilax ornata). Sloane prescribed these drinks as a regular treatment in his medical practice. He considered the Jamaican China root superior to the one Europeans imported from China, stating: "This is used for China roots, and yields a much deeper tincture than that of the East-Indies, whence I think it much better for the purposes to which it is employed, than that which is worm eaten coming from China, although [Willem] Piso [a Dutch physician and naturalist] seems to be of another mind" (Sloane, 1707-1725). Sloane added that the original China root became known by "Latins" in 1535, who learned it from China merchants, and that the Arabs knew it before the Europeans. Sarsaparilla, on the other hand, was obtained through trade with the Spanish colonies in the Americas, and was described in 1570 by a physician living in Mexico. Europeans thus knew of, and used, these two species since at least the sixteenth century. However, none of our study participants hinted at a possible European contribution to Jamaican root tonics. Charles Leslie, a Barbadian writer, noted in 1753 that cool drinks were also consumed by Jamaica's African population, although he did not mention any Smilax species: "Their [referring to Africans and Creoles] common drink is water; but they prefer cool drink, a fermented liquor made with chaw-stick, lignumvitae [Guaiacum officinale L.], brown sugar, and water" (Higman, 2008).
In present-day Jamaica, study participants considered Rastafari to be the knowledge keepers of root tonics. This is not surprising, given that the Rastafari movement emphasizes "returning to the (cultural) roots." Moreover, Rastafari celebrate "natural livity" (Dickerson, 2004). Root tonics embody a return to nature and natural solutions, since they are made with wildharvested species collected far away from the potential negative influence of chemical pollutants, which Rastafari consider as one of the main causes of modern diseases (Sobo, 1993). How Can We Improve the Local Development of Root Tonics as an Income-Generating Product for Subsistence Families? Based on interviews conducted in five of Jamaica's 14 parishes that represent different geographic areas of the island, our study showed that root tonics are drunk, prepared and sold across Jamaica, and that Jamaicans have detailed knowledge of their ingredients and processing. According to the oral history evidence, production and consumption of traditional root tonics reaffirm Jamaican cultural heritage and identity and celebrate local history. Resilience is an important aspect of Jamaican culture, and the concept of food as medicine is very popular, as is the desire to retain and build strength naturally (Sobo, 1993). The question that remains is how these beverages can be properly promoted as a cultural heritage product to a broader audience, including policy makers and gastronomes, and developed in a sustainable way for the benefit of local communities? Although in our interviews we did not specifically ask producers if they wanted to upgrade their production and sales of root tonics, our study found that the majority of interview participants (62%) were already selling (and preparing) these tonics on their own, without receiving any form of assistance or feedback. The development of a road map for a root tonics cottage industry thus presented itself as a logical applied extension of our ethnobotanical research, in order to provide recommendations to those producers who might be interested in upgrading their production in a sustainable way, at present or in the future. Given that artisanal root tonics reportedly can be enjoyed in moderation for their alleged health benefits by children and youth as well, pursuing a more formal production through a sustainable cottage industry could be an effective way to ensure that future generations retain access to this traditional knowledge while generating extra income. The current informality of artisanal production is not unique to root tonics, but is common in Jamaica, where it was estimated in 2006 that the economic activities of the informal sector represented 43 percent of gross domestic product (GDP) (MICAF, 2018). The root tonic producers in this study would most likely be categorized as micro enterprises, which the Jamaican government defines as an enterprise with total annual sales falling under $15 million JMD ($100,847 USD) and less than five employees. The MSME sector accounts for 80 percent of jobs within the Jamaican economy and at the time of research was considered to be a priority within the Ministry of Industry, Commerce, Agriculture and Fisheries (MICAF). However, many of the resources made available for MSMEs are only available to those businesses that are formally registered. The MSME & Entrepreneurship Policy of Jamaica incentivizes MSMEs to formalize their businesses in order to receive government and private sector support (MICAF, 2018). 
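As a quick arithmetic check, the sketch below reproduces the currency conversions and the per-millilitre price comparison quoted in this paper, using the stated rate of $148.74 JMD per $1 USD; the prices and volumes are those reported in the text, and the script itself is purely illustrative.

```python
# Minimal sketch reproducing currency and unit-price figures quoted in the text,
# using the paper's stated conversion rate of 148.74 JMD per 1 USD (August 12, 2020).
JMD_PER_USD = 148.74

def to_usd(jmd: float) -> float:
    """Convert a JMD amount to USD, rounded to two decimals."""
    return round(jmd / JMD_PER_USD, 2)

print(to_usd(1_000))       # 6.72      -> modal one-liter artisanal bottle (rural)
print(to_usd(2_000))       # 13.45     -> upper one-liter artisanal price (Kingston)
print(to_usd(350))         # 2.35      -> single-serving commercial bottle (148 ml)
print(to_usd(15_000_000))  # ~100847.1 -> micro-enterprise annual sales ceiling

# Per-millilitre comparison of commercial vs. modal artisanal pricing.
commercial_per_ml = 350 / 148      # ~2.36 JMD/ml
artisanal_per_ml = 1_000 / 1_000   # 1.00 JMD/ml
print(round(commercial_per_ml - artisanal_per_ml, 2))  # ~1.36 JMD/ml difference
print(round(commercial_per_ml / artisanal_per_ml, 1))  # ~2.4 times the artisanal price
```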
It is important that any road map for a sustainable cottage industry of root tonic producers emphasizes that power and ownership need to remain in the hands of the artisanal producers, given the many barriers these producers face, and the general mistrust between Jamaican people and formal institutions in the public and private sectors (Sobo, 1993). It will be important to provide tools, resources and support to these producers, while acknowledging that the knowledge and expertise belongs fully to them. Based on this premise, the initial road map we developed (Figure 7) should be viewed as a suggested starting point for a cottage industry that has not yet been established, but in which the waypoints, and therefore the map itself, will inevitably change as the industry develops and the socio-economic situation in Jamaica changes over time. Growing Production A recent study showed that the impact of soft skills training for entrepreneurs in Jamaica was somewhat positive over only a 3 month term, and only for men (Ubfal et al., 2020). However, the study suggested that business training for small enterprises may be more effective if it is specific to the business, encourages a proactive mindset, has hands-on training, focuses on personal initiative, includes SMART (Specific, Measurable, Attainable, Relevant, Time-based) goal setting training, addresses innovation, efficiency, and resilience, and is followed up by mentorship. Training of this type should be provided by the public sector for businesses operating both formally and informally so that the cost is approachable to entrepreneurs at all levels. In order to increase production, each producer will need to first complete a cost-benefit analysis, i.e., they will need to assess their current levels of production in terms of revenues and other benefits, direct expenses and other costs, production quantities and timelines. Doing so will allow for a known starting point from which producers can identify goals and track progress. Once this analysis has been completed, a simplified strategic plan that addresses the strengths, weaknesses, threats, and opportunities of the business with immediately actionable steps can be developed. It is important that the price of artisanal products reflects the time it took to produce the product, as well as the higher than average number of plant species ingredients used as compared to the industrial products on the market. Comparatively, the mean number of plant ingredients for artisanal products was 15, as compared to 9 for the sample commercial products. Given that the commercial products are currently priced higher than artisanal ones, there is room for artisanal producers to increase their price to some degree. For artisanal production, glass bottles of varying sizes are repurposed from a prior commercial product, usually alcoholbased, and are filled with the producer's root tonic. The original alcohol's label is either left on the bottle, or removed and not replaced with a new label for the root tonic. Jamaicans tend not to purchase food products unless they are confident of the safety of the food. Consumer confidence in the Jamaican market is something that can be established via purchasing from someone within your social network, and/or purchasing products from established businesses that have clear, detailed product information, including batch code, full list of ingredients, company information, and best by or expiration date. 
Since many root tonic producers operate within the informal sector, they do not use the labeling requirements set out by the Bureau of Standards Jamaica (BSJ). As a prerequisite toward expanding their distribution network, producers should strive to create a label for their products that indicates information about the producer, a list of main or common ingredients, how the product should be used, and how to store and consume the product. Producers should also consider their background story: What makes their root tonic special and why do they brew root tonics? Adding a label with these details will increase competitiveness by allowing producers to tell their stories without being physically present at the point of sale, which in turn will allow producers to expand their distribution channels. Once a producer is ready to enter the formal sector, they will need to ensure that their label is compliant with the requirements indicated by the BSJ. A finished product from a formalized business that abides by food safety regulations can attract a higher price due to the consumer confidence that is gained when they have awareness of product ingredients and that safety protocols are being adhered to. Distribution in Jamaica is difficult for any producer who lacks access to a vehicle, funding for transportation costs, and/or road infrastructures surrounding their production location. Even for those who do have sufficient access to resources and infrastructure, transport to urban markets from rural regions can be costly and time-consuming, with no guarantee that daily sales will cover costs. The Jamaican government is encouraging MSMEs, whether they operate in the formal or informal sector, to digitize their business. As such, MICAF is offering a free resource to MSMEs so they can create a website for themselves (Kolau, 2019). MICAF is also encouraging linkages between the tourism and agricultural sectors, and has partnered with the Ministry of Tourism to create a digital Agro-trading platform, called "ALEX, " that connects hundreds of small-scale farmers to consumers (Tourism Enhancement Fund, 2020). However, the ALEX platform is intended for farmers selling fresh produce rather than those working in the agro-processing sector. Platforms such as ALEX would be useful to artisanal root tonic producers, though it should be noted that many Jamaicans have a smartphone but do not subscribe to a data plan, so internet access is not always consistent or economically within reach. In the short term, root tonic producers can develop their distribution channels by utilizing their community networks to find roadside and market vendors and community shopkeepers who would be willing to sell their root tonics. Products could be transported via handcart, bicycle, or motorcycle as funds allow. As sales increase, production can be scaled up by increasing the number of 5-gallon pots to increase batch size. Growing Alliances There is currently a misconception held by the Jamaican public that root tonics are only an aphrodisiac, for men, and there is a general lack of public awareness of the myriad health benefits of the product. Public campaigning is generally effective in Jamaica, since the population is accustomed to seeing multimedia campaigns launched by public and private sector organizations. If artisanal producers in the root tonics cottage industry could band together to create a public education campaign, it could significantly grow their potential market. 
Since there are limited funds available, social media platforms would be the most costeffective tool to spread awareness. In order to differentiate the traditionally produced root tonics from industrial ones, the development and use of a visual aid such as a logo or certification mark would be helpful. Producers can also use cultural and agricultural events to circulate information about traditional root tonics. Annual events such as the Denbigh Agricultural, Industrial and Food Show would allow producers to set up stalls to allow the public to sample, purchase, and learn about their traditional products. Business cooperatives are not readily accepted in Jamaica, but the Government of Jamaica has identified the need for business clusters to enhance business development, competition, productivity, knowledge-sharing, marketing, and networking (MICAF, 2018). The public and private sectors will need to be creative in order to foster the spirit of collaboration, and an important first step would be to hear directly from current producers about the conditions in which collaboration would work for them, since not everyone will likely be comfortable sharing knowledge about their recipes, or process of harvesting and production. Producers will be better off if they build connections amongst themselves and with key people in the public and private sectors who can assist with advocating for recognition of root tonics as a biocultural heritage product, but this cannot be established without prior dialogue and consensus-building. Having some kind of cooperative between artisanal producers may also make it easier for these producers to achieve recognition for their traditional knowledge of root tonic production as IP. Having this IP recognition and protection would help to increase awareness of root tonics as a cultural heritage product, encourage interest in production from newcomers, clearly demarcate artisanal and commercial products, and enhance the ability for an artisanal industry to develop in an economically sustainable manner. Transitioning to the Formal Economy The Jamaican MICAF has already published an infographic road map consisting of four steps to assist potential small business owners in establishing a formal business (MICAF, 2019). The first step is to develop a business plan and obtain support from the Jamaica Business Development Corporation (JBDC). The next step is to register the business at the Companies Office of Jamaica (COJ). The third step involves receiving assistance from the Jamaica Intellectual Property Office (JIPO), and the fourth and final step is to meet business standards (e.g., for labeling), with help from National Compliance Regulatory Authority (NCRA) and Bureau of Standards Jamaica (BSJ). Once a root tonic business has been formalized, additional resources will become available to the owner for business development and support, depending on various factors, such as the type of enterprise, creditworthiness, scale of operation, and length of time in business. The cost and requirements to access these resources are varied. For example, the Development Bank of Jamaica (DBJ) offers a Voucher for Technical Assistance (VTA) program to assist formalized MSMEs who have not received a voucher within the prior 2 years in closing management gaps by strengthening managerial and administrative abilities with the aim of improving creditworthiness. The DBJ subsidizes 70% of the value of the voucher, and the business owner pays the balance (DBJ, 2020). 
Safeguarding Ecological Sustainability

If the cottage industry for traditional root tonics grows in size, it will be imperative for producers to focus on conservation of plant species and the habitats where these plants grow. If established producers are willing, they can teach newer producers to harvest in ways that ensure these species grow back, for example by hosting periodic field or "in-the-bush" workshops, taking on apprentices to whom they can transmit traditional knowledge directly over a period of time, and/or by creating reference materials for those getting their root tonic production lines off the ground. Because it is important that newcomers to the industry understand the need for sustainable harvesting practices, apprenticeship training over a sustained period will likely be most effective in promoting the continued ecological sustainability of the cottage industry. Currently, the majority of plant species used in traditional root tonic production are wild-harvested. Internationally, the majority of medicinal and aromatic plant species (MAPs) and non-timber forest products (NTFPs) also continue to be wild-harvested. Standards for wildcrafting MAPs and NTFPs are included within a number of existing organic management and certification programs as a means to improve natural resource management and generate higher incomes for communities. The standards for wild collected, rather than cultivated, products are different, focusing on collection activities and the way they are carried out. The aim is to ensure that the collection methods are sustainable and do not damage the ecosystem and natural yield of the collected products (ITC, 2007). Examples of organic management and certification programs with provisions for wildcrafted products include the National Organic Program (NOP), overseen by the U.S. Department of Agriculture (USDA, 2020), and Ecocert, one of the largest international organic certification organizations (Ecocert, 2013). Non-organic initiatives also address wild collection practices. The World Health Organization (WHO) published a set of guidelines on good agricultural and collection practices (GACP) for medicinal plants in 2003 (WHO, 2003), and the International Standard for Sustainable Wild Collection of Medicinal and Aromatic Plants (ISSC-MAP) was developed by the Medicinal Plant Specialist Group (MPSG, 2007). In 2008 the FairWild Foundation was established to facilitate the global implementation of the ISSC-MAP standard and to ensure that wildcrafted products are produced in a socially and ecologically sound manner (FairWild, 2020). FairWild certification requires the active participation of stakeholder groups, including local communities, businesses, academic institutions, non-profits, and government institutions. FairWild represents one of the more rigorous of a number of voluntary sustainability standards (VSS), supporting sustainable production and trade, biodiversity conservation, and resilient rural economies. Ultimately, FairWild aims to reward communities who wildcraft for functioning as stewards of sensitive ecosystems (Yearsley, 2019). FairWild certification represents, perhaps, the best fit for the Jamaican root tonics industry, but would most likely require academic grants or private sector funding to offset the inevitable cost barriers to its successful implementation. In addition to sustainable harvesting, producers, in collaboration with scientists, can experiment with the cultivation of these wild plant species in order to encourage their growth in the producing region.
It will also be crucial for producers to engage with scientists to learn more about the conservation threats to the plant ingredients used in root tonics and their natural habitats, in order to better understand the specific ways in which they can promote ecological sustainability. The Caribbean islands represent a global biodiversity hotspot with high priority for conservation, since the region has a high degree of endemic plants and animals (occurring nowhere else in the world) and their habitats face significant environmental threats from anthropogenic activities such as agricultural expansion of high-value commercial crops, wood extraction, mining, and infrastructure development. Jamaica's level of plant endemism (34%) ranks third in the Caribbean islands, after Cuba (53%) and Hispaniola (44%) (Acevedo-Rodríguez and Strong, 2008). Forest cover change has been relatively well-documented in Jamaica, including in protected areas, and the island experienced net deforestation during 2001-2010 (Newman et al., 2018), although estimates of annual deforestation rates have been highly variable. A comparative regional paper indicated Jamaica as one of two countries with the greatest area of woody vegetation loss (minus 299 km 2 ) between 2001 and 2010 among all countries in the Caribbean (Aide et al., 2013). Future Research In order to refine the road map and continue with the development of a cottage industry, additional research can include market surveys of consumer trends, population surveys to better understand the perceived health benefits of root tonics, and laboratory studies to develop an evidence base about these health claims. In addition, further comparative research is needed into current production methods and tools used by traditional producers, as well as the production cost of root tonics, and the potential cost of more efficient tools, better packaging and labeling, transportation, and distribution. Sensory analysis and taste profile comparisons between commercial root tonics and traditional (artisanal) ones will be useful to differentiate better between these two types of products. Finally, marketing campaigns can be designed to explain the evidence-based health benefits and cultural heritage value of traditional root tonics to Jamaicans living on the island and in the diaspora. CONCLUSIONS Root tonics are fermented beverages, not bitters, that are consumed, prepared, and sold across Jamaica. The documentation of the oral histories of these tonics shows that there exists a wealth of traditional knowledge related to their use that conceptualizes and situates the functioning and wellbeing of the human body within the island's natural environment and history. This data contributes much-needed insights into the intricate and layered sociocultural meanings and origin of these beverages, information that has hereto remained undocumented in ethnobotany studies, which often tend to myopically focus on plant diversity and plant uses. Our study has revealed important new perspectives of root tonics beyond their aphrodisiac qualities, as food-medicines that have supported, and continue to support, the holistic health and mind-body equilibrium of Jamaica's Afro-descendant and wider population in the past and present. The strength-building qualities of these root tonics are embedded in a narrative of survival, resistance, and resilience that dates back to the history of Transatlantic slavery. 
Root tonics are thus rooted in tradition, and knowledge about these beverages has been passed along by African ancestors, Maroons, and others with close access to nature who searched, and continue to search, for plants that could transfer specific therapeutic qualities such as strength to the human body in times of need. Root tonics also embody a double symbolism of using elements of nature (the earth) to sustain sexual nature. The natural lifestyle that is at the core of the consumption of Jamaican root tonics is also at the heart of the Rastafari movement and religion, and it is therefore not surprising that Rastafari, who celebrate a return to the (cultural) roots of Jamaicans, are seen as the current knowledge holders. Future studies can examine archival ethnobotany records, to trace traditional knowledge about the use of individual plant species in root tonics over time, to learn about the health conditions these species were used for in the past, and to understand which cultural groups knew and used these plants. Untangling this complexity will help to better understand and promote Jamaica's rich biocultural heritage. Currently, most root tonics are prepared at home and sold in the informal economy. Using the oral history data in our study as a guide, we identified key considerations, barriers, and action points for the development of a sustainable cottage industry for these traditional producers. We then designed a roadmap based on four steps: Growing production, growing alliances, transitioning into the formal economy, and safeguarding ecological sustainability. The main premise of this roadmap is that a cottage industry for Jamaican root tonics should put the concerns and benefits of small-scale, artisanal producers at the center, and recognize and honor their IPR. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by The University of The West Indies, Mona. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. Written informed consent was obtained from the relevant individual(s) for the publication of any potentially identifiable images or data included in this article.
2021-02-24T14:22:55.197Z
2021-02-23T00:00:00.000
{ "year": 2021, "sha1": "932e17bc89fe2e87be9ce13c7d2f5c1ab9911561", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fsufs.2021.640171/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "932e17bc89fe2e87be9ce13c7d2f5c1ab9911561", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
211667790
pes2o/s2orc
v3-fos-license
The Buddhist Caves in Western Deccan, India, between the Fifth and Sixth Centuries This article examines the dynamics that led to the renaissance of Buddhist rock-cut architecture in Western Deccan between the fifth and sixth century. This was a transformative period in India as political, economic, and religious traditions underwent important changes; from a global perspective, this was also a time of tremendous international engagement both across the Indian Ocean and the northwestern regions of the Subcontinent. The artistic and architectural evidence from caves like Ajanta and Aurangabad will be examined in a global perspective, connecting these sites to the Buddhist networks leading to the Northwest of the Indian Subcontinent and Central Asia, and to renewed Indian Ocean trade. 1 Brancaccio, Aurangabad, 71-77. 2 Spink, Ajanta, 4. The present contribution explores the global dynamics that led to the renaissance of Buddhist rock-cut architecture in Western Deccan, India, between the fifth and sixth century, by focusing on the Ajanta region in particular. In this area, during the Vakataka dynasty and shortly after its downfall, Buddhist cave sites created at the beginning of the Common Era were greatly expanded or redecorated, and entirely new cave monasteries were established. New modes of patronage, religious values, and ritual forms swept through these Buddhist communities; the sponsors of Buddhist caves were no longer ordinary individuals but wealthy members of the ruling elites, and the iconography, layout, and conceptualization of new caves underwent significant transformations. It is my intention to show that this period of great activity in the Ajanta region was heightened by tremendous international engagement. The artistic and architectural evidence from caves like Ajanta and Aurangabad seems to connect these sites to the Buddhist networks that led to renewed Indian Ocean trade and to Central Asia. At the end of the fifth century CE, the Ajanta plateau became the hub of an unprecedented and cohesive movement of revival of Buddhist activity (Figure 1). At Ajanta, twenty-two new caves were added to the pre-existing nucleus of four dating to the first century BCE to the first century CE; four new caves were added at Aurangabad; the caves at Pitalkhora were refurbished with many paintings, and new cave sites were established at Ghatotkacha and Banoti. 1 The Ajanta inscriptions tell us that the patrons of the fifth century caves were the powerful members of the Vakataka ruling elite such as king Harishena, his minister and his feudatories. 2 For this reason, scholarship has always interpreted the resurgence of Buddhist activity at Ajanta and neighboring sites as a regional phenomenon linked to the prestige of a dominating group and to internal political strife. Yet at a closer look, it appears that much like in earlier times, the life of these rock-cut sites in the fifth century continued to be closely related to a network of commercial activities linked to Indian Ocean trade. The financial investment required to expand and maintain a Buddhist site of the size of Ajanta with hundreds of new monastic cells must have been enormous. The prominent sculptures of two plump dvarapala pouring large quantities of coins from a bag on the façade of Ajanta cave twenty-six, aside from their allusions to auspiciousness, are surely suggestive of the abundant monetary wealth that patrons invested at the site.
This is not a far-fetched interpretation: on the Cantamula I pillar from the site of Nagarjunakonda in Andhra Pradesh (end of the second century CE), the representation of a pile of coins is completed by an inscription associating the depiction on the pillar with an actual donation of gold coins in favor of the Buddhist establishment. As coins are a way of depicting the kind of wealth probably accumulated through trade rather than agriculture, De Romanis proposed that Cantamula's donation in Nagarjunakonda may have actually consisted of Roman gold coins (aurei) obtained through involvement in long-distance trade. 3 De Romanis, 'Aurei', 77. Given the poor numismatic visibility of the Vakataka rulers who were among the patrons of the Ajanta caves (it looks like they did not issue a significant coinage), one wonders whether the large amounts of coins represented on the façade of cave twenty-six at Ajanta may also refer to foreign gold coins. An interesting archaeological find from the Ajanta caves may shed some light on this issue. In 1999-2000, while clearing the banks of the river Waghora at the feet of the caves, the Archaeological Survey of India excavated what appeared to be the remains of a monastery built with bricks. In the archaeological deposit of the last phase of occupation of the monastery a gold coin was found: archaeologists identified it as being an issue of the emperor Theodosius II. This find from Ajanta confirms that fifth-century foreign trade linked to the Indian Ocean must have contributed to the growth of the Buddhist site. In fact, the Ajanta range, while distant from the coastal area, was located at the crossroads of important commercial itineraries linking the inland parts of the Deccan, producing semiprecious stones and cotton textiles, to different distribution centers: Ujjain to the north and the ports of Sopara, Kalyan and Baruch to the coast. As much as the growth of Buddhism along the Silk Road went hand in hand with the development of a commercial economy linked to silk trade, the flourishing of Buddhism in this part of the Deccan was probably associated with the production and trade of cotton textiles; in essence, we are looking at a system that could be referred to as the 'Cotton Road'. Cave sites were strategically positioned to take advantage of the trade routes, and were also situated to exploit agricultural areas. Long-distance commerce stimulated agricultural activities, linked to cotton production, as textiles became some of the most desired Indian exports on the international market. As early as the first century CE, the Periplus Maris Erythraei mentions the town of Ter, located in the vicinity of the Ajanta Plateau, as a place where 'large quantities of cloth of ordinary quality and all kinds of cotton garments' were produced to be shipped abroad across the Indian Ocean. 4 Kosmas Indikopleustes, writing about Indian Ocean trade in the early part of the sixth century, also stated that 'cloth for making clothes were exported from Kalyan in Konkan'. 5 This certainly implied that the fertile 'black cotton soil' or regar covering the upper parts of the Western Deccan was exploited throughout the centuries to satisfy the international demand for cotton products. Cotton from Western Deccan was shipped across the Indian Ocean in the centuries following the great explosion of Indo-Roman trade around the beginning of the Common Era.
In the excavations of the ancient Red Sea port of Berenike in Egypt, archaeologists identified more than two hundred fragments of Indian cotton. Among these are pieces identical in color and design to the ones represented in the Ajanta paintings, found in layers dating to the late antique period, postdating the third century CE. 6 The number and variety of beautiful textiles represented in the Ajanta paintings, the careful reproductions of designs and color schemes, and the depiction of women making cotton on the left wall of cave one are all elements that confirm that cotton fabrics were not only part of the local material culture, but also very likely produced in the region. Buddhism was also historically linked to the cotton industry in ancient India: a passage of the Bhikṣunīvināya of the Mahāsāṃghikas, a Buddhist school present in the Western Deccan since early on, tells us that nuns were involved in preparing and spinning cotton. 7 The paintings at Ajanta also portray a prosperous and multicultural environment populated by foreigners and foreign products which must have reached the region via the Indian Ocean. For example, the painting depicting the Buddha's descent from Trayastrimsa Heaven in cave seventeen shows many non-Indic types with different clothing styles, hair styles, and skin colors, which suggests that it may have been common to see people from different parts of Central Asia, Persia, the Middle East, and possibly East Africa. 4 Casson, Periplus, 83. 5 McCrindle, Christian Topography, 366. 6 Sidebotham, Berenike, 243-44. 7 Schlingloff, 'Cotton', 87. In addition to foreign people, foreign imports made their way to the Ajanta region and became an integral part of the ruling elite's material culture. In the Visvantara jataka painted in the porch of cave seventeen, a servant from Central Asia holds a metal ewer as he offers wine to a couple, while a servant below hands over a matching wine cup (Figure 2). The artists represented these foreign objects in great detail: the drinking paraphernalia appear painted in brown with thin strokes of white pigment to show that their surfaces reflected light, suggesting that the ewer and the cup were made of bronze or silver. Local elites must have used these imported ewers and cups to consume wine that was also imported from far away. We know that special vessels were imported just for the king since the time of the Periplus. 8 Casson, Periplus, 81. The Ajanta paintings from the end of the fifth century indicate that foreign wine was also still imported and in high demand at the time of the Vakataka king Harisena. There are many instances in which foreigners in the Ajanta paintings are represented in association with wine: the best-known instance is the so-called 'Persian Embassy' depicted on cave one's ceiling. In antiquity, wine was always transported in amphorae. Therefore scholars have interpreted all findings of amphorae fragments in India as markers of thriving Indo-Roman trade around the beginning of the Common Era. Several archaeological excavations in Gujarat, Western Deccan and South India have documented shards from such imported vessels. However, in recent times, a thorough re-examination of amphora shards from Maharashtra, Gujarat and South India carried out by Roberta Tomber from the British Museum revealed a surprising picture. 9 A large number of what were previously thought to be Mediterranean amphorae are in fact different types of vessels called Torpedo Jars.
They are originally from the Persian Gulf region and were made throughout the later Parthian and Sasanian periods (first to seventh century CE) up to the early Islamic periods (ninth century); they display a distinctive coarse whitish texture, a wide mouth, and no neck or handles. Most of the fragments that Tomber analyzed had a black internal coating just like the one found in Roman amphorae used to transport wine, and in fact some scholars suggest that within the Persian (Persian-Sasanian) world, these containers were used to transport wine. The highest concentration of Torpedo Jars appears on the west coast of India: in Gujarat, Maharashtra and especially in Konkan, where a very large number of Torpedo Jars have been found on the island of Elephanta. Tomber dates most of the torpedo jars found in India to the fifth and sixth centuries. 10 This means that the west coast of India experienced a peak in Indian Ocean trade in the fifth and sixth centuries comparable to the well-documented one in the early historic period. 9 Tomber, Indo-Roman Trade. 10 Tomber, 'Beyond Western India', 51-52. The Christian Topography by Kosmas Indikopleustes, compiled around 550 CE, speaks of many commodities plying the Indian Ocean, and describes the town of Kalyan in Konkan near Mumbai as one of the main trading ports. 11 Visual evidence from Buddhist caves located on the Ajanta plateau, a feeder area to transoceanic trade, confirms Kosmas's scenario. A fifth-century painting of the Purnavadana episode on the right wall of Ajanta cave two depicts a ship and its cargo. Other work suggests that the ship was a type of Indian vessel used in long-distance trade, much like others represented in the caves, and that the containers transported are water pots. A closer look suggests that these vessels are remarkably similar to the 'torpedo jars' used for transporting wine, plentiful on the island of Elephanta. 12 Furthermore, the protagonist in the Divyavadana is originally from the city of Surparaka, 13 identified with the ancient port of Sopara in Konkan, thus locating the episode in the context of Indian Ocean trade networks. A remarkable image from the caves at Aurangabad, near the Ajanta caves, seems to confirm the global reach of the Ajanta region (Figure 3). The porch of Aurangabad cave seven, more or less contemporary to the colossal Saiva rock-cut temple at Elephanta, is dominated by an imposing image of the Bodhisattva Avalokitesvara rescuing worshippers from the eight great dangers; a boat and its occupants are represented with great attention to detail. 14 We know from the ship's two masts that this image alludes to ocean sailing. Additionally, the presence of a foreigner wearing a pointed cap and a caftan confirms that the image refers to the Indian Ocean commercial system, where foreign agents were not an uncommon sight. The twenty-fourth chapter of the Saddharmapundarika sutra (Lotus Sutra), entitled 'The Exposition of the Miraculous Transformations of Avalokitesvara, the One Who Faces in All Directions', describes the dangerous scenarios illustrated in the beautiful rock-cut Avalokitesvara mentioned above. 11 McCrindle, Christian Topography. 12 Tomber, 'Beyond Western India', 51-52. 13 Rotman, Divyavadana, 85. 14 Brancaccio, Aurangabad, 160. Calling the name of this bodhisattva can save merchants at sea from shipwrecks, and protect those traveling in caravans from assaults by thieves.
Avalokitesvara can save those condemned to capital punishment and those threatened by the magic powers of pretas; he can also fulfill the wishes of women desiring offspring. The twenty-fourth chapter ends with a litany in verse, in part repeating the same miraculous interventions from the beginning of the chapter, in part a list of more remarkable rescues: Avalokitesvara protects those thrown off mountains, those hit by rocks or by a sword, those who are about to be executed or imprisoned, those who are victims of witchcraft or threatened by ghosts, and those surrounded by frightful beasts and snakes. The Lotus Sutra suggests that the cult of the bodhisattva Avalokitesvara was especially widespread among merchants and travelers, including those who travelled by sea. The invocations listed in the twenty-fourth chapter of this text include a rather long description of a shipwreck in Sri Lankan waters. Interestingly, some of the merchants in the Lotus Sutra carried precious items such as gold, lapis lazuli, emeralds, 'Musaragalvas', and corals, which are also enumerated in another passage of the same sutra as the 'seven precious treasures' that form a stūpa that magically appears to the bodhisattva Mahapratibhana. 15 As Xinru Liu notes, the 'seven precious treasures' mentioned in Mahayana texts consist essentially of precious items traded along the Silk Road and the Indian Ocean. 16 The Lotus Sutra, the image of Avalokitesvara's miracles, and the shipwreck described in its twenty-fourth chapter bring together the world of the Indian Ocean and that of the Central Asian mercantile communities. The geographic diffusion of images of Avalokitesvara performing the eight great miracles is very telling: these icons are absent from north India while the largest concentration can be found in the western Deccan caves and along the Silk Road at Dunhuang. 17 The Saddharmapundarika was surely popular both along the Silk Road and in Western Deccan. This text was translated many times into Chinese, the last time at the beginning of the seventh century by the monk Dharmagupta from Lata, an area relatively close to the Ajanta region. 15 Kern, Saddharmapundarika, XI, 228. 16 Liu, Ancient India, 97-102. 17 At Ajanta alone there are seven tableaux of the so-called litany; a painted one was in Pitalkhora cave no. 3, while the most complete example is the one carved in cave 7 at Aurangabad (figure 10). A later sculpture of the litany can also be seen at Ellora cave 3. In the region of Konkan, along the coast of Maharashtra, icons of this miraculous bodhisattva are found at the Buddhist site of Kanheri in caves 2, 41, and 90. Monks must have moved between India and China, and as monks traveled along routes well established by merchants, the diffusion of Astamahabhaya icons protecting merchants and travelers across these two worlds makes perfect sense. 18 Among the long-distance merchants at this time, the Sogdians deserve a special mention for their extensive mercantile network. They were especially involved in silk and textile trade, and between the fourth and the sixth centuries they were active in the upper Indus region. They were also recognized traders in Xinjiang and the Caucasus, and established direct commercial relationships with the Byzantine empire. 19 At the same time, they also engaged in sea trade and looked with great interest to Southeast Asian commercial networks.
The Chinese pilgrim Faxian mentions that in the fifth century, Sogdian merchants traded at Anuradhapura in Sri Lanka, and there are images of Sogdian donors at Buddhist sites in Thailand. 20 Many of the products that Kosmas Indikopleustes listed as crossing Indian Ocean commercial circuits were, in fact, those traditionally traded by the Sogdians in China. In letters recovered in 1907 by Sir Aurel Stein near Dunhuang, which may date to the fourth century, Sogdian merchants refer to handling linens, other unprocessed cloth, musk and pepper. 21 The Sogdians probably played an important role in the long-distance cotton textile trade; this network is the 'Cotton Road' mentioned above. Sogdian inscriptions on the upper course of the Indus river indicate that these Central Asian merchants, many of whom supported Buddhism, ventured south towards India in search of great profits; surely some of them sailed from Sind towards Kalyan and further south. To conclude, the evidence I discuss above is a convincing argument that the rebirth of Buddhist patronage at cave sites located in the Ajanta area during the fifth and the sixth century CE was far from a regional phenomenon. In fact, this patronage relates to far-reaching connections between the Western Deccan, the Indian Ocean network and the Silk Road.
2019-10-31T09:02:07.764Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "6d04f3253bbe15cb11794d832bb881aeff9ff652", "oa_license": "CCBY", "oa_url": "https://glorisunglobalnetwork.org/wp-content/uploads/2020/04/hualin1.2_brancaccio.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "119f029e5c4799e96e376b706b7c159a7defb785", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "History" ] }
36961284
pes2o/s2orc
v3-fos-license
Increased hepatic cell proliferation and lung abnormalities in mice deficient in CCAAT/enhancer binding protein alpha. CCAAT/enhancer binding protein α (C/EBPα) is a transcription factor that has been implicated in the regulation of cell-specific gene expression mainly in hepatocytes and adipocytes but also in several other terminally differentiated cells. It has been previously demonstrated that the C/EBPα protein is functionally indispensable, as inactivation of the C/EBPα gene by homologous recombination in mice results in the death of animals homozygous for the mutation shortly after birth (Wang, N., Finegold, M. J., Bradley, A., Ou, C. N., Abdelsayed, S. V., Wilde, M. D., Taylor, L. R., Wilson, D. R., and Darlington, G. J. (1995) Science 269, 1108-1112). Here we show that C/EBPα −/− mice have defects in the control of hepatic growth and lung development. The liver architecture is disturbed, with acinar formation, in a pattern suggestive of either regenerating liver or pseudoglandular hepatocellular carcinoma. Pulmonary histology shows hyperproliferation of type II pneumocytes and disturbed alveolar architecture. At the molecular level, accumulation of glycogen and lipids in the liver and adipose tissues is impaired, and the mutant animals are severely hypoglycemic. Levels of c-myc and c-jun RNA are specifically induced by several fold in the livers of the C/EBPα −/− animals, indicating an active proliferative stage. Furthermore, immunohistologic detection with an antibody to proliferating cell nuclear antigen/cyclin shows a 5-10 times higher frequency of positively stained hepatocytes in C/EBPα −/− liver. These results suggest a critical role for C/EBPα in vivo for the acquisition of terminally differentiated functions in liver, including the maintenance of physiologic energy homeostasis. dimerization (5). Several members of this family have been cloned from different species (for a review, see Refs. 6 and 7) that are able to both homodimerize and heterodimerize with each other and to bind the same C/EBP consensus DNA sites. These properties are reflected in the high degree of homology of the bZIP domain (8). By contrast, the transactivation and attenuation domains are located in the N-terminal part (9-12), where a relatively low level of homology exists (8). Thus, the members of the C/EBP family are able to bind the same DNA sites but differ in their transacting and attenuating properties. Expression of C/EBPα in rodents is restricted to certain tissues and cell types (8, 13, 14). High expression levels are detected in liver, white and brown adipose tissue, and placenta. C/EBPα is also expressed in the lung, mainly in differentiated type II cells (15), in the intestine where it is found in differentiated enterocytes (16), in the ovary during follicular development (17), and in myeloid cells (18). Furthermore, expression has been described in adrenal gland and skin (13), and at least in humans, also in pancreas and prostate (19). The basis for the restricted and differential expression pattern of the C/EBPα mRNA is regulation at the transcriptional level (14). The molecular mechanisms that control the expression of the C/EBPα gene have not been fully elucidated (20); however, the organization of the promoter has been studied in detail (21-23). The results from these studies suggest an autoregulatory mechanism that contributes to the control of the expression of this gene (22, 23).
Studies in 3T3-L1 adipoblasts have provided evidence for reciprocal expression patterns of C/EBP␣ and c-myc (24). It has been suggested that this is the result of negative regulation of the C/EBP␣ gene exerted by c-myc. Repression is mediated through interactions of c-myc with the initiator element of the C/EBP␣ promoter (25,26). There are an increasing number of genes known to be subjected to transcriptional regulation by C/EBP␣. Several of the target genes are expressed in a cell-specific manner, e.g. albumin in hepatocytes (27,28) and uncoupling protein in brown adipocytes (29). Furthermore, many target genes such as aP2 and SCD-1 (30,31), , PEPCK (33), aldolase B (34), and acetyl-CoA carboxylase (35) are involved in carbohydrate and lipid metabolism pathways. In addition, it has been demonstrated by two different sets of experiments that C/EBP␣ is of central importance in the process of terminal differentiation of adipocytes. First, it was shown that C/EBP␣ antisense RNA blocks the differentiation of preadipocytes into adipocytes (36,37). Second, overexpression of C/EBP␣ in different fibroblastic cell lines is sufficient to induce adipocyte differentiation (38,39). C/EBP␣ can induce growth arrest in adipocytes suggesting that it may play a role in the regulation of the balance between proliferation and differentiation (24,40). It is also conceivable that C/EBP␣ contributes to the maturation of hepatocytes and the maintenance of the differentiation state. Results from studies on regenerating liver after partial hepatectomy clearly show a dramatic decrease of the expression of C/EBP␣, of at least 5-fold, during the proliferative phase (41,42). Recently, growth arrest by C/EBP␣ was demonstrated by co-transfection of a C/EBP␣ expression vector in a variety of cell lines (12). The multiplicity of interactions of C/EBP␣ with a variety of regulatory elements of genes suggests a central role for C/EBP␣ in cell proliferation and differentiation and in key metabolic pathways. To ascertain the role of the C/EBP␣ protein in vivo, we have generated C/EBP␣-deficient mice by homologous recombination in ES cells. Here we report that C/EBP␣ is critical for the proper development of both the liver and the lung since animals deficient in C/EBP␣ display gross abnormalities in these organs and die within 10 h after birth. In accordance to a previous report (1), we also show that C/EBP␣ is indispensable for postnatal maintenance of systemic energy homeostasis and lipid storage. Furthermore, in nullizygous livers c-myc, c-jun, and ␤-actin steady state RNA levels and PCNA/cyclin protein levels are increased suggesting an active proliferative state. EXPERIMENTAL PROCEDURES Construction of the Targeting Vector-A genomic clone including the C/EBP␣ gene was derived from an OLA 129-GEM-12 genomic library by screening with a PstI/SstI 400-bp fragment representing the bZIP region of C/EBP␣ as a probe. A 7.3-kb EcoRI fragment from the 12.6-kb genomic clone was subcloned into a pBS Ϯ vector (Stratagene). The subcloned fragment contains sequences 3.4 kb upstream and 1.2 kb downstream of the C/EBP␣ gene. The 5Ј EcoRI site in the fragment is derived from the GEM-12 linker, whereas the 3Ј site represents a genomic EcoRI site. An 1133-bp XhoI/HincII fragment, containing the neoR gene driven by a TK promoter, was derived from the vector pMC1neo poly(A) (Stratagene), optimized for neoR expression in ES cells (43). 
This fragment was blunt-end ligated into the MluI site within C/EBP␣ in the opposite direction compared with the transcriptional direction of the C/EBP␣ gene. To enable negative selection against random integration, a 1854-bp XhoI/HindIII fragment containing the HSV thymidine kinase gene driven by a polyoma virus enhancer (44) was inserted into the HindIII/SalI site of linker, 5Ј of the homologous genomic sequence. Electroporation, Selection, and Screening of ES Cells-The targeting vector was linearized by PvuI digestion. 20 g was used to electroporate 20 ϫ 10 6 R1 ES cells (45) with a Bio-Rad Gene Pulser at 260 V and 500 microfarads. Transfected R1 cells were then plated on gamma-irradiated G418-resistant mouse embryo fibroblast feeder cells. Around 48 h after transfection selective media containing 675 g of G418/ml (50% active substance) was added. After another 24 h new media also containing 2 M Ganciclovir (Cymevene, Syntex Inc.) was added. New selective media (G418 ϩ Ganciclovir) was added daily, and after 10 days surviving clones were transferred individually to round-bottomed 96well plates, trypsinized, and then seeded onto two different flat-bottomed 96-well plates, one with feeder cells and one without feeder cells. The cells on the former plate were trypsinized prior to confluency, suspended in fetal calf serum, 10% dimethyl sulfoxide, and frozen in the plate, whereas the cells on the latter plate were allowed to reach confluency and then used for DNA extraction according to the microextraction method described by Ramírez-Solis et al. (46). DNA was then analyzed by Southern blot to screen for homologous recombination. In brief, microextracted genomic DNA was digested by 15 units/well of BamHI (high concentration, Life Technologies, Inc.) in a total volume of 50 l in a digestion mix including 100 g of bovine serum albumin/ml, 1 mM spermidine, and RNase A at 50 g/ml. Digested DNA was separated on 0.6% agarose gels, transferred to Hybond N membranes (Amersham Corp.), and hybridized to a 1.3-kb EcoRI/BamHI fragment (E1.3B), representing the region 3Ј of the genomic sequence in the targeting construct (Fig. 1D). The 1133-bp XhoI/HincII TK promoter-Neo-poly(A) fragment, derived from pMC1neo poly(A), was used as a probe to verify that only one copy of the targeting construct had been inserted into the genome. Generation of C/EBP␣ Ϫ/Ϫ Mice-R1 ES cells with one targeted C/EBP␣ allele were injected into C57BL/6 blastocysts and implanted into F 1 (CBA ϫ C57BL/6) foster mothers. Male chimeras were mated to C57BL/6 females to verify germ line transmission by coat color. Agouti offspring was screened for presence of the targeted C/EBP␣ allele by Southern blot on tail DNA, according to Laird et al. (47) using the E1.3B probe described above. Heterozygous offspring was then intercrossed to obtain homozygous mice with both C/EBP␣ alleles targeted. Analysis of Serum Glucose and Lipid Levels-Blood was obtained by decapitation and bleeding onto heparinized 35-mm cell culture plates. The blood was collected with a small rubber policeman, and 20 l was added to 400 l of 0.2% NaF, 0.9% NaCl solution. Blood was centrifuged, and the glucose concentration in the supernatant was measured in a Hitachi 917 spectrophotometer (Huddinge University Hospital) according to standard methods. Northern and Western Blot Analysis-Total RNA was isolated from liver using the Ultraspec RNA isolation system (Biotecx, Houston, TX), according to the manufacturer's instructions. 
20 g of total RNA/well was fractionated on 1% agarose/MOPS/formaldehyde gels and transferred to Hybond N filters. Prehybridization was performed at ϩ42°C in 50% formamide, 1.5 ϫ SSPE, 10 ϫ Denhardt's, 1% SDS, 0.5 mg/ml fragmented and denatured salmon sperm DNA, 0.2 mg/ml tRNA. Hybridization was performed at ϩ42°C with same buffer composition as above with the exception that Denhardt's was lowered to 5 ϫ and dextran sulfate was added to 10%. Probe concentration in the hybridization was 2 ϫ 10 6 cpm/ml buffer. Densitometric analysis of the Northern blot signals was performed using a Molecular Dynamics instrument and software. Liver nuclear lysates for Western blot analysis were prepared by adding an equal volume of 2 ϫ polyacrylamide gel electrophoresis reducing sample buffer to purified nuclei pellets (48) and then carefully sonicated to shear DNA and heated at ϩ100°C for 3 min. After correction for differences in protein concentrations between the samples, 20 l of the nuclear lysates were loaded on SDS-polyacrylamide gel electrophoresis minigels. Western blot analysis was then performed as described (42). Detection of total protein on the transfer membranes was used as a control of protein loading and was performed with the enhanced chemiluminescence protein biotinylation system (Amersham Corp.) according to the manufacturer's recommendations. Morphological and Immunohistologic Analysis-Tissues were fixed in 10% neutralized formalin or frozen in liquid nitrogen. Sections for analysis were prepared either with a cryostat or a microtome after paraffin embedding. Fixed and paraffin-embedded sections were stained with hematoxylin and eosin using standard protocols. Oil Red O staining of fat in liver and white and brown adipose tissue, periodic acid-Schiff staining of glycogen, and PCNA immunostaining (monoclonal antibody number 19A2, Innovex Biosciences) in liver were performed using standard protocols. Generation of C/EBP␣ Ϫ/Ϫ Mice-To inactivate the C/EBP␣ gene, a mutation was generated by inserting a HSV-TK promoter-driven neoR poly(A) ϩ gene into the unique MluI site at position ϩ701 in C/EBP␣. Within the C/EBP␣ gene there are several in frame AUG translation start codons, but only two AUG codons (ϩ130 and ϩ491) are used in vivo, giving rise to two proteins, p42 and p30 (49,50). Another AUG codon further downstream of the MluI site was shown to be used only when the other upstream AUG codons were deleted. By inserting the neoR gene into the MluI site in the opposite direction compared with the C/EBP␣ gene, we introduced stop codons in all three reading frames downstream of both AUG start sites. Thus, only truncated protein products, lacking the DNA binding and dimerizing domain (bZIP), are expected to be translated from the transcript of the targeted C/EBP␣ gene. Furthermore, since the nuclear localization signal resides in the bZIP domain, the produced truncated proteins will not be imported into the nucleus and will probably be rapidly degraded in the cytoplasm. Fig. 1, A and B, shows the construction of the targeting vector, including a flanking tk gene to reduce the number of ES cell clones with randomly integrated vector (44). Correct insertion of the neoR gene was verified by sequencing of the NeoR:C/EBP␣ boundaries (data not shown). As shown in Fig. 
1, C and D, homologous recombination between the targeting vector and the C/EBPα gene will result in the introduction of a new BamHI site within C/EBPα, resulting in a BamHI fragment of decreased size (5.7 kb) compared with the wild type 10.5-kb fragment. A representative Southern blot analysis of microextracted DNA from transfected and double selected R1 ES cell clones is shown in Fig. 1E. In total, seven positive ES clones were obtained with a frequency of homologous recombination of 1:38. The positive clones were expanded and analyzed further with a neoR probe. To substantiate that the event of homologous recombination had been correct, EcoRI-digested DNA was probed with an XhoI/HindIII neoR fragment from pMC1Neo poly(A) that resulted in the expected hybridizing 3′ fragment of 3.25 kb and one 5′ fragment of 7.3 kb. In addition, BamHI digestion resulted in the expected 5.7-kb fragment, verifying that only one copy of the targeting vector was present in the genome (data not shown). Two positive clones (S12 and S18) were used for blastocyst injections and generation of chimeric males. Germ line transmitting chimeric animals were derived from both targeted ES clones and were used to establish two independent heterozygous lines. The same phenotypic effects were observed in −/− animals derived from both lines. Heterozygous animals were interbred to obtain mice homozygous for the mutated alleles. Fig. 1G shows a Southern blot analysis, using the E1.3B probe, of a litter resulting from a representative heterozygote intercross. The outcome of the intercrossings shows that there is no significant negative selection against the mutation during embryogenesis since C/EBPα −/− animals were born approximately at the expected 1:4 Mendelian ratio. A small reduction in the numbers of −/− animals was observed, but this decrease was not statistically significant (27% +/+, 51.8% +/−, 21.2% −/−, n = 425; χ2 test, p > 0.10). Transcription of the Inactivated C/EBPα Locus Occurs in the Absence of the C/EBPα Protein-In addition to inactivation of the locus, our targeting strategy with insertion of the neoR gene in the opposite orientation within the C/EBPα gene was designed to test whether efficient and sustained transcription of the locus is possible without the positive autoregulation by the C/EBPα protein. In such a case a 3.9-kb transcript should be detected in C/EBPα −/− mice using either a C/EBPα or a neoR probe. The data presented in Fig. 1G demonstrate that a neoR:C/EBPα fusion transcript of 3.9 kb appears as a result of transcription of the targeted C/EBPα allele, not only in the +/− mice but also in the −/− animals. As expected, the fusion transcript is also detected by the neoR probe, as shown in Fig. 1H. The neoR gene was inserted into the MluI site in the opposite direction compared with the C/EBPα gene, and the tk gene was inserted 5′ of the genomic sequence. The neoR gene is driven by a HSV-TK promoter, whereas the tk gene is driven by a duplicated mutant polyoma virus enhancer (PYF441 Enh). C-D, homologous recombination between the targeting construct and the C/EBPα allele and the targeted allele resulting from homologous recombination. Small x in the mRNA represents stop codons introduced in the C/EBPα transcript by the insertion of the neoR gene. E, Southern blot analysis of transfected and selected R1 ES cell clones. The 10.5-kb wild type BamHI fragment from the nontargeted allele and the 5.7-kb fragment from the mutated allele are shown.
F, Southern blot analysis of a litter from a heterozygote intercrossing. DNA was digested with BamHI and probed with E1.3B. The genotypes are indicated above the lanes. ϩ/ϩ, wild type; ϩ/Ϫ, heterozygous; and Ϫ/Ϫ, nullizygous animals. G-H, Northern blot analysis of liver RNA from a litter resulting from a heterozygote intercrossing (same individuals as in F). Hybridization with a 400-bp probe of the C/EBP␣ bZIP region detects both the 2.8-kb wild type C/EBP␣ transcript and the 3.9-kb C/EBP␣:Neo fusion transcript (G). A neoR probe detects the 1.1-kb Neo(ϩ) mRNA, transcribed by the TK promoter and the fusion transcript in ϩ/Ϫ and Ϫ/Ϫ animals (H). show transactivation of the C/EBP␣ promoter by C/EBP␤ (22). Finally, the appearance of both bands in the heterozygous animals shows that both C/EBP␣ alleles are transcribed. To demonstrate that the targeted event resulted in total elimination of the C/EBP␣ protein, liver nuclei lysates were analyzed by Western blot analysis that was performed on pups from several litters. The results from one such experiment are shown in Fig. 2A, verifying that the C/EBP␣ protein is completely absent in livers from nullizygous animals. Two other members of the family, C/EBP␤ and C/EBP␦, were analyzed in parallel. As shown in Fig. 2, B and C, no overt differences could be demonstrated between the three genotypes. Most C/EBP␣ Nullizygous Mice Die in the First 10 h after Birth, but Some Die at Birth-Littermates in most litters showed no obvious differences at birth. However, in about 20% of the litters, most of the C/EBP␣ Ϫ/Ϫ pups died virtually at delivery. The majority of these born almost dead (BAD) mice died immediately after birth, while a few were able to survive for a shorter period of approximately 30 min. Dissection of these animals showed the presence of bubbles in their stomachs. BAD mice are, in general, born in large litters (9.4 Ϯ 1.6 pups, n ϭ 10 litters). The reason why this phenomenon only occurs in about 20% of the C/EBP␣ Ϫ/Ϫ mice and only in large litters remains unclear. As shown in Table I, there was a small but significant difference in body weight at birth between the pups. Normal and heterozygous pups had the same weight, whereas nullizygous pups weighed 10% less than their littermates. This weight difference is further augmented at 7-10 h where the ϩ/ϩ and ϩ/Ϫ pups have increased their body weight by 15-20%, while the C/EBP␣ Ϫ/Ϫ animals have not gained any weight at all. These nullizygous animals, although apparently normal at birth, gradually become weaker, and most of them were never able to start feeding. After 7-10 h severe symptoms of hypoglycemia, such as lethargy and shivering, were manifested, and these animals died soon after. Although some Ϫ/Ϫ pups clearly were able to start feeding, they did not survive more than 20 h. Dissection revealed that these mice had milk in their stomachs. Finally, in very few cases (less than 1%) nullizygous mice were able to survive for a considerably longer period of up to 4 weeks of age. These long term survivors are severely retarded in development. At around 2 weeks of age they are about half the size of their littermates. These animals are very thin and skin problems were observed with flaking from large areas of the body before fur outgrowth. This is a very rare but reproducible phenotype. 
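As a quick plausibility check of the Mendelian-ratio comparison reported above (27% +/+, 51.8% +/−, 21.2% −/− of n = 425 pups against an expected 1:2:1 distribution), the chi-square goodness-of-fit test can be reproduced in a few lines of Python. This is an illustrative sketch only: the exact genotype counts are not given in the text, so they are reconstructed here from the rounded percentages.

from scipy.stats import chisquare

n = 425
# Counts reconstructed from the rounded percentages; approximations, not the authors' raw data.
observed = [round(0.27 * n), round(0.518 * n), round(0.212 * n)]  # roughly [115, 220, 90]
expected = [0.25 * n, 0.50 * n, 0.25 * n]                         # 1:2:1 Mendelian expectation

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, df = 2, p = {p:.3f}")
# With these approximate counts, chi2 is about 3.5 and p about 0.18 (p > 0.10),
# consistent with the conclusion that the small deficit of -/- pups is not significant.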
Since C/EBP␣ has been shown to regulate several genes involved in carbohydrate and lipid metabolism, the dramatic effects on the early postnatal survival in nullizygous mice could be due to low blood glucose levels, similar to the phenotype displayed by the C 14Cos albino deletion mice (51). We therefore tested the blood serum levels of glucose in newborn litters and in litters at the time point when knockout animals usually die (7-10 h). The results from these tests, shown in Table I, clearly demonstrate that there is a dramatic drop in blood glucose levels some hours after birth in the Ϫ/Ϫ animals when compared with their ϩ/Ϫ and ϩ/ϩ littermates. Low glucose levels between 0.1 and 0.2 mM are detected within 1 h postpartum (data not shown), but the animals survive for an additional 6 -10 h. This is consistent with the results of an earlier report (1). We have also investigated the serum lipid levels at birth and around 10 h postnatally. No statistically significant differences for either cholesterol or triglyceride levels were detected. C/EBP␣ Nullizygous Mice Have Disturbed Liver Architecture, Immature Pulmonary Phenotype, and Fail to Accumulate Lipids-Histologic analysis of litters from heterozygous intercrosses, performed either at birth or at 7-10 h postpartum, revealed striking differences in liver, lung, and adipose tissues when Ϫ/Ϫ animals were compared with the other two genotypes. The hepatic architecture of the nullizygous mutants was severely distorted with acinar formation. The liver morphology also had a clear resemblance to regenerating liver following partial hepatectomy or pseudoglandular hepatocellular carcinoma. The number of biliary canaliculi in the C/EBP␣ Ϫ/Ϫ liver is considerably higher compared with both ϩ/ϩ and ϩ/Ϫ liver (Fig. 3, upper lane). Hepatocytes from nullizygous animals appear to have a smaller cytoplasm/nucleus ratio, which may explain the dilated bile canaliculi. However, no choleostasis or bile thrombi were observed. Liver sections of littermates using periodic acid-Schiff stained for glycogen at birth and 10 h postpartum showed that the normal and heterozygous animals contained substantial amounts of glycogen, whereas the Ϫ/Ϫ mice had drastically decreased but detectable levels (data not shown). It has been suggested that the distorted architecture may be due to the deficiency in fat and glycogen stores in the cytoplasm that results in smaller hepatocyte volumes (1). An alternative explanation may be that genes involved in the cytoskeleton formation in hepatocytes are targets of C/EBP␣ regulation resulting in an imperfect threedimensional structure. Lipid accumulation in both white and brown adipose tissue is dependent on C/EBP␣ (36,37). To investigate the effects of C/EBP␣ deficiency in lipid storage, we performed histologic analysis of adipose tissue. In newborn mice there are detecta- ble fat depots in white adipose tissue localized to the inguinal region and considerable amounts in the brown adipose tissue. Although cells mass and general histologic appearance of the brown adipose tissue was not altered, the fat depot was greatly reduced in the C/EBP␣ Ϫ/Ϫ mice, as seen by Oil Red O staining (data not shown). Thus, unlike the situation in the liver, C/EBP␣ deficiency does not affect the overall brown adipose tissue morphology but is rather specific for the accumulation of lipids. All C/EBP␣ Ϫ/Ϫ mice displayed irregular pulmonary histopathology. 
Although location of the lungs of C/EBP␣ Ϫ/Ϫ mice was comparable with the normal littermates, embryonic, fetal, and neonatal development of the airways was abnormal (Fig. 3). In mutant mice the lungs showed hyperproliferation of type II pneumocytes and bronchiolization of the alveoli. The primitive-appearing lung resembles the appearance of the lungs of premature human infants. C/EBP␣ Ϫ/Ϫ mice, particularly of the BAD type, had clinical symptoms of a respiratory nature, suggesting a role for C/EBP␣ in the normal development of the lung. However, in spite of the immature histologic phenotype of the mutant lungs, we did not detect any significant changes at the protein expression level of some key respiratory epithelial cell-specific molecular markers. Thus, unlike the pulmonary pathology observed in TGF-␤3 and GM-CSF nullizygous mice (52, 53) C/EBP␣ deficiency did not alter the expression patterns of surfactant protein C (proSP-C), thyroid transcription factor-1 (TTF1), and Clara cell secretory protein (CC10). 2 Although we did not see any effects on the expression of these important proteins that are markers for a highly differentiated state of the lung, the observed respiratory problems could be explained by an inadequate expression of other surfactant apoproteins than SF-C. Interestingly, the promoter of the major surfactant protein, SP-A, was recently shown to contain three potential C/EBP␣ binding sites (54). Since surfactants are lipoproteins, another possible explanation for respiratory dysfunction could be that the lipid production in the lung is affected by the C/EBP␣ deficiency, resulting in inadequate production of functional surfactant. Induction of c-myc and c-jun in the Liver of C/EBP␣ Ϫ/Ϫ Mice-The distorted liver architecture of the C/EBP␣ Ϫ/Ϫ mice is indicative of an active proliferative state. To investigate the molecular mechanisms that may underlie these differences, we analyzed the transcription rates and steady state levels of genes that may be important in maintenance of the balance between proliferation and differentiation in hepatocytes. We chose to test the expression levels of albumin and ␣-fetoprotein as markers for hepatocyte differentiation and tumor development, respectively. In addition, we tested ␤-actin, c-myc, and c-jun that correlate well with active cellular proliferation. Total liver RNA isolated at birth and several hours later was compared among littermates. A representative Northern analysis is shown in Fig. 4, and densitometric analysis of these experiments is shown in Table II. mRNA levels of albumin were reduced in C/EBP␣ Ϫ/Ϫ animals, especially in the livers of newborn animals. By contrast, ␣-fetoprotein expression levels were increased about 2-fold. This is indicative of a more dedifferentiated state of the C/EBP␣ Ϫ/Ϫ hepatocytes. Levels of the ␤-actin RNA were induced by 3-fold suggesting a proliferative state. However, the most predominant change, in addition to changes in the expression of genes involved in glycogenesis that were demonstrated in an earlier report (1), was the induction of c-myc and c-jun RNA (Fig. 4 and Table II). Steady state c-jun RNA levels from livers of both newborn and 7-h-old C/EBP␣ Ϫ/Ϫ mice were increased by about 10-fold. Induction of c-myc RNA was pronounced in livers of 7-h-old Ϫ/Ϫ mice ( Fig. 4 and Table II). These findings are consistent with the patterns of expression observed in the early regenerating mouse liver. In addition, early stages of experimentally induced hepatocellular 2 J. 
Whitshett, personal communication. carcinoma in rodents display a similar expression pattern for these genes (55). Since C/EBP␣ and c-myc are reciprocally regulated in adipocytes (24) and hepatocytes (26), it is possible that c-myc induction is a direct effect of C/EBP␣ deficiency. However, a more likely explanation is that c-myc induction reflects the critical role of this molecule in mitogenesis and transformation (56) and, together with the c-jun induction, is indicative of the proliferative stage of the C/EBP␣ Ϫ/Ϫ hepatocytes. Because proliferating hepatocytes and hepatocellular carcinoma cells have induced levels of proliferating cell nuclear antigen (PCNA/cyclin) (57), we performed immunohistostaining with an antibody to PCNA/cyclin. The results presented in Fig. 5 illustrate a 5-10 times higher frequency of positively stained hepatocytes in C/EBP␣ Ϫ/Ϫ liver, further supporting the notion that an increased portion of the nullizygous hepatocytes are in the G 1 /S phase of the cell cycle. Thus, loss of C/EBP␣ has an effect on the proliferative potential of hepatocytes in vivo. DISCUSSION C/EBP␣ Is Essential for Postnatal Survival-We have inactivated the C/EBP␣ gene in mice by the introduction of stop codons downstream of the two AUG translation start sites used in the C/EBP␣ gene. These stop codons are situated within the neoR gene sequence that was inserted in the opposite direction to the C/EBP␣ transcription unit. Thus, this manipulation will not result in a gene inactivation at the transcriptional level but rather at the translational level. This targeting strategy has enabled us to obtain new information about the mechanisms of transcription of the C/EBP␣ gene. First, both alleles of the C/EBP␣ are actively transcribed since two transcripts of different sizes are detected in heterozygote animals. Thus, we conclude that the C/EBP␣ gene is not imprinted in mice. Second, the presence of a C/EBP␣:NeoR fusion transcript in the nullizygous liver indicates that the previously suggested autoregulatory mechanism of the C/EBP␣ gene can be substituted by other C/EBP members. It is likely that C/EBP␤ is responsible, since this C/EBP family member has been shown to transactivate C/EBP␣ promoter-reporter constructs in vitro (22) and is expressed at high levels in nullizygous liver (Fig. 2). The steady state level of the C/EBP␣:NeoR fusion transcript is lower compared with the C/EBP␣ wild type transcript in heterozygote liver. This may be due to a less efficient transcription of the mutated allele or the fusion transcript is more unstable. Alternatively, C/EBP␣ may not be required for its own expression, but rather it may augment transcription from its own promoter through interactions with other transcription factors, as suggested for the human gene (23). C/EBP␣ deficiency may result in lower levels of C/EBP␣ gene expression. Mice nullizygous for C/EBP␣ are born to term. We have not found any significant deviation from the expected Mendelian ratio of nullizygous mice in the litters. This is different compared with a recent report (1). Perhaps this reflects the different genetic background of the back crossings of the two nullizygous mutants. Another remote possibility is that in the study of Wang et al. (1) C/EBP␣ gene inactivation was accomplished by the deletion of the entire C/EBP␣ coding region plus 2.4 kb of upstream sequences. 
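Fold inductions such as the roughly 10-fold increase in c-jun RNA quantified in Table II, and the comparison of fusion-transcript versus wild-type transcript levels discussed above, rest on densitometric quantification of Northern blot signals. The following is a minimal sketch of that arithmetic only; the signal values are hypothetical placeholders, and normalizing each lane to a reference signal is an assumption, since the text does not describe how the densitometric readings were normalized.

def fold_change(signal_mut, signal_wt, reference_mut=1.0, reference_wt=1.0):
    """Normalize each densitometric band signal to a reference signal from the same lane,
    then express the mutant (-/-) level relative to the wild-type (+/+) level."""
    return (signal_mut / reference_mut) / (signal_wt / reference_wt)

# Hypothetical densitometry readings in arbitrary units (not values from Table II).
print(fold_change(signal_mut=50.0, signal_wt=5.0))   # 10.0, i.e. a ~10-fold induction
print(fold_change(signal_mut=30.0, signal_wt=60.0))  # 0.5, i.e. a 2-fold reduction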
This deletion could in turn have affected important regulatory elements associated with other genes important for embryogenesis, particularly since a strong DNase I-hypersensitive site is located in this region (20). Disturbed Liver Architecture-The effects of the C/EBPa gene inactivation are dramatic but are not manifested until after birth, when metabolic functions characteristic for the differentiated liver are initiated in the newborn animal. C/EBP␣ nullizygous mice begin to runt and die shortly after birth. About 20% of the mutant mice die consistently at birth or within 30 min postpartum while the rest survive only for a period of 7-10 h. All mice show gross liver histologic abnormalities and hypoglycemia and failure to accumulate lipids or fat. The drastically reduced amounts of stored glycogen and fat in liver and adipose tissues observed in newborn nullizygous mice are highly likely to be an important reason for the weakness of these animals and is probably the reason for the lower body weight at birth. Stored energy fuels, which are normally built up prior to birth in liver and adipose tissues, are very important glucose sources for the newborn animal before the suckling period. Furthermore, the ability to realize gluconeogenesis is necessary for maintaining energy homeostasis. Genes coding for PEPCK, glucose-6-phosphatase, and tyrosine aminotransferase, all highly expressed in liver, are crucial for gluconeogenesis. We have found that the transcription rate of the PEPCK gene, a known C/EBP␣ target gene, is reduced to approximately 30% in C/EBP␣ Ϫ/Ϫ liver (data not shown). Thus, inadequate gluconeogenesis is likely to be another reason for the rapidly appearing low blood glucose levels in C/EBP␣ Ϫ/Ϫ mice. This hypothesis is further substantiated by the fact that another gene involved in the reversible gluconeogenic/ glycolytic pathways, aldolase B, displays a transcription rate in C/EBP␣ Ϫ/Ϫ liver that is only about 30% of the rate found in ϩ/ϩ and ϩ/Ϫ liver (data not shown). Interestingly, aldolase B has recently been shown to be a target gene for C/EBP␣ (34). The maternal glucose in the blood of the newborn C/EBP␣ Ϫ/Ϫ animals is likely consumed very rapidly, since we detect very low blood glucose levels in these mice already 1-h postpartum. The apparent lack of energy makes the nullizygous mice so weak that they very often are unable to start suckling. Even individuals that are able to start some feeding do not generally survive for any longer periods. This might indicate metabolic dysfunctions other than storage of energy fuels and gluconeogenesis. It has been suggested that severe hypoglycemia is the primary reason for death (1). However, glucose injections cannot rescue the mutant mice for more than a couple of days (1). Problems with absorption of nutrients do not appear to be likely, because histology of the gut does not reveal any abnormalities, 3 a view that is supported by a previous analysis (1). Thus, the exact mechanisms underlying the death of C/EBP␣ Ϫ/Ϫ animals is not completely clear. The fact that triacylglycerol stores are not found in either liver or in adipose tissue suggests that genes important for lipid accumulation, expressed in both these tissues, may be the targets of C/EBP␣ regulation. We envisaged that the identification of such target genes regulated by C/EBP␣ will reveal the mechanisms of action of this molecule in lipid storage. The effects of C/EBP␣ gene inactivation are pleiotropic. 
As mentioned under "Results," in very rare cases C/EBP␣ Ϫ/Ϫ animals are able to survive for longer periods of up to 4 weeks. These mice will be the subject for further analysis, enabling studies of the effects of the C/EBP␣ gene inactivation in later stages of mouse development. For instance, the brain would be a tissue of interest since C/EBP␣ expression has been shown to appear first a few weeks postpartum (14,58) and the Aplysia C/EBP has been implicated in long term potentiation of neurons (59). In addition, it has been postulated that C/EBP␣ may play a role in keratinocyte development (13). The long term C/EBP␣ Ϫ/Ϫ survivors may provide some information on this aspect since the few individuals that survived that long appear to have skin problems. Retarded Pulmonary Development-The C/EBP␣ nullizygous mice also differ from their littermates in that they exhibit retarded pulmonary development. In particular, hyperproliferation of type II pneumocytes is clearly visible in neonatal lungs. About 20% of the C/EBP␣ nullizygous mice die within 20 -30 min after birth, apparently from respiratory failure. However, all nullizygous animals showed histologic evidence of delayed maturation of type II epithelial cells. This phenotype is analogous to the pulmonary effects of targeted disruption of a homeodomain gene, GSH-4 (60), TGF-␤3, and GM-CSF nullizygous mice (52,53). Recent evidence suggested a role for C/EBP␣ in the development and maintenance of the surfactant system in lung type II cells (15). Nonetheless, since both C/EBP␤ and C/EBP␦ are expressed in the lung, it is possible that these proteins compensate for the loss of C/EBP␣ in proper regulation of surfactant protein genes in lung epithelial cells. By contrast, deficiency of lipid production in alveoli, due to the absence of C/EBP␣, may interfere with inadequate production of functional surfactant molecules. This could contribute to the respiratory problems associated with the nullizygous mice and may be the primary reason for the cause of the immediate death observed in C/EBP␣ Ϫ/Ϫ animals of the BAD type. Induced Hepatic Proliferation in C/EBP␣ Ϫ/Ϫ Mice-Earlier experiments suggested that the C/EBP␣ gene product may be a component of the balance between proliferation and differentiation (24, 40 -42). The loss of C/EBP␣ results in a dramatic induction of c-myc, c-jun, and ␤-actin RNA in the liver. These genes correlate well with active cellular proliferation. Histology of the liver shows that hepatocytes in the nullizygous mice appear healthy with smaller cytoplasm/nuclei ratio. The morphology of the Ϫ/Ϫ liver is indicative of either regenerating liver or pseudoglandular hepatocellular carcinoma. PCNA/cyclin immunostaining experiments that demonstrate excessive accumulation of PCNA/cyclin in C/EBP␣ Ϫ/Ϫ hepatocytes further support the notion that a substantial portion of the nullizygous hepatocytes are in the G 1 /S phase of the cell cycle. Taken together these data suggest a role for C/EBP␣ as "orthogene" necessary for acquisition and maintenance of the differentiated hepatocyte phenotype. However, heterozygous mutants do not show, so far, any evidence of hepatocellular 3 P. Flodby and K. G. Xanthopoulos, unpublished data. carcinoma formation. It is clear that some of the activities of the C/EBP␣ gene are compensated for by other members of the C/EBP family (i.e. as is the case of activation of C/EBP␣ gene promoter). 
However, the severity of the C/EBPα -/- phenotype certainly suggests that this protein is indispensable for many other critical functions. The availability of this mutant and, eventually, of a tissue-specific C/EBPα knockout line will greatly facilitate our understanding of the function of this critical molecule in vivo.
Changes in Documentation Due to Patient Access to Electronic Health Records: Protocol for a Scoping Review Background: Internationally, patient-accessible electronic health records (PAEHRs) are increasingly being implemented. Despite reported benefits to patients, the innovation has prompted concerns among health care professionals (HCPs), including the possibility that access incurs a “dumbing down” of clinical records. Currently, no review has investigated empirical evidence of whether and how documentation changes after introducing PAEHRs. Objective: This paper presents the protocol for a scoping review examining potential subjective and objective changes in HCPs documentation after using PAEHRs. Methods: This scoping review will be carried out based on the framework of Arksey and O’Malley. Several databases will be used to conduct a literature search (APA PsycInfo, CINAHL, PubMed, and Web of Science Core Collection). Authors will participate in screening identified papers to explore the research questions: How do PAEHRs affect HCPs’ documentation practices? and What subjective and objective changes to the clinical Background Electronic health records (EHRs) are common in almost all areas of health care and are an indispensable tool for saving and sharing information between health care professionals (HCPs) [1]. A more recent development focuses on opening clinical notes or entire EHRs for patients [2] and their proxies [3][4][5][6][7]. These so-called patient-accessible electronic health records (PAEHRs) are well established internationally, especially in Scandinavian countries and the United States, but are not yet fully embedded even among high-income countries [8][9][10][11]. An essential part of patients' online record access (ORA) through a PAEHR is access to the clinical free-text notes written by clinicians. Giving patients access to these notes is often referred to as "open notes" in the literature [2]. Research shows that HCPs are often skeptical about giving patients ORA [12,13], and many of their concerns relate to how PAEHR use might impact their clinical routines, workload, and patient safety [11,[13][14][15][16][17][18][19][20]. Regarding documentation, many HCPs anticipate changing the content and tone of their notes when patients have ORA, which, it is feared, might ultimately compromise the integrity of their records [11,19,21]. For example, a tendency to avoid technical terminology to facilitate patient understanding could have a negative impact on multidisciplinary communication within the team [21][22][23][24]. Also, some HCPs feel that they may be less detailed or less candid in their documentation and need to omit information or even start using parallel documentation (a "shadow record") to protect patients from information that they consider as potentially harmful or disruptive [14,15,20,[25][26][27]. In contrast, however, there are other voices that assume the introduction of PAEHRs could make notes more patient-friendly by using a more patient-centered and less stigmatizing language and could also stimulate communication between HCP and patients [22]. As Blease et al [11] noted, while most studies explore subjective changes after introducing open notes, there are few studies demonstrating objective changes, and where these studies exist, they offer inconclusive results [22,28,29] and are often hampered by methodological limitations. 
There is a growing body of qualitative research [30] as well as research using natural language processing approaches to explore the language used by clinicians in their records, including the potential for stigmatizing language [31]. However, it is unclear from these studies whether access affects, or indeed, even improves the quality of recordkeeping with the knowledge that patients may read what the clinician has written [31]. Despite the increasing scientific interest and debate within medicine, little is currently known about how far sharing EHRs with patients affects clinical documentation [11,32]. Study Objectives The objective of the proposed scoping review is to identify, collate, and evaluate possible changes of documentation after implementing patient access to EHRs. The scoping review focuses exclusively on studies including postimplementation data, such as experiences of the stakeholders (HCPs, patients, policymakers and designers of EHRs or patient portals), while excluding expectations prior to implementation. As outlined, HCPs are often reluctant, or even critical of giving patients ORA, and expect an additional documentation burden through their introduction. To address these obstacles to implementation, it is timely and appropriate to review the existing body of literature and summarize what is currently known regarding PAEHR documentation change. This scoping review is intended to increase knowledge for stakeholders about the kinds of documentation changes that might arise with PAEHRs, illuminate how this relates to documentation practices, provide recommendations for future clinical practice, and identify further research gaps. Scoping Review Compared with the systematic review method, which is guided by a strongly focused research question, a scoping review aims to open up the spectrum of the available evidence on a relatively new field of research, so that its breadth and depth become visible [33]. We will conduct a scoping review following the framework proposed by Arksey and O'Malley [33]. Their approach consists of the following five stages: (1) identifying the research question, (2) identifying the relevant studies, (3) selecting eligible studies, (4) collecting data, and (5) summarizing data and synthesizing results. The review will be reported following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Extension for Scoping Reviews checklist [34,35]. Any subsequent modification of the study design will be highlighted in the final publication, which is aimed at being published in a JMIR journal. Stage 1: Identifying the Research Question Through discussions with the research team, we decided on the following research questions: Does clinical documentation change after the introduction of ORA for patients? If so, what objective and subjective changes arise after PAEHR implementation? By objective, we mean such differences that can be demonstrated by a direct quantifiable comparison of clinical notes before and after implementation of PAEHRs. By subjective, we refer to clinicians' perceptions of how they write their notes after PAEHR implementation. In the context of this scoping review, we define PAEHR to be any channel in which patients have electronic access to their patient record (eg, through the internet or via patient portals and apps). Stage 2: Identifying Relevant Studies The process of identifying relevant studies is outlined in the flowchart (see Figure 1). 
The deduplication process will be carried out by an experienced research librarian at Uppsala University, Sweden. In advance, the research team will conduct a rigorous manual search to obtain a basic overview of the available evidence and to refine the scope of the review as well as the search strategy, as Popay et al [36] suggest. The literature search in the following 4 databases will be conducted by the librarian Malin Barkelind from Uppsala University: APA PsycInfo, CINAHL, PubMed, and Web of Science Core Collection. The search strategy was developed in collaboration with the Uppsala University library and consists of three key concepts: (1) EHRs, (2) sharing EHRs with patients, and (3) changes in documentation, which were combined with the Boolean AND (Textbox 1). The search terms were adapted according to different databases. The complete search string is stored in Multimedia Appendix 1. In addition, we will include individual relevant records from the hand search conducted previously.
Electronic health record search string
• "inpatient portal*" OR "open notes" OR opennotes OR PAEHR OR "patient portal*" OR "patient web portal*" OR "Electronic Health Records"
• "clinic notes" OR "clinical notes" OR "progress notes" OR "doctors notes" OR EHR OR "health record*" OR "health care record*" OR "medical record*" OR "mental health notes" OR "patient record*" OR "psychiatric notes" OR "psychotherapy notes" OR "visit notes"
Sharing electronic health records with patients search string
• "guardian access" OR "parental access" OR "parents access" OR "patient access*" OR "patients access*" OR "patient online access" OR "patients online access" OR "proxy access" OR "shared medical record*" OR "shared health record*"
Inclusion and Exclusion Criteria Inclusion and exclusion criteria (Textbox 2) were defined by the entire research team and will be applied in the study selection process. Due to the limited number of publications available on the subject, there will be no restrictions on the study type. As PAEHRs are only gradually being implemented in various countries, we will refrain from any location restrictions. A wide variety of approaches exist to make clinical notes available to patients electronically [37]. We will include all studies examining the actual implementation and the use of patient ORA regardless of the digital device used (eg, web-based and mobile apps). Studies exploring the sharing of hard copies of patients' clinical records will be excluded, as will gray data (websites, tweets, and blogs). Study Selection Process We will use Rayyan Software (Rayyan Systems, Inc) for conducting a collaborative, blinded title and abstract screening [38]. All members of the research team will participate in this process, and each record of the result set will be evaluated by at least 2 people. Discrepancies will be discussed, taking the full texts of the corresponding studies into account. In case of disagreements that cannot be resolved, a third reviewer will be involved and entrusted with the decision of including or excluding the study. Stage 4: Collecting Data After selecting the studies to include, metadata (eg, title, authors, and publication year) of the remaining records will be exported and summarized in a Google Sheets (Google LLC) spreadsheet for further processing.
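To make the planned export concrete, a minimal sketch in Python is given below; the field names beyond title, authors, and publication year, the file name, and the example row are illustrative placeholders only, not the protocol's actual extraction sheet.

```python
import csv

# Minimal sketch: write basic metadata of records retained after screening to a
# spreadsheet-style CSV file. Only title, authors, and publication year come from
# the protocol text; the remaining field and the example row are placeholders.
included_records = [
    {"title": "Placeholder study title", "authors": "Doe J; Roe A",
     "year": 2022, "screening_decision": "include"},
]

fieldnames = ["title", "authors", "year", "screening_decision"]

with open("included_records.csv", "w", newline="", encoding="utf-8") as handle:
    writer = csv.DictWriter(handle, fieldnames=fieldnames)
    writer.writeheader()                 # header row of the extraction sheet
    writer.writerows(included_records)   # one row per included record
```

Such a sheet can then be extended with the additional study-level parameters described below.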
To extract and organize relevant data from included studies, the spreadsheet will be extended by the following and other parameters based on the studies' full text: country, study design, sample, characteristics of study participants (eg, gender, age, ethnicity, type of stakeholder), treatment setting and medical specialty, and study purpose. Data extraction will be performed involving all members of the research team. Furthermore, the first author will check the data extraction for correctness and completeness. To assess the quality and methodological rigor of the studies, the Mixed Methods Appraisal Tool (MMAT) will be used [39]. Two researchers will independently conduct the MMAT grading of all studies and consent their results. If no agreement can be reached, a third independent researcher will be consulted. Narrative Synthesis Study results will be extracted from the full texts by the lead author and summarized in (1) a reduced format within a textbox, providing an overview of the findings from all included studies, and (2) a detailed version for narrative synthesis. The latter will be analyzed independently by at least 2 researchers using thematic analysis [40]. Objective and subjective changes of HCPs' documentation practices after the introduction of patient ORA will be used as guiding deductive themes and are informed by the research question but may change in the analytical process. As Levac et al [35] suggest, we aim to identify patterns and relationships within and across studies to identify potential factors influencing documentation after PAEHR implementation. In assessing the methodological rigor of the studies, we also envisage the potential to identify research gaps; for example, we predict there may be a preponderance of survey research investigating clinicians' perceptions about documentation changes rather than studies investigating objective markers of any such documentation changes. While the former studies may be useful, they may be compromised by responder biases. Results will then be discussed and approved by the entire research team. Assessing the Robustness of the Synthesis As Popay et al [36] state, the robustness of the narrative synthesis depends on the quality of included studies as well as on the trustworthiness of the synthesis. In order to minimize bias, we will conduct the study quality appraisal through MMAT to ensure that studies of equal technical quality are given equal weight. To provide a high level of trustworthiness in our review, reviewers will have detailed information about the eligibility criteria and the type of intervention (PAEHR) in order to provide sufficient information for replication. Ethical Considerations Since we will use only publicly available data material with the scoping review methodology, this study is not subject to ethical approval. Results The main results from our analysis will be presented in a narrative form, focusing on subjective and objective changes in clinical notes as well as on changes in HCPs' documentation after sharing EHRs with patients. Additional data on year, country, study design, characteristics of study participants, setting, sample, medical specialty, and study purpose will be presented in diagrams or tabular format. Discussion Research indicates that HCPs have concerns regarding the effects of ORA on clinical routines, workload, and patient safety as patients increasingly gain access to medical information on the internet [11][12][13][14][15][16][17][18][19][20]. 
Although a limited number of studies have examined alterations in documentation practices following the implementation of policies, these studies have primarily focused on subjective evaluations rather than objective assessments [11]. Furthermore, the few studies attempting to investigate objective changes in documentation have yielded inconclusive results that are further restricted by methodological limitations [22,28,29]. Although there are already a few reviews that address the use of patient ORA [32,[41][42][43], no review specifically addresses the potential changes to clinical documentation that may result from the implementation and use of PAEHR. Our scoping review aims to map current research into documentation changes and to potentially raise awareness among many different involved parties about the risks and opportunities of PAEHR use. A potential limitation of the study is a reduced depth of the analysis due to the broader nature of the scoping review. In addition, due to exclusion of gray literature, it is possible that some studies will be overlooked. Nevertheless, we defined our search strategy to identify the most comprehensive and high-quality evidence. Dependent on the findings, this study may offer important insights on how to support effective documentation practice in the future. For example, the findings may provide a basis for a widely demanded "documentation training" that could better prepare HCPs and patients on how to read and write notes [20]. The scoping review will strive to identify any existing research gaps and to indicate directions for further studies in this field. The growing body of evidence on natural language processing in relation to documentation changes after PAEHR introduction will be explored.
Electrostatic Instability in Electron-Positron Pairs Injected in an External Electric Field Motivated by the particle acceleration problem in pulsars, we numerically investigate electrostatic instability of electron-positron pairs injected in an external electric field. The electric field is expected to be so strong that we cannot neglect effects of spatial variation in the 0-th order distribution functions on the scale of the plasma oscillation. We assume that pairs are injected mono-energetically with 4-velocity $u_0>0$ in a constant external electric field by which electrons (positrons) are accelerated (decelerated). By solving linear perturbations of the field and distribution functions of pairs, we find a new type of electrostatic instability. The properties of the instability are characterized by $u_0$ and the ratio $R$ of the braking time-scale (determined by the external electric field) to the time-scale of the plasma oscillation. The growth rate is as large as a few times the plasma frequency. We discuss the possibility that the excited waves prevent positrons from returning to the stellar surface. Introducton Waves in homogeneous plasmas are well described by the linear perturbations that have the Fourier-harmonic dependence of the form exp [i(k · x − ωt)]. The properties of various wave modes have been extensively studied for various physical situations. Plasma instabilities, such as the two-stream instability, the Weibel instability (Weibel , 1959), and many others have been recognized as important processes in astrophysics as well as in laboratory situations. For inhomogeneous plasmas, the Fourier-harmonic dependence is not assured in a strict sense. If the wavelength is short enough compared with the scale of the inhomogeneities, we may neglect effects of the spatial gradient of the distribution functions of particles, and carry out the Fourier-harmonic expansion. We call this treatment the local approximation hereafter. When static electric field exists in a plasma, it accelerates particles and leads to inhomogeneous velocity distributions of particles. Wave properties in an electric field were studied in geophysical researches, adopting the local approximation (e.g. Misra and Singh 1977, Misra at al. 1979, Das and Singh 1982. In many high-energy astronomical phenomena, electric field is expected to be so strong that the local approximation is not adequately applied. Such a situation typically appears in the magnetosphere of pulsars. A spinning magnetized neutron star provides huge electric potential differences between different parts of its surface as a result of unipolar induction (Goldreich and Julian , 1969). A part of the potential difference will be expended as an electric field along the magnetic field somewhere in the magnetosphere. Although a fully consistent model for the pulsar magnetosphere has yet to be constructed, several promising models have been considered. Among them, the polar cap model (Sturrock , 1971;Ruderman and Sutherland , 1975) assumes that an electric field E parallel to the magnetic field lines exists just above the magnetic poles. The electric field accelerates charged particles up to TeV energies, and resultant curvature radiation from these particles produces copious electron-positron pairs through magnetic pair production. These pairs may provide gamma-ray emission by curvature radiation or synchrotron radiation as well as coherent radio emission and a source for the pulsar wind. 
The localized potential drop is maintained by a pair of anode and cathode regions. In the cathode region the space charge density ρ deviates negatively from the Goldreich-Julian (GJ) density ρ_GJ ≃ −ΩB_z/2πc, where Ω = 2π/T is the angular velocity of the star and B_z is the magnetic field strength along the rotation axis. On the other hand, ρ deviates positively for the anode. Outside the accelerator the electric field is screened out. In the polar cap model, especially for the space-charge-limited flow model (Fawley et al., 1977; Scharlemann et al., 1978; Arons and Scharlemann, 1979), where electrons can freely escape from the stellar surface, i.e., E = 0 on the stellar surface, the formation mechanism of a static pair of anode and cathode, which can sustain enough potential drop for pair production, is a long-standing issue. Current flows steadily along the magnetic field line, so that the charge density is determined by the magnitude of the current and the field geometry with suitable boundary conditions. Good examples of space-charge-limited flow are given in Shibata (1997). When ρ_GJ < 0 and the electron density (n ∝ B/v ≃ B/c, where B is the magnetic field strength) is larger than the GJ number density (n_GJ = |ρ_GJ/e| ∝ B_z, where −e is the electronic charge) on the stellar surface, a cathode is provided on the stellar surface. The cathode accelerates electrons. When the field lines curve away from the rotation axis, n deviates negatively from n_GJ, the more so for 'away' curvature, which enhances the cathode. Hence electrons continue to be accelerated, and the potential drop becomes large enough to produce pairs. The screening of the electric field, i.e., the provision of an anode, has been considered to be achieved by pair polarization. Although most papers take it for granted that copious pair production can instantly screen the field, recently Shibata et al. (1998, 2002) cast doubt on this issue; the electric field screening is not as easy a task as usually considered. Shibata et al. (1998, 2002) investigated the screening of electric fields in the pair production region. They found that the thickness of the screening layer is restricted to be as small as the braking distance l_E = m_e c²/|eE| over which decelerating particles become non-relativistic, where m_e is the electron mass. If the above condition does not hold, too many positrons are reflected back and destroy the negative charge region (cathode). In order to screen the electric field consistently, a huge number of pairs should be injected within the small thickness l_E. The required pair multiplication factor per primary particle is enormously large and cannot be realized in the conventional pair creation models. Thus, some other ingredients are required for the electric field screening. In the previous studies of the screening, pairs were assumed to accelerate or decelerate along the 0-th order trajectories determined by E. However, if an electrostatic (longitudinal) instability occurs, the excited waves may produce effective friction. Friction on particles changes the charge polarization process. Thus, instability in the presence of an external electric field may be relevant to this problem, which motivates us to make an exploratory study in this paper. Various instability mechanisms outside the accelerator have been studied for pair plasmas along with a primary beam in relation to coherent radio emission mechanisms (Hinata, 1976; Cheng and Ruderman, 1977; Asséo et al.,
1980, 1983; Lyubarskii, 1992). However, plasma instability inside the accelerator has not been studied. The Lorentz factor of the primary beam (Γ ≃ 10⁶-10⁷) is much larger than that of the electron-positron pairs (γ ≃ 10²-10³). In such a case, it is difficult to induce the two-stream instability. However, it is not clear whether pairs stably flow in the electric field E or not. For typical pulsar parameters, the braking distance is l_E ≃ 10⁻² cm, while the length-scale of plasma oscillation is (m_e c²/4πe²n_GJ)^{1/2} ≃ 1 cm (Shibata et al., 1998, 2002), where n_GJ = |ρ_GJ/e|. Particles are accelerated or decelerated in a period that is shorter than the typical time scale of plasma oscillation. Therefore, the distribution function is not uniform on the scale we consider. The local approximation is not adequate to deal with plasma oscillation. Properties of a pair plasma in such a strong electric field have not been studied. Such studies may bring us a new key to understanding astrophysical phenomena. One of our purposes is to examine if electrostatic instability makes the screening easier. As the first step toward this purpose, in this paper we investigate electrostatic instability of pairs injected in an external electric field. Investigations of electrostatic waves, when we cannot adopt the local approximation, may be important not only for pulsars but also for other high-energy astronomical phenomena. Since an analytical treatment is difficult in this case, we simulate electrostatic waves numerically in idealized situations. In §2 we mention the two-stream instability without an electric field for comparison. In §3 we describe the situation we consider. The most simplified physical conditions are adopted; the electric field, injection rate, and injection energy are constant. In §4 we explain our numerical method. Our method can treat only linear waves. Numerical results are summarized in §5. Our results show a new type of plasma instability due to an electric field. §6 is devoted to summary and discussion. Two stream instability with local approximation First of all, for reference, we consider one-dimensional (1-D) homogeneous flows of electrons and positrons in the absence of an electric field. The pair distributions are functions of the 4-velocity u = β/√(1 − β²), where β = v/c. For simplicity, we assume that the distribution functions are expressed in terms of the step function Θ(u) as f_0+(u) = C[Θ(u − u_1) − Θ(u − u_2)] and f_0−(u) = C[Θ(u − u_3) − Θ(u − u_4)] (equations (1) and (2)), where u_4 > u_3 ≥ u_2 > u_1 with constant u_i and C a constant. Electrons homogeneously distribute between u_3 and u_4 in u-space, while positrons similarly distribute between u_1 and u_2 (see Figure 1). As will be seen in the next section, the distributions in an external electric field that we will consider are similar to the above distribution functions. In the linear perturbation theory, the dispersion relation for electrostatic waves (Baldwin et al., 1969) is given by a sum over particle species of integrals over their distribution functions, where q_a and m_a denote the charge and mass of the particle species a. Solutions of the dispersion relation usually yield a complex frequency. The imaginary part of ω corresponds to the growth rate of waves, ω_i. A positive growth rate ω_i > 0 implies an exponentially growing wave, while a negative ω_i implies an exponentially damped wave. Adopting the distribution functions (1) and (2), we obtain a relation in which the subscripts i denote the corresponding values at u_i, ω̃ = ω/ω_p, k̃ = kc/ω_p, and ω_p² = 4πe²C/m_e. The above equation is reduced to a quartic one.
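The explicit equations (1)-(4) are not reproduced in this excerpt. As an illustration of the numerical confirmation described in the next paragraph, the sketch below assumes the standard 1-D electrostatic dispersion relation for such step-function ("waterbag") distributions, written in the dimensionless variables defined above (ω̃ = ω/ω_p, k̃ = kc/ω_p); clearing the denominators turns it into a quartic in ω̃ whose roots can be found directly. The function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dispersion_roots(u_edges, kt):
    """Roots w = omega/omega_p of the waterbag dispersion relation
        kt = sum_i s_i / (kt*beta_i - w),   s = (+1, -1, +1, -1),
    where u_edges = (u1, u2, u3, u4) are the edge 4-velocities and
    kt = k c / omega_p.  Clearing denominators gives a quartic in w."""
    u = np.asarray(u_edges, dtype=float)
    beta = u / np.sqrt(1.0 + u**2)                       # beta_i = v_i / c
    signs = [+1.0, -1.0, +1.0, -1.0]
    factors = [np.poly1d([-1.0, kt * b]) for b in beta]  # (kt*beta_i - w)
    poly = kt * factors[0] * factors[1] * factors[2] * factors[3]
    for i, s in enumerate(signs):
        partial = np.poly1d([1.0])
        for j, f in enumerate(factors):
            if j != i:
                partial = partial * f
        poly = poly - s * partial
    return poly.roots

def max_growth(u_edges, kts=np.linspace(0.05, 3.0, 120)):
    """Largest imaginary part of omega/omega_p over a grid of wavenumbers."""
    return max(float(np.max(dispersion_roots(u_edges, kt).imag)) for kt in kts)

# Separated streams (u2 < u3): complex roots appear at small kt (two-stream instability).
print(max_growth((1.0, 2.0, 4.0, 8.0)))   # > 0
# Continuous distribution (u2 = u3): roots are real up to roundoff (no instability);
# roots coinciding with kt*beta_i are spurious poles introduced by clearing denominators.
print(max_growth((1.0, 3.0, 3.0, 8.0)))   # ~ 0
```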
Although it is messy to obtain the solutions analytically, we can numerically confirm that ω̃ has complex values for a given k̃. The solutions obey the usual two-stream instability properties; the larger the difference in velocities of the electron and positron flows, the smaller the maximum growth rate and the corresponding k̃ become. When u_2 = u_3, electrons and positrons distribute continuously, though the average velocity is different. In this case, the dispersion relation is reduced to a quadratic equation whose solutions for ω̃ are real. Therefore, even though the velocities of the two flows are different, instabilities are not excited in homogeneous pair plasmas as long as electrons and positrons distribute continuously in u-space. This is because the absolute values of their charges and masses are the same for electron-positron pairs. As will be shown in the next section, the distribution of pairs injected in an external electric field is locally similar to the above distribution. However, we will show that an electric field induces instability. Pair injection in an external electric field In a strong magnetic field such as in the pulsar polar caps, the transverse momenta of relativistic electrons and positrons are lost within a very short time via synchrotron radiation. These particles move along the magnetic field lines and their distribution functions are spatially 1-D. In this section we consider 1-D distribution functions of electron-positron pairs injected in an external electric field that is parallel to the magnetic field lines. As the conventional theories for the two-stream instability implicitly assume, we neglect the toroidal magnetic field due to the global current of the plasma. Only within this treatment is the 1-D approximation adequate. In order to simplify the situation, we assume the external electric field E_0 is constant. In pulsar models, there exists a primary beam which produces the current flow. The external electric field is determined by the complicated combination of the beam current, the injected pair plasma, and the GJ density. A constant E_0 requires that the charge density of the beam, the pairs, and the GJ density cancel out, which may be an artificial situation. As will be discussed in §6, the approximation of constant E_0 is justified for a smaller rate of pair injection, which may be realized for actual pulsar parameters. Anyway, we depart from actual pulsar physics and deal with plasma physics in an idealized situation hereafter. We adopt the 1-D approximation and assume the existence of a background charge which leads to constant E_0. In our treatment we totally neglect effects of the existence of the background on the development of waves, and consider the behaviour of the pair plasma only. Pairs are assumed to be injected between z = 0 and z_M at a constant rate ṅ_0. In our calculation the pair injection is monoenergetic with 4-velocity u = u_0 > 0. Let us start by assuming a steady state of the flows of electron-positron pairs. The distribution function f_0(z, u) satisfies the Boltzmann-Vlasov equation (6) for 0 < z < z_M. Hereafter we assume E_0 < 0 for definiteness. Then, injected positrons (q = e > 0) will be decelerated as u = u_0 − (t − t_inj)/t_E, where t_E = m_e c/|eE_0| and t_inj is the injection time. The value of t_E represents the time-scale in which the Lorentz factor of decelerating particles decreases by one. Positrons will be turned back, following the trajectory z = z_inj + (γ_0 − γ) l_E, where γ = √(u² + 1), γ_0 = √(u_0² + 1), l_E = ct_E, and z_inj is the injection point.
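The display equations for the decelerated motion and the returning trajectory are not reproduced in this excerpt, but they follow directly from the definitions of t_E, l_E, and z_ret = (γ_0 − 1)l_E quoted here. The following sketch (an illustration, not the paper's code; the function name and sample values are ours) evaluates the 0-th order positron motion in units of t_E and l_E.

```python
import numpy as np

def positron_trajectory(tau, u0, tau_inj=0.0, s_inj=0.0):
    """0-th order positron motion in the constant decelerating field (E0 < 0),
    with tau = t/t_E and s = z/l_E:
        u(tau) = u0 - (tau - tau_inj),
        s(tau) = s_inj + gamma0 - gamma(tau),   gamma = sqrt(1 + u**2).
    The positron turns back (u = 0) a distance gamma0 - 1 beyond its injection
    point, which is the origin of z_ret = (gamma0 - 1) l_E in the text."""
    u = u0 - (np.asarray(tau, dtype=float) - tau_inj)
    gamma0 = np.sqrt(1.0 + u0**2)
    gamma = np.sqrt(1.0 + u**2)
    s = s_inj + gamma0 - gamma
    return u, s

u0 = 10.0
tau = np.linspace(0.0, 2.0 * u0, 5)      # follow until u has reversed to -u0
u, s = positron_trajectory(tau, u0)
print(np.sqrt(1.0 + u0**2) - 1.0)        # turning distance gamma0 - 1 ~ 9.05
print(u)                                 # 10, 5, 0, -5, -10
print(s)                                 # rises to ~9.05, then returns to 0
```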
Electrons (q = −e < 0) will continue to be accelerated as In our model electrons and positrons distribute in the 2-D phase space (spatially 1-D) as is illustrated in Figure 2. In the phase space the trajectory of electrons injected at z = 0 provides the maximum 4-velocity u M = u M (z) at z, while u of positrons injected at z = z M corresponds to the minimum 4-velocity u m = u m (z) at z. The region enclosed by the curves in Figure 2 is composed from the family of trajectories of pairs. As long as pairs are uniformly injected at a constant rate, the distribution function is constant because of the Liouville's theorem. The distribution functions between z = z ret = (γ 0 − 1)l E (see Figure 2) and z M are the same as in Figure 1 with u 1 = u m , u 4 = u M , and u 2 = u 3 = u 0 . In this region the distribution functions are expressed as where n 0 = t Eṅ0 . It is easily confirmed that f 0− and f 0+ satisfy the Boltzmann-Vlasov equation (6). Electrons and positrons distribute continuously. The charge density given by Eqs. (11) and (12) is not strictly zero so that our approximation of constant electric field is not selfconsistent, unless the background charge cancels the total charge. If the background does not play such a role, our treatment is correct only when n 0 is small enough. The quantitative condition for n 0 will be discussed in §6. As was discussed in §2, if we neglect the electric field and adopt the local approximation, no wave instability is generated for this continuous distribution . However, in our consideration the time-scale t E is too short (or l E is too short) to adopt the local approximation. In the region 0 < z < z ret the pair distribution has separate two streams. Since this region is peculiar in our idealized model, we do not consider the waves in this region hereafter. Numerical method We consider linear perturbations of the distribution function and electric field as where |f 1 | ≪ f 0 and |E 1 | ≪ |E 0 |. Since the unperturbed distributions of pairs are inhomogeneous, we cannot carry out a Fourier-harmonic expansion of the perturbations. Therefore, we solve time development of the perturbations rather than obtain the linear modes. As we have seen in the previous section, our idealized situations lead to the simple distribution function f 0 , which makes numerical computaions easier. We directly solve the perturbations f 1 and E 1 from the linearly perturbed Boltzmann-Vlasov equation where Dt is the differential along the 0-th order trajectory. The Ampère-Maxwell law is written by where j is the perturbed current density. Initially E 1 depends on only z, and the magnetic field B x = B y = 0. Then, the Faraday law ensures that B x and B y remain zero all the time. The component B z does not affect time developments of E z and f 1 . We set up grids along the 0-th order trajectories of pairs in the 2-D phase space (s = z/l E , u). Following the Lagrangian method we follow time evolution of f 1 in these grids from equation (15). For s ret ≤ s ≤ s M , equation (15) is rewritten as where we introduce dimensionless values F = f 1 l E , τ = t/t E , and E = l E n 0 E 1 /|E 0 |. At the injection of pairs (u = u 0 ) F obtains a finite value in proportion to the perturbed electric field E. Then, the value of F is propagated along the 0-th order trajectory (see Figure 2), i.e., F is conserved along the characteristics. The disturbance F + initially propagates forward, and then turns back at a distance γ 0 − 1 from the injection point, while F − simply propagates forward. 
For |u| < u 0 , F + (u, s) is originated from inner injection positions between s − γ 0 + 1 and s. On the other hand, for |u| > u 0 , F + (u, s) is originated from the outer injection positions > s. The values of F on the trajectories of the pairs injected at z = 0 and z = z M are special and they will continue to be changed by the electric field during propagation. On the other hand, the evolution of electric field is calculated by the Eulerian method; where R = ω p t E with the plasma frequency ω p = 4πe 2 n 0 /m e . The parameter R is the ratio of the braking time-scale to the typical time-scale of the plasma oscillation. In our case R ∝ t 3/2 E ∝ |E 0 | −3/2 . In the pulsar polar cap model electric field is so strong that R is much smaller than one. Since the charge conservation is assured by the Boltzmann-Vlasov equation, the Gauss law, is automatically satisfied, if the law is initially satisfied. The configuration of E is determined by the charge density that is an integral of F + −F − . Since the propagations of F + and F − are complex, it is difficult to predict the behaviour of E intuitively. We have ascertained that results obtained from our numerical code satisfy the Gauss law. In addition we have checked our code by reproducing two stream instability in the absence of electric field, using the distribution functions in §2. Results We have simulated electrostatic waves from various initial conditions and parameter values. We are interested in a parameter region R < 1. In this region the typical wavelength of plasma oscillation ∼ l p = c/ω p is longer than the braking distance l E = Rl p . We give an initial disturbance in a spatially limited region. As will be shown below, when Ru 0 ≥∼ 1, we find an absolute instability in which disturbance grows in amplitude but always embraces the original region, where the initial disturbances of F and E are given. The condition Ru 0 ≥ 1 means that the distance injected positrons move forward before they turn back, l E (γ 0 − 1) ∼ l E u 0 , is larger than the lengthscale of the plasma oscillation l p . On the other hand, for ∼ 0.1 ≤ Ru 0 ≤∼ 1, we find a convective instability in which disturbance grows while propagating away from the original region. The waves excited from the convective instability propagate backward. Empirically, the results do not largely depend on the spatial size of the pair injection region s M = z M /l E , which determines the minimum 4velocity u m (z). In this section we show some examples of the instabilities found in our simulations. The parameters and initial conditions are summarized in Table 1. The initial conditions are taken to satisfy the Gauss law. Given the parameters R, u 0 , and s M , we set the initial values of the disturbances F and E as for s i ≤ s ≤ s i + 2π/k i . The initial disturbances are confined within a small spatial region of one wavelength (2π/k i ≪ s M ). In the other region there is no disturbance. The initial perturbation of E(s) has a single sign with a form of a cosine curve. On the other hand, F − and F + have the form of a sine curve, satisfying the Gauss law. The parameter k ∆ induces asymmetry of the charge density of electrons and positrons. The total charge density (∝ F + − F − ) does not depend on k ∆ , and also has a form of a sine curve with k i . We have tried various values of k ∆ in our simulations and find that the ratio of F − to F + is settled as time passes irrespective of k ∆ . When instabilities occur, growing wave modes end up dominating other modes. 
Therefore, results do not largely depend on the initial conditions. First we describe the results for R = 0.1 (the calculations RUN1-RUN3) and see the behaviour of the linear perturbations for various values of the parameter u 0 . In Figures 3 and 4 we plot electric field E for RUN1 for which Ru 0 = 1. In this calculation, the initial disturbance exists from s i = 120 to s i + 2π/k i ≃ 180. Since positrons will turn the direction of motion after τ ∼ 10, we must follow the disturbance much longer than that. As is illustrated in Figure 3, at τ = 25 the disturbance E remains in the originally disturbed region. As time passes, the amplitude around the original region of the disturbance grows, and the wave packet spreads backward little by little. In the forward region s >∼ 200, we do not observe any growing wave. Although particles move almost at the light velocity, the disturbances remain around the original region and the wave packet does not spread at the light velocity. In order to show the growth of the amplitude, in Figure 5 we plot the time evolution of E M that is the electric field for the maximum amplitude. The maximum electic field E M oscillates over positive and negative regions. Initially E M changes complicatedly because of the initial conditions we artificially set. As time passes, E M smoothly grows while oscillating. The period of the oscillation of E M is ≃ 20t E = 2/ω p . The growing time t i , where |E M | ∼ exp (t/t i ), is ≃ 50t E = 5/ω p . Let us look into the the behaviour on a shorter time scale for RUN1. At a fixed position s, the local electric field E(s) grows while oscillating. However, when we see spatio-temporal behaviour, we notice that the spatial pattern propagates backward while changing their amplitude. As is shown in Figure 4, waves propagate from s ≃ 200 ≃ s i + 2π/k i backward with a growing amplitude. The amplitude becomes maximum around s i = 120, and then the amplitude declines while propagating backward. This decline leads to the confinement of the wave packet. Even though waves pass the disturbed region many times, waves exist only in a spatially limited region. In the wave packet of E(s) there are multiple peaks and bottoms, and the most prominent one of them corresponds to E M . If we define the 'phase velocity' as the velocity of peaks (or bottoms), the phase velocity (≃ −2.8c) turns out to be faster than the velocity of light. As peaks propagate backward, the peak or bottom associated with E M alternates one after another, so that the position of E M hangs around the original region. When we define the group velocity by averaging the velocity of the position of E = E M for a longer time scale than the oscillation period, the group velocity turns out to be almost zero. The wavelength λ is about 60l E ≃ 2πl p which is almost the same as the initial wavelength of the disturbance. Even if we start from another k i , the growing mode dominates others and the final wavelength is the same as this result, 60l E . The final wavelength of growing waves is unchanged for different initial conditions. Figure 6 shows charge density distributions at τ = 195. The number density of electrons (positrons) has opposite (same) sign of the charge density in Figure 6. The phases of number densities of electrons and positrons are the same. The amplitude of the positron density is always larger than that of the electron density. The difference in the number densities of positrons and electrons is proportional to the total charge density which satisfies the Gauss law. 
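As a quick consistency check on the RUN1 numbers just quoted, the conversions between the braking units (t_E, l_E) and the plasma-oscillation units (1/ω_p, l_p = c/ω_p) follow from R = ω_p t_E = 0.1; a small arithmetic sketch:

```python
R = 0.1                                # RUN1: R = omega_p * t_E

# Times quoted in units of t_E convert to units of 1/omega_p on multiplying by R.
period_tE, growth_tE = 20.0, 50.0      # oscillation period and growth time of E_M
print(period_tE * R)                   # 2.0 -> period ~ 2 / omega_p
print(growth_tE * R)                   # 5.0 -> growth time ~ 5 / omega_p

# Lengths quoted in units of l_E convert to l_p = c/omega_p the same way.
wavelength_lE = 60.0
print(wavelength_lE * R)               # 6.0 l_p, i.e. about 2*pi l_p

# Crude phase speed from wavelength / period (note l_E / t_E = c):
print(wavelength_lE / period_tE)       # 3.0 c, consistent with the measured ~2.8 c
```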
Next we discuss the results of RUN2 (Ru 0 = 30 > 1). The initial distubance ranges from s i = 500 to s i +2π/k i ≃ 560. In Figure 7 we show electric fields at several epochs. The wave profiles are not so simple compared to the case of RUN1 (Ru 0 = 1). There are waves propagating both forward and backward. This may be because the distance positrons move forward before they returns (∼ 300l E from their injection point) is longer than the typical wavelength 2πl p ≃ 60l E . Though the properties of the waves are complex, we can see that there is an instability in this case, too. The waves around the original region grow while diffusing both forward and backward. We note that a separate component of the disturbance appears around z < z ret (s < 300 in this case). This disturbance may be due to two stream instability, which grows faster than in the other region. Although we do not show here, for a much larger value of Ru 0 , absolute instabilities are found to occur in our simulation. However, we do not further pursue this issue because we need to calculate over a much wider region of s and resultant memory in computation becomes large for a large value of u 0 . In cases for Ru 0 = 0.3 < 1 (the calculations RUN3), absolute instability does not occur, but convective instability propagating backward occurs (see Figure 8). In RUN3 the initial disturbance is given from s = 420 to ∼ 426 with a wavelength 2πl E /k i ≃ 6l E . The disturbance of the electric field is seen to propagate backward, and before long characteristic waves grow. The wavelength of the growing wave (≃ 9l E ) is slightly longer than the initial length and almost constant. As the disturbance propagates, the wave packet spreads and the number of waves in the packet increases. The growing time of E M is about 100t E = 10/ω p (see Figure 9). The phase velocity, v ph ≃ −1.09c, is faster than the velocity of light. The amplitude of each peak in the packet initially grows. As the peak approaches the head of the wave packet, the growth of the amplitude turns over to damping. This behaviour is similar to that in the absolute instability for RUN1. We obtain the group velocity v g ≃ −0.85c. Figure 10 shows that the charge density is dominated by positrons. We have simulated for Ru 0 = 0.3 with various initial conditions. However, in any case there is no sign of wave instability propagating forward. We have investigated for R = 0.01 also (the calculations RUN4-RUN6), and confirmed that the qualitative results are determined by the value of Ru 0 (see Table 2). The absolute instability and convective instability are induced for Ru 0 = 1 and Ru 0 = 0.3, respectively. For Ru 0 = 0.03 (RUN6), however, we do not find any instability. A smaller value of R means lower density of pair plasma for a given value of the external electric field. In such a low density plasma, particles tend to be less affected by forces from other particles compared to the external electric field. Therefore, too small value of R ≪ 1/u 0 makes the plasma stable. From the above results, we may conclude that electrostatic instability is mainly determined by the parameter Ru 0 . The wavelength and the growing time are a few or ten times the typical scales of the plasma oscillation, l p and 1/ω p , respectively. The value R has been assumed to be smaller than 1 heretofore. We have also simulated for R ≥ 1 and found instabilities. The wave properties are as complicated as the example in RUN2, so that we do not report the details of the results in this paper. 
When R ≥ 1, E 0 is so small that l E ≥ l p . In the limit of E 0 = 0 (R → ∞), the instability does not occur as was shown in §2. However, we have not tried the cases R ≫ 1, because of poor computational capacity so far. Judging from the backward spread of wave packets and the negative phase velocity, it is seen that returning positrons play a decisive role in the instabilities. The dominance of positrons in the charge density in comparison with electrons also suggests that the instabilities are due to returning positrons. It is remarkable that positrons pass the same region twice, forward and backward. The typical wavelength of plasma oscillation (2πl p ) may resonate with the distance positrons move forward, (γ 0 −1)l E . As we have mentioned in §4, the excited electric field E generates the disturbances of pairs, F (u 0 ), at their injection. The value of F (u) is transported along the trajectories of pairs, conserving their value. Since F + and F − acquire the same value at their injection, the contribu-tion to the charge density is almost canceled out as long as electrons and positrons move forward together at ∼ c. When positrons turn around, the charge is polarized. As for positrons, F + (u, s) at τ conserves the information on the electric field E(s ′ ) in the past (τ ′ = τ − (u 0 − u)), where s = s ′ + γ 0 − γ. The displacement between s and s ′ conforms to the trajectories of positrons. Therefore, the charge density of positrons (except for u = u m ) is proportional to a superposition of displaced E(s ′ ) at different times. This superposition will increase the amplitude of the charge density in response to evolution of E. Since F + (u = u m ) is constant along the characteristics, the value F + remains to be finite even after positrons enter the region where E ∼ 0, while F + (u m ) changes as long as E exists. Our numerical results imply that F + (u m ) cancels out the charge density due to F + (u = u m ) in the region where E ∼ 0. This process prevents the waves from spreading backward at the speed of light. The propagation of the disturbance by electrons is relatively simple. The contribution due to F − (u M ) prevents the waves from spreading forward, as F + (u m ) does. In RUN3 and RUN5 (Ru 0 < 1), positrons turn back and move backward quickly before F + (u m ) cancels out the charge density, so that growing waves propagate backward. Conclusions and Discussion In this paper we have found a new type of instability in electron-positron pair flows injected in an external electric field, which is assumed to be spatially constant. The properties of the instability are characterized by the ratio R (the braking time-scale to the typical oscillation time-scale of the plasma 1/ω p ) and 4-velocity u 0 at injection. For Ru 0 >∼ 1 absolute instability is induced, while convective instability propagating backward is excited for ∼ 0.1 < Ru 0 <∼ 1. The growing time in amplitude is as short as a few times the time-scale 1/ω p . The wavelength is also several times l p = c/ω p . The instabilities are caused by returning positrons. For Ru 0 ≪ 1, the pair plasma turns out to be stable. A small value of R implies that the plasma density is so low compared to the electric field E 0 . For R ≪ 1/u 0 the collective interaction of the pair plasma is not important, so that each particle moves along the trajectory determined by E 0 independently of other particles. Growing electrostatic waves may work as frictional forces. 
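The parameter dependence summarized above can be condensed into a rule of thumb; the sketch below simply encodes the empirical thresholds reported in §5 (the boundaries near Ru_0 ∼ 0.1 and ∼ 1 are approximate rather than sharp, and the function is ours, not the paper's).

```python
def regime(R, u0):
    """Classify the simulated outcome by the product R*u0 (approximate boundaries)."""
    x = R * u0
    if x >= 1.0:
        return "absolute instability (growing disturbance stays near its original region)"
    if x >= 0.1:
        return "convective instability (growing waves propagate backward)"
    return "stable (collective interaction too weak compared with E0)"

# The four combinations below correspond roughly to RUN1, RUN2, RUN3, and RUN6.
for R, u0 in [(0.1, 10.0), (0.1, 300.0), (0.1, 3.0), (0.01, 3.0)]:
    print(R, u0, "->", regime(R, u0))
```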
In this paper we have treated waves as linear perturbations, following the propagation of disturbances in the distribution function and electric field. Our method does not allow us to follow processes of gaining or losing kinetic energy of each particle from the waves. The quasilinear theory is not applied to deal with the reaction of particles as it is, because the disturbances do not have the Fourier-harmonic dependence. Thus, we consider the qualitative character of the effective reaction force from a numerical treatment as follows. The spatial averages of F and E oscillate with time in our simulations. Therefore, the expectation values of F and E can be considered to be zero. On the other hand, when waves grow or are attenuated, the spatial average of the cross term, F E , may have a finite value. As is the case with the quasi-linear theory, the 0-th distribution function may change, following the 2-nd order order approximations of the Boltzmann-Vlasov equation: where and f 0 = n 0 independently of u in our case. In view of the Fokker-Planck approximation, G(u) ∝ u f 0 , where u is the average change of u due to the reaction force per unit time. Since f 0 is constant in our simulation, G(u) is proportional to the reaction force. We plot G(u) in Figure 11 for τ = 195 in RUN1. The modulation pattern of G(u) does not change, but the amplitude grows with time. Apparently, the modulation pattern is asymmetric for particles of u > u 0 = 10 (electrons) and u < u 0 (positrons). These qualitative behaviour is common for the other RUNs. As Figure 2 and equations (17) and (18) show, perturbations are generated at u = u 0 and u = u M (or u m ). Figure 11 shows G(u) for a region around u = u 0 only, and outside of this region G(u) has also significant value due to the disturbances generated at u = u M and u = u m . However, the modulation pattern of G(u) for such regions oscillates with time. In the usual two-stream instability, the excited waves accelerate background fluid, and decelerate beam fluid. As is shown in Figure 11, the direction of the reaction force depends on u even in the same species of particles. The amplitude of G(u) takes always the maximum value at u = u 0 . If the effective reaction force grows enough, positrons (electrons) just injected (u ≃ u 0 ) feel positive (negative) force as is shown in Figure 11. Thus, the reaction force may make particles tend to stay around the regions of u = u 0 . The integral of G(u) around u = u 0 (roughly from u = −100 to 10 for Figure 11) for positrons is also positive. Such positrons are accelerated by the reaction force on average. However, the integral becomes negative all the time, if we include the contribution due to the disturbances generated at u = u m , though G(u) for a large |u| oscillates with times. The returning positrons injected at z = z M feel a negative reaction force on average. On the other hand, the absolute value of the integral for electrons is much smaller than those for positrons. Therefore, the reaction force does not work as usual 'frictional' force between electrons and positrons. The particles just injected are most affected by the reaction force, and lose (or gain) their energy owing to waves. If the reaction force is strong enough, positrons just injected may have difficulty to turn back. If positrons suffer from such frictional force, the distribution function f 0 should be largely altered. This may help to solve the problem of the electric field screening in the pulsar polar caps. 
If excited waves grow enough to change trajectories of particles, the waves cannot be treated as the linear perturbations. In a strong electric field, |f 1 | can be as large as f 0 before |E 1 | ∼ |E 0 | achieves. We may simplify the energy-loss process of particles due to perturbed electric field as follows; particles from u = u 0 to u 0 + ∆u lose their energy owing to the waves at the same rate, where ∆u is the equivalent width of particles interacting the waves. Then, the growth rate of the field energy is roughly considered as the average energy loss rate of pairs, which means ω i E 2 1 /4π ∼ |γ∆u|n 0 m e c 2 , whereγ is the temporal change of γ due to the reaction force. Here, we assume that the energy density of the waves attributed to induced particle motions is comparable or negligible to the energy density of the electric field E 1 . When |γ| > 1/t E , the frictional force is sufficient to alter trajectories of pairs. The above condition is rewritten as ω i E 2 1 /ω 2 p > m e c|E 0 |∆u/e. Assuming ω i ≃ ω p ≃ 10 8 , ∆u = 50, and |E 0 | ≃ 10 5 in esu (these values may be typical for the pulsar polar cap), |E 1 | is needed to be larger than 5 × 10 3 in esu to change the distribution of pairs. Even if |E 1 | < |E 0 |, the excited disturbances may change the 0-th distribution of pairs. However, we need numerical simulations, which can deal with non-linear process, in order to check how the reaction force modifies the distribution function f 0 . As a first step to deal with behaviour of pairs in an electric field, in this paper we have assumed that the background charge distribution cancels out the modification of E 0 due to injected electron-positron pairs. Of course, this simplification may not be appropriate for pulsars, while it makes computation easier. Inhomogeneous electric field might play an important role in plasma instability. Let us check whether our model can be used when the background charge density is constant for s > s 0 . The charge density changes for s > s 0 owing to the pair injection and electric field should be modified. The charge density decreases with distance as ∝∼ n 0 s for s > s 0 . In this case the variation of ∆E 0 over the moving distance of injected positrons before they turn back l E u 0 is ∼ (Ru 0 ) 2 E 0 . Therefore, in the case of stable plasma (Ru 0 ≪ 1) the constant electric field is a good approximation even for the constant background. Although our simulations show a new possibility of plasma instability around pulsars, we need to simulate with an inhomogeneous electric field for Ru 0 ∼ 1 in order to conclude whether instability occurs in actual situations on pulsars. In any case, the condition Ru 0 >∼ 0.1 is an necessary (but not sufficient so far) condition to induce the instability. Let us consider implications for the pulsar polar cap. We suppose that the primary electron-beam is accelerated from z = 0, and its Lorentz factor becomes Γ at the pair production front (PPF) (z = L). In this case the average electric field is |E 0 | = Γm e c 2 /eL. The braking time is expressed as t E = m e c/|eE 0 | = L/cΓ. By curvature radiation an electron of Γ emits 2e 2 Γ/3rh ≡ M photons per unit time, where r is the radius of curvature of the field line. We express the number density of the primary beam as n b = f b n GJ . Assuming curvature gamma-rays imme-diately turn into pairs, the pair-injection rate is approximated asṅ 0 ≃ M f b n GJ . 
Adopting the average electric field, the ratio R becomes R = ω_p t_E = [4 f_b e³ Ω B L³/(3 m_e c⁴ ℏ r Γ²)]^{1/2} (26) ≃ 2 × 10⁻⁵ (f_b T_0.3⁻¹ B_12 L_4³ Γ_6⁻² r_7⁻¹)^{1/2}, where Γ_6 = Γ/10⁶, and T_0.3, B_12, L_4, and r_7 are in units of 0.3 s, 10¹² G, 10⁴ cm, and 10⁷ cm, respectively. The average Lorentz factor of pairs is at most ℏΓ³/(2m_e c r) ≃ 2Γ_6³ r_7⁻¹. Therefore, we obtain Ru_0 ≃ 4 × 10⁻⁵ (f_b T_0.3⁻¹ B_12 L_4³ r_7⁻¹)^{1/2} Γ_6² r_7⁻¹. Even for the large values of Γ and L we can suppose in pulsar models, we may expect 10⁻⁴ < Ru_0 < 10⁻² at most. The electric field in the polar cap may be too strong to induce the instability, compared to the pair-plasma density, while the approximation of constant E_0 may not be so bad in such cases. If the electric field at the PPF is much smaller than the average one, Γm_e c²/eL, as in the models of Arons' group (Fawley et al., 1977; Scharlemann et al., 1978; Arons and Scharlemann, 1979), Ru_0 can be large enough to induce the instability. In such models, however, pair polarization is not so important for achieving the screening, while there is a possibility that the instability affects the radio emission process. At present it is not clear that the electrostatic instability we have considered in this paper is an important process for the screening of the electric field above the pulsar polar cap. However, there may be extreme environments (magnetars etc.), where Ru_0 is as large as one. We expect that studies of plasma instability in electric fields will open a new approach to high-energy astrophysics. Table 2. Rough values of the growing time, wavelength, and phase velocity. The characters "A" and "C" represent the absolute instability and convective instability, respectively.
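For reference, these estimates can be evaluated numerically. The sketch below uses R = ω_p t_E with ω_p ∝ √n_0, which is where the square root in equation (26) comes from, together with u_0 ≃ ℏΓ³/(2m_e c r); the fiducial parameter values are those quoted in the text, and the function name is ours.

```python
import numpy as np

# CGS constants
e    = 4.8032e-10      # electron charge [esu]
m_e  = 9.1094e-28      # electron mass [g]
c    = 2.9979e10       # speed of light [cm/s]
hbar = 1.0546e-27      # reduced Planck constant [erg s]

def R_and_Ru0(f_b=1.0, T=0.3, B=1e12, L=1e4, r=1e7, Gamma=1e6):
    """Polar-cap estimate of R = omega_p * t_E and of R*u0, using
    R = sqrt(4 f_b e^3 Omega B L^3 / (3 m_e c^4 hbar r Gamma^2)) and
    u0 ~ hbar Gamma^3 / (2 m_e c r)."""
    Omega = 2.0 * np.pi / T
    R = np.sqrt(4.0 * f_b * e**3 * Omega * B * L**3 /
                (3.0 * m_e * c**4 * hbar * r * Gamma**2))
    u0 = hbar * Gamma**3 / (2.0 * m_e * c * r)
    return R, R * u0

print(R_and_Ru0())                      # R ~ 2e-5, R*u0 ~ 4e-5 for the fiducial values
print(R_and_Ru0(Gamma=3e6, L=3e4))      # larger Gamma and L push R*u0 toward ~2e-3
```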
Few new drugs deserve expedited regulatory treatment

DARROW'S VIEWPOINT

Safety testing of new drugs has been required since the 1938 Federal Food, Drug, and Cosmetic Act, but applications were automatically approved under that law unless the US Food and Drug Administration (FDA) acted to prevent marketing within 60 days. In 1962, the Kefauver-Harris Drug Amendments raised the bar by prohibiting marketing until affirmative FDA approval based on "adequate test," showing safety, and "adequate and well-controlled investigations," providing "substantial evidence" of efficacy. The heightened requirements were intended to screen useless or harmful remedies from the market, but they also increased development costs and lengthened the time before drugs were available to patients. Over the next 6 decades, these increased requirements met resistance, causing the pendulum to swing back, with Congress and the FDA establishing a growing array of programs that reduced evidence requirements and were intended to lower development costs and expedite availability. Some suggest that these more flexible requirements reflect regulatory capture, pointing to industry funding of the FDA, which has grown dramatically since its inception in 1992 and now provides a majority of FDA drug review budgets. 1 Supporters of expedited programs counter that patients with serious illnesses cannot wait until higher quality evidence is available and that more limited evidence can sometimes adequately satisfy statutory standards.

Regardless of the amount of evidence that should be required for approval, the debate over evidence has often distracted attention from the more fundamental issue of efficacy. Evidence standards relate to certainty, rather than magnitude, of benefit. Even under the heightened 1962 evidence requirements, there was never any minimum quantum of efficacy that drugs had to possess to receive FDA approval (other than nonzero), 2 and reviews consistently find that most (69%-98%) new drugs fail to provide large benefits over existing therapies (Table 1).

As the new expedited programs for high-priority medicines were implemented, the extent to which such drugs benefitted patients continued to receive little serious attention. For example, the House Report to the 1983 Orphan Drug Act rationalized the use of smaller clinical trial sizes by observing that "dramatically effective" drugs "do not need large patient populations to demonstrate the point," 3 but dramatic effect size was not included as an Orphan Drug Act requirement. The FDA's fast-track regulations, promulgated in 1988 amid the worsening AIDS epidemic, explained that the more limited evidence required under this program was appropriate because "desperately ill patients…are generally willing to accept greater risks," but the regulations did not describe any heightened threshold of benefit needed to justify those greater risks beyond what was needed for ordinary approval. 4 More recent expedited development programs have acknowledged the importance of efficacy while simultaneously establishing flexible criteria that do not require drugs to be more effective than under the traditional de minimis standard, thereby contributing to a growing chasm between public perception of what FDA approval means and the degree of efficacy that the law actually requires.

For example, accelerated approval (which relies on surrogate endpoints that are reasonably likely to predict clinical outcomes) requires "meaningful therapeutic benefit…over existing treatments," but the 1992 regulations did not define "meaningful." Eteplirsen (Exondys 51) was approved under this program in 2016 for the treatment of Duchenne muscular dystrophy over the objections of the FDA review team that there was "no clear evidence of efficacy." 5 More generally, the FDA has acknowledged that surrogate endpoints, which are the touchstone of the accelerated approval program, "may not in fact be causally related" to clinical endpoints and that, even if causally related, "the drug may have a smaller than expected benefit," since even perfect correlation does not imply any particular magnitude of benefit. 6 Similarly, the 2012 breakthrough therapy program included the criterion of "substantial improvement over existing therapies," but this criterion could be met even if a new drug was no more effective than available alternatives (eg, if it offered a safety advantage). 7 Among 58 oncology drugs approved from 2012 to 2017, there was no statistically significant difference in median solid tumor response rates between drugs designated as breakthrough and nonbreakthrough. 8 Among the 16 breakthrough-designated oncology treatments approved from 2014 to 2016, a median of 57% of patients experienced no benefit per prespecified criteria, and for 13 (81%) of these drugs, 8.5% of patients or fewer experienced complete responses. 9 Pimavanserin (Nuplazid), a drug used to treat Parkinson disease psychosis, was designated as a breakthrough therapy and approved despite failing to show efficacy in 2 double-blind, placebo-controlled trials. 8 Approval then occurred following a third trial that shifted the assessment to a purpose-built scale, which showed a 3-point benefit out of 45 possible points.

Expedited programs have been applied to the approval of a few highly effective drugs, such as imatinib (Gleevec, 2001) for cancer (Orphan Drug Act, fast-track, accelerated approval), and sofosbuvir (Sovaldi, 2013) for hepatitis C virus (fast-track, breakthrough therapy). But the fact that the literature continues to reference examples of transformative medicines approved 8 or 20 years ago (of approximately 654 new molecular entities approved from 2001 to 2020) suggests the rarity of true breakthroughs, consistent with published assessments (Table 1).

Even when drugs provide large benefits, perceptions of efficacy can generously outpace the evidence. An 83% remission rate was widely reported for tisagenlecleucel (Kymriah), a CAR-T therapy approved in 2017 for leukemia (Orphan Drug Act, breakthrough therapy), but this figure was based on assessments within 3 months after treatment and did not include subjects who later relapsed or died during a median follow-up of just 4.8 months. Far fewer (46%) were in remission and not censored by the study's end. 10 Voretigene neparvovec-xioi (Luxturna), a gene therapy that was approved in 2017 to treat a rare inherited blindness (Orphan Drug Act, breakthrough therapy), was widely reported to be curative even though the evidence did not support this claim. 11 Nusinersen (Spinraza), a costly treatment for a rare muscle-wasting disease (Orphan Drug Act, fast-track), was hailed as "dramatically effective" even though most (59%) subjects in its pivotal trial did not respond according to prespecified criteria. 12

Expedited programs are most defensible when applied to drugs that offer large incremental benefits. However, the average share of drugs qualifying for at least 1 expedited program (excluding priority review) was 59% from 2010 to 2019, 13 far exceeding the share of drugs offering major therapeutic gains (Table 1). Of 135 drug-indication pairs approved from 1999 to 2012 for which quality-adjusted life-year (QALY) data were available, 46 (34%) provided a median incremental benefit of about 0.1 QALY, and another 59 (44%) offered a median gain of just 0.003 QALY. 14 Early in clinical development it can be difficult to predict which drugs will be beneficial, and it may be appropriate for regulators to shift additional resources to promising early-stage drugs at the risk of later disappointment.

To its credit, the FDA has on average applied expedited programs to those drugs later determined to be more beneficial, as measured by QALYs. 13 However, even among the 7% (10/135) of drug-indication pairs that received 3 simultaneous program designations, median benefits were less than 0.4 QALY. Certain biases could make even these modest benefits appear larger than they actually are. For example, assumptions about the correlation between surrogate and clinical endpoints that underlie QALY calculations may be false; publication bias can mean studies with QALY gains are more likely to be reported; and studies where QALY gains are unlikely may not be undertaken at all.

The modest benefits offered by most drugs receiving expedited treatment raise questions about the wisdom of reducing evidence requirements to expedite their availability. It is true that patients with life-threatening diseases may be willing to risk the absence of meaningful benefit and possibility of harm. But optimism bias may lead patients to substantially overestimate the magnitude of incremental benefit that a new drug is likely to provide. The interests of future patients are also important, and the failure to adequately test new treatments before approval can delay data collection 15 and deprive those patients of a fully informed treatment decision. Inadequate evidence that obscures limited efficacy can also lead to wasted resources (see Ferries et al, "FDA Expedited Approval and Implications for Rational Formulary and Health Plan Design," in this issue), create false hope, reduce motivation to undertake preventive efforts, and divert patients from alternate therapeutic options mistakenly believed to be less effective.

Even when a new drug provides large benefits, it is not clear that earlier approval is necessary to help patients with immediate needs. Since 1987, the FDA has administered an expanded access (or "compassionate use") program that allows patients to request experimental therapies before approval, and the FDA nearly always approves these requests. 16 Although the program depends on the ability and willingness of manufacturers to provide their products and access is not guaranteed, broadening expanded access for today's patients is an alternative to impairing evidence collection in a way that may adversely impact all future patients.

Expedited programs took root during the 1980s when the AIDS crisis presented a new and deadly infectious disease threat with no effective treatments, representing an uncommon circumstance for which an expedited approach may have been particularly suited. The SARS-CoV-2 pandemic presented a similar scenario of a new infectious disease for which no treatments or vaccines were available. For most drugs, policymakers should reevaluate whether the pendulum has swung too far, leading to more rapid approval of costly medicines that some believe are more prone to adverse events 17 and that, most importantly, past experience suggests are unlikely to substantially improve or extend patients' lives.

DISCLOSURES: This commentary is based on work by the author that was supported by Arnold Ventures and the Harvard-MIT Center for Regulatory Science. The funders had no role in the writing of this commentary, or the decision to submit for publication. The author has nothing else to disclose.
MRI-active inner regions of protoplanetary discs. II. Dependence on dust, disc and stellar parameters Close-in super-Earths are the most abundant exoplanets known. It has been hypothesized that they form in the inner regions of protoplanetary discs, out of the dust that may accumulate at the boundary between the inner region susceptible to the magneto-rotational instability (MRI) and an MRI-dead zone further out. In Paper I we presented a model for the viscous inner disc which includes heating due to both irradiation and MRI-driven accretion; thermal and non-thermal ionization; dust opacities; and dust effects on ionization. Here we examine how the inner disc structure varies with stellar, disc and dust parameters. For high accretion rates and small dust grains, we find that: (1) the main sources of ionization are thermal ionization and thermionic and ion emission; (2) the disc features a hot, high-viscosity inner region, and a local gas pressure maximum at the outer edge of this region (in line with previous studies); and (3) an increase in the dust-to-gas ratio pushes the pressure maximum outwards. Consequently, dust can accumulate in such inner discs without suppressing the MRI, with the amount of accumulation depending on the viscosity in the MRI-dead regions. Conversely, for low accretion rates and large dust grains, there appears to be an additional steady-state solution in which: (1) stellar X-rays become the main source of ionization; (2) MRI-viscosity is high throughout the disc; and (3) the pressure maximum ceases to exist. Hence, if planets form in the inner disc, larger accretion rates (and thus younger disks) are favoured. INTRODUCTION Exoplanet discoveries have shown that close-in super-Earths, planets with radii of 1-4 R ' and orbital periods shorter than "100 days, are extremely common (Fressin et al. 2013;Dressing & Charbonneau 2013Mulders et al. 2018;Hsu et al. 2019). To form the solid cores of these planets requires more mass in solids than is expected to exist at short orbital periods in the initial phases of planet formation in protoplanetary discs (Raymond et al. 2008;Chiang & Laughlin 2013;Schlichting 2014;. Because of this, it has been proposed that these super-Earths form further away from the star, in regions where the temperature in the protoplanetary disc is low enough for water ice to condense, which increases the total amount of solids. In this hypothesis, icerich planets migrate inwards, to their present orbits, through gravitational interactions with the disc (Terquem & Papaloizou 2007;Ogihara & Ida 2009;McNeil & Nelson 2010;Cossou et al. 2014;Izidoro et al. 2017Izidoro et al. , 2019Bitsch et al. 2019). However, when com-‹ E-mail: mj577@cam.ac.uk pared against atmospheric evolution models, the observed radius distribution of close-in super-Earths is found to be consistent with their cores being rocky, with very little ice present (Owen & Wu 2017;Van Eylen et al. 2018;Wu 2019;Rogers & Owen 2021). This possibly implies that close-in super-Earths form in the inner, hot regions of protoplanetary discs, near their present orbits. As noted above, it is not expected that these inner regions contain, initially, enough mass in solids to form the super-Earths. However, the inner disc can be enriched by pebbles from the outer disc (Hansen & Murray 2013;Boley & Ford 2013;Chatterjee & Tan 2014;Hu et al. 2018;Jankovic et al. 2019), as pebbles are prone to inwards radial drift due to gas drag (Weidenschilling 1977). 
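The strength of this drift is set by how well a particle couples to the gas. The short sketch below evaluates the standard drag-induced drift speed for a particle of dimensionless stopping time (Stokes number) St in a sub-Keplerian disc; the disc numbers used (0.1 AU, h/r = 0.03, P ∝ r^−2.75) are illustrative assumptions rather than values taken from this paper.

```python
import numpy as np

def drift_speed(St, v_K, h_over_r, dlnP_dlnr):
    """Drag-induced radial drift speed [cm/s]; negative means inward.

    Standard result for a particle of Stokes number St in a gas disc whose
    pressure falls outwards: v_r = -2 eta v_K St / (1 + St^2), with
    eta = -0.5 (h/r)^2 dlnP/dlnr the fractional sub-Keplerian offset.
    At a pressure maximum dlnP/dlnr = 0, so the drift vanishes and
    marginally coupled pebbles (St ~ 1) pile up there.
    """
    eta = -0.5 * h_over_r**2 * dlnP_dlnr
    return -2.0 * eta * v_K * St / (1.0 + St**2)

# illustrative numbers: 0.1 AU around a solar-mass star
v_K = 9.4e6                      # Keplerian speed at 0.1 AU [cm/s]
for St in (1e-4, 1e-2, 1.0, 1e2):
    v = drift_speed(St, v_K, 0.03, -2.75)
    print(f"St = {St:8.0e}:  v_drift = {v: .2e} cm/s")
```

Well-coupled grains (St ≪ 1) drift slowly and are instead carried along with the accreting gas, which is why small fragments can leak through a pressure trap, as discussed below.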
It has been hypothesized (Chatterjee & Tan 2014) that the radial drift of pebbles could be stopped at a local gas pressure maximum in the inner disc. Over time the pressure maximum could accumulate enough material to form a super-Earth-sized planet. A gas pressure maximum is expected to form in an inner disc that accretes viscously via turbulence induced by the magnetorotational instability (MRI; Balbus & Hawley 1991;Kretke et al. 2009;Dzyurkevich et al. 2010;Chatterjee & Tan 2014). The sus-ceptibility of the disc to the MRI depends on the coupling between the gas and the magnetic field, and thus on the ionization fraction in the disc. In the hot innermost disc, the MRI is expected to drive high viscosity (i.e., efficient accretion) as a result of thermal ionization. At larger distances, where gas is colder and the ionization fraction drops, the viscosity is expected to be low (such a region is called a dead zone; Gammie 1996). In steady-state, a local gas pressure maximum forms at the transition between the high-viscosity and the low-viscosity regions (the dead zone inner edge) (e.g. Terquem 2008). The pressure maximum will only trap pebbles which are prone to radial drift relative to the gas. Smaller dust grains that are well coupled to the gas may be advected and diffused through the pressure maximum by the gas accreting onto the star. In the inner disc, the size of dust grains is limited by fragmentation due to relative turbulent velocities (Birnstiel et al. 2010(Birnstiel et al. , 2012Drazkowska et al. 2016). Pebbles that radially drift from the outer to the inner disc become smaller due to fragmentation, and the effect of radial drift weakens. Jankovic et al. (2019) showed that, in an inner disc in which the gas accretion and the grain turbulent velocities are driven by the MRI, the grains can become small enough to escape the pressure trap through advection and radial mixing by the turbulent gas. Additionally, it was found that this leads to an enhanced dustto-gas ratio of small dust grains throughout the inner disc, interior to the pressure maximum. Jankovic et al. (2019) did not explicitly take into account the effects of dust on the MRI, whereas it can be expected that the accumulation of dust could quench the MRI in the innermost disc since dust grains adsorb free charges from the gas phase (Sano et al. 2000;Ilgner & Nelson 2006;Wardle 2007;Salmeron & Wardle 2008;Bai & Goodman 2009;Mohanty et al. 2013). As a consequence of quenching the MRI, the strength of the turbulence would fall allowing some grain growth. However, this would concurrently push the MRI-active region and the pressure maximum inwards, possibly eliminating it from the inner disc. Evidently, the outcome is a function of the size and the abundance of the dust grains. In a previous paper (Jankovic et al. 2021, Paper I) we presented a model of a steady-state viscously accreting disc which includes both the MRI-driven viscosity and the effects of dust on the MRI. This accounts for the adsorption of free charges onto dust grains, and also for the electron (thermionic) and ion emission from dust grains into the gas phase. The thermionic and ion emission become important at temperatures above "1000 K (so at the temperatures present in the inner disc) and act to increase the ionization fraction of the gas (Desch & Turner 2015). We found that for 1 m grains, comprising 1% of the disc mass, these dust effects balance out and result in a pressure maximum at roughly the same location as predicted from thermal ionization. 
Additionally, this model also self-consistently considers the disc opacity due to dust grains, thus accounting for the effects of dust on the disc thermal structure. In the above work, we focused on a fiducial disc model. Building on this, in this paper we investigate how the inner disc structure changes with dust-to-gas ratio, dust grain size, and other disc and stellar parameters, in order to narrow down the region of parameter space where the formation of planetary cores in the inner disc is more likely. In section 2 we discuss the theoretical expectations about the location of the pressure maximum based on the results of Paper I. We briefly overview our disc model in section 3 and present our results in section 4. In section 5 we focus on the existence and location of the gas pressure maximum as a function of the above parameters, exploring the entire parameter space in detail. In section 6 we discuss the implications of our results for the formation of the super-Earths and the limitations of our work, and in section 7 we summarize our conclusions. THEORETICAL EXPECTATIONS A local gas pressure maximum is expected to form in an accretion disc at steady-state if the material in the inner regions accretes faster than the material in the outer regions. In the context of viscous accretion discs, for the pressure maximum to form, the disc viscosity should decrease with distance from the star. If the viscosity is driven by the MRI, the requirement is for the coupling between the magnetic field and the gas (i.e., the ionization fraction) to decrease radially outwards. Such a configuration has been predicted to arise in protoplanetary discs as the innermost regions can be hot enough to thermally ionize potassium, whereas outer regions are not (Gammie 1996). Desch & Turner (2015) have shown that, in fact, at the temperatures present in the inner disc, the dominant sources of ionization are thermionic and ion emission from small dust grains, and not thermal ionization. Nevertheless, the resulting ionization fraction sharply increases above a certain critical temperature, the same as in the case of thermal ionization. Moreover, for the materials out of which we expect the small grains to be made of in the inner disc, this critical temperature is likely close to the temperature required for thermal ionization of potassium (about 1000 K; see the discussion in Desch & Turner 2015). This suggests that the pressure maximum should arise at roughly the same distance from the star as in the case of thermal ionization. In Paper I we showed that, for fiducial stellar, disc and dust parameters, this is indeed the case. Therefore, we expect that a local gas pressure maximum indeed forms at the distance from the star at which the disc temperature reaches about " 1000 K. In the inner disc, the MRI is active around the disc midplane (if the vertical temperature distribution is calculated self-consistently, see Paper I, and also Terquem 2008), and so it is the disc midplane temperature that sets the location of the pressure maximum. Furthermore, as we show in Paper I, the midplane temperature is set by the heat released by the accretion and the disc cooling, while the heating by stellar irradiation has a small effect on the location of the pressure maximum, due to the inner disc being vertically optically thick. We also find that the inner disc is unstable to vertical convection. 
Given the above findings, it is useful to consider how the disc midplane temperature (and thus the location of the pressure maximum) scales with different disc, stellar and dust parameters using the simple Shakura & Sunyaev (1973) viscous thin disc model, modified to account for vertical convection. In this model, the viscosity is parametrized by the dimensionless parameter and given by " 2 {Ω, where is the sound speed and Ω the angular Keplerian velocity. It is assumed that the disc is at steady-state, i.e., has a radially-constant gas accretion rate 9 , and that the only source of heating is the accretion, that the disc is strongly optically thick (such that the majority of the discs mass is below the photosphere) and is convective up to the cooling radiation photosphere. Under these assumptions, Garaud & Lin (2007) showed that the disc's midplane temperature ( ) is approximately related to the effective temperature ( eff , determined by viscous dissipation) via 9 2{7 0 p7`2 q{7 eff where encapsulates the temperature dependence of the opacity via " 0 p { 0 q , with 0 a reference temperature and 0 " 0 Σ{2 (with Σ the disc surface density) the optical depth to radiation at the reference temperature 1 . For constant the disc structure is then described by a set of power-law relations. For the disc midplane temperature, away from the disc inner edge, we have where˚the stellar mass, 9 the disc accretion rate and the cylindrical radius. Then, if the pressure maximum arises at a fixed temperature, its radial location scales with other parameters as: (2) Inspection of our Rosseland mean opacities in Figure 1 indicates values of " 0.5 for temperatures around 1000 K, thus for this choice of we find: Therefore, for this choice of , we obtain an expression that is exactly the same as the one found by Chatterjee & Tan (2014), who assumed that the disc was cooled radiatively. This agreement arises because for values of of order unity or smaller, the radiative p q relation in discs 9 1{4 eff is a reasonable approximation for the mid-plane temperature in a very optically thick convective disc (e.g. Cannizzo & Wheeler 1984). Putting in the correct constants and fiducial disc parameters for a Solar mass star into eq. (3), one arrives at a value of a few tenths of an AU, which falls within the range of the observed orbital radii of the close-in super-Earths (see Chatterjee & Tan 2014). Equation (3) tells us how the radius of the pressure maximum depends on the various disc and stellar parameters. If the pressure maximum arises at a fixed temperature, a disc that is hotter will feature a pressure maximum at a larger distance from the star, and a disc that is too cold will not feature a pressure maximum at all. The disc midplane is hotter if the heating rate is higher, or if radiative cooling is less efficient. The heating rate due to viscous dissipation is directly proportional to the accretion rate and the stellar mass. On the other hand, the cooling rate is reduced by increasing the optical depth of the disc. At a fixed value of viscosity parameter , higher accretion rate and stellar mass yield a disc with a higher surface density and thus a disc that is more optically thick and less efficient at cooling. Higher opacity also makes the disc more optically thick. Therefore, higher accretion rate, stellar mass and opacity all lead to a larger radius of the pressure maximum. 
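As a rough cross-check of the argument above, the scaling can be re-derived under the stated assumptions (steady viscous accretion, a strongly optically thick interior, radiative vertical transport as an approximation to the convective case, and κ ∝ T^β); the exponents below follow from this sketch and need not coincide exactly with equations (1)-(3):

σ T_eff^4 ≃ (3/8π) Ṁ Ω^2,  T^4 ≃ (3τ/8) T_eff^4,  τ = κΣ/2,  Σ ≃ Ṁ Ω/(3π α c_s^2),  κ = κ_0 (T/T_0)^β
⇒ T^{5−β} ∝ κ_0 Ṁ^2 Ω^3 / α,

where σ is the Stefan-Boltzmann constant. Setting T equal to a fixed threshold temperature for the onset of the MRI and using Ω^2 = G M_*/r^3 then gives

r_max ∝ M_*^{1/3} (κ_0 Ṁ^2/α)^{2/9} ∝ Ṁ^{4/9} M_*^{1/3} α^{−2/9} κ_0^{2/9},

independent of β once the threshold temperature is fixed. These exponents are consistent with the approximate scalings quoted later in the results (roughly Ṁ^{1/2}, M_*^{1/3} and α^{−0.22}).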
Conversely, higher value of the viscosity parameter makes the disc surface density lower and thus it makes the disc less optically thick. The radius of the pressure maximum is thus inversely related to . Note that the value of discussed here is the value at the location of the pressure maximum, i.e., at the location where the MRI is (largely) suppressed. It thus refers to the small viscosity driven either by propagation of turbulence from the adjacent MRIactive inner region or by other, hydrodynamical processes. In this work, it is a free parameter, and we refer to it as the minimum or dead-zone viscosity parameter. Furthermore, we can expand on eq. (3) by considering how the mass opacity ( ) depends on the properties of dust. First, we assume that the dust grain size distribution follows a power-law distribution p q 9´3 .5 with a minimum grain size min and a maximum grain size max . Second, we neglect gas opacities and scattering (or assume a constant albedo). Under these assumptions, the opacity 1 The optical depth to cooling radiation can be calculated through the evaluation of " ş 8 0 p p qq p qd . can be approximated as the ratio of the surface area of all dust grains larger than the wavelength of peak local radiation (smaller grains, if there are any, contribute much less to the absorption cross section) and the total dust mass. For the above grain size distribution, this surface area is dominated by the grains of the size of the peak wavelength, while the mass is dominated by the largest grains. Then, if we also assume that the maximum dust grain size ( max ) is much larger than the peak wavelength, at a fixed temperature, the opacity has a simple dependence on max and the dust-to-gas ratio dg , 0 9 dg´1 {2 max . Finally, for the radial location of the pressure maximum we obtain Evidently, this scaling does not take into account how the critical ionization temperature (at which thermionic and ion emission become efficient) depends on the dust properties. Even if the disc is thermally ionized, the above derivation neglects the dependence of the critical temperature on the density and the complexity of the criteria for the onset of the MRI. However, for the case of thermal ionization, we can compare eq. (4) with fits to the numerical results of Mohanty et al. (2018), who self-consistently coupled the simple Shakura-Sunyaev disc model with thermal ionization of potassium and a detailed prescription for MRI-driven viscosity (the same one used in this work). The fits were obtained for the viscosity parameter, accretion rate and stellar mass, which combined together yield While the exponents here deviate somewhat from eq. (4), the deviations are small (see also Kretke et al. 2009). Therefore, for the case of thermal ionisation, the scaling of max with , 9 andg iven in eq. (4) remains approximately correct. In this work, we use our new model to determine whether these deviations from the simple scaling become larger when thermionic and ion emission are accounted for, and also explore the dependence on the dust parameters. Finally, note that the entire discussion so far has been made under an assumption that the disc's ionization fraction rises sharply above a fixed critical temperature. However, non-thermal sources of ionization are also present in protoplanetary discs (most importantly stellar X-rays; Glassgold et al. 1997;Ercolano & Glassgold 2013). Non-thermal sources of ionization increase the ionization fraction at all orbital radii, not only in the high-temperature inner region. 
In Paper I we explored how this affects the MRI-driven viscosity in the inner disc, for a set of fiducial model parameters. The effect of any of the ionisation sources on the MRI-driven viscosity depends on the magnetic field strength. Under the assumptions made in Paper I about the field strength (that it is vertically constant and evolves to maximize the MRI-driven ), the stellar X-rays only contribute to the MRI-driven accretion in the outer regions, outwards from the pressure maximum, and also at the location of the pressure maximum, since has to be a continuous function. If this is indeed the case, the role of the X-rays is simply to increase the in eq. (4), which pushes the pressure maximum inwards. Since the contribution of the X-rays to accretion in the inner regions is neglected, the above assumptions maximize the effect of the X-rays on the location of the pressure maximum. Nevertheless, in Paper I we showed that, for fiducial disc and stellar parameters, this effect is completely negligible. Firstly, for those fiducial parameters, the surface densities in the inner disc are much higher than the column that the X-rays can penetrate. Second, in the X-ray ionized regions, dust grains act as an efficient recombination pathway for the ions, which significantly suppresses the MRI (Sano et al. 2000;Ilgner & Nelson 2006;Wardle 2007;Salmeron & Wardle 2008;Bai & Goodman 2009;Mohanty et al. 2013). The adsorption of gas-phase ions onto the dust grain surfaces depends primarily on the total dust grain surface area. Therefore, we can expect that the stellar X-rays become more important in a less massive disc (e.g., for disc accretion rates that are lower than our fiducial one), and also for lower dust-to-gas ratios and larger dust grains. As we explore the disc parameter space in this paper, we identify the limits in which the ionization due to stellar X-rays becomes important for the location (and the existence) of the pressure maximum. METHODS Our disc model is presented in Paper I. Here, we only summarize the main points. It is assumed that the viscously-accreting disc is in steady-state, i.e., that the gas accretion rate 9 is radially constant. The disc structure is calculated self-consistently with disc opacities, ionization state and the viscosity due to the MRI. The disc is assumed to be in vertical hydrostatic and thermal equilibrium, heated by viscous dissipation and stellar irradiation, and cooled radiatively and/or via convection. We account for heating by stellar irradiation using the grazing angle prescription (Calvet et al. 1992;Chiang & Goldreich 1997;D'Alessio et al. 1999). For the first radial point we assume the flatdisc approximation. The disc structure and the grazing angle at every other radial point are determined self-consistently. To calculate the grazing angle, we perform a smoothing over a number of grid points (see Paper I for details). As such, the first few grid points in our calculation are sensitive to how we perform this smoothing, and we indicate the affected region in our figures throughout this paper. We consider disc opacities due to silicate dust grains, and we calculate the opacities for a MRN grain size distribution (Mathis et al. 1977), with a minimum grain size min " 0.1 m and a maximum grain size max , using optical constants from Draine (2003, see Paper I for details), and assuming bulk density of dust grains s " 3.3 g cm´3. 
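As a rough consistency check on the κ_0 ∝ ε_dg a_max^{−1/2} scaling invoked in Section 2, the sketch below integrates an MRN distribution n(a) ∝ a^{−3.5} and counts only grains larger than the local peak wavelength as absorbers. This is a crude stand-in for the opacities computed from the Draine (2003) optical constants in the actual model, so only the trend with a_max and ε_dg should be read from it, and it applies only when a_max is well above the peak wavelength (~3 μm at 1000 K).

```python
import numpy as np

def geometric_opacity(a_max, eps_dg=0.01, a_min=1e-5, rho_s=3.3, T=1000.0):
    """Crude geometric opacity per gram of GAS for an MRN size distribution.

    n(a) da ∝ a^-3.5 da between a_min and a_max [cm]. Only grains larger than
    the local peak wavelength (Wien: lam ~ 0.29/T cm) are counted as absorbers,
    mimicking the argument in Section 2.
    """
    lam = 0.29 / T                                   # peak wavelength [cm]
    a = np.logspace(np.log10(a_min), np.log10(a_max), 2000)
    w = a**-3.5                                      # un-normalised n(a)
    area = np.trapz(np.pi * a**2 * w * (a > lam), a)          # cross-section
    mass = np.trapz((4.0 / 3.0) * np.pi * rho_s * a**3 * w, a)  # dust mass
    return eps_dg * area / mass                      # cm^2 per gram of gas

for a_max in (1e-2, 1e-1, 1.0):
    print(f"a_max = {a_max:7.0e} cm  ->  kappa ~ {geometric_opacity(a_max):5.2f} cm^2/g")
```

Successive decades in a_max lower the opacity by roughly a factor of three, i.e. approximately the a_max^{−1/2} behaviour assumed above.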
In the calculation of the Rosseland-mean opacity we include the scattering coefficient, but assume that the scattering is isotropic. Figure 1 shows the opacities, calculated per unit mass of gas, for different maximum grain sizes a_max, assuming a dust-to-gas ratio ε_dg = 0.01 and stellar effective temperature T_* = 4400 K. We only consider the structure of the disc beyond the silicate sublimation line, where dust opacities typically dominate over those of the gas, and so neglect the opacities due to gas molecular and atomic lines (but see section 6.3). Additionally, water ice and carbonaceous grains have sublimation temperatures that are much lower than the temperatures expected in the hot MRI-active regions (e.g. Pollack et al. 1994). These grains may condense in colder layers, such as the photosphere to the disc's own radiation. Still, for simplicity, we neglect the contribution from these other dust species.

Figure 1. Planck-mean opacity (top), Rosseland-mean opacity (middle), and Planck-mean opacity evaluated at the stellar effective temperature (i.e., the absorption coefficient for the stellar irradiation; bottom), per unit mass of gas, as functions of disc temperature, for different maximum grain sizes a_max as indicated in the plot legend, assuming a dust-to-gas ratio ε_dg = 0.01 and stellar effective temperature T_* = 4400 K. Absorption is dominated by small grains, and so the Planck-mean opacities decrease with increasing maximum grain size. The Rosseland-mean opacity is a non-monotonic function of grain size for a_max ≲ 10^−2 cm.

The disc ionization state is calculated using a simple chemical network (Desch & Turner 2015) that includes thermal (collisional) ionization of potassium; ionization of molecular hydrogen by stellar X-rays, cosmic rays and radionuclides (producing metal (magnesium) ions by charge transfer); gas-phase recombinations; adsorption onto dust grains; and thermionic and ion emission from dust grains. Both charged and neutral species can be adsorbed (or condensed) in collisions with dust grains. Neutral potassium atoms, potassium ions and electrons can also be emitted back into the gas phase. The rates at which the latter processes occur increase with increasing temperature, as determined by three different activation energies and the charge state of the dust grains. Firstly, evaporation of potassium atoms from dust grains is determined by our adopted binding energy of potassium (3.26 eV), whose value is chosen to match the condensation temperature of a common potassium-bearing mineral (see Desch & Turner 2015). Furthermore, a fraction of the potassium atoms evaporating from dust grains may be ionised in the process. This is referred to as ion emission, and it is a function of the ionization potential of potassium (IP = 4.34 eV), as well as the work function of the material of which the dust is composed (for the discussion of the adopted value of 5 eV, see Desch & Turner 2015). Thermionic emission, i.e., emission of electrons from heated dust grains, is also determined by the work function. Additionally, the charge state of the grains effectively changes the work function. For example, ion emission results in negatively charged grains, which reduces the effective work function and increases the thermionic emission rate.
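The steepness of this temperature dependence is easy to see from the Boltzmann factors alone. The toy calculation below evaluates exp(−E/kT) for the adopted potassium binding energy (3.26 eV) and the 5 eV work function; the full emission rates also involve prefactors and the grain charge, which are ignored here, so only the orders-of-magnitude change between roughly 800 K and 1200 K is the point.

```python
from math import exp

k_B_eV = 8.617e-5          # Boltzmann constant [eV/K]
energies = {"K binding energy (3.26 eV)": 3.26,
            "work function (5 eV)": 5.0}

for label, E in energies.items():
    print(label)
    for T in (800.0, 1000.0, 1200.0):
        # relative ease of thermally activated emission at temperature T
        print(f"  T = {T:6.0f} K:  exp(-E/kT) = {exp(-E / (k_B_eV * T)):.1e}")
```

The factors grow by several orders of magnitude over a few hundred kelvin, which is why the ionization fraction switches on sharply near ~1000 K regardless of which emission process dominates.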
Only a single dust grain species of size gr " 0.1 m is considered in the chemical network, but an effective dust-to-gas ratio eff is chosen to mimic the full size distribution stated above (see Paper I for details). This approach is based on an assumption that multiple grain species behave independently in the chemical network, and that it is a combination of the grain abundance and the grain size that controls the chemistry of the disc (see also Bai & Goodman 2009). Specifically, we choose eff here such that the ionisation threshold temperature (above which thermionic and ion emission become efficient) approximately mimics that expected for the full range of grain sizes (see Paper I). Among the non-thermal sources of ionization, stellar X-rays appear to be the most important one for the models investigated here where the focus is on the inner regions of the disc. The ionization rate of molecular hydrogen by stellar X-rays is calculated using Bai & Goodman (2009) fits to the Igea & Glassgold (1999) Monte Carlo simulations (using the fits for " 3 keV), so at any point in the disc it is a function of the cylindrical radius and the vertical mass column from the disc surface to that point. We ignore the X-rays coming through the bottom side of the disc and note that in a low-surface-density disc this may increase the ionization fraction by a factor of 2 at most. For the stellar X-ray luminosity we adopt X " 10´3 .5 bol (e.g. Wright et al. 2011;Güdel et al. 2007;Preibisch et al. 2005, but note that there is significant scatter in the observed luminosities and for a single star this luminosity may also be variable due to stellar flares). Finally, the viscosity due to the MRI, parametrized using the Shakura & Sunyaev (1973) parameter, is calculated using a prescription based on the results of magnetohydrodynamic simulations (see Paper I). This accounts for the suppression of the MRI by Ohmic and ambipolar diffusion. In the MRI-dead zones (where the MRI is suppressed), we assume the gas can still accrete due to a small constant viscosity parameter DZ , induced either by the adjacent MRI-active zone or by purely hydrodynamical instabilities. The viscous is calculated both as a function of radius and height, and we define a vertically-averaged viscosity parameter where surf is the height of the disc surface, defined as the height above the mid-plane where the gas pressure falls below a small constant value ( p surf q " 10´1 0 dyn cm´2). The weighting of the viscosity parameter by pressure is motivated by the relationship between the and the accretion rate: in steady-state, the accretion rate is proportional to the numerator in the above expression. RESULTS Here, we explore the effects of varying the important physical parameters on the inner disc structure. Initially, we vary the parameters one-by-one: dust-to-gas ratio dg and maximum dust grain size max in section 4.1, and the gas accretion rate 9 , stellar mass˚, and the dead-zone viscosity DZ in section 4.2. We then extend the exploration of the parameter space by concurrently considering a larger dust grain size and a lower gas accretion rate, in section 4.3, discovering an X-ray dominated solution. As our fiducial model, taken from Paper I, we consider a disc with a gas accretion rate 9 " 10´8 M d yr´1, stellar mass˚" 1 M d , stellar radius " 3 R d , effective stellar temperature˚" 4400 K 2 , viscosity parameter in the MRI-dead zone DZ " 10´4, dust-to-gas ratio dg " 10´2, and maximum dust grain size max " 10´4 cm. 
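Before moving to the results, here is a minimal sketch of the pressure-weighted vertical average described above, ᾱ = ∫ α P dz / ∫ P dz taken up to z_surf, whose numerator is the quantity proportional to the steady-state accretion rate; the Gaussian pressure profile and step-like α(z) used to exercise it are illustrative placeholders, not output of the actual model.

```python
import numpy as np

def alpha_bar(z, alpha, P):
    """Pressure-weighted vertical average of the viscosity parameter."""
    return np.trapz(alpha * P, z) / np.trapz(P, z)

# illustrative vertical structure: Gaussian pressure, MRI-active surface layer
z = np.linspace(0.0, 5.0, 501)            # height in units of the scale height H
P = np.exp(-0.5 * z**2)                   # placeholder Gaussian pressure profile
alpha = np.where(z > 3.0, 1e-2, 1e-4)     # dead midplane, active layer above 3H

print(f"alpha_bar = {alpha_bar(z, alpha, P):.2e}")
# most of the pressure sits near the midplane, so the active surface layer
# contributes little and alpha_bar stays close to the dead-zone value
```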
In Paper I, this model was used to discuss the impact of various physical and chemical processes on the inner disc structure. Dust-to-gas ratio and dust size As noted in section 2, dust has two effects on the disc structure in our model: it determines opacities in the disc, and it affects the disc ionization state. To better understand the results of varying dust properties, we first consider only the dust opacities. That is, in section 4.1.1, we consider a model with a vastly simplified chemical network, in which the only source of ionization is thermal ionization of potassium and free charges recombine only in the gas phase. Then, in section 4.1.2 we present the results of our full model which also accounts for the adsorption of charges onto dust grains, thermionic and ion emission from dust grains, and ionization of molecular hydrogen by stellar X-rays, cosmic rays and radionuclides. Thermally-ionized model In this section we consider a model which includes dust opacities, but does not include dust effects on the disc chemistry, nor ionization of molecular hydrogen. Thus, the ionization fraction is set exclusively by thermal ionization. The results of varying the dustto-gas ratio, dg , in the range 10´4´1 are shown in the left column of Fig. 2. In this simplified, thermally-ionized disc model, the MRI is only active at small radii. Therefore, the viscosity is highest in the innermost region where the midplane ionization fraction and the midplane temperature (shown in the second and third row, respectively) are highest, and it decreases with distance from the star. At some radius the ionization fraction drops below that needed to sustain the MRI, and the viscosity parameter¯falls to the minimum, dead-zone value DZ . That radius is the location of the local gas pressure maximum, shown in the bottom panel. We find that a higher dust-to-gas ratio results in a larger MRIactive zone (the region where¯ą DZ ). This is the behaviour expected from eq. (4). Because the inner disc is optically-thick, the disc midplane temperature is set by the accretion heat released near midplane and the optical depth of the disc to its own radiation (with a caveat that the vertical temperature gradient is additionally limited by convection; see Paper I). The disc's opacity is directly proportional to the dust-to-gas ratio, i.e., a disc with more dust is more optically-thick. As discussed in section 2, increasing optical depth makes the cooling less efficient, and the midplane hotter and more ionized, leading to a higher MRI-induced viscosity at a given radius. At lower dust-to-gas ratios gas opacities (which are neglected here) should become more important than at higher dust-to-gas ratios, but the main results shown here would likely be unaffected if the gas opacities were included (see Section 6.3). Results for a thermally-ionized disc (where the only source of ionization is thermal ionization of potassium, and dust effects on the disc chemistry are not included). The left column shows models with a constant maximum grain size max " 10´4 cm and varying dust-to-gas ratio dg as indicated in plot legend. The right column shows models with a constant dust-to-gas ratio dg " 10´2 and varying maximum grain size max as indicated in plot legend. The rows show radial profiles of (from top to bottom) vertically-averaged viscosity parameter¯, midplane free electron fraction e { H 2 , midplane temperature and midplane pressure. 
The inner edge of the dg " 1 model is set to " 0.3 AU, since radially inwards temperature increases above the sublimation temperature of silicates. The light lines indicate the regions affected by the inner boundary condition (see Section 3). The radius of the pressure maximum is larger for a larger dust-to-gas ratio and a smaller maximum grain size. Note that the axis ranges are different in the left and the right column. See Section 4.1.1. Furthermore, the right column of Fig. 2 shows models with a constant dust-to-gas ratio of dg " 10´2, but varying the maximum dust grain size max in the range 10´4´1 cm. For the three values of max considered here, the MRI-active region becomes smaller if dust grains are larger. This is because, for these values of max , larger dust grains have lower opacities (see Fig. 1), making the inner disc less optically thick. Just as in a disc with a lower dust-to-gas ratio, this makes the disc midplane cooler and less ionized. Therefore, in these simplified, thermally-ionized models, if dust growth were to happen, this would result in the MRI-active zone edge being pushed inwards. We emphasize that it is the changes to the dust opacities alone that lead to these significant changes in the extent of the MRI-active zone. The effects of including dust grains in our chemical network are explored next. Full model In this section, we consider our full model that additionally includes (direct) effects of dust on the ionization fraction (adsorption of free charges onto dust grains, thermionic and ion emission), and also ionization of molecular hydrogen. Three sources of ionization are considered for the latter (stellar X-rays, cosmic rays and radionuclides), the X-rays being the most important (see Paper I). The results of varying the dust-to-gas ratio and maximum dust grain size in this full model are shown in the left and the right column of Fig. 3, respectively. As in the simplified, thermally-ionized models discussed above, in the innermost regions the viscosity parameter decreases with distance from the star. Here, however, the viscosity parameter reaches a minimum value close to the dead-zone viscosity, and then increases again radially outwards, due to ionization by stellar X-rays (this is true in all models, even if not always evident in the plots). As discussed in Section 2 (Paper I; Desch & Turner 2015), in the full model the main source of ionization are thermionic and ion emission from dust grains in the inner disc. While this is a fundamentally different mechanism from gas-phase thermal ionization, the ionization states of the disc are quantitatively similar in the two cases due to their similar activation energies. As a result, the viscosity parameter in these innermost regions is similar to the models with no dust in the chemical network. In Paper I, we discussed this case for the fiducial maximum grain size max " 10´4 cm. The results presented in this work show that this conclusion holds at a wide range of dust-to-gas ratios and grain sizes (e.g., compare Fig. 3 with Fig. 2). The case of high dust-to-gas ratio of dg " 1 deviates somewhat from the above scenario. In this model, the midplane free electron fraction decreases substantially already at the distance of " 0.9 AU (see the left panel, second row in Fig. 3). However, the viscous¯remains high out to " 2 AU in this model (the top left panel in Fig. 3). In this high-¯region, the MRI is indeed active at the disc midplane (as shown in the top panel of Fig. 
4), despite the large decrease in the midplane electron number density. What drives the MRI in this case? The bottom panel of Fig. 4 shows that between " 0.4 AU and " 2 AU the main ionized species are the potassium ions and the dust grains. As expected, the number density of electrons decreases with increasing dust-to-gas ratio (keeping other parameters fixed). However, the opposite is true for the number density of potassium ions evaporating from dust grains, above " 900 K (see Desch & Turner 2015), which increases with increasing dust density. In the resulting disc ionization state, due to charge conservation, the total charge of potassium ions equals the total charge on dust grains. Clearly, since the dust grains have a much higher inertia than potassium ions, it is the potassium ions that couple the gas to the magnetic field. Overall, these results show that emission of potassium from dust grains is sufficient to sustain the MRI out to large radii, at high dust-to-gas ratios. Although, we note that at dust-to-gas ratios approaching unity the dynamical back reaction of the dust on the gas would need to be included, an effect which is poorly studied in the context of MRI turbulence and by extension not included in our parameterisation of the how the viscosity depends on the ionization structure. Clearly, MHD simulations of this disc state are warranted to study its behaviour. Particularly as our previous work in Jankovic et al. (2019) has indicated dust-to-gas ratios approaching unity in the inner MRI active regions is a possible outcome of disc evolution. Furthermore, as noted above, in our full model the viscosity parameter¯reaches a minimum value, outwards from which it increases with radius. The minimum in the viscosity parameter corresponds to the location of the gas pressure maximum, shown in the bottom panels of Fig. 3. Outwards of the pressure maximum, the temperature at the disc midplane is too low for efficient ionization, and so the MRI is only active in an X-ray-ionized layer high above disc midplane (seen near " 2 AU in the top panel of Fig. 4). Inwards of the pressure maximum, this X-ray-ionized layer does not appear; as discussed in Paper I, this is due to our assumption that the magnetic field strength is vertically constant, and the fact that the magnetic field strengths required for the MRI differ greatly in the high-density midplane and the low-density upper layers. In the outer, X-ray-ionized regions, the viscosity parameter increases with decreasing dust-to-gas ratio and increasing dust grain size. This is because the main source of ionization are the stellar X-rays, and dust only acts as a recombination pathway. For lower dust-to-gas ratios and, equivalently, higher maximum grain size, the total grain surface area (onto which free charges adsorb) decreases, leading to higher ionization fraction and higher MRI-driven viscosity (Sano et al. 2000;Ilgner & Nelson 2006). As expected from eq. (4), this increase in the value of¯at the location of the pressure maximum pushes the pressure maximum inwards. This effect is somewhat exaggerated since the effect of the X-rays on¯is neglected inwards of the pressure maximum (due to the model limitations noted above). Nevertheless, for the majority of the parameter space explored so far the contribution to the accretion rate from the X-ray-ionized layer is low in all cases and the viscosity parameter¯" DZ in the outer regions. 
In other words, in these outer regions the gas primarily accretes through the dead zone, and the stellar X-rays do not affect strongly the location of the pressure maximum (for the given gas accretion rate). Overall, the extent of the high-viscosity inner region and the location of the pressure maximum is dictated by the dependence of disc's vertical optical thickness on the dust-to-gas ratio and dust grain size through the effects discussed in the previous section. In our full model the location of the pressure maximum is approximately the same for max " 10´4 cm and max " 10´2 cm (see the top right panel in Fig. 3). This is because a larger grain size results in a lower effective dust-to-gas ratio in our chemical network. This slightly decreases the critical temperature at which the thermionic and ion emission make the gas sufficiently ionized to start the MRI at disc midplane. Concurrently, the dust opacity for the case of max " 10´2 cm is only slightly smaller than for max " 10´4 cm in the relevant temperature range (see Fig. 1). The location of the pressure maximum as a function of dust grain size is considered in more detail in section 5. Results for our full model which includes dust effects on disc chemistry and ionization of molecular hydrogen. The left column shows models with a constant maximum grain size max " 10´4 cm and varying dust-to-gas ratio dg as indicated in plot legend. The right column shows models with a constant dust-to-gas ratio dg " 10´2 and varying maximum grain size max as indicated in plot legend. The rows show radial profiles of (from top to bottom) vertically-averaged viscosity parameter¯, midplane free electron fraction e { H 2 , midplane temperature and midplane pressure. The light lines indicate the regions affected by the inner boundary condition (see Section 3). The radius of the pressure maximum is larger for a larger dust-to-gas ratio; it is approximately the same for the maximum grain size of 10´4 cm and 10´2 cm, but much smaller for the maximum grain size of 1 cm. Note that the axis ranges are different in the left and the right column. See Section 4.1.2. Midplane n/n H2 n e /n H2 n K + /n H2 n i /n H2 n g |Z g |/n H2 Gas accretion rate, stellar mass, dead-zone viscosity In this section we keep the dust-to-gas ratio and the maximum dust grain size constant and equal to our fiducial values, and investigate how the structure of the inner disc changes with varying gas accretion rate, stellar mass and dead-zone viscosity. Fig. 5 shows the results of our fiducial model compared to three other models in which we vary these three parameters. The different panels show, from top to bottom, the vertically-averaged viscosity parameter (¯), midplane free electron fraction ( e { H 2 ), midplane temperature and midplane pressure, as functions of radius. In each panel the dashed line shows a model with a gas accretion rate 9 " 10´9 M d yr´1, lower than in our fiducial model with 9 " 10´8 M d yr´1, shown by the solid line. The lower gas accretion rate results in a smaller high-viscosity inner region, and a gas pressure maximum at a shorter radius. This is in line with the theoretical expectations discussed in Section 2. In the opticallythick inner disc, the midplane temperature (and consequently the ionization fraction and the viscosity) is set by the rate of viscous dissipation, and the latter is directly proportional to the gas accretion rate. 
Additionally, fixing other parameters, gas accretion rate also sets the gas surface density, and thus the disc optical depth and (2020) how efficiently the disc cools. Therefore, the lower gas accretion rate yields a colder, less ionized disc. The radius of the gas pressure maximum scales with the gas accretion rate approximately as max 9 9 1{2 . This is close to the prediction given by eq. (4) and, in fact, the same as the scaling given by eq. (5), previously found by Mohanty et al. (2018), who neglected heating by stellar irradiation, dust effects and ionization of molecular hydrogen. While we find that stellar irradiation and ionization of molecular hydrogen are indeed unimportant for the model parameters chosen here, the dust effects are not. However, as discussed in Section 2, while the chemistry setting the ionization state of the disc is qualitatively different in their simple models (thermal ionization of potassium) and in the models presented here (thermionic and ion emission), in both cases the ionization fraction increases sharply above roughly the same temperature (" 1000 K), yielding the same approximate scaling. X-ray ionization of molecular hydrogen becomes more important at the lower gas accretion rate. X-rays activate the MRI in a layer high above the disc midplane outwards of the pressure maximum, increasing the viscosity parameter¯compared to the dead-zone value DZ . For an accretion rate of 10´9 M d yr´1 the vertically averaged viscosity has increased by a factor of 2 outside the deadzone over the model with an accretion rate of 10´8 M d yr´1. This is because in these outer regions a lower gas accretion rate results in lower gas surface densities. Since the accretion rate carried by the X-ray ionized layer is roughly constant (e.g. set by the penetration depth of the X-rays) at lower accretion rates it has a larger relative contribution to the total accretion rate. This contribution remains small for the small maximum grain size assumed here; in the next section we discuss how this finding changes if grains are larger. We find that the structure of the inner disc surrounding a˚" 0.1 M d star 3 (shown by the dash-dotted lines in Fig. 5) is merely shifted radially inwards compared to our fiducial model with˚" 1 M d . Similar to the gas accretion rate, the stellar mass determines the disc heating rate due to accretion, as well as the disc optical depth through the dependence of the disc surface density on the stellar mass. The resulting approximate scaling max 9´1 {3 is identical to the one given by eq. (4), stressing again that (for the chosen 9 , dust parameters etc.) stellar irradiation and ionization by stellar X-rays are unimportant in setting the location of the pressure maximum. Note that in Fig. 5 we vary the disc parameters one-byone from our fiducial values, whereas observations show that stellar mass and gas accretion rate are correlated (e.g. Mohanty et al. 2005;Manara et al. 2012;Alcalá et al. 2014Alcalá et al. , 2017Manara et al. 2017). We investigate the combined effect of a lower stellar mass and a lower gas accretion rate in our detailed parameter study in Section 5. Finally, the dotted line in Fig. 5 shows a model with a deadzone viscosity DZ " 10´3 (higher than our fiducial DZ " 10´4). As in the simple models of Mohanty et al. (2018), the exact value of DZ is unimportant in the innermost, well-ionized region. In the outer regions the accretion stress is dominated by that in the dead zone, and so DZ sets the disc structure there. 
This includes the disc midplane temperature, and so DZ sets the location where, going radially inwards, the midplane temperature reaches the critical value above which the disc midplane becomes well ionized. Once again, the approximate scaling that we find, max 9´0 .22 DZ is close to the one expected from simple arguments laid out in Section 2. Large dust grains and low accretion rates Within the parameter space explored above, the steady-state solution for the inner disc structure remains qualitatively the same, with a local gas pressure maximum at the transition between the high-viscosity inner region and the low-viscosity outer region. Nonthermal sources of ionization (dominated by stellar X-rays) can suppress this picture by increasing the viscosity in the outer region and removing the pressure maximum. Within the parameter space explored above, the importance of non-thermal ionization increases for larger grains and lower gas accretion rates, but overall remains very small. However, for large enough grains and low enough gas accretion rate, the above picture changes entirely. We find that for a maximum grain size max " 1 cm and a gas accretion rate of 9 " 10´9 M d yr´1 (with other model parameters equal to the fiducial values) our model produces a steady-state solution in which the viscosity parameter is of the order of¯" 7ˆ10´2 throughout the inner disc, and consequently there is no local gas pressure maximum (see Fig. 6). Stellar X-rays are the main source of ionization in this low-surface-density solution and they drive the MRI down to the disc midplane ( Fig. 7 shows that the metal ions vastly outnumber the potassium ions at the midplane). The disc is hot enough to ionise potassium inwards of 0.1 AU and there is a drop in the midplane ionisation fraction at this radius (see Fig. 7). However, there is no pressure maximum associated with this drop in ionisation, as the ionisation fraction remains high enough to yield a high viscosity. Moreover, the midplane ionisation fraction increases again radially outwards from this location, suggesting that the viscosity remains high in the outer disc, beyond our computational domain. Overall, these results may be interpreted as the MRI-dead zone being completely wiped out from the disc by high ionisation rate (due to low gas surface densities) and low recombination rate (due to the total surface area of the dust grains being reduced for larger maximum dust grain size). This steady-state solution does not appear to be a unique steady-state solution for these model parameters. Multiple solutions can arise, for example, because the various thermal and nonthermal sources of ionization produce an ionisation fraction that is a complex function of temperature, density and column density (an example was discussed in Paper I). The disc ionization state is further convolved with complex MRI criteria, our assumptions about the magnetic field strength, and possibly with artefacts of the numerical procedures used (e.g. the vertical smoothing of the viscosity parameter). The uniqueness of the solution is affected both in terms of the equilibrium steady-state solution at a fixed orbital radius and a fixed value of the magnetic field strength, and in terms of the number of local maxima of the MRI-driven viscosity parameter¯as a function of the magnetic field strength. The steady-state solution shown in Fig. 6 is favoured by our assumption that the disc structure and the magnetic field strength adapt to maximize¯at any given orbital radius. 
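The approximate power-law scalings collected in Section 4.2 (r_max ∝ Ṁ^(1/2), r_max ∝ M_*^(1/3) and r_max ∝ α_DZ^(-0.22)) lend themselves to a quick back-of-the-envelope evaluation. The Python sketch below combines them into a single relation; the normalization radius r0_au at the fiducial parameters is an assumed placeholder, not a value taken from the model output, and the stellar-mass exponent is taken as +1/3, consistent with the statement above that a lower-mass star shifts the structure inwards.

```python
# Quick evaluation of the approximate scalings for the pressure-maximum radius
# quoted in Section 4.2: r_max ∝ Mdot^(1/2) * M_*^(1/3) * alpha_DZ^(-0.22).
# r0_au is an assumed placeholder normalization (radius at the fiducial parameters).

def r_pressure_max(mdot, mstar, alpha_dz,
                   mdot0=1e-8, mstar0=1.0, alpha_dz0=1e-4, r0_au=0.6):
    """Approximate pressure-maximum radius in AU (power-law scaling only).

    mdot in M_sun/yr, mstar in M_sun, alpha_dz dimensionless.
    """
    return (r0_au
            * (mdot / mdot0) ** 0.5
            * (mstar / mstar0) ** (1.0 / 3.0)
            * (alpha_dz / alpha_dz0) ** (-0.22))

# A lower accretion rate or lower stellar mass moves the maximum inwards;
# a higher dead-zone viscosity also moves it inwards, but only weakly.
print(r_pressure_max(1e-9, 1.0, 1e-4))   # fiducial star, 10x lower Mdot
print(r_pressure_max(1e-8, 0.1, 1e-3))   # 0.1 M_sun star, higher alpha_DZ
```

The weak α_DZ exponent makes clear why the dead-zone viscosity barely shifts the pressure maximum compared with the accretion rate and stellar mass.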
Further investigation of this issue (e.g. determining whether a steady-state solution featuring a pressure maximum can be constructed for these model parameters) is hampered by computational cost of root-solving and optimization of functions with non-unique solutions. Regardless of the existence of other solutions, the abrupt change in the steady-state structure as model parameters are varied implies that time-dependent simulations are required to examine what configuration the disc would evolve into once the X-ray-driven solution becomes viable. Nevertheless, we can analyse under which conditions the X-ray-ionized steady-state solution can arise.

[Figure 6 caption: Vertically-averaged viscosity parameter ᾱ (top) and midplane pressure (bottom) for a disc model with a gas accretion rate Ṁ = 10^-9 M_sun yr^-1, stellar mass M_* = 1 M_sun, dead-zone viscosity α_DZ = 10^-4, dust-to-gas ratio ε_dg = 10^-2 and maximum grain size a_max = 1 cm. There is no local gas pressure maximum in the inner disc for these model parameters. The light lines indicate the regions affected by the inner boundary condition (see Section 3). See Section 4.3.]

This solution does not appear in a disc with small dust grains, because the higher surface area of the grains enhances the adsorption of charges onto dust grains and lowers the ionisation fraction, as discussed in previous sections. This solution also does not seem to exist at higher gas accretion rates. This can be understood as there being a maximum accretion rate that can be driven by the X-rays and the MRI, essentially due to the X-rays only ionising a limited column in the disc. We can estimate this maximum accretion rate as a function of model parameters and the orbital radius using a simplified model, akin to the approach of Perez-Becker & Chiang (2011). Perez-Becker & Chiang (2011) focused on the role of both X-ray and UV ionisation and a much more detailed chemical network in their work, but the physics setting the maximum accretion rate is similar. First, note that the gas accretion rate is given in steady-state by Ṁ = 3π ᾱ c_s² Σ / (Ω f), where Σ is the total gas surface density, c_s is the sound speed, Ω the Keplerian velocity at a given orbital radius and f = 1 − (R_*/r)^(1/2) a factor accounting for the inner disc edge boundary condition. Second, note that this solution must feature low surface densities, so that the X-rays can ionise the disc midplane. At these low surface densities the disc is only marginally optically thick, and the disc midplane temperature is set by stellar irradiation. In fact, neglecting stellar irradiation largely suppresses the X-ray-ionized steady-state solution, since the temperature (and thus the sound speed) produced by viscous dissipation in a marginally optically thick disc is much lower than the one produced by stellar irradiation, which lowers the accretion rate given by the above expression. Therefore, in this regime, we estimate the disc temperature as T = (φ/2)^(1/4) (R_*/r)^(1/2) T_* (Chiang & Goldreich 1997), assuming for simplicity that the incident angle φ of stellar irradiation is given by the flat-disc approximation (valid for the inner regions). As this yields a fixed radial temperature profile, at a given radius the accretion rate is maximised by maximising the product ᾱΣ. In this regime, the MRI is primarily suppressed by ambipolar diffusion. The maximum viscosity parameter that can be driven by the MRI increases with an increasing ambipolar Elsasser number Am, according to a simple function given by Bai & Stone (2011).
Assuming that the only important charged species are electrons and atomic ions and that we are in the ambipolar-diffusion-dominated regime, Am « i x y i {Ω, where i is the number density of ions and x y i is the rate coefficient for momentum transfer in collisions between ions and neutrals. Therefore, the ambipolar Elsasser number is directly proportional to the number density of ions. On the other hand, in a disc ionised by stellar X-rays, the number density of ions at disc midplane decreases with increasing gas surface density, due to attenuation of the stellar X-rays. As a result, the product of the viscosity parameter and the surface density, pAmpΣqqΣ, is maximized at some value of Σ, and so is the gas accretion rate. Furthermore, we can estimate Am and i at disc midplane, by assuming the only source of ionization are X-rays and atomic ions recombine on dust grains, and using the reaction rates given in Paper I (but here we ignore the charge state of the dust grains). In our chemistry calculations we use an effective dust-to-gas ratio to mimic the grain size distribution; for the purpose of this calculation we use the effective dust-to-gas ratio that corresponds to dust grains of different sizes being weighted by their surface area (different than in the rest of our calculations), which is a more appropriate choice when dust grains only act as a recombination pathway (see section 2.4.2 in Paper I). We calculate the product pAmpΣqqΣ for a range of values of Σ and pick the maximum value. For our fiducial stellar (Solar-mass) parameters and a maximum grain size max " 1 cm, at the radius of 0.1 AU we find that the maximum accretion rate that can be obtained via an X-ray-dominated solution is 9 max « 3ˆ10´9 M d yr´1. This simple estimate thus appears to be in agreement with the results of our full numerical model. Notably, 9 max increases with orbital radius, due to the dependence of the accretion rate on the Keplerian angular velocity, and in spite of the decrease in the X-ray ionisation rate at larger radii (see also Perez-Becker & Chiang 2011). Therefore, at large enough radii the X-ray-dominated steady-state solution should arise even for our fiducial accretion rate (and higher values). For the same parameters as above, within 1 AU from the star, 9 max remains below our fiducial value of 9 " 10´8 M d yr´1, and so the simple estimate also confirms our results from the previous sections. However, more generally, such a configuration where the disc would adopt an X-raydominated solution only outwards from some radius is unphysical, as it would require a sharp and large drop in the gas surface density at the transition between the different steady-state solutions. In the following section we discuss further the importance of this X-raydominated solution for the existence of the pressure maximum, and in Section 6 we discuss how it could fit into the general picture of disc evolution. PRESSURE MAXIMUM The above results show that, for a wide range of disc, stellar and dust parameters, an MRI accreting protoplanetary disc features a high-viscosity inner region, a low-viscosity outer region, and a gas pressure maximum at the transition between the two regions. This gas pressure maximum has been hypothesized to have a key role in the formation of the super-Earths inside the water ice line (Chatterjee & Tan 2014Hu et al. 2016Hu et al. , 2018. In this section, we wish to examine in more detail the existence and location of the pressure maximum. 
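The maximum-accretion-rate estimate described in Section 4.3 above can be sketched numerically as follows: fix the irradiation temperature at a given radius, prescribe how the ambipolar Elsasser number Am falls with surface density, and maximize Ṁ(Σ) = 3π α c_s² Σ/(Ω f) over Σ. In the sketch below the Am(Σ) prescription (Am0, Sigma_X), the grazing-angle factor, the inner-edge radius and the stellar parameters are all placeholder assumptions standing in for the paper's full ionization calculation, and the α(Am) function is the commonly quoted Bai & Stone (2011) fit (its coefficients should be checked against that paper); the numbers it returns are indicative only.

```python
import numpy as np

# Toy estimate of the maximum accretion rate an X-ray-ionized, irradiation-heated
# inner disc can sustain. All prescriptions below are illustrative placeholders.

G, Msun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7   # cgs
kB, mu, mH = 1.381e-16, 2.34, 1.673e-24

def alpha_bai_stone(Am):
    """Maximum MRI alpha permitted by ambipolar diffusion (Bai & Stone 2011-type fit)."""
    return 0.5 / np.sqrt((50.0 / Am**1.2)**2 + (8.0 / Am**0.3 + 1.0)**2)

def mdot_xray(Sigma, r_au, Mstar=1.0, Tstar=4000.0, Rstar_au=0.01,
              Am0=30.0, Sigma_X=10.0, R_in_au=0.05):
    """Steady-state Mdot = 3*pi*alpha*cs^2*Sigma/(Omega*f) with an irradiated midplane."""
    r = r_au * AU
    Omega = np.sqrt(G * Mstar * Msun / r**3)
    phi = 0.05                                        # assumed grazing-angle factor
    T = (phi / 2.0)**0.25 * np.sqrt(Rstar_au / r_au) * Tstar   # flat-disc irradiation
    cs2 = kB * T / (mu * mH)
    Am = Am0 * np.exp(-Sigma / (2.0 * Sigma_X))       # toy X-ray attenuation with Sigma
    f = 1.0 - np.sqrt(R_in_au / r_au)                 # inner-edge boundary factor
    return 3.0 * np.pi * alpha_bai_stone(Am) * cs2 * Sigma / (Omega * f)

Sigmas = np.logspace(-2, 3, 400)                      # g/cm^2
mdots = mdot_xray(Sigmas, r_au=0.1) / (Msun / yr)
print("Mdot_max ~ %.1e Msun/yr at Sigma ~ %.1f g/cm^2"
      % (mdots.max(), Sigmas[np.argmax(mdots)]))
```

Because Am falls as Σ grows (X-ray attenuation) while Ṁ grows linearly with Σ at fixed α, the product α(Am(Σ))Σ peaks at an intermediate surface density, which is the mechanism behind the maximum accretion rate discussed above.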
As discussed in Section 2, the existence of the pressure maximum in the inner disc requires two conditions. First, the ionization fraction should increase (sharply) above some critical temperature. Second, the disc should be hot enough so that the ionization fraction in the innermost region increases above a value required to sustain the MRI. The first of these conditions is not fulfilled if the disc evolves into a steady-state structure primarily ionized by stellar X-rays, discussed in Section 4.3. As noted in Section 4.3, time-dependent simulations are required to establish whether the stellar X-rays could indeed overtake the disc structure and at what point in the disc evolution this could happen. Still, it is important to consider this possibility when discussing the existence of the pressure maximum in the inner disc. Due to computational cost, it is not possible to calculate a large grid of models which include the possibility of multiple equilibrium solutions, as well as heating by stellar irradiation (which is, as discussed above, key for the existence of X-ray-dominated solutions). Therefore, we proceed as follows. In this section, we neglect the heating by stellar irradiation and the ionization of the disc by the stellar X-rays, and use our model to find the radial location of the pressure maximum for various combinations of disc and stellar parameters. Then, for each combination of parameters, we estimate the maximum gas accretion rate that can be attained in an irradiated X-ray-ionized disc, at that radial location, using the simplified model outlined in Section 4.3. If this maximum gas accretion rate of an X-ray-ionized solution is larger than the given, input accretion rate, we note that the pressure maximum might not exist for those parameters. This condition for the existence of the pressure maximum is likely too strict.

[Figure caption: In all models the dust-to-gas ratio is 10^-2. Solid, dashed and dotted lines show results for the dead-zone viscosity parameter α_DZ = 10^-5, 10^-4 and 10^-3, respectively. Blue, green and red lines indicate different gas accretion rates Ṁ as indicated in plot legends for each panel. To obtain these results, we neglect stellar irradiation in our disc model, and use a simple calculation to estimate if including the irradiation would produce a steady-state solution ionized primarily by stellar X-rays and featuring no pressure maximum (shown as empty symbols). This is indeed found to be the case for lower accretion rates and larger dust grains. For M_* = 0.1 M_sun, there may be no pressure maximum for the lower end of the observed accretion rates (Ṁ = 10^-11 M_sun yr^-1) for any maximum grain size. See Section 5.]

The maximum accretion rate of an X-ray-dominated solution increases with increasing orbital radius, and so the above condition does not guarantee that this solution would exist all the way to the inner disc edge for the given parameters. Nevertheless, this condition can also be understood as an estimate of whether a surface X-ray-ionized layer may become more important than the accretion through an MRI-dead midplane, i.e., whether the X-rays could perturb the steady-state solution featuring a pressure maximum. Ultimately, time-dependent simulation and a more complex chemical network will be needed to investigate how a real disc would behave, and we discuss this further in Section 6.3. Fig.
8 shows the radial location of the pressure maximum (or pressure bump) as a function of the maximum dust grain size (see Section 3 for details of the dust grain distribution and how it is mimicked in our chemical network), for a stellar mass of 1 M d (the top panel) and 0.1 M d (the bottom panel). Those models in which the X-rays may erase the pressure maximum are indicated by empty symbols. In all models shown here, we adopt our fiducial dust-togas ratio of 10´2. The solid, dashed and dotted lines correspond to different values of the dead-zone viscosity parameter ( DZ " 10´5, 10´4 and 10´3, respectively). For the different stellar masses we explore different ranges of the gas accretion rate 9 , as indicated in plot legends next to each panel. The chosen ranges are motivated by observational studies, which find that for a Solar-mass star typically 9 " 10´8 M d yr´1 and for the stellar mass of 0.1 M d , typically 9 " 10´1 0 M d yr´1 (e.g. Mohanty et al. 2005;Manara et al. 2012;Alcalá et al. 2014Alcalá et al. , 2017Manara et al. 2017). There is a significant spread both in the reported mean values in these studies (˘1 dex for the stellar mass of 0.1 M d , and somewhat less for a Solar-mass star) and within the observed samples in each study (up to 2 dex). However, the correlation with the stellar mass appears robust, and so we adopt the above typical values as mean values and vary the gas accretion by˘1 dex for each stellar mass. For the Solar-mass star we find that, for the upper end of the observed range of gas accretion rates ( 9 Á 10´8 M d yr´1) and a wide range of DZ , the existence of the pressure maximum is robust for a wide range of grain sizes. For the lower gas accretion rate ( 9 " 10´9 M d yr´1) and grain sizes larger than a few microns, the simple estimate described above suggests that accretion through an X-rayionized surface layer may perturb the disc structure. Therefore, we caution that a pressure maximum may not always occur for these parameters. For˚" 0.1 M d the portion of the parameter space within which the pressure maximum is always present is even smaller, since the observed mean gas accretion rate is two orders of magnitude lower. Note that the range of grain sizes for which the pressure maximum is unperturbed for 9 " 10´9 M d yr´1 around the lower-mass star is larger than for the Solar mass star, due to the lower-mass star having a lower (X-ray) luminosity. The radius of the pressure maximum as a function of the maximum dust grain size ( max ) shows a similar trend across the various values of the stellar mass, accretion rate and dead-zone viscosity: it weakly increases with increasing max for small grains, peaks at about max " 10´2 cm, and then steadily decreases for larger dust grains. The factors causing this have already been briefly discussed in section 4.1. First, recall that an increase in the disc opacity means that accretion heat can escape less easily, making the disc midplane hotter and pushing the pressure maximum outwards. In addition, for small dust grains, larger dust grain size leads to a moderate increase in the opacity (here, the relevant opacity is the opacity of the disc to its own radiation in the optically-thick regions, i.e., the Rosseland-mean opacity, see Fig. 1). 
At the same time, the increase in dust grain size reduces the critical temperature at which ionization fraction rises due to thermionic and ion emission (as the increase in dust grain size is equivalent to a reduction in the effective dust-to-gas ratio in our chemical network; see also Desch & Turner 2015), pushing the pressure maximum outwards. When these factors are compounded, for small grains, the radius of the pressure maximum increases with max . However, if the grains grow beyond max " 10´2 cm, dust opacities are severely decreased with increasing grain size and the net effect is a decrease in the radius of the pressure maximum. Additionally, note that the exact value of max at which the radius of the pressure bump peaks varies somewhat with 9 and DZ ; this can be expected, since both determine the gas surface density and thus the optical depth at disc midplane and the relative importance of the above two effects. It is also important to note that including heating due to stellar irradiation seems to somewhat modify the trend for small dust grains, as can be seen by comparing Fig 8 to the models discussed in the previous section. Specifically, Fig. 3 shows that for our fiducial disc and stellar param-eters, micron-size grains result in a pressure maximum at a larger radius. The resulting differences in the radial location of the pressure maximum are, however, small, and certainly for larger grains the pressure maximum moves inwards. DISCUSSION In this work we have investigated how the structure of the inner disc, accreting primarily through the MRI, changes with various disc, stellar and dust parameters. Of particular interest are the existence and the location of a local gas pressure maximum and a highlyturbulent region inwards of it, which could accumulate dust grains drifting in radially from the outer disc, possibly leading to the formation of planetary cores (Chatterjee & Tan 2014;Hu et al. 2018;Jankovic et al. 2019). The models presented in this work are steady-state models, each with a distribution of dust grains that is fixed throughout the disc. However, as discussed in Sections 6.1 and 6.2, these models provide us with important insights into how the inner disc could evolve as the dust grains grow, if and how the dust will accumulate, how this accumulation could feedback on the gas structure, and the disc parameters that are favourable for the formation of planetary cores. In Section 6.3 we discuss the various limitations of these models. Before we proceed, we re-iterate that dust is incredibly important for the inner disc structure. Thermionic and ion emission from dust grains and the adsorption of charges from the gas onto the dust grains control the ionization fraction, and thus the extent of the inner region where the MRI can drive efficient accretion. These processes yield an ionization fraction that increases sharply above a critical temperature, similar to thermal ionization. This critical temperature appears to be a slowly varying function of dust properties, however, and so it is the effect of the dust on the disc opacity that largely determines the location of the pressure maximum. Increasing the dust opacity results in a more optically-thick, hotter inner disc, with a radius of the pressure maximum at a larger distance from the star. 
Dust growth As discussed in Section 5, dust growth to max " 10´2 cm increases the extent of the high-viscosity inner region and the radius at which the pressure maximum is located, as an increase in dust grain size leads to a moderate increase in the disc opacity and a decrease in the threshold temperature at which thermionic and ion emission become efficient. Growth beyond that size has the opposite effect, as it leads to a significant decrease in the disc opacity, making the disc midplane colder, and thus less ionized. Therefore, in the inner disc, if dust grows larger than " 100 m sizes, the dead-zone inner edge moves inwards. Note that this is the opposite of what happens in the outer regions of protoplanetary discs. The outer regions are ionized primarily by the stellar X-rays and cosmic rays. These sources of ionization become more important further away from the star, as the disc column density decreases and high-temperature effects become unimportant. These regions are expected to be optically thin to their own radiation, and the primary source of heat is stellar irradiation. Therefore, the dust acts primarily to lower the ionization fraction by adsorbing free charges from the gas. Because of this, in the outer regions the dead zone is expected to shrink as the dust grains grow (Sano et al. 2000;Ilgner & Nelson 2006). We can calculate the location of the pressure maximum under an assumption that the maximum dust grain size has reached a growth limit. In the inner disc, dust growth is limited by collisional fragmentation of dust grains due to relative turbulent velocities (Birnstiel et al. 2010(Birnstiel et al. , 2012Drazkowska et al. 2016). Here, the relative grain velocities are induced by the MRI-driven turbulence, as well as the lower levels of turbulence assumed to persist in the MRI-dead zone. Since this growth limit depends on the velocities of dust grains due to turbulent velocities of the gas, it is given in terms of the particles "Stokes number" (St), the ratio between the particle gas drag stopping time and the eddy turnover time (where the eddy turnover time is taken to be 1{Ω Zhu et al. 2015). In a turbulent disc, typical collisional relative velocity between dust grains is given by 2 dd « 3 2 g St (for St < 1, Ormel & Cuzzi 2007), where g is the typical turbulent gas velocity (given by 2 g " 2 s ). There is a critical velocity frag above which a collision between dust grains results in their fragmentation, rather than sticking/growth. For silicate grains of similar size, frag " 1 m s´1 (Blum & Münch 1993;Beitz et al. 2011;Schräpler et al. 2012;Bukhari Syed et al. 2017, although note that grains might become more sticky at the high temperatures present in the inner disc (Demirci et al. 2019)). Since the Stokes number St is directly related to the grain size, and the collision velocity to St, fragmentation imposes an upper limit on dust growth. At the fragmentation limit (Birnstiel et al. 2009(Birnstiel et al. , 2012, The exact relationship between the Stokes number and the particle size depends on the relevant drag law (Weidenschilling 1977). Typically, the dust grains in protoplanetary discs are smaller than the mean free path of gas molecules, and therefore couple to the gas according to the Epstein drag law. However, due to the high densities in the inner disc, dust grains may enter the Stokes regime. 
Importantly, the above approximate expression for the turbulent relative velocity between dust grains ( dd ) has been derived under an assumption that St does not depend on the relative velocity between the dust grain and the gas, dg . This assumption is true for grains in the Epstein drag regime. In the Stokes regime, it is true only if the Reynolds number Re of the particle is less than unity. We always check that this condition is fulfilled for our particles in the Stokes regime, and that we may employ the above expression for dd . The Reynolds number of a particle itself depends on the velocity dg , for which we adopt another approximate expression, 2 dg " 2 g St{p1`Stq (Cuzzi & Hogan 2003, note that this expression was derived analytically for St ! 1, but also shown to be applicable for a wide range of St through a comparison with numerical simulations). To calculate the location of the pressure maximum in a disc in which grain growth is limited by fragmentation, we proceed as follows. At the location of the pressure maximum for various combinations of stellar mass, accretion rate, dead-zone viscosity parameter and maximum dust grain size (i.e., for every point in Fig. 8), we calculate the fragmentation limit for the particle Stokes number, St frag (assuming "¯), and the corresponding grain size, frag (for an appropriate drag law). For each combination of the disc and stellar parameters (stellar mass, accretion rate and dead-zone viscosity parameter), this yields a set of points describing a function frag p max q. We connect these points using linear interpolation, and find the maximum grain size such that frag p max q " max . Then, again using the results shown in Fig. 8 and linear interpolation, we find the radius of the pressure maximum at that value of max . Similarly, we find the corresponding midplane temperature and density. This calculation utilizes models in which the maximum dust grain size is assumed to be constant everywhere in the disc, and so the obtained solutions also formally correspond to models in which the maximum dust grain size is radially constant (and equal to the fragmentation limit frag at the pressure maximum). In a real disc, the fragmentation limit to which particles can grow would be a function of the turbulence levels and other parameters which vary as functions of radius. While this calculation does not take this radial variation of dust size into account, the fragmentation limit at the pressure maximum and the location of the pressure maximum would remain the same as in the solutions found here. In particular, note that radially inwards from the pressure maximum, frag should decrease compared to the value at the pressure maximum, as the turbulence parameter increases. Furthermore, for large maximum grain sizes max (which is the regime pertaining to the solutions found here), a decrease in max yields an increase in at a fixed radius. Thus, if we accounted for the decrease in max " frag inwards of the pressure maximum, this would only make the radial gradient of steeper inwards of the pressure maximum, but it would not change the location of its minimum, and therefore not the location of the pressure maximum obtained here. The results for the radius of the pressure maximum and the grain size are shown in Fig. 9, as functions of the gas accretion rate, for different values of stellar mass and the dead-zone viscosity parameter. 
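As a concrete illustration of this procedure, the sketch below evaluates the fragmentation-limited Stokes number and grain size for a few values of the dead-zone viscosity, switching between the Epstein and Stokes drag laws. It assumes the standard Birnstiel-type limit St_frag ≈ v_frag²/(3 ᾱ c_s²) in place of the paper's eq. (7), and the midplane temperature, gas surface density and gas density are illustrative placeholders rather than values taken from the model.

```python
import numpy as np

# Fragmentation-limited grain size with an Epstein/Stokes drag switch.
# Assumed inputs: T, Sigma_g, rho_g are illustrative inner-disc midplane values.

kB, mH, mu = 1.381e-16, 1.673e-24, 2.34
sigma_mol = 2e-15          # cm^2, H2 collision cross-section
rho_s = 3.0                # g/cm^3, silicate internal density

def a_frag(alpha, T, Sigma_g, rho_g, v_frag=100.0):
    """Return (grain size [cm], St_frag, drag regime) at the fragmentation limit."""
    cs2 = kB * T / (mu * mH)
    St_frag = v_frag**2 / (3.0 * alpha * cs2)          # assumed Birnstiel-type limit
    lam = mu * mH / (sigma_mol * rho_g)                # gas mean free path
    a_ep = 2.0 * St_frag * Sigma_g / (np.pi * rho_s)   # Epstein: St = pi*a*rho_s/(2*Sigma)
    if a_ep < 9.0 * lam / 4.0:
        return a_ep, St_frag, "Epstein"
    # Stokes regime (Re < 1): St = 2*pi*a^2*rho_s / (9*Sigma*lam)
    a_st = np.sqrt(9.0 * St_frag * Sigma_g * lam / (2.0 * np.pi * rho_s))
    return a_st, St_frag, "Stokes"

for alpha in (1e-5, 1e-4, 1e-3):
    a, St, regime = a_frag(alpha, T=1000.0, Sigma_g=3e3, rho_g=1e-9)
    print(f"alpha_DZ={alpha:.0e}: a_frag={a:.2e} cm, St_frag={St:.1e} ({regime})")
```

The strong 1/α dependence of St_frag is what makes the grain size at the pressure maximum so sensitive to the dead-zone viscosity in Fig. 9.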
The maximum grain size at the pressure maximum is limited by turbulent fragmentation (middle panel) and thus sensitive to the dead-zone viscosity parameter α_DZ (results for α_DZ = 10^-5, 10^-4 and 10^-3 are shown by the solid, dashed and dotted lines, respectively). This is because, in these models, at the location of the pressure maximum, the vertically-averaged viscosity (and turbulence) parameter ᾱ = α_DZ. Remarkably, despite the sensitivity of the grain size to α_DZ, the radius of the pressure maximum depends very weakly on this parameter. This is a result of three different inter-dependencies. At a fixed maximum grain size (and other parameters), a lower α_DZ yields a larger radius of the pressure bump, and also a larger fragmentation-limited grain size. Concurrently, a larger maximum grain size yields a smaller radius of the pressure bump (for maximum dust grain size larger than about 10^-2 cm, see Fig. 8). Evidently, compounding these inter-dependencies results in a weakly-varying radius of the pressure bump. We can also compare these results against simple theoretical expectations, by coupling eq. (4) with the expressions for the fragmentation-limited dust grain size (obtained from eq. (7) and the appropriate drag laws). In the Epstein drag regime (pertaining to the solution for α_DZ = 10^-3 in Fig. 9), we obtain for the radius of the pressure bump r_max ∝ ε_dg^(4/15) Ṁ^(2/5) M_*^(1/3) ≈ ε_dg^0.27 Ṁ^0.4 M_*^0.33. These simple predictions agree with our finding that the dependency of the radius of the pressure bump on α_DZ is very weak, and in fact expected to be non-existent in the Epstein drag regime. The dependencies on the other parameters also seem to match fairly well. Small deviations are to be expected since in the numerical models the pressure bump does not occur precisely at a constant fixed temperature.

[Figure 9 caption (partial): ... at the pressure bump (bottom) as functions of the gas accretion rate, for a maximum dust grain size that corresponds to the grain growth limit due to turbulent fragmentation. Solid, dashed and dotted lines correspond to different values of the dead-zone viscosity parameter α_DZ as shown in the plot legend. The dust grain size and St/ᾱ are highly sensitive to the dead-zone viscosity parameter α_DZ. Conversely, varying α_DZ affects the location of the pressure bump only weakly. Blue lines show the results for a stellar mass M_* = 1 M_sun, orange lines for M_* = 0.1 M_sun. Empty symbols denote solutions where an alternative steady-state solution may exist that is ionized primarily by stellar X-rays and featuring no pressure maximum, same as in Fig. 8. There is no solution for M_* = 0.1 M_sun and Ṁ = 10^-11 M_sun yr^-1. See Sections 6.1 and 6.2.]

Furthermore, for the Solar-mass star, the pressure maximum is located at radii between ~0.3 AU and ~2 AU in the upper end of the range of the observationally-motivated gas accretion rates considered here. However, for the accretion rate of Ṁ = 10^-9 M_sun yr^-1 the pressure maximum may be perturbed and possibly removed from the inner disc altogether. As discussed in Section 5, for lower gas accretion rates and larger dust grain sizes, we find that the disc could assume a high-viscosity X-ray-dominated steady-state solution which features no pressure maximum (in Fig. 9, this is indicated by empty symbols). For a lower-mass star, M_* = 0.1 M_sun, at a fixed gas accretion rate, dust grains grow to similar sizes as for the Solar-mass star.
Overall, the radius of the pressure maximum is expected to be smaller, due to the lower viscous dissipation rate and the lower observed gas accretion rates (see the discussions in sections 4.2 and 5). Analogously to the case of the Solar-mass star, there might not be a pressure maximum at lower accretion rates. Dust accumulation For dust grains to become trapped within the pressure maximum, the outwards radial drift velocity of dust grains just inwards of the pressure maximum should be higher than the velocity with which the accreting gas advects the grains inwards. The ratio of the radial drift and the gas advection velocities is roughly equal to the ratio of the particle Stokes number and the viscous (Jacquet et al. 2012). Therefore, for the particle radial drift to overcome advection with the gas inwards of the pressure maximum, it is required that St{ ą 1. To check whether this condition is fulfilled, in the bottom panel of Fig. 9 we show the ratio between the Stokes number at the fragmentation limit St frag and the vertically averaged viscosity and turbulence parameter¯at the location of the pressure maximum. The ratio St frag {¯is most sensitive to the value of the dead-zone viscosity parameter DZ . The Stokes number at the fragmentation limit St frag is given by eq. (7). At a fixed critical fragmentation velocity frag , it is a function only of the viscosity parameterā nd the speed of sound s (i.e., the temperature) at the pressure maximum. Since these solutions are extracted from models which neglect ionization by stellar X-rays,¯" DZ , the value of the deadzone viscosity parameter. Hence, St frag {¯9 1{ 2 DZ . This ratio also varies slightly with gas accretion rate, which we can attribute to the fact that the pressure maximum does not occur precisely at a fixed critical temperature. Instead, the ionization fraction, driven primarily by thermionic and ion emission, also depends on the dust properties and the disc density. Overall, whether the dust grains become trapped in the pressure maximum depends on an assumed value of DZ . We may consider how the disc might evolve forward, depending on this value. First, if the dead-zone viscosity parameter is low, DZ " 10´5, St frag {¯" 1. In this case, dust grains could readily accumulate at the pressure maximum. This accumulation might lead to an unstable configuration, as an increase in the dust-to-gas ratio leads to an increase in¯at a given radius (see Fig. 3). As the dust-to-gas ratio would increase at the pressure maximum, and decrease either side, this could lead to an emergence of an additional minimum in¯outwards from the original minimum, and thus to a formation of an additional pressure trap. Time-dependent simulations are needed to examine further evolution of the disc. In general, if the pressure maximum efficiently traps the radially-drifting dust, one observable consequence would be that the gas accreting onto the star would be depleted in elements of which the dust is composed: such a mechanism has been proposed to be at work in the inner disc of TW Hya, where the levels of depletion of refractory and volatile elements suggest a dust trap inside of the water ice line (McClure et al. 2020), and also in transition discs surrounding Herbig Ae/Be stars (Kama et al. 2015). Second, consider a case in which St frag {¯" 1, which would occur if the dead-zone viscosity parameter is slightly above the middle of the plausible range, DZ " 10´4. In this case, considered by Jankovic et al. 
(2019), the gas pressure maximum does not trap large amounts of dust. This is because¯increases inwards of the pressure maximum, and St frag decreases. This limits the radial width of the pressure trap. Dust advection with the accreting gas is not sufficient to remove the dust grains from the trap; however, the grains are also mixed radially by the turbulence. The dust-to-gas ratio at the pressure maximum is then limited by the corresponding radial diffusion term. Nevertheless, dust would still accumulate in the entire region interior to the pressure maximum, as in the highlyturbulent innermost region dust grains become small enough to couple to the gas, reducing the radial drift relative to the outer disc. However, in this case, the amount of dust that can be accumulated in the inner disc is limited by the flux of dust drifting inwards from the outer disc. Moreover, the small size of the dust grains is detrimental to planet formation. In this work, we also find that a higher dust-to-gas ratio yields a larger extent of the high-viscosity inner region (see Section 4.1.2). This is highly beneficial for planet formation in the inner disc, as it implies that the accumulation of dust is not only sustainable, but also leads to a radial expansion of the high-viscosity, high-turbulence region inside of which the dust accumulates. In particular, this expansion is beneficial for the growth of the small fragmentation-limited dust grains into larger, more rigid solid bodies, i.e., into planetesimals. Specifically, planetesimals may form out of dust grains through a combination of the streaming instabilities (SI) and the gravitational instability (Youdin & Goodman 2005;Johansen & Youdin 2007;Bai & Stone 2010;Johansen et al. 2012;Simon et al. 2016Simon et al. , 2017Schäfer et al. 2017). Under certain conditions, the SI leads to localized concentrations of dust grains susceptible to gravitational collapse. Jankovic et al. (2019) pointed out that this process is unlikely in the inner disc if the pressure (and the density) maximum is located at very short orbital distances, as too close to the star the tidal effect of the star prevents the gravitational collapse. Therefore, the shift of the pressure maximum to larger orbital distances due to the accumulation of dust could potentially help to overcome this barrier and form planetesimals. Finally, if the dead-zone viscosity parameter is high, DZ " 10´3, St frag {¯! 1. In this case, dust grains are so small and well-coupled to the gas that they are advected through the pressure maximum inwards. Dust may still accumulate interior to the pressure maximum, as a consequence of fragmentation in the innermost regions, as noted above. However, it is unlikely that this could lead to the formation of larger solid bodies. While the exact value of the dead-zone viscosity is unimportant for the location of the pressure maximum (see Fig. 9 and Section 6.1), in this case the grains would likely be too small, too well-coupled to the gas to start the streaming instabilities (Carrera et al. 2015;Yang et al. 2017). Limitations In this work, we have assumed that the MRI-accreting inner disc is in an equilibrium, steady state. However, there are several processes whose further study requires to consider the time dependence of the disc structure. First, even if variations in the dust properties are neglected, the steady-state MRI-accreting inner disc is unstable to surface density perturbations . 
At a fixed radius, the MRIdriven accretion rate decreases with an increasing surface density (as calculated using the steady-state models). Therefore, a perturbation in the disc surface density might lead out of the steady-state, creating a pile up of mass in a certain region. This so-called viscous instability (Lightman & Eardley 1974;Pringle 1981) could have important consequences for the inner disc structure and planet formation, as it is likely to produce rings and gaps on the viscous timescale. Second, the results presented in this work show that the accumulation of dust in the inner disc is possibly a highly dynamical process. The dust grain size and the dust-to-gas ratio strongly affect the MRI-driven viscosity at a given radius, and, concurrently, the extent of the innermost region within which the dust may accumulate. The evolution of the disc is particularly unclear if the dust is trapped in the pressure maximum, e.g., whether further evolution of gas and dust might lead to a modified equilibrium state, or creation of multiple pressure maxima. Third, while we considered the effects of dust on the disc thermal and chemical structure, we did not account for the dynamical effects. If dust grains drift radially due to gas drag, there is also a back-reaction on the gas (Nakagawa et al. 1986). This becomes increasingly important at high dust-to-gas ratios. In particular, drag back-reaction acts to flatten the radial gas pressure profile, and so it would affect accumulation of dust in the pressure maximum (Taki et al. 2016). Further study of the early stages of planet formation in the inner disc should account for the self-consistent, time-dependent evolution of the gas and the dust. Fourth, it was shown that for some disc parameters dust growth could lead to steady-state solutions in which the disc is primarily ionized by stellar X-rays and no pressure maximum would exist. This deserves further study through time-dependent simulations, and also using a more detailed chemical network than considered here. In the regime in which the high-temperature effects are unimportant, at low dust-to-gas ratios (equivalent to larger grain sizes) simplified chemical networks (such as the one used here) overestimate the disc ionization fraction, and the MRI-driven accretion efficiency, compared to the more complex networks (Ilgner & Nelson 2006). The stellar X-ray luminosity may also need to be more carefully considered, as the stellar parameters used here are adopted from very early times in the stellar evolution models, while low gas accretion rates (for which the X-ray-dominated solutions appear) are observed at later stages of protoplanetary disc evolution. Lastly, propagation of the X-rays may need to be treated more accurately at short periods, and the penetration of X-rays from the "bottom" side of the disc should be taken into account when X-rays can reach the disc midplane. Furthermore, we showed that the inner disc structure is sensitive to the disc opacities. Yet the radiative properties of the disc are determined only by silicate dust grains in this work. Other important species (e.g. carbonaceous grains; Pollack et al. 1994) could condense in some of the colder regions of the disc (e.g. near the disc photosphere -below the upper layers heated by stellar irradiation and above the hot, optically-thick disc midplane), and their contribution to the opacities could alter the details of the location of the pressure maximum. Opacities due to atomic and molecular lines have also been neglected. 
In the optically-thick regions (such as the disc midplane in our models), this is a good assumption since the Rosseland-mean opacity of the gas is always negligible compared to that of the dust where the pressure maximum is located. However, in the optically-thin regions (in the disc upper layers), Planck-mean opacity of the gas at high temperatures is comparable to that of micron-sized dust, and the absorption coefficientP greatly exceeds that of cm-sized dust (e.g. Malygin et al. 2014). Since the gas accretes primarily through the dense, optically-thick regions around the disc midplane, we can expect that including the gas opacities would not change our results, as the higher absorption of stellar light would only increase the temperature in the uppermost disc layers. Nevertheless, note that the gas opacities are strongly non-monotonic, and the upper disc layers might not be in thermal equilibrium (Malygin et al. 2014), which is assumed to be the case here. Another limitation of this work in modelling of the disc upper layers is that the possibility of shadowing neglected. As in other 1+1D models (e.g. Chiang & Goldreich 1997;D'Alessio et al. 1998), when heating by stellar irradiation is considered, there is an underlying assumption that the disc is flaring, so that stellar rays can reach the disc surface at all radii. However, Terquem (2008) showed that the inner boundary of the MRI-dead zone may be puffed up sufficiently to throw a shadow over the outer regions of the disc. In our models, we can look for this possibility by inspecting the irradiated surface of the disc arising from the integration of the disc structure in the vertical direction (as opposed to the adopted irradiated surface, constructed by ray-tracing in two dimensions under the assumption that the disc is flaring, see Paper I; D' Alessio et al. 1999). Indeed, we find that, in the vicinity of the pressure maximum there is an inconsistency between the two surfaces, with the former surface implying that the region immediately outwards from the pressure maximum should be in a shadow. Nonetheless, we do not expect that correcting for the shadowing would change any of the conclusions of this work due to the same reason as above: temperature at the disc midplane is primarily determined by viscous dissipation, and not by stellar irradiation. Furthermore, in this work we have assumed that the disc accretes viscously, via the MRI, and that the resulting viscosity is well described by criteria extracted from local magnetohydrodynamic simulations. Accretion in protoplanetary discs may also be driven by non-viscous processes, e.g. by large scale laminar flows if Hall effect is the dominant non-ideal MHD effect (Lesur et al. 2014), or, in the presence of a magnetic field threading the disc, by magnetic winds (Suzuki & Inutsuka 2009;Suzuki et al. 2010;Bai & Stone 2013;Fromang et al. 2013;Lesur et al. 2013). It is likely that both the Hall effect and magnetic winds play a significant role in the overall evolution of protoplanetary discs, driving gas accretion at a much larger range of radii than the MRI (e.g. Bai 2017). Magnetic winds in particular could be shaping the inner disc structure along with the MRI (Suzuki et al. 2016). Nevertheless, the structure of the innermost regions of discs is still likely to be strongly affected by the MRI, and especially so the disc midplane where planets are expected to form. 
Therefore, while we do not consider non-viscous drivers of accretion in this work, the models presented here should still offer important insights for future work on the inner disc structure. Finally, in this work we have also assumed that where the disc is not sufficiently ionized to drive the MRI, the viscosity parameter obtains a minimum, floor value ( DZ ). The value of this parameter determines whether dust will accumulate at the inner edge of the dead zone (i.e., at the pressure maximum; see Section 6.2). An assumption of a fixed, non-zero DZ in the MRI-dead zone is reasonable if such a viscosity can be driven by non-magnetic instabilities. The range of values explored in this paper covers the values observed in simulations of various hydrodynamic instabilities (e.g. Lesur & Papaloizou 2010;Nelson et al. 2013;Stoll & Kley 2014). However, outwards from the pressure maximum accretion may also be driven by propagation of waves from the adjacent MRI-active zone, and/or those outer regions may be heated by radial transport of heat from the MRI-active zone (e.g. Latter & Balbus 2012;Faure et al. 2014). This would be contrary to our assumptions that the outer regions are heated by an uncorrelated source of viscosity and that the disc structure at different radii is uncorrelated except for the heating by stellar irradiation. In this case, the physics setting the location of the pressure maximum is more complex and the pressure maximum would likely occur at shorter orbital distances than predicted here. Ultimately though, at larger radii non-viscous drivers of accretion discussed above are likely to take over the evolution of the disc. If these are relevant at the outer edge of the MRI-active zone, the pressure maximum may still exist, provided that the gas still accretes faster in the inner region than in the outer. In this case, the disc midplane could be non-turbulent, allowing grain growth beyond the sizes predicted in Section 6.1 and promoting accumulation of grains at the pressure maximum. Lastly, the inner edge of the MRI-dead zone is possibly unstable to formation of vortices (Lyra & Mac Low 2012;Faure et al. 2014), which would invalidate our assumption of an azimuthally symmetric disc, but also possibly further promote accumulation of dust and planet formation. SUMMARY We have explored how the structure of the MRI-accreting inner regions of protoplanetary discs changes as a function of the dust-togas ratio, dust grain size, and other disc and stellar parameters. We have especially focused on the location of the gas pressure maximum arising at the boundary between the highly-viscous innermost region and the low-viscosity outer region. The existence and the location of the pressure maximum, and the disc structure in its vicinity, are key to the formation of the super-Earths inside the water ice line. At fixed dust parameters, the radius of the pressure maximum is directly related to the stellar mass˚and the gas accretion rate 9 . This is because the stellar mass and the accretion rate determine the total viscous dissipation at a given radius, and thus the temperature and the ionization fraction at disc midplane. The radius of the pressure maximum is inversely related to the assumed viscosity parameter in the MRI-dead zone DZ . The location of the pressure maximum corresponds to a minimum in the viscosity parameter. 
Even though in our model there is an MRI-active layer at all radii, in the outer regions this is a (X-ray ionized) layer high above the disc midplane (in the vicinity of the pressure maximum), and the disc primarily accretes through the dense MRI-dead regions around the midplane. Therefore, the minimum viscosity parameter is close in value to the dead-zone DZ . However, this picture of the highly-viscous innermost region and the low-viscosity outer region may change qualitatively for some disc and stellar parameters. Specifically, for low gas accretion rates (ď 10´9 M d yr´1), we find that a steady-state solution could exist in which there is no gas pressure maximum. In these solutions the disc features low gas surface densities and high viscosity driven by X-ray ionization. Such solutions are more likely to exist for larger grain sizes (and, equivalently, at lower dust-to-gas ratios). At fixed stellar and disc parameters, as long as the pressure maximum does exist, its location moves radially outwards as the dust grains grow to max " 10´2 cm. Grain growth to still larger sizes results in the pressure maximum moving inwards, towards the star. This behaviour is primarily driven by the effects of dust opacities on the disc thermal structure. We calculate the location of the pressure maximum for the case of dust growth being limited by turbulent fragmentation. For a Solarmass star and gas accretion rates in the range 10´9´10´7 M d yr´1, this always places the pressure maximum outwards of 0.1 AU. In this fragmentation-limited regime, the radius of the pressure maximum depends very weakly on the dead-zone viscosity parameter, and it is most sensitive to the gas accretion rate. The pressure maximum may possibly not exist for a Solar-mass star and a gas accretion rate of ď 10´9 M d yr´1, nor for a stellar mass of 0.1 M d for gas accretion rates ď 10´1 0 M d yr´1, if the disc evolves into the high-viscosity X-ray ionized structure as soon as such structure can match the required disc accretion rate. This suggests that planet formation in the inner disc is more likely early in the disc lifetime. The fragmentation-limited dust grain size and its Stokes number are most sensitive to the value of the viscosity (and turbulence) parameter at the pressure maximum. As noted above, this roughly equals the assumed value of the viscosity parameter in the MRIdead zone ( DZ ). Therefore, whether the dust grains can become trapped in the pressure maximum is determined by this uncertain parameter. Dust trapping is likely for the lower end of plausible values ( DZ " 10´5) and will not happen for the higher end ( DZ " 10´3). Importantly, the pressure maximum does not move inwards for higher dust-to-gas ratios. That is, dust accumulation near the pressure maximum (and/or inwards of it) should result in an expansion of the dust-enriched region and/or dynamical evolution of the disc structure. However, time-dependent simulations are needed to further study the potential outcomes, and the viability of planetesimal formation in the inner disc.
2021-08-30T01:15:26.730Z
2021-08-27T00:00:00.000
{ "year": 2021, "sha1": "06467e1004155aebf883058527acf19c1be7d1f6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2108.12332", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "292668f736fb0e8879594de9731ae9c51f4adbbc", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
257705877
pes2o/s2orc
v3-fos-license
Frequency of Modification of Pharmacological Treatment Is Equivalent for Virtual and In-Person Psychiatric Visits Background: During the coronavirus pandemic there was a rapid adoption of telehealth services in psychiatry, which now accounts for 40% of all visits. There is a dearth of information about the relative efficacy of virtual and in-person psychiatric evaluations. Methods: We examined the rate of medication changes during virtual and in-person visits as a proxy for the equivalence of clinical decision-making. Results: A total of 280 visits among 173 patients were evaluated. The majority of these visits were telehealth (224, 80%). There were 96 medication changes among the telehealth visits (42.8%) and 21 among the in-person visits (37.5%) (z = −1.4, p = 0.16). Conclusion: Clinicians were equally likely to order a medication change if they saw their patient virtually or in person. This suggests that remote assessments yielded similar conclusions to in-person assessments. Introduction In December 2019, visitors to the Huanan and Yangchahu markets in Wuhan, and raccoon dogs being sold there, developed an illness that was subsequently determined to be caused by the severe acute respiratory syndrome-related coronavirus. 1 In the spring of 2020, the World Health Organization declared that the coronavirus disease 2019 (CoViD-19) outbreak was a pandemic. 2 As the rate of CoViD-19 infections, severe illnesses, and deaths increased, many countries, including the United States, began declaring shutdowns and lockdowns. 3 These shutdowns included nonemergency medical services. At that time only a fraction of states allowed parity of telehealth and in-person visits, 3 and its use was predominantly limited to rural settings and about 8% of medical provider visits. 4 By the spring of 2021, about one quarter of Americans were receiving telehealth services, 5 a 200% increase. Even before the CoViD-19 lockdown, there had been a call to digitalize psychiatric outpatient care. 6 Nonetheless, psychiatrists had been concerned that use of virtual settings would lead to deterioration of the doctor-patient relationship, which many believe is central to the practice of psychiatry. [7][8][9] Pre-CoViD-19 studies comparing in-person care with telepsychiatry generally showed that remote treatment was comparable or superior to in-person treatment. 10 Patient satisfaction was generally good, 10 but some patients felt a lesser connection to their clinician. 11 Provider satisfaction is more mixed, with concerns about quality of services, patient perceptions, and technological issues. 10 Throughout CoViD-19 restrictions, clinician satisfaction with telepsychiatry affected adoption rates when an option was available. 12,13 However, participants in pre-CoViD-19 studies chose to be involved in a telehealth study. Patients and clinicians through the CoViD-19 shutdowns had telehealth "forced" upon them. That difference is a potentially important variable. We performed a quality assurance study examining the objective outcome of medication intervention comparing individuals who attended the same clinic remotely or in person. The study was performed as the shutdown restrictions were slowly being lifted in 2021 and some patients were transitioning back to in-person visits. Methods The study was a quality assessment of visits at the University of Louisville outpatient psychiatric service. With the switch to virtual visits we needed to examine a quality measure to ensure that we were providing adequate care to our patients.
We examined the rate of medication changes during virtual and in-person visits as a proxy for the equivalence of clinical decision-making because this method was utilized previously. 14 All patients were ≥18 years old. We identified times when patients came to two consecutive visits with the same provider and the same primary diagnosis and evaluated treatment changes on the second visit. We documented when there was a change in prescribed medication or dosage but excluded "as needed" (PRN or Pro Re Nata) medications. A test evaluating differences in proportions was used. 15 Because this was a quality assessment study, it did not require evaluation or approval by the Human Subjects Protections Program of the University of Louisville. Results The study period spanned all of 2021. A total of 280 visits among 173 patients were evaluated (Table 1). There were some 20 providers in the clinic, all of whom did both in-person and virtual visits; we corrected for this by only examining patients who came to the same provider at least twice in a row. The majority of these visits were telehealth (224, 80%) (Table 1). There were 115 women (66.5%), 57 men (33%), and 1 unspecified gender. The age range was 19-89 years (median age 43 years; mean age 45.4 years; interquartile range 24 years); the majority of patients (98, or 56.6%) were under the age of 50 years. Discussion We performed this study because there is a dearth of information regarding the relative efficacy of telehealth visits when patients do not choose that modality. We examined medication change as a proxy for clinical decision-making by psychiatrists in an academic outpatient clinic. A previous study had suggested that this was a reasonable measure of clinical decision-making. 14 In our quality assurance examination, clinicians were equally likely to act on their collected clinical data by ordering a medication change if they saw their patient virtually or in person. This suggests that remote assessments yielded similar conclusions to in-person assessments, even in patients who did not voluntarily participate in a telehealth study. It is important to note that this study does not address the quality of care provided or health care outcomes. We did not measure patient satisfaction and did not collect any correlates of clinical outcomes. We utilized medication change as a measure because it is driven by patient complaints or clinician observations during patients' presentations. 16 Thus, it reflected the quality of the transfer of information during the session. The lack of significant difference suggests that the quality of information transfer is equivalent, or nearly so, in the two forms of evaluation. Telepsychiatry has grown considerably. During the height of the CoViD-19 restrictions, 40% of all outpatient visits were provided virtually by mental health providers, compared with only 11% by other providers. 17 Even as CoViD-19 restrictions have been largely removed, telepsychiatry continues to be highly utilized, accounting for 36% of all outpatient visits and 39% of all telehealth services. 17 There are clear limitations to our study design. As noted earlier, we did not examine the actual quality of outcome. Furthermore, this was not a randomized study. Nearly all patients seen virtually did not have a choice of how the evaluation would be done, but all patients seen in person chose that format.
Furthermore, it is possible that patients who would not do well with virtual visits simply dropped out of treatment during CoViD-19 restrictions and were not studied in our population. Telepsychiatry will continue to be a growing presence in future health care. Additional studies need to be done to confirm the equivalence of remote versus in-person outcomes. Nonetheless, the early results of our study and the current literature suggest that using telehealth for mental health conditions is a reasonable option.
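To make the statistical comparison above concrete, here is a minimal sketch of a two-sample test for a difference in proportions applied to the reported counts (96/224 telehealth vs. 21/56 in-person medication changes). The paper cites a standard proportions test but does not detail the implementation, so the pooled-variance version below is an assumption; depending on the variance formulation or any continuity correction, the computed statistic can differ from the reported z = −1.4.

```python
from math import sqrt
from scipy.stats import norm

# Reported counts: medication changes / total visits
changes = [96, 21]      # telehealth, in-person
visits = [224, 56]

p1, p2 = changes[0] / visits[0], changes[1] / visits[1]

# Pooled proportion under the null hypothesis of equal change rates
p_pool = sum(changes) / sum(visits)
se = sqrt(p_pool * (1 - p_pool) * (1 / visits[0] + 1 / visits[1]))

z = (p2 - p1) / se             # direction: in-person minus telehealth
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"telehealth {p1:.1%}, in-person {p2:.1%}, z = {z:.2f}, p = {p_value:.2f}")
```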
Genome-Wide Screen of the Hippocampus in Aged Rats Identifies Mitochondria, Metabolism and Aging Processes Implicated in Sevoflurane Anesthesia

Previous studies have shown multiple mechanisms and pathophysiological changes after anesthesia, and genome-wide studies have been implemented in studies of brain aging and neurodegenerative diseases. However, the genome-wide gene expression patterns and modulation networks after general anesthesia remain to be elucidated. Therefore, whole transcriptome microarray analysis was used to explore coding gene expression patterns in the hippocampus of aged rats after sevoflurane anesthesia. Six hundred and thirty-one upregulated and 183 downregulated genes were screened out, and 44 enriched terms of biological process, 16 of molecular function and 18 of cellular component were identified by Gene Ontology (GO) and KEGG analysis. Among them, oxidative stress, metabolism, aging, and neurodegeneration were the most enriched biological processes and changed functions. Thus, genes involved in these processes were selected for qPCR verification, and good consistency was confirmed. The potential signaling pathways were further constructed, including mitochondrion- and oxidative stress-related Hifs-Prkcd-Akt-Nfe2l2-Sod1 signaling, multiple metabolism signaling (Scd2, Scap-Hmgcs2, Aldh18a1-Glul and Igf1r), as well as aging- and neurodegeneration-related signaling (Spidr-Ercc4-Cdkn1a-Pmaip1 and Map1lc3b). These results provide potential therapeutic gene targets for brain function modulation and the memory formation process after inhaled anesthesia in the elderly, which could be valuable for preventing postoperative brain disorders and diseases, such as perioperative neurocognitive disorders (PND), at the genetic level in the future.

INTRODUCTION

Neurodegenerative conditions, including Alzheimer's disease (AD), are important contributors to brain aging and the associated cognitive dysfunction and dementia (Wyss-Coray, 2016). In the pathogenesis of stroke, conventional risk factors explain only a small proportion of cases, and evidence from twin and family history studies suggests that genetic predisposition is important (Dichgans, 2007). In common with many other complex diseases, in which environmental risk factors are thought to interact with multiple genes, the identification of the underlying molecular mechanisms and genes contributing to degenerative brain diseases is valuable and challenging. Candidate gene studies have produced few replicable associations (Dichgans and Markus, 2005). More recently, genome-wide association studies, using microarray platforms, have allowed a deeper understanding of the molecular factors involved in the pathophysiology of degenerative brain disease. These studies identified multiple susceptibility loci for neurodegeneration (Harold et al., 2009; Lambert et al., 2009; Seshadri et al., 2010; Hollingworth et al., 2011), and these genes clustered into pathways including inflammation and immune response, lipid metabolism, and endocytosis/intracellular trafficking (Kunkle et al., 2019). Studies have also found that oxidative stress is associated with neurodegenerative disorders (Coyle and Puttfarcken, 1993). Oxidative stress (Chamorro et al., 2016), lipid metabolism, blood circulation (Ji et al., 2017), multi-organism processes, protein catabolic metabolism (Cui et al., 2018), and autophagy (Menzies et al., 2015) are among the major shared genetic etiologies of stroke.
Nevertheless, there are few studies of the gene networks and pathophysiology underlying brain function modulation in the perioperative context. Sixty-six million patients over 65 years of age worldwide undergo surgery each year, including 8.5 million AD patients (Xie and Xu, 2013). Up to 40% of these patients suffer from perioperative neurocognitive disorders (PND), which include postoperative cognitive dysfunction, postoperative delirium, et cetera (Monk et al., 2008; Evered et al., 2018). Anesthesia, surgical trauma, aging, as well as preoperative cognitive impairment contribute to the onset of PND (Monk et al., 2008; Schenning et al., 2016; Racine et al., 2018). Meanwhile, neuroinflammation, mitochondrial dysfunction and oxidative stress (Fischer and Maier, 2015), DNA damage and apoptosis (Madabhushi et al., 2014), synaptic plasticity dysfunction (Li X. M. et al., 2014), and amyloid plaques and neurofibrillary tangles could be contributing pathological factors. Our and related studies have indicated that inhaled general anesthesia plays a major role in PND (Xu et al., 2014; Ni et al., 2015); however, the gene expression patterns and modulation networks during general anesthesia remain to be elucidated. Therefore, we used a genome-wide screen to explore the gene expression patterns in the hippocampus of aged rats after sevoflurane anesthesia. We established functional annotation of differentially expressed genes, modulation networks, and potential signaling pathways during this process, to provide insights into the overall mechanisms of inhaled anesthesia and the related brain function modulation and memory formation.

Animals

Male Sprague-Dawley rats, 18 months old and weighing 550-600 g, were used in this study. Before sevoflurane exposure, the rats were maintained under standard housing conditions with food and water ad libitum for 2 weeks.

Rat Anesthesia

The animal protocol was approved by the Peking University biomedical ethics committee experimental animal ethics branch (No. LA2018085). The rats were randomly assigned to control and sevoflurane groups. The minimum alveolar concentration (MAC) of sevoflurane for aged rats has been reported as 2.4-2.7% (Li X. Q. et al., 2014). In the present study, rats in the sevoflurane group received 2.5% sevoflurane in 100% oxygen for 4 h in an anesthetizing chamber, whereas the control group received 100% oxygen for 4 h in an identical chamber. The rats breathed spontaneously, and the anesthetic and oxygen concentrations were monitored continuously (Datex, Tewksbury, MA, USA). The temperature of the anesthetizing chamber was controlled to maintain the rectal temperature of the animals at 37 ± 0.5 °C. Four hours of sevoflurane anesthesia did not significantly alter blood pressure or blood gas values in our preliminary experiments. After the termination of sevoflurane anesthesia, rats were placed in a chamber containing 100% oxygen until they regained consciousness about 20 min later. The rats were sacrificed by decapitation at the end of the experiments. The brain tissues were removed, and the hippocampus was dissected out and frozen in liquid nitrogen for the subsequent experiments.

RNA Extraction and Quantification

Total RNA was isolated from the hippocampus using TRIzol reagent (Invitrogen, Carlsbad, CA, USA), then digested with RNase-free DNase to remove residual DNA.
RNA concentrations were analyzed using the NanoDrop 2000 (Thermo Fisher Scientific), and total RNA (2 µg) was reverse-transcribed using the GoScript™ Reverse Transcription System (Promega, Madison, WI, USA).

Affymetrix Whole Transcriptome Microarray Analysis and Functional Annotation

Whole transcriptome microarray analysis was performed using the GeneChip™ Rat Transcriptome Array 1.0 (Affymetrix, Santa Clara, CA, USA), and the resulting data were deposited in NCBI under GEO accession code GSE141242. Briefly, isolated RNA (100 ng) was mixed with 1.5 µl of Poly-A RNA control solution and subjected to reverse transcription. The obtained cDNA was used for in vitro transcription to prepare antisense RNA (aRNA) by incubation at 40 °C for 16 h. The aRNA was then used for a second round of sense cDNA synthesis with the WT Expression kit (Ambion, Austin, TX, USA). The obtained cDNA was used for biotin labeling and fragmentation with the Affymetrix GeneChip WT Terminal Labeling and Hybridization kit. Biotin-labeled fragments of cDNA (5.5 µg) were hybridized to the Affymetrix Rat Transcriptome Array Strip (45 °C for 24 h), with up to 25 unique probe sequences hybridizing to a single transcript. Following hybridization, each array strip was washed and stained using the Fluidics Station of the GeneChip Scanner 3000 7G system (Affymetrix, Santa Clara, CA, USA). The array strips were scanned using the Imaging Station of the GeneChip Scanner 3000 7G system. Gene Ontology (GO) functional annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed for the DEGs using the Database for Annotation, Visualization, and Integrated Discovery (DAVID). GO enrichment analysis covers three categories: biological process, molecular function, and cellular component.

Quantitative Real-Time PCR (qPCR)

The significance of gene expression changes was quantified as −log10(p-value); a higher −log10(p-value) indicates a more significant change. We selected the top differentially expressed genes for qPCR verification. qPCR was performed on the CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). The amplification mixture consisted of PowerUp™ SYBR Green master mix (Thermo Fisher Scientific), 10 µM forward and reverse primers (Invitrogen, Carlsbad, CA, USA) and approximately 1.5 µl of cDNA template. Primer sequences were obtained from the literature and checked for specificity through in silico PCR. The forward and reverse primers are shown in Table 1. Amplification was carried out in a 10 µl reaction volume with an initial denaturation step at 95 °C for 2 min, followed by 45 cycles of 95 °C for 10 s, 55 °C for 30 s and 60 °C for 30 s, then 65 °C for 2 min. All reactions were run in duplicate and the results were averaged from six independent studies. qPCR data were quantified in two steps. First, β-actin levels were used to normalize target gene levels: ΔCt (delta cycle threshold) = Ct(target gene) − Ct(β-actin), and target gene level = 2^(−ΔCt). Second, the target gene levels of the sevoflurane group were expressed as a percentage of those of the control group, with 100% corresponding to the control level.
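A minimal sketch of the two-step relative quantification described above follows. The Ct values and per-group sample handling below are hypothetical placeholders for illustration, not data from the paper.

```python
import numpy as np

def relative_expression(ct_target, ct_actin):
    """Step 1: normalize to beta-actin and convert with 2^(-dCt)."""
    dct = np.asarray(ct_target) - np.asarray(ct_actin)
    return 2.0 ** (-dct)

# Hypothetical Ct values (n = 6 rats per group; duplicates already averaged)
ct_target_control = [26.1, 26.3, 26.0, 26.2, 26.4, 26.1]
ct_actin_control = [18.0, 18.1, 17.9, 18.0, 18.2, 18.0]
ct_target_sevo = [25.3, 25.5, 25.2, 25.6, 25.4, 25.3]
ct_actin_sevo = [18.0, 18.1, 18.0, 17.9, 18.1, 18.0]

expr_control = relative_expression(ct_target_control, ct_actin_control)
expr_sevo = relative_expression(ct_target_sevo, ct_actin_sevo)

# Step 2: express both groups as a percentage of the mean control level
control_mean = expr_control.mean()
pct_control = 100 * expr_control / control_mean
pct_sevo = 100 * expr_sevo / control_mean

print(f"control: {pct_control.mean():.1f} +/- {pct_control.std(ddof=1):.1f} %")
print(f"sevoflurane: {pct_sevo.mean():.1f} +/- {pct_sevo.std(ddof=1):.1f} %")
```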
Fear Conditioning Test (FCT)

The FCT (Xeye CPP, Beijing MacroAmbition S&T Development, Beijing, China) was used to assess the cognitive function of rats after sevoflurane anesthesia, as described in previous studies (Dong and Li, 2014; Cheng et al., 2015) with modification. The FCT consisted of a training session 3 h after sevoflurane anesthesia and evaluations at 2 and 7 days after anesthesia. In the training session, rats were placed in the context chamber to acclimate for 180 s; they then received a 2 Hz pulsating tone (80 dB, 3,600 Hz) for 60 s that co-terminated with a mild foot shock (0.8 mA, 0.5 s). In the evaluations, hippocampus-dependent memory was assessed by the freezing time during exposure to a novel context test (performed in the same chamber but with no cues or shock), while hippocampus-independent memory was assessed by the freezing time during exposure to the tone stimulus (performed in an alternative context with no shock; Chowdhury et al., 2005).

Statistical Analysis

Statistical analysis was performed with GraphPad Prism 7.0 software. Quantitative data are presented as the mean ± SD. The non-paired two-tailed Student's t-test was used to determine significant differences between two groups. One-way ANOVA with Bonferroni's multiple comparison test was used to analyze significant differences between multiple groups; p < 0.05 was considered significant. The microarray analysis was performed with the Expression Console and Transcriptome Analysis Console software. One-way ANOVA was applied, and the p-value was adjusted with the FDR method (Benjamini-Hochberg procedure). RNAs were screened at p < 0.05. The significance of GO and KEGG enrichment was calculated by the hypergeometric distribution and Fisher exact test; a lower p-value indicates that a term is more significantly enriched. Two-way repeated-measures analysis of variance followed by a post hoc Bonferroni test was performed to analyze the results of the behavioral studies. Values of p < 0.05 were considered significant.
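The FDR adjustment named above (Benjamini-Hochberg) is straightforward to sketch. In the study it was applied inside the Transcriptome Analysis Console, so the standalone implementation below is illustrative only, with made-up p-values.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ascending raw p-values
    ranked = p[order] * n / np.arange(1, n + 1)  # p_i * n / rank_i
    # Enforce monotonicity from the largest rank downward (step-up procedure)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(raw))
```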
RESULTS

Aged rats were assigned to control and sevoflurane groups. The vital signs and arterial blood gas analysis results during anesthesia were within the normal range. Based on previous studies from our and other groups, multiple pathophysiological changes emerge in the hippocampus 3-12 h after anesthesia, and oxidative stress changes emerge even immediately after anesthesia (Zhang et al., 2012; Li et al., 2013). Inhaled anesthetics can affect hippocampus-related behavioral function from 3 h after anesthesia (Zhang et al., 2012), so the hippocampus was dissected and tested 3 h after anesthesia in the present study. Whole transcriptome gene expression in the hippocampus of aged rats was examined by whole transcriptome microarray analysis (GeneChip™ Rat Transcriptome Array, n = 3). The microarray analysis was performed with the Expression Console and Transcriptome Analysis Console software; one-way ANOVA was applied and the p-value was adjusted with the FDR method. The genome-wide map of all autosomal and heterosomal coding and complex genes was represented as a circular ideogram composed of concentric circles depicting the entire autosome complement, with chromosomal locations annotated in a clockwise manner and statistical significance indicated by the radial arrangement and color codes. The black innermost ring (with vertical lines) represents the autosome ideograms (annotated with the chromosome number), with pter-qter orientation in a clockwise direction. Small red lines represent the centromeres within each chromosome. Red dots outside the ideograms mark genes with increased expression (p < 0.05), while green dots inside mark genes with decreased expression (p < 0.05). The dot position marks the location of the probe distribution along the genome. The second innermost black circle represents the baseline (zero) and the signal difference between the sevoflurane and control groups. Red lines signify regions of increased gene expression and green lines signify regions of decreased gene expression, with the length of each line representing the magnitude of the difference (fold change). The names of DEGs matching the Ensembl gene database are listed in the two outermost circles: the first outermost circle lists DEGs with increased expression (red), and the second outermost circle lists DEGs with decreased expression (green; Figure 1).

FIGURE 1 | Circos plot of genome-wide coding gene expression differences in rat hippocampi, sevoflurane group vs. control condition (n = 3). The black innermost ring (with vertical lines) represents the autosome ideograms (annotated with the chromosome number), with pter-qter orientation in a clockwise direction. Small red lines represent the centromeres within each chromosome. Dots outside the ideograms mark increased gene expression (red dots denote significantly increased mRNA signal), while dots inside mark decreased gene expression (green dots denote significantly decreased mRNA signal). The dot position marks the location of the probe distribution along the genome. The second outermost black circle represents the baseline (zero) and the signal difference between sevoflurane anesthesia and the control condition. Red lines signify regions of increased gene expression and green lines signify regions of decreased gene expression, with the length of each line representing the difference level (p < 0.05). The last two circles show the RefSeq genes associated with different signal intensities (p < 0.05, and within the Ensembl database): the outer circle (red) shows genes with increased signal, and the inner circle (green) shows genes with decreased signal.

The scatter plot shows the variation in hippocampal gene expression between the sevoflurane group and the control condition. The values on the X- and Y-axes of the scatter plot are the normalized signal values of the control and sevoflurane groups (log2 scaled). Expression values are represented in different colors, indicating expression levels above and below the median expression level across all samples. The red dots indicate genes with increased expression and the green dots indicate genes with decreased expression in the sevoflurane group compared with the control condition (p < 0.05; Figure 2A). Hierarchical cluster analysis showed the differentially expressed genes in the sevoflurane group compared with the control condition. The non-paired t-test was used to determine the differences between the two groups. We identified 814 differentially expressed genes (DEGs, p < 0.05), of which 631 were up-regulated and 183 were down-regulated (Figure 2B). To explore the pathophysiologic mechanism of sevoflurane anesthesia-related brain dysfunction, enrichment analysis was carried out. The significance of GO and KEGG enrichment was calculated by the hypergeometric distribution and Fisher exact test, and a lower p-value [higher −log10(p-value)] indicates that the specific term is more significantly enriched.
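To illustrate the enrichment calculation used for the GO and KEGG terms, below is a minimal sketch of a hypergeometric test for a single term, together with the equivalent one-sided Fisher exact test and the −log10(p) score used for ranking in this paper. The background and term sizes are invented for illustration; only the DEG count (814) comes from the text, and DAVID's actual statistic (a modified Fisher exact/EASE score) differs slightly.

```python
from math import log10
from scipy.stats import hypergeom, fisher_exact

# Illustrative counts for one GO term
N = 20000   # annotated genes in the background
K = 300     # background genes annotated with this GO term
n = 814     # differentially expressed genes (DEGs)
k = 30      # DEGs annotated with this GO term

# P(X >= k): probability of drawing at least k term genes among n DEGs
p_hyper = hypergeom.sf(k - 1, N, K, n)

# Equivalent one-sided Fisher exact test on the 2x2 contingency table
table = [[k, n - k], [K - k, N - K - (n - k)]]
_, p_fisher = fisher_exact(table, alternative="greater")

print(f"hypergeometric p = {p_hyper:.3e}, Fisher p = {p_fisher:.3e}")
print(f"-log10(p) score used for ranking = {-log10(p_hyper):.2f}")
```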
The results of the DAVID GO and KEGG analysis revealed that 44 terms of biological process, 16 terms of molecular function, and 18 terms of cellular component were significantly enriched after sevoflurane anesthesia (p < 0.05). Among them, oxidative stress, metabolism, aging, and neurodegeneration were the most enriched biological processes and changed functions after sevoflurane. Six GO terms of oxidative stress, 28 GO and 7 KEGG terms of metabolism, 12 GO terms of aging and neurodegeneration, and 20 terms of cellular component were significantly enriched. The typical terms are displayed and ranked according to the value of −log10(enrichment p-value) (Figure 3). Previous studies have found that oxidative stress is involved in the development of AD, Parkinson's disease and other neurodegenerative diseases (Giasson et al., 2000). We therefore focused first on the enriched GO terms related to oxidative stress. These terms include mitochondrion, response to hypoxia, cellular response to hypoxia, cellular response to oxidative stress, positive regulation of superoxide anion generation and cellular response to mechanical stimulus, with −log10(p-values) of 5.32, 4.6, 2.79, 2.21, 1.37 and 1.13, respectively (Figure 3A). Our previous results also indicate that elevated reactive oxygen species (ROS) and related DNA damage are involved in anesthesia-related pathophysiological changes (Ni et al., 2017). Enriched metabolic terms, including energy, carbohydrate, lipid, nucleotide and amino acid metabolism, were associated with the largest number of potential target genes (Figures 3B,C). Glucose transport, long-chain fatty acid metabolic process, and protein binding were the most significantly enriched GO terms in the carbohydrate-, lipid-, and amino acid-related metabolic processes, with −log10(p-values) of 2.26, 1.75 and 4.18, respectively (Figure 3B). KEGG pathway analysis was then employed to reveal the molecular interaction, reaction and relation networks involved after sevoflurane anesthesia. Figure 3C highlights the significantly enriched signaling pathways in our annotation with −log10(p-value) > 1, including the adipocytokine signaling pathway, insulin resistance, metabolic pathways, lysine degradation, inositol phosphate metabolism, and biosynthesis of amino acids. Previous studies show that apolipoprotein E plays a central role in the clearance of β-amyloid (Aβ) from the brain (Cramer et al., 2012) and that the insulin pathway is involved in resistance to oxidative stress and aging (Byrne et al., 2014). The present results indicate that sevoflurane anesthesia could also alter multiple metabolic pathways and processes. Enriched GO terms related to aging and neurodegeneration include aging, regulation of cell cycle, negative regulation of cell growth, negative regulation of neuron apoptotic process, intrinsic apoptotic signaling pathway in response to DNA damage by p53 class mediator, positive regulation of extrinsic apoptotic signaling pathway via death domain receptors, apoptotic process, negative regulation of apoptotic process and positive regulation of MAPK cascade, with −log10(p-values) of 2.84, 2.68, 2.14, 1.87, 1.20, 1.14, 1.06, 1.04 and 1.01, respectively (Figure 3D). Apoptosis and related pathways contribute to Aβ neurotoxicity in AD, neurodegeneration and dementia (Gervais et al., 1999), and DNA damage is also involved in the process of aging (Lu et al., 2004).
The present results indicate that apoptosis, DNA damage, and other aging- and neurodegeneration-related mechanisms are activated, and that related gene expression is altered, in the aged brain after inhaled anesthetic exposure. Enriched GO terms of cellular components were ranked according to the location of the cellular organelles (from cell membrane to nucleus, Figure 3E). Cytosol, extracellular exosome, membrane, and mitochondrial inner membrane were the most significantly enriched terms, with −log10(p-values) of 3.9, 3.09, 2.45 and 4.18, respectively. These cellular components could play important roles in sevoflurane-induced pathophysiologic changes. Among them, the mitochondrion is a critical regulator of cell death, and mitochondrial dysfunction occurs early and acts causally in the pathogenesis of multiple diseases. Mutations in mitochondrial DNA and oxidative stress both contribute to the aging process, which is the greatest risk factor for neurodegenerative diseases (Lin and Beal, 2006). Based on the GO and KEGG functional annotation and enrichment analysis, genes involved in oxidative stress, metabolism, aging, and neurodegeneration were selected for qPCR verification (n = 6 in each group). These included eight DEGs involved in oxidative stress (Hif2a, Hif3a, Prkcd, Akt, Nfe2l2, Sod1, Scap, and Scd2), six DEGs involved in metabolism (Scap, Hmgcs2, Scd2, Aldh18a1, Glul, and Igf1r), and eight DEGs involved in aging and neurodegeneration processes (Spidr, Ercc4, Cdkn1a, Hipk2, Mal, Pmaip1, Bmpr1b, and Map1lc3b). Good consistency between the qPCR and microarray results was confirmed for 17 genes. The non-paired t-test was used to determine significant differences between the two groups. However, qPCR validation did not show significant changes for Hipk2 (103.70 ± 9.061 vs. 100.00 ± 22.04, p = 0.8790), Mal (115.7 ± 20.74 vs. 100.00 ± 21.49, p = 0.6101) or Bmpr1b (97.88 ± 17.84 vs. 100 ± 15.52, p = 0.9300) after sevoflurane anesthesia. As data quality parameters such as array p-values and fold change may influence the consistency of the two methods, we assume that PCR validations across different experimental conditions are more reliable, in line with previous studies (Morey et al., 2006). Furthermore, HIFs, Hmgcs2, and Cdkn1a are involved in multiple signaling pathways in the oxidative stress, metabolism, and aging/neurodegeneration processes, respectively, and were selected for immunofluorescence analysis of their expression levels and regions. Since close monitoring excluded hypoxia during anesthesia, we assume that sevoflurane may induce perioperative oxidative stress in the brain and activate related mechanisms and genes. Figure 4 shows the oxidative stress-related signaling pathways involved in the aged hippocampus after sevoflurane anesthesia; differentially transcribed genes are shown in red. Sevoflurane activated hypoxia-inducible factors (HIFs) directly, or through Prkcd and Akt-mTOR signaling; the DEGs include Hif2a and Hif3a. Although the increase in Hif1a expression was not significant in the present microarray, our previous studies have shown that the expression of HIF-1α increases significantly after inhaled anesthesia (Cao et al., 2018a,b). Activated HIFs were involved in the increased expression of Scap/SREBP1 and Scd2, resulting in oxidative stress and the generation of more ROS.
It has been reported that the balance of oxygen supply and demand relies on HIFs (Huang, 2013), and Prkcd can also control HIF translation via AKT-mTOR signaling under hypoxic conditions (Kim et al., 2016). On another front, ROS products activated the Nfe2l2-antioxidant response element pathway and increased Sod1 expression, and the Nfe2l2-mediated Sod1 increase could attenuate oxidative stress and protect DNA from related damage (Bordoni et al., 2019). qPCR validation of the oxidative stress-related DEGs showed significant changes after sevoflurane anesthesia compared with the control condition, including for a Nfe2l2-pathway gene (vs. 100.00 ± 37.35 in controls, p = 0.034) and for Sod1 (8.045 ± 2.39 vs. 5.34 ± 1.689, p = 0.0471). As immunofluorescence shows both the presence and location of protein expression, it was selected for further protein expression verification. According to previous studies, the hippocampal CA1 region is the substrate for long-lasting potentiation and the encoding of synaptic memory (Volianskis and Jensen, 2003), and the dentate gyrus (DG) serves an important role in engram maintenance and remote memory generalization (Guo et al., 2018). Thus, these regions were selected as the target regions in the present study, and the results showed that the protein expression of HIF-3α in both regions increased after sevoflurane anesthesia (Figure 5). Figure 6 shows the metabolism-related signaling pathways involved in the aged hippocampus after sevoflurane anesthesia, with DEGs. Sevoflurane activated Scap/SREBP signaling and Hmgcs2 expression. Hmgcs2 expression is both necessary and sufficient for the control of fatty acid oxidation in cells (Vila-Brau et al., 2011), and Scap/SREBP signaling is also involved in this process. Sevoflurane increased Scd2 expression and altered mitochondrial dysfunction-related genes. Scd2 knockdown increases whole-body energy expenditure (de Moura et al., 2016), so increased expression of Scd2 could result in decreased energy metabolism. Sevoflurane also increased Aldh18a1, Glul and Igf1r expression. Hypoxia activates proline biosynthesis via upregulation of Aldh18a1 (Tang et al., 2018), Glul is an enzyme that converts glutamate and ammonia to glutamine (Eelen et al., 2018), and Igf1r plays a central role in glucose metabolism and regulates lifespan and resistance to oxidative stress as well (Holzenberger et al., 2003). Thus, sevoflurane also affected protein and glucose metabolism. qPCR validation of the metabolism-related DEGs showed significant changes for Scap (167.13 ± 48.00 vs. 100.00 ± 53.06, p = 0.0444), Hmgcs2 (156.52 ± 32.07 vs. 100.00 ± 27.43, p = 0.0083), Scd2 (129.10 ± 20.24 vs. 100.00 ± 19.69, p = 0.030), Glul (150.55 ± 42.23 vs. 100.00 ± 30.84, p = 0.039), Aldh18a1 (200.42 ± 90.37 vs. 100.00 ± 59.11, p = 0.046) and Igf1r (160.26 ± 44.83 vs. 4.678 ± 2.097, p = 0.047) after sevoflurane anesthesia compared with the control condition. As shown in Figure 7, the protein expression levels of HMGCS2 increased significantly in the CA1 region and DG of the hippocampus after sevoflurane anesthesia. Figure 8 shows the aging/neurodegeneration-related signaling pathways involved in the aged hippocampus after sevoflurane anesthesia, with DEGs. Besides oxidative stress- and metabolism-related signaling, sevoflurane affected Spidr and Ercc4 expression. Spidr is involved in DNA repair, and its depletion leads to genome instability and causes hypersensitivity to DNA-damaging agents (Wan et al., 2013).
Ercc4 is one of the components of structure-specific endonucleases, which mediate cleavage of DNA structures formed during the repair of collapsed replication forks and double-strand breaks (Svendsen et al., 2009). These changes indicate that sevoflurane could induce DNA damage. DNA damage is a unifying mechanism in neurodegeneration (Ross and Truant, 2017) and could also be involved in brain function alteration after anesthesia. Sevoflurane increased Cdkn1a and Pmaip1 expression. Activation of the tumor suppressor p53 by DNA damage induces either cell cycle arrest or apoptotic cell death, and the cytostatic effect of p53 is mediated by transcriptional activation of the cyclin-dependent kinase inhibitor p21 (encoded by Cdkn1a; Seoane et al., 2002). Cdkn1a has also been associated with aberrant cell cycle and apoptosis (Khan et al., 2018), and Pmaip1 with apoptosis (Zhao et al., 2014). Sevoflurane induced Map1lc3b expression, which plays an important role in autophagy (Samdal et al., 2018). qPCR validation of the aging/neurodegeneration-related DEGs showed significant changes for Spidr (158.56 ± 53.87 vs. 100.00 ± 33.15, p = 0.047), Ercc4 (70.43 ± 12.22 vs. 100.00 ± 29.81, p = 0.049), Cdkn1a (206.99 ± 51.71 vs. 100.00 ± 30.31, p = 0.0014), Pmaip1 (232.86 ± 106.46 vs. 100.00 ± 44.78, p = 0.018) and Map1lc3b (141.64 ± 35.05 vs. 100.00 ± 22.76, p = 0.034) after sevoflurane anesthesia compared with the control condition. The protein expression levels of p21 correlated with the mRNA results and increased significantly in both the CA1 region and DG of the hippocampus after sevoflurane anesthesia (Figure 9). To assess the relationship between sevoflurane anesthesia and hippocampus-dependent behavioral variations, a subgroup of aged rats was subjected to the FCT, consisting of a training session 3 h after anesthesia (the same time point as the genomic expression analysis in the present study) and evaluations at 2 and 7 days after anesthesia. The results showed that the freezing time decreased significantly at 7 days (21.75 ± 11.32 vs. 36.29 ± 13.50, p = 0.0091, Figure 10B), but not at 2 days (34.71 ± 19.77 vs. 46.59 ± 20.33, p = 0.1609, Figure 10A), after anesthesia in the context test (reflecting hippocampus-dependent memory), which suggested that sevoflurane anesthesia could accelerate hippocampus-dependent memory decline.

FIGURE 4 | Hypothetical pathway related to oxidative stress identified by altered mRNA expression in the sevoflurane group vs. control condition. Upward triangles denote upregulated genes and inverted triangles denote downregulated genes; graphs show the differences in expression of Hif2a, Hif3a, Akt, Nfe2l2, Prkcd, Sod1 and Scap between the sevoflurane anesthesia group and controls (*p < 0.05; **p < 0.01).

FIGURE 5 | Immunofluorescent staining of hippocampal HIF-3α in the CA1 and DG regions. Magnification 400×; scale bars 100 and 25 µm.
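As a check on the reported group comparisons, the non-paired two-tailed t-test can be reproduced from the summary statistics alone. The sketch below uses the reported Scap values (% of control, mean ± SD, n = 6 per group) and assumes equal variances; it recovers a p-value close to the reported 0.0444.

```python
from scipy.stats import ttest_ind_from_stats

# Reported Scap qPCR values: sevoflurane vs. control, n = 6 per group
t, p = ttest_ind_from_stats(mean1=167.13, std1=48.00, nobs1=6,
                            mean2=100.00, std2=53.06, nobs2=6,
                            equal_var=True)
print(f"t = {t:.3f}, p = {p:.4f}")  # p comes out close to the reported 0.0444
```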
DISCUSSION

In the present study, we screened out 814 coding and complex genes located across 21 pairs of chromosomes in the aged hippocampus at 3 h after sevoflurane anesthesia; of these, 631 genes were upregulated and 183 genes were downregulated, and the training session of the FCT was conducted at the same time point. GO and KEGG analysis revealed that 44 terms of biological process, 16 terms of molecular function, and 18 terms of cellular component were enriched after anesthesia. Among them, oxidative stress (6 GO terms), metabolism (28 GO and 7 KEGG terms), and aging and neurodegeneration (12 GO terms) were the most enriched biological processes and changed functions. Candidate genes of oxidative stress (Hif2a, Hif3a, Prkcd, Akt, Nfe2l2, Sod1, Scap, and Scd2), metabolism (Scap, Hmgcs2, Scd2, Aldh18a1, Glul, and Igf1r), and aging and neurodegeneration (Spidr, Ercc4, Cdkn1a, Hipk2, Mal, Pmaip1, Bmpr1b, and Map1lc3b) were selected for qPCR verification. Good consistency between the qPCR and microarray results was confirmed, and potential functional genes and signaling pathways were constructed for these biological processes, including mitochondria and oxidative stress, metabolism, aging, and neurodegeneration. The FCT results showed that sevoflurane affected memory retrieval at 7 days after anesthesia. The training session of the FCT was performed at the same time as the genomic expression analysis (3 h after sevoflurane anesthesia), and fear conditioning memory retrieval was assessed at 2 and 7 days after the training session (or anesthesia), both of which reflect recent memory. A trend toward decreased hippocampus-dependent fear memory was observed at 2 days after anesthesia, but the difference was not significant, whereas a significant difference in freezing time was observed at 7 days after anesthesia, indicating that sevoflurane anesthesia could accelerate hippocampus-dependent memory decline. The effects of sevoflurane on memory formation, and the hippocampal genomic expression changes during memory formation, could be pivotal mechanisms underlying these effects. Furthermore, a similar phenomenon of contextual fear memory retrieval has been observed both in the perioperative period and with isoproterenol treatment (Qi et al., 2008), and sevoflurane exposure has been reported to affect the level of noradrenaline in the brain (Anzawa et al., 2001).

FIGURE 6 | Hypothetical pathway related to metabolism identified by altered mRNA expression in the sevoflurane anesthesia group in comparison with the control group. Upward triangles denote upregulated genes and inverted triangles denote downregulated genes; graphs show the differences in expression of Aldh18a1, Glul, Hmgcs2, Igf1r, Scap, and Scd2 between the sevoflurane anesthesia group and controls (*p < 0.05; **p < 0.01).

Mitochondrion and oxidative stress were among the top enriched terms in the cellular component and biological process categories in the hippocampus after sevoflurane anesthesia based on the GO functional annotation, and both the control and anesthesia groups received the same concentration of oxygen. Thus, the results indicate that elevated ROS is involved in anesthesia-related pathophysiological changes in the brain. Oxidative stress can be generated as a consequence of decreased or inactivated antioxidant molecules, an increase in ROS and other oxidant molecules, or an increase in endogenous metabolites capable of autoxidation (Lushchak, 2014).

FIGURE 7 | Immunofluorescent staining of hippocampal HMGCS2 in the CA1 and DG regions. Magnification 400×; scale bars 100 and 25 µm.
Oxidative stress is related to lipid droplet accumulation and the lipid peroxidation process (Liu et al., 2015), which is involved in the development of neurodegeneration (Sultana et al., 2013). A previous study showed that the effect of intermittent hypoxia on serum triglyceride levels is mediated through HIFs, and HIF inhibitors have a neuroprotective effect against hippocampal apoptosis (Kunimi et al., 2019). HIF-1 impacts the posttranscriptional regulation of SREBP-1 through increased levels of SCAP (Pallottini et al., 2008), which could increase lipid metabolism and lead to neurodegeneration (Hallett et al., 2019). HIFs also impact the fatty acid-synthesizing enzyme stearoyl-CoA desaturase (SCD1 and SCD2), whose transcription in the hippocampus has been implicated in neurodegeneration (Vozella et al., 2017). The aging retinal pigment epithelium (RPE) expresses higher levels of Nrf2 (encoded by Nfe2l2) target genes compared with the RPE of younger mice under unstressed conditions, suggesting an age-related increase in basal oxidative stress and that Nrf2 signaling is a promising target for novel pharmacologic or genetic therapeutic strategies against aging (Sachdeva et al., 2014). Our results indicate that multiple metabolism-related signaling pathways are involved in the aged hippocampus after sevoflurane anesthesia, including energy, lipid, protein, and carbohydrate metabolism. Energy metabolism in the aging brain is affected by numerous factors. SCD2 is the main δ9 desaturase expressed in the central nervous system and has been found to play an important role in controlling whole-body energy expenditure (de Moura et al., 2016) and in maintaining normal lipid biosynthesis during early skin and liver development (Miyazaki et al., 2005). SREBP1c is a transcription factor that induces an entire program of de novo lipogenesis, primarily in response to increased insulin, and the induction of de novo lipogenesis in adipocytes under excess carbohydrate intake is likely to be primarily mediated by SREBP1c, as seen in the liver (Kim et al., 1998). SREBP1c is involved in the energy metabolic effects of phenelzine in rats fed a high-fat diet (Mercader et al., 2019). Mass spectrometry and purified protein analysis identified mitochondrial HMG-CoA synthase (HMGCS) as a primary autoantigen; HMGCS is ubiquitous, partitions with mitochondria, and is involved in energy metabolism and oxidative stress (Toivola et al., 2015). HMGCS2 is the regulatory enzyme of ketogenesis in liver mitochondria; ketone bodies serve as an alternative fuel that reduces glucose use during fasting, especially in the brain (Nakamura et al., 2014).
Ammonia is a toxic product of protein catabolism and is involved in changes in glutamate metabolism. One of the primary roles of astrocytes is to protect neurons against excitotoxicity by taking up excess ammonia and glutamate and converting them into glutamine via the enzyme glutamine synthetase (GLUL). A genetic study found that GLUL is associated with major depressive disorder, and loss of astroglial GLUL has been reported in the hippocampi of epileptic patients (Zhou et al., 2019). Studies have also shown that insulin/insulin-like growth factor 1 (IGF1) signaling inhibits age-dependent axon regeneration and is involved in neurodegeneration, and the growth hormone (GH)/IGF-1 pathway plays a key role in the modulation of the aging process (Byrne et al., 2014).

FIGURE 8 | Hypothetical pathway related to aging/neurodegeneration identified by altered mRNA expression in the sevoflurane anesthesia group in comparison with the control group. Upward triangles denote upregulated genes and inverted triangles denote downregulated genes; graphs show the differences in expression of Spidr, Pmaip1, Ercc4, Map1lc3b, and Cdkn1a between the sevoflurane anesthesia group and controls (*p < 0.05; **p < 0.01).

FIGURE 9 | Immunofluorescent staining of hippocampal p21 (Cdkn1a) in the CA1 and DG regions. Magnification 400×; scale bars 100 and 25 µm.

The sequence of the human genome represents our genetic blueprint, and accumulating evidence suggests that loss of genomic maintenance may causally contribute to aging; brain aging and neurodegeneration show similarities at the molecular level (Maynard et al., 2015). The most studied molecular pathways involved in neurodegeneration are inflammation and oxidative stress (Fischer and Maier, 2015), metabolism (Citron et al., 2016) and DNA damage (Madabhushi et al., 2014), which are consistent with the pathophysiological processes in the hippocampus after sevoflurane anesthesia. The brain consumes oxygen at a relatively high rate, leading to high exposure of neurons to ROS products. If antioxidants are depleted in the brain, neurons become susceptible to ROS-induced DNA damage (Nakae et al., 2000). DNA damage and mitochondrial dysfunction can adversely affect neuronal functions, thus increasing the risk of neurodegenerative disease (Madabhushi et al., 2014). Neurological dysfunction has been found in individuals and mouse models with genetic errors in DNA repair genes (Jeppesen et al., 2011). ERCC4 forms a complex with ERCC1 and is required for the 5′ incision during nucleotide excision repair. ERCC4 also plays a critical role in DNA interstrand crosslink repair, and pathogenic variants in this gene cause segmental progeroid syndromes (Mori et al., 2018). In the brains of AD patients and AD mouse models, Aβ plaque-associated Olig2- and NG2-expressing oligodendrocyte progenitor cells exhibit a senescence-like phenotype through the upregulation of p21 (encoded by Cdkn1a) and p16 (Zhang et al., 2019).
Autophagy is critical to the maintenance of organismal homeostasis in both physiological and pathological situations. Autophagy protects neurons against regulated cell death by preventing the accumulation of cytotoxic protein aggregates and preserving metabolic homeostasis (Menzies et al., 2015). The mTOR inhibitor rapamycin activates autophagy, alleviates the accumulation of Aβ, and ameliorates cognitive deficits in mice expressing mutant APP (Caccamo et al., 2010). Combined with the present results, autophagy could also be a therapeutic target for anesthesia-related brain function changes. Previous studies have shown multiple mechanisms and pathophysiological changes after anesthesia (Ni et al., 2017; Xie and Xu, 2013), and genome-wide association studies have been implemented in studies of brain aging and neurodegenerative diseases (Harold et al., 2009; Lambert et al., 2009; Seshadri et al., 2010). The present study explored sevoflurane anesthesia-induced genome-wide changes in the hippocampus of aged rats. Based on the functional annotation, mitochondrial dysfunction and oxidative stress, metabolic changes, and aging and neurodegeneration, among multiple mechanisms, were found to be involved in postoperative pathophysiological processes and function modulation in the hippocampus. A potential genetic regulatory network and the involved signaling pathways were established; genes including Hifs, Prkcd, Nfe2l2, Hmgcs2, Glul, Ercc4, Cdkn1a, and Map1lc3b participate in this genetic regulatory network. These results provide potential therapeutic gene targets for brain function modulation and the memory formation process, which could be valuable for preventing postoperative brain disorders and diseases, such as PND, at the genetic level in the future.

FIGURE 10 | Fear conditioning test (FCT), consisting of a training session 3 h after anesthesia and evaluations at 2 and 7 days after anesthesia. The freezing time decreased significantly at 7 days (B), but not at 2 days (A), after anesthesia in the context test. During the tone test, the freezing time did not decrease significantly at 2 days (C) or 7 days (D) after anesthesia. **p < 0.01; N.S., not significant.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in GEO (submission number: GSE141242).

ETHICS STATEMENT

The animal study was reviewed and approved by the Peking University biomedical ethics committee experimental animal ethics branch.

AUTHOR CONTRIBUTIONS

YW performed the experiments, analyzed the data, and wrote the original draft of the manuscript. MQ performed the experiments, analyzed the data, and revised the manuscript. YQ and NY contributed to the experiments. KL, JY, and YZ contributed to the data analysis. XG and BM contributed to the manuscript revision. JZ contributed to the experiment design and manuscript revision. CN designed the project, supervised the experiments, and drafted and revised the manuscript. All authors read and approved the final manuscript.

FUNDING

This work was supported by the National Natural Science Foundation of China (Grant Nos. 81771146, 81901095, 81971868 and 81400869).
Generation of human blastocyst-like structures from pluripotent stem cells

Human blastocysts are composed of the first three cell lineages of the embryo: trophectoderm, epiblast and primitive endoderm, all of which are essential for early development and organ formation. However, due to ethical concerns and restricted access to human blastocysts, a comprehensive understanding of early human embryogenesis is still lacking. To bridge this knowledge gap, a reliable model system that recapitulates the early stages of human embryogenesis is needed. Here we developed a three-dimensional (3D), two-step induction protocol for generating blastocyst-like structures (EPS-blastoids) from human extended pluripotent stem (EPS) cells. Morphological and single-cell transcriptomic analyses revealed that EPS-blastoids contain the key cell lineages and are transcriptionally similar to human blastocysts. Furthermore, EPS-blastoids are similar to human embryos that were cultured for 8 or 10 days in vitro, in terms of embryonic structures, cell lineages and transcriptomic profiles. In conclusion, we developed a scalable system to mimic human blastocyst development, which can potentially facilitate the study of early implantation failure induced by developmental defects at an early stage.

Introduction

Human embryogenesis begins with a fertilized egg, which undergoes cell divisions, lineage segregations, and morphogenic rearrangements that lay the foundation for blastocyst formation. Following activation of the embryonic genome and the beginning of compaction and polarization, blastomeres undergo lineage segregation and morphogenetic rearrangements to form a ball-shaped structure termed the blastocyst [1][2][3]. Blastocysts contain specialized cell types, namely epiblast (EPI), primitive endoderm (PE), and trophectoderm (TE) [4][5][6]. In recent years, "multi-omic" technologies have enabled researchers to chart the gene-transcription and chromatin-modification landscapes of these cell types, providing valuable information regarding human embryogenesis [7][8][9][10]. However, the supply of human embryos is extremely limited due to ethical and technical limitations, thereby precluding a precise mechanistic understanding of early human embryogenesis. To systematically interrogate early human development, a robust in vitro model of human embryogenesis is urgently needed. Using human embryonic stem cells (hESCs), researchers have been working toward modeling embryogenesis in a dish. Previous studies have reported that hESCs cultured in a three-dimensional (3D) soft gel or a microfluidic device can form embryonic sac-like structures that mimic early postimplantation human EPI and amnion development [11][12][13]. Recently, 3D human gastrulating embryo-like structures (gastruloids) were generated by subjecting hESCs to a pulse of a Wnt agonist, allowing modeling of the spatiotemporal organization of the three germ layers during gastrulation 14. However, all these models use only EPI-derived hESCs and lack cells resembling the TE and PE. Therefore, they cannot fully recapitulate the lineage interactions that characterize human embryonic development. In addition, recent studies suggested that hESCs are of postimplantation EPI origin and represent the primed pluripotency state 5,15. Therefore, hESCs may not be suitable for modeling preimplantation development. Recently, researchers have attempted to generate human stem cells resembling those in the preimplantation embryo (i.e., exhibiting naive pluripotency).
These efforts revealed a continuum of pluripotency in the form of human naive stem cells and extended pluripotent stem (EPS) cells 16,17. With these cells in hand, it became possible to test whether they can self-organize into preimplantation embryo-like structures. This interest was further stimulated by the recent success in generating mouse blastocyst-like structures, or blastoids [18][19][20]. Mouse EPS cell aggregates recapitulate several morphogenic hallmarks of preimplantation embryogenesis and differentiate into both embryonic and extra-embryonic lineages, thus forming blastoids that share many features of the blastocyst. Recently, three groups reported the reprogramming of fibroblasts into in vitro three-dimensional models of the human blastocyst, and the generation of blastocyst-like structures in vitro from naive human pluripotent stem cells [21][22][23]. However, given the significant differences between mice and humans, it was unclear whether human EPS cells can also generate blastoids in vitro to mimic human embryogenesis from the preimplantation to postimplantation stages. In this study, we developed a 3D, two-step differentiation protocol for generating human blastocyst-like structures, named EPS-blastoids, from human EPS cells. Human EPS-blastoids partially resembled human blastocysts in terms of morphology, specific markers of the three cell lineages, and global transcriptome signatures at single-cell resolution. Importantly, further in vitro culturing of these human EPS-blastoids resulted in the emergence of structures similar to those observed in early postimplantation embryos.

Results

A 3D two-step differentiation method for generating human blastoids

Using an established protocol 16, we converted human induced pluripotent stem cells (iPSCs) into EPS cells 16. We then attempted to generate human blastoids from these EPS cells using a modified protocol for generating mouse blastoids 18. However, EPS cell aggregates treated with induction media containing BMP4 generally failed to form a cavity-containing structure after 5 days of induction (Supplementary Fig. S1a). A few aggregates appeared to contain a small cavity, but they were enclosed by a membrane instead of TE-like cells and did not have the typical morphology and size of a blastocyst. Immunofluorescence labeling showed that the majority of these solid structures expressed the EPI marker OCT4 and the PE marker GATA6, while a small fraction (~10% of day 6 aggregates) also expressed, albeit partially, the TE markers CK8 and GATA2/3 (in about 30%-50% of TE cells in each aggregate) (Supplementary Fig. S1b, c). These results indicate that at least a small number of human EPS cells were capable of differentiating into the TE lineage upon BMP4 induction using the modified mouse blastoid protocol 18. We then attempted to pretreat human EPS cells with BMP4 to generate TE-like cells. First, gene expression was analyzed via real-time quantitative PCR (qPCR) during a time course of BMP4 induction to optimize the human EPS cell differentiation conditions. Between day 0 and day 5 of BMP4 stimulation, EPI-specific genes were gradually downregulated as TE-specific genes were upregulated (Supplementary Fig. S1d). Interestingly, genes characteristic of mid- and late-TE were activated by BMP4 induction, with most TE-specific genes reaching 1.5-10-fold induction by day 3 (30- and 500-fold for WNT7 and GATA3, respectively) (Supplementary Fig. S1e). Higher levels of induction were observed on day 5 of BMP4 treatment.
TE-like cells exhibited morphological changes by 2 days of BMP4 induction, with cells becoming flattened and enlarged (Supplementary Fig. S1f). Immunostaining revealed more GATA2/3-, CK7-, CK8-, and TFAP2C-positive cells on day 3 compared with day 1 or 2 (Supplementary Fig. S1g, h). As day 3 marks the onset of significant gene expression and morphological changes, we decided to use TE-like cells subjected to 3 days of BMP4 pretreatment for the subsequent experiments. Given the importance of TE-like cells for forming blastoid structures, we further characterized these EPS-derived TE-like cells and compared them to TE derivatives of pluripotent stem cells (PSCs). We performed RNA sequencing (RNA-seq) analysis of EPS-derived TE-like cells following BMP4 or BAP (BMP4, A83-01, and PD0325901) treatments and compared the gene expression data with published data on TE-like cells derived from human naive PSCs treated with BAP or with A83-01 and PD0325901 (PDA83) 24,25. Human EPS cells and naive PSCs clustered together in principal component analysis (PCA), while TE-like cells formed a separate cluster. EPS cells and naive PSCs shared a similar differentiation trajectory when subjected to TE-inducing conditions (Supplementary Fig. S2a). qPCR analysis showed that TE-associated genes, such as GATA3, TFAP2C, GATA2, DAB2, and KRT18, were significantly upregulated in EPS cells treated with BMP4 for 3 days, as was also observed in BMP4-treated naive PSCs (Supplementary Fig. S2b). Moreover, hierarchical clustering of gene expression showed that while EPS-derived TE-like cells had comparable (hEPS-BMP/BAP vs PXGL_PDA83) or better (hEPS-BMP/BAP vs iNPSC_BAP) upregulation of TE marker genes compared with naive PSC TE derivatives, they extinguished the pluripotency program more completely (Supplementary Fig. S2c). On the other hand, EPS cells expressed high levels of pluripotency genes associated with the preimplantation naive state (e.g., KLF4 and NANOG) but not the postimplantation primed state (e.g., ZIC2), and efficiently downregulated pluripotency genes upon BMP4 treatment (Supplementary Fig. S2b, d). These data suggest that exposure of EPS cells to BMP4 for 3 days enabled a reliable TE fate transition. To generate human blastoids, we first pretreated human EPS cells with BMP4 to generate TE-like cells, and then mixed human EPS cells with these TE-like cells at a ratio of 1:4-1:5 (Fig. 1a). After 24 h, these loosely connected cells formed small aggregates, which grew and formed structures with a small cavity by day 4. By days 5-6, blastocyst-like structures were apparent (Fig. 1b), with about 1.9% exhibiting typical blastocyst morphology (Fig. 1c). Morphologically, these human blastoids were similar to natural human blastocysts (Fig. 1d, e) in terms of average diameter, but blastoids seemed to have more total cells and fewer cells in the inner cell mass (ICM) (Fig. 1f-h).

Human EPS-blastoids contain cells of the three lineages of blastocysts

Mixing TE-like cells and EPS cells resulted in cell aggregation and the formation of blastocyst-like structures (EPS-blastoids) on day 5. To investigate whether EPS-blastoid formation recapitulated key cellular processes, we monitored the cellular dynamics of blastoids during days 2 and 3. We found that the cell-adhesion protein E-cadherin (E-cad) localized to cell-cell junctions, indicating cell-cell interactions and communication within EPS aggregates 26 (Supplementary Fig. S3a).
Ki-67 labeling showed proliferation of EPS-blastoids on days 5 and 6 (Supplementary Fig. S3b). We sought to determine whether blastoids contained the three cell lineages of the blastocyst, namely EPI, PE and TE, as all three are necessary for an embryo to develop beyond implantation. Immunofluorescence analysis of day 4 blastoids revealed extremely low levels of OCT4 in the outer cell layer (TE cells), with the highest levels localized to EPI cells in the interior of the ICM. Cells surrounding these OCT4-positive cells expressed the PE marker GATA6, as seen in natural early blastocysts 27 (Supplementary Fig. S4a). Immunofluorescence analysis of EPS-blastoids during days 5 and 6 revealed that OCT4-positive cells localized exclusively to the ICM-like compartment (Fig. 2a, e), whereas cells in the outer layer expressed the TE-specific transcription factors GATA2 and GATA3 (Fig. 2c, d). The outer layer of cells also expressed the trophoblast (TrB)-specific cytokeratin KRT8 (CK8), indicative of TE specification (Fig. 2e). The PE marker GATA6 localized to cells adjacent to the OCT4-positive cells (not those within the outer layer of the blastoids) (Fig. 2b, f). This pattern of localization is similar to that seen in human blastocysts (Fig. 2g). In some blastoids, positive signals for OCT4 and GATA6 could be detected in the TE (Fig. 2e, f), indicating incomplete reprogramming of the TE lineage from EPS cells. We then calculated the percentage of CK8-positive and OCT4-positive cells in day 6 EPS-blastoids (n = 172). These analyses revealed a smaller fraction of OCT4-positive cells than in human blastocysts, whereas the fraction of CK8-positive cells was reminiscent of human blastocysts. On average, ~15% and ~80% of cells in a blastoid expressed OCT4 and CK8, respectively (Supplementary Fig. S3c, d). Of these blastoids, 53.5% exhibited the correct pattern of ICM-like (OCT4+) and TE-like (CK8+) localization, 36.6% exhibited only the correct TE-like pattern (CK8+), 5.8% had only the ICM-like lineage, and 4.1% exhibited mislocalization of ICM- and TE-like cells (Fig. 2h). With further development of early blastocysts, the ICM divides into two lineages: EPI and PE. We therefore examined whether blastoids could develop into late blastocyst-like structures composed of the three lineages (EPI, PE and TE). Among 131 blastoids on day 6, about 8% of the cells per blastoid were positive for GATA6 expression on average (Supplementary Fig. S3e). Of these blastoids, 26% showed the PE-like lineage (GATA6+, in a stochastic manner) (Fig. 2i), and 21% expressed all three lineage markers (Fig. 2j). Moreover, around 80% of the blastoids expressed OCT4, CK8, or GATA6 (Supplementary Fig. S3f-h). Under this induction system, a large proportion of EPS cells failed to form blastoids, although they expressed markers indicative of the EPI, PE and TE lineages; they instead retained features of day 4 aggregates (Supplementary Fig. S4a). In some abnormal day 6 aggregates, OCT4, SOX2 or GATA6 localized improperly to TE cells (Supplementary Fig. S4b-e) instead of the ICM. In summary, these results demonstrated that EPS-blastoids recapitulated the segregation of the TE and ICM cell lineages and that these blastoid structures possessed the three lineages typical of a blastocyst; however, incompletely reprogrammed EPS cells could still be detected in some blastoids. To confirm the existence of the EPI and TE lineages in EPS-blastoids, we attempted to derive ESC-like pluripotent stem cell (PSC) lines and trophoblast stem cells (TSCs) from day 6 EPS-blastoids.
We were able to establish 3 PSC lines from 6 EPS-blastoids, and 4 TSC lines from 10 EPS-blastoids, using culture conditions reported previously 28,29 (Supplementary Fig. S5a, e).

Fig. 1 Induction of human blastoids under the 3D two-step condition. a Schematic of human blastoid formation. EPS cells were first induced to TE-like cells, and the TE-like cells were then mixed with EPS cells and seeded together into AggreWell plates on day 0. The aggregates further differentiated and organized into human EPS-blastoids. b Phase-contrast images of human aggregates on the indicated days, showing the formation of human blastoids from day 0 to day 6. Scale bar = 5 μm. c Derivation efficiency of human blastoids is about 1.9%, significantly lower than the developmental efficiency of human blastocysts. d Phase-contrast image of human blastoids on day 6. Scale bar = 50 μm. e Phase-contrast images of a human EPS-blastoid (upper) and a human blastocyst (lower). The red line indicates the inner cell mass (ICM) of the structure; the outer layer of cells represents trophoblast cells (TE). Scale bar = 50 μm. f-h Mean diameter (f), total cell number (g), and ICM cell ratio (h) compared between human EPS-blastoids and blastocysts. n = 30 EPS-blastoids, n = 30 blastocysts. Data in c are means ± SD (n = 12 blastoids); **P < 0.001. Data in e are means ± SD (n = 12 blastoids); P > 0.05. Data in f and g are means ± SD (n = 12 blastoids); *P < 0.05. Data in h are numbers (n = 40 blastoids, 40 blastocysts); **P < 0.001.

PSCs and TSCs were morphologically similar to those derived from blastocysts (Supplementary Fig. S5b, f). PSC colonies expressed the pluripotency markers OCT4, SOX2, SSEA4, and TRA-1-60 (Supplementary Fig. S5c), whereas TSC colonies expressed the TE-specific markers GATA3, CK7, and TFAP2C (Supplementary Fig. S5g). Further, PSCs derived from EPS-blastoids showed trilineage differentiation potential (Supplementary Fig. S5d). TSCs derived from blastoids could generate syncytiotrophoblasts (STBs) and extravillous cytotrophoblasts (EVTs) (Supplementary Fig. S5h-j). PCA indicated that the PSCs and TSCs derived from blastoids showed a close transcriptional resemblance to established PSC and TSC lines 25 (Supplementary Fig. S5k).

Single-cell transcriptome analysis of human blastoids

We performed single-cell RNA-sequencing (scRNA-seq) analysis of 200 day 6 human blastoids. After quality control and filtering, 10,933 single cells were further analyzed using the bioinformatic suite Seurat. Uniform manifold approximation and projection (UMAP) clustering analysis showed that cells from human blastoids could be divided into 12 clusters (Supplementary Fig. S6a). Based on the expression of marker genes (Supplementary Fig. S6a), we characterized two clusters as EPI/ICM, two clusters as PE, and two clusters as TE. The remaining six clusters seemed to express both ICM and TE markers and could each represent an intermediate cell type (Fig. 3a, b). In addition, we performed unsupervised clustering analysis and confirmed that the top genes specific for each lineage were consistent with the UMAP clustering results. The EPI/ICM cluster expressed OCT4 (POU5F1), NANOG, and SOX2; the PE cluster expressed FN1, COL3A1, and GATA6; and the TE cluster expressed GATA2 and GATA3. To reveal similarities and differences between EPS-blastoids and natural blastocysts, we compared our scRNA-seq data from day 6 EPS-blastoids with a dataset acquired from human blastocysts 7 .
Comparisons of our results with data derived from E5-E7 blastocysts 7 revealed 40, 16, and 37 genes that overlapped with those in the EPS-blastoid EPI, PE, and TE clusters, respectively (Supplementary Fig. S6b; Table S1). UMAP analysis revealed that cells from the EPI-, TE-, and PE-like clusters mostly overlapped with their EPI, TE, and PE counterparts from blastocysts (Fig. 3c). The expression of the overlapping differentially expressed genes (DEGs) across the three lineages is shown in a heat map (Fig. 3e; Supplementary Table S1). Overall, our scRNA-seq analysis revealed similarities between the transcriptome landscapes of EPS-blastoids and early blastocysts and confirmed that, by day 6, human blastoids contained the three cell lineages found in blastocysts.

Human EPS-blastoids can develop into postimplantation embryonic structures

To test whether human EPS-blastoids could undergo postimplantation morphogenesis, we cultured day 6 blastoids for an additional 2-4 days (hereafter referred to as day 8 and day 10 embryonic structures) using a previously established in vitro culture (IVC) system, which relies on Matrigel and modified IVC1/2 media to mimic blastocyst implantation 30,31 . On day 8, GATA6-positive cells encircled the OCT4-positive cells (Fig. 4a). On day 10, the localization patterns for OCT4 and GATA6 resembled those on day 8, except for an increased number of cells within the day 10 embryonic structures (Fig. 4b). Moreover, GATA3-positive cells were distributed around the postimplantation structures (Fig. 4a, b). We then performed scRNA-seq analysis using 40 day 8 and 20 day 10 human embryonic structures. After quality control and filtering, 11,634 single cells from day 8 and 7,872 single cells from day 10 were further analyzed. The human placenta consists of three major TrB subpopulations: CTBs, STBs, and EVTs. UMAP analysis by Seurat revealed that the primary clusters could be identified as EPI, PE, and TrB (CTBs and STBs) on day 8 (Fig. 4c; Supplementary Fig. S7b) and as EPI, PE, and TrB (CTBs and STBs) on day 10 (Fig. 4d; Supplementary Fig. S8b). These assignments were based on the expression of 16 representative marker genes (Supplementary Fig. S6a, S7a). To reveal similarities and differences between EPS embryonic structures and natural blastocysts subjected to IVC, we compared our results for EPS embryonic structures from days 8 and 10 with data acquired from 7-14 d.p.f. IVC embryos 32,33 . A total of 155, 263, and 78 genes specific for our ICM, PE, and TB clusters (day 8) overlapped with Xiang's analyses of 7-14 d.p.f. IVC embryos (Supplementary Fig. S7c; Table S1). Similarly, comparison of day 10 embryonic structures and 7-14 d.p.f. IVC embryos revealed 175, 157, and 40 genes that overlapped with our EPI, PE, and TB clusters, respectively (Supplementary Fig. S8c; Table S1). UMAP analysis revealed a concordance between cells of the day 8 and day 10 embryonic structures and 7-14 d.p.f. IVC embryos (Fig. 4e; Supplementary Fig. S8d). Moreover, a heat map showed that our EPS embryonic structures (days 8 and 10) and 7-14 d.p.f. IVC embryos had similar transcriptional profiles (Fig. 4f). In conclusion, embryo-like structures cultured from EPS-blastoids in vitro (days 8 or 10) resembled natural human embryos in terms of their single-cell transcriptome landscape to a certain degree, demonstrating their potential for modeling early human postimplantation development.

Discussion

Mouse EPS cells can be induced to form blastocyst-like structures (blastoids) 18,19 .
Because of the significant differences between mouse and human developmental processes, it was thought that the generation of human blastoids might be more challenging. Indeed, applying the modified mouse culture system to human EPS cells failed to generate blastoids. To overcome this obstacle, we developed a 3D, two-step induction system for generating blastoids from human EPS cells. In our two-step induction system, we first exposed EPS cells to BMP4 for 3 days to induce TE-like cell formation. These TE-like cells were then mixed with EPS cells to generate EPS-blastoids. We found that TE-like cells expressed early-, mid-, and late-TE cell markers, which enhanced their subsequent developmental potential. Human EPS-blastoids were similar to human blastocysts of the same stage based on both morphology and cell lineage analysis, the latter conclusion resting on immunofluorescence and scRNA-seq analyses. The efficiency of EPS-blastoid formation was lower than that seen for mouse blastoids (1.9% vs 15% in the human and mouse systems, respectively). One potential reason for this difference is the difficulty of maintaining the stemness and pluripotency of human EPS cells in our differentiation system. The human EPS cells were difficult to maintain as dome-shaped colonies during cultivation and contained differentiated cells that could disturb blastoid formation. Other reasons include differences between human and mouse early embryonic development and differences between the mouse and human embryo culture systems. Culture medium is an important component for inducing blastoids, and the culture system for human embryos is not as robust as the mouse system: up to 90% of mouse embryos can develop to blastocysts in vitro, whereas only 50% of human embryos reach the blastocyst stage. The efficiency of PE lineage derivation is relatively lower than that of the EPI and TE lineages. There are several possible reasons for this. First, no mature PE derivation protocol is currently available, so it is hard to induce EPS cells into classical PE cells. Second, the PE lineage matures more slowly than the EPI and TE lineages, and PE markers are often expressed in TE cells, which makes it difficult to identify the correct PE pattern and to calculate the percentage of structures with the correct phenotype. Also, the comparisons of our scRNA-seq data with previous data suggested that the human two-step blastoid differentiation system must be further optimized to provide a robust model system for studying the human blastocyst. Moreover, few transcriptome datasets are available for human blastocysts and blastoids generated with a similar scRNA-seq system; therefore, further detailed analyses are required from early developmental stages through postimplantation stages. Our human EPS-blastoids recapitulated to a great extent the 3D architecture of human blastocysts and exhibited all three developmental lineages. Functionally, we could derive both PSCs and TSCs from human EPS-blastoids. More importantly, they gave rise to postimplantation embryonic structures. These observations suggest that hEPS-blastoids manifest at least some functionalities of the natural human blastocyst. The ability of hEPS-blastoids to generate several types of mature TrBs, like those present in the human placenta, offers great promise for studying placental disorders in the future. In summary, we have established an in vitro system for generating human blastoids that recapitulate the development of a human blastocyst.
We note that during the preparation of this manuscript, three studies 21-23 reported the successful generation of human blastoids from human iPSCs and naive PSCs. These human blastoid models provide an alternative and potentially high-throughput platform for exploring the mechanisms of human blastocyst development and stem cell differentiation during the preimplantation and postimplantation stages.

Human samples and ethics statement

Human skin fibroblasts were isolated from the chest of an aborted female fetus and were obtained with informed written consent and approval from the Third Affiliated Hospital of Guangzhou Medical University. The generation of iPSCs from the donated human fibroblasts was approved by the Ethics Committee of the Third Affiliated Hospital of Guangzhou Medical University. Human blastocysts produced by in vitro fertilization for clinical purposes were obtained with informed written consent and approval from the Third Affiliated Hospital of Guangzhou Medical University. All procedures were approved by the Institutional Review Board of the Third Affiliated Hospital of Guangzhou Medical University (2020027) and Peking University Third Hospital (S2020022).

Fig. 3 Landscape of the transcriptome in human blastoids on day 6. a A UMAP plot of 10,933 cells from human blastoids, showing that cells in day 6 blastoids were divided into 4 major clusters. The EPI/ICM, PE, TE, and IM (intermediate) subgroups were determined according to lineage-specific markers; cells in the IM subgroups express markers of all three lineages or genes of uncertain lineage. b Heat map of lineage signature gene expression on day 6. c UMAP projection of integrated datasets showing EPI-, PE-, and TE-like cells (and IM clusters) together with EPI, PE, and TE cells of blastocysts from a previously published study 7 . d Dot plot indicating EPI, PE, and TE lineage markers. e Expression of genes overlapping between a previous study 7 and this study, shown by heat map and GO term analysis.

In vitro culture of EPS-blastoids for 8 and 10 days

The method for extended in vitro culture of embryos followed previous studies 30,31 . Blastoids were collected on day 6, transferred to eight-well plates (treated with Matrigel 30 min in advance), and cultured in IVC1 medium for 2 days. On day 8, blastoids were checked for adherence to the bottom of the well; if adherent, the culture medium was replaced with IVC2 for further culture until day 10. The composition of IVC1 medium is the same as described in "In vitro 3D generation of human EPS-blastoids". IVC2 medium is composed of Advanced DMEM/F12, 30% Knockout serum, 2 mM L-GlutaMAX, 0.5% penicillin/streptomycin, 1% ITS-X, 1% sodium pyruvate, 8 nM β-estradiol, 200 ng/mL progesterone, 2 μM Y27632, and 25 μM N-acetyl-L-cysteine. For embryoid body (EB) formation, primed PSCs at 70% confluency were gently picked using a glass needle and cultured in DMEM/F-12 with GlutaMAX supplemented with 20% KSR, 1% nonessential amino acids, 55 μM β-mercaptoethanol, and 2 ng/mL bFGF (Thermo Fisher Scientific) on ultra-low-attachment plates (Corning), with medium replacement every other day. On day 7, the trilineage differentiation assay of primed PSCs was performed on Matrigel-coated four-well plates in EB induction media for 4-5 days.
Immunofluorescence labeling

The samples were fixed with 4% paraformaldehyde in phosphate-buffered saline (PBS) for 20 min at room temperature, washed three times with PBS, and permeabilized with 0.2% Triton X-100 in PBS for 15 min. After blocking with 5% BSA in PBS for 2 h at room temperature, samples were incubated with primary antibody diluted in blocking buffer overnight at 4°C. After primary antibody incubation, samples were washed three times with PBS containing 0.1% Tween-20 and incubated with fluorescence-conjugated secondary antibodies diluted in blocking buffer at room temperature for 2 h. Nuclei were stained with Hoechst 33342 (Sigma, 94403) at 1 μg/mL. Zeiss LSM 710 or 880 and Leica SP8 confocal microscopes were used for imaging. Images were processed with ZEN (Zeiss) and Fiji (ImageJ, V2.0.0) software. The primary antibodies and dilutions were as follows:

Real-time quantitative PCR

Total RNA was extracted using TRIzol (Invitrogen, 15596018). RNA (2 μg) was reverse-transcribed to cDNA using the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, K1621). qPCR was performed on an Applied Biosystems QuantStudio 3. Changes in gene expression were calculated by the comparative ΔΔCt method or the ΔCt method relative to GAPDH expression. All experiments were performed in triplicate. The primers used in this study are listed in Supplementary Table S2.

hCG ELISA detection

Conditioned medium was collected from TSC-derived STBs. The hCG level in the medium was measured using a human CG beta (HCG beta) ELISA kit (R&D Systems, DY9034-05) according to the manufacturer's instructions.

RNA-sequencing and analysis

TE-like cells derived from human EPS cells pretreated with BMP4 or BAP were sequenced using paired-end reads (PE150) on a NovaSeq following directional library preparation. All bulk RNA-seq samples from this study and previously published datasets were uploaded to the online RNA-seq analysis platform A.I.R. (Sequentia) for read mapping and statistical analysis. NOISeq analysis was performed and a PCA plot was constructed.

Single-cell RNA-sequencing

Human EPS-blastoids were picked up by mouth pipette and washed with PBS containing 0.05% BSA. About 200 day 6 EPS-blastoids, 40 day 8 and 20 day 10 human embryonic structures were collected for single-cell RNA-seq. Samples on day 6 were dissociated with an enzyme mix composed of 0.5× Versene (Lonza, 17711E), 0.5× Accumax (STEMCELL Technologies, 07921) and 0.05× DNase I (STEMCELL Technologies, 07900) at 37°C for 30 min with agitation, and dissociation was terminated with 5% BSA in PBS. Samples on day 8 or 10 were dissociated with an enzyme mix composed of 0.25% Trypsin (Thermo Fisher Scientific, 25200056) and 0.05× DNase I (STEMCELL Technologies, 07900) at 37°C for 15 min with agitation, and dissociation was terminated with 5% BSA in PBS. Dissociated cells were repeatedly pipetted and washed with PBS containing 0.05% BSA. Using the Single Cell 3' Library and Gel Bead Kit V3 (10× Genomics, 1000075), the cell suspension (700-1000 living cells per microliter as determined by CountStar, about 10,000 dissociated cells per sample) was loaded onto a Chromium Single Cell B Chip (10× Genomics, 1000074) and processed in the Chromium single-cell controller (10× Genomics) to generate single-cell gel beads in emulsion according to the manufacturer's protocol. In short, single cells were suspended in PBS containing 0.04% BSA (700-1000 cells per ml).
No more than 10,000 cells were added to each channel. Captured cells were lysed and the released RNA was barcoded through reverse transcription in individual GEMs 35 . Reverse transcription was performed in a S1000 Touch Thermal Cycler (Bio-Rad): the GEMs were incubated at 53°C for 45 min, followed by 85°C for 5 min and a hold at 4°C. cDNA was generated and then amplified, and its quality was assessed using an Agilent 4200. Single-cell RNA-seq libraries were constructed using the Single Cell 3' Library and Gel Bead Kit V3 according to the manufacturer's instructions. Finally, sequencing was performed on the Illumina NovaSeq 6000 sequencer with a sequencing depth of at least 60,000 reads per cell and 150 bp (PE150) paired-end reads (performed by Capital, Beijing and Novogene, Beijing).

Analysis of single-cell RNA-sequencing (scRNA-seq) data

The Cell Ranger software was obtained from the 10× Genomics website: https://support.10xgenomics.com/single-cell-gene-expression/software/downloads/latest. Alignment, filtering, barcode counting, and UMI counting were performed with the Cell Ranger count module to generate the feature-barcode matrix and determine clusters. Dimensionality reduction was performed using PCA, and the first ten principal components were used to generate clusters with the K-means and graph-based algorithms, respectively. In parallel, the R package Seurat 4.0.3 was used to analyze the feature-barcode matrix with the following steps: (1) Cells with fewer than 100 detected genes, more than 60,000 unique feature counts, or a mitochondrial gene ratio above 20%, as judged from the quality-control plots, were regarded as abnormal and filtered out. (2) UMI counts were normalized with the SCTransform function using default settings; dimensionality reduction was performed using PCA, cells were clustered with the resolution set at 0.6, and results were visualized by t-SNE and UMAP. (3) Differentially expressed genes (DEGs) in clusters were determined by the FindAllMarkers function (Seurat 4.0.3) using a minimum upregulation of 0.25 log-fold; genes with uncorrected P values smaller than 0.01 and a log-fold change larger than 0.25 (log2FC > 0.25) in one group were regarded as DEGs between lineages. GO terms of DEGs in the biological process category were enriched using DAVID. Heat maps of the top DEGs between clusters were generated with DoHeatmap (Seurat 4.0.3). The overlapping DEGs between blastoids and blastocysts were then normalized and shown in heat maps generated by DoHeatmap (Figs. 3, 4). Basic information on the scRNA-seq profiles is available in Supplementary Table S2.

Integrated scRNA-seq analysis

The previously published single-cell dataset from Petropoulos et al. 7 was integrated with the day 6 blastoid dataset. Petropoulos's 1529 cells were filtered for blastocyst cells, removing the pre-blastocyst stages to leave 1096 E5-E7 EPI, TE, and PE cells. Petropoulos's data were processed using SCTransform. The dataset was integrated with the day 6 blastoid data after identifying integration genes (using FindIntegrationAnchors and IntegrateData) with 4000 anchor genes derived from the SCT assays. The integrated blastoid dataset has 4000 integrated genes. UMAP was used for dimensionality reduction.

Statistical analysis

Statistical analyses were performed with GraphPad Prism 8 software, using unpaired two-tailed Student's t-tests and one-way ANOVA. All statistical tests performed are indicated in the figure legends.
Data are presented as means ± SD, and P < 0.05 was regarded as significant. For cell numbers and gene expression, significant differences between two samples were analyzed with GraphPad Prism 8 software.
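As a companion to the analysis workflow described above, the following is a minimal Seurat (R) sketch of the day 6 blastoid clustering steps, using the quality-control cut-offs, SCTransform normalization, clustering resolution, and marker thresholds stated in the Methods. The input path, object names, and the choice of ten principal components are illustrative assumptions rather than details taken from the study, and exact outputs will depend on the Seurat and Cell Ranger versions used.

```r
# Minimal sketch of the described Seurat workflow (assumed input path and object names).
library(Seurat)

# Load the Cell Ranger feature-barcode matrix (path is a placeholder).
counts <- Read10X(data.dir = "blastoid_day6/filtered_feature_bc_matrix")
blastoid <- CreateSeuratObject(counts = counts, project = "EPS_blastoid_d6")

# Quality control: remove cells with <100 detected genes, >60,000 counts,
# or >20% mitochondrial reads, as stated in the Methods.
blastoid[["percent.mt"]] <- PercentageFeatureSet(blastoid, pattern = "^MT-")
blastoid <- subset(blastoid,
                   subset = nFeature_RNA > 100 &
                            nCount_RNA   < 60000 &
                            percent.mt   < 20)

# Normalization, dimensionality reduction, clustering (resolution 0.6) and embedding.
blastoid <- SCTransform(blastoid)
blastoid <- RunPCA(blastoid)
blastoid <- FindNeighbors(blastoid, dims = 1:10)   # number of PCs is an assumption
blastoid <- FindClusters(blastoid, resolution = 0.6)
blastoid <- RunUMAP(blastoid, dims = 1:10)

# Cluster markers with the thresholds given in the text (log2FC > 0.25, P < 0.01),
# followed by a heat map of the top genes per cluster.
markers <- FindAllMarkers(blastoid, only.pos = TRUE,
                          min.pct = 0.25, logfc.threshold = 0.25)
markers <- subset(markers, p_val < 0.01)
DoHeatmap(blastoid, features = head(markers$gene, 50))
```

Integration with a reference blastocyst dataset would proceed along similar lines via SCTransform-based FindIntegrationAnchors() and IntegrateData() with ~4000 anchor features, as outlined in the integrated scRNA-seq analysis section.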
Quality and equality: Verifying “My reading experience” questionnaire for a sustainable literacy education framework This paper reports one aspect of a larger project that set out to narrow the literacy gap among Malaysia’s rural and urban children in terms of their literacy achievement. Using Perak as a case-state, this overall project scrutinises why despite the Education Ministry being the biggest recipient of the recent national budget 2020, with an estimated allocation of RM64.1billion, there are still children especially in rural schools who are unable to master the ability to read and write, even in their own mother tongue language. Through this project’s on-going work which attempts to connect the theories of literacy with actual-on-the-ground issues of children’s reading experience especially in rural schools, important matters were flagged up. This paper will highlight these matters as they are uncovered vis-à-vis the verification of “My reading experience” questionnaire which was one of the main research tools that was used for this project. Mainly, these matters were located along three aspects of literacy i.e. context, definition and language as they relate to how the questionnaire was designed. This has important implications towards how a sustainable literacy education framework can be shaped. Introduction UNESCO's 17 Sustainable Development Goals (SDGs) is considered to be a roadmap for the global community to right some wrongs. In the last two to three decades of the 20th century, it became obvious that developed and developing nations were practising a consumeristic lifestyle that had begun to negatively impact earth and its community. The lifestyle was considered to be unsustainable and for that, world citizens were urged to return to basics. Among the 17 SDGs, the 4th SDG focuses on Quality Education. This goal is a continuous global aim at narrowing the gap between the haves and have-nots, having been brought forward from the Education For All (EFA) 1990 agreement in Jomtien and the succeeding efforts in Malaysia and the rest of the developing world. In this time, the narrowing of social gap is aimed at bringing about sustainable transformation through quality living and equality sharing of opportunities across world populations. In this light however, the rural-urban, socioeconomic and now, digital divide has resulted in the prediction of unequal life chances for children across the world. Although some measure of success was achieved in the final few decades of the 20 th century with respect to literacy and education gap especially for marginalised girls, the fact that the gap remains is in itself, worrying ( (Duncan & Murnane, 2015;Ministry of Education Malaysia, 2012). Therefore, we argue that in any education system of high quality, basic literacy must be considered the pre-requisite to any other cognitive and intellectual developmental process. Yet, this problem of unequal literacy attainment receives far less attention than concerns regarding the digital divide. The heavy emphasis on computer and computer-based infrastructure in schools is gravely misplaced because any effort at addressing its problems will be futile if the problem of marginalised children's basic literacy is not tackled. Therefore, it is critical that the nation's poorest and marginalised groups are empowered through overcoming low levels of literacy and reading practice. 
We argue that once we can understand the experience from seeing it through the children's perspectives, we may be better positioned to appropriate the vehicles of technology. Problem statement Yet, in the effort to arrive at the real situation in schools across the country, we found that there is a lack of a comprehensively designed survey tool which could gauge the reading experience and preferences of primary school children in the national and vernacular schools. Our search for literature regarding this continually showed up dated information that was no longer relevant (Abidin, Pour-Mohammadi, & Jesmin, 2011;Atan Long, 1984;Pandian, 2000;Small, 1996) or questionnaires that could not accommodate the various languages being used in Malaysian schools, both the vernacular and national. The complexities of the different languages will have important bearings to how the questionnaire will be represented. Social inequity in reading practice and attainment In Malaysia, this social inequity is clearly present. However of late, there is a tendency to address the social inequity as if it were exclusively a problem of digital equipment and broadband connectivity (Hong & Koh, 2002;Mohamed, Judi, Nor, & Yusof, 2012). As such, a major portion of the national budget is dedicated to equipping rural schools with the latest in technology (Ministry of Finance Malaysia, 2016, 2018 and the setting up of infrastructure for classrooms seems to be a yardstick to measure educational progress (Borneo Post Online, 2018). Yet, language educators continue to highlight that literacy in both English and Malay language is still underachieved for the bottom quartile of school students (Ong, Roselan, Anwardeen, & Mohd Mustapa, 2015;Yamaguchi & Deterding, 2016). Thus, current initiatives to narrow the digital divide will fail if fundamental problems associated with literacy and reading are not addressed. The main implication of this initiative is how children who enter primary schools are assumed to be school-ready, particularly in terms of their emergent literacy abilities. Yet, underprivileged children in Malaysian rural schools continuously lag behind in literacy achievements (Kaur, 2017;Ministry of Education Malaysia, 2008, 2012, 2015. In a research which set out to understand literacy in terms of materiality, Chong and Renganathan (2016) found that approximately half of a class of Primary 2 children in a rural school still struggled with basic literacy even in the language that is native to them. Like many national schools up and down the country, the issue of literacy in this particular school was largely tackled from the academic perspective. For example, LINUS 2.0 was, at that time, counted on as a means to address the problem of low levels of literacy. In these situations, remedial help is supposed to be given to those who struggle with decoding. Chong and Renganathan (2016) also noted another important finding in their study. That finding was related to how the children's sociocultural spaces were less explored and virtually untapped. Not only that, it seemed that the children's underprivileged backgrounds were a disadvantage that had to be accepted. As a result, these children were expected to learn to be literate despite their difficult backgrounds. 
Yet, it has become critical to understand the socioeconomic and sociocultural contexts of underprivileged, rural-school children who are at the cusp of learning literacy skills, so as to help them optimise their contexts and give them a fighting chance at doing school.

Theories of language and new literacies

In the area of language learning, the research focus is necessarily pedagogical. Therefore, central to research in this area are the 'technical', classroom aspects of language teaching, learning, motivation, performance and testing (Dornyei & Ushioda, 2009; Evans, 2013). In this line of thinking, teachers are assumed to be paramount in imparting knowledge to students, who are sometimes regarded as 'not knowledgeable' or even 'deficient'. As such, the responsibility for language learning lies squarely on the teaching and, to a lesser extent, on the larger sociocultural environment, which includes family background and community-based factors. In New Literacies, Kalantzis, Cope, Chan, and Dalley-Trim (2016) demonstrate that there is value in broadening the notion of literacy as being beyond the written word and beyond linguistic parameters: "We believe that we need to recalibrate our approaches to literacy teaching to align with contemporary conditions for meaning-making …in the wide range of social and cultural contexts in our daily lives" (Kalantzis et al., 2016, p. 73). This has implications for the way we design our data collection tool because, more often than not, considerations of sociocultural factors fail to be represented in the tool, especially when the quantitative paradigm underpins it.

Methodology and methods

Overall, a mixed methodological design was adopted for this study, in that the phenomenon of being literate was assumed to be somewhat measurable (i.e. via the assessment component of the questionnaire) whilst also giving liberty to cater for unexpected interpretations from the children and teachers. In Phase 1, a quantitatively informed paradigm underpinned the research design; the method of survey was used to collect broad, generalisable data. In Phase 2, a qualitatively informed paradigm underpinned the research design; the methods of in-depth interviews of participants, as well as site and field observations, were used. In this paper, the method of questionnaire utilised in Phase 1 will be discussed. Particularly, the verification of the questionnaire design will be examined. In terms of the sampling group, Perak serves as a suitable case-state because, out of a total of 1097 registered primary and secondary schools in Perak state, approximately 78% or 845 schools are primary schools while only 22% or 252 are secondary schools (Jabatan Pendidikan Negeri Perak, 2015). Out of the 845 primary schools in Perak, 75% or 636 schools are defined as rural schools while 25% or 209 schools are urban (Ministry of Education Malaysia, 2017). In contrast with some states (e.g. Penang and Selangor, where the rural-urban divide is very small), ¾ of Perak's schools are most likely to be disadvantaged. For this reason, Perak can become a useful case-state from which the findings can serve to address the rural-urban gap.

Considerations for questionnaire design

The researchers designed a survey questionnaire that captured information regarding the students' background and experience of reading. This paper will report findings drawn from an analysis of how the survey tool, i.e. the questionnaire, was developed, particularly in how the questionnaire's validity was considered.
The following sections delineate the three major considerations behind how the questionnaire was designed. It is important to note that we were guided by the theoretical constructs of literacy as a social practice in our deliberation of the questionnaire design.

Validity of questionnaire

The validity of a questionnaire is understood to be the extent to which a measurement actually measures what it aims to measure (Creswell, 2014). For example, if a questionnaire sets out to measure the construct of reading experience, the experts who design the questionnaire must be able to operationalise this construct via suitably worded items, a carefully constructed layout, and appropriate settings for the questionnaire. Particularly, the experts must be able to translate theoretical constructs into operational statements fit for the respondents (Bolarinwa, 2015). Thus, in the following sections, we discuss three considerations that we, as the experienced designers of the questionnaire, undertook to balance its validity, namely the context of literacy, the definition of literacy and the language of literacy.

Context of literacy

Before we venture further into a discussion about our idea of the context of literacy, it is important to begin by clarifying the credibility of the researchers. In this project, we as the main researchers bring to bear our vast experience and expertise in examining matters pertaining to literacy and reading. Our collective experience is drawn from our combined years of international academic training (PhD theses) in literacy and education, awards in the form of national and international grant projects, and publications in a wide array of journals, newspapers, books and book chapters (Chong, 2018, 2019; Janan & Wray, 2014; Lim, 2018a, 2018b, 2019; Renganathan & Kral, 2018; Wray & Janan, 2013). Based on our experience drawn from international understandings of literacy, we took into consideration the different contexts within which literacy questionnaires in other parts of the world have been designed. Particularly, our decision was guided by our previous research efforts in identifying well-designed questionnaires that focus on reading. From here, we narrowed down our selection to four sources for the following reasons. First, Malaysia remains the most important source, as the context is immediately relevant to our study. Within this context, the national survey carried out by the National Library, with published and publicly available findings, is still considered to be the most comprehensive (Small, 1996). Second, a recent survey report representing Singaporean teenagers' reading habits was considered to be current and potentially useful (Loh & Sun, 2018). Third, the United Kingdom's National Literacy Trust is known to have a long-standing record in reporting the literacy practices of UK students (C. Clark & Foster, 2005; C. Clark & Rumbold, 2006). Finally, a team of researchers in the United States also has a long-standing history of designing and verifying the Motivation for Reading Questionnaire (Wigfield, Guthrie, Tonks, & Perencevich, 2004). When comparing and contrasting these sources, we found some portions of each of them to be compatible and other aspects to be incompatible. Table 1 summarises each questionnaire's compatibility with our study's aim and the action we took to incorporate it into our questionnaire.
Among the elements considered for adoption were the stable questionnaire items on the importance of reading and the reader's attitude towards reading. From these considerations, the constructs that make up the Malaysian child's reading experience were decided to be the following: their self-reported reading ability, the importance of reading, their parents'/family's support for their reading, their school's/public support for reading, and their access to multimedia (e.g. mobile phone, computer).

Definition of literacy

As previously mentioned, our project was conceived based on the theoretical understanding that reading is more than decoding a language. Thus, we adopted the sociocultural perspective on literacy practice (Gee, 2008; The New London Group, 2000). However, we were also cognisant that the term 'literacy' is not widely used in the Malaysian context. Reading is still understood by the masses as making sense of linguistic systems. For this reason, we retained the word 'reading' and used that as the base for its translation into Malay, Mandarin and Tamil.

Language of literacy

In terms of the language of literacy, we acknowledged the multilingual nature of the student respondents in this research. This meant that the language of literacy, both of the questionnaire and of other related reading materials, was critically understood to be represented via multiple linguistic systems. This brought up the realisation that although we had to consider how some of the students could be struggling with reading and writing in a fundamental way, there could also be students who were highly literate in one language but poorly literate in another. This thinking was underscored by our application of the theories of literacy as a social practice, because we acknowledged that these students brought different social and cultural practices from home to school (Heath, 2012; Street, 1995). The base questionnaire was initially designed in English because the original sources of the other questionnaires were in English. However, a fair amount of effort was spent in translating and testing the questionnaire items in Malay, Mandarin and Tamil. Questionnaires in Mandarin and Tamil were also provided with their equivalent Malay translations. Besides adhering to the Ministry of Education's requirement of using bilingual sentences where the original was not in Malay, we were also interested in accommodating students who may be differently proficient in two languages. This would provide a sense of equality for the different backgrounds Malaysian students come from. Related to language is also the consideration of the appropriate use of symbols. This affected the questionnaire in two ways. First, we considered respondents who may struggle with reading long sentences. Thus, in reducing them, we used emojis (a scale of four faces denoting 'very happy' to 'very sad') to represent the Likert scale so that the respondents could access meaning through non-alphabetic/non-linguistic means. Second, where Romanised script was used for the Malay questionnaires, the UD Digi Kyokasho NK-R font was used. This was to ensure that the letter 'a' used in the questionnaire was consistent with the letter 'a' that the students had learnt in handwritten form, which this font provides.

Closing remarks

The way in which our questionnaire was verified points to how our research project contributes to quality education through accounting for the challenges our respondents may face in their participation in our research.
Indeed, the fundamental consideration of any research must be located in the way the respondents are anticipated to lend their perspective.
Comparison of Outcome Measures for Traditional and Online Support Groups for Breast Cancer Patients: An Integrative Literature Review.

Despite widespread use of support groups in the breast cancer patient population, there are heterogeneous outcome measurements and inconsistencies in their perceived benefits. The purpose of this integrative literature review is to compare the efficacies of traditional and online support groups for breast cancer survivors through analysis of outcome measurements and determination of strengths and weaknesses. After examining the literature, it was found that online support groups are ideal for women who require additional support or who are unable to attend a traditional group. Alternatively, traditional support groups allow for discussion and support tailored to specific cultures and are especially beneficial when a breast cancer survivor is included in the process. These findings suggest that because both traditional and online support groups have unique roles in the psychosocial support of female breast cancer survivors, individual preferences and needs should be considered when determining which support groups will be beneficial.

Despite the effectiveness of current treatments, there are still many challenges, side effects, and survivorship issues related to the disease processes and treatments, including body image concerns, psychological distress, psychosocial functioning, quality of life, sexuality, financial concerns, and fatigue (Miller, 2008; Sadler-Gerhardt, Reynolds, Britton, & Kruse, 2010; Shannonhouse et al., 2014). These side effects are severe and can have long-lasting consequences. A 5-year study on breast cancer patients suggested that nearly half of the women demonstrated signs of depression, anxiety, or both 1 year after diagnosis, and 25% experienced depression, anxiety, or both up to 4 years after diagnosis (Burgess et al., 2005). Additional studies showed that 30% to 50% of breast cancer patients present with increased fatigue, which persists for up to 5 years after treatment (Björneklett et al., 2012a; Minton & Stone, 2008). These unwanted effects warrant a need to address the unique concerns of breast cancer survivors.

SOCIAL SUPPORT GROUPS

One way to address the needs of these survivors is through the use of a social support system. The concept of social support is defined as a collaborative exchange of information, emotions, and practical advice between donors and recipients (Bender, Katz, Ferris, & Jadad, 2013). A tangible application of this concept is face-to-face or in-person support groups, which will be defined as traditional support groups throughout this literature review. Traditional support groups emerged in the cancer patient population in the 1970s (Klemm et al., 2003) and have been continually used to improve the health and well-being of individuals with breast cancer over the past several decades (Bender et al., 2013). In recent years, increased access to and popularity of the internet have led to the utilization of online social support for breast cancer survivors (Van Uden-Kraan et al., 2008). Like traditional support groups, online support groups are designed to provide an environment in which individuals can share experiences and exchange information, advice, and support (Griffiths, Calear, & Banfield, 2009). Online support groups have been praised because of their convenience, anonymity, and affordability (Klemm et al., 2003; Lepore et al., 2014).
However, despite sufficient research on the role of online support groups in the breast cancer patient population, there are heterogeneous outcome measurements and mixed evidence of their efficacy (Lepore et al., 2014). In addition to the inconclusiveness of the efficacy of online support groups, there are few articles that compare the effectiveness of traditional and online support groups for this population. The inconsistencies in the perceived benefits and efficacy of online support groups as compared with traditional support groups warrant more research. This integrative literature review analyzes both traditional and online support groups for breast cancer patients. It also demonstrates a preferred avenue of support for breast cancer patients, which could promote improved quality of life and overall health in this population. Therefore, the purpose of this integrative literature review is to compare the efficacies of traditional and online support groups for breast cancer survivors through analysis of outcome measurements. From this comparison, strengths and weaknesses of both online and traditional groups are determined. The following research questions were created to guide this study:

1. What are the overall outcomes of online support groups as compared with traditional support groups for breast cancer survivors?

2. Is one type of support group better suited for breast cancer survivors, or a subset of this population, than another?

CONCEPTUAL FRAMEWORK

The Social Network Theory (SNT) serves as the conceptual framework for this article. The theory proposes that social interactions among individuals generate heterogeneous relationships with varying levels of supportiveness (Pierce, Sarason, & Sarason, 1991). The fundamental concept of the SNT is the network, which is defined as a set of individuals and a set of common ties that connect these individuals (Daly, 2010). Unlike a simple relational orientation, the SNT considers the incorporation of individuals in a web of relationships and the impact these relationships can have on a given individual's opportunities (Daly, 2010). An important aspect of this theory is that it does not treat interactions between individuals in isolation; rather, it considers the pathways through which information flows and the indirect effects of interactions (Daly, 2010). According to the SNT, both the information pathway and the effects of interactions form a structure, which is occupied by a particular individual. This structure determines the opportunities and obstacles an individual encounters, thus affecting the outcome of that individual's experience in a certain network. Because the SNT accounts for both the structure and properties of a given network, it provides a framework for analyzing the outcomes of traditional and online support groups. The SNT captures the multidimensional relationships among individuals in both traditional and online support group environments. The theory advocates for individuals to make appropriate and effective use of social support by engaging them in the identification of potential social support groups (Kang'ethe, 2011). The SNT identifies potential support groups by contextualizing the structure, relationships, and outcomes of different support groups (Daly, 2010).
By providing a context for how individuals interact within a given environment, the SNT offers a better understanding of the outcomes of breast cancer survivors' interactions in traditional and online support networks. Additionally, use of the SNT to understand the social interactions that connect individuals to others allows for evaluation and consideration of the social capital of traditional and online groups and the individual members who comprise them (Kang'ethe, 2011).

Design

An integrative literature review is a method that analyzes and synthesizes literature to provide a comprehensive understanding of a particular phenomenon or problem (Whittemore & Knafl, 2005). Therefore, this methodology was used to compare the outcomes of online and traditional support groups for breast cancer patients, because there are discrepancies in the perceived benefits of both groups due to heterogeneous outcome measures. Due to the varied outcome measures and the lack of an integrative review on this topic, the benefits and disadvantages of both online and traditional support groups for breast cancer patients are the focus of analysis and synthesis in this integrative literature review.

Literature Search Strategies

Three computerized databases were searched in this review: CINAHL, PsychInfo, and PubMed. The terms "breast neoplasm" and "support groups" were used to search both the CINAHL and PsychInfo databases. The terms "breast neoplasm" and "self-help groups" were used to search the PubMed database. Additionally, "psychosocial support" was used as a search term in all three databases. After initial searches using these terms, inclusion criteria to select only articles with the "support group" major headings were applied in the CINAHL and PsychInfo databases. Inclusion criteria were used in PubMed to select articles with the MeSH terms "breast neoplasm" and "self-help groups." Among these articles, only peer-reviewed journals were chosen. Search criteria were then limited to articles that were published after 2005. Relevant abstracts were then chosen from each database, and duplicate articles were removed. Finally, articles were read and further examined to determine whether they addressed outcome measures for online breast cancer support groups, traditional breast cancer support groups, or both. Articles that met these criteria were used for this integrative review. A review of the literature was performed, and articles were gathered based on the selection criteria (Figure).

Data Analysis

The five-step integrative method by Whittemore and Knafl (2005) was used in this literature review. The steps of this method include: (1) problem identification; (2) literature search with inclusion and exclusion criteria; (3) data evaluation; (4) data analysis through extraction and reduction; and (5) presentation (Whittemore & Knafl, 2005). First, research questions were defined to guide this study and facilitate data extraction from primary sources. Next, appropriate primary sources were identified in the literature search step using three databases and several specific key terms. Data were then evaluated using methods and criteria to ensure they were authentic and appropriate. Following evaluation, data were analyzed to interpret the effectiveness of different cancer support groups and identify specific themes and patterns. Finally, conclusions were drawn from data analysis, and results were displayed in two tables (Tables 1 and 2).
Additionally, the data integration process used to identify primary sources is displayed in the Figure.

RESULTS

Articles used in this study are grouped according to program design (online or traditional) and inclusion criteria (qualitative or quantitative). Qualitative analyses were performed in two of the online support group articles; quantitative analysis, in one; and mixed analyses, in the remaining five articles. For traditional support group articles, three articles used qualitative analyses, nine used quantitative analysis, and two used mixed analysis. The research questions that guided this review were answered to compare overall outcomes and determine unique group features that might benefit a specific population of breast cancer survivors. It was found that online groups allow for user anonymity, flexibility, and low commitment, making them beneficial for women who require additional support or are unable to attend a traditional support group due to geographic or time constraints. Traditional groups have proven effective because they can provide culturally competent and linguistically appropriate support tailored to specific communities of breast cancer survivors. Summary findings of the articles analyzed in this review are found in Tables 1 and 2.

Figure. Process of inclusion and exclusion of studies used in this integrative review.

DISCUSSION

Due to their unique features, online and traditional support groups can offer important resources for women with breast cancer. It has been demonstrated that both types of support groups can positively impact well-being and decrease anxiety levels (Cameron, Booth, Schlatter, Ziginskas, & Harman, 2007; Lieberman & Goldstein, 2006). However, each group has inherent strengths and weaknesses that impact its effectiveness in different populations of breast cancer survivors. Online support groups might be useful for women who require supplemental support, but these groups do not necessarily compensate for a lack of support from relatives or deteriorated health status. Traditional groups can be used to provide culturally competent and linguistically appropriate support for women but might not be helpful for women who are physically unable to attend a group (Ashing-Giwa et al., 2012; Kwok & Ho, 2011). As predicted by the SNT, because both online and traditional groups have inherently different structures and information pathways, there are distinct social interactions and unique relationships formed between individuals in each group. This results in different outcomes due to an individual's experience in a certain social environment and provides perspective on member interactions and expression in these separate domains. This framework provides an understanding of how the social environment and interactions impact the outcomes of both types of support groups throughout this literature review.

OUTCOMES OF ONLINE AND TRADITIONAL SUPPORT GROUPS

The first objective of this review was to compare the overall outcomes of online and traditional support groups. As mentioned previously, it is necessary to consider the information pathway and social interactions to understand and compare group outcomes. Studies on both online and traditional groups focused on emotional exchanges and individual interactions to measure their effectiveness.
Five studies focused specifically on expression of emotions in online groups and the impact a professional leader has on outcomes (Lepore et al., 2014;Lieberman, Golant, Winzelberg, McTavish, & Gustafson, 2005;Lieberman & Goldstein, 2006;Namkoong et al., 2013;Shaw, Hawkins, McTavish, Pingree, & Gustafson, 2006). As demonstrated by Namkoong et al. (2013), participating in supportive exchanges with others in online support groups leads to a positive effect on perceived bonding between individuals, which can improve coping strategies. In contrast, additional findings propose that lack of both supportive exchange and communication about an individual's cancer experience in online groups may serve as a negative stressor on the body adversely affecting physical wellbeing (Shaw et al., 2006). In addition to the negative impact lack of communication has on an individual, it has been demonstrated that professionally led online groups show decreased psychological well-being when there are frequent expressions of anxiety, depression, and hostility as well as fewer positive emotions than self-led online groups (Lieberman et al., 2004). According to a similar study, self-led groups demonstrated more self-effacing emotions of sadness and anger, which resulted in improved psychosocial well-being for individuals (Lieberman & Goldstein, 2006). Professionally led online groups that have an involved group leader may increase anxiety and unintentionally lead participants to be acutely selfaware in their responses and hold back feelings in fear of upsetting others (Lepore et al., 2014). In contrast, self-directed groups demonstrate more emotional expression than groups with a leader, suggesting that participants had fewer concerns about burdening others with their cancer-related concerns due to the unique chance to talk freely with empathic others in an online self-help group (Lepore et al., 2014). Based on these findings, selfled online support groups might prove beneficial to women with breast cancer because they encourage free exchange of feelings and emotional disclosure among participants. Professionally led online groups, however, may provide a more constructive forum that could allow for individuals to engage in more directed, therapeutic conversations with a trained leader. Similar to with online groups, emotional exchange of experiences between participants is important to the success of traditional groups (Pinheiro, da Silva, Mamede, & Fernandes, 2008). As mentioned previously, studies of online support groups with an involved facilitator lead to more frequent negative outcomes for individuals (Lepore et al., 2014;Lieberman et al, 2004;Lieberman & Goldstein, 2006). Unlike online groups, however, an active facilitator who is also a cancer survivor in traditional groups has been shown to empower participants because of the mutual identification this individual provides (Power & Hegarty, 2010). Having a breast cancer survivor as a facilitator in traditional groups has been shown to be instrumental in providing participants with the necessary skills needed to cope with the daily problems associated with a breast cancer diagnosis (Power & Hegarty, 2010). In addition to considering information pathways and social interactions among group participants, it is necessary to consider the type of data analysis performed to determine outcomes. Literature on traditional groups throughout this review includes a variety of qualitative, quanti-tative, and mixed analyses. 
Most studies of online support groups, though, use mixed qualitative and quantitative analysis, which provides multiple avenues through which the success of a group can be measured. A majority of these studies use diagnostic software to perform a thematic analysis on group discussions in conjunction with quantitative questionnaires. Software analysis often allows for extrapolation of common themes in online discussions and can provide an integral understanding of the type of support provided. Although most online studies used a mixed approach, one study utilized solely a quantitative approach to measure emotional coping, emotional well-being, and depression. Like many studies that utilize only a quantitative approach, this study did not find any correlation between improvement in depression or emotional well-being after use of an online support group (Batenburg & Das, 2014). This finding suggests that other factors outside of the online environment may affect outcome measures and therefore may confound some of the data. For example, upon further analysis of data, Batenburg and Das (2014) found that patients who received support from family and friends reported a higher well-being than those without support. Additionally, a study on traditional support groups did not produce any statistically significant data; however, after the intervention was complete, women were interviewed, and those who participated in the group psychotherapy stated they had learned more to express themselves and exhibited increases in the use of emotion-regulation strategies and perceived control after the intervention (Cameron et al., 2007). Future studies should consider utilizing a mixture of qualitative and quantitative measures for data analysis. Although quantitative data can provide measurable data and include covariates that affect measurement outcomes, they can also fail to delve into more informative data, which could be obtained using a combination of both qualitative and quantitative studies. Most studies resulted in weak trends or nonsignificant data. However, two studies demonstrated significant, positive change in participants after the use of a traditional support group. Capozzo and colleagues (2010) demonstrated an improvement in mental adaptation and significant reduction in anxious preoccupation in participants, and Vos and colleagues (2007) measured a positive change for body image and recreation. These significant data may be due to a longer time frame of intervention (6 weeks and 3 months) and the smaller sample sizes of these groups (n = 28 and 67) compared with the other studies. A longer time frame for these two groups would allow participants to acclimate to the group, and a smaller sample size would result in less variation between different support groups within the study. The other four studies that did not produce any significant data included shorter time frames (7 days to 5 weeks) and larger sample sizes (165-382 participants), allowing for more variation between individual groups within a study. Despite the many obstacles produced from analysis of short-term traditional support group interventions, further study of these groups may illustrate how a shortterm intervention might benefit individuals during a difficult period, such as when patients are newly diagnosed or receive adjuvant chemotherapy or radiotherapy. 
Methodologic flaws in studies can confound the actual benefits of support group participation on quality of life as it relates to different types of breast cancer survivors (Michalec, 2006). Therefore, it is important to consider covariates and demographics when analyzing or comparing data. For example, one study found that being African American had a significantly negative effect on social quality of life (Michalec, 2006). Additionally, the mean age of women who participate in online support groups should be considered, because a lack of computer skills in middle-aged and older adult populations could account for discontent with online support groups. Finally, it is important to consider different factors and social determinants when evaluating data and suggesting support groups for specific populations, because they affect how an individual perceives support. UNIQUE GROUP FEATURES TO DETERMINE USER PREFERENCE The second objective of this review was to determine whether one type of support group is better suited than another for breast cancer survivors or for a subset of this population. As previously mentioned, support groups have unique features that allow them to provide specialized care and resources for breast cancer survivors. Because online groups are accessible anywhere there is an internet connection, they can work as a supplement to traditional face-to-face groups and can provide support for those who wish to remain anonymous or are unable to attend an in-person meeting (Bender et al., 2013). Due to the lack of physical and time constraints in online groups, exchanges between individuals can lead to more fluid, less restrictive conversations than in traditional support groups, which can lead to higher levels of emotional support from others and fewer breast cancer-related concerns (Kim et al., 2012). However, despite their flexibility and usefulness, users of online support groups have also been shown to lack emotional connections and to experience more misunderstandings due to the lack of face-to-face contact or poor computer skills (Bender et al., 2013). Because traditional support groups involve face-to-face communication and are often community based, they allow for discussion and support tailored to specific cultures, which is not always feasible with an online group (Ashing-Giwa et al., 2012; Kwok & Ho, 2011). Ashing-Giwa et al. (2012) found that African American breast cancer survivors preferred a culturally sensitive forum that responds to their unique psychosocial, spiritual, physical, and informational needs. Peer-based groups facilitated by an African American breast cancer survivor allow members to be more comfortable by relating to other members with similar cancers and cultural experiences (Ashing-Giwa et al., 2012). Two other studies found that providing culturally competent support and resources for Chinese breast cancer survivors resulted in improved access to information and a sense of interconnectedness among individuals (Kwok & Ho, 2011; Chan et al., 2006). In contrast, Cousson-Gelie, Bruchon-Schweitzer, Atzeni, and Houede (2011) found that few breast cancer survivors in France wished to attend a psychological support group. The few who did benefit were the most vulnerable, with worse initial quality of emotional life and smaller social networks, suggesting that group therapy may not be well accepted by a majority of patients in this setting (Cousson-Gelie et al., 2011).
In addition to considering structure, information pathway, and interactions in a group, it is necessary to consider culture, beliefs, and support systems when providing supportive resources for breast cancer survivors. STUDY LIMITATIONS Although this review provides insight into outcomes of breast cancer support groups, there are several limitations to consider. To begin, because only psychosocial studies were chosen for this review, a selection bias may exist. In addition, although this review exclusively included studies on breast cancer in women, it did not consider the stage or progression of different breast cancers during analysis. This is important because disease progression could affect outcome measurements and create inconsistencies when comparing studies. Another limitation is the heterogeneity among support group types. Although the 22 different studies all focused on psychosocial support, each support group is unique and therefore inherently difficult to compare with other support groups. A final limitation of this review is the lack of studies that consider both online and traditional support groups. Although studies exist that compare online and traditional groups, the search criteria used in this review did not capture any of them. IMPLICATIONS FOR NURSING PRACTICE These findings underscore the importance of considering individual differences in dealing with illness when examining health outcomes of support communities. No support community provides a blanket solution, because not all support groups meet the potential needs of all breast cancer survivors (Bender et al., 2013). Implications for nursing practice include considering the individual's wants and needs when recommending a breast cancer support group. An individual's schedule, literacy, and access to the internet and transportation should be assessed when recommending a group. An individual's psychosocial needs should also be considered to determine which environment would best support the individual. In addition to accounting for individualized needs when recommending support groups, providers should consider the timing of breast cancer support group recommendations. Introducing individuals to a support group at the time of diagnosis could allow them to obtain support almost immediately or to access a group at their own pace. Reinforcing their options throughout the treatment and survivorship phases could also optimize an individual's support throughout their diagnosis. These recommendations can be applied more broadly to nursing practice when counseling patients on other treatment options or health care decisions. Additionally, the recommendations from this review stress that advanced practice nurses and other advanced practice providers must be active listeners and culturally competent care providers to address the needs of their patient population. CONCLUSION This review compared the outcomes of both online and traditional breast cancer support groups through an integrative analysis of articles. Because both traditional and online support groups have unique roles in the psychosocial support of female breast cancer survivors, individual preferences and needs should be considered when determining which support groups will be beneficial. Advanced practitioners should invest in future studies focusing on online support, given improvements in online access and internet knowledge over the past several years.
Additionally, studies should focus on how online support can best be used to help individuals from different cultures and globally underserved communities.
2018-08-01T20:48:44.219Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "29f293cfbd4ff48ebb748ecdc22171fbb3872962", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "abf96ca96a9dc678dbb7966997f6b8749b6af9f0", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
38474215
pes2o/s2orc
v3-fos-license
Activated Protein C Mutant with Minimal Anticoagulant Activity, Normal Cytoprotective Activity, and Preservation of Thrombin Activable Fibrinolysis Inhibitor-dependent Cytoprotective Functions* Activated protein C (APC) reduces mortality in severe sepsis patients and exhibits beneficial effects in multiple animal injury models. APC anticoagulant activity involves inactivation of factors Va and VIIIa, whereas APC cytoprotective activities involve the endothelial protein C receptor and protease-activated receptor-1 (PAR-1). The relative importance of the anticoagulant activity of APC versus the direct cytoprotective effects of APC on cells for the in vivo benefits is unclear. To distinguish cytoprotective from the anticoagulant activities of APC, a protease domain mutant, 5A-APC (RR229/230AA and KKK191-193AAA), was made and compared with recombinant wild-type (rwt)-APC. This mutant had minimal anticoagulant activity but normal cytoprotective activities that were dependent on endothelial protein C receptor and protease-activated receptor-1. Whereas anticoagulantly active rwt-APC inhibited secondary-extended thrombin generation and concomitant thrombin-dependent activation of thrombin activable fibrinolysis inhibitor (TAFI) in plasma, secondary-extended thrombin generation and the activation of TAFI were essentially unopposed by 5A-APC due to its low anticoagulant activity. Compared with rwt-APC, 5A-APC had minimal profibrinolytic activity and preserved TAFI-mediated anti-inflammatory carboxypeptidase activities toward bradykinin and presumably toward the anaphylatoxins, C3a and C5a, which are well-known pathological mediators in sepsis. Thus, genetic engineering can selectively alter the multiple activities of APC and provide APC mutants that retain the beneficial cytoprotective effects of APC while diminishing bleeding risk due to reduction in APC's anticoagulant and APC-dependent profibrinolytic activities. The clinical success of APC in reducing mortality in severe sepsis patients (PROWESS trial) gave impetus to new research on the direct effects of APC on cells, collectively referred to as "the protein C cytoprotective pathway" (17,18). The direct effects of APC on cells, here termed "APC's cytoprotective effects," require the APC receptors endothelial protein C receptor (EPCR) and protease-activated receptor 1 (PAR-1) and include: 1) alteration of gene expression profiles; 2) anti-inflammatory activities; 3) anti-apoptotic activity; and 4) endothelial barrier stabilization. Although potentially mechanistically related and involving shared molecular pathways, each of these activities of APC is distinct in its anticipated contribution to physiological beneficial effects. The relative importance of these direct APC cytoprotective effects on cells versus the anticoagulant activity of APC for the in vivo benefits of APC remains unclear. To distinguish the cytoprotective from the anticoagulant activities of APC, protease domain mutants (229/230-APC (RR229/230AA) and 3K3A-APC (KKK191-193AAA)) were generated with selectively diminished anticoagulant activity without affecting the normal cytoprotective activities of APC (19). In vivo studies show that the heterologous murine APC mutants, RR230/231AA-APC and KKK192-194AAA-APC, prevent endotoxemia-induced death in mice (20). These observations support the notion that the cytoprotective effects of APC are beneficial in vivo and that the beneficial effects of APC are, at least in part, independent of APC anticoagulant activity (19,20).
Despite the efficacy of these APC mutants in reducing endotoxin-induced mortality, a possible contribution of anticoagulant activity to the observed effects could not be negated, as these mutants retained residual anticoagulant activity (human mutants, 5-15%; mouse mutants, 25-35%) (19,20). The current study characterizes a new APC mutant, designated 5A-APC, with almost no anticoagulant activity (<0.1% factor Va inactivation activity compared with rwt-APC) that retains normal cytoprotective activity on cells. This new APC mutant shows markedly reduced residual anticoagulant properties compared with the related APC mutants 229/230-APC and 3K3A-APC in multiple assays that are sensitive to secondary-extended thrombin generation and to the implications thereof. This new 5A-APC mutant may be useful in studies to elucidate the relative contributions of APC anticoagulant versus cytoprotective activities to the beneficial effects of APC in various settings. Furthermore, this new APC mutant may improve therapies in settings where the cytoprotective actions of APC are most beneficial while its anticoagulant action adversely increases bleeding risk. Mutagenesis, Expression, and Purification of Recombinant Protein Cs-To construct the 5A-protein C mutant (containing alanine substitutions at Lys-191, Lys-192, Lys-193, Arg-229, and Arg-230), the cDNA of wild-type protein C in pcDNA3.1(+)neo (Invitrogen) was used as template, and substitutions were introduced using QuikChange mutagenesis (Stratagene) as described previously (21). Sequencing of the protein C coding region confirmed the accuracy of the mutagenesis. Protein C was purified from serum-free conditioned media derived from stably transfected HEK-293 cells by two passes on fast flow Q-Sepharose (Amersham Biosciences) using CaCl2 and NaCl elution as described previously (22). Protein C concentrations were estimated by absorbance using an extinction coefficient of 14.5 (280 nm, 1%, 1 cm). The 3K3A-APC mutant was prepared as described previously (19,23). Activation of Protein C and Catalytic Activity against Small Substrates-Purified protein C was activated by thrombin (1/50, w/w) in the presence of 2 mM EDTA to maximal activity (2.5-3 h, 37°C), followed by the addition of hirudin (Sigma) to inactivate the thrombin and fast flow Q-Sepharose chromatography to remove thrombin (24). Residual thrombin, as determined by fibrin clotting, was undetectable and accounted for <0.00025% (moles of thrombin/mole of APC) of the protein. Factor Va Inactivation-Recombinant factor Va mutants with APC cleavage sites modified at Arg-306 (R306Q mutant) or Arg-506 (R506Q mutant) were prepared in the R679Q/S2183A B-domainless factor V background (30). The time course of APC-mediated inactivation of factor Va was determined by following factor Va cofactor function in prothrombinase assays as described previously (30). Analysis of factor Va proteolytic fragments generated by APC was performed by incubating APC with factor Va bound to immobilized phospholipids. In brief, high-binding microtiter plates (Nunc Maxisorp) were coated with 100 μM PC/PS/PE phospholipid vesicles in TBS (50 mM Tris, 150 mM NaCl, pH 7.4) and blocked with Tris-buffered saline/0.5% gelatin. Wells were incubated for 10 min with 10 nM factor Va (Haemtech) in Hepes-buffered saline/0.1% BSA/5 mM CaCl2, and APC was added after unbound factor Va was removed.
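The absorbance-based concentration estimate mentioned above follows Beer's law. A minimal sketch (Python), assuming a 1-cm path length and the stated extinction coefficient of 14.5 for a 1% (w/v, i.e., 10 mg/ml) solution at 280 nm; the A280 reading in the example is hypothetical, not a value from the paper.

```python
# Protein concentration from A280 using Beer's law, c = A / (epsilon * l).
# E(1%, 1 cm, 280 nm) = 14.5 means a 10 mg/ml solution reads A280 = 14.5.
E_1PERCENT = 14.5   # absorbance of a 1% (w/v) solution over a 1-cm path
PATH_CM = 1.0

def protein_c_mg_per_ml(a280: float, dilution: float = 1.0) -> float:
    """Concentration in mg/ml from a background-corrected A280 reading."""
    return a280 * dilution / (E_1PERCENT * PATH_CM) * 10.0  # *10: % (w/v) -> mg/ml

print(protein_c_mg_per_ml(0.73))  # hypothetical reading -> ~0.50 mg/ml
```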
Reactions were terminated by the addition of reducing SDS-PAGE sample buffer and analyzed by Western blot using the AHV5146 monoclonal antibody against factor Va (Haemtech), with its epitope located in the 307-506 fragment. Clot Lysis Assay-Clot lysis was studied in a plasma system of thrombin-induced clot formation and tissue-type plasminogen activator (tPA)-mediated fibrinolysis as described previously (28). The change in turbidity (405 nm) at 37°C was measured (Thermomax, Molecular Devices) in 50% normal pooled plasma (v/v) in the presence of APC, 10 nM thrombin, 10 μM PC/PS/PE phospholipid vesicles, 17 mM CaCl2, and 30 units/ml tPA (Chromogenix). The clot lysis time was defined as the time to reach a half-maximal decrease in turbidity. Carboxypeptidase inhibitor (CPI) from potato tubers (Calbiochem) was used at 20 μg/ml to inhibit TAFIa. APC Cytoprotective Activity Assays. Anti-inflammatory Activity Assays-APC anti-inflammatory activity was determined as the inhibition by APC of cytokine release from lipopolysaccharide (LPS)-stimulated monocytes. Typically, U937 cells (5 × 10^5/well) were challenged with 25 ng/ml LPS (serotype 055:B5, Sigma) in the presence of APC. After 18 h, secretion of tumor necrosis factor (TNFα) or interleukin 6 into the media was detected by enzyme-linked immunosorbent assay (Invitrogen). Modification of Gene Expression-APC-mediated alteration of gene expression was determined as the inhibition of TNFα-induced p53 expression. Endothelial cells (EA.hy926) were incubated with TNFα (2 nM; Sigma) for 12 h, followed by incubation with APC (20 nM) or thrombin (20 nM) for 90 min. Total RNA was isolated (RNeasy, Qiagen), and mRNA levels for p53 and β-actin were determined by semiquantitative reverse transcription-PCR (Superscript III, Invitrogen) as described previously (32,33). Endothelial Barrier Protection-Permeability of the endothelial cell barrier was determined as described, with minor modifications (34,35). Briefly, endothelial cells (EA.hy926, 5 × 10^4 cells/well) were grown on polycarbonate membrane Transwells (Costar, 3-μm pore size, 12-mm diameter). Upon reaching confluency, cells were incubated with APC (50 nM). After 4 h, the media in the inner chamber was replaced with serum-free media containing 4% BSA (fatty acid-poor and endotoxin-free fraction V, Calbiochem) and 0.67 mg/ml Evans blue in the absence (control) and presence of thrombin (20 nM) to induce endothelial permeability. Changes in thrombin-induced endothelial cell permeability were determined by following the increase in absorbance at 650 nm in the outer chamber over time due to the transmigration of Evans blue-BSA complexes. Percent permeability is expressed as the change in absorbance after 30 min relative to that in the absence of cells (defined as 100%) and in the absence of Evans blue (defined as 0%). Inactivation of BK-Inactivation of BK in plasma was analyzed using a combination of plasma filtration and high-performance liquid chromatography (HPLC). Normal pooled plasma was supplemented with 0.5 mM lisinopril (Sigma) and 1.48 μM corn trypsin inhibitor. Various concentrations of APC plus 17 mM CaCl2, 10 μM phospholipid vesicles (PC/PS/PE, 40/20/40), 4 pM tissue factor, 100 μM bradykinin (Sigma), 1 nM thrombomodulin (American Diagnostica), and 50% (v/v) normal pooled plasma in Hepes-buffered saline/0.1% BSA were mixed.
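Because clot lysis time is defined above as the time to reach a half-maximal decrease in turbidity, it can be read off a turbidity trace by simple interpolation. A minimal sketch (Python/NumPy) of that readout applied to a synthetic A405 trace; the data and the clot_lysis_time helper are illustrative assumptions, not the assay software used in the study.

```python
import numpy as np

def clot_lysis_time(t, a405):
    """Time at which turbidity falls halfway from its peak to its final plateau."""
    t, a405 = np.asarray(t, float), np.asarray(a405, float)
    i_peak = int(np.argmax(a405))           # fully formed clot (maximal turbidity)
    half = (a405[i_peak] + a405[-1]) / 2.0  # half-maximal decrease
    decay_t, decay_a = t[i_peak:], a405[i_peak:]
    j = int(np.argmax(decay_a <= half))     # first sample at/below half-max
    # Linear interpolation between the bracketing samples.
    t0, t1, a0, a1 = decay_t[j - 1], decay_t[j], decay_a[j - 1], decay_a[j]
    return t0 + (a0 - half) / (a0 - a1) * (t1 - t0)

t = np.arange(0, 120, 2.0)                      # minutes (hypothetical)
a = 0.1 + 0.9 / (1 + np.exp((t - 60) / 6.0))    # sigmoidal lysis curve
print(round(clot_lysis_time(t, a), 1))          # ~60.0 min
```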
At the indicated times, reactions were halted by addition of 1/8 (v/v) 160 mM EDTA, 10 mM benzamidine, and 800 μM Plummer's inhibitor (DL-2-mercaptomethyl-3-guanidinoethylthiopropanoic acid, Calbiochem); clots were removed, and the remaining supernatant (200 μl) was filtered (Microcon Ultracel YM-10, Millipore). The filtrate (100 μl), mixed with 20 μl of 1.2 M perchloric acid, was applied for HPLC analysis of BK and des-Arg9-BK. Separation was performed on an HP 1100 series HPLC system using a Deltapak C-18 reversed-phase column (3.9 × 150 mm) of 5-μm particle size (Waters) and a linear gradient of 0-67% acetonitrile (v/v) in 0.1% (v/v) trifluoroacetic acid in deionized water. A flow rate of 1 ml/min was maintained at ambient temperature, and products were detected at 214 nm. Design of an APC Mutant with Minimal Anticoagulant Activity-The anticoagulant activity of APC involves limited, specific cleavages of factor VIIIa and, more importantly, factor Va. A positively charged surface on the protease domain of APC is required for normal interactions of APC with factor Va (21,23,36,37). This extended exosite is generally located in an area similar to the anion-binding exosite I of thrombin and includes loop 37 (protein C residues 190-193, equivalent to chymotrypsin residues 36-39), the Ca2+-binding loop (residues 225-235, chymotrypsin 70-80), and the autolysis loop (residues 301-316, chymotrypsin 142-153). In contrast, APC's cytoprotective activities, which depend on cleavage by APC at Arg-41 in the N-terminal tail of PAR-1, are assumed to involve different APC exosite requirements than cleavage at Arg-506 in factor Va. Mutation of the basic residues in loop 37 (KKK191-193AAA) and the Ca2+-binding loop (RR229/230AA) of the protease domain of APC showed the importance of these residues for interaction with factor Va and anticoagulant activity but not for EPCR- and PAR-1-dependent APC cytoprotective activities (19,23). The anticoagulant activity of these two APC mutants was reduced but not ablated relative to rwt-APC in APTT clotting assays (19). We assumed that the thermodynamic contributions of residues in different APC exosites responsible for APC-factor Va interactions would be approximately additive. Thus, to obtain an APC mutant with essentially no anticoagulant activity while retaining normal cytoprotective activities, the above mutations were combined to create a novel mutant, designated 5A-APC (R229A + R230A + K191A + K192A + K193A), which we predicted would have significantly lower anticoagulant activity than either mutant alone. 5A-protein C expression levels were comparable to those of rwt-protein C. After purification, activation, and active-enzyme concentration determination by active-site titration, the amidolytic activities of 5A-APC and rwt-APC against a small substrate (S-2366) in the presence of CaCl2 were indistinguishable (Fig. 1A). In the presence of EDTA, the amidolytic activity of 5A-APC was modestly decreased compared with rwt-APC (5A-APC activity 75% of rwt-APC; data not shown). As expected, the active-site Ser to Ala mutant, S360A-APC, had no detectable amidolytic activity (Fig. 1A). In the presence of CaCl2, 5A-APC cleaved two other small chromogenic peptide substrates (Spectrozyme aPC or Pefachrome PCa) with catalytic efficiencies similar to rwt-APC (Table 1).
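The additivity assumption stated above has a simple quantitative consequence: if the free-energy penalties of the two sets of exosite mutations were strictly additive, their effects on the relative cleavage rate would multiply. A rough consistency check (Python) using the Arg-506 cleavage activities reported elsewhere in this paper (25% for 229/230-APC, 11% for 3K3A-APC, 0.07% for 5A-APC); this is a back-of-the-envelope comparison for orientation, not an analysis from the paper.

```python
# Additive binding free energies imply multiplicative relative rates:
# rate_5A/rate_wt ~= (rate_229230/rate_wt) * (rate_3K3A/rate_wt).
rel_229230, rel_3k3a, rel_5a_observed = 0.25, 0.11, 0.0007

predicted_5a = rel_229230 * rel_3k3a             # ~0.0275, i.e. ~2.8% of wild type
print(f"additive prediction: {predicted_5a:.2%}")        # 2.75%
print(f"observed:            {rel_5a_observed:.2%}")     # 0.07%
# The observed reduction is deeper than the simple additive prediction,
# i.e. the combined mutations are at least additive in their effect.
print(f"extra factor: {predicted_5a / rel_5a_observed:.0f}x")  # ~39x
```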
Furthermore, inhibition of 5A-APC by plasma protease inhibitors, as determined by the half-life of the amidolytic activity of APC in plasma (23), was indistinguishable from inhibition of rwt-APC (half-lives, 19 versus 20 min, respectively), further indicating that the mutations did not have any detectable global effect on the conformation and/or folding of the region around the APC active site. Anticoagulant Activity of 5A-APC-In APTT clotting assays, the five alanine replacements in the 5A-APC mutant essentially ablated anticoagulant activity, as it was below the detection limit (<3% of rwt-APC) under the conditions employed (Fig. 1B). The 229/230-APC and 3K3A-APC mutants have 14 and 5% anticoagulant activity under similar conditions, respectively (19). The catalytically inactive S360A-APC mutant had significantly higher anticoagulant activity, which is presumably mediated by binding of S360A-APC to factor Va via the positively charged exosites of APC (Fig. 1B). This suggests that a major portion of the exosite-mediated interactions of APC with factor Va was abolished in 5A-APC. Similar results showing essentially no anticoagulant activity for 5A-APC were obtained when it was assayed in diluted prothrombin time assays (23) (Table 2). In contrast to the small amounts of thrombin that are sufficient for fibrin clot formation (i.e., primary-initial thrombin formation measured by clotting assays), relatively much higher concentrations of thrombin are generated after initial clot formation due to thrombin-catalyzed activation of factor XI and amplification via both the tenase and the prothrombinase complexes. This extended thrombin generation may be termed secondary-extended thrombin generation, and this secondary, extended burst of thrombin formation contributes importantly to clot stability, in part via the activation of TAFI (38). To determine the anticoagulant effects of 5A-APC on the inhibition of total thrombin generation during and after clot formation, tissue factor-induced thrombin formation was monitored using the "endogenous thrombin potential" (ETP) method (29). Thrombin generation was readily inhibited by rwt-APC (IC50 = 0.5 nM), whereas inhibition of thrombin generation by 5A-APC required >10 times as much enzyme as rwt-APC (IC50 ≥ 5 nM) (Fig. 1C). In contrast to the data for APTT assays, S360A-APC showed almost no anticoagulant activity in ETP assays under the conditions employed. Factor Va Inactivation by 5A-APC-To determine whether the inhibition of thrombin generation at higher 5A-APC concentrations derives from factor Va proteolysis or from residual exosite interaction with factor Va, as is the case for the S360A-APC anticoagulant activity, generation of proteolytic factor Va inactivation fragments by rwt-APC and 5A-APC was analyzed by Western blot. rwt-APC generated the typical pattern of factor Va inactivation fragments, with rapid cleavage at Arg-506 (fragment 1-506) followed by subsequent cleavage at Arg-306 (fragment 307-506) (Fig. 2A). In contrast, initial cleavage at Arg-506 (fragment 1-506) by 5A-APC could not be detected. Instead, the initial cleavage by 5A-APC appears to occur at Arg-306, resulting in accumulation of the fragments 307-679 and 307-709, which were only transiently formed by rwt-APC (Fig. 2A). Subsequent cleavage at Arg-506 by 5A-APC occurred but required, as estimated from the appearance of the 307-506 fragment, at least a 100-fold higher concentration of 5A-APC compared with rwt-APC (Fig. 2B).
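The ETP referenced above is conventionally taken as the area under the thrombin concentration-time curve, and an IC50 can be interpolated from an inhibition series. A minimal sketch (Python/NumPy), where the thrombin trace and APC dose response are synthetic curves assumed purely for illustration; only the ~0.5 nM IC50 target comes from the text.

```python
import numpy as np

def etp(t_min, thrombin_nM):
    """Endogenous thrombin potential: area under the thrombin-time curve (nM*min)."""
    return np.trapz(thrombin_nM, t_min)

def ic50(conc_nM, etp_values):
    """APC concentration giving 50% ETP inhibition, by linear interpolation."""
    inhibition = 1.0 - np.asarray(etp_values) / etp_values[0]  # 0 nM APC = baseline
    return np.interp(0.5, inhibition, conc_nM)

t = np.linspace(0, 60, 241)                          # minutes
trace = 400 * (t / 10) * np.exp(1 - t / 10)          # gamma-like thrombin burst
baseline = etp(t, trace)

apc = np.array([0.0, 0.125, 0.25, 0.5, 1.0, 2.0])    # nM APC (hypothetical series)
etps = baseline / (1 + apc / 0.5)                    # simulated dose response
print(round(ic50(apc, etps), 2))                     # ~0.5 nM, cf. rwt-APC above
```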
The inability of 5A-APC to cleave factor Va at Arg-506 and its reduced ability to cleave at Arg-306 were confirmed using recombinant mutants of factor Va with either the Arg-306 or the Arg-506 cleavage site ablated (R306Q/R679Q-factor Va or R506Q/R679Q-factor Va). No inactivation of factor Va at Arg-506 was detected under conditions where Arg-506 was readily cleaved by rwt-APC (Fig. 2C). Instead, a 1000-fold higher concentration of 5A-APC was required to give a factor Va inactivation pattern similar to that of rwt-APC. Interestingly, inactivation of factor Va at Arg-306 was much less affected by the mutations in 5A-APC; compared with rwt-APC, only a ~5-fold higher concentration of 5A-APC was needed to give a similar factor Va inactivation cleavage pattern (Fig. 2D). Inactivation of factor Va at Arg-506 illustrates the approximately additive effect of combining the APC mutations at residues 229/230 with those at 191-193. The 3K3A-APC mutant cleaves factor Va at Arg-506 at ~11% of the rate of rwt-APC (Table 2); however, the combination of Ala substitutions in 5A-APC greatly reduced the rate of cleavage at Arg-506 in factor Va to only 0.07% of the rate of rwt-APC, i.e., a further 157-fold reduction relative to 3K3A-APC (Table 2). Cytoprotective Activities of 5A-APC-APC cytoprotective effects have been variously described as anti-inflammatory activity, anti-apoptotic activity, alteration of gene expression profiles, or protection of endothelial barrier function (17). 5A-APC showed cytoprotective activities that were indistinguishable from rwt-APC in all four of these categories, as shown below. Anti-inflammatory activity was analyzed as the inhibition of LPS-induced cytokine release by monocytic U937 cells. Both rwt-APC and 5A-APC inhibited LPS-induced TNFα release from monocytes (Fig. 3A). Dose-response titrations indicated that the anti-inflammatory potencies of rwt-APC and 5A-APC were indistinguishable. Similar results were obtained for inhibition by rwt-APC and 5A-APC of LPS-induced interleukin 6 release from monocytes (Fig. 3B). These results indicate that 5A-APC had normal APC anti-inflammatory activity compared with rwt-APC. Alteration of gene expression profiles by APC was determined by analyzing changes in TNFα-induced endothelial p53 mRNA expression in EA.hy926 endothelial cells (Fig. 3C). Both rwt-APC and 5A-APC similarly down-regulated p53 mRNA expression, whereas neither S360A-APC nor thrombin did so (Fig. 3D). Similar results were obtained for down-modulation of thrombospondin-1 mRNA expression by rwt-APC and 5A-APC (data not shown). Both rwt-APC and 5A-APC inhibited staurosporine-induced endothelial cell apoptosis (31), and the concentrations of rwt-APC and 5A-APC required to achieve half-maximal inhibition of apoptosis were indistinguishable (Fig. 3E), showing that 5A-APC had normal anti-apoptotic activity. APC anti-apoptotic effects on endothelial cells require PAR-1 and EPCR (31,39). Similarly, the anti-apoptotic activity of 5A-APC in assays of staurosporine-induced endothelial cell apoptosis required PAR-1, because antibodies blocking the cleavage of PAR-1 at Arg-41 abolished the anti-apoptotic activity conveyed by 5A-APC (Fig. 3F). In the presence of antibodies against EPCR that block receptor binding of APC, 5A-APC anti-apoptotic activity was markedly impaired, indicating that 5A-APC binding to EPCR mediates its anti-apoptotic activity (Fig. 3F).
These results indicate that anti-apoptotic interactions between cells and the 5A-APC mutant, like those of rwt-APC and the two directly related mutants, 229/230-APC and 3K3A-APC, require PAR-1 and EPCR. APC-mediated protection of endothelial cell barrier function was tested in a dual-chamber system measuring albumin flux (34). Thrombin induced a 5-fold increase in endothelial cell permeability, an effect that could be blocked by rwt-APC or 5A-APC but not by S360A-APC (Fig. 3G). These results indicate that the barrier-protective effects of 5A-APC were similar to those of rwt-APC and that the active site of APC was needed for its ability to stabilize endothelial barriers. Profibrinolytic Activity of 5A-APC-APC profibrinolytic effects in plasma depend at least in part on inhibition of TAFI activation by high levels of thrombin, and this mechanism might contribute to the antithrombotic activities of APC. Inhibition of TAFI activation by APC is impaired in plasma from patients with factor V Leiden (R506Q-factor V) (40). When the effect of 5A-APC on TAFI-dependent inhibition of fibrinolysis in normal plasma was determined and compared with rwt-APC (Fig. 4), rwt-APC readily inhibited anti-fibrinolytic activity with an IC50 = 0.36 nM, whereas 10 times more 5A-APC was required for 50% inhibition (IC50 = 3.1 nM). Complete inhibition of TAFIa-dependent clot lysis protection (i.e., TAFIa activation) required 5 nM rwt-APC, whereas 20 times more 5A-APC (~100 nM) was required to inhibit TAFIa-dependent clot lysis protection completely. The relative potencies of rwt-APC and the 5A-APC mutant (Fig. 4) are similar to those required to inhibit thrombin generation in ETP assays (Fig. 1C and Table 2). [Figure 2 legend: factor Va cleavage by rwt-APC and 5A-APC. Samples were analyzed under reducing conditions on 10% SDS-PAGE, and factor Va fragments were detected by Western blot using a monoclonal anti-factor Va antibody whose epitope lies between Arg-306 and Arg-506; migration of molecular weight standards (left) and deduced factor Va fragments (right) are indicated. Functional consequences of factor Va inactivation by rwt-APC and 5A-APC were determined in R679Q/S2183A B-domainless factor Va preparations in which either the APC cleavage site at Arg-306 (panel C) or at Arg-506 (panel D) had been mutated (30); inactivation of Q306/Q679-factor Va at Arg-506 (C) or Q506/Q679-factor Va at Arg-306 (D) was determined in a prothrombinase assay using purified clotting factors, and each point represents the mean ± S.E. from at least three independent experiments. Panel C: 125 pM rwt-APC (1x), 125 pM 5A-APC (1x), 12.5 nM 5A-APC (100x), and 125 nM 5A-APC (1000x). Panel D: 1 nM rwt-APC (1x), 1 nM 5A-APC (1x), and 7 nM 5A-APC (7x).] The active site of APC is required for profibrinolytic action, as S360A-APC required a 65-fold higher concentration for 50% inhibition (IC50 = 24 nM). In notable contrast to 5A-APC, the 3K3A-APC mutant retained significant residual activity to promote clot lysis, as its potency was almost indistinguishable from that of rwt-APC (Fig. 4). These data highlight the markedly reduced anticoagulant characteristics of 5A-APC compared with 3K3A-APC with respect to anticoagulant activity related to secondary-extended thrombin formation and the implications thereof for activation of TAFI.
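IC50 values like those quoted above are typically obtained by fitting a sigmoidal dose-response model. A minimal sketch (Python/SciPy) fitting a Hill-type curve to synthetic clot-lysis inhibition data; the concentrations and noise are assumptions, with only the 0.36 nM target taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    """Fractional inhibition of TAFIa-dependent clot-lysis protection."""
    return conc**n / (ic50**n + conc**n)

conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])   # nM APC (hypothetical)
inhib = hill(conc, 0.36, 1.2) + np.random.default_rng(0).normal(0, 0.02, conc.size)

# Least-squares fit recovers the IC50 and Hill slope from the noisy series.
(ic50_fit, n_fit), _ = curve_fit(hill, conc, inhib, p0=[0.5, 1.0])
print(f"IC50 ~ {ic50_fit:.2f} nM, Hill slope ~ {n_fit:.2f}")  # ~0.36 nM
```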
Preservation of TAFIa-mediated Anti-inflammatory Activity by 5A-APC-In addition to inhibiting fibrinolysis, TAFIa also exhibits anti-inflammatory activities via the inactivation of BK and of the C3a and C5a anaphylatoxins by removal of C-terminal arginine residues. The des-Arg forms of these peptides have diminished bioactivities and are intermediates on pathways for the metabolism of these peptide mediators (41-43). To compare the effects of rwt-APC and 5A-APC on TAFIa anti-inflammatory activities in the plasma milieu, inactivation of BK in plasma was studied using an HPLC-based quantitative analysis of BK inactivation and generation of des-Arg9-BK. Approximately 85% of the BK added to the plasma was recovered based on HPLC chromatography, and following removal of the C-terminal arginine from BK, des-Arg9-BK had a longer retention time on the C18 column, sufficient to separate BK from des-Arg9-BK (Fig. 5A). Carboxypeptidase N (CPN) has been regarded as the physiological inhibitor of BK in plasma that generates des-Arg9-BK, whereas angiotensin-converting enzyme (ACE) proteolytically inactivates BK by cleavage of an internal peptide bond to give two smaller peptide fragments. To determine the relative contribution of TAFIa-mediated BK inactivation versus CPN- and ACE-mediated BK inactivation, the inhibitors lisinopril (ACE inhibitor), Plummer's inhibitor (CPN and TAFIa inhibitor), and CPI (TAFIa inhibitor but not CPN inhibitor) were used (Fig. 5B). In the presence of lisinopril and Plummer's inhibitor, no significant inactivation of BK was observed (<5%), indicating that ACE, CPN, and TAFIa are required for inactivation of BK in plasma under the conditions employed. Omission of lisinopril showed a small but reproducible decrease in BK (11%) but no generation of des-Arg9-BK, as expected. In contrast, BK inactivation by CPN resulted in a similar decrease in BK (13%) but with concomitant generation of des-Arg9-BK. BK inactivation by CPN plus thrombin-generated TAFIa was approximately double that of CPN alone (28% versus 13%), indicating that CPN and thrombin-activated TAFIa can contribute equally to BK inactivation in plasma. ACE further increased BK inactivation by an additional 9%, to 37%, without increasing des-Arg9-BK, consistent with BK inactivation by ACE alone (11%). These results indicate that ACE, CPN, and thrombin-activated TAFIa each account for an approximately equal portion of BK inactivation in plasma under these conditions. Thrombomodulin stimulates activation of both protein C and TAFI by thrombin (44). In the presence of thrombomodulin, BK was fully inactivated (100%) and completely converted to des-Arg9-BK in plasma, whereas under similar conditions in the absence of thrombomodulin, only 28% of BK was inactivated (Fig. 5B). The accelerated inactivation of BK in the presence of thrombomodulin could be attributed to TAFIa, because CPI greatly reduced the extent of BK inactivation in the presence of thrombomodulin (Fig. 5B). Analysis of the time course of BK inactivation in the presence of thrombomodulin indicated that TAFIa rapidly converts BK to des-Arg9-BK, with only a modest contribution from CPN (Fig. 5C). APC is very effective at inhibiting thrombin-dependent activation of TAFI, because APC inhibits thrombin generation (7,8). Under conditions where TAFIa fully converted BK to des-Arg9-BK, rwt-APC dose-dependently blocked BK inactivation and des-Arg9-BK generation (Fig. 5D).
At the highest tested concentration of rwt-APC (40 nM), no BK inactivation was observed beyond that expected to be caused by CPN, suggesting that rwt-APC completely inhibited TAFI activation and thereby inhibited BK inactivation by TAFIa. In contrast to rwt-APC, 5A-APC showed no significant inhibition of BK inactivation and did not appreciably decrease des-Arg9-BK levels, implying that 5A-APC permitted normal TAFI activation. Thus, 5A-APC left intact the TAFIa-mediated anti-inflammatory mechanism for inactivation of BK and presumably also the ability of TAFIa to inactivate the C3a and C5a anaphylatoxins (Fig. 5D). DISCUSSION Several basic residues in two surface loops, loop 37 and the Ca2+-binding loop, of the APC protease domain contribute to the enzyme's anticoagulant activity; however, the anti-apoptotic activity of APC does not require these residues (19,45). Thus, certain positively charged residues in APC exosites that bind the factor Va substrate for cleavage at Arg-506 are not required for the APC anti-apoptotic activity that depends on APC interactions with PAR-1 and EPCR. Recent preliminary in vivo studies in a mouse model of endotoxin-induced lethality indicated that APC mutants with greatly diminished anticoagulant activity were as effective as rwt-APC in reducing endotoxin-induced mortality (20,46). These developments help provide a novel set of tools, namely APC mutants with selectively altered activities, to determine the relative in vivo importance of the anticoagulant actions of APC versus its cytoprotective actions for reducing morbidity and mortality in severe sepsis, ischemic stroke, and other serious acute and chronic injuries (18,39,47). Here we used a combined mutagenesis approach to generate an APC mutant with essentially no anticoagulant activity. Combining alanine mutations at five residues in protease domain exosite loops of APC in the 5A-APC mutant shortened each amino acid side chain and neutralized the charge of a large positively charged surface area that is thought to bind factor Va and promote Arg-506 cleavage (Fig. 6A). These five mutations in 5A-APC reduced factor Va cleavage at Arg-506 by more than three orders of magnitude (Fig. 2 and Table 2). Accordingly, 5A-APC showed undetectable anticoagulant activity (<3%) in prothrombin time and APTT clotting assays (Fig. 1 and Table 2). For comparison, two mutants related to 5A-APC, namely 229/230-APC and 3K3A-APC, showed 25 and 11% activity for cleavage at Arg-506 and exhibited 13 and 5% anticoagulant activity in APTT assays, respectively (19,23). Thus, combining the mutations at Arg-229/230 and Lys-191-193 provided a substantial additional 150-fold reduction in the cleavage of factor Va at Arg-506 (0.07%) beyond that seen for either of these two related mutants. Most importantly, 5A-APC retained cytoprotective activities that were qualitatively and quantitatively indistinguishable from those of rwt-APC as determined by anti-apoptotic and anti-inflammatory activities, inhibition of pro-apoptotic p53 gene expression, and stabilization of endothelial barrier function (Fig. 3 and Table 2). Therefore, extensive exosite engineering selectively alters APC interactions with different substrates and enables generation of APC mutants, such as 5A-APC, that have extremely low anticoagulant activity but normal cytoprotective activities. The specificity of APC exosites for its protein-protein interactions is quite remarkable.
For instance, residue Lys-192 is indispensable for the interaction of APC with thrombomodulin, whereas factor Va-dependent anticoagulant activity is only marginally affected (75% of rwt-APC) by alanine replacement of this residue. The effect of alanine replacement of the adjacent Lys-193 residue is the inverse, namely ~20% anticoagulant activity but normal interaction of APC with thrombomodulin (48). [Figure 5 legend: Influence of rwt-APC and 5A-APC on TAFIa-mediated bradykinin cleavage in plasma. BK inactivation in plasma was quantified using a method based on tissue factor-induced coagulation, plasma filtration, and HPLC-assisted analysis of BK and des-Arg9-BK. A, typical HPLC chromatograms for BK (top), des-Arg9-BK generated by incubation of BK with carboxypeptidase B from porcine pancreas (middle), and des-Arg9-BK reconstituted with BK (bottom). B, relative contributions of the enzymes responsible for BK inactivation in plasma during tissue factor-induced coagulation, determined using various combinations of specific inhibitors: lisinopril (ACE inhibitor), Plummer's inhibitor (CPN and TAFIa inhibitor), and CPI (TAFIa inhibitor) were added as indicated (left), and BK inactivation/des-Arg9-BK generation was determined after 30 min of tissue factor-induced coagulation; the enzymes implicated by the inhibitor profile are indicated (right). C, time-dependent inactivation of BK (closed symbols) and generation of des-Arg9-BK (open symbols) over 0-30 min following tissue factor-induced clotting in the presence of thrombomodulin; lisinopril was added to eliminate the effects of ACE on BK inactivation, and the contribution of TAFIa was determined from the difference between BK inactivation in the presence and absence of CPI. D, inhibition of TAFIa-mediated BK inactivation by rwt-APC and preservation of TAFIa-mediated BK inactivation by 5A-APC, analyzed under the same conditions as C after 30 min of tissue factor-induced coagulation in the presence of thrombomodulin and the indicated concentrations of APC; under these conditions, in the absence of APC, BK is completely converted to des-Arg9-BK. Each point represents the mean ± S.E. from at least three independent experiments.] Similar APC exosite specificity is observed for 5A-APC with respect to cleavage at Arg-306 in factor Va compared with rwt-APC (30% of the normal rate for 5A-APC and 67% for 3K3A-APC) versus cleavage at Arg-506 in factor Va compared with rwt-APC (<0.1% of the normal rate for 5A-APC and 11% for 3K3A-APC) (Fig. 2 and Table 2). The PROWESS (Protein C Worldwide Evaluation in Severe Sepsis) trial demonstrated a significant reduction of 28-day all-cause mortality in patients given recombinant APC (Drotrecogin alfa, activated) (18). Despite confirmation of the results for severe sepsis patients with a high risk of death in the ENHANCE US trial, the absence of an effect of APC on mortality in sepsis patients with a lower risk of death in the ADDRESS trial indicates that the currently employed APC therapeutic regimen has its limitations (49,50).
Implementation of a more aggressive APC dosing regimen to increase therapeutic efficacy is hampered by a low but significant increase in serious bleeding events associated with administration of APC for sepsis (18,51). Based on the assumptions that APC anticoagulant activity is primarily responsible for the increased risk of bleeding in sepsis patients, whereas APC cytoprotective activities are primarily responsible for the reduction in mortality (20,46), 5A-APC or mutants that resemble 5A-APC could provide a safer alternative to rwt-APC therapy by reducing the serious bleeding risk caused by the anticoagulant effects of APC while providing the retained beneficial effects of APC acting directly on cells. These assumptions and implications merit critical assessment, and in this regard, the ability of our APC mutants to reduce mortality in mouse endotoxemia models is encouraging (20,46). In comparing 5A-APC to the simpler, related mutant, 3K3A-APC, we found that 5A-APC had a markedly greater reduction in anticoagulant characteristics, both in the rate of cleavage at Arg-506 in factor Va and in the inhibition of extended thrombin generation in plasma (Fig. 1C and Table 2). In comparing 5A-APC to the related 3K3A-APC mutant or rwt-APC, we found largely unopposed extended thrombin generation in the presence of 5A-APC but not in the presence of 3K3A-APC or rwt-APC. This has multiple implications related to TAFIa generation in plasma, because TAFIa inhibits both fibrinolysis and inflammatory reactions caused by BK and complement activation. Firstly, in terms of fibrinolysis, TAFIa is a fibrinolysis inhibitor because it removes C-terminal lysine residues from fibrin that promote plasminogen activation and plasmin-dependent fibrinolysis. Hence, by inhibiting extended thrombin generation, rwt-APC blunts TAFI activation and thus promotes fibrinolysis (Fig. 4) (6,7,44,52). As shown here, 5A-APC was much less active than rwt-APC in promoting clot lysis, most likely because 5A-APC is not very effective at reducing the extended generation of thrombin that activates TAFI. TAFIa protection of fibrin structures from lysis might aid in the prevention of the bleeding that is promoted by rwt-APC. Moreover, because several common human pathogens express plasminogen activators that serve as virulence factors, robust TAFIa activity can counteract these virulence factors (53,54). [Figure 6 legend: Schemes depicting the positively charged residues of rwt-APC that were replaced by Ala in the 5A-APC mutant and the implications for TAFIa-dependent cytoprotective effects of APC. A, the serine protease domain of rwt-APC is shown with the active site triad residues in green and the surface contours of positive residues in blue, negative residues in red, and neutral residues in gray; a yellow rectangle indicates a remarkable cluster of five positively charged surface residues in loop 37 and the Ca2+-binding loop (Lys-191, Lys-192, Lys-193, Arg-229, and Arg-230) that define exosites for binding factor Va and that were mutated to alanine to yield 5A-APC. B, implications of the effects of APC on extended thrombin generation for TAFIa-dependent anti-inflammatory effects: rwt-APC inactivates factor Va to block activation of coagulation and thrombin generation; consequently, this action of rwt-APC blocks TAFI activation and the subsequent inactivation of BK by TAFIa. BK formation is a major consequence of activation of the kallikrein-kinin system, which involves the proteolytic release of BK from plasma kininogens by kallikrein. BK is one of the important mediators of inflammation, as it can cause the four classic signs of inflammation, namely swelling, heat, redness, and pain; the B2 receptor is specific for BK, and inactivation of BK to des-Arg9-BK largely ablates B2 receptor-mediated effects. TAFIa converts BK into des-Arg9-BK, which has diminished bioactivity. Because the 5A-APC mutant cannot effectively inhibit coagulation or the consequent extended thrombin (IIa) generation (dashed line), this APC mutant has minimal profibrinolytic activity and preserves normal TAFI activation and subsequent TAFIa-mediated anti-inflammatory activities toward BK and presumably toward the C3a and C5a anaphylatoxins, all of which are inactivated by TAFIa-mediated release of their C-terminal Arg. Therefore, preservation of TAFI activation by the normally cytoprotective 5A-APC in plasma indirectly but significantly contributes to TAFIa-dependent anti-inflammatory effects that are lost in the presence of anticoagulantly active rwt-APC. Not shown in this scheme is the fact that the other known cytoprotective activities of APC that depend on PAR-1 and EPCR are the same for rwt-APC and 5A-APC. The model of the 5A-APC protease domain was generated from the published structure of APC (1AUT) using Modeler (60,61).] Secondly, in terms of anti-inflammatory actions, the arginine carboxypeptidase activity of TAFIa can provide physiologic inactivation of the complement-derived anaphylatoxins, C3a and C5a, and likely does the same for BK, especially when thrombomodulin is present in plasma (Fig. 5) (13-16). The potent inflammatory responses mediated by BK include hypotension and increased vascular permeability and are implicated as major contributors to sepsis-associated pathologies (55). Some bacterial species (e.g., Staphylococcus aureus) take advantage of these BK effects to facilitate dissemination and virulence by using contact activation to induce a steady release of BK from the bacterial wall, and these effects of BK stimulated clinical studies of a BK receptor antagonist in sepsis (56-58). The B2 receptor is specific for BK, and inactivation of BK to des-Arg9-BK largely ablates B2 receptor-mediated effects (41). Similar considerations for C3a and C5a and their receptors, especially the C5a receptor (C5aR, CD88), stimulated evaluations of these molecules as possible anti-sepsis targets (41-43,59). Therefore, it is tempting to speculate that the ability of 5A-APC to preserve TAFI activation by thrombin might significantly, albeit indirectly, contribute to additional TAFIa-dependent anti-inflammatory effects that are much less available with rwt-APC therapy (Fig. 6B). In summary, fundamental questions exist about the relative importance of the anticoagulant actions of APC versus its cytoprotective actions for reducing mortality in patients with severe sepsis and for APC beneficial effects in ischemic stroke and other acute and chronic injury settings. Recombinant mutants with selectively engineered alterations in the various activities of APC, such as 5A-APC, can provide tools to answer these fundamental mechanistic questions and may also lead to novel second-generation APC mutants for improved therapeutic applications.
2018-04-03T04:17:02.868Z
2007-11-09T00:00:00.000
{ "year": 2007, "sha1": "77375b5e9a9c19b2c0ed31010525ba9c6480f632", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/282/45/33022.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "4e9f0890ce438e3fe82cb5083aff74f4edf8378b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
228820420
pes2o/s2orc
v3-fos-license
Comparative Reliability Assessment of Tooth Volume Measurement with Different Three-Dimensional Imaging Software Department of Pedodontics, Orthodontics, and Prevention, College of Dentistry, University of Sulaimani, Sulaimani-shorsh 67/4, Kurdistan Region, Iraq Department of Pedodontics, Orthodontics, and Prevention, College of Dentistry, University of Mosul, Mosul-Alsidik 345/207/512, Iraq Department of Pedodontics, Orthodontics, and Prevention, College of Dentistry, University of Sulaimani, SulaimaniTooymalik 211/61, Kurdistan Region, Iraq Introduction Evaluation of tooth volume is of great importance in dentistry generally and is of specific consideration in orthodontic treatment and biomechanics. In Europe, Cone Beam Computed Tomography (CBCT) was introduced to dentistry in 1998 [1], while in the USA it was approved for use in 2001 [2]. Since then, CBCT technology has gone through a drastic evolution, due to great demands from each specialty for its accurate, reproducible, and safe three-dimensional images. In orthodontics, three-dimensional imaging raises the possibility of increasing diagnostic ability, with very practical and easy application in daily orthodontic procedures [3,4]. In orthodontics, CBCT images are generally acquired with a small field of view (FOV) [5,6], which yields higher-contrast images than larger fields of view [7]. FOV refers to the size of the scan volume necessary to adequately capture the region of interest [8-10]. Developmental advancements in the use of CBCT technology and its three-dimensional imaging software have been seen in orthodontic practice, stemming largely from the increased clinical employment of the tool in this dental branch. The applications of three-dimensional imaging have certainly developed with advances in CBCT technology and expanding software capability. Tooth volume cannot be measured from two-dimensional images, whereas three-dimensional imaging of the tooth's anatomical structure can aid in calculating it [11]. The practicability of in vivo dental volume measurements using CBCT imaging was reported by Liu et al. [12]. Furthermore, CBCT images are anatomically true to size, compared with conventional cephalometric radiographs [13-15]. Designing a three-dimensional model and virtual image that illustrates the whole tooth structure together with the surrounding hard tissues and craniofacial structures would greatly assist the orthodontist in considering different treatment possibilities throughout the steps of diagnosis and treatment planning of a case. Additionally, by observing the changes that occur during treatment, the final results can be estimated and predicted accurately [16]. Bracket positioning by the indirect technique is another important indication for the virtual model in orthodontics [7,17], especially for lingual appliances [5], as are accurate wire bending and correct surgical simulation during orthognathic surgery [6,18]. Elimination of all structures surrounding an object such as a tooth in CBCT, called segmentation, is necessary during volumetric analysis; it can be done by automated, semiautomated, or manual image thresholding. Thresholding, as applied during segmentation, is the process of dividing an image into smaller regions with boundaries defined by grayscale values [21-23]. Because of human bias, the image segmentation process can often be challenging [24].
According to the amount of radiation absorbed, each voxel has its own specific grayscale value. If a single voxel contains tissues of different densities, the average grayscale of that voxel is taken during three-dimensional reconstruction. As a result of this voxel averaging, the grayscale values obtained from CBCT imaging cannot be used quantitatively [8,25,26]. The volume of the tooth can also be calculated physically by the water displacement method, subtracting the initial water volume from the final volume after immersing the tooth in the water [27,28]. Material and Method 2.1. The Sample. A total of 26 sound teeth (upper first premolars) from 13 Kurdish orthodontic patients who needed bilateral extraction of both upper first premolars as part of their orthodontic treatment plan were evaluated in this study. All patients were 18 to 25 years of age and had CBCT taken before their orthodontic procedure. 2.2. Radiographic Imaging. A CBCT image was taken for each patient as part of their orthodontic treatment plan with a CBCT machine (NewTom VGi scanner; QR s.r.l., Verona, Italy); the exposure was set at 90 kV (tube voltage), 3.00 mA (tube current), 9.0 s exposure time, and a voxel size of 0.150 mm (the voxel is the minimum unit of digital data segmentation in three-dimensional space). The acquired data were exported from the X-ray machine using a specific file type, Digital Imaging and Communication in Medicine (DICOM). 2.3. Three-Dimensional Measurement. The volume of each tooth was calculated using two different software packages as follows: first, the DICOM files were imported into an image processing program for three-dimensional design and modeling called the Materialise Interactive Medical Image Control System (MIMICS), which is typically used to create a three-dimensional surface model from stacks of two-dimensional image data (Figure 1). After the three-dimensional tooth model was completed, the file was sent from MIMICS to the 3Matic program (3-Matic Analyze-Mimics Innovation Suite, Materialise, Leuven, Belgium), in which the first tooth volume calculation was done (Figure 2). VRMesh (Virtual Grid, Bellevue City, WA) was the other software used in the present study to estimate tooth volume (Figure 3); this software was developed by an American company and is used in many fields, including dentistry. The VRMesh program cannot recognize DICOM files, so the 3Matic software was used to convert the files into stereolithography (STL) format. Tooth segmentation was performed on consecutive two-dimensional slices by semiautomated segmentation with manual localized visual refinement on a repeated two-dimensional basis, as this is regarded as a reliable method leading to an accurate approach for quantitative volume analysis in different studies [29,30]. A visually defined optimal threshold value was set for each tooth in the sagittal (YZ) plane. The threshold level was set individually for each patient according to the density values, called Hounsfield units (HU), and once determined, it was not altered during the entire process of 3D model construction. The tooth anatomy had to be obvious and clear, with minimal interference from the surrounding bone and adjacent structures, when the threshold value was set. Manual refinements were performed slice by slice for greater accuracy by correcting under- and/or over-contoured voxels in the tooth volume [31].
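Given a binary tooth mask, the radiographic volume such software reports reduces to counting segmented voxels and scaling by the voxel volume. A minimal sketch (Python/NumPy) under that assumption, using the 0.150 mm isotropic voxel size stated above; the ellipsoidal mask is a synthetic stand-in, not patient data.

```python
import numpy as np

VOXEL_MM = 0.150  # isotropic CBCT voxel edge length used in this study

def tooth_volume_ml(mask: np.ndarray, voxel_mm: float = VOXEL_MM) -> float:
    """Volume of a binary segmentation mask, in milliliters (1 mL = 1000 mm^3)."""
    volume_mm3 = int(mask.sum()) * voxel_mm**3
    return volume_mm3 / 1000.0

# Synthetic stand-in for a segmented premolar: a solid ellipsoid of voxels.
z, y, x = np.ogrid[-60:60, -40:40, -40:40]
mask = (z / 55.0)**2 + (y / 32.0)**2 + (x / 32.0)**2 <= 1.0
print(round(tooth_volume_ml(mask), 3))  # ~0.8 mL for this synthetic shape
```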
Initial refinements were done in the YZ plane, and the second set was performed in the axial (XY) plane to clarify the root structure and interproximal dental contact points, while the third set was completed in the XZ (coronal) plane to preserve tooth anatomy and focus on delineating the dental root structure from the buccal and palatal cortical plates (Figure 1). The resultant 3D tooth was assessed for approximately normal maxillary first premolar dental anatomy. In all manual outlining and refinement steps, an external mouse pen connected to the computer was used as a tool for more accurate results. No smoothing functions were applied to the three-dimensional tooth structure, to prevent flattening of minor root defects/imperfections or possible resorption lacunae [30]. The segmentation of the images was a sensitive process; to avoid any misinterpretation, the images were segmented by two observers, after which an interexaminer calibration was done to avoid any kind of reading bias. Moreover, the segmentation process was repeated blindly by the researcher at two-week intervals, and the segmentations were color coded in the same DICOM volumetric data to facilitate differentiation. After segmentation, the software automatically computed each tooth's radiographic volume from the stack of segmented two-dimensional slices, and the three-dimensional shape of the tooth was prepared. The measurements obtained from the software were in cubic millimeters (mm³), which were converted to milliliters (mL) to unify the readings with the physical volume calculations. 2.4. Measurement of Tooth Physical Volume. In order to measure the real volume, each studied tooth was gently extracted. The physical volume (PhV) of the tooth was measured by the water displacement method [32] in a 5-mL graduated cylinder with gradations of 0.1 mL (Fisher Scientific, Pittsburgh, Penn). The cylinder was filled with water at room temperature (23.5°C) to the 4-mL mark. The tooth was cleaned thoroughly, dried with an air syringe, and then immersed completely in the cylinder, and the new water level was recorded (Figures 7(a) and 7(b)). The readings were taken at the lowest portion of the meniscus. The volume of the displaced water, which determines the PhV, was obtained by subtracting the initial water volume from the final volume after immersing the tooth [27,28]. To reduce bias, the volume of each tooth was measured twice and the average of the two readings was calculated. 2.5. Data Statistical Analysis. The data were analyzed in SPSS Advanced Statistics (Statistical Package for the Social Sciences), version 21 (SPSS Inc., Chicago, IL). The data were initially examined for normality of distribution using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. Comparisons between groups for normally distributed numeric variables were made using the ANOVA test. P ≤ 0.05 was considered statistically significant. A comparison between the software and real volume measurements of each tooth was performed to check the sensitivity of each program. Correlations within and between groups were computed on the recorded measurements to determine the level of reliability of the data. The F value was obtained to evaluate the statistical significance of the programs.
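The analysis pipeline described above (normality screening, then one-way ANOVA across the PhV, VRMesh, and 3Matic groups) can be sketched outside SPSS. A minimal illustration (Python/SciPy) on hypothetical volume readings in mL; the numbers are invented and do not reproduce the study's data.

```python
from scipy import stats

# Hypothetical per-tooth volumes (mL): physical and the two software readings.
phv    = [0.55, 0.60, 0.58, 0.62, 0.54, 0.57]
vrmesh = [0.56, 0.59, 0.57, 0.63, 0.53, 0.58]
matic3 = [0.55, 0.60, 0.58, 0.62, 0.54, 0.58]

# Normality of each group (Shapiro-Wilk), as a precondition for ANOVA.
for name, grp in [("PhV", phv), ("VRMesh", vrmesh), ("3Matic", matic3)]:
    w, p = stats.shapiro(grp)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# One-way ANOVA across the three measurement methods.
f_val, p_val = stats.f_oneway(phv, vrmesh, matic3)
print(f"ANOVA: F = {f_val:.3f}, p = {p_val:.3f}")  # p > 0.05 -> no sig. difference
```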
3. Results

CBCT data can be used to generate 3D printed models that provide accurate lateral cephalograms, visualize growth, help in estimating age, and evaluate oral and maxillofacial structures that cannot be accurately assessed with traditional 2D radiographs [33-35]. Most studies suggest that increasing the voxel size during CBCT scanning increases the measured tooth volumes [21,36,37]; the current study therefore selected a voxel size of 0.150 mm, which is regarded as comparatively small. The interexaminer calibration between the two observers showed a statistically nonsignificant difference, as shown in Table 1. The overall mean PhV of the tooth specimens was 0.57592 mL, while the overall mean of the VRMesh and 3Matic tooth volumes was 0.57419 mL (equal to 574.19 mm³) for the first evaluation. The mean of the VRMesh and 3Matic volumes was 0.57850 mL for the second evaluation (Table 2). Thus, the mean software values from the first evaluation were lower than the PhV, while those from the second evaluation were greater, but these differences were not statistically significant. The mean of each observation was compared to the PhV to determine the accuracy of segmentation. There was a statistically nonsignificant difference between the CBCT volumes evaluated by the 3Matic and VRMesh software after segmentation and the PhV. Likewise, when the groups of different evaluations were analyzed by ANOVA, they showed statistically nonsignificant differences (Table 3). These results support the accuracy of both programs relative to the real tooth size. All values were normally distributed when tested with both the Kolmogorov-Smirnov and Shapiro-Wilk tests, as illustrated in Table 4. When the tooth volume was evaluated by the 3Matic and VRMesh software, the two programs yielded almost the same values, differing only by negligible decimals. Figure 8 shows the distribution of the physical volume measurements compared to the expected normal. Both first-evaluation CBCT software measurements (VRMesh 1, 3Matic 1) were normally distributed relative to the expected normal, as illustrated in Figures 9 and 10, and Figures 11 and 12 show the corresponding distributions for the second-evaluation measurements (VRMesh 2, 3Matic 2).

4. Discussion

The introduction of CBCT imaging into orthodontics has popularized the idea of volumetric analysis for both anatomic visualization and biomechanical considerations [12]. Advances in CBCT technology make it feasible for this imaging process to become the standard of care in orthodontic practice; however, its full potential in everyday diagnosis and treatment planning has yet to be realized [38]. CBCT imaging has a high degree of measurement accuracy in horizontal, vertical, and angular measurements, as well as in panoramic and three-dimensional views of the dentomaxillofacial region [39]. Volumetric analysis of a tooth requires its segmentation from the surrounding structures [21,22]. The MIMICS software offers several segmentation options and has a slight learning curve [40]. In the current study, semiautomated segmentation was chosen, and the entire three-dimensional tooth volume was examined rather than only the dental roots apical to the cemento-enamel junction (CEJ), as this is considered more definitive [30].
Among the various possible uses of segmentation, volume calculation is of great interest in dentistry. In legal odontology, measuring the volume of a tooth provides a means of estimating an individual's age [55]. A number of previous studies have likewise performed in vivo tooth evaluation from CBCT by segmenting the tooth from the surrounding structures [12,19,56,57]. In vivo CBCT dental volume measurements have been shown to be statistically not significantly different from in vitro measurements in a study by Li et al. [54], and comparable in accuracy even to in vitro micro-CT imaging [43]. Some studies, like the current one, have compared the physical volume of the tooth to the volume obtained from CBCT [12]. The purpose of the current study was to test the accuracy of volume measurements derived from CBCT images using two different software programs. Tooth volumes measured from CBCT scans by the two programs were compared to the physical (real) volume measurements, with the extracted upper first premolars as the study teeth. To the best of our knowledge, this is the first time the VRMesh and 3Matic (MIMICS) software have been used to evaluate in vivo volumetric determinations of teeth from CBCT. The interexaminer calibrations by ANOVA revealed statistically nonsignificant differences, indicating the reliability of the observers in the segmentation procedure. This result agrees with Liu et al. [12] and Fadili et al. [29]. In this study, the results showed a nonsignificant difference between the VRMesh and 3Matic groups and the physical volume group. These findings indicate that both programs can be relied upon for in vivo measurements, provided the segmentation is performed competently. This disagrees with Liu et al. [12], who found statistically significant differences between the physical volume of the extracted tooth and the volume obtained from CBCT images after segmentation; those authors considered the quality of the CBCT scan an important factor in the accuracy of tooth model extraction. In contrast to the present study, Wang et al. [63] found statistically significant differences in in vivo bucco-lingual, mesio-distal, and root length dimensions between their reference and test models; they suggested that the difference may be due to differences in scanning accuracy between the CBCT used in their test group and the Smart Optics system used in their reference group. The nonsignificant difference in the present study may reflect the more accurate segmentation technique applied here, in which a mouse pen was used for the slice-by-slice refinement, increasing control during outlining of the tooth. The present result also agrees with Sang et al. [22], who found that a 3D tooth model reconstructed from CBCT data can achieve high linear, volumetric, and geometric accuracy. In general, when voxel sizes are held constant, changing the software or the CBCT machine appears to have no significant clinical importance, because all types of 3D software are tools for calculating a volume obtained from the CBCT. In recent years, several studies have focused on the linear and volumetric accuracy of CBCT measurements, but the results are controversial. In studies by Maret et al. [41] and Wang et al.
[42], they found that CBCT volume measurements of the teeth were similar to those from tomography scans. Although Liu et al. [12] support the feasibility of in vivo dental volume measurement using CBCT, they noted that the CBCT volume measurements differed slightly from the physical volumes, by -4% to 7%. In the current study, deviations in tooth volume may be due to the segmentation procedure and/or the Hounsfield unit threshold value chosen during segmentation. In addition, voxel sizes, tube voltages, and fields of view from the same CBCT device can also affect measured tooth volumes. In the current study, the mean physical volume obtained from the PhV measurements was larger by 0.00173 mL than the mean volumetric measurement by CBCT; conversely, in the second round of calculations, the mean CBCT volumetric measurement was larger by 0.00258 mL than the mean physical volume. This difference in means suggests that the subjective aspects of segmentation, and any errors during the manual slice-by-slice refinement procedure, can affect the volumetric measurements. Because the differences were so small, they are regarded as clinically nonsignificant. Thus, in agreement with Liu et al. [12], in vivo tooth volume determination from CBCT data is practicable. A common use of segmented tooth models in orthodontics is to conduct various study model analyses, such as arch length discrepancy and Bolton analysis. The present data suggest that differences arising from segmentation are relatively small and would be unlikely to influence common study model analyses for diagnosis and treatment planning. Many factors can affect the accuracy of segmentation, chief among them image quality. CBCT image quality is related to machine settings, patient positioning and management, volume reconstruction, and DICOM export; if all of these factors are managed well, the results will be more dependable for diagnosis and treatment planning. The results of the current study agree with Li et al. [54], who found a statistically nonsignificant difference in accuracy between in vivo measurements obtained from CBCT and in vitro measurements from laser scanning. Nimbalkar [40] concluded that the measured volume of the teeth depends on the threshold interval and the segmentation method, which differ for each software program and operator. The software packages use different segmentation tools, there is no fixed protocol or algorithm for preparing DICOM images for tooth volume assessment, and various volume assessment methods are commercially available.

5. Conclusion

The assessment of in vivo tooth volume with different three-dimensional imaging software (the VRMesh and 3Matic programs) is reliable in comparison with the tooth's physical volume. The use of a mouse pen during the refinement stage of semiautomated segmentation may have helped to decrease errors and increase the accuracy of the procedure. With advances in technology and CBCT devices, in vivo volumetric determination of teeth is dependable and can be applied in orthodontic diagnosis and treatment planning.

Data Availability

All data supporting the results are available from the corresponding author by email upon request.
Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this article.
Reserve Sizes Needed to Protect Coral Reef Fishes

Marine reserves are a commonly applied conservation tool, but their size is often chosen based on considerations of socioeconomic rather than ecological impact. Here, we use a simple individual-based model together with the latest empirical information on home ranges, densities and schooling behaviour in 66 coral reef fishes to quantify the conservation effectiveness of various reserve sizes. We find that standard reserves with a diameter of 1-2 km can achieve partial protection (≥50% of the maximum number of individuals) of 56% of all simulated species. Partial protection of the most important fishery species, and of species with diverse functional roles, required 2-10 km wide reserves. Full protection of nearly all simulated species required 100 km wide reserves. Linear regressions based on the mean home range and density, and even just the maximum length, of fish species approximated these results reliably, and can therefore be used to support locally effective decision making.

Introduction

Coral reefs around the world are threatened by multiple anthropogenic stressors, including local fishing activities as well as global climate change (Hughes et al. 2003). Unsustainable and destructive fishing alone can culminate in the collapse not only of fisheries but entire coral reef ecosystems (Jackson et al. 2001). However, fishery impacts can be tackled locally by limiting the use of certain fishing methods in marine protected areas (MPAs), and by prohibiting any type of fishing inside strict no-take marine reserves (see Dudley 2008 for definitions). Specifically to help protect coral reefs, most of which are situated in developing countries with low fisheries management capacity (Mora et al. 2009), marine reserves are seen as a feasible and critical conservation tool (White et al. 2014). Following the rapid global implementation of marine reserves since 1992 (the Rio "Earth Summit"), numerous studies have analyzed reserve functioning, identifying important social, economic, and ecological drivers of conservation effectiveness (Lester et al. 2009; Mora & Sale 2011; Edgar et al. 2014). In addition to good governance with effective leadership, it is now clear that marine reserves are most likely to protect fish populations if they are sufficiently large, persistent, and enforced (Edgar et al. 2014). Yet, management decisions on the location and size of reserves tend to be driven by considerations of socioeconomic impact rather than conservation effectiveness (Margules & Pressey 2000), specifically if a lack of knowledge on the abundance and movements of local fishery species precludes effective decision making (Sale et al. 2005). If reserves are too small, they can fail to ensure species conservation, simply because the abundance of fishes is not uniform and because the movements of fishes might extend beyond reserve boundaries so that they can still be fished (Moffitt et al. 2009; Gaines et al. 2010). Over recent years, data on the densities and, critically, the movements of coral reef fishes have become increasingly available. These data show that many small species, including damselfishes, butterflyfishes, and angelfishes, have daily home ranges that are restricted to 500 m or less (Kramer & Chapman 1999; Green et al. 2015).
Many larger species are more wide-ranging (up to 10 km), but extensive movements appear to be limited to some emperors, snappers, jacks, reef sharks, and seasonally migrating groupers (10-100 km) as well as large sharks and tunas (100s-1,000s km; Green et al. 2015). Information on coral reef fish movements has previously been used to make predictions about the conservation effectiveness of reserves, including species-specific guidelines on minimum reserve sizes (Kramer & Chapman 1999; Green et al. 2015). However, guidelines do not specify the consequences of alternative conservation decisions (Green et al. 2015), which would support stakeholder engagement and decision making. Here, we use a combination of empirical data on the density, schooling behavior, and home range of a representative selection of coral reef fishes in order to specify the relationship between reserve size and conservation effectiveness. We measure conservation effectiveness by using a simple spatial model to determine the numbers of individual fish a given reserve can be expected to protect, and we analyze these numbers with respect to the locally expected population sizes, fishery value, and functional role of simulated species. We then derive linear regression coefficients that can be used by conservation planners to approximate our results and support locally effective conservation decisions.

Methods

Data collection

Movement data were compiled from a recent review (Green et al. 2015). All coral reef fishes with data classified as "home range" were initially included. We then used fish surveys from lightly fished areas in the Solomon Islands (Pacific) and from MPAs in Belize (Caribbean) to add data on density and schooling behavior to matching species in our list (n = 38 from the Solomon Islands, n = 18 from Belize). Fish surveys using a very similar methodology for 10 other species in our list were identified by literature review, yielding a total of 66 species with robust and comparable data on home range, density, and schooling behavior. Additional information on the geographic distribution, maximum length, fishery value, habitat, and diet of this final set of species was downloaded from Fishbase (Froese & Pauly 2016; see Supporting Information Methods, Table S1, and Figure S1 for details).

Modeling procedure

To quantify the relationship between reserve size and conservation effectiveness, we developed a simple spatial model. The model sampled our empirical data set in order to capture natural variability in both the number and home range of individual fish encountered in hypothetical reserves. All simulations used a one-dimensional modeling environment at a resolution of 1 m, implicitly assuming that home ranges are circular. All simulations started by first determining the size of the seascape in which a reserve was enforced (e.g., 1 km). The model then implemented multiple hypothetical fish surveys (n = 100 replicates per reserve size and species). These hypothetical surveys determined the number and distribution of fish present in the reserve. In the next step, the model assigned home range values to each fish (or group of fish), assuming that their previously assigned locations represent the centers of their home ranges, and that movements are thus confined to 0.5 × the assigned home range value on either side of the assigned location. In the final step, the model calculated whether the movements of individuals exceeded reserve boundaries.
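The placement-and-containment logic just described fits in a few lines of code. The sketch below is a minimal re-implementation under simplifying assumptions: species parameters are invented, density is expressed per metre of the one-dimensional reserve, and a Poisson draw stands in for the study's resampling of empirical survey data.

import numpy as np

rng = np.random.default_rng(1)

def mean_protected(reserve_m, density_per_m, home_ranges_m, n_reps=100):
    """Mean number of fish whose entire home range fits inside a 1-D reserve."""
    counts = []
    for _ in range(n_reps):
        # Number and centre locations of fish present in this replicate survey.
        n_fish = rng.poisson(density_per_m * reserve_m)
        centres = rng.uniform(0.0, reserve_m, n_fish)
        # Assign each fish a home range; movements span hr/2 on either side.
        hr = rng.choice(home_ranges_m, size=n_fish)
        # Protected only if [centre - hr/2, centre + hr/2] lies inside [0, reserve].
        inside = (centres - hr / 2.0 >= 0.0) & (centres + hr / 2.0 <= reserve_m)
        counts.append(inside.sum())
    return float(np.mean(counts))

# Hypothetical species: 0.005 fish per metre of reserve, home ranges of 100-500 m.
for size_m in (400, 1000, 2000, 10000):
    print(size_m, mean_protected(size_m, 0.005, np.array([100.0, 200.0, 300.0, 500.0])))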
Conservatively, we assumed that this situation would lead to eventual mortality from fishing. That is, only individuals whose entire home range was contained within reserve boundaries were assumed to be protected. A more detailed description of the modeling procedure is given in the Supporting Information Methods.

Metrics of conservation effectiveness

Conservation effectiveness was calculated as the mean number of protected individuals per species across 100 replicate simulations for each reserve size between 100 m and 100 km. In combination with complementary data, such as total reserve coverage across species ranges, this metric is suitable for population viability analyses. Here, we focused on analyzing a more intuitive and localized metric by normalizing predictions of protected individuals based on the maximum number of individuals a given reserve could be expected to protect. For example, a mean density of 0.005 ± 0.005 fish/m² implies that a 1 km wide reserve can be expected to provide for the protection of at most 5 (±5) individuals. We assumed that "full" local protection was achieved if model predictions equaled or exceeded 95% of this expected maximum. Thus, if on average at least 0.95 × 5 = 4.75 individuals were predicted to be protected, a reserve size of 1 km was assumed to provide for full protection. "Partial" protection was assumed if fewer than 95% but at least 50% of the expected maximum number of individuals was predicted to be protected.

Data analysis

Relationships between the mean home ranges, mean densities, and maximum lengths of fishes were visualized in scatter plots and characterized by calculating Pearson's correlation coefficient. Least-squares linear regressions were used to determine how well these primary data alone could approximate simulation-based results (see Supporting Information Methods for details). Maximum lengths of all species in an existing coral reef fish community in Kimbe Bay, Papua New Guinea (Table S2), were used as an example to apply the resulting regression coefficients in order to quantify the conservation effectiveness of various reserve sizes. A link to download reserve design software that performs such calculations of reserve-size conservation effectiveness based on regression coefficients for mean home ranges, mean densities, and maximum fish lengths will soon be available at www.marinespatialecologylab.org.

Practical application of model predictions

Assuming that fish are equally likely to move in any direction, our one-dimensional simulations represent the conservation effectiveness of the minimum diameter of a real (two-dimensional) reserve. However, conservation planners are more likely to apply model predictions in support of decisions on reserve sizes in one specific direction. A recent example of this is our own experience with the designation of no-take reserves in Indonesia. Most coral reefs in our study areas were fringing reefs, which extend along the coastline. While reef fishes might then have a potentially extensive home range in the alongshore direction, their movements across a depth gradient in the offshore direction are restricted by the extent of available reef habitat (from lagoons to reef slopes and any offshore reef patches). In consequence, we used predictions of conservation effectiveness to quantify the consequences of alternative decisions on reserve sizes in the alongshore direction, while suitable reserve sizes in the offshore direction were assessed based on local reef geomorphology.
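The protection thresholds defined under "Metrics of conservation effectiveness" translate directly into a small classification rule. The sketch below reproduces the worked example from that section (a 1 km reserve holding at most about five individuals); the function name and handling of edge cases are ours, not the study's.

def protection_level(mean_protected_n, expected_max_n):
    """Classify reserve effectiveness against the expected maximum number of residents.

    'full'    : >= 95% of the expected maximum protected
    'partial' : >= 50% but < 95%
    """
    if expected_max_n <= 0:
        return "undefined"
    fraction = mean_protected_n / expected_max_n
    if fraction >= 0.95:
        return "full"
    if fraction >= 0.50:
        return "partial"
    return "insufficient"

# 0.005 fish/m over a 1 km reserve -> at most ~5 residents; a mean of
# 4.75 protected individuals (0.95 x 5) is the boundary of full protection.
print(protection_level(4.75, 5.0))   # -> 'full'
print(protection_level(2.50, 5.0))   # -> 'partial'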
Results

Relationships between the home range, density, and length of simulated fish species followed our general expectations: home ranges and lengths were strongly positively correlated, while densities and lengths were negatively correlated. The relationship between home ranges and densities was not clear (Figure S2). Predicted numbers of protected individuals were highly variable (Table S3), corresponding, for example, to >2,000 individuals of butterflyfishes and not a single individual of some parrotfishes in a 1 km² reserve. Small reserves that are only 400 m wide, such as those commonly found in the Philippines, achieved the partial protection of only 17% of all simulated species (i.e., ≥50% of the expected number of resident individuals moved within reserve boundaries). Most of these species were of comparatively low fishery value. Not surprisingly, larger reserves achieved higher protection levels for more species. For example, a 2 km wide reserve (the global median) ensured the partial protection of 56% of all species, including several of high fishery value (Figures 1A and 2A). A reserve size of 10 km was required to achieve the partial protection of almost all species (94%).

Full protection, which we assumed if ≥95% of the expected number of resident individuals moved exclusively inside reserve boundaries, required much larger reserves than partial protection. With a global median diameter of 2 km, for example, standard reserves can be expected to achieve the full protection of only 2% of all simulated species. And even much larger reserves with a diameter of 10 km must be expected to protect only 35% of all species fully, merely starting to include species of high fishery value (Figures 1B and 2C). Across all modeling scenarios, we found that the number of partially protected species increased most rapidly up to a reserve size of 2 km. However, reserve sizes >2 km were needed to avoid underrepresenting certain functional groups (Figure 2B), which are groups of species assigned based on ecological role, such as feeding mode, rather than phylogeny (Table S1). Small reserves underrepresented primarily piscivores (including groupers, snappers, trevallies, and sharks), while 10 km wide reserves achieved the partial protection of almost all species and, thus, of functional diversity. Full protection of nearly all species required a 100 km wide reserve (Figure 2D).

Conservation effectiveness predicted based on multiple linear regressions provided a close match to these simulation-based outcomes (R² ≥ 0.81, P < 0.0001; see Table 1). In general, data on the mean home range of fishes alone allowed for a robust estimate of the reserve sizes required to achieve variable protection levels (Figure 3A). However, specifically when home ranges were small (<200 m) and densities highly variable (common for the many aggregating coral reef fish species), or if territoriality was assumed to minimize direct interactions among conspecifics, density was an important predictor of conservation effectiveness (Figure 3B). A close match between simulation- and regression-based predictions was also achieved by using maximum fish lengths as the single predictor (R² = 0.38-0.45; Table 2, Figure 3C). Length-based predictions were on average 1.4 (±1.2 SD) times higher than predictions based on the multivariate model. For partial protection, deviations in reserve size predictions were unlikely to exceed 1 km (Figure S3B), but deviations in predictions for full protection were more substantial (Figure S3C).

In our applied example, reserve sizes predicted to protect the coral reef fish community in Kimbe Bay, Papua New Guinea, yielded results similar to those based on explicit simulations. Reserve sizes of 2-10 km achieved the partial protection of almost the entire fish community, while full protection of most species, representing the complete functional or size-frequency spectrum, required >10 km wide reserves (Figure 4).

Figure 2 (caption): Percentages of species protected in reserves of increasing size. Results refer to "partial" protection (≥50% of the expected maximum number of individuals; A, B) and "full" protection (≥95% of the expected maximum number of individuals; C, D). Unequal proportions of colors in B and D highlight variation in the representation of functional groups, which are ordered based on the reserve sizes needed to achieve maximum protection; corallivores, for example, were the easiest group to protect, while piscivores required the largest reserves to achieve a given protection level.

Table 1 (notes): All regressions were highly significant (P < 0.0001), explaining at least 81% of the variation in simulation-based predictions. Mean home range was always a highly significant predictor (P ≤ 1.2 × 10⁻⁷), yielding close fits to simulated reserve sizes also in univariate regressions (see Supporting Information). Mean density was also a significant predictor (P ≤ 6.4 × 10⁻⁵), but primarily when home ranges were small, resulting in a significant interaction term (P ≤ 9.1 × 10⁻⁴). Values in brackets specify lower and upper 95% confidence intervals.

Figure 3 (caption): Relationship between the predictors used for linear regressions and the simulated reserve sizes required for "partial" (green) and "full" (blue) protection of coral reef fishes. Mean home range (A) and density (B) were used in multiple regressions to predict simulated reserve sizes; more readily available estimates of maximum fish length (C) were used in univariate regressions. Mean home ranges >200 m were the single most important predictor (see the thick regression line in A and Table S5). See Tables 1 and 2 and the Supporting Information for details.

Table 2 (notes): Maximum fish length was a significant predictor across all protection levels (P < 0.0001), explaining at least 38% of the variation in simulation-based predictions (Figure 3C). Values in brackets specify lower and upper 95% confidence intervals.

Figure 4 (caption): Predictions are based on maximum fish lengths, approximating the simulated reserve sizes required to achieve "partial" (50%; A) and "full" (95%; B) protection (see Table 2 and Supporting Information Methods for details). Light blue areas highlight 95% confidence limits. Numbers above dotted lines specify the size composition of protected fish communities, giving means ± SDs and, in brackets, medians and maximum lengths. Across all species, mean length was 42 ± 96 cm (20, 2,000).

Discussion

Marine reserves are increasingly used to help conserve functional coral reef ecosystems, specifically where the capacity to regulate human activities by other means is limited (White et al. 2014). Knowledge of the relationship between reserve size and conservation effectiveness is a fundamental requirement for decision makers to achieve this objective (Edgar et al. 2014). Here, we quantified the protection of coral reef fishes in reserves of various sizes, providing generic formulas that can easily be applied by
conservation planners in order to support locally effective decision making. Importantly, this will be possible even if no data on the movements and densities of resident fish species are available. Our study suggests that most coral reef fishes are at least partially protected in standard reserves around the world. However, our findings raise concerns that currently implemented reserves are biased toward the protection of small species of comparatively low fishery value. Reserves in the Philippines are an example of this potential bias, with diameters generally less than 1 km. Despite this, and even though the underlying reasons are unclear, empirical data have shown that reserves in the Philippines function to restore the biomass of fishery species (e.g., Russ et al. 2004). Even minor increases in the diameter of reserves up to about 10 km should nevertheless help to increase both the number and functional representation of species experiencing increasing levels of protection. Clearly, decisions on the size of reserves are based not only on considerations of conservation effectiveness but also of socioeconomic impact (Margules & Pressey 2000). For example, the short-term fishery impacts of large reserves might be overly burdensome (Brown et al. 2014; Ovando et al. 2016). It is also possible that decision makers seek to enforce small reserves because they are expected to benefit fisheries by allowing for higher exports of adults and larvae to fished areas than large reserves (Kramer & Chapman 1999; Hastings & Botsford 2003; Gaines et al. 2010). In some cases, this fisheries management objective will compromise effective fish population recovery and species conservation in reserves (Moffitt et al. 2009). However, in coral reef fishes, current knowledge suggests that the scales of larval dispersal far exceed those of adult home ranges, including in species that are large and important for fisheries (Almany et al. 2007; Harrison et al. 2012; Green et al. 2015; Jones 2015; Williamson et al. 2016; Almany et al. 2017). This novel insight suggests that trade-offs in decisions on reserve sizes to support biodiversity conservation versus fisheries management might often be negligible. Even a 10 km wide reserve, for example, which is likely to protect many coral reef fishes very effectively, is likely to allow for sufficient export of locally produced larvae to benefit adjacent fishing grounds (Krueck et al. 2017). Most previously published recommendations of reserve sizes are close to or higher than 10 km (Metcalfe et al. 2015). Estimates of larval dispersal distances, for example, led to reserve size recommendations between 4 and 20 km in order to ensure that enough larvae are locally retained (Palumbi 2003; Shanks et al. 2003; Shanks 2009). Intuitively, the protection of adults might require even larger reserves, specifically if species are highly mobile (Palumbi 2004; Kaiser 2005). However, minimum recommended reserve sizes based on "rule of thumb" guidelines for the protection of adult coral reef fishes rarely exceed a few kilometers (Green et al. 2015). These "rule of thumb" guidelines assume that the diameter of a reserve should be larger than twice the mean home range of any focal species it aims to protect (Green et al. 2015). Applying this rule is likely to support the partial protection of many species, but our results suggest that full protection will require reserve diameters that are 28 ± 23 (mean ± SD) times larger than the mean home ranges of resident species.
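The "generic formulas" referred to at the start of this Discussion are ordinary least-squares regressions of simulated reserve size on species traits. The sketch below shows how a planner could rebuild such a predictor; the per-species numbers are invented stand-ins rather than the study's data (Tables 1 and 2), and the home-range × density interaction term mirrors the one the results report as significant.

import numpy as np
import statsmodels.api as sm

# Invented per-species values standing in for the empirical data set.
home_range_m = np.array([60.0, 150.0, 400.0, 900.0, 2500.0, 6000.0])
density      = np.array([0.02, 0.01, 0.004, 0.001, 4e-4, 1e-4])      # fish/m^2
reserve_km   = np.array([0.6, 1.3, 2.8, 6.5, 14.0, 38.0])            # simulated size, partial protection

# Multiple regression with the home-range x density interaction.
X = sm.add_constant(np.column_stack([home_range_m, density, home_range_m * density]))
fit = sm.OLS(reserve_km, X).fit()
print(fit.params, fit.rsquared)

# With fitted coefficients, a new species' required reserve size is one dot product.
new_species = np.array([1.0, 800.0, 0.002, 800.0 * 0.002])
print(float(new_species @ fit.params))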
Earlier work on this relationship between home ranges and reserve sizes suggested higher conservation effectiveness (Kramer & Chapman 1999), but did not capture the substantial natural variation in both home ranges and densities of individual fishes. Notable uncertainties underlying the recommendations of reserve sizes in this and other studies include our current lack of understanding of the long-term movements of most fish species (Green et al. 2015). Moreover, even reserves that are larger than necessary to protect resident populations could fail to conserve species if they do not cover an overall sufficient proportion of the meta-population (Botsford et al. 2001). Importantly, the meta-population includes all life history stages and critical habitats, such as nursery areas and spawning grounds. Multiple additional ecological criteria also need to be taken into account alongside decisions on the appropriate size and total coverage of reserves to ensure effective species conservation (Roberts et al. 2003; Green et al. 2015). In addition to these considerations, one of the major uncertainties about the conservation effects of marine reserves is their impact on ecological interactions among multiple species. Our model did not capture such interactions, primarily because we would not have been able to parameterize them meaningfully. Nevertheless, our results provide support for ecosystem-based management by specifying the maximum level of functional representation a reserve of a given size can be expected to achieve. If species interactions generate trophic cascades (Mumby et al. 2012), then this will benefit some and disadvantage other species. However, specifically where reserves are needed to protect species under threat from heavy overfishing, species interaction strengths will tend to be weak (Bascompte et al. 2005). Both predators and their prey might then be able to recover (Micheli, Amarasekare et al. 2004). Empirical data on the conservation effects of reserves suggest that declines in numbers or biomass relative to pre-reserve conditions are uncommon, affecting about 20% of all species (Micheli, Halpern et al. 2004). The vast majority of species across taxonomic and functional groups worldwide has been found to increase in both density and biomass within reserve boundaries (Halpern 2003; Lester et al. 2009). Interestingly, the relative magnitude of these positive effects does not generally scale with the size of reserves (Halpern 2003; Lester et al. 2009). Potential explanations of this counterintuitive observation include that important drivers of reserve impacts, such as the duration since reserve establishment, fisher compliance, and relative fishing pressure, were not considered. Another potential explanation is that fishes actively avoid exposure to fishing mortality. But, perhaps most importantly, most reviews of reserve impacts have not been able to focus on studies explicitly designed to test the effect of reserve area or size, which might yield fundamentally different outcomes (Lester et al. 2009). Along with a better empirical understanding of reserve size impacts, our work should be extended by developing a modeling approach that incorporates additional complexity and data. An important advancement of this study is the move from population- to individual-based assessments (Codling 2008) that capture our combined empirical understanding of natural variability in the density, schooling behavior, and home range movements of coral reef fishes.
However, some defensible but conservative assumptions should be relaxed in future studies, including: (1) that individual home ranges are temporally stable, and (2) that all individuals whose home ranges extend beyond reserve boundaries will eventually be fished. More complex modeling approaches could allow for simulating how joint decisions on reserve size, placement, and total coverage interact to influence the behavior of both fishers and fishes. Behavioral changes might involve not only locally variable levels of fisher compliance and fishing pressure, but also contracted, relocated, or extended home ranges (see, e.g., Abesamis & Russ 2005). A recent meta-analysis of the effectiveness of reserves highlights that reserve size is one of the five key drivers of conservation success (Edgar et al. 2014). Thus, a precautionary approach to species protection and fisheries management demands explicit consideration of the ecological implications of decisions on reserve size. Our findings and regression-based coefficients allow reserve design practitioners to do so by quantifying the protection (and likely spillover) of locally important species under various alternative reserve size scenarios. Comparative illustrations of outcomes can highlight steep increases as well as plateaus in predicted reserve size effectiveness, which provided highly regarded decision support for recent reserve network designs in Indonesia that some of us were involved in.

Supporting Information

Figure S2: Relationships between the home range, density, and maximum body length in 66 coral reef fishes.
Figure S3: Reserve sizes predicted based on multivariate versus univariate regression models.
Table S1: The data set used for simulations in this study (Excel).
Table S2: Taxonomic information and maximum length of coral reef fish species recorded in Kimbe Bay, Papua New Guinea (Excel).
Table S3: Mean number of protected individuals for all simulated species and reserve sizes (Excel).
Table S4: Mean percentage of expected maximum protection for all simulated species and reserve sizes (Excel).
Table S5: Summary of univariate linear regressions based on mean home ranges >200 m fitted to simulation-based predictions of effective reserve sizes. Mean home ranges >200 m were the single most important predictor across all protection levels (P < 0.0001), explaining at least 64% of the variation in simulation-based reserve size predictions (Figure 3A). Values in brackets specify lower and upper 95% confidence intervals. For species with home ranges ≤200 m, reserve sizes of 1, 2, and 5 km can be estimated to protect 50%, 75%, and 95% of all individuals, respectively (Figure 3A).
Neuroimaging genomics in psychiatry—a translational approach

Neuroimaging genomics is a relatively new field focused on integrating genomic and imaging data in order to investigate the mechanisms underlying brain phenotypes and neuropsychiatric disorders. While early work in neuroimaging genomics focused on mapping the associations of candidate gene variants with neuroimaging measures in small cohorts, the lack of reproducible results inspired better-powered and unbiased large-scale approaches. Notably, genome-wide association studies (GWAS) of brain imaging in thousands of individuals around the world have led to a range of promising findings. Extensions of such approaches are now addressing epigenetics, gene–gene epistasis, and gene–environment interactions, not only in brain structure, but also in brain function. Complementary developments in systems biology might facilitate the translation of findings from basic neuroscience and neuroimaging genomics to clinical practice. Here, we review recent approaches in neuroimaging genomics: we highlight the latest discoveries, discuss advantages and limitations of current approaches, and consider directions by which the field can move forward to shed light on brain disorders.

Background

Neuroimaging genomics is a relatively new and rapidly evolving field that integrates brain imaging and individual-level genetic data to investigate the genetic risk factors shaping variations in brain phenotypes. Although this covers a broad range of research, one of the most important aims of the field is to improve understanding of the genetic and neurobiological mechanisms underlying various aspects of neuropsychiatric disorders, from symptoms and etiology to prognosis and treatment. The goal is to identify key components in biological pathways that can be evaluated or monitored to improve diagnostic and prognostic assessments, and that can ultimately be targeted by novel therapies. Broadly speaking, existing brain imaging methods can be divided into those that provide data on structure (for example, computed tomography (CT), structural magnetic resonance imaging (MRI), and diffusion-tensor imaging (DTI)); on function (for example, functional MRI (fMRI) and arterial spin labeling (ASL)); and on molecular processes (for example, single-photon emission computed tomography (SPECT) and positron-emission tomography (PET) using receptor-binding ligands, and magnetic resonance spectroscopy (MRS)) [1]. A range of additional new methods have become available for animal and/or human brain imaging, including optical imaging, cranial ultrasound, and magnetoencephalography (MEG), but to date these have been less widely studied in relation to genomics. Future work in imaging genomics will rely on further advances in neuroimaging technology, as well as on multi-modal approaches. Progress in both neuroimaging and genomic methods has contributed to important advances: from candidate-gene (or more precisely, single-variant) approaches initiated almost two decades ago [2,3], to recent breakthroughs made by global collaborations focused on GWAS [4], gene-gene effects [5], epigenetic findings [6], and gene-environment interactions [7] (Fig. 1).
Developments in the field of neuroimaging genomics have only recently begun to provide biological insights through replicated findings and overlapping links to disease; we now know the field holds much promise, but further work and developments are needed to translate findings from neuroimaging genomics into clinical practice. In this review, we discuss the most recent work in neuroimaging genomics, highlighting progress and pitfalls, and discussing the advantages and limitations of the different approaches and methods now used in this field.

Heritability estimates and candidate gene associations with imaging-derived traits

Approximately two decades ago, neuroimaging genomics had its inception: twin and family designs from population genetics were used to calculate heritability estimates for neuroimaging-derived measures, such as brain volume [8], shape [9,10], activity [11], connectivity [12], and white-matter microstructure [13]. For almost all these imaging-derived brain measures, monozygotic twin pairs showed greater correlations than dizygotic twins, who in turn showed greater correlations than more-distant relatives and unrelated individuals. These studies confirm that brain measures derived from non-invasive scans have a moderate to strong genetic underpinning [14,15] and open the doors for more-targeted investigations. These brain features might now be considered useful endophenotypes for psychiatric disorders [16] (using only certain features, for example altered brain volume, of a trait such as schizophrenia, which might have a more-robust genetic underpinning). A focus on the underlying mechanisms is central to the now highly regarded Research Domain Criteria (RDoC) research framework [17]. In contrast to classifications that focus on diagnoses or categories of disorders [18,19], RDoC emphasizes transdiagnostic mechanisms (investigating overlapping symptoms across diagnoses) that emerge from translational neuroscience [20].

Early imaging genomics work (from approximately 2000 to 2010; Fig. 1) focused predominantly on candidate-gene approaches: in the absence of large GWAS datasets, investigators relied on biological knowledge to develop hypotheses. Genetic variants or single-nucleotide polymorphisms (SNPs) identified through linkage studies, or located near or within genes with putative biological roles (particularly those involved in neurotransmission), were investigated in brain imaging studies. Early candidate genes studied in relation to brain phenotypes included the sodium-dependent serotonin transporter gene (SLC6A4) in individuals with anxiety and depression [21-23] and the catechol-O-methyltransferase gene (COMT) in individuals with schizophrenia [24-28].

A key criticism of this early work was that candidate-gene studies were insufficiently powered, with the possibility that small false-positive studies were being published while larger negative analyses were being "filed away" [29,30]. In support of this view, several meta-analyses have emphasized the inconsistency of small candidate-gene studies [31-33]. These studies noted that, given relatively small effect sizes, larger studies were required, as was a clear focus on harmonizing methods across studies to enable meaningful meta-analyses. For example, a meta-analysis of candidate studies of the rs25532 polymorphism of SLC6A4 (commonly referred to as the "short variation") and amygdala activation, which incorporated unpublished data, was unable to identify a significant association [31]. This finding cast doubt on the representativeness of effect sizes reported in early studies with positive findings, highlighted a potential "winner's curse", and emphasized the importance of publication bias in the field. However, borrowing strategic approaches from studies of anthropometric traits (the GIANT consortium), psychiatric disorders (the Psychiatric Genomics Consortium, PGC [34]), cancer (the Cancer Genomics Consortium, CGC [35]), and cardiovascular health and aging (CHARGE [36]), the imaging-genomics community has built large-scale collaborations and consortia in order to obtain the statistical power necessary to disentangle the genetic architecture of brain phenotypes [37].

Fig. 1 (caption): Timeline of methodological approaches common in neuroimaging-genomics studies of neuropsychological disorders. The field of neuroimaging genomics was initiated in the early 2000s using a hypothesis-driven candidate-gene approach to investigate brain and behavior phenotypes [2,3]. Towards the end of the decade, other candidate-gene approaches, investigating alternative genetic models, began to emerge, including gene-gene interactions [172], gene-environment interactions [7], and epigenetic effects [6]. Simultaneously, hypothesis-free approaches such as genome-wide association studies (GWAS) were initiated [173], and the need for increased statistical power to detect variants of small individual effect soon led to the formation of large-scale consortia and collaborations [36,37]. The emergence of the "big data" era presented many statistical challenges and drove the development of multivariate approaches to account for these [174]. GWAS of neuropsychological disorders soon identified significant associations with genetic variants of unknown biological role, prompting candidate neuroimaging-genomics studies to investigate and validate the genetic effects on brain phenotypes [175]. The emergent polygenic nature of these traits encouraged the development of polygenic models and strategies to leverage this for increased power in genetic-overlap studies between clinical and brain phenotypes [114]. Most recently, hypothesis-free approaches are starting to extend to alternative genetic models, such as gene-gene interactions [70].

Genome-wide association studies in imaging genomics

Imaging genomics has increasingly moved towards a GWAS approach, using large-scale collaborations to improve power for the detection of variants with small independent effects [29]. Examples of such consortia include the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) consortium [37], the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium [36], the Alzheimer's Disease Neuroimaging Initiative (ADNI), IMAGEN, which is focused on adolescents [38], and the Uniform Neuro-Imaging of Virchow-Robin Spaces Enlargement (UNIVRSE) consortium [39]. The growing number of GWAS of brain phenotypes and of neuropsychiatric disorders has, on occasion, lent support to previously reported candidate variants [40] but, importantly, has identified many new variants of interest [41]. An early study by the ENIGMA consortium comprised approximately 8000 participants, including healthy controls and cases with psychiatric disorders [42]. This study identified significant associations between intracranial volume and a high-mobility group AT-hook 2 (HMGA2) polymorphism (rs10784502), and between hippocampal volume and an intergenic variant (rs7294919).
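Consortia such as ENIGMA and CHARGE typically run the same association model in every cohort and then combine the per-cohort effect estimates by meta-analysis. The following minimal sketch shows one standard way of doing this, a fixed-effect inverse-variance-weighted combination; the cohort numbers are invented, and the published studies' exact pipelines differ in detail.

import numpy as np
from scipy.stats import norm

def ivw_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted meta-analysis for one variant."""
    betas = np.asarray(betas, dtype=float)
    weights = 1.0 / np.asarray(ses, dtype=float) ** 2   # weight = 1 / SE^2
    beta = np.sum(weights * betas) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    z = beta / se
    p = 2.0 * norm.sf(abs(z))                           # two-sided p-value
    return beta, se, p

# Three hypothetical cohorts: per-cohort effect of one SNP on hippocampal volume.
print(ivw_meta(betas=[12.1, 9.8, 14.0], ses=[4.0, 3.5, 5.2]))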
A subsequent collaboration with the CHARGE consortium, including over 9000 participants, replicated the association between hippocampal volume and rs7294919, as well as identifying another significant association with rs17178006 [43]. In addition, this collaboration has further validated and identified other variants associated with hippocampal volume [44] and intracranial volume [45], with cohorts of over 35,000 and 37,000 participants, respectively. Another analysis of several subcortical volumes (ENIGMA2), with approximately 30,000 participants, identified a significant association between a novel intergenic variant (rs945270) and the volume of the putamen, a subcortical structure of the basal ganglia [4]. More recently, a meta-analysis of GWAS of subcortical brain structures from ENIGMA, CHARGE, and the United Kingdom Biobank was conducted [46]. This study claims to identify 25 variants (20 novel) significantly associated with the volumes of the nucleus accumbens, amygdala, brainstem, caudate nucleus, globus pallidus, putamen, and thalamus amongst 40,000 participants (see the "Emerging pathways" section later for a more detailed discussion). Moreover, many large-scale analyses [15,46] are now first being distributed through preprint servers and social media. In another example, in over 9000 participants from the UK Biobank, Elliott and colleagues [15] used six different imaging modalities to perform a GWAS of more than 3000 imaging-derived phenotypes, identified statistically significant heritability estimates for most of these traits, and implicated numerous associated single-nucleotide polymorphisms (SNPs) [15]. Such works still need to undergo rigorous peer review and maintain strict replication standards for a full understanding of findings, yet this work highlights the fact that the depth of possibilities now available within the field of neuroimaging genomics appears to be outpacing the current rate of publications. As of November 2017, ENIGMA is undertaking GWAS of the change in regional brain volumes over time (ENIGMA-Plasticity), cortical thickness and surface area (ENIGMA-3), white-matter microstructure (ENIGMA-DTI), and brain function as measured by EEG (ENIGMA-EEG). Although neuroimaging measurements only indirectly reflect the underlying biology of the brain, they remain useful for in vivo validation of genes implicated in GWAS and lend insight into their biological significance. For example, the rs1006737 polymorphism in the gene encoding voltage-dependent L-type calcium channel subunit alpha-1C (CACNA1C) was identified in early GWAS of bipolar disorder [47,48] and schizophrenia [49,50], but its biology was unknown. Imaging-genomics studies of healthy controls and individuals with schizophrenia attempted to explain the underlying biological mechanisms. Studies reported associations of this variant with increased expression in the human brain, altered hippocampal activity during emotional processing, increased prefrontal activity during executive cognition, and impaired working memory during the n-back task [51-53], a series of task-based assessments relying on recognition memory capacity. As the psychiatric genomics field advances and more reliable and reproducible genetic risk factors are identified, imaging genomics will continue to help understand the underlying biology.
In particular, although GWAS can identify statistically significant associations, these have particularly small individual effect sizes and, even cumulatively, do not account for a substantial fraction of the heritability of the relevant phenotype estimated from family models [54]. Furthermore, many associated variants are currently not functionally annotated and most often are found in noncoding regions of the genome, which are not always well understood [55,56]. Increasing power, through increasing sample sizes, will likely implicate additional variants, but these might not necessarily play a directly causal role [57]. This could be because of the small effect sizes of causative variants, linkage disequilibrium with other variants, and the indirect effects of other variants in highly interconnected pathways [57]. Currently, most studies utilize participants of European ancestry, and replication studies using alternative ethnic groups are required for further discovery and validation of significant associations, which might be influenced by the populations under investigation [58]. Thus, additional strategies are needed to understand fully the genetic architecture of brain phenotypes and neuropsychiatric disorders. These methods can be summarized into three categories: first, delving deeper into rarer genetic variations; second, incorporating models of interactions; and, third, investigating more than a single locus and instead expanding to incorporate aggregate or multivariate effects; these methods and more are discussed below [57].

Copy-number variation and brain variability

Growing recognition of the neuropsychiatric and developmental abnormalities that arise from rare genetic conditions, such as 22q11 deletion syndrome [59], has led imaging-genomic studies to further explore the relationships between copy-number variations (CNVs) and neural phenotypes [60-63]. For example, in a recent large-scale study of over 700 individuals, 71 individuals with a deletion at 15q11.2 were studied to examine the effects of the genetic deletion on cognitive variables [60]. These individuals also underwent brain MRI scans to determine the patterns of altered brain structure and function in those with the genetic deletion. This study identified significant associations between this CNV and combined dyslexia and dyscalculia, and with a smaller left fusiform gyrus and altered activation in the left fusiform and angular gyri (regions in the temporal and parietal lobes of the brain, respectively). Another study investigating the 16p11.2 CNV, with established associations with schizophrenia and autism, found that the CNVs modulated brain networks associated with established patterns of brain differences seen in patients with clinical diagnoses of schizophrenia or autism [61]. These studies indicate that CNVs might play an important role in neural phenotypes, and initiatives such as ENIGMA-CNV [63] aim to explore this further.

Gene-gene interactions

Gene-gene interactions (epistasis), where the phenotypic effect of one locus is affected by the genotype(s) of another, can also play significant roles in the biology of psychiatric disorders [64]; such interactions might help account for the missing heritability observed with genetic association testing [54]. Single-locus tests and GWAS might not detect these interactions, as they use additive genetic models [64].
The inclusion of interaction tests has also, for example, been shown to improve the power for detection of the main effects in type 1 diabetes [65]. Recently, this has emerged as a focus of imaging-genomic studies, predominantly using a candidate-gene approach [66-69]. Studies of epistasis are, however, at an early stage and currently have relatively small sample sizes and lack replication attempts, limiting the validity of these findings [70]. Selecting candidate genes for investigation, usually based on significance in previous association studies, may miss important interactions with large effects [71]. Genome-wide interaction approaches may provide a more unbiased route towards understanding epistatic effects. As a proof of concept, one such study investigated genome-wide SNP-SNP interactions using participants from the ADNI cohort, with the Queensland Twin Imaging study for replication [70]. While larger-scale studies are needed to confirm specific findings, this study identified a significant association between a single SNP-SNP interaction and temporal lobe volume, which accounted for an additional 2% of the variance in temporal lobe volume (additional to the main effects of SNPs) [70]. As the power for GWAS in imaging genomics increases through growing consortia and biobanks, large-scale epistatic studies may become possible and explain more of the genetic variance underlying brain structure and function.

Gene-environment interactions

Most neuropsychiatric disorders have a multifactorial etiology [72,73], with varying heritability estimates under different conditions [74]. Imaging-genomics studies have begun to investigate how genes and the environment interact (GxE) to influence brain structure and function in relation to neuropsychiatric disorders [75]. These interactions are of further interest as emerging evidence indicates that some individuals exposed to certain environmental factors have altered treatment responses [75]. For example, GxE studies of the rs25532 polymorphism within the SLC6A4 gene indicate that carriers with depression who are exposed to recent life stressors respond poorly to treatment with certain antidepressants [76-79], but have better responses to psychotherapy compared to those with the alternative genotype [80]. Therefore, imaging genomics is ideally suited to identify possible interactions that may affect treatment responses, lend insight into these mechanisms (potentially leading to altered or new therapeutic regimens), and identify at-risk individuals who may benefit from early interventions [81,82]. Small exploratory studies have suggested that potentially interesting gene-environment interactions might exist [7,83-89]; however, the statistical power of published analyses is low, and replication is key [90,91]. Candidate-gene approaches to GxE studies have been commonplace, but these might oversimplify genetic models, as each of these variants contributes minimally to disease risk [90,91]. To ensure the effect is indeed an interaction and not due to one component of the interaction, all terms (G, E, GxE) need to be included in a regression model. Naturally, this implies that genome-wide interaction studies would require even larger sample sizes than GWAS if they are to be appropriately powered [90,91].
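The requirement just stated, that G, E, and GxE all enter the model, looks as follows in practice. This is a minimal sketch on simulated data: the variable names, effect sizes, and sample size are invented, and statsmodels' formula interface is only one of several ways to fit such a model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "G": rng.integers(0, 3, size=n).astype(float),   # risk-allele count (0/1/2)
    "E": rng.normal(size=n),                         # standardized exposure, e.g., life stress
})
# Simulated brain phenotype with main effects and a true G x E interaction.
df["phenotype"] = 0.2 * df["G"] + 0.3 * df["E"] + 0.25 * df["G"] * df["E"] + rng.normal(size=n)

# All three terms (G, E, and G:E) enter the regression, as required.
fit = smf.ols("phenotype ~ G + E + G:E", data=df).fit()
print(fit.params)
print("interaction p-value:", fit.pvalues["G:E"])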
Concerns about the measures of both phenotype and the exposome (lifetime environmental exposures) have also been raised, as studies using different measures and at different stages of life can produce conflicting results [91][92][93]. Large-scale collaborations using carefully harmonized protocols will likely be able to mitigate these limitations. Epigenetics Approaches investigating the associations between epigenetic alterations and brain measures once again began with candidate genes [94,95]. However, disparities between the methylation states of blood, saliva, and brain tissue remain an important limitation for untangling the discrepancies found in epigenetic studies [96]. To help address this, several projects, such as the Human Roadmap Epigenomics project [97], the International Human Epigenome Consortium [98], and Braincloud [99], have begun developing reference epigenomes, which could pave the way for harmonizing and pooling data across independent datasets. These projects might also provide new biologically based candidates for research; it has been suggested that genes most similarly methylated between blood and brain tissue be investigated first in neuroimaging studies [100,101]. Recently, imaging consortia such as ENIGMA have begun epigenome-wide association studies for key brain measures such as hippocampal volume, revealing promising associations [102]. Longitudinal and transgenerational studies of both healthy and at-risk individuals might also prove useful for understanding the impact of the environment on the epigenome [101]. Mapping the genetic structure of psychiatric disease onto brain circuitry Recent large-scale GWAS of psychiatric disorders have begun to identify significantly associated variants [41,103]; however, the effect sizes of these variants are small (usually less than 1%) and do not account for the predicted heritability of these traits (as high as 64-80% in schizophrenia [104,105]). It is hypothesized that many psychiatric disorders have a polygenic (influenced by multiple genetic variants) and heterogeneous (disease-causing variants can differ between affected individuals) genetic architecture, resulting in a failure to reach statistical significance and contributing to the phenomenon of missing heritability [106]. GWAS of subcortical brain structure and cortical surface area have also started to reveal significant genetic associations and a polygenic etiology [44][45][46]107], although the extent of polygenicity appears to be less than that predicted for psychiatric disorders [107]. Recent studies have begun to disentangle whether the genetics of brain phenotypes overlap with that of psychiatric disorders by making use of their polygenic nature [108,109]. Polygenic risk scoring (PRS) is one such analytical technique that exploits the polygenic nature of complex traits by generating a weighted sum of associated variants [106,110,111]. PRS uses variants of small effect (with p values below a given threshold) identified in a GWAS on a discovery dataset to predict disease status for each participant in an independent replication dataset [111]. In large-scale GWAS of schizophrenia, for example, the PRS now accounts for 18% of the variance observed [41]. PRS in imaging genomics has the potential advantage of addressing many confounders, such as the effects of medication and the disease itself through investigation of unaffected and at-risk individuals [112,113].
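A minimal sketch of how such a score is typically assembled is given below, assuming additive (0/1/2) genotype coding and simulated summary statistics; a real pipeline would additionally handle LD clumping, allele alignment, and ancestry matching, none of which are shown here.

```python
# Minimal polygenic risk score (PRS) sketch: weighted sum of risk alleles,
# restricted to variants passing a p-value threshold in the discovery GWAS.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_snps = 300, 5000
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # 0/1/2

# Summary statistics from a (hypothetical) discovery GWAS.
betas = rng.normal(0, 0.02, size=n_snps)       # per-allele effect sizes
pvals = rng.uniform(size=n_snps)               # association p-values

def polygenic_score(geno, betas, pvals, p_threshold):
    """Sum of risk alleles weighted by effect size, at one p-value threshold."""
    keep = pvals < p_threshold
    return geno[:, keep] @ betas[keep]

# Scores are usually computed over several thresholds, and the best-predicting
# threshold is chosen in the target sample (with appropriate correction).
for thr in (5e-8, 1e-3, 0.05, 1.0):
    scores = polygenic_score(genotypes, betas, pvals, thr)
    print(f"threshold {thr:g}: {int((pvals < thr).sum())} SNPs, "
          f"score SD = {scores.std():.3f}")
```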
For example, PRS for major depressive disorder (MDD; n = 18,749) has been associated with reduced cortical thickness in the left amygdala-medial prefrontal circuitry among healthy individuals (n = 438) of European descent [114]. However, as with other approaches, PRS is not without limitations. For example, an additive model of variant effects is assumed, disregarding potentially more complex genetic interactions [115]. The predictive capacity of PRS is also largely dependent on the size of the discovery dataset (ideally greater than 2000 individuals), which is likely still underpowered in many instances [106]. Furthermore, PRS does not provide proportionate weight to biologically relevant genes for neural phenotypes as it is also subject to the confounding elements of GWAS emphasized earlier [57,113,116]. Thus, other approaches such as linkage disequilibrium score regression for genetic correlation (a technique that uses GWAS summary statistics to estimate the degree of genetic overlap between traits) [117], Bayesian-type analyses [118], and biologically informed multilocus profile scoring [119,120] might be alternatives worth exploring, perhaps in conjunction with PRS [121]. More recently, an omnigenic model has been proposed, which takes into account the interconnected nature of cellular regulatory networks that can confound other polygenic models [57]. Linkage-disequilibrium score regression [117] did not identify genetic overlap between schizophrenia (33,636 cases, 43,008 controls) and subcortical volumes (n = 11,840 healthy controls), but provided a useful proof-of-principle of this approach [108]. A partitioning-based heritability analysis [122], which estimates the variance explained by all the SNPs on a chromosome or the whole genome rather than testing the association of particular SNPs with the trait, indicated that variants associated with schizophrenia (n = 1750) overlapped with eight brain structural phenotypes, including intracranial volume and superior frontal gyrus thickness [109]. Publicly available GWAS data for several other psychiatric disorders were also investigated and indicated that intracranial volume was enriched for variants associated with autism spectrum disorder (ASD), right temporal pole surface area was enriched for variants associated with MDD, and left entorhinal cortex thickness showed enrichment for bipolar disorder risk variants [109]. These types of analyses confirm a common genetic basis between risk for altered brain structure and neuropsychiatric disorders [16]. Multivariate approaches To explain more of the variance in gene-imaging findings, techniques for data-driven discovery using multivariate approaches have begun to emerge in this field. These techniques include methods such as independent component analysis (ICA) [123], canonical correlation analysis [124], sparse partial least squares [125], and sparse reduced-rank regression [126]. To date, the increased explanatory power provided by these approaches has mainly been shown in single datasets or relatively small studies; these often claim to identify significant associations at a genome-wide level [127][128][129]. Owing to the large number of input variables and parameters (many dimensions), often paired with limited datapoints and split-sample training and testing from the same cohort, there can be concerns about overfitting and models that do not generalize. Thus, dimensionality reduction, in the imaging or genetic domain, is often necessary.
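As a rough illustration of this multivariate strategy, the sketch below pairs PCA-based dimensionality reduction with canonical correlation analysis on simulated imaging and genetic matrices, and uses a held-out split as a basic guard against the overfitting concern raised above; all sizes, component counts, and data are placeholders, not any published pipeline.

```python
# Sketch: canonical correlation analysis (CCA) linking a block of imaging
# features to a block of genetic features, after PCA dimensionality reduction.
# Simulated data; a held-out split checks that the learned correlation is not
# an artifact of fitting and testing in the same subjects.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 400
imaging = rng.normal(size=(n, 200))    # e.g., regional volumes / thicknesses
genetics = rng.normal(size=(n, 1000))  # e.g., dosage-coded SNPs

train, test = slice(0, 300), slice(300, None)
pca_img = PCA(n_components=20).fit(imaging[train])
pca_gen = PCA(n_components=20).fit(genetics[train])

cca = CCA(n_components=1)
cca.fit(pca_img.transform(imaging[train]), pca_gen.transform(genetics[train]))

# Correlation of the first canonical pair on held-out subjects.
u, v = cca.transform(pca_img.transform(imaging[test]),
                     pca_gen.transform(genetics[test]))
r_holdout = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
print(f"held-out canonical correlation: {r_holdout:.2f}")  # ~0 for pure noise
```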
Dimensionality-reduction techniques can group or cluster these large sets of variables (dimensions) in either domain; approaches guided by a priori knowledge might prove useful as the field advances [130]. Each multivariate approach has particular advantages and limitations. Data-driven multivariate techniques, such as ICA, in particular, can lead to sample-specific solutions that are difficult to replicate in independent datasets. The large datasets now available through collaborative efforts provide the opportunity to assess and compare the utility of these approaches [37]; on the other hand, larger datasets can also overcome the need for dimensionality-reduction methods if the sample sizes prove sufficient for mass univariate testing. Emerging pathways Understanding the pathways involved in brain development, structure, function, and plasticity will ultimately lead to an improved ability to navigate neuropsychiatric disease pathophysiology. Investigation of the signatures of selection affecting neuropsychiatric, behavioral, and brain phenotypes have indicated both recent and evolutionarily conserved polygenic adaptation, with enrichment in genes affecting neurodevelopment or immune pathways [131] (Table 1). Annotation of the loci associated with subcortical brain volumes has already identified an enrichment of genes related to neurodevelopment, synaptic signaling, ion transport and storage, axonal transport, neuronal apoptosis, and neural growth and differentiation [4,15,46] (Table 1). (Table 1, excerpt. Brain connectivity: brain white matter microstructure is disrupted globally in schizophrenia [179]; enriched pathways: ATP synthesis and metabolism, axon guidance, fasciculation during development; Fornito et al. 2015 [133], Vértes et al. 2016 [134]. Transcriptional profiles: transcription factor EGR1 significantly downregulated in brains of schizophrenic patients compared with controls [180]; enriched pathways: ion channels, synaptic activity, ATP metabolism; Wang et al. 2015 [136], Richiardi et al. 2015 [137].) Studies have also implicated pleiotropy (a single locus that affects multiple phenotypes) amongst these loci [46]. Furthermore, many of the associated neurodevelopmental genes are conserved across species, providing a foundation for translational research in imaging genomics [46]. Advances in our concepts of brain connectivity can provide a useful framework for further integration of imaging and genomics data. Recent work has emphasized that hubs of neural connectivity are associated with transcriptional differences in genes affecting ATP synthesis and metabolism in mice [132], consistent with their high energy demands [132]. Analogous findings have been found in humans [133,134]. Studies of the transcriptome and the metabolome, now curated by efforts such as the Allen Brain atlas [135], increasingly allow study of issues such as the relationship between resting-state functional connectivity and gene-expression profiles, with early work indicating enrichment in hubs of genes related to ion channels, synaptic activity, and ATP metabolism [136,137]. Key considerations in imaging-genomic analyses While imaging genomics has great potential, the limitations associated with both genetic [57,138] and imaging [139] studies, as well as some unique concerns, deserve consideration. Here we discuss three important issues, namely (i) possible confounders of heritability estimates in imaging measures, (ii) the necessity of methodological harmonization for cross-site collaborations, and (iii) accounting for the multiple testing burden.
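The scale of the third issue is worth previewing before each is discussed in turn: when every variant is tested against every imaging-derived trait, corrections must cover the full grid of tests, not just the genetic dimension. The sketch below uses arbitrary toy sizes and null p-values purely for illustration.

```python
# Sketch of the multiple-testing burden in imaging genomics: p-values arise
# on a grid of (variants x imaging traits), and corrections must span the
# whole grid rather than one dimension of it.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
n_snps, n_traits = 10_000, 50                  # toy grid; real grids are far larger
pvals = rng.uniform(size=n_snps * n_traits)    # null p-values for illustration

bonferroni_alpha = 0.05 / pvals.size
reject_fdr, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"tests: {pvals.size:,}")
print(f"Bonferroni threshold: {bonferroni_alpha:.2e}")
print(f"FDR-significant tests under the null: {int(reject_fdr.sum())}")
```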
Environmental, physiological, and demographic influences can affect heritability estimates and measurements of brain-related features [72,73,140]. Most psychiatric disorders produce subtle changes in brain phenotypes and multiple potential confounding factors might obscure disease-related effects, limiting their utility as endophenotypes. Examples of such potential factors include motion [141,142] and dehydration [143,144], to name a few. Differences in data acquisition and analysis types might also contribute to variation between studies [145], particularly for small structures and grey-matter volumes [146][147][148]. These potential confounding factors can, however, be included as covariates and adjusted for. This approach was used, for example, to control for the effects of height in the largest imaging-genetics meta-analysis of intracranial volume [45]. The distribution of these covariates can also be balanced between cases and controls. Furthermore, potential confounders can be mitigated by investigating healthy individuals only or a single ethnic group, sex, or age group, for example [149]. However, healthy individuals with certain genotypes might be more susceptible to certain confounding factors, such as smoking, which could lead to spurious associations [139]. Furthermore, caution should be taken when interpreting results from fMRI studies, owing to the dependence on the quality of both the control condition and the task of interest [150]. These tasks should improve the sensitivity and power to detect genetic effects, adequately stimulate regions of interest, be appropriate for the disorder of interest, reliably evoke reactions amongst individuals, and highlight variability between them [150][151][152]. Resting-state fMRI studies also require consideration as these might be experienced differently between patients and controls [153]. Studies of unaffected siblings could be beneficial to minimize the potential confounders of disease on brain measures [154]. Meta-analytical approaches need to take the comparability of tasks into account, as apparently slight differences can considerably confound associations [155]. ENIGMA, for example, attempts to reduce these effects through predetermined protocols and criteria for study inclusion [37]. There is often a need to account for multiple testing in imaging genomics beyond that which is done in genetics alone. This is an important issue to emphasize [149,156]. Studies performing a greater number of tests, especially genome-wide analyses [157] and multimodal and multivariate approaches [130], might require more stringent corrections. Approaches to reduce the dimensions of these datasets are being developed and include the use of imaging or genetic clusters [66,[158][159][160][161][162] and machine learning methods [163]. However, replication studies and meta-analyses of highly harmonized studies remain the most reliable method for reducing false-positive associations [164]. Conclusions and future directions The field of imaging genomics is moving forward in several research directions to overcome the initial lack of reproducible findings and to identify true findings that can be used in clinical practice. First, well-powered hypothesis-free genome-wide approaches remain key. Research groups are now routinely collaborating to ensure adequate power to investigate CNVs and epigenetic, gene-gene, and gene-environment interactions.
Second, advances in both imaging and genetic technologies are being used to refine the brain-gene associations; next-generation sequencing (NGS) approaches now allow for more in-depth investigation of the genome and deeper sequencing (whole-exome and whole-genome); and more refined brain mapping will ideally allow the field to localize genetic effects to specific tissue layers and subfields as opposed to global structural volumes. Third, replication attempts are crucial, and investigations in various population groups might validate associations and discover new targets that lend further insights into the biological pathways involved in these traits. Finally, specific initiatives to integrate neurogenetics and neuroimaging data for translation into clinical practice are being routinely advocated. These might include efforts in translational neuroscience [165], a systems-biology perspective [16,[166][167][168], and longitudinal data collection in community and clinical contexts [169]. Current psychiatric treatments have important limitations. First, many patients are refractory to treatment. For example, only approximately 60% of patients with depression achieve remission after either, or a combination of, psychotherapy and pharmacotherapy [170]. Second, clinical guidelines often focus on the "typical" patient, with relatively little ability to tailor treatments to the specific individual. Such limitations speak to the complex nature of the brain and of psychiatric disorders, and the multiple mechanisms that underlie the relevant phenotypes and dysfunctions [20]. In order to progress into an era of personalized medicine, addressing the unique environmental exposures and genetic makeup of individuals [171], further efforts to improve statistical power and analyses are needed. Ultimately, understanding the mechanisms involved in associated and interconnected pathways could lead to identification of biological markers for more-refined diagnostic assessment and new, more effective, and precise pharmacological targets [20,171]. These goals can be fostered through continued efforts to strengthen collaboration and data sharing. Indeed, such efforts have led to a growing hope that findings in imaging genomics might well be translated into clinical practice [166][167][168]. The studies reviewed here provide important initial insights into the complex architecture of brain phenotypes; ongoing efforts in imaging genetics are well positioned to advance our understanding of the brain and of the underlying neurobiology of complex mental disorders, but, at the same time, continued and expanded efforts in neuroimaging genomics are required to ensure that this work has clinical impact. Authors' contributions All authors contributed to the writing of this manuscript. All authors read and approved the final manuscript. Competing interests The authors declare that they have no competing interests.
Phenotypic and genotypic characterization of carbapenem-resistant Acinetobacter baumannii isolates from Egypt Antibiotic use is largely under-regulated in Egypt leading to the emergence of resistant isolates. Carbapenems are last resort agents to treat Acinetobacter baumannii infections resistant to other classes of antibiotics. However, carbapenem-resistant isolates are emerging at an alarming rate. This study aimed at phenotypically and molecularly characterizing seventy four carbapenem-unsusceptible A. baumannii isolates from Egypt to detect the different enzymes responsible for carbapenem resistance. Carbapenemase production was assessed by a number of phenotypic methods: modified Hodge test (MHT), carbapenem inactivation method (CIM), combined disc test (CDT), CarbAcineto NP test and boronic acid disc test. Polymerase chain reaction (PCR) was used to screen the isolates for the presence of some genes responsible for resistance to carbapenems, as well as some insertion sequences. PCR amplification of class D carbapenemases revealed the prevalence of blaOXA-51 and blaOXA-23 in 100% of the isolates and of blaOXA-58 in only one isolate (1.4%). blaVIM and blaNDM-1 belonging to class B metallo-β-lactamases were present in 100 and 12.1% of the isolates, respectively. The prevalence of ISAba1, ISAba2 and ISAba3 was 100, 2.7 and 4.1%, respectively. None of the tested isolates carried blaOXA-40, blaIMP, blaSIM, blaSPM, blaGIM or the class A blaKPC. Taking PCR as the gold standard method for the detection of different carbapenemases, the sensitivities of the MHT, CIM, CDT, CarbAcineto NP test and boronic acid disc/imipenem or meropenem test for this particular collection of isolates were 78.4, 68.9, 79.7, 95.9, and 56.8% or 70.3%, respectively. The widespread detection of carbapenem-resistant A. baumannii (CR-AB) has become a real threat to the efficacy of treatment regimens. Among the studied cohort of CR-AB clinical isolates, blaOXA-51, blaOXA-23 and blaVIM were the most prevalent, followed by blaNDM-1 and blaOXA-58. The genotypic detection of carbapenemases among CR-AB clinical isolates using PCR was most conclusive, followed closely by the phenotypic testing using CarbAcineto NP test. The unnecessary and extensive use of antibiotics in healthcare settings led to the acquisition of novel genetic determinants for antibiotic resistance. These, added to the intrinsic resistance of A. baumannii to many antibiotics, resulted in the development of multidrug-resistant, extensively drug-resistant and even pan drug-resistant A. baumannii strains, mainly in intensive care settings [12][13][14], limiting the options available to treat such infections to a few agents such as the carbapenems [15]. Yet, A. baumannii has also developed resistance to carbapenems and carbapenem-resistant A. baumannii (CR-AB) isolates have been reported all over the world, challenging modern day medicine [16]. Carbapenem resistance among A. baumannii strains occurs due to the loss or modification of porins or in some rare cases due to modification of penicillin binding proteins [17]. However, the main resistance mechanism is the production of β-lactamase enzymes [18][19][20]. Four molecular β-lactamase classes (A, B, C and D) have been detected in A. baumannii [21,22]. Only a few K. pneumoniae carbapenemase (KPC) type enzymes from class A β-lactamases have an effect on carbapenems contrary to classes B and D which act efficiently on carbapenems [23]. 
Class B β-lactamases are metallo-β-lactamases (MBL) that need zinc ions for their catalytic activity [23]. Four examples of MBL are known in A. baumannii, including New Delhi Metallo-β-lactamase (NDM), Imipenemase (IMP), Seoul Imipenemase (SIM) and Verona integron-encoded metallo-β-lactamase (VIM) [24,25]. Two variants of NDM have been reported (NDM-1 and NDM-2). The two variants have been reported in Egyptian A. baumannii clinical isolates. The first variant (NDM-1) was detected in a repatriated Czech citizen after being admitted to a hospital in Egypt in 2011; it is not clear whether the patient was colonized or infected [26]. The second variant (NDM-2) was isolated in Germany from a venous line catheter placed for a child while being hospitalized in Egypt [27]. Regarding Class D β-lactamases, also named oxacillinases (OXAs) for their activity on oxacillin [28], there are six subgroups: the naturally occurring, chromosomal and intrinsic OXA-51 and the acquired OXA-23-like, OXA-58-like, OXA-24/40-like, OXA-235-like and OXA-143-like β-lactamases [29]. The presence of certain insertion sequences upstream of carbapenem-hydrolyzing class D β-lactamase (CHDL) genes leads to their over-expression, conferring carbapenem resistance [30]. Detection of the carbapenemases is crucial to determine the severity of the problem and to direct the application of antimicrobial stewardship guidelines to limit further evolution of carbapenem-resistant variants among A. baumannii isolates. The current study reports the prevalence of certain carbapenemases among CR-AB isolates from Alexandria, Egypt, comparing the different phenotypic and molecular techniques to detect these enzymes among CR-AB isolates in an Egyptian setting. Carbapenem susceptibility of the clinical isolates was checked by the standard disc diffusion technique on Müller-Hinton agar using imipenem and meropenem discs (Oxoid, Basingstoke, United Kingdom), according to CLSI 2018 guidelines [35]. The minimum inhibitory concentrations (MICs) of imipenem (Merck Sharp & Dohme B.V., The Netherlands) and meropenem (Astrazeneca, United Kingdom) were determined against the tested clinical isolates using the agar dilution method to confirm carbapenem resistance, and the results were interpreted according to CLSI 2018 guidelines (data not shown) [35]. Phenotypic detection of carbapenemases Modified Hodge test (MHT) MHT was performed as described before [36]. A 1:10 dilution of a 0.5 McFarland suspension of the carbapenem-susceptible E. coli ATCC 8739 was aseptically swabbed onto a sterile Müller-Hinton agar plate. A meropenem disc (10 μg) was aseptically placed in the center of the plate. In a straight line from the interior to the exterior of the plate, each tested isolate was streaked. K. pneumoniae ATCC 10031 was used as a negative control. The plates were then incubated for 18-24 h at 37°C and then examined for a clover leaf-type indentation in the inhibition zone of the carbapenem disc at the intersection of the test organism and E. coli ATCC 8739. Carbapenem inactivation method (CIM) The CIM test was performed as previously described [37], with some modifications. A meropenem disc (10 μg) was incubated for 4 h in an overnight culture of the tested bacterial isolates. A 0.5 McFarland suspension of E. coli ATCC 8739 was swabbed onto Müller-Hinton agar. After incubation, the meropenem disc was placed onto the inoculated Müller-Hinton agar plate and incubated for 18-24 h at 37°C.
The presence of a clear inhibition zone (≥ 20 mm) indicated the absence of carbapenemase activity. Combined disc test (CDT) On a Müller-Hinton agar plate inoculated with a 1:10 dilution of a 0.5 McFarland suspension of E. coli ATCC 8739, imipenem (10 μg) and imipenem/EDTA (10/930 μg) discs (Oxoid, Basingstoke, United Kingdom) were placed, at a distance of no less than 20 mm between the centers of the discs. After 18-24 h of incubation at 37°C, the diameters of the inhibition zones around the discs were compared [38]. An increased inhibition zone of ≥ 7 mm with the imipenem/EDTA disc compared to the imipenem disc alone was considered an indication of MBL production. CarbAcineto NP test Two to three colonies of each tested isolate growing on a Luria-Bertani (LB) agar plate were picked up and suspended in two Eppendorf tubes (A and B) containing 100 μL of 5 M NaCl. Both tubes A and B also contained 100 μL of revealing solution, in addition to 6 mg/mL imipenem in tube B. The revealing solution comprised phenol red as pH indicator and 0.1 mmol/L ZnSO 4 . The phenol red solution was prepared by adding 2 mL of a phenol red solution 0.5% (wt/vol) to 16.6 mL of distilled water and then adjusting the pH value to 7.8 by adding 1 N NaOH. After a maximum incubation time of 2 h at 37°C, tubes A and B were visually inspected for color change. In tube B, the carbapenemase activity was detected by a color change of the phenol red solution (red to yellow/orange) resulting from the hydrolysis of imipenem into a carboxylic derivative, leading to a decrease of the pH value [39]. Boronic acid disc test Ten microliters of 3-aminophenylboronic acid (PBA) dissolved in dimethyl sulfoxide (DMSO) (40 mg/mL), equivalent to 400 μg PBA, were aseptically dropped onto imipenem and meropenem discs. Treated and untreated imipenem and meropenem discs were then transferred onto a Müller-Hinton agar plate inoculated with the tested isolate. In addition, a 400 μg PBA disc lacking either antibiotic was used on the same plate as a control. After incubation for 18-24 h at 37°C, the inhibition zone diameters around the treated imipenem and meropenem discs were compared with the diameters around the plain antibiotic discs. A ≥ 5 mm difference in zone diameter was considered a positive result [40]. Molecular characterization of resistance determinants For preparation of the DNA template, four colonies of each tested clinical isolate were suspended in 200 μL sterile deionized water. The suspension was heated at 95°C for 30 min and then frozen at − 20°C for 30 min. After thawing, the tube was centrifuged at 14,000 rpm for 10 min. The supernatant was then aliquoted and preserved at − 20°C for future use [32]. The presence of carbapenemase genes belonging to class A (bla KPC ), B (bla IMP , bla VIM , bla SIM , bla SPM , bla GIM and bla NDM ) and D (bla oxa-23, bla oxa-40, bla oxa-51 and bla oxa-58 ) and insertion sequences ISAba1, ISAba2 and ISAba3 was investigated in the extracted DNA using PCR, following the conditions detailed in Table 1. All primers are listed in Table 2. Both strands of the NDM amplicons were sequenced using an ABI 3500XL genetic analyzer (Inqaba Biotechnologies, Pretoria, South Africa). Phenotypic detection of carbapenemases Seventy-four carbapenem-resistant isolates were screened for carbapenemase production by a number of phenotypic methods (MHT, CIM, CDT, CarbAcineto NP test and boronic acid disc test). Discussion A.
baumannii is becoming a major threat because of the alarming number of nosocomial infections caused by this pathogen, mostly in ICUs worldwide [45,46]. In addition, A. baumannii has become resistant to several antimicrobial classes due to the irrational use of antibiotics, leading to the predominance of multidrug-resistant strains particularly in hospital settings [47]. Moreover, carbapenem resistance among A. baumannii isolates restricts therapeutic options for treatment of such infections, which might lead to higher morbidity and mortality rates [48,49]. A few previous studies commented on the prevalence of carbapenemases among Egyptian A. baumannii clinical isolates [26,27,[50][51][52]. Different mechanisms can contribute to carbapenem resistance; however, the production of MBLs and CHDLs remains the most common and prevalent mechanism among A. baumannii isolates [20]. MBLs are especially problematic because their genes are harbored on mobile elements, allowing their easy dissemination among the clinical isolates [49]. On the other hand, CHDLs can be either intrinsic/chromosomal or acquired β-lactamases [53]. Therefore, detection of carbapenemases among resistant strains is paramount to direct the proper treatment regimen. This study aimed to phenotypically and molecularly characterize 74 Egyptian A. baumannii isolates to identify the different enzymes responsible for carbapenem resistance. Several phenotypic methods, including MHT, CIM, CDT, CarbAcineto NP test and boronic acid disc test, were used. Phenotypic detection of carbapenemases has the advantages of low cost, ease of procedure and the absence of complicated or expensive equipment; however, it suffers from poor specificity and sensitivity. Therefore, PCR screening for some genes responsible for carbapenem resistance, as well as some insertion sequences, was taken as the gold standard to evaluate the sensitivity of the different phenotypic methods. Of the genes encoding class B carbapenemases, bla VIM and bla NDM were detected in 100 and 12.1% of the isolates (9 out of 74), respectively. Three of the nine isolates were collected in 2015, showing a prevalence rate of 8.6% among the new collection, whereas the remaining six isolates belonged to the older collection with a prevalence rate of 15.4%. More data are needed, preferably from different regions in the country, before we can safely conclude that the prevalence of bla NDM is decreasing in Egypt. Benmahmod et al. [58] reported bla VIM and bla NDM prevalence rates of 20 and 30%, respectively. The PCR screening results of bla NDM were validated by sequencing. These results are in accordance with studies reported in China and Saudi Arabia [66][67][68]. Besides, bla NDM-1 and bla NDM-2 were reported in A. baumannii from Egypt [26,27] and then disseminated in the entire Middle East [69]. None of the isolates were shown by PCR to carry bla KPC . Similar results were reported by Raible et al. [70]; however, Benmahmod et al. [58] reported a bla KPC prevalence rate of 56%. Although bla SPM-1, bla GIM, bla SIM and bla IMP have been previously detected among Egyptian A. baumannii isolates [52,56], none of the A. baumannii isolates in the current study harbored any of these genes. Comparing the results of the phenotypic tests to the results of the molecular detection of carbapenemases showed that the sensitivities of the MHT, CIM, CDT, CarbAcineto NP, and boronic acid tests with imipenem and meropenem were 78.4, 68.9, 79.7, 95.9, 56.8 and 70.3%, respectively.
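The sensitivities above follow from a straightforward comparison against PCR as the gold standard; a sketch of the calculation is shown below, with a fabricated toy example standing in for the per-isolate table (the study's own counts are what produce the percentages quoted in the text).

```python
# Sensitivity of a phenotypic carbapenemase test against PCR as gold standard:
# sensitivity = true positives / (true positives + false negatives).
# The per-isolate results below are placeholders for illustration only.

def sensitivity(phenotype_positive, pcr_positive):
    """Both arguments are parallel lists of booleans, one entry per isolate."""
    tp = sum(p and g for p, g in zip(phenotype_positive, pcr_positive))
    fn = sum((not p) and g for p, g in zip(phenotype_positive, pcr_positive))
    return tp / (tp + fn) if (tp + fn) else float("nan")

# Example: 74 isolates, all PCR-positive for at least one carbapenemase gene
# (as in this collection); a hypothetical test detects 71 of them.
pcr = [True] * 74
test = [True] * 71 + [False] * 3
print(f"sensitivity = {sensitivity(test, pcr):.1%}")   # 95.9% in this toy case
```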
In the CarbAcineto NP test, four isolates carrying bla OXA-51, bla OXA-23 , bla VIM, including isolate no. A81 that additionally carried bla OXA-58 developed the positive result in less than 15 min which could be attributed to the activity of the enzymes in these isolates. The false negative result recorded with A2 that carried bla VIM , bla OXA-51 and bla OXA-23 could be explained by the low zinc concentration in the culture medium [71] or due to very low carbapenemase activity in the tested isolate [72]. These findings agree with the previously reported high sensitivity of CarbAcineto NP in carbapenemase detection among Acinetobacter spp. [39]. The sensitivity of MHT in the current study is also in agreement with previously published reports in which MHT was able to detect carbapenemase production in 83.3, 71 and 73% of the screened carbapenem-resistant isolates [73][74][75]. According to CLSI 2018, MHT is no longer recommended as a phenotypic test for carbapenemase detection, presumably because of the poor specificity of the test when detecting someextended spectrum βlactamase production occurring with porin loss [76]. However, in the present study, all isolates shown by MHT to be carbapenemase producers also carried one or more carbapenemase genes as shown by PCR. The failure of CIM to detect carbapenemase production in 23 isolates could be due to the short incubation period of the meropenem disc relative to other studies that recommended six hours of incubation particularly with low level carbapenemase activity [77]. In the current study, CDT was capable of detecting the carbapenemases in 79.7% of the cases which is lower than the detection rate reported by Pandya et al. [78] (96.3%), Irfan et al. [79] (96.6%) and Anwar et al. [73] (95.4%). Boronic acid disc test has been reported to be an accurate phenotypic test for the detection of KPC carbapenemases [80][81][82][83]. However, the data concerning the application of the test for detection of other carbapenemases is unsatisfactory [84]. In the present work, no bla KPC was detected. Although only 21 isolates showed positive tests allover, all nine isolates that were shown to carry NDM by PCR also gave positive test in MHT, CIM and CarbAcineto NP making the sensitivity of these tests to detect MBL 100%. On the other hand, CDT failed to detect NDM in one isolate: A85 and boronic acid disc test with imipenem and meropenem failed to detect NDM in 2 isolates each: A59 and A81 and A40 and A81, respectively. It is noteworthy that A81 was the only isolate shown to carry bla OXA-58 . When present upstream to CHDL encoding genes, insertion sequences may increase the production of βlactamases [65,85]. In the current study, the prevalence of ISAba1, ISAba2 and ISAba3 was 100, 2.7 and 4.1%, respectively. The prevalence of different insertion sequences in A. baumannii clinical isolates from Saudi Arabia was in agreement with the findings in the current study [68]. Conclusions With the exception of CarbAcineto NP that showed superior sensitivity approaching PCR results, a combination of phenotypic tests, including MHT, CIM, CDT and boronic acid disc tests seems essential for the conclusive detection of carbapenemases. NDM prevalence levels detected here are smaller than previously reported from other parts of the country which suggests the need for larger screening encompassing different Egyptian governorates to determine the exact prevalence rate. However, OXA-23 and VIM prevalence rates remain equally high.
Mnemonic-opto-synaptic transistor for in-sensor vision system A mnemonic-opto-synaptic transistor (MOST) that has triple functions is demonstrated for an in-sensor vision system. It memorizes a photoresponsivity that corresponds to a synaptic weight as a memory cell, senses light as a photodetector, and performs weight updates as a synapse for machine vision with an artificial neural network (ANN). Herein, the memory function, added to a previous photodetecting device that combined a photodetector and a synapse, provides a technical breakthrough for realizing in-sensor processing, that is, image sensing and signal processing performed within the sensor. A charge trap layer (CTL) was intercalated into the gate dielectrics of a vertical pillar-shaped transistor for the memory function. The weight memorized in the CTL makes the photoresponsivity tunable for real-time multiplication of the image with a memorized photoresponsivity matrix. Therefore, these multi-faceted features can allow in-sensor processing without external memory for the in-sensor vision system. In particular, the in-sensor vision system can enhance speed and energy efficiency compared to a conventional vision system due to the simultaneous preprocessing of massive data at sensor nodes prior to ANN nodes. Recognition of a simple pattern was demonstrated with full sets of the fabricated MOSTs. Furthermore, recognition of complex hand-written digits in the MNIST database was also demonstrated with software simulations. The von Neumann architecture provides accurate calculations; however, it is not suitable for low power applications because of the data bottleneck between the memory and the processor 1 . In order to overcome the limitations of the von Neumann architecture, various artificial neuromorphic devices were explored to imitate functions of the brain. In detail, two-terminal memristors such as resistive random-access memory (RRAM) and phase-change memory (PCM), and the three-terminal charge trap memory and electrochemical random-access memory (ECRAM) with separated reading and writing paths, have been actively studied as synaptic devices for artificial neural networks (ANN) [2][3][4][5][6] . Meanwhile, vision systems assisted by neural processing allow accurate object detection, pattern recognition, and real-time image processing for robotics, autonomous vehicles, and sensory electronics [7][8][9][10][11] . A conventional vision system separates image sensing and signal processing. Its performance is thus adversely limited owing to signal latency and power consumption that arise from a huge amount of data processing, with the inclusion of redundant data passing through a converting circuit such as an analog-to-digital converter (ADC), as illustrated in Fig. 1a [12][13][14] . In contrast, a biological retina performs sensing and simultaneous pre-processing of visual information in order to extract key features from the input visual data [15][16][17][18] . By the elimination of redundant visual data, subsequent information processing in the brain such as object detection and pattern recognition can become faster with lower power consumption. Recently, inspired by the biological vision system, various optoelectronic synaptic devices have been demonstrated that can act as both a photodetector and a synapse for an ANN by preprocessing data in the sensor [9][10][11] . During optical sensing, however, their synaptic weight changes because the weight itself is optically controlled.
This optical weight update is useful for recognizing one pattern or similar patterns, but it is difficult to recognize various subsequent patterns because the synaptic weights are customized to a previous pattern. Therefore, repetitive reset operations are needed before accepting new patterns. Unlike the abovementioned optoelectronic synaptic devices, Wang et al. and Mennel et al. demonstrated vision sensors where repetitive reset operations were unnecessary due to the invariant synaptic weight during the optical sensing. They reported tunable photoresponsivity using a photodetecting device composed of two-dimensional (2D) materials, such as a phototransistor or a photodiode 18,19 . The tunable photoresponsivity in a photodetecting device corresponds to the controllability of weight update in a synapse, and it is a significant advantage for an in-sensor vision system, because a photoresponsivity-tunable photodetecting device can act as a synapse for an ANN as well as a photodetector for a sensor. Thus, the in-sensor processing with the inclusion of image sensing and signal processing allows real-time multiplication of the image with a memorized photoresponsivity matrix. Such an in-sensor vision system is attractive for reduction of signal latency and power consumption, which occur at converting circuits such as the ADC, as illustrated in Fig. 1b. It is worth noting that the previous photodetecting device with tunable photoresponsivity requires external memory, which is indispensable for storing the value of gate voltage to tune the photoresponsivity 18,19 . This memory can impose a burden on accessing a designated memory cell with high speed and realizing a mobile vision system with a compact size for an all-in-one chip. Thus, signal latency and power consumption that arise from external memory become increasingly problematic. In addition, 2D materials cannot be easily integrated by microfabrication of a complementary metal-oxide-semiconductor (CMOS) based image sensor system with high throughput owing to limited CMOS compatibility. For a large-scale vision system, a CMOS compatible photodetecting device such as a photodiode or a phototransistor is preferred; however, tunable photoresponsivity is not available. Each approach, tunable photoresponsivity without CMOS compatibility and CMOS compatibility without tunable photoresponsivity, has its respective strengths and weaknesses. Therefore, it is very timely to explore another photodetecting device with tunable photoresponsivity, CMOS compatibility, and, furthermore, memorability. In this work, a mnemonic-opto-synaptic transistor (MOST) is demonstrated in the form of a metal-oxide-semiconductor field-effect transistor (MOSFET). This MOSFET has a vertical pillar-shaped channel protruding from a silicon bulk substrate, and a gate completely wraps the sidewall of the pillared channel with a gate-all-around structure. This vertical MOSFET is advantageous from the perspective of the footprint area and light absorption [20][21][22] . Moreover, by embedding a charge trap layer (CTL) of a nitride (Si 3 N 4 ) into the gate dielectrics of the MOST for the memory function, individual control of photoresponsivity for each MOST is achieved and real-time multiplication of the image with a memorized photoresponsivity matrix is performed.
Therefore, it can act as a photodetector and a synapse with non-volatile retention of learned weights in the ANN for the in-sensor vision system due to the intrinsic memory function of the intercalated CTL. It does not need repetitive reset operations because the synaptic weight is not changed during the optical sensing. This characteristic is attributed to fully electrical control of the synaptic weight. Furthermore, by virtue of 100% CMOS compatible microfabrication, it can be integrated with a conventional CMOS image sensor. Figure 1c represents the ANN for the in-sensor vision system using the MOSTs. The MOSTs are located at the forefront of the ANN for detecting the light intensity and transmitting pre-processed weights with a reflection of optical signals to the next layer. The photocurrent (I photo ) summed at each neuron of the next layer is produced by the multiplication of the memorized photoresponsivity matrix and the light intensity of each pixel. When the vision system has N pixels and M neurons at the next layer, the current summed in the mth neuron of the next layer (I m ) can be represented by the following equation: I m = Σ_{n=1}^{N} I photo = Σ_{n=1}^{N} R mn P n , where n = 1, 2, …, N and m = 1, 2, …, M denote the indices of the pixel and the neuron at the next layer, respectively. R mn represents the memorized photoresponsivity matrix and P n represents the light intensity of each pixel. In this way, the in-sensor processing with the inclusion of image sensing and signal processing allows real-time multiplication of the image with the memorized photoresponsivity matrix 19 . Figure 1d shows a schematic of an n-channel MOST with a vertical pillar structure. Heavily doped n+ source (S) and drain (D) regions are located at the top and the bottom, respectively, of each pillar in the array of MOSTs shown in Fig. 1e, which protrudes from a bulk-silicon wafer. Between the S and D, there is a p-type channel. As gate dielectrics, quintuple layers (O I /N I /O II /N II /O III ) composed of triple-layered tunneling dielectrics (O I /N I /O II ), the aforementioned CTL nitride (N II ), and a blocking oxide (O III ) wrap around the sidewall of the pillared channel, as shown in Fig. 1f. The triple layers of the O I /N I /O II were adopted to reduce the operating voltage by barrier engineering (BE) of the tunneling dielectrics 23,24 . The thicknesses of the gate dielectrics are 1.3 nm/1.3 nm/1.6 nm/5.6 nm/6.3 nm in the order of O I /N I /O II /N II /O III , respectively. A triple-layered metal gate composed of titanium, titanium nitride, and tungsten (Ti/TiN/W) also surrounds the sidewall exterior of the gate dielectrics and pillar. When light is illuminated, carriers are generated and flow in the channel in the form of I photo , which drives the photodetector. I photo is actually the drain current (I D ) flowing between the source and the drain, which is controlled by the gate voltage (V G ) and drain voltage (V D ). The gate electrode makes the photoresponsivity tunable by charging and discharging the CTL of N II (hereafter simply abbreviated as 'CTL') and controls the memory function. Note that N I in the tunneling dielectrics cannot serve as a CTL because O I is too thin to block tunneling of the trapped charges. Fabrication details of the MOST are described in Figure S1. Results and discussion In the MOST, threshold voltage (V T ) can be adjusted by two factors: photo-carriers controlled by light illumination and trapped electrons modulated by the V G in the CTL.
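A minimal numerical sketch of the in-sensor multiply-accumulate operation expressed by the equation for I m above is given below; the array sizes and values are placeholders rather than measured device data.

```python
# In-sensor multiply-accumulate: I_m = sum_n R[m, n] * P[n].
# Each MOST stores one entry of the photoresponsivity matrix R (its synaptic
# weight) and converts the light intensity P of its pixel into a photocurrent;
# currents summed on each output line realize the matrix-vector product.
import numpy as np

N_PIXELS, M_NEURONS = 4, 2                      # toy sizes for illustration
rng = np.random.default_rng(5)

R = rng.uniform(0.1, 1.0, size=(M_NEURONS, N_PIXELS))  # memorized responsivities
P = rng.uniform(0.0, 1.0, size=N_PIXELS)               # light intensity per pixel

I = R @ P                                       # summed photocurrent per neuron
for m, current in enumerate(I):
    print(f"neuron {m}: summed photocurrent = {current:.3f} (arbitrary units)")
```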
Figure 2 shows the transfer characteristic curve of I D versus V G (I D -V G ) according to the light intensity (P) and the number of gate pulses (N pulse ). This N pulse determines the level of I D at each state in the synaptic operation, i.e., the number of states. As an example, N pulse of 0 is the initial state with the highest I D due to the lowest V T , and N pulse of 31 corresponds to 31 accumulated gate pulses that produce the lowest I D due to the highest V T , yielding 32 states in the depression. In this work, a variable pulse number with an identical pulse amplitude and width is used for a potentiation-depression (P-D) operation. An LED (SOL 3.0, Fiber Optic Korea Co., Ltd.) was used as a white light source. The P indicated in Fig. 2 is the measured value in a blue region with a wavelength of 405 nm. It was quantified by a power meter that has a detection spot area of 0.785 cm 2 . Figure 2a shows a leftward V T shift. This is caused by the photocarrier generation, which arises from light illumination 25 . In contrast, Fig. 2b exhibits a rightward V T shift. It is attributed to electron trapping in the CTL by applied positive depression gate voltage (V G,dep ); i.e., it suppresses inversion at the channel surface. This is analogous to the depression operation to reduce the synaptic weight in an artificial synapse [26][27][28] . The magnitude of V G,dep is 9 V and its pulse width is 10 μs. It should be noted that the rightward V T shift by the electron trapping is semi-permanent and the leftward V T shift by the light illumination is temporary. In other words, the V T shift returns to the pristine state when the light illumination is removed. Figure 2c superimposes I D -V G with the photo-carrier generation by incident light and the electron trapping by the applied V G,dep in one graph. The ratio (η) of photoresponsivity without charge trapping to that with charge trapping by V G is approximately 800 at a V G,read of 0 V. In this way, photoresponsivity can be modulated effectively by controlling the trapped electrons in the CTL. Therefore, the MOST acts as a photodetector by sensing I photo with light, a synapse by updating a weight with V G , and a non-volatile memory by holding a weighted state with trapped charges for the in-sensor vision system. This tunable photoresponsivity is utilized as a controllable synaptic weight in the ANN. Unlike the previously reported photodetecting device, extra memory is no longer needed because the MOST itself harnesses an inherent non-volatile memory function 18,19 . Figure 3a shows the depression where I D was decreased by an increased N pulse for various P. Herein N pulse is varied from 0 to 31; i.e., there are 32 states. The magnitude of V G,dep is 9 V and its pulse width is 1 μs. This result shows that the photoresponsivity was finely tunable with multi-states. For a typical synaptic operation, the potentiation that increases the synaptic weight should be available, similar to the depression that decreases the synaptic weight. Figure S2(a) represents the P-D characteristics for various P, i.e., with light illumination. The conductance (G) is defined as I D /V D , which can be simplified to I D because the applied V D was 1 V. The photoresponsivity was finely tunable during the potentiation as well as the depression. The magnitude of potentiation gate voltage (V G,pot ) is − 10 V and its pulse width is 200 μs. Figure S2(b) shows another P-D characteristic in a dark environment, i.e., without light illumination.
From Figure S2(b), the nonlinearity parameters (α) were extracted by fitting the conductance-update model of ref. 29, in which G max is the maximum conductance, G min is the minimum conductance, α is a nonlinear parameter, and w is an internal variable that ranges from 0 to 1 29 . The extracted α pot and α dep were − 0.02 and − 0.58, respectively. These parameters are used for the subsequent software simulations. It is well known that a large number of states is preferred to enhance the performance of pattern recognition in a synaptic device [26][27][28] . In this context, it was also confirmed that the P-D characteristics for N pulse of 64 and 128 were achievable by delicately tuning the gate pulse, as shown in Figure S3. Figure 3b, c show the real-time I D for various P and N pulse , respectively, when the light is turned on and off. At a fixed N pulse , I D increased as P increased. At a fixed P, I D decreased as the N pulse increased. It is worth noting that I D returned to the initial state when the light was off. This feature assures that the synaptic weight is not changed during the optical sensing and repetitive reset operations are not needed. As shown in Fig. 3d, I D was sustained even after 40,000 s owing to the superior retention characteristics of the CTL-based memory. This behavior has been proven in commercial flash memory adopting a CTL. It should be recalled that good retention characteristics of a synaptic device are crucial for reliable operation over time 28 . Figure S4 shows the P-D characteristics of the MOST for various wavelengths (λ). Measurements were performed by using a blue (B), red (R), and infrared (IR) light source. Each λ of B, R, and IR light is 405 nm, 638 nm, and 1550 nm, respectively. As shown in Figure S4, tunable photoresponsivity was observed for visible light of B and R, whereas it was not for the IR light. This is because the B and R light can generate photo-carriers to increase I photo . However, the IR light cannot create them owing to a small photon energy of 0.80 eV compared to the silicon energy bandgap of 1.12 eV 30,31 . It should also be noted that the photoresponsivity of the B light was smaller than that of the R light because the penetration depth is decreased with shorter λ 32 . The demonstrated wavelength dependency as well as the intensity dependency of the tunable photoresponsivity can help in recognizing a color-mixed pattern 33,34 . As mentioned above, BE tunneling dielectrics composed of the triple layers (hereafter, BE layers) were adopted to reduce the operating voltage. In order to confirm this effect, simplified MOSTs were fabricated as a control group. The BE layers of O I /N I /O II were replaced by a single layer of thermal oxide (O single ). Other structures were set to be the same. As plotted in Figure S5(a), the measured transfer characteristics of the fabricated MOST with O single /N II /O III showed similar photoresponsivity compared to those with O I /N I /O II /N II /O III . This is because the gate dielectric has no effect on the photo-carrier generation by light. Whereas V T was shifted rightward by a V G,dep of 9 V in the case of the O I /N I /O II /N II /O III (Fig. 2), it was not changed in the case of the O single /N II /O III , as shown in Figure S5(b). A V G,dep larger than 11 V should be applied to change the V T and update the synaptic weight, as shown in Figure S5b.
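As a quick check on the wavelength dependence described above, and before returning to the comparison between the barrier-engineered and single-oxide gate stacks, the photon energies of the three sources can be compared against the 1.12 eV silicon bandgap; the constants and rounding below are standard values, not device measurements.

```python
# Photon energy versus the silicon bandgap: E = h*c / (lambda * e), in eV.
# Wavelengths below roughly 1107 nm carry enough energy to create
# electron-hole pairs in silicon; the 1550 nm IR source does not.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
E_CHARGE = 1.602e-19 # elementary charge, C
SI_BANDGAP_EV = 1.12

for name, wavelength_nm in [("blue", 405), ("red", 638), ("infrared", 1550)]:
    energy_ev = H * C / (wavelength_nm * 1e-9) / E_CHARGE
    absorbed = energy_ev > SI_BANDGAP_EV
    print(f"{name:8s} {wavelength_nm:4d} nm -> {energy_ev:.2f} eV, "
          f"photo-carriers generated: {absorbed}")
```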
As a consequence, the P-D characteristics in Figure S5(c) show that synaptic weight update is impossible with the same V G,dep in the case of the O single /N II /O III . Therefore, it is confirmed that the gate dielectric structure of O I /N I /O II /N II /O III is more attractive than that of O single /N II /O III for low-power neuromorphic hardware. Using a full set of the fabricated MOSTs, simple pattern recognition was performed using a single-layer perceptron (SLP). As illustrated in Fig. 4a, two images, ' A' of an off-diagonal pattern and 'B' of a diagonal pattern, were prepared. Each pattern comprises 2 × 2 black-and-white pixels. Classification of the two patterns was attempted. A neural network was composed of four input pixels labeled P 1 , P 2 , P 3 , and P 4 and two nodes in the Fig. 4b. By detecting the output current of the MOSTs connected to each output node, each pattern was recognized. The photoresponsivity that corresponds to the synaptic weight was preset with a binary value, the maximum photoresponsivity and the minimum photoresponsivity, from the data of Fig. 3a. The solid lines and the dashed lines in Fig. 4b represent the device with the maximum photoresponsivity and the minimum photoresponsivity, respectively. Each photoresponsivity is represented as 'R' in the neural network configuration. This in-sensor processing with the inclusion of image sensing and signal processing performs real-time multiplication of the image with a memorized photoresponsivity matrix 19 . Figure 4c shows the circuit diagram to construct the neural network of Fig. 4b. V G and V D were set as 0 V and 1 V, respectively. Each output was measured in the form of the output current: I out,A and I out,B ; i.e., I out,A was measured in the output node O A for the input image of ' A' and I out,B was measured in the output node O B for the input image of 'B' , as shown in Fig. 4d. As a result, inference for the simple pattern was experimentally verified. It is worth comparing the required components to distinguish the abovementioned two simple patterns. This work that is applicable to an in-sensor vision system demands only eight MOSTs without extra photodetectors, ADCs or synaptic devices. In contrast, a conventional approach that is suitable for a conventional vision system may need four photodetectors, an ADC, and eight synaptic devices. Thanks to this in-sensor vision system, rapid classification within 1 ms was achieved with low power consumption under 150 nW. This is very small compared to the power consumption of an ADC used for a conventional vision system, which ranges from a few tens of μW to a few mW 35,36 . To demonstrate recognition of more complex patterns such as hand-written digits in the MNIST dataset, a multi-layer perceptron (MLP) network composed of two hidden layers was constructed, as illustrated in Fig. 5a. An input layer corresponds to 528 input pixels, which were cropped from the 28 × 28 pixels, and an output layer corresponds to the 10 numbers from 0 to 9. Each hidden layer is composed of 250 neurons. The MOSTs were located at the forefront of the network for detecting the light intensity and transmitting pre-processed weights with a reflection of optical signals to the first hidden layer. Each device has its own photoresponsivity corresponding to the synaptic weight, which is represented as 'R' in the neural network configuration. This simultaneous image sensing and signal processing allow real-time multiplication of the image with a memorized photoresponsivity matrix 19 . 
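A minimal sketch of the two-pattern readout described earlier in this section is shown below; the binary photoresponsivity values, the maximum-to-minimum ratio, and the row-major P1-P4 pixel ordering are assumptions for illustration, not the measured device values.

```python
# Sketch of the 2x2 single-layer-perceptron demo: binary photoresponsivities
# preset so that output node O_A favors the off-diagonal pattern 'A' and
# O_B favors the diagonal pattern 'B'. Numerical values are placeholders.
import numpy as np

R_MAX, R_MIN = 1.0, 1.0 / 800          # ratio loosely based on the reported eta
pattern_A = np.array([0, 1, 1, 0])     # off-diagonal pixels lit (P1..P4, row-major)
pattern_B = np.array([1, 0, 0, 1])     # diagonal pixels lit

# Row 0 -> output node O_A, row 1 -> output node O_B.
R = np.array([[R_MIN, R_MAX, R_MAX, R_MIN],
              [R_MAX, R_MIN, R_MIN, R_MAX]])

for name, pattern in [("A", pattern_A), ("B", pattern_B)]:
    currents = R @ pattern             # I_out,A and I_out,B
    winner = "A" if currents[0] > currents[1] else "B"
    print(f"input {name}: I_out = {currents}, classified as {winner}")
```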
The measured photoresponse characteristics and the P-D characteristics of the fabricated MOSTs in a dark environment were reflected in the software simulations. Figure 5b shows a flow chart that summarizes the simulation sequence to reflect the measured photoresponse characteristics and electrical characteristics of the fabricated MOST. I photo is the drain current with light illumination (I D,light ) and I dark is the referenced drain current without light illumination (I D,dark ). Except light-on and light-off, all other conditions are the same. Herein the ratio of I photo /I dark , i.e., I D,light /I D,dark , is defined as γ, which is extracted from the experimental results. Prior to the simulation, γ was extracted for various light intensities (P) by linear interpolation, as shown in Fig. 5c. For improvement of the simulation accuracy, this step was repeated for each synaptic state. γ of each pixel was extracted by substituting the MNIST dataset into the interpolated curve, because the MNIST dataset represents the pixel intensity. Afterwards, the conductance of each synapse in a dark environment (G dark ), which was extracted from the P-D characteristic of Figure S2(b), was multiplied by γ. Because the applied V D of the MOST is 1 V, G dark , defined as I dark /V D , is simplified to the I dark . The multiplication thus results in I photo . Finally, I photo , which contains information of the pixel intensity and the photoresponsivity of the synapse, is transmitted to the first hidden layer for summation at each neuron. In detail, the current summed in the mth neuron in the first hidden layer (I m ) can be represented by the following equation: I m = Σ_{n=1}^{528} I photo , where I photo = γ G dark V D for the synapse connecting pixel n to neuron m, and n = 1, 2, …, 528 and m = 1, 2, …, 250 denote the indices of the pixel and the neuron at the first hidden layer, respectively. For a normal synapse between the first hidden layer and the second hidden layer or between the second hidden layer and the output layer, only the electrical characteristics (e.g., the P-D characteristics in a dark environment) were reflected because they could not respond to the light owing to the absence of a photo-effect. The sigmoid activation function was adopted and supervised learning with backpropagation was employed for the learning process to update the synaptic weight of the MOST and a normal synapse. Figure 5d shows the simulated recognition accuracy according to the number of training epochs; the saturated recognition rate was 85.7%. This recognition rate is comparable to an upper limit of 88.3%, which is achievable by software-based pattern recognition simulations that directly multiply the MNIST dataset by the conductance of each synapse, which has ideal P-D characteristics of perfect linearity and symmetry; i.e., α pot = 1 and α dep = 1. Conclusions In summary, a mnemonic-opto-synaptic transistor (MOST) was demonstrated for an in-sensor vision system by embedding a non-volatile memory function into a photodetecting device. Because the threshold voltage of the MOST was controlled both by light illumination and by an electrical pulse, the photoresponsivity was tunable by changing the trapped electrons in the charge trap layer (CTL) that enables the non-volatile memory function. Thereby it performed triple functions: photoresponsivity memorizing as a memory cell, light-sensing as a photodetector, and weight updating as a synapse.
Conclusions In summary, a mnemonic-opto-synaptic transistor (MOST) was demonstrated for an in-sensor vision system by embedding a non-volatile memory function into a photodetecting device. Because the threshold voltage of the MOST is controlled both by light illumination and by an electrical pulse, the photoresponsivity is tunable by changing the number of electrons trapped in the charge trap layer (CTL) that provides the non-volatile memory function. The device thereby performs three functions: memorizing the photoresponsivity as a memory cell, sensing light as a photodetector, and updating weights as a synapse. At the forefront of the ANN, the MOST simultaneously detects light and generates a pre-processed signal, performing real-time multiplication of an image with a memorized photoresponsivity matrix inside the sensor. A further advantage is that it does not require repetitive reset operations, because the synaptic weight remains invariant during optical sensing (Table S6), and it does not require external memory thanks to the inherent memory function of the CTL. In addition, the MOST can be integrated with a conventional CMOS image sensor composed of numerous small pixels, because it was fabricated with a fully CMOS-compatible microfabrication process. Data availability In accordance with the journal's policy, the authors make materials, data, and associated protocols available to readers.
Plugging Self-Supervised Monocular Depth into Unsupervised Domain Adaptation for Semantic Segmentation Although recent semantic segmentation methods have made remarkable progress, they still rely on large amounts of annotated training data, which are often infeasible to collect in the autonomous driving scenario. Previous works usually tackle this issue with Unsupervised Domain Adaptation (UDA), which entails training a network on synthetic images and applying the model to real ones while minimizing the discrepancy between the two domains. Yet, these techniques do not consider additional information that may be obtained from other tasks. Differently, we propose to exploit self-supervised monocular depth estimation to improve UDA for semantic segmentation. On one hand, we deploy depth to realize a plug-in component which can inject complementary geometric cues into any existing UDA method. We further rely on depth to generate a large and varied set of samples to Self-Train the final model. Our whole proposal allows for achieving state-of-the-art performance (58.8 mIoU) in the GTA5→CS benchmark. Code is available at https://github.com/CVLAB-Unibo/d4-dbst. Introduction Semantic segmentation is the task of classifying each pixel of an image. Nowadays, Convolutional Neural Networks can achieve impressive results in this task but require huge quantities of labelled images at training time [44,3,34,41]. A popular trend to address this issue concerns leveraging computer graphics simulations [42] or game engines [40] to obtain automatically synthetic images endowed with per-pixel semantic labels. Yet, a network trained on synthetic data only will perform poorly in real environments due to the so-called domain-shift problem. In the last few years, many Unsupervised Domain Adaptation (UDA) techniques aimed at alleviating the domain-shift problem have been proposed in the literature. Figure 1. D4 can be plugged seamlessly into any existing method to improve UDA for Semantic Segmentation. Here we show how the introduction of D4 can ameliorate the performance of two recent methods like LTIR [22] and Stuff and Things [55]. These approaches try to minimize the gap between the labeled source domain (e.g. synthetic images) and the unlabeled target domain (e.g. real images) by either hallucinating input images, manipulating the learned feature space or imposing statistical constraints on the predictions [58,8,65,18]. At a more abstract level, UDA may be thought of as the process of transferring more effectively to the target domain the knowledge from a task solved in the source domain. This suggests that it may be possible to improve UDA by transferring also knowledge learned from another task to improve performance in the real domain. In fact, the existence of tightly related representations within CNNs trained for different tasks has been highlighted since the early works in the field [60], and it is nowadays standard practice to initialize CNNs deployed for a variety of diverse tasks, such as object detection [46], semantic segmentation [4] and monocular depth estimation [14], with weights learned on Imagenet Classification [11]. The notion of transferability of representations among CNNs trained to solve different visual tasks has been formalized computationally by the Taskonomy proposed in [63].
Later, [38] has shown that it is possible to train a CNN to hallucinate deep features learned to address one task into features amenable to another, related task. Inspired by these findings, we argue that monocular depth estimation could be an excellent task from which to gather additional knowledge useful to address semantic segmentation in UDA settings. First of all, a monocular depth estimation network makes predictions based on 3D cues dealing with the appearance, shape, relative sizes and spatial relationships of the stuff and things observed in the training images. This suggests that the network has to predict geometry by implicitly learning to understand the semantics of the scene. Indeed, [37,21,24,15] show that a monocular depth estimation network obtains better performance if forced to jointly learn a semantic segmentation task. We argue, though, that the correlation between geometry and semantics holds bidirectionally, such that a semantic segmentation network may obtain useful hints from depth information. This intuition is supported by [38], which shows that it is possible to learn a mapping in both directions between features learned to predict depth and per-pixel semantic labels. It is also worth observing how depth prediction networks tend to extract accurate information for regions characterized by repeatable and simple geometries, such as roads and buildings, which feature strong spatial and geometric priors (e.g. the road is typically a plane in the bottom part of the image) [13,14,47,57]. Therefore, on one hand, accurately predicting the semantics of such regions from depth information alone should be possible. On the other, a semantic network capable of reasoning on the geometry of the scene should be less prone to mistakes caused by appearance variations between synthetic and real images, the key issue in UDA for semantic segmentation. Despite the above observations, injection of geometric cues into UDA frameworks for semantic segmentation has been largely unexplored in the literature, with the exception of a few proposals which either assume availability of depth labels in the real domain [56], a very restrictive assumption, or can leverage depth information only in the synthetic domain thanks to the availability of cheap labels [53,27,6]. In this respect, we set forth an additional consideration: nowadays, effective self-supervised procedures allow for training a monocular depth estimation network without the need of ground-truth labels [14,12,70]. Based on the above intuitions and considerations, in this paper we propose the first approach that, thanks to self-supervision, allows for deploying depth information from both synthetic and unlabelled real images in order to inject geometric cues into UDA for semantic segmentation. Purposely, we adapt the knowledge learned to pursue depth estimation into a representation amenable to semantic segmentation by the feature transfer architecture proposed in [38]. As the geometric cues learned from monocular images yield semantic predictions that are often complementary to those attainable by current UDA methods, we realize our proposal as a depth-based add-on, dubbed D4 (Depth For), which can be plugged seamlessly into any UDA method to boost its performance, as illustrated in Fig. 1. A recent trend in UDA for semantic segmentation is Self-Training (ST), which consists in further fine-tuning the trained network by its own predictions [72,73,68,29,33,30].
We propose a novel Depth-Based Self-Training (DBST) approach which once more deploys the availability of depth information for real images in order to build a large and varied dataset of plausible samples to be used in the Self-Training (ST) procedure 1. Our framework can improve many state-of-the-art methods by a large margin in two UDA for semantic segmentation benchmarks, where networks are trained either on GTA5 [40] or SYNTHIA VIDEO SEQUENCES [42] and tested on Cityscapes [10]. Moreover, we show that our DBST procedure enables us to distill the whole framework into a single ResNet101 [16] and achieve state-of-the-art performance. Our contributions can be summarized as follows:
• We are the first to show how to exploit self-supervised monocular depth estimation on real images to pursue semantic segmentation in a domain adaptation setting.
• We propose a depth-based module (D4) which can be plugged into any UDA for semantic segmentation method to boost performance.
• We introduce a new protocol (DBST) that exploits depth predictions to synthesize augmented training samples for the final self-training step oftentimes deployed in UDA for semantic segmentation pipelines.
• We show that leveraging both D4 and DBST allows for achieving 58.8 mIoU in the popular GTA5→CS UDA benchmark, i.e., to the best of our knowledge, the new state-of-the-art.
Related Work Domain Adaptation. Domain Adaptation is a promising way of solving semantic segmentation without annotations. Pioneering works [17,58,2,9,31,66,62,28,22] rely on CycleGANs [71] to convert source data into the style of target data, reducing the low-level visual appearance discrepancy among domains. Other works exploit adversarial training to enforce domain alignment [49,50,54,67,59,36,1,52]. [55] extended this idea by aligning differently objects with low and high variability in terms of appearance. Few works have tried to exploit depth information to boost UDA for semantic segmentation. [53], for example, proposes a unified depth-aware UDA framework that leverages the knowledge of depth maps in the source domain to perform feature space alignment. [43] extends this idea by explicitly modelling the relation between different visual semantic classes and depth ranges. [7], instead, considers depth as a way to obtain adaptation at both the input and output level. [56] is the first work to consider depth in the target domain, although assuming supervision to be available. Conversely, we show how to deploy depth in the target domain without availability of ground-truth depths. Figure 2. From left to right: ground truth, semantics from depth, semantics by LTIR [22]. The semantic labels predicted from depth are more accurate than those yielded by UDA methods on regularly-shaped objects (such as the wall in the top image and the sidewalk in the bottom one), whilst UDA approaches tend to perform better on small objects (see the traffic signs in both images). Self-Training. More recently, a new line of research focuses on self-training [26], where a semantic classifier is fine-tuned directly on the target domain, using its own predictions as pseudo-labels. [72,73,30] cleverly set class-confidence thresholds to mask wrong predictions. [69,33,68] propose to use pseudo-labels with different regularization techniques to minimize both the inter-domain and intra-domain gap. [64], instead, estimates the likelihood of pseudo-labels to perform online correction and denoising during training.
Differently, [48] synthesizes new samples for the target domain by cropping objects from source images using ground-truth labels and pasting them onto target images. Inspired by this work, we propose a novel algorithm for generating new samples to perform self-training on the target domain. In contrast to [48], our strategy is applied to target images only and relies on the availability of depth maps obtained through self-supervision. Task Adaptation. All existing approaches tackle task adaptation or domain adaptation independently. [51] was the first paper to propose a cross-task and cross-domain adaptation approach, considering two image classification problems as different tasks. UM-Adapt [25] employs a cross-task distillation module to force inter-task coherency. Differently, [38] directly exploits the relationship among tasks to reduce the need for labelled data. This is done by learning a mapping function in feature space between two networks trained independently for two separate tasks, a pretext and a target one. We leverage this intuition but, unlike [38], our approach does not require supervision to solve the pretext task in the target domain. Method In Unsupervised Domain Adaptation (UDA) for semantic segmentation one wishes to solve semantic segmentation in a target domain, D_T, though labels are available only in another domain, referred to as the source domain D_S. In the following we describe the two ingredients of our proposal to better tackle this problem. In Sec. 3.1 we show how to transfer information from self-supervised monocular depth to semantic segmentation and merge this knowledge with any UDA method (D4-UDA, Depth For UDA). Then, in Sec. 3.2 we introduce a Depth-Based Self-Training strategy (DBST) to further improve semantic predictions while distilling the whole framework into a single CNN. D4 (Depth For UDA) Semantics from Depth. The main intuition behind our work is that semantic segmentation masks obtained by exploiting depth information have peculiar properties that make them suitable to improve segmentation masks obtained with standard UDA methods. However, predicting semantics from depth is an arduous task. Indeed, we experimented with several alternatives (see Sec. 4.4 Alternative strategies to exploit depth) and found that the most effective way is a procedure similar to the one proposed in [38], which we adapt to the UDA scenario. The pipeline works as follows: train one CNN to solve a first task on D_S and D_T, train another CNN to solve a second task on D_S only (i.e. the only domain where ground-truth labels for the second task are available) and, finally, train a transfer function to map deep features extracted by the first CNN into deep features amenable to the second one. As the second CNN has been trained only on D_S, the transfer function can also be trained only on D_S but, interestingly, it generalizes to D_T. As a consequence, at inference time one can solve the second task in D_T based on the features transferred from the first task. We refer to [38] for further details. Hence, if we assume the first and second tasks to consist in depth estimation and semantic segmentation, respectively, the idea of transferring features can be deployed in a UDA scenario, since it gives the possibility to solve the second task on D_T without the need of ground-truth labels. However, the learning framework delineated in [38] assumes availability of ground-truth labels for the first task (depth estimation in our setting) also in D_T (real images).
As pointed out in Sec. 1, this assumption does not comply with the standard UDA for semantic segmentation problem formulation, which requires availability of semantic labels for source images (D_S) alongside unlabelled target images only (D_T). To address this issue we propose to rely on depth proxy-labels attainable from images belonging to both D_S and D_T without the need of any ground-truth information. In particular, we propose to deploy one of the recently proposed deep neural networks, such as [14], that can be trained to perform monocular depth estimation based on a self-supervised loss requiring the availability of raw image sequences only, i.e. without ground-truth depth labels. Thus, in our method we introduce the following protocol. First, we train a self-supervised monocular depth estimation network on both D_S and D_T. Then, we use this network to generate depth proxy-labels for both domains. We point out that we use this network as an off-the-shelf algorithm, without the aim of improving depth estimation. Finally, according to [38], we train a first CNN to predict depth from images on both domains using the previously computed depth proxy-labels, a second CNN to predict semantic labels on D_S, and a transfer network which allows for predicting semantic labels from depth features in D_T. In the following, we will refer to such predictions as "semantics from depth" because they concern semantic information extracted from features amenable to performing monocular depth estimation. Combine with UDA. Fig. 2 compares semantic predictions obtained from depth by the protocol described in the previous sub-section and from a recent UDA method. The reader may observe a clear pattern: predictions from depth tend to be smoother and more accurate on objects with large and regular shapes, like road, sidewalk, wall and building. However, they often turn out imprecise in regions where depth predictions are less informative, like thin things partially overlapped with other objects or fine-grained structures in the background. As UDA methods tend to perform better on such classes (see Fig. 2), our D4 approach is designed to combine the semantic knowledge extracted from depth with that provided by any chosen UDA method in order to achieve more accurate semantic predictions. Depth information helps on large objects with regular shapes, which usually account for the majority of pixels in an image. On the contrary, UDA methods perform well in predicting semantic labels for categories that typically concern much smaller fractions of the total number of pixels in an image, like e.g. the traffic signs in Fig. 2. This orthogonality suggests that a simple yet effective way to combine the semantic knowledge drawn from depth with that provided by UDA methods consists in a weighted sum of predictions, with weights computed according to the frequency of classes in D_S (the domain where semantic labels are available). As the weights given to UDA predictions (w_uda) should be larger for rarer classes, they can be computed as

w_uda,i = 1 / ln(δ + f_i), i = 1, …, C, (1)

where C denotes the number of classes and f_i = n_i / n_tot denotes their frequencies at the pixel level, i.e. the ratio between the number n_i of pixels labelled with class i in D_S and the total number n_tot of labelled pixels in D_S. Eq. 1 is the standard formulation introduced in [34] to compute bounded weights inversely proportional to the frequency of classes. We set δ in Eq. 1 to 1.02, akin to [34].
Accordingly, the weights applied to semantic predictions drawn from depth (w_dep) are given by Eq. (2). Thus, at each pixel of a given image we propose to combine semantics from depth and the predictions yielded by any chosen UDA method as

y_f = w_dep · ϕ_T(ỹ_dep) + w_uda · ϕ_T(ỹ_uda), (3)

where y_f is the final prediction, ỹ_dep and ỹ_uda are the logits associated with semantics from depth and with the selected UDA method, respectively, and ϕ_T denotes the softmax function with a temperature term T, which we set to 6 in our experiments. As illustrated in Fig. 3, the merge operation formulated in Eq. 3 can be used seamlessly to plug semantics obtained from self-supervised monocular depth into any existing UDA method. We will refer to the combination of a given UDA method with our D4 as D4-UDA. Experimental results (Sec. 4.3) show that all recent state-of-the-art UDA methods benefit significantly from the complementary geometric cues brought in by D4.
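As a rough illustration of the merge operation, the sketch below computes the class-frequency weights of Eq. (1) and the weighted combination of temperature-softened predictions of Eq. (3); since Eq. (2) is not reproduced above, the form of w_dep used here is only an assumed stand-in (the complement of the normalized UDA weights), and the class frequencies and logits are random placeholders.

import numpy as np

DELTA, TEMPERATURE = 1.02, 6.0

def uda_weights(class_freq):
    """Eq. (1): bounded weights that grow for rarer classes (class_freq sums to 1)."""
    return 1.0 / np.log(DELTA + class_freq)

def softmax(logits, T):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def merge(logits_dep, logits_uda, class_freq):
    """Per-pixel weighted sum of temperature-softened predictions (Eq. (3)).
    w_dep below is an assumption standing in for Eq. (2): it mirrors w_uda so that
    depth-based predictions count more on frequent (large, regular) classes."""
    w_uda = uda_weights(class_freq)
    w_uda = w_uda / w_uda.max()
    w_dep = 1.0 - w_uda + w_uda.min()  # assumed stand-in, not the paper's Eq. (2)
    y = w_dep * softmax(logits_dep, TEMPERATURE) + w_uda * softmax(logits_uda, TEMPERATURE)
    return y.argmax(axis=-1)           # final per-pixel label

# Toy usage: 19 classes, logits for a 4x4 patch.
rng = np.random.default_rng(0)
freq = rng.dirichlet(np.ones(19))
labels = merge(rng.normal(size=(4, 4, 19)), rng.normal(size=(4, 4, 19)), freq)
print(labels.shape)  # (4, 4)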
Figure 4. The rightmost column is synthesized by copying pixels from the left column into the central one. Pixels are chosen according to their semantic class (second row) and stacked according to their depths (third row). The white pixels in the depth maps represent areas too far from the camera that cannot be selected. DBST (Depth-Based Self-Training) We describe here our proposal to further improve semantic predictions and distill the knowledge of the entire system into a single network easily deployable at inference time. First, we predict semantic labels for every image in D_T by our whole framework (i.e. D4 alongside a selected UDA method, referred to as D4-UDA); then, we use these labels to train a new model on D_T. This procedure, also known as self-training [26], has become popular in the recent UDA for semantic segmentation literature [72,73,68,29,33,30] and consists in training a model by its own predictions, referred to as pseudo-labels, sometimes through multiple iterations. On the other hand, we only perform one iteration, and the novelty of our approach concerns the peculiar ability to leverage the depth information available for the images in D_T to generate plausible new samples. Running D4-UDA on D_T yields semantic pseudo-labels for every image in D_T. Yet, as described in Sec. 3.1 (Semantics from Depth), each image in D_T is also endowed with a depth prediction, provided by a self-supervised monocular depth estimation network. We can take advantage of this information to formulate a novel depth-aware data augmentation strategy whereby portions of images and corresponding pseudo-labels are copied onto others so as to synthesize samples for the self-training procedure. The crucial difference between similar approaches presented in [32,48] and ours consists in the deployment of depth information to steer the data augmentation procedure towards more plausible samples. Indeed, a first intuition behind our method is that semantic predictions are less accurate for objects distant from the camera: as such predictions play the role of labels in self-training, we prefer to pick closer rather than distant objects when generating training samples. Moreover, we reckon certain kinds of objects, like persons, vehicles and traffic signs, to be more plausibly transferable across different images, as they tend to be small and less bound to specific spatial locations. On the contrary, it is quite unlikely to merge seamlessly a piece of road or building from a given image into a different one. Given N randomly selected images x_n from D_T, with n ∈ {1, …, N}, paired with semantic pseudo-labels s_n and depth predictions d_n, we augment x_1 by copying onto it pixels from the set X_src = {x_2, …, x_N}. For each pixel of the augmented image we have N possible candidates, one from x_1 itself and N − 1 from the images in X_src. We filter such candidates according to two criteria: the predicted depth should be lower than a threshold t and the semantic prediction should belong to a predefined set of classes, C*. Hence, at each spatial location p we retain the set of depths of the candidates that survive this filtering. In our experiments, for each image the depth threshold t is set to the 80th percentile of the depth distribution, so as to avoid selecting pixels from the farthest objects in the scene. C* contains all things classes (e.g. person, car, traffic light, etc.), which include foreground elements that can be copied onto other images without altering the plausibility of the scene, while excluding all the stuff classes, which include background elements that cannot be easily moved across scenes. This categorization is similar to the one proposed in [55] and we consider it easy to replicate in other datasets. Then, we synthesize a new image x_z and corresponding pseudo-labels s_z by assigning to each spatial location p the surviving candidate with the lowest depth, so that objects from different images overlap plausibly in the synthesized sample. In Fig. 4 we depict our depth-based procedure to synthesize new training samples, considering, for the sake of simplicity, the case where N is 2. Hence, with the procedure detailed above, we synthesize an augmented version of D_T, used to distill the whole D4-UDA framework into a single model by a self-training process. This dataset is much larger and exhibits more variability than the original D_T. Due to its reliance on depth information, we dub our novel technique DBST (Depth-Based Self-Training). The results reported in Sec. 4.3 prove its remarkable effectiveness, both when used as the final stage following D4 and when deployed as a standalone self-training procedure applied to any other UDA method.
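A simplified NumPy sketch of the synthesis step just described follows; the per-image 80th-percentile threshold comes from the text, while the particular 'things' IDs, the fallback to the base image when no candidate survives the filter, and the toy inputs are assumptions made for illustration.

import numpy as np

def synthesize(images, pseudo_labels, depths, things_classes, depth_percentile=80):
    """images: (N, H, W, 3); pseudo_labels, depths: (N, H, W).
    images[0] is the base sample x_1; the remaining images form the source set X_src."""
    # Per-image depth threshold t (80th percentile), so the farthest objects are never picked.
    t = np.stack([np.percentile(d, depth_percentile) for d in depths])[:, None, None]
    # A candidate pixel is valid if it is close enough and belongs to a "things" class.
    valid = (depths < t) & np.isin(pseudo_labels, list(things_classes))
    valid[0] = True                          # assumption: the base image always remains a fallback
    masked_depth = np.where(valid, depths, np.inf)  # invalid candidates can never win
    winner = masked_depth.argmin(axis=0)     # per-pixel candidate with the lowest depth
    rows, cols = np.indices(winner.shape)
    x_z = images[winner, rows, cols]         # synthesized image
    s_z = pseudo_labels[winner, rows, cols]  # synthesized pseudo-labels
    return x_z, s_z

# Toy usage with N = 2 random "images"; the class IDs below are placeholders.
rng = np.random.default_rng(0)
imgs = rng.random((2, 64, 64, 3))
labs = rng.integers(0, 19, (2, 64, 64))
deps = rng.random((2, 64, 64)) * 80.0
x_z, s_z = synthesize(imgs, labs, deps, things_classes={11, 12, 13, 17, 18})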
Implementation Details Network Architectures. We use Monodepth2 [14] to generate depth proxy-labels for the procedure described in Sec. 3.1. We adapt the general framework presented in [38] to our setting by deploying the popular Deeplab-v2 [3] for the depth estimation and semantic segmentation networks. Both networks consist of a backbone and an ASPP module [3], which substitute, respectively, the encoder and decoder used in [38]. The backbone is implemented as a dilated ResNet50 [61]. We also remove the downsampling and upsampling operations used in [38] when learning the transfer function between depth and semantics. More precisely, in our architecture the transfer function is realized as a simple 6-layer CNN with 3 × 3 kernels and Batch Norm [20]. Following the recent trend in UDA for semantic segmentation [49,5,28,69,55,54,22], during DBST we train a single Deeplab-v2 [3] model, with a dilated ResNet101 pre-trained on Imagenet [11] as backbone. Training Details. Our pipeline is implemented using PyTorch [35] and trained on one NVIDIA Tesla V100 GPU with 16 GB of memory. In every training and test phase we resize input images to 1024×512, with the exception of DBST, where we first perform random scaling and then random cropping to 1024×512. During DBST we also use color jitter to avoid overfitting on the pseudo-labels. In our version of [38], the depth and the transfer network are optimized by Adam [23] with batch size 2 for 70 and 40 epochs, respectively, while the semantic segmentation network is trained by SGD with batch size 2 for 70 epochs. The final model obtained by DBST is trained again with SGD, batch size 3, for 30 epochs. We adopt the One Cycle learning rate policy [45] in every training, setting the maximum learning rate to 10^-4 except in DBST, where we use 10^-3. Datasets We briefly describe the datasets adopted in our experiments, pointing to the Suppl. Mat. for additional details. We follow common practice [49,22,28] and test our framework in the synthetic-to-real case using GTA5 [39,40] or SYNTHIA [42] as synthetic datasets. The former consists of synthetic images captured with the game Grand Theft Auto V, while the latter is composed of images generated by rendering a virtual city. Since our method requires video sequences to train Monodepth2 [14], we use the SYNTHIA VIDEO SEQUENCES (SYNTHIA-SEQ) split in the experiments involving the SYNTHIA dataset. As for real images, we leverage the popular Cityscapes dataset [10], which consists of a large collection of video sequences of driving scenes from 50 different cities in Germany. Results We report here the experimental results obtained in two domain adaptation benchmarks, which show how the combination with our D4 method boosts the performance of recent UDA for semantic segmentation approaches. Figure 5. From left to right: RGB image, prediction from UDA method, prediction from D4-UDA + DBST, GT. The top two rows deal with GTA5→CS, the other two with SYNSEQ→CS. Selected methods are, from top to bottom: LTIR [22], BDL [28], MaxSquare [5] and MRNET [69]. In all these examples our proposal can ameliorate dramatically the output of the given stand-alone method, especially on classes featuring large and regular shapes, like road in rows 1-3, sidewalk in rows 2-4 and wall in row 2. GTA5→CS. Tab. 1 reports results on the most popular UDA benchmark for semantic segmentation, i.e. GTA5→CS, where methods are trained on GTA5 and tested on Cityscapes. We selected the most relevant UDA approaches proposed in recent years [49,5,28,69,55,54,22,64], using checkpoints provided by the authors when available. We report per-class and overall results in terms of mean intersection over union (mIoU) and pixel accuracy (Acc), when each method is either used stand-alone or deployed within our proposal (i.e. D4 + DBST). The reader may notice how every UDA method improves considerably when combined with our proposal, despite the variability of their stand-alone performances. Indeed, AdaptSegNet [49], which yields about 42 mIoU, reaches 50 when embedded into our framework. Likewise, ProDA, currently considered the state-of-the-art UDA method, improves in mIoU from 57.5 to 58.8. Moreover, we can observe in Tab. 1 that our method produces a general improvement for all classes, although we experience a certain performance variability for some of them (such as train, motorbike and bicycle), probably due to noisy pseudo-labels used during DBST. Conversely, our method consistently yields a significant gain on classes characterized by large and regular shapes, namely road, sidewalk, building, wall and sky. This validates the effectiveness of a) the geometric cues derivable from depth to predict the semantics of these kinds of objects and b) the methodology we propose to leverage these additional cues in UDA settings.
This behavior is also clearly observable from the qualitative results in Fig. 5. We point out that, to the best of our knowledge, the performance obtained by D4-ProDA + DBST, i.e. 58.8 mIoU (last row of Tab. 1), establishes the new state-of-the-art for GTA5→CS. SYNSEQ→CS. Akin to common practice in the literature, we present results also on the popular SYNTHIA dataset. As our pipeline requires video sequences to train the self-supervised monocular depth estimation network, we select the SYNTHIA VIDEO SEQUENCES split for training and the Cityscapes dataset for testing. We will call this setting SYNSEQ→CS. To address it, we re-trained the UDA methods for which the code is available and the training procedure is more affordable in terms of memory and run-time requirements, namely AdaptSegNet [49], MaxSquare [5] and MRNET [69]. The results in Tab. 2 show that all the selected UDA approaches exhibit a substantial performance gain when coupled with our proposal, with a general improvement in all classes. In particular, similarly to the results obtained in GTA5→CS, we observe a consistent improvement for classes related to objects with large and regular shapes (as depicted also in Fig. 5), with the only exception of a slight performance drop for the class building when using MRNET [69] (last row of Tab. 2). We argue that our approach is relatively less effective with MRNET [69] as, unlike AdaptSegNet [49] and MaxSquare [5], it already yields satisfactory results in those classes which are usually improved by the geometric cues injected by D4. In the Suppl. Mat. we show that it is also possible to exploit the depth ground-truths provided by the SYNTHIA dataset as an additional source of supervision during the training of Monodepth2 [14], obtaining a small improvement in the performance of the overall framework. Analysis We report here the most relevant analyses concerning our work. Additional ones can be found in the Suppl. Mat. Ablation studies. In Tab. 3, we analyze the impact on performance of our two main contributions, i.e. the injection of geometric cues into UDA methods by D4 and DBST. Purposely, we select the GTA5→CS benchmark and, for the top performing UDA methods, we report the mIoU figures obtained by using the stand-alone UDA method (column UDA), combining it with D4 (column D4-UDA), applying DBST directly on the stand-alone method (column UDA + DBST) and embedding the method into our full pipeline (column D4-UDA + DBST). We can observe that each of our novel contributions improves the performance of the most recent UDA methods by a large margin, which is even more remarkable considering that the selected methods already include one or more steps of self-training. Moreover, D4 and DBST further enhance the performance of any selected method when deployed jointly, as shown in the column D4-UDA + DBST, suggesting that they are complementary. In order to further assess the effectiveness of DBST, in the column D4-UDA + ST we report results obtained by D4-UDA in combination with a baseline self-training procedure, which consists in simply fine-tuning the model by its own predictions on the images of the target domain. As the only difference between this procedure and our DBST is the dataset employed for fine-tuning, the results prove the effectiveness of DBST in generating a varied set of plausible samples more amenable to self-training than the original images belonging to the target domain. Table 3. Impact on performance of the two components of our proposal (D4, DBST) when applied separately or jointly to selected UDA methods on GTA5→CS. * indicates that the method was retrained by us. Results are reported in terms of mIoU. Alternative strategies to exploit depth. As explained in Sec.
3.1 Semantics from Depth, we rely on the mechanism of transferring features across tasks and domains from [38] to inject depth cues into semantic segmentation. To validate our choice, we explore two possible alternatives, namely DeepLabV2-RGBD and DeepLabV2-Depth. Both consist of the popular DeepLabV2 [3] network, with RGBD images as input in the first case and depth maps (no RGB) in the second (more details in the Suppl. Mat.). Tab. 4 compares the performance of these alternatives with our method, either when used standalone (rows 2, 3, and 4) or when combined with LTIR [22] according to the strategy presented in Sec. 3.1 Combine with UDA. The results allow us to make some important considerations. First, our intuition about the possibility of exploiting depth to improve semantics is correct, since even simple approaches improve over the baseline (reported in the first row of the table). Nonetheless, these naive methods produce a significantly smaller improvement compared to our approach, showing that our decision to adapt [38] to the UDA scenario is not obvious. Moreover, [38] requires only RGB images at test time. Finally, when combined with LTIR [22], a stronger depth-to-semantics model provides better results, validating our choice once again. Table 4. Comparison between alternative methods to infer semantics from depth. DeepLabV2-RGB, DeepLabV2-RGBD and DeepLabV2-Depth stand for DeepLabV2 [3] trained on D_S, using respectively RGB images, RGBD images or depth proxy-labels as input, while "Semantics from depth" is the approach described in Sec. 3.1 Semantics from Depth. The merge operation is the one described in Sec. 3.1 Combine with UDA. Results are reported in terms of mIoU on the Cityscapes dataset. Impact of video sequences. As described in Sec. 3.1, we obtain depth proxy-labels with a self-supervised depth estimation network [14], which we train using the raw video sequences (just RGB images) provided by the datasets involved in our experiments. In order to validate that using video sequences from the target domain does not provide any advantage to our framework, we train AdaptSegNet [49] on GTA5→CS using the whole training split available for Cityscapes (i.e. 83300 images with temporal consistency). We choose AdaptSegNet [49] since it can be considered the building block of many UDA methods. We observe a drop in performance from 42.4 to 41.9 mIoU, showing that using video sequences does not boost semantic segmentation in a UDA setting, probably because of the similarity between consecutive frames, and that the improvement produced by our framework is due to the effective strategy that we adopt to exploit depth. Conclusion We have shown how to exploit self-supervised monocular depth estimation in UDA problems to obtain accurate semantic predictions for objects with strong geometric priors (like roads and buildings). As all recent UDA approaches lack such geometric knowledge, we build our D4 method as a depth-based add-on, pluggable into any UDA method to boost performance. Finally, we employed self-supervised depth estimation to realize an effective data augmentation strategy for self-training.
Our work highlights the possibility of exploiting auxiliary tasks learned by self-supervision to better tackle UDA for semantic segmentation, paving the way for novel research directions. Additional Implementation Details As stated in Sec. 3.1 of the main paper, we obtain depth proxy-labels by deploying a self-supervised method for solving monocular depth estimation from video sequences. Specifically, we train Monodepth2 [6] following the training protocol and hyper-parameters used in the original paper. We train it for 20 epochs using mixed mini-batches of size 6, composed of 3 real and 3 synthetic images. We resize samples to a resolution of 1024×512 for training and testing. It is important to train the network on both domains jointly because we want depth predictions to be aligned across domains. Self-supervised depth methods typically estimate depth maps up to a scale factor; thus, we train on both domains simultaneously to force the network to yield predictions from the two domains that share the same range and scale. When D_S is synthetic, we can collect depth ground-truth labels with minimal effort. In such a case, we could exploit these labels to provide an additional source of supervision to Monodepth2. SYNTHIA-SEQ provides far fewer images and smaller variability with respect to GTA5, but it does provide depth ground-truth labels. Thus, in the SYNSEQ→CS setting, we could train Monodepth2 by adding an L1 loss between predictions and ground-truths of SYNTHIA-SEQ to the set of Monodepth2 losses, so as to obtain better depth proxy-labels. Nevertheless, the availability of ground-truth labels is not crucial to improve the performance of the considered UDA method. Indeed, in Tab. 1 we can observe that the use of synthetic depth ground-truth labels provides just a slight performance improvement (i.e. 1% mIoU or less).
Table 1. Results in terms of mIoU:
AdaptSegNet* [14] — 49.5
D4-AdaptSegNet + DBST (w/o synthetic GT) — 55.9
D4-AdaptSegNet + DBST (w/ synthetic GT) — 56
As regards the training of semantic prediction from depth features, we follow the protocol explained in [9]. We train the depth network simultaneously on D_S and D_T, by minimizing the mean absolute error (i.e. L1 loss) between predicted depth maps and the depth proxy-labels previously generated for both domains. Then, we train the semantic network only on D_S, using a weighted Cross Entropy loss with weights computed as in [18]. The weights of the two networks are pre-initialized on ImageNet, and, following a common protocol [14,18,7], all Batch Normalization layers are frozen both at training and test time to use ImageNet statistics. Differently from [9], we deploy the more performant DeepLabV2 [1] architecture for both networks: as the framework requires splitting the network into an encoder and a decoder, we consider the backbone as the encoder and the ASPP module as the decoder. Hence, the transfer function in D4 is learned by minimizing the mean squared distance (i.e. L2 loss) between the semantic features extracted by the semantic network encoder and the ones hallucinated by the transfer function itself starting from the depth encoder. Finally, during DBST, the final distilled model is obtained by minimizing a standard Cross Entropy loss on D_T, exploiting only the pseudo-labels, as explained in Sec. 3.2 of the main paper.
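A condensed PyTorch sketch of the feature-transfer step described above; the module name, the channel width and the surrounding training loop are illustrative assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class TransferFunction(nn.Module):
    """6-layer 3x3 convolutional network with Batch Norm that maps depth-encoder
    features to hallucinated semantic-encoder features."""
    def __init__(self, channels=2048):  # channel width assumed
        super().__init__()
        layers = []
        for _ in range(5):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, depth_feats):
        return self.net(depth_feats)

def train_step(transfer, depth_encoder, sem_encoder, optimizer, images):
    """One optimization step on source images only: L2 distance between the real and
    the hallucinated semantic features (both encoders are kept frozen)."""
    with torch.no_grad():
        f_dep = depth_encoder(images)
        f_sem = sem_encoder(images)
    loss = torch.nn.functional.mse_loss(transfer(f_dep), f_sem)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring (the encoders stand in for the DeepLabV2 backbones used in the paper):
# transfer = TransferFunction()
# optimizer = torch.optim.Adam(transfer.parameters(), lr=1e-4)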
Additional Dataset Details Cityscapes. The Cityscapes dataset [4] provides a large collection of video sequences of driving scenes from 50 different European cities. The dataset is composed of 150000 video-sequence images, of which 83300 are used for training. A subset of 5000 images from Cityscapes is commonly used as a benchmark for semantic segmentation, as these images are annotated with high-quality pixel-level semantic labels (19 classes). This subset is split into train, validation and test sets with 2975, 500 and 1525 images respectively. In our experiments we train Monodepth2 [6] on the 83300 training sequences. For training D4 and DBST we use the 2975 train images (without their semantic labels) and, following the protocol adopted in recent works [14,2,8,18,17,16,7], we evaluate our final model on the validation split. The augmented dataset obtained during DBST starting from the 2975 images accounts for 7500 samples. GTA5. The GTA5 dataset [10,11] consists of synthetic images captured while playing the video game Grand Theft Auto V. It comprises 120000 video-sequence images, which we use in the Monodepth2 [6] training procedure. Moreover, the dataset provides 24966 samples with fine semantic annotations (the same 19 classes as Cityscapes). We train the depth network of D4 on only 3000 images randomly sampled among the 24966, to keep the training balanced with the 2975 images of Cityscapes. Finally, we train the semantic and transfer networks of D4 on the whole set of 24966 synthetic images. SYNTHIA VIDEO SEQUENCES. The SYNTHIA dataset [12] is composed of images generated by rendering a virtual city created with the Unity development platform. Since our method requires video sequences to train Monodepth2 [6], we use the SYNTHIA VIDEO SEQUENCES split, selecting the sub-sequences Spring, Summer, Fall, Winter, Dawn and Fog. We thus collect a total of 26948 images, paired with fine-grained semantic labels (12 classes in common with Cityscapes). In particular, we train on sky, building, road, sidewalk, fence, vegetation, pole, car, traffic sign, person, bicycle, and traffic light. It is worth noticing that, to make the Cityscapes dataset consistent with SYNTHIA VIDEO SEQUENCES, it is necessary to map the Cityscapes class rider into bicycle and to collapse bus and truck into car. We use only 3000 randomly sampled images to train the depth, semantic and transfer networks of D4, as well as for the training of the other considered methods which were retrained by us (* in Tab. 2 of the main paper), as the authors do not provide their results on SYNTHIA VIDEO SEQUENCES.
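A small sketch of the label harmonization mentioned above; the numeric train IDs are hypothetical, and only the rider→bicycle and bus/truck→car merges come from the text.

# Hypothetical train IDs for the 12 shared classes (not the official Cityscapes IDs).
SHARED = ["sky", "building", "road", "sidewalk", "fence", "vegetation",
          "pole", "car", "traffic sign", "person", "bicycle", "traffic light"]
NAME_TO_ID = {name: i for i, name in enumerate(SHARED)}

# Cityscapes-only classes remapped so that both datasets share one label space.
MERGES = {"rider": "bicycle", "bus": "car", "truck": "car"}

def harmonize(label_name: str) -> int:
    """Return the shared train ID after applying the merges; -1 marks ignored classes."""
    name = MERGES.get(label_name, label_name)
    return NAME_TO_ID.get(name, -1)

print(harmonize("rider"), harmonize("truck"), harmonize("wall"))  # 10 7 -1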
We realize both options by training the popular DeepLabV2 [1] architecture to perform semantic segmentation on D_S, initializing the network with ImageNet [5] pre-trained weights. Moreover, in the first case, we add a convolutional layer at the beginning of the architecture to reduce the input RGBD channels from 4 to 3, while in the second case we obtain 3-channel input images by stacking the proxy depth map three times. In the following, we will call the first network DeepLabV2-RGBD and the second one DeepLabV2-Depth. We also consider as a baseline the performance of DeepLabV2 trained only on RGB images, referred to as DeepLabV2-RGB. In Tab. 2 we report the mIoU results obtained on Cityscapes (i.e. our target domain) by DeepLabV2-RGB, DeepLabV2-RGBD, DeepLabV2-Depth, and our method. We observe that the RGBD and the Depth versions yield slightly better results compared to the RGB baseline. Interestingly, DeepLabV2-Depth provides better results than DeepLabV2-RGB and DeepLabV2-RGBD, which supports our intuition that semantic cues extracted from depth alone are more effectively transferable across different domains due to their reliance on geometry rather than appearance. Yet, the ability of DeepLabV2-RGBD and DeepLabV2-Depth to overcome the domain shift is limited, as performance is low for both variants. On the contrary, by tackling the problem with the method proposed in the main paper, we can improve the baseline by 8.6% in terms of mIoU. Moreover, we evaluate DeepLabV2-RGBD and DeepLabV2-Depth also in combination with a UDA method, as proposed in Sec. 3.1 (Combine with UDA) of the main paper. In the last three rows of Tab. 2, we report the mIoU results obtained by such combinations (rows 5 and 6), compared to our proposal (last row), while considering one of the best performing UDA methods, namely LTIR [7]. As intuitively expected, we observe that a better depth-based semantic model leads to a better combination with the selected UDA method, motivating once again the need for an approach robust to domain shift in order to infer semantics from depth cues in UDA settings. Table 2. Comparison between alternative methods to infer semantics with the aid of depth cues. DeepLabV2-RGB, DeepLabV2-RGBD and DeepLabV2-Depth stand for DeepLabV2 [1] trained on D_S, using respectively RGB images, RGBD images or depth proxy-labels as input, while "Semantics from depth" is the approach described in the subsection with the same name of Sec. 3.1 in the main paper. The merge operation is the one described in the subsection Combine with UDA of Sec. 3.1 of the main paper. Results are reported in terms of mIoU on the Cityscapes dataset.
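For completeness, the two input adaptations used by DeepLabV2-RGBD and DeepLabV2-Depth can be sketched in a few lines of PyTorch; the 1×1 kernel of the reduction layer and the module names are assumptions.

import torch
import torch.nn as nn

class RGBDAdapter(nn.Module):
    """Prepends a 4-to-3 channel reduction so that a standard RGB backbone can take RGB-D input."""
    def __init__(self, backbone):
        super().__init__()
        self.reduce = nn.Conv2d(4, 3, kernel_size=1)  # kernel size assumed
        self.backbone = backbone

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth.unsqueeze(1)], dim=1)  # (B, 4, H, W)
        return self.backbone(self.reduce(x))

def depth_only_input(depth):
    """DeepLabV2-Depth variant: replicate the proxy depth map on three channels."""
    return depth.unsqueeze(1).repeat(1, 3, 1, 1)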
Rather than relying on self-supervised depth on both domains as done in the previous cases, one may try to use just the depth provided by the synthetic source dataset. To the best of our knowledge, only two works [15,3] proposed to exploit depth in a UDA context for outdoor scene segmentation. We compare here our D4 module with [15], the only publicly available framework, to show that the additional information for the target domain is a key component for Domain Adaptation. We retrained [15] with the same hyper-parameters, changing only the training split (i.e. SYNTHIA-SEQ instead of SYNTHIA-RAND-CITYSCAPES). As Tab. 3 shows, D4 surpasses [15] by a large margin (3.6%), suggesting that self-supervised information for the target domain can be used to boost performance in Domain Adaptation. DBST vs DACS [13] In Tab. 4 we compare our DBST with the method presented in DACS [13], as they share some similarities. In particular, both approaches generate training samples by copying portions of images onto other images. However, they differ in three main aspects:
• [13] copies portions of images from D_S onto images from D_T, while in our DBST we use exclusively images from D_T.
• In our proposal, we copy only image patches whose semantic predictions belong to a predefined set of classes that we deem more amenable to being moved across images, like, e.g., person, car and pole; conversely, in [13] no semantic filter is applied to select the patches that will be copied across the images.
• Unlike [13], we exploit depth information to plausibly stack objects in the generated sample.
In addition to these points, in our DBST we further exploit depth information to guide the selection of the patches to be copied by excluding areas of the scene that are too far away from the camera, where semantic predictions are less likely to be accurate. In Tab. 4 we report results in the GTA5→CS benchmark when applying DBST or [13] to D4 combined with [7]: our DBST outperforms the strategy proposed in [13], though the latter can also yield a notable performance improvement. Table 4. Comparison between the approach proposed in [13] (DACS) and our DBST, when applied to our D4 combined with [7]. Results are reported in terms of mIoU in the GTA5→CS benchmark. Adding videos to UDA methods In this section, we empirically demonstrate that using additional raw data is not directly useful in the UDA setting for semantic segmentation. To this purpose, we adopt [14], which makes use of adversarial training and can be considered the main building block of many UDA methods proposed in the literature. Moreover, adversarial training is a plausible strategy to exploit additional unlabeled images for the target domain. Driven by this reasoning, we retrained [14] on the GTA5→CS benchmark using the whole training split available in Cityscapes (i.e. 83300 images with temporal consistency). The result reported in Tab. 5 suggests that simply collecting more data is not enough to boost semantic segmentation in a UDA setting, and that more advanced techniques, such as the one proposed in this work, are necessary to extract useful information from it.
Table 5. AdaptSegNet [14] trained with or without additional unlabeled target images (mIoU):
AdaptSegNet (w/o video) [14] — 42.4
AdaptSegNet (w/ video) — 41.9
Qualitative Results In Figs. 1, 2, 3, 4, 5 and 6 we report several qualitative results of our D4 proposal combined with the different UDA methods reported in Tab. 1 and Tab. 2 of the main paper. In every case, we observe an overall improvement in the quality of the predictions. In particular, thanks to the additional information provided by depth maps, the errors in large objects with regular shapes are partially removed (see the first and second column of Fig. 1). Moreover, with the proposed merging algorithm (Sec. 3.1) and with the DBST algorithm detailed in Sec. 3.2, we also preserve the good performance of the selected UDA method for certain classes. For instance, all the predictions concerning classes such as pole and traffic sign are always maintained or even improved (see the second row of Fig. 2).
DBST - Qualitative Results In Figs. 7 and 8 we show some training samples obtained with our DBST algorithm. As explained in Sec. 3.2 of the main paper, we use multiple images from D_T as sources, alongside the corresponding depth maps and predictions (referred to as pseudo-labels), to synthesize new training pairs. We can notice how the newly generated samples contain many patterns that would not be present in the original images, enabling a more effective Self-Training procedure. We also point out how, thanks to the use of depth maps, the generated pairs look realistic. For example, in the third row of Fig. 7, the rider on the left side of the image is pasted in front of the pole since it appears closer in the depth maps of the two images. Figs. 9, 10 and 11 report the depth proxy-labels obtained in the first step of our pipeline by the self-supervised approach proposed in Monodepth2 [6]. We note how the produced depth maps are smooth and accurate on the static parts of the scene (such as roads and buildings), while they tend to be noisy on moving objects (like cars and pedestrians). Despite these imperfections, the depth proxy-labels produced by [6] provide a solid base of geometric cues for objects with large and regular shapes, which are extensively exploited in our proposal. Figure 10. Depth proxy-labels for the GTA5 dataset obtained with Monodepth2 [6]. We show RGB images (first row) and corresponding depth maps (second row), shown as inverse depth maps for better visualization. Figure 11. Depth proxy-labels for the SYNTHIA-SEQ dataset obtained with Monodepth2 [6]. We show RGB images (first row) and corresponding depth maps (second row), shown as inverse depth maps for better visualization.
Journal of Strategic Security Abstract The international community has united in its mission to halt the hijacking of merchant ships in the Gulf of Aden and the Red Sea with a massive naval presence that monitors the vast, strategic seas in which Somali pirates operate. This naval presence consequently has had some success in reducing pirate attacks in 2012, but why are the Somalis turning to piracy in the first place? The economic history of piracy has been well documented for other former "pirate hotspots" worldwide; however, there is little data available on the microeconomic effects of piracy. This article explores the underlying reasons why Somalis have turned to piracy as a "profession," and offers recommendations for the international community to eliminate piracy effectively through non-military means. Introduction The root causes of the growth of piracy in the Puntland area of Somalia and the rest of the Horn of Africa are debated, but most agree that piracy stems from the economic hardship of Somalis. When discussing the exact cause of economic desperation and whether or not it is a valid reason for piracy, many scholars pinpoint the acceptance of piracy in the Horn of Africa to fishing rights violations. For instance, fishing by other countries in the territorial waters of Somalia is a serious problem for Puntland. Estimates indicate that the coastal waters off Puntland have a vast supply of fish that are considered ideal for export, causing many neighboring countries to fish there illegally. 1 Veteran African journalist Peter Eichstaedt disagrees with the notion that illegal fishing is a valid reason for why piracy continues to be allowed. Eichstaedt believes that piracy stems from the self-interests of the pirates and their financiers and not from illegal fishing. 2 While it is unclear exactly when piracy in Somalia began, testimonies from Somali pirates indicate that piracy surfaced shortly after the fall of the Somali government in 1994, when foreign fishing boats sailed to fish in the Somali waters of the Gulf of Aden. Yemeni fishermen became the largest group of foreigners to fish illegally off Somalia's coast. There are claims by former Somali fishermen that Yemeni fishing vessels would dismantle and/or steal Somali fishing nets so that the Yemenis could catch more fish. To combat this illegal fishing tactic, Somali fishermen would use small boats and basic weapons to force their victims overboard before spraying them with water in retaliation. They would further threaten other potential ships that might consider entering Somali waters. Some foreign fishermen would be released without any harm as long as they paid a considerable fine. 3 Thus, it is plausible to assume that piracy in the region began with Somali fishermen who threatened to attack foreign sailors if they did not leave their waters. 4 Ultimately, Somali piracy would grow into an international problem when two events occurred that would significantly alter the living conditions in the country. In a relatively short period of time, two major events struck Somalia that made the economic situation worse. First, the number of fish in the Gulf of Aden dropped significantly, causing the area to no longer be economically feasible for Somalis to fish. The waters were simply overfished by both illegal fishing vessels and Somalis.
5 This dramatically affected the livelihoods of the native population, as the inability to catch and sell fish meant that local families could not purchase food. This led many Somalis to try to supplement their wages, but it was nearly impossible for Somali fishermen to find another profession that would earn a living wage in the war-torn country. 6 An example of testimony recorded by Eichstaedt is that of a pirate named Musi. He became a pirate simply because of poverty and the lack of opportunities in the Horn of Africa. His mother earned only a couple of dollars a day by selling milk in a local market in Galkayo. His father, on the other hand, tended camels, cows, and goats, feeding them on the sparse vegetation found throughout the arid, windswept interior. To provide money to purchase necessary supplies and maintain the house for his family, Musi became a pirate. 7 For many Somali men like Musi, piracy is the only economic alternative for former fishermen. In addition, the 2004 tsunami destroyed much of the equipment Somalis needed to fish. The 2004 tsunami, the second major event, devastated the lives of millions of people who lived in or were visiting countries surrounding the Indian Ocean. Somalia was struck badly by this event and subsequently suffered enormous economic hardship. The disaster caused the deaths of an estimated 289 people in Somalia, and the economic damage was much greater than in any other part of Africa. For instance, the tsunami destroyed an estimated 800 buildings and over 600 fishing boats. 8 Instantly, the livelihoods of many Somalis were destroyed, and unlike other affected parts of the Indian Ocean coastline, Somalia received very little foreign aid from the international community. Because of the lack of a centralized, legitimate government in Somalia, the United Nations and other organizations have sent very little aid to Somalia for reconstruction efforts. 9 Accordingly, many Somalis began to experience health problems as a result of illegally dumped harmful waste that began washing up on the local shores. 10 These negative health conditions exacerbated the already dire economic situation, increasing the lure of piracy as a means of survival. Pastoralists and other Somalis who were affected by the depleted fishing areas off the shores of Somalia turned to hijacking foreign vessels out of a combination of economic desperation and frustration at their perceived abandonment by the international community. Since the 2004 tsunami, piracy has quickly escalated from simple hijackings of foreign fishing vessels to more complex hijackings of cargo ships and private luxury ships. Formerly unorganized pirates have become sophisticated and well organized, with financiers, mother ships, and the financial distribution of ransoms. 11 The international cost of piracy is estimated at between $13 and $16 billion annually, with the real number possibly higher. 12 Undoubtedly, this number includes the ransoms, the cost of maintaining a naval presence in the Gulf of Aden, and the economic effects of the hijacking of boats. It is estimated that 10 percent of foreign shipping traffic refrains from traveling through or near this region because of the fear of hijacking. 13 In November 2010, the highest ransom, $9.5 million, was paid to release the South Korean oil tanker Samho Dream and its sailors. 14 In 2008, estimated ransoms were approximately $50 million, a tenfold increase from 2007. Most importantly, the number has increased considerably ever since.
The most famous hijacked ship is the U.S. freight vessel Maersk Alabama, captured on April 8, 2009, and held by pirates for a $10 million ransom. Four days later, U.S. Navy SEALs fatally shot three hijackers and subsequently rescued the hostages.16 The only hijacker who survived the sniper attack was Abduwali Abdukhadir Muse, who was onboard the U.S.S. Bainbridge conducting negotiations and was later convicted of piracy.17 The Maersk Alabama was not the only major ship rescued from Somali pirates, but it has become a symbol of the determination of the international community to respond to Somali piracy.

The Microeconomic Effects of Piracy Operations To effectively combat piracy in the Horn of Africa, the international community needs to examine the economic reasons for, and effects of, Somalis turning to piracy. The "organization of piracy" includes more than just pirates, as it encompasses financiers who provide the start-up financial capital for the pirates to purchase their supplies, such as boats, GPS devices, food, and fuel. After a successful hijacking, ships are often brought to Somali coastal cities whose residents sell goods to the pirates, stimulating the economies of those cities. This unusual circle raises an interesting question about whether inflation and other economic factors are affecting the region as a whole.

When a group of Somalis examine their economic options and are forced to become pirates because of the lack of opportunities elsewhere, they usually do not have the start-up financial capital needed for the occupation. Hence, they go to financial sponsors, usually local businessmen, who give the new Somali pirates the funds needed to purchase the items necessary to successfully hijack a vessel.18 In return for their investment, financiers generally receive 30 percent of the ransom money.19 However, some estimates suggest that 50 percent of the ransom money actually goes to the financiers.20 More importantly, the start-up money from local businessmen comes from two sources.21 The first source is the informal banking system in Somalia, fed by deposits from Somalis living abroad. In fact, the diaspora has been able to support the flow of money to the region, between $500 million and $1 billion a year, to their relatives still living in Somalia, Somaliland, and Puntland.22 This money has been used, via loans from local businessmen, to give the necessary start-up financial capital to the pirates. The second source is the growing flow of funds to local businessmen from abroad: Iran, Syria, Libya, Egypt, and terrorist organizations. As a result, foreign businessmen are giving local Somali financiers the necessary start-up cash.23 It is usual for the financier to invest in the skiff and motor of the boat, the weapons and ammunition needed for the hijacking, the tools to get onto the hijacked vessel, and the technology (a GPS device) for the pirates to know their location and to which Somali coastal city they will bring the hijacked boat. The total cost of the financier's investment comes to an estimated $21,200.24 Once the funds are collected, they are given to a new pirate operation in the hope that it will return the start-up capital as well as produce a profit.
After the new pirates receive the start-up cash from their financier, they then need to purchase the goods necessary for hijacking maritime vessels. The estimated cost for the pirates varies with whether or not the pirates are experienced, whether they have set up accounts with their pirate networks, and where they bring the hijacked vessels. The average operational cost for the pirates is close to $300,000.25 This includes the cost of food, supplies, equipment maintenance, care of hijack victims, and methods to collect the ransoms.26 The hope is that the ransom will cover these costs and leave enough over for personal expenses. It is worth noting that the first man who boards the hijacked ship usually receives a bonus of $5,000. The idea behind this stems from the fact that the pirate who first initiates close contact with the soon-to-be hostages takes the most risk, for the ship passengers might try to attack and harm him before he is supported by the other pirates who begin to board the vessel.27 Once the ship is under the control of the pirates, they then have to sail it ashore, and a new set of economic transactions occurs while they wait for their ransoms.

After hijacking a ship, the pirates usually take the vessel to a coastal city in Puntland, where they can monitor it and negotiate with the company that owns the hijacked vessel.28 The best-known locations are Eyl, Garad, Hoboyo, Hardheere, Mogadishu, and Bosasso.29 Once at port, the pirates are responsible for paying the community a fee for docking the ship at the city's port. This fee is approximately 10 percent of the expected ransom.30 Once the ship is on the coast and the docking fee is paid, the hijackers will then hire a negotiator to deliver their demands for the hijacked vessel to its owners. When at dock, local merchants will sell the food and supplies necessary to keep the hostages alive. Items often purchased by pirates include sheep, goats, water, rice, fuel, pasta, and milk. As in a tourist area in any country, the prices for basic goods jump. Notable examples are $25 for a pack of cigarettes, $10 for a can of Coca-Cola, and $250 for a goat.31 The money that the pirates use to pay for these expenses comes out of their total revenue and not from the financiers or sponsors. If the negotiations drag on, their total revenue from the hijacking could disappear.

A basic economic question is whether there is a possibility of inflation arising from these transactions between the pirates and the local ports. With the dramatic increase in the prices of goods, inflation does affect the value of goods and services in the coastal cities, but not in the region as a whole. The reason is that inflation stems from a different source in the Horn of Africa than is commonly understood by economists. The increase in prices and in the money supply to the cities is not the cause. The source behind the increases in the prices of food, security, bribes, and so on is that ships do not want to dock and unload their cargo in Somalia because of the fear that they will be hijacked.32 Thus, the aggregate demand of Somalis in the Horn of Africa has not changed, but the aggregate supply has dramatically decreased. Even though there is artificial inflation by merchants in the pirate cities, the overall Somali economy is hurting from the decline in the aggregate supply of goods from around the world.
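To make the arithmetic of a single operation concrete, the sketch below combines the per-operation figures cited above (roughly $21,200 of financier start-up capital, about $300,000 of operational cost, a $5,000 first-boarder bonus, a docking fee of about 10 percent of the expected ransom, and a 30 percent financier share) into a rough split of a hypothetical ransom. The ransom amounts, the crew size and the equal split among the crew are illustrative assumptions, not figures reported in this article, and the financier share is sometimes estimated at up to 50 percent.

```python
# Back-of-the-envelope split of a hypothetical ransom using the per-operation
# figures cited in the text. All inputs are illustrative; the article reports
# ranges and estimates, not the accounts of any single deal.

STARTUP_CAPITAL = 21_200      # financier's up-front investment (skiff, weapons, GPS, fuel)
OPERATIONAL_COST = 300_000    # food, supplies, hostage care, ransom collection
FIRST_BOARDER_BONUS = 5_000   # bonus for the first pirate onto the hijacked ship
DOCKING_FEE_RATE = 0.10       # share of the expected ransom paid to the port community
FINANCIER_SHARE = 0.30        # financier's cut of the ransom (some estimates reach 0.50)

def split_ransom(ransom: float, crew_size: int = 12) -> dict:
    """Divide a ransom among the actors described in the text (illustrative only)."""
    docking_fee = DOCKING_FEE_RATE * ransom
    financier_cut = FINANCIER_SHARE * ransom
    # Crew residual after the financier, the port, operating costs and the bonus.
    crew_pool = ransom - financier_cut - docking_fee - OPERATIONAL_COST - FIRST_BOARDER_BONUS
    return {
        "financier_cut": financier_cut,
        "financier_profit": financier_cut - STARTUP_CAPITAL,
        "docking_fee": docking_fee,
        "crew_pool": crew_pool,
        "per_pirate": crew_pool / crew_size if crew_pool > 0 else 0.0,
    }

if __name__ == "__main__":
    for ransom in (1_000_000, 3_000_000, 9_500_000):  # last value: the Samho Dream ransom
        result = split_ransom(ransom)
        print(f"ransom ${ransom:,}: " + ", ".join(f"{k}=${v:,.0f}" for k, v in result.items()))
```

Even under these rough assumptions, a seven-figure ransom leaves a large margin after costs for both the financier and the crew, which is consistent with the argument that piracy remains attractive relative to other local livelihoods.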
The Microeconomic Effects of Post-hijacking in Puntland How the ransom is used can tell the international community as much about the situation as can the conditions under which Somalis choose to become pirates. When discussing the distribution of the ransoms, three main actors surface. The first are the pirates, who spend their money on goods, products, and repayment of debts. The second are the financier(s), who use their part of the ransom for reinvestment in future pirate missions and for their personal interests. The third involves the clan leaders, warlords, and the Puntland Government, who all use the funds for various reasons, from personal wealth to investments in the community. Overall, these actors are the most important for understanding how piracy continues, even when foreign governments send their naval fleets to stop Somali pirates.

As noted earlier, Somali pirates turn toward this illegal trade because of a lack of any other real economic opportunities. Estimates of the exact number of Somali pirates range around the 5,000 mark.33 Data suggest that most pirates spend their share of the ransom money on two items. The first is the repayment of debts. Previous purchases of homes, cars, and supplies needed for the hijackings force many Somalis to take on loans. An example is the testimony of a pirate named Khalil, who used ransom money to repay debts that he and his family had collected over the previous few years.34 After repayment of previous debts, Somali pirates often spend the rest of the money purchasing items for their own consumption, such as luxury cars, new houses, electronics, and other western gadgets. This calls into question whether Somalis become pirates out of the need to afford necessary items for survival. Eichstaedt's research suggests that their purchasing patterns are not rational and are based only on simple desires.35 With these new properties and gadgets, demand increases for purchasing new weapons or hiring security forces to protect their property. The pirates' and financiers' purchasing of new houses has created a housing bubble, with the demand for housing rising faster than the supply.36 Outside of the actual act of piracy, the economic lives of Somali pirates seem, on a consumption basis, to be better than those of Somalis who are not pirates. However, there is very little information available on whether they save any of the profits that they make, or whether they invest them in any programs.

The ransom money that pirates' financiers receive is used for three purposes. The first is reinvestment in future pirate missions. The money is spent either on new equipment for their existing pirates so they can hijack more ships in shorter periods of time, or on sponsoring new pirates.37 Like the pirates themselves, the financiers also purchase new homes, cars, security forces, and so on, based purely on personal wants.38 The investors also use their share of the ransoms to invest in other businesses around the world.39 What seems to be a major part of the ransom goes into the bribing of warlords, clan leaders, and Puntland Government officials.
It is still very much a mystery exactly how much warlords and clan leaders receive from the ransoms of hijacked ships. It is clear that they do participate in the piracy trade to some extent. Clan leaders in the coastal cities request a payment of $100,000, which is spent on the development of the community. In Bosasso, there is photographic evidence depicting clan leaders and warlords using some of the pirate ransom money for the construction of schools and mosques.40 Some Somali clan leaders have instructed that if a pirate is captured, he must not "talk." This stems from the clan leaders' fear that captured pirates might disclose the networks and relationships between the pirates, financiers, and the clan leaders.41 The Puntland Government has been known to receive funds in the form of bribes from clan leaders, warlords, and financiers so that it does not stop or prosecute Somali pirates. It is estimated that 30% of the ransom money goes to clan leaders and financiers who lobby and bribe the political leaders of Puntland to ignore the international community's complaints about the piracy trade.42 In return for these funds, not only are the pirates protected, they are also given information that will aid them in future hijackings, such as the location of foreign ships.43 This has caused outrage in the international community; however, Puntland is not a recognized state, so it does not have to obey international maritime laws.44 Many Puntland officials are even scared to cross paths with the pirate businesses for fear of assassination. In fact, a Puntland judge who handed out jail terms to captured Somali pirates was mysteriously killed for the sentencing.45 There has also been speculation that high-ranking members of the Puntland Government are personally involved in the piracy business.46 The interest of clan leaders, warlords, and the Puntland Government is to keep, continue, and build the business of piracy.

Overall, the answer to the question of whether piracy has aided the development of Puntland is disappointing. The communities themselves do not see a real increase in their economic development, measured in real wages. Food prices have risen in towns predominantly used by pirates, and locals' wages have decreased. The people who benefit economically are solely the pirates, whose lives might be better (excluding the actual hijacking of the ships), and the financiers, clan leaders, warlords, and bribed Puntland Government workers. However, their benefits do not trickle down to the average Somali in Puntland, who is greatly suffering from the piracy trade.47 The question then becomes how to stop piracy, by either making it economically unattractive or incorporating military policies to counter the illegal activities.

Possible Solutions to Reduce Piracy How does the international community stop piracy? There seem to be two main theories on how it can be solved. The first is what is currently being implemented, a military operation to stop or contain pirate attacks. The second option is to economically develop all of Somalia to remove the underlying reasons why Somalis turn to piracy. However, both arguments presented here are simplified versions of what could be done to stop Somali piracy.
The international community has put forth United Nations (UN) resolutions, funds, and naval forces to halt Somali piracy. The theory behind these policies is that the only way to show Somali pirates that they cannot commit acts of piracy is to punish those who are caught in the courts of Kenya or other parts of the world.48 The current strategy of the international community seems to be "to engage" the pirates with force once they have hijacked a ship, very similar to the Maersk Alabama incident in 2009. In fact, this has been approved by UN Resolution 1816, which gives international naval fleets permission to engage Somali pirates. Although this resolution would later be repealed, a similar resolution was agreed upon shortly afterwards allowing the monitoring of the Gulf of Aden by international naval vessels and the establishment of judicial courts to prosecute suspected captured pirates.49 Maintaining a military presence in the Gulf of Aden, however, has cost foreign nations a total of $1.3 to $2.0 billion per year.50 Additionally, minor land forces have been used to attack, arrest, and/or kill pirates and their leaders. Specifically, American troops target suspected pirates and are transported via helicopter to the region to either arrest or kill them.51 It must be noted that these policies are intended neither to stop hijacking in the Gulf for the long term nor to improve the conditions in Somalia that breed the piracy.52 They serve as either a response to, or very short-term prevention of, pirate attacks. This has led some to believe that the only way to end piracy is to eliminate its causes through economic development.

It may seem hard to imagine that Somalia could have a bright future waiting for it; however, after examining the economic structure of Somalia, we see that this nation's economy has operated without a central bank or an effective government for nearly two decades. In nearly all economic theories, there needs to be at least some type of government to protect private property.53 Somalia has been able to sidestep this belief. If proper free market capitalism is implemented to the fullest (as in the example of telecommunications in Somalia), then it is possible to strengthen the current government on the basis of the economy.54 The areas in which Somalia can rebuild its state are with help from the world markets, which would allow the country to utilize its potential oil reserves, tourism, the strategic ports of Berbera in Somaliland and Bossasso, and the traditional livestock trade. A historical problem for Somalia has been its leadership; the lists of potential leaders from the different clans in Somalia and Puntland are not impressive. However, the Somalis who have emigrated from Somalia to live in the West are more impressive. For instance, the notable Somali scholar Ahmed I. Samatar and many others are involved in fields such as art, academia, and music. They can be considered potential leaders of Somalia because they have witnessed effective leadership while living in the West. Most importantly, they have committed their support to their homelands with monetary donations of $500 million to $1 billion per year to relatives still in Somalia, Somaliland, and Puntland.55 Ultimately, success can be achieved if free market capitalism is allowed to take hold in Somalia under the leadership of the current diaspora.
To stop the cycle of piracy in the Horn of Africa, there are two main options that can be implemented. The first is the current strategy of the international community: a multinational naval force in the Gulf of Aden to stop hijackings once they have begun, free hijacked vessels, and conduct helicopter operations to try to stop pirates before they attempt to hijack a vessel. It is in no way being suggested that the service of these naval fleets should be condemned or dismissed, but it seems impractical to continue this short-term strategy. The other option to stop piracy is to focus on the economic development of the region. If attained, Somalis who have the ability to earn a living wage will likely abandon piracy.56 The problem with development is that it takes time and resources. It seems that the international community has decided that it is more economical simply to deal with the piracy, either by paying the ransoms or by attacking with its militaries.

Conclusion Most of the international community abandoned Somalia after the horrific downing of the American Black Hawk helicopters at the end of the Battle of Mogadishu and after the complete failure of several attempts to establish any type of government in Somalia. This has led to desperation among many Somalis. With no effective government able to protect their coastal waters, foreign fishing fleets have overfished those waters to the point that fishing is no longer economical. This was compounded by the tragic 2004 Tsunami, which destroyed hundreds of fishing boats and many parts of Puntland and the rest of Somalia.

Piracy originated with Somalis fining or harassing foreign fishing fleets, but as the situation in Somalia worsened, the targets became the freight ships that carried products from around the world into the global market. Piracy became more organized as Somali pirates received start-up financial capital from financiers. These financiers provided not only the start-up funds necessary for piracy but also much of the equipment the pirates needed. Once a ship was successfully hijacked, it was brought back to port, where the coastal city would sell overpriced items such as food and fuel to the pirates. The hijackers are forced to purchase their supplies from these merchants because they need to keep their hostages alive while they wait for their broker to send the ransom demands to whoever owns the hijacked vessel. Once the ransom is paid, the pirates, financiers, and other parties (the clan leaders and warlords as well as the Puntland Government) distribute it among themselves. Tragically, this economic system is sustaining the piracy in the region, but it is the only option for many Somalis.

53 Fusfeld, Daniel, Age of the Economist (New York: HarperCollins College Division, 2001), 37. 54 Little, Peter, Somalia: A Country Study (Bloomington: Indiana University Press, 2003), 144. 55 Bradbury, Becoming Somaliland, 148; Shire, "Transactions with Homeland: Remittance," 102. 56 Eichstaedt, Pirate State: Inside Somalia's Terrorism at Sea, 105-19.
2019-05-06T14:05:11.412Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "a8562cbf2466236e2e50091d7dfc65495c80024d", "oa_license": "CCBYNC", "oa_url": "https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1222&context=jss", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b143715b4d606c3561671cc5b7f5f4f33725a15d", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
201400802
pes2o/s2orc
v3-fos-license
Study of Etiology, Fatal Outcome and Different Surgical Techniques in the Management of Small Bowel Obstruction. Prem Prakash, Nadeem Ahmad, Kanchan Sone Lal Baitha, Seema Rani Sinha, Anand Deoraj, Abhishek Chaudhary. Affiliations: Associate Professor, Department of General Surgery; Assistant Professor, Department of General Surgery; Junior Resident, Department of Biochemistry; Senior Resident, Department of General Surgery; all at Indira Gandhi Institute of Medical Sciences, Patna, India.

Introduction Bowel obstruction occurs when normal propulsion and passage of intestinal contents cannot occur, for whatever reason. [1] It represents a substantial burden on the national health care system of any country. A study in the United States estimated that 1% of all hospitalisations, 3% of emergency surgical admissions to general hospitals and 4% of major celiotomies are done for bowel obstruction or procedures that require adhesiolysis. [2] Bowel obstruction can be dynamic (mechanical), where peristalsis is working against a mechanical obstruction, or adynamic (functional), which results from atony of the intestine in the absence of any mechanical cause. The obstruction can be simple, where the blood supply is intact; strangulated, where the blood supply is interrupted; or closed loop, where a segment of the bowel is obstructed at both the proximal and distal ends. About 80% of bowel obstructions occur in the small intestine; the other 10-20% occur in the colon. [3] Acute mechanical obstruction is a surgical emergency, an emergency operation being defined as one that should by necessity be performed within 24 h of a patient's admission, or within 24 h of the development of a specific complication. [4] One of the key components of management is early diagnosis, as delay may result in bowel ischemia, necrosis and perforation. The most common causes of small bowel obstruction are adhesions, intestinal tuberculosis and obstructed hernias.

Subjects and Methods All consecutive patients admitted with a provisional diagnosis of small bowel obstruction from April 2016 to March 2018 were included in the study. Exclusion criteria: individuals <14 years of age and terminally ill patients were excluded from the study; conservatively treated patients were also excluded. Inclusion criteria: all patients >14 years of age who were surgically treated were included. Patients were followed up only for the duration of their hospital stay, which was 10 to 14 days. All patients were subjected to thorough clinical examination, routine blood investigations (e.g. complete blood count, random blood sugar) and radiological investigations (e.g. plain x-ray abdomen supine and erect, ultrasound of the whole abdomen and pelvis, computerised tomography scan of the whole abdomen) wherever necessary. According to the etiology, definitive surgical treatment was done.

Discussion This was a retrospective study of 95 patients admitted to a tertiary care hospital in eastern India. Analysis of the different causes of obstruction showed that adhesion (31 out of 33 patients had a history of previous abdominal operation) was the commonest cause of small bowel obstruction (34.8%), followed by intestinal tuberculosis (26.31%) and obstructed/strangulated hernia (23.15%).
A study conducted in Western Sudan showed obstructed/strangulated hernia to be more prevalent than adhesion, while small bowel volvulus was found to be the least prevalent by far. [5] Intestinal tuberculosis, with an incidence of 26.31%, was found to be an important cause of intestinal obstruction in our present study, correlating with other studies done in a developing country like India, e.g. Adhikari et al (14.17%). [6] The most common pathology was simple adhesion, while stricture was found in 4 out of 16 patients. The terminal ileum and ileocecal region were predominantly involved. Hernias (obstructed/strangulated) as a cause of small bowel obstruction in the present series accounted for about 23.15% of cases. In studies by other workers, such as Haridimos et al and Ihedioha, the incidence was 18.5% and 18%, respectively. [7,8] In a study in Saudi Arabia, the incidence was found to be 18.5%. [9] The high incidence of strangulation in our study can be explained by the fact that most patients present late for surgery, when the hernia has become obstructed or strangulated. The incidence of small bowel volvulus was 4.2%; Gurleyik et al (1998) found the incidence to be 13%. [10] The miscellaneous group included rare cases, comprising 2 (2.1%) cases each of roundworm impaction, Meckel's diverticulum, intra-abdominal abscess and mesenteric cyst. One case (1.05%) was of gallstone ileus. The incidence of malignancy was low (2.1%), which is lower than in studies done in western countries such as Haridimos et al (13.4%), as they also took large bowel tumors into account and the incidence of malignancy here is lower than in western countries. [7] Resection and primary end-to-end anastomosis was done in cases of a gangrenous segment of bowel due to any cause, using 3-0 polyglactin suture. Where the rest of the gut was found unhealthy, resection and ileostomy were done. In obstructed hernias with signs of healthy gut, simple reduction with open herniorrhaphy using no. 1 prolene suture was done. In some cases of intestinal tuberculosis where a stricture was the culprit, stricturoplasty was done. In malignancy, resection and anastomosis was the operation of choice. In roundworm impaction and gallstone ileus, simple enterotomy and removal of the causative agent was done. In mesenteric cyst (chylolymphatic cyst), enucleation was the operation of choice, while in Meckel's diverticulum, resection and anastomosis was done. In intra-abdominal abscess, exploratory laparotomy and drainage of the abscess was done. Depending on the clinical setting and the presence of related or unrelated comorbidities, mortality rates range from up to 3% for simple obstructions to as great as 30% when there is vascular compromise or perforation of the obstructed bowel. [11] The mortality rate in the present study was 10.52%. There was no intraoperative mortality. In a study at Tenwek hospital by Philip et al, mortality was 4.5%. [12] It is important to note that in most of the patients who died there was more than one fatal complication, such as pulmonary complications (9.4%), and anastomotic leak, shock, burst abdomen, and fluid and electrolyte imbalance (5.2% each). The high mortality rate could be explained by the fact that most of the patients were malnourished and had comorbidities, and they presented late, leading to a delay in diagnosis and ultimately to presentation with strangulated bowel.

Conclusion The surgical management of postoperative adhesions forms a major share of the management of small bowel obstruction. Laparotomy should be done as soon as possible.
Obstruction due to intestinal tuberculosis is common, often presents very late, and most of the time requires ileostomy creation. It is better to create an ileostomy than to attempt primary anastomosis whenever there is doubt about the viability of the rest of the gut or suspicion of any pathology, e.g. tuberculosis. In cases where the rest of the bowel is found healthy, primary anastomosis can be done. One of the keys to the management of intestinal obstruction is early diagnosis. In particular, accurate early recognition of strangulation is crucial because, if ignored, it leads to bowel ischemia, necrosis and perforation, which increase morbidity and mortality significantly. Limitations of this study were the lack of long-term follow-up and the exclusion of individuals <14 years of age.
2019-08-23T16:22:00.697Z
2019-05-25T00:00:00.000
{ "year": 2019, "sha1": "a7ae19647777ce325f7b6a8940cf24ef561cace6", "oa_license": "CCBY", "oa_url": "https://aijournals.com/index.php/ajs/article/download/781/584", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "86355c93ec33617f385364ac208463ebbecd1a5c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Sociology" ] }
221997740
pes2o/s2orc
v3-fos-license
Coronavirus Disease Model to Inform Transmission-Reducing Measures and Health System Preparedness, Australia The ability of health systems to cope with coronavirus disease (COVID-19) cases is of major concern. In preparation, we used clinical pathway models to estimate healthcare requirements for COVID-19 patients in the context of broader public health measures in Australia. An age- and risk-stratified transmission model of COVID-19 demonstrated that an unmitigated epidemic would dramatically exceed the capacity of the health system of Australia over a prolonged period. Case isolation and contact quarantine alone are insufficient to constrain healthcare needs within feasible levels of expansion of health sector capacity. Overlaid social restrictions must be applied over the course of the epidemic to ensure systems do not become overwhelmed and essential health sector functions, including care of COVID-19 patients, can be maintained. Attention to the full pathway of clinical care is needed, along with ongoing strengthening of capacity.

As of April 14, 2020, a total of 6,366 cases and 61 deaths had been reported in the country (9). We report on the use of a clinical care pathways model that represents the national capacity of the health system of Australia. This framework initially was developed for influenza pandemic preparedness (10) and has been modified to estimate healthcare requirements for COVID-19 patients and inform needed service expansion. The ability of different sectors to meet anticipated demand was assessed by modeling plausible COVID-19 epidemic scenarios, overlaid on available capacity and models of patient flow and care delivery. An unmitigated outbreak is anticipated to completely overwhelm the healthcare system in Australia. Given realistic limits on capacity expansion, these models have made the case for ongoing case-targeted measures, combined with broader social restrictions, to reduce transmission and flatten the curve of the local epidemic to preserve health sector continuity.

Disease Transmission Model We developed an age- and risk-stratified transmission model of COVID-19 infection based on a susceptible-exposed-infected-recovered (SEIR) paradigm (Appendix, https://wwwnc.cdc.gov/EID/article/26/12/20-2530-App1.pdf). Transmission parameters were based on information synthesis from multiple sources, with an assumed basic reproduction number (R0) of 2.53 and a doubling time of 6.4 days (Table 1). Potential for presymptomatic transmission was assumed to be <48 hours before symptom onset.
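The stratified model itself is specified in the Appendix; as a rough illustration of the SEIR paradigm it builds on, the sketch below integrates a minimal, unstratified SEIR system using the headline R0 of 2.53 quoted here. The latent and infectious periods are assumptions chosen only so that the implied doubling time lands near the quoted 6.4 days; they are not the calibrated parameters of the published model, and the sketch ignores age and risk stratification, interventions and clinical pathways.

```python
# Minimal, unstratified SEIR sketch of the transmission paradigm described above.
# R0 is taken from the text; latent and infectious periods are illustrative
# assumptions, not the calibrated values used in the published model.
import numpy as np
from scipy.integrate import solve_ivp

R0 = 2.53
LATENT_PERIOD = 4.0      # days in the exposed state (assumed)
INFECTIOUS_PERIOD = 7.0  # days infectious (assumed)
sigma = 1.0 / LATENT_PERIOD
gamma = 1.0 / INFECTIOUS_PERIOD
beta = R0 * gamma        # transmission rate implied by R0 in this simple structure

def seir(t, y, beta, sigma, gamma):
    s, e, i, r = y
    new_infections = beta * s * i
    return [-new_infections, new_infections - sigma * e, sigma * e - gamma * i, gamma * i]

y0 = [1.0 - 1e-6, 0.0, 1e-6, 0.0]  # compartments as proportions of the population
sol = solve_ivp(seir, (0.0, 400.0), y0, args=(beta, sigma, gamma),
                t_eval=np.arange(0, 400), rtol=1e-8, atol=1e-10)

# Crude doubling-time check over the early exponential-growth phase (days 20-80).
mask = (sol.t >= 20) & (sol.t <= 80)
growth_rate = np.polyfit(sol.t[mask], np.log(sol.y[2][mask]), 1)[0]
print(f"implied early doubling time: {np.log(2) / growth_rate:.1f} days")
print(f"final attack rate, unmitigated: {sol.y[3][-1]:.1%}")
```

With these assumed periods the early doubling time comes out close to the 6.4 days quoted in the text, and the final attack rate of an uncontrolled epidemic approaches 90%, which is why the unmitigated scenario overwhelms clinical capacity in the results that follow.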
Despite an increasing body of evidence regarding the critical care requirements of hospitalized patients, considerable uncertainty remains regarding the full pyramid of mild and moderately symptomatic disease. Therefore, we simulated a range of scenarios by using Latin hypercube sampling from distributions in which the proportion of all infections severe enough to require hospitalization ranged from 4.3%-8.6%. These totals represent the aggregate of strongly age-skewed parameter assumptions (Table 2). For each scenario, corresponding distributions of mild cases being seen by primary care were sampled, ranging from 30%-45% at the lower range of the severe spectrum to 50%-75% for the most extreme cases and increasing linearly between the 2 ranges. Persons not seeking care in the healthcare system were assumed to be undetected cases, without differentiation between those with mild or no symptoms.

Case-Targeted Interventions We simulated a case-targeted public health intervention. Cases were isolated at the point of diagnosis. We assumed isolation occurred 48 hours after symptom onset, limiting the effective infectious period and reducing infectiousness from the point of identification by 80%, allowing for imperfect implementation. Targeted quarantine of close contacts was implemented in the model framework by dynamic assignment of a transient "contact" label. Each time a new infectious case appears in the model, a fixed number of temporary contacts are labeled. Only contacts can progress through the exposed and infectious states; however, most remain uninfected and return to their original noncontact status within 72 hours. We assumed that 80% of identified contacts adhered to quarantine measures and that the overall infectiousness of truly exposed and infected contacts was halved by quarantine, given delayed and imperfect contact tracing and the risk for transmission to household members.

Clinical Pathways Model At baseline of our clinical pathways model, we assume that half of available consulting and admission capacity across all healthcare sectors and services is available to COVID-19 patients. Mild cases are seen in primary care until capacity is exceeded. Severe cases access the hospital system through an ED and are triaged to a ward or ICU bed, if available, according to need. Requirements for critical care are assumed to increase steeply with age, with the consequence that >60% of all infections requiring ICU admission occur in persons >70 years of age (Table 2). As ward beds reach capacity, the ability of EDs to adequately assess patients is reduced because of bed block, meaning that not all patients who need care are medically assessed, although some will still be able to access primary care. We assume that secondary infections are not affected by a person's access to clinical care. The model allows for repeat patient visits within and between primary care and hospital services, and progression from ward to intensive care, with length of stay (Figure 1; Table 2). [Figure note: the more severe epidemic is more delayed by public health interventions because a higher proportion of cases seek medical attention; in a milder event, persons with cases not seeking medical care continue to transmit in the community; this finding is contingent on the public health response capacity. ICU, intensive care unit.] The model structure and assumptions are based on publicly
available data on the healthcare system of Australia and expert elicitation (Appendix). Critical Care Capacity Expansion The baseline assumption in our model was that half of currently available ICU beds would be available to COVID-19 patients. We considered 3 capacity expansion scenarios, assuming routine models of care for patient triage and assessment within the hospital system: total ICU capacity expansion to 150% of baseline, doubling the number of beds available to treat COVID-19 patients (2× ICU capacity); total ICU capacity expansion to 200% of baseline, tripling the number of beds available to treat COVID-19 patients (3× ICU capacity); or total ICU capacity expansion to 300% of baseline, increasing by 5-fold the number of beds available to treat COVID-19 patients (5× ICU capacity). We also considered a theoretical alternative clinical pathway, COVID-19 clinics, which had constraints on bed numbers but double the capacity to assess severe cases in hospitals. The purpose of including this pathway was to reveal unmet clinical needs arising when bed block constrains ED triage capacity, potentially preventing needed admissions to the ICU. Social Distancing Interventions Broad -based social distancing measures overcome ongoing opportunities for transmission arising from imperfect ascertainment of all cases and contacts, and from presymptomatic and asymptomatic persons. In settings where nonpharmaceutical social interventions have been applied, associated case-targeted measures also have been in place, making the effectiveness of each difficult to quantify (19). Data from Hong Kong showing a reduction in influenza incidence arising from a combination of distancing measures introduced in response to COVID-19 provides good evidence of generalized transmission reduction (20). However, the relative quantitative contributions of different interventions, such as canceling mass gatherings, working remotely, closing schools, and ceasing nonessential services, cannot be differentiated reliably at this time (18). Therefore, we focused on the overall objective of distancing, which is to reduce the reproduction number. We modeled the effect of constraining spread by 25% and 33%, overlaid on existing case-targeted interventions, which is consistent with observed impacts of combined measures less restrictive than total lockdown (18). These reductions in transmission equated to input reproduction numbers of 1.90 at 25% and 1.69 at 33%; the effective reproduction number in each scenario further was reduced by quarantine and isolation measures, which limit spread of established infection. Results According to our model, an unmitigated COVID-19 epidemic would dramatically exceed the capacity of the health system of Australia over a prolonged period (Figure 2). Case isolation and contact quarantine applied at the same level of effective coverage throughout the epidemic have the potential to substantially reduce transmission. By flattening the curve, these measures produce a prolonged epidemic with lower peak incidence and fewer overall infections ( Figure 2). Epidemic scenarios with higher assumed severity, such as a 95th percentile case, are more effectively delayed by these public health measures than less severe scenarios, such as a 50th percentile case, because a higher proportion of all cases are seen by health services and can be identified for isolation and contact tracing. 
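As a quick check on the distancing arithmetic described above, scaling the basic reproduction number by the modeled reductions reproduces the quoted input values (1.90 for a 25% reduction and 1.69 when the 33% case is read as a one-third reduction). The fragment below also applies the standard homogeneous-mixing final-size relation as a crude indicator of how much such reductions shrink an otherwise uncontrolled epidemic; it is only indicative and does not reflect the stratified model, case-targeted measures or clinical constraints described in this article.

```python
# Effective input reproduction numbers under the modeled transmission reductions,
# plus the final-size approximation z = 1 - exp(-R * z) as a crude indicator of
# the resulting attack rate. Homogeneous mixing is assumed; this is not the
# stratified transmission model used in the article.
from math import exp

R0 = 2.53

def final_size(r: float, tol: float = 1e-10) -> float:
    """Solve z = 1 - exp(-r * z) for the attack rate by fixed-point iteration."""
    z = 0.9
    while True:
        z_next = 1.0 - exp(-r * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next

# 0.33 is treated as a one-third reduction, which matches the quoted value of 1.69.
for reduction in (0.0, 0.25, 1 / 3):
    r_input = R0 * (1.0 - reduction)
    print(f"{reduction:>4.0%} reduction: input R = {r_input:.2f}, "
          f"approx. attack rate = {final_size(r_input):.0%}")
```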
In a mitigated epidemic, overall use of the health system is increased because more patients are able to access needed care over the extended epidemic duration (Appendix Figure 3, panel A). Increasing the number of ICU beds available to patients with COVID-19 reduces the time over which ICU capacity is anticipated to be exceeded, potentially by more than half (Figure 3). The duration of exceedance for each capacity scenario is increased by quarantine and isolation because the overall epidemic is longer (Figure 3). During the period of exceedance, a degree of unmet need remains, even for the mitigated scenario (Figure 4). A 5-fold increase in the number of ICU beds available to patients with COVID-19 dramatically reduces the period and peak of excess demand (Figures 3, 4). These figures do not accurately reflect the true requirement for services, however, because blocks in assessment pathways resulting from ED and ward overload are an upstream constraint on incident ICU admissions. The alternative triage scenario, the COVID-19 clinic, reveals a high level of unmet clinical need for both ward and critical care beds given baseline bed capacity (Figures 3, 4). Case-targeted measures overcame this limitation to some extent and effectively improved overall access to care (Figures 3, 4). Overall, if ICU beds available to COVID-19 patients are doubled, 10%-30% of those who require critical care receive it. The proportion rises to >20%-40% if capacity increases by 5-fold (Appendix Figure 3). These figures are quantified as total excess demand per million over the course of the epidemic (Appendix Figure 4). Our simulated scenarios show that case isolation and contact quarantine alone will be insufficient to keep clinical requirements of COVID-19 cases within plausibly achievable expansion of health system capacity, even if very high and likely unrealistic levels of case finding can be maintained. We therefore explored the effects of additional social distancing measures that reduced input reproduction numbers by 25% and 33% on ICU requirements in relation to the same clinical care capacity constraints (Figure 5). Simulations assume ongoing application of measures of fixed effectiveness, which is also unlikely to be consistently achievable over an extended duration. The overlay of distancing measures, applied from the initial stages of the epidemic and maintained throughout, suppresses epidemic growth to a level that is within the range of plausible ICU capacity expansion. The duration of ICU exceedance remains long in the 25% case (Figure 6), but this overflow occurs to a far lesser degree than following case-targeted strategies only (Figure 7). As anticipated, a 33% reduction in transmission achieves greater benefits. Of note, pressure on ED consultations and ward beds also is eased substantially in these scenarios, maintaining capacity along the full pathway of care. As a result, the proportion of critical cases that can access care is greatly increased. Transmission reduction of 33% makes treatment for all cases achievable in most simulations if 3- to 5-fold ICU bed capacity can be achieved (Appendix Figure 3, panel B). This improvement is reflected in a large reduction in unmet need (Appendix Figure 4, panel B).

Discussion This modeling study shows that an unmitigated COVID-19 epidemic would rapidly overwhelm Australia's health sector capacity.
Case-targeted measures including isolation of those known to be infected, and quarantine of their close contacts, must remain an ongoing cornerstone of the public health response. These interventions effectively reduce transmission but are unlikely to be maintained throughout the epidemic course at the high coverage modeled here. As public health response capacity is exceeded, greater constraint of disease spread will be essential to ensure that feasible levels of expansion in available health- care can maintain ongoing system functions, including care of COVID-19 patients. Broader based social and physical distancing measures reduce the number of potential contacts made by each case, minimizing public health workload and supporting sustainable case-targeted disease control efforts. Our findings are consistent with a recently published model (21) that relates the clinical burden of COVID-19 cases to global health sector capacity, characterized at a high level. In unmitigated epidemics, demand rapidly outstrips supply, even in highincome settings, by a factor of 7 (21). Because hospital bed capacity is strongly correlated with income, this factor is greatly increased in low-and middle-income countries where underlying health status likely is poorer (21). Globally, marked variability in the definition of intensive care is observed, even in highincome countries where the descriptor covers many levels of ventilatory and other support. We concur with our conclusion that social distancing measures to suppress disease are required to save lives. In addition, we acknowledge that the marked social and economic consequences of such measures will limit their ongoing application, particularly in the settings where health systems are least able to cope with disease burden (21). Much attention has been focused on expansion of available ICU beds per se, but our clinical model reveals that critical care admissions are further limited by the ability to adequately assess patients during times of system stress. In line with model recommendations, Australia, along with other countries, has implemented COVID-19 clinics as an initial assessment pathway to reduce impacts on primary care and ED services (22). Such facilities have additional benefits of ensuring appropriate testing, aligning local case definitions, and reducing the overall consumption of personal protective equipment by cohorting likely infectious patients. Evidence of bottlenecks as the epidemic progresses indicates that other measures to improve patient flows also should be considered, such as overflow expansion in EDs, encouraging and supporting home-based care, or early discharge to supported isolation facilities. Quantitative findings from our model are limited by ongoing uncertainties about the true disease pyramid for COVID-19 and a lack of nuanced information about determinants of severe disease, which we represented by age as a best proxy. The clinical pathways model assumes that half of available bed capacity is available for patients with the disease but does not anticipate the seasonal surge in influenza admissions that might be overlaid with the epidemic peak, although even in our most recent severe season, 2017, only 6% of hospital beds were occupied by influenza cases (23). Available beds will likely be increased by other factors, such as secondary reductions in all respiratory infections and road trauma resulting from social restrictions, and purposive decisions to cancel nonessential surgery. 
Of note, we did not consider healthcare worker absenteeism due to illness, caregiving responsibilities, or burnout, all of which are anticipated challenges over a very prolonged epidemic accompanied by marked social disruption. We also cannot account for shortages in critical medical supplies because the true extent of these and their likely future impacts on service provision are currently unknown. Our model indicates that a combination of case-targeted and social measures will need to be applied over an extended period to reduce the rate of epidemic growth. In reality, the stringency of imposed controls, their public acceptability, and compliance likely will all vary over time. In Australia, compliance with isolation and self-quarantining was largely on the basis of trust in the early response during February-March, but active monitoring and enforcement of these public health measures is now occurring in many jurisdictions. Hong Kong and Singapore initiated electronic monitoring technologies from the outset to track the location of persons and enforce compliance (24). Proxy indicators of compliance, such as transport and mobile phone data, have informed understanding of the effect of social and movement restrictions on mobility and behavior in other settings (19), and will be further investigated in the context of Australia. The effectiveness of multiple distancing measures, including lockdown, has been demonstrated in Europe, but the contributions of individual measures cannot yet be reliably differentiated (18). [Figure 7 caption: Estimated peak excess demand for healthcare sector services, expressed as percent of available capacity, compared with quarantine and isolation scenarios during the COVID-19 epidemic, Australia. The graphs compare exceedance for COVID-19 admissions for A) ICU beds; B) hospital ward beds; C) emergency departments; and D) general practitioner services at baseline, 2×, 3×, and 5× ICU capacity. Blue lines indicate quarantine and isolation only scenarios; green lines indicate overlaid social distancing measures that reduce transmission by an additional 25%; and purple lines indicate overlaid social distancing measures that reduce transmission by an additional 33%. The COVID-19 clinics scenario reflects an alternative triage pathway and baseline capacity. Dots denote the median; lines range from 5th-95th percentiles of simulations. COVID-19, coronavirus disease; ICU, intensive care unit.] The effect of local measures to curb transmission will be estimated from real-time data on epidemic growth in Australia, on the basis of multiple epidemiologic and clinical data streams. Estimates of the local effective reproduction number will enable forecasting of epidemic trajectories (25) to be fed into our analysis pathway. Anticipated case numbers will be used to assess the ability to remain within health system capacity represented by the clinical pathways model, given current levels of social intervention. Such evidence will support strengthening and, when appropriate, cautious relaxation of distancing measures. Further work will examine the effects of varying the intensity of measures over time, to inform the necessary conditions that would enable exit strategies from current stringent lockdown conditions to ensure maintenance of social and economic functioning over an extended time. All these strategies, which combine to flatten the curve, will buy time for further health system strengthening and sourcing of needed supplies.
Protecting the health and wellbeing of healthcare workers will be essential to ensure ongoing service provision. ICU capacity will need to be increased several-fold in anticipation of the looming rise in cases. Multiple challenges must be overcome along the path to delivering safe and effective COVID-19 vaccines, and the timeframe for availability is highly uncertain (26). The search for effective therapies continues. Therefore, reducing COVID-19 illness and death relies on broadly applied public health measures to interrupt overall transmission, protect vulnerable groups, and maintain and strengthen the capacity of healthcare systems and workers to manage cases.
2020-09-28T23:05:12.827Z
2020-09-28T00:00:00.000
{ "year": 2020, "sha1": "f99e4a03dc9e830ae3bf3eab1ce560aa6212649d", "oa_license": "CCBY", "oa_url": "https://wwwnc.cdc.gov/eid/article/26/12/pdfs/20-2530.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed59f2782826f2023778b202efb388cb5b342457", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Business" ] }
234706587
pes2o/s2orc
v3-fos-license
Behavioral study of urban watersheds in Bhopal-city of lakes Urbanization has crossed the limits of natural carrying capacity, challenging mankind and its development in terms of progress. The most notable changes observed in the natural system are related to the urban and hydrological systems, where built-up areas in urban regions increased from 100,000 km² in 1994 to 5,000,000 km² in 2005. It is assumed that almost 0.5% of the world's surface is occupied by urban areas. The aim of the study is to examine the behavioural changes in urban watersheds with respect to recharge and runoff as built-up areas increase and impervious surfaces replace natural ones. The objective is to develop a better understanding of the interactions between surface water flows and water replenishment under changes in land cover characteristics resulting from urbanization at the local, neighbourhood and regional scales. Another objective is to find the relationship between built-up area and water (surface and subsurface) through empirical, observational and simulation processes for an area with specific climatic and physical characteristics. The methodology adopted to study this correlation broadly consists of observing, for subwatersheds of around 500 or more hectares, variations in urban expansion, changes in land use and land cover, and hydrological components such as water levels in aquifers and wells, runoff and drainages, from past to present at a temporal scale of about 40 years. Conclusions were drawn from the final modelling results, with validation and assessment of parameters, showing that runoff is directly proportional to built-up area and that its intensity varies with land cover roughness, built-up and groundwater infiltration.

Introduction Urbanization is defined as developing natural land into an engineered and landscaped one to make it suitable for urban life. The world is urbanizing at a faster pace, with almost 149% excess built-up for the present population and 55 to 60% of the population residing in urban areas (GRUMP).1 The alteration of natural features is reflected in the responses of these parameters relative to the previous natural cycle. Almost all processes in a cycle are linked to each other. The processes of evaporation and transpiration (evapotranspiration) are closely linked to the water found in soil moisture;2 these processes act as driving forces on water transferred in the hydrological cycle.3 Evapotranspiration rates depend on many locally specific parameters and variables that are difficult to measure and require demanding analyses to calculate to an acceptable level of accuracy. Evaporation from surface water bodies such as lakes, rivers, wetlands and reservoirs is also an important component of the hydrological cycle and integral to basin development and regional water management in urban watersheds. Near-surface soil moisture content strongly influences whether precipitation and irrigation waters either run off to surface water bodies or infiltrate into the soil column. Regionally, mapping soil moisture deficit is becoming a widely used technique to link climatological and hydrological information in agriculture4 and to reflect drought conditions.5,6 Soil and water conservation is an integral part of watershed management.
Although watershed management was formerly considered nearly synonymous with soil and water conservation, it goes far beyond it today, comprising a variety of further activities that become associated with it in urban areas, which contain both man-made and natural structures (e.g., buildings, drainages, roads, parking areas). Urban watersheds find it difficult to function naturally in modified environments, with soil and water eventually disturbed by human activities and changing land covers. Varying precipitation and population pressure add to the loss of the basic soil and water cycle in a watershed. Permissible and exceeded percentage values of built-up area, and the resulting runoff over different soils and overlying land cover roughness, are physical planning features that affect water conservation and runoff from the colony level to the ward level. Since there is presently no definition of, or limits on, the carrying capacity of natural cycles, the amount of built-up area imposed cannot be restricted; however, the impacts of the present built-up area on the soil and the resulting recharge and runoff reveal that built-up area has already exceeded 65% of the natural carrying capacity and the permissible limits have been crossed for some watersheds in the study area. Already, 20 to 40% of the natural working cycle is affected due to urbanization of watersheds without proper urban planning regulations. With these considerations, and with observations of water levels in the study area over a range of built-up induction over natural surfaces, the paper attempts to analyse the quantum of impact and proposes physical planning solutions that help in reducing the impacts of urban features on the natural cycle. A set of 19 catchments with different soils and land covers is studied, along with existing regulations and the resulting impacts on soil and water. The methodology consists of analyzing the built-up area, modification of natural drainages, and water levels for the last 40 years, and the application of physical planning laws to the area.

Study area The study area consists of subcatchments as shown in Figure 2, the catchment map of Bhopal city, India, having three different types of geology as shown in Map no. 3 and Appendix A. The purpose of selecting subcatchments with different soils was to correlate the built-up area and resulting runoff with the prevailing soil characteristics and the roughness values for different land covers. Hence, for the analysis procedure, the subcatchments were divided into three groups with their respective soils, and a fourth group is considered for impervious soil, which is present in all catchments in some form or other. There were 5 catchments initially, which were again subdivided into smaller catchments so that detailed observations could be made for runoff within the catchments and runoff within the whole study area. There are now a total of 19 catchments, having characteristics as per Table 20. These 19 catchments are categorized as per their porosity, water holding capacity, slope and impervious surfaces, along with open spaces and drainages. The colonies falling in the catchments were identified for further plot- and colony-level observations of runoff and recharge. These details were transferred into the other software for modeling purposes. Figure 4 depicts the changes in built-up area in the study area's urban watersheds from 1971 to 2013. After classification of the land covers as per geology, they were again separated into two categories, i.e., impervious and pervious.
After dividing them under these two heads, each land cover was assigned a Manning's n for its type and category, as specified by the USGS, for SWMM modelling to obtain runoff, infiltration, evaporation and evapotranspiration at the given rainfall, soil properties and contours. Accordingly, a basic model of the study area in Bhopal city was built in ArcGIS by integrating different layers for different time periods. From the precipitation data obtained from the meteorological department and the reports of the CWC (1988) and WRD (2008), the normal rainfall series and the critical rainfall series were decided. The rainfall data were available on an hourly as well as a monthly basis for the monsoon period. AutoCAD Civil 3D software was used to delineate sub-catchments on the basis of water-drop flow paths and the contours of the Google surface, to facilitate analysis of surface flows.

Analysis and conclusion

Thus a whole set of parameters was assembled to set up a model in which simulations of runoff, peak runoff and related quantities could be observed. The two simulation methods used were: (i) keeping precipitation, slope and width constant for the respective years; and (ii) changing precipitation on an hourly and daily basis to observe long-term runoff changes and short-term peak flow changes. The runoff changes for constant precipitation with decadal built-up were analysed first. The combinations of inter-location and intra-location runoff were used to form the correlation equation and its constants (y = ax² + bx + c, where y is runoff and x is built-up). The correlation observations for each catchment in each decade were plotted to examine the trend line and the resulting equation. To calibrate the hydrologic model, the LULC data were prepared on an 8 m grid. A time series was built from hourly rainfall events. Finally, the simulation was run for different years according to the conditions prevailing in each period, and the continuity and flow-routing errors were minimized to obtain accurate results. The resulting rainfall-runoff report was then compared against the observations of varying precipitation and built-up area to assess changes in the parameters. Correlation analysis was performed and a best-fit method was used to minimize the error value. The correlation thus formed was examined for each catchment individually (inter-location) and then against others with the same characteristics across the study area (intra-location).

Basalt

The increase in built-up is 42.75% and the simultaneous increase in runoff is 56.63%. The increase in runoff is gradual until 1991-2001, but a sudden increase is observed after 1991, from 2.01% to 45%. In another catchment, built-up is observed to increase from 46.78% to 100% and runoff from 1.411% to 73.02%. Although initial built-up in this catchment was 46.78%, the runoff generated was only 1.4%, confirming that the predevelopment stage had conserved nature at its best. There is a sudden rise in runoff from 13.26% to 50.04% between 1991 and 2001, the decade in which Bhopal city urbanized rapidly as the Master Plan doubled its planning area. The rate of runoff growth is found to be greater than that of built-up in the last 10 years. The correlation coefficient is 0.869 and the coefficient of determination R² is 0.912.

Alluvium

This catchment has a part with the geological characteristics of alluvial soil and has experienced an increase in built-up from 2.36% to 87.18% and in runoff from 0.18% to 64.77%.
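To make the correlation step concrete, the sketch below fits the quadratic form y = ax² + bx + c to built-up versus runoff observations and reports R²; the five data points are illustrative placeholders, not the paper's measurements.

import numpy as np

# Decadal observations for one catchment: built-up % (x) vs. runoff % (y).
builtup = np.array([2.4, 18.0, 46.8, 71.3, 87.2])
runoff = np.array([0.2, 4.5, 13.3, 50.0, 64.8])

# Fit the paper's correlation form y = a*x^2 + b*x + c.
a, b, c = np.polyfit(builtup, runoff, deg=2)

# Coefficient of determination R^2 for the fitted curve.
pred = np.polyval([a, b, c], builtup)
ss_res = np.sum((runoff - pred) ** 2)
ss_tot = np.sum((runoff - runoff.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"y = {a:.4f}x^2 + {b:.4f}x + {c:.4f}, R^2 = {r2:.3f}")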
The roughness values for this catchment showed a variation for the impervious surface from 0.012 to 0.011 and for the pervious surface from 0.13 to 0.05 between the predevelopment and developed stages, as most of the natural area was made impervious, though some natural parks and open spaces are still preserved in their natural state. The recharge potential of this area is very high and has maintained satisfactory water levels in all seasons. The correlation coefficient is 0.915 and the coefficient of determination R² = 0.906 (Figure 2). Another catchment developed only after 1991 and showed no sign of built-up in the initial stages, with 0.121% runoff. After 1991 urbanization took place steadily, built-up reached 92.75% and the resulting runoff reached 75.86%, the maximum in the study area. This catchment has the most commercial activity along its main roads and open spaces. Runoff showed no initial rise and kept a low profile until 1991; then a sudden 54% increase was observed for a 30% increase in built-up. The correlation coefficient is 0.906 and the coefficient of determination R² = 0.860. A further catchment has a water body covering around 50% of its area and experienced an increase in built-up from 0.49% to 46.25%, with runoff rising from 0.0031% to 5.14%. The commercial activities in and around the water body have disturbed the peak flows for this area, yet overall runoff remains very low, which greatly helps the recharge potential. Water levels in this area show satisfactory recharge. The correlation coefficient is 0.749 and the coefficient of determination R² = 0.830.

Sandstone

This catchment experienced an increase in built-up from 7.11% to 82.71% and in runoff from 7.3% to 60.17% over the period. The observations reveal that built-up and runoff went hand in hand. Runoff from this catchment flows towards catchment S6. In the middle of the period there is a slight fall in runoff from 1991 onwards. The correlation coefficient is 0.97 and the coefficient of determination R² = 0.817 (Figure 3). In another catchment, built-up increased from 0 to 51.87% and runoff from 0.15% to 35.16%. The catchment had no urbanization until 1996 and was basically a green belt; the land use was changed in the Master Plan and honeycomb-like colonies with very small plots and colony sizes flourished dramatically in this area. Some colonies are observed to have only 8 plots. This haphazard honeycomb development left no space for the proposed city parks or green areas for nurseries. The ribbon-like development interrupts the surface flows from the dam to the lake, hence peak flows and flooding are observed during intense rainfall events, and some conduits show instability at higher rainfall. The catchment is 45% residential and 47% natural area. It is basically a slum area outside the city municipal limits, with kaccha houses and little R.C.C. construction near the paths of water flows, now turning into developing colonies over the hilltops that are rapidly converting hills to flat mounds by excessive excavation of rock. It also contains a natural park along its natural drains in one part. The increase in built-up was gradual, but being a peri-urban area it did not affect runoff until 1995, showing only a weak relationship until then. Runoff increased after 2001. The correlation coefficient is 0.68 and the coefficient of determination R² = 0.867. The runoff patterns observed in Figures 5 to 13 indicate a particular response of the hydrological system during urbanization: as urban areas increased, runoff increased simultaneously. The same response was observed in most of the catchments.
Some catchments, however, responded differently: their runoff was not in linear correlation with built-up, instead remaining well below the built-up share where other catchments showed higher runoff for the same built-up, as shown in Chart 27, e.g., catchments BSHC-S1 and BSHC-S6 (high runoff %) versus BOBC-S1, BLPC-S1, BSHC-S3 and BSHC-S2 (low runoff %). The variation in this behaviour was evidently related to components of the two systems responding differently to the same situations. To find these components it was necessary to analyse each catchment individually, with its physical characteristics and the components reacting to variations in one another. Hence a detailed study of each catchment and its characteristics, such as geology, physiography, drainage, slope, built-up, impervious layer, pervious layer and increasing built-up pressure, was undertaken. The method adopted was the same as before, except that impervious and pervious surfaces were assigned roughness values according to the existing land covers of the catchments. The simulation was run with constant precipitation for the six months of the monsoon season, with built-up increased decade by decade (a minimal example of such a run is sketched at the end of this section).

Analysis

Results obtained from the simulation included variations in components such as evaporation loss, infiltration loss, surface runoff, final surface storage, dry-weather flow and wet-weather flow. The most affected components were runoff and infiltration. Runoff was therefore selected first for further analysis, so that a correlation between runoff and built-up could be established. Later, infiltration, in terms of storage volume and groundwater level, was also compared. [9][10][11]

Results and discussion

Catchments with basalt soil performed moderately under built-up for a range of 65 to 70% impervious land cover. Sandstone responded well to built-up of up to 70 to 80% with a supporting land cover of grass and bushes; infiltration is very good even at high-intensity rainfall. Alluvial soil is best at storing and recharging rainwater if treated with natural soil, dense bushes, connected pervious pathways and small alternating portions of pervious and impervious land cover that slow the flow of water so that it infiltrates deep into the soil. When the roughness values, land cover and geology are compared together with the runoff pattern, it is clear that alluvial soil under dense bushes and a natural soil cover assures the best working of the natural cycle. Since urbanization does not support this combination everywhere, it can be planned in some pockets of the urban area that have alluvial soil as the geological base. Similarly, sandstone is good at quickly infiltrating runoff water with the help of grass and medium vegetation land cover. Basalt shows some scope for infiltration if the land cover imposed on it is dense bush or forest; hence city parks and natural drainage channels on a basalt base should be landscaped with dense bushes. 12 Land cover with scanty grass and concrete does not support infiltration to a satisfactory level, and development should therefore be proposed on such pockets with an impervious base and the least grass or natural cover. These observations help to decide the planning perspective based on geology and land cover. 13,14
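As a minimal sketch of the simulation workflow described in this section, the snippet below steps a SWMM model with the pyswmm wrapper and tracks peak runoff per sub-catchment; the input file name is hypothetical, and the model itself (sub-catchments, Manning's n values, rainfall series) is assumed to have been prepared as described above.

from pyswmm import Simulation, Subcatchments

# Hypothetical SWMM input file exported from the study-area model.
with Simulation("bhopal_catchments.inp") as sim:
    subs = list(Subcatchments(sim))
    peak = {s.subcatchmentid: 0.0 for s in subs}
    for _ in sim:  # advance the simulation one routing step at a time
        for s in subs:
            peak[s.subcatchmentid] = max(peak[s.subcatchmentid], s.runoff)

# Peak runoff rate observed for each sub-catchment over the run.
print(peak)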
Conclusion

As the discussion of the carrying capacity of natural water resources shows, no definition or limits have yet been set for the carrying capacity of natural cycles, but an impact on their natural working can nevertheless be seen. Hence, taking the natural process as the basic cycle, with its proportions of components, the permissible limits can be ascertained up to the point where natural processes become hampered by human activities. On this basis, at 20 to 40% above the natural cycle, the performance of the components is affected, depending on other climatic and anthropogenic features. With these considerations and the observations of water levels in the study area, a range of built-up intrusion over natural surfaces was finalized that was found to lie within the limits of natural processes.

III. Alluvial soil has very good water-holding capacity but only slow runoff; these areas therefore need to intercept runoff at many places. Every plot or parcel of land should have some percentage reserved as a recharge pit, so that immediate runoff is delayed and is then intercepted by natural ponding areas, known in India as recharge zones (khanti), where an artificial pond is made to collect the excess water.

IV. Almost all catchments supported built-up of 45 to 50% with only marginal variations in the runoff-recharge balance of the water cycle. Proper urban planning is therefore needed when inducing more built-up on the same area, so that the working of the natural cycle is not hindered and both systems work simultaneously. To this end many policies are being adopted, such as sustainable urban drainage (SUD), water-sensitive urban design, catching water where it falls and artificial water-recharging units; apart from these, proper land-use planning is the most important measure, helping to manage runoff and recharge by natural means and making urban planning more organized and water-sensitive.
Fasciolopsis buski Vomited Out by a Child: The First Case Reported from Nepal

Live adult worms of Fasciolopsis buski are rarely seen in humans except at autopsy. Only a few such cases have been reported in the world literature. We report a case of fasciolopsiasis in a 14-month-old child who coughed out a live adult Fasciolopsis buski after administration of an antihelminthic drug. The patient was a resident of the Terai (Far Western) region of Nepal and had a history of travelling to India. This is the first case of fasciolopsiasis reported from Nepal.

Introduction

Fasciolopsiasis is a gastrointestinal infestation by a trematode, Fasciolopsis buski, mainly involving the duodenum and jejunum. The fluke is the largest intestinal fluke parasitizing humans and was first noted by Busk in 1843 in the duodenum of a deceased Indian sailor. Fasciolopsiasis is prevalent in various parts of South East Asia, including the neighboring countries China and India. Infections by Fasciolopsis buski are common in impoverished countries where proper sanitation systems are lacking [1]. The disease occurs through ingestion of encysted metacercariae on aquatic vegetation or directly from water [2]. Mostly the infection is asymptomatic, but in severe infection the common symptoms are abdominal pain, diarrhea, low-grade fever, toxemia, allergy, anemia, ascites, generalized edema and obstruction of the intestine, sometimes leading to death [1,3]. Diagnosis is made by detection of eggs in stool, but differentiation between Fasciolopsis buski and Fasciola hepatica is very difficult on routine stool examination [1]. Here we report a case of fasciolopsiasis in a 14-month-old child who coughed out a live adult Fasciolopsis buski after administration of an antihelminthic drug. This is the first case of fasciolopsiasis reported from Nepal.

Case report

A 14-month-old male child attended the outpatient department with the chief complaints of diarrhea, vomiting, refusal to eat, fever, irritability and weakness. The child was a resident of the Terai (Far Western) region of Nepal and had a travel history to Lucknow, India, for treatment of a urinary tract infection and follow-up treatment of epilepsy. The patient was under medication (ofloxacin and carbamazepine). He had significant leucocytosis (14,000 cells/mm³) with eosinophilia (12%). Stool routine examination and culture did not reveal any significant findings. Since the patient was already on a broad-spectrum antibiotic for treatment of the urinary tract infection, the possibility of bacteria being the cause of the illness was quite low. The patient was therefore given metronidazole and mebendazole to cover all other possible causes of the diarrhea. Around 12 hours after administration of the medication, the patient vomited out a moving worm of size 30 × 19 mm, leaf-shaped with the anterior end narrower and the posterior end broadly rounded, dorsoventrally flattened, unsegmented and flesh colored (Figure 1). The worm was identified as Fasciolopsis buski on the basis of morphological characteristics such as the lack of a cephalic cone, poorly developed suckers (oral and ventral) and the unbranched ceca. All symptoms subsided after the full course of treatment, and the total leucocyte and eosinophil counts became normal.
Discussion

Fasciolopsiasis is endemic in India, with cases reported mainly from areas including Bihar and Uttar Pradesh, yet no cases have previously been reported from Nepal, which is surprising as those areas are connected with the Terai region of Nepal through open borders and share cultural and geographical similarities [4]. Cases may have been underdiagnosed due to poor health facilities and hence gone unreported. In a country like Nepal, where open defecation around water bodies is common and pigs are kept in close contact with humans, the prevalence of the disease may be alarming, as the habits of eating aquatic vegetation and drinking untreated water are common. A study is therefore necessary to determine the prevalence of Fasciolopsis buski infection, at least in the areas of Nepal that are connected with the high-prevalence areas of India.

The child had a history of drinking water from a pond during his stay in India. There is thus a high chance that the child acquired the infection from the pond water, as no other history involving a risk of infection by Fasciolopsis buski could be elicited. Live adult worms of this parasite are very rarely seen in humans except at autopsy [5]. There are two more reports of live adult worms being vomited out [1,5]. Other cases of live adult worms causing different clinical conditions are reported by Cao et al.

Conclusion

With the reporting of this case it can be concluded that cases of fasciolopsiasis are possible in Nepal, and fasciolopsiasis should be considered as a differential diagnosis in suspected patients with gastrointestinal symptoms.
What is the nature, extent and impact of bullying in surgical settings? Insights of surgeons in Australia and Aotearoa New Zealand

A significant body of literature has examined the impact of verbal and non-verbal bullying in surgical settings, where a central focus has been on the experiences of trainee and junior members of the surgical team, women in surgery and other health professionals, such as nurses. Research on how surgeons perceive or experience bullying is more limited. Therefore, this study aims to investigate the views of surgeons on negative and disrespectful verbal and non-verbal behaviour and bullying in surgical settings, including its impact on surgeons themselves and the surgical staff they oversee.

Introduction

In 2015, the Royal Australasian College of Surgeons (RACS) formed an Expert Advisory Group (EAG) to investigate the prevalence of discrimination, bullying and sexual harassment in surgical settings across Australia and Aotearoa New Zealand. The subsequent survey revealed concerning levels of these behaviours among surgeons and surgical workplaces. 1 The RACS responded promptly by implementing an Action Plan 2 focused on promoting respect, improving patient safety and countering unacceptable behaviours. This included the introduction of mandatory Operating with Respect (OWR) training for supervisory surgeons, committee members and targeted educators. [3][4][5][6][7] While there have been significant efforts to examine the prevalence and impact of bullying behaviour in surgical settings, 8 there is limited research on surgeons' experiences, viewpoints or role in relation to bullying in surgical environments, despite their significant positions of leadership and leverage in these settings. This study aims to explore the current perceptions of surgeons on the nature, extent and impact of bullying in their work settings. 10,11 Other behaviours include withholding pertinent information, unfair assignments, allocating undesirable tasks and indulging in sabotage. 9 Individual surgeons have different perspectives on what constitutes disrespectful and bullying behaviour and on their role in how it should be managed. 9 This is shaped by their seniority, level of experience, gender and surgical speciality, as well as the broader context and dynamics of each surgical setting. 12

Approach

We utilized a qualitative, phenomenological research design to investigate surgeons' lived experiences and perspectives on workplace bullying in surgical settings. This design is suitable for exploring complex and subjective issues. 13 An interpretivist and inductive approach was adopted to account for multiple realities and contexts. Furthermore, we drew upon social identity theory as a theoretical framework to examine how surgeons' perceptions of, and roles in, bullying behaviour are influenced by factors such as their seniority, gender, collective belonging to the RACS, specific surgical specialities and diverse work settings.

Procedure

Surgeons recruited for this study were all enrolled to participate in the RACS mandatory Operating with Respect (OWR) training workshops, which targeted supervisory surgeons, RACS committee members and Foundation Skills for Surgical Educators (FSSE) participants. 2
The workshops were run during 2018 and 2019. Recruitment occurred with the assistance of the RACS, through advertisements in Surgical News. In addition, registered attendees were sent reminders about the workshop they were booked to attend, which included an explanation of this research and a direct invitation to participate. Part A of the invitation was a request to participate in an online pre-test quantitative survey. The quantitative survey included a further invitation, Part B, to participate in a 1:1 interview with the primary researcher. At all stages, consenting provisions were included and confidentiality was assured. Ethical approval for this study was granted by the Science, Health & Engineering College Human Ethics Sub-Committee at La Trobe University in August 2018 (HEC 18308).

Semi-structured interviews were conducted between February and November 2019. Surgeons were asked to focus on the 6-month period prior to their interview to avoid them potentially focusing on memorable negative events in the deeper past. The interview schedule comprised 11 questions designed to explore their direct experiences of bullying in their workplaces, including instances where they were an agent of the negative verbal or non-verbal behaviour or were a witness to the disrespectful behaviour of others.

All interviews were fully transcribed prior to analysis. The interview transcription data were initially analysed for relevant themes by the lead researcher. The two co-authors then undertook an analysis of the semantic themes identified in a shared set of six interview transcripts with the lead researcher. This was designed to establish a foundation for the broader analysis and to ensure rigour and reflexivity. The resultant thematic map was used to guide the analysis and reporting of the interview data.

Data analyses

Interview transcripts were analysed using narrative analysis and Braun and Clarke's methodology for thematic analysis. 13 NVivo software was used to inductively code data related to semantic themes. Thematic discourse analysis was used to examine how individual surgeons derived meaning from their experiences and how their social context influenced that meaning. Data reporting followed the Consolidated Criteria for Reporting Qualitative Studies (COREQ) checklist. 14

Results

Initially, 63 surgeons expressed an interest in being interviewed. After follow-up and the cancellation of some interviews (often due to urgent work matters), a total of 31 interviews were conducted (49.2%), ranging between 22 and 65 min in length (mean 42.03 min).

Nineteen male and 12 female surgeons were interviewed, including 26 Australian and 5 Aotearoa New Zealander respondents. The age range was 37-68 years (mean 51.52 years). Surgical specialties represented included: six urologists; eight paediatric surgeons (four orthopaedic and four general); four ear, nose and throat surgeons; three adult orthopaedic surgeons; three general surgeons; and seven from a variety of specialties including plastics, neurosurgery, vascular and gastrointestinal surgery.

Twenty-seven interviewees worked in multiple surgical workplaces, including major public teaching hospitals and private hospitals or practices. Three respondents worked exclusively in public teaching hospitals and one respondent worked in a private setting only.
Key themes

Following thematic analysis of the interview data, three key themes emerged that addressed the research question on changes in the nature and extent of bullying behaviour and its impact on surgeons and other staff in surgical settings. The themes and subthemes are as follows: verbal bullying (five subthemes), non-verbal bullying (seven subthemes), and impact and outcomes of bullying (six subthemes).

Verbal bullying

Verbal bullying was commonly reported by respondents and is summarized in Table 1.

Respondents, primarily in public teaching settings, reported disrespectful communication and nit-picking as common in surgical workplaces. Such criticism was perceived as intense and frequent enough to constitute a pattern of bullying. Verbal undermining and humiliation of colleagues in meetings was commonly reported in some public teaching settings, and in a smaller number of private hospitals, and appears to be an increasingly common way in which known 'bullies' dominate others.

Although raised voices and yelling were noted in stressful situations, they were generally not associated with bullying. In situations of escalated clinical risk, surgeons took charge and acknowledged using a command-and-control communication style. The surgeons' communication was often considered contextually necessary, and thus forgiven.

Non-verbal bullying

The throwing of instruments or physical assault of theatre staff was reported to be highly unlikely by surgeons across specialties and settings. By contrast, manipulative and undermining behaviour was more commonly reported across contemporary surgical settings. This is summarized in Table 2.

Environments reported to be more prone to bullying were associated with factors such as local leadership, level of oversight, governance and complaint-reporting processes. Moreover, respondents in rural and remote health settings reported more serious cases of bullying.

Both male and female respondents commented that male surgeons exhibited more overt and dominant forms of negative behaviour. Challenges varied from ensuring gender balance on conference panels to acknowledging the realities of balancing professional, family and carer responsibilities. Despite most female respondents noting the toll 'gender' issues had taken on them through their careers, they recognized that seniority afforded them greater agency to manage moments when gender issues arose for themselves or other female staff.

Another notable finding in the data was 'bullies' reportedly coercing individuals in the broader surgical team (or hospital hierarchy) to act against a surgical team member, essentially getting someone else to do their 'dirty work'.

Respondents reported experiencing direct bullying mainly from fellow surgeons (often from a different surgical speciality) or from individuals in positions of power, such as anaesthetists or senior executives in the hospital hierarchy. A range of male and female respondents across different surgical specialties and work settings recognized that their seniority gave them greater confidence in addressing disrespectful or bullying behaviour when required, while others acknowledged that they actively avoid aggressive protagonists. Some respondents, across surgical specialities and work settings, discussed the tactic of exploiting complex and protracted complaints processes to 'bog down' a peer or colleague they want to undermine or sideline, describing it as a pre-emptive strike by the bully, who fears a complaint will be made against them.
A small number of respondents considered that too much emphasis is being placed on bullying in surgery. Some of these respondents reported that they wanted to be part of the study to provide balance to what they believed could otherwise be a biased narrative.

Impact and outcomes of bullying

While respondents broadly reported that the intensity of bullying behaviour had reduced over time, they reflected that its impact was still felt deeply. Table 3 outlines the theme of impact and outcomes of bullying, with associated subthemes and illustrative quotes.

Many respondents, particularly those working in public teaching hospitals, noted that staff working in surgical settings are highly educated about workplace behaviour and sensitized to 'notice' negative communication and behaviour. A smaller number of respondents went further, believing some surgical staff are overly sensitive to criticism in any form and easily offended.

Respondents acknowledged theatre staff's awareness of the surgical workplace hierarchy, where careers can be derailed by negative relationships with more senior managers. Junior staff avoid rocking the boat and endure a lot before resorting to complaints or leaving their job. They reported avoiding problematic individuals or situations instead of complaining about or confronting them. Respondents also lacked confidence in complaints mechanisms due to barriers such as lengthy investigations, the high energy demand to see them through, and fear of negative impacts on their career. [6][7] Some respondents reported feeling 'burnt' by past experiences. Many doubted the value and effectiveness of complaints processes, as they rarely led to positive change or sanctions against perpetrators. Serial bullies reportedly remain in positions of power, eroding trust in organizations and professional bodies that have failed to address this known problem.

One of the most concerning findings was the acknowledged mental health impact of bullying on some respondents. A smaller subset of male respondents went as far as to report experiencing deep depression and suicidal thoughts at points in their surgical career.

Discussion

This study aimed to explore the perceptions of surgeons in Australia and Aotearoa New Zealand on the nature and extent of bullying in surgical workplaces and its impact on them and other staff working in surgical settings.

The study found evidence that respondents perceive the nature and extent of the behaviour to have changed. A broad base of respondents suggested behaviour had improved, noting a shift to become less intense, less physical and less violent over time. While some of the more extreme forms of bullying may be consigned to history, there appears to be a notable shift to minor incivilities, microaggressions, and indirect and manipulative forms of communication and behaviour. This shift and pattern of bullying behaviour is consistent with research findings recently published on surgical workplaces, notwithstanding that most of this research has been conducted on nursing staff, 11,15 trainee surgeons, 3,6,7,[16][17][18] medical students, 19,20 and women pursuing surgical careers. 18,21,22
The other aspect of the research question related to the impact of verbal and non-verbal bullying on surgeons and staff working in surgical settings. While some respondents acknowledged their optimism about improvements in workplace communication and behaviour and a belief that 'things are headed in the right direction', others were less optimistic. A substantial body of research reports the negative impacts of bullying on trainee surgeons, 3,6,7,[16][17][18] female surgeons, 18,21,22 nursing staff, 11,15 and junior theatre staff, but there is less published evidence that surgeons themselves are similarly impacted.

Despite the variety of reactions reported in response to personal experiences of bullying, respondents were more inclined to try to actively support and guide junior surgical staff through their negative experiences. Many participants spoke of wanting to be a good role model, a positive leader and memorable to staff coming up through the ranks. There is a lack of literature on the role of surgeons as key agents in addressing workplace bullying through the lens of leadership.

Most surgeons in this study acknowledged their crucial leadership role in their workplaces, with many recognizing that they 'set the tone' for what behaviour is and is not tolerated or modelled. Respondents almost universally took their leadership roles seriously and were committed to being part of the solution, focusing on improving the bullying culture in surgery in the future. This contrasts with literature suggesting that surgeons continue to be influenced by myths and practices of the past, driving poor behaviour and consistently high levels of bullying in modern surgical workplaces. 4 The literature explores how social identity theory shapes surgeons' perceptions of, and role in, bullying. This theory suggests that individuals' self-concept is influenced by their social groups, including the surgical specialties. This study found that surgeons' social identity is complex and changes over time, with some identifying more with their specialty, others with colleagues of the same gender, and others as mentors to junior staff. This lens can aid in understanding the realities and priorities of surgeons as a potential leverage point for further positive reform.

Limitations

Respondents' past experiences of bullying may have influenced their responses during the interview, despite being asked to focus on the previous 6 months. The difficulty of setting aside these experiences may have affected the accuracy of the data collected. A long career and exposure to past bullying events may shape current perceptions and could have influenced the study results.

In this study, selection bias is possible due to the methodology used to recruit surgeons. Those keen to lead positive change may have self-selected to participate in this research. Therefore, their perspectives may not be generalizable to the experience and voice of surgeons collectively.
Conclusion

This study examined the experiences and perspectives of surgeons across Australia and Aotearoa New Zealand about the prevalence, impact and nature of bullying within surgical workplaces during the 6-month period prior to interviews conducted in 2019. The findings uncovered a compelling shift in the nature and intensity of communication and behaviour, with respondents recognizing a gradual move away from overt physical and violent acts towards more insidious, covert and manipulative forms of communication and behaviour. Although the severity of bullying may have subsided over time, its impact on the wellbeing and performance of staff working in surgical settings is still evident. In particular, the study highlights the distinct effect such behaviour has had on surgeons throughout their careers and the ongoing impact for many today. The insidious effects of subtle bullying and manipulation persist, and more must be done to address this issue.

Finally, the crucial role of surgeons as key agents in promoting and enhancing respectful communication and behaviour in surgical workplaces cannot be overstated. As a result, it is strongly advised that the RACS and employing organizations intensify their endeavours to harness the influential role of surgeons as catalysts for positive change and help build more respectful surgical workplaces.

Table 1. Description of theme 'verbal bullying' and subthemes, with illustrative quotes.
'And my colleague was just launching into him about how stupid he was.' (Respondent 12)
'And then if they speak disrespectfully, particularly if they yell or swear or abuse them, then those people aren't going to then feel safe to speak up if they see any problems, which makes it a very unsafe environment.'
'... it resulted in me crying and being yelled at ... I counted that as bullying because that kind of loud yelling, condescending behaviour, is repetitive, and makes interactions incredibly difficult at all times.' (Respondent 14)
Public humiliation, undermining reputation: 'He has then gone and badmouthed me further to lots of other colleagues ... anything I do will be greatly criticized in front of large numbers of people.' (Respondent 30)

Table 2. Description of theme 'non-verbal bullying' and subthemes, with illustrative quotes.
'So, the best way that I can describe it is kind of sniping.' (Respondent 7)
Body language, shunning and dismissive: 'So, it's just the rolling of the eyeballs, it's the toss back of the head, which is just as powerful in terms of the gut reaction that it generates in me.' (Respondent 16)
'He was very loud and he was also very physically intimidating, because he's very tall and he was using a lot of intimidating body language, sort of standing right over her.' (Respondent 4)
'Sometimes the way we say things ... trainees are not stupid, they can feel that we don't like them.' (Respondent 3)
'He has also previously physically pushed me aside in a clinic, witnessed by multiple nursing staff who complained to the head of department, and nothing was done about that either.' (Respondent 30)
Using positional power to dominate: 'So I think there's hierarchy and ego which kind of go hand-in-hand.' (Respondent 19)

Table 3. Description of theme 'impact and outcomes of bullying' and subthemes, with illustrative quotes.
'I called myself out on that and apologized to them.' (Respondent 7)
'And I think people are very forgiving about it. They realize that when there's a job to be done, there's a job to be done.' (Respondent 5)
Active avoidance or leave: 'I think the, the most, the best
mechanism I have found … is complete avoidance. So, I have set out 15 years to make sure that my paths virtually never cross with his.' (Respondent 16)
Intimidation and fear of retribution: 'You cannot expect the low power person to write a written complaint to HR.'
'If it wasn't for my family, and for one very close friend in the institution, I can see how people could go down the path of suicide, get depressed. It was a professional nightmare.' (Respondent 13)
Single- versus Dual-Targeted Nanoparticles with Folic Acid and Biotin for Anticancer Drug Delivery

Cancer is one of the major causes of death worldwide and its treatment remains very challenging. The effectiveness of cancer therapy depends significantly upon tumour-specific delivery of the drug. Nanoparticle drug delivery systems have been developed to avoid the side effects of conventional chemotherapy. However, according to the most recent recommendations, future nanomedicine should be focused mainly on active targeting of nanocarriers based on ligand-receptor recognition, which may show better efficacy than passive targeting in human cancer therapy. Nevertheless, the efficacy of single-ligand nanomedicines is still limited due to the complexity of the tumour microenvironment. Thus, NPs are being improved toward additional functionality, e.g., pH-sensitivity (advanced single-targeted NPs). Moreover, dual-targeted nanoparticles, which contain two different types of targeting agents on the same drug delivery system, are being developed. Advanced single-targeted NPs and dual-targeted nanocarriers present properties related to cell selectivity, cellular uptake and cytotoxicity toward cancer cells that are superior to the conventional drug, non-targeted systems and single-targeted systems without additional functionality. Folic acid and biotin are used as targeting ligands for cancer chemotherapy, since they are available, inexpensive, nontoxic, nonimmunogenic and easy to modify. These ligands are used in both single- and dual-targeted systems, although the latter are still a novel approach. This review presents the recent achievements in the development of single- and dual-targeted nanoparticles for anticancer drug delivery.

Introduction

According to the World Cancer Report [1], cancer is the first or second leading cause of premature death (i.e., at ages 30-69 years) in 134 of 183 countries, and it ranks third or fourth in an additional 45 countries. Therefore, cancer treatment represents one of the most crucial issues in clinical management [2]. First-line therapy of solid tumours is based on surgery, radiotherapy and/or chemotherapy [3]. For metastasized tumours, or for lesions which cannot be removed surgically, chemotherapy is among the very few treatment options available. A serious problem of intravenous systemic chemotherapy is unspecific targeting to the tumour and the difficulty of achieving therapeutic drug levels within or adjacent to the tumour. For example, in the case of intravenously infused paclitaxel (Ptx), less than 0.5% of the total dose is locally available within the tumour. Furthermore, significant concentrations of drug frequently accumulate in healthy tissue, leading to severe side effects and dose-limiting toxicity [4]. Different strategies have been tried to develop novel tumour-specific delivery systems for chemotherapeutics to reduce toxicity, and recent progress in nanomedicine has created an opportunity for the development of more potent and tumour-targeted dosage forms [5]. So far, various kinds of nanomedicines such as antibody-drug conjugates (ADCs), drug conjugates and nanocarriers for cancer therapy have been approved by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) [6]. Liposomal doxorubicin (Doxil™/Caelyx™) was the first anti-cancer nanomedicine approved by the FDA, in 1995, and it achieved a nearly 300-fold increase in area under the curve relative to free doxorubicin [7].
Liposomes are spherical structures composed of phospholipid bilayers and cholesterol that enclose a central space that can carry small drug molecules and large macromolecules such as DNA. They are characterized by a large cargo space and the capability to simultaneously deliver both hydrophilic and hydrophobic agents. Liposomes also provide a longer circulation time than free drug and can be easily modified to obtain the expected therapeutic effect [8]. Apart from surface modification to achieve active targeting, liposomes may also respond to specific properties of the cancer cell environment to release anti-cancer drugs under well-defined conditions, e.g., pH, ultrasound, magnetic field exposure, light or enzymes, thereby reducing systemic toxicity to healthy cells [9]. An example of stimuli-responsive liposomes are magnetoliposomes, which possess a magnetic core surrounded by a lipid bilayer. Magnetoliposomes may be excited by magnetic radiation, causing local hyperthermia within the tumour and leading to cancer cell death. Such transport systems, in addition to their chemotherapeutic properties, can also be used in cancer diagnostics (e.g., with MRI) and may be promising in future cancer therapy [10,11]. However, apart from liposomes, other kinds of nanoparticles are also being studied for anticancer drug delivery, e.g., dendrimers and polymeric micelles. Dendrimers have a well-defined macromolecular structure, spherical geometry and abundant functional groups on the surface that can be modified or physically changed for non-toxicity, enhanced efficiency and specific cargo abilities [12]. Research studies consider dendrimers as delivery systems for drugs, genes and contrast agents [13]. A few different types of compounds are used for the preparation of dendrimers, e.g., poly(propylene imine) (PPI), poly(L-lysine), poly(amidoamine) (PAMAM) and PAMAM-organosilicon (PAMAMOS). Dendrimers, in comparison to other polymeric delivery systems, have several specific advantages: (1) a well-known, predictable size, macromolecular weight and structure, which results in desirable abilities to deliver the active agent, as well as the option to choose an appropriate structure with the expected properties; (2) controllable size, lipophilicity and ability to cross cell walls; (3) adjustability of their terminal ends with many compounds, allowing changes in the dendrimers' properties, toxicity level, target point, etc. [14]. Polymeric micelles are small core-shell structures, formed by self-assembly of amphiphilic block copolymers, that have the ability to increase drug solubility and reduce toxicity [15]. The hydrophobic core is surrounded by a hydrophilic shell. The most popular polymer forming the outer shell is poly(ethylene glycol) (PEG), due to its hydrophilicity and biocompatibility. Poly(lactide) (PLA) and poly(ε-caprolactone) (PCL) are among the most commonly used hydrophobic blocks of the amphiphilic polymers because they are biocompatible and biodegradable [16]. A polymeric micelle size between 50 and 200 nm supports long-term circulation and avoids capture by the reticuloendothelial system. Compared to surfactant micelles, block copolymer-based micelles are characterized by higher stability and larger versatility for controlling micellar structure and functionality through choices of polymer composition, architecture, molecular weight and monomer chemistry [17]. Moreover, surface modification enables enhanced tissue penetration and targeting properties [18][19][20].
It has been demonstrated that substantial intra- and intertumoural variability is present in tumour cells and the tumour microenvironment, resulting in heterogeneity of the molecular, pathologic and clinical features of each tumour type. In this aspect, nanoparticle design plays a significant role in influencing tumour targeting [5].

Active Targeting of Nanoparticles

Passive targeting associated with the enhanced permeability and retention (EPR) effect was proposed as the major underlying mechanism for nanomedicine-based cancer therapy. However, more and more studies have revealed that the EPR effect, although present in animals like mice, plays a less important role in humans due to tumour heterogeneity or a lack of fenestrations in the tumour endothelium [6]. Therefore, the most recent reports indicate that future nanomedicine may require new design principles oriented toward active targeting of nanocarriers [3,6,21,22]. In order to specifically target and eradicate cancer cells, there is a requirement for precisely distinguishing target cells (tumour cells) from non-target cells and for the development of smart drug delivery platforms [23]. Active targeting based on ligand-receptor recognition may show better efficacy than passive targeting in human cancer therapy, and several active-targeting nanomedicines have already progressed into clinical trials [24]. Targeted drug delivery systems can also be useful to overcome acquired multidrug resistance (MDR), which is thought to be caused by overexpression of the superfamily of ATP-binding cassette (ABC) proteins, e.g., P-glycoprotein (P-gp) and multidrug resistance-associated protein (MRP), resulting in enhanced cellular efflux [25,26]. In the past decade, several strategies for receptor-mediated endocytosis have been developed to enhance tumour cell uptake of drug-loaded nanoparticles. A variety of small molecules such as folic acid (FA) and biotin (BIO) (Figure 1) have been used as targeting ligands for cancer chemotherapy, because they are readily available, inexpensive, nontoxic, nonimmunogenic and easy to modify [27]. Based on the overexpression of specific receptors on tumour cells, active-targeting nanomedicines may efficiently deliver drugs into tumour cells via receptor-mediated endocytosis, so the targeting effect depends on receptor expression. Folate and biotin, as low-molecular-weight vitamins, play essential roles in cell survival and bind to their respective receptors with high affinity. The folate and biotin receptors are up-regulated in various carcinomas while being expressed at low levels in normal cells and tissues, thereby minimizing potential off-target toxicities [28].
However, since the receptors (surface markers) of tumour cells change dynamically with tumour progression [29], multiple-ligand-coated nanostructures have been found to improve the identification of tumour cells [6,23].

Folic Acid as a Targeting Ligand

Recent studies show the great potential of folic acid-targeted drug delivery systems [13]. FA (vitamin B9) is necessary for the synthesis of purines and thymidine, the crucial nucleic acid components. Folate receptors (FRs) are membrane glycoproteins that take up folates through endocytosis. Three different subforms of FRs (FRα, FRβ and FRγ) have been identified in human tissues. An increased folate requirement in fast-proliferating cells, such as cancer tissues, causes a higher expression level of FRα compared to healthy cells [30]. Thus, FA receptors are overexpressed in human carcinomas including breast, ovary, endometrium, kidney, lung, head and neck, brain, colon and myeloid cancers, while being only minimally distributed in normal tissues [31][32][33][34][35]. The abundance of folate receptors per cell varies dramatically, from approximately 3 × 10⁵ in KB oral carcinoma cells and about 10⁴ in C6 glioma cells down to undetectable in E9 chick cortical cells [36]. The expression of the folate receptor alpha (FRα) is significantly increased in patients with triple-negative breast cancer and is therefore a potential biomarker and therapeutic target [37][38][39].
The FR is also highly expressed in bone metastatic cells and osteoclasts [40], making it an attractive target for bone-related cancers. Thus, the use of FRs as targets for drug delivery systems is based on the specific characteristics of this receptor type: much higher overexpression on tumour cells (100-300×), rapid receptor recirculation after cell internalization, and exposure on the cell surface without release into the circulation [13,41]. Moreover, small molecules like folic acid present advantages over peptides or antibodies as targeting ligands, e.g., improved stability during storage, increased stability in acidic or basic media and better resistance to high temperature. Folic acid carries no risk of toxicity or immune reactions, and offers unlimited availability, low cost and low immunogenicity. As a targeting ligand, FA is also easy to scale up for clinical applications and amenable to facile chemical modification [13,41]. Therefore, FR-targeted drug delivery to different cell types or specific organs can potentially maximize therapeutic efficacy while minimizing side effects. There are extensive studies on FA-targeted micelles. Varshosaz et al.
synthesized folic acid-targeted micelles of Synperonic PE/F 127-cholesteryl hemisuccinate (PF127-Chol) loaded with docetaxel. In vitro studies exhibited a high drug encapsulation efficiency of 99.6% and superior cytotoxicity and cellular uptake in comparison to non-targeted micelles and the free drug. Moreover, a reduction in tumour volume was observed in mice bearing melanoma [78]. pH-sensitive and folic acid-targeted mixed micelles were developed from a mixture of poly(ethylene glycol)/methyl ether-poly(histidine) (MPEG-PHIS) and folic acid-poly(ethylene glycol)-(+)-α-tocopherol (FA-PEG-VE). These micelles presented high cytotoxicity and destabilization in the acidic environment of endosomes, which caused the release of paclitaxel. The rate of sarcoma tumour inhibition in female Kunming mice was 85.97% [19]. Recent studies evaluated a doxorubicin-loaded drug delivery system composed of Bletilla striata polysaccharide modified with stearic acid (SA) and targeted with folate (FA-BSP-SA). A pH-responsive release effect of the FA-BSP-SA micelles was observed, leading to increased release of Dox under acidic conditions. The in vivo study conducted in mice showed a decrease in tumour weight and volume [77]. New redox-sensitive polymeric micelles targeted with folic acid (FHSV: folic acid-hyaluronic acid-SS-vitamin E succinate) for paclitaxel delivery (Ptx/FHSV) were designed by Yang et al. These nanoparticles are characterized by enhanced cell internalization due to folic acid and by redox-sensitivity leading to rapid drug release in the presence of a high concentration of glutathione (GSH). Ptx/FHSV micelles were compared to single-targeted micelles and free paclitaxel. In vitro evaluation showed increased cellular uptake of the Ptx/FHSV micelles. The in vivo study revealed enhanced tumour accumulation, inhibition of tumour growth and minimal toxicity to normal cells [89]. A novel approach was the development of FA-targeted filomicelles from a combination of poly(L-lactide)-Jeffamine-folic acid and poly(L-lactide)-poly(ethylene glycol) for delivery of a betulin derivative, which reveals high cytotoxicity against cancer cells. Filomicelles (worm-like micelles) possess a high drug-loading capacity and a long circulation time in the bloodstream. The successful in vitro internalization of PLA-Jeff-FA/PLA-PEG micelles by FR-positive human cervix adenocarcinoma cells (HeLa) was confirmed by flow cytometry and confocal laser scanning microscopy (CLSM). Importantly, drug-free micelles did not affect the viability of cells (Figure 3) [81]. Fasehee et al. investigated disulfiram-loaded NPs as a therapeutic system to treat breast cancer. Disulfiram is a well-known drug for alcoholism treatment with recently confirmed anticancer action through induction of reactive oxygen species (ROS) generation, which is responsible for activation of apoptosis. The folate-targeted PLGA-PEG NPs loaded with disulfiram showed a significant decrease of the breast cancer tumour growth rate in Balb/c mice. Additionally, no body weight loss or death was observed, chronic toxicity was lower and tumour growth inhibition was more significant [42]. Another example of using a non-obvious drug for cancer treatment is NPs loaded with orlistat, an FDA-approved anti-obesity drug with the ability to block the lipogenic activity of fatty acid synthase (FAS), present in 50% of cancer cells. Unfortunately, orlistat has poor bioavailability (≈1%).
Its fast degradation was prevented by loading the drug into NPs synthesized by copolymerization of 2-hydroxyethylacrylate (HEA) and 2-ethylhexylacrylate (EHA) with FA. The nanoparticles were developed for the effective treatment of triple-negative breast cancer. The in vivo study showed a 70% volume reduction of MDA-MB-231 tumour xenografts in mice, suggesting that this anti-obesity drug can be considered a novel strategy to treat a highly challenging cancer that lacks overexpression of oestrogen, progesterone and HER2 receptors [43].
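Several of the formulations discussed in this section are characterized by their encapsulation efficiency and drug loading (e.g., the 99.6% efficiency reported for the docetaxel micelles above). The cited studies do not spell out the formulas used; as a point of reference only, the conventional definitions are:

\mathrm{EE\,(\%)} = \frac{m_{\mathrm{drug,\ encapsulated}}}{m_{\mathrm{drug,\ added}}} \times 100, \qquad \mathrm{DL\,(\%)} = \frac{m_{\mathrm{drug,\ encapsulated}}}{m_{\mathrm{carrier}} + m_{\mathrm{drug,\ encapsulated}}} \times 100

A formulation can therefore show a very high EE while its drug loading remains modest, which is why both values are usually reported together.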
Worth mentioning is a novel type of nanoformulation composed of solid lipid nanoparticles (SLNs), obtained from lipid molecules that occur in the solid state at room temperature. It is claimed that the advantage of SLNs is their low biotoxicity and convenient large-scale production. In 2018, Rajpoot and Jain published the results of their study of oxaliplatin-loaded SLNs containing tristearin, 1,2-distearoyl-sn-glycero-3-phosphoethanolamine (DSPE), Lipoid S75 and Tween 80, conjugated with folic acid for colorectal cancer treatment. The formulation achieved higher anticancer activity against HT-29 cells (human colon cancer cell line) than free oxaliplatin and non-targeted oxaliplatin-loaded SLNs [90]. Further studies reported that SLNs can also be used for the delivery of irinotecan to colorectal cancer. These SLNs showed improved sustained drug release compared to liposomes and niosomes, good stability and the possibility of conjugation. In vitro studies of irinotecan-loaded SLNs targeted with FA (FA-SLNs) exhibited a prolonged drug release profile, high encapsulation efficiency and small particle size (from 164.14 ± 5.57 to 201.88 ± 9.92 nm) [83]. The exploration of SLNs has continued: FA-SLNs loaded with irinotecan were coated with Eudragit S100 and formulated into oral pH-responsive alginate microbeads for the treatment of colorectal cancer [91]. The system was evaluated in vitro for drug release at various pH values, and the findings demonstrated drug release only in the intestinal region (pH > 7.0). In vivo research was conducted in a Balb/c nude mouse model bearing HT-29 tumours to observe the targeting potential and organ biodistribution. The Eudragit-coated, irinotecan-loaded FA-SLNs showed enhanced tumour growth inhibition in comparison to non-targeted Eudragit-coated SLNs with irinotecan and to the free drug [91].
Dual-Drug Delivery
Apart from simple liposome encapsulation of a single active substance, innovative compound formulations are being synthesized. Combining more than one cytostatic drug in a single cycle of chemotherapy is a well-known strategy that relies on combining the different mechanisms of action of the drugs used, which may provide better anticancer effects. Accordingly, co-encapsulation of two drugs-cisplatin (Cis) and paclitaxel-in folic acid-modified liposomes revealed a greater chemotherapeutic response of FR-positive non-small cell lung cancer cells [48]. FR-targeted liposomes loaded with paclitaxel and imatinib have shown effectiveness in promoting cell death and suppressing vascular endothelial growth factor (VEGF) expression in folate-receptor-overexpressing cancer cell lines [92]. In addition, the cytotoxic activity of FA-modified liposomes double-loaded with mitomycin C and doxorubicin was proved against prostate-specific membrane antigen (PSMA)-positive cancer cells [8]. Gazzano et al. designed folate-targeted liposomes with doxorubicin conjugated with nitric oxide, which inhibited Pgp efflux. In a preclinical trial, this formulation showed superior efficiency to doxorubicin and Caelyx®, with similar toxicity, so it can be potentially advantageous for the treatment of FAR-positive/Pgp-positive breast tumours [93]. NPs were used to achieve an efficient drug delivery system for cisplatin and docetaxel in breast cancer treatment [47] or cisplatin and paclitaxel against non-small cell lung cancer [48,49].
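The co-delivery studies above report synergy between drug pairs such as cisplatin and paclitaxel. The cited papers do not state which synergy metric they applied; one widely used option (given here as an assumption, not a claim about those studies) is the Chou-Talalay combination index:

CI = \frac{D_1}{(D_x)_1} + \frac{D_2}{(D_x)_2}

where D_1 and D_2 are the doses of the two drugs used in combination to reach a given effect level x, and (D_x)_1 and (D_x)_2 are the doses of each drug alone producing the same effect; CI < 1 indicates synergy, CI = 1 an additive effect and CI > 1 antagonism.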
Thapa et al. prepared folic acid-targeted liquid crystalline nanoparticles (LCNs) loaded with docetaxel and cisplatin for effective metastatic breast cancer treatment. The LCNs were characterized by a small size (~250 nm), good encapsulation properties and controlled drug release. An in vivo trial conducted in an MDA-MB-231 tumour-bearing xenograft mouse model confirmed that the inhibition of tumour growth was more significant in mice treated simultaneously with cisplatin and docetaxel than in mice treated with the separately administered drugs. However, even more significant tumour inhibition was obtained in mice treated with FA-targeted LCNs co-encapsulating both drugs, indicating their therapeutic potential [47]. Cisplatin has also been co-delivered with paclitaxel for the treatment of lung cancer in a nanosystem composed of folic acid-modified PEG-PLGA. The optimal drug ratio was evaluated in four different lung cancer cell lines (L929, R1610, A549 and M109). It was found that a Cis/Ptx ratio of 1:2 shows the greatest cell growth inhibition. Additionally, the synergistic effects of the two drugs could promote and accelerate tumour cell death, which was confirmed in vivo. M109 and A549 lung cancer cell-grafted tumour-bearing nude mice were treated with PBS, free cisplatin, free paclitaxel, FA-Cis-NPs, FA-Ptx-NPs, Cis-Ptx-NPs, FA-Cis-Ptx-NPs and a combination of the free drugs to compare the anticancer activity and systemic toxicity of the various formulations. It was observed that co-delivery of the two drugs in the targeted NPs (FA-Cis-Ptx-NPs) exhibited the highest antitumour efficiency with minimal side effects [48]. The research was continued to confirm the hemocompatibility of these nanoparticles and the absence of toxic effects (e.g., blood haemolysis, blood clotting or complement activation) [49].
Gene Delivery
FA-targeted liposomes may also be used as carriers of genes, which are susceptible to degradation by serum nucleases and therefore need protective delivery systems providing enhanced cell uptake and controlled release. A promising strategy for an improved anti-tumour effect was achieved by co-delivering small interfering RNA with cisplatin, doxorubicin and ursolic acid in folate-targeted liposomes [94][95][96]. In addition, dendrimers made from cationic polymers can be considered carriers appropriate for gene delivery, e.g., of siRNA [97,98], because they provide good protection against rapid enzymatic degradation and effective gene transport into the intracellular space. The problem of poor cationic polymer safety was overcome by modifying the terminal amines of the dendrimers with gold nanoparticles and conjugating folic acid. An in vitro study showed that PAMAM dendrimers modified with Au and FA were not cytotoxic (cell viability up to 90%) and induced transgene silencing up to 75% in HeLa cells (human adenocarcinoma cell line) [97].
FA-Targeted NPs for Thermo-, Photo-, Radiotherapy and Diagnostic or Theranostic Application
There are also a number of novel approaches based on polymer-coated magnetic nanoparticles (MNPs), which possess a magnetic inner core (usually Fe3O4 or Fe2O3) covered by a polymeric shell (e.g., PEG, starch, dextran). MNPs can be targeted to tumour cells and release a drug in response to environmental factors [63][64][65]. The combination of magnetic properties and targeting to folic acid receptors has been assessed.
According to in vitro research conducted by Gunduz et al., idarubicin-loaded FA-PEG-MNPs showed increased internalization by MCF-7 cells (human breast adenocarcinoma cell line) located near the magnet site and twice the cytotoxicity of free idarubicin [63]. The influence of PEGylated MNPs on MCF-7 cells was also investigated by Saragazi et al. The findings confirmed that FA-conjugated MNPs loaded with methotrexate (MTX) had a significant inhibitory effect on MCF-7 cells and showed controlled, enzyme-dependent release of MTX [82]. The in vitro and in vivo properties of 188Re-labelled folate-targeted albumin nanoparticles coupled with cisplatin were also analysed. These MNPs combine three strategies of cancer treatment: chemotherapy, radiotherapy and thermotherapy. Three groups of mice showed a significant reduction of tumour mass of more than 80% (I-treated with chemotherapy and thermotherapy; II-treated with radiotherapy and thermotherapy; III-treated with thermotherapy, chemotherapy and radiotherapy), and the triple therapy showed the most significant inhibition (88.52%). Furthermore, only a few mice exhibited weight loss and decreased appetite, but none of them died. These findings offer a chance to treat difficult-to-cure ovarian cancer, overcome the MDR effect and increase the chance of recovery [44]. Nanoscale metal-organic frameworks (NMOFs) are another promising class of drug delivery carriers. NMOFs are porous, crystalline materials obtained from metal ions (clusters) and organic linkers. Their advantages include a high surface area for functionalization, a large pore size for drug loading and biodegradability. Folic acid was successfully added to zirconium-based MOFs, MOF-808 and NH2-UiO-66, followed by encapsulation of 5-FU. The NMOFs loaded with 5-FU exhibited pH-sensitive drug release, enhanced cellular uptake and cytotoxicity against HeLa cells [99]. A novel therapeutic idea was developed to combine cancer treatment and diagnosis [57][58][59][60][61]. Adequate diagnosis and precise imaging tools for localizing tumour cells, checking lymph nodes and assessing margins are a very important part of cancer treatment. There are still serious limitations of cancer diagnosis using contrast agents, such as problems with imaging the whole tumour mass, crossing the blood-brain barrier or detecting small cancers. FA-targeted NPs are a promising tool to overcome those limitations. Intraoperative imaging was improved by Keating et al., who produced a NIR FA-targeted contrast agent [57]. Folate-carbon dots (FA-CDs) were made from poly(sodium acrylate) (PAAS) as a passivating agent and designed as a turn-on fluorescence probe for the detection of cancer cells [58]. Aconitic acid was used to prepare fluorescent carbon dots conjugated with FA, which showed potential as a turn-on fluorescent imaging tool for different kinds of tumour cells with FR overexpression [60]. A perylenemonoimide (PMI) dye-doped polymer nanoparticle (PNP) with NIR emission for live-cell imaging was demonstrated by Pal et al. [59]. Positron emission tomography (PET) radiotracers can also take the form of FA-targeted nanoparticles, e.g., folate-PEG-NOTA-Al18F [61].
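Tumour inhibition values such as the >80% and 88.52% reductions quoted above for the 188Re-labelled nanoparticles are conventionally calculated relative to an untreated control group; assuming the standard definition based on final tumour weight (or volume) W,

\mathrm{TIR\,(\%)} = \left(1 - \frac{W_{\mathrm{treated}}}{W_{\mathrm{control}}}\right) \times 100

so an inhibition rate of 88.52% corresponds to treated tumours reaching roughly 11.5% of the control tumour mass.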
Folate-targeted lipid-polymer hybrid nanoparticles (LPHNPs) loaded with indocyanine green (ICG) and oxygen-carrying perfluoropentane (PFP) (TOI_HNPs) were also developed. LPHNPs are core-shell structures with a lipid shell and a polymeric core, combining the properties of liposomes and nanoparticles, which allows the limitations of both types of drug carriers to be overcome and enhances their delivery properties, e.g., controlled release, long circulation time and encapsulation efficiency. Additionally, the LPHNPs were modified with folic acid to target FR-overexpressing ovarian cancer cells. ICG is a sensitizer for phototherapy and perfluorocarbons (PFCs) are oxygen carriers. Studies confirmed the stability of the LPHNPs, their active targeting ability (compared to PLGA NPs) and their activation by laser exposure. An in vitro trial on SKOV3 cells showed that TOI_HNPs can release oxygen. However, some unexpected disadvantages connected with photo-sonodynamic therapy (PSDT) were reported, e.g., an increased risk of activating drug resistance pathways. Thus, future in vivo studies are required, as well as some improvements of the NPs, e.g., encapsulation of an anticancer drug [100]. FA-targeted dendrimers can also be used as carriers of contrast agents [101][102][103] and in radiotherapy [104]. A new therapeutic nanocomplex based on FA-modified dendrimers, carrying two active agents-fluorouracil and 99mTc-to breast cancer cells, has been achieved. In vivo studies in breast tumour-bearing BALB/c mice resulted in an excellent tumour inhibition rate (reduced growth), a significant reduction of tumour size and prolonged survival time. Furthermore, data obtained with a gamma camera indicated that the nanocomplex can be useful as an imaging tool [105,106]. A recent achievement was the development of methotrexate-loaded Au@SiO2 nanoparticles conjugated with folic acid for use in breast cancer phototherapy with low-level laser therapy (LLLT). Apoptosis was observed in MCF-7 and MDA-MB-231 cells treated with Au@SiO2 NPs and MTX-FA-loaded Au@SiO2 NPs, with and without laser therapy. The most significant cytotoxicity was exhibited by the MTX- and FA-loaded Au@SiO2 NPs used in combination with LLLT. The synergistic effect of MTX-FA-loaded Au@SiO2 and LLLT observed in both cell lines, together with the decreased cell viability, may be a rationale for future investigations into designing nanoparticles for cancer phototherapy with low-level lasers [54].
Clinical Trials and Patents
Some FA-targeted drugs have been the objects of clinical trials (Table 2). In the tested formulations, one or more active agents were conjugated with folic acid via spacers, which may be polysaccharides, proteins or PEGs, separating the drug from folic acid to avoid steric interference between them. Vintafolide (MK-8109, EC145) is a conjugate of vinblastine and folic acid, which was examined in platinum-resistant ovarian cancer treatment and reached Phase III of a clinical trial. In 2014, the trial was discontinued due to no increase in progression-free survival. However, it has been found that patients who did not respond to treatment had a high P-glycoprotein level, which could be a reason for the failure [107]. Second-generation spacers were used to create another potentially useful conjugate, EC0489 (folate-desacetylvinblastine hydrazide with a modified linker), which exhibited 70% lower toxicity in preclinical studies compared to EC145. EC0489 was tested in clinical trial NCT00852189, open to patients with refractory or metastatic cancer who have exhausted standard therapeutic options [108].
Another example is a folate conjugate of an indocyanine green-like dye (OTL38), which serves as an intra-operative imaging tool for FR-positive ovarian cancer. The Phase II clinical trial allowed the detection of additional lesions in 48.3% of patients with the use of OTL38 [109], and Phase III (NCT03180307) was completed at the end of 2020. In addition to single-drug conjugates, the folate-desacetylvinblastine hydrazide/mitomycin C conjugate (EC0225) was also tested, in trial NCT00441870. In this study, EC0225 was administered with 99mTc-EC20 (a folic acid-technetium-99m conjugate). The number of clinical trials of folic acid conjugates developed for the therapy of various types of cancer is increasing, as presented in Table 2. In recent years, many different types of FA-targeted drug transport systems have also been patented (Table 3). Some of the patents concern nanoparticles encapsulating an anti-cancer drug or having magnetic properties, used simultaneously for therapeutic and diagnostic purposes. The large number of patents confirms the great interest in nanoparticles targeting folic acid receptors and shows the tendency to create ever more advanced transport systems for anti-cancer therapy, combining various mechanisms of targeted therapy.
Summary
The common choice of folic acid as a targeting ligand is related to its outstanding properties, e.g., its lack of toxicity or immune reactions, availability, ease of chemical modification and ease of scale-up for clinical applications. Another reason is the fact that FA receptors are overexpressed in human carcinomas including breast, ovary, endometrium, kidney, lung, head and neck, brain, colon and myeloid cancers, while only minimally distributed in normal tissues. Various kinds of FA-targeted nanocarriers have been developed (Table 1), both for single-drug delivery and for co-delivery of a second anticancer drug or a gene. In addition, advanced NPs decorated with folic acid have been developed from smart materials that release a drug in response to environmental factors or that enable the combination of chemotherapy with thermo-, photo- or radiotherapy and diagnostic or theranostic application. The large number of FA-targeted nanoparticles that have been patented or entered clinical trials (Tables 2 and 3) proves the significant progress in this field.
Biotin as a Targeting Ligand
Biotin, also known as vitamin H or coenzyme R, is a basic co-factor for the activity of carboxylases. It is associated with metabolic processes such as gluconeogenesis, synthesis of fatty acids and catabolism of branched amino acids. It also promotes cell growth and is delivered especially into cells with high proliferation rates, including tumour cells [110]. Biotin can be conjugated to different molecules via its valeric acid tail to achieve biotinylation. The surface of drug delivery systems (DDSs) may be biotin-functionalized through biotin coupling to the polymer chain before the nanocarriers are obtained (pre-conjugation) [111] or by connecting biotin onto the surface of the nanocarriers after preparation (post-conjugation). The two biotinylation strategies of micelles are presented in Figure 4 [112].
Biotin enhances the internalization of nanoscale drug delivery systems by the target cells/tissue while exerting a lesser effect on normal cells and has thus become one of the most attractive targeting moieties. Since biotin receptors-sodium-dependent multivitamin transporters (SMVT)-are overexpressed on the cancer cell surface, they, in turn, became a target of biotin-functionalized DDSs [113]. This fact has been utilized as a new strategy of cancer therapy, diagnostics and theranostics, which combines therapy and diagnostics [114]. Many reports have confirmed that biotinylated drug vehicles increase the selectivity and uptake of active agents in tumour cells when compared to non-biotinylated DDSs [115,116]. Moreover, the enhanced cellular uptake of various drug carriers results from active recognition of biotin by the receptors, regardless of the type of DDS modification, so biotin may be covalently linked with a carrier or surface-attached. In addition, in some cancer cells, such as colon, breast, lung, renal or ovarian cancer cells, the expression of SMVT is higher than that of other transporters, e.g., folate receptors (FARs). These findings made biotinylation a strategy for targeting aggressive cancers, especially those with SMVT but without FAR overexpression [117].
Biotin-Targeted Nanoparticles
The anticancer properties of biotinylated nanosystems have received great attention from researchers (Table 4) [118]. The era of biotinylated DDSs started with an article published in 2004 by Russell-Jones et al. It was reported that therapy of colon cancer (using a Colo-26 xenograft model) with a biotin-labelled doxorubicin-hydroxypropylmethacrylic acid (Dox-HMPA) complex provides enhanced efficiency when compared to the control group. Furthermore, this effect was not observed when folic acid or vitamin B12 were applied as targeting moieties [119].
Drug-Delivery Application
A biotin-mediated DDS for targeted tumour delivery of doxorubicin was obtained by a modified nanoprecipitation method combined with self-assembly [120]. The structure of the multilayer Dox-PLGA-lecithin-PEG-biotin nanoparticles, with an average diameter of 110 nm, is presented in Figure 5. The cytotoxicity of the nanoparticles was studied in vitro in hepatoma cell cultures (HepG2, human liver hepatocellular carcinoma cell line) and in vivo in tumour-bearing mice. Both studies showed a greater inhibitory effect on cancer cell proliferation than nanoparticles without biotin or the free drug. Quercetin belongs to the flavonoids, which are present in many vegetables and fruits. It has been reported that quercetin inhibits drug efflux by interacting with proteins of the ATP-binding cassette (ABC) transporter family, and also inhibits the gene expression of P-gp and MDR1; it may thus serve as a chemosensitizer in the therapy of cancers showing the MDR effect [134]. However, the extreme hydrophobicity of quercetin seems to be the main limitation to starting clinical trials. To overcome this drawback, polymer nanoparticles were used as a dual-drug delivery system. In order to enhance the anticancer activity of doxorubicin and counteract the MDR effect of cancer cells, Dox and quercetin were incorporated into biotin-decorated methoxy poly(ethylene glycol)-b-poly(ε-caprolactone) (PEG/PCL) using the thin-film hydration method [122]. Studies showed an increase of the doxorubicin concentration within the cells and improved retention, which arose from higher drug uptake and a lower efflux rate in the studied breast cancer cells (MCF-7/ADR). These results were consistent with the in vivo study demonstrating significantly reduced Dox resistance in MCF-7/ADR-bearing mice. Paclitaxel (Ptx) is another common anti-neoplastic agent used for the treatment of various cancer types, such as breast, ovarian, lung and head cancers or AIDS-associated Kaposi sarcoma. Ptx acts as a microtubule stabilizer and is considered one of the most significant advances in chemotherapy of the past years [135]. The first biotin-mediated DDS containing Ptx was reported by Kim et al. in 2007 [115]. In this study, micelles obtained from a biotin-conjugated PEG/PCL block copolymer served as the drug carrier.
Nanosystems with a size of 88-118 nm were tested in vitro to determine the cytotoxicity as well as the cellular uptake. Biotin-conjugated PEG/PCL micelles exhibited relatively high cell viabilities, while Ptx-loaded biotinylated PEG/PCL carriers showed highly selective toxicity for HeLa 229 and MCF-7 cells. Polymeric micelles based only on block copolymers, amphiphilic poly(N-2-hydroxypropyl methacrylamide)-block-poly(N-2-benzoyloxypropyl methacrylamide) (p(HPMAm)-b-p(HPMAm-Bz)), were synthesized with biotin-CDTPA or CDTPA (4-cyano-4-[(dodecylsulfanylthiocarbonyl)sulfanyl]pentanoic acid) [123]. In an aqueous environment, the polymer self-assembled into micelles of 40-90 nm, the size depending on the length of the hydrophobic chain in the copolymers. Drug-free and paclitaxel-containing micelles with 10 wt% drug loading were obtained and examined in vitro. It was observed that biotin-tagged DDSs were internalized more efficiently than non-biotinylated ones by cells overexpressing biotin receptors (A549 lung cancer cells), whereas both types of micelles showed very low cellular uptake by HEK293 human embryonic kidney cells lacking SMVT. As a result, targeted micelles containing Ptx showed stronger cytotoxic activity against lung cancer cells than micelles without biotin. In another study, biotinylated poly(amidoamine) (PAMAM) dendrimers of generation 4 (G4) were employed for the targeted delivery of Ptx, which was covalently attached to the surface of the dendrimer via a succinic acid linker [124]. To prolong the systemic circulation of the dendrimers and to minimize the toxicity arising from their cationic properties, poly(ethylene glycol) was linked. In vitro studies revealed higher cellular internalization and cytotoxicity in A549 cells for the biotin-conjugated dendrimer-Ptx compared to carriers without the vitamin. Moreover, in 3D tumour cell spheroids, biotin-decorated dendrimers displayed better penetration, cytotoxicity and growth inhibition than non-targeted PAMAM and the free drug. Recently, dendrimers were also applied as a DDS for gemcitabine [125]. This drug can replace cytidine in DNA and thus suppress proliferation. It has shown activity in various solid tumours and has been approved for the therapy of many cancer types, such as pancreatic, bladder, breast, colon, ovarian and cervical cancer [136]. Gemcitabine was loaded into a half-generation PAMAM dendrimer (PG4.5) modified with diethylenetriamine (DETA), which served as a linker for biotin conjugation. The obtained nanoparticles were tested in HeLa cell cultures. PG4.5-DETA-biotin was non-toxic, while PG4.5-DETA-biotin/gemcitabine exhibited cytotoxicity and anti-proliferative activity based mainly on apoptosis induction. However, the cytotoxicity of PG4.5-DETA-biotin/gemcitabine was slightly lower than that of the free drug, which may be the effect of the lower molecular size and thus faster cellular uptake of free gemcitabine [125]. There are also studies concerning other active agents delivered via biotinylated polymeric nanocarriers with in vitro and in vivo confirmed anti-cancer activity. Among them, artemisinin should be mentioned, as well as naringenin co-delivered with gefitinib to enhance the anti-tumour effect of gefitinib [126,127]. In these cases, a biotin-conjugated PEG/PCL block copolymer was used as the drug carrier, in the form of micelles for artemisinin and nanoparticles for naringenin and gefitinib. The results of both works constitute the basis for the continuation of research in the next clinical phases.
Apart from polymers, there are also lipid-based carriers with biotin modification. Recently, biotin-polyethylene glycol 2000-distearoyl phosphatidylethanolamine (biotin-PEG-DSPE) was used in an emulsification-ultrasonication and low-temperature solidification method to obtain nanostructured lipid carriers (NLCs) for disulfiram delivery in the presence of copper ions [128]. The DDSs exhibited good stability and cytotoxicity toward breast cancer cells (4T1) in vitro and in vivo. Multi-seed polymeric liposomes loaded with asulacrine (ASL-BIO-MPL) were prepared by encapsulating micelles as seeds in the aqueous phase of biotinylated polymeric liposomes using the micelle gradient method [137]. The outer layer of such systems was modified with a polymer-polypeptide, which is cleaved by matrix metalloproteinase-9, and attached to the tumour-targeting agent. After reaching the target site, metalloproteinase-9 degrades the cleavable peptides into short peptides, causing changes in the liposomal membrane architecture. This results in the release of the inner, asulacrine-containing small micelles for deep intratumoural distribution. In this study, a two-step method was used to scavenge the blood of ASL-BIO-MPL, with injected avidin employed to protect normal tissues from the nanocarriers (Figure 6). The augmented cell penetration and cytotoxicity of ASL-BIO-MPL in 3D tumour spheroids and tumour-bearing mice were confirmed [129]. In vivo safety experiments showed that the targeted nanocarriers in the presence of avidin caused only mild pathological changes in the studied tissues, such as heart, liver, spleen, lung and kidney. Nevertheless, the use of avidin may be considered controversial due to its origin, non-specific binding and possible immunogenicity [138]. Lu et al. developed liposomes decorated with a biotin-cholesterol conjugate ((Bio2-Chol)Lip) [139]. The in vitro and in vivo studies revealed that increasing the biotin density on the liposome surface significantly improved tumour targeting. The (Bio2-Chol)Lip exerted low systemic toxicity since normal cells showed very low uptake. Moreover, cytotoxicity and apoptosis assays showed that (Bio2-Chol)Lip containing paclitaxel, as a model anticancer drug, had better therapeutic properties than the other prepared Ptx-loaded liposomes.
Figure 6. The structure of biotinylated multi-seed polymeric liposomes and the mechanism of the stimuli-responsive size/ligand adapting strategy with the two-step method of the biotin-avidin system. 1-targeting delivery of nanocarriers to the tumour; 2-off-target scavenging of nanocarriers in blood and normal tissues by avidin. With permission from Jin et al. [129].
Another study of carbon-based drug delivery systems with a biotin moiety was carried out by Gupta et al. [132]. Reduced graphene oxide nanocarriers containing gallic acid were coated with biotin-decorated PEG-bis-(amine) and, as two-dimensional DDSs (BPBA@GA-rGONC), were evaluated for cytotoxicity and cellular uptake using A549 (human lung carcinoma) cell cultures. The in vitro studies confirmed better targetability to cancer cells and a significantly decreased IC50 value of the obtained nanocarriers compared to free gallic acid as well as nanocarriers without the coating (113.9 µg/mL versus 171.6 µg/mL and 137.5 µg/mL). Gold nanoparticles are investigated as a drug delivery platform that may be decorated with different targeting agents such as monoclonal antibodies, peptides, folic acid or biotin [140,141]. Such modification improves the anti-cancer activity of the delivered agents, which was confirmed by the work of Pramanik et al. [133]. The synthesized copper(II) complex was attached to 20 nm gold NPs and stabilized by amine-terminated lipoic acid-PEG. The NPs were then biotinylated to achieve targeted delivery toward cancer cells. All of the obtained types of NPs, with and without biotin functionalization, were then examined in HeLa and HaCaT (human non-tumourigenic immortalized keratinocyte) cell line cultures and in mice. Biotin-decorated gold NPs strongly suppressed the tumour growth of HeLa cell xenografts in mice. In the tested group of animals, no significant weight loss was observed, which suggests a non-toxic systemic effect of the biotin-conjugated gold NPs.
Biotin-Targeted NPs for Chemo-Photodynamic Combination Therapy
Biotin-conjugated and PEGylated porphyrin nanoparticles loaded with doxorubicin and co-targeting mitochondria and lysosomes were developed for chemo-photodynamic combination therapy [121]. In this study, meso-tetraphenylporphyrin (TPP) served as a photosensitizer to produce highly reactive oxygen species under light, while doxorubicin was used as the chemo-agent. The acid-amine reaction of biotin-PEG-COOH with TPP-amine enabled the conjugation of TPP-PEG-biotin and the self-assembly of TPP-PEG-biotin with doxorubicin encapsulation at a level of 25.03%. Studies of the intracellular localization revealed that the conjugate, the self-assembled nanoparticles and the Dox-loaded NPs were distributed mainly to the mitochondria and partly to the nuclei and lysosomes of MCF-7 cells, while meso-tetra(4-sulfonatophenyl)porphyrin (TPPS) was found only in lysosomes. It was suggested that the PEG-biotin modification of TPP helps not only in selective cellular but also in intracellular targeting. However, the cellular uptake of the TPP-PEG-biotin NPs was slower than that of the TPP-PEG-biotin conjugate. Nanoparticles containing doxorubicin generated cytosolic calcium and caspase 3 at a higher level under light conditions than in the dark, which caused more apoptosis of cancer cells.
Gene Delivery
Chitosan entered the biomedical field in the 1990s and has been used in wound dressings as an antimicrobial agent [142] and in tissue engineering as an extracellular tissue matrix [143]. It is also extensively investigated as a platform for the delivery of drugs [144]. As chitosan is relatively less toxic than other cationic polymers, it is thought of as a promising excipient for gene delivery systems [145]. This natural polymer protects naked genes from DNases and thus improves the cellular uptake of DNA-based drugs administered to the body.
To increase the cellular targeting of chitosan-based gene delivery systems to liver cancer, biotinylation was utilized by Cheng et al. [130]. Biotin-modified chitosan nanoparticles with plasmid DNA were prepared for the stimulation of an immune response in liver cancer cells. The synthesized plasmid (pGM-CSF-GFP-IRES-Rae-1-IL-21) contained the genes of granulocyte-macrophage colony-stimulating factor (GM-CSF) and interleukin-21 (IL-21) to trigger the activation of cytotoxic T lymphocytes and natural killer cells. The obtained biotinylated chitosan NPs exhibited significantly improved gene and protein expression of GM-CSF, IL-21 and Rae-1 when compared to the nanocarriers without biotin moieties. Additionally, the biotinylated nanocarriers greatly increased the survival time of the tumour-bearing mice. It was concluded that biotinylated chitosan NPs can mediate gene transfer and exert an inhibitory effect on the hepatoma cell model in situ, without any side effect on other cells.
Summary
Biotin promotes cell growth and is delivered especially into cells with high proliferation rates, including tumour cells. Biotin receptors-sodium-dependent multivitamin transporters (SMVT)-are overexpressed on the surface of cancer cells, e.g., colon, breast, lung, renal or ovarian cancer cells, and they are the target of biotin-functionalized DDSs. The diversity of biotin-targeted nano-delivery systems designed for single- and multi-drug delivery, gene delivery or chemo-photodynamic combination therapy is presented in Table 4.
Tumour Heterogeneity
Tumour heterogeneity, which occurs not only between patients but even between the primary tumour and metastases and within the tumour itself, is a critical aspect of cancer biology and remains a complex and challenging hurdle in the development of effective cancer therapy strategies [146,147]. Intratumoural and intertumoural heterogeneity (including cellular morphology, cell signalling, cell surface markers, receptors, metabolism, motility, drug resistance and angiogenic, proliferative, immunogenic and metastatic potential) is the result of genetic and epigenetic changes that occur both in cancer cells and in tumour stromal cells [148,149]. In heterogeneous tumours, a pool of heterogeneous cells exists in which each clonal cell population differs in the expression of various molecular targets, in the expression levels (quantity) and in the quality of these molecules (accessibility and affinity). Thus, the relative levels of vitamin receptor overexpression may differ from cell to cell in the tumour. Since active targeting is based on the recognition and binding of ligands to tumour cell surface receptors, the targeting effect is affected by the receptor expression (surface markers), which may change dynamically with tumour progression [29]. Moreover, ligand-receptor binding is a saturable process, as the recycling and synthesis of receptors takes time [6]. It has also been reported that different receptors are often upregulated on tumour cells and that drug resistance is often associated with the upregulation of alternative receptors as well as pathway switching between two receptors [150]. These factors will affect the delivery efficiency of single-ligand nanomedicines, causing variation in the response to targeted therapy within a tumour and reducing drug efficacy [23,147]. Therapy-sensitive cells will die upon treatment; however, given the selective mode of therapy, a fraction of the cells can evade death and emerge as therapy-resistant cells [151].
These surviving cells generally harbour an aggressive phenotype; they may remain dormant or disseminate into the bloodstream and culminate in tumour metastasis, which severely complicates further treatment options [152,153]. Several methods, which generally rely on the cooperative work of nanosystems, have been proposed as possible solutions; however, "tumour target amplification" appears to be a superior alternative. Tumour target amplification approaches can be classified into four categories: (1) self-amplifying systems that focus on increasing the levels of existing tumour-specific antigens (quantity); (2) artificial receptors that can be added to provide new targets (quantity); (3) peptide modification, where, instead of increasing the amount of cell-surface protein receptor targets, the endogenous receptors are manually engineered to increase binding affinity and recognition by the therapeutic ligand (quality); and (4) dual-targeting systems characterized by the simultaneous targeting of two cancer-specific factors (quality and quantity) [23]. The dual-targeting strategy will be discussed in Section 5.2.
Dual-Molecular Targeting
Based on the overexpression of specific receptors on tumour cells, active targeting nanomedicines have been developed with the ability to efficiently deliver active agents to the tumour cells via receptor-mediated endocytosis. Nevertheless, the efficacy of single-ligand nanoparticulate delivery systems is still limited due to the complexity of the tumour microenvironment and tumour heterogeneity. In recent years, dual-ligand nanomedicines have attracted a lot of interest due to their versatile functions and thus have the potential to improve the efficacy of tumour-targeted delivery [6]. Dual-targeting is still a novel approach in which a delivery system is equipped with two distinct ligands to target different receptors, which may be expressed either on/in one type of cell or on different cells. This strategy aims to enhance the targeted delivery of a cytotoxic drug cargo into tumour cells [154]. Combining two targeting ligands may improve the selectivity and uptake of the nanomedicine by specific tumour cells and provide the possibility to target different cells that are involved in the development of the tumour, or cells that possess two kinds of receptors on their surface. Combinations of ligands are classified into three groups based on the types of targeted cells and the action sites (Figure 7): (1) two ligands target one kind of cell, which simultaneously overexpresses two kinds of receptors (Figure 7A); (2) two ligands target two kinds of cells (Figure 7B); and (3) cell membrane targeting is combined with intracellular organelle targeting (nuclear or mitochondrial targeting) (Figure 7C) [6]. So far, the most common dual-ligand combinations are those in which a second ligand is combined with RGD, HA or transferrin (Tf), since their receptors are overexpressed on various cancer cells and have been extensively studied [6]. There is also growing interest in using combinations with folate or biotin.
Dual-Targeting with Folic Acid
Novel achievements in the development of dual-targeted nanoparticles with folic acid are summarized in Table 5. A comparison of single-targeting (CD44 receptor) and dual-targeting (folate and CD44 receptors) micellar formulations obtained from hyaluronic acid-octadecyl (HA-C18) and loaded with paclitaxel showed that, although all kinds of micelles possessed a much longer half-life and a moderately larger AUC than the Taxol solution, the dual-targeted micelles provided a better MDR-overcoming effect and exhibited excellent tumour-targeting ability. Thus, dual targeting of CD44 and FA receptors may be an effective strategy for intracellular drug-targeted delivery, overcoming drug resistance and tumour targeting [155]. Paclitaxel was also co-loaded with DNA in hyaluronic acid (HA)- and folate (FA)-modified polyethylenimine liposomes to obtain a dual-targeting biomimetic nanovector. The dual-targeted liposomes could effectively target tumour cells, enhance transfection efficiency and subsequently achieve the co-delivery of Ptx and DNA, displaying great potential for optimal combination therapy [156]. Doxorubicin-loaded liposomes targeted with folate and transferrin were proven effective in penetrating the BBB and targeting brain glioma. In vivo studies demonstrated that the FA- and Tf-targeted liposomes could transport across the BBB and mainly accumulated in the brain glioma. The liposomes caused an increase of survival time and a decrease of tumour volume [157]. Other liposomal nanocarriers loaded with doxorubicin, bearing controlled numbers of both folic acid and a monoclonal antibody against the epidermal growth factor receptor (EGFR), were developed. Unlike single-ligand targeting, the dual-ligand liposomes reduced viability only in target cells bearing both targeted receptors while sparing off-target cells. Selectivity enhancements determined by LC50 ratios for the single- and dual-ligand formulations showed that the dual-ligand liposomes were capable of achieving a 10-fold enhancement relative to off-target cells without the folate receptor and a 4-fold enhancement relative to off-target cells without the EGFR [158]. Nanoparticles targeted with folate and trastuzumab, obtained from a redox-responsive multiblock copolymer (MB-PLA-ss-FA-Her-Dox-NPs), were also developed. In vitro tests showed high drug encapsulation (≈22%) and significantly enhanced cellular uptake. Moreover, a 91% regression of Ehrlich ascites tumours was demonstrated, with no significant heart, liver or kidney toxicity. Although doxorubicin was chosen for the study, the obtained nanosystem can also be employed to deliver other kinds of drugs used in breast cancer therapy [165].
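The 10-fold and 4-fold selectivity enhancements quoted above for the dual-ligand liposomes are expressed as LC50 ratios. A minimal formulation, assuming LC50 denotes the liposome concentration that kills 50% of a given cell population, is:

\mathrm{Selectivity} = \frac{LC_{50}(\text{off-target cells})}{LC_{50}(\text{target cells})}

so a value of 10 means that roughly ten times more of the formulation is needed to kill off-target cells lacking one of the receptors than cells expressing both the folate receptor and EGFR.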
A novel folate (FA) and TAT peptide co-modified doxorubicin-loaded liposome (FA/TAT-LP-Dox) was also developed [159]. The role of the TAT peptide was to improve the capacity for translocating cell membranes and to provide efficient intracellular delivery. Although the mechanisms by which the TAT peptide crosses the cell membrane are still unclear, many studies have suggested that the positive charges of the TAT peptide play a key role in the uptake process. Liposomes with the optimal ligand density (5% FA and 2.5% TAT) exhibited improved cytotoxicity and cellular uptake efficiency compared to the single-ligand counterparts. Moreover, the targeting moieties, FA and the TAT peptide, revealed a synergistic effect in facilitating the intracellular transport of the liposomes. The superiority of FA/TAT-LP in tumour targeting and accumulation was confirmed under in vivo conditions [159]. Liposomes modified with two kinds of ligands-folic acid and glutamic hexapeptide (Ptx-Glu6-FA-Lip)-have been obtained for the therapy of metastatic bone cancer [160]. It has been found that glutamic oligopeptides, especially the glutamic hexapeptide, have excellent bone-targeting ability, because the multiple carboxyl groups of the peptides provide an ionic interaction between the negative charges and calcium ions in the mineral component of bone (hydroxyapatite, HAP) [166]. The Ptx-Glu6-FA-Lip showed superior targeting ability in vitro and in vivo in comparison to free Ptx and to non-coated, singly modified and physically blended co-modified liposomes [160]. A novel brain drug delivery system based on Pluronic P105 polymeric micelles functionalized with glucose and folic acid for doxorubicin delivery (GF-Dox) was designed [162]. The glucose transporter GLUT1, which is particularly highly concentrated in brain microvessels, is an important nutrient transporter in the human body, because glucose almost entirely supplies the high and continuous energy requirement of the brain and can effectively penetrate the BBB via facilitative GLUT1. Thus, a glucose-targeted ligand could potentially be used to facilitate the passage of carriers across the BBB. An in vivo study in rats showed a significant increase of Dox in the brain after administration of the micelles. In addition, the bioavailability of Dox in GF-Dox micelles significantly increased (4.6-fold) in comparison to the control Dox solution. The survival time of tumour-bearing mice in the GF-Dox group (32 days) was significantly longer than that of the free Dox group (19 days) or the other control groups, due to the dual-targeting effect of GF-Dox [162]. Folic acid-pectin-eight-arm polyethylene glycol-dihydroartemisinin/hydroxycamptothecin nanoparticles (FPPDH NPs) have been developed for the targeted delivery of dihydroartemisinin (DHA). Folic acid and pectin were used as targeting ligands, taking advantage of the high expression of asialoglycoprotein receptors on the liver cell surface, which bind the galactose residues of pectin, and of the high expression of folic acid receptors. FPPDH NPs had higher cytotoxicity than free DHA (204.5-fold in the case of H22 (mouse hepatocellular carcinoma cell line) and 178.4-fold in the case of HepG2) [163]. A fluorescent Au nanocomposite with Dox, dual-targeted with folic acid and the G-quadruplex oligonucleotide AS1411 (AF-D-AuNPs), was developed for bioimaging [161]. AS1411 is a novel nucleolin-targeted DNA aptamer formed from a single-stranded, G-rich, phosphodiester, 26-mer oligonucleotide. It targets nucleolin, a multifunctional protein located primarily in the nucleolus but also found in the cytoplasm and cell membrane, which is overexpressed in many types of cancer. AS1411 entered a phase II clinical trial in metastatic renal cell carcinoma [167]. The authors observed a very strong fluorescence signal from the AF-D-AuNPs incubated with cells.
The nanocomposite may have great potential in the early detection of cancer and in the bioanalysis field [161]. Gold nanoclusters (AuNCs) functionalized with folic acid and trastuzumab (Herceptin®) as dual-targeted radiosensitizer agents have also been analysed [164]. Herceptin (HER) is widely applied in some types of cancer treatment as a chemotherapy agent but can also be used as a bioligand to increase the tumour cell internalization of nanoparticles in HER2-positive breast cancer cells [168,169]. In addition, by enhancing radiation-induced apoptosis, trastuzumab can increase the synergy between chemotherapy and radiotherapy as a chemoradiotherapy modality. The dual-targeting strategy led to an enhanced accumulation of the nanoclusters in the cancer cells via active targeting mechanisms and significantly enhanced the radiotherapy efficiency, with sensitization enhancement ratios (SER) 1.77- and 1.5-fold larger than those obtained using non-targeted AuNCs [164].
Dual-Targeting with Biotin
Recent studies on nanoparticles dual-targeted with biotin are presented in Table 6. Liposomes dual-targeted with biotin and glucose (Bio-Glu-Lip) and loaded with paclitaxel were evaluated for breast tumour-specific drug delivery, improvement of efficacy and reduction of the side effects of chemotherapy [85]. Glucose transporter 1 (GLUT1) is known to be overexpressed in various types of cancer cells due to the Warburg effect, in which an inefficient glycolytic pathway is used to generate adenosine triphosphate, making glucose a suitable targeting ligand for drug delivery. Bio-Glu-Lip was recognized by the biotin transporter SMVT and the glucose transporter GLUT1 on the cell membrane via the residues on the liposome surface and was internalized in an energy-dependent manner via several endocytic pathways, including clathrin-mediated, caveolae-mediated and micropinocytosis-mediated endocytosis. Bio-Glu-Lip had the highest cell uptake in 4T1 and MCF-7 cells when compared to the non-targeting liposome (Lip), Bio-Lip and Glu-Lip. In addition, significantly increased accumulation of the Bio- and Glu-targeted liposomes at the breast tumour sites was observed [85]. A new strategy was developed to simultaneously introduce a nuclear protein, high-mobility group box 1 (HMGB1), for nuclear transport and biotin for specific tumour cell targeting [170]. HMGB1 is the focus of recent cancer research because it plays a critical role in cancer development, progression and metastasis through the activation of cancer cells, enhancement of tumour angiogenesis and suppression of host anti-cancer immunity [172]. The in vitro study showed that the presence of HMGB1 facilitates the nuclear transport of DNA, leading to enhanced transfection efficiency. In addition, the complexes exhibited enhanced cellular uptake into HeLa cells due to the specific interactions between biotin moieties and biotin receptors on HeLa cells [170]. Chitosan nanoparticles (Bio-GC) modified with galactose and biotin for the efficient targeting of fluorouracil to hepatoma cells were also obtained. The specific binding of the galactose ligand to the asialoglycoprotein receptor (ASGPR) on hepatocyte membranes has been shown to induce liver-targeted drug transfer. The ASGPR is found on the sinusoidal surfaces of mammalian liver cells and is a glycoprotein that specifically recognizes terminal galactose or N-acetylgalactosamine residues. Each hepatocyte contains about 200,000 binding sites for ASGPR.
In addition, biotin has great potential to increase NP efficiency, because the expression of the biotin receptor is 39.6 times higher in hepatocellular carcinoma cells than in normal liver cells. The Bio-GC nanoparticles with 5-FU had a stronger in vitro and in vivo inhibitory effect on the proliferation and migration of liver cancer cells than free 5-FU [171].
Dual-Targeting with a Combination of Folic Acid and Biotin
There is also the concept of using a combination of folic acid and biotin as targeting ligands [173]. One of the first papers evaluating the effectiveness of folate-, vitamin B12- or biotin-functionalized polymeric materials as active drug-targeting agents for tumour cells was published in 2004 by Russell-Jones et al. It was reported that cells overexpressing the receptors for folate or vitamin B12 also overexpress receptors for biotin [119]. Examples of cell lines with overexpression of either the biotin receptor, the folate receptor or both are presented in Table 7, and cells negative for biotin and/or folate receptor expression are listed in Table 8. A detailed analysis of the interaction of cells with surfaces modified with folic acid and biotin was carried out by Subedi et al. Mixed monolayers were prepared with a small amount (1%) of folic acid or biotin on a long poly(ethylene glycol) linker of Mw = 3400 Da, and the remaining 99% was covered by a short oligo(ethylene glycol) (Mw ≈ 550-750 Da). This approach allowed the targeting effect to be assessed independently of the endocytosis that typically accompanies similar analyses of targeted nanoparticles. Significant targeting, with greater attachment of human cervical cancer cells (HeLa) and human breast adenocarcinoma MCF7 cells, was observed in comparison to non-tumourigenic breast epithelial cells (MCF10A). Thus, the results confirmed that the targeting moieties can be used in drug delivery systems to target cancerous cells while sparing the surrounding noncancerous tissue. In addition, a drastically different behaviour was observed in competition with the free vitamins: the addition of free folic acid caused inhibition of cell attachment, but free biotin induced enhancement of cell attachment (an anti-inhibition effect). The competition effect was also observed at low temperature (4 °C), which suggests that the proteins responsible for this effect are not recruited from the cytosol during cell attachment [175]. An interesting aspect of the development of dual-targeted drug delivery remains the optimal ligand density on the carrier surface for maximal internalization. To increase nanoparticle functionality, it is necessary to clarify the effect of multiple ligands with different densities and ratios on the cell internalization efficiency. An attempt to solve this problem was undertaken by Liu et al. [28] with the use of fibre rods with hierarchical targeting capabilities. Fibre rods obtained from poly(ethylene glycol)-poly(DL-lactide) or styrene-maleic anhydride copolymer were conjugated with folate/biotin ligands via PEG linkers and with poly(sulfobetaine methacrylate) (PSBMA) ligands via acid-labile linkers.
The zwitterionic polymer, containing both anionic and cationic charges in the same unit, has shown strong resistance to protein adsorption and low immunogenicity, so conjugation of PSBMA onto the fibre rods aimed to prolong blood circulation by preventing protein opsonization and by shielding the targeting ligands. Acid-responsive removal of the zwitterionic ligands exposes the targeting ligands (folic acid and biotin) and activates tumour cell internalization. In this study, folate densities of 0.09-0.66 µmol/g and biotin densities of 0.40-1.7 µmol/g on the rods' surface were analysed. It was determined that the cellular uptake of FA-grafted fibre rods (FA-R) was time-dependent and increased with incubation time. The highest uptake by 4T1 cells was observed for FA-R with a folate density of 0.48 µmol/g (up to 3.9-fold higher than the other FA-R). The cellular uptake of biotin-grafted fibre rods (BIO-R) increased with biotin density up to 0.89 µmol/g but was significantly reduced beyond this value. For the fibre rods grafted with both ligands, folate and biotin, tumour cell uptake was maximized at a folate density of 0.36 µmol/g and a biotin density of 0.67 µmol/g. This suggested that dual ligands at a critical density and an optimal ratio were essential for tumour cell internalization. The uptake of non-targeted rods was mediated via macropinocytosis, while the internalization of rods targeted with FA and BIO proceeded via clathrin-mediated endocytosis. It was also shown that treatment with Dox-loaded rods grafted with FA, BIO and PSBMA enhanced tumour growth inhibition, prolonged animal survival and caused fewer lung metastases in comparison to free Dox, non-targeted rods and rods grafted with FA and BIO [28].
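The search for such an optimum is essentially a small grid search over ligand densities. The sketch below illustrates the idea with a hypothetical uptake model, a product of two bell-shaped responses centred near the densities reported above; the functional form and widths are assumptions for illustration only, not the model used in [28].

```python
import numpy as np

# Hypothetical uptake response: a product of two Gaussian-shaped terms,
# each peaking near the experimentally reported optimal density.
# Functional form and widths are illustrative assumptions only.
def uptake(fa, bio, fa_opt=0.36, bio_opt=0.67, fa_w=0.15, bio_w=0.30):
    return (np.exp(-((fa - fa_opt) / fa_w) ** 2)
            * np.exp(-((bio - bio_opt) / bio_w) ** 2))

# Grid search over the density ranges studied (in umol/g).
fa_grid = np.linspace(0.09, 0.66, 50)
bio_grid = np.linspace(0.40, 1.70, 50)
FA, BIO = np.meshgrid(fa_grid, bio_grid, indexing="ij")
U = uptake(FA, BIO)

i, j = np.unravel_index(np.argmax(U), U.shape)
print(f"Max modelled uptake at FA = {fa_grid[i]:.2f} umol/g, "
      f"BIO = {bio_grid[j]:.2f} umol/g")
```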
NPs Targeted with Folic Acid and Biotin

Table 9 presents examples of nanoparticles dual-targeted with folic acid and biotin. Multifunctional pH-responsive silica-coated nanoparticles with both fluorescent and magnetic properties, triply conjugated with biotin, folic acid and doxorubicin (Fe3O4@SiO2(FITC)-BTN/folic acid/Dox), were synthesized to increase tumour drug accumulation through active targeting and endosomal drug release properties. Dox was conjugated via pH-labile Schiff-base formation; this acid-sensitive linkage is stable at pH 7.4 (mimicking the physiological pH of the blood circulation) and breaks at pH 5.0 (mimicking the endosomal environment), leading to significant drug release. The developed nanoparticles improved the accumulation of the anticancer drug at the target site owing to their dual-targeting properties and the ability to control drug release after cell internalization [197].

Dopamine-capped iron oxide nanoparticles containing two surface-grafted biologically relevant ligands, folic acid (FA) and biotin (BIO) (FA-Fe3O4-BIO), have also been developed. NPs conjugated with both FA and biotin may interact with multiple receptors overexpressed on the surface of a diseased cell and offer enhanced cellular uptake via receptor-mediated endocytosis. The FA-Fe3O4-BIO particles were delivered into E-G7 and human HeLa cancer cell lines, and their cellular uptake was tested by immunofluorescence and flow cytometry analysis. Cell internalization of FA-Fe3O4-BIO was time-dependent, with the highest uptake found after 24 h. Nanoparticles possessing a single ligand on the surface, either FA or BIO, showed several-fold lower uptake in the tested cell lines in comparison to the dual-conjugated FA-Fe3O4-BIO nanovectors. Importantly, the surface-functionalized magnetic nanoparticles did not exhibit cytotoxicity, as demonstrated by high cell viability (>95%) [198].

Biodegradable dual-targeting micelles combining poly(L-lactide)-co-poly(ethylene glycol)-folic acid (PLA-PEG-FA) and poly(L-lactide)-co-poly(ethylene glycol)-biotin (PLA-PEG-BIO) were developed for the delivery of paclitaxel. The micelles showed high paclitaxel loading and a low CMC value (0.001 mg/mL), which ensures their stability. The paclitaxel-loaded micelles exhibited a double morphology: filomicelles over 100 nm in length and 20-30 nm in diameter, and spherical micelles of ≈20 nm diameter. The in vitro cytotoxicity of paclitaxel-loaded PLA-PEG-FA + PLA-PEG-BIO micelles against ovarian cancer cells (OVCAR3), which overexpress both folate and biotin receptors, was confirmed [199].

Triple-targeted nanomicelles (162.7 ± 5 nm) for breast cancer therapy were developed from oligomeric hyaluronic acid (oHA), a macromolecular polysaccharide with good tumour targeting, biodegradability, non-immunogenicity and nontoxicity (Figure 8). Moreover, hyaluronic acid and its derivatives can also bind to specific receptors that are highly expressed on the surface of cancer cells, for instance CD44. CD44 is a specific marker of breast cancer stem cells (BCSCs), which can self-renew and proliferate without limit, a main driver of tumourigenesis, metastasis and relapse. Additional targeting properties were obtained by using two other targeting moieties, biotin and folic acid. The micelles were developed for the delivery of two active agents, icariin (Ica) and curcumin (Cur) [200]. Ica is one of the main active ingredients of Epimedium, a traditional Chinese herbal medicine, and it exerts an antitumour effect by inhibiting the proliferation and differentiation of tumour cells, promoting tumour suppressor gene expression and inducing tumour cell cycle arrest [201]. Curcumin is another agent of natural origin that has potential in preventing and treating various cancers by inhibiting tumour-associated gene expression and angiogenesis [200]. The pH-sensitivity of the nanomicelles was obtained by introduction of a hydrazone (Hyd) bond. Indeed, the in vitro study revealed higher release of Ica and Cur from the Bio-oHA-Hyd-FA micelles in an acidic environment. The Ica- and Cur-loaded Bio-oHA-Hyd-FA micelles presented higher cytotoxicity to cancer cells compared to the control groups (free Cur, free Ica, free Ica + Cur, Cur-loaded micelles and Ica-loaded micelles). The inhibitory effect on tumours was confirmed in vivo [200].
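Acid-labile linkers such as the hydrazone bond used in these micelles (or the Schiff base in [197]) are often summarized with simple first-order cleavage kinetics. The rate constants in the sketch below are hypothetical, chosen only to mimic the reported qualitative behaviour, near-stability at pH 7.4 and rapid release at pH 5.0; they are not fitted to the data of [197] or [200].

```python
import numpy as np

# First-order release through an acid-labile linker: f(t) = 1 - exp(-k*t).
# Hypothetical rate constants (per hour) for the two pH environments.
k_per_hour = {7.4: 0.01, 5.0: 0.35}

hours = np.arange(0, 25, 4)
for ph, k in k_per_hour.items():
    released = 1.0 - np.exp(-k * hours)
    profile = ", ".join(f"{100 * f:.0f}%" for f in released)
    print(f"pH {ph}: cumulative release at t = {hours.tolist()} h -> {profile}")
```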
Self-assembled folate-biotin-pullulan (FBP) nanoparticles (NPs) have been developed for the delivery of doxorubicin. Pullulan is a linear polysaccharide consisting of consecutive maltotriose units connected by α-1,6-glycosidic bonds. Owing to its outstanding biological properties, such as biocompatibility, biodegradability, low immunogenicity, nontoxicity and water solubility, pullulan has been widely used in the pharmaceutical industry. In this case, however, the biotin was used not as a targeting moiety but rather as a hydrophobic moiety to drive the self-assembly of the NPs: when biotin is conjugated with other biomolecules via ester or amide linkages, its water solubility dramatically decreases because of the loss of the hydrophilic carboxyl group [201].

Summary

Dual-targeted nanocarriers have been developed as one solution that may overcome tumour heterogeneity, which occurs between patients, between the primary tumour and metastases, and within the tumour itself. As presented in Table 5, there is an increasing number of dual-targeted nanocarriers combining FA with HA, Tf, glucose, etc., as well as NPs targeted with biotin together with glucose, galactose or HMGB1 (Table 6). A novel approach is the combination of these two ligands on the same nanocarrier, and there is an increasing number of NPs decorated with both folic acid and biotin (Table 9). Overexpression of both kinds of receptors, FAR and SMVT, has been identified in many types of cancer cells (Table 7).

Single-Targeted versus Dual-Targeted Nanoparticles

The progress in the development of nanocarriers for anticancer drug delivery is presented in Figure 9. The concept that nanocarriers can enhance the in vivo stability of anticancer compounds by protecting them from biodegradation or excretion, reducing their toxicity and raising the maximum tolerated dose by changing their systemic distribution, thereby improving the efficacy of anticancer compounds, is well known and has also been demonstrated clinically. The first generation of nanoparticles, which are passively distributed to the tumour tissue by the EPR effect, is advantageous compared to conventional dosage forms. However, more and more studies have revealed drawbacks of enhanced permeability and retention (EPR), the most often proposed mechanism for nanomedicine delivery, mainly caused by tumour heterogeneity [21]. The current effort focuses on better understanding the tumour microenvironment and heterogeneity, because this knowledge is necessary to establish systems and strategies with enhanced targetability. Currently developed nanoparticles focus mainly on active targeting based on ligand-receptor recognition, which may show better efficacy than passive targeting in human cancer therapy. Moreover, further progress may be obtained using dual-targeted nanoparticles, a concept supported by solid experimental data. This review clearly shows that dual-targeted nanocarriers are more effective than the free drug, unmodified NPs and nanoparticles decorated with one kind of ligand. These NPs are superior in several aspects, including cellular uptake, toxicity against tumour cells and decreased toxicity against normal cells. However, dual-targeted nanoparticles are usually compared to nanocarriers of the same type that possess only one targeting ligand on the surface instead of two. It should be emphasized that a great deal of effort is also being undertaken to improve the targetability and specificity of single-targeted NPs. This direction involves using smart materials to augment the functionality of NPs, e.g., pH- and redox-sensitivity. Additionally, there is a dynamically growing group of novel single-targeted nanocarriers developed for chemotherapy combined with photo-, thermo- or radiotherapy, or for theranostic applications. These "next generation" (advanced) single-targeted NPs and dual-targeted nanoparticles show superior efficiency in anticancer drug delivery compared with conventional therapy and basic single-targeted NPs.

Conclusions and Future Perspectives

Based on the studies conducted to date on drug delivery systems for anticancer drugs, it has been recommended that future nanomedicine focus mainly on active targeting based on ligand-receptor recognition, because it is expected to show better efficacy than passive targeting in human cancer therapy. Folic acid and biotin are low-molecular-weight vitamins that play essential roles in cell survival and bind to their respective receptors with high affinity. They are commonly used as targeting ligands for cancer chemotherapy because of their selectivity: the specific receptors are overexpressed in a number of human cancer cells but minimally distributed in normal tissues. Other advantages of these molecules include their availability, lack of toxicity, non-immunogenicity and ease of modification. However, nanoparticle design plays a significant role in tumour targeting, because the intra- and intertumoural variability present in tumour cells and the tumour microenvironment results in heterogeneity of the molecular, pathologic and clinical features of each tumour type. The recent achievements in targeted nanoparticle drug delivery indicate great progress in this field. Nanoparticles of various morphologies have been developed: liposomes, micelles, nanospheres, dendrimers, carbon nanotubes, nanorods, core-shell quantum dots and mesoporous nanoparticles. There are also different strategies to achieve a targeting effect. On the one hand, nanoparticles bearing more than one targeting ligand are being developed, and these systems present increased efficiency confirmed by in vitro and in vivo studies. In this group, nanoparticles decorated with both targeting moieties, folic acid and biotin, are still a novel approach, although one gaining increasing interest. In addition, studies on single-targeted NPs are also continuing.
The advanced systems use smart materials to achieve additional properties, e.g., pH- or redox-sensitivity. There is also an increasing number of systems combining cancer treatment and diagnosis. These advanced single-targeted NPs with additional functionality, similarly to dual-targeted nanocarriers, present superior properties over conventional drugs, non-targeted systems and basic single-targeted carriers. Taking into account the promising results of both advanced single-targeted NPs and dual-targeted nanoparticles for anticancer drug delivery, further progress is expected in the near future. However, apart from gaining new knowledge about advanced targeted nanocarriers, additional efforts should be focused on their successful transition into the clinic. Despite the significant therapeutic advantages of nanoparticles, their clinical translation is still limited and does not progress as rapidly as expected, considering the positive preclinical results. From this point of view, several experimental challenges need to be addressed. First of all, it is necessary to better understand the in vivo fate of nanoparticles and their interactions with tumour tissue, cells and blood. Recent achievements concern aspects related to the design and preparation of delivery systems, e.g., the optimal ligand density on the carrier surface for maximal cell internalization. This direction of research will continue, because inter- and intratumoural differences mean that even the same kind of NPs may differ in cellular uptake. Moreover, differences between nanocarriers may affect their biodistribution and cellular uptake, which may require an individual approach. The design and development of targeted NPs should also consider their safety, biocompatibility, stability upon storage and after in vivo administration, reproducibility and the possibility of large-scale manufacturing. Last but not least, the promising in vitro results should be confirmed in vivo using appropriate animal models.
Enhanced single photon emission from carbon nanotube dopant states coupled to silicon microcavities

Single-walled carbon nanotubes are a promising material as quantum light sources at room temperature and as nanoscale light sources for integrated photonic circuits on silicon. Here we show that integration of dopant states in carbon nanotubes and silicon microcavities can provide bright and high-purity single photon emitters on a silicon photonics platform at room temperature. We perform photoluminescence spectroscopy and observe enhancement of emission from the dopant states by a factor of ~100, and cavity-enhanced radiative decay is confirmed using time-resolved measurements, where a ~30% decrease of emission lifetime is observed. Statistics of photons emitted from the cavity-coupled dopant states are investigated by photon correlation measurements, and high-purity single photon generation is observed. Excitation power dependence of photon emission statistics shows that the degree of photon antibunching can be kept low even when the excitation power increases, while the single photon emission rate can be increased up to ~1.7 × 10⁷ Hz.

Single photon emitters are a fundamental element for quantum information technologies [1], and a wide range of materials has been explored to obtain ideal single photon emitting devices [2]. In particular, semiconducting carbon nanotubes (CNTs) are regarded as a promising material for such an application because they are a nanoscale light emitting material [3] with stable excitonic states which arise from the one-dimensional structure of CNTs [4,5]. Under cryogenic temperatures, excitons in CNTs are localized and behave as quantum-dot-like states [6], exhibiting a quantum light signature [7,8]. At room temperature, single photon generation using CNTs has already been accomplished by two approaches [9-13]. The first is where exciton trapping sites are created to localize excitons [10,11], and the second is where an efficient exciton-exciton annihilation process is used to reduce the number of mobile excitons to unity [12,13]. The approach using exciton trapping sites allows for high-purity single photon generation, use of chirality-sorted CNTs, and direct deposition on various types of substrates. Furthermore, trapping sites protect excitons from quenching sites in CNTs, and optically allowed defect states appear below the dark states of E11 excitons, resulting in significant brightening of photoluminescence [14,15]. Recently, aryl sp³ defects have received considerable attention because of the wide range of selectability of CNT chiralities, dopant species, and reaction conditions, which allows for tunable emission wavelength and decay lifetime [16]. Using this method, single photon generation with a purity of 99% and an emission wavelength of 1550 nm has been achieved at room temperature [11].

For practical single photon sources, not only single photon purity and operating temperature, but also emission wavelength, linewidth, brightness, and photon extraction efficiency are important. From this aspect, cavity structures are widely used to improve the performance of single photon emitters [17-19]. As for CNT single photon emitters, photonic [20,21] and plasmonic [22] cavity configurations have been used to enhance the brightness of single photon emission at low temperature.
Further development is expected by integrating single photon emitters into silicon photonics because it can lead to on-chip integrated quantum devices [23], and CNTs have potential for such an application due to their emission wavelengths having low transmission losses in silicon. Microcavities on silicon substrates have been used to enhance photoluminescence (PL) [24,25] and Raman [26] signals, and efficient coupling even to a single carbon nanotube has also been achieved [27-29], demonstrating that CNTs are suitable for integration with silicon photonics.

Here we report on the integration of CNT dopant state emitters with silicon microcavities. Emission from aryl sp³ defect states in CNTs coupled to two-dimensional photonic crystal microcavities is characterized by PL microscopy, and significant enhancement of PL intensity is observed. Time-resolved PL measurements on the same device show direct evidence of enhanced emission decay rates by the Purcell effect, and we confirm single photon emission from the device by performing photon correlation measurements. The zero-delay second-order autocorrelation g^(2)(0) is as low as 0.1, showing high-purity single photon generation, and the value is stable even at high-power excitation, which allows for single photon emission rates as high as ~1.7 × 10⁷ Hz.

We start sample preparation by fabrication of photonic crystal microcavities on a silicon-on-insulator substrate [Fig. 1(a)]. Electron beam lithography defines the photonic crystal pattern with shift-L3 cavities [30], and the 200-nm-thick top Si layer is etched through by dry etching. The buried SiO2 layer with a thickness of 1000 nm is then etched by 20 wt% hydrofluoric acid, and thermal oxidation is performed at 900 °C for an hour to form a 10-nm-thick SiO2 layer on the top Si layer. A scanning electron micrograph of a typical device is shown in Fig. 1(b). Doped carbon nanotubes are prepared from chirality-enriched (6,5) CNTs encapsulated in a sodium deoxycholate (DOC) surfactant. Aryl functionalization is done using a diazonium dopant (4-methoxybenzenediazonium, MeO-Dz), where the details are described in the literature [31]. We dilute the doped CNT solution with water to avoid bundling or piling up of CNTs on a substrate, and finally the solution is drop-cast onto the devices using a glass micropipette.

PL measurements are performed with a home-built sample-scanning confocal microscopy system [13]. We use a Ti:sapphire laser whose output can be switched between continuous-wave (CW) and ~100-fs pulses with a repetition rate of 76 MHz. We use an excitation wavelength of 855 nm, which matches the phonon side-band absorption of (6,5) CNTs [32]. The excitation laser beam with a power P is focused onto the sample by an objective lens with a numerical aperture of 0.85. PL and the reflected beam are collected by the same objective lens and separated by a dichroic filter. A Si photodiode detects the reflected beam for imaging, while a translating mirror is used to switch between PL spectroscopy and time-resolved PL measurements. PL spectra are measured with an InGaAs photodiode array attached to a spectrometer. For time-resolved measurements, E11 emission at around 1000 nm is filtered out by a wavelength-tunable band-pass filter with a transmission window of 10 nm or a long-pass filter with a cut-on wavelength of 1100 nm.
A fiber-coupled two-channel superconducting single photon detector (SSPD) connected to a 50:50 signal-splitting fiber is used to perform PL decay and photon correlation measurements. All measurements are conducted at room temperature in a nitrogen-purged environment.

We perform automated collection of PL spectra [33] at all cavity positions to find devices with good optical coupling. For a device where a significant enhancement of the dopant state (E11*) emission [16] is observed, reflectivity and PL images are taken [Figs. 1(c) and (d)]. The enhanced PL is localized at the center of the cavity, as expected from PL enhancement due to coupling with the resonance modes of the cavity. In Figs. 1(e) and (f), PL spectra on and off the cavity are shown, where the off-cavity signal is taken on the photonic crystal pattern. The PL spectrum on the cavity shows multiple modes coupled to the dopant state emitters, while a broad emission peak from the dopant state is observed at the off-cavity position. In the on-cavity spectrum, the peak showing the highest intensity at an emission wavelength of 1187 nm has a full-width at half-maximum of 3.9 nm, corresponding to a quality factor Q = 300. We assign the highest intensity peak to the 2nd mode of the L3 cavity [34]. By comparing the peak heights of the on-cavity and off-cavity PL spectra, we obtain a PL enhancement factor of ~100.

The PL enhancement can become large as there are other cavity-induced effects in addition to the Purcell effect. In our devices, it is known that localized guided modes can increase the excitation by more than a factor of 50 [34], and coupling to such an absorption resonance can explain the strong excitation polarization dependence (inset of Fig. 1(d)). Furthermore, the directionality of the cavity radiation can improve the PL collection efficiency by as much as a factor of 4 [35]. Combined with the Purcell effect, these cavity effects can significantly brighten the nanotube emitters, and thus the obtained enhancement factor of ~100 would be a reasonable result.

In order to investigate the Purcell enhancement of the radiative decay rate, we perform time-resolved PL measurements on the same device shown in Figs. 1(c-f). For on-cavity PL, a single peak is spectrally filtered by tuning the transmission wavelength of the band-pass filter [Fig. 2(a)], while the long-pass filter is used instead for off-cavity PL. In Fig. 2(b), PL decay curves taken at the on-cavity and off-cavity positions are shown, and fits are performed using a mono-exponential decay function convoluted with a Gaussian profile representing the instrument response function (IRF) of the system. Although PL decay of doped CNTs typically exhibits a bi-exponential curve [11,16], here we use a mono-exponential decay for simplicity. From the fits, we obtain the on-cavity PL lifetime τ_on = 122.0 ± 0.2 ps and the off-cavity PL lifetime τ_off = 173.8 ± 0.4 ps. If we assume a radiative quantum efficiency η of 2.4%, which is estimated from the unaffected quantum efficiency of ~11% for MeO-Dz doped (6,5) CNTs in water [14] and PL quenching by a factor of ~4.5 caused by the interaction with the SiO2 substrate [11], the ~30% reduction of the emission lifetime corresponds to a Purcell factor F_p = (τ_off/τ_on − 1)/η = 18, a coupling factor β = F_p/(1 + F_p) = 0.95, and an enhanced radiative quantum efficiency of 31%.
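For concreteness, these figures of merit follow from simple arithmetic on the measured quantities; the short sketch below reproduces them. The only added assumption is that the Purcell effect enhances the radiative channel alone, so the enhanced quantum efficiency is (1 + F_p)η/(1 + F_p η).

```python
# Cavity figures of merit from the quantities quoted in the text.
wavelength_nm, fwhm_nm = 1187.0, 3.9
Q = wavelength_nm / fwhm_nm                         # ~300

tau_on_ps, tau_off_ps = 122.0, 173.8                # on/off-cavity PL lifetimes
eta = 0.024                                         # assumed radiative quantum efficiency

Fp = (tau_off_ps / tau_on_ps - 1.0) / eta           # Purcell factor, ~18 after rounding
beta = Fp / (1.0 + Fp)                              # coupling factor, ~0.95
eta_enhanced = (1.0 + Fp) * eta / (1.0 + Fp * eta)  # ~31% (radiative channel enhanced only)

print(f"Q = {Q:.0f}, Fp = {Fp:.1f}, beta = {beta:.2f}, "
      f"enhanced efficiency = {eta_enhanced:.0%}")
```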
We perform such lifetime measurements on ten other devices and obtain an average lifetime of 131.2 ± 43.8 ps. The lifetime varies between devices, suggesting that coupling is affected by some uncontrolled factors. Coupling efficiency is in general affected by spectral overlap, spatial overlap, and polarization overlap between the emitters and cavity modes. Moreover, CNT density fluctuations may have a significant effect in our samples. We note that variations in the cavity quality factors are not the main reason, because the Q of the dopant state emission is much lower than that of the cavity peak.

Next we measure photon correlation for the same cavity peak shown in Figs. 2(a) and (b), and clear photon antibunching is observed as shown in Fig. 2(c). We evaluate the normalized second-order autocorrelation at zero time delay g^(2)(0) from the autocorrelation histogram by subtracting the dark counts and binning each peak with a binning width of 2 ns. We note that the histogram is taken within a time window from −60 to 300 ns, and side peaks after τ = 60 ns are used for normalization to avoid underestimation due to photon bunching. For the data shown in Fig. 2(c), we obtain g^(2)(0) = 0.136 ± 0.005, indicating high-purity single photon emission. It is surprising that we obtain such high-purity single photon emission from a sample with drop-cast CNTs, where numerous emitters are expected within the laser spot. One explanation is that cavity coupling and spectral filtering allow selective photon collection from a small number of emitters. In fact, we observe higher g^(2)(0) when the band-pass filter is removed [Supporting Information S1]. It is worth mentioning that we could not measure photon correlation of the off-cavity signal as the emission intensity is too low, indicating the advantages of cavity coupling.

During the photon correlation measurements, time traces of the photon detection rate are also recorded [Fig. 2(d)]. The total photon detection rate Γ, defined as the sum of the detection rates at the two channels, is obtained from the autocorrelation count rate C using the relation C = r(1 − r)Γ²T, i.e., Γ = √(C/[r(1 − r)T]), where r is the signal splitting ratio between the two channels, and T = 353.7 ns is the effective time window for the 27 peaks which are included in the correlation histograms [Supporting Information S2]. In Fig. 2(d), the PL intensity shows a relatively large fluctuation over time, whose standard deviation is ~23 times larger than that of shot-noise-limited fluctuation. We observe such intensity fluctuation of the cavity-coupled peak for all devices we have measured, which may be caused by the influence of the substrate [20].

Finally, we investigate the excitation power dependence of photon emission statistics on three other devices. The PL spectra on the cavities with and without the band-pass filter are shown in Figs. 3(a-c). For these spectrally filtered peaks, we measure g^(2)(0) and Γ [Figs. 3(d-f)] while increasing P until Γ shows a rapid drop, which indicates deterioration of the devices. As P increases, Γ increases linearly while g^(2)(0) remains almost constant, except for the high power region in Fig. 3(e), where Γ saturates and g^(2)(0) slightly increases. In all devices, g^(2)(0) remains lower than 0.5 throughout the range of P, indicating the robustness of the quantum light signature.

[FIG. 3. (a-c) PL spectra of three different devices taken on the cavities before (black) and after (red) the band-pass filter is set. The filter is tuned to the highest intensity peaks, where we assign the modes at (a) 1133 nm to the 2nd mode, (b) 1135 nm to the 2nd mode, and (c) 1135 nm to the 5th mode. Insets show the laser polarization dependence of the PL intensity for each peak. A CW laser with P = 1 µW is used for excitation. (d-f) Excitation power dependence of g^(2)(0) (red circles) and Γ (blue squares) on the cavities measured in (a-c). A pulsed laser is used for excitation. Error bars are the standard deviation of Γ obtained by analyzing the time-trace data for each data point. For g^(2)(0), error bars are not shown as they are smaller than the symbols in almost all of the data points. For (a) and (d), an X-polarized laser is used, while a Y-polarized laser is used for (b, c) and (e, f).]
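The g^(2)(0) evaluation described above (dark-count subtraction, peak binning, side-peak normalization at τ > 60 ns) can be sketched as follows on a synthetic histogram. All count levels below are made up for illustration.

```python
import numpy as np

rep_period_ns = 1e9 / 76e6                       # ~13.16 ns pulse separation
peak_delays = np.arange(-4, 23) * rep_period_ns  # 27 peaks within -60..300 ns

# Synthetic peak-integrated counts: Poissonian side peaks plus a strongly
# antibunched tau = 0 peak (illustrative numbers only).
rng = np.random.default_rng(0)
side_level = 1000.0
peak_counts = rng.poisson(side_level, peak_delays.size).astype(float)
zero_idx = int(np.argmin(np.abs(peak_delays)))
peak_counts[zero_idx] = rng.poisson(0.12 * side_level)

dark_counts = 5.0                                # dark counts per peak window
peak_counts -= dark_counts

# Normalize by side peaks at tau > 60 ns to avoid bias from bunching.
norm = peak_counts[peak_delays > 60.0].mean()
print(f"g2(0) = {peak_counts[zero_idx] / norm:.3f}")
```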
This behavior parallels the previous report [11], where defect states of CNTs on a polymer film can also show excellent g^(2)(0) values even at relatively high pump powers. At P = 4 µW in Fig. 3(f), we obtain the highest Γ of 4.5 × 10⁵ counts/s. We note that clear bunching is observed when P is high [Supporting Information S3], which may be caused by an increase of background signals from CNTs which are not coupled to the cavity mode.

In the four devices for which we have measured the photon statistics, we find a positive correlation between single-photon purity and the degree of polarization ρ, which is defined by ρ = (I_max − I_min)/(I_max + I_min), where I_max and I_min are the highest and lowest PL intensities, respectively, obtained by fitting the excitation polarization dependence to a sine function [insets of Fig. 1(d) and Figs. 3(a-c)]. For the two devices whose g^(2)(0) are shown in Fig. 2(c) and Fig. 3(d), we obtain g^(2)(0) ~ 0.1 and ρ ~ 0.8, while g^(2)(0) ~ 0.35 and ρ ~ 0.6 are obtained for the other two devices. This correlation is reasonable, because low ρ implies that the CNT axis and the localized guided mode polarization do not match or that multiple CNTs with different orientations are coupling to the same mode of the cavity. This observation suggests that controlling the CNT density and orientation on the cavities is a key factor for obtaining high quality single photon emitting devices.

The obtained values of Γ can be converted to actual photon emission rates at the devices using the total photon collection efficiency of our optical system, which is estimated to be ~2.6% [Supporting Information S4]. For the highest photon detection rate Γ = 4.5 × 10⁵ counts/s in our measurements, we obtain a corresponding photon emission rate of ~1.7 × 10⁷ photons/s. Considering the laser pulse repetition rate of 7.6 × 10⁷ Hz, the photon emission rate corresponds to a single photon emission efficiency of ~22%, which is consistent with the quantum efficiency estimated from the lifetime shortening observed in the time-resolved measurements. Compared to the previously reported value for similar aryl-functionalized CNTs wrapped by PFO-bpy and deposited onto a Au-coated substrate with a separation layer of 160-nm-thick polystyrene [11], the single photon emission efficiency is almost two times higher.
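The conversion from detected to emitted rates is a one-line calculation using the numbers quoted above; the sketch below reproduces the ~1.7 × 10⁷ photons/s emission rate and the ~22% per-pulse efficiency (small differences come from rounding).

```python
# Converting the detected photon rate into an on-chip emission rate and a
# per-pulse emission efficiency, using the values quoted in the text.
detected_rate_hz = 4.5e5        # highest photon detection rate
collection_efficiency = 0.026   # estimated total collection efficiency
rep_rate_hz = 7.6e7             # pulse repetition rate

emission_rate_hz = detected_rate_hz / collection_efficiency   # ~1.7e7 photons/s
efficiency_per_pulse = emission_rate_hz / rep_rate_hz         # ~22%

print(f"Emission rate ~ {emission_rate_hz:.2e} photons/s, "
      f"per-pulse efficiency ~ {efficiency_per_pulse:.1%}")
```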
For further improvement of our devices, optimization of the CNT concentration is a key factor, as mentioned above. Lowering the CNT density down to the individual-CNT level would produce an ideal situation for cavity coupling, but such a low density of CNTs results in an extremely low yield of cavity-coupled devices. Once appropriate conditions for CNT deposition are determined, spin-coating can be used to obtain more uniform and reproducible deposition of CNTs on cavities [24], which enables fabrication of integrated quantum light emitters on silicon chips. As another approach, position-controlled limited-area deposition using a micropipette or nano-droplet [36,37] may yield better results because it does not degrade the cavity quality, although such small-volume CNT deposition only at the cavity positions is challenging in practice. Improvement of the cavity-coupling efficiency by avoiding quenching from substrates may be possible by using a thinner and more efficient separation layer, such as hexagonal boron nitride thin films [38]. In addition, the relationship between coupling efficiency and cavity modes is worth investigating, as the quality factor and mode profile differ depending on the mode order [39,40]. Larger mode volumes are beneficial for obtaining coupling to CNTs on the substrate but result in a lower Purcell effect at the same time.

Finally, we comment on the tunability of our devices. The emission wavelength of aryl-functionalized CNTs can be tuned by selecting chiralities and dopant species [11,16], and photonic crystal microcavities have high flexibility for both absorption and emission resonances [34]. Our approach should therefore lead to bright single photon emitters at 1550 nm. In principle, it should also be possible to obtain indistinguishable single photon sources at room temperature by using higher quality cavities.

In summary, we demonstrate the integration of carbon nanotube dopant state emitters with silicon microcavities, and the PL characteristics and photon statistics of the devices are investigated. PL intensity enhancement by a factor of ~100 is observed from the dopant state emission coupled to the cavity mode, and time-resolved measurements reveal a ~30% lifetime shortening by the Purcell effect on the cavity-coupled emission. Photon correlation measurements are performed on the devices, and we confirm that room-temperature single photon emission capability, a key feature of sp³-doped CNTs, is preserved in the cavity-enhanced PL emission. We obtain g^(2)(0) as low as 0.1 and find that the degree of photon antibunching is stable over a wide range of excitation power. By increasing the excitation power, we obtain a single photon detection rate as high as 4.5 × 10⁵ Hz, which corresponds to a single photon emission rate of ~1.7 × 10⁷ Hz and a single photon emission efficiency of ~22% per laser pulse. Our results indicate that the integration of dopant state emitters in CNTs with silicon microcavities can provide bright and high-purity quantum light sources at room temperature on a silicon photonics platform, raising expectations toward integrated quantum photonic devices.

SUPPORTING INFORMATION

See the supporting information for autocorrelation histograms taken with and without the band-pass filter, the derivation of Γ, the excitation power dependence of the autocorrelation histograms, and the estimation of the photon collection efficiency of the system.
Safety and efficacy of first-line cryoablation for para-Hisian ventricular arrhythmias using a cryomapping protocol approach: A case series

Abstract: First-line cryoablation for para-Hisian VAs using a strict cryomapping protocol is useful and safe, even if the His bundle potential is recorded on the ablation catheter.

| INTRODUCTION

Four patients with para-Hisian ventricular arrhythmias (VAs) underwent successful first-line cryoablation without atrioventricular conduction disturbance using a strict cryomapping approach. Even if the His bundle potential is recorded on the ablation catheter, cautious first-line cryoablation for para-Hisian VAs using a cryomapping protocol can be performed with safety and efficacy.

Ventricular arrhythmias (VAs) arising from the para-Hisian region sometimes occur. Previous reports have demonstrated that para-Hisian VAs account for approximately 3% of all idiopathic ventricular tachycardias. 1,2 However, catheter ablation for VAs arising from the para-Hisian region has been reported to be challenging due to the risk of atrioventricular (AV) conduction disturbance. 3,4 A cryoablation system has recently been developed as an alternative approach to treat arrhythmia, and it is considered a feasible approach to avoid the risk of injury to conduction structures such as the His bundle. 5-9 Nevertheless, there is no guarantee that AV conduction disturbances will not occur even while using cryoenergy. A previous study reported a permanent AV block that occurred during cryoablation for para-Hisian VAs. 8 We herein report the safety and efficacy of cryoablation for the treatment of VAs originating from the para-Hisian region using a strict cryomapping protocol.

| Cryomapping and ablation protocol

A quadripolar electrode catheter was placed in the right ventricle or the His bundle region for recording an intracardiac electrogram and for pacing stimulation. First, the location of the His bundle and the earliest activation site of the VAs were identified using a multielectrode mapping catheter (PENTARAY, Biosense Webster, Inc) under fluoroscopy guidance and a three-dimensional (3-D) mapping system (CARTO 3 system, Biosense Webster, Inc). A 6-mm tip cryoablation catheter (Freezor Xtra, Medtronic, Inc, Minneapolis, MN) was used for ablation. At first, cryomapping at −30°C was performed to assess the disappearance of VAs and the absence of AV conduction disorder. After confirming the safety and efficacy of cryoablation by cryomapping, subsequent freezing with a target temperature of −70 to −80°C was applied while monitoring the AV conduction system. If an AV block developed, the cryoapplication was immediately stopped, and the cryoablation catheter was repositioned slightly toward the ventricular apex, below the previous ablation site. The procedural protocol for cryomapping and cryoablation was based both on safety (avoiding AV block) and efficacy (eliminating VAs) (Figure 1). If the His bundle potential was not visible on the distal electrode of the ablation catheter placed at the earliest ventricular activation site, cryoenergy was delivered at the myocardial site exhibiting the earliest ventricular activation after confirmation of the QS pattern on a local unipolar electrocardiogram and perfect pace mapping.
When the His bundle potential was visible on the distal electrode of the ablation catheter placed at the earliest activation site, we first evaluated whether the VAs could be eliminated by cryomapping slightly toward the ventricular apex (below the earliest activation site). Efficacy was defined as the disappearance of VAs within 20 seconds after starting cryomapping. 10 If the effect was poor but no AV block was found, cryomapping was performed by shifting the catheter to the earliest ventricular activation site, where the His bundle potential was visible on the distal electrode of the catheter. When no AV block occurred during cryomapping, freezing was started with AV conduction monitoring. Before performing cryoablation at any site, tests to confirm the efficacy and safety of cryomapping were consistently performed. If the efficacy was poor but no AV block was found during cryomapping at the earliest ventricular activation site, the cryoablation catheter was repositioned slightly above the previous ablation site. When an AV block occurred during cryomapping at any region, cryomapping was immediately stopped, the ablation catheter was moved slightly toward the ventricular apex, and cryomapping was then attempted again to evaluate the safety and efficacy in the same manner. The primary target ablation site was the earliest activation site of the VAs without a visible His bundle potential, at which the efficacy and safety were confirmed by cryomapping. However, the earliest activation site of the VAs with a visible His bundle potential, at which the efficacy and safety of cryomapping were confirmed, was also acceptable as the ablation site. When there were no progressive complications during subsequent cryoablations, cryoapplication was continued for up to 240 seconds with freeze-thaw-freeze cycles. One freeze-thaw-freeze cycle was counted as two cryoablation applications. We confirmed the absence of VA recurrence by isoproterenol infusion or burst pacing from the ventricle for 20 minutes after cryoablation. If the clinical VAs recurred during the waiting time, cryomapping was attempted again to evaluate the safety and efficacy according to the protocol, and cryoablation was applied in the same manner. A bonus freeze was generally not performed after successful freeze-thaw-freeze ablation. When the efficacy and safety of cryomapping could not be obtained at any region, an alternative approach, such as a retrograde approach or radiofrequency ablation, was considered.

[FIGURE 1: Protocol of the cryomapping and cryoablation procedures. AV, atrioventricular; VAs, ventricular arrhythmias]

The procedure was performed at Yokkaichi Municipal Hospital, Mie, Japan. All patients provided written informed consent before the ablation procedure. The procedure complied with the principles of the Declaration of Helsinki.
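The branching logic of this protocol can be condensed into a short sketch. The site labels and the cryomap/cryoablate callbacks below are hypothetical stand-ins for the operator's intraprocedural observations; the sketch only mirrors the protocol's flow (reversible −30°C test first, definitive −70 to −80°C freeze only after a safe and effective test, immediate abort and apical repositioning on any AV block).

```python
from collections import namedtuple

Obs = namedtuple("Obs", "va_disappeared av_block")

def treat_site(site, cryomap, cryoablate, move_toward_apex, max_moves=3):
    """Illustrative cryomapping-first decision flow; the callbacks stand in
    for hypothetical intraprocedural observations."""
    for _ in range(max_moves + 1):
        mapping = cryomap(site)            # reversible -30 C test
        if mapping.av_block:
            site = move_toward_apex(site)  # stop at once, retest more apically
            continue
        if not mapping.va_disappeared:
            return "ineffective here: try another site or approach"
        ablation = cryoablate(site)        # -70..-80 C freeze with AV monitoring
        if ablation.av_block:
            site = move_toward_apex(site)  # abort and reposition apically
            continue
        return f"success at {site}: freeze-thaw-freeze completed"
    return "abandon: consider a retrograde approach or RF ablation"

# Toy run: mapping at the earliest site is safe and effective, but the
# definitive freeze provokes an AV block; the more apical site succeeds.
cryomap_obs = {"earliest site": Obs(True, False), "apical site": Obs(True, False)}
cryoablate_obs = {"earliest site": Obs(True, True), "apical site": Obs(True, False)}
print(treat_site("earliest site", cryomap_obs.__getitem__,
                 cryoablate_obs.__getitem__, lambda s: "apical site"))
```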
| Case 1

A 72-year-old female patient had frequent VAs originating from the His bundle region. She had a history of dual-chamber pacemaker implantation due to sick sinus syndrome. The 24-h Holter monitoring detected 40,618 VAs per day. The VA morphology on ECG suggested the origin of the VA to be near the His bundle region (Figure 2A). The earliest activation site of the VAs was very close to the His bundle region. At the earliest activation site, the QRS morphology on pace mapping was similar to that of the clinical VA (Figure 2B). The His bundle potential was recorded on the distal electrode of the ablation catheter placed at the earliest activation site. Thus, cryomapping was attempted below the earliest activation site, but the VAs did not disappear. Thereafter, cryomapping was attempted at the earliest activation site. The intracardiac electrogram showed a local potential that preceded the QRS on the surface ECG by 30 ms during VA and a His bundle potential on the distal electrode of the ablation catheter during sinus rhythm (Figure 2C). After starting cryomapping, the VA disappeared within 12 seconds, and no conduction disorder occurred at this point (Figure 2D). Freezing at a target temperature of −70°C to −80°C was performed subsequently. Cryoapplication was performed for 240 seconds with freeze-thaw-freeze cycles. However, the clinical VA recurred during the waiting time after freezing. Therefore, cryomapping was attempted above the earliest activation site. After we confirmed the efficacy and safety of cryomapping at this site, cryoablation was performed subsequently. However, a second-degree AV block occurred 26 seconds after starting cryoablation. Thus, the cryoapplication was immediately stopped. After recovery of the normal PQ interval, the cryoablation catheter was repositioned toward the ventricular apex. A tiny His bundle potential was still recorded on the ablation catheter (Figure 2E). Once again, we confirmed the efficacy and safety of cryomapping at this site, and cryoapplication was then performed for 240 seconds with freeze-thaw-freeze cycles (Figure 2F). The target VAs were successfully eliminated, and no permanent AV block or recurrence of VA occurred during the waiting time after ablation. Cryomapping and cryoablation were applied 7 and 5 times, respectively.

Figure 3 shows the detailed ECGs and mapping images of 3 patients (Cases 2-4) with VAs originating from the para-Hisian region, which were successfully eliminated by the cryoapplication system without even a transient AV block. Cryoablation was finally performed at a site where the His bundle potential could be confirmed on the distal electrode of the ablation catheter in 2 patients and on the proximal electrode in 1 patient; however, the disappearance of VAs without the occurrence of AV block had been confirmed during previous cryomapping at these sites. All 3 patients achieved the disappearance of VAs within 20 seconds after starting cryomapping (11, 18, and 6 seconds in cases 2, 3, and 4, respectively). Cryoenergy was delivered up to the target freezing time with safety and efficacy in all 3 patients. Cryoapplication was repeated in cases 2 and 4 because the VAs recurred during the waiting time after freezing. Cryomapping was applied 4, 2, and 2 times, and cryoablation 6, 2, and 4 times, in cases 2, 3, and 4, respectively.

| Procedural and clinical outcome

The procedural results and outcomes are summarized in Table 1. The mean numbers of cryomapping and cryoablation applications were 3.8 ± 2.4 and 4.3 ± 1.7, respectively. All patients underwent 24-h Holter monitoring during a median follow-up period of 125 days (75-227 days) and had no significant recurrence without the use of antiarrhythmic therapy. Further, no patient had any AV block after the procedure.

| Discussion

This case series demonstrates the utility of the cryomapping approach and reports the outcomes of first-line cryoablation for VAs arising from the para-Hisian region.
Radiofrequency (RF) catheter ablation near the electrical conduction system carries the risk of AV block, whereas the cryothermal system has the advantage of avoiding AV conduction disturbance. 6,11 However, to date, little has been reported on the safety and efficacy of cryoablation for para-Hisian VAs. 5,8,9 Miyamoto et al 8 reported that cryoablation was performed in 10 patients with VAs arising from the para-Hisian region, and clinical success was obtained in 4 patients. In their study, only 2 patients (20%) underwent cryoablation as first-line treatment, while 8 patients underwent cryoablation after the failure of previous RF ablation. Therefore, the tissue modification caused by the previous RF ablation could have had some effect on the VA origin and AV conduction before cryoablation. In contrast, we report that the VAs originating near the His bundle region were successfully eliminated by first-line cryoablation in all 4 patients. Furthermore, the efficacy may be explained by the strict cryomapping criterion of the disappearance of VAs within 20 seconds after starting cryomapping. 10 Another study also reported successful cryoablation for VAs from the para-Hisian region; however, a cryotest (cryomapping) was attempted before cryoablation in only 40% of the patients, and the detailed approach and mapping protocol of the cryotest were not reported. 5 In the present case series, we report detailed procedure-related results and clinical outcomes for all 4 patients who underwent first-line cryoablation using a strict cryomapping protocol. The VAs originating near the His bundle region were successfully eliminated by cryoablation. Furthermore, there were no permanent complications, including AV block, and no recurrence in these patients. First-line cryoablation using a cryomapping protocol may therefore be considered an alternative to RF ablation for para-Hisian VAs with appropriate feasibility and safety.

Of interest, in all patients we finally had to freeze a site at which the His bundle potential was recorded on the ablation catheter, a high-risk area in terms of AV block. The unique therapeutic approach of cryomapping made it possible for us to apply ablation energy to the para-Hisian region, or even to a site at which the His bundle potential was recorded. Cryoenergy has several desirable characteristics in terms of avoiding AV conduction disturbances. First, effectiveness and safety can be confirmed by cryomapping at a less severe temperature before using the cryoablation mode. The reversible and smaller lesions formed during cryomapping could reduce the risk of AV block. 12,13 Therefore, we could approach a more optimal ablation site in proximity to the His bundle using cryomapping based on the results of electrophysiological studies. The safety of this approach can be explained by the cryomapping and cryoablation protocol: before cryoablation, we consistently used cryomapping at the target site to determine whether an AV block or a prolonged AV interval occurred even at the milder temperature, and cryoablation was never applied to a site at which a transient AV block had occurred during cryomapping. Second, the cryoablation catheter tip adheres to the myocardium once cryoablation starts, with freezing of both the tip and the myocardium; the adherence is strong and not affected by the heartbeat. 5-7
In contrast, an RF ablation catheter moves with the heartbeat during ablation, which may cause extensive unintentional tissue damage. Additionally, since the VAs in our cases might have arisen from shallow myocardium rather than from the His bundle itself, delivery of cryoenergy to the site of origin could be achieved. Nevertheless, there is no guarantee that AV conduction disturbances will not occur even while using cryoenergy. Transient AV blocks have been reported in 2%-23% of cryoablation procedures, a relatively high proportion, although most AV blocks were transient. 5,14 Furthermore, Miyamoto et al 8 reported a complete AV block that occurred during cryoablation for para-Hisian VAs in a patient with a first-degree AV block at baseline, which required the implantation of a permanent pacemaker. Thus, extreme caution is needed, especially in patients with AV conduction disturbances at baseline or when both RF ablation and cryoablation are performed for para-Hisian VAs.

| CONCLUSION

Cautious first-line cryoablation near the His bundle region using the strict cryomapping approach can be performed with safety and efficacy. Nonetheless, caution is needed because of the risk of AV conduction disturbance during ablation.

ETHICS STATEMENT

All patients provided written informed consent before the ablation procedure. The procedure complied with the principles of the Declaration of Helsinki.
Comparison of iron-regulated outer membrane proteins (IrOMP) and iron-sufficient outer membrane proteins (IsOMP) of Pasteurella multocida strains of porcine origin

An investigation was carried out to compare the effect of growing P. multocida type A of pig origin in iron-restricted and iron-sufficient media on the basis of their outer membrane protein extracts. Pasteurella multocida serotype A was cultured in two ways: in one, plain BHI medium was used; in the other, the same medium supplemented with 2,2'-dipyridyl (iron-restricted medium) was used. Both cultures were used to obtain outer membrane proteins (OMPs) by extraction of bacterial cells with 1% Sarcosyl. Separation of the OMPs by SDS-PAGE showed that they were mixtures of protein fractions with molecular weights ranging from 110 to 22.6 kDa for the iron-restricted OMPs and from 47.3 to 29.9 kDa for the iron-sufficient OMPs. The OMP with a molecular mass of 29.9 kDa was expressed in both the iron-restricted and the iron-sufficient medium.

Introduction

Pasteurella multocida is widely distributed throughout the world and is known to cause a variety of diseases in animals and birds. P. multocida capsular type A is an etiological agent of swine pneumonic pasteurellosis, a common disease of pigs. The pathogenicity of P. multocida is associated with various virulence factors (Harper et al., 2006) [1]. The key factors that play an important role in the pathogenesis of pasteurellosis include the capsule and lipopolysaccharide, adhesins, toxins, siderophores, sialidases and outer membrane proteins (e.g., OmpA, OmpH, Oma87 and PlpB) (Martin and Ferri, 1993) [2]. The outer membrane proteins (OMPs) play a significant role in the pathogenesis of pasteurellosis (Srivastava et al., 1998) [3]. Several OMPs are immunogens, and the antibodies produced against these OMPs demonstrate a strong protective action. Such antigens may be used as components of subunit vaccines. Iron is essential for bacterial growth and replication and plays a role in the establishment and progression of infection. To survive and grow under iron-limiting conditions, bacteria require an efficient iron-sequestering system. Several iron uptake systems of pathogenic bacteria have been identified (Wooldridge and Williams, 1993) [4]. One system involves the secretion of siderophores capable of removing iron from iron-binding glycoproteins and the expression of OMPs that are receptors for the iron-siderophore complex (Neilands, 1993) [5]. OMP preparations from cells grown in iron-restricted medium have been found to contain higher-molecular-weight protein fractions compared with OMPs from iron-sufficient media. Veken et al. (1996) [6] reported that OMPs of certain serotypes of P. multocida grown under iron-deficient conditions included several iron-repressible membrane polypeptides seroreactive with P. multocida antibodies. It was believed that such iron-repressible membrane polypeptides were more immunogenic and could be candidates for developing an improved vaccine. The aim of the present study was to compare the effect of growing P. multocida type A of pig origin in iron-restricted and iron-sufficient media on the basis of their outer membrane protein profiles.

Material and Methods

Bacterial isolates and growth conditions: One lyophilized isolate of P. multocida was obtained from the repository of the All India Network Project on Haemorrhagic Septicaemia, Department of Microbiology, College of Veterinary Science, Assam Agricultural University, Khanapara, Guwahati.
Material and Methods

Bacterial isolates and growth conditions: One lyophilized isolate of P. multocida was obtained from the repository of the All India Network Project on Haemorrhagic Septicaemia, Department of Microbiology, College of Veterinary Science, Assam Agricultural University, Khanapara, Guwahati. The P. multocida isolate was revived by inoculating onto 5% sheep blood agar (Collins and Lyne, 1970) [7] and incubating at 37 °C overnight.

Preparation of OMPs from P. multocida strains for SDS-PAGE: The strain was cultured in two Brain Heart Infusion (BHI) broth preparations: (i) iron-sufficient BHI broth and (ii) iron-restricted BHI broth containing an iron-chelating agent, 150 µM 2,2'-dipyridyl (Kharb and Charan, 2010) [8], and incubated in a shaking incubator at 120 rpm and 37 °C overnight. The overnight cultures were centrifuged at 5000 x g for 15 minutes at 4 °C. The supernatant was decanted and the pelleted bacteria were washed thrice in PBS (pH 7.4). The washed bacterial cells were resuspended in 10 mM HEPES buffer, pH 7.4. The cell suspensions from the iron-sufficient and iron-restricted media were sonicated separately. After sonication, the lysates were centrifuged at 5000 x g for 20 minutes at 4 °C. The supernatant was ultracentrifuged at 100000 x g for 60 minutes at 4 °C. The pellet was then resuspended in 2% sodium lauroyl sarcosinate prepared in 10 mM HEPES buffer and incubated at 22 °C for 60 minutes. After incubation, the suspension was ultracentrifuged at 100000 x g for 60 minutes at 4 °C. The pellet was washed twice with sterile distilled water, and the final pellet was resuspended in 0.1 M PBS (pH 7.4).

Estimation of protein concentration of OMP: The protein concentrations of the iron-sufficient OMP (IsOMP) and iron-restricted OMP (IrOMP) extracts were determined spectrophotometrically at 660 nm.

Separation by SDS-PAGE: Electrophoretic separation of the proteins of the P. multocida strain was performed in 12% polyacrylamide gels according to the procedure described by Laemmli (1970) [9]. The separation was carried out at a constant voltage of 20 V at room temperature until the dye front (bromophenol blue) was within 1 mm of the end of the gel. The gels were stained with Coomassie blue overnight and then destained with destaining solution.

Results and Discussion

Pasteurella multocida capsular type A cultured in BHI broth with 150 µM 2,2'-dipyridyl showed reduced growth during overnight incubation in comparison with the culture grown in BHI broth alone. The protein concentration of P. multocida grown in iron-sufficient medium was 0.104 mg/ml, and in iron-restricted medium it was 0.240 mg/ml. These findings revealed that the OMP yield from the iron-restricted medium was higher than that from the iron-sufficient medium. In a study on sheep and goats, Nagpal et al. (2013) [10] reported a higher concentration of OMP (225 µg/ml of broth). The difference in OMP concentration might be due to variation in bacterial strain and OMP extraction procedure. The protein profiles of cells grown in iron-sufficient and iron-restricted media are shown in Fig. 1. A total of 10 major bands were seen in cells of P. multocida grown in BHI broth with dipyridyl: 110 kDa, 99 kDa, 75 kDa, 63.3 kDa, 54.6 kDa, 46.6 kDa, 40.9 kDa, 37.5 kDa, 29.9 kDa and 22.6 kDa. The corresponding bands for the iron-sufficient OMPs were 47.3 kDa, 41.3 kDa and 29.9 kDa. Based on band intensity, five polypeptides with molecular weights of 75 kDa, 40.9 kDa, 37.5 kDa, 29.9 kDa and 22.6 kDa for IrOMP, and one polypeptide with a molecular weight of 47.3 kDa for IsOMP, were considered major OMPs.
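Band sizes such as these are typically read off a semi-log standard curve fitted to a molecular weight marker lane, since SDS-PAGE migration is approximately linear in log10(MW). As a minimal illustration (not part of the original study; the marker sizes and Rf values below are hypothetical placeholders), the calculation can be scripted as follows:

import numpy as np

# Hypothetical marker lane: relative migration (Rf) and known sizes (kDa).
# Real values must be measured from the gel; these are placeholders.
marker_rf  = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
marker_kda = np.array([116.0, 66.2, 45.0, 35.0, 25.0, 18.4])

# Fit log10(size) as a linear function of Rf (semi-log standard curve).
slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)

def estimate_kda(rf):
    """Estimate the molecular weight (kDa) of a band from its Rf value."""
    return 10 ** (slope * rf + intercept)

for rf in [0.12, 0.33, 0.62]:  # Rf values of hypothetical sample bands
    print(f"Rf = {rf:.2f} -> approx. {estimate_kda(rf):.1f} kDa")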
Borkowska-Opacka and Kedrak (2002) [11] also found that OMP extracts of P. multocida showed protein bands ranging from 112 to 22 kDa for IrOMP and from 86 to 22 kDa for IsOMP. Choi-Kim et al. (1991) [12] reported a 34 kDa protein fraction as the major OMP for serotype A, while Zhang et al. (1994) [13] reported 35.5 kDa as the major OMP. The 29.9 kDa OMP was expressed in both iron-restricted and iron-sufficient media. Srivastava et al. (1998) [3] also reported a 22 kDa major protein band that was common to both IrOMP and IsOMP. The differences in molecular size of the OMPs might be due to strain variation and differences in the extraction procedure.

Conclusion

The OMPs extracted from P. multocida type A of pig origin grown in iron-restricted and iron-sufficient media were found to consist of a variable number of proteins of different molecular sizes. Growth of the isolate under the influence of an iron-chelating agent could favour the expression of high molecular weight proteins (110 kDa, 99 kDa, 75 kDa, 63.3 kDa and 54.6 kDa).
Research on Multi-label Text Classification Method Based on tALBERT-CNN

Single-label classification technology has difficulty meeting the needs of text classification, and multi-label text classification has become an important research issue in natural language processing (NLP). Extracting semantic features from different levels and granularities of text is a basic and key task in multi-label text classification research. A topic model is an effective method for the automatic organization and induction of text information: it can reveal the latent semantics of documents and analyze the topics contained in massive information. Therefore, this paper proposes a multi-label text classification method based on tALBERT-CNN: an LDA topic model and the ALBERT model are used to obtain the topic vector and semantic context vector of each word (document), a fusion mechanism is adopted to obtain in-depth topic and semantic representations of the document, and the multi-label features of the text are extracted through the TextCNN model to train a multi-label classifier. The experimental results obtained on standard datasets show that the proposed method can extract multi-label features from documents, and its performance is better than that of existing state-of-the-art multi-label text classification algorithms.

Introduction

Automatic text classification is an important means for humans to process massive amounts of text information. In the real world, due to complex and changeable text data environments and the existence of polysemous objects, text classification faces many severe challenges. Traditional single-label text classification no longer fully meets the needs of users, and so the multi-label learning method came into being [1]. Multi-label learning refers to the process of assigning the most relevant subset of class labels to each instance from the overall label set, thereby intuitively reflecting the various semantic information contents of ambiguous objects. For example, a news report about coronavirus disease 2019 ("COVID-19") is likely to belong to the "fighting epidemic" category, the "medical and health" category, and the "economic crisis" or "national security" category. Multi-label text classification is one of the important branches of multi-label learning, and it is mainly used in sentiment analysis, topic labeling, question answering, and dialog behavior classification [2][3][4][5]. Multi-label text data have the following characteristics: multi-label text classification allows a document to belong to multiple labels, so different levels and aspects of semantic features need to be captured; documents may be relatively long, and complex semantic information may be hidden in noisy or redundant content; and most documents belong to only a few labels, while a large number of "tail labels" have only a few training documents [6]. Given these characteristics, researchers mainly focus on three aspects: how to accurately mine the correlations between labels; how to accurately represent the complex semantics of the given documents, especially through the use of domain knowledge to supplement the semantic information of the document; and how to fully capture the effective information in each document and extract the feature information related to the corresponding label.
The emergence of attention mechanisms, combined with deep neural networks, can effectively solve the problem of long-distance word dependencies and capture important words in a document. In particular, in 2017, Vaswani et al. [7] proposed a new transformer network structure in the paper titled "Attention Is All You Need"; this structure is not only faster than other approaches during training but is also more suitable for modeling long-distance dependencies, and it has achieved very good results on many NLP tasks. Since then, an increasing number of institutions and scholars have conducted extensive research based on transformers and produced many excellent language models, such as OpenAI GPT and BERT. These language models have been widely used in multi-label learning tasks. However, they generally have large numbers of parameters and accurately express only local semantics, so they cannot represent the macro-semantic information of documents. In 2020, Lan et al. [8] proposed the "A Lite BERT" (ALBERT) model, which greatly reduces the number of required parameters. Peinelt et al. [9] combined a topic model with a BERT model for the task of semantic similarity detection. We therefore have reason to use ALBERT and topic models to extract important information of different granularities from documents to further improve the effect of multi-label text classification. The purpose of using deep learning methods to solve multi-label problems is to find the mapping relationship between text features and labels. At present, this mapping relationship is not very clear. Therefore, we attempt to use features of different levels and granularities (e.g., semantic information and topic information) to represent the depth features of the text and map them to the label space. Through the above analysis, although multi-label learning has received extensive attention and made much progress, some problems and challenges still must be further studied and solved. Among them, how to combine topic information and semantic information to guide multi-label text classification is the key problem. Therefore, this paper proposes a depth semantic model that integrates the topic information of the document domain and the local contextual semantic information of the input document to obtain a depth feature representation of the document; then, a convolutional neural network (CNN) is used to extract depth features at different levels. In this paper, a latent Dirichlet allocation (LDA) topic model is used to obtain word-level and document-level topic information, and a fusion mechanism is used to represent the topic and semantic depths of the document. Then, the depth features of the document are extracted by the CNN model, and the probability of each label is calculated by a fully connected network (FCN) and a sigmoid function. Finally, the cross-entropy loss function is used for training. Our contributions can be summarized as follows:

1. We propose a method called topic ALBERT (tALBERT), which combines an LDA topic model and the ALBERT model to represent the depth features of documents.
2. We design a multi-label text classification model based on tALBERT and TextCNN. The combined model can obtain different levels of semantic document information, extract the depth features of documents, and improve the prediction effect of the model.
3. We evaluate the performance of the proposed method and compare it with current representative multi-label text classification methods on three benchmark datasets. The experimental results show that the proposed method outperforms the baseline models.

Related Works

With the rapid development of machine learning, and especially deep learning, many classification methods have been proposed to solve the multi-label learning problem. These methods mainly involve traditional machine learning algorithms and deep learning models. Traditional machine learning methods include problem transformation methods and algorithm adaptation methods; deep learning methods are mainly divided into CNN-based, RNN-based, and transformer-based multi-label text classification methods according to their model structures.

Traditional Machine Learning Methods

According to their solution strategies, traditional machine learning methods can be divided into two categories: problem transformation methods and algorithm adaptation methods [10].

Problem transformation methods: This category of algorithms tackles multi-label learning problems by transforming them into single-label learning tasks. Representative algorithms include first-order, second-order, and high-order approaches. Binary relevance (BR) [11] is the most representative first-order problem transformation method. The basic idea of this algorithm is to decompose a multi-label learning problem into several independent binary classification problems. However, due to its inability to discover the interdependence between labels, BR may suffer a decrease in prediction performance. The typical second-order approach is calibrated label ranking (CLR) [12]. The basic idea of the CLR algorithm is to transform a multi-label learning problem into a label ranking problem and use pairwise comparison to realize the rankings between labels. Although CLR has the advantage of reducing the imbalance between label categories, the number of binary classifiers constructed by CLR grows quadratically rather than linearly with the number of labels. Therefore, this method has limitations and is not suitable for sample data with a large number of labels. Classifier chains (CCs) [13] and label powersets (LPs) [1] are typical high-order problem transformation methods. The CC method improves on BR, which ignores the correlations between labels and thus loses information. The basic idea of the CC algorithm is to transform a multi-label learning problem into a series of binary classification problems, in which each subsequent binary classifier in the chain is based on the predictions of the previous classifiers. Therefore, when a previous label is predicted incorrectly, the error is passed down the chain. The basic idea of the LP algorithm is to transform a multi-label learning problem into a set of multi-class classification problems: each subset generates a new set of labels via LP technology, and a multi-class label is finally learned for each subset. However, this method may result in sample imbalance after the initial problem is transformed. In other words, as the number of labels and the sample space increase, these methods face great challenges in terms of computational efficiency and performance.
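As a minimal sketch of the binary relevance strategy described above (not the paper's code; it uses scikit-learn's OneVsRestClassifier, which fits one independent binary classifier per label, on synthetic placeholder data):

from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data: X is a feature matrix, Y a 0/1 label-indicator matrix.
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)

# Binary relevance: one independent logistic-regression classifier per label.
# Label correlations are ignored, which is exactly the weakness noted above.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:3]))  # a 0/1 vector over the 5 labels for each instance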
Algorithm adaptation methods: This category of algorithms tackles multi-label learning problems by adapting popular learning techniques to deal with multi-label data directly. These techniques mainly include first-order and second-order approaches. Multi-label k-nearest neighbors (ML-kNN) [14] and multi-label decision trees (ML-DTs) [15] are typical first-order approaches. The basic idea of the ML-kNN algorithm is to adapt k-nearest neighbor techniques to deal with multi-label data, where the maximum a posteriori (MAP) rule is utilized to make predictions by reasoning with the labeling information embodied in neighbors. The ML-kNN algorithm can mitigate the class-imbalance issue by estimating the prior probability of each class label, but its computational complexity is high. The basic idea of the ML-DT algorithm is to adopt decision tree techniques to deal with multi-label data, where an information gain criterion based on multi-label entropy is utilized to build the decision tree recursively; however, the algorithm assumes that the labels are independent when calculating the multi-label entropy. Typical second-order approaches include the ranking support vector machine (Rank-SVM) [16] and the collective multi-label classifier (CML) [17]. The basic idea of the Rank-SVM algorithm is to adapt a maximum margin strategy to deal with multi-label data, where a set of linear classifiers is optimized to minimize the empirical ranking loss and enabled to handle nonlinear cases with kernel tricks. Rank-SVM is a machine learning algorithm based on statistical learning theory that extends the classical SVM to multi-label learning problems. The basic idea of the CML algorithm is to adapt the maximum entropy principle to deal with multi-label data, where the correlations among labels are encoded as constraints that the resulting distribution must satisfy. The CML algorithm takes the correlations between labels into account, but its complexity is too high.
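For the ML-kNN approach described above, a minimal usage sketch might look as follows (an assumption, not the paper's code: it relies on the third-party scikit-multilearn package, whose MLkNN adapter implements the neighbor-counting and MAP-rule idea):

from sklearn.datasets import make_multilabel_classification
from skmultilearn.adapt import MLkNN  # third-party: scikit-multilearn

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)

# ML-kNN: for each instance, count label occurrences among the k nearest
# neighbours and apply the MAP rule estimated from the training data.
clf = MLkNN(k=10)
clf.fit(X, Y)
print(clf.predict(X[:3]).toarray())  # 0/1 label-indicator matrix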
Deep Learning Methods

With the development of deep neural networks, researchers have proposed a variety of deep learning methods for multi-label text classification, including CNN-, RNN-, and transformer-based deep neural network models. In 2014, Kim et al. [18] proposed the TextCNN model, which was the first to use a CNN structure for sentence-level text classification; the authors carried out a series of experiments based on Word2vec word embeddings. However, this model cannot avoid the disadvantage of the fixed windows used in CNNs, so it cannot model long sequence information. Liu et al. [19] improved the structure of TextCNN and proposed the XML-CNN model. This model differs from TextCNN in that dynamic pooling is used in the pooling operation, the loss function is improved by adopting binary cross-entropy, and a hidden layer is added between the pooling layer and the output layer; this layer maps high-dimensional labels to a low-dimensional space to reduce the number of required calculations. Yang et al. [20] proposed a twin hyperspectral CNN (HSCNN) for multi-label text classification with unbalanced data. This network mainly deals with the small-sample problem via its twin network structure and uses a hybrid mechanism to handle extremely unbalanced multi-label text classification: the head labels adopt a single network structure, and the tail labels adopt a twin network with less sampling.

A multi-label text classification method based on a CNN is relatively simple and does not incur a massive computational cost. However, the pooling operation of a CNN causes the loss of semantic information, and when the text is too long, a CNN is not conducive to capturing the relationships between preceding and following information, resulting in semantic deviation. Nam et al. [21] used an RNN to replace the classifier chain in a CNN and performed modeling with an RNN-based sequence-to-sequence (seq2seq) architecture. The method generates label sequences in turn with the RNN to capture the correlations between labels. This was the first time the seq2seq model was applied to multi-label text classification; after that, more seq2seq models were proposed for this task. Chen et al. [22] proposed a fusion mechanism for a CNN and an RNN: first, a word vector is sent to the CNN to obtain the corresponding text feature sequence, and then the feature is input into the RNN to obtain the corresponding prediction label. However, the model is greatly influenced by the size of the given training set; if the training set is too small, overfitting may result. Most multi-label text classification methods based on RNNs are implemented using the seq2seq structure, which considers the relationships between labels via sequence generation. The latter label is often dependent on the former label, so the impact of incorrect labels is often superimposed. Although some methods have been improved in this regard, some defects remain: the improvements help to some extent, but whether such models can effectively learn the correlations between labels remains to be discussed.

The typical network structure of a transformer adopts an attention mechanism; this is unlike the traditional encoder-decoder model, which needs to be combined with an RNN or a CNN. The proposal of the transformer has greatly influenced the field of NLP, especially through the BERT model based on the transformer structure, which is regarded as a milestone of NLP. Yarullin et al. [23] first tried BERT under multi-label settings and in hierarchical text classification problems and proposed a sequence-generating BERT model in the field of multi-label text classification. Chang et al. [24] proposed the X-Transformer model, which is composed of three parts: a semantic label sequence component, a deep neural matching component, and an overall ranking component. Gong et al. [25] proposed the HG-Transformer deep learning model, which first models the input text as a graph structure; then uses a multi-layer transformer structure with a multi-attention mechanism at the word, sentence, and graph levels to fully capture the characteristics of the text; and finally utilizes the hierarchical relationships among the labels to generate label representations, with a weighted loss function designed based on the semantic distances among labels. The effect of a multi-label text classification model based on a transformer structure is often better than that of models based on CNN and RNN structures, but the number of parameters required by a transformer model is often large, and the network structure is complex, producing some limitations in practical application.
To further improve the applicability and performance of multi-label text classification in real scenarios, this paper proposes a joint model called tALBERT, which combines LDA and ALBERT to obtain multi-level document representations. On this basis, TextCNN is used to extract the depth features of documents and to conduct multi-label text classification.

tALBERT-CNN Method

This section introduces our multi-label text classification method, tALBERT-CNN, covering a description of the multi-label classification problem, the model framework, topic information extraction based on LDA, text representation based on tALBERT, and multi-label learning and prediction.

Problem Description

Assume that X = ℝ^d represents the d-dimensional feature vector input space of the instances and that Y = {y_1, y_2, ..., y_q} represents the q-dimensional label output space. Then, the dataset for multi-label learning can be defined as D = {(x_i, Y_i) | 1 ≤ i ≤ N}, where x_i ∈ X is the feature vector of the i-th instance and Y_i ⊆ Y is the label set corresponding to instance x_i. The multi-label learning task can thus be transformed into finding a suitable mapping function h: X → 2^Y from the training set, so that the input space of feature vectors is mapped to the output space of label sets through this mapping function. When an instance x with an unknown label arrives, its label set can be predicted through the mapping function as h(x) ⊆ Y. In practice, given the feature vector x of an instance, the learned function produces a 0/1 vector over the label space.

Model Framework

In this section, the method proposed in this paper is introduced in detail. The BERT model requires a large number of parameters and, despite its advantages in local semantic representation, cannot represent macro-level domain information about the input document beyond its immediate semantics. Inspired by references [8] and [9], we propose a document semantic acquisition method based on tALBERT. We obtain topic information at the word level and document level through an LDA topic model, obtain semantic representations at the word level and document level through ALBERT, and fuse the above information through a concatenation mechanism to represent the document. Compared with other attention mechanisms, a CNN can efficiently capture features between different words, so we choose TextCNN as the multi-label feature extraction model for multi-label learning and classification prediction. The model framework is shown in Fig. 1. In addition, before the text sequence is input into the multi-label classification framework (Fig. 1), the following work is needed:

1. Document preprocessing. This mainly includes removing invalid symbols, digit normalization, converting all uppercase English characters to lowercase, lemmatization, etc.; the text sequence set is formed after preprocessing.
2. Training the LDA model.
3. Fine-tuning the ALBERT model. If the length of a text sequence is greater than 512 words, it is truncated before being sent to the ALBERT model.

Topic Information Extraction

A topic model is a well-established text analysis tool and a popular language model. It is an effective unsupervised tool that can reveal the latent semantic information in the input text corpus based on the global context information of the corpus. Topic models include probabilistic latent semantic analysis (PLSA), LDA, and various extensions.
Among them, LDA is the most complete probabilistic topic model. An LDA topic model is a feature extraction method based on the bag-of-words (BOW) model. It ignores the order of words and the information between context words. It consists of three levels of probability distributions: the document, topic, and word levels. Topic information is added at the document-word feature level, word information is mapped to the topic space, and the global underlying semantic structure of the text is captured to achieve a good representation of the text features in the hidden topic space. An LDA topic model directly captures the global semantics related to words in the text and obtains a global feature representation of the text. Figure 2 shows the detailed process by which the LDA model generates topic information.

1. For any topic k, obtain the multinomial distribution of words under this topic according to the Dirichlet distribution, i.e., φ_k ∼ Dirichlet(β), where β is an a priori hyperparameter that is generally set to 0.01.
2. For each document w_m, its topic probability θ_m obeys a Dirichlet distribution, θ_m ∼ Dirichlet(α), where α is an a priori hyperparameter that is generally set to 50/K and K is the number of topics.
3. For each document w_m in the training corpus and every word position n in the document, choose a topic z_{m,n} and a word w_{m,n}; they obey multinomial distributions, z_{m,n} ∼ Multinomial(θ_m) and w_{m,n} ∼ Multinomial(φ_{z_{m,n}}).

Based on reference [26], in which word-level and document-level topics were successfully combined with a neural architecture, we can easily obtain the topic vector of each document: all tokens in a document are passed to the topic model to infer the document's topic distribution,

Z_i ∈ ℝ^K, (1)

where i denotes the document index and K denotes the number of topics. In addition, for word-level topics, a topic distribution

W_j ∈ ℝ^K (2)

is inferred for each token T_j.
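As a minimal sketch of this topic extraction step (an assumption, not the paper's code: the paper does not name its LDA implementation, and the gensim library is used here with the hyperparameter style described above, on toy documents):

from gensim import corpora
from gensim.models import LdaModel

docs = [["iron", "membrane", "protein"],        # toy tokenized documents
        ["topic", "model", "text", "label"],
        ["label", "text", "classification"]]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]     # bag-of-words corpus

K = 8                                           # number of topics
lda = LdaModel(bow, num_topics=K, id2word=dictionary,
               alpha=50.0 / K, eta=0.01, random_state=0)

# Document-level topic vector Z_i: a K-dimensional probability distribution.
z_i = lda.get_document_topics(bow[0], minimum_probability=0.0)
print([round(p, 3) for _, p in z_i])

# Word-level topic distribution W_j for one token.
w_j = lda.get_term_topics(dictionary.token2id["text"], minimum_probability=0.0)
print(w_j)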
Text Representation Based on tALBERT

BERT, as a replacement for Word2vec, greatly improved accuracy on eleven NLP tasks. A BERT model has the following three characteristics: by utilizing a transformer as the main framework of the algorithm, the bidirectional relationships in sentences can be more thoroughly captured; the algorithm uses a masked language model (MLM) [27] and next sentence prediction (NSP) as the goals of multi-task training; and large-scale training data have enabled the results of BERT to reach new heights. Google has made the BERT model open source, so researchers can directly use BERT as the conversion matrix of Word2vec and efficiently apply it to their own tasks. Although BERT has many advantages, the base version of the BERT model possesses as many as 110 M parameters, and GPU memory utilization is as high as 7 GB during training. A large BERT model has as many as 340 M parameters, and the GPU memory occupied during training is as high as 32 GB, which is a problem for researchers. Therefore, ALBERT emerged to fill this need. Compared with BERT, ALBERT mainly improves on two aspects to reduce the number of parameters.

Factorized Embedding Parameterization

For BERT, the word vector dimensionality E and the hidden layer dimensionality H are equal. As the dimensions of the model increase (that is, as the word vector dimensionality and hidden layer dimensionality increase), the parameter count of the model grows rapidly. Reference [8] provided a method to decompose the parameters: in the mapping process from the vocabulary dimensionality V in the input layer to the hidden layer dimensionality H, V is first projected into a low-dimensional embedding space E, and then E is projected into the hidden layer H (usually, the dimensionality of H is much larger than that of E). The number of embedding parameters is thereby reduced from O(V × H) to O(V × E + E × H). For example, with V = 30,000, H = 768 and E = 128, the embedding parameter count drops from about 23.0 M (30,000 × 768) to about 3.9 M (30,000 × 128 + 128 × 768).

Cross-Layer Parameter Sharing

The authors of ALBERT proposed cross-layer parameter sharing as another method to improve parameter efficiency. There are four sharing modes: all parameter sharing, attention parameter sharing, feed-forward network (FFN) parameter sharing, and no sharing. Table 1 compares the configurations of the BERT and ALBERT models. Table 2 compares the parameters of the different cross-layer sharing modes for base ALBERT. Based on the full experiments on ALBERT in reference [8], we use the ALBERT model and adopt a single-sentence (document) input mode to obtain word-level (document-level) semantic representations in ℝ^d, where d denotes the internal hidden size of ALBERT (768 for base ALBERT, 1024 for large ALBERT). Based on formulas (1)-(4), we use two fusion methods and three specific strategies (S_1, S_2 and S_3) for document feature representation. Among them, S_1 is used as the input of a fully connected layer and is finally used for multi-label classification prediction, while S_2 and S_3 are used as the inputs of TextCNN to extract multi-label features at different levels for multi-label classification and prediction.
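The fusion itself can be pictured as stacking the LDA topic vectors and ALBERT semantic vectors along the sequence axis. A minimal PyTorch sketch (an assumption: the exact fusion operator and vector ordering are not fully specified in the text; the shapes follow the S_2 and S_3 strategies given below, with topic vectors assumed to be projected or padded to the same width d as the semantic vectors):

import torch

L, d = 4, 768                       # toy sequence length; ALBERT hidden size
sem_words = torch.randn(L, d)       # ALBERT word-level vectors (placeholders)
sem_doc   = torch.randn(1, d)       # ALBERT document-level vector
top_words = torch.randn(L, d)       # LDA word-level topic vectors, padded to d
top_doc   = torch.randn(1, d)       # LDA document-level topic vector

# S2: word-level fusion -> shape (2L, d)
S2 = torch.cat([sem_words, top_words], dim=0)

# S3: word-level plus document-level fusion -> shape (2(L+1), d)
S3 = torch.cat([sem_doc, sem_words, top_doc, top_words], dim=0)

print(S2.shape, S3.shape)           # torch.Size([8, 768]) torch.Size([10, 768])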
Multi-label Learning and Prediction

In this section, we introduce the proposed multi-label prediction model based on TextCNN. The specific model structure is shown in Fig. 3. The training model consists of an embedding layer, a convolutional layer, a pooling layer, and a fully connected layer.

Embedding Layer

Each document and word in the dataset can be represented by a semantic feature vector. The semantic feature vector acquisition method adopted in this paper combines a dynamic semantic vector and a static topic vector. Through the ALBERT language model, the semantic vector of the document and the contextual semantic vector of each word in the document can be obtained, addressing the polysemy problem; the document-level topic vector and the topic vector of each word can be obtained through the LDA topic model. These topic vectors largely encode the domain knowledge in which the document or word is located. To meet the data format requirements of the embedding layer, we uniformly set the document length to L words: for documents longer than L, we keep the first L words, and for documents shorter than L, we pad the remainder with zeros. Based on the research in the previous section, we set the dimensionality d of the topic vector and the word vector to 768 and use the fusion strategies S_2 ∈ ℝ^(2L×768) and S_3 ∈ ℝ^(2(L+1)×768) as the embedding vectors.

Convolutional Layer

The convolutional layer is used to extract the different granularities of feature information contained in the semantic feature vector. This is achieved by setting convolution kernels of different sizes. The width of the convolution kernels defined in this paper is the word vector dimensionality d. Depending on the language, different heights h can be selected for the convolution kernels. Using more convolution kernels with different heights, a richer feature representation can be obtained (in this paper, the heights h of the convolution kernels are set to 2, 3, 4, and 5).

Pooling Layer

The pooling layer reduces the output of the convolutional layer and extracts a deeper feature representation. The sizes of the feature sets obtained by convolution kernels of different heights differ. This paper applies max pooling to each feature set to extract its maximum value: for each convolution kernel, the output feature is the maximum value of its feature set; max pooling is applied to all convolution kernels, all output feature values are concatenated, and the final feature vector representation of the document is obtained.

Fully Connected Layer and Loss Function

The feature vector obtained by concatenating the outputs of the pooling layer is fully connected to q neurons (equal to the number of labels) as the output layer of the model. At the same time, the sigmoid function is used as the output function of the model:

σ(x) = 1 / (1 + e^(−x)).

Finally, to determine whether the document belongs to a given label, this paper sets the threshold to 0.5; that is, p_ij ≥ 0.5 means that this label is one of the output labels of the current instance; otherwise, it is not. In addition, this paper uses the cross-entropy function as the loss function for training the model:

Loss = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{q} [ y_ij · log(p_ij) + (1 − y_ij) · log(1 − p_ij) ],

where N denotes the number of documents and q is the number of labels; p_ij ∈ [0, 1] and y_ij ∈ {0, 1} are the predicted value and true value of the j-th label of the i-th instance, respectively.
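A compact PyTorch sketch of this TextCNN head (illustrative only; layer sizes, names and the use of BCEWithLogitsLoss are assumptions consistent with the description above, not the authors' code):

import torch
import torch.nn as nn

class TextCNNHead(nn.Module):
    """TextCNN-style multi-label classifier over pre-fused embeddings."""
    def __init__(self, d=768, num_labels=54, heights=(2, 3, 4, 5), n_filters=128):
        super().__init__()
        # One Conv2d per kernel height h; the kernel width equals d.
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(h, d)) for h in heights)
        self.fc = nn.Linear(n_filters * len(heights), num_labels)

    def forward(self, x):                      # x: (batch, seq_len, d)
        x = x.unsqueeze(1)                     # -> (batch, 1, seq_len, d)
        pooled = []
        for conv in self.convs:
            c = torch.relu(conv(x)).squeeze(3)           # (batch, n_filters, seq_len-h+1)
            pooled.append(torch.max(c, dim=2).values)    # max pooling over time
        feats = torch.cat(pooled, dim=1)       # (batch, n_filters * len(heights))
        return self.fc(feats)                  # one raw logit per label

model = TextCNNHead()
x = torch.randn(8, 10, 768)                    # e.g., a batch of S3-style embeddings
logits = model(x)

# Sigmoid + binary cross-entropy for training; threshold 0.5 at inference.
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8, 54)).float())
pred = (torch.sigmoid(logits) >= 0.5).int()
print(loss.item(), pred.shape)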
Experiments

To prove the effectiveness of the proposed multi-label text classification method, this section provides a discussion in four parts: a description of the datasets, the selection of the evaluation metrics, the comparison methods and parameter settings, and a comparison of experimental results.

Datasets

We use the following three multi-label text classification datasets: the arXiv Academic Paper Dataset (AAPD), the Internet Movie Database (IMDB), and Reuters Corpus Volume I (RCV1). The AAPD collects the abstracts and corresponding subjects of 55,840 computer science papers from the arXiv website. Each paper may involve multiple subjects (labels) (54 subjects in total), and each abstract has one or more subject marks. Since the texts are academic papers, they are relatively standardized, and the label settings are relatively reasonable. The model can predict the subjects of a paper based on its abstract, making the AAPD very suitable for multi-label text classification model and algorithm research. The IMDB contains 117,196 movie introductions (in English) covering 27 movie categories. Each movie introduction has one or more possible types, and the dataset provides a multi-label binary mask for each movie according to whether it belongs to a specific type; this dataset is therefore also suitable for multi-label classification research. RCV1 has a total of 804,414 news reports involving 103 categories. Each report may belong to one or more categories; on average, each news report carries 3.2 category labels. These data can be used to test the performance of the proposed method on a large-scale dataset with a large number of labels. Table 3 lists the statistics of these datasets, where N is the total number of instances, W is the average number of words per document, Q is the total number of classes, and Q̄ is the average number of labels per document. Since the lengths (numbers of words) of the documents in the original datasets differ, a document that is too short makes it impossible to accurately determine the category of the text, and a document that is too long wastes space. According to the characteristics of each dataset, this paper fits the document lengths of the AAPD, IMDB, and RCV1 datasets to 250, 150, and 300 words, respectively, for the input of the model. If the document length exceeds the set value, we cut it off; if the length is insufficient, we pad it with zeros. In addition, documents shorter than 20 words are discarded.

Evaluation Metrics

To comprehensively evaluate the proposed method, we choose commonly used example-based evaluation metrics: precision (P), recall (R), F1 score (F1), subset accuracy (SA), and Hamming loss (HL). These scores are compared with those of other methods.

P: precision reflects the average, over all samples, of the proportion of correctly predicted labels among the predicted labels:

P = (1/N) Σ_{i=1}^{N} |ŷ_i ∩ y_i| / |ŷ_i|,

where N denotes the total number of test samples, and ŷ_i and y_i denote the predicted and true label sets of the i-th sample.

R: recall reflects the average, over all samples, of the proportion of correctly predicted labels among the true labels:

R = (1/N) Σ_{i=1}^{N} |ŷ_i ∩ y_i| / |y_i|.

F1: the F1 score is a comprehensive metric that combines precision and recall; the larger the value, the better the system performance:

F1 = 2PR / (P + R).

SA: subset accuracy evaluates the fraction of correctly classified examples, i.e., whether the predicted label set is identical to the ground-truth label set. Intuitively, subset accuracy can be regarded as a multi-label counterpart of the traditional accuracy metric and tends to be overly strict, especially when the label space is large:

SA = (1/N) Σ_{i=1}^{N} 1{ŷ_i = y_i},

where 1{ŷ_i = y_i} returns 1 if the predicted label set equals the true label set and 0 otherwise.

HL: the Hamming loss measures the proportion of misclassified labels, i.e., the proportion of correct labels that are not predicted together with the proportion of incorrect labels that are predicted. The smaller the Hamming loss, the more effective the tested model or method:

HL = (1/(N·q)) Σ_{i=1}^{N} Σ_{j=1}^{q} xor(ŷ_ij, y_ij),

where q denotes the total number of labels, ŷ_ij and y_ij denote the predicted label and the real label, respectively, and xor denotes the XOR operation.
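These example-based metrics can be computed directly from 0/1 indicator matrices. A small self-contained sketch (toy arrays, not the paper's data; division by zero is guarded for empty label sets):

import numpy as np

y_true = np.array([[1, 0, 1, 0],   # toy ground-truth label matrix
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],   # toy predictions after 0.5 thresholding
                   [0, 1, 0, 1],
                   [1, 1, 0, 1]])

inter = (y_true & y_pred).sum(axis=1)
P  = np.mean(inter / np.maximum(y_pred.sum(axis=1), 1))  # example-based precision
R  = np.mean(inter / np.maximum(y_true.sum(axis=1), 1))  # example-based recall
F1 = 2 * P * R / (P + R)
SA = np.mean((y_true == y_pred).all(axis=1))             # subset accuracy
HL = np.mean(y_true != y_pred)                           # Hamming loss
print(P, R, F1, SA, HL)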
Comparison Methods and Parameter Settings

The multi-label text classification method proposed in this paper is fundamentally composed of two parts: deep topic and semantic representation based on tALBERT and multi-label feature learning based on a CNN. Therefore, the chosen baseline models adopt similar network structures. In addition, to fully verify the performance of our method, we also choose other excellent models based on RNNs and attention mechanisms for comparison.

Comparison Methods

TextCNN [18]: This method is based on Word2vec for word embedding and was the first to use a CNN structure for text classification.

XML-CNN [19]: Using a CNN with dynamic pooling, this method is a representative algorithm for text classification tasks.

DTFEM-ML_KNN [28]: This method uses a combination of LDA and bidirectional long short-term memory (Bi-LSTM) to extract deep document topic features and is combined with the traditional machine learning method ML_KNN for multi-label text classification.

Label-Specific Attention Network (LSAN) [6]: This algorithm uses an adaptive fusion strategy to obtain document representations via a self-attention mechanism and a label attention mechanism, and finally combines the two types of document representations to construct a multi-label text classifier.

Our Method: Following the tALBERT document feature representation method and the three information fusion strategies proposed in this paper (S_1, S_2 and S_3), the tested multi-label text classification methods include tALBERT-S1, tALBERT-CNN-S2, and tALBERT-CNN-S3.

Parameter Settings

For TextCNN and XML-CNN, we use Google's pre-trained Word2vec as the word embedding mechanism with embedding dimensionality d = 300; the convolution kernel width is set to 300, and the heights are set to {2, 3, 4, 5}. We set DTFEM-ML_KNN and LSAN according to the parameters in their original papers. The proposed method selects the pre-trained base ALBERT model (d = 768) in the all-sharing mode with E = 128 (see Table 1). For the LDA topic model, we set the number of topics k = 128 and the hyperparameters α = 0.5 and β = 0.01. The widths of the convolution kernels are set to 768, and the heights are set to {2, 3, 4, 5}.

Experimental Results

In this section, the proposed tALBERT-CNN is evaluated on three benchmark datasets via a comparison with five baselines in terms of P, R, F1, SA, and HL. Tables 4, 5, and 6 show the performance of the LDA, ALBERT, and tALBERT-CNN models with different fusion strategies on all test documents. The LDA and ALBERT models adopt document-level vector representation and an FCN. A "+" denotes that a larger value indicates better model performance, and a "−" denotes that a smaller value is better; in each line, the best result is marked in bold. From Tables 4, 5 and 6, we can see that our method is clearly superior to the LDA topic model, which relies on probabilistic feature statistics, and to the deep semantic model ALBERT. This fully shows that combining a topic model with a deep semantic model significantly improves performance on downstream NLP tasks, consistent with the conclusion of reference [9]. In addition, the effects of the different fusion methods on multi-label text classification also differ. Fusing only document-level topic vectors and semantic vectors performs worst, though still better than either single model; fusing word-level and document-level vectors is optimal for multi-label text classification. Two reasons can explain this finding. On the one hand, fusing word-level and document-level vectors to represent the original features of the input document increases the length of the document representation, which then inevitably contains more information. On the other hand, as the size of the fusion vector increases, more hidden multi-label features are provided; thus, given the advantage of TextCNN in feature extraction, the effect of multi-label text classification is further improved. Therefore, the following comparative experiments only compare tALBERT-CNN-S3 with other advanced models.
Tables 7, 8, and 9 show the comparison results of our proposed model against the basic models and other excellent models. On the whole, aside from some individual evaluation metrics on which the LSAN model is better on the RCV1 dataset, our model performs relatively well, while the CNN-based models perform worst; this is related to the static Word2vec word-level vectors used by TextCNN and XML-CNN as the original semantic representation of the document. Because Word2vec is based on static word vectors, once the model is trained on the given corpus, the meaning of each word does not change; that is, if a word is not placed in context, the problem of polysemy cannot be solved. The LSAN model performs better than ours on some metrics on the RCV1 dataset mainly because LSAN transforms the label set into a semantic vector and then obtains multi-label text features through a similarity comparison with document semantic vectors. However, this method relies heavily on the given label sets and can fully show its advantages only when the number of labels is large. Our model achieves good results on three different datasets and has stronger applicability than the competing approaches. Especially on the AAPD and IMDB, our model is clearly better than the other models, and on RCV1, our model is also better on the SA and HL metrics.

Conclusions

To solve the multi-label text classification problem, this paper proposes a method that combines document representations of topic information and deep semantic information with a multi-label learning model based on TextCNN. We performed many experiments on three benchmark datasets and explored the influence of fusing different levels of topic information and deep semantic information on multi-label text classification. In short, the strategy of fusing topic information and deep semantic information at both the word level and the document level achieves the best performance. In addition, to further verify the effectiveness of our proposed method, we compared it with excellent methods based on RNNs, CNNs, and the combination of an attention mechanism and a topic model. Aside from the LSAN model being superior to our method on some evaluation metrics on a specific dataset, our tALBERT-CNN multi-label text classification method achieved the best performance and has better applicability than competing approaches. Although our method has achieved good performance on three standard datasets and alleviates the common tail label problem in multi-label classification to a certain extent, we did not propose a thorough solution to the tail label problem; this remains a direction of our continued efforts. Moreover, we also analyzed the characteristics of the LSAN model, in which the label set is represented by a semantic vector and an attention mechanism lets the model learn the similarities between document semantics and label semantics before multi-label classification is carried out. Therefore, we will pay more attention to improving the performance of our multi-label text classification model using the similarities between label semantics and document semantics and an attention mechanism based on the transformer architecture.
In addition, solving the few-sample classification problem through meta-learning is also an effective way to address the tail label problem and has been favored by scholars in recent years. In fact, our team is currently studying how to use meta-learning to solve the problem of few-shot text classification. This line of work is mainly aimed at problems with many text label categories and few dataset instances. At present, our research has made some progress.
Acting Up or Opting Out: An Analytical Literature Review of Extant South African and International School Truancy Studies

The common purpose of this article is to review recent international and South African research on school truancy and the implementation of intervention strategies. The specific aim is to examine the effects of interventions on school attendance in order to inform policy, practice and research. Consequently, this review is limited to a consideration of definitional issues, causes of truancy and non-attendance, and recent trends in truancy intervention research. Much of the research draws attention to the fact that there is a diversity of views as to the causal factors of this behaviour. Some suggest that school truancy is the effect of dysfunctional family backgrounds and home life, education systems which fail learners, unjust social and economic systems, as well as psychological traits among learners such as low self-esteem and poor self-concept. The article concludes with a discussion of intervention strategies aimed at reducing the prevalence of truant behaviour. It is also envisaged that the implications of this review will provide evidence, guidance and some caution for future researchers who wish to advance the study of truant behaviour among students and the implementation of intervention strategies.

Introduction

Playing truant used to be a one-time lark for many school-going students. More often than not, these antics were short-lived, as local shopkeepers, neighbours and family friends were quick to report truants to parents or school authorities. However, the past three to four decades saw increasing evidence and an evolving recognition that school truancy is fast growing into a profound problem and a major concern locally and internationally. Truant behaviour has incrementally been identified as one of the risk factors that can be linked to delinquent activity, substance abuse and educational failure (Ovink, 2011; Hendricks, Sale, Evans, Mckinley and Carter, 2010). In many instances it may signal the beginning of a lifetime of challenges for students who choose to skip classes deliberately and often do not realise the negative repercussions of their behaviour. These students are likely to fall extremely far behind in their school work, and many of them eventually drop out, since in their judgement dropping out of school is, comparatively speaking, much easier than catching up.
Despite numerous anti-truancy campaigns and social work interventions launched internationally to address the issue of school truancy, there seems to be a broadening body of evidence among researchers that poor school attendance and intermittently skipping classes has substantial cost implications not only for the individual but also for the wider society (Maynard, McCrea, Pigott and Kelly, 2013; Valentine, Pigott and Rothstein, 2009). Not only does it cost students an education, but it invariably results in limited job opportunities and earning power, thereby restricting chances for future education and training. The implications for schools with high rates of absenteeism and truancy-related challenges include loss of funds and failure to meet performance requirements and targets set by local, provincial and national education authorities. Significant costs to communities associated with truancy include higher rates of criminal activity, citizens not productively contributing to the community, and higher government spending on social services (Goldstein, Little and Akin-Little, 2003). Other potentially risky outcomes associated with truancy include delinquent activity, substance abuse, gang-related activity, involvement in criminal behaviour such as burglary and vehicle theft, and school expulsion (Maynard, Salas-Wright, Vaughn & Peters, 2012; Petrides, Chamorro-Premuzic, Frederickson & Furnham, 2005; Reid 2002). Reid (1994) contended that in the United States of America an estimated 55% of students in secondary schools either do not attend school on a regular basis or skip classes regularly during the day. According to this author, many teachers of truanting students, being only human, are often not too concerned when they find only 20 of the regular 30 students present in their classes; such a situation simply translates into less work, less headache and more manageable units to work with. Reid (1994) further emphasized that in the American context, the majority of persistent truants are students who are not performing well at school, lack parental encouragement and face a variety of material or social challenges which are often unrelated to school. Unlike in America and in European countries, where extensive truancy studies have been conducted, the exploration of this behavioural phenomenon has remained relatively limited in South Africa.

As is the case internationally, the tendency of students in South African schools to skip certain classes, particularly towards the end of the school day, demonstrates that some form of "hidden truancy" is prevalent: students can be marked present in the school's attendance register yet fail to attend all their lessons. On the other hand, students may arrive at school late and be marked absent, or simply wander around the school premises yet be marked as present. The increasingly high rates of school truancy, along with the wide and far-reaching life consequences of this social and public health issue, suggest the need for further research to better understand truancy and truant youth from a South African perspective.
The purpose of this article is to review extant South African and international research on school truancy with the aim of reflecting on conceptual issues, causes of truant behaviour, current research trends, and their implications for future truancy intervention and policy making. Furthermore, it is envisaged that the implications of this review will provide evidence, guidance and some caution for future researchers who wish to advance the study of truant behaviour among students.

Conceptual Issues

Among the key issues when focusing on school absenteeism and truancy is understanding precisely the meaning and definition of these concepts. Reid (2010) sheds light on this issue by referring to various types of truancy, including deliberately missing school without good cause. A range of other forms of school absenteeism can also be classified as truancy: specific lesson absence or specific lesson truancy, post-registration absence or post-registration truancy, psychological absence or psychological truancy and, most controversially, parentally condoned absence and parentally condoned truancy. Reid (2010) noted that in certain circles, specific lesson absence, post-registration absence and parentally condoned absence are regarded as not being truancy. Others disagree and equate 'being absent without good reason' with truancy, irrespective of its cause (Maynard, Salas-Wright, Vaughn & Peters, 2012). This seems to be one of the reasons why operational definitions of truancy usually vary from study to study.

Other researchers hold different views on exactly what constitutes school truancy. For Stolls (1990), truancy can be regarded as being absent from school for no legitimate reason. Atkinson, Haysey, Wilkin & Kinder (2000) introduced the concept of time into their definitions, referring to differences in the extent of absences, from avoidance of a single lesson to truanting for several days, weeks or, in rare cases, even months. O'Keef (1994) emphasized the challenges in classifying post-registration truancy and specific-lesson truancy, as these forms of absence are normally omitted from official school returns. Similarly, Kinder, Wakefield and Wilkin (1996) acknowledged that post-registration truants are not necessarily absent from school, as they may be hiding somewhere on the school premises in order to skip particular classes. The American National Centre for School Engagement proposed a brief and concise definition: any form of unexcused absence from school (Seeley, 2013). More recently, truancy has commonly been defined as excessive unexcused absence (Kim & Page, 2013). Clinically, truancy has been categorised as a kind of conduct-disordered behaviour along the same lines as stealing, lying, destructiveness and cheating (American Psychiatric Association, 2013). As an early childhood risk, school truancy is considered part of a developmental pathway to more serious, later criminal careers. Chronically truanting students often miss opportunities to follow their school curricula, show low academic achievement and are likely to lose interest in school. School truancy should therefore be considered an important marker of a child's social adjustment and, specifically, of risk for future challenging behaviour.
Needless to say, it is for these reasons that official statistics on school truancy or unauthorised absence need to be treated with a great measure of circumspection. Official bodies often change their own definitions and sometimes even the timing of data collection. However, whatever methods are employed to quantify pupils' absences from school, there is a growing concern that despite all the best efforts of schools and of local and national education departments, global school attendance has not improved over the last thirty years. Recent evidence suggests that as many girls as boys currently engage in truant behaviour. More disturbing is that the onset of truancy is reportedly becoming younger, with approximately 36 percent of all truants beginning their patterns of irregular school and lesson attendance whilst at primary school (Reid, 2010).

Why do Truants Opt not to Attend School?

There is increasing evidence that the phenomenon of truancy (i.e., unexcused absence from school) is associated with a host of interconnected and overlapping negative individual, family, social and community risk factors (Ovink, 2011). Its causes continuously seem to be changing and are becoming increasingly complex. There are indications that research findings can vary depending upon the methodology used. For example, data obtained through school-based surveys, community surveys, pupils' self-referral instruments, parentally obtained information or teacher-assessed questionnaires often reveal significant differences in outcomes on matters pertaining to the extent of parentally condoned absenteeism.

For Hayes (2011), home background and social circumstances were the primary reasons for truancy. Reid (2010), in a detailed study of 128 persistent absentees and two matching control groups (n = 384), reported that school-based factors such as bullying, the curriculum and poor teaching were the preponderant factors in a clear majority of cases. However, all individual cases, to a greater or lesser extent, contained aspects of social, psychological and institutional features, bearing in mind that an important aspect of truancy is that each case is largely unique in nature. In the aforementioned study by Reid (2010), a related trend was found in a high percentage of cases: truants presented with lower levels of self-esteem and poorer academic self-concepts compared with their regularly attending peers.
Other recent studies have found the causes of truancy to be directly linked to the lack of child-rearing skills among parents or carers (Donoghue, 2011) and the effects of dysfunctional local communities (Hong, Algood, Chiu and Lee, 2011).Henry (2010) found that the prime causes were personal, family, school and community based factors.Individual aspects included issues like -lack of self-esteem, social skills and confidence; poor peer-group relationships; lack of academic ability; special needs; lack of concentration and self-management skills.Family factors included parentally condoned absences, not valuing education, domestic challenges, inconsistent or inadequate parenting and economic deprivation.Community issues revolved around socio-economic factors, locations, housing, local attitudes, culture, criminality, vandalism and a sense of feeling safe.Within schools, the main issues were poor management, the ease at which some pupils could slip away unnoticed, poor teacher -pupil relations, the school 'ethos', a perceived irrelevance of some aspects of the national curriculum, bulling and poor learning-teaching strategies. Prior Reviews of Truancy Studies and Interventions A number of prior reviews have synthesized knowledge on truancy and interventions to combat this kind of learner behaviour pattern and improve school attendance.Most of these reviews have been narrative in nature and have not presented their finding systematically, often reviewing the same programs, emphasizing particular studies considered `effective` (Maynard, McCrea, Piggot and Kelly, 2013).Klima, Miller and Nunlist (2009) undertook a meta-analysis of 22 experimental and quasi-experimental studies which aimed at evaluating the effects of dropout and truancy interventions.These authors reported small positive impacts on dropping out, achievement and attendance.For attendance and enrolment outcomes, it was reported that alternative education programs, behavioural programs and school based mentoring programs were the modalities found to be most effective. Sutphen, Ford & Flaherty (2010) conducted a systematic review of the effects of truancy interventions.Their review included 16 studies of truancy intervention studies published in peer-reviewed journal between 1990 and 2007 and comprised experimental, quasi-experimental and single group pretest-post-test studies and a broad range of intervention modalities including universal, selective and indicated programs.The authors of this review highlighted a paucity of truancy intervention research and a lack of consistency in definitions of truancy used by researchers.They also identified individual interventions that demonstrated beneficial effects, including interventions using contingency management, group guidance and parental notification as well as some community based and collaborative interventions. 
In their study, Maynard, Sala-Wright, Vaughn and Peters (2012), explored the presence of heterogeneity among truant youth to provide a more nuanced examination of the nature of adolescent truancy and examine distinct profiles of truant youth as they relate to externalising behaviours.Latent profile analysis was employed to examine the heterogeneity of truant youth by using a nationally representative sample of 1,646 truants.Five key indicator variables were utilized to identify latent classes: school engagement, participation in school-based activities, grades, parental academic involvement and number of school days skipped.Additionally, multinomial regression was employed to examine the relationship between latent truant youth classes and externalizing behaviours.Four classes of truant youth were identified: achievers (28.55%), moderate students (24.30%) academically disengaged (40.89%) and chronic truants (6.26%).Based on the findings of this study, it emerged that group membership was associated differentially with marijuana use, fighting, theft and selling drugs.It was also found that truants are not a homogenous group, but rather presents with different risk profiles as they relate to key indicators, demographic characteristics and externalizing behaviours. In a study of limited scope conducted by Sheppard (2009), data was collected of 57 students' which measured their attitudes to school and schoolwork and their perceptions of their parents' involvement in their education.The author examined the role of these variables in relation to school attendance, 'good' and 'poor' attenders of 12 and 13 years of age and compared it on a number of quantifiable measures regarding their perceptions of schoolwork and their parents' behaviour in relation to aspects of their schooling. 
The findings of this study suggested that good, medium and poor school attenders all avoided class if possible, but good attenders were more likely to do their homework and perceived their parents as more involved in their education. Further differences between the groups showed that most of the students, irrespective of the group to which they belonged, indicated that they did their school work when it was easy and fun. However, more students in the good attending group gave a reason that suggested that they understood the importance of education, particularly for career opportunities. The fact that such an attitude was expressed in response to an open-ended question suggested that it was salient for them and may have represented a reflection of their parents' values. The results regarding homework suggested that parents were generally perceived as acknowledging the importance of homework, although medium and low attenders were more likely to defy parental instruction to do homework. Therefore, while all groups avoided class if they could, good attenders more often obeyed parental requests to do homework assignments. If this is a reliable result, it may reflect children's perception of parental involvement in their education and their learning. In addition, poor attenders and truants, not having completed their homework, would perhaps have a reason to be absent from school or skip particular lessons, the most likely being the absence of their parents' interest and involvement in their schooling.

Monobe and Baloyi (2012) noted that with the emergence of the democratic South Africa, a number of disciplinary problems emerged, of which student truant behaviour is one of the most prominent. Yet, there appears to be a gap in the literature reporting on the phenomenon of truancy and on intervention programmes to improve school and lesson attendance by students in South Africa, compared to the international context. In the section which follows, truancy studies conducted in South Africa will be focussed on.

Based on a study conducted by Masithela (1992), it was found that learners tend to miss lessons during the first and second periods as well as during the last five periods of the school day. The tendency to miss certain classes towards the end of the school day shows that some "hidden truancy" is prevalent: some students are marked present in the attendance register but fail to attend all lessons, while others come late and are marked absent, or are somewhere on the school premises not attending certain lessons but are still marked as being present in the attendance register (Smith, 1996).

In her study of the nature of truancy, which aimed to explore the life world of truants in secondary schools, Moseki's (2004) investigation was undertaken with a sample of 758 Grade 10 students from three secondary schools which were randomly selected out of 14 secondary schools in the town of Kimberley in the Northern Cape Province of South Africa. Data were collected by means of a questionnaire which was completed by the sample of participants during the first period on the day of the investigation. Teachers, who were responsible for either class registers or teaching a lesson at the time, assisted with the distribution of the questionnaires and supervised the learners as they completed the questionnaire.
Based on the main findings of this study, truancy appears to be a universal phenomenon which is not restricted to students from one particular socio-economic background. Other significant findings demonstrated the following: more male learners than female learners engage in truancy; of the subjects or learning areas that learners tend to skip, Mathematics and Life Science top the list; learners whose parents are readily involved in school activities would seldom or never engage in truancy, while those whose parents are seldom or never involved had always skipped certain classes. Finally, significantly more learners who had a good relationship with teachers indicated that they never skipped school or any specific lessons, while the reverse clearly seemed to be the case for learners who had poor relationships with authoritative figures in their learning environment (Moseki, 2004).

A more recent South African truancy study conducted by van Breda (2006) aimed at addressing the following question: how can teachers, in loco parentis, be equipped with the necessary skills and resources to deal with the issue of truant behaviour among early adolescent learners? The empirical investigation was carried out through quantitative as well as qualitative research methodology. A focus group interview was conducted with 6 learners, affording them an opportunity to express their perceptions and experiences as truants. Interviews were conducted with managers of schools in the area where the study was conducted to obtain their views with regard to truant behaviour among students who attended their respective schools. A questionnaire which investigated truancy-related aspects among adolescent learners, such as interaction with peers, parents' and caregivers' involvement in learners' school work and learners' self-esteem regarding their schooling, was administered to 300 randomly selected Grade 8 and 9 male and female students in the Metro East Education District of the Western Cape Province of South Africa.
It emerged from the data analysis of this study that 66.33% of the respondents who participated indicated that they had engaged in truant behaviour since attending secondary school. The balance of the respondents indicated that they had not yet truanted in any manner. The open-ended question included in the questionnaire asked: "Have you ever truanted ('bunked') any of your classes?" The following recurring themes emerged from the responses of the participants who responded affirmatively, as their reasons for engaging in truant behaviour: teacher ill-treatment of students, including unfairness and undue corporal punishment administered by teachers; marginalisation of certain learners by their teachers; an uncaring and unsympathetic attitude displayed towards students by teachers and authoritative figures at school; perceived discriminatory behaviour demonstrated by some educators against learners; being insulted and 'picked upon' by certain teachers; truants viewing themselves as outcasts who were rejected at school by certain teachers and fellow learners, causing them to feel unhappy and unwelcome in their learning environment; dysfunctional family lives, lack of parental interest in their scholastic activities and no moral support received at home; and embarrassment about physical appearance, particularly among girls who felt that they did not always look presentable enough to go to school.

Interventions to Increase Student Attendance or Reduce Truancy

In the light of the fact that truancy is a recognised problem among various disciplines - including education, psychology, social work, sociology, criminal justice and others - not only the conceptualisation of the problem, but also the approaches used to intervene, are diverse. A review of a number of interventions designed to increase student school attendance suggests that such interventions fall into several different categories and are delivered through a variety of modalities. Interventions generally target individual risk factors, such as school anxiety or phobia, low self-esteem and poor social skills; family factors, such as lack of parental involvement; and school factors, including school climate, inconsistent attendance policies and poor relationships between teachers and students (Maynard, Salas-Wright, Vaughn & Peters, 2012).

In addition to the variety of risk factors targeted, interventions also differ in terms of the settings in which they are implemented. Interventions have been implemented in clinical and community agency settings, schools, courts and police agencies. Interventions are also conducted as part of a collaborative effort between community agencies, schools and courts, or by a single entity.
Due to the small number of studies in this synthesis, and the heterogeneous nature of the indicated studies, it is the view of the author that the findings from this review can provide some evidence and guidance, as well as some caution, for those who are concerned about, and trying to take action and develop policy to improve, the attendance of truant students. On the other hand, despite the increased pressure for evidence-based practice and policy and the serious and widespread problem of truancy, there continues to be a paucity of research in the area of interventions to improve school and class attendance for truants. Although more research is needed, more of the same is not sufficient. Furthermore, many of the studies included in this review were plagued by methodological shortcomings, and a number of gaps in the evidence base were identified. In the light of the aforementioned, recommendations to improve the quality of truancy research and fill gaps in the evidence base are discussed below.

Recommendations to Improve Truancy Study Quality

Due to the inherent limitations of single-group pre-post test design studies, it is recommended that future research evaluating the outcomes of interventions utilize a comparison group design, preferably with random assignment, to limit other potential confounds. Should a single-group pre-post test design be utilized, researchers should not overstate their findings, should discuss the limitations of the design, and should replicate their intervention and evaluate the outcomes utilizing a comparison group design.

To address the overall lack of adequate description of intervention strategies, it is recommended that future research include detailed descriptions of the intervention to allow for replication. Descriptions should include details of each of the components of the intervention, the duration of each component, who implemented each component, and the cost and funding of the intervention. In addition, it is recommended that researchers and authors clearly state their involvement in the development or implementation of the intervention.

Keeping attrition to a minimum is important. For future research it is strongly recommended that researchers take attrition into account when designing the study and develop plans to mitigate potential threats from participant dropout. If there are participants who did not complete the program or dropped out of the research, a comparison between completers and non-completers should be provided and any statistically significant differences should be explained and taken into account.

Larger sample sizes are needed in future studies. It is recommended that, when planning truancy studies and determining sample size, researchers take into account potential challenges in gaining access to and consent from parents and students, as well as anticipate mobility and dropout as the school year progresses. Researchers also need to take steps to ensure access to more complete student records and data.
Finally, it is strongly recommended that attendance be measured and reported in a consistent and clear way to allow for easier comparison across studies, as well as to allow for better transparency.Future research should report either attendance or absences in terms of a percentage of days absent or present; clearly specify the number of school days for which attendance was possible and the time period over which it was measured and measure both excused and unexcused absences as well as partial days absent and report these separately so that meaningful comparisons can be made across studies.In addition, it is recommended that researchers and authors present their findings in terms of clinical significance in addition to statistical significance. Conclusion There are numerous truancy interventions in operation with the goal of increasing attendance and it seems that many of these have been described in the literature as positively impacting the students and communities they are serving.Unfortunately, rigorous research to support truancy intervention is either not being conducted or is not being disseminated in a way that can inform others.Either way, evidence is not being built in a way that can add to the evidence base of effects of truancy interventions to inform practice and policy. In order to move the field forward, the various disciplines engaged in truancy research need to take a critical look at barriers affecting research and dissemination.The social, political and practical issues and barriers will need to be acknowledged, examined and addressed if we hope to positively impact the attendance problem plaguing this country and others around the world.
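To make the reporting recommendation above concrete (attendance expressed as a percentage of possible school days, with excused, unexcused and partial-day absences kept separate), a minimal Python sketch follows. The record structure and field names are illustrative assumptions, not taken from any of the reviewed studies.

from dataclasses import dataclass

@dataclass
class TermRecord:
    possible_days: int        # school days for which attendance was possible
    days_present: float       # full days present; partial days may be counted as 0.5
    excused_absences: float   # absences with a legitimate reason
    unexcused_absences: float # the component usually treated as truancy

def attendance_report(r: TermRecord) -> dict:
    # Report each quantity as a percentage of possible days so that studies
    # with different term lengths can be compared directly.
    return {
        "percent_present": 100 * r.days_present / r.possible_days,
        "percent_excused_absent": 100 * r.excused_absences / r.possible_days,
        "percent_unexcused_absent": 100 * r.unexcused_absences / r.possible_days,
    }

print(attendance_report(TermRecord(possible_days=190, days_present=171.5,
                                   excused_absences=10, unexcused_absences=8.5)))

Reporting all three percentages, rather than a single attendance figure, is what allows excused and unexcused absence to be compared meaningfully across studies.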
2017-09-08T18:28:25.664Z
2014-07-04T00:00:00.000
{ "year": 2014, "sha1": "ddbc37743cdb9116d85b346625d06909539502fe", "oa_license": "CCBYNC", "oa_url": "https://www.richtmann.org/journal/index.php/mjss/article/download/3312/3266", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ddbc37743cdb9116d85b346625d06909539502fe", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
4802746
pes2o/s2orc
v3-fos-license
Midlife cardiovascular fitness and dementia Objective To investigate whether greater cardiovascular fitness in midlife is associated with decreased dementia risk in women followed up for 44 years. Methods A population-based sample of 1,462 women 38 to 60 years of age was examined in 1968. Of these, a systematic subsample comprising 191 women completed a stepwise-increased maximal ergometer cycling test to evaluate cardiovascular fitness. Subsequent examinations of dementia incidence were done in 1974, 1980, 1992, 2000, 2005, and 2009. Dementia was diagnosed according to DSM-III-R criteria on the basis of information from neuropsychiatric examinations, informant interviews, hospital records, and registry data up to 2012. Cox regressions were performed with adjustment for socioeconomic, lifestyle, and medical confounders. Results Compared with medium fitness, the adjusted hazard ratio for all-cause dementia during the 44-year follow-up was 0.12 (95% confidence interval [CI] 0.03–0.54) among those with high fitness and 1.41 (95% CI 0.72–2.79) among those with low fitness. High fitness delayed age at dementia onset by 9.5 years and time to dementia onset by 5 years compared to medium fitness. Conclusions Among Swedish women, a high cardiovascular fitness in midlife was associated with a decreased risk of subsequent dementia. Promotion of a high cardiovascular fitness may be included in strategies to mitigate or prevent dementia. Findings are not causal, and future research needs to focus on whether improved fitness could have positive effects on dementia risk and when during the life course a high cardiovascular fitness is most important. Systematic reviews and meta-analyses of observational studies constantly link physical activity to preserved cognitive functioning and decreased risk for dementia. [1][2][3] These studies are limited by reliance on self-reported physical activity and not objectively assessed fitness. Thus, it remains unclear whether the association between physical activity and dementia is mediated by social and cognitive stimulation rather than by level of physical fitness. Furthermore, most studies are conducted in people >60 years of age at baseline, and few have a follow-up of >20 years (mean follow-up 3-7 years), making causal inferences difficult. [4][5][6] Aerobic exercise programs aiming at improving cardiovascular fitness seem to have moderate effects on cognitive function among healthy older person. 5,7 However, current data from randomized controlled trials (RCTs) are insufficient to show that these improvements are due to improved cardiovascular fitness. 5 Presently, no RCTs and very few long-term prospective studies have been able to relate fitness to dementia incidence. The US Cooper Center Longitudinal Study recently reported that a high midlife fitness, assessed by a maximal treadmill test, was associated with lower risk of developing dementia over a mean follow-up period of 24 years. 8 Furthermore, 1 large register study among men in Sweden reported that low cardiovascular fitness, assessed with a bicycle ergometer test at 18 years of age, was associated with an increased risk of earlyonset (<60 years) dementia. 9 This is interesting because the etiology of early-onset dementia is supposed to have strong genetic components. Finally, 1 population study from Finland found that poor self-rated fitness in mid to late life was associated with increased dementia risk over 25 years of followup. 
10 Thus, there is a need for studies that examine objective fitness before old age with follow-up of dementia until very old age. Midlife has been suggested as a "sensitive period" for the effect of cardiovascular risk factors on dementia. 11,12 We therefore tracked dementia incidence for a period of 44 years among women enrolled in the Prospective Population Study of Women (PPSW) who performed a test of maximal cardiovascular fitness in midlife.

Glossary: CI = confidence interval; FINGER = Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability; PPSW = Prospective Population Study of Women; RCT = randomized controlled trial.

Methods The study is part of the PPSW, which was initiated in 1968. 13 Women born in 1908, 1914, 1918, 1922, and 1930 were systematically sampled from the Swedish Population Register on the basis of specific birth dates. Among those sampled, 1,462 women were examined (participation rate 90%). The details and procedures for the examination of the original sample have been described elsewhere. 13 A systematic subsample (born on the sixth day of uneven months, e.g., January, March, etc) was admitted to an exercise test, and 191 took part (response rate 81%): 29 who were 38 years, 41 who were 46 years, 37 who were 50 years, 47 who were 54 years, and 37 who were 60 years of age. 14 Participants in the exercise test did not differ from the total sample in age or in cumulative dementia incidence (23.0% vs 22.1%, p = 0.780).

Standard protocol approvals, registrations, and patient consents The Ethics Committee of the University of Gothenburg approved the study. All women gave informed consent to participate in accordance with the provisions of the Declaration of Helsinki.

Work capacity Cardiovascular fitness was tested at baseline in 1968 by a stepwise-increased ergometer cycling test until exhaustion that was supervised by a physician. Details on the full procedure and exclusion criteria have been described previously. 14 Briefly, after initial submaximal tests of 6 minutes on 200 kilopond m/min (32 W) and 400 kilopond m/min (64 W), the test was interrupted for 5 minutes before the women were brought to maximal workload. The level of maximal workload was chosen on the basis of the results from the preceding submaximal test with the aim of achieving an approximate working time of 6 minutes before voluntary fatigue. If the person had not reached her limit of exhaustion, the workload was increased by an additional 50 to 100 kilopond m/min toward the end of the test. During the period of maximal work, heart rate and ECG were registered every minute, blood pressure was registered after 1 and 2 minutes, and respiratory frequency and perceived exertion according to the Borg scale 15 were noted after 3 minutes and then every minute. The maximal exercise test aimed at arriving at maximal subjective exhaustion as indicated by the Borg scale 15; altogether, 93% perceived their maximal load as strenuous (scale point ≥15) and half of the participants as very, very strenuous (scale point 19-20). 14 The term peak workload is used here because no objective criteria were used for reaching the maximal workload, corresponding to maximal oxygen uptake. Among 20 women, the test was interrupted during the submaximal test because of changes in ECG (n = 6), too high blood pressure (n = 3), claudication (n = 2), chest pain (n = 1), insufficient cooperation (n = 2), or other reasons (n = 6). 14 For analytic purposes, these women were categorized as having low fitness.
The main results did not change when these persons were excluded. Neuropsychiatric examinations and dementia diagnosis The neuropsychiatric examinations were performed by psychiatrists in 1968 to 1969, 1974 to 1975, 1980 to 1981, and 1992 16 The diagnosis of dementia was based on information from psychiatric examinations, close informant interviews, medical records, and the Swedish Hospital Discharge Registry, as described in detail previously. 16 For participants in the neuropsychiatric examinations, dementia diagnoses were made by geriatric psychiatrists after reviewing information from both neuropsychiatric examinations and the close informant interview. The diagnosis was made if the participant had dementia according to both sources of information or if there was clear evidence of dementia from 1 source and subthreshold symptoms in the other. For individuals lost to follow-up, dementia diagnoses were based on information from medical records evaluated by geriatric psychiatrists in consensus conferences and from the Swedish Hospital Discharge Register. The latter provided diagnostic information until December 2012 for all individuals discharged from hospitals on a nationwide basis since 1978. 17 We have previously reported that the Hospital Discharge Register detects 44% of persons diagnosed at the examinations. 18 See supplemental data on dementia diagnosis in appendix e-1 (links.lww.com/WNL/A330). Confounders Potential covariates were chosen on the basis of previous research 4 and biologically relevant variables at the baseline examination in 1968. Education was dichotomized as compulsory (6 years for those born in 1908-1922, 7 years for those born in 1930) or more than compulsory. Smoking was classified as current/ex-smoker vs never smoker. Physical activity during leisure and occupation was assessed according to a slightly modified version of the 4-level Saltin-Grimby scale. 19,20 Level 1 (almost completely inactive) was classified as physical inactivity. Wine consumption was dichotomized as never drinker or drinker. Hypertension was defined as systolic blood pressure ≥140 mm Hg, diastolic blood pressure ≥90 mm Hg, and/or taking antihypertensive medication. Body height was measured to the nearest centimeter and weight to the nearest 0.1 kg. Body mass index was calculated as kilograms per meter squared. Serum cholesterol and triglyceride levels were assessed after an overnight fast. Diabetes mellitus was self-reported and defined as diagnosis told by a physician or being on antidiabetic therapy (insulin and/or tablets). History of myocardial infarction and angina pectoris was self-reported and defined as a diagnosis told by a physician. The diagnosis of stroke was based on information from participants and key informants, the Swedish Hospital Discharge Registry, and hospital medical records. Statistics Incidence proportions of dementia are presented as cumulative incidence. Differences between fitness groups were analyzed with the χ 2 test for dichotomous variables and 1-way analysis of variance for continuous variables. We calculated Cox proportional hazards models with all-cause dementia as the outcome and fitness as the predictor. 
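As a rough illustration of the kind of model just described (the analyses themselves were run in SPSS, as noted below), the following Python sketch fits a Cox proportional hazards model with three-level fitness as the predictor; the data frame, column names and dummy coding are assumptions for illustration only, with the fitness grouping and model-2 covariates as described below.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file: one row per woman, follow-up time in years,
# a dementia indicator, fitness group, and the model-2 covariates.
df = pd.read_csv("ppsw_fitness.csv")
df = pd.get_dummies(df, columns=["fitness"])   # yields fitness_low / fitness_medium / fitness_high

covariates = [
    "fitness_low", "fitness_high",             # medium fitness left out as the reference group
    "age", "body_height", "triglycerides", "smoker",
    "hypertension", "wine_drinker", "physically_inactive", "own_income",
]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_years", "dementia"]],
        duration_col="followup_years",         # person-years from baseline to event or censoring
        event_col="dementia")                  # 1 = all-cause dementia, 0 = censored
cph.print_summary()                            # exp(coef) gives hazard ratios with 95% CIs

With this coding, the coefficient on fitness_high corresponds to the high-versus-medium hazard ratio reported in the Results.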
Person-years were calculated from the date of the baseline examination to the date of dementia onset, death, or end of follow-up, whichever came first.

For analytic purposes, fitness was described as follows: the crude peak workload was categorized into quintiles, but because the 3 middle groups had very similar incidence of dementia, analyses were performed with peak workload categorized into low (≤80 W or interrupted at submaximal workload), medium (88-112 W), and high (≥120 W) fitness; and peak workload/body weight was transformed into stanine scores and categorized as low (stanine score 1-3 or interrupted at submaximal workload), medium (stanine score 4-6), and high (stanine score 7-9) fitness. In model 1, we included age and body height as confounders. In model 2, further confounders were included if bivariate associations in logistic regressions had values of p < 0.20 with all-cause dementia (i.e., serum triglycerides p < 0.001, smoker p = 0.18) or with fitness (i.e., hypertension p < 0.001, wine consumption p = 0.008, physical inactivity p = 0.161, income p = 0.010). All analyses were done with IBM SPSS Statistics 22 (IBM, Armonk, NY). Tests were 2 sided, and the level of significance was set to p < 0.05.

Results The mean peak workload at the ergometer cycling test in 1968 was 103 (SD 21) W. The midlife characteristics of the study population are presented in table 1 (table footnote: fitness is assessed by a stepwise-increased ergometer cycling test until exhaustion; low fitness = crude peak workload <72 W, medium fitness = crude peak workload 80 to 112 W, and high fitness = crude peak workload ≥122 W; the p value for trend is by χ2 test for dichotomous data and analysis of variance for continuous data). Women with high fitness more often had their own income and higher wine consumption and less often had hypertension compared to those with medium or low fitness. Mean age at death was 80.4 years, and 15% were still alive at the end of the study. We found no statistical difference between the groups in age at death or survival. In total, 44 women (23.0%) developed dementia during 5,544 person-years of follow-up from 1968 to 2012. The mean follow-up period was 29 years. Diagnoses included 20 cases of pure Alzheimer dementia, 8 of vascular dementia, 12 of mixed dementia, and 4 of other dementias. Altogether, 28 cases of dementia were diagnosed on the basis of information from the examinations, and another 16 (36%) were diagnosed from registers and case records. The mean time to dementia onset from midlife examination was 29.0 years, and the mean age at dementia onset was 80.5 years. Table 2 shows the relation between peak workload and cumulative dementia incidence. It is noteworthy that the dementia incidence among those who interrupted the test at submaximal workload was 45%. When categorized into 3 fitness groups based on the peak workload, the cumulative incidence of all-cause dementia was 32% for low, 25% for medium, and 5% for high fitness. Similar results were seen for peak workload/body weight (table 3). The mean time to dementia onset was 5 years longer for those with high compared to those with medium peak workload. The mean age at dementia onset was 11 years higher among those with high peak workload compared to those with medium peak workload (table 3). Compared to medium peak workload, the adjusted hazard ratio for all-cause dementia was 0.12 (95% confidence interval [CI] 0.03-0.54) among those with high peak workload and 1.41 (0.72-2.79) among those with low workload (table 4).
Compared to medium peak workload/body weight, the adjusted hazard ratio for all-cause dementia was 0.35 (95% CI 0.13-0.97) for those with high fitness and 1.37 (95% CI 0.62-3.02) for those with low fitness. To minimize the influence of incipient dementia on associations between fitness and dementia, we reanalyzed the data excluding those with dementia onset before 70 years of age and dementia onset before the years 1992 and 2000. This did not change the associations (data not shown).

Discussion We found that high cardiovascular fitness in midlife was associated with decreased risk of dementia in a population of women followed up for up to 44 years. High compared to medium fitness decreased the risk of dementia by 88%. The most pronounced risk reduction was seen among participants with the highest fitness. The 3 previous longitudinal studies on fitness and dementia reported a dose-response relation. The US study, which assessed fitness with a maximal treadmill test, found a decreased dementia risk for every fitness quintile. Similar to our study, the lowest risk was seen among those with highest fitness. 8 On the other hand, the large register study on Swedish men, which assessed fitness according to a bicycle ergometer test at 18 years of age, found an increased risk of early-onset dementia (<60 years) for those with medium fitness compared to those with high fitness and further increased risk for those with low fitness. 9 The Finnish study, which used a single question of self-rated fitness, found primarily an increased dementia risk among those with poor fitness. 10 A possible dose-response relation between fitness and dementia risk needs to be further investigated. We found a very high dementia incidence among those for whom the bicycle test had to be interrupted at submaximal workload. This indicates that adverse cardiovascular processes might be going on in midlife that seem to increase the risk for dementia.

(Table footnote: fitness assessed by a stepwise-increased ergometer cycling test until exhaustion. Crude peak workload: low fitness = peak workload ≤80 W; medium fitness = peak workload 88 to 112 W; high fitness = peak workload ≥120 W. Peak workload/body weight: low fitness = stanine score 1 to 3; medium fitness = stanine score 4 to 6; high fitness = stanine score 7 to 9. a: trend between fitness groups p < 0.01; b: trend between fitness groups p < 0.05.)

The risk reduction of high fitness on dementia was stronger for the crude peak workload than for peak workload/body weight. This is similar to studies on all-cause mortality in which obese fit individuals have a mortality risk similar to that of normal-weight fit individuals. 21 This highlights the need for fitness-driven, rather than weight loss-driven, approaches. Fitness and physical activity are related but not identical. 22 The hazard ratio in our study was stronger than those reported for physical activity. 2,23 This is also reported in relation to cardiovascular disease, 22 indicating that cardiovascular fitness is a more valid measure or that high fitness per se is a stronger protective factor than physical activity. It needs to be emphasized that fitness has a strong genetic component. 24 Genotype may also modify the association between fitness and dementia.
However, evidence is mixed regarding the modifying effect of the APOE e4 allele, the main genetic risk factor for dementia. 1,10 We had data on genes for only a subsample and cannot draw any conclusions about the impact of APOE e4 on the relation between fitness and dementia. Certain time periods across the life course might be especially important for the effect of cardiovascular fitness. Factors early in life might increase brain reserve, which moderates the expression of brain damage and age-related changes. 25 Several dementia-prevention RCTs are on the way, all of which target older persons. One is the multidomain Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) study, which targets older persons with cardiovascular risk factors. 26 This study reported promising results on cognition after 2 years. Another study targeted sedentary older persons and included moderate-intensity aerobic (walking) and strength training, 27 but it found no effect on cognition after 2 years. Recently, a 6-year multidomain intervention reported no effects on dementia incidence. 28 Future intervention studies are needed that target whether the actual improvement in cardiovascular fitness (and muscle strength) is the pathway between physical activity and cognitive functioning. 7 In practice, it will take a very long time to have RCTs that examine the effect of improved midlife (or childhood) fitness on dementia. Meanwhile, longitudinal observational studies such as ours can provide information. Several mechanisms might be involved in how fitness reduces dementia risk. These include both indirect effects, such as influence on hypertension, hypercholesterolemia, obesity, and diabetes mellitus, and direct effects on the brain, with, for example, enhancement of neuronal structures, neurotransmitter synthesis, and growth factors. 1,29 Our study and the 3 other longitudinal studies on fitness and dementia [8][9][10] show similar results in unadjusted analyses and analyses adjusted for indirect effects. This indicates that direct effects on the brain need to be further investigated. In line with this, a recent study found that lower cardiovascular fitness was associated with smaller brain volume 2 decades later. 30 The brain regions that seem most influenced by physical activity are those that are also vulnerable to age-related changes and early pathologic changes in Alzheimer disease such as the hippocampus. 31 Further research on long-term direct effects of fitness on brain structure is needed to improve strategies for dementia prevention. Major strengths of our study are the objective assessment of fitness, the fact that baseline examinations were carried out in midlife, the 44 years of follow-up, the fact that the dementia diagnosis was made by neuropsychiatrists according to extensive examinations, the population-based sample, and the extensive collection of potential confounders. However, there are several limitations. First, this study had an observational design; therefore, we cannot draw conclusions on cause and effect. Second, the sample was relatively small, leading to a lack of statistical power and limiting the possibility for subanalyses. Third, the study includes a relatively homogeneous sample of Swedish women. We thus cannot generalize to other populations. In addition, women in the study probably received more medical care than other women because persons in whom we identified pathologic conditions (e.g., hypertension) were referred for medical treatment.
Fourth, cumulative dropout is a problem in long-term follow-up studies. While this problem was, to some extent, alleviated by the use of hospital registry data for those lost to follow-up, this probably results in an underestimation of the number of dementia cases. It should be noted that almost all people in Sweden receive hospital treatment within the public health system, and the Swedish Hospital Discharge Register covers the entire country. Fifth, the exercise test in 1968 measured work capacity, not maximal oxygen consumption with expired gas analysis, the gold standard for cardiorespiratory fitness.

(Table footnote: fitness assessed by a stepwise-increased ergometer cycling test until exhaustion. Crude peak workload: low fitness = peak workload ≤80 W; medium fitness = peak workload 88 to 112 W; high fitness = peak workload ≥120 W. Peak workload/body weight: low fitness = stanine score 1 to 3; medium fitness = stanine score 4 to 6; high fitness = stanine score 7 to 9. a: Cox proportional hazard ratios adjusted for age and body height. b: Cox proportional hazard ratios adjusted for age, body height, triglycerides, smoker, hypertension, wine consumption, physical inactivity, and income.)

Sixth, the maximal workload for the women in our study is lower compared to previously reported reference values. 32 This might be due to different procedures for the exercise test. Seventh, we did not have data on changes in fitness across the life course. Eighth, competing risk may influence the results of a study with long follow-up because both dementia and low fitness may increase the risk for death. This might result in an underestimation of the association between these conditions. The use of risk-years in the Cox regression analyses partly takes care of competing risk because persons who die earlier will contribute fewer years. Our findings indicate that high cardiovascular fitness in midlife is associated with decreased risk of dementia. Improved cardiovascular fitness in midlife might be a modifiable factor to delay or prevent dementia. Findings are not causal, and future research needs to focus on whether improved fitness could have positive effects on dementia risk and when during the life course a high cardiovascular fitness is most important.

Authors contributions H.H. did the literature search, data analyses, data interpretation, created the figures and drafted the manuscript. L.J., X.G., G.G., S.K., S.Ö., and I.S. did data interpretation, reviewed, modified, and approved the manuscript.
2018-04-03T04:09:34.975Z
2018-04-10T00:00:00.000
{ "year": 2018, "sha1": "99407e9bcf72bc1d918b001ae1fcf4236f9b5bf4", "oa_license": "CCBY", "oa_url": "https://n.neurology.org/content/neurology/90/15/e1298.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "99407e9bcf72bc1d918b001ae1fcf4236f9b5bf4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
98074451
pes2o/s2orc
v3-fos-license
Effective treatment of cadmium–cyanide complex by a reagent with combined function of oxidation and coagulation

• PACC is an efficient dual-function reagent for [Cd(CN)4]2− treatment. • CN− oxidation and Cd2+ coagulation can be simultaneously achieved. • Two stages need to be carried out for complete removal of [Cd(CN)4]2−. • The optimum working conditions of PACC are presented.

The results indicated that PACC is able to simultaneously achieve the complete oxidation of cyanide (CN−) by active chlorine and the subsequent coagulation of cadmium ion (Cd2+) by Al13 polymer. Two stages were carried out for complete CN− oxidation and effective Cd2+ coagulation. The first stage involves the conversion of CN− to cyanate (CNO−), and the second stage involves the conversion of CNO− to nitrogen and the coagulation of the liberated Cd2+. The optimum pH values for the first stage and the second stage are pH 11 and pH 8.5, respectively. The two stages for effective treatment of [Cd(CN)4]2− at the optimal pH condition need in total about 43 min at an active chlorine dosage of 130% of the theoretical requirement for CN− decomposition. Under the optimal conditions for [Cd(CN)4]2− treatment, the stoichiometric weight ratio of Cl2/Al in PACC is 2. This study presents a novel reagent and method to remove heavy metal-cyanide complexes from wastewater.

Introduction Cyanide and cadmium (Cd) are very toxic to many life forms, and appear on international priority pollution lists. Cyanide ion (CN−) has a great tendency to act as a ligand, and associates with almost any metal ion to form a complex [1]. The cadmium-cyanide complex ([Cd(CN)4]2−) is widely found in electroplating and mining effluents. Since neutral and acidic conditions favor the conversion of cyanide to hydrogen cyanide, which is exceedingly poisonous and readily evolves from water in the gaseous phase [2], the treatment of cyanide must be conducted under alkaline conditions. The Cd2+ bound by CN− is quite stable at alkaline condition [2]. Thus, it is difficult to remove by conventional hydrolysis precipitation. The addition of oxidizing chemicals is the most popular method to destroy and remove cyanides [2]; Cd2+ is then liberated and available to be removed by hydrolysis precipitation and coagulation [3,4]. Fe(VI) has been reported to be effective for treating heavy metal-cyanide complexes, owing to cyanide oxidation by Fe(VI) and subsequent removal of the heavy metal by Fe(III) coagulation [5,6]. The alkaline chlorination method is the most widely applied for destruction of heavy metal-cyanide complexes and removal of cyanide from wastewater [2,7]. Coagulation using Al- and Fe-based salts followed by sedimentation and filtration is also employed to remove heavy metals from wastewater [4,8]. With a high content of Al13 polymer (AlO4Al12(OH)24(H2O)12 7+) and active chlorine, a novel water treatment reagent (PACC) that can be synthesized by an electrochemical method [9,10] presents the dual function of coagulation and oxidation [9,11]. It is believed that the Al13 polymer, with high positive charge and strong binding ability, is the most active species in Al-based coagulants responsible for coagulation [12,13]. Active chlorine is most widely used as a disinfectant and pre-oxidant in the water treatment process.
Therefore, PACC has the potential to simultaneously remove CN− and Cd2+, which may offer significant advantages in practice since the treatment process for [Cd(CN)4]2− can be shortened. Water treatment buildings are expected to be more compact and less management is required when using PACC, in comparison with the conventional two-unit system using alkaline chlorination and coagulation separately. The present study aimed to evaluate the performance of PACC on [Cd(CN)4]2− removal through CN− oxidation by active chlorine and subsequent removal of Cd2+ by Al13 species coagulation. The kinetics and stoichiometry of the complete oxidation of CN− by active chlorine in PACC were investigated. The effects of pH, dosage and reaction time on CN− oxidation and Cd2+ coagulation were studied to illuminate the optimum working conditions for [Cd(CN)4]2− removal by PACC.

Water samples All reagents and chemicals used were of analytical grade. The stock solution of NaCN was prepared in NaOH solution, then mixed with CdCl2 solution for 6 h to attain the [Cd(CN)4]2− stock solution. Water samples were synthesized by spiking a certain volume of stock solution into deionized water containing 5 × 10−4 mol/L of NaHCO3 and NaNO3.

The characteristics of PACC PACC samples were prepared according to the method described in our previous papers [9,10]. The properties of the PACC used are summarized in Table 1. PACC1 was the general reagent for most experiments, while PACC2 was specially prepared to evaluate the performance of [Cd(CN)4]2− removal. The Al13 species was the predominant Al speciation for PACC. Total Al concentrations (AlT) were determined using ICP-OES (PerkinElmer, Optima 2000, UK). Basicity values (B, OH/Al molar ratio) were determined by standard titrimetric methods (standard method of the chemical industry of China). Active chlorine was determined by spectrophotometry using N,N-diethyl-1,4-phenylenediamine. The weight ratio of Cl2/Al in PACC can be adjusted by regulating the electrolyte AlT, B value, and temperature during preparation. We used 27Al nuclear magnetic resonance (NMR) spectroscopy to characterize the Al species, with 27Al NMR spectra obtained on a Varian UNITY INOVA (500 MHz) spectrometer. Each of Alm (i.e. monomer + dimer), Al13, and Alu [i.e. larger polymer species and/or solid-phase Al(OH)3] can be quantitatively analyzed according to the intensities of the 27Al signals. Details of the quantitative analysis of the Al species can be found in the literature [9,10].

Performance of [Cd(CN)4]2− removal The experiments on CN− and Cd2+ removal by PACC or NaClO were conducted using the jar test, which was performed using a six-paddle stirrer. The concentration of CN− assessed in this study was 0.18-2.8 mmol/L, which simulated the practical water quality of industrial effluent [2]. The concentration of Cd2+ in this study was determined according to the concentration of CN−, since CN− is the ligand of Cd2+. The procedure of the jar test consisted of a rapid mix at 250 rpm, a slow mix at 40 rpm, and a 30 min settling period. After settling for 30 min, supernatants were sampled and filtered through a 0.45 μm pore size membrane filter. PACC or NaClO was added into the water samples at the beginning of the rapid mix period. The filtrates were tested for cyanate (CNO−) concentration using an ion chromatograph (Dionex, ICS-2000, USA) and for Cd2+ concentration using ICP-OES (PerkinElmer, Optima 2000, UK).
Before PACC dosing, a predetermined amount of 0.2 mol/L NaOH or 0.05 mol/L HCl solution was added into the water samples to approximately attain the expected pH value. After dosing, the water pH was accurately regulated to the expected value during the rapid mix period by adding HCl or NaOH solution, after which the water pH was constant during the subsequent oxidation and coagulation process.

Stoichiometry The stoichiometries of CN− and CNO− oxidation by PACC were examined by analysis of the formed and residual CNO−, respectively. The reactions were conducted using a magnetic stirrer. The reaction times for CN− and CNO− oxidation by PACC were 30 min and 1 h, respectively. A certain amount of water was sampled at the end of the reactions for CNO− analysis. Before reaction, water pH values were regulated at either 11 or 8.5 by adding 0.2 mol/L NaOH or 0.05 mol/L HCl solution. Water pH was not adjusted during the reaction process. The PACC dosages were gradually increased, while the CN− and CNO− concentrations were fixed at 0.34 mmol/L and 0.48 mmol/L, respectively.

Stopped-flow kinetics Apparent rate constants at various pH values for the oxidation of CN− and CNO− by PACC were determined using an Applied Photophysics SX20 stopped-flow spectrophotometer. Kinetic studies were carried out under pseudo-first-order conditions at 25 °C. The concentrations of CN− or CNO− were kept in excess of active chlorine by at least 1 order of magnitude. Active chlorine absorbance at 292 nm was followed as a function of time to determine rate constants. In all experiments, PACC and [Cd(CN)4]2− solutions were buffered by 0.1 M phosphate to attain the desired pH.

Stoichiometric study A complete CN− treatment by the alkaline chlorination-oxidation method should be carried out in two stages [14,15]. The first stage is the conversion of CN− to CNO− in a strongly alkaline environment, and the second stage involves the transformation of CNO− to nitrogen and carbonates under mildly alkaline conditions. The stoichiometries of CN− and CNO− oxidation by PACC were determined at pH 11 and pH 8.5, respectively. When the initial active chlorine in PACC increased from 0 to 0.34 mmol/L, CNO− increased linearly with a slope of 0.96 (Fig. 1). When the initial active chlorine concentration was greater than 0.34 mmol/L, the formed CNO− was constant at 0.34 mmol/L, which was equal to the initial concentration of CN−. These results indicated that the conversion of CN− to CNO− by the oxidation of active chlorine followed a stoichiometric ratio (SR) of 1.04 mol Cl2/mol CN−, which was very close to the theoretical SR [1 mol Cl2/mol CN−] according to the reaction of CN− with hypochlorite (Eq. (1)). An increase of PACC dosage resulted in a decrease of residual CNO− that followed a linear relationship (Fig. 2). The slope of the linear fit gave the stoichiometry for the reaction of CNO− with active chlorine as −0.64, which is approximately consistent with the theoretical SR [1.5 mol Cl2/mol CNO−] according to the reaction of CNO− with hypochlorite (Eq. (2)). The final products of CN− oxidation are nitrogen and bicarbonate. Considering the two stages, the proposed net reaction of [Cd(CN)4]2− with active chlorine in PACC is as in Eq. (3), which indicates that the SR is 2.5 mol Cl2/mol CN− for complete oxidation of CN− by PACC.

3.2. Determination of optimum working conditions pH The reaction rates of CN− and CNO− with active chlorine were determined using a stopped-flow spectrophotometer at different pH conditions.
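The reactions whose rates are probed here are the same two oxidation steps characterized in the stoichiometric study above. Consistent with the reported ratios (about 1 mol Cl2 per mol CN− for the first stage and about 1.5 mol Cl2 per mol CNO− for the second), Eqs. (1) and (2) presumably correspond to the standard alkaline-chlorination reactions; the forms below are a reconstruction for orientation, not a quotation of the paper's own equations:

CN− + OCl− → CNO− + Cl−  (first stage, ≈1 mol Cl2 per mol CN−)

2 CNO− + 3 OCl− + H2O → N2 + 2 HCO3− + 3 Cl−  (second stage, ≈1.5 mol Cl2 per mol CNO−)

Summing the two steps gives 1 + 1.5 = 2.5 mol Cl2 per mol CN−, which matches the overall SR attributed to Eq. (3).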
The rate expression for the reactions can be written in pseudo-first-order form as −d[active chlorine]/dt = kobs[active chlorine], where kobs (determined by model fitting of experimental kinetic data) represents the apparent first-order rate constant at a particular pH value. Fig. 3, which shows the magnitude of kobs at various pH values, illustrates that the reaction rates of PACC with CN− and CNO− decrease with increasing pH value. In an alkaline pH environment, the formed CNCl undergoes rapid hydrolysis according to the reaction of Eq. (6), CNCl + 2OH− → CNO− + Cl− + H2O. The kobs values in Fig. 3 at different pH actually reflect the reaction rates of the conversion of CN− to CNCl, and show that PACC reacts very rapidly with CN−. For example, kobs is 300.42 s−1 at pH 11, corresponding to a t1/2,CN− of 2.31 ms. A previous kinetics study [16] indicated that the half-lives of CNCl at pH 11 and 9 were estimated to be 1.31 and 131 min, respectively. Therefore, the decisive process for the rate of the reaction in Eq. (1) is the hydrolysis of the formed CNCl (Eq. (6)). The rate of CNCl hydrolysis is positively correlated with pH value. The time for 99.9% conversion takes about 13 min at pH 11 [14,16]. Considering the facilitation of CN− destruction and the cost of pH adjustment, pH 11 was selected as the optimum pH condition for the first stage of [Cd(CN)4]2− treatment by PACC.

The effect of pH on Cd2+ removal by PACC coagulation was examined under mildly alkaline conditions (Fig. 4). The result indicates that high pH facilitates Cd2+ removal. This may be attributed to the lower solubility of Cd2+ at high pH, which favors the generation of Cd(OH)2(s), which has a stronger affinity for the surface of hydrolyzed aluminum flocs. It has been demonstrated that the Al13 species is very stable and is the predominant Al species during the coagulation process of Al13-rich polyaluminum chloride (PACl), even at alkaline condition [17,18]. In addition, Al13-aggregates are the main component of the hydroxide flocs of Al13-rich PACl [19]. Thereby the Al13 polymer is the most active species responsible for removal of the liberated Cd2+ by PACC coagulation. On the other hand, from Fig. 3, lowering pH in the alkaline region increased the reaction rate of active chlorine in PACC with CNO−, which is mainly due to the increase in the concentration of hypochlorous acid, which is a more powerful oxidant compared with hypochlorite [15]. Considering the facilitation of CNO− decomposition and Cd2+ removal together, pH 8.5 was selected as the optimum pH condition for the second stage of [Cd(CN)4]2− treatment by PACC, because it could not only meet the required pH environment for CNO− oxidation by active chlorine but also provide a suitable pH condition for Cd2+ coagulation by the Al13 polymer.

Reaction time and dosage The effect of reaction time on the complete oxidation of CN− by PACC was investigated under a one-time dosage strategy applied at the start of the first stage. The conversion of CN− to CNO− with time is not presented here, since the CNCl hydrolysis reaction (Eq. (6)) cannot be terminated. CNO− decomposition as a function of reaction time was monitored (Fig. 5). According to the last section and prior studies, 13 min was used as the reaction time for the first stage, after which 2 min was used to regulate an appropriate pH for the second stage. The amount of PACC dosage was determined according to the SR (Eq. (3)), i.e. complete oxidation of CN− by active chlorine.
Fig. 5 shows that the rate of CNO− decomposition increases with increasing active chlorine concentration, which is consistent with previous results [14]. Complete CNO− decomposition needed about 150 min when an active chlorine dosage equal to the theoretical requirement was used, while with an active chlorine dosage of 130% of the theoretical requirement a retention time of about 30 min was enough for complete oxidation. It was reported that overdosing of active chlorine could increase the conversion of CNO− to nitrate [20]. An active chlorine dosage of 130% of the theoretical requirement and a 30 min reaction time for CNO− decomposition therefore appear to be optimum. The effect of PACC dose on Cd2+ removal by coagulation was studied at pH 8.5. As shown in Fig. 6, with PACC doses greater than 20 mg Al/L, Cd2+ removal was maintained at about 93%, which is the maximal efficiency. As PACC doses increased from 0 to 20 mg Al/L, Cd2+ was removed linearly with a gradient of approximately 0.25. These results suggest that for the treatment of 1 mg/L Cd2+, the minimal dosage required to remove Cd2+ at the maximal efficiency by PACC was approximately 4 mg Al/L at pH 8.5. In view of the treatment of [Cd(CN)4]2− by PACC under the optimal working conditions, the stoichiometric weight ratio of Cl2/Al in PACC was approximately 2, which is the optimal Cl2/Al weight ratio of PACC for [Cd(CN)4]2− removal. We prepared PACC2 with a Cl2/Al weight ratio of 2, which was used to further investigate the performance on [Cd(CN)4]2− removal.

Process and performance of [Cd(CN)4]2− removal The performance of [Cd(CN)4]2− removal by PACC is closely related to dosage, pH and reaction time. According to the above results, the process of [Cd(CN)4]2− removal by PACC is proposed in Fig. 7. The treatment process involves the two stages, whose pH values are 11 and 8.5, respectively; after treatment and solid-liquid separation, the effluent may meet the requirements of water treatment for CN− and Cd2+. A special experiment was conducted to evaluate the performance of [Cd(CN)4]2− removal by PACC2 at the optimal working conditions. NaClO was used to make a comparison with PACC2. As shown in Fig. 8, CNO− was produced and accumulated continuously before the dosage of 12.8 mg Cl2/L, which is the SR value of active chlorine required to oxidize CN− to CNO−. The maximal concentration of formed CNO− was about 7.6 mg/L. Thereafter, the CNO− concentration decreased with the increase of active chlorine dosage, because the formed CNO− was further oxidized into nitrogen and bicarbonate. When the active chlorine dosage was greater than 40 mg Cl2/L, the CNO− concentration was near zero. This indicated that CN− was completely oxidized by active chlorine. The trend of CNO− concentration with addition of PACC2 was very similar to that of NaClO. However, there was a significant difference in removing Cd2+ between PACC and NaClO. PACC showed a much higher ability to remove Cd2+ due to the function of Al13 coagulation. The Cd2+ concentration decreased with increase of PACC2 dosage. After CN− destruction by active chlorine in PACC2, Cd2+ was released from the [Cd(CN)4]2− complex and subsequently removed by Al13 coagulation.

Conclusions With a high content of active chlorine and Al13 polymer, PACC is very effective for the treatment of [Cd(CN)4]2−. CN− and Cd2+ can be simultaneously removed due to the combined function of oxidation and coagulation. The SR is 2.5 mol Cl2/mol CN− for the complete oxidation of CN− by PACC.
Two stages should be carried out for complete CN⁻ oxidation and effective Cd²⁺ coagulation. The first stage involves the conversion of CN⁻ to CNO⁻ at pH 11, and the second stage the oxidation of CNO⁻ to the final products and the coagulation of the liberated Cd²⁺ at pH 8.5. An active chlorine dose of 130% of the theoretical requirement for CN⁻ decomposition appears to be optimum. Under the optimal pH and dosage conditions, the reaction times for the first and second stages should be 13 min and 30 min, respectively. Under the optimal working conditions, the stoichiometric weight ratio of Cl₂/Al in PACC is 2 for the treatment of [Cd(CN)₄]²⁻.
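The SR of 2.5 mol Cl₂/mol CN⁻ is consistent with the standard alkaline-chlorination pathway (a sketch added here for clarity; the specific equation numbering in the original may differ):

    CN⁻ + Cl₂ → CNCl + Cl⁻                               (1 mol Cl₂ per CN⁻)
    CNCl + 2OH⁻ → CNO⁻ + Cl⁻ + H₂O                       (hydrolysis, no Cl₂ consumed)
    2CNO⁻ + 3Cl₂ + 6OH⁻ → 2HCO₃⁻ + N₂ + 6Cl⁻ + 2H₂O      (1.5 mol Cl₂ per CNO⁻)

Summing the chlorine demand gives 1 + 1.5 = 2.5 mol Cl₂ per mol CN⁻, matching the SR, with nitrogen and bicarbonate as final products as stated above.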
2019-04-06T00:43:52.584Z
2015-02-15T00:00:00.000
{ "year": 2015, "sha1": "a6b91bee6598f7a82768be442eadc687e6659dd4", "oa_license": "CCBYNCSA", "oa_url": "https://ir.rcees.ac.cn/bitstream/311016/32760/1/Effective%20treatment%20of%20cadmium%E2%80%93cyanide%20complex%20by%20a%20reagent%20with%20combined%20function%20of%20oxidation%20and%20coagulation.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "4b65860aa79a153e10a18e143ae58b64dea8cf8c", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
263920038
pes2o/s2orc
v3-fos-license
Panitumumab Induced Forearm Panniculitis in Two Women With Metastatic Colon Cancer

Background: Panitumumab is an EGFR inhibitor used for the treatment of metastatic colorectal cancer (mCRC), although its use is associated with skin toxicity. Case Presentation: We report the development of forearm panniculitis in two women during treatment with Panitumumab (6 mg/kg intravenous every 2 weeks) + FOLFOX-6 (leucovorin, 5-fluorouracil, and oxaliplatin at higher dosage) for mCRC. Results: In both patients, clinical, laboratory and radiological evaluation documented the presence of a local panniculitis, probably related to panitumumab (Naranjo score: 6). Panitumumab discontinuation and antimicrobial + corticosteroid treatment induced remission of the skin manifestations. Conclusion: We report for the first time the development of panniculitis during Panitumumab treatment, and we document that treatment with beta-lactams combined with either fluoroquinolones or an oxazolidinone, in the presence of a corticosteroid, improves clinical symptoms in young patients with mCRC without the development of adverse drug reactions or drug–drug interactions.

INTRODUCTION

Dermatologic toxicities represent a relevant problem during any drug treatment; they significantly impact patients' quality of life (QoL), reducing compliance and clinical outcomes [1-3]. Skin toxicities are commonly described during treatment with epidermal growth factor receptor (EGFR) inhibitors. Panitumumab is an EGFR inhibitor used for the treatment of metastatic colorectal cancer (mCRC). In particular, the addition of Panitumumab to FOLFOX (leucovorin 200 mg/m² IV infusion, 5-fluorouracil 600 mg/m² IV infusion and oxaliplatin 85 mg/m² IV infusion), FOLFOX4 (FOLFOX + 5-fluorouracil 400 mg/m² bolus), or FOLFOX6 (FOLFOX4 at higher dosage) as first-line treatment for both RAS (rat sarcoma viral oncogene homolog) wild-type and KRAS (Kirsten rat sarcoma viral oncogene homolog) mCRC significantly improved overall survival compared to FOLFOX alone [4] and to FOLFOX plus bevacizumab [5]. Unfortunately, skin rash, papules and pustules on the face, scalp, and trunk, with or without pain, have been described within the first 3 weeks of treatment with EGFR inhibitors [6-8], although these adverse drug reactions could be an indicator of a biological effect [8]. Herein, we report two patients who developed severe panniculitis during panitumumab treatment, successfully treated with empirical antibiotic therapy + corticosteroid.

*Address correspondence to this author at the Department of Health Sciences, Clinical Pharmacology and Pharmacovigilance Operative Unit, MaterDomini Hospital, University of Catanzaro, Via T Campanella 115 - 88100 Catanzaro, Italy; Tel: +390961712322; Fax: +390961774424; E-mail: gallelli@unicz.it

Case 1

A 44-year-old woman, with a clinical history of descending colon cancer (stage IV) with liver and peritoneal metastases, received anticancer treatment with FOLFOX-6 + Panitumumab (standard dosage: 6 mg/kg intravenous every 2 weeks) from April 2017 to November 2017, and then 5-Fluorouracil + Panitumumab until May 2018. In May 2018, the patient presented with fever (39°C) and severe inflammation of the left forearm (Fig. 1).
Clinical evaluation revealed the presence of edema, rubor and severe pain (VAS score 8/10) of the left forearm, while laboratory tests showed a significant increase in both the neutrophil count (25,500 cells/mm³; normal range 2,500–7,700) and procalcitonin (PCT 5.79 ng/mL; normal range <0.5 ng/mL); blood culture failed to detect any bloodstream infection. Forearm ultrasound documented an area of thickening with blurred margins and marked structural disruption of the subcutaneous adipose panniculus, confirmed by Magnetic Resonance Imaging (MRI), which also showed edema of both the subcutaneous adipose matrix and the fibrous septa over the whole forearm on the ulnar side, without involvement of the underlying muscle (Figs. 2 and 3). A diagnosis of panniculitis was postulated, and the Naranjo probability scale [9] documented a possible association between Panitumumab and panniculitis (score 6).

Panitumumab was discontinued, and empirical antibiotic treatment with Linezolid (600 mg bid) and Ceftriaxone (1 g bid) plus a corticosteroid (betamethasone, 2 mg bid) was started, with an improvement of clinical symptoms in 3 days (fever 36.5°C, PCT <0.5 ng/mL). Betamethasone was stopped, and about 3 days later a new clinical and laboratory evaluation documented a neutrophilic leukocytosis (16,000 cells/mm³); therefore ceftriaxone was discontinued, meropenem (2 g tid) was started for 10 days, and linezolid was then changed to tedizolid (200 mg/day), with complete remission of both clinical symptoms and radiological signs in about 2 months and without the development of adverse drug reactions or drug–drug interactions. About 1 month later, Panitumumab was re-introduced into the treatment, and a new follow-up in January 2019 did not show any further adverse drug reactions.

Case 2

A 49-year-old woman, with a clinical history of descending colon cancer (stage IV) with liver metastases, started treatment with FOLFOX-6 + Panitumumab in April 2018; 1 month later (after the second administration of Panitumumab) she developed a skin rash on the face that led, about 2 months later (after the fifth administration), to discontinuation of the biological drug. In June 2018, the patient developed panniculitis of the left forearm with olecranon bursitis. Clinical evaluation revealed the absence of fever (36.5°C), and laboratory findings were within the normal range (neutrophils 6,000 cells/mm³; normal range 2,500–7,700; PCT <0.5 ng/mL). The Naranjo probability scale [9] documented a possible association between Panitumumab and the forearm disease (score 6); therefore empirical antibiotic treatment with Ceftriaxone (1 g bid) and ciprofloxacin (500 mg bid) plus a corticosteroid (methylprednisolone, 4 mg/day for 4 days) was started, with complete improvement of signs and symptoms in 2 weeks and without the development of adverse drug reactions or drug–drug interactions (Fig. 4).

DISCUSSION

In this study, we report the development of panniculitis in two women with metastatic colorectal cancer. Previously, several authors have reported that many factors (e.g. drugs, allergy, animal bites, parasites) are able to induce skin manifestations [10-14]. In our cases, the clinical history suggested that Panitumumab probably played a role in these skin manifestations. Bergman et al. [15], in a retrospective study, documented that 32 of 34 patients treated with panitumumab developed a skin rash requiring antimicrobial treatment, documenting an association between the drug and the adverse drug reaction.
Even though the specific mechanism of skin toxicity related to EGFR inhibitors has not been fully demonstrated, some authors have suggested that it could be related to the inhibition of EGFR in the basal lamina, which induces local inflammation with the release of chemokines and leukocyte recruitment, leading to keratinocyte apoptosis and skin damage [16,17]. In an experimental study, Liu et al. [18] documented that erlotinib hydrochloride induced skin toxicity progressing from skin irritation to scleroderma, related to the inhibition of dermal EGFR with the development of skin inflammation and the release of secondary inflammatory mediators (e.g. IL-10, IL-2, IL-6, TNF-α, and IL12A) leading to skin toxicity.

In agreement with our previous studies [19-22], using the Naranjo probability scale we documented a possible association between severe panniculitis and panitumumab in two women with mCRC (Naranjo score 6) that required treatment with corticosteroids and empirical antimicrobial drugs. The management of skin manifestations during treatment with EGFR inhibitors is not fully standardized; however, several recommendations based on small studies or case reports suggest treatment with hydrocortisone 1% plus doxycycline (100 mg) twice a day for the first 6 weeks (level II evidence) [23-25]. In contrast, in the present study, considering the clinical characteristics of the patients (metastatic cancer and immune depression), we did not use tetracycline + topical corticosteroid but preferred a more aggressive treatment with systemic corticosteroid + linezolid/ceftriaxone in one patient and systemic corticosteroid + ceftriaxone/ciprofloxacin in the other, with an improvement of symptoms.

This study has some limitations, related to the type of study (case report) and to the absence of skin biopsy. However, it confirms that the development of skin toxicity represents a relevant problem during treatment with EGFR inhibitors and that treatment with corticosteroid and antimicrobials is able to improve clinical symptoms. In our institution, we recently performed a study identifying polymorphic variants associated with erlotinib-related skin toxicity that could be used to predict this severe adverse event in patients treated with anti-EGFR agents [26].

CONCLUSION

In conclusion, we report for the first time the development of panniculitis during treatment with Panitumumab, and we document that beta-lactams with fluoroquinolones or with an oxazolidinone may be useful to improve symptoms in young patients with mCRC without the development of adverse drug reactions or drug interactions.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

Not applicable.

HUMAN AND ANIMAL RIGHTS

Not applicable.

CONSENT FOR PUBLICATION

Written informed consent was obtained from both patients for this study.

STANDARD FOR REPORTING

The CARE guidelines and methodologies were followed in this study.

Fig. (1). Panniculitis in the first woman at admission, showing a large area of erythema and lymphangitis.

Fig. (2). Ultrasound of the forearm, showing inhomogeneity of the sub-cutis with tissue edema and marked structural disruption of the subcutaneous adipose panniculus.

Fig. (4). Panniculitis in the second woman at admission, showing a large area of erythema and olecranon bursitis.
2019-05-23T13:02:50.530Z
2019-09-17T00:00:00.000
{ "year": 2019, "sha1": "da56aef5265dd569131c19e20ef3e3c244d9ed7a", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc6864607?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "acd175361b3e7b44d6f8ee04c42c238750942607", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4410198
pes2o/s2orc
v3-fos-license
Optimum size of a calibration phantom for x-ray CT to convert the Hounsfield units to stopping power ratios in charged particle therapy treatment planning Abstract In charged-particle therapy treatment planning, the volumetric distribution of stopping power ratios (SPRs) of body tissues relative to water is used for patient dose calculation. The distribution is conventionally obtained from computed tomography (CT) images of a patient using predetermined conversion functions from the CT numbers to the SPRs. One of the biggest uncertainty sources of patient SPR estimation is insufficient correction of beam hardening arising from the mismatch between the size of the patient cross section and the calibration phantom for producing the conversion functions. The uncertainty would be minimized by selecting a suitable size for the cylindrical water calibration phantom, referred to as an ‘effective size’ of the patient cross section, Leffective. We investigated the Leffective for pelvis, abdomen, thorax, and head and neck regions by simulating an ideal CT system using volumetric models of the reference male and female phantoms. The Leffective values were 23.3, 20.3, 22.7 and 18.8 cm for the pelvis, abdomen, thorax, and head and neck regions, respectively, and the Leffective for whole body was 21.0 cm. Using the conversion function for a 21.0-cm-diameter cylindrical water phantom, we could reduce the root mean square deviation of the SPRs and their mean deviation to ≤0.011 and ≤0.001, respectively, in the whole body. Accordingly, for simplicity, the effective size of 21.0 cm can be used for the whole body, irrespective of body-part regions for treatment planning in clinical practice. INTRODUCTION In charged-particle therapy treatment planning, the accurate prediction of particle range in patients is essential for conformal dose delivery to the target. Particle range is determined by integrating the stopping power ratios (SPRs) of body tissues relative to water along the beam path in a patient. The volumetric distribution of the SPRs in a patient is conventionally obtained from the x-ray computed tomography (CT) data, using a predetermined polyline relationship between CT number and SPR of the body tissues, referred to as the CT number-to-SPR conversion function [1][2][3]. Uncertainties of SPR estimation can induce range uncertainties of up to 3.5% [4,5], which in current clinical practice are considered by adjusting the corresponding distal and proximal margins to the target. Yang et al. [6] grouped the uncertainties in the SPR estimation into several categories according to their sources and estimated their degrees for (lung tissues, soft tissues, bone tissues): uncertainties in patient CT imaging (3.3%, 0.6%, 1.5%), uncertainties related to the CT-number-to-SPR conversion functions (3.8%, 1.4%, 1.7%), uncertainties in mean excitation energies (0.2%, 0.2%, 0.6%) and uncertainties due to the energy dependence of SPR not commonly accounted for by a dose algorithm (0.2%, 0.2%, 0.4%). To fully exploit the advantages of charged-particle therapy, these uncertainties should ultimately all be minimized, no matter how large or small they actually are. In this study, we focused on the uncertainties in patient CT imaging mainly caused by the acquisition of the CT numbers themselves. The CT number is directly related to the linear attenuation coefficient of the object for x-rays, and is usually calibrated to 0 for water and −1000 for air. 
The CT number is strongly affected by the x-ray energy spectrum at the point of measurement. The initial energy spectrum depends on scanner properties such as tube voltage, target material, filter, and detector sensitivity. The spectrum varies in the materials traversed by the x-rays up to the point of interest due to energy-dependent attenuation, namely beam hardening. The variation in CT number between scanners can be handled by creating a CT-number-to-SPR conversion function specifically for each scanning condition of each scanner. The variation in CT number due to beam hardening is considered to be minimized by creating the CT-number-to-SPR conversion functions using x-ray calibration phantoms with typical body sizes, e.g. 10–40 cm [6-8]. However, the mismatch between the size of the object and the calibration phantom induces non-negligible effects on the SPR of body tissues. Schaffner and Pedroni [9] reported that the CT-number variation of bone tissues between calibration phantoms of 15 cm and 30 cm diameter leads to an uncertainty in SPR of 1.5%. Yang et al. [6] reported that CT-number variations in bone and lung tissues between phantoms of 16 cm and 32 cm diameter lead to uncertainties of 1.9% and 2.6%, respectively. These studies, however, were based solely on CT-number measurements of tissue substitutes in cylindrical calibration phantoms. The impact of beam hardening on the uncertainty of SPR estimation in a patient has never been investigated using a realistic patient geometry. In addition, the optimum size of the x-ray calibration phantom for minimizing the uncertainty has not been determined or proposed. The optimum phantom size, in sum, represents an 'effective size' of the patient cross section for patient SPR estimation. In this study, we evaluated the uncertainty of SPR estimation due to beam hardening, and investigated the effective patient-cross-section size for patient SPR estimation in charged-particle therapy using an ideal CT system and realistic human-tissue computational phantoms.

X-ray CT

We modeled an ideal CT scanner consisting of an x-ray source generating parallel broad beams and a detector array with 100% detection efficiency and perfect antiscatter grids. In addition, CT scanning was simulated based on a theoretical x-ray spectrum with an infinite imaging dose. The reconstructed CT image was not affected by factors such as statistical noise, electronic noise or scatter, so we could focus our investigation on the error in SPR estimation due to beam-hardening effects. The x-ray energy spectrum of the CT scanner, Φ(E), was generated using the SpecCalc x-ray generator program [10]. For the generation, we used the following parameters: a normal tube voltage of 120 kVp with variations of ±10 kVp, tungsten as the target material, a 7° anode angle and a 7.4-mm-thick aluminum filter. The generated energy spectra of the CT scanner are shown in Fig. 1. The simulated detector signal of the incident x-rays without an object was calculated as

    I₀ = ∫₀^{E_max} Φ(E) E dE,   (1)

where E_max is the maximum energy of the x-rays, i.e. 120 ± 10 keV. The measured signal of the x-rays transmitted through an object that enters the detector was calculated as

    I = ∫₀^{E_max} Φ(E) E exp(−∫₀^l μ(E, t) dt) dE,   (2)

where μ(E, t) is the linear attenuation coefficient of the object at a position t along the projection line with length l.
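A one-line calculation makes the beam-hardening problem concrete (our own illustration; the water-equivalent correction that addresses it is described next). For a homogeneous slab of thickness l, equations (1) and (2) give

    −ln(I/I₀) = −ln[ ∫₀^{E_max} Φ(E) E e^{−μ(E) l} dE / ∫₀^{E_max} Φ(E) E dE ],

which is not proportional to l: as l grows, low-energy photons are preferentially absorbed, the transmitted spectrum hardens, and the effective attenuation per unit length falls. Mapping each measured value to the monochromatic projection of equal attenuation, as described below, restores proportionality for water-like materials.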
Projection of the object by the x-rays in a given direction was defined as

    λ_P = ln(I₀/I).   (3)

To eliminate the effect of beam hardening by the polychromatic x-rays, λ_P of each projection line was corrected to the projection λ_M of a monochromatic x-ray with equivalent attenuation, using a look-up table describing water-equivalent thickness [11]. Tomographic image reconstruction from a sinogram of scanned and corrected projections gave the mean attenuation coefficient ⟨μ⟩ of the object over the incident energy spectrum. In this study, projections λ_M were generated in 1° steps, and a 2D filtered backprojection algorithm with a ramp filter was used for the reconstruction. The values of ⟨μ⟩ were converted into CT numbers H in Hounsfield units (HU):

    H = 1000 × (⟨μ⟩ − ⟨μ⟩_water) / (⟨μ⟩_water − ⟨μ⟩_air),   (4)

where ⟨μ⟩_water and ⟨μ⟩_air are the mean attenuation coefficients of water and air, respectively.

H-to-(S/S_w) conversion functions

To produce the CT-number-to-SPR conversion functions, H-to-(S/S_w), we followed the procedure reported by Kanematsu et al. [1], based on the standard tissue data in International Commission on Radiological Protection (ICRP) Publication 110 [12]. They defined 11 representative tissues of the human body with mass density ρ and elemental weights w, and compiled their SPRs. When the CT numbers H of the 11 tissues are known, then, based on the hypothesis that an arbitrary tissue is a binary mixture of the adjacent representative tissues with higher and lower H, the SPR of the arbitrary tissue can be derived from its CT number H by polyline interpolation. To determine the CT numbers H of the 11 tissues, we simulated cylindrical water phantoms of diameter L with a 2.5-cm diameter insert of one of these tissue materials. The energy-dependent attenuation coefficients μ of water, air, and the tissue materials were determined by interpolation from XCOM, a database of photon cross sections provided by the National Institute of Standards and Technology (NIST) at http://physics.nist.gov/PhysRefData/Xcom/html/xcom1.html. The inserts at the center of the water phantoms were scanned by the CT scanner individually, and their CT numbers H were derived from the reconstructed CT images. For the CT-number calculation with equation (4), the values of ⟨μ⟩_water and ⟨μ⟩_air determined with a 25-cm diameter cylindrical water phantom were used throughout this study. The H-to-(S/S_w) conversion function for phantom size L was produced by interpolating the CT numbers H and SPRs of the 11 tissue materials. To confirm the accuracy of the produced H-to-(S/S_w) conversion functions, 53 standard body tissues with ρ and w listed in the ICRP report were applied to the simulation of CT scanning as inserts of water phantoms with L = 10, 30 and 50 cm. The H-to-(S/S_w) relations of the standard body tissues determined for the respective phantom sizes L were compared with the corresponding H-to-(S/S_w) conversion functions.

ICRP computational phantom

We used volumetric models of the reference male and female phantoms provided in ICRP Publication 110 [12]. The heights and masses of the models were (176 cm, 70 kg) and (167 cm, 59 kg), respectively. Their slice thicknesses (voxel heights) and voxel in-plane resolutions were (8.0 mm, 2.137 mm) and (4.84 mm, 1.775 mm). For better spatial resolution, in this study the voxel in-plane resolutions were reduced to one-third of the original values, i.e. 0.7123 mm and 0.5917 mm, respectively.
Since charged-particle therapy is applied to a variety of tumors [13], the phantom data for the pelvis, abdomen, thorax, and head and neck regions were included in the analysis, as shown in Fig. 4a. The phantom arms were removed to simulate the arm abduction used in treatment.

Reference SPR

The volumetric models of the male and female phantoms were converted to SPR distributions by the following steps. The electron density n_e of body tissues with given ρ and w was derived by

    n_e = (ρ/u) Σ_i w_i Z_i / A_{r,i},   (5)

where u = 931.5 MeV/c² is the atomic mass unit, and Z_i and A_{r,i} are the atomic number and relative atomic mass of element i, respectively. The SPR, S/S_w, of the tissues was calculated using the Bethe–Bloch equation, which can be approximated by

    S/S_w = (n_e/n_ew) × [ln(2 m_e c² β² / (1 − β²)) − ln I − β²] / [ln(2 m_e c² β² / (1 − β²)) − ln I_w − β²],   (6)

where n_ew = 3.343 × 10²³ cm⁻³ is the electron density of water, m_e c² = 0.511 MeV is the electron rest energy, β is the particle velocity relative to the speed of light in a vacuum, and the mean excitation energy I of a tissue is derived from the elemental mean excitation energies I_i of solid or liquid compounds [14]. This led to I_w = 75.3 eV for water [15]. In equation (6), we used a fixed value of β² = 0.135, which minimized the range errors in a patient for proton radiotherapy [16]. We refer to the SPR calculated by equation (6) as the 'reference SPR', symbolized (S/S_w)_ref hereafter.

Predicted SPR

The volumetric models of the male and female phantoms were scanned by the CT scanner slice by slice. The energy-dependent attenuation coefficients μ of the 53 tissue materials were determined with XCOM. The reconstructed CT images, represented in HU, were then converted to SPR maps using the H-to-(S/S_w) conversion function for a phantom size L. We refer to the SPR derived in this way as the 'predicted SPR', symbolized (S/S_w)_prd hereafter.

SPR error analysis

From the reference and predicted SPR distributions of the male and female phantoms, the voxel-by-voxel deviation of the SPR, δ_S = (S/S_w)_prd − (S/S_w)_ref, was calculated. The root mean square deviation of the SPR, √⟨δ_S²⟩, was derived to investigate the absolute error in SPR estimation with the H-to-(S/S_w) conversion functions. The mean deviation of the SPR, ⟨δ_S⟩, was also derived to estimate the range error of proton beams in a patient.

Effective size of patient cross section for H-to-(S/S_w) conversion functions

We varied the diameter of the x-ray calibration phantom L from 6 cm to 50 cm in 1 cm steps, producing 45 H-to-(S/S_w) conversion functions. For each conversion function, the δ_S map was obtained on each slice. We derived the root mean square deviation √⟨δ_S²⟩ in each body-part region (pelvis, abdomen, thorax, and head and neck) of the male and female phantoms. The average value of √⟨δ_S²⟩ between the two phantoms was used to determine the effective size of the patient cross section for SPR estimation in each body-part region, L_effective. The effective size L_effective was also determined for the respective slice positions of the male and female phantoms. To investigate the slice-by-slice variation of L_effective across the body-part regions, the standard deviation of L_effective, σ_effective, was determined for the respective regions. The uncertainty in the determined L_effective in each body-part region due to variations in body size was evaluated by one-half of the difference between the effective size of the male phantom, L_male, and that of the female phantom, L_female, i.e. ΔL ≡ (L_male − L_female)/2. In addition, the uncertainty in the determined L_effective due to variations in the initial energy spectrum of the CT scanner was evaluated by the deviations of the effective phantom sizes for energy spectra with tube voltages of 110 kVp and 130 kVp from that with 120 kVp, ΔL_110 and ΔL_130.
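Before turning to the results, equation (6) can be sanity-checked numerically for water (our own back-of-envelope figures, using only the constants quoted above). With β² = 0.135 and I_w = 75.3 eV,

    2 m_e c² β² / (1 − β²) = 2 × 0.511 MeV × 0.135 / 0.865 ≈ 0.1595 MeV,

so the water stopping bracket is ln(159.5 keV / 75.3 eV) − β² ≈ ln(2118) − 0.135 ≈ 7.52. Because the mean excitation energy enters only logarithmically against this bracket of ≈7.5, even a 10% error in a tissue's I shifts its SPR by only about ln(1.1)/7.52 ≈ 1.3%, which is why the I-related uncertainties quoted in the introduction are small.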
H-to-(S/S_w) conversion functions

Figure 2 shows the H-to-(S/S_w) conversion functions produced with the x-ray calibration phantoms of L = 10, 30 and 50 cm. Although there was a small discrepancy around −150 HU for adipose tissues, the three conversion functions practically coincided with each other up to 100 HU for soft tissues. Beyond that, the functions deviated gradually with increasing HU. Since beam hardening up to the point of the insert was milder in the smaller phantom, the slope of the conversion function for L = 10 cm was gentler than the slopes for L = 30 or 50 cm. The SPRs at 1400 HU, i.e. for bone tissues, derived from the functions for L = 10, 30 and 50 cm were 1.61, 1.70 and 1.76, respectively. This implied that the error in the predicted SPR of bone tissues could be ±4% due to beam hardening, even if a moderate size of L = 30 cm was used for the conversion function. Further, the error in the predicted SPR could be ±6% for the teeth at ≈2700 HU. Correlations between H and S/S_w of the 53 ICRP body tissues determined with the x-ray calibration phantoms of L = 10, 30 and 50 cm are plotted as colored plus symbols in Fig. 2. The ICRP tissues are distributed around the polylines with an RMS below 0.009 in S/S_w for all phantom sizes L.

SPR error

On the thorax plane, δ_S was the smallest for the function with L = 10 cm, which was identified by the smallest value of √⟨δ_S²⟩. The largest value of √⟨δ_S²⟩ was observed for L = 50 cm. However, the absolute value of ⟨δ_S⟩ for L = 50 cm was smaller than that for L = 10 cm, since the overestimation of the SPR in bone tissues was well compensated by the underestimation in the adipose tissues of the breasts. On the abdominal plane, δ_S was equally small for the functions with L = 10 and 30 cm, while it was the smallest for the function with L = 30 cm on the pelvic plane. √⟨δ_S²⟩ and ⟨δ_S⟩ for each plane are given in the figure legends. Figure 4a shows the water-equivalent thickness distribution of the ICRP female phantom in the AP (PA) direction. Figures 4b and c show the variations of √⟨δ_S²⟩ and ⟨δ_S⟩ with slice position in the female phantom, respectively. At all slice positions, √⟨δ_S²⟩ was the largest for the conversion function with L = 50 cm. In the head and neck region, √⟨δ_S²⟩ amounted to 0.047 due to the high absorption of x-rays in teeth and bone tissues. For the same reason, ⟨δ_S⟩ was also large in the head and neck region, amounting to 0.019 at the slice positions including the teeth.

Effective size of patient cross section for H-to-(S/S_w) conversion functions

Figure 5 shows √⟨δ_S²⟩ within the whole body and the body-part regions of the male and female phantoms for different L. The average √⟨δ_S²⟩ between the two phantoms is also shown there. The variation of the average √⟨δ_S²⟩ differed between body-part regions; e.g. it reached 0.030 in the head and neck region for L = 50 cm, while it was ≤0.012 in the abdominal region for 6 cm ≤ L ≤ 50 cm. However, in all regions, the average √⟨δ_S²⟩ showed a convex shape with respect to L. The phantom size corresponding to the minimum root mean square deviation √⟨δ_S²⟩ was defined as the effective patient-cross-section size L_effective for constructing the H-to-(S/S_w) conversion function.
Table 1 shows L_effective corresponding to the minimum √⟨δ_S²⟩ for the whole body and the body-part regions. The values of √⟨δ_S²⟩ and ⟨δ_S⟩ realized with the effective sizes are shown in Fig. 4b and c, respectively. The conversion functions with the effective sizes considerably reduced the values of √⟨δ_S²⟩ and ⟨δ_S⟩, especially in the head and neck region. Figure 3(1e)−(4e) (the bottom-row images) show the axial δ_S distributions obtained with the H-to-(S/S_w) conversion functions with L_effective determined for the respective slice positions. The σ_effective quantifying the slice-by-slice variation of L_effective is also shown in Table 1. The x-ray attenuation in the lung tissues was insignificant due to their low mass density, while the attenuation in the shoulder blades was significant. The thorax region contained both of these tissues, inducing a large σ_effective, i.e. ≥6 cm, in this region, as shown in Fig. 4d. Figure 6 shows ⟨δ_S⟩ within the whole body and the body-part regions of the male and female phantoms, and their average. The variation of the average ⟨δ_S⟩ for 6 cm < L < 50 cm was ≤0.004 for all body-part regions except the head and neck region. The fraction of bone tissue is high in the head and neck region, which resulted in an overestimation of the SPR for large L, e.g. ⟨δ_S⟩ = 0.011 for L = 50 cm. In the other regions, the overestimation of the SPR in bone tissues was compensated by the underestimation in adipose tissues, resulting in a moderate value of ⟨δ_S⟩. The effective-size deviations due to variation in body size, ΔL, and due to variation in the initial energy spectrum of the CT scanner, ΔL_110 and ΔL_130, are shown in Table 1. The deviation ΔL in the thorax region was ≥4 cm due to the significant differences in shoulder width and breast volume between the male and female models. In contrast, the deviations ΔL_110 and ΔL_130 were insignificant compared with ΔL.

DISCUSSION

The effective size of the patient cross section for minimizing the uncertainty in patient SPR estimation was determined for charged-particle therapy treatment planning using the human-tissue computational phantoms provided by ICRP. The effective size was 21.0 cm for the whole body. Using the determined effective size, we were able to reduce the root mean square deviation of the SPR, √⟨δ_S²⟩, to ≤0.011, and the mean deviation, ⟨δ_S⟩, to ≤0.001 in the whole body. In the human body, we found that the overestimation (or underestimation) of the SPR in bone tissues is often compensated by the underestimation (or overestimation) in adipose tissues. The absolute value of ⟨δ_S⟩ realized by L_effective = 21.0 cm was in fact one order of magnitude smaller than the corresponding value of √⟨δ_S²⟩. The absolute value of ⟨δ_S⟩ was <0.001 in the pelvic region and <0.004 in the head and neck region with L_effective = 21.0 cm. These results indicate, for instance, that for a proton beam with a 25-cm water-equivalent length (WEL) range in the pelvic region and a proton beam with a 12-cm WEL range in the head and neck region, the expected mean range errors are <0.03 and <0.05 cm WEL, respectively. Schaffner and Pedroni [9] reported larger range errors of 0.33 cm WEL in a prostate patient case and 0.14 cm WEL in a brain patient case for proton beams with similar ranges, i.e. 25 and 12 cm WEL, where they derived the range errors by linearly adding the expected range uncertainties of soft tissue and bone, corresponding to the calculation of √⟨δ_S²⟩, for their patient cases.
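The quoted range errors follow from simple proportionality between the mean SPR deviation and the beam range (a check added here for clarity):

    25 cm × 0.001 = 0.025 cm WEL < 0.03 cm WEL (pelvis),
    12 cm × 0.004 = 0.048 cm WEL < 0.05 cm WEL (head and neck).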
The slice-by-slice variation in L_effective in each body-part region, quantified by σ_effective, was larger than the deviations ΔL, ΔL_110 and ΔL_130, as shown in Table 1. Further, σ_effective was larger than the variation in L_effective among the body-part regions. These results suggest that preparing H-to-(S/S_w) conversion functions separately for each body-part region is unnecessary for practical purposes. For simplicity, an x-ray calibration phantom with a fixed size of L = 21.0 cm should rather be used to produce the H-to-(S/S_w) conversion function, irrespective of the body-part region, by which √⟨δ_S²⟩ and ⟨δ_S⟩ are reasonably reduced in all body-part regions, as shown in Fig. 4.

At the beginning of this study, we expected that the impact of beam hardening on the uncertainty of SPR estimation in a patient could be minimized by selecting the optimum phantom sizes for calibrating the H-to-(S/S_w) conversion functions. However, even with the conversion function derived from the optimum calibration for each slice, the predicted SPRs deviated from the reference SPRs. This may be an intrinsic limitation of single-energy CT with polychromatic x-rays. H-to-(S/S_w) conversion functions constructed with representative tissues selected specifically for the body-part regions may potentially reduce the deviations. Beam-hardening correction based on phantoms with more realistic compositions and configurations may also reduce the deviations. Dual-energy CT for patient SPR estimation may be another method for reducing the deviations [17-19]. There are several sources of uncertainty in patient SPR estimation, and one of them is the insufficient correction of beam hardening arising from the mismatch between the size of the object and the calibration phantom investigated in this study. To fully exploit the advantages of charged-particle therapy, the remaining uncertainty sources should also be minimized.

CONCLUSION

One of the uncertainty sources in patient SPR estimation in charged-particle therapy treatment planning is insufficient correction of the beam-hardening effect arising from the mismatch between the size of the object and the calibration phantom used. The uncertainty can be minimized by selecting a suitably sized x-ray calibration phantom for constructing the H-to-(S/S_w) conversion function, namely the effective size of the patient cross section. We determined the effective size using an ideal CT system and realistic human-tissue computational phantoms provided by ICRP. The effective size was 21.0 cm, with which the root mean square deviation of the SPRs, √⟨δ_S²⟩, and the mean deviation, ⟨δ_S⟩, could be reduced to ≤0.011 and ≤0.001, respectively, in the whole body. The effective patient-cross-section size of 21.0 cm can be used for the whole patient body, irrespective of body-part regions, for treatment planning in clinical practice.
2018-04-03T02:02:04.843Z
2017-10-31T00:00:00.000
{ "year": 2017, "sha1": "25057a933def82f99ede9a1cb50de385e1b34391", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jrr/article-pdf/59/2/216/26356248/rrx059.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25057a933def82f99ede9a1cb50de385e1b34391", "s2fieldsofstudy": [ "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
502411
pes2o/s2orc
v3-fos-license
Compact Fusion

There are many advantages to writing functional programs in a compositional style, such as clarity and modularity. However, the intermediate data structures produced may mean that the resulting program is inefficient in terms of space. These may be removed using deforestation techniques, but whether the space performance is actually improved depends upon the structures being consumed in the same order that they are produced. In this paper we explore this problem for the case when the intermediate structure is a list, and present a solution. We then formalise the space behaviour of our solution by means of program transformation techniques and the use of abstract machines.

INTRODUCTION

Hylomorphisms [1] represent a common programming pattern of using an intermediate data structure, which is first built and then collapsed, to give a result. More formally, a hylomorphism is the composition of an unfold and a fold: the unfold uses a seed value to generate a data structure, and the fold takes this structure and collapses it in some way. The space efficiency of this composition may be improved by applying fusion techniques to eliminate the intermediate data structure. However, whether the space performance is actually improved depends on the fold being able to consume elements as they are generated. If this is not the case, then the result is the creation of the whole structure before any folding evaluation can take place, and the intermediate structure still effectively exists in the fused function.

Here we will illustrate this problem with some examples and show how using an accumulating fold, fold-left, improves the space performance. We then show how to formalise these space results, by using abstract machines to expose the underlying data structures, which can then be measured. The contributions are i) a new hylomorphism theorem that captures the idea of consuming elements as they are generated, and ii) a process for producing space results. To achieve the second contribution, we derive an abstract machine using program transformation techniques. Once we have such a machine we can produce a high-level function that measures space usage. All our examples are given in Haskell [2].

HYLOMORPHISMS

We will consider hylomorphisms where the intermediate data structure is a list; that is, the unfold function generates a list from a seed value, and the fold then consumes this list.

Unfold

The unfold function builds a list from an initial seed value. It takes three additional arguments: a predicate, p, to determine when to stop generating list elements, and two other functions, hd and tl, to make the head of the list and to modify the seed value to pass to the recursive call, which generates the rest of the list:

    unfold p hd tl x = if p x then [ ] else hd x : unfold p hd tl (tl x)

The resulting list is therefore of the form:

    unfold p hd tl x = [hd x, hd (tl x), hd (tl (tl x)), ...]
For example, we can define a function downFrom using unfold, which takes a natural number n and produces a list of all the numbers from n down to 1, where id and pred are the identity and predecessor functions:

    downFrom = unfold (≡ 0) id pred

Applying downFrom to the number 3 produces evaluation trace A in figure 1. We can use the shape of the trace to informally measure the space requirements in evaluating the expression. The expression size can be estimated by counting constructor symbols, and the space requirement for evaluating an expression is given by the maximum expression size generated during evaluation, since space may be re-used at each step. As we can see in the trace, the expression size reaches its maximum when the list has been completely generated, producing a list of length equal to the argument of downFrom. Evaluating downFrom therefore requires additional space proportional to its argument, and so has linear space requirements.

Fold-right

The standard fold operator for lists [3] takes two arguments, a binary operator (⊕) and a value v, replacing every list constructor (:) with (⊕) and putting v in place of the empty list [ ]. It is defined as follows:

    foldr (⊕) v [ ] = v
    foldr (⊕) v (x : xs) = x ⊕ foldr (⊕) v xs

For example, a list [a, b, c ] would be folded as:

    a ⊕ (b ⊕ (c ⊕ v))

Calculating the product of a list of numbers can be expressed by folding the multiplication operator over the list, and substituting the unit of multiplication in the empty list case:

    product = foldr (*) 1

Applying product to the list [3, 2, 1] gives evaluation trace B shown in figure 1, and takes space proportional to the length of the list. This fold is called fold-right because, as shown in the trace, after replacing each (:) with (*), the applications bracket to the right.

Hylomorphisms

A hylomorphism is the composition of an unfold with a fold, and is defined as follows:

    hylor p hd tl (⊕) v = foldr (⊕) v ∘ unfold p hd tl

We use the name hylor for this function, rather than the standard hylo, to emphasise that it is specified in terms of fold-right. Within the definition of hylor, a list is generated by the unfold function and passed to the fold, which consumes it. However, the well-known hylo theorem [1] states that the two functions may be fused together to eliminate this intermediate data structure. The hylomorphism theorem for lists is:

    hylor p hd tl (⊕) v x = if p x then v else hd x ⊕ hylor p hd tl (⊕) v (tl x)

Now we will look at an example hylomorphism and see how the space performance is affected by applying this theorem.
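The figure-1 traces are easy to reproduce by hand; the sketch below (our own reconstruction, written as Haskell comments) unwinds the definitions above for the argument 3:

    -- Trace A: downFrom 3 (the list grows until generation finishes)
    --   downFrom 3
    -- = 3 : unfold (≡ 0) id pred 2
    -- = 3 : 2 : unfold (≡ 0) id pred 1
    -- = 3 : 2 : 1 : unfold (≡ 0) id pred 0
    -- = 3 : 2 : 1 : [ ]                    -- maximum size: linear in the argument
    --
    -- Trace B: product [3, 2, 1] (the brackets pile up to the right)
    --   product [3, 2, 1]
    -- = 3 * foldr (*) 1 [2, 1]
    -- = 3 * (2 * foldr (*) 1 [1])
    -- = 3 * (2 * (1 * foldr (*) 1 [ ]))
    -- = 3 * (2 * (1 * 1))                  -- again linear before any (*) can fire
    -- = 6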
Example: factorial

The factorial of a natural number n can be calculated by taking the product of a list from n down to 1. We can therefore express the factorial function as the composition of the two functions product and downFrom:

    fact = product ∘ downFrom

This composition is a hylomorphism, since the downFrom function is an unfold and product is a fold, and so we can apply the hylor theorem, inlining the pred function, to give the following fused program:

    fact x = if x ≡ 0 then 1 else x * fact (x − 1)

The purpose of fusing the program is to eliminate the creation of the intermediate list. In this case the input and output are both integers, but a list is built in the process, so potentially we could perform the multiplication after each element of the list is generated and achieve evaluation in a constant amount of space. However, the unwinding of the fused definition of factorial, given in trace A of figure 2, shows that this isn't the case. The trace shows that all of the list elements do have to be generated before any multiplication can occur. Although there is not an explicit list, the structure is still there, with the list constructor replaced by the multiplication operator. Multiplication can only occur once the unfold has finished producing list elements, and the structure is then collapsed from the right.

The maximum expression size produced in the factorial example occurs when the list has been completely generated. Therefore, the amount of space required in evaluating the factorial of a number is directly proportional to that number, so it is linear and not the constant desired.

Impedance mismatch

The problem is the impedance mismatch between unfold and fold-right; the former generates the list elements in left-to-right order, but the latter consumes them in right-to-left order. The hylor theorem eliminates the overhead of constructing/destructing the intermediate list, but retains the impedance mismatch and hence gives poor space performance.

Fold-left

An alternative way to fold a list is to bracket the operator from the left:

    ((v ⊕ a) ⊕ b) ⊕ c

This version, called fold-left, uses an accumulator that is returned in the empty list case; in the non-empty case the accumulator is combined with the head of the list using the operator, and the updated accumulator is then passed on to fold the tail of the list. The definition of fold-left is:

    foldl (⊕) v [ ] = v
    foldl (⊕) v (x : xs) = foldl (⊕) (v ⊕ x) xs

Duality

A well-known duality property [4] is that when the operator (⊕) is associative and has the element e as its unit, foldr and foldl always give the same result. In fact, the opposite implication also holds, giving the following equivalence:

    foldr (⊕) e xs = foldl (⊕) e xs

In the case of the product function, (*) is associative and has 1 as its unit, so it can be re-expressed using fold-left:

    product = foldl (*) 1

Under Haskell's lazy evaluation strategy, the outermost redex is chosen to be evaluated first, so the recursive call is evaluated before the accumulator expression. This is illustrated in evaluation trace A of figure 3. To force evaluation of the multiplication first we can introduce a strictness annotation, $!. In the expression f $! x, the strictness annotation ensures that x is evaluated first, though only enough to check that it is not undefined (head-normal form), before f x is evaluated [4]. Fold-left can be modified using the strictness operator like so:

    foldl (⊕) v [ ] = v
    foldl (⊕) v (x : xs) = (foldl (⊕) $! (v ⊕ x)) xs

Re-expressing product using this foldl now means it is evaluated as in trace B in figure 3, with the evaluation of the multiplication now occurring before the recursive call.
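The difference between the two figure-3 traces can again be sketched directly from the definitions (our reconstruction, as comments):

    -- Trace A: lazy foldl builds a chain of thunks before any (*) fires
    --   foldl (*) 1 [3, 2, 1]
    -- = foldl (*) (1 * 3) [2, 1]
    -- = foldl (*) ((1 * 3) * 2) [1]
    -- = foldl (*) (((1 * 3) * 2) * 1) [ ]
    -- = ((1 * 3) * 2) * 1 = 6              -- the thunk grows linearly
    --
    -- Trace B: strict foldl evaluates the accumulator at each step
    --   foldl (*) 1 [3, 2, 1]
    -- = foldl (*) 3 [2, 1]                 -- 1 * 3 forced by $!
    -- = foldl (*) 6 [1]
    -- = foldl (*) 6 [ ]
    -- = 6                                  -- constant space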
Left hylomorphism

The corresponding hylomorphism theorem for fold-left is:

    foldl (⊕) v ∘ unfold p hd tl = hylol p hd tl (⊕) v
      where hylol p hd tl (⊕) v x = if p x then v else hylol p hd tl (⊕) (v ⊕ hd x) (tl x)

Although straightforward, to the best of our knowledge this operator has not been considered before.

Proof of left hylomorphism theorem

Structural induction cannot be used to prove that this definition satisfies the specification above, because there is nothing to do induction over; we do not know the structure of the seed value given to the unfold. There is also no structured result to do co-induction over. However, because both foldl and unfold are defined as fixpoints, and therefore hylol is a composition of two fixpoints, we can apply the "total fusion" theorem [5], which relates a function that is the composition of two fixpoints to a single fixpoint. We can prove the total fusion theorem using fixpoint induction [6]. The assumptions here are that types are complete partial orders (CPOs), which are sets with a partial ordering ⊑, a least element ⊥, and limits of all non-empty chains, and that programs are continuous functions, i.e. functions between CPOs that preserve the partial-order and limit structure.

Showing that the first conjunct is satisfied is trivial (⊥ ∘ ⊥ ≡ ⊥), so we proceed straight to verifying the second conjunct. This completes the proof, apart from showing that the predicate P is admissible (preserves limits of chains), which is immediate from the fact that any equality between continuous functions can be shown to be admissible, and that the composition of any two continuous functions is continuous.

To apply total fusion we first re-express unfold, foldl and hylol in terms of least fixpoints. The list and accumulator arguments are swapped over in the foldl and hylol functions, so that the list becomes the first argument; this makes it easier to compose the fold-left and unfold in the proof, in that the result of the unfold (a list) is the first argument to the fold-left. We can then prove the hylol theorem and verify the final equation directly. The functions unfold, foldl and hylol are only locally defined above and so contain free variables, but we use the names for clarity.

Example: left factorial

The factorial function can be re-expressed using the fold-left version of the product function:

    factl = foldl (*) 1 ∘ downFrom

Applying the left-hylomorphism theorem gives the following fused definition:

    factl v x = if x ≡ 0 then v else factl (v * x) (x − 1)

The resulting trace (B in figure 2) shows that the multiplication evaluation now happens as soon as the list elements are generated. The shape of the evaluation trace is different because the evaluation now occurs in constant space; only the additional space to hold the accumulator is required.

Calculating an accumulator version

It is interesting to consider whether a function produced by the hylor theorem can be turned into a space-efficient version by calculation. In general, an accumulator version g of a function f can be calculated, with an appropriate ⊗, using the specification:

    g a x = a ⊗ f x

In the factorial example, we can attempt to calculate an accumulating version:

    facta a x = a * fact x

The proof would proceed directly, but the next step would be to substitute facta a (x − 1) for (a * x) * fact (x − 1), and we cannot do this because there is no induction hypothesis. One could be created for this specific case by induction on natural numbers, but not for the general case of functions produced using the hylor rule. It is therefore not possible to produce an accumulator version in the general case from hylor; instead, this can be done by applying the hylol theorem.
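Trace B of figure 2 is again easy to reconstruct; the comments below (ours) show the fused left version running in constant space, with the accumulator forced at each step:

    --   factl 1 3
    -- = factl (1 * 3) 2  =  factl 3 2
    -- = factl (3 * 2) 1  =  factl 6 1
    -- = factl (6 * 1) 0  =  factl 6 0
    -- = 6                                  -- expression size stays bounded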
Strictness

The space performance of the original hylomorphism definition may in some cases still be constant. This occurs when the fold operator is non-strict in its second argument, i.e. it does not require the value of that argument to produce a result.

Example: prime

We can naively define a function that tests whether a number is prime by creating a list of the numbers from two up to the integer argument and checking that none of the list elements are divisors. Applying the hylor theorem gives a fused function, just as for factorial. In Haskell, the conjunction function ∧ is strict in its first argument, and non-strict in its second:

    True ∧ b = b
    False ∧ b = False

Using this definition of ∧, the evaluation trace for prime 9 has constant space requirements, because ∧ can be evaluated solely on the basis of the value of its first argument. If the conjunction were implemented differently, so that it was strict in both its arguments, then evaluation would occur as in the previous examples. The fold-left version of this function still has constant space requirements, though the time requirements are worse if the number isn't prime: because the fold-left always has a tail-recursive call, it can never exploit the laziness of ∧ when the first argument evaluates to False.

FORMALISING

We now seek to formalise the space performance results of the previous section. Inspired by our earlier work on measuring time performance [7], the approach here is to first transform the function whose space performance we wish to measure into an abstract machine that makes explicit how evaluation proceeds. This technique has been developed by Danvy et al [8] and has been applied in a calculational way by Hutton and Wright [9]. We then label the transitions of the machine with explicit space information, and reverse the transformation process to obtain a high-level function that measures the space behaviour of the original function. In the remainder of this section we show how this proceeds for the particular case of the hylor function.

Abstract machines

Let us start with the definition of the hylor function:

    hylor p hd tl (⊕) v x = if p x then v else hd x ⊕ hylor p hd tl (⊕) v (tl x)

The first step in the process of obtaining an abstract machine that implements this function is to make the control flow explicit, by transforming the function into continuation-passing style [10]. The next step is to replace the use of continuations by an explicit stack data structure, by applying the technique of defunctionalization [10]. We can then rewrite the resulting function in the form of transition rules for an abstract machine with two states — the state (x, c) corresponds to evaluating an expression using the function call h x c, and (c, v) to executing a stack using the function call exec c v. Finally, we also specify the evaluation order of the else branch within these rules, by introducing explicit let bindings with strict semantics. Further details of this approach to transforming a function to an abstract machine can be found in [8, 9].
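The intermediate steps elided above are standard; a plausible end product of the derivation is sketched below (our own reconstruction — the constructor names, apart from TOP, which reappears later, are ours):

    data Stack b = TOP | ARG b (Stack b)

    hylorMach p hd tl (⊕) v x = h x TOP
      where
        h x c = if p x then exec c v else h (tl x) (ARG (hd x) c)
        exec TOP r = r
        exec (ARG y c) r = exec c (y ⊕ r)

For example, hylorMach (≡ 0) id pred (*) 1 3 first pushes 3, 2 and 1 onto the stack, and only then multiplies while unwinding it — exactly fold-right's bracketing, with the stack playing the role of the implicit list structure whose size we wish to measure.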
Memory management

To keep track of the space usage, a memory manager data structure is introduced, consisting of a pair of non-negative integers. The first component of the pair is the amount of memory that has been explicitly freed at the current point, and the second is the amount that has been explicitly allocated. As we shall see, both parts are necessary to capture an accurate space model, in that memory freed earlier in evaluation may be re-used later on. Two functions are defined on the manager to allocate and free memory, alloc and free. To free some memory, the amount to be freed is simply added to the free-memory integer, and is then available to use in later allocation requests:

    free n (f, a) = (f + n, a)

When allocating memory, the request is first satisfied using the pool of free memory that is currently available, by subtracting the amount from the free-memory integer until it is zero, with the difference then added to the allocated-memory integer:

    alloc n (f, a) = (f .− n, a + (n .− f))

For simplicity we assume an infinite amount of memory, and hence allocation requests are always successful. The auxiliary subtraction function, x .− y, is defined as the maximum of x − y and 0, thereby ensuring that the result is never negative:

    x .− y = max (x − y) 0

For the purposes of later proofs, we will exploit the following properties of these functions, which can easily be proved from the above definitions:

    free m ∘ free n = free (m + n)
    alloc m ∘ alloc n = alloc (m + n)
    alloc n ∘ free n = id
    snd ∘ free n = snd

The first and second properties express that repeated occurrences of free or alloc may be accumulated. The third states that an alloc immediately after a free of the same amount has no effect, since the allocation can use up the previously freed amount. Finally, the last property expresses that freeing memory does not affect the amount allocated.

Space costs

For the purposes of assigning space costs we use the notation x^s to denote the space requirements for evaluating x. In the case when x is a piece of data, this will be a non-negative integer representing the size of that data, which we measure by simply counting constructors. For example, the cost of the stack data structure is defined recursively, counting one constructor for TOP and, for each stacked argument, one constructor plus the size of the argument. In the case of a function f of a single argument, f^s will be a function that takes this argument along with a memory manager, and returns a modified memory manager that reflects the cost of this application. For example, the cost of applying the tail function on lists can be expressed as a cost function that frees the space of the list constructor discarded by tail. Functions with multiple arguments can be treated in the same way by exploiting currying, resulting in a function of n arguments having n unary cost functions.
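The manager is small enough to test directly; the following self-contained sketch (ours, not the paper's code) implements it and spot-checks the four properties on sample values:

    type Mem = (Int, Int)                   -- (freed, allocated)

    free :: Int -> Mem -> Mem
    free n (f, a) = (f + n, a)

    alloc :: Int -> Mem -> Mem
    alloc n (f, a) = (f .- n, a + (n .- f))

    (.-) :: Int -> Int -> Int
    x .- y = max (x - y) 0

    -- spot checks, e.g. in GHCi:
    --   alloc 3 (free 3 (1, 0))  == (1, 0)           -- alloc n . free n == id
    --   free 1 (free 2 (0, 0))   == free 3 (0, 0)    -- frees accumulate
    --   alloc 1 (alloc 2 (1, 0)) == alloc 3 (1, 0)   -- allocs accumulate
    --   snd (free 5 (0, 7))      == 7                -- free never changes allocation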
Transition costs

To add space information to the abstract machine, a way of instrumenting each transition with its cost is required. The space requirements are added using an accumulator, so that the result remains an abstract machine. The accumulator is a memory manager, and it is updated according to the structure of each transition. For a basic transition of the form x → y we can perform an update operation, update x^s y^s, when provided with the sizes of the data structures on the left- and right-hand sides of the transition (before and after the transition occurs). The update captures the idea that as much space is re-used as possible: first the space occupied by structures in x that do not occur in y is freed, allowing it to be re-used, and then the space for additional structures that appear only in y is allocated. We can define the update function as:

    update xs ys = alloc (ys .− xs) ∘ free (xs .− ys)

There are two special cases to consider, when transitions have a let or if structure. For transitions of the form x → let y = f x in z, first the space for the argument x is allocated, then the space cost of the function f applied to x is incurred, and finally an update occurs with the sizes of the left-hand side (which now includes the newly bound data y) and the right-hand side, update (x^s + y^s) z^s. Altogether this occurs as:

    update (x^s + y^s) z^s ∘ f^s x ∘ alloc x^s

Similarly, in the if case, the space cost of performing the transition x → if p x then y else z first allocates the space for x, then applies the cost of applying the function p to x. If the predicate p x evaluates to True then an update occurs with the size of the left-hand side, x^s + True^s, and of the right-hand side, y^s; if it is False then the size of the left-hand side is x^s + False^s and that of the right-hand side is z^s:

    (if p x then update (x^s + True^s) y^s else update (x^s + False^s) z^s) ∘ p^s x ∘ alloc x^s

In the new machine each argument is paired with its space cost, as defined in the previous section; for example, x is replaced by (x, x^s). The resulting machine, which has also been simplified by inlining the definition of update and applying the properties in section 3.2, begins as follows:

    spaceMach (p, p^s) (hd, hd^s) (tl, tl^s) ((⊕), (⊕^s1), (⊕^s2)) v (x, x^s) m = h x (alloc 1 m) TOP
      where h x m c = if p x then exec c v ((free (x^s + 1) ...) m) else ...

The next step is to perform the same program transformations, but in the reverse order, to produce a high-level function that measures the space usage from the abstract machine. After refunctionalizing the continuation and transforming from CPS, an accumulator version is produced, which we call spacer.

Example: factorial space

We can analyse the space performance of the factorial function by first producing space-requirement functions for the primitive functions: the equality test against zero, multiplication, and the predecessor function. This is done simply by taking the difference in size between the input and output; for example, if we define the size of an integer to be one unit of space, then the multiplication function will free one unit of space, since it takes two integers as arguments and its result is one integer. Applying the spacer function and inlining the primitive space functions gives a space-requirements function for factorial. The resulting function shows how, for each recursive call, two units need to be allocated before the call, which are then released afterwards.
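Although the derived definition is not reproduced above, its shape can be sketched from the description (this is our reconstruction — the paper's exact constants may differ; we rely only on "two units allocated before each call and released afterwards"):

    spacefact = h where h x = if x ≡ 0 then id else free 2 ∘ h (x − 1) ∘ alloc 2

Reading right to left: each level allocates 2 units before recursing and frees them only after the recursive call returns, so at the deepest point 2x units are held simultaneously — making the space requirement linear in x, in line with the traces of the earlier sections.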
To prove that the space requirements are linear we can form a specification that says: if we free n units of space initially and execute the space requirements function, then the allocated amount of memory will be unchanged. This means there was no need to request more memory, since the pool of n units of free memory was sufficient for evaluation. If we can prove this specification, then we can say that the function executes in n units of memory.

A fold-right may be rewritten as a fold-left if their operators associate with each other, and the empty list value, v, is the right and left unit for the fold-right and fold-left operator respectively. The left-hylo space function can then be applied to get the space requirements function:

spaceToBinl = h
  where h x = if x ≡ 0 then free 3 • alloc 2 else h (x `div` 2) • alloc 2

This function gives the space requirements as 4 + 2 * log₂ n; however, not including the space that the result itself occupies, it only requires a constant 3 units for evaluation. The results of the two space requirements functions show that the right-hylo version requires additional space proportional to the size of the result, whereas the left version only requires a constant amount of additional space.

CONCLUSION AND FURTHER WORK

The aim of applying fusion theorems, such as the hylomorphism theorem, is to eliminate the intermediate data structure produced. However, we have shown that this is only achieved if the generating function produces elements in the same order as they are consumed. The examples given illustrate this impedance mismatch and show how an accumulator version, using fold-left, is often a solution. The accumulator is then able to evaluate the elements generated in-place, rather than waiting until the end, giving improved space performance.

The space results may be observed informally by looking at evaluation traces, but we can get more concrete space measures by using program transformation techniques to derive the underlying abstract machine. At this level we can measure data structures that were not visible at the original function level. The machine can then be instrumented with space usage and the transformations reversed to get a resulting space requirements function. This can then be used to prove the space performance.

Applying this technique to more general structures is not so simple, since fold-left cannot be generalised as fold-right can. There are more restrictive functions that can be generalised, such as crush [11], where the structure is first flattened to a list and then folded. The same idea, of using an accumulating fold, may be applied to improve the space usage. How to extend this approach to other structures would be an interesting topic for further work.

FIGURE 1: Evaluation traces for downFrom and product
FIGURE 2: Evaluation traces for fact and factl
2014-10-01T00:00:00.000Z
2006-07-02T00:00:00.000
{ "year": 2006, "sha1": "2977523964a78d605d2b249f293ad1680487271e", "oa_license": "CCBY", "oa_url": "https://www.scienceopen.com/document_file/259a1ac6-bdf7-4abb-8c06-c76739c2252b/ScienceOpen/001_Hope.pdf", "oa_status": "HYBRID", "pdf_src": "CiteSeerX", "pdf_hash": "cd2960961556204de1327adeae5e4c0d9a6719f6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
269440266
pes2o/s2orc
v3-fos-license
Navigating the Research Landscape of Emotional and Social Intelligence Among Young Adults: A Bibliometric Perspective This bibliometric analysis examines the research landscape on emotional and social intelligence in young adults, highlighting its impact on both personal and professional spheres, including overall wellness and happiness. Understanding the current research status in this area is crucial for identifying existing knowledge gaps, emerging trends, and possible future research and application directions. Utilizing bibliometric techniques, the study evaluates numerous scholarly articles from the Scopus database, employing tools like VOSviewer and Biblioshiny for data analysis. The research scope covers articles published from 1990 to 2023 across multiple disciplines such as psychology, education, sociology, and management, focusing on key bibliometric indicators like publication trends, citation patterns, prominent institutions and authors, and thematic clusters. The findings indicate a growing interest among youth in emotional and social intelligence over the past three decades, as demonstrated by the increasing volume of literature. This growing interest underscores the significance of this topic in modern research. By mapping the academic network and collaborative efforts in the field, the study identifies leading contributors whose work has significantly advanced understanding in this area. The insights gained could help shape future research endeavors, inform policy-making, and aid educators in incorporating emotional and social intelligence into educational programs to support the holistic development and well-being of young people. The research also identifies key authors, key journals, key institutions, and possible collaboration and publication opportunities. Using thematic mapping and keyword patterns, some emerging trends are brought to light with the prime objective of guiding future studies and initiatives. Introduction And Background Emotional and social intelligence play crucial roles in the personal and social development of young adults.As they transition from adolescence to adulthood, young adults encounter new challenges, such as forging meaningful relationships, managing their emotions effectively, and navigating complex social interactions.They are going through a period of confusion, both inwardly and outwardly [1]. The ability to identify, comprehend, and handle one's own emotions, as well as properly react to others' emotions, is referred to as emotional intelligence (EI) [2].According to Brasseur et al. [3], it consists of selfawareness, self-control, drive, empathy, and social abilities.Social intelligence is the capacity to perceive, comprehend, and regulate social relationships.It is similar to emotional intelligence.This includes the ability to establish and sustain connections as well as work well with others [4]. 
It is vital for young adults to comprehend the ideas of emotional and social intelligence and how they apply to them.The studies show that social intelligence, emotional intelligence, and cultural intelligence are closely connected [5].Later studies proved that emotional and social intelligence has an impact on a variety of life domains, including self-esteem, interpersonal connections, social skills, critical thinking, creative thinking, academic success, job success, and fulfilling interpersonal relationships [6].In addition, developing young people's emotional and social intelligence can lead to favourable societal outcomes [7], including fewer disputes, improved empathy, higher prosocial behaviour, and a happier existence [8,9]. Using this bibliometric evaluation, we aim to enhance the body of knowledge on emotional and social intelligence in young adults, assisting researchers, practitioners, and policymakers in their initiatives to support emotional health, constructive social interactions, and individual development during this crucial stage of life.This study intends to aid in creating evidence-based treatments, programmes, and strategies that promote the emotional and social development of young people and eventually improve their general well-being and success by identifying gaps, trends, and new research areas. A potent software programme for visualising and examining bibliometric networks is called VOSviewer.It gives academics a complete set of tools to browse and interpret massive amounts of bibliographic data. VOSviewer allows people to find patterns, correlations, and developments in academic literature owing to its user-friendly interface and sophisticated visualisation capabilities [10,11].The "bibliometrix" R package's user interface is referred to as "Biblioshiny."An R package called "Bibliometrix" was created particularly for bibliometric analysis.It offers a wide range of tools and features for processing and displaying bibliographic information.Researchers without a lot of programming knowledge can utilise the "Biblioshiny" component of the package, which is an interactive user interface that enables users to do bibliometric studies using a graphical interface.Users can easily import their bibliographic information into "Biblioshiny," preprocess the information, and perform a number of bibliometric studies, such as co-authorship analysis, citation analysis, and keyword co-occurrence analysis. Review of literature Throughout history, various definitions of intelligence have been used.There have been a variety of definitions, from Pythagoras' explanation of intelligence as "winds" to Descartes' understanding of intelligence as the ability to distinguish between true and false [12]. 
The ability to comprehend, control, and navigate one's own emotions, as well as successfully engage in interpersonal relationships with others, is referred to as emotional and social intelligence [13].EI, in the view of Brasseur et al., comprises being conscious of one's feelings, understanding them, and being aware of how feelings may influence decisions.You may lessen your tension, express yourself effectively, empathise with people, get beyond difficulties, and settle disputes as a result of it.Understanding emotion itself can help one improve emotional intelligence [14].Even though the study of emotional intelligence is an emerging discipline, Salovey and Mayer coined the phrase in 1990 [15] in a piece of literature.It also entails the ability to manage and control one's own emotions with the goal of adapting effectively to different circumstances and handling pressure.The concept of EI has gained attention in recent years.There are different models of assessing and interpreting emotional intelligence.The emotional competence inventory (ECI) incorporates selfassessment and others' assessments, which try to provide a 360° perspective [16].The ability model, proposed by Mayer and Salovey, understood "emotional intelligence as the ability to perceive, appraise, and express emotion; to use emotions to facilitate thinking; to understand emotions; and to regulate emotions for growth.Recent studies show that emotional intelligence is closely associated with subjective well-being [17]. Sundvik and Davis studied the role of emotional intelligence in handling social media stress and mental health.They concluded that emotional intelligence can reduce the possibility of social media stress.The study also stated that emotional confidence can help with mental health [18]. Theorists have provided a wide range of definitions of social intelligence, but they all have two things in common: their understanding and reaction to others, and how they adjust to social settings."Social intelligence helps an individual develop healthy coexistence with other people.Socially intelligent people behave tactfully and prosper in life.Social intelligence is useful in solving the problems of social life and helps in tackling various social tasks" [19]. According to several research studies, social intelligence is complex and different from general intelligence domains.According to Carr and Hancock [20], these conceptions of social intelligence take into account both internal and external perceptions, social skills, and other psychosocial characteristics.The study by Marlowe speaks of five domains of social intelligence, namely prosocial attitude, social skills, empathy skills, emotionality, and social anxiety [21].The study by Saxena and Kumar [22] discussed the social intelligence of undergraduate students, examining the influence of gender and subject stream.The study reveals that female students exhibit higher social intelligence compared to male students, and arts students have higher social intelligence than science students.The study emphasises the importance of social intelligence in managing personal life and interpersonal relationships. Ford and Tisak [23] discussed the concept of social intelligence and its relationship to academic intelligence. The study used various measures to assess social intelligence and found that it is indeed a separate factor. The study makes recommendations for social cognition and competency research, as well as for social skills and education initiatives. 
Swain discusses the social and emotional challenges faced by adolescents during their transition from childhood to adulthood.The study examines characteristics of adolescence, such as moodiness, emotional tension, and restlessness, and addresses various issues that adolescents encounter, including physical growth, mental competition, emotional disturbances, home and sex adjustment, vocational problems, student activism, use of alcohol and drugs, quarrels, impatient behaviour, and peer group influence.The article concludes by suggesting that adolescents require social and emotional support from adults to navigate through this challenging period [24].Zakirova and Irina discuss the relationship between the success of training activities and social intelligence.The study involved 140 students and evaluated the success of training and social intelligence using data from the last session.The article emphasises the importance of social intelligence in effective interpersonal interaction and successful social adaptation [25]. Anwar et al. studied the relationship between emotional intelligence, attachment style, and social intelligence and concluded that individuals with secure attachment styles had higher levels of emotional and social intelligence, while those with insecure attachment styles had lower levels.Social intelligence was found to moderate the relationship between insecure attachment styles and emotional intelligence.The study suggests that attachment styles and emotional intelligence play important roles in understanding social relationships [26].The study by Louise Cherry Wilkinson on Social Intelligence and the Development of Communicative Competency reveals the importance of social intelligence and advocates for the inclusion of social and personal factors in understanding human problem-solving behaviour [27].With a particular focus on students, Avlaev investigated the function of social intelligence in the formation of the self.It explores the dynamics, purposes, and interactional levels of social intelligence.The idea of maturity is also examined, encompassing many sorts like ego maturity, psychological maturity, psychosocial maturity, and psychosocial maturity.The paper includes study findings on the association between various facets of social intelligence as well as the relationship between social intelligence and maturity.The result highlights the significance of social intelligence in a person's total maturity, especially in a learning environment.In earlier studies, self-awareness was considered to be the basis of social intelligence [28].The study by Gulliford et al. proved that social intelligence can be developed in an individual.Their study put forward gratitude and self-monitoring as ways to build social intelligence [29]. Relevance of the study The bibliometric study on "social intelligence and emotional intelligence in young adults" is highly relevant as it systematically analyses and quantifies the impact and trends of research in this vital area.The study can identify key contributors and influential works by examining the number of publications, citations, and collaborations, providing valuable insights for policymakers, educators, and researchers.Understanding the dynamics of social and emotional intelligence in young adults can contribute to developing effective educational and psychological interventions, ultimately promoting healthier social interactions and emotional well-being in this critical demographic. 
Materials and methods We procured scientific publications relevant to our investigation from the primary collection of the Scopus database.On July 21, 2023, we conducted a search using specific keywords such as "Emotional Intelligence," "Social Intelligence," and "Young Adults."Our search was not limited by language and focused solely on articles, excluding book chapters and reviews.In total, we gathered 917 articles from 397 different sources, spanning the years 1998 to 2023.To ensure accuracy, we screened the Scopus records to eliminate duplicates.The results were then saved as a "CSV" file.For analysis, we employed VOSviewer version 1.6.19 and Bibloshiny software to perform bibliometric analysis on the collected data [30].Table 1 provides comprehensive information about the critical aspects of our investigation. Annual Scientific Production Between 1998 and 2023, the number of publications focusing on social intelligence and emotional intelligence in young adults has experienced fluctuations, with periods of both growth and decline.Notably, there was a significant increase in published works from 2008 to 2011, followed by a subsequent decrease.However, from 2017 to 2018, there was another upswing in publications.This cyclic pattern of alternating rises and falls is observable throughout the years.The highest number of publications occurred in 2018, with a total of 105 publications. Most Significant Authors The realm of social intelligence and emotional intelligence in young adults has seen the contributions of numerous authors, totaling 3350, whose published articles were taken as a gauge of their significance.Notably, Fernandez-Berrocal emerges as the primary contributor, boasting 18 published articles, closely followed by Petrides with 12 articles. Most Relevant Sources Our analysis of 917 papers gathered from 397 different journals indicates that the journal 'Emotion' exhibited the highest level of productivity, contributing an impressive total of 43 articles.Taking the second spot was the 'International Journal of Environmental Research and Public Health,' which published 37 papers.Following closely behind was 'Plos One' with 32 articles. Three Field Plot of Keyword, Author, and Source The Sankey diagram in Figure 1 explores the relationship between keywords, authors, and sources within the context of social intelligence and emotional intelligence in young adults' literature.The purpose of the investigation was to identify frequently used keywords in the literature across various authors and published journals, suggesting a focus on understanding the role of emotional intelligence and social intelligence in young adults.The analysis of the top keywords revealed several prominent phrases frequently used in the literature.These include "emotional intelligence," "trait emotional intelligence," "emotion regulation," and "social cognition."The study found that specific authors were extensively involved in research related to emotional intelligence in young adults.Notable authors include Zysberg, Fernandez Berrocal, Extremera, Eack, and Schermer, among others.Their frequent use of the identified keywords suggests their expertise and contribution to this field of study.The investigation identified certain journals where research on emotional intelligence in young adults was frequently published.Some of these prominent sources include the journals Plos One, Emotion, and Personality and Individual Differences. 
Co-occurrence of Keywords The VOSviewer software was employed to generate a visual representation of groups of keywords that appear together in the field of social intelligence and emotional intelligence in young adults.For this analysis, a subset of 805 keywords was chosen, all of which appeared at least five times out of a total of 5034 keywords.The findings are depicted in Figure 2. In this figure, the size of the nodes and the font used for each keyword depend on its weight value, which indicates its frequency of occurrence.Larger nodes and fonts are assigned to keywords that appear more frequently.The connections between nodes signify common co-occurrences between the keywords, with the thickness of the lines representing the strength of these co-occurrences.Thicker lines indicate a higher frequency of co-occurrence between two keywords.Upon analysing Figure 2, researchers identified eight distinct clusters.These clusters vary in size, with the first cluster containing 181 items, the second cluster having 170 items, the third cluster comprising 123 items, the fourth cluster including 122 items, the fifth cluster consisting of 86 items, the sixth cluster containing 64 items, the seventh cluster having 30 items, and the eighth cluster comprising 29 items.Among all the keywords, the term "human" appeared the most frequently, occurring 858 times, followed closely by "young adult" with 847 occurrences.The total link strengths (TLS) for these two keywords were 21,343 and 21,119, respectively. FIGURE 2: Visualizing the co-occurrence of all keywords using VOSviewer in a network format. The figure employs various colours to distinguish between eight different clusters.Lines are utilized to show the connections between each cluster.The colours themselves do not hold any specific meaning. Bibliographic Coupling of Sources The visual depiction shows a network that presents the connections between research articles related to social intelligence and emotional intelligence in young adults.Among the 397 sources that published these articles, only 37 were deemed suitable according to specific criteria.These criteria involved selecting sources with a minimum of five published articles and utilising a comprehensive counting method to evaluate their relevance.This network visualisation depicts the connections and interdependencies among various research articles and their respective sources.A comprehensive computation was performed to gauge the strength of the bibliographic coupling links within the 37 sources.The analysis identified a particularly significant TLS of 5066, which served as the basis for categorising the sources into five clusters, comprising a total of 37 items.The first two clusters consisted of 12 items each, while the third, fourth, and fifth clusters contained nine, three, and one item, respectively.Further examination of the data revealed that the most substantial combined link strength attained was 1257.This remarkable figure involved 43 articles, which collectively received 4337 citations from the journal "Emotion," earning it the top rank in this network mapping.Following closely was the journal "Plos One," securing the second position with 776 combined link strengths derived from 32 research articles.These results strongly suggest a notable collaborative effort between the two journals in publishing academic papers.Figure 3 is the visualisation of bibliographic coupling. FIGURE 3: The network visualization of bibliographic coupling with sources. 
Different colours represent various clusters of publications: red signifies articles in the field of psychology, green denotes education, blue is used for medical-related publications, and yellow highlights those pertaining to personality. Countries' Collaboration World Map The information in Figure 4 reflects a thriving global research community focused on social intelligence and emotional intelligence in young adults.The use of blue to signify research cooperation among countries highlights the extent of global collaboration in this area.The United States of America stands out as the leading country in research collaborations in this field.The frequency of collaborations with China is exceptionally high, with 14 instances suggesting a solid partnership between these two countries in studying social and emotional intelligence in young adults.In addition to its collaboration with China, the United States actively engages in substantial partnerships with Canada (frequency of 12) and Australia (frequency of 11).This demonstrates the USA's commitment to fostering research relationships with these countries regarding social and emotional intelligence.The United Kingdom is also actively involved in research collaborations concerning social and emotional intelligence in young adults.It shows significant collaborative relationships with Australia (frequency of 9) and Canada (frequency of 9).Australia and Canada appear to be essential players in the global research network as they are involved in substantial collaborative efforts with the United States (Australia with a frequency of 11, Canada with a frequency of 12), and the United Kingdom (with a frequency of 9).As indicated by the visualisation, the extensive network of research cooperation among scientists on a global scale underscores the significance of studying social intelligence and emotional intelligence in young adults.Figure 4 is a depiction of the country's collaboration in this research area. Discussion The research employs articles from the Scopus database, spanning from 1990 to 2023, to analyse trends, collaborations, and thematic importance within the domains of emotional and social intelligence.Findings reveal an increasing scholarly interest in correlating these intelligences with critical personal and professional outcomes, including well-being and interpersonal success.The analysis highlights the role of prominent institutions and authors in advancing research, with thematic mappings showing significant ties between emotional intelligence and life skills such as self-esteem, empathy, and social aptitude.The study also underscores the growing acknowledgment of emotional and social intelligence as essential to young adults' holistic development, suggesting educational programmes integrate these concepts to enhance academic and life success.Future research directions include deeper integrative studies on these intelligences' educational impacts and their broader social implications. 
The study provides a quantitatively enriched perspective on the progression and focal points of research in the area of emotional and social intelligence and the interplay of both in the life of a young adult.This study aligns with foundational theories by Salovey and Mayer [31], who originally proposed the term "emotional intelligence," and Goleman [32], who popularised it, affirming that these intelligences are critical abilities influencing personal success, professional achievements, and general well-being.This analysis corroborates earlier findings, as postulated by Brackett et al. [33], demonstrating a strong relationship between these intelligences and improved life outcomes such as academic success and workplace effectiveness. Contrary to the narrow look at the influence of emotional and social intelligence on outcomes in academic and workplace setups, characterised by previous research [34], this bibliometric analysis perceives a wider scope of implications as being less emphasised in foundational models by Salovey and Mayer [31] and Goleman [32], ranging from those that pertain to well-being and holistic development.In a number of previous studies, these intelligences have normally been highlighted as separable or overlapping domains [35], while the present review will underline the convergent and strategic benefits of these intelligences in educational and policy-making frameworks [33].The current findings, therefore, emphasise the importance of consolidating emotional and social intelligence for the fostering of communal well-being, thus marking a pivot towards the holistic approach in the understanding of young adult development, which hitherto has been dissimilar from the utilitarian focus in earlier works. Practical implications The study serves as a blueprint for the current scenario of research on this subject for those involved with emotional and social intelligence research among young adults, including researchers, educators, policymakers, and practitioners.It also maps key authors, journals, and institutions while identifying opportunities for collaboration and publication.These discussions would provide a comprehensive review of the field.Thematic mapping and keyword analysis can help discover emerging trends and guide further research and initiatives.This information is really invaluable for educators who include these findings in the curricula developed for enhanced, holistic development and well-being.Policymakers can also use such information to improve the condition of emotional health and encourage positive social interactions that would enable young adults to acquire the skills necessary for life. 
Limitations

Despite its extensive utility, this review acknowledges certain limitations in its bibliometric approach. The exclusive use of the Scopus database may overlook significant studies present in other databases, potentially skewing the comprehensive understanding of the field. Furthermore, the inherently quantitative nature of bibliometric analyses typically excludes qualitative dimensions such as the depth and context of discussions found in individual studies. While the review efficiently identifies key contributors and maps the field's dynamics, it does not evaluate the quality or the impact of the included studies, which could introduce biases related to self-citation and the varying citation practices across different scientific communities. These factors could influence the depicted trends and suggest areas for future research and collaboration, highlighting the need for a balanced approach that considers both quantitative and qualitative aspects of the literature.

Conclusions

The study provides a thorough review of the evolution of the field of emotional and social intelligence among young adults. It seeks to achieve this goal through an analysis of articles indexed in the Scopus database. The analysis throws light on important trends, key contributors, and some future sites of research and application in the context of education. The findings revealed the significance of these intelligences in personal and professional development and thus proposed that they be part of the curriculum for the improved, balanced development and well-being of young adults. The review provides researchers with information on further gaps to be explored. Future studies are recommended to explore deeper integrations of emotional and social intelligence with educational outcomes and wider impacts on society.

FIGURE 1: Three Field Plot, illustrating the correlation among author keywords (DE), authors (AU), and sources (SO). In the three-field plot, colours are used to represent additional layers of information or to differentiate between categories within each field. Keywords (DE): colours associated with keywords indicate different sub-topics or themes within the research field. Authors (AU): colours linked to authors signify their affiliations, the research group or institution they belong to. Sources (SO): the colour coding of sources denotes different publications.

The capacity to understand and navigate social dynamics, interpret nonverbal cues, and effectively communicate are all examples of social intelligence. It necessitates skills like empathy, attentive listening, assuming different viewpoints, and conflict resolution.

Table 2 presents the top 10 journals that displayed the most significant output regarding social intelligence and emotional intelligence in young adults' research papers.

TABLE 2: The top 10 relevant sources.
Most Relevant Affiliations

Table 3 presents a visual representation of the key institutions actively involved in research on social intelligence and emotional intelligence in young adults. The University of Malaga stands out at the forefront, boasting the highest number of publications at an impressive 57. Following closely is the Harvard Medical School, which contributes significantly with 43 publications. Other notable contributors to this field of research include the University of California, the University of Melbourne, Tohoku University, the University of Almeria, the University of Arizona, the University of Western Ontario, Kings College London, and Trent University.
2024-04-29T15:22:25.045Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "4ace10631f0af20d14c5eca2eaa00c3d9198acb6", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "926dc404f08efb21672254c715ed5ee17728cb51", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
247628229
pes2o/s2orc
v3-fos-license
Global anomalies in 8d supergravity We study gauge and gravitational anomalies of fermions and 2-form fields on eight-dimensional spin manifolds. Possible global gauge anomalies are classified by spin bordism groups $\Omega^{\text{spin}}_9(BG)$ which we determine by spectral sequence techniques, and we also identify their explicit generator manifolds. It turns out that a fermion in the adjoint representation of any simple Lie group, and a gravitino in $8d$ $\mathcal{N}=1$ supergravity theory, have anomalies. We discuss how a 2-form field, which also appears in supergravity, produces anomalies which cancel against these fermion anomalies in a certain class of supergravity theories. In another class of theories, the anomaly of the gravitino is not cancelled by the 2-form field, but by topological degrees of freedom. It gives a restriction on the topology of spacetime manifolds which is not visible at the level of differential-form analysis. Introduction and Summary Gauge theories are by definition invariant under gauge transformations, otherwise they are anomalous and inconsistent. A simple manifestation of the anomaly is non-invariance of the partition function Z[A] when the gauge field A is transformed to A g , (1.1) Typical sources of anomalies are massless chiral fermions. As is well known, perturbative anomalies are related to indices of Dirac operators in two-higher dimensions [AGG84, AS84,ASZ84], and the Atiyah-Singer index theorem allows us to describe them in terms of anomaly polynomials. However, even when such perturbative anomalies are absent, there can also be non-perturbative (global) gauge transformations which cannot be smoothly deformed to the trivial one, under which theories are not invariant [Wit82,Wit85]. More generally, there are also fermion anomalies which are not as simply represented as (1.1). Current understanding of anomalies is that they arise in the definition of the partition function Z[A] itself, rather than its gauge transformations. In particular, chiral fermions in d-dimensions can be realized as boundary modes of massive bulk theories in (d + 1)-dimensions, and the anomaly of the original d-dimensional boundary theory is given by the partition function of the (d + 1)-dimensional bulk theory [Wit99,Wit15,WY19], which is called the invertible field theory [FM04]. Furthermore, this sort of argument is not restricted to the cases of fermionic fields, but also incorporates the cases of bosonic fields. For example, we know that the contribution of a 2-form field to the anomaly of ten-dimensional superstring theories is very important for Green-Schwarz mechanism of anomaly cancellation [GS84]. They are studied at the perturbative level in the past, but they can also produce non-perturbative global anomalies. Unfortunately, due to their conceptual subtleties as well as technical difficulties, the global anomalies have not been thoroughly investigated for decades, especially in higher dimensions. But thanks to the recent developments, they are now within range of analyses, and this paper aims to obtain new results on the case of eight-dimensional theories. 1 For the purpose of systematic studies of anomalies, an important point is as follows. As we mentioned above in the context of fermions, the anomalies of a quantum field theory (QFT) are believed to be captured by a one-higher dimensional invertible QFT. 
These invertible field theories are known to be described in terms of bordism groups [Kap14,KTTW14,FH16,Yon18,YY21], and in particular, the information of the global anomalies of fermions in d-dimensional G gauge theories on spin manifolds are encoded in the bordism group Ω spin d+1 (BG), where BG is the classifying space of the gauge group G. 2 Our focus will be on eight-dimensional (8d) N = 1 supergravity theories, 3 where some of the gauge groups G are known to be actually realized despite the possible anomalies. For instance, many such theories can be realized by F-theory compactification on elliptic K3 surfaces [Vaf96,MV96a,MV96b], including those with "frozen" singularities [Wit97,Tac15,BMTT18]. 4 One curious observation on these theories with known string-theory realizations is that, the rank of the total gauge algebra of vector multiplets is either 18, 10, or 2. We will see in this paper that the structure of anomalies are quite different between these three cases, due to the difference in the structure of the 2-form field. Let us recall some facts about the 2-form field. The field strength 3-form H of the 2-form field B in 8d N = 1 supergravity satisfies the equation of the form where R is the Riemann curvature tensor, F G is the field strength of the G gauge field, N grav and N G are appropriate normalization factors such that N grav tr R∧R and N G tr(F G ∧F G ) correspond to the characteristic class by the Chern-Weil construction which represent integral cohomology classes, and k grav and k G are the gravitational and gauge Chern-Simons levels. We will show that the gaugino of any simple Lie algebra and the gravitino have anomalies, and hence they must be cancelled by some mechanism. The situation in the three cases mentioned above are as follows. • rank 18 : k grav (= 1) and k G are both odd for all known examples realized by string theory, and therefore all the fermion anomalies can be cancelled by the 2-form field. • rank 2 : k grav = 0 for all known examples, 5 and the anomaly of the gravitino cannot be cancelled by the 2-form field. We claim that a topological 3-form Z 2 gauge field is responsible for the anomaly cancellation, and discuss its origin when the theory is obtained by the compactification of M-theory on Klein bottle. • rank 10 : we still do not have a complete understanding. This case includes the Sp(n) gauge algebras which have an additional anomaly compared to other Lie algebras [GEHO + 17], and also simply-laced Lie algebras with even k G . The rest of the paper is organized as follows. consider other structures such as pin − structure. See [MV20] for new constraints in eight and nine dimensions when we take those additional structures into account. 3 It would also be interesting to study how the anomalies of 7-branes in non-compact ten-dimensional spaces are cancelled, possibly by a coupling to bulk RR-fields as discussed in [FH00]. 4 See also [MV20, CDLZ20, HV21] and references therein for investigations of possible gauge groups in 8d supergravity by bottom-up approaches. 5 The possibility of realizing k grav = 1 is not ruled out at the time of writing, and if realized, the details will depend on the parity of k G . For the theoretical constraints on the value of k grav , see [KTV19]. In Sec. 2, we first take a bottom-up approach and compute the η-invariants of Dirac operators D 9 on nine-manifolds S 1 × M 8 for some special eight-manifolds M 8 or gauge bundles over them. 
The Atiyah-Patodi-Singer index theorem [APS75a] tells us that they are in fact bordism invariants, and as a result we obtain a partial list of generators of bordism groups Ω spin 9 (BG) of classifying spaces BG of connected, simply-connected, compact simple Lie groups G. In particular, we find that there is a universal global gauge anomaly to which fermions in adjoint representation contribute for those gauge groups of interest, which has not been identified by conventional analyses using homotopy groups π 8 (G). In Sec. 3, we turn to a top-down approach and compute various bordism groups Ω spin 9 (BG) by Atiyah-Hirzebruch spectral sequences and Adams spectral sequences. The results show that the list obtained in the last section is actually complete, and correspondingly the possible global gauge anomalies are exhausted by those of fermions charged under representations considered there. In addition, we also mention the (non simply-connected) G = SO(n) case along the way. In Sec. 4, we examine the anomaly of 2-form fields and discuss the anomaly cancellation utilizing them, which can be thought of as an 8d analog of the Green-Schwarz mechanism [GS84]. This in fact renders some of the apparently-noxious theories of fermions anomaly-free, including those realized as low-energy effective theories of string theories, just as in the original version in ten dimensions. In Sec. 5, we also take a look at some of the exceptions to the above, namely theories with anomalies which cannot be cancelled by 2-form fields. We take up one of them and argue that the anomaly is actually canceled in the end, but it requires topological degrees of freedom. Some manifolds and fermion anomalies In this section, we discuss some concrete examples of global anomalies of Weyl fermions in eightdimensional G gauge theories. The analysis in Sec. 3 will show that these examples in fact exhaust all possible anomalies for connected, simply-connected, compact simple Lie groups G. First of all, fermion anomalies in d-dimensions are described by the Atiyah-Patodi-Singer (APS) η-invariant in (d + 1)-dimensions [Wit85,Wit15,WY19]. Let D d+1 be the relevant Dirac operator in (d + 1)-dimensions 6 and λ i 's be its eigenvalues. The η-invariant is defined as 7 Since the anomalies take the form of e −2πiη , we are interested in the values of η modulo Z. Now let us focus on the d = 8 case. All the examples of 9-manifolds we discuss are of the form S 1 × M 8 , where M 8 is a closed 8-manifold possibly equipped with a G-bundle, and S 1 is assumed to have the periodic (i.e. non-bounding) spin structure which gives the nontrivial element of the bordism group Ω spin 1 (pt). On S 1 × M 8 , the Dirac operator is of the form where D 8 is the Dirac operator on M 8 , and τ is the coordinate of S 1 . Suppose that Ψ(τ, y) is an eigenfunction of D 9 , where y is a coordinate system on M 8 . Then γ τ Ψ(−τ, y) is also an eigenfunction with the opposite-sign eigenvalue. Thus all nonzero modes λ i = 0 appear in pairs with eigenvalues ±|λ i |, and therefore cancel out in the definition (2.1) of η. Also, Ker D 9 is the space of zero modes of D 9 , and these zero modes need to satisfy ∂ τ Ψ(τ, y) = 0 and D 8 Ψ(τ, y) = 0 since (D 9 ) 2 = (i∂ τ ) 2 + (D 8 ) 2 for non-negative operators (i∂ τ ) 2 and (D 8 ) 2 . Thus dim Ker D 9 = dim Ker D 8 = index D 8 modulo 2Z. As a result, and we only need to compute index D 8 mod 2. 8 Let R be the Riemann curvature 2-form and F be the field strength 2-form for the G-bundle. 
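For reference, the displayed formula referred to as (2.1) and the relation it leads to are not shown here; in the standard APS conventions that the surrounding discussion appears to use (the choice of regularization is an assumption), they presumably read

\[
\eta(D_{d+1}) \;=\; \frac{1}{2}\Big(\lim_{s\to 0}\sum_{\lambda_i\neq 0}\mathrm{sign}(\lambda_i)\,|\lambda_i|^{-s}\;+\;\dim\mathrm{Ker}\,D_{d+1}\Big),
\]

so that on $S^1 \times M_8$ the pairwise cancellation of nonzero modes gives

\[
\eta(D_9)\;\equiv\;\tfrac{1}{2}\,\dim\mathrm{Ker}\,D_8\;\equiv\;\tfrac{1}{2}\,\mathrm{index}\,D_8 \pmod{\mathbb{Z}},
\]

and the anomaly $e^{-2\pi i\eta}$ then depends only on $\mathrm{index}\,D_8$ mod 2.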
Suppose that the fermion is coupled to the G-bundle in a representation ρ. Then, the index theorem states that is the A-roof genus, and p i 's are the Pontrjagin classes given in terms of R, which have degree 4i. We now want to consider the following 8-manifolds M 8 possibly equipped with G-bundles: • Quaternionic projective plane HP 2 . Its cohomology ring H * (HP 2 ; Z) is known to be generated by a single generator x ∈ H 4 (HP 2 ; Z) = Z such that HP 2 x 2 = 1. The Pontrjagin classes are p 1 = 2x and p 2 = 7x 2 respectively, and therefore the third term of (2.4) vanishes. • Bott manifold B. The Pontrjagin classes are p 1 = 0 and p 2 = −1440b where b is such that B b = 1, and therefore the third term of (2.4) integrates to dim ρ. • G-bundle P G → HP 2 . The base HP 2 has a tautological quaternionic line bundle whose structure group is Sp(1) = SU(2), and P G is obtained by using a map SU(2) → G associated with a simple long root. • G-bundle Q G → S 4 × S 4 . Taking an appropriate map SU(2) × SU(2) → G discussed below, we take a bundle over the first (resp. second) S 4 with the unit second Chern class for the first (resp. second) SU(2). For more details on the facts about manifolds HP 2 and B mentioned above, see e.g. [FH19, Sec. 5]. Let us use these manifolds to deduce some possible anomalies. First, recall that the anomaly of a gravitino can be described by taking the tensor product of the spinor bundle and T M 8 − R, where T M 8 is the tangent bundle of M 8 and R is the trivial bundle, in place of a G-bundle [AGW84]. Taking F = R correspondingly, one yields trF 4 = 2(p 2 1 − 2p 2 ) and trF 2 = 2p 1 , and hence 9 From this result, we see that the index for a gravitino is −1 on HP 2 and 247 on B, both of which are 1 mod 2. Next, consider the bundle P G → HP 2 constructed from a quaternionic Sp(1) = SU(2) bundle. In the fundamental representation 2 of SU(2), we have tr 2F 4 = 2x 2 and tr 2F 2 = 2x in terms of x ∈ H 4 (HP 2 ; Z), and thus index D 8 = 0. On the other hand, in the adjoint representation 3, we have tr 3F 4 = 32x 2 and tr 3F 2 = 8x, and thus index D 8 = 1. Under the map SU(2) → G associated with a simple long root, the adjoint representation adj(G) of generic G decomposes as 10 and as a result we get index D 8 = 1 for adj(G). This is universal in the sense that it is true for any compact simple Lie group G. Finally, let us take up the bundle Q G → S 4 × S 4 . This was already discussed in [GEHO + 17], and here we briefly review the argument. Under the map SU(2) × SU(2) → G which we explain in a moment, suppose that a representation ρ of G decomposes as (2.7) This condition is satisfied, for example, in the following cases. For these groups and representations, we get index D 8 = 1. We remark that the adjoint representation of G 2 has index D 8 = 0 mod 2 for the bundle Q G studied here. The results are summarized in Table 1. Notice that all the representations discussed above are real, so there are no perturbative anomalies for fermions charged under them, and the anomalies detected by index D 8 mod 2 are all global anomalies. Correspondingly, the η-invariants become bordism invariants as inferred from the index theorem, and from Table 1 we see that where Ω spin • is the reduced spin bordism group (i.e. Ω spin . It is known that Ω spin 9 (pt) = Z ⊕2 2 , and by using spectral sequences, we will further show that the manifolds and bundles discussed above exhaust all the generators of Ω spin 9 (BG) for connected, simply-connected, compact simple Lie groups G in the next section. 
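The index density (2.4) and the decomposition (2.6) are likewise not displayed here; standard expressions consistent with the checks carried out in the text are, schematically,

\[
\mathrm{index}\,D_8 \;=\; \int_{M_8}\Big[\hat{A}(R)\,\mathrm{ch}_\rho(F)\Big]_{8\text{-form}},\qquad
\hat{A}(R)\;=\;1-\frac{p_1}{24}+\frac{7p_1^2-4p_2}{5760}+\cdots,
\]

whose third term is $\dim\rho\cdot(7p_1^2-4p_2)/5760$; this indeed vanishes on $\mathbb{HP}^2$ (where $p_1=2x$, $p_2=7x^2$) and integrates to $\dim\rho$ on the Bott manifold (where $p_1=0$, $p_2=-1440b$), as stated. For the long-root embedding $SU(2)\to G$, a decomposition of the form

\[
\mathrm{adj}(G)\;\to\;\mathbf{3}\,\oplus\,n_2\cdot\mathbf{2}\,\oplus\,n_1\cdot\mathbf{1},
\]

with a single triplet and $G$-dependent multiplicities $n_2$, $n_1$, reproduces the stated conclusion, since only the triplet contributes to $\mathrm{index}\,D_8$ mod 2 on $P_G\to\mathbb{HP}^2$.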
Table 1: index D 8 mod 2 on various manifolds for the fermion representations discussed in the main text. For Spin(n) we only consider n ≥ 4, and for Sp(n) we only consider n ≥ 2. Bordism group computation In this section, we compute the spin bordism group Ω spin d+1 (BG) for some simple Lie groups G's. Roughly speaking, it is a group formed by equivalence classes of closed manifolds equipped with spin structure and G-bundle, where two manifolds are defined to be equivalent if there is a onehigher dimensional compact manifold connecting them. It can be computed using various types of spectral sequences; for general introduction to spectral sequences see e.g. [Hat04,DK01,McC01], while we also refer to [GEM18] for the introduction to Atiyah-Hirzebruch spectral sequences aimed at physicists, and [BC18] for the introduction to Adams spectral sequences. Atiyah-Hirzebruch spectral sequence For the Atiyah-Hirzebruch spectral sequence associated with the trivial fibration the E 2 -terms are given by ordinary homology groups H p (X; Ω spin q (pt)), and it converges to the bordism group Ω spin p+q (X). SU(n) gauge anomaly Let us first carry out an explicit computation for the X = BSU(n ≥ 5) case. The (co)homology of BSU(n) is known to be where c i ∈ H 2i (BSU(n); Z) are Chern classes. One can easily fill in the E 2 -page of the Atiyah-Hirzebruch spectral sequence as follows: 1 2 3 4 5 6 7 8 9 10 where the horizontal and vertical axes correspond to p and q respectively. Here, the differentials d 2 : E 2 p,q → E 2 p−2,q+1 for q = 0, 1 are known [Tei93] to be the duals of the Steenrod square Sq 2 (composed with mod-2 reduction for q = 0). From the knowledge on the cohomology of BSU(n), one can confirm that d 2 : E 2 10,0 → E 2 8,1 is non-trivial since Sq 2 c 4 = c 5 for mod-2 reduced Chern classes, while d 2 : E 2 8,1 → E 2 6,2 is trivial, and also d 4 : E 2 8,1 → E 2 4,4 is obviously trivial as it should be a homomorphism. As a result, one is led to and this detects the universal anomaly of adjoint fermions in 8d SU(n) gauge theories described in the previous section. Sp(n) gauge anomaly Similarly, for X = BSp(n ≥ 2), it is known that the (co)homology is where q i ∈ H 4i (BSp(n); Z). One can again easily fill in the E 2 -page of the Atiyah-Hirzebruch spectral sequence as follows: 1 2 3 4 5 6 7 8 9 10 (3.6) Since E 2 10,0 is empty opposed to the X = BSU(n ≥ 5) case, one is led to conclude that where the additional Z 2 should correspond to the subtler anomaly discussed in [GEHO + 17]. However, it is not always the case that the Atiyah-Hirzebruch spectral sequence is adequate to obtain the desired bordism groups. In the next subsection, we will introduce another spectral sequence which can be further exploited in such cases. Adams spectral sequence For the case of interest, the E 2 -terms of the Adams spectral sequence are given as and converge to the 2-completion of a stable homotopy group, which is isomorphic to that of the desired (reduced) bordism group via the Pontrjagin-Thom construction. Here, A is the mod-2 Steenrod algebra generated by certain cohomology operations, Ext R is a certain functor in the category of (graded) R-modules which takes values in Abelian groups, and M Spin is the Thom spectrum of the universal bundle over BSpin. 
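The $E_2$-page expressions referred to above are also not displayed; the standard statements being invoked (up to 2-completion and the usual convergence caveats) are

\[
E_2^{s,t}\;=\;\mathrm{Ext}_{\mathcal{A}}^{s,t}\big(H^*(M\mathrm{Spin}\wedge X;\mathbb{Z}_2),\,\mathbb{Z}_2\big)\;\Longrightarrow\;\pi_{t-s}\big(M\mathrm{Spin}\wedge X\big)^{\wedge}_2\;\cong\;\big(\widetilde{\Omega}^{\mathrm{spin}}_{t-s}(X)\big)^{\wedge}_2,
\]

and, in the range $t-s\le 11$ where the computation reduces to connective real K-theory,

\[
\mathrm{Ext}_{\mathcal{A}(1)}^{s,t}\big(\widetilde{H}^*(X;\mathbb{Z}_2),\,\mathbb{Z}_2\big)\;\Longrightarrow\;\widetilde{ko}_{t-s}(X)^{\wedge}_2 .
\]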
Using the Künneth formula, the (reduced) cohomology of a smash product is decomposed as where A(1) is the subalgebra of A generated by Sq 1 and Sq 2 , J is a certain A(1)-module called the joker, and M ≥16 is also an A(1)-module which is trivial in degrees less than 16. Then, the combination of the shearing isomorphism and the adjunction formula allows us to rewrite the Fortunately, for simply-connected compact simple Lie groups G, things become significantly easier since the lowest degree of elements in H * (BG; Z) is 4, meaning that for t − s ≤ 11 the E 2 -terms can actually be reduced to which converges to the (reduced) ko group. Therefore, for such G we have (3.13) G 2 gauge anomaly Now, let us look at the X = BG 2 case. It is known [Bor54,MT91] that the cohomology of BG 2 is p-torsion free for p ≥ 3, which assures that the full bordism group can be derived from its 2-completion. Furthermore, the Z 2 cohomology ring is given as (3.14) and the cohomology operations act as Sq 2 y 4 = y 6 , Sq 1 y 6 = y 7 . (3.15) Then, the A(1)-module structure of H * (BG 2 ; Z 2 ) for the range of interest is represented as where the straight lines and curved lines represent the actions of Sq 1 and Sq 2 respectively. Namely, as an A(1)-module one has where Q and J are the "named" A(1)-modules. Correspondingly, the associated Adams chart which pictorially describes the E 2 -page Ext s,t A(1) H * (BG 2 ; Z 2 ), Z 2 is given [BC18] by where the horizontal and vertical axes correspond to t−s and s respectively. Here, the dots denote the Z 2 -generators in Ext s,t A(1) (H * (BG 2 ; Z 2 ), Z 2 ), while the vertical (resp. sloped) lines represent the action by h 0 ∈ Ext 1,1 A(1) (Z 2 , Z 2 ) (resp. h 1 ∈ Ext 1,2 A(1) (Z 2 , Z 2 )). The possibly-nontrivial differentials are the ones with the source at (t−s, s) = (10, 0) and would hit the classes in t−s = 9, but such differentials are not consistent with the action of h 1 , and therefore there are no differentials at all. As a result, the Adams spectral sequence converges as follows: d 0 1 2 3 4 5 6 7 8 9 10 11 Comparing the degree-9 part with the Atiyah-Hirzebruch spectral sequence 1 2 3 4 5 6 7 8 9 (3.19) one indeed notices that there is another type of global (non-perturbative) gauge anomaly for gauge group G 2 which corresponds to the E 2 7,2 , in addition to the universal one corresponding to the E 2 8,1 . We claim this to be the traditional anomaly captured by the homotopy group π 8 (G 2 ). The fact that the representation 7 has the anomaly associated to π 8 (G 2 ) has been shown in [GEHO + 17]. F 4 gauge anomaly Similar but a little more complicated case is X = BF 4 . The mod-2 (co)homology is known to be where the action of cohomology operations are the same as BG 2 Sq 2 y 4 = y 6 , Sq 1 y 6 = y 7 , (3.21) which leads to the same analysis on the Adams spectral sequence for the range of interest. However, this time there are 3-torsions [Tod73] (but p-torsion free for p ≥ 5), and correspondingly the E 2 -page of the Atiyah-Hirzebruch spectral sequence becomes 1 2 3 4 5 6 7 8 9 (3.22) Fortunately, the 3-torsion part is irrelevant for our purpose as one can read off, and in the same way as discussed in the G 2 case, there is an additional Z 2 in E 2 7,2 which should correspond to the traditional anomaly captured by π 8 (F 4 ). Again, it is known that the adjoint representation of F 4 has the anomaly associated to π 8 (F 4 ) [GEHO + 17]. where Sq 2 u 4 = u 6 , Sq 4 u 6 = u 10 , Sq 1 u 10 = u 11 , Sq 1 u 6 = u 7 , Sq 6 u 7 = u 13 , Sq 2 u 11 = u 13 . 
which is namely where Q is a new (unnamed) module, and the corresponding Adams chart is given [Fra05] as According to [Fra05], there are nontrivial differentials as shown by dotted lines, and correspondingly the Adams spectral sequence converges as In particular, the degree-9 part turns out to be Ω spin 9 (K(Z, 4)) = Z 2 (3.29) which further describes the E 6,7,8 gauge anomaly based on the aforementioned reasoning. As will be discussed in Sec. 4, this group also captures the anomaly of dynamical 2-form fields, and as a result allows us to explain the cancellation of the universal gauge anomalies by the 2-form fields. SO(n) gauge anomaly The cohomology of BSO(n) is also known [Bor54, MT91] to be p-torsion free for p ≥ 3, so let us also look at the G = SO(n) case. The Z 2 cohomology ring is well known and given as where w i 's are the Stiefel-Whitney classes, on which the cohomology operations act as (3.31) Although SO(n) is not simply-connected, the lowest degree of elements in H * (BSO(n); Z) is 2, meaning that one can derive the bordism group as in the previous case for t − s ≤ 9, which is barely sufficient for our purpose. The A(1)-module structure of H * (BSO(n); Z 2 ) for the range of interest (with large enough n) is represented as (3.32) which is namely As before, there are no differentials from t − s ≤ 9, and the Adams spectral sequence converges as d 0 1 2 3 4 5 6 7 8 · · · Ω spin d (BSO(n)) 0 0 (3.34) and the degree-9 part contains at least two Z 2 's corresponding to (t − s, s) = (9, 0) and (9, 1) which cannot be killed by differentials from t − s = 10. Spin(n) gauge anomaly Also, the Z 2 cohomology of BSpin(n) is known [Qui71, Theorem 6.5] to be where h ≈ n/2 and J is an ideal generated by Sq 1 w 2 , . . . Z 2 3-form fields Here we also compute the bordism group for X = K(Z 2 , 4) for later use in Sec. 5. It is supposed to capture the anomalies of 3-form Z 2 gauge fields, in a similar vein to the 2-form fields' case. Structure of generator manifolds The sloped lines in Adams charts representing h 1 ∈ Ext 1,2 A (Z 2 , Z 2 ) correspond to multiplication π st 1 (pt) × π st • (−) → π st •+1 (−) in terms of stable homotopy groups [Hat04]. Under the Pontrjagin-Thom construction, this multiplication can be geometrically interpreted as Moreover, for the elements [M t ] ∈ Ω spin t−0 (X) coming from Adams filtration s = 0, i.e. E s=0,t 2 = Ext s=0,t A(1) H * (X; Z 2 ), Z 2 ) = Hom t A(1) H * (X; Z 2 ), Z 2 ), there may be a simple interpretation in terms of cohomology (see e.g. [FH19]). Let f : M t → X be a representative of an element of Ω spin t (X) and let us label an element in the row s = 0 by a cohomology class c t ∈ H t (X; Z 2 ). Then the integral Mt f * c t ∈ Z 2 (or more precisely the evaluation of f * c t by the fundamental class of M t ) has a nontrivial value. Anomaly cancellation via 2-form fields In the previous sections, we have found that a fermion in the adjoint representation always has an anomaly for any simple Lie group G, which is detected by G-bundles P G → HP 2 (×S 1 ). Also, a gravitino had a pure gravitational anomaly detected by HP 2 (×S 1 ) which cannot be cancelled by spin 1/2 fermions. However, we know that both an adjoint fermion (namely gaugino) and a gravitino are realized in string theory with N = 1 supersymmetry in 8-dimensions for G = SU(n), Spin(2n), Sp(n), E 6,7,8 (e.g. by F-theory), so there must be a mechanism to cancel these anomalies. 
In this section, we discuss anomaly cancellation via 2-form fields, which is exactly a non-perturbative version of the Green-Schwarz mechanism. 2-form fields A dynamical 2-form field B in d spacetime dimensions yields two conserved currents j e ∼ * dB (where * is the Hodge star) and j m ∼ dB, and correspondingly the theory actually possesses electric 2-form U(1) symmetry and magnetic (d−4)-form U(1) symmetry. Modern understanding of the Green-Schwarz mechanism (and its relatives) is that, it should be interpreted as describing a 't Hooft anomaly of these higher-form symmetries [GKSW14], which enables the cancellation against other anomalies after turning on their background gauge fields A e and A m [HTY20]. Our claim is that the global anomaly of the theory when it is coupled to the 3-form field A e corresponds to an element of Hom( Ω spin d+1 (K(Z, 4)), U(1)). 11 Here, K(Z, 4) appears because the topology of the background 3-form field A e is classified by its 4-form flux (or more precisely its integral-cohomology version). For our purpose, A e will be taken to be a Chern-Simons 3-form of the G gauge field. To explain the anomaly, we construct a (d + 1)-dimensional bulk theory which hosts the original theory on the boundary. We follow the discussions in [HTY20] suitably modified according to the present situation. Let Q be an action in (d + 1)-dimensions describing the anomaly in d-dimensions in question. For the purpose of this paper, we are merely concerned with global anomalies and thus Q is taken to be an element of Hom( Ω spin d+1 (K(Z, 4)), U(1)), but we remark that the discussions below can in principle be generalized to the case where perturbative anomalies are present, especially the case of the original 10d Green-Schwarz mechanism. 12 Let us introduce a dynamical 3-form field C and a dynamical (d − 3)-form field D, both in (d + 1)-dimensional bulk. 13 All the p-form fields are normalized so that their fluxes are integervalued. Then, we take the Euclidean action given by where e and e are parameters, and * is the Hodge star. The product ee has mass dimension 1. More precise definitions of "p-form fields" and terms like " D ∧ dC " are given by the theory of differential cohomology [CS85]. 14 First, let us consider the above theory on a closed (d + 1)-manifold W d+1 (i.e. ∂W d+1 = ∅). After taking the limit e, e → ∞, the kinetic term can be neglected. Carrying out the path integral over D which serves as a Lagrange multiplier setting C → A e , we get (4.2) In this way, the bulk theory only depends on the background field A e , and does not have any dynamical degrees of freedom. Next, let us put the theory on a manifold W d+1 with boundary ∂W d+1 = M d . Here we impose a standard Dirichlet-type boundary condition such that Under this boundary condition, the second and third terms of (4.1) indeed make sense for the following reason [HTY20]. Take any manifold W d+1 with the same boundary M d but with the opposite orientation to W d+1 , so that we can glue them to get a closed manifold W closed = W d+1 ∪ W d+1 . Since C and D vanish on the boundary M d = ∂W d+1 , we can trivially extend them by demanding that they are zero on W d+1 . In this way, we get field configurations on the entire manifold W closed . Then, we define the values for 2πi D ∧ d(C − A) and 2πi · Q(C) on W d+1 to be those on W closed , which can be safely obtained. 
These values do not depend on the choice of W d+1 ; the possible difference between two choices W d+1 and W d+1 is given by the action evaluated on W d+1 ∪ W d+1 , where W d+1 is the orientation reversal of W d+1 , 15 and the value of 2πi D∧d(C −A) is zero since D = 0 on W d+1 ∪W d+1 . The value of 2πi·Q(C) is also zero because C = 0 on W d+1 ∪W d+1 and we have assumed that Q is determined by an element of Hom( Ω spin d+1 (K(Z, 4)), U(1)). Notice that the reduced bordism group Ω spin d+1 (K(Z, 4)) is used rather than Ω spin d+1 (K(Z, 4)), and it is implicitly assumed that Q(0) = 0. As we have argued, there are no dynamical degrees of freedom inside the bulk. Therefore, all the degrees of freedom are localized near the boundary. These degrees of freedom are described as follows. For simplicity, let us first consider the case where the background field is set to zero, A e = 0. We also assume that Q(C) is either cubic in C or topological so that it is irrelevant for the linearized equations of motion. Then the equations of motion in the Lorentzian signature metric (rather than the Euclidean signature metric) is where F C := dC and F D := dD are the field strengths. Let τ ≤ 0 be the coordinate orthonormal to the boundary such that the boundary is located at τ = 0 and the bulk is in the region τ < 0. The equations of motion have localized solutions of the form where F B is a 3-form which depends only on the coordinates of the boundary manifold M d , and * d is the Hodge star on the boundary. The boundary condition (4.3) is satisfied since the differential form dτ becomes zero when it is pulled back to the boundary τ = 0. These expressions for F C and F D are solutions of the equations of motion, if F B satisfies Therefore, F B is interpreted as the field strength of a 2-form field B as F B = dB, where the 2-form fields are the boundary degrees of freedom. The above solution is exponentially localized near the boundary with the length scale (2πee ) −1 , so it is completely localized in the limit ee → ∞. When we turn on the background field A e , one of the equations of motion is changed to (−1) d d( * F D ) = 2πe 2 (F C − F Ae ), where F Ae = dA e . Let us define a 3-form at the boundary by Note that, although the pullback of F D to the boundary is zero by the boundary condition (4.3), its Hodge dual * F D need not be zero at the boundary; indeed, if A e = 0, then H = F B = dB from the solution (4.5). On the other hand, since the pullback of F C is zero at the boundary, we have dH = −F Ae , (4.8) meaning that H can actually be written as H = dB − A e . Anomaly cancellation Let us recapitulate the above results. We introduced a theory which is defined on (d+1)-manifolds possibly with boundaries. Inside the bulk, there are no dynamical degrees of freedom and the partition function is 2πi · Q(A e ). When boundaries exist, there is a localized degree of freedom which is namely a 2-form field B. This means that the 2-form field on the d-dimensional boundary has the anomaly described by Q(A e ). Now we can discuss the anomaly cancellation. Recall that the homotopy groups of the classifying space BE 8 are the same as those of the Eilenberg-MacLane space K(Z, 4) up to very high dimensions (3.23), so that one can identify K(Z, 4) and BE 8 for the present purpose. 
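Concretely, π_3(E_8) ≅ Z while π_i(E_8) = 0 for 4 ≤ i ≤ 14 (the next non-vanishing homotopy group is π_15(E_8) ≅ Z), so the homotopy groups of BE_8 and K(Z, 4) agree up to degree 15, and maps from manifolds of dimension below 15 into either space are classified in the same way.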
More concretely, E 8 -bundles on a manifold X are classified by the homotopy classes of classifying maps f : X → BE 8 , and they correspond one-to-one with characteristic classes f * y ∈ H 4 (X; Z) [X, K(Z, 4)] associated with the generator y ∈ H 4 (BE 8 ; Z), if dim X < 15. This fact can be shown by obstruction-theoretic argument as represented in [Wit86]. Then, let us take the action Q of 9d bulk to be the nontrivial element of Hom( Ω spin 9 (K(Z, 4)), U(1)) = Hom( Ω spin 9 (BE 8 ), U(1)) = Z 2 . (4.9) If we take the background 3-form field A e on X = HP 2 such that its 4-form flux F Ae is equal to the generator x ∈ H 4 (HP 2 ; Z), then Q(A e ) = 1 2 mod Z on HP 2 × S 1 . This is because the E 8 adjoint fermion had a nontrivial anomaly detected by the E 8 -bundle P E 8 → HP 2 as seen in Sec. 2 (which corresponds to the nontrivial element of Hom( Ω spin 9 (BE 8 ), U(1)) = Z 2 ), and the characteristic class f * y ∈ H 4 (HP 2 ; Z) is equal to x for the bundle P E 8 . More generally, if the flux is F Ae = mx (m ∈ Z), then the anomaly is given by Q(A e ) = m 2 mod Z. To cancel the anomaly of the adjoint fermion for generic G detected by the bundle P G → HP 2 , we proceed as follows. Take A e to be the Chern-Simons 3-form associated with the group G such that its restriction to SU(2) via SU(2) → G gives a Chern-Simons 3-form for SU(2) with an odd level, where the map SU(2) → G is the one used in the construction of the bundle P G . This is always possible for simply-connected G, so suppose that G is simply-connected for the moment. Then, we have H i (BG; Z) = 0 for i < 4 and H 4 (BG; Z) = Z, where the generator c of the latter corresponds to an "instanton number" if we consider a classifying map f : X → BG and integrate the pullback f * c on a 4-manifold X. This "instanton number" of G pulls back to that of SU(2) under the map SU(2) → G, and thus c pulls back to the generator of H 4 (BSU(2); Z). The reason that we allow any odd Chern-Simons level k G is that our anomaly is Z 2 valued; it must be odd for the anomaly of an adjoint fermion to be cancelled by 2-form fields. But note that, at the level of the present analysis, we can only determine it modulo 2. 16 The level k G appears in the equation (4.8) where F Ae is now the 4-form constructed from the gauge field strength F G , (4.10) with N G being an appropriate normalization factor such that N G tr(F G ∧ F G ) corresponds to the characteristic class f * c by the Chern-Weil construction. Having chosen A e to be a Chern-Simons 3-form as above, we get an anomaly Q(A e ) of the gauge group G from the 2-form field. By checking its value on the bundles P G → HP 2 (×S 1 ), we see that the anomaly of the fermion in the adjoint representation of G is cancelled by Q(A e ). More generally, we can explicitly check the anomaly cancellation for each generator manifold (equipped with G-bundle) of the bordism group. Thus, we conclude that the fermion in the adjoint representation of G = SU(n), Spin(2n), E 6,7,8 , G 2 can be cancelled by the 2-form. (The situation is the same for a product group G = G 1 × G 2 × · · · of them.) We remark that when G is not simply-connected, it is not necessarily true that we can take such a generator c ∈ H 4 (BG; Z) which pulls back to the generator of H 4 (BSU(2); Z) [Wit00,CFLS19]. The situation is similar for a more general gauge group where G i 's are simple and simply-connected, H j 's are groups whose adjoint fermions do not have the anomaly under discussion (such as H j = U(1)), and Z is a center. 
We want a Chern-Simons 3-form for G such that if we restrict to SU(2) via SU(2) → G i → G, we get an SU(2) Chern-Simons 3-form with an odd level. Such a Chern-Simons 3-form always exists if Z is trivial, but more generally its existence depends on the global topology π 1 (G). This point has been essentially discussed in [CDLZ20], where it was found that this constraint (along with others) gives very good agreement with the gauge groups explicitly realized in F-theory, at least for the case of rank 18. Finally let us incorporate a gravitino. It has a pure gravitational anomaly which is detected by HP 2 . It can be cancelled as follows. In this paper we have been assuming that manifolds are spin, so the structure group of the tangent bundle is Spin(d) in d-dimensions. This group has the Chern-Simons 3-form whose field strength is half the first Pontrjagin class, p 1 /2. We add the Chern-Simons 3-form of this structure group to A e with an odd level. Since the first Pontrjagin class of HP 2 is p 1 = 2x, we have p 1 /2 = x. Thus we see that the anomaly of the gravitino is cancelled in the same way as that of adjoint fermions by replacing f * c with p 1 /2. The equation for H is now given by (4.12) where the notation is similar to (4.10) and the ellipses denote possible terms which are not relevant for the present purposes. In particular, k grav and k G i are the Chern-Simons levels for the gravity and the gauge group G i , respectively. For the purpose of the anomaly cancellation by the 2-form field, we need to take them to be odd. Anomaly cancellation via topological degrees of freedom In string theory, there are some situations in which the mechanism discussed in Sec. 4 is not sufficient to fully explain the anomaly cancellation. Let us mention three examples. Actually, the first two of them are obtained by S 1 compactification of 9-dimensional theories, so let us mention these 9-dimensional theories. • M-theory on Klein bottle, or equivalently, Type IIA string theory on S 1 with a nontrivial holonomy of the Z 2 symmetry (−1) F L which flips the sign of one of the two spinors of the 10-dimensional N = (1, 1) supersymmetry. After the compactification, there is an N = 1 supersymmetry in 9-dimensions and hence a single gravitino, but k grav = 0. • M-theory on Möbius strip, or equivalently, E 8 × E 8 heterotic string theory on S 1 with a nontrivial holonomy of the Z 2 symmetry which exchanges two E 8 's. After the compactification, we have a single E 8 gauge group in 9-dimensions, but k E 8 = 2 (which is the sum of the two Chern-Simons levels of the original E 8 's). • Type IIB string theory with three O7 − -planes, one O7 + -plane, and eight D7-branes on T 2 /Z 2 . Putting n D7-branes on top of the O7 + -plane, we get an Sp(n) gauge group. As discussed in Sec. 2, an adjoint fermion has an anomaly for n ≥ 2 which is detected by the bundle Q G → S 4 × S 4 . The subtlety of this anomaly was already discussed in [GEHO + 17]. In 8d N = 1 supergravity, the only ranks of the total gauge group which are known to be realized in string theory are 18, 10, and 2 [MV20]. The mechanism discussed in Sec. 4 works in the case of rank 18, where all known examples have odd k grav and k G , while the first example above is the case of rank 2, and the second and third examples are the cases of rank 10. Here we focus our attention on the first example mentioned above. We argue that there is a topological degree of freedom, namely a 3-form Z 2 gauge field, which cancels the anomaly of a gravitino. 
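Schematically, and up to signs and overall normalizations, (4.12) says that dH = −( k_grav p_1/2 + Σ_i k_{G_i} N_{G_i} tr(F_{G_i} ∧ F_{G_i}) + · · · ) in terms of Chern-Weil representatives; the values k_grav = 0 and k_{E_8} = 2 quoted above refer to these coefficients, which is why the odd-level mechanism of Sec. 4 cannot by itself account for the anomaly cancellation in these examples.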
Topological degrees of freedom To see the topological degree of freedom and its effect on the topology of spacetime, we first recall some facts about M-theory [Wit96,FH19]. M-theory contains a 3-form field C, and its 4-form flux G is known to satisfy the shifted quantization condition [Wit96]. Let [2G] 2 ∈ H 4 (X 11 ; Z 2 ) be a mod-2 reduction of an (orientation-bundle twisted) integral cohomology class 2G ∈ H 4 (X 11 ; Z). Then we have [2G] 2 = w 4 , (5.1) where w 4 ∈ H 4 (X 11 ; Z 2 ) is the fourth Stiefel-Whitney class. Also, M-theory has parity (or orientation-reversal) symmetry, under which C is odd and the sign is flipped, C → −C (and correspondingly G → −G). Now, consider a manifold X 11 = M 9 × KB, where M 9 is a 9-dimensional spin manifold and KB is a Klein bottle. The Kaluza-Klein reduction of C in the KB compactification contains components C µνρ which are independent of the coordinates of KB and whose three indices are in the direction of M 9 . These components become a 3-form field on M 9 which we also denote as C by abuse of notation. However, this 3-form C on M 9 is severely constrained. By going around a loop in KB along which the orientation is reversed, the sign of C is flipped since C is parityodd. On the other hand, C is independent of the coordinates of KB. These two facts conspire to conclude C = −C up to gauge transformation, which then imply G = −G or equivalently 2G = 0. From (5.1), this means that the condition w 4 = 0 must be imposed on 9-dimensional manifolds M 9 in the low energy theory after the compactification on KB. In general, we believe (although do not show generally) that such a condition cannot be "put by hand". For example, in the case of the previous section, an analogous topological condition is that the right hand side of (4.12) is cohomologically trivial; this is not imposed by hand, but is realized by the 2-form field B. In a similar way, locality presumably requires that the condition w 4 = 0 is imposed by a 3-form Z 2 gauge field, which is described as follows. Let w 4 ∈ Z 4 (M 9 ; Z 2 ) be an explicit cocycle representing w 4 , i.e. w 4 = [w 4 ]. Then the fact that w 4 is trivial means that there is another cochain v 3 ∈ C 3 (M 9 ; Z 2 ) such that where δ is the coboundary operator. This equation is analogous to (4.12). It does not completely specify v 3 . Indeed, let v 3 be another cochain satisfying the same equation. Then we have δ(v 3 − v 3 ) = 0 and hence there is an ambiguity of v 3 − v 3 ∈ Z 3 (M 9 ; Z 2 ). This gives some topological degrees of freedom. It is likely that we should impose a gauge equivalence condition v 3 ∼ v 3 +δu 2 for cochains u 2 ∈ C 2 (M 9 ; Z 2 ). If so, the degrees of freedom contained in v 3 is described by H 3 (M 9 ; Z 2 ). This kind of structure is called the (degree-4) Wu structure [Mon16,MM18]. We remark that if we replace w 4 by w 2 and consider oriented manifolds instead of spin manifolds in the above discussion, the corresponding (degree-2) Wu structure would be a spin structure. In that case, w 2 = 0 implies the existence of a spin structure, and a choice of v 1 such that δv 1 = w 2 corresponds to a choice of an explicit spin structure. 17 Mere the existence of a spin structure is not enough for locality; we need explicit spin structures on manifolds. 18 17 The fact that v 1 corresponds to a spin structure can be seen as follows. Consider the SO(d) bundle associated with the tangent bundle of the manifold, and let g ij be transition functions between two patches U i and U j which take values in SO(d). 
Lettingĝ ij be a lift of g ij to Spin(d), the cocycle (w 2 ) ijk may be defined asĝ ijĝjkĝki = (−1) (w2) ijk . If we defineg ij = (−1) (v1)ijĝ ij , we getg ijgjkgki = 1 and it gives a Spin(d) bundle. Thus, a choice of v 1 gives a spin structure. 18 One can find two manifolds N d and N d with a common boundary ∂N d = ∂N d such that spin structure exists on N d and N d , but not on N d ∪ N d . Thus, "the existence of a spin structure" (rather than explicit spin structure) is not a local concept. For example, take N 4 to be a half K3 surface with the boundary T 3 (which is obtained by e.g. an elliptic fibration over a hemisphere), and N 4 to be D 2 × T 2 . These manifolds N 4 and N 4 can be glued without any problem if we do not care about spin structures, but they cannot be glued keeping spin structures consistent. A simpler In the present situation of M-theory compactified on KB, there is a perfect candidate for such a 3-form Z 2 gauge field. We have discussed that the consistency requires that C = −C or 2C = 0 on M 9 up to gauge transformations. This does not imply C = 0; rather, it implies that C is torsion. Thus C itself is a 3-form Z 2 gauge field on M 9 . More explicitly, when it is integrated over 3-cycles, it takes values 0 or 1 2 mod Z. Thus we can identify where we only consider modulo Z corresponding to the gauge equivalence. Although we do not try to make all mathematical details precise, 19 this identification suggests the desired result (5.2) since we may think that G ∼ δC and hence 2G ∼ δv 3 , while we also had 2G ∼ w 4 in (5.1). Anomaly cancellation We have discussed the existence of topological degrees of freedom v 3 which is an explicit trivialization of the cocycle w 4 representing w 4 . Now we would like to discuss how it is relevant for the anomaly cancellation. For this purpose, we use the results of [Wit96,FH19]. It was found there that the gravitino in 11-dimensional supergravity has an anomaly, but this anomaly can be cancelled by a cubic Chern-Simons term of the 3-form C, which is roughly 1 6 C ∧ G ∧ G + I 8 (R) ∧ C where I 8 (R) is an 8-form constructed from the Riemann curvature R. The anomaly of the 11dimensional gravitino is represented by a 12-dimensional invertible field theory. Although this is nonzero, it is equal (with the opposite sign) to the integral of the 12-form 1 6 G ∧ G ∧ G + I 8 (R) ∧ G as far as the 4-form G satisfies the condition (5.1). Thus the sum of the anomaly of the gravitino and the integral of this 12-form is zero. Now let us restrict our attention to the case Y 12 = W 10 × KB, where W 10 is a spin manifold with w 4 = 0. The Klein bottle also has w 4 = 0, and we can take G = 0 consistently with the condition (5.1). Then the contribution from the 12-form is zero, and hence the contribution from the gravitino must be also zero according to the results of [Wit96,FH19]. Our theory is obtained by the dimensional reduction on KB, so we conclude that the evaluation of the gravitino anomaly on manifolds W 10 with w 4 = 0 is zero. What we have found above is the fact that the anomaly of the gravitino is zero if the theory is formulated in the bordism category of manifolds with Wu structure. After the explicit path integral over the topological degrees of freedom v 3 , the Wu structure is "integrated over" and we expect to get a topological quantum field theory (TQFT) which is defined in the bordism category of manifolds with spin structure. 
This is analogous to the situation that the sum over spin structures give a "bosonic" (non-spin) theory which does not depend on spin structure. Now the question is example in the case of pin + structure rather than spin structure is to take N 2 to be a crosscap with the boundary S 1 , and N 2 to be D 2 . By gluing them, we get a real projective space RP 2 which does not possess a pin + structure. 19 For a more precise definition of C, one must consider the M2-brane partition function and its anomalies [Wit16]. The requirement is as follows. Let N 3 be the worldvolume of an M2-brane, and Z(N 3 ) be the (anomalous) partition function of the degrees of freedom on the M2-brane. Then the product Z(N 3 ) exp(2πi N3 C) must be well-defined. whether the (non-Wu) spin-TQFT reproduces the anomaly of the gravitino. The construction of a class of TQFTs relevant to the current situation been given in [KOT19,Sec. 2.4]. The anomaly of the TQFT coupled to a background Z 2 4-form field is classified by Ω spin 9 (K(Z 2 , 4)), and this anomaly trivializes when the background is turned off. The construction of [KOT19] works in this kind of situation. We have found in Sec. 3.8 and 3.9 that the group Ω spin 9 (K(Z 2 , 4)) contains an element represented by HP 2 × S 1 with a nontrivial background of H 4 (HP 2 ; Z 2 ) turned on. By taking the background Z 2 4-form to be w 4 , we get the desired anomaly which cancels against the gravitino anomaly. We leave the rank 10 cases mentioned at the beginning of this section (e.g. the case of E 8 with level 2 and the case of Sp(n)) for future work. Some of the rank 10 theories are constructed in heterotic string theories [CHL95], and the results of [TY21] suggests that fermion anomalies may be zero as long as the 2-form field is regarded as a background field imposing the (twisted) string structure (4.12). The explicit path integral over the 2-form field is subtle, but the appropriate action for the 2-form field may be obtained by the construction along the lines of [KOT19,Yon20]. It is important to understand what (topological) degrees of freedom exist in the theory. The same question also arises in Type IIB string theory [DDHM21].
2022-03-25T01:15:39.174Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "9fdfed53bd972a34412f75eb85a47086edf602c5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2022)125.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a31a36be84e1ff31b9766f8e351a38b56190d456", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
237278426
pes2o/s2orc
v3-fos-license
Coherent energy loss effects in dihadron azimuthal angular correlations in Deep Inelastic Scattering at small $x$ We perform an exploratory study of the role of coherent, medium-induced energy loss in azimuthal angular correlations in dihadron production in Deep Inelastic Scattering (DIS) at small $x$ where the target proton/nucleus is modeled as a Color Glass Condensate. In this approach coherent radiative energy loss is part of the higher order corrections to the leading order dihadron production cross section. We include the effects of both gluon saturation and coherent radiative energy loss and show that radiative cold-matter energy loss has a significant effect on the so-called coincidence probability for the back to back production of dihadrons in DIS. We also define a double ratio of coincidence probabilities for a nucleus and proton targets and show that it is very robust against higher order radiative corrections. I. INTRODUCTION The rise of parton (and especially gluon) distribution functions of a proton with decreasing Bjorken x as observed in Deep Inelastic Scattering (DIS) experiments at HERA [1] was a pleasant surprise which triggered intense theoretical and experimental studies of the behavior of QCD scattering cross sections at small x (equivalently, at high energy). This observed rise of the gluon distribution function can not however go on forever and must be tamed by high gluon density effects, the so-called gluon saturation [2,3]. The color glass condensate (CGC) formalism [4] is an effective theory of high energy (or equivalently small x) QCD which includes gluon saturation effects. In this formalism the small x gluon modes of a fast-moving proton or nucleus are collectively represented as a classical color field generated by the large x color degrees of freedom treated as static color charges [5,6]. A high energy collision involving two hadrons/nuclei at small x in this approach is thus treated as a collision of two classical color fields, i.e. two color shock waves. On the other hand in DIS at small x one has a two-stage process where the virtual photon first splits into a quark anti-quark pair (a dipole) which subsequently scatter from the target hadron/nucleus modeled as a classical color field. Higher order loop corrections then lead to energy (equivalently x or rapidity) dependence of the quark anti-quark dipole-hadron/nucleus scattering cross section. There have been numerous applications of the CGC formalism to particle production in high energy proton-proton, proton-nucleus and nucleus-nucleus collisions as well as to fully inclusive structure functions in DIS [7]. While there are strong hints for the presence of significant saturation effects in particle production spectra in the high energy hadronic and nuclear collisions at RHIC [8] and the LHC, more differential measurements in a cleaner environment [9] and higher precision theoretical calculations are needed to clearly establish gluon saturation as the dominant dynamics in the observed particle spectra at small x. Two-particle production and azimuthal angular correlations are perhaps the most sensitive probe of saturation dynamics and as such have been intensively studied in the CGC formalism [10][11][12][13][14][15][16][17][18][19][20][21][22][23]. The Color Glass Condensate formalism predicts a broadening and eventual disappearing of the away side peak in dihadron back-to-back correlations [12] as experimentally observed in forward rapidity proton (deuteron)-nucleus collisions at RHIC [24,25]. 
While Leading Order CGC calculations of dihadron production and angular correlations with (or without) running coupling corrections describe the experimental data quite well there may be other effects which also significantly contribute to this disappearance of the away side peak, for example, cold matter energy loss where one of the produced partons scatters from the nuclear target and radiates away some of its energy. Indeed it has been shown [26] that combining phenomenologically-motivated models of cold matter energy loss with models of nuclear shadowing of parton distribution functions can also describe the experimental data. Therefore it is prudent to understand how important cold matter energy loss effects are as compared with gluon saturation. It should be noted that the coherent energy loss as we define here is part of the Next to Leading Order (NLO) corrections to the Leading Order (LO) dihadron production cross section. In the Color Glass Condensate formalism this is true to any order in the coupling constant where the coherent energy loss is a higher order in α s correction to a fixed order calculation. Nevertheless as NLO corrections to this process [27] are not currently known 1 it is therefore useful to have a quantitative estimate of coherent energy loss effects on the dihadron azimuthal angular correlations. In this exploratory work we study dihadron (quark anti-quark) azimuthal angular correlations in the back-to-back kinematics in DIS at small x where both gluon saturation and coherent cold matter energy loss are included using the same formalism. First, we re-derive the cross section for production of a quark, anti-quark and a gluon in DIS which was already done in [29,30]. We then take the soft gluon limit and integrate over the final state gluon transverse momentum and compare the soft gluon radiation spectra, normalized to no radiation, between a nucleus and a proton target. Gaussian approximation is used to calculate the correlation functions of Wilson lines appearing as dipoles and quadrupoles which efficiently contain all the target information. We show that medium-induced coherent energy loss is most significant at the back to back limit and drops off as one goes away from this limit, and as one considers higher photon virtualities. We then consider the contribution of coherent energy loss to the away side peak in dihadron correlations in the back-to-back kinematics and show that it is significant. We then define a double ratio of coincidence probabilities and show that this double ratio is very robust against NLO corrections. We finish by outlining the steps needed for a more realistic study of medium-induced energy loss effects in dihadron angular correlations. II. COHERENT ENERGY LOSS IN DIS AT SMALL x FROM CGC The leading order process for dihadron 2 production in DIS at small x is the splitting of the virtual photon into a quark anti-quark pair which then multiply scatters on the target proton or nucleus. In the eikonal approximation inherent at small x it is assumed that the energy of the photon, and hence of the quark anti-quark pair, is so large that their recoil can be neglected and the pair stays on straight line trajectories while passing through the target. The scattering amplitude contains two Wilson lines [31] (multiple scatterings of each parton from the target is resummed into a Wilson line) so that dihadron production cross section involves not only dipoles but also quadrupoles, correlation functions of two and four Wilson lines. 
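In standard CGC notation (conventions may differ from this paper's by normalization), these correlators are S^(2)(x_1, x_2) = (1/N_c) ⟨ Tr[ V(x_1) V†(x_2) ] ⟩ and S^(4)(x_1, x_2, x_3, x_4) = (1/N_c) ⟨ Tr[ V(x_1) V†(x_2) V(x_3) V†(x_4) ] ⟩, where V(x) is a fundamental-representation Wilson line built from the target color field and ⟨· · ·⟩ denotes the average over target field configurations.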
These dipoles and quadrupoles satisfy the BK/JIMWLK evolution equation [32][33][34][35][36][37][38][39] which governs their energy (rapidity or x) dependence [40][41][42]. In the Color Glass Condensate formalism multiple scatterings and rapidity evolution result in the broadening and reduction of the away side peak in dihadron azimuthal angular correlations. As either quark or anti-quark radiates a gluon, the energy carried away by the not-measured soft gluon will look as if it is lost in the process. Following Munier, Peigné and Petreska [43] we define the medium-induced radiation spectrum as (1) where y 1 , y 2 , y 3 are rapidities of the quark, anti-quark and radiated gluon respectively while p, q are the transverse momenta of the quark and anti-quark and the transverse momentum of the gluon is integrated over. Here z 3 is the radiated gluon's fraction of the photon's plus momentum. We note that the three-parton production cross section in DIS at small x is already computed in [29,30]. Integrating over the gluon momentum gives the contribution of the real corrections in the Next to Leading Order corrections to the Leading Order quark anti-quark production in DIS at small x. The medium-induced coherent energy loss is then defined [43] as the difference in radiation spectra between a nucleus and a proton target, ( where an integration over the transverse momentum of the radiated gluon is implied in the numerators. Suppression of the away side peak in two-particle correlations as a function of the azimuthal angle ∆φ between the two outgoing particles in forward rapidity deuteron-nucleus collisions was predicted in [12] using leading order calculations of the coincidence probability CP(∆φ), here defined as [13], for dihadron production in DIS. CP(∆φ) is a commonly studied observable which represents the probability per unit angle for correlated production of two hadrons; a leading (trigger) hadron with transverse momentum |p| between p min and p max accompanied by an away side hadron with transverse momentum |q| between q min and q max with an azimuthal angular separation of ∆φ. We will explore the contribution of fully coherent cold matter energy loss to CP(∆φ) by adding radiative corrections to N pair while using the leading order result for N trig . In this preliminary study we will assume fixed rapidities to simplify our calculations. We use the spinor helicity methods to calculate the quark anti-quark and quark anti-quark gluon production amplitudes. In the latter case there are four diagrams corresponding to the four possibilities when the gluon is radiated from either the quark or anti-quark and before or after the scattering from the target (see [30] for more details), which can be evaluated to give where we have factored out the overall momentum (+ component only) conserving delta function 2πδ(l + −p + −q + −k + ) (not shown and hence the reason for denoting the amplitude as iM rather than iA as in the figure). We have also defined z 1 , z 2 , z 3 as fractions of the virtual photon energy carried by the quark, anti-quark and gluon respectively. The numerators N i contain the spinor structures of the amplitude and are defined as In this exploratory work we consider a longitudinally polarized photon and use the spinor helicity formalism to evaluate these numerators for a given parton helicity. The integrals over k 1 and k 2 can then be performed. 
Here we show the amplitudes for the specific case of a positive helicity quark (so that anti-quark has negative helicity) and gluon, and a longitudinally polarized photon. iM a L: iM a L;+,+ iM a L;+,+ where we have defined some shorthand notations, The next step is to square a given helicity amplitude and then to add up all the squared helicity amplitudes to get the un-polarized cross section. For example, the first helicity amplitude (6) squared and summed over final state helicities, labeled as M L 11 is where To proceed further we take the soft gluon limit (z 3 z 1 , z 2 ) and perform the integrals over the transverse momentum k. We also define the two and four point functions of Wilson lines S (2) (x i , x j ) with shorthand S ij and S (4) (x i , x j , x k , x l ) with shorthand S ijkl (dipoles and quadrupoles) as which can be evaluated explicitly in the Gaussian approximation [10,[44][45][46][47] and are given by where Λ is an infrared regulator. The presence of the logarithm in Γ ij is essential for the correct power-law behavior of the cross sections at high transverse momenta. However it does make the analytic evaluation of these integrals impossible. As we will be exploring the more interesting (and experimentally accessible) region of low to intermediate transverse momenta we will ignore it in this exploratory study which corresponds to taking the Golec Biernat-Wusthoff model [48] of the dipole profile. With these approximations our full amplitude squared for production of a quark, anti-quark and a gluon with the transverse momentum of the gluon integrated can be written as with the radiation kernel ∆ ij given by While this result looks very compact it is still not very amenable to phenomenological studies of importance to experiments. Therefore we consider the more interesting limit of back-to-back azimuthal angular correlations. III. THE BACK-TO-BACK LIMIT OF AZIMUTHAL ANGULAR CORRELATIONS To make a quantitative estimate of the role of coherent energy loss and gluon saturation we will focus on the back-to-back kinematics region in dihadron production in DIS at small x. To this end we define the total and relative momenta P ≡ p + q, K ≡ z 2 p − z 1 q. (16) and take the "back-to-back correlation" limit [45,49] defined as We define new coordinate-space variables, u, v, u , v , in terms of which the back-to-back limit corresponds to taking |u| ∼ |u | 1. Taking this limit in (14) and expanding to lowest nonzero order in u and u we get where πR 2 is the transverse area of the target arising from integration over x 3 (after a substitution). The integrals over u and u can now be evaluated explicitly and the remaining integrals over v and v integrals can be written in polar coordinates to get where we have defined new dimensionless variables r and r r = Λ|v|, r = Λ|v| (21) and Λ is a constant parameter with units of mass. All angles are measured relative to the K vector so that φ P is the angle between P and K, and (φ, φ ) are the angles of (v, v ) with respect to K. We calculate the medium-induced, coherent energy loss as defined in (2). The induced radiation spectrum can be written as, Here f (r, r , φ, φ ) is the integrand in Eq. 20 and the Leading Order result [4] is used in the denominator. The only difference between the first and second terms is the different saturation scale Q s of a nucleus and a proton. It should be noted that we have taken the same back-to-back limit in the denominators above which describe the Leading Order quark anti-quark production cross sections. 
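As a rough numerical illustration of the dipole profile adopted above (the GBW form, with the logarithm in Γ_ij dropped), the following minimal sketch evaluates the GBW dipole and checks its two-dimensional Fourier transform against the known Gaussian result; the saturation-scale value is a placeholder, not a fitted parameter of this work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def S_gbw(r, Qs):
    """GBW dipole S-matrix: S(r) = exp(-r^2 Qs^2 / 4), with r in GeV^-1 and Qs in GeV."""
    return np.exp(-(r * Qs) ** 2 / 4.0)

def S_gbw_ft(k, Qs, rmax=60.0):
    """2D Fourier transform: int d^2r e^{i k.r} S(r) = 2*pi * int_0^inf r dr J0(k r) S(r)."""
    integrand = lambda r: r * j0(k * r) * S_gbw(r, Qs)
    val, _ = quad(integrand, 0.0, rmax, limit=400)
    return 2.0 * np.pi * val

if __name__ == "__main__":
    Qs = 1.0  # placeholder saturation scale in GeV
    for k in (0.5, 1.0, 2.0):
        numeric = S_gbw_ft(k, Qs)
        analytic = (4.0 * np.pi / Qs**2) * np.exp(-(k / Qs) ** 2)  # exact Gaussian transform
        print(f"k = {k:.1f} GeV: numeric = {numeric:.4f}, analytic = {analytic:.4f}")
```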
Furthermore, we use a cut off on the integration variables r and r in order to impose color neutrality at scales comparable to confinement scale Λ QCD (i.e. we choose Λ = Λ QCD = 200MeV and the r, r integrals then go from 0 to 1). Finally, to get an estimate of the energy loss effects we consider some specific values for the final state partons; we will consider the case when both partons have similar rapidities so that z 1 = 0.55, z 2 = 0.45 and their transverse momenta are of the order of the photon virtuality Q, specifically We show our results for the medium-induced coherent energy loss (22) in Fig.(2) for various values of external momenta, and two different values of nuclear saturation scale which mimics changing centrality, rapidity and/or nuclear A number. As seen the induced radiation is largest at the back to back kinematics and drops off sharply as one goes away from this limit. Also at the exact back to back limit the induced radiation is independent of photon virtuality, this is so since we have taken the quark and anti-quark momenta equal to photon virtuality. Furthermore the size of induced radiation increases with nuclear size as expected. This is also indicative of the size of the Next to Leading Order corrections, however keeping in mind that contributions of some of the Next to Leading terms cancel between a proton and a nucleus target in our definition of energy loss radiation spectrum in (2). This clearly shows the importance of the full Next to Leading Order corrections to dihadron production cross section in DIS at small x. This is work in progress and will be reported elsewhere [27]. Another effect that is known to be important is the Sudakov effect [18,50], however this is beyond the scope of this work and will not be considered here. To implement the cold matter energy loss effects in dihadron angular correlations, we calculate the coincidence probability (3) using numerical methods to evaluate the remaining integrals. Following [13], we choose similar values for the external momentum windows, we choose p min = 2 GeV, p max = 10 GeV, q min = 1 GeV, q max = p. We also impose color neutrality at lengths beyond 1fm by using a cutoff on the r, r integrals by choosing Λ = Λ QCD = 0.2 GeV and setting r max = 1. We fix the rapidities by choosing z 1 = 0.55, z 2 = 0.45. We use the back-to-back limit in both N pair and in the single inclusive N trig 3 . Clearly the induced radiation which is "lost" has a significant effect on the away side peak of dihadron azimuthal angular correlations. Note that we have restricted ourselves to angles in a very narrow window around the away side hadron. This is due to our strict back to back approximation which is expected to break down when going away from the away side hadron by a large angle. A proper quantitative estimate of the size of these corrections to the back to back approximation requires a detailed quantitative study using the various improved Transverse Momentum Dependent (TMD) distributions as advocated in [51,52] and in [53][54][55][56][57][58] for proton-nucleus collisions. Nevertheless for the sake of comparison we show our results for a much wider range in ∆φ once in Fig. (4) and limit the rest of our analysis to the range ∆φ ∈ [2.9, 3.4] where we expect that the back-to-back approximation will still provide accurate results. 
To see the effect of the "lost" radiation more clearly we show the ratio of coincidence probabilities with the induced radiation to that of no radiation in Fig.(5) where an enhancement factor of order 3 is seen with a weak angular dependence . Here we show the ratio of CP(∆φ) at next-to-leading order versus at leading order calculated for a proton and a large nucleus target. A very weak dependence of next to leading order corrections on target size and angle is observed for small angles away from π. To investigate the medium dependence of the coincidence probability we define the double ratio of coincidence probabilities for a nucleus vs a proton in analogy with the medium modification factor R pA in proton-nucleus collisions and show this double ratio in Fig.(6). A significant reduction of the coincidence probability for a nucleus target is seen which is again very robust against next to leading order corrections. A clear increase in the magnitude of this double ratio is see with the increasing photon virtuality, reminiscent of the behavior of R pA in proton-nucleus collisions. There are several ways in which our exploratory study can be improved; we have used the GBW model of dipole profile which is known to miss the high p t tail of the production spectra. One can improve the calculation by using more realistic dipole profiles which have become available recently. It will also be important to go beyond the strict back to back limit so that one can extend the present analysis to larger angles away from π. This will also shed light on the domain of applicability of back to back approximation. Furthermore, we have only considered longitudinal photons, in a more realistic analysis one will need to include transverse photons as well. However we do not expect our results to change much by this. Lastly, we have considered the effect of radiated soft gluon on the quark anti-quark production by integrating out the radiated gluon. In a more realistic approach to dihadron production one will need to consider integrating out any of the three outgoing partons. This will be done when we compute the full next to leading order corrections to dihadron production and express (some of) the final state singularities into hadron fragmentation functions. This is work in progress and will be reported elsewhere [27]. In summary we have performed an exploratory study of the contribution of coherent, medium-induced radiative energy loss to the away-side peak in dihadron azimuthal angular correlations in DIS at small x. We observe a sizable contribution to the coincidence probability from the induced radiation which indicates the significance of next to leading order corrections to dihadron azimuthal angular correlations. We have defined a double ratio of coincidence probabilities and have shown that it is very stable against higher order corrections and thus may be a more robust signature of saturation dynamics.
2021-08-25T01:15:45.891Z
2021-08-23T00:00:00.000
{ "year": 2021, "sha1": "fda1086f4ecde2439a856c6d42b1166d0ab6955c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2108.10428", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fda1086f4ecde2439a856c6d42b1166d0ab6955c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
239000623
pes2o/s2orc
v3-fos-license
The Differences in Transaminase Enzyme Levels among Children with Acute Diarrhea due to Rotavirus and Non-rotavirus BACKGROUND: Diarrhea is the particular disease that still affects children in Indonesia, with rotavirus being the most common etiology among children under 5 years old. Rotavirus and non-rotavirus diarrhea can spread to the extraintestinal and localized to the liver which causes liver cell damage, thus, the level of the glutamic oxaloacetic and glutamic pyruvic transaminase enzymes increases. AIM: The objective of the study was to prove that there are differences in serum levels of glutamic oxaloacetic and glutamic pyruvic transaminase in children with acute diarrhea due to rotavirus and non-rotavirus infection. METHODS: This study used a cross-sectional design, the research subjects were children aged 6 months old until 60 months old with acute diarrhea in Denpasar Public Health Center, Sanglah, and Wangaya General Hospital within the period of March 2018 until March 2021. Statistical analysis used the Mann–Whitney. RESULTS: A total of 70 subjects were analyzed in this study. There were 24.28% of subjects with rotavirus. Each group had nearly the same degree of severity of 29.4% for rotavirus and 30.2% for non-rotavirus, with a median of serum levels of glutamic oxaloacetic transaminase (SGOT) 47 (19–261) and glutamic pyruvic transaminase (SGPT) 25 (7–217). The results of this study showed that the median difference in aspartate aminotransferase and alanine aminotransferase levels was not significant in rotavirus and non-rotavirus diarrhea (SGOT 45 [16–168], 32 [11–261], p = 0.077; (SGPT 22 [14–91], 18 [5–217], p = 0.12). CONCLUSION: This study concluded that there is a higher median level of SGOT and SGPT in children with acute diarrhea due to rotavirus infection compared to non-rotavirus infection, although it is not statistically significant. Edited by: Ksenija Bogoeva-Kostovska Citation: Wijaya S, Karyana IPG, Gunawijaya E, Subanada IB, Adnyana IGA, Witarini KA. The Differences in Transaminase Enzyme Levels among Children with Acute Diarrhea due to Rotavirus and Non-rotavirus. Open-Access Maced J Med Sci. 2021 Sep 11; 9(B):1075-1079. https://doi.org/10.3889/oamjms.2021.6737 Introduction Transaminase enzyme is one of the enzyme markers of liver damage. The elevated transaminase enzyme levels can be caused by an autoimmune process, metabolic, prolonged drug consumption, anatomical abnormalities, and circulatory disorders in the liver and infection processes, one of which is diarrhea. In the children with diarrhea, extraintestinal processes can occur, one of which is the spreading of the infection to the liver, which is characterized by an increase in the transaminase enzyme levels. Diarrhea is one of the most common diseases suffered by children in Indonesia. The most common cause of diarrhea in children under 5 years old is rotavirus infection. The diarrhea that has been most studied for its extraintestinal spread, especially to the liver, is diarrhea due to rotavirus. The spreading process of rotavirus to the liver is related to the severity of the diarrhea suffered by children. The infection process in the liver causes liver cell damage so that the serum levels of glutamic oxaloacetic transaminase and glutamic pyruvic transaminase increase. Non-rotaviral diarrhea can also spread to the liver and is associated with the elevated serum levels of serum levels of glutamic oxaloacetic transaminase (SGOT) and glutamic pyruvic transaminase (SGPT). 
In a study conducted in Turkey, there was an increase in the serum levels of SGPT 6.8% and SGOT 11.9% in the non-rotavirus subjects [1]. The incidence of the diarrhea due to rotavirus worldwide is 114 million children [2]. Research conducted in six hospitals in Indonesia found that 60% of pediatric diarrhea patients studied were caused by rotavirus [3]. Especially at Sanglah General Hospital in 2006, 61% of the children under 5 years who suffered from diarrhea found having positive results of rotavirus. Rotavirus infection in children mostly occurs in the intratestinal, but can also occur in the extraintestinal or systemic infection. Rotavirus can cause systemic infections such as hepatitis, nephritis, pneumonia, exanthema, disseminated intravascular coagulation, hemophagocytic lymphohistiocytosis, encephalitis, https://oamjms.eu/index.php/mjms/index and cerebellitis. Another research found that as many as 600 thousand children in the world died from rotavirus and among fatal cases, there was an increase in liver enzymes [4]. Another research on transaminase enzyme levels in diarrhea patients found that an increase in SGPT serum levels of 8.5% and SGOT 24.4% in patients with diarrhea due to rotavirus infection, this result was significantly increased compared to diarrhea due to norovirus and adenovirus infection [5]. This spreading was influenced by the severity of the diarrhea that occurred in children. Diarrhea due to rotavirus with a severe degree of severity indicates that the rotavirus infection process is still ongoing and increases the possibility of the viremia process and virus is spreading to other organs through the lymphatic system, one of which is to the liver [6]. In the patients who had severe diarrhea, it was found that the differences of SGOT and SGPT serum levels in rotavirus and non-rotavirus diarrhea increased, respectively (SGPT 18.1%, 5.6%; SGOT 24.8%, 14%) [1]. Data on SGOT and glutamic pyruvic transaminase (SGPT) as the markers of the virus spreading process to the liver in the acute rotavirus and non-rotavirus diarrhea are still not available in Indonesia until now. Thus, research is needed to determine differences in transaminase enzyme levels in children with acute diarrhea due to rotavirus and non-rotavirus in the liver. Study population The research samples were the children who looked for treatments in all public health centers in Denpasar, Sanglah General Hospital, and Wangaya General Hospital from 2018 to 2021. There were 71 subjects who met the inclusion criteria. The samples were determined by consecutive sampling. The followings were included in the exclusion criteria: Hepatitis virus [7], Wilson's disease [8], toxic condition due to drugs (hepatotoxic) [9], hepatic shock [10], Duchene muscular dystrophy [11], tuberculosis [12], cytomegalovirus [13], HIV infection [14], malnutrition and obesity [15], celiac disease [16], and inflammatory bowel syndrome [17]. SGOT is examined according to International Federation of Clinical Chemistry (IFCC) with pyridoxal-5-phosphate, whereas SGPT is examined according to IFCC without pyridoxal-5phosphate. The normal level of SGOT and SGPT is based on reference range for adults and children with units of U/L [18]. Statistical analysis and study design This research was an observational descriptive cross-sectional study. All of the statistical calculations used the Statistical Product and Service Solutions (SPSS) computer system software. 
Descriptive analysis aimed to describe the characteristics of the research subjects and the variables studied. Variables with numerical data scale will be displayed in the form of mean (SB) or median with minimum and maximum values if the data were not normally distributed. Variables with categorical data scale will be displayed in the form of relative frequency (amount and percent). The results of the descriptive analysis were presented using a single distribution table. All of the research variables with numerical scale were tested for data normality using the Kolmogorov-Smirnov test. The data distribution that was not normal was displayed in the form of median or the data transformation was carried out. The distribution of data was said to be normal if the test result find p > 0.05. The analysis used the Mann-Whitney U-test because the data normality test was not normal. The level of significance was expressed by p < 0.05. Results During the research period from July 2019 to March 2021, there were 71 subjects who met the inclusion criteria, one subject was excluded because of stepping down so that a total of 70 subjects were obtained. The research subjects included outpatients and inpatients who experienced diarrhea in all Public Health Centers in Denpasar, Wangaya General Hospital, and Sanglah General Hospital. The characteristics of the 70 research subjects had a median age of 16 months (minimummaximum, 6-56 months). Most of the research subjects were male (70.0%) with the median of SGOT was 35 (11-261) and SGPT 21 (5-217), while in females, the median of SGOT was 45 and . The nutritional status of the research subjects was mostly in normal nutritional status (68.6%), with the median SGOT was 38.5 (14-166) and . The results of this research obtained 17 (24.3%) subjects with rotavirus, with 64.7% was male with a mean age of 15.6 ± 9.4 months. As many as 94.1% of rotavirus subjects aged <24 months had the examination results obtained, median of SGOT is 40 (14-166) and SGOT 21 . In the rotavirus group, the serum levels of SGOT in 3 (17.6%) subjects and SGPT in 6 (35.2%) subjects increased above normal limits. In the non-rotavirus group, the Discussion Rotavirus is one of the most common causes of diarrhea in children, the incidence of diarrhea due to rotavirus worldwide is 114 million children [2]. Reports from Venezuela showed that rotavirus occurred in 21.3% of children under 5 years old [19], and in Indonesia, 60% of pediatric patients suffered from rotavirus diarrhea [3]. In addition, in the WHO global rotavirus monitoring, the median rotavirus data among 48 countries were 40% [20]. Our results were higher than the reported research from Venezuela, but still lower than the global number 24.3%. Akelma et al. found in their research that in the patients with rotavirus diarrhea, the mean age was 33.46 ± 31.85 months, SGPT levels in 42 (15.4%) subjects and SGOT in 69 (25.4%) subjects were found increasing [1]. In the non-rotavirus diarrhea group, the levels of SGPT in 25 (6.8%) subjects and SGOT in 44 (11.9%) subjects were found to be elevated above normal [1]. In this research, it was found that the median age was 16 (6-56) months and rotavirus was found to occur more in 16 (94.1%) children <2 years old in most cases. Our research was in agreement with Akelma et al. [1] which found that the higher sex in the rotavirus group was male 54.4% and non-rotavirus group 58.4%. Another study found that 68.75% of the rotavirus group were male and 70.83% were under 2 years old [21]. 
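The group comparison described in the statistical-analysis section (normality screening followed by the Mann-Whitney U-test at p < 0.05) can be reproduced with standard tools; in the sketch below the two arrays are placeholder values for illustration only, not the study data, and Shapiro-Wilk is used in place of the paper's Kolmogorov-Smirnov normality test for brevity.

```python
import numpy as np
from scipy import stats

# Placeholder SGOT values (U/L) for the two groups -- illustrative only, not the study data.
sgot_rotavirus = np.array([40, 55, 47, 62, 38, 90, 45, 51])
sgot_non_rotavirus = np.array([32, 28, 41, 35, 44, 30, 39, 50, 27, 33])

# Normality screening for each group (the study used the Kolmogorov-Smirnov test;
# Shapiro-Wilk is substituted here for simplicity).
for name, values in [("rotavirus", sgot_rotavirus), ("non-rotavirus", sgot_non_rotavirus)]:
    stat, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Non-parametric comparison of the two groups, as in the paper.
u_stat, p_value = stats.mannwhitneyu(sgot_rotavirus, sgot_non_rotavirus, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
print("significant at p < 0.05" if p_value < 0.05 else "not significant at p < 0.05")
```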
In this research, there were no clinically significant differences between rotavirus and non-rotavirus diarrhea. The description of dehydration and the need for hospitalization in patients with rotavirus diarrhea were in accordance with the results of a research done by Kucuk et al., it was found that the need for hospitalization in rotavirus diarrhea patients was 50% with mild, moderate, and severe degrees of dehydration, respectively, 20.7%, 67.1%, and 12.2% (p = 0.390) [5]. In a research conducted by Kawashima et al., it was found that the serum levels of SGOT and SGPT were both above the upper normal limit (SGOT <38 U/L; SGPT <44 U/L) in 23 of 26 subjects (88.5%), and three of 26 subjects (11.5%) [4]. Another research in Monmouth Philadelphia 2007, among 92 children with rotavirus, 75 children were tested for serum transaminase and found that 15 (20%) children had an SGOT serum levels in 7 (13.2%) subjects and SGPT in 13 (24.5%) subjects increased above normal. The general characteristics of the subjects are shown in Table 1. The severity of the diarrhea was based on the Vesikari severity clinical score system and it was found that most subjects had moderate severity 52.9% for rotavirus and 43.4% for non-rotavirus. Each group had almost the same severe degree of severity of 29.4% for rotavirus and 30.2% for non-rotavirus, with median SGOT was 47 (19-261) and SGPT 25 (7-217). The clinical features of rotavirus and non-rotavirus diarrhea are shown in Table 2. The analysis results of SGOT and SGPT examinations on rotavirus and non-rotavirus infections found that there were no significant differences. The analysis results of the differences between the levels of SGOT and SGPT on rotavirus and non-rotavirus diarrhea infections are shown in Table 3. This insignificantly different result of transaminase levels could be influenced by the ability of strain-specific rotavirus to infect liver cells (HepG2) which occurs in the process of extraintestinal spreading and affects changes in the transaminase enzyme. In a genetic-based study to determine the type of strain and identify the viral phenotype involved in extraintestinal spread to the liver, genome segment 7 encoding the non-structural protein NSP3 and genome segment 6 was found to be significantly associated with viral spread to the liver and affect changes in transaminase levels. So far, rotavirus has many variants with 32 G genotypes, 47 P genotypes, and genome sequences that affect the coding of non-structural proteins that can manifest in extraintestinal spreading and affect transaminase enzyme changes [6]. Further research is needed to determine the virus genotype in the approach of this SGOT and SGPT level differences. The median levels of SGOT and SGPT in the non rotavirus group were found to be lower than in the rotavirus group, but the difference was not statistically significant. In non-rotavirus subjects, this difference of SGOT and SGPT levels can be influenced by other factors including infections other than rotavirus such as bacteria, adenovirus, norovirus, bacteria, fungi, and other etiologies, which were not examined. Conclusion This study concluded that there is a higher median level of SGOT and SGPT in children with acute diarrhea due to rotavirus infection compared to non-rotavirus infection, although it is not statistically significant. Future research needs to consider performing rotavirus genotype testing and specific testing to diagnose disorders that affect liver function. 
Limitations This study has several weaknesses. The examination was limited to rotavirus infection only; other causes of infection such as adenovirus, norovirus, bacteria, fungi, and other etiologies were not examined. In addition, the exclusion criteria for old patients were based only on interviews and medical record data, while for new patients they were based only on interviews, and no examination was carried out to establish the diagnosis.
2021-10-15T15:05:12.950Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "841346d2dbccaf013065ba8b66d1845367139177", "oa_license": "CCBYNC", "oa_url": "https://oamjms.eu/index.php/mjms/article/download/6737/6129", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3920da9c7e3ee54cf98e19f2518deb53c061f418", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
55896336
pes2o/s2orc
v3-fos-license
Effect of Thymus vulgaris and Bunium persicum essential oils on the oxidative stability of virgin olive oil Natural antioxidants are becoming a major focus because natural food ingredients are safer than synthetic types. The aim of this study was to investigate the protective effects of Thymus vulgaris and Bunium persicum essential oils (EO) on the oxidation of virgin olive oil (VOO) during accelerated storage. The antioxidant activities of EOs were compared with those of α-tocopherol and BHT. GC/MS analyses revealed that thymol (28.50%), p-cymene (27.14%), carvacrol (18.36%), and γ-terpinene (4.97%) are the main components of T. vulgaris EO, while cuminaldehyde (32.81%), γ-terpinene (16.02%) and p-cymene (14.07%) are the main components of B. persicum EO. Both EOs provided protection for the VOO, inhibiting the formation of primary and secondary oxidation products although T. vulgaris EO showed greater protection against the oxidation process than B. persicum EO. The effect of T. vulgaris essential oil on the oxidation inhibition of VOO was similar to that of BHT. α-Tocopherol showed no measurable effect on improving the oxidative stability of VOO. This study suggests that T. vulgaris and B. persicum EOs can be used to improve the oxidative stability of VOO. INTRODUCTION Virgin olive oil (VOO) is highly appreciated for its delicious taste and aroma, as well as for its nutritional properties.Its nutritional benefits are primarily related to its fatty acid composition, mainly due to the high content of oleic acid and also to the balanced ratio of saturated and polyunsaturated fatty acids (PUFAs).Furthermore, olive oil presents considerable amounts of natural antioxidants (Moldão-Martins et al., 2004).During storage, oxidation reactions reduce the high nutritional value of VOO and modify its characteristic flavor through the development of offflavors derived from hydro peroxide decomposition products (Morales et al., 1997). Among the most usual strategies to inhibit lipid oxidation in vegetable oils, the addition of antioxidants has been practiced for decades.However, recent studies claim that synthetic antioxidant compounds could pose possible hazards and carcinogenic effects (Sasaki et al., 2002).Moreover, according to the Codex Alimentarius Commission, synthetic antioxidants are not permitted for use in VOO (Codex, 2011).Furthermore, natural antioxidants such as tocopherols and their derivatives, which can be used as alternatives to BHA and BHT, exhibit little effectiveness in some systems and also increase manufacturing costs.Consequently, there is the need to identify alternative natural and safe sources of antioxidants to be incorporated into food products.These safer sources of antioxidants can be especially of plant origin, where relevant research has notably increased in recent years. Buniumpersicum is an important aromatic plant that belongs to the Apiaceae family.It originates from central Asia to North India.The γ-Terpinene, cuminaldehyde, p-cymene and limonene are major components of B. persicum EO (Mazidi et al., 2012).Thymus vulgaris (also known as common thyme) is a member of the Labiatae family.Dried plant materials of T. vulgaris contain 1 -2.5% EO.Meanwhile, Thymol, carvacrol, p-cymene, and γ-terpinene are the main components of T. vulgaris EO (Golmakani and Rezaei, 2008). 
Essential oils (EOs) obtained from aromatic plants have received considerable attention in the current era of concerns for food safety.For instance, a report claims that Carumcopticum EO (0.075%) is more effective than BHA and BHT (0.02%) in retarding the oxidation of sunflower oil (Hashemi et al., 2014).Furthermore, Inanc and Maskan (2014) reported that carvacrol can significantly improve the oxidative stability of palm oil in comparison with the control sample.Also, the antioxidant activity of carvacrol is known to be similar to that of BHT.Ruberto and Baratta, (2000) reported that γ-terpinene (which is the major component of B. persicum EO) shows a very high antioxidant activity.More specifically, γ-terpinene has a comparable activity to that of α-tocopherol. The objective of this study is to compare the effects of T. vulgaris and B. persicum EOs on the oxidation of VOO during accelerated storage.The antioxidant activities of EOs are compared with those of α-tocopherol and BHT. Materials Dried seeds of B. persicum and the dried aerial parts of T. vulgaris were purchased from a local market in Shiraz, Iran.The genus and species of both plants were confirmed by experts from the Herbarium of Biology Department at Shiraz University, Shiraz, Iran.VOO was supplied from the Etka Oil Company (Rudbar, Iran).All chemicals used in this research were of analytical grade and were purchased from Merck (Darmstad, Germany) and Sigma-Aldrich (St. Louis, MO, USA). Extraction of EO Fifty grams of each plant sample were mixed with 500 mL of distilled water.They were hydrodistillated for 3 h using a Clevenger-type apparatus (Golmakani and Rezaei, 2008).The final yields of T. vulgaris and B. persicum EOs were reported here to be 2.13±0.04%and 1.92±0.32,respectively.EO samples were dried over anhydrous sodium sulphate and stored in sealed vials at -18 °C until further use. 
GC Analysis of EO The identification of EO constituents, known as a qualitative analysis, was made using a GC (7890A, Agilent Technologies, Santa Clara, CA) which was coupled with a mass spectrometer (5975C, Agilent Technologies, Santa Clara, CA) operating at 70 eV ionization energy, 0.5 s/scan, and a mass range of 35-400 atomic mass units (amu), equipped with a HP-5MS capillary column (5% Phenyl Polysilphenylene-siloxane; 30 m length; and 0.25 mm internal diameter; 0.25 μm film thickness, Agilent Technologies, Santa Clara, CA).One μL of the EO sample was injected into the GC/MS in split mode (split ratio: 1/100).Helium was used as the carrier gas with a flow rate of 0.9 mL/min.The injector and detector temperatures were at 280 °C.The oven temperature was programmed to start at 60 °C and gradually heated up to a temperature of 210 °C at a rate of 3 °C/min.Thereafter, the rate of temperature elevation was such that the temperature increased by 20 °C/min until the point of 240 °C was reached, whereupon the temperature was held constant for 8.5 min.The MSD ChemStation Software (G1701EA, E.02.01.1177,Agilent Technologies, Santa Clara, CA) was employed to analyze the mass spectra and chromatograms.The compounds were identified by comparing their mass spectral fragmentation patterns with those stored in the data bank (Wiley/NBS) and with mass spectral data derived from the relevant literature (Golmakani and Rezaei, 2008;Hashemi et al., 2014;Mazidi et al., 2012;Moldão-Martins et al., 2004;Shahsavari et al., 2008;Zeng et al., 2011).In addition, a quantitative analysis of EO constituents was made under the same chromatographic conditions using a GC, coupled with a flame ionization detector (FID).The relative data for percentages were obtained from the electronic integration of chromatogram peak areas. Determination of EO antioxidant activity The antioxidant activities of the EOs were evaluated based on the free radical scavenging capacity and their reducing power. Free radical scavenging capacity The free radical scavenging capacity of the EO samples were measured using DPPH o (2,2-diphenyl-1-picrylhydrazyl radical) as described by Mazidi et al. (2012).The IC 50 value is defined as the concentration of the antioxidant which is required to inhibit 50% of the DPPH° activity.Here, the IC 50 value was determined through graph plotting, by considering the percentage of the remaining DPPH° against the EO concentrations. Ferric reducing assay The ferric reducing power of EOs and that of the positive control (L-ascorbic acid) were determined here according to the method of Ardestani and Yazdanparast (2007).The reducing power was measured by reducing the Fe (III) to Fe (II).One mL of each EO solution (100-10000 mg•L −1 ) was mixed with 2.5 mL of sodium phosphate buffer (0.2 M, pH 6.6) and 2.5 mL of 10 g•L −1 potassium ferricyanide (K 3 Fe(CN) 6 ).The mixture was incubated at 50 °C for 20 min, whereupon 2.5 mL of 100 g•L −1 trichloro acetic acid was added to the mixture and centrifuged for 10 min at 3000 g.The upper layer of solution (2.5 mL) was mixed with 2.5 mL of distilled water and 0.5 mL of FeCl 3 (1 g•L −1 ).The absorbance was measured by the spectrophotometer at 700 nm.Generally, a higher absorbance value indicates a higher reducing power.Results were expressed as mg ascorbic acid equivalents per gram of sample. 
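The IC 50 read-off described above (plotting the percentage of remaining DPPH° against EO concentration and locating the 50% point) can also be done numerically. The sketch below only illustrates that interpolation step, not the exact procedure used in the study, and the dose-response values in it are hypothetical.

```python
import numpy as np

def ic50(concentrations_mg_ml, remaining_dpph_pct):
    """Estimate IC50 (the concentration scavenging 50% of the DPPH radical)
    by linear interpolation of % remaining DPPH against EO concentration."""
    c = np.asarray(concentrations_mg_ml, dtype=float)
    r = np.asarray(remaining_dpph_pct, dtype=float)
    if r.min() > 50:
        raise ValueError("50% scavenging was not reached in the tested range")
    # np.interp needs increasing x-values; % remaining DPPH falls as the
    # concentration rises, so both arrays are reversed before interpolating.
    return float(np.interp(50.0, r[::-1], c[::-1]))

# Hypothetical dose-response readings (not values measured in this study)
conc = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]            # mg/mL
remaining = [90.0, 74.0, 55.0, 38.0, 22.0, 9.0]   # % DPPH remaining
print(f"IC50 ~ {ic50(conc, remaining):.2f} mg/mL")
```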
Cupric ion reducing assay The cupric ion reducing power of the EOs and the positive control (L-ascorbic acid) were determined here in a test tube by mixing together 1 mL of CuCl 2 solution (10 mM), 1 mL neocuproinemethanolic solution (7.5 mM) and 1 mL ammonium acetate aqueous buffer solution (1 M).EO sample solutions (0.50 mL, 100-10000 mg•L −1 ) and H 2 O (0.60 mL) were added to the initial mixture so that the final volume reaches 4.10 mL.The tubes were stoppered and, after 30 min, the absorbance was recorded against the blank at 450 nm (Apak et al., 2008).Results are expressed as mg of ascorbic acid equivalents per gram of sample. Determination of initial characteristics of VOO The initial characteristics of VOO were determined by measuring its chemical and physical properties as follows. Determination of free fatty acid content Free fatty acid content was determined according to the AOCS official method (Cd 3d-63) and was reported as a percentage of oleic acid (AOCS, 2000). Determination of fatty acid composition of VOO Fatty acid methyl esters were prepared here according to the method described by Golmakani et al. (2012).The composition and types of fatty acids in the VOO sample was analyzed using a GC system (SP-3420A, Beijing Beifen-Ruili Analytical Instrument, Beijing, China) which is a device equipped with a split/splitless injector, a flame ionization detector (FID) and a BPX70 capillary column (Bis-cyanopropylsiloxane-silphenylene, 120 m × 0.25 mm internal diameter; 0.25 μm film thickness, SGE Analytical Science, Melbourne, Australia).The temperatures of the column, injector and detector were set at 198 (isothermal), 250 and 300 °C, respectively.Nitrogen was used as the carrier gas.One μL of fatty acid methyl esters was then injected into the column with a split ratio of 1:10 accordingly.The fatty acids in the VOO were identified according to the retention times for standard fatty acids injected under the same operating conditions.The quantities of fatty acids were measured by calculating their relative peak areas. Determination of total phenolic content Here, total phenols were isolated from a solution of oil in hexane by triple-extraction with watermethanol (60:40 v/v).The amounts of phenols were estimated using the Folin-Ciocalteu reagent at 725 nm.Results were expressed as mg of gallic acid per grams of VOO (Casal et al., 2010). Determination of chlorophyll and carotenoid contents Chlorophyll and carotenoid contents were determined at 470 and 670 nm, respectively, according to the method described by Minguez-Mosquera et al. (1990). Determination of oxidation indices of VOO The peroxide value (PV) was determined according to the AOCS official method (Cd 8-53) and was expressed as meq O 2 •kg −1 VOO.The p-Anisidine value (AV) was determined using the AOCS official method (Cd 8-53) and was expressed as mg•kg −1 VOO (AOCS, 2000).The TOTOX value (total oxidation value; AV + 2 PV) is used as an empirical measure of the relevant precursors, the nonvolatile carbonyls, present in the processed oils.The TOTOX value can also be used as a measure of any further oxidation products developed after storage (Frankel, 2012).The K 232 and K 268 extinction coefficients were determined according to the AOCS official method (ch 5-91) by measuring the absorbance of a pertinent solution (1% concentration) in isooctane at 232 and 268 nm, with 1 cm of pass length (AOCS, 2000). Accelerated storage of VOO Here, the EOs of T. vulgaris and B. 
persicum were added to the VOO at a concentration of 1000 mg•L−1, while BHT and α-tocopherol were added at a concentration of 100 mg•L−1. For the control, a sample with no added antioxidants was used. VOO samples (70 mL) were then kept in open amber bottles in an incubator at 70±1 °C for 42 days. The PV, AV, K 232 and K 268 were measured weekly, and the chlorophyll and carotenoid contents were measured every two weeks. The induction period of PV (IP PV) is commonly taken as the number of days required for a sample to reach a PV of 20 meq O 2 •kg−1; beyond this maximum permitted limit a sample loses its classification in the VOO category (Hashemi et al., 2014; IOC, 2015). Similarly, IP K232 and IP K268 are taken as the number of days required to reach the upper legal limits of K 232 and K 268 (a K 232 value of 2.6 and a K 268 value of 0.25), as established by the International Olive Council (IOC) for VOO (IOC, 2015). The average percentage of difference was calculated according to Eq. (1). The effectiveness of an antioxidant, also known as its stabilizing effect, is defined as the induction period extension (IPE) according to Eq. (2) (Abramovic and Abram, 2006). Statistical analysis All experiments were performed in triplicate and the data are reported as mean values, with standard deviations given in the tables and as error bars in the figures. A general linear model (GLM) procedure from SAS (Statistical Analysis Software, version 9.1; SAS Institute Inc., Cary, NC) was used for the comparison of mean values. The simple regression equations for the chemical variables obtained from the storage study of VOO (PV, K 232, and K 268) were calculated with Microsoft Office Excel 2010. GC analysis of EO The chemical compositions of T. vulgaris and B. persicum EOs are presented in Table 1. The total numbers of chemical constituents identified were 24 for B. persicum EO and 29 for T. vulgaris EO. The main components of B. persicum EO were cuminaldehyde (32.81%) and the monoterpene hydrocarbons γ-terpinene (16.02%) and p-cymene (14.07%). Previous reports suggest that the antioxidant activity of γ-terpinene is significantly higher than that of p-cymene (Ruberto and Baratta, 2000). The T. vulgaris EO was characterized mainly by the monoterpene phenols thymol (28.50%) and carvacrol (18.36%) and by their corresponding monoterpene hydrocarbon precursors p-cymene (27.14%) and γ-terpinene (4.97%). Similarly, Golmakani and Rezaei (2008) reported that thymol, carvacrol, p-cymene, and γ-terpinene were the major compounds of T. vulgaris EO. By contrast, camphor was not detected in our samples, although it has been reported as the main component of T. vulgaris EO from a plant source in Eastern Morocco (Imelouane et al., 2009). According to Ruberto and Baratta (2000), thymol and carvacrol possess stronger antioxidant activity than camphor.
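The bodies of Eqs. (1) and (2) are not reproduced above. A plausible form, consistent with how the average percentage of difference and the induction period extension (IPE) are used in the Results, is the following; this is an assumed reconstruction rather than the exact published formulas:

\[
\text{Difference (\%)} = \frac{IP_{PV} - IP_{K}}{IP_{PV}} \times 100 \qquad (1)
\]
\[
\text{IPE (\%)} = \frac{IP_{\text{sample}} - IP_{\text{control}}}{IP_{\text{control}}} \times 100 \qquad (2)
\]

where IP K stands for IP K232 or IP K268. A short numerical sketch of how the storage indices themselves can be obtained from the weekly measurements is given below; the PV series is hypothetical, and the limits (20 meq O 2 •kg−1 for PV) are those quoted above.

```python
import numpy as np

def totox(pv, av):
    """TOTOX (total oxidation) value as defined in the text: AV + 2*PV."""
    return av + 2.0 * pv

def induction_period(days, values, limit):
    """Days needed for an oxidation index to reach its legal limit, by linear
    interpolation between consecutive measurements (values assumed increasing)."""
    d = np.asarray(days, dtype=float)
    v = np.asarray(values, dtype=float)
    if v.max() < limit:
        return None  # limit not reached within the storage period
    return float(np.interp(limit, v, d))

# Hypothetical weekly peroxide values for one sample (meq O2/kg)
days = [0, 7, 14, 21, 28, 35, 42]
pv = [1.5, 4.0, 8.5, 14.0, 22.0, 35.0, 60.0]
print("IP_PV (days to PV = 20):", induction_period(days, pv, 20.0))
print("TOTOX at day 28 (AV assumed to be 12):", totox(22.0, 12.0))
```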
vulgaris EO with that of BHT, whereas the IC 50 value of B. persicum EO was significantly higher than BHT.However, in previous studies, the radical scavenging capacity of B. persicum EO (IC 50 value of 0.88 mg mL −1 ) was significantly higher than that of T. vulgaris EO (IC 50 value of 8.9 mg•mL −1 ) and lower than that of BHT and α-tocopherol (IC 50 values of 0.58 and 0.2 mg•mL −1 , respectively) (Fazel et al., 2007;Shahsavari et al., 2008;Zeng et al., 2011). Ferric ion (Fe 3+ ) and cupric reduction is often used as an indicator of electron-donating activity, which is an important mechanism of phenolic antioxidant action, and can be strongly correlated with other antioxidant properties (Apak et al., 2008;Zhang et al., 2010).The reducing powers of T. vulgaris and B. persicum EOs are presented in Table 2.Both results obtained from ferric and cupric reduction assays showed nearly the same outcome.T. vulgaris and B. persicum EOs showed some degree of hydrogen-donating capacity, but the capacities were, as expected, inferior to ascorbic acid.This is in agreement with the radical scavenging capacity results that T. vulgaris EO showed stronger reducing power than B. persicum EO.These results indicate that EOs rich in phenolic monoterpenes are more potent reductants and radical scavengers than those rich in monoterpene hydrocarbons. Initial characteristics of VOO The initial characteristics of VOO are presented in Table 3.The oxidative and hydrolytic integrity of the oil was confirmed by the low PV, K 232 , and K 268 values and by the low contents of free fatty acids which were below the upper legal limit established by IOC for VOO.The VOO contained high amounts of oleic acid (68.21%) and an appropriate ratio of monounsaturated fatty acid (MUFA)-to-PUFA.It also contained considerable amounts of phenolic compounds (286.30±1.16µg gallic acid equivalents g VOO −1 ) in the beginning of the assay. Measurement of PV, AV, and TOTOX values Primary oxidation products, namely hydroperoxides, were determined by PV measurement.Changes in the PVs of VOO samples during storage at 70 °C are illustrated in Figure 1.The PV of the control increased gradually until the 35 th day, indicating the high resistance of VOO to oxidation due to its own (naturally occurring) antioxidants and low unsaturation level.However, the rate of hydroperoxide formation of the control increased sharply after reaching a PV of 38.44 meq O 2 •kg −1 .This phase (after reaching a PV value of 38.44 meq O 2 •kg −1 ) indicated an accelerated degradation process.VOO supplemented with BHT, T. vulgaris EO, and B. persicum EO showed lower PVs in comparison with that of the control throughout the storage period.In the initial stages of oxidation, BHT and T. vulgaris EOs appear to be slightly more effective than B. persicum EO.However, the PVs of the samples containing BHT and T. vulgaris EO were significantly lower than that of B. persicum EO at the end of storage.This phenomenon may be due to the fact that natural antioxidants in the VOO are consumed at the initial stage of oxidation.Also, this result may be related to the fact that the different magnitudes of antioxidant activities observed among the various antioxidants are more evident at later stages of oxidation.The PV of the sample containing α-tocopherol was slightly lower than that of the control during the entire period of the experiment.The IP PV of the control, BHT, α-tocopherol, T. vulgaris EO,and B. 
persicum EO samples were 16.86, 27.08, 18.05, 25.13, and 22.49 days, respectively. The anisidine test is designed to measure high-molecular-weight saturated and unsaturated carbonyl compounds in triacylglycerols (Frankel, 2012). Changes in the AVs of VOO samples during storage at 70 °C are illustrated in Figure 1. During storage, the increasing trend observed for AV was very similar to that obtained for PV. The control sample exhibited the highest AVs during the entire period of the experiment. Adding T. vulgaris and B. persicum EOs offered protection to the VOO, inhibiting the formation of undesirable flavors emanating from secondary lipid oxidation processes. Similar to the PV results, the effects of the natural and synthetic antioxidants on delaying the formation of secondary oxidation products were most clearly apparent at the end of storage, and the order of inhibitory effect was BHT > T. vulgaris EO > B. persicum EO > α-tocopherol. The TOTOX value is an indicator of primary and secondary oxidation products. The TOTOX values of the VOO samples during storage at 70 °C are presented in Figure 1. BHT, T. vulgaris EO, and B. persicum EO reduced the formation of primary and secondary oxidation products in VOO by 78.51, 69.01, and 53.99%, respectively, at the end of storage. This indicates a good capacity of both EOs to inhibit the oxidative process. Thymol and carvacrol are the major components of T. vulgaris EO, while γ-terpinene is one of the major components of B. persicum EO. These components have been reported to exhibit antioxidant properties (Ruberto and Baratta, 2000). Thymol and carvacrol are primary antioxidants, which either delay or prevent the initiation step by reacting with lipid free radicals, or prevent the propagation step by reacting with peroxy or alkoxy radicals (Yanishlieva et al., 1999), thereby retarding VOO oxidation. It has previously been reported that thymol is a better antioxidant in lipids than carvacrol, due to the greater steric hindrance of its phenolic group (Yanishlieva et al., 1999). The antioxidant activity of γ-terpinene is attributed to its strongly activated methylene group, which may compete with the activated methylene at C-11 of linoleic acid (Ruberto and Baratta, 2000). α-Tocopherol showed no measurable effect in reducing the TOTOX value of VOO. α-Tocopherol is reported to possess only a slight degree of antioxidant activity and can even be pro-oxidative at times, depending on its concentration and the temperature applied (Marinova and Yanishlieva, 1992; Schuler, 1990). The threshold value for α-tocopherol as a pro-oxidant in extra virgin olive oil oxidation was 60 to 70 ppm during storage at 37 and 75 °C (Deiana et al., 2002). It has also been reported that the pro-oxidant activity of α-tocopherol tends to decrease as the temperature increases, even at high concentrations (Marinova and Yanishlieva, 1992). Measurement of K 232 and K 268 The formation of conjugated dienes in fats or oils gives rise to an absorption peak at 232 nm in the ultraviolet region (Frankel, 2012). Changes in K 232 of the VOO samples during storage at 70 °C are presented in Figure 2. A significant difference (p<0.05) in K 232 was observed between the control and the samples containing T. vulgaris EO and B. persicum EO, indicating a significant antioxidant effect of both EOs. The K 232 values of BHT, T. vulgaris EO, and B.
persicum EO exhibited identical increasing trends in the first 28 days of storage.After that, the K 232 value of the sample which contained B. persicum EO increased faster and reached 6.21±0.09at the end of storage, whereas K 232 of BHT and T. vulgaris EO reached 4.2±0.98 and 5.3±0.07,respectively.The α-tocopherol had no measurable effect on causing a decrease in the formation of the conjugated dienes of VOO during storage at 70 °C. Changes in K 268 of VOO samples are due to the formation of conjugated trienes (Figure 2).In line with the K 232 results, the levels of conjugated trienes at the end of storage were of lowest value in samples containing BHT followed by T. vulgaris and B. persicum EO, while the highest levels were found in the control and the sample containing α-tocopherol. The durations of time that were required to reach the upper legal limits of K 232 (IP K232 ) and K 268 (IP K268 ) during storage at 70 °C are presented in Table 4.As expected, there was a strong correlation between IP PV and IP K232 (R 2 =0.985; y=0.432x+3.232).It is also understood that a strong correlation exists between IP PV and IP K268 (R 2 =0.967; y=0.444x+4.277).Moreover, IP K232 and IP K268 correlated directly with each other (R 2 =0.987; y=1.028x-0.589).However, in all samples, IP K232 and IP K268 values were lower than that of IP PV (with the average percentage of difference being 41.51 and 48.19%, respectively).Also, it was separately observed that K 232 and K 268 reached the upper legal limits of 2.6 and 0.25, respectively, almost concurrently.This indicates that the monitoring of VOO oxidation in terms of K 232 or K 268 at 70 °C will lead to the maintenance of stability in the VOO with regard to relatively similar quantities. IP PV , IP K232 , and IP K268 were increased by the four antioxidants used in this study by approximately the same order as described before (BHT > T. vulgaris, EO > B. persicum EO > α-tocopherol).However, in all of the VOO samples under study, the IPE PV value was higher than the values for the IPE K232 and IPE K268 measurements.This indicates that both natural and synthetic antioxidants are more effective in protecting MUFA than PUFA. Determination of chlorophyll and carotenoid contents Chlorophyll compounds play an important role in the oxidative stability of VOO due to their antioxidant nature in the dark and their pro-oxidant activity in the presence of light (Criado et al., 2008).Carotenoids can act as primary antioxidants by trapping free radicals.They may also act as secondary antioxidants by quenching the singlet oxygen (Liebler, 1993).The chlorophyll contents of VOO samples during accelerated storage are presented in Figure 3.At the end of the storage period, all samples showed a substantial loss in chlorophyll content. At the end of storage, the chlorophyll contents of VOO samples containing BHT, T. vulgaris EO and B. 
persicum EO ultimately decreased by 70.53%, 78.94%, and 83.58%, respectively, whereas the chlorophyll content of the sample containing α-tocopherol and of the control decreased by 89.68 and 90.91%, respectively. Carotenoid fractions decreased faster than the chlorophyll fraction during oxidation. It is commonly documented that the presence of oxygen and free radicals can accelerate the degradation rate of carotenoids, and it is believed that the oxidation of carotenoids depends on the simultaneous oxidation of unsaturated fats (Criado et al., 2008). Thus, both oxygen and the presence of free radicals could explain the drastic decrease in carotenoid contents after a short period of storage. BHT, T. vulgaris EO, and B. persicum EO significantly retarded carotenoid degradation in the treated samples, compared with the control samples and with those containing α-tocopherol. Nonetheless, at the end of storage, only 28.37%, 20.47%, and 13.49% of the carotenoids remained in the samples containing BHT, T. vulgaris EO, and B. persicum EO, respectively, whereas the carotenoids were almost completely degraded in the control group and in the sample containing α-tocopherol.
CONCLUSION According to the results observed in the present research, the inclusion of T. vulgaris and B. persicum EOs in VOO can retard the lipid oxidation process, thereby delaying the increase in adverse chemical quality parameters (PV, AV, K 232, and K 268) and protecting the chlorophyll and carotenoid contents of VOO. EOs rich in phenolic monoterpenes (T. vulgaris EO) were found here to be more effective than those rich in monoterpene hydrocarbons (B. persicum EO). The effect of T. vulgaris EO on retarding the oxidation of VOO in this study was similar to that of BHT; however, BHT is not permitted to be incorporated into VOO. α-Tocopherol had only a small effect on improving the oxidative stability of VOO. Generally, T. vulgaris and B. persicum EOs can be used as potential natural antioxidants for extending the shelf life of VOO. Further studies at ambient temperatures would be required to determine the actual shelf life of VOOs containing plant EOs.
Figure 1. Changes in (a) peroxide values, (b) p-anisidine values, and (c) TOTOX values of virgin olive oil samples during accelerated storage at 70 °C.
Figure 2. Changes in (a) K 232 and (b) K 268 of virgin olive oil samples during accelerated storage at 70 °C.
Figure 3. Changes in (a) chlorophyll content and (b) carotenoid content of virgin olive oil samples during accelerated storage at 70 °C.
Table 1 (continued). a Not detected.
Table 2. Radical scavenging capacity and reducing power of Thymus vulgaris and Bunium persicum essential oils (EOs). Values given are the means of three replicates ± standard deviation. In each row, means with different letters are significantly different (p < 0.05).
Table 3. Initial characteristics of virgin olive oil (VOO).
Table 4. Duration required to reach the upper legal limits of K 232 (IP K232) and K 268 (IP K268) for virgin olive oil samples during storage at 70 °C.
2018-12-11T00:35:26.072Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "486c409e20d54f13318b7c5a96f44fd57d4a1d74", "oa_license": "CCBY", "oa_url": "http://grasasyaceites.revistas.csic.es/index.php/grasasyaceites/article/download/1628/2004", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "486c409e20d54f13318b7c5a96f44fd57d4a1d74", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
208642960
pes2o/s2orc
v3-fos-license
Risk of reoperation within 12 months following osteosynthesis of a displaced femoral neck fracture is linked mainly to initial fracture displacement while risk of death may be linked to bone quality: a cohort study from Danish Fracture Database Background and purpose — Most guidelines use patient age as a primary decision factor when choosing between osteosynthesis or arthroplasty in displaced femoral neck fractures. We evaluate reoperation and death risk within 1 year after osteosynthesis, and estimate the influence of age, sex, degree of displacement, and bone quality. Patients and methods — All surgeries for femoral neck fractures with parallel implants (2 or 3 screws or pins) performed between December 2011 and November 2015 were collected from the Danish Fracture Database. Radiographs were analyzed for initial displacement, quality of reduction, protrusion, and angulation of implants. The bone quality was estimated using the cortical thickness index (CTI). Garden I and II type fractures with posterior tilt < 20° were excluded. Results — 654 patients with a mean age of 69 years were included. 59% were female. 54% were Garden II with posterior tilt > 20° or Garden III, and 46% were Garden IV. Only 38% were adequately reduced. 19% underwent reoperation and 18% died within 12 months. Female sex, surgical delay between 12 and 24 hours vs. < 12 hours, Garden IV type fracture, inadequate reduction, and protrusion of an implant were associated with statistically significant increased reoperation risk. No significant association between reoperation and age, CTI, or the initial angulation of implants was found. Notably, CTI was linked inversely with death risk. Interpretation — Reoperation risk is linked mainly to primary displacement and reduction of the fracture, with no apparent effect of age or bone quality. Bone quality may be linked with risk of death. The existing guidelines for treatment of displaced femoral neck fractures differ in their recommendations: most rely primarily or solely on the age of the patient, with osteosynthesis for patients younger than 65-75 years of age and arthroplasty for patients above this age, while a few simply advise arthroplasty for all displaced femoral neck fractures (Palm and Teixidor 2015). However, in addition to patient age, several other patient-related factors are known at the time of the surgery and may be useful to guide the treatment-but the influence of these factors on risk of reoperation is not well investigated. We evaluated the risk of reoperation and death within 1 year following osteosynthesis of displaced femoral neck fractures, and estimated the influence of the age and sex of the patient, the degree of fracture displacement, and bone quality, in order to provide further evidence for nuancing the decision process and to improve outcome after a displaced femoral neck fracture. Patients and methods From December 2011 to November 2015, 5,774 surgeries for a primary femoral neck fracture (AO/OTA classification, 31B) were prospectively registered in the Danish Fracture Database (DFDB, www.dfdb.dk) (Gromov et al. 2014). Cases were selected for inclusion as described in a previous study of the same cohort (Nyholm et al. 2018), leaving 1,558 surgeries with use of screws or pins (parallel implants) (Figure 1). Data included age, sex, surgical delay, OTA/AO fracture classification, and ASA score. 
Time to surgery was defined as the time from fracture diagnosis (preoperative radiograph) until the onset of surgery. Pre- and postoperative radiographs (standard trauma AP and lateral view) of cases were collected from treating departments and analyzed for fracture displacement in accordance with the Garden classification (Figure 2), posterior tilt as measured by Palm et al. (2009), result of reduction (displacement and posterior tilt), implant protrusion into the joint (evaluated by eye), angle of implants to the lateral cortex of the femoral shaft measured as described by Nyholm et al. (2018), and cortical thickness index (CTI), measured as the part of the diameter of the femoral shaft that consisted of cortex, 10 cm below the tip of the trochanter minor (Figure 3), as described by Sah et al. (2007). In this process 306 cases were excluded for various reasons (Figure 1), leaving 1,252 cases with available radiographs. Of these, 598 cases with initially non-displaced fractures with a posterior tilt of < 20° were excluded, leaving 654 cases with fracture types that according to guidelines are eligible for arthroplasty in patients ≥ 70 years of age (initially displaced fractures or non-displaced fractures with a posterior tilt ≥ 20°) for analysis (Figure 1) (Palm et al. 2012). As described previously, intra- and inter-reader analyses were performed by 2 authors (HP and AMN), where measurements of 50 cases were performed twice with at least a 3-week interval between each read. This demonstrated a "Good" or "Excellent" correlation for all included measures (Nyholm et al. 2018 and Table 1, see Supplementary data). Based on the radiographic measurements, the fractures were divided into 2 groups: "Mildly displaced" (Garden II type fractures with > 20° posterior tilt and Garden III type fractures) and "Severely displaced" (Garden IV type fracture) (see Figure 2).
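The cortical thickness index used above is described only verbally (the fraction of the femoral shaft diameter made up of cortex, 10 cm below the tip of the trochanter minor). Assuming the usual formulation of this index (an interpretation of that description, not a formula quoted from the study), it can be written as

\[
\mathrm{CTI} = \frac{D_{\text{outer}} - D_{\text{canal}}}{D_{\text{outer}}}
\]

where D_outer is the periosteal (outer) diameter of the shaft and D_canal is the endosteal (medullary canal) diameter at the same level, both taken from the AP radiograph. On this reading, a CTI of 0.4 means that cortex accounts for 40% of the shaft diameter at the measurement level.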
For the quality of the reduction, the fractures were divided into 3 groups: "Fully reduced" (non-displaced in AP view, < 10° posterior tilt), "Partly reduced" (non-displaced in AP view, > = 10° posterior tilt) and "Not reduced" (displaced in AP view). After finishing radiographic analysis, data on any further surgery of the hip (ICD-10 KNF*) were collected from the National Patient Register (Landspatientregisteret, NPR) and analyzed to identify relevant reoperations as previously described (Nyholm et al. 2018). A relevant reoperation was defined as either a re-osteosynthesis of the primary fracture, an implant and femoral head removal, or an arthroplasty. Simple removal of the implants was not considered a relevant reoperation. Relevant reoperations were side-matched to the fracture surgery to ensure that the reoperation was not conducted in a contralateral hip. Data on vital status were collected from the NPR as well. Follow-up for all cases was 12 months. Statistics The variables of interest were patient age, sex, initial fracture displacement, and bone quality (CTI). As surgical delay, result of reduction, protrusion of an implant into the joint, and angulation of the implants to the femoral shaft have previously been shown to influence risk of reoperation, these factors were all included as co-variables in an effort to optimize the models. The apparent effect of the included variables on the risk of reoperation was evaluated using Cox regression analysis. Time at risk was defined as time from the surgery until either reoperation, death, another non-relevant reoperation or surgery of the hip (reoperation for infection, a new fracture, femoral amputation), or end of follow-up. Because death is a frequent occurrence in this population and influences the risk of reoperation in a patient, to support the interpretation of the analysis of risk of reoperation a separate Cox regression with death as outcome was performed. Time at risk was defined as time from surgery to death or end of follow-up. For variables with several levels the overall effect in the model was evaluated using a likelihood ratio test. The fit of both models was evaluated using a proportional hazards test based on weighted residuals and was found to be acceptable. To evaluate the possibility of over-fitting, the variance of estimates in the model was compared with smaller models and found to be consistent, which suggest the models were not over-fitted. To illustrate the magnitude of the risk of death and reoperation in different patient groups, several estimates of the probability of reoperation and death were made based on the Cox regression models for reoperation and death. 95% confidence intervals (CI) were used. All data handling and analysis was performed using R software (version 3.4.3; 11/30/2017; R Foundation for Statistical Computing, Vienna, Austria) (R 2017). Ethics, registration, data sharing plan, funding, and potential conflicts of interests This is a retrospective study, with all data collected from databases or radiographic analyses. No intervention was made, and the patients and families have not been contacted. There were therefore no ethical issues in relation to this study. A protocol with specified methods and outcomes was written prior to onset of the study. Permission to obtain and process data was obtained from the Danish Data Protection Agency (Datatilsynet, j.nr.: 2012-58-0004, local j.nr.: AHH-2015-032, I-Suite nr.: 03738) prior to the onset of the study. 
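The reoperation analysis described in the Statistics section above is a Cox proportional hazards regression, with time at risk running from surgery to reoperation and censoring at death, non-relevant surgery, or end of follow-up. The study itself used R and SAS; the sketch below only illustrates what such a model looks like, using the Python lifelines package and a small simulated data set, with hypothetical column names mirroring the covariates named in the text.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300  # hypothetical cohort size, not the study data

# Simulated analysis table: one row per osteosynthesis (illustrative only)
df = pd.DataFrame({
    "age": rng.normal(69, 12, n).round(),
    "female": rng.integers(0, 2, n),
    "garden_iv": rng.integers(0, 2, n),      # severely displaced fracture
    "not_reduced": rng.integers(0, 2, n),    # inadequate reduction
    "cti": rng.normal(0.5, 0.08, n),         # cortical thickness index
})

# Simulated time to reoperation, with a higher hazard for Garden IV fractures
# and inadequately reduced fractures (effect sizes chosen arbitrarily)
baseline = rng.exponential(900, n)
hazard_ratio = np.exp(0.8 * df["garden_iv"] + 0.6 * df["not_reduced"])
time = baseline / hazard_ratio
df["reoperation"] = (time <= 365).astype(int)   # relevant reoperation within 12 months
df["time_at_risk"] = np.minimum(time, 365)      # censor at end of follow-up

cph = CoxPHFitter()
cph.fit(df, duration_col="time_at_risk", event_col="reoperation")
cph.print_summary()  # hazard ratios with 95% confidence intervals per covariate
```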
Study protocol and data managing/analysis files from R-studio will be available upon reasonable request; please contact corresponding author. Permissions to access data will have to be obtained from the relevant authorities, registries, and departments. All costs were financed by the Department of Orthopaedics, Copenhagen University Hospital Hvidovre, Denmark. There were no conflicts of interest for any authors in relation to this study. Results 654 cases were included. Mean age was 69 years (21-102) and 385 (59%) were female. In 356 (54%) of the cases the fracture was mildly displaced (Garden II with posterior tilt > 20° or Garden III) and in 298 (46%) it was severely displaced (Garden IV) ( Table 2). 28% had surgery within 12 hours, 78% within 24 hours, and 89% within 36 hours. 245 (38%) were adequately reduced, while the fracture was still ad latus displaced in the neck region on AP view or with > 10° posterior tilt in 409 (62%). In 18 (3%) cases an implant protruded into the joint. In 124 (19%) cases, the patient underwent a relevant reoperation, and in 117 (18%) cases the patient died. A larger proportion of the patients with a mildly displaced fracture died, but the patients in this group tended to be older (60% of patients with mild displacement were older than 70 years vs. 19% of patients with severely displaced fractures) ( Table 2). In the death risk analysis increasing age of the patient, male sex, and high ASA score were associated with increasing risk of death. An inverse correlation between increasing CTI and risk of death was found (thinner cortex was associated with increased risk of death). Severely displaced fractures had a higher risk of death, but no statistically significant association with the quality of the reduction was found ( Table 4). Estimation of likelihood of death and reoperation for predefined patients with optimal surgical result (surgical delay < 12 hours, good reposition with implants angled > 125° to the lateral cortex of the femoral shaft, and no protrusion into the joint) demonstrated that risk of death depended in great part on the age, the sex, and the ASA score of the patient, while the risk of reoperation was primarily determined by the initial fracture displacement. A decrease in the CTI from 0.5 (average of the included group) to 0.4 (below the cut-off by Sah et al. (2007) for BMD-T score of -2.5) did not affect the risk of reoperation but did increase the risk of death for all estimates (Table 5). For an 80-year-old female with a severely displaced fracture, the estimated risk of reoperation within 1 year is > 20% (Table 5). If, however, the fracture is only mildly displaced, the risk of reoperation for all patient types is < 10%, indicating that if no severely displaced fractures are treated with osteosynthesis with parallel implants, the risk of reoperation following osteosynthesis should be 3-10% (Table 5) (providing they are sufficiently reduced prior to fixation). In our cohort, 12% of the cases with mildly displaced fractures underwent a relevant reoperation (Table 2). If only cases with sufficiently reduced fractures were considered, the reoperation rate dropped to 8% (11 reoperations in 137 patients with only mildly displaced fractures that were sufficiently reduced). 
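The per-profile figures referred to above (Table 5) come from evaluating the fitted Cox models at predefined covariate patterns. Continuing the illustrative lifelines sketch given after the Statistics section, a 12-month reoperation probability for one such profile can be read off the predicted survival function; the profile values below are hypothetical, not those of the study.

```python
# Continues from the fitted `cph` and the imports in the sketch above.
profile = pd.DataFrame([{
    "age": 80, "female": 1, "garden_iv": 1, "not_reduced": 0, "cti": 0.4,
}])
surv = cph.predict_survival_function(profile, times=[365])
p_reop_1y = 1.0 - float(surv.iloc[0, 0])
print(f"Estimated probability of reoperation within 12 months: {p_reop_1y:.1%}")
```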
Discussion In this registry-based study of risk factors for reoperation and death following displaced femoral neck fractures treated with osteosynthesis no significant association between patient age All estimates are made with the surgical parameters as for an optimal surgery: surgical delay < 12 hours, reduction to non-displaced with < 10° posterior tilt, with implant-angle > 125° and without implant protrusion into the joint. CTI = Cortical thickness index or cortical thickness index (CTI) and risk of reoperation was found. The main risk factors for reoperation were the amount of initial displacement, insufficient reduction, implant protrusion, increasing surgical delay, and female sex. In our secondary death risk analysis, an association between increased risk of death and increasing age, increasing ASA score, male sex, decreasing CTI, and severely displaced fracture type was found. Although this study is based on consecutive patients with data collected prospectively in a nationwide database, the general limitations of observational studies still apply. The number of observations is limited and the fact that no statistically significant associations were found for several covariates may be due to lack of power in our sample and should be interpreted with care. A concern is that the patients in this study have been selected for osteosynthesis, as older patients with severely displaced fractures should primarily receive arthroplasty in accordance with Danish guidelines (Palm et al. 2012). The fact that we do not find any effect of age on risk of reoperation should therefore be interpreted with caution. The increase in risk of death with increasing age may well impact negatively on the risk of reoperation (patients who have died are not at risk of reoperation, and morbid patients may not receive a relevant reoperation due to poor health). The intra-and inter-reader measurements demonstrated a "good" or "excellent" correlation between the readers for all included measurements, indicating a reproducible reading of the radiographs, but the uncertainty between the 2D view seen on the radiographs and the 3D "reality" has not been validated and introduces an unknown uncertainty to the results. It is not our custom to follow these patients until healing and it was therefore not possible to evaluate the actual risk of non-union, avascular necrosis, or fracture displacement. Therefore, reoperation with secondary arthroplasty, revision of primary osteosynthesis, or femoral head removal was chosen as primary endpoint under the assumption that in our all-access, free-of-charge healthcare system all patients with clinically relevant complications such as pain and/or restriction of mobility would receive reoperation. It is, however, possible that some patients may not have undergone reoperation owing to patient-related causes. A follow-up of 12 months was chosen since previous studies with longer follow-up have demonstrated that 80-90% of all reoperations fall within this timeframe (Murphy et al. 2013), and the high mortality in this patient population is likely to introduce unnecessary confounding with a longer follow-up. The risk factors for reoperation following femoral neck fractures have been evaluated in previous studies; however, most of those cohorts were quite small with less than 150 patients included. 
Our study, with 654 included cases, underlines the previous findings that for displaced femoral neck fractures a smaller initial displacement of the fracture in AP and/or lateral view (posterior tilt), as well as good reduction and avoiding protrusion of the implants into the hip joint, is associated with a reduced risk of subsequent reoperation (Bjørgul and Reikerås 2007, Hoelsbrekken et al. 2012, Yang et al. 2013. In contrast to the initial fracture displacement the latter 2 factors are both influenced by the surgeon and therefore possible to optimize. Several studies have demonstrated a better outcome with lower mortality as well as fewer healing complications and reoperations when the surgery is performed by a surgeon with experience in the specific procedure and performs it with some regularity (Strömqvist et al. 1992, Palm et al. 2007, Nyholm et al. 2015. Even though the procedure is generally viewed as less demanding, these findings underline the need for proper skill training and supervision of inexperienced surgeons as well as a potential benefit of concentrating the surgeries/supervision on fewer, but more experienced surgeons. It has previously been suggested that poor bone quality is a major risk factor for failure following internal fixation of femoral neck fractures due to the association between poor bone quality and increased risk of primary fractures (Estrada et al. 2002). In our evaluation of the bone quality we chose to measure the bone quality by use of the CTI, which correlates well with BMD regardless of observer experience level (Nguyen et al. 2018) and is more easily accessible for the surgeon preoperatively than performing an acute gold standard DEXA scan (Sah et al. 2007, Nguyen et al. 2018. In contrast to this theory our study aligns with other newer studies in not finding such an association (Viberg et al. 2014). We did, however, find a quite strong inverse association between a low CTI and increased risk of death. Previous studies have demonstrated an association between poor bone quality and poor muscle quality (Papageorgiou et al. 2019) and it could thus be that the CTI is a surrogate measurement of the fitness and nutritional status of the patient. We have no information on the nutritional status of the included patients and therefore this is a theory to investigate in future studies. Based on the findings of our study, the CTI could be used as a marker to identify high-risk patients for postoperative mortality. In line with our findings, risk of death has previously been associated with patient-related factors (age, sex, ASA score) and postoperative medical complications (Bjørgul and Reik-erås 2007). Increasing surgical delay has previously been associated with an increasing risk of mortality following hip fracture (Khan et al. 2009, Nyholm et al. 2015, but the association with risk of reoperation has not been evaluated to the same extent. It has been suggested that expeditious treatment of displaced fractures is necessary to reduce the disturbance in blood supply for the femoral head and thus reduce the risk of avascular necrosis. In accordance with a previous study by Hoelsbrekken et al. (2012) we found that for initially displaced fractures increasing delay is associated with increased risk of later failure. 
Whether to perform internal fixation or arthroplasty in displaced femoral neck fractures has been investigated quite extensively, primarily in patients older than 60-75 years of age Gurusamy 2006, Rogmark andJohnell 2006) and, here, literature in general recommends a primary arthroplasty. The main argument is that studies with 12 months' follow-up indicate lower risk of reoperation, less pain, faster re-convalescence, and better function, with no increased risk of mortality with arthroplasty (Gjertsen et al. 2010). Another often used argument for a primary arthroplasty in the elderly is the theory that risk of reoperation is increased with increasing age. As our study, in agreement with previous studies (Gregersen et al. 2015), did not support this theory, we feel this argument is weak. As a consequence, the argument for internal fixation in younger patient also weakens, which merits a lower age limit for when to insert an arthroplasty for a displaced femoral neck fracture. Although long term follow-up of primary arthroplasty in younger fracture patients is missing, arthroplasties for osteoarthrosis have in recent years achieved a 5-year and 20-year implant survival rate of 95% and 80% respectively (DHR 2016), and even among patients < 50 years it is 60-75% (DHR 2016). Furthermore, a larger number of younger hip fracture patients have been shown to be comorbid with either chronic diseases or disabilities and/or with an unhealthy lifestyle (tobacco and alcohol) (Rogmark et al. 2018) and these may therefore in many cases be regarded as fragility fractures in a population with a shorter life expectancy than a background population of the same age. We therefore recommend re-thinking the indication for primary arthroplasty for displaced femoral neck fractures and basing the decision on whether patients are at risk of outliving an arthroplasty, thus needing reoperation later on. This would demand a broader evaluation of the patient's risk factors for not only reoperation, but also of death, such as high ASA score, specific comorbidities, and perhaps also low CTI for optimizing the treatment of the individual patient. This merits routinely considering a primary prosthesis for fracture patients still of working age as a viable option, depending on the general medical fitness and activity level. The very youngest and fittest hip fracture patients have not been sufficiently evaluated in radiographic studies and, beyond theoretically superior fracture healing, these patients are at high risk of outliving their prosthesis due to both age and physical demands. In such patients much is to be gained from preserving their natural anatomy if at all possible, and in case of later fracture collapse and reoperation they are well suited for an elective secondary arthroplasty. Table 1 is available as supplementary data in the online version of this article, http://dx.doi.org/10.1080/17453674.2019. 1698503 AMN: Planned the study, wrote the protocol, collected and analyzed the data, performed the intra-and inter-reader analysis, wrote and revised the paper. HP: Planned the study, revised the protocol, supervised the data collection and analysis, performed the intra-and inter-reader analysis, and revised the paper. HS: Planned the study, supervised the statistical analyses, and revised the paper. AT: Planned the study and revised the paper. KG: Planned the study, revised the protocol, supervised the data collection and analysis, and revised the paper. 
Supplementary data The DFDB collaborators (BV, JVF, JKP, KTH, KS, LB, MBH, MB, PTT, MK, TB, and PR) are responsible for the everyday collection of data and maintenance of the Danish fracture database, and have provided the data for this study. They have all revised and approved of the initial design of the study, they have made a large contribution to the data collection (both by their local efforts to ensure proper data collection for the Danish Fracture Database, and by facilitating access to the radiographs for analysis), they have revised and approved the manuscript, and they hold a shared responsibility for the accuracy of the data presented.
2019-12-06T14:02:55.466Z
2019-12-05T00:00:00.000
{ "year": 2019, "sha1": "07a53b8aa0b2ce5d4f2c24a2a5b3f6fe9c68d650", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17453674.2019.1698503?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c90a1020fbce856e850c96c6dbad9c9ec6e1c95", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
31663649
pes2o/s2orc
v3-fos-license
AUTOLOGOUS STEM CELL TRANSPLANTATION IN MULTIPLE MYELOMA : IS IT STILL THE RIGHT CHOICE ? Autologous stem cell transplantation (ASCT) is considered the standard of care for multiple myeloma patients aged <65 years with no relevant comorbidities. The addition of proteasome inhibitors and/or immunomodulatory drugs has significantly increased the percentage of patients achieving a complete remission after induction therapy, and these results are maintained after high-dose melphalan (Alkeran®), leading to a prolonged disease control. Studies are being carried out in order to evaluate whether shortterm consolidation or long-term maintenance therapy can result in disease eradication at the molecular level, thus also increasing patient survival. The efficacy of these new drugs has raised the issue of deferring the transplant after achieving a second response upon relapse. Another controversial point is the optimal treatment strategy for high-risk patients, that do not benefit from ASCT, and for whom the efficacy of new drugs is still matter of debate. For many years the gold standard treatment for multiple myeloma (MM) was the combination of melphalan and prednisone (Deltasone®) (MP), 1 as different polychemotherapy regimens failed to demonstrate a superior efficacy. 2MP was able to induce a response in >40% of treated patients; complete responses, however, were achieved in <5% of the cases, and overall patient survival did not exceeded 3 years.The first step towards the introduction of autologous stem cell transplantation (ASCT) in MM was the demonstration of a doseresponse effect of melphalan in MM cells. 3The potential to overcome resistance to melphalan by using higher doses of the drug was subsequently explored in vivo; 4 27% of newly diagnosed patients reached a complete response (CR) upon treatment with high-dose melphalan (HDM), and this translated into a prolonged survival, even though treatmentrelated mortality was unacceptably high.In order to reduce the duration of profound cytopaenia related to the use of HDM, autologous stem cell rescue was then introduced in the clinical practice, initially for relapsed/refractory disease, and then for newly diagnosed MM. 5,6 The formal demonstration that ASCT is superior to conventional chemotherapy in terms of response, duration of response, and overall survival (OS), came from two randomised trials, from the Intergroup Francophone du Myeloma (IFM) 7 and the Medical Research Council (MRC). 8In order to ameliorate these results, the application of two subsequent ASCTs was then explored by IFM 9 and by the Bologna Follow-up Group; 10 both studies demonstrated an improvement in response rate (RR) and event-free survival (EFS); however, only the French study was able to show a survival advantage for patients receiving a double ASCT.Further analysis of the IFM trial 9 showed that a second ASCT could result in an increased OS only in patients failing to achieve at least a very good partial response (VGPR) after the first ASCT; these data were in agreement with a subanalysis of the Bologna trial 10 showing an improved EFS after a second ASCT in patients failing to achieve at least a near-CR after the first one.While the use of a double ASCT is still matter of debate, from the late 90s onwards, a single ASCT has been referred as the standard of care (SoC) for newly diagnosed MM patients <65 years with no relevant comorbidities. 
In addition to the clinical benefit offered by ASCT, in recent years the therapeutic results for MM have significantly improved due to the availability of drugs that are active both on neoplastic plasma cells and on bone marrow microenvironment, such as thalidomide (Thalidomid®), lenalidomide (Revlimid®), and bortezomib (Velcade®).Thalidomide was the first agent included in induction therapy for newly diagnosed MM patients eligible for ASCT; the drug was used in combination with high-dose dexamethasone, i.e. thalidomidedexamethasone (TD), yielding interesting results as compared to conventional chemotherapy in a case-match retrospective analysis 11 or to high-dose dexamethasone in a prospective randomised trial. 12n a further randomised trial (Total Therapy 2), 13 thalidomide was continuously applied in the various phases of the whole treatment programme until patient relapse; again, an advantage in terms of CR rate and EFS was observed in patients treated with thalidomide as compared to those not receiving the drug, but OS was similar in the two groups of patients.Subsequent trials were designed to evaluate the combination of TD with doxorubicin (Adriamycin®); 14 a significant improvement in RR was observed when compared to conventional chemotherapy (vincristine [Oncovin®]-doxorubicindexamethasone [Decadron®] [VAD]).Bortezomib was tested in combination to dexamethasone (VD) in a Phase II study; 15 a VGPR rate of >30% was achieved after induction and upgraded to >50% after ASCT.A further Phase II study was designed with the aim to compare VD to conventional VAD; 16 again the arm treated with the novel regimen showed a significantly higher RR (38% VGPR or better versus 15%) that was confirmed after ASCT.The combination of VD with cyclophosphamide (Cytoxan®) (VCD) was able to induce a VGPR or better in >60% of the patients, 17 similar results were reported using VD+ doxorubicin (PAD). 18enalidomide was studied in a randomised trial in combination to high (RD) versus low (Rd) doses with dexamethasone. 19After four courses patients were allowed to undergo ASCT or to proceed with the same therapy; even though RR was significantly higher in the RD group, survival was the same due to the higher toxicity experienced by the patients treated with high-dose dexamethasone. A further improvement in the results obtained with novel drugs ± steroids ± chemotherapy was achieved by combining two novel drugs with dexamethasone.The combination bortezomibthalidomide and dexamethasone (VTD) was randomly compared to TD as induction therapy prior to ASCT (Table 1), yielding a significant advantage in terms of response, both CR and VGPR. 20These data were confirmed by a recent study of the Pethema group. 21A bortezomib + thalidomide-containing regimen was also used in the Total Therapy 3 trial, 22 in the context of a polychemotherapy programme involving induction, ASCT, consolidation, and maintenance; as compared to Total Therapy 2, in which only TD was used, 13 a significant prolongation of EFS was observed.A randomised study conducted by the IFM in newly diagnosed MM patients 23 demonstrated that the triple combination VTD, with reduced dose bortezomib and thalidomide, was superior to VD in terms of response, both after induction and after ASCT.So far, these results indicate that induction therapy in preparation to ASCT should include bortezomib + dexamethasone + an immunomodulating agent, either thalidomide or lenalidomide, that is presently being explored in Phase II trials. 
DEBATED ISSUES
Is Complete Remission a Goal to be Pursued?
When MP was the only available therapeutic strategy for MM, the attainment of CR was not a matter of concern, as only a minority of patients could achieve a minimal residual disease status. The introduction of more aggressive therapeutic programmes including ASCT prompted a better evaluation of minimal residual disease, including cytofluorimetric analysis25 and molecular techniques.26 At present, the International Myeloma Working Group (IMWG)27 has provided the definition of 'stringent CR', comprising negative serum/urine immunofixation together with a normal serum free-light-chain ratio and the absence of clonal plasma cells in the bone marrow. Several groups have analysed the relationship between CR and patient outcome and have pointed out that CR is a strong predictor of survival,28 especially when extended over several years;29 for this reason it is now generally recognised that every effort should be made to achieve maximal disease eradication through the various phases of the treatment programme.30
Can Consolidation or Maintenance Therapy Improve Patient Outcome?
The administration of some kind of treatment upon completion of the main therapy, in order to improve or maintain its efficacy, represents the SoC in several lymphoproliferative neoplasms, such as acute lymphoblastic leukaemia, low-grade lymphoma, or mantle cell lymphoma, and for this reason it has been considered an attractive option for MM as well. Consolidation therapy is defined as a short course of treatment administered after ASCT, aimed at further reducing tumour load. A study from the Nordic group31 evaluated the efficacy of a short course of bortezomib, and an increased percentage of CRs was observed.32-34 Maintenance therapy is defined as long-term treatment aiming at preventing disease recurrence or progression. Alpha interferon has been widely tested after ASCT and, despite two reports showing improved survival, side-effects greatly outweigh the possible advantage, so that this approach has been definitively abandoned.35 A limited efficacy was also reported with long-term use of steroids.36 Thalidomide has been studied in six trials,13,14,37-40 and in three of them the drug was also used in the induction phase. Although all the trials showed an advantage in terms of EFS or progression-free survival (PFS), an OS advantage for patients treated with thalidomide was observed in only two trials.38,39 Furthermore, the likelihood of selecting MM clones resistant to thalidomide and responsible for short post-relapse survival should probably be taken into consideration,13,14,40 as well as the limited efficacy of the drug in the case of poor-risk cytogenetics.39 Due to its favourable toxicity profile, and specifically the lack of long-term neurological toxicity, lenalidomide has been tested as maintenance therapy in two randomised studies,41,42 both of which showed a significant advantage in time to progression, while OS was significantly improved in only one study.42 Side-effects were mainly haematological, and a higher percentage of second primary malignancies was observed in lenalidomide-treated patients;41,42 however, these data need further observation, as it is clear that the survival benefit outweighs the risk of death from second malignancies.43
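Stepping back to the response definitions used throughout this section: the IMWG 'stringent CR' quoted above is essentially a conjunction of laboratory criteria. The following is a minimal sketch making that explicit; the field names and the free-light-chain reference interval (0.26-1.65) are illustrative assumptions, not values given in this article.

```python
# Hedged sketch of the IMWG stringent-CR criteria described in the text.
# The kappa/lambda normal range below is the commonly cited reference
# interval, supplied as an assumption (the article does not state it).
from dataclasses import dataclass

@dataclass
class MyelomaResponse:
    serum_immunofixation_negative: bool
    urine_immunofixation_negative: bool
    kappa_lambda_ratio: float          # serum free-light-chain ratio
    clonal_plasma_cells_in_marrow: bool

def is_stringent_cr(r: MyelomaResponse,
                    flc_normal=(0.26, 1.65)) -> bool:
    """True only if every stringent-CR criterion listed in the text holds."""
    lo, hi = flc_normal
    return (r.serum_immunofixation_negative
            and r.urine_immunofixation_negative
            and lo <= r.kappa_lambda_ratio <= hi
            and not r.clonal_plasma_cells_in_marrow)

# Example: immunofixation-negative patient with a normal ratio and a clean marrow.
print(is_stringent_cr(MyelomaResponse(True, True, 1.1, False)))  # True
```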
A recent report analysed the role of bortezomib maintenance after ASCT;18 patients receiving bortezomib showed a significant advantage in terms of PFS and OS, even though the potential for neurological toxicity should be taken into consideration. Despite these interesting results, however, the data are not mature enough to recommend a specific strategy, and the issue of consolidation and/or maintenance treatment remains debated. In recent years, many attempts have been made to identify patients at high risk of relapse and poor survival, and several parameters have been taken into consideration. The simplest and cheapest is the International Staging System prognostic model,56 designed by the IMWG and based on beta-2 microglobulin and albumin levels; significantly different survival (62 months, 44 months, and 29 months) was shown for Stage 1, 2, and 3 patients, respectively. The major pitfall of this risk stratification is that it does not take into account the cytogenetic alterations that are now considered the main parameter affecting patient prognosis. No agreement exists on which of fluorescence in situ hybridisation, comparative genomic hybridisation, or gene expression profiling is the best method to detect chromosomal abnormalities. However, patients showing t(4;14), t(14;16), deletion 17p,57 or 1q abnormalities57,58 carry a worse prognosis and should be treated differently from patients with no chromosomal abnormality.59 Very few data, however, are presently available concerning the efficacy of different therapeutic regimens in poor-risk patients. A bortezomib-containing induction therapy seems to improve the outcome of patients carrying t(4;14).20,21 This is not the case for thalidomide,60 especially in maintenance trials,36 while conflicting results were reported regarding lenalidomide-dexamethasone induction.61 On the other hand, patients with 17p deletion seem not to benefit from bortezomib followed by ASCT.62 Dose-dense regimens, upfront myeloablative ASCT, or novel agents are presently proposed for high-risk patients in the context of clinical trials aiming to find a proper therapeutic approach.
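For illustration, the ISS logic just described reduces to two laboratory thresholds. The sketch below encodes it; the cut-offs (beta-2 microglobulin 3.5 and 5.5 mg/L, albumin 3.5 g/dL) are the published ISS criteria, supplied here as an assumption since the article quotes only the stage-specific survivals.

```python
# Hedged sketch of the ISS prognostic model mentioned above. The cut-offs are
# the published ISS criteria (an assumption here, not quoted in this text);
# the survival figures in the comments are those reported in the article.
def iss_stage(beta2_microglobulin_mg_l: float, albumin_g_dl: float) -> int:
    if beta2_microglobulin_mg_l < 3.5 and albumin_g_dl >= 3.5:
        return 1   # median OS 62 months in the cited series
    if beta2_microglobulin_mg_l >= 5.5:
        return 3   # median OS 29 months
    return 2       # median OS 44 months

# Example: beta-2 microglobulin 4.1 mg/L with albumin 3.0 g/dL -> Stage 2.
assert iss_stage(4.1, 3.0) == 2
```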
CONCLUSION
In the last few years the outcome of MM patients has significantly improved with the introduction of novel drugs into clinical practice. The inclusion of thalidomide, lenalidomide, or bortezomib in various combinations in the different phases of an ASCT programme increases the percentage of patients achieving a CR, thus potentially leading to patient cure. The data are not yet mature enough to establish whether a combination of new drugs, administered for a prolonged period of time, could render ASCT unnecessary. At present, in many US institutions, both physicians and patients are in favour of a delayed-ASCT policy in order to avoid the complications related to the period of myelosuppression associated with the procedure. It cannot be taken for granted, however, that patient quality of life is worse with the short period of myelosuppression entailed by ASCT than with prolonged therapy using any of the new drugs that are presently available and whose side-effects are well known. At present, at least in Europe, ASCT is still considered the SoC for young patients with newly diagnosed MM, and the issue is how the results can be further improved. A number of new drugs are presently being tested in MM at various disease phases. Among them is carfilzomib (Kyprolis®), an irreversible proteasome inhibitor that, after having proven effective in relapsed/refractory disease, has been tested in combination with lenalidomide in newly diagnosed MM patients,63 inducing stringently defined CR in up to 40% of cases. Pomalidomide (Pomalyst®), a thalidomide derivative, has been shown to be effective even in lenalidomide- or bortezomib-refractory patients.64 These drugs will probably be included in induction therapy prior to ASCT in order to further improve disease eradication.
Table 1: Results obtained with novel drug combinations used as induction therapy prior to ASCT.
Is ASCT Feasible in Elderly Patients?
2018-05-30T23:09:41.671Z
2014-07-31T00:00:00.000
{ "year": 2014, "sha1": "8a3314b35c8cf7c15fe1c7dd807f0876b9b3b4e2", "oa_license": "CCBYNC", "oa_url": "https://emjreviews.com/wp-content/uploads/sites/2/2018/02/Autologous-Stem-Cell-Transplantation-In-Multiple-Myeloma-Is-It-Still-The-Right-Choice.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8a3314b35c8cf7c15fe1c7dd807f0876b9b3b4e2", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
26655004
pes2o/s2orc
v3-fos-license
Giant type III well-differentiated neuroendocrine tumor of the stomach: A case report
Highlights
• A rare case of a large type III neuroendocrine tumor of the stomach is presented.
• Currently there are no published data regarding large neuroendocrine tumors of the stomach.
• Classification systems for neuroendocrine tumors are not universally accepted, making it difficult to compare data.
• Discrepancy between fine needle aspiration and final pathology is encountered in large neuroendocrine tumors.
• Type I and II gastric neuroendocrine tumors can be managed endoscopically. Type III and IV should be managed surgically.
Introduction
Gastroenteropancreatic neuroendocrine tumors (NETs) are rare lesions which originate in the enterochromaffin cells located in the gastrointestinal (GI) tract. Although they are considered indolent tumors, their clinical behavior is unpredictable and can range from benign to malignant. NETs are subdivided into foregut (gastric, duodenal and pancreatic), midgut (jejunal, ileal, cecal) and hindgut (distal colic and rectal) [1], with the most common site of origin being the ileum, followed by the rectum and the appendix [2,3]. We describe a case of a large type III neuroendocrine tumor of the stomach. Management and the current literature are reviewed.
Presentation of case
A 37 year old female presented with sudden onset epigastric abdominal pain and several associated episodes of hematemesis and melena over the 3 days prior to presentation. She described intermittent epigastric discomfort over the past 3 years which improved with proton pump inhibitors. On physical exam, the abdomen was soft, non-tender, and non-distended. Rectal exam was positive for occult blood with no other abnormal findings. Normocytic anemia was the only abnormal routine test, with a hemoglobin value of 10. A CT scan of the abdomen and pelvis was performed, displaying a 7 × 7 × 10 cm mass in the left upper quadrant, the origin of which could be either gastric or pancreatic (Fig. 1), with no evidence of metastatic disease. An esophagogastroduodenoscopy (EGD) was performed, showing a mass with a large bleeding ulcer adjacent to the gastroesophageal junction (GEJ) (Fig. 2). Endoscopic ultrasound (EUS) revealed a 7 cm mass in the gastric wall arising from the mucosal layer, with no pancreatic involvement. Fine needle aspiration was performed. Serum chromogranin A was elevated (236 ng/ml; normal: 1.9-15 ng/ml), with normal gastrin levels (33 pg/ml; normal: <100 pg/ml). The patient remained stable without any further bleeding, was discharged home, and later returned for an elective gastrectomy. At the time of operation, a large gastric mass was found 3 cm from the GEJ (Fig. 3). A total radical gastrectomy, including perigastric, left gastric and celiac lymph node dissection, was performed. Three centimeters of distal esophagus were also included. A Roux-en-Y reconstruction was performed and a feeding jejunostomy was placed. Negative margins were confirmed by frozen section. The patient had an uneventful postoperative course and was discharged home on postoperative day 6. Final pathology analysis showed a 10 cm well-differentiated grade 2 type III gastric neuroendocrine tumor with subserosal and perineural invasion (T3). Margins were free of tumor, and 7/17 lymph nodes were positive for malignancy (N3). The tumor was solitary, with no endocrine cell hyperplasia or atrophic gastritis, consistent with a type III tumor (Fig. 4).
Mitotic rate was 1 per 10 high power fields, and immunohistochemistry showed a Ki-67 index of 7.54%, assigning a grade 2 neuroendocrine tumor (G2 NET) according to the World Health Organization (WHO) classification (Fig. 4D). This represented a discrepancy with the prior FNA result, which gave a Ki-67 index of <2%. Chromogranin A levels normalized one month after excision (from 240 to 4 ng/ml). CT scans of the chest and abdomen performed at 3, 6 and 12 months post-operatively have been negative for recurrence.
Discussion
NETs can be stratified using several classification systems, the two most prevalent being those of the WHO and the American Joint Committee on Cancer. The 2010 WHO classification is based on the number of mitoses and the Ki-67 index, giving four categories: G1 NET, G2 NET, neuroendocrine carcinoma (NEC) and mixed adenoneuroendocrine carcinoma (MANEC) [4]. The 2009 American Joint Committee on Cancer/Union for International Cancer Control (AJCC/UICC) classification system uses tumor invasion, number of lymph nodes affected and metastases (TNM) [4]. However, these systems are not universally accepted, making it difficult to compare data from different centers [5]. Gastric NETs (GNETs) account for 4% of all neuroendocrine tumors of the body [6], represent between 8.7-23% of all GI tumors of this type [3,7], and only 1% of all neoplasms of the stomach [7]. Due to an increase in routine endoscopy, their incidence has increased substantially [8], and currently stands at 1-2 cases per 100,000 population per year, with a female predominance and a mean age at diagnosis of 64 years [3,9] (Table 1). Based on histomorphologic characteristics and pathogenesis, GNETs are classified into four types that differ in prognosis and biological behavior [10]. Type I (70-80%) is related to chronic atrophic gastritis, is usually located at the gastric fundus or body, and has a good prognosis after resection [11]. Type II (5-6%) is often associated with Zollinger-Ellison syndrome and MEN1. Similarly to type I, type II GNETs are benign with a low risk of malignancy [12]. Type III (14-25%) GNETs are usually sporadic tumors that quite often infiltrate the muscularis propria and serosa, conferring a malignant potential. They are also associated with vascular and lymph node invasion and liver metastasis [13]. Type IV GNETs are very rare, usually single, poorly differentiated and malignant, and associated with metastatic spread at the time of presentation [14]. Types I to III GNETs originate from enterochromaffin cells, while type IV originates from other endocrine cells that secrete gastrin, serotonin or adrenocorticotrophic hormone. Types I and II are associated with hypergastrinemia, while types III and IV are gastrin-independent tumors. Type I GNETs frequently present with multiple small tumors. La Rosa et al. [15] reported that 77% of such tumors are less than 1 cm and 97% less than 1.5 cm in size. Type II GNETs are also multiple and less than 2 cm [9]. Type III GNETs are sporadic, isolated, and larger (>2 cm), with a mean size of 5 cm, located at the body/fundus and surrounded by normal (non-atrophic) mucosa [9]. There are currently no reports in the literature of a type III GNET with dimensions as large as the one presented. Type IV GNETs are typically larger in size: Bordi et al. [16] presented a case of a type IV GNET measuring 16 cm, representing one of the largest tumor sizes ever reported. Initial evaluation of patients with a suspected GNET should include a serum chromogranin A level.
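To make the grading logic invoked for this case explicit, the following is a hedged sketch of the 2010 WHO scheme discussed above. The numeric cut-offs (mitoses per 10 HPF; Ki-67 percentage) are the commonly cited WHO 2010 thresholds, supplied as an assumption because the article does not list them, and the convention of assigning the higher of the two component grades is likewise assumed.

```python
# Hedged sketch of 2010 WHO NET grading from mitotic count and Ki-67 index.
# Thresholds are assumed (commonly cited values), not quoted from this article.
def who_2010_grade(mitoses_per_10hpf: float, ki67_percent: float) -> str:
    def grade_from_mitoses(m: float) -> int:
        return 1 if m < 2 else (2 if m <= 20 else 3)
    def grade_from_ki67(k: float) -> int:
        return 1 if k <= 2 else (2 if k <= 20 else 3)
    # Convention: the higher of the two component grades is assigned.
    g = max(grade_from_mitoses(mitoses_per_10hpf), grade_from_ki67(ki67_percent))
    return f"G{g} NET" if g < 3 else "NEC (G3)"

# The present case: 1 mitosis/10 HPF but a Ki-67 of 7.54% -> G2 NET,
# whereas the FNA Ki-67 of <2% would have suggested G1.
print(who_2010_grade(1, 7.54))  # G2 NET
print(who_2010_grade(1, 1.5))   # G1 NET
```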
Chromogranin A is elevated in approximately 80% of patients with neuroendocrine tumors, regardless of the site [17]. It is frequently elevated in types I to III but normal in type IV, likely related to the poorly differentiated nature of this tumor [10]. When the value is less than twice the upper limit of normal, chromogranin A is a predictive factor for overall survival [18]. Measurement of gastrin levels is also recommended due to the association of types I and II with hypergastrinemia. Upper endoscopy represents an essential diagnostic tool, as the majority of GNETs are found on endoscopic examination performed for dyspeptic symptoms or anemia, and types I and II commonly present as polypoid lesions amenable to endoscopic resection. Biopsies of the lesions should be taken, as well as biopsies from normal-appearing stomach to determine the presence of atrophic gastritis [19]. Endoscopic ultrasound (EUS) is recommended in lesions greater than 2 cm to assess depth of invasion [20]. An octreotide scan can be a useful adjunct in the diagnosis of gastric neuroendocrine tumors. FDG-PET scanning is more sensitive in the detection of G3 NETs than of G1 or G2 tumors, owing to the high metabolic activity of G3 NETs [21]. Endoscopic resection and surveillance is the treatment of choice in the majority of cases of type I GNET. Lesions less than one centimeter in size should be observed and carefully followed with annual endoscopy. Lesions greater than one centimeter in size are amenable to endoscopic resection (polypectomy, endoscopic mucosal resection) only if the lesion is confined to the mucosa or submucosa [21]. Gastric resection for a type I or type II GNET is recommended in patients with multifocal lesions (>4-6 lesions) or when invasive or recurrent disease is present [21]. Since type III and IV GNETs behave similarly to gastric adenocarcinomas, with a high incidence of invasion beyond the submucosa and distant metastasis at presentation (50-100%), radical resection is recommended [21]. Chemotherapy and radiation therapy are indicated in advanced disease and as a palliative option in type IV GNET. Combination chemotherapy regimens are most commonly administered, since single-agent chemotherapy has low response rates. The most common agents used are etoposide, cisplatin (CDDP), and carboplatin, along with the somatostatin analogues octreotide and pasireotide. Somatostatin analogues have shown a role in hormonal symptom control and tumor suppression [10]. In general, type I and II GNETs have a good overall prognosis secondary to their benign biology, with tumor-related mortality ranging from 0.5 to 5% [10]. Close surveillance is recommended for potential recurrence and malignant transformation. Type III tumor-related 5-year mortality is between 25 and 30% for well-differentiated and 75-87% for poorly differentiated tumors [10]. Type IV GNETs have a mortality of 100% at 5 years and a mean survival of 6.5-14 months after diagnosis [10]. This case of GNET is notable for the size of the tumor. In a review of the recent literature, the majority of reported cases have an average diameter of 4 cm, compared with this case, which was found to be 10 cm in greatest dimension [22-24].
Conclusion
The incidence of gastric neuroendocrine tumors has been increasing during the last decade, underscoring the need to improve our understanding of their biology and behavior.
If a GNET is identified histologically, patient outcomes depend on appropriate determination of tumor biology and the subsequent choice of treatment, whether surgical, medical, or both. As with all malignant neoplasms, treatment of GNETs must take a multi-faceted and team-based approach, utilizing multiple modalities to improve patient outcomes.
Conflicts of interest
The authors declare that there is no conflict of interests regarding the publication of this manuscript.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethical approval
Not requested.
Consent
Written informed consent was obtained from the patient.
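As a compact recap of the management recommendations summarised in the Discussion above, the sketch below condenses them into decision logic. It mirrors the thresholds quoted in the text (the 1 cm size cut-off, >4-6 lesions, confinement to the mucosa/submucosa) purely for illustration, and it omits the many clinical considerations a real pathway would need.

```python
# Simplified, illustrative decision sketch of the GNET management rules cited
# in the discussion; not a clinical algorithm.
def gnet_management(gnet_type: int, size_cm: float, n_lesions: int,
                    confined_to_submucosa: bool,
                    invasive_or_recurrent: bool) -> str:
    if gnet_type in (3, 4):
        # Types III/IV behave like adenocarcinoma per the text.
        return "radical surgical resection"
    if invasive_or_recurrent or n_lesions > 4:
        return "gastric (surgical) resection"
    if size_cm < 1.0:
        return "observation with annual endoscopic surveillance"
    if confined_to_submucosa:
        return "endoscopic resection (polypectomy / EMR)"
    return "surgical resection"

print(gnet_management(1, 0.6, 1, True, False))    # surveillance
print(gnet_management(3, 10.0, 1, False, False))  # radical resection (this case)
```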
2018-04-03T00:44:37.384Z
2016-06-16T00:00:00.000
{ "year": 2016, "sha1": "e24a8a2a60eb9647b94cc0db2b61a5d1a41d7cfd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2016.06.008", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e24a8a2a60eb9647b94cc0db2b61a5d1a41d7cfd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2421260
pes2o/s2orc
v3-fos-license
Addendum to:"Lifting smooth curves over invariants for representations of compact Lie groups, III"[J. Lie Theory 16 (2006), No. 3, 579-600.] We improve the main results in the paper from the title using a recent refinement of Bronshtein's theorem due to Colombini, Orr\'u, and Pernazza. They are then in general best possible both in the hypothesis and in the outcome. As a consequence we obtain a result on lifting smooth mappings in several variables. A recent refinement of Bronshtein's theorem [5] and of some of its consequences due to Colombini, Orrú, and Pernazza [6] (namely theorem 1(i) below) allows to essentially improve our main results in [10]; see theorem 2 and corollary 3 below. The improvement consists in weakening the hypothesis considerably: In [10] we needed a curve c to be of class (i) C k in order to admit a differentiable lift with locally bounded derivative, (ii) C k+d in order to admit a C 1 -lift, and (iii) C k+2d in order to admit a twice differentiable lift. It turns out that theorem 2 and corollary 3 are in general best possible both in the hypothesis and in the outcome. In theorem 4 and corollary 5 we deduce some results on lifting smooth mappings in several variables. Refinement of Bronshtein's theorem. Bronshtein's theorem [5] (see also Wakabayashi's version [15]) states that, for a curve of monic hyperbolic polynomials with coefficients a j ∈ C n (R) (1 ≤ j ≤ n), there exist differentiable functions λ j (1 ≤ j ≤ n) with locally bounded derivatives which parameterize the roots of P . A polynomial is called hyperbolic if all its roots are real. The following theorem refines Bronshtein's theorem [5] and also a result of Mandai [14] and a result of Kriegl, Losik, and Michor [8]. In [14] the coefficients are required to be of class C 2n for C 1 -roots, and in [8] they are assumed to be C 3n for twice differentiable roots. Counterexamples (e.g. in [6, section 4]) show that in this result the assumptions on P cannot be weakened. Improvement of the results in [10]. Let ρ : G → O(V ) be an orthogonal representation of a compact Lie group G in a real finite dimensional Euclidean vector space V . Choose a minimal system of homogeneous generators σ 1 , . . . , σ n of the algebra which is independent of the choice of the σ i (see [10, 2.4]). If G is a finite group, we write V = V 1 ⊕ · · · ⊕ V l as orthogonal direct sum of irreducible subspaces V i . We choose v i ∈ V i \{0} such that the cardinality of the corresponding isotropy group G vi is maximal, and put Proof. (i) Letc be any differentiable lift of c. Note that the existence ofc is guaranteed for any C d -curve c, by [9]. In the proof of [10, 8.1] we construct curves of monic hyperbolic polynomials t → P i (t) which have the regularity of c and whose roots are parameterized by t → v i | g.c(t) (g ∈ G vi \G). If c is of class C k , then theorem 1(i) provides C 1 -roots of t → P i (t). By the proof of [10, 4.2] we obtain that the parameterization t → v i | g.c(t) is C 1 as well. Hencec is a C 1 -lift of c. Alternatively, the proof of 1(i) in [6] actually shows that any differentiable choice of roots is C 1 . The examples which show that the hypothesis in 1 are best possible also imply that in general the hypothesis in 2 and 3 cannot be improved. On the other hand the outcome of 2 and 3 cannot be refined either: A C ∞ -curve c does in general not allow a C 1,α -lift for any α > 0. See [7], [1], [4]. But see also [3] and [10, remark 4.2]. Note that the improvement affects also [13, part 6]. Proof. 
Let $c : \mathbb{R} \to U$ be a $C^\infty$-curve. By theorem 2(i) the curve $f \circ c$ admits a $C^1$-lift $\overline{f \circ c}$. A further continuous lift of $f \circ c$ is given by $\bar{f} \circ c$. By [12, 5.3] we can conclude that $\bar{f} \circ c$ is locally Lipschitz. So we have shown that $\bar{f}$ is locally Lipschitz along $C^\infty$-curves. By Boman [2] (see also [11, 12.7]) this implies that $\bar{f}$ is locally Lipschitz. In general there will not always exist a continuous lift of $f$ (for instance, if $G$ is a finite rotation group and $f$ is defined near $0$). However, if $G$ is a finite reflection group, then any continuous $f$ allows a continuous lift (since the orbit space can be embedded homeomorphically in $V$).
Proof. The Weyl group $W(\Sigma)$ is a finite reflection group, since $G$ is connected.
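As a classical toy illustration, not drawn from this addendum, of why roots of hyperbolic polynomials must be chosen rather than simply ordered, consider the following smooth curve of monic hyperbolic polynomials: the ordered roots are merely Lipschitz at the origin, while a reordered choice is real-analytic, which is exactly the kind of regular selection the results above provide.

```latex
% Toy example (standard, not from the paper): smooth coefficients, but the
% ordered roots fail to be C^1 while a reordered selection is analytic.
\[
  P(t)(x) = x^{2} - t^{2}, \qquad t \in \mathbb{R}.
\]
% Ordered roots: only Lipschitz at t = 0.
\[
  \mu_{1}(t) = |t|, \qquad \mu_{2}(t) = -|t|.
\]
% Reordered roots: real-analytic, with the same root set for every t.
\[
  \lambda_{1}(t) = t, \qquad \lambda_{2}(t) = -t,
  \qquad \{\lambda_{1}(t), \lambda_{2}(t)\} = \{\mu_{1}(t), \mu_{2}(t)\}.
\]
```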
2011-06-28T14:58:51.000Z
2011-06-28T00:00:00.000
{ "year": 2011, "sha1": "187784f5398af1f5feeffa9726563a545f4749a2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "187784f5398af1f5feeffa9726563a545f4749a2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
253502540
pes2o/s2orc
v3-fos-license
Facies analysis, depositional activity, and internal structure of sieve deposits on an active alluvial fan
Sieve lobes typically appear in gravel-rich and matrix-poor alluvial fans. Despite being extensively studied, the sieve-lobe facies has been defined largely based on qualitative field observations without quantitative sedimentological analyses. Additionally, depositional activity of sieve lobes has not been monitored over extended periods (monthly to annually) and has not been directly associated with specific precipitation triggers. Furthermore, the internal geometry of sieve-lobe built alluvial fans has not yet been imaged by subsurface methods. We performed a multi-method analysis of sieve lobes in the Julian Alps (NW Slovenia) on an alpine alluvial fan composed of carbonate gravels. We performed a detailed textural and structural sedimentological analysis of 11 recent sieve lobes differing in size and age. A three-year aerial survey of the alluvial fan surface with a small unmanned aircraft and photogrammetric modelling was used to detect active sieve-lobe evolution. Detected sieve-lobe formation events and volumetric surface changes were paired with triggering precipitation events. Ground-penetrating radar (GPR) profiling depicted the geometry of the sieve-lobe built alluvial fan. The sieve-lobe facies consists of over 80% poorly sorted, open-framework gravels and less than 2% mud. Lobes exhibit downward coarsening and an increase in clast mean size. These textural and structural characteristics are present in all sieve lobes regardless of their age and size. Sieve lobes form with a sub-annual frequency, usually following 24 h rainfall events exceeding 50 mm. Over 1000 m³ of sediment was deposited during these events. The GPR profiles confirm that the studied alluvial fan is formed predominantly by stacked sieve lobes. Quantitative sedimentary analysis of sieve lobes, monitoring of their recent evolution, and depiction of their subsurface geometry, as demonstrated in this study, reinforce the challenged concept that sieve lobes are one of the main building blocks of alluvial fans. This work also demonstrates that, under specific conditions, sieving may become the dominant alluvial fan-forming process.
Alluvial fans are sedimentary bodies built of sediment derived from the erosion of the catchment area by a suite of different processes, ranging from dilute flows (stream and sheet flows) to dense ones (gravity flows). A particular sedimentary facies characteristic of alluvial fans is the sieve-lobe facies (Hooke, 1967). Sieve-lobe deposits are an end product of the sieve deposition process and occur primarily on alluvial fans; however, they can also occur in other depositional environments, such as proglacial outwash fans and perennial streams (Milana, 2010). The requirement for sieve deposition is a significant amount of coarse-grained sediment, devoid of fine-grained material, transported as bedload on an inclined surface. Such sediment is transported by discharges moderate enough to allow infiltration into permeable and unsaturated ground (Hooke, 1967; Milana, 2010). The resulting sieve-lobe deposit consists of matrix-poor, clast-supported, moderately sorted, open-framework particles ranging from sands to very coarse-grained gravels (Bull, 1977; Hugenholtz, 2011; Morgan & Craddock, 2017).
Sieve-lobe growth is a gradual process in which a single lobe develops from multiple stacked sublobes, deposited on top of each other during one depositional event (sensu Milana, 2010), which typically occurs during short and intense precipitation events (Milana, 2010;Morgan & Craddock, 2017). Sieve-lobe formation was also documented in permanent streams with stable discharge, where the sediment bed became more permeable (Milana, 2010). A number of intertwined sieve lobes form a sieve-lobe complex and lobes may represent major or even sole building blocks of a fan sequence that has a sigmoidal shaped surface from apex to toe (Hugenholtz, 2011;Milana, 2010;Morgan & Craddock, 2017;Nemec & Postma, 1995). Sieve-lobe deposits and the sieve deposition process were first studied in a laboratory experiment (Hooke, 1967), which later served as an analogue for interpreting naturally occurring deposits on fans and cones. However, the sieve-lobe paradigm was subsequently opposed by several studies (Blair & McPherson, 1992, 1994, 2009, which proposed instead that open-framework sieve-lobe deposits are the end product of winnowing of primary matrix-rich debris flows by water. Despite the documentation of Holocene sieve deposits on Crete (Nemec & Postma, 1993, in a study later challenged by Blair and McPherson (1995), the sieve depositional process remained a disputed topic. Only recently has Hooke's original sieve-lobe paradigm been confirmed by documenting the deposition of multiple recent sieve lobes via the sieve deposition process (Milana, 2010), which corroborated the ideas of Hooke's (1967) original laboratory experiment. Modern studies document coarse-grained (pebble to cobble) (Colombo & Rivero, 2017;G omez-Villar & García-Ruiz, 2000;Milana, 2010;Morgan & Craddock, 2017) and sandgrained (Hugenholtz, 2011) lobes deposited via sieve deposition processes after intense precipitation events. In addition, morphological features on Mars have also been interpreted as deposited by sieve depositional processes (Brož et al., 2019). Although the sieve-lobe depositional process paradigm has been widely accepted recently and sieve lobes have been documented in nature, quantitative sediment analysis, monitoring of their evolution over a longer period, and analysis of the precipitation conditions that form this sedimentary feature have not yet been extensively published in the scientific literature. First, the sieve-lobe facies has been described only qualitatively, without detailed quantitative analyses of the sedimentary structure and texture of individual sieve lobes. In a stratigraphic sequence of alluvial fans, a sieve-lobe deposit can easily be mistaken for another open-framework gravel facies formed by other processes (Lunt & Bridge, 2007;Zhang et al., 2021). Second, the depositional conditions (i.e. the triggering conditions) for sieve-lobe deposition-such as rainfall intensity, rainfall quantity, and amount of transported sediment-have rarely been studied. These depositional conditions could be investigated by detailed monitoring of surface changes over a longer period (monthly to annually). Milana (2010) attributed sieve deposition predominantly to storm events during the rainy season, without defining the rainfall quantities. Hugenholtz (2011) described the formation of sand-grained sieve lobes during rapid snowmelt. Morgan and Craddock (2017) linked the deposition of recent sieve lobes to recorded triggering precipitation events of high intensity and low frequency. 
None of these studies observed active sieve-lobe deposition over an extended (i.e. annual) period or directly associated the quantity of transported sediment with the intensity of a particular precipitation event. In the case of episodically activated sieve lobes, the minimum amount of precipitation required to trigger sieve-lobe transport and deposition is unknown. The lack of monitoring of sieve-lobe formation over a longer period raises the question of whether sieve-lobe formation is an infrequent depositional event driven by extreme and infrequent precipitation, or whether it occurs more frequently under regular meteorological conditions. Third, it has been proposed that alluvial fans can be predominantly or even entirely built of multiple stacked sieve lobes (Milana, 2010). However, the subsurface geometry of such a fan has not yet been documented. In this study, we investigated the active deposition of sieve lobes on a gravel-rich alpine alluvial fan by performing a detailed sedimentary, topographic, and geophysical analysis. Our objective was to answer, at least in part, the open questions listed above by: (i) providing a detailed quantitative facies analysis based on the sedimentary structure and texture of recent sieve lobes differing in size and age; (ii) monitoring the active sieve-lobe depositional dynamics; (iii) linking their depositional activity to triggering precipitation events; and (iv) depicting the subsurface geometry of the sieve-lobe built alluvial fan.
Study site
The study was performed on the Suhi vrh alluvial fan in the Planica Valley in the Julian Alps in NW Slovenia (Figures 1a and b). The valley slopes are predominantly composed of Upper Triassic carbonates (Gale et al., 2015) and the valley floor is covered by various Quaternary sedimentary bodies, of which the Holocene alluvial fans are the most numerous (Novak et al., 2018). The alluvial fans have ephemeral streams in which the sediment (mainly gravel) is very actively deposited by water flows and sporadic debris floods (Novak et al., 2018, 2020). The studied Suhi vrh alluvial fan is located on the eastern slope of the valley below Suhi vrh Mountain (Figures 1 and 2a), and long-term climate records are available for the area (ARSO, 2006, 2009, 2021a, 2021b; Figure 1c). In the study area, there are on average six to eight precipitation events per year in which more than 50 mm of precipitation occurs within a 24 h period, and the snow cover persists from late November up to early May. The combination of active gravel-rich alluvial fans lacking fine-grained sediment and detailed meteorological records makes this study site an ideal location for studying active sieve-lobe deposition.
Sampling strategy and sedimentary analysis
We catalogued the sedimentary structures and general characteristics (size and morphology) of the sieve lobes in the field. Sieve lobes differ in their relative age and stage of development (i.e. fully developed lobes and sublobes). During fieldwork, several sublobes were found that had not formed into fully developed sieve lobes and therefore represent an initial development stage of incomplete sieve lobes (sensu Milana, 2010). Eleven individual sieve lobes and sublobes evenly distributed on the active surface of the alluvial fan were distinguished and sampled (Figures 2a, b, c). The relative age of each lobe was determined by the intensity of the grey lichen coating of clasts (Figures 2d, 3a and b), with older lobes having a more intense and darker coating.
All catalogued sublobes lacked lichen coating; no coated sublobes were found (Figure 3c). We assume that all sublobes are relatively young and were freshly deposited just prior to fieldwork. Using these criteria, we sampled three types of sieve lobes: four larger and relatively old fully developed sieve lobes (designated SO 1, SO 2, SO 3, and SO 4), four larger and relatively young fully developed sieve lobes (designated SY 5, SY 6, SY 7, and SY 8), and three relatively young sublobes (designated SL 9, SL 10, and SL 11). For each sampled sieve lobe, samples were collected from the proximal and distal parts of the lobe and, where possible, from the middle part (Figure 2d). Only lobes that showed no evidence of reworking (i.e. partial erosion or coverage) by subsequent sieve deposition processes were sampled. Depending on the lobe size, each sample contained between 7 and 30 kg dry weight; a total of 618 kg of sediment was analysed. Granulometric analysis was performed using a Haver and Boecker EML 200 sieve shaker, following previous studies (Dufresne et al., 2016). Samples were oven dried for 48 h at a temperature of 40 °C and dry sieved using standard sieve pans with diameters ranging from 32 mm to 63 μm. Clasts larger than 64 mm were measured manually, and each size group was dry weighed. Granulometric analysis of the dry-weighed sediment was performed using Gradistat software (Blott & Pye, 2001), following the grain size and texture classification of coarse sediment particles (Blair & McPherson, 1999; Blott & Pye, 2012). Particles finer than 63 μm were quartered to obtain 1 g of representative sediment and then measured using a Fritsch Analysette 22-28 laser granulometer with dynamic image analysis. Each sample was measured three times and an average was calculated from the three measurements. The calculated average of 1 g was extrapolated to the remaining amount of particles below 63 μm and to the total amount of sediment. Grain shape, roundness, and fabric were determined visually according to Illenberger (1991). A slope map was created to determine the surface inclination of the area where sieve-lobe deposition occurs. We used a digital elevation model (DEM) derived from airborne laser scanning (ALS) with a resolution of 0.5 m and classified the inclination into five classes (0-19°, 20-29°, 30-44°, 45-54°, and >55°). Data from ALS were obtained from the publicly available ALS dataset of Slovenia (ARSO, 2021c). The inclination map was created with the QGIS program (QGIS, 2021a) using the QGIS Raster Terrain Analysis plugin (QGIS, 2022). Repeated UAV surveys of the fan surface were georeferenced using a network of ground control points (GCPs) (Figure 2b). GCP coordinates were obtained with a rapid static GNSS survey, which was periodically repeated to control the long-term stability of the GCP network. Surveys using the DJI Phantom 4 RTK UAV also used a post-processed kinematic method to directly georeference flight paths and imagery, but no significant difference in survey precision was observed compared to the other UAVs used in the study. UAV surveys were repeated several times per year, following seasonal changes and major rainfall events. In this study, we analyse data from 10 surveys covering the period from April 2019 to November 2021. Surface changes were linked to recorded precipitation events from the Rateče Meteorological Station (ARSO, 2021d) by selecting the most intense rainfall events that could potentially trigger sediment transport. Following the studies of Guzzetti et al. (2007), ARSO (2009, 2021b), and Novak et al.
(2020), the threshold for a potentially triggering precipitation event was set at 50 mm of rainfall in 24 h. Intense 48 h rainfall events were also considered as potential triggers. Additional meteorological factors, such as snowfall and snow cover, were also considered as factors that could reduce or increase sediment transport.
Ground-penetrating radar
The ground-penetrating radar (GPR) technique was applied to understand the subsurface geometry of the Suhi vrh fan. This geophysical method has been used successfully on alpine alluvial fans (Franke et al., 2015; Mills & Speech, 1997), as well as on coarse-grained sedimentary bodies (talus slopes) built of carbonate gravels (Sass & Krautblatter, 2007). The Mala ProEx GPR common-offset survey was used with a 50 MHz unshielded rough terrain antenna (RTA). According to research by Sass and Krautblatter (2007), the 50 MHz antenna offers the best compromise between penetration depth and resolution of subsurface structures in sedimentary bodies composed of carbonate gravels. The collected raw GPR data were processed using RadExplorer software by DECO Geophysical (2005). The processing workflow included editing, time-zero correction, removal of DC offset, and topography correction. The DEM from the 10/6/2021 UAV survey (closest in time to the GPR measurements) was used as an elevation reference. Three GPR profiles were created in the active area of the alluvial fan (Figure 2b). The profiles were labelled GPR-1, GPR-2, and GPR-3. Longitudinal profile GPR-1 is 187 m long and extends from the fan's toe up to the fan's apex, orientated parallel to the direction of sediment transport. GPR-2 and GPR-3 are 84 and 91 m long, respectively, and orientated perpendicular to GPR-1 and thus to the active area of the Suhi vrh fan (Figure 2). The GPR profiles cover only the active area of the fan, as the inactive part is too densely vegetated and difficult to access. The radargrams were interpreted according to the nomenclature of Sangree and Widmier (1979).
3.1.2 Sediment texture: Grain size, internal grain distribution, grain shape, roundness, and fabric
The majority of the samples belong to the textural group of gravel. Only four samples (the proximal sample of SO 3, the proximal and middle samples of SL 9, and the proximal sample of SL 11) classify as sandy gravel (Figures 5-7). In all samples, the percentage of fines was very low: samples contained less than 2.5% mud (<63 μm) and between 0.005 and 0.295% clay (<2 μm). The distribution curves are unimodal in all samples except the proximal sample of SO 1. In the field, the lobes appear to have well-sorted sediment; however, the gravel fraction ranges from granules to coarse cobbles (Figure 8; Appendix). In six lobes it was also possible to extract samples from the middle part of the lobe (Figures 4-6). In general, the samples from the middle parts of the lobes have a larger gravel content than the proximal parts and a lower gravel content than the distal parts. For most sieve lobes, there is an increase of one size class from the proximal to the distal part of individual lobes (Appendix), following the gravel size classification of Blair and McPherson (1999).
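To make the textural bookkeeping behind these percentages concrete, the following is a minimal sketch of how per-sieve masses translate into the gravel/sand/mud percentages reported above. The class boundaries (2 mm and 63 μm) follow the standard limits used in Gradistat-style analyses, and the sample values are purely illustrative, not data from this study.

```python
# Illustrative sketch: convert per-sieve dry masses into percentage gravel
# (>= 2 mm), sand (63 um - 2 mm), and mud (< 63 um). Class limits are the
# standard Wentworth-type boundaries; the example sample is synthetic.
def textural_percentages(mass_by_sieve_mm: dict[float, float]) -> dict[str, float]:
    """mass_by_sieve_mm maps the lower bound of each sieve class (mm)
    to the retained dry mass (kg)."""
    total = sum(mass_by_sieve_mm.values())
    gravel = sum(m for d, m in mass_by_sieve_mm.items() if d >= 2.0)
    mud = sum(m for d, m in mass_by_sieve_mm.items() if d < 0.063)
    sand = total - gravel - mud
    return {k: round(100 * v / total, 2)
            for k, v in {"gravel": gravel, "sand": sand, "mud": mud}.items()}

# Synthetic 10 kg sample dominated by open-framework gravel:
sample = {64.0: 2.5, 32.0: 3.0, 16.0: 2.0, 4.0: 1.5, 2.0: 0.6,
          0.5: 0.3, 0.063: 0.08, 0.0: 0.02}
print(textural_percentages(sample))
# {'gravel': 96.0, 'sand': 3.8, 'mud': 0.2}  -> textural group: gravel
```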
The general trend of increasing downward coarsening of the sediment differs between younger lobes, older lobes, and sublobes (Table 1).
Surface changes and detected triggering events
Our aerial surveys during the period from 27/4/2019 to 17/11/2021 yielded 10 DEMs, from which we calculated 9 DEMs of difference (DoDs) that reveal various surface changes on the Suhi vrh alluvial fan (Figures 9-11). The surveying periods, volumetric changes (erosion and deposition of sediment), dates of the triggering rainfall events, rainfall amounts, and detected changes are summarised in Table 2.
DISCUSSION
The present study provides a detailed and quantitative facies analysis of the sieve deposits observed in the natural environment of the Suhi vrh alluvial fan. Their depositional activity is directly related to the amount and intensity of triggering precipitation events, while subsurface results show that they also represent major building blocks of the studied gravel-rich alluvial fan. To place the results in a broader context of gravel-rich alluvial fans formed by sieve lobes, we compare the results of this study to previous sieve-lobe studies (Hooke, 1967; Milana, 2010; Nemec & Postma, 1993) in the following sections.
Sieve-lobe facies
The quantitative results of the sedimentary facies analysis in this study corroborate previous qualitative analyses conducted in the field and in the laboratory (Hooke, 1967; Milana, 2010; Nemec & Postma, 1993). In a stratigraphic sequence, sieve-lobe deposits could otherwise be confused with open-framework gravel facies formed by other processes (Lunt & Bridge, 2007; Zhang et al., 2021). However, the difference in mean clast size is only in the range of one clast size class, which could be difficult to detect during mapping at outcrop scale (especially for smaller grain sizes that are difficult to distinguish in the field). Therefore, granulometric analysis is required to determine whether a layer is a sieve lobe. A comparable proximal-to-distal grain size variation was observed in grain flow deposits, where the front of the lobe has coarser clasts than the back, and the roundness of the clasts may be similar to sieve lobes (Van Steijn, 2011; Van Steijn et al., 1995). However, compared to sieve lobes, grain flows occur on considerably steeper surfaces (>33°) of colluvial deposits (Bertran et al., 1997). Such grain flows should not be confused with the fluidized grain flows observed by Milana (2010), which form at the sieve-lobe front and represent sublobes formed during sieve-lobe growth. The downward gradation of the individual lobes is almost uniformly pronounced in all the sieve lobes we have examined. (Table 1: Average amounts of gravel, sand, mud, and clay particles in old lobes, young lobes, and sublobes.) The discrepancy in percentages can be attributed to the sampling strategy. The contacts between the sieve lobes are poorly defined, and it is possible that some of the samples were also partially extracted from the underlying lobe. Older sieve lobes tend to have a higher percentage of small-grained particles (sand and mud) than younger ones. This could be attributed to physical weathering of clasts in older deposits, which produces a larger quantity of sand particles. The Conzen dolomite is highly fractured, and physical weathering of clasts following deposition could result in the production of finer particles. In addition, the greater amount of smaller particles in older sieve lobes could derive from secondary sedimentation processes following sieve deposition, such as aeolian dust or pedogenesis (Hooke, 1993; Milana, 2010).
Measurements showing a higher proportion of small-grained particles in older sieve lobes do not support the mechanism of winnowing of primary matrix-rich debris flows by waterflow proposed by Blair and McPherson (1992). The results of our study clearly confirm an increase in fine-grained material with age in the sieve lobes. Despite bedload transport, which should orient the a-axis of the clasts transverse to the transport direction, the clasts did not appear orientated. However, the clasts are highly spherical (Figure 8), making orientation challenging to measure. In some lobes, we detected crude stratification based on sharp vertical changes in mean grain size (Figure 3d). Similar observations were made by Nemec and Postma (1993) during studies on Crete. We suggest that this phenomenon probably reflects the fact that a single sieve lobe is gradually built up from several sublobes stacked on top of each other, which are hierarchically one level lower than a complete sieve lobe. Milana (2010) observed distinct sublobe morphology on fresh and wet deposits, whereas on dry deposits the morphology was smoothed and difficult to discern. We documented only relatively freshly deposited sublobes and no old ones. This suggests that the sampled sublobes were the initial building blocks of undeveloped sieve lobes that did not continue to form for a variety of reasons, such as lowering discharge or lack of sediment.
Detection of surface changes and creation of sieve lobes
Three years of surface monitoring allowed us to detect and quantify surface changes and sieve-lobe formation. The changes are directly related to specific triggering rainfall events with known quantities; rainfall events of this magnitude are common at the study site (Figure 1c; ARSO, 2009, 2021c). This strongly suggests that sieve-lobe formation may occur frequently, several times per year. During substantial events there was an increase in sediment volume of several hundred cubic metres, which was transported as bedload and deposited predominantly as sieve lobes. Such changes occurred annually during rainfall events with more than 60 mm of rainfall in 24 h (Table 2). The predominance of positive over negative changes in sediment volume indicates that surface changes are caused not only by redeposition of pre-existing sediment, but also by sediment influx from the catchment. Moderate changes were caused by either 24 or 48 h events with more than 50 mm of precipitation (Table 2). The magnitude of precipitation events that caused moderate changes was in some cases similar to that of major changes. However, these precipitation events generally transitioned from rain to snowfall and therefore failed to cause substantial sediment transport. An exception is the triggering event of 4/11/2021, which did not cause substantial deposition despite the large amount of precipitation (67.5 mm). We assume that rainfall did not occur within a short period of a few hours but extended throughout the entire day. This facilitated concurrent water infiltration, so that little surface runoff occurred. During moderate changes, sediment volume increased by only up to a few tens of cubic metres, indicating that redeposition of pre-existing sediment was the predominant mechanism, with little or no sediment influx from the catchment. Minor changes happened during less intense triggering rainfall events, which did not exceed 50 mm of rainfall in 24 h. The only exception was the 3/5/2021 event, which had a precipitation amount comparable to a substantial event.
However, this event later transitioned to snowfall, which could not cause significant sediment movement. At our study site, sieve lobes were deposited either at the channel mouth or inside the distributary channel, which corroborates previous observations of sieve-lobe generation (Hooke, 1967; Milana, 2010; Nemec & Postma, 1995). At the channel mouth (Figures 9a and c, 11a), the prevailing depositional mechanism was bedload accumulation in the form of sieve-lobe complexes due to the total decay of stream shear stress, a process already documented by Milana (2010). During two substantial events (Figures 9a and c), the middle and lower parts of the distributary channel were filled with sieve lobes, which led to avulsion. Alternatively, avulsion might also occur due to channel plugging with sediment from previous moderate and minor events (cf. de Haas et al., 2018). During such events the sediment was deposited inside the distributary channel, predominantly in the form of sieve lobes (Figures 9b, 10b and c, 11c). These lobes, confined in a very narrow channel, could cause a blockage and force subsequent depositional events to avulse. The documented sieve lobes did not damage vegetation, indicating that the transport energy of sieve lobes is low and that the sediment is transported as bedload. The same findings were described by Milana (2010). Low transport energy is also indicated by the low surface inclination (20° or less) and by the stacked and intertwined lobes exhibiting no erosional contacts. With minor and moderate surface changes, the sieve lobes grew on the fan's surface in the middle or even distal parts, with no direct connection to the fan's catchment area. Similar phenomena were observed on the alluvial fans of Crete (Nemec & Postma, 1993). Milana (2010) suggested that alluvial fans may be predominantly or entirely built of sieve-lobe deposits, which is corroborated by research on Argentine fans, where the fan surfaces are almost entirely covered by sieve lobes. The study of Hooke (1967) shows that fans can be simultaneously built either by sieve lobes or by other sedimentary deposits, whereas in the findings of Nemec and Postma (1993) sieve lobes account for only a minor percentage of alluvial fan composition. These studies were based on surface analyses of alluvial fans with little or no subsurface information. Our results from the GPR profiles (Figures 12 and 13) confirm previous estimates that entire fans may be predominantly or entirely built of sieve lobes. The maximum depth reached by the GPR signal was up to 15 m, below which the signal was most likely attenuated by underground water, which is usually present in highly porous sediments. Therefore, the bottom surface of the fan and the deeper sediment geometry were not reached. Consequently, the total thickness of the Suhi vrh alluvial fan is unknown. However, the radargrams show reflection patterns that correspond to the surface topography of exposed recent sieve lobes and are interpreted as such. Reflections in profile GPR-1 clearly show lobate cross-bedding, interpreted as inclined, stacked sedimentary layers dipping parallel to the recent surface. Milana (2010) described how the surface slope of an alluvial fan built by sieve-lobe deposition exhibits a sigmoidal shape that derives from rapid extraction of water from the transported sediment. The recent surface of the Suhi vrh alluvial fan has a very pronounced sigmoidal shape, which is also visible in the subsurface data.
The proximal part exhibits less pronounced sigmoidal shapes due to the dominance of sediment transport over deposition. Radargrams 2 and 3 are interpreted as an undulating morphology of intertwined sieve lobes. (Figure 13: Radargrams 2 and 3, exhibiting stratified hummocky and discontinuous reflections up to 10 m long; examples marked with red lines.) Such a morphology is present on the recent surface (Figure 4), and the reflectors have the same shape below the surface.
Subsurface geometry of a sieve-lobe built fan
The shape and orientation of the reflectors resemble the parallel and perpendicular cross-sections of the individual sieve lobes that occur on the surface of the Suhi vrh alluvial fan. The GPR data indicate that the upper 10 to 15 m of the studied alluvial fan consist predominantly of stacked sieve lobes, supporting Milana's idea of alluvial fans built entirely of sieve lobes (Milana, 2010).
CONCLUSION
This study provides a quantification of the sieve-lobe sedimentary facies, its depositional activity, and the triggering precipitation conditions. The depositional process occurs as bedload transport, with coarse-grained sediment deposited predominantly as sieve-lobe deposits. Sieve lobes can be deposited inside or outside the channel and occur at slope angles lower than 30°. The most severe triggering rainfall events, which resulted in substantial surface changes and the deposition of several sieve lobes, had rainfall amounts greater than 60 mm within a 24 h period. Such precipitation events are very common at the study site, and therefore sieve-lobe deposition occurs regularly with a sub-annual frequency under regular meteorological conditions. The most substantial event resulted in the deposition of more than 1000 m³ of sediment. The internal architecture of the studied alluvial fan derived from the GPR data resembles the surface deposits and confirms previous research showing that some coarse-grained alluvial fans can be built almost entirely of sieve lobes.
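As a closing illustration of the monitoring workflow, the sketch below shows, under stated assumptions, how a DEM-of-difference volume budget of the kind reported in Table 2 can be computed from two co-registered DEMs. The grid spacing, the level of detection, and the arrays themselves are illustrative assumptions rather than parameters taken from the survey.

```python
# Illustrative DEM-of-difference (DoD) volume budget: subtract two
# co-registered DEMs and convert elevation change into deposited and
# eroded volumes. All numbers here are synthetic.
import numpy as np

def dod_volumes(dem_new: np.ndarray, dem_old: np.ndarray,
                cell_size_m: float = 0.5, lod_m: float = 0.05):
    """Return (deposition, erosion, net) volumes in cubic metres.
    Changes smaller than the level of detection (lod_m) are ignored."""
    dz = dem_new - dem_old
    dz[np.abs(dz) < lod_m] = 0.0          # suppress survey noise
    cell_area = cell_size_m ** 2
    deposition = dz[dz > 0].sum() * cell_area
    erosion = -dz[dz < 0].sum() * cell_area
    return deposition, erosion, deposition - erosion

# Tiny synthetic example: one cell aggrades by 0.4 m, one degrades by 0.1 m.
old = np.zeros((2, 2))
new = np.array([[0.4, 0.0], [0.0, -0.1]])
print(dod_volumes(new, old))   # approximately (0.1, 0.025, 0.075)
```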
2022-11-14T16:12:59.307Z
2022-11-12T00:00:00.000
{ "year": 2022, "sha1": "0960261a1d2f92730afa75ef2f786b8d811bd4b7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/esp.5508", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "fc9a78d41209dd5fb2654d4773237ee7038953d9", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [] }
221238511
pes2o/s2orc
v3-fos-license
5-aza-2′-Deoxycytidine Induces a RIG-I-Related Innate Immune Response by Modulating Mitochondria Stress in Neuroblastoma
Background: Neuroblastoma (NB) is one of the most common malignant solid tumors to occur in children, characterized by a wide range of genetic and epigenetic aberrations. We studied whether modification of the latter with the DNA methyltransferase inhibitor 5-aza-2′-deoxycytidine (decitabine, Dac) can provide a therapeutic advantage in NB. Methods: NB cells with or without MYCN amplification were treated with Dac. We used flow cytometry to measure cell apoptosis and death and mitochondrial reactive oxygen species (mtROS), microarray analysis to determine the gene expression profile, and bisulfite pyrosequencing to determine the methylation level of the DDX58/RIG-I promoter. Western blotting was used to detect markers related to the innate immune response and apoptotic signaling, while immunofluorescent imaging was used to detect dsRNA. We generated mtDNA-depleted ρ0 cells using long-term exposure to low-dose ethidium bromide. Results: Dac preferentially induced a RIG-I-predominant innate immune response and cell apoptosis in SK-N-AS NB cells, significantly reduced the methylation level of the DDX58/RIG-I promoter, and increased dsRNA accumulation in the cytosol. Dac downregulated mitochondrial genes related to redox homeostasis but augmented mtROS production. ρ0 cells demonstrated a blunted innate immune response and apoptotic cell death, as well as greatly diminished dsRNA. The response of NB cells to CDDP and poly(I:C) was potentiated by Dac in association with increased mtROS, and this potentiation was blunted in ρ0 cells. Conclusions: This study indicates that Dac effectively induces a RIG-I-related innate immune response and apoptotic signaling primarily in SK-N-AS NB cells by hypomethylating the DDX58/RIG-I promoter, elevating mtROS, and increasing dsRNA. Dac can potentiate the cytotoxic effects of CDDP and poly(I:C) in NB cells.
Introduction
Neuroblastoma (NB) is the most common extracranial solid tumor in children and is responsible for about 15% of pediatric oncology deaths [1]. Risk factors include age older than 18 months at diagnosis, advanced stage, unfavorable histologic grade, and MYCN amplification. MYCN-amplified NB is highly correlated with advanced disease stage and poor prognosis and accounts for 20-25% of overall and 40% of high-risk cases [2]; MYCN-nonamplified NB with elevated c-MYC expression is also associated with a poor prognosis [3]. In addition to genetic abnormalities, epigenetic aberrations play an important role in the progression of NB. Epigenetic changes occur both in single genes and at the genome-wide level. Hypermethylation in the promoter region of tumor suppressor genes is associated with poor outcome [4-7]. Genome-wide analysis of DNA methylation has revealed a DNA methylator phenotype in NB with poor prognosis, characterized by the methylation of a subset of multiple CpG islands [8,9]. Tumorigenic properties of NB can be inhibited by reversing epigenetic changes with the DNA methyltransferase inhibitor 5-aza-2′-deoxycytidine (decitabine, Dac) [10], which is also FDA-approved for treating hematological malignancies [11]. Treatment of NB cells with Dac induced cell differentiation and reduced proliferation and colony formation [12,13]. Further studies demonstrated that Dac can potentiate the cytotoxic effects of current chemotherapies [14]. However, the molecular mechanism underlying the clinical effects of Dac remains uncertain.
The reactivation of aberrantly methylated tumor suppressor genes following promoter demethylation has shown to grant an antitumor effect [15]. More recently, however, a couple of studies have demonstrated that the tumor-suppressing effect of Dac can be attributed to an activated innate immune response, in which an increase of endogenous dsRNA stimulates retinoic acid-inducible gene I (RIG-I) and melanoma differentiation-associated protein 5 (MDA5) and can then trigger mitochondrial antiviral signaling protein (MAVS)/interferon regulatory factor 3 (IRF3) pathway, ultimately leading to cell death [16,17]. Mitochondria are responsible for the cellular bioenergetics and are involved in redox status. Mitochondrial DNA (mtDNA) encodes tRNA, rRNA and proteins that are essential for oxidative phosphorylation (OXPHOS). This versatile organelle, which includes mtDNA and other interior components and associated proteins, constitutes a central hub of innate immune signaling [18]. The integrity of mitochondrial DNA (mtDNA) plays a central role in MAVS-related pathway activity in HeLa cells [19,20]. In fact, our previous study demonstrated that mtDNA is involved in TLR3-agonist induced oxidative stress and cell death in NB [21]. In this study, we demonstrated that Dac induces a RIG-I-associated innate immune response and cell death in NB through hypomethylated DDX58/RIG-I promoter and accumulated endogenous dsRNA. We also verified the involvement of mitochondria in mediating the anti-NB effect of Dac by using the ρ 0 cell devoid of mtDNA. Finally, we found that Dac can potentiate the anti-NB effect of cisplatin and/or poly(I:C), which are known for targeting mitochondria and stimulating innate immunity, respectively. Gene Expression Microarray Assay Collected RNA samples were subjected to microarray assay to determine a gene expression profile. We utilized Affymetrix Clariom D microarray chips for profiling. The RNA sample were first prepared using the WT PLUS reagent kit (Affymetrix, Thermo Fisher Scientific, Waltham, MA, USA) followed by hybridization on the Clariom D microarray chips. The raw data of Clariom D chips were first subjected to quality control examination pursuant to the Affymetrix manuals. The chips that passed the quality control criteria were then analyzed with Partek; a commercial software specific for microarray data analysis. Methylation Analysis Using Bisulfite Pyrosequencing Pyrosequencing was conducted for four CpGs sites within the RIG-I/DDX58 promoter region. Briefly, 500 ng of each genomic DNA sample was bisulfite-converted using the EpiTect Plus DNA bisulfite kit (Qiagen, Hilden, Germany). The primer sequences used for bisulfate pyrosequencing are listed in Supplementary Table S1. The PCR program was 95 • C for 5 min, 40 cycles of 94 • C for 30 s, 56 • C for 30 s and 72 • C for 30 s, followed by a final extension at 72 • C for 10 min. Single-stranded DNA templates were prepared from the biotinylated PCR product using streptavidin-coated sepharose beads (streptavidin sepharose high performance, GE Healthcare, Inc., Chicago, IL, USA), where the sequence primer was annealed. Primed templates were sequenced using the PyroMark Q24 System (Qiagen, Inc.) and the assay setup was generated using PyroMark Q24 Application Software 2.0 (Qiagen, Inc.). MtDNA-Devoid ρ 0 Cells The procedure for generating mtDNA-devoid SK-N-AS cells (ρ 0 ) has previously been described. 
Briefly, cells were treated with 50-ng/mL ethidium bromide for 12 weeks in the presence of 1-mM pyruvate and 50-µg/mL uridine. Limit dilution was employed to obtain single and stable ρ 0 clones. The mtDNA-depletion status was characterized by mtDNA copy number, mtDNA-coded protein (cytochrome c oxygenase subunit 2, COX2) expression and inviable phenotype under a medium free of pyruvate and uridine. MtDNA Copy Number We determined mtDNA content using real-time PCR (Light-cycler 480, Roche, Basel, Switzerland). To determine content of nuclear DNA as a copy number reference, we used the forward primer 5 -GGC TCTGTGAGGGATATAAAGACA-3 and reverse primer 5 -CAAACCACCCGAGCAACTAATCT-3 , both of which were complementary to the sequences of the chromosome 1 genome loci on 1q24-25. To analyze the mtDNA content, we used the forward primer 5 -CACAGAAGCTGCCATCAAGTA-3 and reverse primer 5 -CCGGAGAGTATATTGTTGAAGAG-3 , both of which were complementary to the sequences of ND2. The difference of threshold cycle number (∆Ct) values of the nuclear chromosome 1 gene and the mitochondrial ND2 gene were calculated during each PCR run. The mitochondrial copy number was calculated using the following formula: Copy number (copies/cell) = 2 × 2 ∆Ct (1) Western Blotting Proteins from no-treated control and drug-treated samples were separated in 8-12% SDS-PAGE gels and were transferred onto 0.45-µm PVDF membranes (Millipore, Burlington, MA, USA) in a Trans-Blot ® SD Semi-Dry Transfer Cell (Bio-Rad) for 50 min at 400 mA. The membrane was blocked in 5% non-fat milk powder/PBS-T (1X PBS, 0. anti-Tom20 (sc-17764, Santa Cruz Biotechnology, Dallas, TX, USA). The membrane was washed and then incubated for 1 h with 5% non-fat milk powder/PBS-T containing anti-rabbit IgG antibodies or anti-mouse IgG antibodies and was then washed and imaged with enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA). The membrane images were analyzed using an AutoChemi image system (UVP) or exposed to Fuji medical X-ray film, followed by quantification with Alpha View SA 3.4.0 (ProteinSimple, San Jose, CA, USA). Flow Cytometry Detecting Cell Death and ROS The percentage of cell death was determined using propidium iodide (PI) (Sigma-Aldrich, St. Louis, MO, USA) and trypan blue (TB) staining, followed by cytometry-based analysis on the FL2 and FL3 channel, respectively. Briefly, cells were suspended in PBS and stained with PI or TB for 15 min at room temperature. The mitochondrial ROS was measured by MitoSOX TM Red (Invitrogen, Carlsbad, CA, USA). Cells were washed twice with PBS and stained with MitoSOX™ red (5 µM) for 30 min at 37 • C. Then cells were collected, washed twice with PBS and finally resuspended in a flow tube with 1 mL PBS. The fluorescent signal of cell suspension was then measured using a FACS caliber 101 flow cytometer (BD Biosciences, San Jose, CA, USA) and analyzed using winMDI software. Statistical Analysis Data expressed as the mean ± SEM were collected from at least three independent experiments. Differences between two data sets were evaluated using two-tailed unpaired Student's t-test. Statistical tests between multiple data sets were analyzed using a one-way analysis of variance (ANOVA) followed by post hoc Tukey's test. A p-value < 0.05 was considered statistically significant. Dac Preferentially Induces a RIG-I-Related Innate Immune Response and Cell Apoptosis in MYCN Non-Amplified SK-N-AS NB In the study by Ikegaki et al. 
[22], they demonstrated that the epigenetic modifier Dac could induce the stemness phenotype of NB cells under five days of treatment. Therefore, we determined the stemness or cytotoxic effect of Dac 2.5-µM for 5 days on SK-N-AS NB cells using propidium iodide (PI) and trypan blue (TB) staining followed by flow cytometry analysis after 5 days of treatment. As shown in Figure 1A, Dac significantly increased cell death with dose. Next, we tested whether MYCN-amplification affects NB cells susceptibility to Dac. MYCN-non-amplified SK-N-AS and MYCN-amplified SK-N-DZ human NB cells were treated with 2.5-µM Dac for 5 days. As shown in Figure 1B, Dac significantly increased the death rate in SK-N-AS cells up to 8-fold and tripled the death rate in SK-N-DZ cells (p < 0.001 and p < 0.01, respectively). SK-N-AS cells were more sensitive to Dac treatment (p < 0.001, Figure 1B). Double-staining with annexin V/PI indicated that Dac treatment induces both early and late apoptosis significantly. (Supplementary Figure S1A). To clarify the underlying mechanism, we utilized microarray to analyze the differential gene expression in SK-N-AS cells in response to Dac (Supplementary Figure S1B). As shown in Figure 1C, treatment with Dac induced some interferon-stimulated genes (ISGs), including DDX58, which encodes RIG-I, a dsRNA sensor for initiating innate immune response. Then we evaluated whether Dac could modify DDX58/RIG-I at the epigenetic level. After treatment with Dac, the expression of DNA methyltransferase 1 (DNMT1) protein was decreased with different dose ( Figure 1D). We evaluated the methylation level of DDX58 promoter in four selected CpG sites (Supplementary Figure S2) using bisulfite pyrosequencing. As shown in Figure 1E, Dac suppressed the methylation level of DDX58 promoter. The results suggest that Dac affects the expression of DNMT1, leading to the decreased methylation of DDX58/RIG-I. Dac has been reported to stimulate the expression of endogenous dsRNA [16,17], so we explored the role of Dac in inducing dsRNA in NB cells. The monoclonal antibody J2 (for dsRNA detection) and anti-Tom20 (mitochondria inner membrane) were used to examine spatial distribution and the production of dsRNA. In the untreated control (NT) group, the dsRNA signal expressed in a faint intensity ( Figure 1F) and could be observed within mitochondria (Supplementary Figure S3A). In contrast, Dac-treated cells showed a stronger dsRNA expression, particularly in the cytosol ( Figure 1F). The quantification of dsRNA fluorescence signal was investigated under lower magnification from three independent experiments ( Figure 1F' and Supplementary Figure S3B). These results suggest that Dac may induce endogenous dsRNA and activate RIG-I, resulting in innate immune-related response. Four different NB cells, including two MYCN non-amplified cells (SK-N-AS and SK-N-FI) and two MYCN amplified cells (BE(2)M17 and SK-N-DZ) ( Figure 1G) were used to clarify the innate immunity response of Dac. Dac treatment induced marked RIG-1 protein expression in SK-N-AS and SK-N-FI cells. Furthermore, such RIG-1 related downstream proteins as MAVS and phosphorylated IRF-3 (p-IRF3) were also detected only in SK-N-AS cells ( Figure 1G). Moreover, the apoptosis indicator cleaved caspase-9 in SK-N-AS cells was detected after the treatment of Dac ( Figure 1G). However, both no-treated and Dac-treated SK-N-FI cells presented a similar level of cleaved caspase-9 with Cells 2020, 9,1920 6 of 14 seldom phosphorylation of IRF-3 ( Figure 1G). 
In addition, the apoptotic rate of SK-N-FI cells remained unchanged after Dac treatment (Supplementary Figure S3C). These findings indicate Dac-induced cell apoptosis involved in the activation of the RIG-1 pathway in SK-N-AS cells only. In line with our previous study, SK-N-AS, but not SK-N-FI and SK-N-DZ cells, exhibited activated innate immunity signaling and marked apoptosis in response to immunostimulant stimulation [23]. Thereafter, we focused on the implication of Dac-induced innate immunity signaling in triggering apoptosis of SK-N-AS cells. Mitochondrial antiviral signaling protein ubiquitination and degradation is a vital step for activating downstream innate immune response [24]. In our experiment, smaller degraded MAVS isoform (50 kDa) was detected in AS and FI cells under Dac stimulation, while the expression of undegraded MAVS protein was found in BE (2)M17 and SK-N-DZ cells ( Figure 1G). We verified the degraded pattern of MAVS in the context of provoked innate immune response in SK-N-AS cells by the use of poly (I:C) (Supplementary Figure S4). A dose manner test revealed that Dac at 2.5-µM is an effective dose for triggering innate immune signaling and apoptotic response in SK-N-AS cells (Figure 2A,B). Then, we verified the role of DDX58/RIG-I by siRNA knockdown and found that inhibition of RIG-I significantly attenuated the degraded form of MAVS, p-IRF3 and cleaved caspase-9 ( Figure 2C). Attenuated RIG-1 gene expression also suppressed the Dac-induced late phase cell apoptosis in SK-N-AS cells ( Figure 2D). As such, these results indicate that RIG-I acts as one of the positive regulators in mediating the effect of Dac on innate immune signaling and NB apoptosis. mtDNA Plays a Vital Role in Dac-Activated Innate Immune Response and Apoptosis In our microarray data, we also found a series of downregulated mitochondrial genes following Dac treatment, including genes related to anti-oxidant, chaperone and mitochondrial dynamics ( Figure 3A), which are critical to oxidative stress [25]. As oxidative stress is characterized by overproduction of reactive oxygen species (ROS) that can cause damage to mitochondrial structure and function [26], we evaluated mitochondrial ROS (mtROS) by using MitoSox TM Red. As shown in Figure 3B, Dac induced a significant increase in mtROS. This induced mtROS by Dac was significantly suppressed in the presence of ROS scavenger N-acetylcysteine (NAC) ( Figure 3B). Attenuation of mitochondrial membrane potential by the treatment of Dac indicates impaired mitochondrial integrity (Supplementary Figure S5A). As mitochondrial import machinery is implicated in regulating the state of mitochondrial oxidative stress in NB cells [27], we further assess the TOM20, a mitochondrial translocase of outer membrane. To validate the results from microarray and mt-ROS measurement ( Figure 3A,B), we checked the protein expression of TOM20 level and confirmed the expression of TOM20 was significantly reduced in response to Dac (Supplementary Figure S5B). Since mtDNA can trigger the innate immune response, we sought to clarify its role by generating mtDNA-depleted SK-N-AS cells (AS-ρ 0 cells). After long term exposure of ethidium bromide along with supplementation of pyruvate and uridine, AS ρ 0 cells presented devoid of mtDNA ( Figure 3C), as well as greatly reduced mtDNA-encoded protein cytochrome c oxidase 2 (COX2) ( Figure 3D). 
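The mtDNA-depletion status mentioned above was verified in part through the qPCR-based copy-number assay of the Methods (formula (1)). A minimal sketch of that calculation is given below; the threshold-cycle values are hypothetical placeholders used only to show how a parental line and a putative ρ0 clone would differ, and the nuclear-minus-mitochondrial ΔCt convention is assumed rather than stated explicitly in the text:

# Minimal sketch of the mtDNA copy-number calculation from the Methods:
# copy number (copies/cell) = 2 * 2**dCt, with dCt taken here as
# Ct(nuclear chr1 locus) - Ct(mitochondrial ND2); the factor 2 reflects the
# diploid nuclear reference. Ct values below are illustrative, not study data.

def mtdna_copies_per_cell(ct_nuclear: float, ct_nd2: float) -> float:
    delta_ct = ct_nuclear - ct_nd2
    return 2.0 * 2.0 ** delta_ct

samples = {
    "SK-N-AS (parental)": (24.8, 15.1),   # hypothetical Ct pairs (nuclear, ND2)
    "AS-rho0 clone":      (24.9, 24.6),
}
for name, (ct_nuc, ct_nd2) in samples.items():
    print(f"{name}: ~{mtdna_copies_per_cell(ct_nuc, ct_nd2):.0f} mtDNA copies/cell")

With these placeholder values the parental line works out to on the order of a thousand mtDNA copies per cell, whereas the clone collapses to only a few, which is the kind of contrast behind the "devoid of mtDNA" statement above.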
To confirm whether exposure to ethidium bromide leads to DNA damage of the nuclear genome, which might alter the cellular phenotype, we compared the level of phosphorylation of γH2AX at Serine 139 in SK-N-AS and AS-ρ0 cells. As shown in Supplementary Figure S6A, SK-N-AS and AS-ρ0 cells manifested similar levels of γH2AX (p-S139). Unlike the strikingly increased level of endogenous dsRNA in Dac-treated SK-N-AS cells, AS-ρ0 cells presented an unchanged dsRNA level following Dac stimulation (Figure 3E,E'; Supplementary Figure S6B). AS-ρ0 cells treated with Dac also demonstrated an attenuated response in RIG-I, MAVS, p-IRF3, cleaved caspase-9, -3 and PARP (Figure 3F). The cell death of AS-ρ0 cells was not affected by Dac, in contrast to the parental SK-N-AS cells (Figure 3G). These results verify that functional mitochondria are required to convey the Dac-induced innate immune response and apoptotic cell death in SK-N-AS NB cells. However, the exact mechanism by which mtDNA contributes to the Dac-induced effect in NB cells needs further investigation.
(Figure 3 legend, excerpt: the dsRNA fluorescence signal (arbitrary units) was quantified, with representative images shown in Supplementary Figure S6B, under lower magnification (100X) of an Olympus FV10i confocal microscope, * p < 0.05 between indicated groups; (F) representative western blots of RIG-I, MAVS, p-IRF3, cleaved caspase-9, -3 and PARP are shown, with β-actin as the loading control; (G) cell death rate was detected using PI or TB staining with flow cytometry, * p < 0.05, ** p < 0.01 when compared to the NT or parental cell group, † p < 0.05 between indicated groups. NT, untreated; MFI, mean fluorescence intensity.)
Dac Potentiates Anti-NB Effect of CDDP and Poly(I:C)
Cisplatin (CDDP) is a chemotherapeutic agent commonly used to treat NB [28]. Poly(I:C) is also a well-known immunostimulant, enabling activation of the innate immune response through TLR3 [21]. We examined whether Dac can augment the effects of cisplatin and/or poly(I:C) on NB. Cells were pretreated with 2.5-µM Dac for 5 days, followed by exposure to 10-µM CDDP, 50-µg/mL poly(I:C) or both for 24 h (Figure 4A). Treatment with poly(I:C) alone or the combination of poly(I:C) and CDDP induced higher ROS production (Figure 4B) and a higher death rate in AS cells (Figure 4C). However, additional Dac stimulation strikingly augmented the mitochondrial ROS level in all groups (Figure 4B), whereas AS-ρ0 cells devoid of mtDNA showed greatly attenuated ROS production and a limited cell death rate (Figure 4C). Thus, Dac combined with these agents induced more cell death through increased ROS production.
Discussion
In this study, we demonstrated that Dac treatment induces cell death in MYCN-non-amplified NB cells through activation of the RIG-I-related innate immune response, which involves decreased methylation of the DDX58 promoter and the release of endogenous dsRNA. By modulating mitochondrial ROS production, Dac enhanced the cytotoxic effect of poly(I:C) and CDDP on NB cells. As a DNMT inhibitor, Dac presents broad effects in inducing cell differentiation and reducing proliferation in NB [29]. Although Dac has been shown to provoke innate immune signaling to exert an antitumoral effect [16,17], whether MYCN amplification hampers such effects of Dac on NB cells remains unknown. In fact, MYCN amplification has been shown to repress cellular immunity, while MYCN deletion was shown to restore the innate immune response [30,31]. The results of our previous study suggest that NB with MYCN amplification shows resistance to immunostimulant treatment. In this study, MYCN-non-amplified NB cells were more susceptible to Dac treatment than those with MYCN amplification, indicating that the presence of MYCN serves as a resistance factor for treatment response. Cytosolic PRRs such as RIG-I, MDA5 or TLR3 may potentially account for the Dac-mediated immune response. Activation of these PRRs, together with MAVS, triggers the nuclear translocation of IRF3/7 to turn on interferon-responsive genes. Under Dac treatment, these PRRs are activated and implicated in the cell death of human bronchial epithelial cells [32], ovarian cancer cells [17] and colon cancer cells [16]. In our study, RIG-I, but not MDA5 and TLR3, was significantly increased by Dac treatment, suggesting that RIG-I is the predominant PRR sensing Dac in MYCN-non-amplified NB cells. Regarding the antitumor effect of RIG-I, Liu et al. have demonstrated that downregulation of RIG-I in hepatocellular carcinoma is correlated with poor clinical outcome, while in vitro overexpression of RIG-I can enhance the interferon response to suppress proliferation of hepatocellular carcinoma [33].
Interestingly, our results demonstrated that siRNA-targeting RIG-I reverses Dac-induced innate immune response and cell apoptosis, highlighting the implication of RIG-I in epigenetic modulation of NB treatment. In our study, we found that the DNMT1-inhibiting activity of Dac causes the hypomethylated status of DDX58/RIG-I promoter, leading to its overexpression. The involvement of mtROS in innate immune responses has been reviewed in much literature [34]. Agod et al. have reported that mtROS play a central role in stimulating RIG-I-mediated interferon response in immune cells [35]. Herein, we found that Dac increases mtROS in NB via an imbalanced expression profile of mitochondrial genes. On the other hand, endogenous dsRNA could play a role in activating RIG-I and its downstream immune signaling. Roulois et al. described that Dac treatment in colon cancer leads to an increase in transcription of endogenous retrovirus, which generates intracellular dsRNA [16]. Similar findings were exhibited by Chiappinelli et al. [17]. Meanwhile, Dhir et al. brought up another insight that more than 95% of endogenous dsRNA comes from mitochondria, as evidenced by J2 antibody-based immunoprecipitation along with RNA seq. These cytosolic dsRNAs act to trigger innate immune signaling dependent on MDA5 and partly on RIG-I. In our study, we observed dsRNA colocalized with mitochondria in untreated cells, while cytoplasmic dsRNA level was shown to increase in Dac-treated NB cells. Notably, we did not observe this phenomenon in mtDNA-depleted ρ 0 cells, suggesting the possibility that Dac-induced dsRNA may be of mitochondrial origin. The lack of exact identification of these dsRNA limits out study in explaining their origin. The use of ρ 0 cells enables to clarifying that mtDNA is important in Dac-induced RIG-I, dsRNA accumulation as well as cell death. However, these cells are metabolically different from their counterparts and inefficient mitochondria such as ρ 0 cells have been largely involved in cancer resistance [36][37][38][39]. Gonzalez-Sanchez et al. have reported that ρ 0 cells of hepatocellular carcinoma exhibit a reduction in Bax/Bcl-2 ratio in the presence of chemotherapeutic drugs [36]. It suggested that altered metabolic phenotype may result in the modification of mitochondria-mediated apoptotic signals to acquire drug tolerance ultimately. Similarly, in this study, ρ 0 cells of SK-N-AS NB developed more resistance against Dac and treatment combinations with CDDP and/or poly(I:C). Nevertheless, further study to decipher the detail molecular underpinning the resistance of ρ 0 cells is warranted, and its clarification will provide further insights into the NB treatment strategy. Mitochondria have been shown to be a preferential target of the chemotherapeutic drug CDDP and immunostimulant poly(I:C). CDDP can cause cancer cell apoptosis by binding to mtDNA and the voltage-dependent anion channel 1 (VDAC1) of the mitochondrial outer membrane [40,41]. Our previous study demonstrated that mtDNA is required for mtROS production induced by poly(I:C) [21]. In the present study, we discovered that Dac increased mtROS by repressing the expression of mitochondrial genes that preserve redox homeostasis and further potentiates mtROS production and the anti-NB effect of CDDP and poly(I:C). Of particular note, a lack of mtDNA greatly diminished this effect, indicating the crucial role of functional mitochondria in the Dac treatment of NB cells. 
Therefore, we suggest that disturbing mitochondrial oxidative stress could be a future therapeutic strategy for the clinical application of Dac in patients with NB. Conclusions This study demonstrates that Dac effectively induces a RIG-I-related innate immune response and apoptotic signaling in MYCN non-amplified NB cells through the hypomethylation of the DDX58/RIG-I promoter and elevated mtROS with increased dsRNA. Furthermore, Dac can potentiate the cytotoxic effects of CDDP and poly(I:C) on NB cells. Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4409/9/9/1920/s1, Figure S1. (related to Figure 1B-C); Figure S2. Pyrosequencing design for four sites of the DDX58/RIG-I promoter; Figure S3; Figure S4. MAVS expression pattern in response to Poly(I:C); Figure S5. Loss of mitochondrial membrane potential and reduction of TOM20 protein level in response to Dac; Figure S6. AS and ρ 0 cells manifest a similar level of DNA damage marker γH2AX (p-S139), but differ in the intracellular dsRNA level; Table S1. Primer sequences used for pyrosequencing. Hospital, Taiwan (CMRPG8H0181-2). However, these organizations had no part in the study design, data collection and analysis, publication decisions or preparation of the manuscript. Conflicts of Interest: The authors hereby declare to have no conflict of interest with regard to this article.
Are orthodontic randomised controlled trials justified with a citation of an appropriate systematic review? A systematic review of the evidence should be undertaken to support the justification for undertaking a clinical trial. The aim of this study was to examine whether reports of orthodontic Randomised Clinical Trials (RCTs) cite prior systematic reviews (SR) to explain the rationale or justification of the trial. Study characteristics that predicated the citation of SR in the RCT report were also explored. Orthodontic RCTs published between 1st January 2010 to 31st December 2020 in seven orthodontic journals were identified. All titles and abstracts were screened independently by two authors. Descriptive statistics and associations were assessed for the study characteristics. Logistic regression was used to identify predicators of SR inclusion in the trial report. 301 RCTs fulfilling the eligibility criteria were assessed. 220 SRs were available of which 74.5% (N = 164) were cited, and 24.5% (N = 56) were not included but were available in the literature within 12 months of trial commencement. When a SR was not included in the introduction or no SR was available within 12 months of trial commencement, interventional studies were commonly cited. The continent of the corresponding author predicated the possibility of inclusion of a SR in the introduction (OR 0.36; 95% CI 0.18–0.71; p = 0.003). A quarter of orthodontic RCTs (24.5%) included in this study did not cite a SR in the introduction section to justify the rationale of the trial when a relevant SR was available. To reduce research waste and optimal usage of resources, researchers should identify or conduct a systematic review of the evidence to support the rationale and justification of the trial. Introduction With a wealth of trials being published in orthodontics, it is incumbent on researchers to ensure transparent reporting of interventional studies. Research waste is a known phenomenon and is a direct product of poorly conducted and reported studies as well as unnecessary duplication [1]. Specific concerns include biased or incomplete reporting and failure to adequately address questions of relevant clinical interest [2]. To aid transparent reporting, established, evidence-based checklists and guidelines have been published to aid authors. Examples of such checklists include the Consolidated Standards of Reporting Trials (CONSORT) [3] and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statements [4]. There is an ethical and fiduciary responsibility on researchers to contextualise their study within the known realms of the established literature. This is highlighted in both the CONSORT and SPIRIT statements, indicating the citation of an appropriate Open Access *Correspondence: jadbinderpal.seehra@kcl.ac.uk 3 Centre for Craniofacial Development and Regeneration, Faculty of Dentistry, Oral and Craniofacial Sciences, King's College London, Guy's Hospital, Guy's and St Thomas NHS Foundation Trust, London SE1 9RT, UK Full list of author information is available at the end of the article systematic review (SR) as justification for the undertaking of the trial [3,4]. The latter specifically states placing: "the trial in the context of the available evidence, it is strongly recommended that an up-to-date SR of relevant studies be summarised and cited in the protocol". 
Furthermore, the United Kingdom Health Research Authority has seconded these statements indicating that clinical trial design "should be underpinned by a SR of the existing evidence", with the primary research question based also upon this [5]. Meta-epidemiological studies aiming to ascertain the proportion of interventional trials having cited an appropriate SR have indicated that many trials still do not cite a relevant SR in their introduction as a justification for its undertaking [6][7][8]. Neither has this improved with successive reporting, highlighting the need for trial contextualisation [7,9,10]. Whilst these studies considered medical journals, a recent study conducted attempted to quantify proper citation of SRs in dental specialty journals [11]. The study indicated that only 62.5% of published and available randomised controlled trials (RCTs) had appropriate citation of a published and relevant SR. Significant factors predicting the citation of an appropriate SR included an increase in the journal impact factor in which the study was published and location of corresponding author, with those located in Europe having more appropriate SR citations. Whilst this study included two orthodontic journals, there is no study exclusively investigating these parameters in the established orthodontic literature. Therefore, the aim of this meta-epidemiological study was to assess the extent to which reports of orthodontic RCTs cite prior SRs to explain the rationale or justification of the trial. Study characteristics that predicted the citation of SR in the RCT report were explored. Eligibility criteria Orthodontic RCTs published between a 10-year period (1st January 2010 to 31st December 2020) were sourced from the following seven orthodontic journals: American Journal of Orthodontics and Dentofacial Orthopaedics The phrase ''randomised controlled trial'' was screened in the title, abstract and methodology of the article. In accordance with the Cochrane criteria for the selection of RCTs, the following inclusion criteria was used: human participants, interventions related to healthcare, experimental studies, presence of a control or comparative group, randomisation of participants to control and treatment groups, other trials with terminology in the title or abstract such as ''prospective'' , ''comparative'' , ''efficacy'' or where an indication was given that a comparison of treatment groups was undertaken prospectively were analysed to establish whether randomisation was implemented. Studies published in English were only included. Case reports, review articles, editorials, systematic reviews and retrospective studies were excluded. Selection of studies Both journal websites and a single electronic database (Medline via PubMed: https:// pubmed. ncbi. nlm. nih. gov/) were searched by one author (KP) to identify eligible trials. All titles and abstracts were screened independently by 2 authors (KP and JS). Full-text articles of abstracts fulfilling the inclusion criteria were retrieved and further analysed for eligibility independently by 2 authors (KP and JS). Any disagreements in the final articles were resolved by discussion among the authors. Data extraction A pilot assessment of ten RCTs was undertaken between two authors (KP and JS) to ensure consistency in data extraction variables. All study characteristics were extracted by a single author (KP) and entered into a prepiloted Microsoft Excel ® (Microsoft, Redmond, WA) data collection sheet. 
A second author (JS) independently cross-checked the collected data. Any discrepancies were resolved by discussion. At the level of each RCT, the following study characteristics were extracted: year of publication, number of authors, continent of corresponding author (Europe, Americas, Asia and other), journal impact factor (www. clari vate. com/ webof scien cegro up/ solut ions/ journ al-citat ion-repor ts/), journal title, ethical approval (no approval, exempt from approval or ethical approval obtained), involvement of statistician (not reported or reported; inferred from author affiliations and materials and methods section), study registration (no or yes), significance of results (either yes or no based on primary outcome. In the absence of no clear primary outcome, the first outcome was analysed: significant or non-significant), conflict of interest (conflicts exist and declared, no conflicts to declare or not clearly declared) and funding (industry funded and declared, no industry sponsorship/funding to declare or not clearly declared). As recommended by both the CONSORT [3] and SPIRIT [4] checklists, the introduction section of each trial was inspected for the citation of a SR used to justify the rationale of the trial and relevant to the primary trial outcome (yes or no). If no SR was cited, then the literature was searched to identify if a SR was available 12 months prior to the date of trial commencement (yes or no). Also, in the absence of a SR the type of study cited to justify the rationale of the trial was recorded (in-vitro, interventional, observational or none). Statistics Descriptive statistics and associations were calculated for the inclusion of a SR in the introduction, SR not included but available in literature within 12 months of trial commencement and study characteristics. Logistic regression was used to assess associations between SR inclusion in the introduction and the study characteristics. Odd ratios, corresponding 95% CIs and p-values were calculated. Significant predictors identified during the univariate analysis were entered individually in the multivariable modell. In addition, the Boruta feature selection algorithm in R [12] was used as a an alternative method for variable selection using 100 iterations. A two-tailed p value of 0.05 was considered statistically significant. All analyses were performed using Stata 16 Results A total of 301 RCTs were analysed in this study (Fig. 1). A total of 220 SRs were available of which 74.5% (N = 164) were included in the introduction section, and 24.5% (N = 56) were not included but were available in the literature within 12 months of trial commencement (Table 1). When a SR was not included in the introduction or no SR was available within 12 months of trial commencement, interventional studies were commonly cited (74.1%) ( Table 1). The characteristics of trials which included SR in the introduction or did not include a SR when there was a SR within 12 months of participant recruitment were compared (Table 2). Within this sub-group, SRs were more likely to be included if the RCT was published in 2020 (75.6%), published in the EJO (76.1%), and had a corresponding author based in Europe (80.0%). When SRs were included, the median number of authors and impact factor were 4.5 and 1.96, respectively. An association between continent of corresponding author (p = 0.01) and SR inclusion was detected ( Table 2). 
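The Statistics section above describes this modelling pipeline: univariable screening, Boruta feature selection in R, and a multivariable logistic model (fitted in Stata 16) with publication year retained as an a priori confounder. The following is a purely illustrative sketch of such a model in Python/statsmodels, not the authors' code; the file name and column names are hypothetical stand-ins for the piloted extraction-sheet variables:

# Illustrative sketch only (the study used Stata 16 and R's Boruta package):
# multivariable logistic regression of SR citation on corresponding-author
# continent, with publication year as an a priori confounder.
# "orthodontic_rcts.csv" and the column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("orthodontic_rcts.csv")
df["sr_cited"] = (df["sr_in_introduction"] == "yes").astype(int)

model = smf.logit(
    "sr_cited ~ C(continent, Treatment(reference='Europe')) + year", data=df
).fit()

# Odds ratios with 95% confidence intervals and p-values
ci = model.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(ci[0]),
    "97.5%": np.exp(ci[1]),
    "p": model.pvalues,
})
print(summary.round(3))

The exponentiated coefficients, with Europe as the reference continent, correspond to the kind of odds ratios and confidence intervals reported for the multivariable analysis below.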
The Boruta algorithm confirmed continent as an important feature and all the other attributes as not important (Fig. 2). In the final model, continent and year of publication (year as an a priori confounder) were included. In the multivariable analysis, the continent of the corresponding author predicated the possibility of inclusion of a SR in the introduction with authors based in Asia or other having lower odds than those based in Europe (OR: 0.36; 95% CI 0.18-0.71; p = 0.003) ( Fig. 3; Table 3). Discussion Evidence-based checklists and guidelines aimed at promoting transparency of Randomised controlled trials (RCTs) such as the CONSORT and SPIRIT statements have strongly suggested that a systematic review (SR) is cited within the study's introduction to justify its undertaking [3,4]. This study identified that almost three-quarters (74.5%) of orthodontic RCTs published from January 2010 to December 2020 had cited a relevant SR within its introduction, with 24.5% of studies not citing one when one was publicly available. A SR was more likely to be cited if the country of the first author was based in Europe. The continent of primary author was deemed to be the only predictive factor for positive SR citation. This study showed that a higher proportion of orthodontic RCTs cite relevant SRs when compared to dental specialty journals in its entirety. The latter cohort of RCTs demonstrated that only 62.5% of RCTs have a positive SR citation [11]. Additionally, articles published in the European Journal of Orthodontics, within the field of orthodontics in general and when authors were based in Europe were more likely to cite an appropriate SR [11]. This was concurrent with findings of this study. Extrapolating this outside the field of dentistry, many medical interventional studies are being inappropriately justified with reported prevalence of 46-49.5% of RCTs including the appropriate SR to inform trial implementation [6,13]. Concerning trial protocols, a not grossly dissimilar 40.6% used an SR to inform their trial design [8]. Assessing justification through time, a series of publications auditing the contextualisation of trial findings within the wider literature found no improvement in compliance with established reporting guidelines [7,9,10,14,15]. Within the current sample, when a SR was not included within the introduction, the most common type of study cited was an interventional one. This is once again, what was found elsewhere in the established literature [11]. A large meta-epidemiological study spanning all areas of medicine attempted to address justification of trial selection via citation of an appropriate interventional study. The results were once again alarming with less than 25% of trials citing an appropriate preceding trial [16]. The responsibility to reduce potentially wasteful research lies on all the stakeholders involved in its conception, design, implementation and dissemination. One specific recommendation made by Glasziou et al. [1] to minimise research waste and undue, unethical and unfounded duplicity of research was to encourage wide adoption of established trial checklists. It is incumbent on journal editors and researchers alike to ensure stringent adoption of such statements as it leads to evidencebased improvement in quality of reporting and overall justification of interventional trials [17]. Medical journals such as the Lancet have already ensured that prospective authors willing to submit to their journal follow such checklists [10]. 
Furthermore, orthodontic journals should take the lead in enforcing SR citation to inform trial conception. Indeed, some leading medical journals require pre-publication of the trial protocol before they will consider an RCT for publication. Possibly, the publication of the protocol will aid researchers in considering the available evidence, such as a SR, before undertaking a trial [18,19]. Other methods to improve contextualisation of the existing literature include targeting research and ethics committees and ensuring that appropriate due diligence has been undertaken in the form of SR citation prior to trial approval. The EU clinical trial regulation has made strides in minimising sub-optimal research with its position statement, due to come into effect in 2021/2022, arming local research and ethics committees alongside national competent authorities to disregard redundant research proposals. They have suggested that 'applicants for trial authorisation shall justify a new proposal that addresses an outstanding clinical uncertainty in light of the available evidence relevant for the research question and the outcome of interest at issue'. Whilst it does not specifically mention the need for a SR, it does go on to mention that 'where no systematic review exists, applicants should make their best efforts to identify and synthesise knowledge gained in prior studies' [20]. As recommended in both the CONSORT and SPIRIT checklists, prior to carrying out a randomised clinical trial, undertaking a systematic review is an important step to identify any pre-existing primary trials and hence support the justification of the trial and avoid research wastage.
Table 1 The inclusion of a systematic review: (1) included in introduction, (2) …
Table 2 The characteristics of trials which included a SR in the introduction or did not include one when a SR was available in the literature within 12 months of trial commencement (N = 220)
The methodology of the present study was based on the recommendations of the SPIRIT guidelines [4], which state that a trial report should cite a relevant and recent SR to justify the rationale of the study. However, this may be associated with a degree of bias, as it could be difficult to differentiate between RCTs that cited a SR to inform the trial and RCTs that also cited a SR but did not explain its impact on the trial design. This study has highlighted that almost a quarter of RCTs did not cite a SR when there was a review available in the literature within 12 months of trial commencement. Previous studies have reported the number of RCTs citing a relevant SR as a proportion of the total number of RCTs sampled. However, adopting this approach may result in an overestimation of the situation. To avoid potential bias, the inclusion of SRs available within 12 months of trial commencement has been recommended [8]. Only orthodontic RCTs published between 2010 and 2020 were included in this study. Within this timeframe, three hundred and one RCTs were identified, which represents a large enough sample to ascertain whether SRs are cited in the reports of RCTs. Whilst all attempts were made by the authors to ensure a rigorous literature search, some studies may have been missed. This may be compounded by the fact that RCT articles were limited to the English language only; hence, potentially eligible RCTs may have been excluded, leading to potential bias.
However, through independent assessment by two authors every attempt was made to identify SRs which correlate with the primary aim of the trial and reduce potential selection bias. Conclusion As per evidence-based checklists such as CONSORT and SPIRIT statements, almost a quarter (24.5%) of RCTs did not cite an appropriate SR within the introduction section as justification for the trial, when one was available. Trials where the corresponding author was based in Europe were the only predictive factor identified for positive SR citation. Further work by all research stakeholders is required within the field of orthodontics to limit research waste, ensuring finite resources are corralled for appropriately justified trials.
Stimulated Scattering of Surface Plasmon Polaritons in a Plasmonic Waveguide with a Smectic A Liquid Crystalline Core We considered theoretically the nonlinear interaction of surface plasmon polaritons (SPPs) in a metal-insulator-metal (MIM) plasmonic waveguide with a smectic liquid crystalline core. The interaction is related to the specific cubic optical nonlinearity mechanism caused by smectic layer oscillations in the SPP electric field. The interfering SPPs create the localized dynamic grating of the smectic layer strain that results in the strong stimulated scattering of SPP modes in the MIM waveguide. We solved simultaneously the smectic layer equation of motion in the SPP electric field and the Maxwell equations for the interacting SPPs. We evaluated the SPP mode slowly varying amplitudes (SVAs), the smectic layer dynamic grating amplitude, and the hydrodynamic velocity of the flow in a smectic A liquid crystal (SmALC). Introduction Nonlinear optical phenomena based on the second-and third-order optical nonlinearity characterized by susceptibilities χ 2 ð Þ and χ 3 ð Þ , respectively, are widely used in modern communication systems for the optical signal processing due to their ultrafast response time and a large number of different interactions [1][2][3][4][5]. The second-order susceptibility χ 2 ð Þ exists in non-centrosymmetric media, while the third-order susceptibility χ 3 ð Þ exists in any medium [6]. The second-order susceptibility χ 2 ð Þ may be used for the second harmonic generation (SHG), sum, and difference frequency generation; the ultrafast Kerr-type third-order susceptibility χ 3 ð Þ results in such effects as four-wave mixing (FWM), self-phase modulation (SPM), cross-phase modulation (XPM), third harmonic generation (THG), bistability, and different types of the stimulated light scattering (SLS) [1][2][3][4][5][6]. Optical-electricaloptical conversion processes can be replaced with the optical signal processing characterized by the femtosecond response time of nonlinearities in optical materials [2,3]. All-optical signal processing, ultrafast switching, optical generation of ultrashort pulses, the control over the laser radiation frequency spectrum, wavelength exchange, coherent detection, multiplexing/demultiplexing, and tunable optical delays can be realized by using the nonlinear optical effects [1][2][3][4]. However, optical nonlinearities are weak and usually occur only with high-intensity laser beams [1,6]. An effective nonlinear optical response can be substantially increased by using the plasmonic effects caused by the coherent oscillations of conduction electrons near the surface of noble metal structures [1]. In the case of the extended metal surfaces, the surface plasmon polaritons (SPPs) may occur [1,7,8]. SPPs are electromagnetic excitations propagating at the interface between a dielectric and a conductor, evanescently confined in the perpendicular direction [1,7]. The SPP electromagnetic field decays exponentially on both sides of the interface which results in the subwavelength confinement near the metal surface [1]. The SPP propagation length is limited by the ohmic losses in metal [1,7,8]. Nonlinear optical effects can be enhanced by plasmonic excitations as follows: (i) the coupling of light to surface plasmons results in strong local electromagnetic fields; (ii) typically, plasmonic excitations are highly sensitive to dielectric properties of the metal and surrounding medium [1]. 
In nonlinear optical phenomena, such a sensitivity can be used for the light-induced nonlinear change in the dielectric properties of one of the materials which result in the varying of the plasmonic resonances and the signal beam propagation conditions [1]. Plasmonic excitations are characterized by timescale of several femtoseconds which permits the ultrafast optical signal processing [1]. The SPP field confinement and enhancement can be changed by modifying the structure of the metal or the dielectric near the interface [1]. For example, plasmonic waveguides can be created [1,[7][8][9]. Nanoplasmonic waveguides can confine and enhance electric fields near the nanometallic surfaces due to the propagating SPPs [9]. Nanoplasmonic waveguide consists of one or two metal films combined with one or two dielectric slabs [9]. Typically, two types of the plasmonic waveguides exist: (i) an insulator/metal/insulator (IMI) heterostructure where a thin metallic layer is placed between two infinitely thick dielectric claddings and (ii) a metal/insulator/metal (MIM) heterostructure where a thin dielectric layer is sandwiched between two metallic claddings [7]. The MIM waveguides for nonlinear optical applications require highly nonlinear dielectrics [9]. The nonlinear metamaterials can significantly increase the nonlinearity magnitude [10]. Investigation of nonlinear metamaterials is related in particular to nonlinear plasmonics and active media [10]. One of the metamaterial nonlinearity mechanisms is based on liquid crystals (LCs) [10]. Tunability and a strongly nonlinear response of metamaterials can be obtained by their integration with LCs offering a practical solution for controlling metamaterial devices [11]. The integration of LCs with plasmonic and metamaterials may be promising for applications in modern photonics due to the extremely large optical nonlinearity of LCs, strong localized electric fields of surface plasmon polaritons (SPPs), and high operation rates as compared to conventional electro-optic devices [12]. Practically all nonlinear optical processes such as wave mixing, self-focusing, self-guiding, optical bistabilities and instabilities, phase conjugation, SLS, optical limiting, interface switching, beam combining, and self-starting laser oscillations have been observed in LCs [13]. LC can be incorporated into nano-and microstructures such as a MIM plasmonic waveguide. Nematic LCs (NLCs) characterized by the orientation long-range order of the elongated molecules are mainly used in optical applications including plasmonics and nanophotonics [11][12][13][14]. For instance, lightinduced control of fishnet metamaterials infiltrated with NLCs was demonstrated experimentally where a metal-dielectric (Au-MgF 2 ) sandwich nanostructure on a glass substrate with the inserted NLC was used [11]. However, the NLC applications are limited by their large losses and relatively slow response [14,15]. The light scattering in smectic A LC (SmALC) waveguides had been studied theoretically and experimentally, and it was shown that the scattering losses in SmALC are much lower than in NLC due to a higher degree of the long-range order [15]. SmALC can be useful in nonlinear optical applications and low-loss active waveguide devices for integrated optics [14,15]. SmALCs are characterized by a positional long-range order in the direction of the elongated molecular axis and demonstrate a layer structure with a layer thickness d SmA ≈ 2nm [14]. 
Inside a smectic layer, the molecules form a two-dimensional liquid [14]. Actually, SmALC can be considered as a natural nanostructure. The structures of NLC with the elongated molecules directed mainly along the vector director n ! and the homeotropically oriented SmALC with the layer plane parallel to the claddings are shown in Figure 1a and b, respectively. The nonlinear optical phenomena in SmALC such as a light self-focusing, selftrapping, SPM, SLS, and FWM based on the specific mechanism of the third-order optical nonlinearity related to the smectic layer normal displacement had been investigated theoretically [16][17][18][19][20][21][22][23][24][25][26][27][28]. In particular it has been shown that at the interface of a metal and SmALC, the counter-propagating SPPs created the dynamic grating of the smectic layer normal displacement u x, z, t ð Þ, and the SLS of the interfering SPPs occurred [22,23,26]. We also investigated the behavior of SPP mode in a MIM waveguide with the SmALC core [24,26]. In such a waveguide, SPP behaves as a strongly localized transverse magnetic (TM) mode which creates the localized smectic layer normal deformation and undergoes SPM [24,26]. In this chapter we consider theoretically the interaction of the counterpropagating SPP modes in the MIM waveguide with the SmALC core. The interfering SPP TM modes with the close optical frequencies ω 1,2 create a localized dynamic grating of the smectic layer normal displacement u x, z, t ð Þwith the frequency Δω ¼ ω 1 À ω 2 ≪ ω 1 which results in the nonlinear polarization and stimulated scattering of SPPs. We solved simultaneously the equation of motion for smectic layers in the electric field of the interfering SPP modes and the Maxwell equations for the SPPs in the MIM waveguide taking into account the nonlinear polarization. We used the slowly varying amplitude (SVA) approximation for the SPPs [6]. We evaluated the magnitudes and phases of the coupled SPP SVAs. It is shown that the energy exchange between the coupled SPPs and XPM takes place. We also evaluated the SPP-induced smectic layer displacement and SmALC hydrodynamic velocity. We have shown that the high-frequency localized electric field can occur in the MIM waveguide with the SmALC core due to the flexoelectric effect [28]. The chapter is constructed as follows. The hydrodynamics of SmALC in the external electric field is considered in Section 2. The SPP modes of the MIM waveguide are derived in Section 3. The SPP SVAs, the smectic layer dynamic grating amplitude, and the SmALC hydrodynamic velocity are evaluated in Section 4. The conclusions are presented in Section 5. Hydrodynamics of SmALC in the external electric field In this section we briefly discuss the SmALC hydrodynamics and derive the equation of motion for the smectic layer normal displacement u x, y, z, t ð Þin the Þ . SmALC can be described by the one-dimensional periodic density wave due to its layered structure. Smectic layer oscillations u x, y, z, t ð Þin the external electric field E ! x, y, z, t ð Þare shown in Figure 2. Hydrodynamics of SmALC in general case is very complicated because SmALC is a strongly anisotropic viscous liquid including the layer oscillations, the mass density, and the elongated molecule orientation variations [29][30][31]. However, the elastic constant related to the SmALC bulk compression is much larger than the elastic constant B ≈ 10 6 À 10 7 J m À3 related to the smectic layer compression [29][30][31]. 
The layers can oscillate without the change of the mass density [29][30][31]. For this reason two uncoupled acoustic modes can propagate in SmALC: the ordinary longitudinal sound wave caused by the mass density variation and the second-sound (SS) wave caused by the layer oscillations [29][30][31]. SS wave is characterized by strongly anisotropic dispersion relation being neither purely transverse nor longitudinal. It propagates in the direction oblique to the layer plane is the second-sound (SS) wave vector, v z is the hydrodynamic velocity perpendicular to the layer plane; ε ∥ and ε ⊥ are the diagonal components of the permittivity tensor parallel and perpendicular to the optical axis, respectively. and vanishes for the wave vector k ! S perpendicular or parallel to the layer plane [29]. SmALC is characterized by the complex order parameter, and SS represents the oscillations of the order parameter phase [29]. SS in SmALC has been observed experimentally by different methods [32][33][34]. The system of hydrodynamic equations for the incompressible SmALC under the constant temperature far from the phase transition has the form [29][30][31] div v Here, v ! is the hydrodynamic velocity, ρ ≈ 10 3 kg m À3 is the SmALC mass density, and F is the free energy density of SmALC. Typically, SmALC is supposed to be an incompressible liquid according to Equation (1) [29]. For this reason, we assume that the pressure Π ¼ 0 and the SmALC free energy density F do not depend on the bulk compression [29][30][31]. We are interested in the SS propagation and neglect the ordinary sound mode. The normal layer displacement u x, y, z, t ð Þby definition has only one component along the Z axis. In such a case, the generalized force density has only the Z component according to Eq (6) is specific for SmALC since it determines the condition of the smectic layer continuity [29][30][31]. The SmALC free energy density F in the presence of the external electric field Here K $ 10 À11 N is the Frank elastic constant associated with the SmALC orientational energy inside layers, ε 0 is the free space permittivity, and ε ik is the SmALC permittivity tensor including the terms defined by the smectic layer strains. The purely orientational second term in the free energy density F (7) can be neglected since for the typical values of the elastic constants B and K K k S where k S ⊥ , the SS wave vector component is parallel to the layer plane. The permittivity tensor ε ik is given by [30] where ε ∥ , ε ⊥ are the diagonal components of the permittivity tensor ε ik along and perpendicular to the optical axis and a ⊥ $ 1, a ∥ $ 1 are the phenomenological dimensionless coefficients [29,30]. SmALC is an optically uniaxial medium with the optical Z axis perpendicular to the smectic layer plane [29][30][31]. Combining Eqs. (1)-(8), we obtain the equation of motion for the smectic layer normal displacement u x, y, z, t ð Þin the electric field E ! x, y, z, t ð Þ [16,17]: In the absence of the external electric field, the homogeneous solution of the equation of motion (9) represents the SS wave with the dispersion relation [29]: Here, k S and Ω S and s 0 are the SS frequency and velocity, respectively [29]. It is seen from Eq. (10) that the SS frequency Ω S ¼ 0 for the propagation direction along the smectic layer plane and perpendicular to it. The decay constant Γ is given by If the viscosity terms responsible for the SS wave decay can be neglected, then the homogeneous part of Eq. 
We use the equation of motion (9) for the evaluation of the light-enhanced dynamic grating u(x, y, z, t).

SPP modes in a MIM waveguide with SmALC core

An LC slab optical waveguide consists of an LC layer with a thickness of about 1 μm confined between two glass slides of lower refractive index than the LC [14]. An LC waveguide core provides photonic signal modulation and switching by using the electro-optic or nonlinear optical effects of LC mesophases [35]. For instance, large optical nonlinearities were implemented in order to create optical paths by photonic control of solitons in NLC [35]. Various electrode geometries can create, through the electro-optic effect, periodically modulated LC-core waveguides which serve as efficient guided distributed Bragg reflectors with tuning ranges within the 100-1550 nm optical wavelength range [35]. Plasmonic waveguides based on the manipulation and routing of SPPs can provide subwavelength confinement beyond the diffraction limit together with the large bandwidth and high operation rate typical of photonics [36]. Plasmonic devices can be integrated into nanophotonic chips due to their small scale and their compatibility with VLSI electronic technology [36]. Plasmonic devices are promising candidates for future integrated photonic circuits for broadband light routing, switching, and interconnecting [36]. It has been shown that different plasmonic structures can provide SPP light waveguiding, with the structure determining the SPP mode properties [36]. The MIM waveguide, a dielectric sandwiched between two metal slabs, has attracted research interest as a basic component of nanoscale plasmonic integrated circuits [37]. LC-tunable waveguides have been proposed as a core element of low-power variable attenuators, phase shifters, switches, filters, tunable lenses, beam steerers, and modulators [37,38]. Typically NLCs have been used, due to their strong optical anisotropy, their responsivity to external electric and magnetic fields, and their low power consumption [37,38]. Different types of NLC plasmonic waveguides have been proposed and investigated theoretically [36-38]. Recently, SmALCs have attracted attention due to their layered structure and reconfigurable layer curvature [39]. The possibility of dynamic variation of the smectic layer configuration by external fields is being intensively studied [39]. We investigated theoretically SLS in the optical slab waveguide with the SmALC core, where the third-order optical nonlinearity mechanism was related to the smectic layer dynamic grating created by the interfering waveguide modes [27]. We also considered theoretically the MIM waveguide with the SmALC core [24,26]. The structure of such a symmetric waveguide of thickness 2d is shown in Figure 3 [24,26]. The plane of the waveguide is perpendicular to the SmALC optical axis Z. The SmALC in the waveguide core is homeotropically oriented, i.e., the smectic layers are parallel to the waveguide claddings z = ±d, while the SmALC elongated molecules are mainly parallel to the Z axis [29]. Typically the waveguide dimension in the Y direction is much larger than d, and the dependence on the coordinate y in Eqs. (8) and (9) can be omitted.
Then we obtain u = u(x, z, t), and the SmALC permittivity tensor (8) takes the form (12). The permittivity ε_m(ω) of the metal claddings is described by the Drude model [7,8]: ε_m(ω) = 1 − ω_p²/[ω(ω + i/τ)] (13), where ω_p = (n₀e²/ε₀m)^(1/2) is the plasma frequency of the free electron gas, n₀ is the free electron density in the metal, e and m are the electron charge and mass, respectively, and ω and τ are the SPP angular frequency and lifetime, respectively [7,8]. The electric field E(x, z, t) of an optical wave propagating in a nonlinear medium is described by the wave equation (14), which includes the nonlinear part of the electric induction D_NL [6]. Here μ₀ is the free space permeability and D_L is the linear part of the electric induction. The SPP can propagate in the plasmonic waveguide only as a transverse magnetic (TM) mode, with the electric and magnetic fields of the form E = (E_x, 0, E_z) and H = (0, H_y, 0). In such a case, using Eq. (12), we obtain expressions (15) and (16) for D_L and D_NL in SmALC. The linear part D_L^m of the electric induction in the metal claddings has the analogous form with the permittivity ε_m(ω); here c.c. stands for the complex conjugate. The SPP fields (17)-(20) are confined in the Z direction. In the linear approximation, substituting expressions (15), (18), and (20) into the homogeneous part of the wave equation (14) for the claddings and the SmALC core, respectively, we obtain expressions (21) and (22) for the complex wave numbers k_z^m and k_z^S [24,26], where c is the free space light velocity. The boundary conditions (23) and (24) for the fields (17)-(20) at the interfaces z = ±d have the standard form [7,8]. Substituting expressions (17)-(20) into Eqs. (23) and (24), we obtain the dispersion relation (25) for the SPP TM modes in the MIM waveguide [24,26]. The dispersion relation obtained for the general case of different claddings [7] coincides with expression (25) for the symmetric structure with identical claddings. The results of the numerical solution of Eq. (25) for typical values of the MIM waveguide parameters and SPP frequencies ω corresponding to the optical wavelength range λ_opt ∼ 1-1.6 μm and 2d ∼ 1 μm are presented in Figures 4 and 5. These results show that Re k_z^S ∼ 10⁶ m⁻¹ ≫ Im k_z^S ∼ 10⁴ m⁻¹ and Re k_x ∼ 10⁷ m⁻¹ ≫ Im k_x ∼ 10³ m⁻¹ [24,26]. In such a case, the SPP oscillation length in the Z direction is defined by the relationship 2π(Im k_z^S)⁻¹ ∼ 10⁻⁴ m ≫ d ∼ 10⁻⁶ m, so that Im k_z^S can be neglected inside the MIM waveguide and k_z^S ≈ Re k_z^S [24,26]. The SPP propagation length in the X direction is L_SPP = (Im k_x)⁻¹ ∼ 10⁻⁴-10⁻³ m ≫ λ_SPP = 2π(Re k_x)⁻¹ < 10⁻⁶ m, where λ_SPP is the SPP wavelength. Hence, at optical-wavelength-scale distances, Im k_x can be neglected and k_x ≈ Re k_x [24,26]. Consequently, for a given optical frequency ω, a single localized TM mode can exist in the SmALC core of the MIM waveguide, with the electric field E_SA(x, z, t) given by Eq. (26) [24,26].
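To make the numerical solution of the dispersion relation concrete, the sketch below finds the SPP wave number k_x for a symmetric MIM waveguide. It is only a simplified stand-in for Eq. (25): the metal is modeled by a lossless Drude permittivity with an assumed, gold-like plasma frequency, and the anisotropic SmALC core is replaced by an isotropic permittivity ε_d, so the computed k_x only indicates the Re k_x ∼ 10⁷ m⁻¹ scale quoted above.

```python
import numpy as np
from scipy.optimize import brentq

c_light = 2.998e8        # free-space light velocity, m/s
omega_p = 1.37e16        # assumed gold-like plasma frequency, rad/s

def eps_metal(omega):
    # Lossless Drude model: damping 1/tau neglected to allow a real root search
    return 1.0 - (omega_p / omega) ** 2

def mim_dispersion(beta, omega, eps_d, d):
    """Symmetric-MIM TM dispersion, isotropic-core simplification of Eq. (25)."""
    k0 = omega / c_light
    kd = np.sqrt(beta**2 - eps_d * k0**2)              # decay constant in the core
    km = np.sqrt(beta**2 - eps_metal(omega) * k0**2)   # decay constant in the metal
    return np.tanh(kd * d) + (eps_d * km) / (eps_metal(omega) * kd)

eps_d = 2.9      # isotropic stand-in for the SmALC core permittivity (assumed)
d = 0.5e-6       # half-thickness of the core (2d ~ 1 um, as in the text)

for lam in (1.0e-6, 1.3e-6, 1.6e-6):
    omega = 2 * np.pi * c_light / lam
    k0 = omega / c_light
    # The guided mode lies above the core light line sqrt(eps_d)*k0
    kx = brentq(mim_dispersion, 1.0001 * np.sqrt(eps_d) * k0, 20 * k0,
                args=(omega, eps_d, d))
    print(f"lambda = {lam * 1e6:.1f} um: k_x ~ {kx:.2e} 1/m")
# The roots come out of order 1e7 1/m, matching the Re(k_x) scale of Figures 4 and 5.
```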
Numerical estimates show that for SPP modes with close optical frequencies ω₁,₂ ∼ 10¹⁵ s⁻¹ and a frequency difference Δω = ω₁ − ω₂ ∼ 10⁸ s⁻¹ ≪ ω₁, the wave numbers k_z1,2^S and k_x1,2 of the two SPPs are practically equal. As a result, only counter-propagating SPP modes can strongly interact in the MIM core, creating the dynamic grating of smectic layers, as is seen from Eq. (9). The electric field of the counter-propagating SPP modes of the type (26) in the MIM waveguide SmALC core has the form (27). Substituting expression (27) into the equation of motion (9), we obtain the expression (28) for the localized dynamic grating of the smectic layer displacement u(x, z, t). Expression (28) is the enhanced solution of Eq. (9). The homogeneous solution of Eq. (9) is overdamped for the typical values of the SmALC parameters and Δω ∼ 10⁸ s⁻¹, and it can be neglected. The normalized smectic layer displacement u(x, z, t = t₀)/U₀ for the optical wavelength λ_opt = 1.6 μm is shown in Figure 6. It is seen from Figure 6 that the dynamic grating is localized inside the MIM waveguide in the Z direction and oscillates in the propagation direction X.

Nonlinear interaction of SPPs in the MIM waveguide

The light-enhanced dynamic grating (28) results in the nonlinear polarization defined by Eq. (16). In order to investigate the interaction of the counter-propagating SPPs (27), we use the SVA approximation [6]. At distances of the order of the SPP wavelength λ_SPP < 1 μm, the dependence of the SVAs on the x coordinate can be neglected. According to the SVA approximation, we assume that the SVAs vary slowly compared with the optical oscillations, as expressed by condition (31). Substituting expressions (27), (28), and (16) into the wave equation (14), taking into account the dispersion relation (22), neglecting the terms ∂²E_SA1,20/∂t² according to condition (31), combining the phase-matched terms with the frequencies ω₁,₂, and separating the real and imaginary parts, we derive the equations for the SVA magnitudes |E_SA1,20(t)| and phases θ_SA1,2(t). The spectral dependence of the localization factor F_N(k_x, k_z^S) entering these equations is presented in Figure 7; it varies by an order of magnitude over the range of optical wavelengths essential for optical communications. Adding the equations for the SVA magnitudes results in the conservation condition ∂(|E_SA10|² + |E_SA20|²)/∂t = 0 [6], i.e., the total intensity of the interacting SPPs is conserved. We introduce the dimensionless quantities defined by relationship (41). Substituting relationship (41) into Eq. (36), we obtain the equation for the normalized SPP intensities, in which the gain g is determined by the waveguide and SmALC parameters; the spectral dependence of the gain g is shown in Figure 8. The solution of Eq. (41) shows that energy exchange takes place between the SPPs interfering on the smectic layer dynamic grating. Expressions (28) and (52)-(55) and Figure 11 show that the orientational and hydrodynamic excitations in the SmALC core of the MIM waveguide enhanced by the SPPs are spatially localized and reach their maximum value during the time of the energy exchange between the interacting SPPs.
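The depletion of the pumping wave and the saturation of the signal wave at the level of the total intensity can be reproduced with a minimal two-wave-coupling sketch. The coupled equations below are the generic stimulated-scattering form consistent with the conservation condition above; the gain value is an arbitrary illustrative number, not the g of Eq. (41).

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 5.0  # illustrative dimensionless gain (the actual g follows from Eq. (41))

def two_wave_coupling(tau, I):
    """Generic SLS energy exchange: pump I[0] is depleted, signal I[1] amplified."""
    I1, I2 = I
    return [-g * I1 * I2, g * I1 * I2]

I0 = [1.0, 0.01]  # strong pump, weak signal (normalized intensities)
sol = solve_ivp(two_wave_coupling, (0.0, 5.0), I0, dense_output=True)

for t in np.linspace(0.0, 5.0, 6):
    I1, I2 = sol.sol(t)
    print(f"tau = {t:.1f}: pump = {I1:.4f}, signal = {I2:.4f}, total = {I1 + I2:.4f}")
# The total intensity stays constant while the signal saturates at that total,
# mirroring the depletion/saturation behavior summarized in the Conclusions.
```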
Conclusions

We investigated theoretically the nonlinear interaction of SPPs in the MIM waveguide with the SmALC core. The third-order nonlinearity mechanism is related to the smectic layer oscillations, which take place without a change of the mass density. We solved simultaneously the equation of motion for the smectic layer normal displacement and the Maxwell equations for the SPPs, including the nonlinear polarization caused by the smectic layer strain. We evaluated the dynamic grating of the smectic layer displacement enhanced by the interfering SPPs. We evaluated the SVAs of the interacting SPPs. It has been shown that SLS of the orientational type takes place. The pumping wave is depleted, while the signal wave is amplified up to the saturation level defined by the total intensity of the interacting waves. The SLS is accompanied by XPM. The phase of the depleted pumping wave rapidly increases, while the phase of the amplified wave tends to a constant value. The SPP characteristic rise time is of the order of 10⁻⁹ s for a feasible SPP electric field of 10⁶ V/m. The smectic layer displacement and hydrodynamic velocity enhanced by the SPPs are spatially localized and reach their maximum value during the time of the strong energy exchange between the interfering SPPs.
2019-11-14T17:07:21.185Z
2019-10-30T00:00:00.000
{ "year": 2020, "sha1": "929ad7083459cb79406841ca7927cfb40081c240", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/69353", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "16f32217f7ea13a0e289e8494c1e969e7535ba3e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
52127755
pes2o/s2orc
v3-fos-license
SNOMED CT Concept Hierarchies for Sharing Definitions of Clinical Conditions Using Electronic Health Record Data

Background  Defining clinical conditions from electronic health record (EHR) data underpins population health activities, clinical decision support, and analytics. In an EHR, defining a condition commonly employs a diagnosis value set or "grouper." For constructing value sets, Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT) offers high clinical fidelity, a hierarchical ontology, and wide implementation in EHRs as the standard interoperability vocabulary for problems.

Objective  This article demonstrates a practical approach to defining conditions with combinations of SNOMED CT concept hierarchies, and evaluates sharing of definitions for clinical and analytic uses.

Methods  We constructed diagnosis value sets for EHR patient registries using SNOMED CT concept hierarchies combined with Boolean logic, and shared them for clinical decision support, reporting, and analytic purposes.

Results  A total of 125 condition-defining "standard" SNOMED CT diagnosis value sets were created within our EHR. The median number of SNOMED CT concept hierarchies needed was only 2 (25th–75th percentiles: 1–5). Each value set, when compiled as an EHR diagnosis grouper, was associated with a median of 22 International Classification of Diseases (ICD)-9 and ICD-10 codes (25th–75th percentiles: 8–85) and yielded a median of 155 clinical terms available for selection by clinicians in the EHR (25th–75th percentiles: 63–976). Sharing of standard groupers for population health, clinical decision support, and analytic uses was high, including 57 patient registries (with 362 uses of standard groupers), 132 clinical decision support records, 190 rules, 124 EHR reports, 125 diagnosis dimension slicers for self-service analytics, and 111 clinical quality measure calculations. Identical SNOMED CT definitions were created in an EHR-agnostic tool enabling application across disparate organizations and EHRs.

Conclusion  SNOMED CT-based diagnosis value sets are simple to develop, concise, understandable to clinicians, useful in the EHR and for analytics, and shareable. Developing curated SNOMED CT hierarchy-based condition definitions for public use could accelerate cross-organizational population health efforts, "smarter" EHR feature configuration, and clinical–translational research employing EHR-derived data.

Background and Significance

How should we define various "populations" for population health? Identifying patients who share a common clinical condition (phenotype) proves valuable for clinical care, for population health management, and for clinical-translational research (►Table 1; see also ►Table 2 for definitions used in this article). So how can we define a clinical condition most practically and effectively in the era of electronic health records (EHRs)? And how can we work with partners in population health initiatives and clinical research to define any condition similarly? Previously, defining clinical conditions frequently employed the best and only data available digitally: claims data, tagged with International Classification of Diseases (ICD) diagnosis codes. 1 Thus, shared condition definitions traditionally have been based on published lists of ICD codes ("value sets"), either nationally or locally defined. But in the EHR era, richer sources of digital data abound at the level of clinical events.
In many EHRs, clinicians select diagnoses associated with these events using clinical terms rather than ICD codes. These clinical terms, often sourced from a vendor such as Intelligent Medical Objects (IMO) or Health Language, map both to ICD codes (for billing and coding purposes) and to Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT) concepts, the required nomenclature for EHR interoperability in the United States via health information exchanges (HIEs). Being more numerous and granular, clinical terms and SNOMED CT concepts often enable higher fidelity to clinical diagnostic thinking than ICD codes (►Fig. 1).

International Classification of Diseases Coding and Classification

The ICD coding system traces its roots to initial efforts at statistically analyzing causes of death, beginning with John Graunt's London Bills of Mortality in the 17th century. [2][3][4] Although rooted firmly in international epidemiology, additional uses emerged, and ICD-7 was adopted in the United States for hospital coding. ICD-9, released in 1979, remained in use in the United States as ICD, Ninth Revision, Clinical Modification (ICD-9-CM) until 2015, when it was replaced by the then 25-year-old ICD-10 (with clinical modifications as ICD-10-CM). ICD's subdivisions are largely body-system based, and employ a strict, disjoint classification scheme. That is, any ICD code has only a single path up to the root (top) of the hierarchy. For statistical reporting, a single path up avoids double-counting of epidemiologic events in summarized data. In the United States, the Centers for Medicare and Medicaid Services produces lists of ICD codes (value sets) for use in calculating performance measures based on ICD-coded claims data. 5 These value sets are publicly available via the National Library of Medicine's Value Set Authority Center (VSAC), used consistently throughout the country, and employed for both financial payment programs and public reporting of quality measures. Additionally, ICD code lists have been commonly employed in epidemiologic studies, and have a long history of use in scientific publications.

Why not International Classification of Diseases Value Sets for Defining Conditions?

Given these well-established uses of ICD code value sets, why even look at an alternative? The original intent of ICD as a high-level classification for epidemiological understanding of broad causes of mortality and morbidity makes it less useful for most accurately capturing clinical information at the point of care. For clinical input into electronic systems, a clinical terminology proves better suited. 6,7 In the EHR era, content-rich clinical data are now available electronically: patient-level conditions (problem lists), orders, encounters, test results, procedures, and timed events such as sequential process steps or displays of clinical decision support (CDS) advisories with corresponding clinician responses. Diagnoses at these patient or event levels are typically entered in the EHR as clinical terms (from a vendor's clinical terminology or using SNOMED CT itself).
Since these clinical terminologies are more granular than ICD, 6,8,9 employing ICD value sets alone risks loss of clinical fidelity to the patient's specific condition. Accountable Care Organizations (ACOs) and other population health initiatives increasingly need to identify subgroups of patients with a given condition, and to combine clinical data from disparate EHRs for near real-time CDS and care coordination for that condition. Claims data alone can be inadequate for this purpose, both from a timing and an information content standpoint. For instance, claims data are necessarily delayed by the claims submission and adjudication process; consequently, diagnoses from claims are not immediately available for real-time CDS upon entry of the diagnosis in the EHR. Claim diagnoses typically also lag behind transmission of the Continuity of Care Document (CCD) SNOMED CT-encoded diagnosis data via an HIE, which occurs upon completion of an encounter. Increasingly, clinical data derived from EHRs and rooted in clinical terminologies will be a cornerstone for ACO data repositories, analytics, and interventions.

(Table 2, continued. Subtype relationship: a relationship between two SNOMED CT concepts where one concept is a more specific subtype of another, more general concept; the most widely used type of relationship in SNOMED CT, also known as an "is a" or parent-child relationship. Subpopulation: the subset of persons/patients resulting from some segmentation algorithm; used here primarily for patient registries which identify patients with a shared condition or a shared exposure.)

SNOMED CT and Concept Hierarchies

SNOMED CT serves as a publicly available, international clinical terminology for use in electronic health care applications. As the most comprehensive clinical terminology, SNOMED CT aids in inputting and retrieving coded clinical information, and in interoperability between clinical systems. 3 Originated in 1965 as the Systematic Nomenclature of Pathology by the College of American Pathologists, SNOMED later merged with the "Read Codes" developed by clinicians in England's National Health Service to form SNOMED CT, released in 2002. 10 Iterative development of SNOMED CT occurs through a governing body, SNOMED International. 11 In contrast to ICD, SNOMED CT is an ontology supporting multiple types of relationships between clinical concepts. 3,12,13 The subtype relationship (also known as an "is a" relationship) defines one concept as a subtype of another, more general, "parent" concept. This enables efficient classification of a clinical condition by including references to a SNOMED CT concept with all its hierarchical "children" and further descendant subtype concepts. 14 Additionally, SNOMED CT supports polyhierarchies. For instance, a "Neoplasm of liver" is both a subtype of "Disorder of liver" and a subtype of "Neoplasm." In SNOMED CT, one can find "Neoplasm of liver" (and from there any specific type of liver cancer) traversing down either path (►Fig. 2). In ICD-10, one would have to choose whether to classify "Liver cancer" under Chapter 11 "Diseases of the digestive system" or Chapter 2 "Neoplasms." In addition to the concept hierarchy-defining subtype relationship, over 50 attribute relationships can be used to connect concepts among different SNOMED CT top-level hierarchies. 15,24

(Fig. 2 caption: A "Neoplasm of liver (disorder)" has 4 parents, including both "Disorder of liver (disorder)" and "Neoplasm of digestive organ (disorder)." Also shown are 5 "children," such as the concept "Malignant neoplasm of liver (disorder)," which in turn has more specific descendants.)
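The practical effect of the polyhierarchy can be illustrated with a few lines of Python. The sketch below encodes a toy fragment of the subtype ("is a") graph from the liver-neoplasm example and shows that the same concept is reachable from both parents; the graph fragment is illustrative only, not actual SNOMED CT release content.

```python
# Toy fragment of the SNOMED CT subtype ("is a") polyhierarchy from Fig. 2.
# Maps each concept to its parents; real SNOMED CT content is far larger.
PARENTS = {
    "Neoplasm of liver": ["Disorder of liver", "Neoplasm of digestive organ"],
    "Neoplasm of digestive organ": ["Neoplasm"],
    "Malignant neoplasm of liver": ["Neoplasm of liver"],
    "Disorder of liver": ["Clinical finding"],
    "Neoplasm": ["Clinical finding"],
    "Clinical finding": [],
}

def ancestors(concept):
    """All supertypes reachable by following 'is a' links upward."""
    seen = set()
    stack = [concept]
    while stack:
        for parent in PARENTS.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

ups = ancestors("Malignant neoplasm of liver")
# Both paths of the polyhierarchy are found, unlike ICD's single-path scheme:
print("Disorder of liver" in ups, "Neoplasm" in ups)  # True True
```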
SNOMED CT International Edition is released in January and July of each year, and the U.S. version is released in March and September. The September 2017 SNOMED CT (U.S. edition) release includes 560,985 concepts, 11 while the August 2017 ICD-10-CM release (for U.S. fiscal year 2018) includes only 71,704 codes. 16 As a result, SNOMED CT concepts match many conditions more specifically than does ICD-10. For instance, SNOMED CT concepts, but not ICD-10 codes, distinguish among different types of kidney cancer and of acidosis, for which clinical management varies substantially (►Table 3).

(Table 3 caption: SNOMED CT concepts distinguish among different types of kidney cancer and also differentiate different types of acidosis; ICD-10 codes do not distinguish among these clinically relevant subtypes. 55 Neurosarcoidosis is a relatively rare condition: it can be searched for by its SNOMED CT concept hierarchy, but not by ICD-9 or ICD-10 code.)

SNOMED to Define Patient Conditions in EHRs

Many EHRs support the creation of EHR diagnosis groupers to define and reuse a group of clinical conditions. These diagnosis groupers (a type of "content reference set" 3 ) can be created using SNOMED CT concept hierarchies, ICD codes, or lists of individual clinical terms. SNOMED CT-based diagnosis groupers offer the potential to be:

1. Simple to define, because of the logical supertype-subtype nature of SNOMED CT parent-child relationships. Specifying one or a few SNOMED CT supertype (parent) concepts defines a grouper containing multiple more granular subtype ("descendant") disorders.
2. A natural high-fidelity match for the clinical vocabulary of clinical experts who know which subconditions should or should not be included in a subpopulation of clinical interest (►Fig. 3).
3. More resilient to future changes within the coding system, such as addition of new concepts or deprecation of old ones, without requiring grouper redefinition.

Within the EHR, potential reuses of a standard, vetted diagnosis grouper for the same condition include CDS, dynamic (rule-based) appearance of tailored documentation tools and order sets, and population of patient registries. 17,18

SNOMED to Define Conditions for ACOs

ACOs frequently employ an HIE strategy for combining data from disparate EHRs. HIEs support the federally defined CCD standard, which uses SNOMED CT for exchanging diagnosis information. 19 SNOMED CT value sets thus provide a straightforward way to define conditions of interest from HIE or other CCD-derived data. SNOMED CT condition definitions can then be shared for a variety of population health purposes, such as care coordination, targeted outreach, clinical quality measure (CQM) calculations, and as variables in risk stratification and predictive analytic algorithms.

SNOMED to Define Conditions for Clinical and Translational Research

Clinical and translational scientists leverage the richer clinical data now being stored in EHRs, to conduct analyses not otherwise possible using claims and other administrative data alone. [20][21][22][23][24] Such research also benefits from the higher fidelity of SNOMED CT-encoded diagnoses for clinically important distinctions among subtypes of conditions. Even if a given research project requires an idiosyncratic definition of the primary study population of interest, covariate conditions can still share existing definitions, rather than having to redundantly create new ones for each project. In summary, significant benefit can be derived from defining each clinical condition (such as "renal cell carcinoma") once, with a single vetted SNOMED CT value set which then can be shared for clinical, population health, and clinical-translational research purposes. 25
The Present Project

In 2015, we undertook an Ambulatory Quality Outcomes project to develop at least one EHR-based specialty-specific registry for each of 30 specialties at the University of Texas (UT) Southwestern. 18 Registries included combinations of:

• Clinician documentation tools in the EHR.
• Patient questionnaires for patient-reported outcomes (for some registries).
• Patient registry list(s) viewable within the EHR.
• Data warehouse-derived clinical quality performance measures, fed back into the EHR.

We relied on SNOMED CT-based diagnosis groupers to define conditions of interest, either as primary conditions for registries or as covariate conditions. These SNOMED CT groupers were designated as health system "standard" groupers, and reused in CDS tools, rules, reports, performance measure calculations, and tailoring of relevant content to patients on our patient portal. Sharing SNOMED CT definitions of conditions aided in rapid-cycle development of multiple specialty patient registries, accelerating implementation of our population health initiatives. 18 As of January 2018, over 80,000 distinct patients were actively managed on one or more of 57 registries, with 10,875 patient-reported outcome questionnaires completed as part of registry-related data collection. In this report we (1) describe creation of SNOMED CT groupers to define multiple specialty conditions for our registry project, (2) evaluate the relative complexity involved to construct and maintain them, and (3) assess the shared reuse of these groupers for a variety of clinical and analytic purposes.

Objective

This article demonstrates a practical approach to defining clinical conditions with combinations of SNOMED CT concept hierarchies, and evaluates the potential for sharing definitions for a broad range of clinical, population health, and analytic uses.

EHRs and Health Information Exchanges

Our organizations each operate a separate instance of EHR software from Epic (Verona, Wisconsin, United States). SNOMED CT-encoded diagnoses are exchanged between these Epic instances via a standard CCD format using the included HIE capability (Care Everywhere). Our organizations also participate in one or more national HIEs (eHealth Exchange, CareEquality), enabling CCD exchange with EHRs from any participating EHR vendor and with the Veterans Administration EHR. The UT Southwestern clinically affiliated network of physicians comprises practices using a variety of individual EHRs. For population health purposes, we are linking these practices to each other and to Epic via an internal HIE (dbMotion, from Allscripts, Chicago, Illinois, United States). To date, the following EHRs have been linked to our dbMotion HIE and can exchange SNOMED CT-encoded diagnosis data: Allscripts Sunrise, eClinicalWorks (Westborough, Massachusetts, United States), Epic, and NextGen (Horsham, Pennsylvania, United States), with other EHRs continuing to be added.

Clinical Terminology Vocabulary

The clinical terminology vocabulary employed in UT Southwestern's Epic instance during this study was IMO's proprietary Problem (IT) Terminology, version 2018 R1, corresponding to the SNOMED CT International Edition July 2017 release and the SNOMED CT U.S. Edition September 2017 release.
Clinical Terminology Mapping

Our dbMotion HIE employs clinical terminology mapping software (Symedical, from Clinical Architecture, Carmel, Indiana, United States) for mapping EHR-specific content identifiers to reference clinical terminologies and ontologies. Additional content subset modeling capabilities of Symedical were employed for defining EHR-agnostic and consistent condition definitions using SNOMED CT concept hierarchies. 3

Standard Diagnosis Groupers

For specialty patient registries based on a shared condition, 26 we chose to use a SNOMED CT concept hierarchy-based definition for the primary condition, as well as for any comorbid conditions. That is, SNOMED CT-defined conditions were employed both to select the list of patients included in a registry (i.e., displayable on rows in a registry report), and typically also to determine one or more registry metrics (each displayable as a column of information about a registry patient). Specialty registries were developed on a staggered basis during a series of 2-week development iterations (mean, 4 iterations per registry); 18 any new standard diagnosis grouper construction required by a registry took place during one or more of those iterations. An initial set of 125 diagnosis groupers requested as part of this specialty registry development were selected for this study, without restriction on the specialties or conditions involved.

Defining and Constructing SNOMED CT Concept Groupers

Given a request to define a condition within the EHR, a clinical informaticist initially searched for the condition with a SNOMED CT hierarchy browser, either one within the EHR or SNOMED International's Web-based SNOMED CT Browser. 27 Searching on common clinical synonyms for the condition invariably identified one or more matching SNOMED CT concepts, most commonly within the "Clinical finding" top-level hierarchy (which includes "Disorders," or diagnoses). Each initial concept located by searching will be referred to as an "index" concept below. Once found, the index SNOMED CT concept choice was refined with a "drill-up, drill-down" approach. First, a "drill-up" examination of each parent concept of the index concept was done (by selecting a parent within the SNOMED CT browser software). This helped gauge (1) if the parent itself more accurately included all the intended condition, and thus should be used instead of the index concept, or (2) if some of the parent's other "child" concepts (i.e., "siblings" of the index concept) were also relevant and should be included in addition to the index concept. For instance, in constructing a grouper for the condition "Coronary artery disease (CAD)," drilling up from the SNOMED CT concept "Coronary atherosclerosis (disorder)" yields the parent "Disorder of coronary artery (disorder)," which proves too broad. Examination of the siblings of "Coronary atherosclerosis (disorder)" reveals some siblings should be included as indicating the presence of CAD, such as "Mechanical complications of coronary bypass (disorder)," while other siblings should be excluded, such as "Congenital anomaly of coronary artery (disorder)." Next, for each identified index concept, a "drill-down" examination of the concept's "descendants" was done, to see if any should be excluded. For instance, for a candidate concept of "Malignant neoplasm of breast (disorder)," the child "Malignant melanoma of the breast (disorder)" could be selectively excluded.
The above sequence (search, drill-up, drill-down) was then repeated as needed to look for other condition-defining concepts. Typical additional searches would be for (1) a "history of" concept (within the "Situation with explicit context" top-level hierarchy), (2) complications of the condition implying its presence (e.g., "diabetic ketoacidosis" for diabetes), or (3) condition-defining procedures, e.g., "Coronary bypass grafting" implying the presence of CAD, within the "Procedure" top-level hierarchy (►Fig. 3). Then, each included and excluded concept was numbered from 1 to n, and Boolean logic was written to combine them. By convention, included concept hierarchies were listed first, followed by the excluded concepts. As an example, for 2 included concepts and 2 excluded ones, the Boolean logic would be "(1 OR 2) AND NOT (3 OR 4)"; see ►Fig. 4. By using concept hierarchies, this method leverages the subtype relationships within the SNOMED CT ontology; attribute relationships were not employed. Finally, from the grouper Boolean logic definition, a full list of grouper contents was generated. In the EHR browser, this was a list of included clinical terms from the clinical terminology vocabulary (►Fig. 5). In the Symedical clinical terminology and mapping software, this was a list of SNOMED CT concepts, including descendants (►Fig. 6). Each list was then reviewed for any terms or concepts that should have been excluded or were notably missing. If so, the process was repeated to further refine the SNOMED CT concept selection and Boolean logic.
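Conceptually, compiling such a grouper amounts to expanding each referenced concept hierarchy to its descendants and then applying the Boolean logic. The Python sketch below illustrates this with a hypothetical "CAD (Standard)" definition built from the drill-up/drill-down example above; the child relation is a toy stand-in for the subtype relationships in a SNOMED CT release.

```python
# Illustrative child ("is a" inverse) relation; a real implementation would
# load subtype relationships from a SNOMED CT release.
CHILDREN = {
    "Disorder of coronary artery": ["Coronary atherosclerosis",
                                    "Congenital anomaly of coronary artery"],
    "Coronary atherosclerosis": ["Coronary atherosclerosis of native artery"],
    "Congenital anomaly of coronary artery": [],
    "Coronary atherosclerosis of native artery": [],
    "Mechanical complications of coronary bypass": [],
}

def descendants_or_self(concept):
    """The concept hierarchy: the concept itself plus all subtype descendants."""
    result, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in result:
            result.add(c)
            stack.extend(CHILDREN.get(c, []))
    return result

def compile_grouper(include, exclude):
    """Boolean logic '(1 OR 2 ...) AND NOT (3 OR 4 ...)' over concept hierarchies."""
    included = set().union(*(descendants_or_self(c) for c in include))
    excluded = set().union(*(descendants_or_self(c) for c in exclude)) if exclude else set()
    return included - excluded

# Hypothetical "CAD (Standard)" grouper mirroring the drill-up/drill-down example:
cad = compile_grouper(
    include=["Coronary atherosclerosis", "Mechanical complications of coronary bypass"],
    exclude=["Congenital anomaly of coronary artery"],
)
print(sorted(cad))
```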
Vetting SNOMED CT Grouper Design with Clinicians

The more closely a SNOMED CT grouper validly represents a unique, clinically important condition, the greater its value for shared use. Accordingly, subject matter expert clinician vetting proves advantageous. In vetting a given grouper with specialist experts, we sought their clinical judgment about which real-world diagnosis subtypes to include in, or exclude from, the target subpopulation. We avoided making them sift through either a long list of ICD-10 codes or the even longer list of clinical terms within the EHR. Rather, we posed questions based on the much smaller set of relevant SNOMED CT concepts and descendants, as in ►Fig. 6. (Example questions: should "gestational diabetes" and/or "preexisting diabetes mellitus in pregnancy" be included in a definition of "diabetes mellitus"? Should "stunned myocardium" and/or "hibernating myocardium" be included in a definition of "ischemic cardiomyopathy"?) Once vetted, the grouper's name in the EHR was appended with a specific suffix, "(Standard)," to streamline recognition and promote reuse.

Employing SNOMED CT Groupers in the EHR

Standard groupers were reused in rules, decision support advisory records, and reports. Rules can be evaluated at multiple points throughout the EHR, for instance, dynamically presenting condition-specific documentation templates or banners to clinicians. Whenever a rule needed to check for the presence or absence of a diagnostic condition, we encouraged searching for and reusing a "(Standard)" SNOMED CT grouper, rather than creation of a duplicative one-off grouper for isolated use by the rule. Similarly, CDS advisory records frequently evaluate one or more conditions as criteria. Reports within the EHR, both patient-specific detail reports as well as lists of patients, often include conditions as report parameters, i.e., for column display or for patient inclusion in a list. Use of standard SNOMED CT groupers was encouraged during design reviews of CDS advisories and reports.

Employing SNOMED CT Groupers in Clinical Quality Measure Calculations

Beyond their use in the EHR, the same SNOMED CT groupers were used to analyze extracted EHR data within the enterprise data warehouse (EDW) for:

• Population definition,
• Comorbid condition definition, and
• Electronic CQM (eCQM) calculations (ones developed locally to support quality improvement initiatives).

Formulae to calculate the denominator, numerator, and exclusions for a given eCQM often involve checking whether each patient has one or more conditions. We defined these conditions using the same standard SNOMED CT groupers employed within the EHR, employing a table-driven approach in the calculation engine that referred to standard groupers. 18
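The table-driven eCQM approach can be pictured as measure definitions that reference standard groupers by name, so the eCQM shares the exact condition definition used elsewhere in the EHR. The sketch below is schematic, with hypothetical grouper contents, measure rows, and patients; it is not our EDW calculation engine.

```python
# Hypothetical compiled groupers (sets of concepts), shared by EHR and EDW uses;
# in practice these come from the standard grouper compilation sketched earlier.
GROUPERS = {
    "CAD (Standard)": {"Coronary atherosclerosis",
                       "Coronary atherosclerosis of native artery",
                       "Mechanical complications of coronary bypass"},
    "Hospice care (Standard)": {"Hospice care"},
}

measure = {
    "denominator": ["CAD (Standard)"],
    "exclusions": ["Hospice care (Standard)"],
}

patients = {
    "p1": {"Coronary atherosclerosis of native artery"},
    "p2": {"Congenital anomaly of coronary artery"},
    "p3": {"Coronary atherosclerosis", "Hospice care"},
}

def in_any(diagnoses, grouper_names):
    """True if any patient diagnosis falls inside any referenced grouper."""
    return any(diagnoses & GROUPERS[g] for g in grouper_names)

denominator = [p for p, dx in patients.items()
               if in_any(dx, measure["denominator"])
               and not in_any(dx, measure["exclusions"])]
print(denominator)  # ['p1']: p2 lacks a qualifying diagnosis, p3 is excluded
```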
Potential for Sharing SNOMED CT Grouper Definitions across Organizations and EHRs

To demonstrate the potential for sharing SNOMED CT grouper definitions more broadly, we set out to construct exactly equivalent SNOMED CT groupers (content subsets) in our HIE's associated clinical terminology management system (Symedical) as in the Epic EHR. These SNOMED CT content subsets are EHR-agnostic, applicable to CCDs received from any EHR within the clinically affiliated network of practices participating in our HIE.

Evaluation and Measurement Methods

We assessed three aspects of using SNOMED CT concept hierarchy-based groupers: (1) simplicity of grouper construction, (2) shared use within the EHR for clinical, population health, and analytic purposes, and (3) whether SNOMED CT hierarchy-based groupers could be implemented across organizations on disparate EHRs.

(Fig. 4 caption: In this example, "Ruptured aneurysm" is excluded, perhaps to be included in a separate "intracranial bleeding" condition definition, and "History of cerebral vascular accident (CVA) without residual deficits" is excluded for being potentially unverified (again, all hypothetical for illustration purposes only). Boolean logic handles both the inclusion and exclusion criteria. (C) Within the electronic health record (EHR), construction of the condition-defining diagnosis grouper is straightforward, using Boolean logic.)

Evaluation of Simplicity of Grouper Construction

To evaluate the simplicity of grouper construction for the set of vetted "standard" groupers, we calculated the median, minimum, 25th percentile, 75th percentile, maximum value, and mean of several grouper characteristics. We planned a priori to use the median as the primary measure of central tendency, due to expected skew. From each SNOMED CT grouper definition, we assessed:

• Number of defining SNOMED CT concepts needed in the Boolean logic expression for the grouper.
• Number of total SNOMED CT concepts contained within the grouper, including all descendants of the defining concepts.

From the resulting (compiled) diagnosis groupers within the EHR, we assessed each of the following:

• Number of distinct ICD-10 codes included (mapped to the included IMO clinical terms).
• Number of distinct ICD-9 codes included.
• Number of total distinct ICD codes (9 and 10) included.
• Number of IMO clinical terms included.

We also counted the number of physician subject matter experts engaged in diagnosis grouper content discussions, in addition to physician informaticist review.

Evaluation of EHR and Analytic Shared Use

To evaluate shared use of standard groupers within the EHR, we counted the following EHR record types incorporating use of a SNOMED CT standard diagnosis grouper:

• Registry inclusion criteria and metric definitions.
• CDS records, such as best practice advisories and health maintenance reminders.
• Rules for evaluating EHR data that drive other EHR behavior (such as dynamic appearance of a documentation tool or order set).
• Real-time reports of various types within the EHR.

To evaluate shared use of standard groupers for analytics, we counted the number of:

• Conditions available to clinicians for self-service analytics within the EHR, and
• eCQM numerator, denominator, and/or exclusion calculations in the EDW which employed one or more standard diagnosis groupers.

Qualitative Evaluation of Cross-Organization and Cross-EHR Sharing Potential

To assess the feasibility of employing shared condition definitions for clinical data received from disparate EHRs via CCDs, we constructed SNOMED CT hierarchy-based groupers in our HIE's associated clinical terminology management software (Symedical). We evaluated the feasibility of constructing EHR-agnostic groupers in Symedical to exactly match the SNOMED CT concepts and Boolean logic in the Epic-based groupers.

Simplicity of Grouper Construction

In our set of 125 standard groupers, the median number of SNOMED CT concept hierarchies needed for grouper definition was only 2 (range of 1-30, 25th percentile = 1 and 75th percentile = 5). Thirty-five of 125 groupers (28%) were defined with a single SNOMED CT concept hierarchy; the remaining majority (90 groupers, 72%) employed Boolean combinations of concept hierarchies. Once defined, SNOMED CT hierarchy-based diagnosis groupers generally took 5 to 15 minutes each to create in the EHR development environment (bulk creation via import file is also possible). The number of subject matter expert physicians engaged in discussion on grouper contents (in addition to review of each grouper by one or more physician informaticists) was 43. Altogether, the 125 groupers included a total of 525 references to SNOMED CT concept hierarchies: 413 of the 525 concepts (79%) were within the "Clinical finding" top-level hierarchy, 80 (15%) within "Situation with explicit context," and 20 (4%) within "Procedure," accounting for 98% of all concept references (►Fig. 3). The remaining 12 concepts (2%) were distributed among the "Social context," "Body structure," and "Observable entity" top-level hierarchies. Among the 413 Clinical finding concepts, 351 (85%) had a semantic tag of "disorder" and 62 (15%) of "finding." Following grouper implementation within our Epic EHR, we assessed the resulting number of distinct SNOMED CT concepts, ICD codes, and clinical terms represented in the compiled grouper contents (►Table 4). Three of these diagnosis grouper definitions are shown in ►Table 5 as examples, and all 125 are available in the ►Supplementary Material (available in the online version). To match the contents of the succinct SNOMED CT concept hierarchy grouper definitions, other list-based approaches to create diagnosis groupers would involve a markedly larger median quantity of items. Compared with the median 2 SNOMED CT hierarchies needed to define a diagnosis grouper, the resulting groupers included a median of 32 individual SNOMED CT concepts, 155 clinical terms selectable in the EHR by clinicians, and 22 different ICD codes (ICD-9 and -10) associated with those clinical terms.
EHR and Analytic Shared Use of Standard SNOMED CT Groupers

To date, the set of 125 standard groupers has seen shared use by 57 patient registries (which include 362 separate references to standard groupers for registry-defining or comorbid conditions), 132 CDS items (alerts, health maintenance reminders), 190 EHR rules for non-CDS purposes, and 124 report definitions (►Table 6). Our EHR offers ad hoc or "self-service" analytics to clinicians, for instance, to find numbers of patients with certain conditions, optionally sliced further by other criteria (medications, procedural history, etc.). By making standard SNOMED CT diagnosis groupers visible in the self-service analytic tool, 125 vetted condition definitions have been made available to physicians for self-service analytics. In formal performance measurement reporting, 111 eCQM calculations (of a numerator, denominator, or exclusion) performed in UT Southwestern's EDW use one or more of the standard SNOMED CT groupers.

Cross-Organization and Cross-EHR Application

To demonstrate the feasibility of employing a shared SNOMED CT grouper definition for evaluation of clinical data from disparate EHRs, we replicated construction of our standard Epic SNOMED CT hierarchy-based groupers in an HIE-associated clinical terminology management system (Symedical). All 125 were readily constructed in Symedical to match precisely the SNOMED CT concepts and Boolean logic of the Epic groupers, and are thus applicable to SNOMED CT-encoded concepts sent by any certified EHR contributing to the HIE.

Main Findings

In moving toward more personalized medicine and to value-based reimbursement, defining patient conditions becomes crucial, so that optimal care for each condition can be better specified, delivered, and measured. In the EHR era, clinical events are now being captured in richer detail than available previously with billing data alone. EHR events associated with diagnoses now commonly employ clinical terms, which are mapped to SNOMED CT for diagnosis interoperability among EHRs. Population health efforts require a facile way to define diagnostic conditions similarly across EHRs, preserving high clinical fidelity and leveraging this expanded EHR content. In our study:

1. Starting from clinical intent for a given target population, using SNOMED CT concept hierarchies and Boolean logic proved to be a simple and concise way to define clinical conditions as diagnosis groupers, compared with listing individually all relevant SNOMED CT concepts, ICD codes, or clinical terminology terms. SNOMED CT concept hierarchy-based definitions of even highly specific conditions proved practical to construct.

2. Shared use of standard definitions of conditions via SNOMED CT has been high. Once constructed, SNOMED CT condition definitions found extensive shared use for EHR registries, rules, CDS, eCQM performance measure calculations, and self-service analytics. Benefits of standard grouper reuse include:

• Avoiding rework costs of duplicative grouper construction.
• Avoiding inconsistent grouper definitions, preventing later avoidable reconciliation work and "archeology" to track down discrepancies between definitions within the EHR and EDW.
• Streamlining future rapid, iterative development of new CDS tools and reports within the EHR, and new eCQMs in the EDW.

3. Sharing SNOMED CT concept-based groupers also proved simple to accomplish and can lead to straightforward definition of identical subpopulations across organizations and EHRs.
SNOMED CT-based condition definitions proved feasible to construct in an EHR-agnostic clinical terminology tool, enabling application to CCD data from diverse organizations and EHRs within a clinically integrated network. Using SNOMED CT to link disparate sources together optimally leverages the vocabulary standardization requirements of the Health Information Technology for Economic and Clinical Health Act 24 for population health purposes.

Clinical Guidelines and eCQMs Should Preferentially Define Conditions Using SNOMED CT Hierarchy Groupers and Boolean Logic

Clinical guidelines often change physician practice only slowly and incompletely. [28][29][30] Data collection for eCQMs can involve burdensome box-clicking by physicians, sometimes to idiosyncratically double-document an exclusionary condition already in the EHR. If guideline and eCQM authors were enabled to readily capture with SNOMED CT hierarchies their clinical intent for the condition types and subtypes being focused on, then those identical definitions could be made readily available to physicians practicing with certified EHRs. Duplicative efforts across the country to recreate guideline clinical intent with a locally developed diagnosis grouper could be eliminated. Creating diagnosis groupers to help drive associated CDS tool(s) to promote following a clinical guideline would become immediately more practical to implement, whether within the EHR or in a shareable CDS form invoked as an online service. 31,32 Diagnoses recorded during normal clinical care would be leveraged, avoiding redocumentation to meet a CQM. In sum, by employing SNOMED CT hierarchies to streamline practical implementation of guideline promotion within the EHR, updated guidelines could more quickly translate to a positive effect on patient care.

Using Standard Condition Definitions Based on SNOMED CT for Improving the Clinician EHR Experience

Although the percentage of U.S. hospitals and physician practices on an electronic record increased dramatically with the federal EHR Incentive Program, 33-36 physician dissatisfaction with EHRs remains high. 37,38 Alert fatigue, "documentation fatigue" (click counts), and difficulty finding relevant information in the chart detract from potential EHR benefits for many physicians. 39,40 While enhancing physician experience with the EHR is a far larger topic, a library of consistent, refined SNOMED CT-based condition definitions within the EHR can potentially help, by spurring:

• Smarter CDS: Condition-targeted CDS, based on real-time data within the EHR, can appear more selectively and appropriately to clinicians.
• More focused data capture: Rules can present condition-specific documentation templates only when relevant to a particular patient.
• Better signal-to-noise in information displays: Problem-oriented views can automatically collate and present the most relevant clinical data for the patient's conditions, 41 potentially reducing clinicians' cognitive burden. 42

Sharing Standard Condition Definitions for Clinical-Translational Research and Advanced Analytics

A library of SNOMED CT-defined conditions would also benefit clinical/translational research and analytics seeking to derive new knowledge from the growing expanse of EHR data, in the first limb of the "practice-to-knowledge, knowledge-to-practice" Learning Health System cycle. 25,30,43
Pragmatic studies making use of EHR data frequently need to consider one or more disorders as covariates or comorbid conditions: reusing existing SNOMED CT definitions avoids redundant work and enhances consistency. Predictive and prescriptive algorithms have the potential to become more robust as their input conditions include more sophisticated EHR event data, and with clinical phenotypes consistently defined across any source EHR. Rare and/or specialized conditions can be more easily focused on using the finer granularity of SNOMED CT concept hierarchies.

Use of SNOMED CT with Billing Claims Data

One potential limitation to adopting SNOMED CT condition definitions is that billing claims data are still required for a complete understanding of a patient's interactions with the health care system, and these data will come tagged with ICD diagnoses alone. To handle this efficiently requires a reliable ICD-to-SNOMED CT map, including both ICD-9 and ICD-10 for coverage of historical data. Such maps exist, with varying coverage of ICD codes. 24,44 In one study of administrative claims data for 1.5 million persons from 2003 to 2007, over 99% of the ICD-9-CM diagnosis codes used could be mapped to SNOMED CT using the Observational Medical Outcomes Partnership (OMOP) common data model. 45 Similarly, over 99% of primary ICD-10-CM diagnosis codes on the 16.2 million claims submitted by UT Southwestern's multispecialty physician practice during the calendar year 2016 mapped successfully to a primary SNOMED CT concept using the OMOP common data model, enabling effective use of SNOMED CT concept groupers with real-world billing data. In the future, valuable harmonization work underway will bring the next version of ICD (ICD-11) and SNOMED CT into much closer alignment. 46
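The claims-data workflow just described reduces to a two-step lookup: map each billed ICD code to a SNOMED CT concept, then test that concept's membership in the compiled grouper. A minimal sketch, with a hypothetical two-entry map standing in for an OMOP-style vocabulary table:

```python
# Hypothetical OMOP-style map from ICD-10-CM codes to SNOMED CT concepts
# (illustrative pairings only; a real map covers tens of thousands of codes).
ICD_TO_SNOMED = {
    "I25.10": "Coronary atherosclerosis",
    "Q24.5":  "Congenital anomaly of coronary artery",
}

# Compiled grouper contents (illustrative), as produced by the earlier sketch.
CAD_GROUPER = {"Coronary atherosclerosis",
               "Coronary atherosclerosis of native artery"}

def claim_matches_grouper(icd_codes, grouper):
    """True if any billed ICD code maps to a concept inside the grouper."""
    mapped = {ICD_TO_SNOMED[c] for c in icd_codes if c in ICD_TO_SNOMED}
    return bool(mapped & grouper)

print(claim_matches_grouper(["I25.10"], CAD_GROUPER))  # True
print(claim_matches_grouper(["Q24.5"], CAD_GROUPER))   # False
```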
Maintenance of SNOMED CT Hierarchy Groupers

SNOMED CT is updated twice yearly. While SNOMED CT hierarchy-based groupers (condition definitions) offer the major maintenance advantage of automatically including any newly added subtype descendants, they still require periodic review. SNOMED CT updates can include newly added concepts, deprecated concepts, or changes in a concept's hierarchical position (through modifications of its subtype-supertype relationships). Additions of new concepts as subtypes of an existing grouper concept are handled gracefully, as are deprecated concepts. Addition of a new concept as a sibling of a currently included concept hierarchy requires clinical review to decide whether the new entry should also be included. An algorithm to autodetect groupers with newly added sibling concepts after a SNOMED CT update would facilitate this targeted review. Newly added descendants could also be automatically detected and reviewed if desired, even though they are likely to remain included. Changes in hierarchy positions conceptually could cause problems. In practice, many such migrations address a previous quality issue in the SNOMED CT tree, by improving the consistency of subtype-supertype relationships. Hierarchy-based groupers generally are stable across such migrations. As an example from our list of 125 conditions, we had to reference 5 SNOMED CT concept hierarchies to define "Tinnitus," because 4 of the many variants of tinnitus in this version of SNOMED CT lack "Tinnitus" as a supertype, instead being linked to the broader supertype "Disorder of ear." One can envision, as part of ongoing SNOMED CT quality improvement efforts, that these 4 tinnitus variants will ultimately have "Tinnitus (finding)" added as a supertype "parent," like the several other variants of tinnitus already do. Should such a migration happen, our Tinnitus grouper will not "break." The grouper's definition could then be simplified to consist of just "Tinnitus (finding), including descendants," but the grouper would work identically, before or after such simplification. Since we do not yet automatically flag groupers potentially affected by SNOMED CT updates, we instead employ periodic reviews. Any enumerated list-based ICD, SNOMED CT, or IMO term groupers (extensional value sets) are put on an annual review cycle, as they are more likely to become stale with updates, while hierarchy-based diagnosis groupers (intensional value sets) are put on a 3-year review cycle. In practice, we have not encountered undesirable grouper behavior stemming from SNOMED CT semiannual updates when using this review frequency.

Use of SNOMED CT's Subtype Relationship Only

Our use of SNOMED CT concept hierarchies (combined with Boolean logic) inherently employs the subtype relationship between concepts. 13 However, other connections between concepts, known as attribute relationships, are possible. Attribute relationships enable additional ways to refine a SNOMED CT content subset by adding further constraints. Sixteen attribute relationships can be used to further elaborate Clinical finding concepts, such as "Associated morphology," "Finding site," "Causative agent," and "After" (for temporal relationships). 15 Attribute relationships can be included in the "precoordinated" definition of a more specific individual SNOMED CT concept. For example, "Fracture of neck of femur (disorder)" [SCTID: 5913000] has two precoordinated relationships: (1) an "Associated morphology" of "Fracture (morphologic abnormality)" [SCTID: 72704001], and (2) a "Finding site" of "Structure of neck of femur (body structure)" [SCTID: 29627003]. Alternatively, attribute relationships can be specified at query time, using "postcoordination." A standard grammar exists for expressing such relationships: SNOMED CT Expression Constraint Language. 47 SNOMED CT Expression Constraint Language also can specify shareable combinations of concept hierarchies exactly equivalent to the Boolean logic-based approach taken in this study. As a practical matter, when constructing diagnosis groupers within the EHR, the concept hierarchies frequently contain precoordinated concepts (such as "Fracture of neck of femur") which make use of attribute relationships. But currently, our EHR groupers do not directly specify postcoordinated SNOMED CT attribute relationships. The Symedical content subset designer does enable such postcoordinated queries. Leveraging attribute relationships could further refine condition and patient subpopulation definitions.
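Using the fracture example above, a postcoordinated query over attribute relationships can be pictured as filtering concepts by their defining attributes. The two relationships of "Fracture of neck of femur" come from the text; the dictionary layout, the contrast concept, and the query helper are illustrative assumptions, not an implementation of the Expression Constraint Language.

```python
# Precoordinated defining relationships, per the example in the text:
# "Fracture of neck of femur" has an Associated morphology and a Finding site.
DEFINING_ATTRS = {
    "Fracture of neck of femur": {
        "Associated morphology": "Fracture",
        "Finding site": "Structure of neck of femur",
    },
    "Osteoarthritis of hip": {  # hypothetical contrast case
        "Associated morphology": "Degeneration",
        "Finding site": "Hip joint structure",
    },
}

def postcoordinated_query(attribute, value):
    """Concepts whose defining attributes include attribute = value."""
    return {c for c, attrs in DEFINING_ATTRS.items()
            if attrs.get(attribute) == value}

# All fracture disorders regardless of site: a constraint that subtype
# hierarchies alone do not express directly.
print(postcoordinated_query("Associated morphology", "Fracture"))
```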
Defining Conditions from EHR Data Using Diagnoses Only

One might also ask: why limit the domain of condition-defining inputs to SNOMED CT diagnosis concepts only? For example, why not also use laboratory test results, current medications, and even unstructured data to define a clinical phenotype such as "Diabetes Mellitus" 48 ? And what will happen as more and more "conditions" are defined based on genetic data 49 ? Because of the multiple corollary benefits to safe clinical care and to analytics from each patient having a single accurate master list of their active health conditions (their Problem List), we favor separating out those concerns. That is, we encourage using other nondiagnosis domain data (laboratory test results, medications, etc.) to enhance the quality of the Problem List through active additions and refinements of a patient's conditions. 50-52 But we strongly advocate that the patient's Active Problem List serve as the single, unified focal point for clinical situational awareness, clinical communication, and analytic understanding of "all the patient's problems." 53

Conclusion

SNOMED CT hierarchy-based diagnosis groupers are simple to develop and maintain, understandable to clinicians, useful in both the EHR and EDW, and readily shareable. Developing curated SNOMED CT hierarchy-based condition definitions ("intensional value sets") and disseminating them publicly (e.g., via the VSAC) could help accelerate cross-organizational population health efforts, "smarter" EHR feature configuration, and clinical-translational research. 54 SNOMED CT hierarchies can define clinical conditions more precisely than achievable with ICD, and closely match how clinicians think about disease subtypes. And by directly employing the terminology standard now native to EHRs, they prove highly practical to implement across multiple health care delivery organizations. Guideline-writing groups and eCQM authors who define conditions using SNOMED CT hierarchies thus could more quickly see uptake of their work efforts into EHR-based CDS and patient registries, providing clinicians and patients practical tools for improving care delivery and patient outcomes.

Clinical Relevance Statement

With increasing focus on population health, identifying patients who share a clinical condition helps promote best practice clinical care within an electronic health record (EHR), and across clinically integrated networks. EHRs now exchange diagnoses using a standard terminology, SNOMED CT. Defining clinical conditions with SNOMED CT concept hierarchies is far simpler than alternatives, and such definitions can be readily shared for multiple clinical and analytic purposes.

Multiple Choice Questions

1. Compared with lists of ICD codes, EHR diagnosis groupers constructed with SNOMED CT concept hierarchies:

a. Require more frequent updating when new codes or concepts are added or deprecated.
b. More closely match clinical thinking about disease subtypes to include or exclude.
c. Use the coding system mandated for professional billing in the United States.
d. Are more complex to construct and maintain.

Correct Answer: The correct answer is option b. The hierarchical subtype ("is a") relationships between SNOMED CT concepts express type-subtype relationships that closely match how clinicians think about clinical disorders and their subtypes. This streamlines clinical vetting of groupers to achieve faithful representation of the intended clinical condition. Groupers designed with SNOMED CT hierarchies are more resilient to additions or deprecations of individual codes/concepts than list-based approaches such as lists of ICD codes: addition of new "descendants" within an existing SNOMED CT hierarchy does not require a change to the grouper definition in the EHR.
SNOMED CT is the terminology mandated for health information exchange of diagnoses between EHRs; professional billing employs ICD-10-CM codes in the United States, not SNOMED CT concepts. ("Maps" translating ICD-10-CM codes to SNOMED CT concepts enable groupers defined with SNOMED CT hierarchies to handle both clinical EHR data and billing claims data.) SNOMED CT hierarchy groupers require far fewer concepts/codes to define (median of 2 concepts in this report) than groupers using lists of ICD codes. 2. Diagnosis "groupers," or value sets: a. Can only be used in electronic health records, not in analytic applications such as enterprise data warehouses. b. Are best constructed individually each time a condition definition is needed (one use = one grouper). c. Are best constructed for each distinct real-world condition, and shared for multiple uses (one real-world condition = one grouper). d. Are available exclusively as lists of codes or concepts. Correct Answer: The correct answer is option c. Constructing one grouper per distinct real-world condition promotes higher quality through more clinical vetting per grouper, higher consistency, and lower total cost in time/effort via shared reuse for multiple clinical and analytic purposes. Accordingly, this is preferred over constructing multiple groupers for the same clinical condition each time a new use arises (e.g., CDS tool vs. eCQM). Diagnosis value sets can be shared for use in analytic applications, such as data warehouses, as well as within EHRs. While diagnosis value sets can be list-based ("extensional," constructed by listing out individual clinical terms, individual ICD codes, or individual SNOMED CT concepts), they also can be hierarchy-based ("intensional")-constructed by referring to combinations of SNOMED CT hierarchies that match desired clinical inclusion/exclusion criteria.

Protection of Human and Animal Subjects
Creation of specialty registries at UT Southwestern was performed for quality improvement purposes and deemed exempt from institutional review board (IRB) review by UT Southwestern's IRB. Analysis of reuse and complexity of diagnosis groupers in the EHR did not involve human or animal subjects, and did not require IRB review.
2018-09-13T14:09:04.385Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "bb7c3e0f4fe799ba14a24c5142f74b19bfe78434", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0038-1668090.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "bb7c3e0f4fe799ba14a24c5142f74b19bfe78434", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
218593734
pes2o/s2orc
v3-fos-license
Fashion Recommendation with Multi-relational Representation Learning Driven by increasing demands of assisting users to dress and match clothing properly, fashion recommendation has attracted wide attention. Its core idea is to model the compatibility among fashion items by jointly projecting embedding into a unified space. However, modeling the item compatibility in such a category-agnostic manner could barely preserve intra-class variance, thus resulting in sub-optimal performance. In this paper, we propose a novel category-aware metric learning framework, which not only learns the cross-category compatibility notions but also preserves the intra-category diversity among items. Specifically, we define a category complementary relation representing a pair of category labels, e.g., tops-bottoms. Given a pair of item embeddings, we first project them to their corresponding relation space, then model the mutual relation of a pair of categories as a relation transition vector to capture compatibility amongst fashion items. We further derive a negative sampling strategy with non-trivial instances to enable the generation of expressive and discriminative item representations. Comprehensive experimental results conducted on two public datasets demonstrate the superiority and feasibility of our proposed approach. Introduction With the proliferation of online fashion websites, such as Polyvore 1 and Farfetch 2 , there are increasing demands on intelligent applications in the fashion domain for a better user shopping experience. This drives researchers to develop various machine learning techniques to meet such demands. Existing work is mainly conducted for three types of fashion applications: (1) clothing retrieval [1,1,8]: retrieving similar clothing items from the data collection based on the query clothing item; (2) fashion attribute detection [3,11,12]: identifying clothing attributes such as color, pattern and texture from the given clothing image; (3) Complementary Clothing Recommendation [5,10,16,21,22]: recommending complementary clothes that match the query clothing item to the user. In this paper, we focus on the third application, which is more challenging and sophisticated due to the fashion data complexity and heterogeneity. It requires the model to infer compatibility among fashion items according to various complementary characteristics, which goes beyond visual similarity measurement. The key point to tackle the above challenges is to derive an appropriate compatibility measurement for pairs of fashion items, which can effectively capture various fashion attributes (e.g., colors and patterns) from item images for comparison. The major stream of existing approaches for fashion compatibility modeling adopts metric learning techniques to extract effective fashion item representations. A typical fashion compatibility modeling strategy is to learn a latent style space, where matching item pairs stay closer than incompatible pairs. The compatibility of two given fashion items is computed by the pairwise Euclidean distance or inner product between fashion item embeddings. Nevertheless, the previous work has two main limitations that lead to sub-optimal performance. Firstly, some approaches consider fashion compatibility modeling as a single-relational task. However, this neglects the fact that people usually focus on different aspects of clothes from different categories. 
For example, people are more likely to focus on color and material for blouses and pants, while they may pay attention to shape and style for jeans and shoes. Moreover, using a single unified space is likely to result in incorrect similarity transitivity in fashion compatibility. For instance, if item A matches both B and C, while B and C may not be compatible, the embeddings of A, B and C will be forced to be close to each other in a single unified space, which degrades prediction performance because the compatibility essentially does not hold transitivity property. Therefore, such a category-independent approach will result in inaccurate item representations. Secondly, most existing approaches merely randomly sample negative instances from the training set. However, most of the randomly sampled triplets are trivial ones, which may fail to support the model to learn discriminative item representations. In order to address the above mentioned limitations, we propose a novel Category-Aware Fashion Metric Embedding learning network (CA-FME), which models both instances and category-aware relation representations through a translation operation. Specifically, we formulate the fashion compatibility measurement as a multi-relational data modeling task. We treat fashion items as entities and define pairs of compatible categories as complementary relations, e.g., blouses-skirts. The overall flowchart of CA-FME is presented in Fig. 1: Item visual features are first extracted through a pre-trained CNN. Then, each pair of item embeddings is projected to their corresponding category-specific relation subspace. Finally, we model the compatibility based on a transitionbased score function. Our main contributions can be summarized as below: -We present a novel category-aware embedding learning framework for fashion compatibility modeling, which not only captures cross-categorical relationships but also preserves the diversity of intra-category fashion item representations. -We devise a negative sampling strategy with non-trivial samples for discriminative item representations. -Extensive experiments have been conducted on two real world datasets, Polyvore and FashionVC, to demonstrate the superior performance of our model over other state-of-the-art methods. (3) Multiple relation-specific projection spaces for preserving the intra-class diversity. The whole framework is finally optimized via a margin-based ranking objective function in end-to-end manner. Fashion Compatibility Modeling The mainstream of work aims to map fashion items into a latent space where compatible item pairs are close to each other, while incompatible pairs lay in the opposite position. McAuley et al. [13] propose to use Low-rank Mahalanobis Transformation to learn a latent style space for minimizing the distance between matched items and maximizing that of mismatched ones. Following this work, Veit et al. [19] employ the Siamese CNNs to learn a metric for compatibility measurement in an end-to-end manner. Some researchers argue that the complex compatibility cannot be captured by directly learning a single latent space. He et. al [6] propose to learn a mixture of multiple metrics with weight confidences to model the relationships between heterogeneous items. Veit et al. [18] propose Conditional Similarity Network, which learns disentangled item features whose dimensions can be used for separate similarity measurements. Following this work, Vasileva et al. [17] claim that respecting type information has important consequences. 
Thus, they first form type-type spaces from each pair of types and train these spaces with triplet loss.

Knowledge Graph Embedding Learning
The techniques of representation learning on the knowledge graph have attracted large attention in recent years. Different from the approaches implemented by tensor factorization, e.g., [14], translation-based models [2,7,20], which are partially inspired by the idea of word2vec, have achieved state-of-the-art performance in the field of the knowledge graph. Similar to the knowledge graph, heterogeneous fashion recommendation can also be considered as a multi-relational problem, where complementary categories form various relations. Enlightened by these findings, we apply a similar idea from the knowledge graph to the fashion domain for compatibility modeling.

Problem Formulation
A pre-trained CNN with trainable parameters Θ_v extracts the visual feature embedding v of a fashion item image o ∈ O. We denote a set of category complementary relations as R = {r_{c_i c_j}}, where c_i, c_j ∈ C represent a pair of complementary categories, such as tops-bottoms. We use a triplet (v_i, v_j, r_{c_i c_j}) to denote a pair of compatible items together with their category complementary relation. Each relation r_{c_i c_j} ∈ R corresponds to an embedding vector r_{c_i c_j} ∈ R^d from the relation embedding space. Our target is to derive a fashion compatibility scoring function f(v_i, v_j, r_{c_i c_j}), which captures visual characteristics from the item embeddings for compatibility measurement.

Proposed Approach
In this section, we first present our CA-FME model for fashion compatibility modeling. Then, we introduce a novel negative sampling strategy for more effective training. Finally, we describe the optimization algorithm used to train our model. The overview of our proposed framework is shown in Fig. 1. We aim to build a model which can (1) effectively model the notion of compatibility; (2) be easily generalized to unseen fashion item compatibility measurement; and (3) focus on different aspects of item embeddings regarding different category complementary relations for the compatibility measurement. In particular, the framework consists of a pre-trained CNN for visual feature extraction and multiple category complementary relation subspaces for category-aware compatibility modeling.

Compatibility Modeling
To address the above-mentioned limitations, we assign each category complementary relation an embedding vector and model compatibility as a translation, v_i + r_{c_i c_j} ≈ v_j, which means o_j's embedding v_j should be the nearest neighbor to the resulting vector of v_i plus the relation vector r_{c_i c_j} in a specific latent space, based on a certain distance metric, e.g., L1 or L2 distance. However, there exists one issue in the above equation: in reality, items from a specific pair of categories share diverse fashion attributes such as material, style and pattern. Therefore, it is insufficient to preserve intra-category diversity by building only a single embedding vector for each category complementary relation. To address this issue, we propose to build multiple relation-specific subspaces, i.e., projection matrices M_r ∈ R^{k×d}, r ∈ R, where k is the number of visual feature vector dimensions and d is the dimension of the relation embedding space. The benefit of such category-aware projection operations is twofold. Firstly, the relation-specific subspaces provide abundant trainable parameters to preserve intra-category diversity. Secondly, they provide the capability to handle unseen items through a projection operation.
Thus, we define the projected item vectors of v_i and v_j as v_i^r = M_r v_i and v_j^r = M_r v_j, where M_r is the projection matrix of the corresponding relation r = r_{c_i c_j}. With the above defined compatibility relationship modeling rule and relation-specific projection, we can now perform the compatibility score calculation within the corresponding relation space. Given a pair of fashion items denoted as o_i and o_j, and their corresponding category complementary relation r_{c_i c_j}, the compatibility score s_ij is calculated from the distance between v_i^r + r_{c_i c_j} and v_j^r, where the L2 distance is used (the smaller the distance, the higher the compatibility score).

Negative Sampling
Negative sampling has been proven to be an effective and helpful training strategy to learn discriminative item representations in various fields. We aim to derive a simple but effective negative sampling strategy to assist our model to identify more subtle style patterns from hard negative instances. Since a category complementary relation corresponds to two different categories, we want both sides of each training triplet to benefit from negative sampling. Therefore, the strategy should meet the following requirements: 1. The strategy should consider both sides of training triplets. 2. The strategy should identify hard negative instances effectively and efficiently. 3. The strategy should avoid false negative samples effectively. Now we introduce the details regarding how our designed negative sampling strategy meets the above-defined requirements. We also present the details of our strategy in Algorithm 1. Requirement 1: We propose to sample negative instances from both sides of a given positive triplet (v_i, v_j, r_{c_i c_j}). In particular, we first fix v_i and the category complementary relation r_{c_i c_j}, then replace v_j by randomly sampling an item embedding vector v̂_j from category c_j. Similarly, we perform the same negative sampling on the other side, fixing v_j and replacing v_i. Requirement 2: Given a positive triplet (v_i, v_j, r_{c_i c_j}), we first uniformly sample N negative candidates, denoted as Ĥ_(v_i, r_{c_i c_j}), from category c_j's item set. Then, for each training triplet, we calculate scores for all negative triplets. These two steps correspond to steps 1-2 in Algorithm 1. Intuitively, the negative triplets with high compatibility scores can be regarded as hard negative samples. Requirement 3: Although the negative samples with the highest scores are the hardest, they are also the most likely to be false negatives, which would instead have a destructive impact on model performance. In order to avoid this issue, we propose to select M negative items from the above sampled N negative candidates with different probabilities by multinomial sampling, which corresponds to step 3 in Algorithm 1 (the remaining steps of Algorithm 1 form the 4-tuple training set and update the whole network via the hinge loss function). In particular, we grant larger probabilities to harder negative samples according to their scores. Here, let S = {s_1, s_2, ..., s_N} be the set of calculated scores of the N negative candidates. We first define a normalization function norm(s_ij) to project all the scores into the range [0, 1]; the probability of sampling a negative item v̂ is then defined as an increasing function of its normalized score, so that harder negatives are drawn more often.

Margin-Based Optimization. With the above defined score function and negative sampling strategy, we present the whole training procedure in Algorithm 1. Let Ĥ_(v_i, r_{c_i c_j}) and Ĥ_(v_j, r_{c_i c_j}) denote the 4-tuple training sets constructed for the two sides using the above defined negative sampling strategy. We define a margin-based (hinge) loss as our objective function for training, which penalizes every positive triplet that does not out-score its sampled negatives by at least the margin, where γ is the margin value and [x]_+ = max(0, x).
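A minimal PyTorch-style sketch of the category-aware translation scoring, the hinge loss, and a score-weighted negative sampler as described above. This is a sketch under stated assumptions, not the authors' released implementation: the negative-L2 score convention, the min-max weighting inside sample_hard_negatives, and all tensor shapes and names are illustrative choices.

```python
import torch
import torch.nn as nn

class CAFMEScorer(nn.Module):
    """Sketch of category-aware translation scoring: project both item
    embeddings into the relation-specific subspace, translate by the
    relation vector, and score by (negative) L2 distance."""

    def __init__(self, num_relations: int, visual_dim: int = 128, relation_dim: int = 128):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, relation_dim)  # r_{c_i c_j}
        # One projection matrix M_r per relation (illustrative initialization).
        self.proj = nn.Parameter(torch.randn(num_relations, relation_dim, visual_dim) * 0.01)

    def forward(self, v_i, v_j, rel):            # v_*: (B, visual_dim), rel: (B,)
        M = self.proj[rel]                        # (B, relation_dim, visual_dim)
        vi_r = torch.bmm(M, v_i.unsqueeze(-1)).squeeze(-1)  # M_r v_i
        vj_r = torch.bmm(M, v_j.unsqueeze(-1)).squeeze(-1)  # M_r v_j
        residual = vi_r + self.rel_emb(rel) - vj_r
        return -residual.norm(p=2, dim=-1)        # higher score = more compatible

def margin_loss(pos_score, neg_score, gamma: float = 1.0):
    """Hinge loss: positive pairs should out-score sampled negatives by gamma."""
    return torch.clamp(gamma - pos_score + neg_score, min=0).mean()

def sample_hard_negatives(scores, m: int):
    """Score-weighted multinomial sampling over N candidate negatives per row.
    Min-max scaling to [0, 1] is one illustrative weighting; the paper's exact
    norm() function is not reproduced here."""
    s_min = scores.min(dim=-1, keepdim=True).values
    s_max = scores.max(dim=-1, keepdim=True).values
    weights = (scores - s_min) / (s_max - s_min + 1e-8)
    return torch.multinomial(weights + 1e-8, m, replacement=False)
```

In training, each positive 4-tuple would be scored against its M sampled negatives and the hinge loss minimized with mini-batch SGD, matching the procedure described next.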
We adopt the stochastic gradient decent algorithm (SGD) for the model optimization. In each step, we sample a mini-batch of training triplets and update the parameters of the whole network. Experiments In this section, we first describe the experimental settings and then give comprehensive analysis based on the experimental results. Dataset We conduct our experiments on two public datasets, FashionVC and Polyvore-Maryland, provided by Song et al. [16] and Han et al. [5] respectively. FashionVC [16]. This dataset consists of 14,871 top item images and 13,663 bottom item images, where each item has a corresponding image, a title and a category label. In this paper, we only consider the visual modality. Therefore, we use images for visual information extraction and category labels to determine which category complementary relation the item pairs belong to. We randomly split the data according to 80%;10%;10% for training, validation and test sets, respectively. PolyvoreMaryland [5]. This dataset contains 21,799 outfits crawled from the online social community website Polyvore. We use the splits provided by Han et al. [5], which has 17,316, 3,076 and 1,407 outfits in training, testing and validation sets respectively. In this paper, we mainly study item-to-item compatibility, therefore, we keep four main groups of fashion items: tops, bottoms, bags and shoes from the outfit data. Each fashion item contains an image, a title and a category label. Note that each group of fashion items have several detailed category labels, e.g., there are hand bags and shoulder bags in the "bags" group. Baseline Methods We compare our model CA-FME with several state-of-the-art models for heterogeneous recommendation. For the fair comparison, we set the pre-trained Alexnet [9] as the visual feature extractor of all methods. -SiameseNet [19]: The approach models compatibility by minimizing the Euclidean distance between compatible pairs and maximizing the distance between incompatible ones in a unified latent space through contrastive loss. -Monomer [6]: The approach models fashion compatibility with a mixture of distances computed from multiple latent spaces. -BPR-DAE [16]: The approach models compatibility through inner-product result of top's and bottom's embeddings and uses Bayesian Personalized Ranking (BPR) [15] as their optimization objective. -TripletNet [4]: The approach models fashion compatibility in a unified latent space through triplet loss. -TransNFCM [22]: The state-of-the-art method that learns item-item compatibility by modeling categorical relations among different fashion items. -TA-CSN [17]: The state-of-the-art method that builds type-aware subspaces for fashion compatibility modeling. Parameter Settings In our experiment, all the hyper-parameters of our approach are tuned to perform the best on the validation set. For the fair comparison, we apply the Alexnet [9] as the visual feature extractor for all methods. In our model, we set margin γ as 1, learning rate α = 10 −4 with momentum 0.9, batch size B = 512. Visual embedding dimension k = 128, with dropout rate 0.5 and relation embedding dimension is set to be 128. Compatibility Prediction Task Description. The compatibility prediction task aims to predict whether a given pair of items are compatible or not. In particular, we replace one item of each testing positive triplet with 100 randomly sampled negative items. Thus, for each testing instance, it requires to give ranking on 101 items based on the query image. 
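Given this ranking protocol (one ground-truth item plus 100 sampled negatives per test instance), the two metrics reported below, Hit@k and AUC, can be computed directly from the predicted compatibility scores. A minimal NumPy sketch; the convention that column 0 holds the ground-truth item's score is an assumption of this sketch, not the paper's code.

```python
import numpy as np

def hit_at_k(scores: np.ndarray, k: int) -> float:
    """scores: (num_test, 101); column 0 holds the ground-truth item's score.
    Hit@k = fraction of test instances whose ground truth ranks in the top k."""
    # Rank of the positive = number of negatives scoring strictly higher, plus one.
    ranks = 1 + (scores[:, 1:] > scores[:, [0]]).sum(axis=1)
    return float((ranks <= k).mean())

def auc(scores: np.ndarray) -> float:
    """Fraction of (positive, negative) comparisons that the positive wins."""
    wins = (scores[:, [0]] > scores[:, 1:]).sum()
    total = scores.shape[0] * (scores.shape[1] - 1)
    return float(wins / total)

# Example call pattern with randomly generated scores.
rng = np.random.default_rng(0)
fake_scores = rng.normal(size=(1000, 101))
print(hit_at_k(fake_scores, k=10), auc(fake_scores))
```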
We employ two widely-used evaluation metrics, Hit@k and Area Under the ROC curve (AUC), to evaluate the performance of our model and baseline methods based on the predicted compatibility scores. Hit@k indicates the proportion of test instances for which the correct item is ranked in the top k: Hit@k = (1/|D_test|) Σ_{d ∈ D_test} 1[rank(d) ≤ k], where D_test denotes the collection of testing instances and 1[·] is the indicator function. AUC is the fraction of positive-negative comparisons in which the positive instance receives the higher score: AUC = #(pred_positive > pred_negative) / (#positives × #negatives), where #(pred_positive > pred_negative) is the number of cases in which the predicted score of a positive instance is larger than that of a negative one, obtained by comparing the predicted score of each positive instance with each negative instance in the testing set.

Performance Comparison
We evaluate our model with and without the negative sampling strategy, i.e., CA-FME(Neg) and CA-FME. Table 1 shows the performance comparison on the two datasets based on the AUC and Hit@K evaluation metrics. From the table we have the following observations: -Our model achieves the best performance on both datasets by significant margins compared with all the other state-of-the-art methods, which demonstrates the effectiveness and superior performance of our method. -The category-unaware models, including SiameseNet and TripletNet, which merely learn fashion compatibility notions in a single latent space, perform worse than category-aware models such as TA-CSN and TransNFCM. This indicates that considering category label information is of great importance in fashion compatibility modeling, as it helps avoid incorrect compatibility similarity transitivity. It also indicates that items from different categories may have very different visual characteristics for compatibility. -Compared with the category-aware methods TA-CSN and TransNFCM, our model obtains around 15% and 30% improvements on AUC and Hit@20, respectively. Although they build category-aware mask vectors to capture different fashion characteristics among different categories, this is still not sufficient to preserve the intra-category diversity among items. With the help of our relation-specific projection spaces, our model can capture much more specific compatibility information from different categories. The improvements on the PolyvoreMaryland dataset are even larger in terms of AUC and Hit@5. This is mainly because of the different number of relations in the two datasets: we define 146 category relations in the Polyvore dataset, while there are only 30 relations in the FashionVC dataset. This suggests that more relational spaces contribute significantly to the improvement in performance. -The results of CA-FME(Neg) show that our negative sampling strategy helps improve our model's performance, which confirms the effectiveness of our proposed training strategy.

Case Study
In this section, we conduct a case study aiming to address a real-world fashion recommendation task: selecting the fashion item that matches the query one. As illustrated in Fig. 2, we show two query instances on the FashionVC dataset, where the items with a green box are ground-truths. In the first case, we give the model a woman's blouse, and the model successfully selects the ground-truth at the first rank. It can be observed that the model identifies that the color of the first-ranked jeans matches the query blouse. Our model also successfully identifies that the 7th-ranked jeans are for men and thus gives them the lowest score. In the second case, the model gives a relatively high score to the ground-truth item.
However, the main reason that our model gives a higher score to the first item is probably the color attribute. For the items ranked 5-7, we think our model successfully identifies that their shapes do not match the query skirt.

Conclusion
In this work, we introduced a novel category-aware neural model, CA-FME, to model fashion compatibility notions. It not only captures cross-category compatibility by constructing category relation embeddings but also preserves intra-category diversity among items by building relation-specific projection spaces. To optimize our model, we further introduced a weighted negative sampling strategy to identify high-quality negative instances, which consequently helps our model infer discriminative representations. In addition, although in this paper we mainly study the compatibility of tops and bottoms, the approach can easily be generalized to arbitrary types of clothing items. Extensive experiments were conducted on two public fashion datasets, which show that our CA-FME model can significantly outperform all the state-of-the-art methods on fashion recommendation.
2020-05-12T13:10:55.858Z
2020-04-17T00:00:00.000
{ "year": 2020, "sha1": "358840115ea9660f510ed8b50f1b3e10e9185cbd", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-47426-3_1.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "cc860539c12079ee4ef13795d275472ced3344f9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
54489910
pes2o/s2orc
v3-fos-license
Reinvestigating the status of malaria parasite (Plasmodium sp.) in Indian non-human primates Many human parasites and pathogens have closely related counterparts among non-human primates. For example, non-human primates harbour several species of malaria causing parasites of the genus Plasmodium. Studies suggest that for a better understanding of the origin and evolution of human malaria parasites it is important to know the diversity and evolutionary relationships of these parasites in non-human primates. Much work has been undertaken on malaria parasites in wild great Apes of Africa as well as wild monkeys of Southeast Asia however studies are lacking from South Asia, particularly India. India is one of the major malaria prone regions in the world and exhibits high primate diversity which in turn provides ideal setting for both zoonoses and anthropozoonoses. In this study we report the molecular data for malaria parasites from wild populations of Indian non-human primates. We surveyed 349 fecal samples from five different Indian non-human primates, while 94 blood and tissue samples from one of the Indian non-human primate species (Macaca radiata) and one blood sample from M. mulatta. Our results confirm the presence of P. fragile, P. inui and P. cynomolgi in Macaca radiata. Additionally, we report for the first time the presence of human malarial parasite, P. falciparum, in M. mulatta and M. radiata. Additionally, our results indicate that M. radiata does not exhibit population structure probably due to human mediated translocation of problem monkeys. Human mediated transport of macaques adds an additional level of complexity to tacking malaria in human. This issue has implications for both the spread of primate as well as human specific malarias. Introduction In the last two decades much work has been done to understand the evolutionary origin of human malarial parasite. Molecular phylogeny of Plasmodium infecting primates suggests that the principal malaria causing pathogens in humans (P. falciparum, P. vivax, P. malariae, and P. ovale) are related to Plasmodium infecting non-human primates and have multiple, independent evolutionary origins [1][2][3][4]. For example, studies indicate that P. falciparum is of gorilla origin and not of ancient human origin [5]. The largely Asian malarial parasite P. vivax also appears to have an African origin as it is related to Plasmodium infecting the great apes of Africa [6]. These studies indicate that to understand the origin, evolution and transmission of these primate-derived human pathogens it is imperative that we understand the diversity and phylogenetic affinity of these pathogens in their natural hosts, the non-human primates (henceforth referred to as primates). Additionally, continuous monitoring of Plasmodium diversity in wild primates can alert us to new Plasmodium species that might spread to humans. For example, the recently detected knowlesi malaria in human from Southeast Asia was acquired from wild macaques which serve as their reservoir hosts [7][8][9][10][11]. Studies have also shown that primates serve as reservoirs for Plasmodium recently acquired by humans from primates [12][13][14]. For example, P. falciparum-related pathogens can naturally circulate in some monkey populations in Africa [15,16]. Additionally, now there is evidence of multiple transfer of Plasmodium from human to primates [4,[15][16][17][18][19][20]. 
This is because, even if malaria is completely eradicated from humans, in the long run animal reservoirs can provide source for recurrent malaria infection [21][22][23]. Such anthropozoonoses (pathogens transmitted from humans to animal populations) also have important conservation implication given many primate species are threatened or endangered and their populations might decline due to spread of human malaria to primates. With over 15 species of primates, India is among the countries with high primate diversity [24]. Out of these 15 species three are widely distributed while the remaining species have restricted distributions. The widely distributed species include, Hanuman langur (Semnopithecus sp.) distributed all over India, rhesus macaque (Macaca mulatta) distributed in north India, and bonnet macaque (Macaca radiata) distributed in South India. These monkeys are quite common across India and also found in cities and villages. Additionally, they are considered sacred and are often provisioned by humans. While in other parts of India these monkeys' raid crops and are considered as pests [25]. Thus, there is much interaction between humans and primates in India which in turn provides ample opportunity for disease transmission. However, very little is known about prevalence, diversity and spread of malarial parasites in Indian primates. Till date at least three primate specific Plasmodium species have been reported from M. radiata in South India, these include P. inui, P. cynomolgi and P. fragile [26][27][28][29]. In Sri Lanka these parasites have been reported from M. sinica which is closely related to M. radiata. Additionally, P. cynomolgi has also been reported from langurs (Semnopithecus priam) in Sri Lanka [30,31]. Both P. inui and P. cynomolgi also infect many Southeast Asian primates, whereas P. fragile is endemic to the macaques of India and Sri Lanka. Interestingly, most of these reports of malaria in wild Indian primates are from the high rainfall regions of Southwest India probably due to the restricted distribution of the vector, Anopheles elegans, which is confined to evergreen forests [32][33][34]. Given that India is one of the major malaria prone regions in the world and exhibits high primate species richness, it is plausible that human Plasmodium (particularly P. falciparum and P. vivax) might have been transferred to Indian primates and these primate populations might also represent a source for recurring human malarial infection. Consequently, Indian primates might represent both source and sink of Plasmodium infecting humans. Furthermore, Indian primates may harbour many more Plasmodium species than that has been reported thus far. For example, genetic studies suggest that there exist numerous "cryptic" species of Plasmodium and other genera of malarial parasites in lizards and birds, hence the current estimates of the diversity of malaria causing protists are likely to be low [1]. The close interaction between humans and primates in India might facilitate both zoonoses and anthropozoonoses. Additionally, the unregulated human mediated translocation of "problem monkeys" might further hasten the spread of these pathogens. Till date there has been no published report on Plasmodium genetic diversity and their phylogenetic affinity in free ranging Indian primates. However, such studies have been undertaken on Southeast Asian (SEA) primates [7,[35][36][37][38]. 
Thus, here we attempt to address the following questions: What are the different species of Plasmodium infecting Indian primates? What are the phylogenetic relationships between these Plasmodium species and how are they related to Plasmodium species isolated from other primates as well as humans? What effect has human mediated translocation had on the population structure of widely-distributed host species? To address these questions, we collected liver and spleen tissues, blood and fecal samples from multiple primate species from India. Much of the sample collection was concentrated along Southwest India where simian malaria has been reported previously. These samples were used to amplify both host and parasite markers to better understand host population structure and simian malaria diversity in India. Table. The whole and sometimes portions of the freshly collected fecal material were stored in sterile vials in 70% alcohol or a 1:1ratio with RNA later at room temperature in field conditions and were stored in laboratory at -20˚C until DNA extraction. Samples Blood and tissue samples. In the case of Macaca radiata from Kerala and few from Karnataka blood and tissue samples were provided by other researchers. Twenty seven samples from Waynad (11 liver, 8 spleen and 8 blood) and twelve blood samples from Kolpetta, were collected by Kerala forest department from the dead animals found in the forest during the year 2014-15. Fifty-one more blood samples were obtained from captive macaques from Trichur Zoo (Kerala) during 2015. Furthermore during 2015 four blood samples from Karnataka were obtained from Primate Research Centre (PRC) at Indian Institute of Sciences (IISc Bangalore) campus, where the blood from macaques were drawn for routine screening for diseases. Blood sample of a captive M. mulatta infected with P. cynomolgy under laboratory condition was obtained from CDRI Lucknow. Molecular data fecal samples DNA extraction from fecal samples and host DNA detection. DNA from the fecal samples was extracted by QIAamp fast DNA stool kit (Qiagen). Each fecal sample was then tested for the presence of the host DNA by using published set of primers specific for each host species. For M. radiata we used published set of primers LqqF: 5' TCCTAGGGCAATCAGAA AGAAAG and TDKD: 5' CCTGAAGTAGGAACCAGATG, for amplifying the~540 bp of mitochondrial D-loop region [39]. For M. fascicularis umbrosa, and M. mulatta we used another published set of primers Saru-4F: 5' ATCACGGGTCTATCACCCTA and Saru-5r: 5' GGCCAGGACCAAGCCTATTT for amplifying 630 bp of mitochondrial D-loop region [40]. For M. Silenus we utilized the published set of primers D1: 5' GTACACTGGCCTTGTAAACC 3') and D3: 5' CTTATTTAAGGGGAACGTGTGG 3', for hypervariable region-I (HVR-I) to detect the host DNA (amplicon size 650 bp) [41]. For detecting the presence of Semnopithecus hypoleucos DNA in the fecal material the mitochondrial Cytochrome b (Cyt-b) region was amplified using primers Langur_CytbF: 5' ATTATCGCARCCTTCACAATC and L2_CytbR: 5' TTGTGRAGTATRGGTAYRATTGTC (amplicon size 320 bp) [42]. For all the primer pairs the published annealing temperature and PCR cycle conditions were followed. A total of 4μl of the PCR product were visualized by gel electrophoresis on a 2% agarose gel. The samples showing amplification for the host DNA were then subjected to malaria parasite screening and further analysis, while the samples showing no amplification were not included in further analysis. Malaria diagnostic PCR using cytochrome b gene from fecal samples. 
Every fecal sample found positive for the host DNA was then screened for the presence of malaria parasite using primers specifically designed using the PrimerSelect computer program (a component of the DNASTAR, Madison, WI, USA) to amplify 200bp fragment of Plasmodium Cytochrome b (Cyt-b) gene. The Cyt-b external primers Cyt-b3F: 5'GGWCAAATGAGTTATTGRG and Cyt-b 3R: 5'CATAGAATGMACACATAAACC amplified a 350-bp PCR product and internal primers Cyt-b 2F: 5'GGTAGCACWAATCCYTTAGGG and Cyt-b 2R: 5'GGTARAAART ACCATTCWGG amplified 200-bp region of the parasite Cyt-b gene. The primary PCR amplification using the external primers was carried out in a 25μl volume reaction using 2 μl of extracted total genomic DNA, 1.5mM MgCl 2 , 1x PCR buffer, 0.25mM of each deoxynucleoside triphosphate, 0.4mM of each external primer and 0.25 U/μl of NEB taq Polymerase (New England Biolabs Inc.). The primary PCR conditions were: initial denaturation at 95˚C for 5 min, followed by 35 cycles (94˚C for 40 secs, 45˚C for 30 sec and 72˚C for 30 sec) and final extension at 72˚C for 4 min. Second nested PCR was of 25 μl total volume and consisted of2 μl of the primary PCR product, 0.3 μl (1.5 Unit) of Taq DNA Polymerase (New England Biolabs Inc.), 1mM dNTPs, 1 μM of each internal primer. The cycling conditions were same as above except the number of cycles were restricted to 25. The fecal sample from laboratory infected (P. cynomolgy) M. mulatta was utilized as a positive control for the PCRs. Molecular data blood and tissue samples DNA extraction from blood and tissue samples and malaria diagnostics. DNA from blood and tissue samples was extracted using DNeasy blood and tissue kit (Qiagen, Germany) according to the standard protocol. Each sample was then screened for the presence of malaria parasites by nested PCR, using primers for a 1,200 bp fragment of Cyt-b gene that have been used in previous studies [36] and the references therein. The Cyt-b external primers were forward: 5'TGTAATGCCTAGACGTATTCC and reverse: 5'GTCAAWCAAACATGAATATA GAC and the internal primers were Forward: 5' TCTATTAATTTAGYWAAAGCAC and Reverse: 5'GCTTGGGAGCTGTAATCATAAT. The primary PCR amplifications were carried out in a 25μl volume reaction using 20 ng of total genomic DNA, 1.5mM MgCl 2 , 1xPCR buffer, 0.6mM of each deoxynucleoside triphosphate, 0.4mM of each primer, and 0.25 U/μl NEB Taq polymerase (New England Biolabs). The primary PCR conditions were same as mentioned in [36]. A total of 25μl of the secondary PCR was run using 0.1 μl (0.5Unit) of Taq DNA Polymerase (New England Biolabs Inc.), 0.6mM dNTPs, 0.4 μM of each primer (Forward and Reverse). Secondary PCR thermal cycling conditions were the same as mentioned in [36]. The blood sample from laboratory infected (P. cynomolgy) M. mulatta was utilized as a positive control for the PCRs. The 4ul of Cyt-b gene amplified products were visualized through electrophoresis on 2% agarose gel and successfully amplified products were subjected to direct sequencing using ABI 3730 capillary sequencer and identified as Plasmodium using BLAST [43]. Nuclear marker data from parasite. Apart from mitochondrial marker, we further attempted to amplify portions of two nuclear genes (1) MSP-1 42 (encoding a major antigen in the parasite) and (2) 18s rRNA from blood and tissue samples positive for Plasmodium. 1. 
Previously published primer pairs were used to amplify~900 bp region of MSP-1 42 gene Forward: 5'GACCAAGTAACAACGGGAG and Reverse: 5'CAAAGAGTGGCTCAGAACC [36], the PCR was done in a final volume of 25μl with 20 ng of total genomic DNA, 1.5mM MgCl 2 , 1xPCR buffer, 0.6mM of each deoxynucleoside triphosphate, 0.4mM of each primer, and 0.03 U/μlAmpliTaq Gold DNA polymerase (Applied Biosystems, Roche-USA). PCR thermal cycling conditions were the same as mentioned in [36]. 2. For 18s rRNA gene nested PCR was conducted using two sets of primers. For primary PCR we used the published primer pair rPLU1: 5' TCAAAGATTAAGCCATGCAAGTGA and rPLU5: 5 0 CCTGTTGTTGCCTTAAACTCC [44]. For the secondary PCR the presently designed primers were used rUNIF1: 5' TTAAGCCATGCAAGTGAAAGTAT-3' and rUNIR1: 5'-CGGTATCTGATCGTCTTC. The first PCR amplification was carried out in a final volume of 25μl, which included 10pmol of each primer pair, 0.2mM of dNTP, 1 U taq Polymerase (NEB Biolabs), 1x PCR buffer and 20ng of the total genomic DNA. Cycling conditions included an initial denaturation at 95˚C for 5 min followed by 35 cycles of 1 min denaturation at 95˚C, 1 min annealing at 55˚C, 1 min of extension at 72˚C followed by 5 min final extension at 72˚C. 4μl of the primary PCR product was then used as template for the secondary PCR with a final volume of 25μl which included 10pmol of each primer pair, 0.2mM of dNTP, 1 U taq Polymerase (NEB Biolabs) and 1x PCR buffer. Approximately 4μl of PCR product of each amplified DNA fragment for both the genes (18s rRNAandMSP-1 42 gene) was electrophoresed on a 2% agarose gel, utilizing a 100-bp ladder (BangloreGenei) to confirm amplicon size. Successfully amplified products were purified by incubation with Exonuclease-I and Shrimp Alkaline Phosphatase (Fermentas, Life Sciences) in a thermal cycler at 37˚C for 120 min, followed by enzyme inactivation at 85˚C for 15 min and subjected to sequencing from both the ends and identified using BLAST search. Phylogenetic analysis using parasite molecular data Parasite nuclear and mitochondrial markers sequenced from host M. radiata blood and tissue samples were subjected to phylogenetic analyses. Separate alignments of one mitochondrial (Cyt-b) and two nuclear (MSP-1 42 , and 18s rRNA) markers were generated using ClustalW and Muscle as implemented in MEGA 5.2 [45]with manual editing. We used GTR+I+G model for all phylogenetic analyses based on results from JModelTest 2.1.2 [46]. Phylogenetic relationships were estimated using Maximum likelihood and Bayesian methods using RAxML and MrBayes respectively. Bayesian tree was constructed in MrBayes [47]with the following settings: Markov Chain was run for 4x10 6 generations where sampling was performed every 100 generations, the chains were assumed to have converged once the average standard deviation of posterior probability was below 0.01, first 25% of the trees were discarded as burn-in. The ML tree was generated using the program RAxML GUI [48] with 500 bootstrap replicates using rapid bootstrap settings. S1-S3 Tables provide a complete list of the published sequences utilized for the present study phylogenetic analysis. The African primate parasite P. gonderi was used to root the phylogeny for Cyt-band 18s rRNA gene trees, given that previous studies have shown that SEA simian parasites originated in Africa [2,49], while for MSP-1 42 gene P. fragile sequence was used to root the tree as used in previous study [36]. 
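The tree inference above relies on external tools (RAxML for maximum likelihood, MrBayes for Bayesian inference) run on the curated alignments. As a lightweight, purely illustrative complement, the following Biopython sketch computes uncorrected pairwise distances from an alignment and builds a quick neighbor-joining tree; it is a distance-based stand-in for sanity checks, not the GTR+I+G analyses themselves, and the input file name is hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: a Cyt-b alignment in FASTA format (the study aligned
# sequences with ClustalW/MUSCLE in MEGA before running RAxML and MrBayes).
alignment = AlignIO.read("cytb_aligned.fasta", "fasta")

# Uncorrected pairwise distances (p-distances) between sequences.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Quick neighbor-joining tree from the distance matrix; this is only a rough
# sanity check, not a substitute for the ML/Bayesian trees reported here.
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)
```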
Host molecular data using fecal samples Human mediated translocations of host species can significantly alter the distribution of their parasites. To better understand host population structure, we sequenced mitochondrial Dloop region from 82 samples of M. radiata (which showed both the presence and absence of parasite) representing all the states where they are distributed. The PCR components and cycling conditions were same as mentioned above and in [39]. Host phylogenetic analysis. The D-loop sequences of the host (M. radiata) were aligned using Muscle in MEGA 5.2. Based on JModelTest 2.1.2, we used a time reversible model with gamma-distributed substitution rates and a proportion of invariant sites (GTR+G+I) for constructing phylogenetic relationships among M. radiata populations. The phylogenetic inferences were made using Maximum likelihood and Bayesian methods as implemented in RAxML and MrBayes respectively using settings described above. Host haplotype network and IBD model. A minimum spanning network for a complete set of 82 M. radiata mtDNA sequences was estimated using PopART [50] with epsilon parameter set to 0. Further to test whether M. radiata mtDNA data fit the isolation-by-distance (IBD) model of population structure [51] the programme IBD v1.5 [52] was used to determine the correlation between pairwise genetic distances (p distance) and the geographic distances between population pairs. Fecal samples Host identification and sequencing of host mitochondrial genes. A total of 349 fecal samples from five different Indian primate species were collected and tested for the presence of host DNA by sequencing mitochondrial regions (details as given in Table 1 and S1 Table). Maximum number of fecal samples were collected from M. radiata where out of 251fecal samples collected 120 were found positive for the presence of host DNA. Fifteen fecal samples out of 24 samples collected for M. mulatta were found positive for host DNA. For M. fascicularis umbrosa a total 57 fecal samples were collected and 30 were found positive for host DNA, while for Symnopithecus hypoleucus 11 fecal samples were collected from two different locations out of which nine samples were positive for host DNA. Furthermore, four out of six fecal samples collected for Macaca silenus were found positive for host DNA. Details of fecal samples collected from each of the species studied and numbers of samples found positive for host DNA are listed in Table 1. Screening host fecal samples for parasite DNA. Total 178 (host DNA positive) fecal samples of five different Indian primate species collected from different locations of India were tested for the presence of Plasmodium parasite using nested PCR. Interestingly, except M. radiata and one sample from M. mulatta all the fecal samples from other primates were found to be negative for the presence of malaria parasite. Out of total 120 fecal samples tested from M. radiata 19 samples (16%) showed the presence of parasite however, out of 15 samples from M. mulatta tested only one showed the presence of parasite (6.6%) ( Table 1). With nested PCR a 200 bp fragment was obtained from all the parasite positive samples. Sequence comparison with the published data (based on 200 bp region) showed that all the sequences were identical to P. falciparum Cyt-b gene sequences. However, we were not able to sequence any bigger fragment of the parasite genome (mitochondrial or nuclear) from these samples, probably due to poor quality of the DNA obtained from fecal samples. 
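The isolation-by-distance test described above correlates pairwise genetic distances (p-distances from the D-loop sequences) with pairwise geographic distances between populations. The study used the dedicated IBD v1.5 program; the following is only a minimal Mantel-style permutation sketch in Python, with illustrative matrix names, to make the logic of that test explicit.

```python
import numpy as np

def mantel(genetic: np.ndarray, geographic: np.ndarray, n_perm: int = 9999, seed: int = 1):
    """Correlation between two square distance matrices, with a one-tailed
    permutation p-value obtained by shuffling the labels of one matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(genetic, k=1)          # upper-triangle pairs only
    r_obs = np.corrcoef(genetic[iu], geographic[iu])[0, 1]

    count = 0
    n = genetic.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        r_perm = np.corrcoef(genetic[p][:, p][iu], geographic[iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    return r_obs, p_value

# Usage with hypothetical matrices (one row/column per sampling location):
# r, p = mantel(p_distance_matrix, km_distance_matrix)
```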
Blood samples
Screening host blood and tissue samples for parasite DNA. A total of 94 blood and tissue samples from M. radiata collected from four different locations in Southern India were tested for the presence of Plasmodium parasites (Table 2). None of the captive macaques from Trichur zoo were found positive for Plasmodium; however, in wild samples from Waynad and Kolpetta the prevalence of this parasite was very high (see below). Out of 27 blood and tissue samples collected from Waynad, 12 samples (5 spleen, 6 liver and 1 blood) were found positive for parasite DNA, while five samples out of 12 blood samples collected from Kolpetta were parasite positive. Furthermore, the parasite was found in two captive M. radiata macaques from the PRC IISc campus (Table 2).
Plasmodium species identification. Mitochondrial Cyt-b gene sequences obtained from positive samples were subjected to a BLAST search to identify the Plasmodium species. We were able to amplify and sequence 19 Plasmodium Cyt-b gene sequences from Indian M. radiata. BLAST analysis showed 11 P. inui, five P. fragile, two P. falciparum and one P. cynomolgi sequences (Table 2). A total of 61 sequences (published and presently generated) were aligned using ClustalW in MEGA 5.2 (S2 Table) for phylogenetic analysis. The total length of the aligned sequences was 960 positions. Interestingly, no variation was seen among Indian P. inui sequences. However, the divergence between Indian and SEA samples was 0.016 (Table 3). The divergence between Indian and Sri Lankan P. fragile was 0.005 (Table 3). Interestingly, only a single isolate of P. cynomolgi was found in the present study, and it is not very divergent from the rest of the SEA sequences (0.004, Table 3). The P. falciparum partial Cyt-b gene sequence isolated from M. radiata was identical to P. falciparum found in humans. To find any difference we would need to sequence the whole mitochondrial genome from these isolates, but we were unable to do so, possibly due to very low parasitemia in the presently studied macaques.
Plasmodium phylogenetic analysis using Cyt-b gene sequences. We used Maximum likelihood and Bayesian methods to construct the phylogenetic tree of Plasmodium species found in Indian macaques. The overall phylogeny was similar for the two methods, thus only the Bayesian tree is shown in Fig 2. Overall, this phylogeny is similar to those obtained in previous studies [36], but the present phylogeny additionally includes the Cyt-b gene sequences of P. inui, P. fragile and P. cynomolgi generated in the present study.
Table 2. Details of Plasmodium species detected using blood and tissue samples of M. radiata.
Plasmodium phylogenetic analysis using nuclear genes. The nuclear markers were tested for amplification from fecal samples as well; however, they did not yield any positive results. Furthermore, to test the robustness of the phylogenetic relationships depicted in Fig 2, we sequenced two nuclear genes (MSP-1 42 and 18s rRNA) from the blood and tissue samples of these macaques. We could obtain six sequences of the MSP-1 42 gene (five P. inui and one P. cynomolgi) and nine sequences of 18s rRNA (eight P. inui, one P. cynomolgi) from M. radiata-derived Plasmodium samples (Table 2). However, we were unable to obtain sequences of these genes from the rest of the samples, probably because of low parasitemia. The MSP-1 42 gene sequences obtained from six (five P. inui and one P.
cynomolgi) Indian Plasmodium isolates were compared with the published sequences (S3 Table) available for simian malaria parasites. Contrary to Cyt-b gene sequences, MSP-1 42 gene sequences were quite divergent among Indian Plasmodium species (0.015, Table 3) and the divergence between Indian and SEA P. inui sequences was quite high (0.036, Table 3). In case of P. cynomolgi the mean divergence between Indian and SEA sequences was 0.029 (Table 3). All the newly generated Plasmodium MSP-1 42 gene sequences were deposited in GenBank sequence repository under accession numbers: P inui (MH974141, MH974143-MH974146), P. cynomolgi (MH974142). A 429 bp fragment of 18s rRNA gene was sequenced from two Plasmodium species (P. inui and P. cynomolgi) derived from Indian macaques. Similar to mitochondrial Cyt-b gene, eight Indian P. inui isolates of nuclear 18s rRNA gene showed no genetic variation among them and these sequences exhibited very low divergence from other SEA P. inui sequences (0.004, Table 3). The single isolate of P. cynomolgi from India showed nucleotide substitutions at six sites when compared with the SEA P. cynomolgi isolates with the divergence value of 0.10 (Table 3). Like in the mitochondrial tree the nuclear MSP-142 tree showed Indian P. inui and P. cynomolgi sequences were nested within their SEA counterparts (Fig 2 and Fig 3). In the 18s rRNA gene tree P. cynomolgi from Indian and SEA along with P. fragile formed a clade (Fig 4). However, the position of P. inui was unresolved. Overall the 18s rRNA gene tree was uninformative with respect to the position of P. inui and P. cynomolgi probably due to lack of sequences of other Plasmodium species. All the newly generated Plasmodium 18s rRNA gene sequences were deposited in GenBank sequence repository under accession numbers: P. inui (MH917236-MH917243), P. cynomolgi (MH917235). Phylogenetic analysis of host Since we have sampled M. radiata from across the Southern India, covering most of its distribution area, we further wanted to determine if M. radiata exhibits any population structure. Since macaque groups are characterized by female philopatry [53], one might expect geographically structured mtDNA haplotype distribution. Nevertheless, "problem animals" are routinely translocated across India to curb monkey menace. This practice might in turn facilitate gene flow between previously isolated populations and aid in the spread of pathogens. Thus, we utilized the mitochondrial D-loop sequences to test if human mediated translocations might have altered the population structure of M. radiata. The Bayesian phylogenetic tree based on D-loop sequences is shown in S1 Fig. The tree topology does not support climate specific clustering of mtDNA clades, in that dry and wet zone individuals do not form distinct clades. Thus, the phylogeographic pattern suggests that macaques are freely moving across the dry and wet zones of the country. All the newly generated Macaca radiata D-loop sequences were deposited in GenBank sequence repository under accession numbers MH974147-MH974228. Host haplotype network and IBD model A minimum spanning network of 42 M. radiata mtDNA haplotypes is depicted in S2 Fig. this network includes M. radiata haplotypes sampled across its distribution range in Southern India. The network analysis did not reveal geographical clustering of related haplotypes and in most cases very divergent and distantly related haplotypes were retrieved in a given population. 
Furthermore, in some cases identical haplotypes were derived from geographically distant populations. Thus, haplotype network also suggests a lack of population structure in this species. The results of IBD test was positive but very low (r = 0.298) suggesting week correlation between genetic and geographic distance, however this result was not significant (p = 0.09). Discussion Indian macaques, chiefly M. radiata, have been known to harbour at least three Plasmodium species including P. fragile, P.inui and P. cynomolgi. However much of this work was done in the last century and since early 1980s there have been no studies on primate malaria in India. Here using Plasmodium sequences data isolated from the primate host and phylogenetic analyses we confirm the presence of these pathogens in M. radiata. Additionally, we report for the first time the presence of human malarial parasite, P. falciparum, in M. mulatta and M. radiata. This is also the first report of DNA sequence data obtained from Plasmodium species infecting wild populations of Indian macaques. Overall the P. inui and P. cynomolgi sequences from India branch with their SEA counterparts in both nuclear and mitochondrial trees. Nevertheless, in some comparisons divergence between Indian and SEA sequences is higher than mean divergence among Indian sequences. However, the sample size is very limited. More studies need to be undertaken to determine if Malaria parasites of Indian non-human primates the Indian isolates represent a different species/subspecies. The Plasmodium endemic to Indian subcontinent, P. fragile branches with sequences from Sri Lanka and this species also exhibit high intraspecific variation. Among these three parasites, P. cynomolgi has been detected in several humans in the Nicobar Islands [54] and recently in a patient in Malaysia [55]. This raises the specter of yet another primate malarial parasite exhibiting zoonoses. Additionally, both P. inui and P. cynomolgi have been shown to infect humans in laboratory conditions [27,56] and references therein. Thus, it is conceivable that these parasites have the potential to infect humans in mainland India. Presence of human malaria parasite in Indian macaques The presence of human malarial parasite (P. falciparum) in M. mulatta and M. radiata was an unexpected finding but not surprising given such observations have been made before in other primates [15,16]. However, this finding raises few questions, whether the parasite was able to infect the Indian macaques or it is because of the detection of pre-erythrocytic stage of the parasite. Since, the present study reports parasites from feces and the liver tissue only, one can assume that the detection was due to the pre-erythrocytic stage of the parasite. This is because asymptomatic pre-erythrocytic development of the mammalian malaria parasite occurs in liver, and the parasite might reach feces via bile. Thus, the PCR based method might pick up preerythrocytic stage of the parasites rather than erythrocytic infection in blood [57]. However, one cannot rule out the possibility that the parasite is able to infect the macaque RBCs, as the sample size of testing the blood samples was not enough to confirm. Given this scenario, there is an urgent need for screening more blood samples from Indian macaques using traditional methods of detecting blood stage parasites as well as molecular methods. 
Although, due to reasons like submicroscopic infections, morphological indistinguishable forms of Plasmodium and primate samples being opportunistic (as many primate species are endangered), the molecular based approaches are favored over the morphological data [4] (and references there in). Previous studies have revealed that African primates (Apes) harbor at least six-host specific lineages representing distinct Plasmodium species within Laverania subgenus [17]. One of these lineages is closely related to human P. falciparum and thus referred as P. falciparum like malaria parasites. Since Ape specific P. falciparum were found to be more genetically diverse than human P. falciparum, it was hypothesized that human P. falciparum has evolved from P. falciparum like parasites found in Gorilla, by a single cross species transmission event [5]. These two P. falciparum lineages can be differentiated based on four SNPs from mitochondrial genome [5]. Interestingly, one of the recent studies from India found that few Indian human P. falciparum isolates shared one of the SNPs (out of the four above mentioned distinguishing SNPs) with P. falciparum like isolates, these samples also showed two novel SNPs. Moreover, they also found that Indian human P. falciparum bear high genetic and haplotype diversity. The authors conclude that Indian human P. falciparum might belong to the ancestral range of the species and it is likely that cross species transmission might have happened in India [58,59]. Our finding of presence of P. falciparum in Indian macaques to some extent supports this hypothesis. However, based on the small fragment that we could sequence from macaques we are unable to determine if these sequences belong to human specific P. falciparum or P. falciparum like lineages isolated from non-human primates. Thus, for further understanding of P. falciparum origin and its hostswitch mechanisms, more studies targeting Indian non-human primates are required. Geographic distribution of malaria parasites in Indian macaques The distribution and spread of malarial parasites is tightly linked to the distributions of its vector and host. The only host known thus far for the primate specific malarial from Southern India is M. radiata which is distributed over much of peninsular India. However, most studies, including ours, have reported these parasites in M. radiata distributed in the high rainfall regions (wet zone) of Southwest India. According to [32] the distribution of macaque malaria appears to be governed by the distribution of the Leucosphyrus group of mosquitoes, which are largely confined to tropical evergreen forest. Evergreen forests in turn are restricted to Southwest India and the rest of the peninsula has semi-arid climate (here referred to as dry zone). The mosquito species Anopheles elegans has been implicated as the vector for these parasites in Southwest India. Nevertheless, in our study we also detected P. inui and P. cynomolgi from a captive population in Bangalore (PRC, IISc campus, Bangalore). The provenance of this captive population is not known but is most likely to have been locally sourced. Interestingly a new parasite named P. osmaniae was reported from free ranging monkeys from Hyderabad district in 1960 [60]. Later this species was reclassified as P. inui (see [27,56]. Both Bangalore and Hyderabad are in the dry zone in central peninsular India. These observations suggest that P. inui and P. cynomolgi might be more widely distributed in Southern India than previously thought. 
How do we explain the presence of these parasites in M. radiata from the dry zone of peninsular India? There are two possible explanations. One plausible scenario is that the vector A. elegans has extended its range into the dry zone, carrying the parasite with it and infecting host populations there. Alternatively, infected members of the host species might have dispersed from the wet zone of peninsular India into the dry zone, where local Anopheles species (other than A. elegans) might have served as vectors. It is well known that under laboratory conditions many other Anopheles species can transmit various macaque malarias [27,56]. To explore these scenarios, we examined the population structure and phylogeography of the host species, M. radiata.
Phylogeography of M. radiata
Macaca radiata (family Cercopithecidae) is a widely distributed and common species of macaque endemic to South India. It occurs in both the wet and dry zones of peninsular India and is found in forested areas as well as in human-dominated landscapes. Like most other cercopithecines, these macaques live in matrilineal troops in which female offspring remain in their natal territory while male offspring disperse [61]. Such a social structure, termed female natal philopatry, results in geographical clustering of mtDNA haplotypes, because mtDNA is maternally inherited and is therefore regionally restricted by female philopatry. Such mtDNA structuring has been reported in many species of macaques [62,63]. However, human-mediated transport of macaques can disrupt this mtDNA population structure. Our study suggests that M. radiata does not exhibit any population structure in its mtDNA. For example, there is no segregation of wet- and dry-zone populations; they are interspersed in the phylogeny (S1 Fig). Furthermore, there is no geographical structuring of mtDNA haplotypes across the species' range, in that samples collected from 16 different locations are distributed across the network (S2 Fig). The IBD analysis also does not support a significant correlation between geographical and genetic distances (a minimal sketch of this kind of Mantel-style IBD test is given at the end of this discussion). We believe the lack of population structure in these macaques is largely due to human-mediated dispersal. Across India, "problem monkeys", mainly from urban areas, are trapped and released in their "native" forest habitat. These urban monkeys are usually unable to survive in forests, as they have become acclimatized to foraging in urban environments, and the translocated animals often move to nearby human habitations to forage. Such long-distance translocation produces the pattern seen in our analysis, wherein individuals from distant locations have identical haplotypes (Tumkur and Waynad) and individuals from the same location have very divergent haplotypes (Chickballapur). Human-mediated transport of macaques adds an additional level of complexity to tackling malaria. This issue has implications for the spread of malaria in both primates and humans. In the case of primate-specific malaria, such unnatural translocations would facilitate the wider distribution of these pathogens in their host species, which in turn would provide more opportunities for zoonotic transmission. Given that India is targeting complete malaria elimination by 2030 [64], our study recommends intensive spatial and temporal monitoring of primate malaria as part of a holistic approach to controlling human malaria.
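The isolation-by-distance result reported above (a Mantel-type correlation of genetic and geographic distances, r = 0.298, p = 0.09) can be illustrated with a minimal permutation test. The sketch below is not the authors' pipeline: the two distance matrices are invented placeholders, and only the general procedure, correlating the matrices and permuting sample labels to assess significance, reflects the analysis described.

```python
"""Minimal Mantel-test sketch for isolation by distance (IBD).

Correlates a genetic distance matrix with a geographic distance matrix and
assesses significance by permuting sample labels. The matrices here are
small illustrative placeholders, not the study's data.
"""
import numpy as np

def mantel_test(genetic_d, geographic_d, n_perm=9999, seed=0):
    """Return (r, p) for the correlation of two square distance matrices."""
    g = np.asarray(genetic_d, dtype=float)
    d = np.asarray(geographic_d, dtype=float)
    iu = np.triu_indices_from(g, k=1)            # use each pair of sites once
    r_obs = np.corrcoef(g[iu], d[iu])[0, 1]

    rng = np.random.default_rng(seed)
    n = g.shape[0]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        g_perm = g[np.ix_(perm, perm)]           # permute rows and columns together
        if abs(np.corrcoef(g_perm[iu], d[iu])[0, 1]) >= abs(r_obs):
            count += 1
    p = (count + 1) / (n_perm + 1)
    return r_obs, p

if __name__ == "__main__":
    # Toy matrices for four sampling sites (units arbitrary).
    genetic = np.array([[0.00, 0.02, 0.05, 0.04],
                        [0.02, 0.00, 0.03, 0.06],
                        [0.05, 0.03, 0.00, 0.02],
                        [0.04, 0.06, 0.02, 0.00]])
    geographic = np.array([[0, 120, 340, 410],
                           [120, 0, 250, 380],
                           [340, 250, 0, 90],
                           [410, 380, 90, 0]], dtype=float)
    r, p = mantel_test(genetic, geographic)
    print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```

Permuting rows and columns together preserves the pairing structure of the distance matrix under the null hypothesis of no association between genetic and geographic distance.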
2018-12-17T19:10:38.034Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "80d780e46b1f17292304ea6650faa21d5512a680", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0006801&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80d780e46b1f17292304ea6650faa21d5512a680", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261994777
pes2o/s2orc
v3-fos-license
Impact of Age and Variant Time Period on Clinical Presentation and Outcomes of Hospitalized Coronavirus Disease 2019 Patients
Objective To evaluate the impact of age and COVID-19 variant time period on morbidity and mortality among those hospitalized with COVID-19. Patients and Methods Patients from the American Heart Association’s Get With The Guidelines COVID-19 cardiovascular disease registry (January 20, 2020-February 14, 2022) were divided into groups based on whether they presented during periods of wild type/alpha, delta, or omicron predominance. They were further subdivided by age (young: 18-40 years; older: more than 40 years), and characteristics and outcomes were compared. Results The cohort consisted of 45,421 hospitalized COVID-19 patients (wild type/alpha period: 41,426, delta period: 3349, and omicron period: 646). Among young patients (18-40 years), presentation during delta was associated with increased odds of severe COVID-19 (OR, 1.6; 95% CI, 1.3-2.1), major adverse cardiovascular events (MACE) (OR, 1.8; 95% CI, 1.3-2.5), and in-hospital mortality (OR, 2.2; 95% CI, 1.5-3.3) when compared with presentation during wild type/alpha. Among older patients (more than 40 years), presentation during delta was associated with increased odds of severe COVID-19 (OR, 1.2; 95% CI, 1.1-1.3), MACE (OR, 1.5; 95% CI, 1.4-1.7), and in-hospital mortality (OR, 1.4; 95% CI, 1.3-1.6) when compared with wild type/alpha. Among older patients (more than 40 years), presentation during omicron was associated with decreased odds of severe COVID-19 (OR, 0.7; 95% CI, 0.5-0.9) and in-hospital mortality (OR, 0.6; 95% CI, 0.5-0.9) when compared with wild type/alpha. Conclusion Among hospitalized adults with COVID-19, presentation during a time of delta predominance was associated with increased odds of severe COVID-19, MACE, and in-hospital mortality compared with presentation during wild type/alpha. Among older patients (aged more than 40 years), presentation during omicron was associated with decreased odds of severe COVID-19 and in-hospital mortality compared with wild type/alpha.
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus responsible for the Coronavirus Disease 2019 (COVID-19) pandemic, was first discovered in Hubei Province, China, in late 2019.1 Since then, the virus has spread rapidly around the globe. The initial strains discovered included the wild type and alpha (B.1.1.7) variants. The delta variant (B.1.617) was discovered in India in late 2020 and became the dominant strain in the United States around the beginning of July 2021. More recently, the omicron variant was discovered in South Africa in November 2021 and became the dominant variant in the United States at the end of 2021.2,3 Previous studies have found considerable differences between these 3 predominant variants.5-7 Throughout the pandemic, age has also emerged as a significant predictor of COVID-19 outcomes. For example, a large study evaluating the impact of age across 45 countries reported a log-linear increase in the infection fatality ratio in those older than 30 years.8 In another study of 5279 people from New York City, age was the strongest risk factor for hospital admission and among the strongest predictors of critical illness.9
Although previous studies have compared one variant to another, few have compared all major variants to each other in a national population. Furthermore, few large studies have evaluated the differential impact of age across all 3 variants. Herein, we address this evidence gap and utilize a large national US database to evaluate the impact of age and variant time period on patient characteristics, treatment patterns, and clinical outcomes among patients hospitalized with COVID-19.
METHODS
The American Heart Association's (AHA) Get With The Guidelines (GWTG) COVID-19 cardiovascular disease registry was created to serve as an inpatient data repository for hospitalized adult COVID-19 patients aged 18 years or older, with the aim of supporting quality improvement and research. The registry was launched in April 2020 and includes 134 hospitals, health centers, and medical centers from 34 states across the United States. An institutional review board either waived review or approved patient enrollment at the participating centers. Full details of the registry have been previously described.10 Using the AHA's GWTG COVID-19 registry, we identified patients hospitalized across the United States with a diagnosis of COVID-19. Patients were divided temporally into different COVID-19 variant time periods depending on their time of admission to the hospital. Patients admitted between January 20, 2020 and July 5, 2021 were grouped into the wild type/alpha period, between July 6, 2021 and December 27, 2021 into the delta period, and between December 28, 2021 and February 14, 2022 into the omicron time period. The alpha variant became the dominant strain in the United States around March 2021, though it did not cause a marked spike in new cases; the wild type and alpha waves were therefore combined for the purposes of this analysis. Date cutoffs were chosen a priori on the basis of the date at which the particular variant became the predominant strain in the United States.3 Patient demographic characteristics, medical comorbidities, vital signs on hospital presentation, admission symptoms, medications before admission, laboratory reports, therapies received during hospitalization, procedures performed during hospitalization, and outcomes during hospitalization were compared between age groups within the variant time periods and across time periods within age strata. Continuous and categorical variables were compared across the variants using Kruskal-Wallis and chi-square tests, respectively.
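As a rough illustration of the grouping and descriptive comparisons just described, the sketch below buckets admissions into the three variant time periods using the cutoff dates above and runs Kruskal-Wallis and chi-square comparisons. The DataFrame columns and toy records are illustrative only; they do not reflect the GWTG registry schema, and the published analysis was performed in SAS rather than Python.

```python
"""Sketch: bucketing admissions into variant time periods and comparing groups.

Date cutoffs follow the Methods section; the toy data and column names are
placeholders, not the registry's schema.
"""
import pandas as pd
from scipy import stats

CUTOFFS = [
    (pd.Timestamp("2020-01-20"), pd.Timestamp("2021-07-05"), "wild type/alpha"),
    (pd.Timestamp("2021-07-06"), pd.Timestamp("2021-12-27"), "delta"),
    (pd.Timestamp("2021-12-28"), pd.Timestamp("2022-02-14"), "omicron"),
]

def variant_period(admit_date: pd.Timestamp) -> str:
    """Label an admission date with the variant time period it falls in."""
    for start, end, label in CUTOFFS:
        if start <= admit_date <= end:
            return label
    return "outside study window"

# Toy admissions (illustrative only).
df = pd.DataFrame({
    "admit_date": pd.to_datetime(["2020-05-01", "2021-08-15", "2022-01-10", "2021-09-02"]),
    "age": [67, 34, 71, 45],
    "in_hospital_death": [1, 0, 0, 1],
})
df["period"] = df["admit_date"].apply(variant_period)
df["age_group"] = pd.cut(df["age"], bins=[17, 40, 200], labels=["18-40", ">40"])

# Continuous variable across periods: Kruskal-Wallis test.
groups = [g["age"].values for _, g in df.groupby("period")]
print(stats.kruskal(*groups))

# Categorical outcome across periods: chi-square on the contingency table.
print(stats.chi2_contingency(pd.crosstab(df["period"], df["in_hospital_death"])))
```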
Using adjusted logistic regression, we evaluated the association of COVID-19 variant time period with in-hospital patient outcomes. Models were run separately for those aged 18-40 years (young) and for those aged more than 40 years (older). Further models were run evaluating the impact of age category (older vs young) on in-hospital outcomes. Outcomes included severe COVID-19, major adverse cardiovascular events (MACE), thromboembolic disease (deep vein thrombosis or pulmonary embolism), and in-hospital mortality. Severe COVID-19 was defined as patients experiencing mechanical ventilation, cardiac arrest, or death while hospitalized. MACE was defined as a composite of myocardial infarction, new-onset heart failure, stroke, or death while hospitalized. Last, multivariate cubic spline models were created to continuously model the impact of age on the predicted probability of severe COVID-19, MACE, and in-hospital mortality across all 3 COVID-19 variant time periods. Splines were constructed using 3 knots placed at evenly spaced percentiles. Logistic regression and spline models were adjusted for age (where appropriate), sex, body mass index, race or ethnic group, payment source, and medical comorbidities (atrial fibrillation or flutter, cancer, cerebrovascular disease, chronic kidney disease, congenital heart disease, coronary artery disease, diabetes mellitus, dyslipidemia, heart failure, hypertension, immune disorders, peripheral artery disease, pulmonary embolism, pulmonary disease, and smoking). All statistical analyses were conducted using SAS on the AHA's precision medicine platform.11 A 2-sided P<.05 was set as the threshold for statistical significance. A simplified, illustrative sketch of this modelling approach is given below.
RESULTS
The overall cohort consisted of 45,421 patients hospitalized with a confirmed diagnosis of COVID-19 between January 20, 2020 and February 14, 2022. Of these, 41,426 were admitted during the wild type/alpha variant time period, 3349 during the delta period, and 646 during the omicron period. The median age (interquartile range) of the cohort was 63 years (50-75), and 46.8% of the group was female. The general demographic characteristics and medical comorbidities stratified by age group (young: 18-40 years and older: more than 40 years) and variant time period are shown in Table 1. Of the overall hospitalized cohort, 21.4% of patients were non-Hispanic Black or African American, and 19.4% of the group was Hispanic. Hospitalized patients were more likely to be non-Hispanic Black or Hispanic, and less likely to be non-Hispanic White, during the wild type/alpha period compared with the delta and omicron periods in both age groups (Table 1).
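The following sketch illustrates the kind of adjusted logistic regression and cubic-spline-in-age modelling described in the Methods. It is not the published analysis, which was run in SAS and adjusted for the full comorbidity list; the data here are simulated and the column names are placeholders.

```python
"""Sketch: adjusted logistic regression and a cubic-spline term for age.

Simulated data and generic column names only; the real models used the
registry variables and a longer covariate list.
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "severe_covid": rng.integers(0, 2, n),
    "period": rng.choice(["wild_alpha", "delta", "omicron"], n),
    "age": rng.integers(18, 95, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(29, 6, n),
    "diabetes": rng.integers(0, 2, n),
})

# Odds of severe COVID-19 by variant period, adjusted for covariates;
# wild type/alpha is the reference level.
m1 = smf.logit(
    "severe_covid ~ C(period, Treatment(reference='wild_alpha')) + age + female + bmi + diabetes",
    data=df,
).fit(disp=False)
print(np.exp(m1.params))        # odds ratios
print(np.exp(m1.conf_int()))    # 95% confidence intervals

# Age modelled continuously with a natural cubic regression spline
# (3 degrees of freedom, knots at quantiles).
m2 = smf.logit("severe_covid ~ cr(age, df=3) + female + bmi + diabetes", data=df).fit(disp=False)
df["p_severe"] = m2.predict(df)  # predicted probability across the age range
```

Exponentiating the fitted coefficients and their confidence limits gives odds ratios and 95% CIs of the form reported in the results, and the predicted probabilities from the spline model correspond to the age curves plotted in the figures.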
When evaluating therapies received during hospitalization, patients in both age strata presenting during the wild type/alpha period were more likely to be treated with convalescent serum, hydroxychloroquine, and azithromycin, whereas patients presenting during the delta period were more likely to be treated with mechanical ventilation, inotropes or vasopressors, corticosteroids, remdesivir, and tocilizumab when compared with the other variant time periods (Table 3). When comparing age groups, older patients were more likely to receive corticosteroids, remdesivir, tocilizumab, mechanical ventilation, and inotropes or vasopressors when compared with younger patients across all 3 periods. In univariate analysis, older patients aged more than 40 years presenting during delta were found to experience more acute myocardial infarction, deep vein thrombosis or pulmonary embolism, in-hospital shock, and in-hospital mortality when compared with the other time periods. Younger patients (18-40 years) were more likely to experience deep vein thrombosis or pulmonary embolism and in-hospital mortality during the delta period compared with the other waves. Rates of myocarditis and new-onset heart failure were low across all 3 variants (Table 3). Rates of missingness are shown in the Supplemental Table, available online at http://www.mcpiqojournal.org.
In adjusted logistic regression models evaluating the association of COVID-19 variant time period with in-hospital outcomes among patients 18-40 years, patients presenting during the delta period had increased odds of severe COVID-19 (odds ratio [OR], 1.64; 95% CI, 1.29-2.08), MACE (OR, 1.76; 95% CI, 1.25-2.49), and in-hospital mortality (OR, 2.24; 95% CI, 1.51-3.32), and decreased odds of discharge to home (OR, 0.69; 95% CI, 0.54-0.89), when compared with patients presenting during the wild type/alpha period. Young patients aged 18 to 40 years presenting during the omicron period were not found to have different odds of severe COVID-19, MACE, in-hospital mortality, or thromboembolic disease when compared with those presenting during the wild type/alpha period (Table 4). Among older patients (aged more than 40 years), patients presenting during the delta period were found to have increased odds of severe COVID-19 (OR, 1.2; 95% CI, 1.1-1.3), MACE (OR, 1.5; 95% CI, 1.4-1.7), thromboembolic disease (95% CI, 1.53-2.11), and in-hospital mortality (OR, 1.44; 95% CI, 1.29-1.62), as well as increased odds of discharge to home (OR, 1.15; 95% CI, 1.05-1.27), when compared with patients presenting during the wild type/alpha period. Older patients aged more than 40 years presenting during the omicron period had decreased odds of severe COVID-19 (OR, 0.66; 95% CI, 0.51-0.87) and in-hospital mortality (OR, 0.64; 95% CI, 0.45-0.91), but not MACE or thromboembolic disease, when compared with patients presenting during the wild type/alpha period (Table 4). In adjusted spline models continuously modeling the impact of age, increasing age was found to be associated with an increased predicted probability of severe COVID-19, MACE, and in-hospital mortality across the wild type/alpha, delta, and omicron time periods (Figures 1-3).
COVID-19 Evolution
The initial strain of COVID-19 (wild type) was first discovered in Hubei Province, China, in December 2019.1 The virus itself was found to have 90 homotrimeric spike proteins on its membrane, with a mechanism of infectivity involving spike protein binding to the angiotensin-converting enzyme 2 (ACE2) receptor on host cells.12 As the wild type strain spread around the world, it began to acquire mutations, changing both its infectivity and severity patterns. Strains with mutations became known as variants, with alpha, delta, and omicron being the 3 main variants to date in the United States.3 The alpha variant was found to have over 12 main mutations in its spike protein, including 7 amino acid substitutions and 2 deletions.12,13 These mutations were shown to increase binding affinity, cell entry, infectivity, and transmissibility.12 The delta variant was found to have further spike protein mutations, with resultant increased transmissibility, viral load, and ability to evade CD8 T cells.6 Last, the omicron variant was found to possess a marked degree of sequence variation, with at least 32 mutations in the spike protein alone.5,12 These genetic mutations have created distinct patterns of transmissibility, infectivity, and severity, and indeed, previous studies have shown clinical differences between the different COVID-19 strains.20-24
Clinical Presentation, Treatment Patterns, and Outcomes
These previous findings are consistent with the results of our study. For example, patients of both age strata presenting during the delta period were more likely to present with hypoxia, dyspnea, and interstitial infiltrates when compared with the other strains, whereas those presenting during omicron presented with milder symptoms such as nasal congestion and were less likely to present with loss of smell or taste. With regard to treatments received during hospitalization, those admitted during the delta period were more likely to receive mechanical ventilation, corticosteroids, remdesivir, and tocilizumab when compared with the other variants. In adjusted models, both younger and older patients presenting during delta had increased odds of severe COVID-19, MACE, thromboembolic disease, and in-hospital mortality when compared with wild type/alpha. Among patients aged more than 40 years, those presenting during omicron were shown to have decreased odds of severe COVID-19 and in-hospital mortality when compared with those presenting during wild type/alpha. The differences observed in clinical presentation, treatments received, and outcomes may in part be due to the virulence and location of predominant viral replication of the different variants. In laboratory studies, for example, omicron has been shown to replicate more in the upper airways and less in the lungs, and may cause a milder form of disease.25,26 Decreased severity of symptoms and outcomes during omicron may also be attributed to greater rates of vaccination in those presenting during the omicron time period.21 One study from California reported that a greater proportion of patients admitted with COVID-19 during omicron were fully vaccinated (according to Centers for Disease Control definitions at the time) when compared with a period of delta predominance (39.6% vs 25.1%).21 There were also fewer unvaccinated patients hospitalized during omicron when compared with delta (56.4% vs 71.1%).21
Finally, previous studies have reported increasing percentages of hospitalized patients during omicron admitted for an alternate diagnosis who were found incidentally to have COVID-19, which may further explain the improved clinical severity and outcomes in this group.27,28 When stratifying the cohort by age, we found increased odds of adverse outcomes in those presenting during the delta period compared with wild type/alpha among both young patients (age 18-40 years) and older patients (age more than 40 years). When evaluating the impact of age across each variant time period, we found increased odds of adverse outcomes among older patients compared with younger patients across all 3 variant time periods. Last, when continuously modeling age as a predictor of adverse outcomes, we show that increasing age was associated with an increased predicted probability of severe COVID-19, MACE, and in-hospital mortality.30-35 Several reasons have been postulated for this association and include increased basal inflammation, hyperresponsiveness of immune cells, ineffective T cell priming, decreased T cell diversity, diminished antibody response or activity, and an unregulated innate immune system in those of older age.36 A higher prevalence of comorbidities, differential host receptor expression, and variations in coagulopathy have also been suggested to play a role.29,32,37
Limitations
Data included in this study are from voluntarily participating institutions in the GWTG COVID-19 cardiovascular disease registry, and therefore may not be fully generalizable to the overall United States population. Fewer patients were enrolled during the delta and omicron periods from a smaller number of participating sites, which may further limit generalizability. We are only able to determine the time period during which patients were hospitalized with COVID-19, and so we are unable to determine the strain of the virus infecting each patient. The data gathered are observational, and therefore causality cannot be established. The data and outcomes are gathered only from the patient's inpatient admission; post-discharge outcomes are not available. The vaccination rates of the cohort were not tracked. Although logistic regression and spline models were adjusted for possible confounders, residual confounding may still exist.
CONCLUSION
In one of the largest national COVID-19 analyses to date, we describe demographics, comorbidities, clinical characteristics, hospital treatment patterns, and outcomes for 45,421 patients admitted with COVID-19 in the United States between January 20, 2020 and February 14, 2022, stratified by patient age. Patients presenting during a period of delta predominance were found to have increased morbidity and mortality, whereas patients aged more than 40 years presenting during omicron had less severe outcomes when compared with those presenting during wild type/alpha. Increasing age was adversely associated with outcomes across all 3 COVID-19 variant time periods. These data provide an important snapshot of the clinical characteristics and outcomes of hospitalized COVID-19 patients stratified by age during the first 2 years of the pandemic in the United States.
POTENTIAL COMPETING INTERESTS
Dr Fonarow reports consulting for Abbott, Amgen, AstraZeneca, Bayer, Cytokinetics, Edwards, Eli Lilly, Janssen, Medtronic, Merck, and Novartis. Dr Parikh receives research support from the American Heart Association, Janssen, Infraredx, Abbott Vascular, and Bayer, and consulting fees from Abbott Vascular. Dr de Lemos reports consulting income from Eli Lilly, Novo Nordisk, and AstraZeneca. Dr Yang reports research grants/funding from CSL Behring, Boehringer Ingelheim, Eli Lilly, and Bristol Myers Squibb, and consulting fees from Pfizer.
TABLE 1. Characteristics of the Cohort Stratified by Age and Coronavirus Disease 2019 Variant Time Period.
TABLE 3. Hospitalization Characteristics and Outcomes of the Cohort by Age and Coronavirus Disease 2019 Variant Time Period. Continuous variables presented as median (25th-75th percentile); continuous and categorical variables compared using the Wilcoxon rank-sum and chi-square tests, respectively. Comparison P values compare young or older groups across all 3 time periods.
TABLE 4. Association of COVID-19 Variant Time Period with Outcomes Among Patients 18-40 Years and >40 Years Presenting with COVID-19. Models adjusted for age, sex, body mass index, race, payment source, and medical comorbidities (atrial fibrillation or flutter, cancer, cerebrovascular disease, chronic kidney disease, congenital heart disease, coronary artery disease, diabetes mellitus, dyslipidemia, heart failure, hypertension, immune disorders, peripheral artery disease, pulmonary embolism, pulmonary disease, and smoking). P value for the overall COVID-19 wave effect from a Wald chi-square test.
FIGURES. Splines reporting the association of continuous age with predicted probability of death.
2023-09-17T15:18:14.946Z
2023-09-15T00:00:00.000
{ "year": 2023, "sha1": "35ccf2b9d1c1f32838e4c66ea6cb9b2710cbb5eb", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8d1e86a4820ee32b127bbd454aaaba059408155f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
258208523
pes2o/s2orc
v3-fos-license
Lessons Learned from COVID-19 for Future Pandemics: Infection Prevention in Health Care Workers
Dear Editor, From the beginning of the pandemic in late December 2019 in Wuhan, China, and the rapid spread of the coronavirus disease 2019 (COVID-19) virus, until May 31, 2022, there were 6.9 million reported deaths and 17.2 million estimated deaths from COVID-19.1 Due to the unique genetic structure of coronaviruses, as well as their ability to reproduce and spread easily, the emergence of COVID-19 was not unexpected for virologists. Based on the information collected to date on COVID-19, and given the complex structure of the virus, the high risk of human-to-human transmission, the presence of asymptomatic or mildly symptomatic carriers, and the progression of the condition to respiratory distress with 5-10% mortality, the occurrence of the COVID-19 pandemic can be described as a kind of perfect storm.2 The COVID-19 pandemic, in addition to its negative economic, social, and political implications, has posed many challenges for the healthcare sector: a dramatic increase in the need for medical staff, in the cost of personal prevention and protection equipment, in the cost of diagnosing and treating patients, in the need for intensive care unit and ventilator beds, and in the mortality rate in the community.3 Nurses, as the largest group of medical staff, spend more time with patients than other health care workers (HCWs). For many emerging diseases, there is no standard treatment, and nurses play the key role in the supportive care of patients infected with emerging diseases.2 In the critical situation of emerging diseases, all social institutions, and even close relatives and the patient's family, distance themselves from the patient, and it is the duty of the medical staff to take care of the patient despite the dangers that this poses for them. 50% of those who died in the severe acute respiratory syndrome (SARS) epidemic were HCWs who became infected in hospital through caring for infected patients.4 The prevalence of hospitalization among HCWs was 15.1% and the mortality rate was 1.5%.5 According to the Executive Director of the International Council of Nurses (ICN), "The fact that the number of nurses who died during the epidemic is similar to the number of people who died during World War I is a giver. We have been calling for standard and systematic data collection on infection and deaths of HCWs, but the fact that we do not yet have accurate statistics is a major scandal",6 as the actual statistics are about 60% higher than reported.7 It is estimated that the rate of virus transmission within hospitals from patients to HCWs is about 29%.8 According to the Deputy Minister of Nursing of the Ministry of Health of Iran, more than 200 000 nurses are working in wards where patients with coronavirus have been hospitalized; since the outbreak of coronavirus in Iran, more than 125 000 nurses have been infected with COVID-19.9 The COVID-19 pandemic makes it even more important to talk about the safety of all HCWs. Maintaining the health and safety of staff is a key principle in promoting patient safety,10 and focusing on the safety of patients should not lead to neglect of the safety of HCWs. This article tried to highlight the importance of implementing new and flexible infection prevention methods based on the characteristics of the disease agent.
This letter was written with the aim of preventing infection of HCWs with infectious diseases, maintaining human resources, increasing nurses' work efficiency, increasing the quality of care and patient safety, and using health care capacity for education, psychological support, and prevention. Therefore, based on the opinion of experts, the evidence obtained from a review of the literature, and the potentials available in Iran, the following options are proposed for consideration in future pandemics.
Implementation of Evidence-Based Modified Infection Prevention Systems Based on the Characteristics of the Infectious Disease Agent
The successful experiences of different countries with innovations in the infection control systems of medical centers during outbreaks of infectious diseases8,11 showed positive effects in controlling the transmission of the virus from patients to staff. Some suggested interventions for this option are: obtain feedback from the infection prevention and control nurse at the entrance to the patient zone; avoid unnecessary contact with surfaces; use runner personnel responsible for providing the equipment needed by two or more nurses working in isolated rooms; rearrange the medical equipment; reduce the number of people entering the isolated area and the number of times they enter; modify and manage the registration of medical documentation; use a spotter who checks the donning and doffing of personal protective equipment at the entrance to the isolated room; divide the hospital space, based on the severity of virus contamination, into clean, contaminated, and isolation areas; and designate areas for hand washing and for donning and doffing personal protective equipment.
Psychological Support to HCWs
Studies show that, in terms of mental health impact, HCWs are among the most vulnerable groups during pandemics due to the high risk of infection, increased work stress, and fear of transmitting the disease to their families.11 Overall, most studies have highlighted the need for psychological interventions, with emphasis on psychosocial support through effective strategies and careful psychological care for nurses at the front line of the COVID-19 struggle.8 Suggested strategies include the use of psychiatrists in the main referral centers for psychological support and for the psychiatric interventions required by vulnerable personnel; use of online platforms, such as video counseling, to address psychological issues related to work stress; daily or weekly exercise programs tailored to pandemic conditions, such as mountain climbing; and increasing staff motivation by covering medical expenses, offering rewards and cash bonuses, and allocating non-cash incentives to front-line communicable disease personnel.
Logical and Scientific Scheduling of Nursing Shifts
The COVID-19 pandemic has created a challenging situation for nurses, increasing workload, stress, and nurse mortality, which has reduced the quality of nursing care.12 Therefore, the shift schedules and working hours of nurses in COVID-19 centers should be reconsidered to reduce their workload.13 A shortage of nursing staff and long nursing shifts have led to problems for nurses. In addition, long-term use of personal protective equipment causes blood complications, increased nursing errors, adverse patient outcomes,14 mental and physical fatigue, increased stress levels, and decreased job performance and quality of care.15 How the nursing workforce is allocated can directly affect the safety and quality of patient care.16 Therefore, scheduling nurses' shifts during pandemics scientifically, logically, and flexibly can be necessary to optimize the effective use of nursing staff, reduce the workload, and improve the quality of nursing care.
Suggested strategies include adjusting nurses' shifts flexibly according to staff requests as much as possible and pairing an experienced, expert nurse with a novice nurse as a mentor; in addition, to prevent the physical and mental complications of working with coronavirus patients, the best type of shift adjustment in infected areas, provided there is no shortage of personal protective equipment (PPE), is 4-hour shifts (the Gear method). A simple illustration of such mentor-novice, 4-hour rostering is sketched at the end of this letter.
Vaccination of HCWs Against Infectious Diseases
Mass vaccination is one of the most effective methods to control disease and reduce mortality during an epidemic of an infectious disease.17,18 Even with access to the COVID-19 vaccine, willingness to use it is a major challenge for health systems. Vaccine hesitancy, understood as a behavior, refers to the delayed acceptance or refusal of vaccination. According to the World Health Organization, the three main factors influencing vaccine hesitancy are uncertainty about the vaccine and its effectiveness, lack of understanding of the need for the vaccine, and lack of access to a suitable vaccine.17,18 Recent studies show that more than one-third of respondents reported uncertainty or reluctance to receive the COVID-19 vaccine.19 Therefore, one of the most important interventions to reduce infections among personnel is to inform them about the benefits of vaccination and thus increase their willingness to receive the vaccine. Some suggested interventions for this option are: consultation with the relevant authorities to prepare and allocate a valid vaccine for priority use against coronavirus; providing personnel with the necessary information and knowledge about the vaccine; and identifying barriers to vaccine acceptance among nurses and addressing staff concerns by holding evidence-based workshops.
In general, the COVID-19 pandemic is another important alarm and reminder added to the archive of world health history, indicating that emerging diseases, with frequent viral mutations, high replication capacity, rapid spread, and increasing resistance to available vaccines, can pose a major challenge for the international health system. This global crisis has created fear among HCWs, who are concerned about their own health and that of their co-workers, families, and friends. Despite these fears and anxieties, they continue their efforts on the front lines against COVID-19. As HCWs continue this struggle, health systems must help them stay safe from the harms caused by the disease. This requires the implementation of effective measures to prevent them from becoming infected with infectious diseases.
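As a purely illustrative companion to the shift-scheduling suggestion above, the short sketch below pairs experienced and novice nurses into 4-hour slots in a round-robin fashion. The names, slot boundaries, and pairing rule are invented for the example; this is not a validated rostering method and does not account for rest periods, workload limits, or local regulations.

```python
"""Sketch: pairing experienced and novice nurses into 4-hour shift slots.

Purely illustrative; it mirrors the "mentor plus novice, 4-hour shift"
idea described above rather than any validated rostering system.
"""
from itertools import cycle

experienced = ["Nurse A (expert)", "Nurse B (expert)", "Nurse C (expert)"]
novices = ["Nurse D (novice)", "Nurse E (novice)", "Nurse F (novice)"]
slots = ["00:00-04:00", "04:00-08:00", "08:00-12:00",
         "12:00-16:00", "16:00-20:00", "20:00-24:00"]

# Round-robin assignment: each 4-hour slot gets one mentor and one novice.
exp_cycle, nov_cycle = cycle(experienced), cycle(novices)
roster = {slot: (next(exp_cycle), next(nov_cycle)) for slot in slots}

for slot, (mentor, novice) in roster.items():
    print(f"{slot}: {mentor} + {novice}")
```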
2023-04-19T15:09:55.553Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "1a87645bf12c9f37a7a2f561e2d53993d2d7b8c5", "oa_license": "CCBYNC", "oa_url": "https://jcs.tbzmed.ac.ir/PDF/jcs-12-1-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cc7ddd2c99a75d50affd6a4131f6b6ae354338a9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
259579160
pes2o/s2orc
v3-fos-license
Minimally invasive access cavities in endodontics : Background: The access cavity is a critical stage in root canal therapy and it may influence the subsequent steps of the treatment. The new minimally invasive endodontic access cavity preparation concept aims to preserve sound tooth structure by conserving as much intact dentine as possible including the pulp chamber's roof, to keep the teeth from fracturing during and after endodontic treatment. While there is great interest in such access opening designs in numerous publications, still there is a lack of scientific evidence to support the application of such modern access cavity designs in clinical practice. This review aims to critically examine the literature on minimal access cavity preparations, explain the effect of minimally invasive access cavity designs on various aspects of root canal treatment, and identify areas where additional research is required. Data: An electronic search for English-language articles was performed using the following databases: Google Scholar, PubMed, and Research Gate. The following keywords were used: "minimally invasive access cavity", "conservative endodontic cavity ", and "classification of access cavity". Study selection: 64 papers that were the most relevant to the topics in this review were selected between 1969 to 26 February 2022. Conclusions: Minimally invasive access cavities can be classified into conservative, ultraconservative, truss access, caries and restorative-driven cavities. There is a deficiency of proof that a minimally invasive access cavity maintains the resistance to fracture of endodontically treated teeth greater than traditional access cavities. There was no difference in the percentage of untouched walls and debris removal in teeth with conservative vs traditional access cavities, however, truss and ultraconservative access cavities resulted in poor irrigation efficacy compared to traditional ones. Also, the lower cyclic fatigue resistance of rotary instruments and root canal obturation with voids were associated with minimally invasive access cavities. The studies about minimally invasive access cavities still have a wide range of methodological disadvantages or register unsatisfactory or inconclusive results. Therefore, further research on this topic is needed especially with the everyday advancement of techniques and armamentarium used in endodontics. Introduction One of the most critical steps of root canal therapy is access cavity (AC) preparation (1) , as it will influence the subsequent steps and the outcome of the treatment. Residues of pulp tissue that can serve as a substrate for microorganisms should be cleaned through proper access cavities (2) . Also, coronal interference elimination enables the detection of the orifices of root canals (3) and serves as a pathway for irrigation solutions to get a better effect of instrumentation and avoid accidents (4) . The new philosophy of the preparation of minimally invasive access cavities seeks to conserve sound dentin by retaining as much as possible of the pulp chamber's roof (5) . This shift was enabled by the availability of improved endodontic tools such as cone beam computed tomography (CBCT), operating microscopes, and ultrasonic equipment (6) . Advocates of these approaches think that minimally invasive access cavities would aid in the long-term survival of endodontically treated teeth by minimizing unnecessary dentine removal, hence improving the fracture resistance of endodontically treated teeth (5,7) . 
While the claim of avoiding tooth fracture has yet to be clinically proven, there have been concerns raised about the possible disadvantages of minimally invasive access cavity techniques. A limited access cavity design, for example, presents issues in future procedural stages such as an impaired vision of the pulp chamber and canal, decreased efficacy and efficiency in canal instrumentation and disinfection, and loss of orientation (8,9) in addition to the morphology of the root canal system which is diverge and unpredictable and associated with clinical complications that have a direct impact on treatment outcome (10) . While there is considerable interest in such access opening design techniques in many articles published on this topic, to date, there is a lack of scientific proof to back up the implementation of these modern access cavity designs in clinical practice for the present time (11) . At the same time, clinicians are increasingly favoring access cavity designs that adopt minimally invasive principles (12) . Although the necessity of conserving tooth structure is self-evident, the entire shift to minimally invasive access cavities has yet to be confirmed (13) . This review aims to provide an overview of the different designs of the minimally invasive access cavities in endodontics, summarize the research investigating their effects on the various aspects of endodontic treatment to date, and identify areas where additional research is required. Methods A comprehensive search has been performed on electronically published resources in the English language using Google Scholar, Pub Med, and Research Gate databases from 1969 to February 2022 by using the keywords: "minimally invasive access cavity", "conservative endodontic cavity", and "classification of access cavity". Sixty-four papers were included in this review. The studies were selected according to the following criteria: No social media sources were included, articles, literature reviews, in vitro studies, micro CT studies, finite element studies, retrospective studies, and cross-sectional studies that are related to the minimally invasive access cavities in root canal treatment. The filtering process included selecting the studies based on their relevance to the topics in this review. Classification of access cavity designs Traditional access cavity (Trad AC) Carried out over the past decades, seeking to allow straight-line access to the apex by removing the coronal interference (14,15) . Complete removal of the pulp chamber roof in posterior teeth, followed by straight-line access to the canal orifices with smoothly divergent axial walls, allowing all orifices to be visible and apparent within the outline shape. Straight-line access is achieved in anterior teeth by removing the pulp chamber roof, pulp horns, and the lingual shoulder of the dentine, as well as extending the access cavity to the incisal edge (16) . Conservative access cavity (Cons AC) Such a design of access cavity was proposed by Clark and Khademi in 2010. Preparation of posterior teeth usually begins at the occlusal surface's central fossa. It expands with axial walls that are smoothly convergent only to the degree required to expose the canal orifices, retaining part of the roof of the pulp chamber (5) . This form of access can also be performed by divergent walls (Cons AC, DW) (17) . 
In anterior teeth, the strategy includes transferring the place of entry from the cingulum on the lingual or palatal surface to the incisal edge by forming a narrow triangular or oval-shaped cavity retaining the horns of pulp and the full peri-cervical dentin (18) , as shown in figures 1 and 2. Ultra conservative access cavity (Ultra AC) Such cavities begin as stated in the Cons AC but with no additional expansions, preserving as great of the roof of the pulp chamber as feasible (7) . When the lingual region of the crown has attrition or a significant concavity of an anterior tooth, the incisal edge can be accessed in the center, parallel to the tooth's long access, as shown in Figures 1 and 2. Truss Access Cavity (Truss AC) This type of access cavity design aims to keep the dentinal bridge between two or more tiny cavities that are created to access the canal orifice in each root of multi-rooted teeth. To access the mesial and distal canals, two or three separate cavities might be created in mandibular molars, for example (19) , as shown in Figures 1 and 2. Caries-driven Access Cavity (Caries AC) Access to the pulp chamber is gained in this design by eliminating caries while conserving all remaining dental structures, including the soft structure described as the underside of an architectural feature such as the ceiling, ceiling corner, or wall (20) , as shown in Figures 1 and 2. Restorative-driven Access Cavity (Resto AC) Access to the pulp chamber is gained in a restored tooth with no caries by removing all or part of the existing restorations while conserving the remaining tooth structures (11) , as shown in Figures 1 and 2. Straight-Line-Furcation (SLF) and Straight-Line-Radicular (SLR) Because the outlines of SLF and SLR are formed from the pulp space landmarks projected onto the occlusal surface of the teeth, they differ from other types of access designs. The reference of the Straight Line Radicular (SLR) access is related to the pulp horn position, but the Straight Line Furcation design (SLF) is based on the placement of the center of each canal at the level of furcation (21) . SLF and SLR are not included in the new proposed classifications, but they have lately been used in clinics with the idea of dynamic CTguided endodontic access treatments (22) . Influence of minimally invasive access cavity on aspects of root canal treatment: The strength of the remaining tooth structure The causes of fractures in endodontically treated teeth include iatrogenic causes (tooth structure loss, effect of chemicals and intracanal medicament, effect of restoration and restorative procedures), and noniatrogenic causes (primary, which includes a history of recurrent pathology and anatomical position of the tooth, and a secondary effect of aging of dental tissue) (23) . The loss of tooth structure is the most common cause of fracture in root-filled teeth. The preparation of the endodontic access cavity after the Trad AC principles were considered the second largest cause of loss of tooth structure (24) . Therefore, an endodontically treated tooth's prognosis might be improved with a correct and minimized endodontic access cavity design (25) . Compared to traditional access cavities, less invasive access cavities may improve the fracture resistance of interproximal repaired teeth (26) . With minimally invasive access preparations, fourteen studies estimated the fracture resistance of extracted teeth. 
While the fracture resistance of teeth with Cons AC was greater than that of teeth with Trad AC in five studies (27) , no difference was seen in the remaining nine investigations (28) . Of the 14 studies, two studies did not specify how specimens were chosen (29) , and there is a reduction in the anatomical matching of the samples (30) . At the same time, the thickness of the pulp chamber and magnitude of the remaining tooth structure affect the tooth resistance to fracture also the age of the patient and extraction technique are not reported well (29,31,32) . According to Augusto et al. (2020), ultraconservative access cavities in endodontic treatment did not provide any advantages in fracture resistance of mandibular molars when compared to traditional endodontic access cavities (8) . Maske et al. (2021) assess if the access cavity design affects the fracture strength of endodontically treated and repaired molars, and they found that the kind of access cavity preparation does not affect endodontically treated teeth fracture strength (33) . Also, Saberi et al. (2020) find that under thermal stress, the truss endodontic access cavity improves the fracture strength of endodontically treated teeth (34) . In conclusion, according to the results of these studies, the impact of access cavity preparation on tooth strength is at best uncertain (11) . There is insufficient information to make a definitive judgment about whether ConsAC is better than TradAC in terms of fracture resistance (35) . Therefore, more research is required to have a better judgment on whether the minimally invasive access cavity designs may preserve the fracture resistance of the endodontically treated teeth. Chemomechanical root canal preparation A suitably prepared access cavity is critical for the successful instrumentation and administration of irrigation solution into the root canal system (36) . For evaluation of different designs of access cavity on chemomechanical canal preparation in endodontics, Krishan, Paque, and colleagues (2014) found a higher percentage of untouched walls after using Cons AC in the mandibular first molar's distal canal preparation as compared with Trad AC (27) . By comparing Trad AC to Cons AC (37) in maxillary molars, Trad AC to Cons AC in mandibular incisors (18) , and Trad AC to Ultra AC in maxillary premolars (11) , no differences in the percentage of the untouched walls after shaping the root canals of maxillary molars (37) , mandibular incisors (18) and maxillary premolars (11) , were observed. These results demonstrate that a tiny access cavity may not jeopardize the proportion of untouched walls during root canal preparation. For the impact of different access cavity designs on the amount of accumulated debris, Rover et al. (2017) found no difference when comparing maxillary molars with Cons AC or Trad AC (37) , while found that the preparation of maxillary premolar's canal with Ultra AC was associated with a higher percentage of the debris when compared to Cons AC and Trad AC. It's also known that restricted penetration of irrigant, wedging of the needle, the effect of Vapour Lock, and issues related to sonic/ultrasonic/negative apical pressure irrigation are well-documented drawbacks of irrigating minimally enlarged canals (36) . Following the chemomechanical process using the rotary instrument in addition to irrigation with a traditional syringe, Neelakantan et al. 
(2018) found a significant amount of pulp tissue remanent holdover in the mandibular molars' pulp chamber with Truss AC as compared with Trad AC (19) , which will impair the disinfection procedure by contaminated pulp tissue remnants which act as a source of infection and diseases after treatment (2) . The data suggest that there is no difference between the Trad AC and Cons AC in terms of hard tissue debris collection and untouched canal walls after preparation. However, teeth with the Trad AC had more canal transportation than the Cons AC (11) . Furthermore, the tiniest access cavities, such as Truss AC and Ultra AC, were linked to worse irrigation efficiency due to the retention of more pulp tissue and hard tissue debris after shaping treatment (11) . However, the effect of the type of access cavity on bacterial decline is unknown, and more research is needed. Augusto et al. (2020) found that in comparison to typical endodontic access cavities, ultraconservative endodontic access cavities did not give any advantages in the capacity to shape canals or the resistance to fracture of mandibular molars (8) . While comparing the effects of Cons AC and Truss AC on the capacity for shaping and filling root canals, microbial decrease in canals, and pulp chamber cleaning during root canal therapy on mandibular molars, Barbosa et al. (2020) found no significant differences in microbial decrease, while in comparison to Cons AC, Trad AC had a much smaller percentage of unprepared surface area and also, there were no variations in the proportion of dentine removed (38) . Also, Xia et al. (2020) found that in single-rooted premolars, the untouched canal wall following instrumentation for Trad AC was substantially lower than the untouched canal wall for Cons AC (39) . On the other hand, Peng et al. (2022) found that after instrumentation using Pro Glider and Wave One Gold files, the Cons AC had no significant negative effect on the efficacy of instrumentation as compared to the Trad AC (40) . Doing a very small access cavity might compromise the stage of endodontic treatment by complicating or/and preventing the canal orifice detection and chemomechanical instrumentation and obturation processes (41) . The potential for other complications, such as missed canal, deviation, and/or instrument fracture, may also be increased (41) . The results of the studies are controversial on whether minimal invasive access cavities will impair the chemomechanical process or not. However, the Ultra AC, was associated with a higher percentage of debris and untouched canal walls after preparation. Obturation and retreatment To evaluate the effect of access cavity design on root canal filling, Niemi et al. (2016) estimated the consistency of the oval-shaped canal filling of mandibular premolars following Cons AC or Trad AC using radiographic image analysis (42) . The smaller dimension of minimally invasive access impeded guttapercha cone adaptation and holds the accomplishment of the continuous condensation wave process. Therefore, Niemi et al. (2016), reported that a single cone approach and Warm Lateral Compaction (WLC) would be the best option for canal filling in a tooth with minimally invasive access preparation (42) . Silva et al. (2020) compared the proportion of voids generated next to the root canal filling of two rooted maxillary premolars with round cross-sectional shapes in both Ultra AC and Trad AC teeth (9) . 
According to the authors, the filling of the canal was not affected by access design; however, even with an ultrasonic tip, magnification, and more treatment time, the operator was unable to remove the filling remnants from the pulp chamber before restoration of teeth with Ultra AC (11). The sectioning approach was utilized by Niemi et al. (2016) to assess the performance of rotary systems in removing root filling material from the oval-shaped canals of single-rooted mandibular premolars. They found that teeth with Cons AC had more residual filling material on the root canal wall than teeth with Trad AC (42). Rover et al. (2020) found more voids in the root canal filling in the minimally invasive group than in the traditional one, while the percentage of filling material remaining in the pulp chamber after the cleaning process was not significantly different between these groups (traditional and minimally invasive access cavities) (43). Therefore, minimally invasive access cavities were more likely to be associated with voids in the root canal filling, and with greater difficulty in removing filling remnants from the pulp chamber, than traditional ones; more research is needed to confirm these results.
Restoration of endodontically treated teeth
Resin composites are the most common option for restoring endodontically treated teeth, especially those with minimally invasive access cavities. They are more esthetic, faster, cheaper, and less invasive than indirect restorations (44). The small dimensions of minimally invasive access cavities, combined with the retention of the pulp chamber roof, complicate the incremental build-up restorative procedure and may result in adhesion failure and/or voids at the point where the restorative material meets the cavity walls (45). One study examined the effect of ultraconservative endodontic access cavities (Ultra AC) on the formation of gaps and voids in resin composite restorations; gaps and voids were seen in every specimen. There was considerable disparity in void formation among the access cavity designs, with Ultra AC producing significantly more voids, whereas gap formation was not significantly different between Trad AC and Ultra AC (46). Boscatto et al. (2022) investigated the effect of endodontic access cavity design and restorative technique on hard tissue removal in mandibular premolars; in comparison to Cons AC, Trad AC resulted in a 14% increase in hard tissue removal after endodontic treatment (47). The results of these studies are controversial as to whether minimally invasive access cavities are associated with more voids and gaps in resin composite restorations than traditional access, and future studies are required to investigate this point. Tooth discoloration induced by endodontic materials and treatments is a concern in clinical practice, causing cosmetic issues and discomfort for both patients and professionals, especially in the anterior teeth (48). Even with magnification, an ultrasonic tip, and more treatment time, operators were unable to remove residues of filling material from the pulp chamber before restoration in teeth with Ultra AC. This prolonged operating technique may cause fatigue in both the patient and the dentist, and the remnants of the filling may affect aesthetics by discoloring the dental crown over time (49,50).
Cyclic fatigue of endodontic instruments

Torsional failure and cyclic fatigue are the two main causes of endodontic instrument separation. A torsional fracture occurs when the instrument's tip becomes lodged in the dentine while the instrument continues to rotate (51,52). Cyclic fatigue, by contrast, occurs when the alternating tension-compression forces in a curved root canal exceed the elastic limit of the instrument (53). Reduced access cavities may force the file to enter the root canal at a steeper inclination (4), which adds extra curvature on top of the anatomic curvature (54). Recent investigations have shown that inserting the file into the canal at a more inclined angle reduces the cyclic fatigue resistance of endodontic instruments (55,56). In canals accessed through Trad AC or Ultra AC, Silva et al. (2020) compared the cyclic fatigue resistance of Reciproc size 25 (R25) and Reciproc Blue size 25 (R25B) instruments; R25 and R25B in Ultra AC demonstrated much-reduced cyclic fatigue resistance (57). Similarly, Spicciarelli et al. (2020) found that in teeth endodontically treated through Cons AC, the cyclic fatigue resistance of Reciproc Blue R25 was drastically reduced compared with Trad AC (58), and when Corsentino et al. (2021) compared conservative and truss access cavities, they found that the truss access cavity produced greater fatigue of Reciproc Blue R25 than the conservative access cavity (59). The studies included in this review showed that minimally invasive access cavities were associated with lower fatigue resistance of endodontic instruments than traditional access cavities. More studies are required to assess the fatigue behaviour of newer NiTi rotary instruments.

Effect on cuspal deflection

The loss of tooth structure caused by caries and restorative therapies, rather than the endodontic procedures themselves, weakens endodontically treated teeth (60). The extent of cusp displacement during resin composite restoration is determined by several parameters, including the restorative material's characteristics, the cavity's size and structure, and the bonding mechanism (61,62). Taha et al. (2009) studied tooth strain, cuspal deflection, marginal leakage, and gap development induced by polymerization shrinkage during direct resin composite restoration of endodontically treated premolars (63), and found that cuspal deflection and strain increased as a result of the loss of axial walls through endodontic access. González-López et al. (2006) examined the influence of each consecutive cavity preparation step on premolar cuspal deflection (including endodontic access). The cavity preparations were performed in the following order: unmodified tooth, conservative MO cavity preparation, extensive MO preparation, MO preparation with endodontic access, and MOD preparation with endodontic access. They found that cuspal deflection increased statistically significantly after MOD cavity preparation with endodontic access and concluded that progressive removal of dental tissue increased cuspal deflection (64). As a result, it is critical to keep the tooth structure intact wherever possible during preparation of the access cavity. Further studies are needed to establish whether minimally invasive access cavities decrease cuspal deflection or whether there is no difference between traditional and minimally invasive access cavity designs.
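Returning to the cyclic-fatigue point above, a toy beam-bending calculation makes the argument concrete: the alternating surface strain of a rotating file is roughly half its diameter divided by the local radius of curvature, so the extra curvature imposed by an inclined access path raises the strain amplitude on every rotation. The file diameter and curvature radii below are invented illustrative values, not measurements from the cited studies:

```python
def surface_strain_amplitude(file_diameter_mm: float, curve_radius_mm: float) -> float:
    """Peak alternating surface strain of a rotating file bent to a curve.

    Classic beam bending: strain = (distance from neutral axis) / (radius
    of curvature) = (d / 2) / R. Each rotation swings the outer surface
    between +strain and -strain, which is what drives cyclic fatigue.
    """
    return (file_diameter_mm / 2.0) / curve_radius_mm

# Invented numbers: a 0.30 mm file section where the effective canal
# curvature tightens from R = 5 mm (straight-line access) to R = 3 mm
# when a small access cavity forces a more inclined insertion angle.
for radius_mm in (5.0, 3.0):
    strain = surface_strain_amplitude(0.30, radius_mm)
    print(f"R = {radius_mm} mm -> strain amplitude ~ {strain:.2%}")
```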
Conclusion

The various acronyms that have been suggested to describe minimally invasive access cavity preparations have seriously undermined the comprehensibility and readability of the literature, and a new nomenclature based on self-explanatory abbreviations is proposed. According to the collected scientific data, there is a lack of solid evidence to support the claim that a minimally invasive access cavity preserves the fracture resistance of endodontically treated teeth better than a traditional access cavity. Studies of minimally invasive access cavities still have a wide range of methodological shortcomings or have reported unsatisfactory or inconclusive results. In addition, the truss access cavity and the ultra-conservative access cavity, the more conservative types of access cavity, adversely affect irrigation and canal transportation and are not recommended, especially in necrotic teeth. Given that additional research is needed to provide comprehensive and conclusive evidence on all of these topics, there is at present a lack of proof to support introducing the concept of minimally invasive access cavity preparation into daily clinical practice, or into the training of students and postgraduates. Although the necessity of conserving tooth structure is self-evident, a wholesale shift to minimally invasive access cavities has yet to be justified. Minimally invasive access cavities are not yet adequately supported by research data and cannot yet take the place of typical straight-line access designs. Before clinical trials can be planned, more in-vitro investigations must be completed; furthermore, before these new methods are generally adopted, randomized controlled trials, as well as retrospective and prospective investigations, must be performed.
Pre-school neurocognitive and functional outcomes after liver transplant in children with early onset urea cycle disorders, maple syrup urine disease, and propionic acidemia: An inception cohort matched-comparison study

Abstract

Background: Urea cycle disorders (UCD) and organic acid disorders classically present in the neonatal period. In those who survive, developmental delay is common, with a continued risk of regression. Liver transplantation improves the biochemical abnormality, and patient survival is good. We report the neurocognitive and functional outcomes post-transplant for nine UCD, three maple syrup urine disease, and one propionic acidemia patient.

Methods: Thirteen inborn errors of metabolism (IEM) patients were individually one-to-two matched to 26 non-IEM patients. All patients received liver transplant. The Wilcoxon rank-sum test was used to compare full-scale intelligence quotient (FSIQ) and Adaptive Behavior Assessment System-II General Adaptive Composite (GAC) at age 4.5 years. Dichotomous outcomes were reported as percentages.

Results: FSIQ and GAC median [IQR] were 75 [54, 82.5] and 62.0 [47.5, 83] in IEM compared with 94.5 [79.8, 103.5] and 88.0 [74.3, 97.5] in matched patients (P-value <.001), respectively. Of IEM patients, 6 (46%) had intellectual disability (FSIQ and GAC <70), 5 (39%) had autism spectrum disorder, and 1/13 (8%) had cerebral palsy, compared with 1/26 (4%), 0%, and 0% of matched patients, respectively. In the subgroup of nine with UCDs, FSIQ (64 [54, 79]) and GAC (56 [45, 75]) were lower than in matched patients (100.5 [98.5, 101] and 95 [86.5, 99.5]), P = .005 and .003, respectively.

Conclusion: This study evaluated FSIQ and GAC at age 4.5 years through a case-comparison between IEM and matched non-IEM patients post-liver transplantation. The neurocognitive and functional outcomes remained poor in IEM patients, particularly in UCD. This information should be included when counselling parents regarding post-transplant outcome.

| INTRODUCTION

Inborn errors of protein metabolism are inherited disorders due to single enzyme or cofactor deficiency, leading to accumulation of toxic metabolites and deficiency of substrates. Urea cycle disorders (UCD), maple syrup urine disease (MSUD), and propionic acidemia (PA) fall under this classification and clinically present in the neonatal period with feeding intolerance, seizures, and encephalopathy progressing to coma. Neonatal onset (≤30 days) of these inborn errors of metabolism (IEM) is associated with a poor prognosis for survival and neurological outcomes. 1 Those who survive the initial presentation remain at risk of ongoing metabolic decompensations, including hyperammonemia, hyperleucinosis, and metabolic acidosis. The cumulative effects of these recurrent decompensations are thought to lead to global developmental delay with a high risk of regression of acquired skills and death. In addition to intellectual disability and abnormal brain imaging, 2,3 individuals with UCD are at risk of cerebral palsy, seizures, and cortical blindness. 4 Individuals with PA can also have significant intellectual disability and additionally show a higher prevalence of autism spectrum disorder (ASD), optic nerve atrophy, and basal ganglia strokes. 5 Intellectual disability is also present in MSUD, 6 although not as severe as in UCD and PA. In MSUD, attention deficit, hyperactivity, and mental health illnesses are prevalent. 7
Liver transplantation (LT) in these disorders has been shown to correct or significantly improve the biochemical abnormalities. In UCDs, a successful LT eliminates hyperammonemic crises and the need for dietary protein restriction and ammonia scavenger therapy. In PA and MSUD, the biochemical abnormalities are significantly attenuated after LT, although some degree of dietary, medical, and illness management is still indicated, as there remains a risk of metabolic decompensation. 8,9 Complications following LT include thrombosis, infection, post-transplant lymphoproliferative disease, multiorgan failure, graft loss, and death. 10 However, the literature describes improved medical outcomes and survival in children, [10][11][12][13] possibly due to fewer pre-LT chronic liver disease comorbidities, and as such the frequency of LT for UCDs, MSUD, and organic acid disorders (OAD) has increased in the last 10 years. 10

Recent studies have also described developmental and cognitive performance after LT for IEM. The United Network for Organ Sharing (UNOS) database reported that among 323 pediatric IEM patients approximately 54 months following LT, 40% of UCD, 79% of OAD, and 22% of MSUD patients had cognitive delays. 10 In that study, objective measures were not used, as data were collected through questionnaires. Data from the urea cycle consortium presented neuropsychological assessments of 528 individuals with 8 different enzyme/transporter defects; however, patients were assessed at different ages, the majority had a late onset phenotype, and LT status was unknown. 14 Significant variability dependent on age and specific UCD was noted, highlighting the difficulty of comparing the neonatal onset with the late onset phenotype. 14 Stevenson et al found 3/7 (43%) school-aged children with UCD post-LT to be 1 to 2 SD (15-30 points) below population norms in intelligence quotient. 15 In an OTC series of four patients, post-LT full-scale intelligence quotients (FSIQ) were 75, 73, 68, and 49. 16 At best, LT may halt further neurological insult but does not reverse underlying injury. 10,[17][18][19] In comparison, a study in non-IEM pediatric patients who had LT at age <3 years demonstrated FSIQ on average to be within half a SD of population norms. 20 This suggests that IEM patients with LT, while having fewer complications than those with LT for other reasons, 11 likely have worse cognitive outcomes post-LT. The pre-LT neurological injury and other comorbidities, as well as ongoing biochemical abnormalities, likely play an important role and need a detailed description to allow better understanding of the post-LT outcome.

As each center has a limited number of patients, the reported literature lacks concordance of phenotype, with varying severities of the same disease included in the same cohort. 14,16,17 Given access to resources and variability in IEM and LT care between centers, appropriate comparisons are difficult to draw. In this study, we hoped to address some of these gaps by comparing outcomes at 4.5 years of age between 13 severe (presenting in the neonatal period) IEM patients and 26 matched non-IEM post-LT patients at our Western Canadian referral center. IEM patients rarely have liver dysfunction pre-LT, while non-IEM patients usually do (e.g., hypoalbuminemia, ascites, hepatic encephalopathy); and IEM patients often have metabolic crises pre-LT, while non-IEM patients rarely do.
Therefore, we hypothesized that neurological outcomes would be worse for IEM patients, while growth and health outcomes would be similar to those of non-IEM patients post-LT, and that some pre-LT variables may be associated with adverse outcomes.

| Study description

Through the prospective, longitudinal, interprovincial inception cohort study, the Western Canadian Complex Paediatric Therapies Follow-up Program (CPTFP), children from western Canada and the corresponding northern territories who receive complex therapies in Alberta, Canada receive neurodevelopmental follow-up. All patients in this study had LT at the Stollery Children's Hospital, Edmonton, Canada. For children transplanted before their sixth birthday, referral for follow-up was made by the attending hepatologist at the time of the liver transplant. Details of the registration procedure for each child have been previously described. 21 When survival is deemed likely, a nurse coordinator registers the child and discusses follow-up procedures with the parents. Parents understand the dual purpose of follow-up to include service for possible developmental concerns for their child and parental psychosocial support, as well as an audit of outcome and research. Contact is made with the developmental follow-up clinic at the tertiary site of referral.

From 2000 to 2016, LT was performed in nine patients with UCD, one with PA, and three with MSUD. The patients with UCD included three with carbamylphosphate synthase-1 deficiency (CPS1-D), three with ornithine transcarbamylase deficiency (OTC-D; all males), one with argininosuccinate synthase deficiency (citrullinemia type 1), and two with argininosuccinate lyase deficiency. Case-comparison matching was completed using the following variables: year of transplant (within 2 years), sex, gestational age (within 2 weeks), age at transplant (within 6 months), and socioeconomic status (SES) based on employment of the main wage earner in the household (population mean 43 and SD 13, matched within 15 points). 22 Each IEM patient was matched individually with two other non-IEM children on each of these variables from the established CPTFP database of prospectively collected acute care and outcome data. Indications for LT in the non-IEM children included biliary atresia, acute liver failure, cholestasis, and tumor.

Acute care variables from the established database included: gestational age; sex; time on the waiting list; age, weight, height, creatinine, and West Haven Classification of encephalopathy at LT; 23 postoperative LT days on ventilation, in intensive care, and in hospital; reoperation within 30 days; and retransplant within 1 year. Data on encephalopathy at LT obtained from the established database are characterized based on symptoms at the time of transplantation only. Dependent on biochemical parameters, IEM patients likely experienced encephalopathy at diagnosis and around metabolic decompensations. A retrospective chart review was performed for all patients, and additional data were collected on initial presentation, ammonia at presentation, number of hyperammonemic crises (defined as ammonia over 100 μmol/L) pre-LT and up to 2 years post-LT, and use of renal replacement therapy. To better understand the neurological profile pre-LT, the clinical description of development as recorded in the chart and, when available, Vineland Adaptive Behavior Scale (VABS) outcomes were recorded.
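The 1:2 matching described above can be made concrete with a short sketch. The study does not publish its matching procedure, so the field names, the caliper check, and the greedy selection strategy below are illustrative assumptions that simply encode the stated criteria:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    pid: str
    tx_year: int
    sex: str
    ga_weeks: float
    tx_age_months: float
    ses: float

def is_match(case: Patient, ctrl: Patient) -> bool:
    """Caliper check encoding the matching criteria stated in the text."""
    return (abs(case.tx_year - ctrl.tx_year) <= 2
            and case.sex == ctrl.sex
            and abs(case.ga_weeks - ctrl.ga_weeks) <= 2
            and abs(case.tx_age_months - ctrl.tx_age_months) <= 6
            and abs(case.ses - ctrl.ses) <= 15)

def match_one_to_two(cases: list, pool: list) -> dict:
    """Greedy 1:2 matching: each IEM case takes the first two eligible,
    still-unused non-IEM controls (an assumption, not the study's code)."""
    used, pairs = set(), {}
    for case in cases:
        controls = [c for c in pool if c.pid not in used and is_match(case, c)][:2]
        used.update(c.pid for c in controls)
        pairs[case.pid] = [c.pid for c in controls]
    return pairs
```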
Seizures, abnormal neurological findings (defined as any of seizure, hypotonia, stroke-like episode, or encephalopathy at LT), and brain imaging data were also collected. For the IEM patients, metabolic treatment, including dietary management and protein restriction, IEM medications, and peri-LT management, was recorded.

| Assessment and measures

Through the CPTFP, all children at their respective referral sites have multidisciplinary assessments performed post-LT; pre-school (4-6 year) outcomes are reported here. Neurological examinations are completed by a neurodevelopmental pediatrician or a pediatrician experienced in developmental follow-up; diagnoses of motor disability are confirmed by a neurologist; and visual impairment, defined as corrected visual acuity in the better eye of <20/60, by an ophthalmologist. Growth measurements are completed, and a detailed health history is recorded. Experienced paediatric psychologists assess neurocognitive ability using the gold-standard Wechsler Preschool and Primary Scales of Intelligence (third edition) 24 to give a FSIQ with a normative US population mean and SD of 100 (15). Audiologists experienced with children assess bilateral hearing in a soundproof booth; sensorineural hearing impairment is defined as responses in the better ear of >25 dB at any frequency from 250 to 4000 Hz.

Functional outcomes are determined using a parent-completed questionnaire, the Adaptive Behavior Assessment System II (ABAS-II). 25 The ABAS-II evaluates patients' realistic, independent behaviors and the effectiveness of their interactions with others within community contexts. The measure includes four domains: conceptual (communication, functional pre-academics, and self-direction), practical (home living, health and safety, community use, and self-care), social (leisure and social), and an overall general adaptive composite (GAC), which includes all of the above as well as motor skills. The composite GAC age-based population score has a mean (SD) of 100 (15). In addition, some children's parents were given a similar questionnaire pre-transplant, the VABS, with an overall composite score mean (SD) of 100 (15). 26 ASD was prospectively diagnosed using the gold-standard DSM-IV-TR prior to 2014 and thereafter DSM-5 criteria 27,28 by multidisciplinary teams at each site, supplemented by the appropriate module of the Autism Diagnostic Observation Schedule (ADOS) and parental interviews and observations.

| Perioperative management of IEM patients

All IEM patients received intravenous dextrose 10% (glucose infusion rate of 6-11 mg/kg/min) with appropriate electrolytes and intravenous 20% lipids (1.5-3 g/kg/day) immediately prior to and during the transplant procedure. All UCD patients received a continuous infusion of IV ammonia scavengers (sodium phenylacetate and sodium benzoate, Ammonul) and IV arginine during the operation. Information was not available for one patient. Ammonia monitoring was requested but not performed in the majority of cases in the operating room. Perioperative or intraoperative dialysis was not performed in any patient. Immediately after transplant, UCD patients were started on typical post-LT parenteral nutrition without any protein restriction. PA and MSUD patients were started on protein-restricted diets; in most, protein was started at half the daily requirement and increased to 1.5 to 2 g/kg/day by day 7 post-LT, based on biochemical parameters including acid-base status and plasma amino acids.
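For reference, the glucose infusion rates quoted above follow from the dextrose concentration, pump rate, and patient weight via a standard bedside formula; a minimal sketch, in which the example weight and rate are invented values:

```python
def glucose_infusion_rate(rate_ml_per_h: float, dextrose_pct: float,
                          weight_kg: float) -> float:
    """Glucose infusion rate (GIR) in mg/kg/min.

    dextrose_pct is g/100 mL, so multiplying by 10 gives mg/mL; dividing
    by 60 converts mL/h to mL/min, giving GIR = rate x pct / (6 x weight).
    """
    return rate_ml_per_h * dextrose_pct * 10.0 / (60.0 * weight_kg)

# Invented example: a 10 kg child receiving 10% dextrose at 50 mL/h.
gir = glucose_infusion_rate(50.0, 10.0, 10.0)
print(f"GIR = {gir:.1f} mg/kg/min")   # 8.3, within the 6-11 range cited
```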
These patients have sick-day and metabolic emergency plans post-LT, which is not the case for UCDs. The goal of sick-day and emergency plans is to achieve lower protein, higher calorie, and higher fluid intake.

| Ethics

This study has been approved by the local health research ethics boards of all sites. All parents/guardians signed informed consent.

| Statistical analysis

Variables are described as mean (SD), median (interquartile range [IQR]), or count (percentage), as appropriate. The first objective was to compare outcomes in metabolic and matched patients; the Wilcoxon rank-sum test was used to compare continuous outcomes. For dichotomous outcomes, a conditional logistic regression with an exact option was needed to address the sparse data in the matched-pairs design. The algorithm did not converge, so the data are described using counts and percentages, without statistical significance testing. The second objective was to compare outcomes of FSIQ and GAC in UCD and matched patients using the Wilcoxon rank-sum test. The third objective was to compare variables between UCD and individually matched patients, in order to assess what may account for any differences found under objective two. Variables were compared using the Wilcoxon rank-sum test for continuous variables and, as above, no statistical testing was done for dichotomous variables. A P-value of ≤.05 was used to determine statistical significance. A similar analysis was not performed for MSUD and PA, as the numbers were too small to allow statistical end points to be met.

Table 1 summarizes the clinical presentation of the 13 patients with IEM. Age at transplant was 18.8 (14.1) months in IEM patients and 17.6 (14.7) months in non-IEM patients (Wilcoxon rank-sum test P value = .81). The time on the waitlist was 93.8 (65.6) days in IEM patients and 65.6 (72.5) days in non-IEM patients (P = .25). Of UCD patients, 8/9 (89%) had a severe neonatal phenotype based on presentation at <5 days of life with ammonia >500 μmol/L (one patient presented at 30 days of life). All MSUD patients were diagnosed at <10 days of life with leucine >1500 μmol/L and as such are classified as severe neonatal phenotype. The PA patient was diagnosed through newborn screening but was symptomatic with lethargy, hyperammonemia, elevated anion gap, and mild lactic acidosis, and was thus considered to be neonatal onset. Of the IEM patients, 7/13 (54%) received dialysis at diagnosis, including six with UCD. Given that patients were transferred from other centers, pre-LT hyperammonemia events are not available for all patients, and the majority of recorded events are those that took place in our province; however, it is unlikely that a severe crisis (ammonia >500 μmol/L) was missed during chart review, even from other provinces, as this would have resulted in hospital admission. Of UCD patients, 7/9 (78%) had multiple mild hyperammonemic crises (100-199 μmol/L), and all had at least one severe hyperammonemic crisis (>500 μmol/L; Table 1), while none of the matched patients had any pre-LT hyperammonemic events. Dietary protein restriction was used for all IEM patients pre-LT. Twelve IEM patients had brain MRI pre-LT, with 10 (83%) reported as abnormal. Abnormalities on MRI included signal abnormality in white matter or brainstem with variable degrees of edema, gliosis, and atrophy; two patients had intracerebral hemorrhages on brain MRI.
| Post-LT metabolic course in IEM patients

Elevations of leucine have been observed with illness in MSUD patients post-LT, and in one patient this led to neurological symptoms. The PA patient remains on protein restriction to meet daily requirements and remains on carnitine. Hyperammonemic episodes have not been observed in the PA patient post-LT, even with illness, although illness management was used, with need for hospitalization.

| Outcomes at 4.5 years of IEM (UCD, MSUD, and PA) patients

Outcomes between IEM and matched comparison patients are given in Figure 1 and Table 2. Medical outcomes were similar, including weight and height z-scores, number of hospitalizations, medications, and specialists involved in care. Five (39%) of the IEM patients, compared with none of the comparison group, had a gastrostomy tube at 4.5 years of age. All patients survived and completed assessments at 4.5 years of age. Children with IEMs had statistically significantly lower FSIQ and GAC at 4.5 years of age compared with non-IEM patients; 7 (54%) and 7 (54%) had FSIQ and GAC <70, compared with 1 (4%) and 3 (12%) of non-IEM patients, respectively (Table 2). Among the IEM patients, four had FSIQ <55, three were between 55 and 69, three were between 70 and 84, two were between 85 and 99, and one was over 100. In the IEM patients, ASD was diagnosed in 5 (39%) (4 UCD, 1 PA) and cerebral palsy in 1 (8%; UCD), whereas these diagnoses were absent in the non-IEM comparison group. All children with a diagnosis of ASD had FSIQ and GAC <70, and all met the diagnosis of ASD using both the DSM criteria and after the ADOS was completed. Hearing loss was present in 1 (8%) and 2 (8%) patients in the IEM and non-IEM groups, respectively; in the non-IEM group, the hearing loss was a complication of a chemotherapy agent. Of note, none of the three MSUD patients had FSIQ <70, intellectual disability (FSIQ and GAC <70), or ASD.

| Outcomes at 4.5 years in the UCD-only and matched comparison children

The UCD patients differed in some respects from their matched cohort (Table 3). Most significantly, in UCD patients the FSIQ of 64 [IQR 54, 79] and GAC of 56 [45, 75] were much lower than in the matched patients (P = .005 and .003, respectively; Table 3). The UCD patients also had higher ammonia at diagnosis (median 1100 μmol/L) and many more episodes of hyperammonemia (median 19 [IQR 3.5, 85]) than the matched cohort (median 42 μmol/L and median 0 [0, 0] events), P < .001. ASD was diagnosed in 4 (44%) UCD patients. Abnormal neurological examination and seizures preoperatively were also more common in the UCD patients (Table 3).

| DISCUSSION

Neonatal presentations of UCD, MSUD, and OAD are associated with a poor prognosis for survival and/or neurological outcomes. 1,29,30 Natural history studies show survival without LT in those who survive the neonatal period to be 66% to 91% at 1 year of age for different types of UCDs. 29 Post-LT survival is significantly improved, and some studies show survival to be 100%. 15,17 Quality of life improves, 31 and the risk of further metabolic crises is significantly decreased (and absent in UCDs) after LT. With respect to neurocognitive outcomes without LT, Krivitzky et al 32 reported FSIQ in 13 neonatal onset UCD patients to average 65.5 (half had FSIQ <70) when assessed between 3 and 16 years of age. With LT, initial studies continue to show unsatisfactory neurocognitive outcomes and highlight the need for more objective data. We present neurocognitive data on 13 IEM patients post-LT. All of our cohort of IEM patients had neonatal onset phenotypes, and 8/9 UCD patients had a severe hyperammonemic event in the first 5 days of life.
Our cohort had a significantly lower FSIQ and GAC at 4.5 years of age compared with the matched non-IEM post-LT patients. Almost half (46%) of the IEM patients, and none of the matched non-IEM patients, had intellectual disability. The prevalence of autism (5/13 (39%) of IEM patients, 0 of non-IEM patients) and cerebral palsy (1 (8%) of IEM patients, 0 of non-IEM patients) was also higher in the IEM population at 4.5 years of age. Similar to what has been reported previously, 10 our data suggest that MSUD is associated with better cognitive outcomes post-LT, as none in our cohort had intellectual disability or ASD. However, these patients exhibited clinical concerns of anxiety, ADHD, and language disorders, indicating that post-LT functional assessment remains important for optimal patient care.

Pre-existing neurological injury and the severity of hyperammonemic episodes need to be documented, as these may play a major role in the final neurocognitive outcome. 1,33 More UCD patients had a pre-LT abnormal neurological examination, seizures, abnormal neuroimaging, and a concern about developmental delay documented in the hospital chart than the non-IEM patients (Table 3). In addition, ammonia at diagnosis was high and episodes of hyperammonemia were frequent in the UCD patients, and these did not occur in the non-IEM patients. Ammonia is a known neurotoxin, and initial hyperammonemia is known to strongly influence subsequent intellectual development. 34 Thus, repeated hyperammonemic crises pre-LT, and potentially other biochemical disturbances such as high glutamine, high citrulline, and low arginine, may further contribute to the poor UCD post-LT outcomes. 14 Arginine plays a role in nitric oxide and creatine synthesis, 35 and dysregulation of these pathways may further contribute to neurological injury in UCDs. In PA, mitochondrial function is impaired, leading to abnormal energy metabolism and increased susceptibility to neurological injury. 5 In MSUD, high leucine levels predispose to encephalopathy and neurological injury. Thus, pre-LT events may account for much of the differences between IEM and non-IEM patients, both pre- and post-LT, in neurological and functional outcomes.

Although guidelines recommend early LT in UCD patients, 36 data from UNOS did not show that patients having LT at <2 years of age did better cognitively than patients having LT after 2 years of age. 10 Of our IEM patients, 10 (77%) had LT at <2 years of age, and of these, 5 (38%) were transplanted at <1 year of age. Thus, it may not be age at LT but rather the severity and duration of metabolic decompensations that play a significant role in determining neurocognitive outcomes. The neonatal brain may also be more vulnerable to a severe hyperammonemic insult, 3 but the effect of this damage may not be apparent until the child is older. General predictors of post-LT adverse neurocognitive outcomes are variable in the literature but may include pre-LT poor growth, malnutrition, encephalopathy at LT, post-LT neurotoxic medications, and clinical instability (eg, inotropes and high serum creatinine). 20 These predictors were similar between the IEM and non-IEM patients in our study. However, two potential exceptions need to be highlighted. First, given the severe protein restriction in the IEM patients pre-LT, it is likely that malnutrition is present but not reflected in weight and height measurements.
Second, while encephalopathy at the time of transplant was not different between the two groups, encephalopathy during any period prior to LT (especially at presentation) is assumed in the IEM cohort. As such, it is not yet clear that earlier LT in neonatal onset IEM phenotypes could improve the poor neurocognitive outcomes we report. In addition, performing LT at younger ages is associated with more frequent postoperative complications and mortality, and this would need to be balanced in any decisions made. 37

There are important limitations to this study. First, the study has a small cohort of n = 13 IEM patients (with n = 9 UCD patients) and n = 26 matched non-IEM patients, undergoing LT at a single referral center. Second, being observational in design, we cannot prove cause-and-effect relationships. We hypothesize that the poor outcomes in neonatal onset IEM are due to neurological insults inherent to the underlying disease in infancy, preceding LT. We did not match for hyperammonemia episodes, as this was not possible given their rarity in non-IEM patients, and we hypothesized them as a likely explanation of differences in outcomes. In addition, we do not know whether the outcomes in IEM patients would be different without LT, as the study design did not match to IEM patients not having LT. Third, some of the retrospectively collected variables on neurological findings, development, and neuroimaging are subject to reporting bias in how they were recorded in the charts. Ideally, we would have objective assessments on all subjects pre-LT; however, even if these were available, the literature supports the inability of objective formal assessments in the infantile period to predict the cognitive profile at school age. 32 Fourth, the matched comparison group included patients with heterogeneous causes of liver disease, and it is possible that specific subgroups of non-IEM patients may have similarly poor outcomes to the IEM group. Nevertheless, in our follow-up program the outcomes of acute liver failure patients transplanted at age <3 years include a FSIQ of 92, 20 suggesting that the IEM group does more poorly than even this high-risk subgroup of non-IEM patients. Fifth, the small sample size precluded testing for predictors of adverse outcome in the IEM cohort.

Our study addressed some deficiencies in the literature. Strengths of the study are the 100% long-term 4.5-year follow-up of all IEM and matched non-IEM LT patients, and the detailed outcome assessments done using validated instruments on all patients. In addition, the IEM patients were a homogeneous cohort of neonatal onset phenotypes. Although objective assessments were not available for all patients pre-LT, we were successful in outlining a general view of an IEM patient's developmental and neurological profile, including brain imaging, pre-LT. This, in addition to highlighting the metabolic course with dietary therapy and hyperammonemic crises, allows a better understanding of the complex medical issues faced by IEM patients pre-LT. The finding that IEM patients, and UCD patients in particular, have poor neurocognitive outcomes and a high incidence of ASD, even after early LT, is concerning and requires further study. It may be helpful to have neurocognitive outcomes in younger affected siblings who were diagnosed prenatally. They may have avoided the initial severe hyperammonemic insult due to the early diagnosis, and as such their neurocognitive outcomes may be different.
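To make the study's primary comparison concrete, the sketch below applies the Wilcoxon rank-sum (Mann-Whitney U) test described in the Methods to synthetic FSIQ values that were chosen only to echo the reported medians (75 for IEM, 94.5 for matched); they are not the study data:

```python
import numpy as np
from scipy import stats

# Synthetic FSIQ values, 13 "IEM" and 26 "matched", constructed so the
# medians mirror the reported 75 and 94.5; NOT the actual study data.
fsiq_iem = np.array([49, 54, 58, 68, 73, 74, 75, 78, 82, 88, 92, 96, 104])
fsiq_matched = np.array([70, 75, 78, 80, 84, 86, 88, 89, 90, 91, 92, 93,
                         94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105,
                         108, 110, 112])

# Wilcoxon rank-sum test, as used for the continuous outcomes.
u_stat, p_value = stats.mannwhitneyu(fsiq_iem, fsiq_matched,
                                     alternative="two-sided")
print(f"median IEM = {np.median(fsiq_iem)}, "
      f"median matched = {np.median(fsiq_matched)}, p = {p_value:.4f}")
```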
| CONCLUSION

Patients with neonatal forms of UCD, MSUD, and PA are at lifelong risk of metabolic decompensation, and each episode carries a risk of sustained neurological injury and death. LT is primarily considered as a treatment option to eliminate or significantly decrease the ongoing risk of such metabolic decompensations and to increase survival. Our data align with the reported excellent patient survival, along with elimination of hyperammonemic crises in UCD patients and significantly decreased metabolic decompensations in MSUD and PA patients. The goal of our study was to evaluate FSIQ and GAC at age 4.5 years through a case and matched comparison between IEM and non-IEM patients post-LT. Our data show that in neonatal forms of UCD, MSUD, and PA, neurocognitive and functional outcomes remain poor. Particularly in UCDs, FSIQ and GAC were concerning, with a high incidence of intellectual disability and ASD. Thus, it becomes crucial that families are appropriately counselled regarding the likelihood of poor neurological outcome even after a successful LT for neonatal IEM phenotypes.
A cell-based screening method using an intracellular antibody for discovering small molecules targeting the translocation protein LMO2

A cell-based screen uses an intracellular antibody to select compounds targeting the chromosomal translocation protein LMO2.

INTRODUCTION

Intracellular antibodies are a class of reagent that bind targets in the cellular environment (1,2). They are tools that can be functionalized with effector warheads, such as those inducing cell death (3,4) or recruiting proteasome components for targeted protein degradation (5,6), or they can be used directly as inhibitors of protein activities (7,8). Furthermore, protein-protein interactions (PPIs) can be attenuated by intracellular antibodies, making them an important tool in target validation (9,10). PPIs are a target class that has been challenging to convert into therapeutics with small-molecule inhibitors, because they are usually composed of relatively large interaction surfaces involving several binding hotspots and usually lack a well-defined binding site (or pocket) (11). Nevertheless, PPIs can be efficiently inhibited by macromolecules such as intracellular antibody fragments [e.g., single-chain fragment variable (scFv) (12)(13)(14) or intracellular domain antibodies (iDAbs) (9,10,15)] and other intracellular antibody-like formats (16,17). The advantage of intracellular antibody-based reagents is that the natural properties of antibodies, such as their high affinity and specificity, can be exploited. Furthermore, their relatively quick selection with methods such as intracellular antibody capture (13) allows them to be used to investigate effects on a target disease in relevant preclinical models (target validation) (9,10,15).

While the aim of using intracellular antibodies as drugs in their own right [termed macrodrugs (18)] is still being developed, the small size of the iDAb interaction surface with target antigens has been explored as a template for small-molecule surrogates in a method called Abd technology (antibody-derived compound technology) (19). The initial Abd selection was carried out as a biochemical assay: we used a competitive surface plasmon resonance (cSPR) method to select compounds from a fragment library that overlap the antibody-binding site on HRAS G12V (19). This selection method yielded RAS-binding fragment hits that were developed by structure-guided design into nanomolar-binding compounds that inhibited RAS-effector interactions (19). However, the cSPR Abd method depends on favorable binding properties of the intracellular antibody for its target (very high affinity, high K_on, and low K_off), on the selected compounds having advantageous cellular uptake properties, and on the feasibility of expressing and purifying the recombinant protein of interest. Consequently, new versatile methods that enable the rapid discovery of compounds targeting challenging proteins, such as the products of chromosomal translocations or transcription factors, would be valuable, as these have been considered extremely difficult drug targets. In this study, we developed a novel approach to Abd compound development using a cell-based screening method, in which the interaction of the target protein with the iDAb is monitored by a bioluminescence resonance energy transfer 2 (BRET2) signal.
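In practice, a BRET2 readout is a ratio of acceptor to donor emission corrected against a donor-only control. The sketch below shows that computation; the wavelength comments describe a typical BRET2 configuration and the plate-reader counts are invented, so none of these specifics should be read as this paper's exact protocol:

```python
def bret2_ratio(acceptor_counts: float, donor_counts: float,
                donor_only_acceptor: float, donor_only_donor: float) -> float:
    """Background-corrected BRET2 ratio for one well.

    BRET2 pairs an RLuc8 donor (read near ~410 nm with coelenterazine
    400a) with a GFP2 acceptor (read near ~515 nm). Subtracting the
    ratio measured in a donor-only well removes donor emission that
    bleeds into the acceptor channel.
    """
    raw_ratio = acceptor_counts / donor_counts
    background = donor_only_acceptor / donor_only_donor
    return raw_ratio - background

# Invented plate-reader counts: one assay well plus a donor-only control.
ratio = bret2_ratio(12_500, 90_000, 4_500, 95_000)
print(f"corrected BRET2 ratio = {ratio:.3f}")
```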
We applied this method to a challenging target protein, the LIM domain only protein 2 (LMO2), which is activated by the chromosomal translocations t(11;14)(p13;q11) and t(7;11)(q35;p13) in T cell acute lymphoblastic leukemia (T-ALL) (20). In addition, LMO2 is overexpressed in more than 50% of T-ALL (21) and is not expressed in normal T cells (22). We have previously used an intracellular VH, VH576, binding to LMO2 (hereafter named iDAb LMO2) to validate the LMO2 protein target in T-ALL by showing that T cell tumors do not grow when LMO2 is blocked (9). The mechanism of this inhibition is that the iDAb binds to LMO2, forming a stable structure that precludes the PPIs with its natural partners (23). We have used this anti-LMO2 iDAb as a tool for the development of a cell-based chemical library screening method to select compounds that bind to the same interface of LMO2 as the iDAb (i.e., the iDAb combining site). We established a BRET2-based LMO2 biosensor involving the interaction of LMO2 with a mutant of the LMO2 iDAb whose affinity had been reduced (dematured) by mutation of the VH complementarity-determining regions (CDRs). The purpose was to weaken the interaction sufficiently for compounds from a chemical library to be able to inhibit the LMO2-iDAb interaction. The screening identified a chemical hit series that binds to LMO2 and interferes with iDAb binding in cells. The chemical matter was subjected to a structure-activity relationship (SAR) study to monitor the potency of the new analogs in interfering with LMO2 PPIs in cells. Our study shows a further implementation of intracellular antibodies as starting points toward the selection of small molecules and, in this case, the basis of future inhibitors of the chromosomal translocation-activated protein, LMO2. Using antibodies in this way in a drug discovery program is therefore a new practical application with huge potential, particularly for difficult-to-target proteins.

Establishing a BRET-based LMO2-iDAb biosensor for a small-molecule screen

We previously described the use of a high-affinity intracellular antibody binding to RAS protein in a cSPR screen of a chemical library (19). That method relies on high-affinity interaction between antibody and antigen on the SPR chip to select Abd compounds. Because the interaction affinity of the anti-LMO2 iDAb for LMO2 is in the nanomolar range, rather than picomolar as for the anti-RAS iDAb, and because LMO2 can be expressed in Escherichia coli only in complex with the LID domain of LIM domain binding 1 (LDB1) (24) or with the iDAb (23), implementing the cSPR Abd method for LMO2 would be challenging. Therefore, an alternative approach was designed using a cell-based screening method for iDAb surrogates. Such a cell-based screen for compounds that inhibit PPIs requires an assay that generates a signal from the PPI but does not rely on a high-affinity interaction, because initial chemical hits would be expected to be weak binders. Accordingly, we engineered a BRET-based LMO2-iDAb LMO2 biosensor based on the strategy of the RAS biosensors (25). We used structural data from the LMO2-iDAb LMO2 complex (23) to optimize the proximity of donor and acceptor moieties: the donor moiety RLuc8 was fused to the C-terminal end of LMO2, and the green fluorescent protein 2 (GFP2) acceptor was fused to the N-terminal end of the iDAb LMO2.
The interaction between LMO2-RLuc8 and GFP2-iDAb LMO2, the lower-affinity GFP2-iDAb LMO2 dm [a dematured iDAb LMO2 (25)], or the nonrelevant GFP2-iDAb RAS (10) (hereafter named iDAb control or iDAb Ctl) was tested by BRET donor saturation assays (fig. S1A). These data demonstrate that the dematuration mutagenesis lowered the affinity of the iDAb LMO2 dm, since there is a 10-fold increase in the BRET50 (an approximation of the relative affinity of the acceptor for the donor protein) of iDAb LMO2 dm compared with iDAb LMO2 (0.44 versus 0.03, respectively; see fig. S1A). The specificity of these interactions was assessed with a BRET competition assay in which an untagged competitor (iDAb LMO2) or a nonrelevant competitor (iDAb Ctl) was expressed with either the BRET pair LMO2-iDAb LMO2 (fig. S1B) or LMO2-iDAb LMO2 dm (fig. S1C). The competitor iDAb LMO2 decreased the LMO2-iDAb LMO2 interaction in a dose-dependent manner, but only by ~65% at the highest dose of competitor (fig. S1B). Accordingly, iDAb LMO2 competed the lower-affinity LMO2-iDAb LMO2 dm interaction with a stronger inhibition at its highest dose (~80%; fig. S1C), and the expression of these proteins was not altered (fig. S1D). These data suggest that the affinity of the iDAb LMO2 dm needed to be decreased further for use in a screening assay, where the binding strength of initial hits was likely to be low.

We have used a dematuration method, based on CDR sequences, to decrease iDAb affinity (26); this previously enabled an AlphaScreen of RAS G12V-binding compounds and analysis of in vitro-derived RAS-binding Abd compounds (27). On the basis of the LMO2-iDAb LMO2 structural information (23), we introduced additional mutations into the CDRs of iDAb LMO2 dm that would affect the interaction between key amino acids of the iDAb and LMO2, using alanine or glycine substitutions, while still retaining specific binding (Fig. 1A). Hence, we constructed six mutants named iDAb LMO2 dm1 to iDAb LMO2 dm6 [DNA and protein sequences shown in fig. S2 (A to H)]. Most of the modifications of the iDAb LMO2 affected its binding around the hinge region of LMO2 (Fig. 1, B and C). Next, we tested them in BRET donor saturation assays (Fig. 2A). Each of the iDAb LMO2 mutants (dm1 to dm6) had a decreased BRETmax value (an approximation of the total number of LMO2/iDAb complexes and the distance between the donor and the acceptor within the dimer) and an increased BRET50 value compared with the template iDAb LMO2 dm (Fig. 2B). This suggested an overall decreased affinity of the dematured iDAbs toward LMO2, and the mutations did not affect their expression (Fig. 2C). Last, we performed a BRET competition experiment with each mutant (Fig. 2D) to determine the optimal dematured iDAb for the chemical library screen. The competition data with iDAb LMO2 dm3 showed that it was the best mutant, as its interaction with LMO2 was almost completely inhibited by iDAb LMO2 (~90%) while it retained a relatively high BRET signal (Fig. 2D). Therefore, we chose this mutant for a cell-based high-throughput screening of small molecules.

HTS for inhibitors of LMO2-iDAb dm3 interaction

We exploited the robustness and scalability of our cell-based BRET LMO2-iDAb LMO2 dm3 interaction assay in a high-throughput screen (HTS) to identify compounds that inhibit this interaction. We screened a library of 10,720 small molecules assembled from BioFocus and ChemBridge sources (see Materials and Methods). The flowchart of the HTS is described in Fig. 3A.
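The donor-saturation analysis above amounts to fitting a one-site binding hyperbola, BRET = BRETmax x [A] / (BRET50 + [A]), where [A] is the acceptor-to-donor expression ratio; BRET50 then serves as the relative-affinity proxy discussed in the text. A minimal sketch with invented data points (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(acceptor_ratio, bret_max, bret50):
    """One-site binding hyperbola used for BRET donor-saturation curves."""
    return bret_max * acceptor_ratio / (bret50 + acceptor_ratio)

# Invented acceptor/donor plasmid ratios and BRET signals for a tight
# binder and a dematured (weakened) binder.
ratios = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
signal_tight = np.array([0.18, 0.27, 0.36, 0.41, 0.45, 0.47, 0.48])
signal_dematured = np.array([0.04, 0.08, 0.15, 0.22, 0.29, 0.35, 0.39])

for name, signal in (("iDAb", signal_tight), ("iDAb dm", signal_dematured)):
    (bret_max, bret50), _ = curve_fit(saturation, ratios, signal, p0=(0.5, 0.5))
    print(f"{name}: BRETmax = {bret_max:.2f}, BRET50 = {bret50:.2f}")
```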
Human embryonic kidney (HEK) 293T cells were transfected on day 1 with plasmids expressing LMO2-RLuc8 and GFP2-iDAb LMO2 dm3; 24 hours later, compounds were added at 10 μM, and the BRET signals were determined after a further 24 hours. The entire screen was done in duplicate plates. Ninety-nine compounds modulated the LMO2-iDAb LMO2 dm3 BRET interaction in both duplicates using a cutoff of 3×SD from the dimethyl sulfoxide (DMSO) controls (28), a cutoff chosen so that the number of hits could readily be handled in secondary assays (Fig. 3B). Sixty-five compounds potentiated, and 34 primary hits inhibited, the LMO2-iDAb LMO2 dm3 interaction. The 65 compounds might favor the interaction between LMO2 and iDAb LMO2 dm3 or modify the complex in a way that brings the GFP2 and RLuc8 moieties closer or into an orientation more permissive for energy transfer; it is also possible that the increased signal is due to an increase in the GFP2 signal (e.g., if a compound emits in the GFP2 channel). We focused on the 34 compounds, as we sought to identify inhibitors of the LMO2-iDAb LMO2 dm3 interaction. These were retested using the original BRET assay to confirm the inhibition of signal. Because the selected compounds should be weak binders, we also used a BRET assay of the strong interaction between LMO2 and the unmutated iDAb LMO2 to eliminate nonspecific compounds that could bind to key residues of the iDAb, such as residues L104/E105/L106, previously shown by mutagenesis to be essential for LMO2 binding (Fig. 3, C and D) (23). In addition, initial hits affecting RLuc8 luminescence or intrinsic GFP2 fluorescence by more than twofold were not considered; this excluded several potent primary hits such as P24H7 (fig. S3, A and B). This rescreen confirmed eight inhibitors of the LMO2-iDAb LMO2 dm3 interaction, corresponding to ≈25% of the primary hits (fig. S3, C and D). The eight compounds were lastly tested with a nonrelevant BRET-based interaction assay (MAX bHLH-CMYC bHLH) to provide further confirmation of a specific interaction with LMO2 (fig. S3E). The chemical structures of the selected hits show that the compounds belong to one family that can be divided into two subfamilies on the basis of their chemical similarities, the main difference being the presence of either a five- or a seven-membered ring (fig. S3, C and D, respectively).

The different moieties of the Abd compounds were divided into four substituent groups, namely benzyl (position A), imidazolidinone (position B), oxazole (position C), and aniline (position D), and these were modified systematically (Fig. 4C). Representative analogs are shown (Abd-L13 to Abd-L25; Fig. 4C and fig. S4, B to E). In position A, most substituted benzyl groups were found to be well tolerated (see red boxes in Fig. 4C). Notably, the methoxy group could be placed in the ortho, meta, or para position on the benzyl ring (see red boxes on Abd-L15, Abd-L19, and Abd-L20) with minimal effect on the BRET inhibition potency of the derivatives (Fig. 4D and fig. S4, C to E). At position D, a large array of substituted anilines and benzyl amines was also well tolerated (green boxes in Fig. 4C and BRET data in Fig. 4D and fig. S4, C to E). Modifications to positions B and C had a more substantial effect on the potency of the analogs (Fig. 4, C and D). In position B, any replacement of the imidazolidinone was found to cause a loss of activity (see pink box on Abd-L24; Fig. 4C), apart from the corresponding piperazine (see pink box on Abd-L15 in Fig. 4C).
Because of the potential chemical instability of the imidazolidinone, the lower yields, and the large number of side products during synthesis, the piperazine moiety was used in further SAR investigations, eliminating those issues. In position C, it was found that only the 2,4-substituted thiazole (blue box on Abd-L17 in Fig. 4C) and the 2,4-substituted oxazole (blue box on Abd-L15 in Fig. 4C) were tolerated as a core; the corresponding 2,5-substituted heterocycles and different heterocycles such as pyrimidine (see blue box on Abd-L25 in Fig. 4C as an example) led to a loss of activity (Fig. 4D). These data suggested that the B and C positions are important for the interaction of the compounds with LMO2, while positions A and D could be modified to add new functional groups.

Fig. 2. Establishing a BRET-based LMO2-iDAb biosensor amenable to high-throughput screening of small-molecule libraries. A BRET biosensor was established for the interaction of LMO2 with the anti-LMO2 VH by titrating the BRET signal for mutant VH binding to LMO2. The BRET2 assay comprises live in-cell generation of signal following interaction of a donor protein (in this case, LMO2-RLuc8) and an acceptor protein (in this case, GFP2-anti-LMO2 iDAb) and BRET signal (energy transfer from activated RLuc8 to GFP2). (A) BRET donor saturation assay with donor LMO2 and different mutant iDAb LMO2 acceptors, iDAb LMO2 dm and LMO2 dm1-dm6. (B) BRETmax and BRET50 values from the donor saturation curves displayed in (A). (C) Western blot data for the expression of the GFP2-iDAb LMO2 and mutants (using anti-GFP antibody) and expression of LMO2-RLuc8 (with anti-LMO2 antibody). Tubulin is the loading control. (D) BRET competition assay of LMO2-RLuc8 and the different GFP2-iDAb LMO2 dmx by expression of a nonrelevant control iDAb [anti-RAS (10); Ctl, white bars] or unmutated iDAb LMO2 (black bars) as competitors. This competition was performed at the lowest dose of competitor (i.e., 0.1 μg; see Materials and Methods). The percentage inhibition by iDAb LMO2 compared with iDAb Ctl is displayed. The iDAb LMO2 dm3 mutant chosen for the cell-based screening assay is colored in blue. Each experiment was performed twice. Where error bars are presented (A and D), they correspond to mean values ± SD of biological repeats.

Fig. 3. Cell-based high-throughput screening for inhibitors of LMO2-iDAb LMO2 PPI. A high-throughput screen of a diverse compound library was conducted using the LMO2-VH BRET assay. (A) The scheme for the cell-based HTS is shown, in which a diverse chemical library of 10,720 compounds was screened using a BRET cell assay to determine diminution of the signal generated by interaction of LMO2-RLuc8 and GFP2-iDAb dm3. (B) Scatter plot of the normalized BRET signal from 10,720 compounds tested at 10 μM. Thirty-four compounds (primary hits) caused inhibition of the BRET signal below a cutoff of three times the SD (−3×SD) of the DMSO BRET signal (28). Some primary hits are pinpointed in orange. (C and D) Confirmation of inhibition of the BRET signal from LMO2 interaction with iDAb LMO2 dm3 (C) and from interaction of LMO2 with the unmutated iDAb (D). Eight hits (depicted by blue bars) were confirmed to decrease the LMO2-iDAb LMO2 dm3 signal by at least 3×SD of the BRET signal with the DMSO control (i.e., DMSO BRET signal ± 3×SD: 12.2 ± 3.6, threshold set at 8.6 and shown with the dotted line) without affecting the LMO2-iDAb LMO2 interaction (i.e., DMSO BRET signal ± 3×SD: 30.3 ± 3.4, threshold set at 26.9 and shown with the dotted line). The P24H7 compound highlighted with a red asterisk is an example of a compound that was not pursued further, as it affects both the iDAb LMO2 dm3 and iDAb LMO2 interactions with LMO2. Experiments in (C) and (D) were performed twice. Error bars presented in (C) and (D) correspond to mean values ± SD of biological repeats.

We also tested Abd-L9 and some analogs in a parallel artificial membrane permeability assay (PAMPA) and Abd-L9 in a Caco-2 permeability assay (fig. S5, A and B). This showed that the compounds were permeable through a synthetic membrane (PAMPA) or into cells (Caco-2), as would be expected for compounds derived from cell-based screens. Abd-L9 showed the best properties in the PAMPA compared with the analogs, and low transport but a low efflux ratio in the Caco-2 assay (fig. S5, A and B). These results suggested that while Abd-L9 enters cells with relatively low efficiency, it is not actively exported from the cells (low efflux ratio).

Abd compounds bind LMO2 in vitro

The LMO2 Abd compounds were subsequently verified using an orthogonal in vitro assay. We used photoaffinity labeling (PAL), a powerful technique for studying protein-ligand interactions in cell lysate or with (partially) purified proteins (29) (Fig. 5A). In PAL, a chemical probe is appended to a ligand so that it can covalently bind its target in response to activation by ultraviolet (UV) light (Fig. 5A) (30). In addition, a tag moiety is added to the ligand to allow capture of the covalent ligand-protein complex on affinity beads before detection by Western blot (Fig. 5A). The extensive SAR data on the LMO2 Abd compounds suggested attachment sites on the parent ligand. We added a benzophenone photoreactive group in place of the benzyl substituent (position A) and a linker with a biotin tag in position D (designated Abd-L26; Fig. 5B). To test whether the addition of the photoreactive group to the Abd compound would modify its potency, and because the biotin moiety may render compounds cell impermeable (29), we tested the ability of a precursor compound, Abd-L27 (fig. S6A), to inhibit the LMO2-iDAb LMO2 dm3 interaction in the BRET assay. We observed that Abd-L27 retained its inhibitory potency in BRET (fig. S6, B and C) and thus maintained its ability to bind to LMO2.

Analysis by the PAL technique requires soluble recombinant LMO2 with the Abd-L compound-binding site accessible. We carried out a phage display screen of scFvs with the LMO2-LID protein antigen and obtained an scFv that binds LMO2 and can be coexpressed in E. coli. Using the partially purified scFv-LMO2 dimer, the PAL technique was performed by cross-linking Abd-L26 to the scFv-LMO2 complex with UV light (illustrated in Fig. 5C). The Abd-L26 in the complex was isolated by interaction of the biotin moiety with avidin beads, and the protein was analyzed by Western blot with either anti-biotin antibody (Fig. 5D), anti-LMO2 antibody (Fig. 5E), or anti-HIS tag antibody (Fig. 5F). The pulldown data show that protein is only cross-linked when the mixture is treated with UV light, and we observed a protein (Fig. 5D, lane 3) coincident with the size of LMO2 (Fig. 5E). In addition, the recovery of biotinylated LMO2 was inhibited by incubating the protein with Abd-L26 (PAL) in the presence of a 5× concentration of the Abd-L9 competitor (Fig. 5D, lane 4), confirming specific binding of the compound to LMO2.
The anti-biotin antibody showed that the biotinylated proteins bound specifically to the beads through the PAL Abd-L26 compound, while the anti-LMO2 and anti-HIS antibodies showed nonspecific binding of proteins to the beads. We noted that the recombinant LMO2 and the scFv had a tendency to associate nonspecifically with the avidin agarose beads used for the pulldown without UV cross-linking (see lanes 1 and 2, Fig. 5, E and F). This may be due to partial denaturation of the proteins during the PAL incubation and explains the apparent partial inability of Abd-L9 to compete the PAL compound (Fig. 5F, lane 4 versus lane 3).

Activity of LMO2 Abd compounds in cells

We tested the specificity and potency of the Abd-L compounds in cells by using dose-response BRET assays on different LMO2 PPIs. These included the LMO2 interaction with the unmutated iDAb and the iDAb dm3, with its natural partner proteins LDB1 and TAL1 (together with E47) (31), and with a nonrelevant control PPI, the interaction of the bHLH regions of CMYC and MAX. We first tested the direct interaction of LMO2 with TAL1 by a BRET donor saturation assay (fig. S7A), but this interaction is weak and gave a high BRET50 value. We added the partner proteins involved in the LMO2 complex (31) individually and found that coexpression of E47, a heterodimerization partner of TAL1, increased the relative affinity of LMO2-TAL1 and that the addition of LDB1 gave the strongest binding between LMO2 and TAL1 (fig. S7A; see decreasing BRET50 values, from 12.6 to 1). We also developed the BRET pair LMO2-LDB1 (fig. S7B) and the nonrelevant interaction of MAX bHLH with CMYC bHLH (fig. S7C). Last, the specificity of these three interactions was tested with BRET competition assays by coexpressing nontagged versions of iDAb Ctl or iDAb LMO2 in the BRET assay cells. iDAb LMO2 inhibited the BRET signal from LMO2-TAL1 + E47 (fig. S7D) and from LMO2-LDB1 (fig. S7E) but not from the MAX-CMYC interaction (fig. S7F).

We assessed the anti-LMO2 Abd compounds in dose-response experiments with the various BRET assays (Fig. 6, A to D, and fig. S7, G and H). None of the compounds inhibited the LMO2-iDAb LMO2 dm3 BRET by more than 40 to 50%, with the exception of Abd-L22 (~85%). However, we found that Abd-L9, Abd-L10, and Abd-L16 had the best relative median inhibitory concentrations (IC50) for the LMO2-iDAb LMO2 dm3 interaction, at around 1 μM (Fig. 6A and table S1), whether the compound contained an imidazolidinone substituent (Abd-L9 and Abd-L10) or a piperazine substituent (Abd-L16). The other compounds tested showed relative IC50 values ranging from just over 7 μM to nearly 50 μM for Abd-L19 (Fig. 6, A and D, and table S1). When this group of compounds was assayed with the LMO2-LDB1 BRET, little effect was observed except for Abd-L10, which caused only a small inhibition (35% at the highest concentration of Abd-L10, with an IC50 of 1.2 μM; Fig. 6, B and D, and table S1). Testing this group of Abd-L compounds with the BRET assays for LMO2-iDAb LMO2 (unmutated iDAb) (Fig. 6C), LMO2-TAL1 + E47 (fig. S7G), or MAX bHLH-CMYC bHLH (fig. S7H) failed to show any inhibition, even at the highest concentration of compound used throughout the series of BRET inhibition assays, with the exception of Abd-L22, which inhibited the LMO2-iDAb LMO2 interaction but only at high concentrations (above 25 μM; Fig. 6C).
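Relative IC50 values like those quoted above are typically obtained by fitting a four-parameter logistic curve to the normalized dose-response data; a minimal sketch with invented data points (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc_um, top, bottom, ic50, hill):
    """Standard 4PL dose-response model; concentrations in micromolar."""
    return bottom + (top - bottom) / (1.0 + (conc_um / ic50) ** hill)

# Invented dose-response points: % BRET signal remaining vs compound dose.
doses_um = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal_pct = np.array([98.0, 88.0, 55.0, 35.0, 28.0, 25.0])

params, _ = curve_fit(four_param_logistic, doses_um, signal_pct,
                      p0=(100.0, 20.0, 1.0, 1.0), maxfev=10_000)
top, bottom, ic50, hill = params
print(f"relative IC50 ~ {ic50:.2f} uM (Hill slope {hill:.2f})")
```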
Intracellular antibodies as templates for drug discovery
Intracellular antibody fragments can interact with proteins at any antigenic site, including sites where natural partner proteins engage in PPIs. This provides an opportunity to use the intracellular antibody to derive compounds that overlap the antibody-binding site. When the intracellular antibody interferes directly with a PPI, it can, unlike the natural partner protein, be obtained with very high affinity binding, as we showed for selected compounds binding to the RAS proteins (19), demonstrating that this so-called undruggable target is, in fact, druggable. The same interaction site of an intracellular antibody fragment can be used for both target validation and drug screening [e.g., LMO2 or RAS targets (15,23)]. The use of intracellular antibody fragments rather than natural partner proteins as a general approach for drug discovery has several advantages based on the natural specificity and affinity of antibodies: domain antibodies have small binding areas of a few residues, whereas natural interactions tend to be flat, with a large interaction surface (32), making it difficult to find compounds that act as inhibitors (33). In addition, manipulating the affinity of intracellular antibody fragments, unlike natural partners, is potentially straightforward without structural data, since their CDRs, defined by primary sequence, are the major determinants of their interaction with the target antigen (34). This process is called intracellular antibody dematuration (27). Furthermore, intracellular antibodies can discriminate between family members (isoforms or paralogs), which themselves may have common binding partners [e.g., KRAS (16)]. Last, the use of an intracellular antibody is advantageous over the use of natural binders in cases where (i) the natural partner is not defined or (ii) multiple partners are involved and interact on different interfaces of the protein of interest, such as the complex involving LMO2 (31). In contrast to the cSPR Abd method, where a high-affinity antibody is needed (19), the measured in vitro affinity of the iDAbs for their target is not a limitation with our cell-based assay. High-affinity iDAbs could be dematured to reduce their binding for use in the cell-based assay, as in a RAS compound selection assay (27). Alternatively, an iDAb that already has a lower affinity could be used directly in the cell-based assay without a prior dematuration step, which makes this a flexible approach. Notably, we previously observed with iDAb RAS that affinities measured in vitro and by BRET assay did not follow the same rank order (25). iDAb RAS has an in vitro affinity of 6.2 nM, while the iDAb RAS dm has an affinity around 1 μM (>160-fold difference). Nonetheless, the BRET50 value of iDAb RAS is 0.34, while it is 1.34 for the iDAb RAS dm (>4-fold difference), showing that no direct affinity comparison is possible between the in vitro and BRET assays. While the quantity of protein is known for in vitro measurements, it is not readily controlled in cell-based assays, which therefore only give a proxy for affinity. The data from Fig. 3D show that none of the hits were able to inhibit the LMO2-unmutated iDAb LMO2 interaction, supporting our dematuration process as key to the success of our screen. However, it is possible that the dematuration method could increase the probability of selecting false-positive binders by increasing the dissociation between the iDAb and its target.
Furthermore, screening using the BRET assay could also introduce false-positive binders through small differences in expression levels of the GFP2 fusion protein. This could partly explain why only 25% of the 34 primary hits were confirmed after retesting. Nevertheless, by performing the screening in duplicate plates, we aimed to reduce the probability of selecting false-positive binders. The cell-based Abd assay is a more versatile method, as it can be applied to any challenging protein that is difficult to express and/or purify in recombinant form, such as LMO2. Indeed, only a small quantity of nonpurified protein is necessary to detect an interaction with the BRET assay, whereas larger amounts of protein are usually needed for cell-free assays. Last, the intrinsic advantage of cell-based assays, in which a signal is generated by the direct interaction of the target with the iDAb, is that the selected compounds already have the characteristic of cell entry, which we show here with our LMO2 Abd-L series of compounds. This property is highly relevant for the use of small molecules as drugs.

LMO2 binding compounds derived from a cell-based BRET2 chemical library screen
Rather than being a pure drug development campaign per se, our study demonstrates a methodology for using antibodies in drug discovery, in this case against the chromosomal translocation protein LMO2, confirming the general applicability of our methods. We describe a cell-based, intracellular single domain antibody-guided small-molecule selection method that allows the direct identification of compounds that bind at the same region as the iDAb. We have illustrated this approach using the T cell oncogenic chromosomal translocation protein LMO2 (20) with an inhibitory anti-LMO2 iDAb (9,23). We used the iDAb, in the LMO2-iDAb BRET2 cell-based interaction assay, to screen a compound library (10,000 compounds) for compounds that bind to the T cell oncogenic protein LMO2. We obtained a number of initial hits, including one chemical series of which Abd-L5 to Abd-L12 were the progenitors. Our inability to express and purify recombinant LMO2 protein alone in E. coli in a quantity that would enable structural analysis precludes such a study at this stage. However, because our compounds can tolerate linkers and larger groups, as established by SAR analysis, it was possible to use the PAL technology, which requires the addition of two nonbinding substituents. We tested a benzophenone moiety as the PAL group and observed that analogs bearing this group on the right- or the left-hand side were still active, whereas the linker could only be located on the right-hand side. A compound (Abd-L26) was therefore prepared with the benzophenone photoreactive moiety on the piperazine and the biotin linked to the aniline in the para position (Fig. 5B). The cross-linking of Abd-L26 to the LMO2 protein confirmed binding to LMO2 in vitro, and this was inhibited by addition of the parental compound Abd-L9 (Fig. 5D). Notably, some contaminating bands appeared on the Western blots (Fig. 5D), as the scFv-LMO2 protein was only partially purified. In addition, scFv-LMO2 protein bound nonspecifically to the beads, and this may explain the minimal decrease of the LMO2 signal upon competition with Abd-L9 in the anti-LMO2 blot (Fig. 5E, lanes 3 and 4). These data suggest that the chemical series is an intracellular antibody surrogate that binds to LMO2 where the anti-LMO2 iDAb contacts LMO2.
The cell-based selection involved competition by the compounds for the interaction of LMO2 with a dematured iDAb (i.e., with a lower affinity), and these compounds do not influence the interaction of LMO2 with the unmutated iDAb (except Abd-L22, which may behave aberrantly at high concentrations), as would be expected from their micromolar relative IC50 values. By solving the structure of the LMO2/iDAb dimer, we showed that the iDAb modifies the LMO2 conformation, thereby impeding the interaction of partners such as TAL1 and E47 (23). We confirmed these data by BRET assay, where the iDAb LMO2 blocks the binding of these proteins to LMO2 (fig. S7D). However, with the Abd-L compounds selected here, we could not observe inhibition of the LMO2/TAL1-E47 interaction by BRET assay (fig. S7G), suggesting that the compounds do not substantially modify the conformation of the LMO2 protein. Therefore, in the future, our aim is to extend these compounds to achieve a binding geography like that seen for the anti-LMO2 iDAb, one that could produce the same stable LMO2 conformational change that the iDAb causes, with similar effects on the function of the LMO2 protein complex in T cell acute leukemia. Furthermore, our compounds could also be the starting point for the development of LMO2 PROTAC (proteolysis targeting chimera) degraders (35). Intracellular antibody fragments or other macromolecules can be used to investigate their effects on a target disease in relevant preclinical models (target validation) (9,10,15,36). Furthermore, recurrent chromosomal translocations are abundant in all tumor types (37) and produce intracellular proteins that function in various cellular processes such as transcription, where PPIs are critical. These are challenging to target directly with small molecules and have been considered to be undruggable or, at best, very hard drug targets. Thus, for the chromosomal translocation protein LMO2, we have moved from target discovery via chromosomal translocation junctions (38,39), to target validation with an scFv (40) and an iDAb (9,23), to drug discovery using this cell-based method. The strategy could be implemented with any other PPI of interest using similar cell-based Abd screening. Furthermore, our work reported here, and previously (19), demonstrates the use of antibodies to select chemical compounds as surrogates of the antibody binding site (once considered a holy grail of antibody biology). This has recently also been shown in the case of an anti-HIV antibody (41). This concept can be applied to any antibody, whether directed against extracellular, cell surface, or intracellular targets. In conclusion, our methodology provides at least three application features: antibody combining sites can be used for selecting chemical compounds; the method can be applied in cells to targets previously considered undruggable, such as transcription factors; and it can be applied to drug discovery against chromosomal translocation proteins previously considered difficult to target.

BRET2 titration curves and competition assays
For all BRET experiments (titration curves and competition assays), 650,000 HEK293T cells were seeded in each well of six-well plates. After 24 hours at 37°C, cells were transfected with a total of 1.6 μg of DNA mix, containing the donor + acceptor ± competitor plasmids, using Lipofectamine 2000 transfection reagent (Thermo Fisher Scientific).
For the BRET donor saturation assays, cells were transfected with 0.05 μg of donor (LMO2) and with an increased amount of acceptor plasmid (0.025, 0.05, 0.1, 0.25, 0.5, 0.75, and/or 1 μg of DNA) equalized to a total amount of 1.6 μg of DNA with an empty vector pEFcyto-myc. In dose-response competition experiments, competitors were transfected with the following amount of DNA: 0.1, 0.5, and 1 μg. In single-dose competition experiments, competitors were transfected with 0.1 μg of DNA. Cells were detached 24 hours later, washed with phosphate-buffered saline (PBS), and seeded in a white 96-well plate (clear bottom, PerkinElmer, catalog no. 6005181) in Opti-MEM without phenol red medium complemented with 4% FBS, and cells were incubated for an additional 20 to 24 hours at 37°C before the BRET assay reading. A detailed BRET protocol is provided elsewhere (42).

Cell treatment
Compounds were prepared in 100% DMSO at 10 mM. For BRET competition assays, cells were treated with the indicated compounds at concentration of 1 (or 5), 10, and 20 μM for 22 hours. For BRET-based dose-response experiments, cells were treated with compounds at concentration of 0.01, 0.1, 1, 4, 10, 25, and 50 μM for 22 hours. The compounds were diluted in the BRET medium [Opti-MEM without phenol red (Life Technologies)] supplemented with 4% FBS and with a final concentration of 0.2% DMSO.

BRET2 measurements
BRET2 signal was determined immediately after injection of coelenterazine 400a substrate (10 μM final concentration) to cells (Cayman Chemicals) using a CLARIOstar instrument (BMG Labtech) with a luminescence module. Total GFP2 fluorescence was detected with excitation and emission peaks set at 405 and 515 nm, respectively. Total RLuc8 luminescence was measured with the luminescence 400- to 700-nm wavelength filter. The BRET signal or BRET ratio corresponds to the light emitted by the GFP2 acceptor constructs (515 nm ± 30) upon addition of coelenterazine 400a divided by the light emitted by the RLuc8 donor constructs (410 nm ± 80). The background signal is subtracted from that BRET ratio using the donor-only negative control where only the RLuc8 fusion plasmid is transfected into the cells. The normalized BRET ratio is the BRET ratio normalized to a negative control (iDAb control or DMSO control) during a competition assay. Total GFP2 and RLuc8 signals were used as a proxy to ensure that similar protein expression between comparable probes was used in BRET experiments.

Western blot analysis
Cells were washed once with PBS and lysed in SDS-tris buffer [1% SDS and 10 mM tris-HCl (pH 7.4)] supplemented with protease inhibitors (Sigma-Aldrich) and phosphatase inhibitors (Thermo Fisher Scientific). Cell lysates were sonicated with a Branson Sonifier, and the protein concentrations were determined by using the Pierce BCA (bicinchoninic acid) Protein Assay Kit (Thermo Fisher Scientific). Equal amounts of protein (20 μg) were resolved on 12.5% SDS-polyacrylamide gel electrophoresis and subsequently transferred onto a polyvinylidene fluoride membrane (GE). The membrane was blocked with 10% nonfat milk (Sigma-Aldrich) in tris-buffered saline (TBS)-0.1% Tween 20 and incubated overnight with primary antibody at 4°C. After washing, the membrane was incubated with horseradish peroxidase (HRP)-conjugated secondary antibody for 1 hour at room temperature (RT; 22°C).
The membrane was washed with TBS-0.1% Tween and developed using Clarity Western ECL Substrate (Bio-Rad) and CL-XPosure films (Thermo Fisher Scientific) or the ChemiDoc XRS+ imaging system (Bio-Rad).

High-throughput chemical screening with LMO2-iDAb LMO2 mutant BRET biosensor
The screen was carried out in 384-well plate format. An in-house library of 10,720 compounds (comprising 6991 compounds from BioFocus and 3729 from ChemBridge) was in 96-well plate format. The library was compressed into 384-well plate format for the HTS purpose. The volume and quantities indicated are for 40 assay 384-well plates. The screen was carried out in duplicate at 10 μM. Two sessions of HTS, containing 5360 compounds each, were screened in 68 assay plates (34 assay plates in duplicate). Before starting, HEK293T cells were seeded into 2xT175 flasks. Three days later, the 2xT175 were split into 6xT175.
1) Day 1: Cell seeding. Cells were harvested from 6xT175 flasks at ~70% confluency. The cells were resuspended in 110 ml of complete DMEM, 120 × 10⁶ cells were inoculated into each of two Corning HYPERFlask M cell culture vessels (Corning, catalog no. 10030), and 560 ml of medium was added to fill one HYPERFlask.
2) Day 2: Cell transfection with pEF-LMO2-RLuc8 and pEF-GFP2-iDAb LMO2 dm3. For each HYPERFlask, 10 ml of Opti-MEM was added together with 19 μg of pEF-LMO2-RLuc8, 37 μg of pEF-GFP2-iDAb LMO2 dm3, and 244 μg of pEF-empty-cyto-myc plasmids. Seven hundred fifty microliters of Lipofectamine 2000 was added in 10 ml of Opti-MEM and mixed gently. The 10 ml of DNA dilution was added and incubated for 20 min. The DNA/Lipofectamine 2000 mix was added in 500 ml of complete DMEM, and the medium of the HYPERFlask had been removed. Last, the medium + transfection mix was carefully poured into the HYPERFlask without creating any bubbles, and the flask was filled with medium.
3) Day 3: Cell seeding in 384-well plates. The cells were harvested with 100 ml of trypsin that were added per HYPERFlask and incubated for 2 min at 37°C, and the trypsinized cells were transferred to a beaker containing 100 ml of complete DMEM. Each flask was washed once with 100 ml of complete DMEM and mixed gently but thoroughly to ensure single-cell suspensions (final volume for one flask: 300 ml). A total of 90 × 10⁶ cells were added per 250-ml Corning centrifuge tube (4 × 250-ml centrifuge tubes were used, which was 360 × 10⁶ transfected cells in total), and the cells were centrifuged at 220g for 5 min at RT. Each cell pellet (4 in total) was gently resuspended in 200 ml of Opti-MEM without red phenol + 4% FBS + 1% PS (hereafter called BRET medium) to a final concentration of 0.45 × 10⁶ cells/ml. The cells were seeded in white 384-well plates (clear bottom, PerkinElmer, catalog no. 6007480) with a PerkinElmer Janus liquid handling workstation housed in a category 2 enclosure (45 μl per well; 20,000 cells). A blank plate was first used to remove any air bubble in the liquid handling workstation.
4) Day 3: Library dilution. Stock solutions (100 μM) were prepared for each compound in the library (the initial concentration of the library was 10 mM). One hundred fifty nanoliters of each compound (10 mM) was added using an Echo Acoustic Dispenser (Labcyte) into 15 μl of BRET medium, giving a final concentration of 100 μM.
5) Day 3: Compounds addition. A 1% DMSO solution was prepared in BRET medium. Five microliters of 1% DMSO solution was dispensed in the columns 1 and 2 and 23 and 24 as negative controls.
Compounds were added to cells in 5 μl (100 μM) per well using the PerkinElmer Janus liquid handling workstation (final concentration of 10 μM, 0.1% DMSO), and the plates were incubated for 20 hours.
6) Day 4: Plate reading. A PHERAstar FSX plate reader (BMG Labtech) equipped with a BRET2 optic module was used to read the plates. The GFP2 signal of each plate was first measured to assess the relative cell number in each well. After the GFP2 reading, the bottom of each plate was covered with white tape. Eighty milliliters of 100 μM BRET substrate (i.e., coelenterazine 400a, Cayman Chemicals) was prepared by dissolving 3 mg of coelenterazine 400a in 32 ml of 100% ethanol, and the volume was brought to 80 ml by adding 48 ml of BRET medium. The BRET reading was carried out by adding 5.5 μl of coelenterazine 400a (final concentration of 10 μM) using injectors and reading the BRET signal of each well. The reading time for one 384-well plate was about 8 min; therefore, reading 34 plates takes ~4.5 hours.

Purification of scFv-LMO2 for PAL analysis
For coexpression of recombinant LMO2 and anti-LMO2 scFv, the scFv was cloned into an existing bicistronic expression vector [pRK-His-TEV-VH576-LMO2; (23)]. DNA encoding the scFv was amplified by PCR and cloned into the pRK vector to replace the VH576 using Nco I and Eco RI restriction sites. Plasmid DNA was transformed into E. coli C41 (DE3) cells for protein coexpression. A single colony was used to inoculate 50 ml of LB medium containing ampicillin (100 μg ml⁻¹), which was grown overnight at 37°C with shaking at 225 rpm. The overnight seed culture was diluted 1:100 in 8 × 1 liter of LB containing ampicillin (100 μg ml⁻¹). The cultures were grown at 37°C with shaking at 225 rpm until an OD₆₀₀ (optical density at 600 nm) of 0.6 was reached. ZnSO₄ was added before induction to a final concentration of 0.1 mM. Protein expression was induced by the addition of 0.5 mM isopropyl 1-thio-β-D-galactopyranoside, and the cells were incubated overnight at 16°C with shaking at 225 rpm. Cells were harvested by centrifugation at 6000 rpm for 20 min at 4°C. Cell pellets were resuspended in lysis buffer [20 mM tris (pH 8.0), 250 mM NaCl, 20 mM imidazole, 0.1 mM ZnSO₄, 5 mM 2-mercaptoethanol, and 5% glycerol] containing EDTA-free protease inhibitor cocktail tablets (Roche, Germany) before lysis at 25 kpsi at 4°C using a cell disruptor system (Constant Systems Ltd., UK). The cell lysate was incubated with deoxyribonuclease I and 2 mM MgCl₂ for 20 min at RT before being clarified by centrifugation at 22,000 rpm for 1 hour at 4°C. LMO2 and anti-LMO2 scFv were copurified on a 5-ml HisTrap HP column (GE Healthcare, UK) using a 50-ml imidazole gradient from 20 to 300 mM. The protein was concentrated to 1.5 ml and purified further by gel filtration using a HiLoad 16/600 Superdex 75 column (GE Healthcare, UK) in 20 mM tris (pH 8.0), 250 mM NaCl, and 1 mM dithiothreitol. The copurification of LMO2 and anti-LMO2 scFv was verified by standard Western blotting using anti-LMO2 (R&D Systems, AF2726) and anti-His-HRP (Sigma-Aldrich, A7058) antibodies.

PAL pulldown
Abd-L26 (20 μM), with or without Abd-L9 (100 μM) as competitor, was added in a final volume of 400 μl of PBS with 40 μg of the purified protein of interest (scFv-LMO2). The same samples were prepared for the no-UV controls. The samples were incubated for 25 min at RT. The samples to be cross-linked were placed on ice under the UV lamp for 1 hour of cross-linking. The no-UV controls were kept on ice.
During the 1 hour of cross-linking, the agarose monomeric avidin beads (catalog no. 20228, Thermo Fisher Scientific) were washed twice with PBS. After the cross-linking, 20 μl of washed beads was added to all the samples (cross-linked and non-cross-linked) and incubated for 2 hours at 4°C on a roller. Two hours later, the beads were washed three times with 400 μl of PBS. The samples were lastly denatured with 50 μl of 2× loading buffer with BME (β-mercaptoethanol) added directly on the beads (and boiled at 100°C for 5 min) and loaded for Western blot analysis.

Caco-2 assay
Caco-2 apparent permeability (Papp) was determined in the Caco-2 human colon carcinoma cell line as described (43). Cells were maintained (DMEM with 10% FBS, penicillin, and streptomycin) in a humidified atmosphere with 5% CO₂/95% air for 10 days. Cells were plated out onto a cell culture assembly plate (Millipore, UK), and monolayer confluency was checked using a TEER (transepithelial/endothelial electrical resistance) electrode before the assay. Media was washed off and replaced with Hanks' balanced salt solution (HBSS) buffer (pH 7.4) containing compound (10 μM, 1% DMSO) in the appropriate apical and basal donor wells. HBSS buffer alone was placed in acceptor wells. In particular instances, a specific P-gp inhibitor, LY335979 (5 μM; labeled "inhibitor" in the compound column), was added to the HBSS to confirm that the cells express functional efflux transporter proteins. The Caco-2 plate was incubated for 2 hours at 37°C. Samples from the apical (A) and basolateral (B) chambers were analyzed using a Waters (Milford, MA, US) TQ-S liquid chromatography-tandem mass spectrometry (LC-MS/MS) system. The cell permeability properties of Abd-L compounds were compared to low (nadolol) and high (antipyrine) permeability compounds and a compound with high export (indinavir). Apparent permeability (Papp) was determined as Papp = (dCr/dt × Vr)/(A × Co), where dCr/dt is the rate of change of compound concentration in the receiver chamber, Vr is the volume of the receiver chamber, A is the surface area of the monolayer, and Co is the initial compound concentration in the donor chamber.

PAMPA assay
The PAMPA was used to determine compound permeability by passive diffusion. The assay used an artificial membrane consisting of 2% phosphatidyl choline in dodecane (Sigma-Aldrich, Dorset, UK). The donor plate was a MultiScreen-IP Plate with a 0.45-μm hydrophobic Immobilon-P membrane (Millipore, UK), and the acceptor plate was a MultiScreen 96-well Transport Receiver Plate (Millipore, UK). The permeability was measured at three different pH levels (pH 5, 6.5, and 7.4) in buffer containing 1% bovine serum albumin (Sigma-Aldrich, Dorset, UK). A 10 mM DMSO stock solution of test compound was used to prepare the 10 μM PAMPA donor solutions and calibration curves in each of the three buffers. Six microliters of the membrane solution was added to each well of the donor plate. Buffer donor solutions (200 μl) were added to the appropriate wells of the PAMPA donor plate. Three hundred microliters per well of blank PBS (pH 7.4) was added to the PAMPA acceptor plate. The donor and acceptor plates were then sandwiched together, covered with a lid, and incubated at 30°C in a humid environment for 16 hours. After the incubation period, the plates were removed from the incubator and the sandwich was dismantled. Samples were then transferred into a fresh plate and centrifuged. All sample supernatants were diluted and analyzed using a Waters (Milford, MA, US) TQ-S LC-MS/MS system.
Permeability values (cm/s) were calculated as P = −ln[1 − CA(t)/Ceq] × (VD × VA)/[(VD + VA) × area × t], where VD is the volume of the donor, VA is the volume of the acceptor, area is the surface area of the membrane × porosity, CA(t) is the compound concentration in the acceptor well at time t, and Ceq is the theoretical equilibrium concentration.

Chemical synthesis
All solvents and reagents were used as supplied (analytical or high-performance liquid chromatography grade) without prior purification. Water was purified by an Elix UV-10 system. Thin-layer chromatography was performed on aluminum plates coated with 60 F254 silica. Plates were visualized using UV light (254 nm) or 1% aq. KMnO₄. Flash column chromatography was performed on Kieselgel 60M silica in a glass column. Nuclear magnetic resonance spectra were recorded on Bruker Avance spectrometers (AVII400, AVIII 400, AVIIIHD 600, or AVIII 700) in the deuterated solvent stated. The field was locked by external referencing to the relevant deuteron resonance. Chemical shifts (δ) are reported in parts per million (ppm) referenced to the solvent peak. ¹H spectra are reported to two decimal places, ¹³C spectra are reported to one decimal place, and coupling constants (J) are quoted in hertz (reported to one decimal place). The multiplicity of each signal is indicated by s (singlet), br. s (broad singlet), d (doublet), t (triplet), q (quartet), dd (doublet of doublets), td (triplet of doublets), qt (quartet of triplets), or m (multiplet). Low-resolution mass spectra were recorded on an Agilent 6120 spectrometer from solutions of MeOH. Accurate mass measurements were run on either a Bruker MicroTOF internally calibrated with polyalanine or a Micromass GCT instrument fitted with a Scientific Glass Instruments BPX5 column (15 m by 0.25 mm) using amyl acetate as a lock mass by the Mass Spectrometry Department of the Chemistry Research Laboratory, University of Oxford, UK; mass/charge ratio values are reported in daltons. The detailed chemical synthesis protocols and ¹H and ¹³C spectra for the final Abd compounds (Abd-L5 to -L27) are shown in the Supplementary Materials.

SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/7/15/eabg1950/DC1
View/request a protocol for this paper from Bio-protocol.
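For completeness, the Caco-2 and PAMPA permeability calculations described above can be expressed as a short script. All numerical inputs below are illustrative placeholders (the donor and acceptor volumes simply echo the 200-μl and 300-μl values given in the PAMPA protocol); only the formulas follow the relations defined in the two assay sections, and the efflux ratio is Papp(B→A) divided by Papp(A→B).

```python
# Minimal sketch of the permeability calculations from the Caco-2 and PAMPA sections above.
# All numbers are illustrative placeholders; only the formulas mirror the text.
import math

def caco2_papp(dCr_dt, v_receiver_cm3, area_cm2, c0_donor):
    """Apparent permeability (cm/s): rate of receiver accumulation scaled by
    receiver volume, monolayer area, and initial donor concentration."""
    return (dCr_dt * v_receiver_cm3) / (area_cm2 * c0_donor)

def pampa_pe(c_acceptor, c_equilibrium, v_donor_cm3, v_acceptor_cm3, area_cm2, t_seconds):
    """PAMPA permeability (cm/s) from the acceptor concentration at time t,
    relative to the theoretical equilibrium concentration."""
    geometry = (v_donor_cm3 * v_acceptor_cm3) / ((v_donor_cm3 + v_acceptor_cm3) * area_cm2 * t_seconds)
    return -math.log(1.0 - c_acceptor / c_equilibrium) * geometry

# Placeholder example: 2-hour Caco-2 incubation and 16-hour PAMPA incubation
papp_ab = caco2_papp(dCr_dt=1.4e-4, v_receiver_cm3=0.8, area_cm2=0.11, c0_donor=10.0)   # A->B
papp_ba = caco2_papp(dCr_dt=2.0e-4, v_receiver_cm3=0.8, area_cm2=0.11, c0_donor=10.0)   # B->A
efflux_ratio = papp_ba / papp_ab
pe = pampa_pe(c_acceptor=1.2, c_equilibrium=4.0, v_donor_cm3=0.2,
              v_acceptor_cm3=0.3, area_cm2=0.3, t_seconds=16 * 3600)
print(f"Papp(A->B) ~ {papp_ab:.2e} cm/s, efflux ratio ~ {efflux_ratio:.1f}, PAMPA Pe ~ {pe:.2e} cm/s")
```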
Prevalence and Factors Associated with Immediate Postnatal Care Utilization in Ethiopia: Analysis of Ethiopian Demographic Health Survey 2016 Background Maternal mortality is unacceptably high in Ethiopia. Most maternal complications are preventable using immediate postnatal care. However, it is not utilized effectively. Hence, this study can assist in formulation of national policies to increase use of immediate postnatal care in Ethiopia. Objective To assess the prevalence and factors associated with immediate postnatal care utilization in Ethiopia, in 2016. Methods Secondary data analysis was done on Ethiopian Demographic Health Survey 2016 data, in a stratified, two-stage, and cluster sampling study. This analysis was restricted to postnatal women who had given birth at least once in the five years before the survey. Chi-square test of statistics was performed to identify factors associated with immediate postnatal care service uptake. Bi-variable and multi-variable logistic regression analyses were carried out to identify factors associated with immediate postnatal care utilization. Odds ratio with 95% confidence level was computed and P-value < 0.05 was considered as statistically significant in the multivariable logistic regression. Results The overall level of immediate postnatal care service utilization was 6.3% in Ethiopia. Urban setting (AOR=2.3, 95% CI, 1.9, 2.9), higher education status (AOR=1.6, 95% CI, 1.3, 2.0), secondary education status (AOR=2.6, 95% CI, 1.9, 3.6), primary education status (AOR=3.1, 95% CI 2.0, 4.6), always listening to the radio (AOR=2.4, 95% CI, 1.7, 3.2), being in a richer wealth quintile (AOR=4.2, 95% CI, 3.0, 5.8), being in a middle wealth quintile (AOR=2.8, 95% CI, 2.0, 3.9), being in a poorer wealth quintile (AOR=1.9, 95% CI, 1.3, 2.8), having fewer than six children (AOR=1.3, 95% CI, 1.1, 2.0), and being told about pregnancy complications (AOR=2.2, 95%CI, 1.7, 2.7) were factors positively associated with utlilization of immediate postnatal care. Conclusion Prevalence of immediate postnatal care utilization is still low in Ethiopia. Awareness should be created about immediate postnatal care utilization through the efforts of health extension workers. In addition, the Ethiopian government should design strategies to enhance the socio-economic status of women. Beside these, information about postnatal care and its benefit is critical and can be transmitted through mass media. deathsoccur in sub-Saharan African countries. 2 Previous studies stated that 50% of maternal deaths and 40% of neonatal deaths occur within 24 hours after childbirth. 3,4 In Ethiopia, the maternal mortality ratio was 401 per 100,000 live births in 2017. The incidence of these deaths decreases with increasing time from birth. 5,6 The World Health Organization (WHO) recommends four standard postnatal care visits: 1, 3, 7-14 and 42 days after birth. 7 Specifically, the Ethiopian Federal Ministry of Health recommends four postnatal care visits at 6-24 hours, 3 days, 6 days and 42 days. 8 Based on WHO data, the first 24 hours after birth is the most critical time to diagnose complications and provide suitable interventions. 9 Postnatal care utilization could prevent the death of 60,000 newborns every year. In Ethiopia, it should be possible to reduce neonatal mortality by 10-27% through effective postnatal care utilization. 10 Regardless of its benefit, mothers generally do not visit health institutions following childbirth in Africa. 
Consequently, the coverage of postnatal care service is one of the lowest among reproductive and child health services. 11 For example, the Ethiopia Demographic and Health Survey 2005 reported that 95% of women did not receive postnatal care in the first 2 days after childbirth. 12 Moreover, postnatal care utilization was only 8% in Ethiopia in 2011. 13 In addition, only 34.3% of women received postnatal care at 6 weeks after delivery. 14 Previous studies have indicated that multi-layered and interlinked factors affect postnatal care utilization. For example, institutional delivery utilization, marital status, wealth quintile and age were reported as factors associated with postnatal care service utilization. 15 Place of residence, ethnicity, pregnancy intention, antenatal care visits and place of delivery were reported factors associated with variable postnatal care uptake. 16 There is also evidence that maternal occupation and pregnancy intention were important predictors of postnatal care utilization. 17 Besides, maternal education, education of partners, health facility delivery, and a skilled delivery attendant at least one postnatal care visit were among valuable factors associated with postnatal care utilization. 18 The Ethiopian government target is to reduce the maternal mortality ratio to 70 deaths per 100,000 live births by 2030. 19 Hence, Ethiopia has applied many different strategies to reduce maternal mortality. Comprehensive postnatal health packages through the health extension program is one of the key approaches among these strategies. Postnatal care is the most useful, but the most neglected maternal health service component to improve the survival of women and their babies. Although previous studies have focused on antenatal and delivery service utilization, information on postnatal care is scarce. There is a paucity of national research on immediate postnatal care. The main aim of this study is to assess the prevalence of immediate postnatal care utilization and associated factors in Ethiopia, in 2016. Study Area and Period Ethiopia is divided into nine regional states and two city administrations. Each regional state is further divided into zones. Zones are again divided into administrative units called districts. Districts are further subdivided into the lowest administrative units called 'Kebeles'. A primary health-care package includes preventive, promoting, and curative services. Postnatal care is one of the most valuable aspects of primary health care. In Ethiopia, health sector development Plan-I has introduced a four-tier health system for health service delivery. It consists of the following institutions in hierarchy: One health center and five satellite health posts, district hospital, zonal hospital, and specialized hospitals. Postnatal care services are primarily offered in health centers free of charge for pregnant women. According to Federal Ministry of Health recommendations, postnatal care should be conducted by health extension workers through home-to-home visits at 24 hours, 3 days and 7 days after birth. 20 The data collection period of the Ethiopian Demographic Health Survey was from January 18 to June 27, 2016. Study Variables Immediate postnatal care service utilization was the outcome variable for this study. It is a binary outcome variable. Study participants were asked "Whether they utilized postnatal care or not at least one time in the first 24 hours after birth in the most recent birth". They answered either 'Yes' or "No". 
These answers were coded as "1" and "0", respectively, during the analysis of this study. The covariates included in this study were grouped into sociodemographic and reproductive factors. Sociodemographic factors were variables such as age, place of residence, educational status, husband's educational status, religion, ethnicity, marital status and wealth quintile of women, and total family size. Reproductive variables included in this study were age at first marriage, total number of children, antenatal care visits, place of delivery, current pregnancy status, and ownership of a mobile phone. The authors classified and categorized some variables to make them comparable with other studies. For example, age of participants was a continuous variable expressed in completed years. It was classified into three categories: 15-24, 25-34, and 35-49 years. Furthermore, ethnicity was classified into five classes based on their proportion from largest to smallest in Ethiopia. These included: Amhara, Oromo, Somali, Sidama, and Others. The variable total family size was categorized into two parts: 1-5 persons and 6 or more persons. In addition, age at first pregnancy was classified into two classes based on the Ethiopian legal age for marriage: less than 18 years, and 18 years and above.

Operational Definition
Immediate postnatal care is when participants utilize postnatal care at least once within 24 hours after discharge, whether they delivered in a health institution or at home. The wealth index was constructed using principal components analysis on household asset data. Individuals were classified into five wealth quintiles (poorest, poorer, medium, richer, and richest). Variables included in the wealth index were ownership of selected household assets, size of agricultural land, quantity of livestock, and materials used for house construction. 21

Sample Size and Sampling Procedures
The 2007 population and housing census, which was conducted by the Central Statistical Agency, was the source of the sampling frame for EDHS 2016. Samples were selected using a stratified, two-stage cluster design. Each Kebele (the smallest administrative unit of Ethiopia) was subdivided into enumeration areas (EAs) in the 2007 census; these were convenient for the implementation of the census. Enumeration areas were the sampling units in the first stage, with 181 households in each EA. A stratified, two-stage cluster sampling technique was applied in EDHS 2016. First, 645 EAs were selected and allocated proportionally to urban and rural areas based on their total number of EAs in Ethiopia. Consequently, 202 were selected from urban areas, and 443 from rural areas. Second, a fixed number of 28 households per cluster/EA was selected using systematic random sampling. Hence, a total of 18,008 households were selected in the country. Of these, 17,067 households were occupied by women of reproductive age; however, effective interviews were conducted in 16,650 households, in which 16,583 eligible reproductive-age women were identified. Of these, 15,683 study participants gave a full response, resulting in a response rate of 95%. A total of 7590 study participants who had given birth at least once in the five years before the survey were selected and analyzed for the current study. 22
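As a minimal sketch of the recoding just described, the categorizations could be implemented as follows; the subsequent sections describe how the resulting variables were weighted and analyzed. The input file and the raw column names used here are illustrative assumptions about an extract of the EDHS 2016 women's file, not variable names reported by the survey.

```python
# Minimal sketch of the variable recoding described above, using pandas instead of SPSS.
# File name and raw columns (age_years, ethnicity, family_size, age_first_pregnancy,
# pnc_within_24h) are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("edhs2016_women_recode.csv")   # hypothetical extract of the women's file

# Outcome: immediate postnatal care within 24 hours, coded 1 = Yes, 0 = No
df["pnc24h"] = (df["pnc_within_24h"] == "yes").astype(int)

# Age in completed years -> three categories
df["age_group"] = pd.cut(df["age_years"], bins=[14, 24, 34, 49],
                         labels=["15-24", "25-34", "35-49"])

# Ethnicity -> five classes, largest to smallest, with remaining groups as "Others"
main_groups = ["Amhara", "Oromo", "Somali", "Sidama"]
df["ethnicity5"] = df["ethnicity"].where(df["ethnicity"].isin(main_groups), "Others")

# Total family size -> 1-5 persons versus 6 or more persons
df["family_size_cat"] = (df["family_size"] >= 6).map({True: "6+", False: "1-5"})

# Age at first pregnancy -> below versus at/above the legal marriage age of 18
df["first_pregnancy_cat"] = (df["age_first_pregnancy"] < 18).map({True: "<18", False: "18+"})
```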
Study Design and Population
Secondary data analysis was conducted on the Ethiopian Demographic Health Survey 2016. The EDHS is a community-based survey which is conducted at five-year intervals at a national level. Women who gave birth at least once in the five years before the survey were selected for this secondary analysis. For those respondents who gave birth more than once in the past five years, the most recent birth was taken for the current analysis. Therefore, all women who had their most recent birth in the five years before the EDHS 2016 data collection and who had come for a postnatal checkup within 24 hours were the study population of this study. All women who gave birth at least once from 2011-2016 in Ethiopia were the source population of this study.

Data Collection
Fieldwork was carried out by 33 field teams, each consisting of 1 team supervisor, 1 field editor, 3 female interviewers and 1 male interviewer. In addition, there were 14 quality controllers for EDHS 2016. The pretest was conducted from October 1-28, 2015, in Bishoftu. The Central Statistical Agency recruited 294 people for the main fieldwork, and they were trained to serve as team supervisors, field editors, interviewers, secondary editors, and reserve interviewers. The DHS Program's standard Demographic and Health Survey questionnaires were adapted to reflect the population and health issues relevant to Ethiopia. Five questionnaires were used for the 2016 EDHS: household questionnaire, woman's questionnaire, man's questionnaire, biomarker questionnaire, and health facility questionnaire. Further details of the sampling, questionnaires, and procedures can be found in the publicly available survey documentation. The woman's questionnaire was used to collect information from all eligible women aged 15-49. This information includes: background characteristics, family planning, antenatal, delivery, and postnatal care, breastfeeding, sexually transmitted infections, female genital cutting, fistula, and violence against women.

Data Analysis
Data were analyzed using SPSS version 22 software. We first selected, re-categorized, and coded the important variables for this analysis and made them comparable with studies from other countries. We then followed a series of steps. In step 1, we applied sample weighting to compensate for the unequal probability of selection among geographic strata; ideally, both bias and variance should be minimized in a complex survey. The weighting variable was created by dividing the individual woman variable (V005) by 1,000,000. Hence, the prevalence of immediate postnatal care utilization was calculated after weighting the data. Stratification and clustering were used to compute standard errors. In step 2, we created a plan file using the three variables needed to set up complex samples: the primary sampling unit (v021), the sample strata (v022), and the weighting variable created in step 1. In step 3, we fixed sampling with replacement as the estimator assumption. Finally, we analyzed the plan file created in these three steps to identify factors associated with immediate postnatal care utilization. A chi-square test was performed to observe any association between the independent variables and the outcome variable. First, we performed binary logistic regression analysis to identify variables associated with immediate postnatal care service utilization.
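As a minimal illustration of these weighting and modelling steps, the sketch below reproduces them in Python rather than SPSS. It assumes the recoded data frame from the previous sketch, uses illustrative predictor column names, and approximates the weighted fit with frequency weights; it does not reproduce the design-based standard errors obtained from the complex samples plan file (which would also use v021 and v022).

```python
# Minimal sketch of the weighting, weighted prevalence, and regression steps described above.
# Assumes the recoded data frame "df" from the previous sketch; predictor names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Step 1: individual sampling weight = V005 / 1,000,000
df["wt"] = df["v005"] / 1_000_000

# Weighted prevalence of immediate postnatal care
prevalence = np.average(df["pnc24h"], weights=df["wt"])
print(f"Weighted prevalence of immediate PNC: {prevalence:.1%}")

# Bi-variable screen: keep candidate variables with any category at P <= 0.05
candidates = ["residence", "education", "wealth_quintile", "radio",
              "parity_cat", "told_complications"]
selected = []
for var in candidates:
    fit = smf.glm(f"pnc24h ~ C({var})", data=df,
                  family=sm.families.Binomial(), freq_weights=df["wt"]).fit()
    if (fit.pvalues.drop("Intercept") <= 0.05).any():
        selected.append(var)

# Multivariable model: adjusted odds ratios with 95% confidence intervals
formula = "pnc24h ~ " + " + ".join(f"C({v})" for v in selected)
full = smf.glm(formula, data=df, family=sm.families.Binomial(), freq_weights=df["wt"]).fit()
aor_table = pd.concat([np.exp(full.params).rename("AOR"),
                       np.exp(full.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})],
                      axis=1)
print(aor_table.round(2))
```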
In the bi-variable logistic regression analysis, we took variables with a P-value less than or equal to 0.05 into the multivariable logistic regression analysis to control for confounders. Then, variables that had a significant association with immediate postnatal care utilization were identified based on the Adjusted Odds Ratio (AOR) and a P-value of less than 0.05 in the multivariable logistic regression model. Descriptive statistics are presented using text and tables.

Socio-Demographic Characteristics
The majority (3826; 50.4%) of the study participants were in the age range 25-34 years. Nearly all (2884; 97.5%) participants were married. The majority (2114; 71.4%) of the study participants had no education. Of the total respondents, more than half (1741; 58.9%) had no work. The majority of the study participants (2734; 92.4%) were living in a rural area (Table 1). Most of the respondents (6555; 91.9%) were married at the age of 15-24 years.

Prevalence of Immediate Postnatal Care
Data from a total of 7590 participants were extracted and analyzed. The response rate of this study was found to be 100%. The prevalence of immediate postnatal care utilization was 6.3% (478 women) in this study.

Cross Tabulation of Factors Associated with Immediate Postnatal Care Utilization
As shown in the cross-tabulation results, eight variables were associated with immediate postnatal care utilization using chi-square test statistics. Variables with a P-value of less than 0.05 were reported to have a significant association with the outcome variable and were taken into the bi-variable logistic regression analysis. All of these eight variables also showed a significant association in the bi-variable logistic regression. A P-value of less than 0.25 was considered as the cutoff point for significantly associated variables in the bi-variable logistic regression analysis (Table 3).

Factors Associated with Immediate Postnatal Care
There were eight factors significantly associated with immediate postnatal care utilization in this study, but only six factors were identified as being statistically significant in the multivariable logistic regression analysis. Women who lived in urban areas were 2.3 times more likely to utilize postpartum care than their counterparts (AOR=2.3, 95% CI, 1.9, 2.9). Compared with non-educated women, women with higher education status were 1.6 times more likely to use the postnatal care service (AOR=1.6, 95% CI, 1.3, 2.0). Women with secondary education status were 2.6 times more likely (AOR=2.6, 95% CI, 1.9, 3.6), and women with primary education were 3.1 times more likely (AOR=3.1, 95% CI, 2.0, 4.6), to utilize postnatal care than uneducated women. Women who always listen to the radio were 2.4 times more likely than their counterparts (AOR=2.4, 95% CI, 1.7, 3.2) to utilize the postnatal care service. Women in the richer wealth quintile were 4.2 times (AOR=4.2, 95% CI, 3.0, 5.8), women in the middle wealth quintile were 2.8 times (AOR=2.8, 95% CI, 2.0, 3.99), and women in the poorer wealth quintile were 1.9 times (AOR=1.9, 95% CI, 1.3, 2.8) more likely than the poorest women to utilize postnatal care. Women who had fewer than six children were 1.3 times more likely to utilize postnatal care than their counterparts (AOR=1.3, 95% CI, 1.1, 2.0). This analysis also reveals that women who were told about pregnancy complications were 2.2 times more likely to utilize immediate postnatal care than their counterparts (AOR=2.2, 95% CI, 1.7, 2.7) (Table 4).

Discussion
We have tried to assess immediate postnatal care utilization in Ethiopia using data from the Ethiopian Demographic Health Survey 2016. Although previous studies were conducted in Ethiopia, they failed to address immediate postnatal care uptake at a national level.
Most maternal deaths that occur are due to heavy bleeding within 24 hours of childbirth. Hence, the immediate postnatal period is a critical time in the lives of women and their children. Furthermore, this study had a number of unique characteristics: it captured the proportion of immediate newborn care at the community level, the EDHS data use a standard measurement tool, and the study utilized an adequate sample size. The prevalence of immediate postnatal care utilization was found to be 6.3% in this study. This finding is lower than those of studies conducted in some other African countries: Tanzania (10.4%), Rwanda (12.8%), 15 and Nigeria (37%). 23 The possible reason for this difference might be the low level of awareness of immediate postnatal care availability among women in this study. 24 It might also be due to widespread cultural and spiritual taboos and misinformed beliefs in Ethiopia, for instance, that postpartum women should not go out of the home alone as they could be affected by evil spirits. Previous research showed that women might not utilize postnatal care due to social and traditional perceptions. 25,26 Lastly, the socio-demographic profile of the study participants in Ethiopia and that of the participants in the countries mentioned above may not be fully comparable. In this study, postnatal care utilization is highly associated with socio-demographic and reproductive factors. These were: educational status, place of residence, listening to the radio, wealth quintile, number of children, and being told about pregnancy complications. Specifically, urban women were more likely to utilize immediate postnatal care than their counterparts. This finding is consistent with a study conducted in Loma district, South Ethiopia. 27 This might be due to the fact that study participants in urban areas have more access to information about the benefits of immediate postnatal care service utilization than their counterparts. A previous study also showed that the extent of women's awareness about postnatal care availability determined postnatal care service uptake. 28 Moreover, urban dwellers could access immediate postnatal care services more easily than their counterparts. In addition, cultural malpractice and misconceptions are more prevalent among rural communities, which can hinder postnatal care utilization. More than 80% of Ethiopian women are believed to live in rural areas, from which most of the study participants were drawn. These women are affected by the several factors mentioned above and therefore may not use immediate postnatal care. Most previous studies were conducted in health institutions and relatively urban areas, which cannot capture reliable estimates of the utilization of immediate postnatal care. Women with primary education status had higher odds of immediate postnatal care utilization than non-educated women, and women with secondary education status were more likely to utilize immediate postnatal care than non-educated women. Women with higher educational status were far more likely to use an immediate postnatal care service than non-educated women. This finding is in line with the findings of studies conducted in Ethiopia, Indonesia, Uganda, and India. 29-31 This consistency could be explained by the fact that education gives women skills in informed decision making, which in turn increases their health-seeking behavior. 32,33 Furthermore, education can give economic independence and political participation through which women can attain gender equality.
34 As overall women's development is improved, they start to use health services including postnatal care. Furthermore, education is the key for health service utilization through reading of health messages. Most Ethiopian women who live in rural areas marry young, and they are uneducated. Women who listened to the radio at least once per week utilized immediate postnatal care services more than women who did not listen at all. This result is consistent with studies in Adwa, Southern Ethiopia, Jabitena Amhara, Kenya and Nepal. 18,[35][36][37][38] It is clear that listening to the radio can increase the chance of getting information about immediate postnatal care utilization. So, study participants can anticipate health risks and benefits of having immediate postnatal care in the first 24 hours. 39,40 This could indicate media had little role on the improvement of immediate postnatal care. Specifically, the radio had higher impact on health messages; however, the radio had little or no airtime for messages about danger signs after delivery, thus leaving women unaware and with less uptake of the service. Women in the richer wealth quintile were more likely to utilize immediate postnatal care than women in the poorest wealth quintile. Women in the middle wealth quintile were more likely to utilize immediate postnatal care than women in the poorest wealth quintile, and women in the poorer wealth quintile were more likely to use immediate postnatal care than women in the poorest wealth quintile. This finding is consistent with a study conducted in Rwanda. 15 The expected explanation could be that wealth is necessary for direct and indirect costs related with immediate postnatal care utilization, and to have different assets as sources of information. Previous evidence showed that low wealth in a household leads to low maternal health service utilization. 41 Women who have six or fewer children were more likely to utilize an immediate postnatal care service than their counterparts. This was in line with another study done in Ethiopia. 42 The possible reason might be that women with fewer children had little experience about pregnancy and childbirth. So, they had less confidence in their health status which in turn increases immediate postnatal care utilization. Moreover, women who had few children could get enough income and time to care for their babies than their counterparts. This is supported by the findings of previous studies. 43,44 Study participants who were told about pregnancy complications used immediate postnatal care services more than their counterparts. This study finding is in line with a previous study done in Goba Woreda, in Ethiopia. 45 The possible explanation might be due to the fact that awareness of maternal complications is an important factor in motivating women and their families to attend a health-care service at the earliest time. Limitation and Strength The current study has a number of strengths. We used national survey data and a relatively large sample size with a high response rate (95%). It utilized internationally validated and nationally adapted surveys. Therefore, the current findings are generalizable to the entire country. This is more likely to yield accurate estimates. In addition, this is the first study to report the prevalence and factors associated with utilization of immediate postnatal care in Ethiopia. Nevertheless, the current study has several limitations. As the survey asked information retrospectively, this may have yielded some recall bias. 
Nevertheless, this bias is not considered problematic since this study included only women giving birth within the five years preceding the survey. Moreover, this secondary data analysis of the Ethiopian Demographic and Health Survey did not provide some potentially relevant variables.

Conclusion and Public Health Implication
The overall prevalence of immediate postnatal care is very low in this study. Living in rural areas, being uneducated, being in the lowest wealth quintile, living in large families, not listening to the radio, and lack of information about pregnancy complications were factors affecting immediate postnatal care utilization in Ethiopia in 2016. Stakeholders should intensify the dissemination of information about immediate postnatal care utilization. The government should work to improve the socio-economic status of women. Moreover, family planning programs should work to provide accessible and high-quality family planning services. Finally, health extension workers should strongly enhance awareness among rural women in Ethiopia and link women with health institutions in case of postnatal complications.

Abbreviations
WHO, World Health Organization; PNC, postnatal care; CI, confidence interval; DHS, demographic and health surveys; COR, crude odds ratio; SNNPRs, Southern Nation Nationality and People Regional State; EAs, enumeration areas; CSA, Central Statistical Agency.

Data Sharing Statement
Permission to access the database was obtained. The database is available at https://dhsprogram.com.

Ethics Approval and Consent to Participate
We registered and requested the data from the DHS online archive and received approval to download the identified DHS data files.