254922722 | pes2o/s2orc | v3-fos-license | Effects of Experienced Discrimination in Pediatric Sickle Cell Disease: Caregiver and Provider Perspectives
For Black children with sickle cell disease (SCD) and their families, high disease stigmatization and pervasive racism increase susceptibility to discrimination in healthcare settings. Childhood experiences of discrimination can result in medical nonadherence, mistrust of healthcare providers, and poorer health outcomes across the lifespan. Caregivers and medical providers are essential to childhood SCD management and are therefore well-positioned to provide insight into discrimination in the context of pediatric SCD. This mixed-methods study sought caregivers’ and providers’ perspectives on processes underlying discrimination and potential solutions to mitigate the negative effects of perceived discrimination among children with SCD. Caregivers (N = 27) of children with SCD (≤ 12 years old) and providers from their hematology clinics (N = 11) participated in individual semi-structured interviews exploring experiences of discrimination and daily SCD management and completed a quantitative measure of discrimination. Qualitative data were collected until themes reached saturation and subsequently transcribed verbatim, coded, and analyzed using applied thematic analysis. Quantitative and qualitative data converged to suggest the pervasiveness of discrimination in healthcare settings. Three qualitative themes emerged: (1) healthcare system factors underlie discrimination, (2) families’ challenging interactions with providers lead to perceptions of discrimination, and (3) experiences of discrimination impact caregiver-provider interactions. Both caregivers and providers highlighted building trusting patient-provider relationships and encouraging patients’ self-advocacy as means to reduce experiences and impacts of discrimination. These findings offer potential approaches to tangibly mitigate occurrences of discrimination in pediatric healthcare settings by trust building, accountability keeping, and fostering rapport to improve quality of care and pediatric SCD health outcomes.
Introduction
Sickle cell disease (SCD) is a genetic blood disorder that often results in multiple complications, including chronic and progressive organ damage, acute excruciating pain crises, stroke, and life-threatening acute chest syndrome [1]. Black Americans are disproportionately affected by SCD [2]. Sickle cell anemia is a subtype of SCD associated with severe sickle cell-related complications (e.g., severe pain crises) that typically begin in early childhood and are often treated with hydroxyurea or penicillin [1].
Structural racism, "the normalization and legitimization of an array of dynamics-historical, cultural, institutional and interpersonal-that routinely advantage White people while producing cumulative and chronic adverse outcomes for people of color," [4] is a key contributor to experiences of racebased discrimination for Black Americans [3][4][5].Given the history of institutionalized racism and perpetuated stereotypes of drug-seeking behavior experienced by Black Americans in medical settings, Black individuals with SCD must often contend with experiences of intersecting disease-based and race-based discrimination [6].For example, the management of acute, severe SCD-related pain often requires opioid medications, either at home or in the emergency department.However, opioids are associated with biases and stigma towards those who use them, as they are highly addictive and frequently misused in the general population [7,8], yet this is not often seen in those with SCD [9][10][11].Among pediatric populations in the healthcare setting, perceived race-based bias and discrimination are associated with poorer health and mental health outcomes, decreased adherence to medical advice, and increased mistrust in healthcare providers [12][13][14][15][16][17].Discrimination encountered during childhood is particularly deleterious for individuals with SCD, due to patient trust and medication/guideline adherence, ultimately affecting the trajectory of their health outcomes across their lifetime [15,18].
Given the disproportionate prevalence of SCD among Black Americans, the stigma attached to individuals who use opioids, and structural racism within medical settings, Black individuals with SCD experience compounding susceptibility to discrimination, beginning in childhood. Yet, the small existing body of research on intersecting race- and disease-based discrimination in the context of pediatric SCD is limited to adolescents' self-reports and often exclusively employs quantitative methods [15,17,[19][20][21]. Sole reliance on quantitative assessments may underestimate the frequency with which discrimination is reported because of the narrow scope of survey questions, which often miss important contextual data about people's lived experiences and interactions within the healthcare setting [22]. In contrast to previous qualitative SCD literature, which has reported adolescents' experiences of and reactions to racial bias [17] and highlighted caregivers' perceptions of racism in the context of inadequate SCD healthcare [23], we integrate the perspectives of both caregivers and providers to explore experienced discrimination specifically in the context of SCD during childhood. Mixed-methods research allows for a multidisciplinary approach [24] and the addition of focused solicitation from multiple respondent perspectives to ensure comprehensive reporting of personal experiences and justification for addressing discrimination. The current study uses a convergent explanatory mixed-methods approach [25] to qualitatively and quantitatively document conceptualizations, experiences, and effects of perceived discrimination in the context of childhood SCD from the perspectives of both caregivers and clinical providers.
Participants
Participants were recruited as part of a larger study addressing the implementation of a screening and referral intervention for Social Determinants of Health (SDoH) within the SCD outpatient care setting.For the larger study, primary caregivers participated in qualitative interviews and completed surveys to assess possible mechanisms linking SDoH to SCD outcomes.A purposive sampling approach was used to recruit primary caregivers in person from two hematology clinics to ensure breadth across the child's gender and age.Eligible caregiver participants were aged > 18 years, spoke English, and identified as the primary caregiver of a child with SCD aged 0-12 years old who was prescribed daily penicillin or hydroxyurea as part of their SCD management.The primary caregiver was defined as the key adult involved in caregiving and SCD disease management (most often the mother of a child with SCD).
In addition, clinical providers (i.e., medical assistants, nurses, physicians, social workers, hereafter referred to as "providers") from two pediatric hematology clinics involved in the larger SDoH project completed qualitative interviews to examine facilitators and barriers to implementing the SDoH screening and referral intervention.Providers were identified by clinic leadership and were recruited via email.Providers were eligible to participate if they were involved in the clinical care of children with SCD and spoke English.Of the 17 providers contacted, 11 agreed to participate and 6 passively declined by not responding to the email invitation.
Quantitative Measures
Primary caregivers and providers completed online questionnaires collecting sociodemographic information.Primary caregivers also answered five questions adapted from the Commonwealth 2001 Health Care Quality Survey to assess their experiences of discrimination in the healthcare setting [26] (see Table 1).These yes/no questions asked caregivers if they were ever judged unfairly or treated with disrespect because of their racialized group identity or ethnic background, how well they spoke English, the type of insurance they had, or because their child had sickle cell disease.
Qualitative Interviews
Separate semi-structured interview guides were developed for caregivers and providers.The methodological approach to this study was phenomenology, which seeks to understand the phenomenon of perceived discrimination within the sickle cell disease context and its effects.Female authors (AOB, JSE, and CA) conducted caregiver interviews.Female authors (CL and AB) conducted provider interviews.All interviewers represented various professional levels (from graduate students to post-doctoral fellows) and have received relevant training accordingly.Stemming from the larger study, there were some pre-established relationships between some interviewers and the providers.There were no pre-established relationships between the interviewers and the caregivers.
Though specific questions across the two interview guides differed, both were created in alignment with the Health Stigma and Discrimination Framework, which outlines the multilevel consequences of health stigmatization (see Table 2) [27].Interviews with primary caregivers began with questions to obtain information about the family's background, social contexts, and experiences with SCD and SDoH.This research focused on questions related to (1) experiences of differential care in healthcare settings; (2) discrimination experienced by families impacted by SCD; and (3) ways caregivers prepared their child with SCD for discriminatory encounters or experiences.Interviews with providers prompted reflection on how public discourse regarding structural racism has impacted clinical teams' engagement in addressing SDoH, given the interplay between SDoH, health equity, and discrimination.Additionally, providers were asked to describe how their patients reported the effects of bias and discrimination on their healthcare, and to share their thoughts on how to address race-and disease-based bias and discrimination within their clinics, hospitals, and the healthcare system more broadly.Through the course of data collection, new interview questions were added to delve into emerging concepts and others were deemphasized once themes had reached saturation [28].
Data Collection
For primary caregivers, semi-structured interviews lasted approximately 60 min and took place via Zoom. All but two primary caregivers gave consent to having their interviews audio-recorded; the two caregivers who declined recording provided permission for research staff to take detailed notes. Surveys assessing sociodemographic characteristics and discrimination experiences were administered via REDCap. Primary caregivers were provided $50 for their participation. For providers, semi-structured interviews were audio-recorded and held over Zoom or phone, depending on providers' preference and availability, and lasted < 60 min. Providers completed sociodemographic surveys via REDCap and were compensated $25 for their participation. Study procedures were approved by the Institutional Review Boards of Boston University Medical Campus, Boston University Charles River Campus, and Boston Children's Hospital. Data were collected from March 2020 to March 2021.
Data Analysis
For the survey data, descriptive statistics were calculated from the sociodemographic survey and quantitative measures using Microsoft Excel 2013.For the qualitative data, recorded interviews were transcribed verbatim, cleaned, and entered in NVivo 12 [29].Based on a priori research questions and the health stigma and discrimination framework [27], coding structures were developed for the primary caregiver and provider interviews.A subset of transcripts was coded to refine each coding structure based on relevant emerging themes and to ensure reliability among coders (A.O.B., C.L.).The coding structures were deemed final once no new codes were added through this iterative process.All transcripts were coded using the final coding structure.The coding team met weekly to discuss coding and resolve discrepancies through mutual consensus.All coded transcripts were stratified by respondent role (i.e., caregiver vs. provider) and analyzed using applied thematic analysis [30].Qualitative data were used to give context to the experienced discrimination reported by caregivers in the quantitative survey data.
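The quantitative strand reduces to simple proportions over the five yes/no discrimination items, which the authors computed in Microsoft Excel. The short Python sketch below illustrates an equivalent calculation; the item names and example responses are hypothetical and not taken from the study's data files.

```python
import pandas as pd

# Hypothetical names for the five yes/no items adapted from the
# Commonwealth 2001 Health Care Quality Survey (1 = yes, 0 = no).
ITEMS = ["disc_race", "disc_english", "disc_insurance", "disc_scd", "better_care_other_race"]

def summarize_discrimination(df: pd.DataFrame) -> pd.Series:
    """Percent of caregivers endorsing each item, plus any-item endorsement."""
    summary = df[ITEMS].mean() * 100                      # percent endorsing each item
    summary["any_discrimination"] = (df[ITEMS].sum(axis=1) > 0).mean() * 100
    return summary.round(1)

# Made-up responses for three caregivers, for illustration only
example = pd.DataFrame(
    [[1, 0, 1, 1, 0],
     [0, 0, 0, 0, 0],
     [0, 1, 0, 0, 1]],
    columns=ITEMS,
)
print(summarize_discrimination(example))
```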
Reflexivity
The authors of this manuscript offer a positionality statement to be transparent about our backgrounds and the lenses through which we view research and research processes.
Quantitative Discrimination Findings
Among caregivers, 40% (n = 10) reported at least one experience of discrimination. Among these, six (24%) reported believing that their child would have received better medical care if their child had belonged to a different racialized group identity or ethnic group. Additionally, caregivers reported that doctors or medical staff judged them or treated them with disrespect because of their racialized group identity or ethnic background (20%), how well they spoke English (16%), their child having SCD (24%), and/or their type of insurance (24%). Of the 15 caregivers who did not quantitatively endorse discrimination, nine described experiences of discrimination during the semi-structured interviews (Table 2).
Qualitative Discrimination Findings
Three overarching themes emerged from the qualitative data analysis: (1) healthcare system factors underlie discrimination, (2) families' challenging interactions with providers lead to perceptions of discrimination, and (3) experiences of discrimination impact caregiver-provider interactions (see Fig. 1).
Healthcare System Factors Underlie Discrimination (Theme 1)
Caregivers often described racism, inadequate disease knowledge, stigma, and bias as key aspects of the healthcare system that contributed to discriminatory encounters. With regard to racism, caregivers attributed their experiences of discrimination to their own racialized group identity and/or their child's racialized group identity. Moreover, caregivers believed their likelihood of experiencing discrimination from providers or within the healthcare system broadly would be different if they were White. For example, as one caregiver waited for an appointment for her own child with SCD, she witnessed another Black parent abruptly leave an appointment and say that providers were not giving proper care because the child was Black. Though the caregiver did not have any conversation with that parent, they explicitly reported feeling uncomfortable following this encounter. One White caregiver of an adopted Black child with SCD further expounded upon this phenomenon as she described how her racialized group identity was protective against discriminatory encounters.

Both caregivers and providers reported how providers' lack of disease knowledge and the "invisible nature" of SCD perpetuate skepticism about the seriousness of symptoms (e.g., pain) and subsequent mistreatment in the healthcare setting. Caregivers and providers both indicated that lack of disease knowledge contributed to providers' misconceptions about SCD and sustained disease-based stigma.

"I trust [child's] hematologist and I trust her primary care doctor because they know us and I know they kinda respect us, but sometimes I feel like going into the ER, you have doctors… not as knowledgeable about sickle cell. And I feel like they don't always listen to me when I'm talking and it's not until they talk to her doctor that they, that I feel like they're listening to me." (Mother)

Respondents also emphasized how the intersecting effects of race- and disease-based stigma contributed to families' increased likelihood of encountering discrimination. While some caregivers solely attributed providers' assumptions of drug-seeking behavior to disease-based stigma, others emphasized that their Black racialized group identity fed into this assumption. Providers echoed both sentiments, explaining how implicit biases fueled by both historical prejudices towards Black Americans and misperceptions surrounding SCD contributed to discriminatory encounters in the healthcare setting.

"I frequently hear stories about patients feeling discriminated against when they present to the emergency department, how they're looked at as drug seekers based on the amount of pain meds they need, based on the color of their skin, based on the way they're dressed. If they're not feeling well, they're not going to put on a three-piece suit, you know what I'm saying? It really affects a lot of our patients greatly." (Hematology provider)

Lastly, caregivers also reported experiencing discrimination due to bias regarding their insurance status. Caregivers with private insurance reported witnessing and experiencing preferential treatment, while caregivers with public insurance or no access to insurance reported longer wait times and subpar medical care.
"One thing that I do have an issue with is that-a few times that I would go into the emergency room, insurance may have been pending.And I feel like under those circumstances [child] was kind of rushed.[Child] wasn't properly looked at.And I know when he needs to be admitted… But a few times, [healthcare providers] did a quick little checkup and … sent him home."(Mother)
Families' Challenging Interactions with Providers Lead to Perceptions of Discrimination (Theme 2)
Caregivers described examples of challenging interpersonal interactions, including being spoken to harshly, repeatedly dismissed, or ignored altogether by providers. In particular, caregivers emphasized that emergency room providers seemed to overlook their families, ignore their certainty about common SCD-related complications, or prematurely send them home when seeking acute care services. For example, one mother described a provider speaking to her harshly and taking her phone out of her hands to obtain her attention, after having repeatedly ignored her when she first brought her child to the emergency department. She attributed the experience to her being Black, as she observed this provider speaking in a similar "rude way" to another Black patient but speaking in a "different tone" to White patients. Altogether, these interactions often left caregivers feeling disrespected, mistrusting, and concerned that their family received lower-quality healthcare.
"I had asked [child's providers]-'What does this medicine do?… why is my daughter [taking]
it?'-and the doctor said, 'Well, it's not my job to explain to you what it is.'I said, 'Oh, oh, but it is… it's your job to explain to me every risk and every pro or con to this…' I was so angry with them, and I was trying to be nice, but I was livid…I actually felt like [the doctor] felt like, 'Who does this woman think she is to question me and to make me explain to her and I'm the medical professional?'…like she felt like I'm supposed to just do what she says and that's just it… I felt so disrespected."(Mother) Caregivers further described experiencing discrimination when they perceived differences in their received healthcare based on racialized group identity or their child having SCD.Experiences of providers' mistreatment of their children felt unreasonable and left caregivers confused and ultimately concerned about the quality of care afforded to their families.
"When my son is in pain, they keep throwing this behavioral services issue on him.And that is a very upsetting thing, because I can't think of any woman that has gone into labor that has been pleasant.And those are grown adults.So why would you say that… a child that is having bone pain is misbehaving because they're not 'hello and hi" or even polite about it?Yet, I hear cancer patients, whenever I'm on the floor, screeching and hollering down the hallway, but I don't see security and behavioral services running to…manage their situations.My question is, why would there be behavioral services and security called on a child that can't even walk and has barely enough energy?Why would it take five nurses and four security guards?And this happened twice.And the security guards were standing there and criminalizing him."(Mother)
Experiences of Discrimination Impact Caregiver-Provider Interactions (Theme 3)
Experiences of discrimination affected how caregivers and providers interacted with one another and engaged with the healthcare system overall.Having to navigate the added responsibility and subsequent stress associated with the uncertainty of how their child would be treated within healthcare settings, in addition to typical parenting and caretaking responsibilities, was a tangible effect of experienced discrimination for caregivers.Caregivers reported being worried and/or scared for their child's future following their own or others' experiences of discrimination.However, caregivers also reported that these feelings of fear drove them to increase their disease knowledge and advocacy skills in an attempt to mitigate future encounters with discrimination."…it's really sad, because most of sickle cell affects Black people, and I don't know how [providers] see us sometimes.I'm not playing the race card, but… we have to advocate for ourselves.If we don't know, and we just allow them to do whatever they want, we don't get the best treatment.So, we have to advocate for our kids and educate ourselves to know what is out there, that will help the kids.So, [the] communication piece-it's key." (Mother) Many caregivers also expressed the importance of empowering their children to advocate for themselves independently to fortify their abilities to better navigate and overcome potential future encounters of discrimination.
"[Child's doctor] was very honest and she said, unfortunately, [discrimination] does happen and she's not gonna lie and say that it doesn't.Sometimes [doctors] think people are just there for pain meds, and they don't understand.So [doctor] spoke to the importance of actually going to the hospital where your child received services, and not us just being a support to advocate for [child] but to teach him how to advocate for himself."(Mother) Providers also described the importance of educating themselves and colleagues to mitigate occurrences of discrimination.In particular, providers reported informally adapting their overarching health systems' antidiscrimination initiatives for smaller forum settings and encouraging peer accountability across departments to assist in efforts to combat implicit biases and racism contributing to discrimination."We've had discussions in our clinic about racism within the healthcare system, how it may be implicit, so we really address that.I think that has definitely made more providers aware of needs.I've always been aware of the sickle cell population having a lot of unmet needs, because they're a very underserved population."(Hematology provider) Lastly, several providers reported building "trusting relationships" with patients and their families as a way to acknowledge their potential mistrust of providers due to previous experiences of discrimination with other providers and the historically pervasive nature of racism in the general healthcare setting.Providers particularly described feeling the need to be sensitive to patients'/families' needs and creating safe spaces for conversation.
"… there is a need for us to work even harder in helping these families trust us as the providers, which in large part is a combination of education and experience.It's a bit of both.It's a bit of helping them understand we are here for you."(Hematology provider)
Discussion
The current research highlights the presence and impact of discrimination within the context of healthcare for children with SCD.Caregivers and providers expressed discrimination, stigma, and racism similarly within healthcare systems.Furthermore, caregivers and providers reported that racism and discrimination contribute to lower-quality care for children across multiple levels of healthcare, which is consistent with previous SCD research [17,31].Current findings extend existing research by highlighting how caregivers shifted their behaviors (e.g., by prioritizing the importance of healthy patient-provider rapport and relationships, implementing initiatives to reduce provider biases and increase self-awareness, and holding space for peer accountability across departments) to accommodate the pervasive presence and/or threat of discrimination and bias within healthcare settings.
Differences across respondent types emerged regarding the degree to which discrimination was referenced explicitly. Although caregivers vividly described experiences of otherism or differential treatment due to race- or SCD-related stigmas, providers were more likely to use the terms racism, discrimination, and stigma. This disconnect between describing and explicitly naming racism, discrimination, and/or stigma has been briefly reported in SCD research and may underlie differences between our qualitative and quantitative findings [32]. Similarly, patients with other stigmatizing illnesses, such as substance use disorder and HIV/AIDS, tend not to explicitly name racism as a reason for differences in their care in comparison to their White counterparts [33,34]. Caregivers' reasons for describing racism and discrimination yet refraining from explicitly naming those constructs may have important implications for how people living with SCD and their families experience, process, and understand discriminatory behavior. Research indicates that there are deleterious biopsychosocial effects of internalized racism in healthcare for patients who are stigmatized [35]. These can manifest as increased levels of stress imposed on the nervous and immune systems and, in turn, negative impacts on mental health outcomes [35]. Further research should explore the long-term impacts of discrimination on SCD families and their engagement with the healthcare system.
Current findings suggest that mistrust between providers and patients may be mitigated by intentional efforts to foster trusting relationships and build rapport between providers and families. Many efforts to address discrimination have been limited to the general field of pediatrics and patient-provider relationships [36][37][38][39]. By focusing specifically on pediatric SCD, the current study illuminates the uniquely complex challenges faced by patients with SCD and their families due to combined disease- and race-based discrimination and highlights the need for more robust provider-caregiver-patient approaches across care settings. For example, hematology providers, such as patient advocates, are uniquely positioned to advocate for families of individuals with SCD and to improve antidiscriminatory models of treatment between patient families and providers (e.g., trust building, accountability keeping, values, fostering rapport) across healthcare settings [40,41]. Current findings suggest the need for communal gatekeeping of self- and team accountability within larger healthcare provider networks to mitigate discriminatory effects on patients and families as they seek care in the emergency department or inpatient setting. Existing models may be adapted for the SCD population. Peer accountability and self-awareness may further serve as mechanisms to mitigate stigma, bias, and discrimination in healthcare [42], but additional approaches are needed to address the intersection between race- and disease-related stigma and bias. Systemic and institutional approaches may be best suited to reduce discrimination and to shift the responsibility for seeking antidiscriminatory solutions onto providers and away from families.
Our findings can be understood in the context of historic and current racism broadly within the US and specifically within the healthcare system.For example, healthcare providers' perceptions of, and influence on, Black individuals with SCD may be affected by the broader history of pain medication overuse and abuse in the USA and aspects of structural racism, which together may amplify the persistent drug-seeking stigma among Black individuals with SCD [6,8,16,17,43].The pervasive nature of discrimination and differential treatment has been well documented within the healthcare system broadly [44], and for SCD within the emergency department [43,45,46] and inpatient settings [47,48], including challenging interpersonal interactions with medical providers.The current work also highlights discrimination related to insurance, which is not yet represented in available literature despite high rates of Medicaid usage among pediatric SCD populations [49].
Lack of awareness of SCD in the healthcare system may contribute to the invisibility of the disease itself [50]. SCD is often referred to as an invisible disease for various reasons, such as the difficulty of objectively identifying its characteristics and symptoms, the lack of physical or laboratory markers of pain, and the systemic marginalization of the people it predominantly affects [51]. In addition, there is less funding, less research, and fewer treatment advancements for SCD compared to similar diseases that predominantly affect White Americans [3,52]. For example, cystic fibrosis (CF) has three times more federally approved medications/treatments and receives up to 11 times more funding than SCD [3,52]. Yet, CF affects less than one-third as many people as SCD (approximately 30,000 versus 100,000) [3,52]. Increasing awareness and education regarding SCD may equip providers to more effectively care for children and adults with SCD and may help to expand funding for research to inform efforts to mitigate the presence and negative effects of discrimination within healthcare settings. Drawing upon findings from other stigmatized illnesses such as HIV/AIDS and substance use disorder, promising approaches to mitigate discrimination include ensuring appropriate descriptive language and naming of stigmatized populations within healthcare settings, listening to patient stories, and including and encouraging familial and/or community support [53,54].
Current findings should be considered in light of limitations. First, the sample of two clinics and 27 caregivers from the Northeast region of the USA may not be generalizable to the broader SCD population. Second, providers were all female and nearly all non-Latinx White, which may not be representative of many hematology clinics. Third, caregivers of children between the ages of 0 and 12 years may have different experiences than caregivers of adolescents. Yet, these early experiences of children and caregivers are important to capture, as they may have implications for the life course. Moreover, the nature of agency and autonomy in healthcare engagement may change in adolescence and young adulthood, at which point caregivers' awareness of discrimination is also subject to change. Additionally, the measure used to quantitatively evaluate discrimination was adapted from an existing health quality survey [26] and as such is not an independently validated measure of discrimination.
Future research should seek to utilize a validated measure of discrimination to improve the replicability of reliable and accurate results.Lastly, there may be bias or missed information present among participants who decided not to participate versus those who agreed to participate in the study.Additional funding for research, community partnership, and engagement between healthcare institutions and SCD populations may help to improve the quality of care for people living with SCD [42].
In conclusion, this research highlights the experiences of discrimination, stigma, and bias of families of children with SCD within the healthcare system.By drawing on the perspectives of both caregivers and providers, these findings suggest promising approaches to reduce the frequency of discrimination in healthcare, foster healthier relationships and partnerships between patients/families and providers, and improve the quality of care across the healthcare system.Together, this may lead to improved healthcare experiences and trajectories of health outcomes for individuals with SCD across the life course.
Fig. 1
Fig. 1 Process and effects of experienced discrimination
Table 1
Perceived discrimination questionnaire

a. What was different for you in these experiences?
3. What experiences have you heard about from others managing a child with SCA?
4. In what ways have you prepared for discriminatory encounters/experiences?

Lead question: How do you feel the recent conversations about structural racism have impacted provider interest or engagement in addressing unmet basic needs through WE CARE?
1. How do you feel recent events or discussions about structural racism have influenced caregivers' receptivity to being asked questions about nonclinical needs? To receiving or accessing resources?
2. More broadly, how has bias/discrimination affected healthcare in your patient population?
Probes:
a. On what is the discrimination or bias based? [Race, gender, socioeconomic status, the disease itself, etc.]
   i. To what extent is it based on sociodemographic characteristics such as race or income versus clinical characteristics?
b. What are your thoughts on how to address bias/discrimination within your own clinic? How about in your hospital, or even in the healthcare system as a whole?
Qualitative interviewers were Black and White cisgender young adult and middle-aged females.More broadly, the authors of this manuscript represent various racial/ethnic backgrounds, and religions/spiritual orientation and sexual orientation.Professionally, we hold graduate degrees in Psychology, Education, Public Health, and Medicine.Our academic ranks span master's and doctoral-level graduate students to Assistant and Associate Professors.Our scholarship uses both qualitative and quantitative methods, and we strive for antidiscrimination in our research and clinical efforts.
Table 4
Practitioner and staff sociodemographics (n = 11)

"I knew this from when I was learning about sickle cell when I was in nursing school, I don't fit the category of doctors looking down on me, because I am not African American, but I do feel… that doctors might not listen to parents about a child and when I was in [Midwestern State] I certainly felt that-they had not a clue what

| 2022-12-21T16:22:31.143Z | 2022-12-19T00:00:00.000 | {
"year": 2022,
"sha1": "25c298e48b1714120f6d240638216bb0f8b97e64",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40615-022-01483-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "76c5c50d4c81bbdf685bb80b515c148ab830358f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254918085 | pes2o/s2orc | v3-fos-license | Green Bond Issuance and Peer Firms’ Green Innovation
Based on the realistic background of the rapid development of China's green bond market, this paper uses data on China's non-financial listed companies from 2010 to 2020 to examine the impact of green bond issuance on peer firms' green innovation. The results show that the issuance of corporate green bonds can significantly promote the quantity and quality of peer firms' green innovation, and this promotion effect is sustained over time. The heterogeneity test shows that when the issuer of green bonds is an industry leader or receives high media attention, the promotion effect on peer firms' green innovation is more significant. Similarly, when the issuer and the peer firm are close competitors or sit in the same board network, the peer firm has a higher level of green innovation. It is further found that the green innovation behavior adopted by peer firms can significantly improve their environmental performance. The article indicates that the issuance of corporate green bonds can produce a good spillover effect of green innovation within the industry, which is conducive to China's strategic goal of "carbon peaking and carbon neutrality".
Introduction
According to the latest version of the Climate Bond Standard issued by the Climate Bond Initiative (CBI), green bonds refer to a bond where the proceeds will be exclusively applied to finance or re-finance, in part or in full, new or existing eligible green projects, which are aligned with the four core components of the Green Bond Principles (Check the Climate Bond Standard for details of the four core components.https://www.climatebonds.net/files/files/climate-bonds-standard-v3-20191210.pdf(accessed on 10 December 2022)).Similar to traditional fixed-income securities, enterprises can raise capital through green bonds.In addition, green bonds need to report, in detail, the use of the raised funds in green projects and the "green" nature of the projects [1].Moreover, green bonds aim to produce positive environmental benefits.Previous studies have shown that green bonds play a role in reducing carbon dioxide emissions [2], improving air quality [3], and improving environmental performance [4], although there are some greenwashing behaviors in the green bond market [5].
In 2007, the European Investment Bank (EIB) issued the first "Climate Awareness Bond", which marked the rise of the green bond market [6].After this, the green bond market developed slowly until 2013, when the market began to grow rapidly and entered an "exciting period" [7].Although the green bond market has developed rapidly in recent years, due to its late start, the issuance of green bonds is less than 1% of the global cumulative bond issuance [8].In addition, the statistical results of this research sample show that only 63 non-financial listed companies in China have successfully issued green bonds by the end of 2020, which is less than 1.5% of the total number of listed companies in China.Therefore, it is important to ask, how can such a small amount of green bonds promote overall environmental improvement?A reasonable explanation is that the issuance of green bonds not only affects the pro-environmental behavior of the issuer [2] but also drives peer firms to take more measures that are beneficial to environmental protection [3], thus resulting in better overall environmental performance and social welfare.
Throughout the existing literature, scholars mainly study green bonds from two aspects: issue pricing and market reaction.Regarding green bond issuance pricing, most studies believe that green bonds can signal enterprises' ecologically sustainable development to the market and enhance the ESG value of issuers.Investors who are concerned about environmental protection may be willing to sacrifice a certain expected income to buy green bonds, resulting in green premium [5,[9][10][11][12][13].In research on the market reaction of green bonds, scholars have found that the stock market has a positive response to green bonds issuers by using the event study method [2,[14][15][16].One stream of research has also confirmed that green bonds can produce a positive environmental performance [2,4] and financial performance [17].At the present stage of the development of the green bond market, the corporate green bonds issuers have explored new financing ways for other enterprises; they are the "pioneers" of the green bond market, and their successful practices will inevitably be learned and referenced by peer firms, thus affecting the behavior of peer companies.Wu et al. (2022) [3] found that enterprises issuing green bonds will drive the same industry companies to take more actions that benefit environmental protection.These behaviors are recognized by investors, thus reducing the bond financing costs of other enterprises in the same industry.However, up to now, there is no article to explore the impact of green bonds issued by enterprises on the green innovation behavior of peer companies.This paper will make a supplement in this field.
We use the non-financial listed companies in China from 2010 to 2020 as samples to test the impact of green bonds on the green innovation of peer firms.Since the Green Bond Issuance Guidelines were released in 2015, China's green bond market has developed rapidly.By the end of 2020, China has issued a total of USD 258.8 billion of green bonds in domestic and overseas markets and has become the second largest green bond issuer (The data come from the China Green Bond Market Report 2021.You can access https: //www.chinabond.com.cn/cb/cn/yjfx/zzfx/nb/20220704/160692668.shtml(accessed on 10 December 2022) for the original text).The Wind database is the largest, most complete, and most widely used financial database in China.It contains all the announcements of listed companies, stock funds, bond data, financial laws, and regulations in the China stock market.The Wind database comprehensively includes the announcement date, circulation, issue price, basic information of issuers, and other data on green bonds issued by China enterprises, which meets the data requirements of this study.Suppose enterprises in an industry have successfully issued green bonds.In that case, we take the companies that have not issued green bonds in this industry as the treatment group and use the propensity score matching (PSM) method to match the appropriate control groups in other industries.Then, we build a difference-in-difference (DID) model to test the impact of green bonds on the green innovation of peer firms.Whether enterprises issue green bonds is not random, and PSM can alleviate the problem of selectivity deviation in the model.The DID model can effectively evaluate the implementation effect of green bonds, thus alleviating the endogenous problem of the model.From this data set, we find that the issuance of corporate green bonds can significantly improve the quantity and quality of peer firms' green innovation, which is manifested in the significant increase in the total number of green patent applications, green invention patent applications, and green patent citations of peer firms.This result is robust in several different specifications, including, but not limited to, changing the PSM matching method, the explained variables being delayed by one period, using the Tobit model to re-estimate the results, conducting a placebo test, and excluding the influence of industry policies.The results from these specifications are similar to the baseline.
The premise that the DID model is effective is that the enterprises in the treatment group and the control group have a parallel trend before the policy impact, which we have confirmed.We find that the green bond has a dynamic and sustained role in promoting the green innovation of peer firms.
Next, we are interested in what kind of green bond issuers can produce better green innovation spillover effects.We find that when the green bond issuer is an industry leader or the issuer is highly concerned by the media, the green innovation promotion effect of peer firms is more significant.Similarly, when the issuer and the peer firm are close competitors or in the same board network, the peer firm has a higher level of green innovation.
We further explored whether the green innovation behavior adopted by the peer firms can produce positive environmental performance.We found that the green innovation behavior of peer firms can significantly increase their probability of obtaining environmental recognition or other positive evaluation, improve their environmental responsibility scores, and promote them to pass the ISO14001 certification.Generally speaking, our research shows that peer firms improve their environmental performance by upgrading the quantity and quality of green innovation after enterprises issue green bonds.
This research clarified the mechanism of green bonds on the green innovation behavior of peer companies, explored the specific path of green bonds to produce environmental performance, and answered an important question in practice, that is, how does the green bond market support the sustainable development of economy and ecology in the initial stage of development?
Moreover, this study expands and enriches the related literature on green bonds.The existing research mainly focuses on the issues of the pricing of green bonds [9][10][11]13,18], stock market reflection [2,16], environmental performance [2,4], financial performance [17], links with other financial products [19,20], and factors affecting green bonds [21,22].So far, there is no literature that studies the innovation spillover effect of green bonds.We supplement this research system by studying the impact of green bond issuance on the green innovation of peer firms.
Furthermore, this study also contributes to the literature on the influencing factors of green innovation.The driving factors of green innovation have been widely considered in academic circles.The existing research mainly focuses on the following aspects: First, environmental regulation, mainly including environmental tax collection, government subsidies, emission trading, environmental information disclosure, environmental decentralization, environmental interview, environmental policy uncertainty, carbon emission regulation, government environmental expenditure, central environmental protection inspector, and other factors [23][24][25][26].Second, the enterprise's organizational structure, mainly including factors, such as company resources, corporate governance mechanism, and senior management characteristics [27][28][29][30][31]. Third, the external environment, mainly including the digital economy, intellectual property protection, air pollution, Internet development, media attention, and other factors [32,33].At present, the research on the promotion path of green innovation lacks the realistic consideration of the support of financial instruments.Although a few pieces of literature have studied the promotion of green finance to green innovation from the perspective of green credit, the research on other green financial instruments represented by green bonds is lacking.
In addition, this study also adds to the literature on the role of peers in firms' decision making.Previous literature has studied the influence of executive compensation [34], stock split [35], dividend policy [36], M&A decision [37], investment decision [38,39], etc., on peers' decision making, and we have supplemented the literature on the influence of financing decisions on peers' investment behavior.
Enterprise Practice of Peer Learning Theory
Relevant research based on social interaction theory shows that in order to avoid the risk problems caused by its information limitations and limited resources in the decisionmaking process, enterprises often choose to learn and follow the behaviors of other organizations with similar characteristics [40,41].Graham and Harvey (2001) [42] surveyed 392 chief financial officers, and found that enterprises are not completely independent in making financial decisions and often interact socially.For example, the successful listing of enterprises will stimulate the willingness of peers to raise funds in the capital market.The practice of multinational corporations' social responsibility will drive the peers to fulfill more social responsibilities [43].In addition, the company's capital structure [44], executive compensation [34], stock split [30], cash holding [45], dividend policy [36], M&A decision [37], investment decision [38,39], and other behaviors may be learned and imitated by peers, thus affecting the financial behavior of peer companies.As a new financing tool with both "green" and "financial" characteristics, green bonds can effectively solve the financing problem of green projects of enterprises [6,10], and, at the same time, it can help enterprises to establish a "green development" social image [2] and enhance the ESG value of enterprises.As the "pioneer" of the green bond market, the successful practice of green-bond-issuing enterprises will surely be learned and referenced by other enterprises, thus affecting the financial decisions of peer firms.
The Trigger Mechanism of Industry Green Innovation Spillover Effect by Green Bond
As a forward-looking strategy to fundamentally solve environmental problems, green innovation can help enterprises establish technical barriers and long-term competitive advantages by producing the dual value effects of the environment and finance.It is an important financial investment decision for enterprises [46].Green innovation has natural development bottlenecks, such as high risk, long cycle, and significant investment [47], which creates fuzzy and difficult conditions for enterprises to implement green innovation decision making [48].Previous studies have shown that enterprises' green technology innovation decisions are not only influenced by their organizational characteristics, resource conditions, and other factors but also closely related to the financial behavior of other firms [49].Because peer enterprises are faced with the similar market environment and development prospects, when the pioneers of green bonds appear in the industry, it will inevitably lead to the learning and imitation of peer companies.Peer firms may adjust their green innovation level out of the instinct of "seeking benefits" and "avoiding harm", thus triggering the industry green innovation spillover effect.
Peer Firm's "Profit-Seeking" Motivation
First, the imitation effect.The successful practice of green bonds in enterprises has opened up a brand-new financing channel for peer enterprises and stimulated the willingness of peers to finance through the green bond market.Green bonds can not only play their financing function, providing a reliable long-term funding source for the development of green projects of enterprises, but also play a signal effect to establish the image of green and sustainable development of enterprises [2,15], thus guiding green government capital and green credit capital [50,51], as well as green social capital flowing into enterprises [13,14,16] to ease corporate financing constraints and reduce capital costs [9,12], optimizing the debt maturity structure.The successful experience of issuing green bonds by enterprises will be circulated in the industry through information channels, such as the media and the network of the board of directors.After learning the experience and benefits of green bond issuing enterprises, peer firms may make more strategic decisions that are beneficial to environmental protection, such as green innovation, so as to accumulate results for issuing green bonds in the future.
Second, the technology spillover effect.Green bonds can promote issuers to implement more green innovation activities and improve the industry's basic level of green technology.Zhang et al. (2022) [52] found that green bonds can improve the issuers' green innovation capability by easing financing constraints and improving information transparency; Wang and Feng (2022) [53] also found that in the green bond market in China, green bonds can significantly improve the green innovation level of issuers.The innovative products of green bond issuers can be used by peer firms for reference and transformation, thus improving the green innovation efficiency of peers and then driving the iterative upgrading of new technologies in the industry.
Peer Firm's "Harm-Avoiding" Motivation
First, competitive pressure.The inherent green attribute of green bonds can help enterprises gain more investors' attention and green premium and enhance their financing advantages.Investing the funds from green bonds in green projects will improve the green performance of products, thus boosting consumers' demands for the green value of products.Faced with the change in the competitive environment, the interaction between them will make the peer companies respond to the green bond behavior of competitors to prevent the establishment of competition barriers and the loss of their advantages.According to the dynamic competition theory, innovation is the key for enterprises to shape their core competitiveness, while green innovation is critical for enterprises to seize the future market and achieve sustainable development [48].Therefore, when the pioneering enterprises of green bonds appear in the industry, peers will pay more attention to the positive contribution of green technology innovation to financial performance and environmental performance and implement more green innovation activities.
Second, compliance pressure. According to the theory of planned behavior, individual behavior is influenced by the perception of pressure during decision making [54], and the behaviors of other individuals in the reference group and the expectations of critical stakeholders constitute the primary sources of this pressure [55,56]. After green bonds guide green funds into the industry, they drive the orderly development of the green industry and simultaneously raise peer companies' cognitive threshold of environmental legitimacy, resulting in subjective normative pressure and a willingness to innovate in green technologies. In addition, when an enterprise issues green bonds, it may have a demonstration effect that drives the environmental performance of the whole peer group to improve, thus attracting the attention of regulatory authorities and raising the threshold of environmental compliance; stricter norms, in turn, increase the environmental risks and compliance costs of enterprises. Therefore, out of a motivation of crisis prevention, when peer enterprises observe that other enterprises in the industry successfully issue green bonds, they will improve their green innovation level to reduce future uncertainty.
To sum up, after learning of the experience and benefits of issuing green bonds, peer companies will adjust their level of green technology innovation out of "profit-seeking" and "harm-avoiding" motivations, forming the green innovation spillover effect of green bonds; the trigger mechanism is shown in Figure 1. Based on this, this paper proposes the following hypothesis:

Hypothesis 1 (H1). Green bonds have an industry green technology spillover effect, and the issuance of corporate green bonds will promote the level of peer firms' green innovation.
Data Source and Sample
The Wind database comprehensively includes the announcement date, circulation, issue price, basic information of issuers, and other data on green bonds issued by China enterprises.The information on the green bonds database has been updated to July 2022.However, we were only able to obtain the green innovation data before 2020, so the data analyzed in this paper are relevant up to the end of 2020.A total of 1324 green bonds were collected in this paper.As it is impossible to observe which green projects government bonds and financial bonds are invested in, the bonds issued by financial enterprises or local governments were excluded from the sample; thus, 1001 green bonds were obtained, and the issuers involved 38 industries.Although the target company can learn from different types of companies, due to the similarity of factors, such as industry environment and market competition, and the consensus formed by closed circles, the target company is more inclined to interact with companies in the same industry.Therefore, we took the listed companies that had not issued green bonds in these 38 industries as the treatment group (2245 in total) and the listed companies in other industries as the control group (2690 in total) and observed the impact of the green bonds on the green innovation behavior of peers.Based on the data of non-financial listed companies in Shanghai and Shenzhen A-shares in China from 2010 to 2020, this paper used PSM one-to-one nearest neighbor matching method to match the appropriate control companies for the treatment group and excluded the ST companies and the samples with missing required indicators, and, finally, we obtained 3606 sample companies, with a total of 24,858 firm-year observations.Green innovation data was obtained from the Chinese Research Data Services (CNRDS) database.CNRDS is a high-quality, open, and platform-based comprehensive data platform for Chinese economic, financial, and business research and is a very commonly used and reliable database for Chinese academic research.The other related data was obtained from the China Stock Market & Accounting Research Database (CSMAR).CSMAR database is a professional economic and financial data platform.The database covers a number of research series, including China stock market, China listed companies, China fund market, China bond market, China derivatives, China economy, China money market, special research, etc., including structured financial statements, trading quotes, unstructured news information, research reports, company announcement data, etc.It is one of the most comprehensive economic and financial research databases in China at present.When we found suspicious data, we checked the company's annual report.For example, we found that the return on equity of Qinghai Salt Lake Industry Co., Ltd. was −19.28% and −19.16% in 2017 and 2018, respectively, and it suddenly became 160.52% in 2019.This phenomenon is abnormal, so we checked the annual report to find out the reason.We checked more than 200 pieces of data, accounting for about 1% of the total sample.We did not find that the sample data were inconsistent with the annual report data, which also made us more convinced that the data in the database we adopted were reliable.In addition, in order to reduce the influence of extreme values, all continuous variables were treated with 1% and 99% quantile winsorized.
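As a rough illustration of the sample-construction steps described above (winsorizing continuous variables at the 1% and 99% quantiles and one-to-one nearest-neighbor propensity score matching of treatment firms to control firms), the Python sketch below uses scikit-learn's logistic regression and nearest-neighbor search. The covariate names (size, lev, roa, age) are placeholders, since the matching covariates are not enumerated here, and the code is a sketch rather than the authors' actual procedure.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def winsorize(s: pd.Series, lower=0.01, upper=0.99) -> pd.Series:
    """Clip a continuous variable at its 1% and 99% quantiles."""
    return s.clip(s.quantile(lower), s.quantile(upper))

def psm_one_to_one(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Match each treatment firm to its nearest control firm on the propensity score."""
    X = df[covariates].apply(winsorize)
    # Propensity score: probability of belonging to the treatment group (treat = 1),
    # i.e., a non-issuing firm in an industry where green bonds were issued.
    ps = LogisticRegression(max_iter=1000).fit(X, df["treat"]).predict_proba(X)[:, 1]
    df = df.assign(pscore=ps)
    treated, controls = df[df["treat"] == 1], df[df["treat"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    # Nearest control for each treated firm (matching with replacement in this sketch)
    matched_controls = controls.iloc[idx.ravel()]
    return pd.concat([treated, matched_controls])

# covariates = ["size", "lev", "roa", "age"]   # placeholder covariate names
# matched = psm_one_to_one(firm_df, covariates)
```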
Research Model
Because enterprises issue green bonds at different points in time, and following [3,4,57], a staggered (gradual) difference-in-differences model was set up as Model 1. Patent i,t was the explained variable, indicating the level of green innovation of enterprise i in year t; we measured it by both the quantity and the quality of green innovation. Green i × Post t was the difference-in-differences term of the DID model. This paper focused on the coefficient α 1 of Green i × Post t: if α 1 was significantly positive, the issuance of corporate green bonds promoted the green innovation level of peers. Controls i,t was the group of control variables of the model. In addition, Model 1 also controlled for the firm fixed effect λ i and the year fixed effect δ t.
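The explicit equation for Model 1 appears to have been lost during text extraction; based on the description above (explained variable, DID term, controls, firm and year fixed effects), a plausible reconstruction of the staggered DID specification is:

$$Patent_{i,t} = \alpha_0 + \alpha_1\, Green_i \times Post_t + \beta\, Controls_{i,t} + \lambda_i + \delta_t + \varepsilon_{i,t}$$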
Variable Definition
(1) Explained variable
Peer firms' green innovation (Patent i,t) was the explained variable. Green innovation refers to an enterprise's green product design and process innovation, together with the organizational management support and implementation behind it, involving energy conservation, pollution prevention, waste recycling, and other measures that address environmental problems and achieve specific environmental protection and sustainable development goals. In the existing literature, some scholars use the sales revenue of new products per unit of energy consumption to measure the level of green technology innovation [58]. Although this reflects the degree of green innovation to some extent, it is difficult for it to capture green R&D achievements. In contrast, green patents require enterprises to research, develop, popularize, and apply green technology; they are the dominant output of green technology innovation and can better reflect green innovation capability [27,59]. This paper therefore followed the research methods of Lin and Ma (2022) [32] and Xia et al. (2022) [25] and identified the green patents applied for by enterprises according to the IPC Green Inventory issued by the World Intellectual Property Organization (WIPO). The steps were as follows: (1) Obtain the IPC Green Inventory published by the WIPO in 2010, a convenient retrieval tool for green patents. According to the IPC Green Inventory, green patents cover seven fields (waste management, nuclear power, transportation, energy conservation, agriculture and forestry, alternative energy production, and administrative supervision and design), involving about 200 areas that are directly friendly to the environment. (2) Obtain the patent information of Chinese A-share listed companies from the CNRDS database, which provides the number, classification, and citations of patents applied for by Chinese companies. (3) Match the IPC Green Inventory with the patent information of listed companies according to the IPC number to obtain the listed companies' green-patent data. In China, patents are divided into invention patents, utility model patents, and design patents. An invention patent refers to a new technical proposal for a product, a method, or an improvement thereof. A utility model patent refers to a new technical scheme suitable for practical use based on a product's shape, structure, or their combination; granting a utility model patent does not require substantive examination, so the procedure is relatively simple and the cost relatively low. A design patent refers to a new, aesthetically pleasing design suitable for industrial application, based on the shape, pattern, or color of a product or their combination. Compared with utility model and design patents, invention patents have a relatively high degree of innovation, a longer development cycle, greater research and development difficulty, and more complicated application procedures. Following [60,61], we used the number of green invention patent applications to characterize the quality of green innovation. In addition, the number of green patent citations can also reflect patent quality well. Therefore, we used three indicators to measure green innovation. (1) The total number of green patent applications (Patent1) is the number of green patents applied for by an enterprise in a fiscal year; in the calculation, we added 1 to the total number of green patent applications in that year and then took the natural logarithm. This indicator represents the quantity of green innovation. (2) The number of green invention patent applications (Patent2) is the total number of green invention patents applied for by an enterprise in a fiscal year; we added 1 to this number and then took the natural logarithm. This indicator represents the quality of green innovation. (3) The number of green patent citations (Patent3) is the total number of green patent citations of an enterprise in a fiscal year; we added 1 to this number and then took the natural logarithm. This indicator also represents the quality of green innovation.
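In formula form, the three indicators described above are log(1 + count) transforms of the yearly counts; writing N^app, N^inv, and N^cit for green patent applications, green invention patent applications, and green patent citations, respectively:

$$Patent1_{i,t}=\ln(1+N^{app}_{i,t}),\qquad Patent2_{i,t}=\ln(1+N^{inv}_{i,t}),\qquad Patent3_{i,t}=\ln(1+N^{cit}_{i,t})$$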
(2) Explanatory variables. Green i × Post t was the core explanatory variable in this paper. Green i was the dummy variable distinguishing the treatment group from the control group: if an enterprise in a given industry publicly issued green bonds, the other enterprises in that industry were assigned a value of 1, and 0 otherwise. The same industry was defined using the fourth level of the Wind industry classification. The Wind classification standard is based on the Global Industry Classification Standard (GICS) and adopts a four-level system comprising 11 first-level industries, 24 second-level industries, 62 third-level industries, and 136 fourth-level industries; it is a commonly used industry classification standard in China. Post t was a time dummy variable. For the treatment group, if the first green bond in an industry was issued at time t, the enterprises in that industry were assigned a value of 1 at time t and afterwards, and 0 otherwise; for the control group, Post t was always 0.
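To make the construction of the treatment indicators concrete, the sketch below derives Green, Post, and their interaction from a firm-year panel and a table giving each treated industry's first green bond issuance year; the column names are hypothetical, and the panel is assumed to already exclude the issuers themselves.

```python
import pandas as pd

def add_did_terms(panel: pd.DataFrame, first_issue: pd.DataFrame) -> pd.DataFrame:
    """panel: firm-year rows with columns ['firm', 'year', 'industry'] (issuers excluded).
    first_issue: one row per treated industry with ['industry', 'first_issue_year']."""
    out = panel.merge(first_issue, on="industry", how="left")
    # Green = 1 for firms whose industry contains at least one green bond issuer
    out["green"] = out["first_issue_year"].notna().astype(int)
    # Post = 1 from the industry's first issuance year onwards (always 0 for control industries)
    out["post"] = ((out["green"] == 1) & (out["year"] >= out["first_issue_year"])).astype(int)
    out["green_x_post"] = out["green"] * out["post"]
    return out
```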
(3) Control variables. Many factors drive an enterprise's level of green technology innovation, so we included several groups of essential control variables in the model. First, based on the resource-based view, an enterprise's willingness to innovate and its risk-taking level are influenced by its own characteristics and capabilities; we therefore controlled for enterprise size (Size), return on equity (Roe), capital expenditure (Expend), cash flow (Cfo), equity nature (Soe), and enterprise age (Age). Second, based on stakeholder theory, the demands and governance capacity of stakeholders such as creditors and enterprise executives also affect green technology innovation decisions; the asset-liability ratio (Lev) and the executive shareholding ratio (Exe) were therefore included in the control variable group. Third, because of the externalities of environmental governance and technology, enterprises often lack an intrinsic motivation for green technology innovation, so government intervention plays an important role in these decisions; on this basis, we controlled for government subsidy (Subsidy) and environmental regulation (Regula). See Table 1 for definitions of the main variables.
Variables and Descriptions (Table 1)
Patent1: Add 1 to the total number of green patent applications in that year and then take the natural logarithm.
Patent2: Add 1 to the number of green invention patent applications in that year and then take the natural logarithm.
Patent3: Add 1 to the number of green patent citations in that year and then take the natural logarithm.
Green × Post: If an enterprise issues green bonds in year t, other enterprises in the same industry are assigned a value of 1 in year t and later, otherwise 0.
Size: Natural logarithm of total assets at the end of the fiscal year.
Roe: Annual net profit divided by total owner's equity at the end of the fiscal year.
Expend: Cash paid for the purchase and construction of fixed assets, intangible assets, and other long-term assets, divided by total assets at the end of the fiscal year.
Cfo: Net cash flow from operating activities divided by total assets at the end of the fiscal year.
Soe: Equals 1 if the enterprise is state-controlled, otherwise 0.
Age: Time interval between the enterprise's registration year and the sample year.
Lev: Total liabilities divided by total assets at the end of the fiscal year.
Exe: Number of shares held by executives divided by total share capital at the end of the fiscal year.
Subsidy: Total government subsidies received by the enterprise divided by operating income.
Regula: Investment in pollution control in each province divided by the province's total industrial output value, multiplied by 1000.
Descriptive Statistics
Table 2 provides descriptive statistics for the major variables. The standard deviations of total green patent applications (Patent1), green invention patent applications (Patent2), and green patent citations (Patent3) are 1.105, 0.897, and 1.597, respectively, indicating substantial differences in the level of green innovation among enterprises. We tested the mean difference before and after the impact of green bonds by industry in the treatment group. Table 3 reports the industries that passed the difference test at the 10% confidence level at least, sorted by the mean difference in the total number of green patent applications (Patent1). When an enterprise in an industry issues green bonds, the total number of green patent applications (Patent1), green invention patent applications (Patent2), and green patent citations (Patent3) of other enterprises in most industries increase significantly. We also found that the effect on green innovation is strongest in the comprehensive industries, followed by environmental and facility services, whereas green innovation in the gold, automobile manufacturing, seaport and service, and airport service industries declined rather than rising; these industries deserve further study and attention from scholars and government departments. We further tested the correlations among the research variables, with results shown in Table 4. First, green bonds are significantly positively correlated with the total number of green patent applications (Patent1), green invention patent applications (Patent2), and green patent citations (Patent3), which preliminarily supports Hypothesis 1. Second, the control variables, namely enterprise size (Size), return on equity (Roe), capital expenditure (Expend), cash flow (Cfo), equity nature (Soe), enterprise age (Age), government subsidy (Subsidy), the asset-liability ratio (Lev), the executive shareholding ratio (Exe), and environmental regulation (Regula), are correlated with green innovation (Patent1/Patent2/Patent3).
Therefore, it is reasonable to include these variables in Model 1. In addition, the correlation coefficients among the control variables are small, suggesting no obvious multicollinearity among the main variables selected in this paper. We further tested Model 1 with VIF values variable by variable; the maximum VIF is 3.56, far below 10, indicating that Model 1 is little affected by multicollinearity and its estimates are reliable. For the propensity score matching, following Hao et al. (2018) [62], the indicators of the treatment group enterprises were taken from the year before the policy impact and matched to control group data from the same period. For example, if an enterprise in an industry successfully issued green bonds for the first time in 2017 and was the first in its industry to do so, the industry's enterprises were matched to control group enterprises using 2016 indicators. In the matching, the propensity score was estimated with a logit model, one-to-one nearest-neighbor matching without replacement was adopted, and only individuals within the common support were matched. Covariates included enterprise size (Size), return on equity (Roe), the asset-liability ratio (Lev), cash flow (Cfo), equity nature (Soe), and industry controls. The premise of PSM validity is that there is no significant difference in observable variables between the matched treatment and control groups; we therefore carried out a matching balance test, with results shown in Table 5. First, compared with the unmatched variables, the standardized bias (%bias) of all matched variables is greatly reduced. Second, the standardized bias of all matched variables is below 10%. Third, the t-tests of all matched variables accept the null hypothesis that there is no systematic difference between the treatment and control groups. These results show that the observable variables and matching method chosen in this paper are appropriate. Figure 2 shows the kernel density functions before and after matching: the two kernel density curves deviate considerably before matching and move much closer after matching as the distance between their means narrows, indicating that the matching is effective and reasonable and that the estimation bias caused by sample self-selection is better controlled.
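A minimal sketch of the matching procedure described above (logit propensity score, common support, 1:1 nearest-neighbor matching without replacement) is given below; it assumes a pandas data frame with a binary `treated` column and is an illustration of the idea rather than a reproduction of the authors' actual workflow.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_one_to_one(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """1:1 nearest-neighbor matching on a logit propensity score, without replacement."""
    X, y = df[covariates].values, df["treated"].values
    df = df.assign(pscore=LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])
    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0]
    # keep only units inside the common support of the propensity score
    lo = max(treated["pscore"].min(), controls["pscore"].min())
    hi = min(treated["pscore"].max(), controls["pscore"].max())
    treated = treated[treated["pscore"].between(lo, hi)]
    controls = controls[controls["pscore"].between(lo, hi)]
    # greedy nearest-neighbor matching without replacement
    nn = NearestNeighbors(n_neighbors=len(controls)).fit(controls[["pscore"]].values)
    _, idx = nn.kneighbors(treated[["pscore"]].values)
    pairs, used = [], set()
    for t_idx, neighbors in zip(treated.index, idx):
        for j in neighbors:
            c_idx = controls.index[j]
            if c_idx not in used:       # take the closest not-yet-used control
                used.add(c_idx)
                pairs.append((t_idx, c_idx))
                break
    return pd.DataFrame(pairs, columns=["treated_index", "control_index"])
```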
Average Treatment Effect
Table 6 reports the PSM average treatment effect. "Unmatched" reports the estimates for the samples before matching: the differences between the treatment group and the control group are 0.4902, 0.3316, and 0.6440, respectively, all significant at the 1% level. "ATT" is the average treatment effect on the treated after PSM: the differences between the treatment group and the control group are 0.4768, 0.3226, and 0.6280, respectively, all significant at the 1% level. These results support Hypothesis 1: the issuance of corporate green bonds significantly improves peer firms' green technology innovation.
Baseline Regression Results
This paper estimates Model 1 on the PSM-matched sample. Table 7 reports the impact of the issuance of corporate green bonds on the total number of green patent applications (Patent1), green invention patent applications (Patent2), and green patent citations (Patent3) of peer enterprises. Columns 1 to 3 report the regression results without control variables: the coefficients of Green × Post are 0.105, 0.069, and 0.145, respectively, all significant at the 1% level. After adding control variables, the results in columns 4 to 6 of Table 7 show that the coefficient of Green × Post remains significantly positive at the 1% level. These results show that the issuance of green bonds significantly promotes both the quantity and the quality of peer firms' green patents, and Hypothesis 1 is verified.
Parallel Trend and Dynamic Effect Test
The difference-in-differences model must satisfy the parallel trend assumption, that is, the explained variables in the treatment group and the control group follow the same trend in the absence of the policy intervention. We built Model 2 to test the parallel trend of the staggered DID model and to further analyze the dynamic marginal effect of green bonds on enterprises' green innovation. In Model 2, Before(>3)/Before(3)/Before(2)/Before(1)/Current/After(1)/After(2)/After(>2) respectively represent the interaction terms of Green with dummies for the years relative to the industry's first green bond issuance. Table 8 reports the parallel trend and dynamic effect test results. The regression coefficients of Before(3), Before(2), and Before(1) are not significant for any of the explained variables, whether total green patent applications (Patent1), green invention patent applications (Patent2), or green patent citations (Patent3), indicating no significant difference in green innovation between the treatment and control groups before the issuance of green bonds. In addition, the coefficients of Current, After(1), and After(2) are essentially significantly positive, indicating that issuing green bonds continuously promotes the green innovation level of peer enterprises.
Robustness Tests
(1) Kernel matching. To alleviate errors caused by the loss of samples in one-to-one matching, this paper adopts the kernel matching method to re-determine the weights, applies the common support condition, and finally obtains 28,805 firm-year observations. We re-estimate Model 1 with the new sample, and the results are shown in Table 9. Whichever explained variable is used (total green patent applications (Patent1), green invention patent applications (Patent2), or green patent citations (Patent3)), the coefficient of Green × Post is always positive and significant at the 1% confidence level. Hypothesis 1 is verified again, which shows that the results of this paper are robust. (2) Tobit model. Because the explained variables, total green patent applications (Patent1), green invention patent applications (Patent2), and green patent citations (Patent3), are all truncated at zero from below, OLS estimates may be biased; therefore, the Tobit model is also used for estimation, with results shown in columns 4-6 of Table 9. The coefficients of Green × Post are 0.294, 0.211, and 0.180, respectively, all significant at the 1% level, which indicates that the issuance of green bonds by enterprises can indeed improve the green innovation level of peer enterprises and again supports Hypothesis 1.
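The explicit form of Model 2, introduced at the beginning of this subsection, also appears to have been dropped during extraction; a plausible event-study reconstruction consistent with the interaction terms listed above is:

$$Patent_{i,t} = \alpha_0 + \sum_{k}\theta_k\, Green_i \times D^{k}_{t} + \beta\, Controls_{i,t} + \lambda_i + \delta_t + \varepsilon_{i,t}$$

where the $D^{k}_{t}$ are dummies for event time k ∈ {Before(>3), …, After(>2)} relative to the first green bond issuance in the firm's industry.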
(3) Reducing the influence of industry policies. Although this paper alleviates the influence of industry policies by controlling for firm fixed effects, the green bond issuers in the sample are mainly distributed across three industries (thermal power production and supply, ecological protection and environmental management, and civil engineering and construction), which together account for nearly half of all green bond issuers. To alleviate the possible impact of this industry concentration, we exclude the samples whose bond issuers belong to these three industries and run the PSM-DID test again. The regression results are shown in columns 7-9 of Table 9; the coefficient of Green × Post remains significant, which shows that the results of this paper are robust.
(4) Placebo test. To alleviate the influence of other unobservable factors on the results, we constructed a placebo test based on repeated random sampling and regression. Specifically, we first randomly selected 2245 companies from the sample as a pseudo-treatment group, then used one-to-one nearest-neighbor matching without replacement to select appropriate control groups from the remaining companies, and finally estimated Model 1 on the new sample. We repeated this procedure 1000 times; the results are shown in Figure 3, where the straight line marks the t value estimated by the baseline regression model. Whichever explained variable is used (the total number of green patent applications (Patent1), the number of green invention patent applications (Patent2), or the number of green patent citations (Patent3)), the t values of most random draws lie near zero, and only a few exceed the baseline t value. This means that the promotion effect of green bonds on peer enterprises' green innovation is not driven by other unobservable factors.
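A sketch of the repeated-sampling placebo idea is shown below; it uses a simple two-way fixed-effects OLS as the estimation step and a single placeholder `post_year`, whereas the paper re-matches controls and uses industry-specific issuance years for each draw, so this illustrates the logic rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def did_t_value(df: pd.DataFrame) -> float:
    """t value of the DID term in a two-way (firm, year) fixed-effects OLS."""
    fit = smf.ols("patent1 ~ green_x_post + C(firm) + C(year)", data=df).fit()
    return float(fit.tvalues["green_x_post"])

def placebo_draws(panel: pd.DataFrame, post_year: int = 2016,
                  n_treated: int = 2245, n_draws: int = 1000) -> pd.Series:
    """Assign treatment to randomly drawn firms, rebuild a pseudo Green x Post
    term, and collect the resulting t values over repeated draws."""
    rng = np.random.default_rng(0)
    firms = panel["firm"].unique()
    t_values = []
    for _ in range(n_draws):
        fake = set(rng.choice(firms, size=n_treated, replace=False))
        pseudo = (panel["firm"].isin(fake) & (panel["year"] >= post_year)).astype(int)
        t_values.append(did_t_value(panel.assign(green_x_post=pseudo)))
    return pd.Series(t_values, name="placebo_t")
```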
Heterogeneity Test
So far, our results show that the successful practice of issuing green bonds is learned by peer enterprises, thereby promoting peers' green technology innovation. We are therefore interested in which kinds of green bond issuers are most likely to be learned from by their peers.
Industry Leaders
Studies have found that the behavior of industry leaders easily influences the behavior of other enterprises in the same industry. For example, Brown et al. (2018) [63] found that when an industry leader receives an inquiry letter about its financial reports, companies that did not receive a letter pay more attention to the publicly disclosed inquiries directed at the leader and improve their own information disclosure quality. Bratten et al. (2016) [64] found that enterprises in the same industry pay attention to the reported earnings of industry leaders: if the leader's performance exceeds analysts' expectations, the motivation of other enterprises in the industry to manipulate earnings is also strengthened. We therefore expect that when industry leaders issue green bonds, other enterprises in the same industry pay more attention to green bonds and green behaviors. Referring to Wu et al. (2022) [3], we define companies whose operating income accounts for more than 3% of industry income as industry leaders. If the green bond issuer is an industry leader, the variable Leader takes the value 1, and 0 otherwise. We put Leader and Green × Post × Leader into Model 1 simultaneously; the results are shown in columns 1-3 of Table 10. The coefficients of Green × Post × Leader are all significantly positive at the 1% level, indicating that when the green bond issuer is an industry leader, its influence on peer companies' green innovation is stronger.
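The moderation specification is not written out explicitly in the text; a plausible form, also applicable to the Media, Rival, and Director tests below (with M i standing for the respective moderator dummy), is:

$$Patent_{i,t} = \alpha_0 + \alpha_1\, Green_i \times Post_t + \alpha_2\, Green_i \times Post_t \times M_i + \alpha_3\, M_i + \beta\, Controls_{i,t} + \lambda_i + \delta_t + \varepsilon_{i,t}$$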
Issuer with High Media Attention
In the information age, the media plays an important role. On the one hand, as an intermediary of information dissemination, the media can spread a company's information in a timely manner, reducing information asymmetry in the market; moreover, news reports are short and focused, a paradigm that helps information users read and absorb information and improves the efficiency of its dissemination. On the other hand, the media integrates information from different sources, such as relevant government policies, institutional analyses, and public comments, into its reporting, reducing users' information search costs. Therefore, when an enterprise issues green bonds and the media, as an intermediary of information dissemination and production, reports more on the event, it can arouse more attention and learning interest among peer companies and more strongly promote their green technology innovation. First, we compiled the list of companies that successfully issued green bonds and searched the Baidu search engine for "the name of the issuer, green bonds". Then, through keyword search supplemented by manual reading, we counted the number of reports related to a specific company's green bond issuance in the collected results. For example, we searched Baidu for "solar, green bonds" and identified, by reading the results, that Sina Finance, Shanghai Securities News, Zhitong Finance Network, Liangjiang Finance Watch, and other media had reported on the green bonds issued by China Energy Conservation Solar Co., Ltd. (stock code: 00591) 12 times. We take the median of the total number of media reports on green bonds issued by enterprises in an industry as the cut-off point: if the total number of reports for an industry is equal to or higher than the median, the industry is defined as having high media attention, and the other enterprises in that industry are assigned a Media value of 1; otherwise, 0. We put Media and Green × Post × Media into Model 1 simultaneously, and the results are shown in columns 4-6 of Table 10. The coefficients of the interaction term Green × Post × Media are all significantly positive at the 5% level, indicating that when the green bond issuer attracts high media attention, the green innovation of peer companies improves more significantly.
Close Competitors
In the product and capital markets, a company pays close attention to its competitors and tries, through imitation and other means, to gain an advantage over them. Brown et al. (2018) [63] found that when companies noticed that the stock exchange had questioned a competitor's financial reports, they improved their own reports in line with the inquiry letter so as to avoid receiving one themselves. We therefore expect the experience and returns of green bond issuers to have a substantial impact on the green technology innovation of close competitors. Referring to [63], we define peer enterprises whose total assets do not exceed 10% of the green bond issuer's as close competitors. If the peer company is a close competitor of the green bond issuer, Rival is assigned 1, and 0 otherwise. We put Rival and Green × Post × Rival into Model 1 simultaneously, and the results are shown in columns 1-3 of Table 11. The coefficient of Green × Post × Rival is significantly positive at the 5% level at least, indicating that when a peer enterprise is a close competitor of the green bond issuer, it is more inclined to learn from the issuer's experience and returns and to improve its own green innovation in order to remain competitive.
In the Same Network of Directors
The director network, also known as director interlocks, the director chain, or the board network, refers to the direct and indirect connections established when individual directors serve on at least two boards simultaneously. Previous studies have shown that the director network can create imitation pressure, that is, managers tend to imitate the behavior of other organizations in the same director network [65]. The director network is also a transmission channel for information and a learning platform for knowledge and market experience [66,67]; the experience of green bond issuers therefore spreads faster and more thoroughly within the same director network. We thus expect that when a peer company and a green bond issuer are in the same director network, the promotion of the peer company's green technology innovation is more pronounced. Drawing on [68,69], this paper uses a dummy variable (Director) to indicate board-interlocked enterprises: if the peer company and the green bond issuer are in the same director network, the value is 1, and 0 otherwise. We introduce the director-chain term as a moderator in Model 1, and the results are shown in columns 4-6 of Table 11. The coefficient of the interaction term (Green × Post × Director) is significant at the 10% level at least, indicating that when the issuer of green bonds is in the same director network as the peer company, its spillover effect on the peer company's green innovation is stronger.
Further Analysis
The foregoing analysis shows that peer enterprises in the same industry improve their green innovation after learning about the experience and benefits of green bond issuers. We are further interested in how this affects the environmental performance of peer enterprises. To this end, we construct Model 3, in which Environmental i,t+1 indicates the environmental performance of enterprise i in year t+1. We use three indicators to measure environmental performance: (1) Environmental recognition (E-recog), referring to [70]; if the enterprise receives environmental recognition or another positive evaluation, it is assigned a value of 1, and 0 otherwise. (2) Environmental responsibility score (E-score), obtained from the corporate social responsibility evaluations published by Hexun.com; in the calculation we added 1 to the corporate environmental responsibility score and then took the natural logarithm. (3) Environmental certification (E-certi), consistent with [3]; if the enterprise's environmental management system has passed ISO14001 certification, the value is 1, and 0 otherwise. The environmental recognition and certification data come from the social responsibility database of the Chinese Research Data Services (CNRDS) platform. The empirical results in Table 12 show that after enterprises issue green bonds, peer enterprises in the same industry receive more environmental recognition or positive evaluations, obtain higher environmental responsibility scores, and are more likely to pass ISO14001 certification. This evidence shows that the industrial technology spillover effect of green bonds produces positive environmental performance, consistent with [2-4], that is, green bonds can indeed generate positive environmental benefits. It also supports our view that the reason green bonds promote overall environmental improvement in the early stage of their development is that they stimulate the green innovation behavior of peer companies.
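The explicit equation for Model 3 likewise appears to have been lost in extraction; the text only specifies the dependent variable and its timing, so a plausible reconstruction is:

$$Environmental_{i,t+1} = \gamma_0 + \gamma_1\, Green_i \times Post_t + \beta\, Controls_{i,t} + \lambda_i + \delta_t + \varepsilon_{i,t}$$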
Conclusions
Faced with global environmental problems such as pollution, resource depletion, and ecological deterioration, human beings have come to recognize the harm that past patterns of production and consumption inflict on the environment. To better coordinate economic development and ecological protection, sustainable development and green development have become mainstream strategies for future economic growth. Green development requires substantial capital investment, but government funds can cover only a small part of it; it is therefore important to build a green financial system and guide social capital into environmental governance. As a new financing tool with both "green" and "financial" characteristics, green bonds are an effective way to help the green development strategy break through its capital constraints [6,10]. However, whether and how green bonds actively promote environmentally sustainable development has not been answered consistently. Against this background, we studied the industrial technology spillover effect of green bonds, its mechanism, and its environmental performance. We find that after enterprises issue green bonds, peer companies improve the quantity and quality of their green innovations, motivated by seeking benefits and avoiding harm, and that this improvement is dynamic and sustained. This conclusion is consistent with [44], which states that corporate behavior motivated by organizational learning and reputation acquisition is influenced by the behavior of other firms in the group. In contrast to the literature suggesting that green bonds promote the issuers' own green innovation, our research identifies another transmission path through which green bonds play a positive environmental role.
We then asked what kinds of green-bond-issuing enterprises produce larger industry spillover effects. Industry leaders are the benchmark enterprises in an industry, and their behavior is more closely watched and studied by other enterprises in the same industry; when industry leaders issue green bonds, information about the issuance is more likely to spread to peers and to trigger learning and imitation, resulting in a larger industry spillover effect. Media reports, meanwhile, are a main channel of information diffusion and dissemination, so green bond issues that receive more media attention and publicity naturally generate larger industry spillover effects. Enterprises also pay particular attention to the behavior of close competitors: when competitors adopt new means of obtaining resources, enterprises try to imitate or even surpass them, so the industry spillover effect is more significant when the issuer is a close competitor of the peer company. Finally, the board network is an effective channel for information exchange, so when the issuer of green bonds is in the same board network as the peer company, the peer company's green innovation level is higher. These findings deepen our understanding of the industry spillover effect of green bonds.
Finally, we examined whether peer enterprises' green innovation affects their environmental performance. The answer echoes the question raised in the introduction: at the early stage of the green bond market, when the number and coverage of issued green bonds are still small, how can green bonds generate overall environmental benefits? We found that the green innovation of peer enterprises promoted by green bonds significantly improves their environmental performance, reflected in a higher probability of receiving environmental recognition or other positive evaluations, higher environmental responsibility scores, and a higher probability of passing ISO14001 certification. This leads us to conclude that the issuance of green bonds not only affects the pro-environmental behavior of the issuer [2] but also drives peer enterprises to take more environmentally beneficial measures [3], generating overall environmental performance and social welfare.
In general, our research shows that the issuance of corporate green bonds produces a beneficial spillover effect of green innovation within the industry and, in turn, positive environmental benefits, which is conducive to China's strategic goals of carbon peaking and carbon neutrality.
Theoretical Implications
First, our research enriches the literature on the factors influencing green innovation, broadens the scope of green innovation research, and helps improve its theoretical system. Second, this paper addresses a gap in the research on green finance theory and contributes to the study of the micro-level transmission mechanisms and paths of green finance. Third, our research broadens the application scenarios of peer-learning theory and supplements the literature on how financing decisions affect the investment behavior of peer enterprises.
Practical Implications
First, from the perspective of green bonds, we analyze the path through which green finance influences the green behavior of real-economy entities, providing evidence and experience for promoting ecologically sustainable development by market-oriented means. Second, we confirm the positive effect of green bonds on peer enterprises' green innovation and environmental performance, supporting the approach of guiding social capital into environmental governance by developing the green bond market.
Suggestions
Green bonds are fixed-income instruments that provide financing for environmental and sustainable development projects, and they are attracting issuers and investors in various fields [1]. This article documents the positive industrial spillover effect of green bonds, showing that it is feasible to develop a green bond market, guide social capital into green fields, and jointly build a sustainable economy. However, relative to the mission of green bonds, the green bond market is still small. Government departments should therefore accelerate the cultivation and construction of the green bond market and promote the orderly growth of green bond issuance. Future improvements can be made in the following respects.
First, in agreement with [1], we recommend that government departments pursue standardization of issuance by developing a unified green bond framework, and that this standard be aligned with international practice so as to attract more international capital to invest in Chinese green bonds.
Second, we suggest taking measures to encourage institutional investors, such as commercial banks, insurance companies, and securities companies, to invest in green bonds. For example, the weight of green bonds in financial institutions' green finance evaluation schemes could be increased, qualifying green bonds could be given priority for inclusion in the central bank's pool of eligible collateral, modest tax incentives could be used to improve the return on investing in green bonds [71], and the range of green bond products could be enriched and diversified to attract investors' attention and participation in the green bond market.
Third, we recommend enhancing the information transparency of green bond issuers and promoting the circulation of information among enterprises. On the one hand, the relevant departments should establish a complete and transparent information disclosure framework for green bonds, standardize issuers' disclosure behavior, and promote the standardization and digitization of disclosure of green bonds' environmental benefits. On the other hand, China should further improve the third-party certification system, promote the healthy and orderly development of the external certification market, give full play to the supervisory role of third-party certification, and reasonably guarantee the quality of information disclosure.
Fourth, given the role of media attention in promoting the industrial spillover effect of green bonds, we suggest guiding the media to publicize green bonds and giving full play to the media's supervisory and information diffusion functions; for example, typical green bond issuers could be selected for focused tracking and reporting so that more enterprises and investors learn about green bonds. In addition, since green bond issuance by industry leaders produces larger industry spillover effects, qualified leading enterprises should be encouraged and guided to issue green bonds.
Limitations and Future Research
This study has three main limitations that future research could address. First, limited by data availability, this paper measures an enterprise's green technology innovation only through green patents, whereas green innovation is a systematic process; future work could collect additional data and bring green innovation inputs, efficiency, performance, and other dimensions into the research framework. Second, because few enterprises have successfully issued green bonds so far, this paper does not distinguish between types of green bonds; future work could explore how the heterogeneity of green bonds affects peer enterprises' green technology innovation. Third, this paper does not consider synergy mechanisms, that is, which factors can strengthen the industry spillover effect of green bonds; future work could examine environmental regulation, sound legal systems for intellectual property protection, and effective internal control environments. Nor does this paper consider substitution mechanisms, that is, which factors could replace green bonds and produce the same positive effects; future work could investigate whether other green financial products, such as green insurance, green funds, and green credit, can also promote enterprises' green technological innovation.
Figure 1. The trigger mechanism of the industry green technology spillover effect of green bonds.
Figure 2. Kernel density function plot before and after matching.
Table 3. Difference in the average value of green innovation before and after the impact of green bonds in various industries. If an enterprise in an industry successfully issues green bonds, the other enterprises in the industry that have not issued green bonds are the treatment group, while enterprises in other industries are the control group.
Table 6. PSM average treatment effect.
Table 8. Parallel trend and dynamic effect test results. Notes: *** Significant at 1% level; t values are reported in parentheses; refer to Table 1 for variable descriptions. | 2022-12-21T16:21:19.554Z | 2022-12-19T00:00:00.000 | {
"year": 2022,
"sha1": "9495e6cfc816adc8103f6d1498ea5a6c38fe2e70",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/24/17035/pdf?version=1671616703",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "40d97442cd0620735fd90b5b700a834e0fe08ce0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
231714284 | pes2o/s2orc | v3-fos-license | Transcripts switched off at the stop of phloem unloading highlight the energy efficiency of sugar import in the ripening V. vinifera fruit
Transcriptomic changes at the cessation of sugar accumulation in the pericarp of Vitis vinifera were addressed on single berries re-synchronised according to their individual growth patterns. The net rates of water, sugars and K+ accumulation inferred from individual growth and solute concentration confirmed that these inflows stopped simultaneously in the ripe berry, while the small amount of malic acid remaining at this stage was still being oxidised at a low rate. Re-synchronised individual berries displayed negligible variations in gene expression among triplicates. RNA-seq studies revealed sharp reprogramming of cell-wall enzymes and structural proteins at the stop of phloem unloading, associated with an 80% repression of multiple sugar transporters and aquaporins on the plasma or tonoplast membranes, with the noticeable exception of H+/sugar symporters, which were rather weakly and constitutively expressed. This was verified in three genotypes placed in contrasted thermo-hydric conditions. The prevalence of SWEETs suggests that electrogenic transporters would play a minor role on the plasma membranes of the SE/CC complex and of the flesh, while sucrose/H+ exchangers dominate on the flesh tonoplast. Cis-regulatory elements present in their promoters allowed these transporters to be sorted into different groups, which also included specific TIP and PIP paralogs and cohorts of cell wall-related genes. Together with simple thermodynamic considerations, these results lead us to propose that H+/sugar exchangers at the tonoplast, associated with a considerably acidic vacuolar pH, may exhaust cytosolic sugars in the flesh and alleviate the need for supplementary energisation of sugar transport at the plasma membrane.
S2a). They were further grouped according to their expression profiles (Fig. S3), and the cluster number was restricted to four upon visual analysis. Cluster A and B included the majority of the genes increasing or decreasing in expression from G to S (Fig. 3a). The genes commonly …
Other important differentially expressed genes (DEGs) could be identified in Syrah and only one microvine due to discrepancies at P stage. Since the variance among P stage samples was certainly exacerbated by abrupt changes in physiology and gene expression at phloem arrest, worsened by sampling delay, data were tested in pairwise mode analysing stage S versus stage G.
A total number of 6585, 6453 and 5621 DEGs were identified in Syrah, MV032, and MV102 (Table S2b). The commonly modulated genes increased to 899 up-regulated genes and 1155 down-regulated ones, yielding 2054 DEGs (Table S3). Another 411 genes were modulated in the three genotypes but with a discordant gene trajectory. Compared to the previous analysis, this second comparison gave a more comprehensive list of commonly modulated DEGs (before, there were 1448). These two lists shared 809 genes. In this new DEG list, there were down-regulated genes such as VviTMT2, VviHT2, VviGIN1, VviH+ATPase, VviEXP19 and other cell wall-related genes (Table S3).
Rates of gene expression: ranking genes by transcription priority
Gene expression burdens cells by consuming resources and energy for transcription and translation, and the synthesis and degradation of hydrophobic proteins like transporters is particularly expensive 29. In this respect, the transcription of transporters must stop very fast once they are no longer required. As a proxy for energy cost, we merged and ranked the two lists of differentially expressed genes according to their absolute change in expression (CPM, i.e. uncorrected for transcript length) during the stop of water, sugar and K+ influx in the berry (S versus G), as summarised in Table 1 (full list in Table S3). This straightforward procedure proved extremely efficient in the detection of channels and transporters virtually turned off simultaneously with phloem, together with cell wall-related genes. Amazingly, in Syrah, among solute transporters, VviHT6 exhibited the largest decay in expression from G to S stages with 80% inhibition, indicating tight synchronism with phloem arrest. This was also verified for VviSWEET10, VviPIP1.3 and VviPIP2.5, which decreased by 90% and 83%. Moreover, a pectate … (+244%), metallothionein (+213%), and polyubiquitin (+73%) appeared as the most induced ones.
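As an illustration of the ranking step described above, a minimal pandas sketch is given below; the matrix and column names are hypothetical placeholders, and the percentage-change column is included only for convenience.

```python
import pandas as pd

def rank_by_absolute_change(cpm: pd.DataFrame, cols_g: list[str], cols_s: list[str]) -> pd.DataFrame:
    """Rank genes by the absolute difference in mean CPM between stage G and stage S.
    `cpm` is a genes x samples matrix; `cols_g` / `cols_s` name the replicate columns."""
    out = pd.DataFrame({
        "cpm_G": cpm[cols_g].mean(axis=1),
        "cpm_S": cpm[cols_s].mean(axis=1),
    })
    out["abs_change"] = (out["cpm_S"] - out["cpm_G"]).abs()
    out["pct_change"] = 100 * (out["cpm_S"] - out["cpm_G"]) / out["cpm_G"]
    return out.sort_values("abs_change", ascending=False)
```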
Promoter analysis
Sixty selected genes related to water, sugar, and growth down-regulated at the stop of phloem were distributed in different regulatory networks when clustered according to the Z-score 30, which accounts for the length, number and position of their promoter motifs (Fig. 4). …
For the first time, non-destructive monitoring of single berry growth allowed us to sample synchronised fruits within an original population marked by a significant delay in the development of individual fruits. Sorting berries according to their net growth (or water import) rate warranted that berries still growing and importing water were not mixed with shrivelling (over-ripe) ones. Real-time individual growth monitoring outperformed our previous sampling procedure, which relied on solute concentration alone 4,17, upon adding the lacking kinetic dimension. Therefore, the simultaneous use of four distinct variables (malate, glucose, fructose and volume) empowered us to discriminate individual berry stages and obtain a recognizable transcriptomic signature. We showed here that the net imports of water, sugar, and K+ stop simultaneously when the ripening berry reaches its maximum volume, which confirms that sugar accumulation at constant volume, taken to indicate intense xylem back-flow during late ripening 3,16, resulted from an averaging artefact on asynchronous samples 4. This was verified in contrasted thermo-hydric conditions (field, with higher temperature and water demand, versus greenhouse, with non-limiting water supply) and when phloem unloading arrested at 0.8 M of sugar in microvines (as already observed for these genotypes 33) and at 1.2 M in Syrah. At phloem arrest, with a malate concentration below 100 mEq (39 in Syrah, 48 and 93 mEq in MV032 and MV102), the berry had already consumed between 60 and 80% of the malic acid accumulated before softening. These differences in acidity, which would principally originate from the green stage accumulation period 27, did not apparently interfere with phloem arrest.
Our study, performed in two environments and in three genotypes displaying different ripening features, reports for the first time robust gene trajectories associated with the end of the sugar and water accumulation processes, with the most intensively down-regulated genes related to cell wall modification (arrest of growth), aquaporins and sugar transporters (arrest of phloem unloading). The prevalence of plasma-membrane channel and transporter genes virtually turned off at phloem arrest re-emphasizes the central role of the apoplasmic pathway in the ripening berry. It clearly illustrates that a strategy addressing the single berry level can lead to more comprehensive insights into fruit developmental biology than random samplings.
The arrest of growth is modulated by the down-regulation of cell wall associated genes
The large cohort of cell wall related genes being repressed at growth arrest yields an extremely dynamic image of the rearrangements accompanying cell wall extension and cellular arrest, where many genes were down-regulated. The present single berry pericarp profiling greatly affirms the connection between aquaporins, cell wall and sugar transport. No obvious marker of cell death was detected here, which is often described later in the shrivelling process of Syrah berries 34.
Here we reported that genes involved in cellulose metabolism (15 cellulose synthases), hemicellulose metabolism (4 endo-1,4-beta-glucanases, 7 xyloglucan endo-transglucosylase/hydrolase proteins), pectin metabolism (4 pectate lyases, 4 polygalacturonases, 6 ß-galactosidases, 6 fasciclin-like arabinogalactan proteins), and expansin genes (9) were strongly down-regulated at the arrest of phloem (Table S3). In particular, we emphasise that precisely those expansins (VviEXPA19, VviEXLA1, VviEXPB4, VviEXPA14) previously linked to growth and cellular expansion during the second growth phase, both in the flesh and in the skin 35, were down-regulated concomitantly with growth cessation. A link between these expansins and aquaporins had been envisaged in a pre-genomic study reporting a down-regulating trend two or three weeks before the attainment of maximum berry size on asynchronous berry samples 36. This so-called "unexpected" link was recently confirmed through a gene coexpression network analysis 37.
The arrest of phloem is linked with strongly down-regulated aquaporins
Aquaporins are transmembrane channels facilitating the transport of water and small solutes from cell to cell and between cell compartments. Noticeably, seven of the ten plasma membrane intrinsic proteins (PIPs) identified in V. vinifera 37 were drastically inhibited (Table 1, Table S3). … Petit Manseng, which was not followed by a marked decrease at phloem arrest 41. Our RNA-seq data show that VviSWEET15 displays a similar expression level to VviSWEET10 during active phloem unloading without being inhibited at phloem stop (Table S4). … The probable PM H+ hexose symporter VviHT2 (Vitvi18g00397 - LOC100232961) 44 also exhibited a considerable increase during ripening, followed by a 35% decrease in two weeks 53. Synchronised berries show here that more than 50% inhibition occurs at growth arrest, reaching thereafter an 84% decrease in Syrah. VviHT2 was already associated with the induction of VviHT6 at veraison, both in flesh and skin 54 and in ABA- and GA3-treated berries 45,55, but data were lacking regarding its inhibition at the ripe stage. To the best of our knowledge, VviHT2 localisation is still unknown. Unfortunately, more information is available for VviHT1, but it was not commonly modulated in the genotypes studied here. … accumulates one hexose 4. Thus, a phloem unloading pathway passing through H+-coupled sugar transporters on the three membrane interfaces from the SE/CC complex to flesh vacuoles, combined with the functioning of apoplasmic invertase, would waste within 50% of cellular ATP for H+ recycling, in pure loss, as the sucrose gradient between the first and terminal compartments is thermodynamically favourable in V. vinifera. Present results (DEGs, promoter analysis) lead us to propose a more efficient design of sugar import, which refines the previous ones 4,56 upon integrating key genes/functions observed in this study (Fig. 5).
VviSWEET10, located on the SE/CC plasma membrane, would be the major player in the export of sucrose from the phloem into the flesh apoplasm. Sucrose hydrolysis in the apoplasm would require its resynthesis in flesh cells to allow sucrose vacuolar transport by TST (see discussion above), which could be compatible with the huge induction of SPS during ripening 23,56 and with SPS, SuSy, AI, NI in vitro activities 41 in at least five-thousand-fold excess over the sugar accumulation rate. However, except for AI, the corresponding genes were not strongly inhibited at phloem arrest, which may indicate that sucrose resynthesis could also be involved in intercellular sugar exchanges needed for respiration in the fruit periphery, which continues after phloem arrest. In fact, Vitvi07g00353 and Vitvi11g00542, annotated as sucrose synthase and sucrose phosphate synthase (Table S4), were rather constant across the three stages in the three genotypes.
Sugar transport by SWEETs is energetically silent, and the sink strength for sugar accumulation is driven by the key electrogenic antiporters (proton-coupled, i.e. energy consuming) VviHT6 and VviTMT2 that transport sucrose across the tonoplast. At the beginning of ripening, the vacuolar acidity, within 400 and 450 mEq (Table S1) at pH 2.7, provides a considerable proton motive force for the transport of sucrose into the vacuole, as long as the counterion malate is available.
In tandem with vacuolar invertase, this system is susceptible to restrict cytosolic sucrose below the nanomolar range. Although less dramatic, a similar conclusion holds for hexose, in case VviHT6 and VviTMT2 do not transport sucrose (Table S5). Once in the cytoplasm, the exchanged malic acid will serve as a substrate for respiration or gluconeogenesis, according to the reactions 2 H+ + malate + 3 O2 = 4 CO2 + 3 H2O and 2 malate + 4 H+ → … sucrose at the tonoplast (Table S1) are scavenged from the cytoplasm through these reactions, but the remaining ones need to be pumped back into the vacuole at the expense of a progressive activation of glycolysis, aerobic fermentation (ADH2), the vacuolar H+/ATPase and H+/PPiase 57. It is noticeable that VviADH2 is also inhibited at phloem arrest, along with vacuolar invertase. Results on the respiration of single grapevine berries, which provide strong arguments in this direction, will be detailed in a forthcoming publication. The only PM H+ symporter encountered here (VviHT2) displayed the highest expression in Syrah, which also needed more ATP at the tonoplast (Table S1) when compared to microvines. This suggests that energy-requiring H+ symporters would be activated on the plasma membrane to increase sink strength in unfavourable environmental conditions, while VviSWEET15 would allow the bulk of sugar import into flesh cells, even though further work is needed to clarify its substrate specificity.
The energetically optimised model proposed here dispenses with the opening of an AKT2-like channel that would compensate for the energy deficit during phloem unloading 5, which is fully inherent to the sugar/H+ symporter dogma and can be discussed on thermodynamic and transcriptional grounds in the grape berry. Such a mechanism would require a simultaneous activation of quantitatively similar flows of sugars and K+ in ripening berries, which must be clearly excluded …
Single berry growth and development were monitored through biweekly pictures of selected clusters (up to 8 for each genotype) starting a few days before softening day until two weeks after maximum berry growth. Pictures were taken using a Lumix FZ100 camera (Panasonic), keeping the focal range and the cluster-to-camera-lens distance (30 cm) constant. The volume increment of selected berries was calculated by analysing the pictures with ImageJ 60. The software, after calibrating the images using the 1 cm scale present in each of them, automatically counted the pixels enclosed in each targeted berry area, measured as an ellipsoid. The estimated berry volume was then calculated accordingly using the radius of the previously calculated area. To eliminate the intra- and inter-cultivar variability in berry sizes and compare the changes in volume among berries of different clusters, vines, and genotypes, each berry growth profile was normalised to the softening volume, set to 1.
According to their own volume changes, 10 to 15 berries were sampled for each genotype at different dates during the ripening growth phase (stage G), closest to the berry growth peak (stage P), and two weeks after the maximum growth (stage S) in the shrivelling phase. Concerning stage P, one must notice that a posteriori it was found that phloem and, obviously, growth were already blocked at this stage, which should, technically speaking, be called P+1.
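The exact volume formula used in the ImageJ analysis described above is not spelled out; the sketch below shows one simple way to go from a calibrated projected berry area to an estimated volume, assuming a sphere of the same equivalent radius, together with the normalisation to the softening volume. The spherical simplification and the function names are illustrative assumptions, not the authors' actual procedure.

```python
import math

def berry_volume_cm3(area_px: float, cm_per_px: float) -> float:
    """Estimate berry volume from its projected area, treating the berry as a
    sphere whose radius is the equivalent radius of the projected outline."""
    area_cm2 = area_px * cm_per_px ** 2
    radius = math.sqrt(area_cm2 / math.pi)      # equivalent circular radius
    return (4.0 / 3.0) * math.pi * radius ** 3

def normalise_to_softening(volumes: list[float], softening_index: int = 0) -> list[float]:
    """Express a berry's growth profile relative to its volume at softening (= 1)."""
    ref = volumes[softening_index]
    return [v / ref for v in volumes]
```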
Berries were sampled at the same time of day, between 9 am and 11 am, to avoid circadian cycle influences. Berries without pedicel were rapidly deseeded before freezing (1-2 min after harvest) in liquid N2 and stored at -80 °C until further analysis. Single berries were ground to a fine powder under liquid nitrogen using a ball mixer mill (Retsch MM400).

… (Table S6). Aligned reads were counted using the last available annotation, VCost.v3, with HTSeq-count (version 0.9.1) with the "nonunique all" flag 64 . Genes were filtered by applying an RPKM > 1 cut-off in at least one experimental condition (Table S4), and the variance stabilising transformation was applied for data visualisation.

Genes (TMM normalised) were tested for multi-time-series significance, to find genes with significant (P < 0.05) temporal expression changes, using the R package MaSigPro 65 , with the Syrah dataset used as a control. A quadratic regression model (rsq = 0.7) was applied. Significant genes over time were clustered using STEM (short time-series expression miner) software 66 , with the maximum number of model profiles set to 24. Filtering or normalisation was not applied by the software since the input data were already filtered and normalised, but a mean-centred scale was applied to overcome cluster mismatches due to different expression levels. Genes showing the same profile pattern among the significant clusters were grouped as follows: cluster A = cluster …

Promoter sequences (1.5 kb upstream of the TSS) were extracted using bedtools from the 12X2 genome assembly 62 . The gene correspondence file at https://urgi.versailles.inra.fr/content/download/5723/43038/file/list_genes_vitis_correspondencesV3_1.xlsx and the related gff3 show that the 5'UTR was frequently lost in the VCost.v3 annotation. Therefore, we decided to take the longest sequence between VCost.v3 and NCBI (Vitvi and LOC references, respectively) as the most probable TSS.

Cis-regulatory elements

Figure legend: The figure is divided into three panels: early ripening, late ripening, and shrivelling, with the stop of phloem between the two latter (marked by a black circle). Genes written in red or blue italics refer to those respectively expressed or repressed at specific stages; those in black are …

Table 1. List of thirty-five selected genes related to cell growth, water and sugar transport, sorted by absolute difference in Syrah G versus P. Gene ID is reported as VCost.V3, V1 and RefSeq code. Gene expression is expressed as count per million (CPM). | 2021-01-27T14:15:41.768Z | 2021-01-20T00:00:00.000 | {
"year": 2021,
"sha1": "ba463b71ca5c0493ad94ea2c3475f1dd15e38126",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc8408237?pdf=render",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "ba463b71ca5c0493ad94ea2c3475f1dd15e38126",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
53213444 | pes2o/s2orc | v3-fos-license | A brain-penetrant triazolopyrimidine enhances microtubule-stability, reduces axonal dysfunction and decreases tau pathology in a mouse tauopathy model
Background Alzheimer’s disease (AD) and related tauopathies are neurodegenerative diseases that are characterized by the presence of insoluble inclusions of the protein tau within brain neurons and often glia. Tau is normally found associated with axonal microtubules (MTs) in the brain, and in tauopathies this MT binding is diminished due to tau hyperphosphorylation. As MTs play a critical role in the movement of cellular constituents within neurons via axonal transport, it is likely that the dissociation of tau from MTs alters MT structure and axonal transport, and there is evidence of this in tauopathy mouse models as well as in AD brain. We previously demonstrated that different natural products which stabilize MTs by interacting with β-tubulin at the taxane binding site provide significant benefit in transgenic mouse models of tauopathy. More recently, we have reported on a series of MT-stabilizing triazolopyrimidines (TPDs), which interact with β-tubulin at the vinblastine binding site, that exhibit favorable properties including brain penetration and oral bioavailability. Here, we have examined a prototype TPD example, CNDR-51657, in a secondary prevention study utilizing aged tau transgenic mice. Methods 9-Month old female PS19 mice with a low amount of existing tau pathology received twice-weekly administration of vehicle, or 3 or 10 mg/kg of CNDR-51657, for 3 months. Mice were examined in the Barnes maze at the end of the dosing period, and brain tissue and optic nerves were examined immunohistochemically or biochemically for changes in MT density, axonal dystrophy, and tau pathology. Mice were also assessed for changes in organ weights and blood cell numbers. Results CNDR-51657 caused a significant amelioration of the MT deficit and axonal dystrophy observed in vehicle-treated aged PS19 mice. Moreover, PS19 mice receiving CNDR-51657 had significantly lower tau pathology, with a trend toward improved Barnes maze performance. Importantly, no adverse effects were observed in the compound-treated mice, including no change in white blood cell counts as is often observed in cancer patients receiving high doses of MT-stabilizing drugs. Conclusions A brain-penetrant MT-stabilizing TPD can safely correct MT and axonal deficits in an established mouse model of tauopathy, resulting in reduced tau pathology. Electronic supplementary material The online version of this article (10.1186/s13024-018-0291-3) contains supplementary material, which is available to authorized users.
Background
The tauopathies are neurodegenerative diseases characterized by the presence of insoluble inclusions of the tau protein within brain neurons and often glia. These tau accumulations are referred to as neurofibrillary tangles (NFTs) when found in the neuronal soma and neuropil threads (NTs) when found in dendritic processes [1,2]. AD is the most prevalent tauopathy, where the hallmark pathologies are NFT, NT and neuritic plaque-associated tau inclusions, as well as senile plaques comprised of amyloid β peptides [3]. In contrast, neuronal and/or glial tau inclusions are the primary pathology in other tauopathies that include progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), Pick's disease and other frontotemporal lobar degenerative (FTLD) conditions [1]. There is a strong correlation between tau pathological burden in the brain and cognitive decline in AD [4][5][6], a finding bolstered by recent tau positron emission tomography imaging studies in AD [7,8] and FTLD due to tau pathology [9,10], suggesting that it is the development of abundant tau inclusions that ultimately leads to the neurodegeneration observed in AD and the other tauopathies. That tau mutations lead to familial cases of FTLD with NFTs and NTs [11,12] further confirms that misfolded tau oligomers and/or inclusions are sufficient to cause neurodegeneration.
Tau is normally found associated with axonal MTs in the brain, and in tauopathies this MT binding is diminished due to tau hyperphosphorylation [13][14][15], facilitating tau deposition into the fibrillar accumulations that comprise NFTs and NTs. Tau binding to MTs is thought to reduce MT dynamicity, particularly at the more labile distal portions of MTs [16,17], thereby providing increased stability to this region of axonal MTs either directly [18] and/or through inhibition of MT-severing enzymes [19,20]. As MTs play a critical role in the movement of vesicles, mitochondria and other cellular constituents within neurons via axonal transport [21], it is likely that the dissociation of tau from MTs in tauopathies alters both MT structure and axonal transport, although the observation of axonal transport deficits in other neurodegenerative diseases with neuronal protein inclusions (e.g., Parkinson's disease and amyotrophic lateral sclerosis) suggests that inclusions themselves may affect MT structure and/or function [22]. There is compelling evidence of MT abnormalities in neuronal [23] and transgenic (Tg) mouse models [24][25][26][27] of tauopathy, with the latter showing decreased MT density, increased MT dynamicity, and slowed axonal transport. MT deficits have also been observed in AD brain [28][29][30], and it is thus likely that altered MT structure and function contributes to the neurodegenerative processes in tauopathies [31]. In fact, studies from our laboratories and others have revealed that treatment of tau Tg mice with brain-penetrant MT-stabilizing natural products such as epothilone D (EpoD) and dictyostatin improves a number of CNS outcomes, with enhanced MT density, axonal transport, neuron survival, and cognitive performance with a reduction of tau pathology [24,25,27,32]. Notably, EpoD proved to be particularly safe and efficacious in tauopathy models, and EpoD (BMS-241027) subsequently advanced to Phase 1b testing in AD patients (ClinicalTrials.gov identifier NCT01492374), where it was found to be safe in a 9-week trial.
Given the therapeutic potential of brain-penetrant MT-stabilizing compounds, we have recently evaluated non-naturally occurring small molecule MT-stabilizing agents, with the goal of identifying alternative and potentially improved candidates for development as disease-modifying drugs for AD and other neurodegenerative conditions. These efforts led to the characterization of a series of brain-penetrant TPD and phenylpyrimidine (PPD) MT-modulating molecules [33][34][35] that, when compared to EpoD and dictyostatin, exhibit several favorable features including oral bioavailability, lack of P-glycoprotein (Pgp) interaction and ease of synthesis. Notably, the mechanism of action of these MT-active small molecules is believed to be unique and distinct from that of EpoD, dictyostatin and other MT taxane-site binders, as binding [36,37] and X-ray crystallography [38] studies revealed that TPDs interact with β-tubulin at a site that largely overlaps with the vinblastine binding site. An evaluation of representative PPD and TPD examples revealed an unexpected divergence of MT-directed activity of these molecules, in which all active PPDs and a large proportion of active TPDs demonstrated a bell-shaped concentration-response profile when markers of stabilized MTs (i.e., acetylated and detyrosinated α-tubulin [39], or AcTub and GluTub, respectively) were quantified in cellular assays [35]. Moreover, the PPD and TPD molecules that elicited this unusual concentration-response caused MT disruption at higher concentrations, as visualized by immunocytochemistry, with an associated proteasomemediated degradation of cellular tubulin [35]. In contrast, a subset of the TPD molecules (referred to as TPD+ compounds) elicited linear concentration-dependent increases in stable MT markers and in cellular MT mass in both transformed cells and primary neuron cultures. Moreover, a prototype TPD+ molecule (CNDR-51657; hereafter 51657, structure in Fig. 1a) was shown to rescue neuron cultures from axonal damage resulting from MT-destabilization [35]. In addition, 51657 was shown to increase brain AcTub in wild-type (WT) mice after a single administration [35].
Here, we have selected 51657 as a prototype TPD+ compound for more complete in vivo characterization, including efficacy testing in the PS19 tau Tg mouse model of tauopathy [40]. We reveal that 51657 provided benefit in PS19 mice, including increased MT density, reduced axonal dystrophy and a significant reduction of brain tau pathology, features that closely resemble the salutary effects previously obtained with EpoD. These data demonstrate that a MT-stabilizing agent that interacts with MTs at a site distinct from the taxane/epothilone binding site can provide benefits in a neurodegenerative disease model that are comparable to those observed with taxane-site binders. Thus, TPD+ molecules hold promise as potential therapeutic agents for AD and other neurodegenerative diseases.
Compound synthesis
The synthesis of 51657 was conducted at the 0.5 g scale following procedures described previously [35]. The spectroscopic properties of the compound were identical to those reported in the literature. In addition, single crystal x-ray diffraction analysis of the final compound was conducted (see Supplemental Information).
ADR-RES cytotoxicity assay
ADR-RES cells (NCI) were maintained in RPMI medium (Mediatech) containing 10% FBS, 2 mM L-glutamine, and 1% penicillin/streptomycin (complete RPMI) at 37°C in 5% CO 2 . For compound testing, cells were dissociated with trypsin/EDTA and plated at a density of 3000 cells/ well in black 96-well clear-bottom plates (Perkin-Elmer) in 0.1 ml of complete RPMI medium, followed 24 h later by the addition of paclitaxel or 51657 diluted from 10 mM DMSO stock solutions that were diluted into complete RPMI medium (0.1 ml total added to existing medium; final compound concentration of 1 μM). In addition, wells were also treated with 0.1 ml of vehicle alone (final concentration of 0.01% DMSO on cells). Cells were maintained at 37°C in 5% CO 2 and at 72 h after compound addition, 20 μl of Alamar Blue cell viability reagent (Invitrogen) was added to the wells and allowed to incubate for 4 h at 37°C in 5% CO 2 followed by measurement in a SpectraMax M5 plate reader with excitation of 550 nm and emission of 590 nm with a cutoff of 570 nm. A set of vehicle-treated wells were treated with digitonin (final concentration of 0.5%) at the time of compound addition to kill cells and elicit the minimal Alamar Blue signal. The percent cell viability was calculated as 100 × (Test-Digitonin)/(Vehicle-Digitonin).
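For clarity, the viability formula above can be expressed directly in code; the following Python snippet (with made-up fluorescence readings, not data from this study) applies 100 × (Test − Digitonin)/(Vehicle − Digitonin) to mean well signals.

def percent_viability(test, vehicle, digitonin):
    """Percent viability: 100 * (Test - Digitonin) / (Vehicle - Digitonin),
    where digitonin-treated wells define the minimal Alamar Blue signal."""
    return 100.0 * (test - digitonin) / (vehicle - digitonin)

# Hypothetical mean fluorescence readings (ex 550 nm / em 590 nm)
print(percent_viability(test=1800.0, vehicle=5200.0, digitonin=900.0))  # ~20.9% viable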
Microsomal metabolism of 51657
Pooled human and mouse liver microsomes (Corning Life Sciences) were utilized at a concentration of 1 mg/ml with a NADPH regenerating system as per vendor instructions. Compound (51657) was added at 1 μM in the absence or presence of CYP450 inhibitors, and aliquots of the reaction mixture were removed at 10 min intervals for 60 min. Acetonitrile was added to the sampled reactions at 3:1 (v/v) and the mixtures were vortexed and centrifuged, with the supernatant subjected to LC-MS/MS analysis as previously described [35].
Mouse studies
All methods utilizing mice were first submitted and approved by the University of Pennsylvania Institutional Animal Care and Use Committee (IACUC).
Analysis of plasma and brain compound concentrations
Test compound was administered to 2-4 month old CD-1 or B6SJL mice, with both female and male mice utilized but sexes were not mixed within experimental groups. For standard single time-point brain and plasma determinations, groups of mice (n = 3) were injected intraperitoneally (i.p.) with a single dose of 5 mg/kg compound dissolved in DMSO. For pharmacokinetic analysis, groups of mice (n = 3) were sacrificed at various times points after i.p. dosing of 5 mg/kg of compound. Whole brain hemispheres were homogenized in 10 mM ammonium acetate, pH 5.7 (50%, w/v), using a hand-held sonic homogenizer. Plasma was obtained from blood collected in 0.5 M EDTA solution and centrifuged for 10 min at 4,500 × g at 4°C.
The analysis of compound concentrations in plasma and brain homogenates was as previously described [35].
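The cited bioanalytical method is not reproduced here, but a terminal half-life of the kind reported below for 51657 can in principle be estimated from such time-course data by log-linear regression of the terminal concentrations. The sketch below is a minimal Python illustration using entirely hypothetical values; it is not the analysis pipeline used in the study.

import numpy as np

def terminal_half_life(times_h, concs):
    """Terminal elimination half-life (h) from a log-linear fit of the terminal phase,
    assuming first-order elimination: ln(C) = ln(C0) - k*t, so t1/2 = ln(2)/k."""
    slope, _ = np.polyfit(np.asarray(times_h, float), np.log(np.asarray(concs, float)), 1)
    return np.log(2.0) / -slope

# Hypothetical terminal-phase brain concentrations (ng/g) after a single i.p. dose
print(terminal_half_life([1, 2, 4], [800.0, 420.0, 110.0]))  # roughly 1 h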
Brain AcTub determinations
CD-1 female mice (n = 3; 2-3 months of age) received three i.p. injections of 10 mg/kg of 51657 spaced 72 h apart. After 72 h following the third injection, mice were euthanized by an IACUC-approved protocol and cortices were dissected from each brain and placed immediately in ice-cold RIPA buffer (50 mM Tris, 150 mM NaCl, 5 mM EDTA, 0.5% sodium deoxycholate, 1% NP-40, 0.1% SDS, pH 8.0) containing protease-inhibitor cocktail (Sigma Aldrich), 1 mM phenylmethylsulfonyl fluoride (PMSF) (Sigma Aldrich), and 3 μM trichostatin A (Sigma Aldrich). Tissue was homogenized with a hand-held battery-operated pestle motor mixer and then sonicated to complete the lysis. Samples were centrifuged at 100,000 × g for 30 min at 4°C and supernatant was transferred to a new Eppendorf tube. Remaining pellets were re-suspended in RIPA buffer and homogenized, sonicated, and centrifuged again, as before. Supernatant from the second centrifugation step was pooled with that from the first spin. Samples were assessed for protein concentration by bicinchoninic acid (BCA) assay (Thermo Fisher Scientific) and enzyme-linked immunosorbent assay (ELISA) analysis of acetyl- and alpha-tubulin levels was performed, as previously described [35,41].
51657 Treatment of PS19 Tg mice
PS19 mice [40] express a transgene encoding the human T34 tau isoform (1N4R) containing the P301S mutation found in inherited FTLD-tau [42]. Groups of 9-month old female PS19 mice (B6C3/F1 background as described [40]) were administered twice-weekly i.p. injections of 3 mg/kg or 10 mg/kg of 51657 at a volume of 2 μl/g body weight, or vehicle only (9% DMSO/91% corn oil), for a total of 12 weeks. An additional group of age-matched non-transgenic female littermates were treated with vehicle as above. Mice entered into the study in 4 separate cohorts spaced over 4 months, with each cohort having all groups represented such that the final group size of all treatment arms reached n = 12. All mice were monitored for signs of abnormal behavior or distress, and were weighed weekly to monitor body weight. After 11 weeks of dosing, the mice from 3 of 4 study cohorts underwent Barnes maze testing as described below. After sacrifice by an IACUC-approved protocol, blood was collected from 3 of 4 study cohorts for complete blood cell counts, as described [25]. Similarly, the optic nerve (ON), which harbors tau pathology together with retinal ganglia cells in these mice [25], was recovered from 3 of 4 study cohorts for transmission electron microscopy (EM) analysis of axonal dystrophy and MT density. Brains were collected from all study mice for biochemical and immunohistochemical analyses, and organ weights were recorded to assess compound tolerability.
Body weights, organ weights and complete blood cell counts
Study mice were weighed once-weekly during the course of the dosing period. Upon sacrifice and perfusion, key organs were collected and weights determined. Blood samples from a subset of the study WT and PS19 mice, as indicated in the figure legend, were sent to an outside vendor (7th Wave Laboratories, St. Louis, MO) for complete blood cell analyses.
ON axonal dystrophy and MT density analyses
EM was performed on cross sections of ON from vehicle-or 51657-treated WT and PS19 mice to assess MT density and axonal dystrophy, as previously described [27,32].
Immunoblot analysis of insoluble brain tau
Combined cortex and hippocampus samples (~40-50 mg) from frozen hemispheres of vehicle- and 51657-treated PS19 mice were homogenized in 0.2 ml of RAB high salt buffer (0.1 M MES, 1 mM EGTA, 0.5 mM MgSO4, 0.75 M NaCl, 0.02 M NaF, pH 7.0), and the homogenates were centrifuged at 100,000 × g for 30 min at 4°C. The resulting pellet was resuspended in 0.2 ml RAB buffer and centrifuged as above, followed by another resuspension in 0.3 ml of RAB buffer followed by centrifugation. The remaining pellet was resuspended in 0.2 ml of RIPA buffer (50 mM Tris, 150 mM NaCl, 0.1% SDS, 0.5% sodium deoxycholate, 1% NP40 and 5 mM EDTA), followed by centrifugation as above. This pellet was resuspended in 0.1 ml of 2% SDS and sonicated, followed by centrifugation at 100,000 × g for 30 min at 22°C. The SDS pellet was resuspended in 0.1 ml of 2% SDS followed by sonication and centrifugation, and the resulting supernatant was combined with the first SDS supernatant. This combined SDS supernatant fraction was utilized for SDS-PAGE analysis and immunoblotting as previously described [35], using a rabbit polyclonal antibody recognizing total tau (17205 [43]; developed in-house, RRID:AB_2315435, used at 1:2000 dilution of sera in Li-Cor blocking buffer), a rabbit polyclonal antibody recognizing tau containing an acetyl modification at lysine residue 280 (TauAcK280; developed in-house [44], used at 1:2000 dilution of sera in Li-Cor blocking buffer) or the AT8 monoclonal antibody (ThermoFisher) that recognizes tau that is phosphorylated at serine residue 202 and/or threonine residue 205 (1:2000 dilution of sera in Li-Cor blocking buffer). Immunoblots were imaged using an Odyssey IR imaging system (Li-Cor) and relative protein amounts were quantified from the immunoblots using ImageStudio software (Li-Cor). Because the majority of the total protein, including housekeeping proteins, are extracted into the RAB-soluble fraction, the RAB-insoluble, SDS-soluble brain samples from each PS19 mouse were loaded onto SDS-PAGE gels at equal protein amounts based on the corresponding RAB-soluble protein concentration. More specifically, RAB-insoluble samples, which were all solubilized in equal volumes of SDS as described above, were prepared for SDS-PAGE such that the amounts loaded corresponded to 0.25 mg/ml of the RAB-soluble fraction. For example, if the RAB-soluble protein concentration was 2 mg/ml, the RAB-insoluble sample was diluted 8-fold for SDS-PAGE analysis. To provide further normalization accuracy, corresponding samples of the RAB-soluble fractions diluted to 0.25 mg/ml were loaded onto separate SDS-PAGE gels and blotted for the housekeeping protein, GAPDH. The immunoblot densitometric values for the tau species from the RAB-insoluble samples for each mouse were then normalized to the corresponding GAPDH densitometric value from the RAB-soluble sample (see Additional file 1: Figure S6 for representative blot images). All samples for immunoblot analyses were coded so as to mask the sample identification throughout the immunoblot procedure, including during densitometric quantification of the tau and GAPDH bands.
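The loading normalisation described above amounts to two arithmetic steps, illustrated by this Python sketch (all values hypothetical): the SDS-solubilised RAB-insoluble sample is diluted in proportion to its matching RAB-soluble protein concentration relative to 0.25 mg/ml, and each insoluble tau band is then normalised to the GAPDH signal of the corresponding RAB-soluble sample.

TARGET_MG_PER_ML = 0.25  # RAB-soluble concentration that the loads are matched to

def insoluble_dilution_factor(rab_soluble_mg_per_ml):
    """Fold-dilution of the SDS-solubilised RAB-insoluble sample
    (e.g. a 2 mg/ml RAB-soluble fraction implies an 8-fold dilution)."""
    return rab_soluble_mg_per_ml / TARGET_MG_PER_ML

def normalised_tau(tau_band, gapdh_band):
    """Insoluble tau densitometry normalised to the matching RAB-soluble GAPDH signal."""
    return tau_band / gapdh_band

print(insoluble_dilution_factor(2.0))        # 8.0
print(normalised_tau(15400.0, 22000.0))      # hypothetical densitometric units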
Tau ELISA
The RAB-soluble fractions from brain homogenates of vehicle-and 51657-treated PS19 mice (see above) were assessed for total tau utilizing a sandwich ELISA essentially as previously described [27], with volumes of the samples adjusted based on total protein content in the RAB-soluble fraction as determined by BCA assay.
AT8 immunohistochemistry
Study mice were perfused with PBS (20 ml) after being deeply anesthetized using a protocol approved by the University of Pennsylvania IACUC. The brains were subsequently removed and one hemisphere from each mouse was processed as previously described [25,27], with 6 μm thick paraffin-embedded sections prepared and stained with the AT8 antibody (1:2000 dilution) that recognizes tau phosphorylated at S202/T205 [45]. Immunostained sections that were masked to treatment group were imaged using a 4× microscopic objective. For analysis of hippocampal neurons, 3 matched brain sections (Bregma: − 2.20 to − 2.80) from vehicle-and 51657-treated PS19 mice were manually annotated around the entire hippocampus and entorhinal cortex using HALO (Indica Labs, Corrales, NM) software.
Sections representing average AT8 staining intensity were thresholded to allow quantification of tau pathology in the hippocampal and cortical sections without contribution of background staining, and a common threshold was then applied to all sections. Quantification was conducted with the HALO software. The area of tau pathology within each annotated region was determined, and this was summed across the three individual sections from each mouse and divided by the sum of the total annotated area from the three sections to get the total % area with tau pathology. This value was multiplied by the average optical density (OD) of the tau pathology to yield the final "normalized AT8 area x OD", and the sum of these values from the hippocampal and cortical assessments are reported.
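The "normalized AT8 area x OD" metric can be written as a single function; the Python sketch below (hypothetical per-section values) sums pathology area across the three sections, divides by the summed annotated area, and multiplies the resulting percentage by the mean optical density, with the hippocampal and cortical values then added as described.

def normalised_at8(path_areas, annotated_areas, mean_od):
    """'Normalized AT8 area x OD' for one annotated region of one mouse.
    path_areas / annotated_areas: per-section AT8-positive and total annotated
    areas (same units) for the three bregma-matched sections."""
    pct_area = 100.0 * sum(path_areas) / sum(annotated_areas)
    return pct_area * mean_od

# Hypothetical per-section values (mm^2) for one PS19 mouse
hippocampus = normalised_at8([0.12, 0.08, 0.15], [5.1, 4.8, 5.3], mean_od=0.42)
cortex = normalised_at8([0.05, 0.09, 0.07], [7.2, 6.9, 7.4], mean_od=0.38)
print(hippocampus + cortex)  # reported value = sum of the two regional assessments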
NeuN immunohistochemistry
Quantification of CA3 neurons was performed using NeuN antibody to label neuronal nuclei [46]. Staining was performed as noted above for AT8 staining, using a mouse anti-NeuN antibody (Millipore; 1:500). Two bregma levels (Bregma: − 1.82 and − 1.94) containing the CA3 region of the hippocampus were used for analyses. Slides were blinded and scanned using a Perkins Elmer Lamina slide scanner. ImageJ (NIH) was used for NeuN image analysis and quantification. Briefly, RGB TIFF images were converted to 8-bit images and then inverted. Max entropy auto-thresholding was used on all images and the CA3 region of the hippocampus was annotated manually using morphological landmarks in the mouse brain. Percent NeuN-positive area was then used as a readout for neuronal density, with the data decoded and compiled by an independent investigator.
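As a hedged illustration of this percent-area readout, the sketch below computes the fraction of above-threshold pixels inside an annotated mask on a synthetic image; Otsu thresholding is used here only as a stand-in for ImageJ's max-entropy method, so it is not a re-implementation of the exact analysis.

import numpy as np
from skimage.filters import threshold_otsu  # stand-in for ImageJ's max-entropy threshold

def percent_positive_area(image_8bit, roi_mask):
    """Percent of the annotated ROI whose inverted staining signal exceeds threshold."""
    inverted = 255 - image_8bit                   # dark NeuN staining becomes bright
    thresh = threshold_otsu(inverted[roi_mask])   # threshold computed within the ROI
    positive = (inverted > thresh) & roi_mask
    return 100.0 * positive.sum() / roi_mask.sum()

# Synthetic example: a 100x100 image with a darker "stained" patch inside the ROI
rng = np.random.default_rng(0)
img = rng.integers(180, 220, size=(100, 100)).astype(np.uint8)
img[40:60, 40:60] = 60                            # heavily stained region
roi = np.zeros((100, 100), dtype=bool)
roi[30:70, 30:70] = True
print(percent_positive_area(img, roi))            # ~25% positive area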
Barnes maze analyses
Barnes Maze testing was performed as previously described [47] by an experimenter blinded to the treatment groups. Briefly, mice were handled for 3 days prior to Barnes Maze testing to get accustomed to the experimenter. Mice were habituated to the testing room for 30 min prior to testing each day. Mice were then placed in the center of Barnes Maze (San Diego Instruments, White 7001-0235) in the starting cylinder for 30 s. The starting cylinder was then removed and mice were allowed to explore the Barnes Maze for 2.5 min. If the mouse did not find the target box, the mouse was gently guided into the target box. The mice were allowed to remain in the target box for 1 min before returning them to their home cage. Two trials per mouse were performed each day with a 15-min inter-trial interval. Mice were tested for 4 consecutive days. The percent success was determined based on the mouse's first encounter (≥ 2 s) with the target box, termed primary success. Primary measures were used because some mice would successfully locate the target box yet continue to explore the maze, a behavior which has been reported previously [48].
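One way to express the primary-success readout in code, under the reading that a trial counts as a success when the first encounter with the target box lasted at least 2 s, is sketched below with invented trial records.

def primary_success_rate(first_encounter_durations_s):
    """Percent of trials scored as a primary success (first target-box encounter >= 2 s).
    None means the mouse never encountered the target box within the 2.5 min trial."""
    successes = sum(1 for d in first_encounter_durations_s if d is not None and d >= 2.0)
    return 100.0 * successes / len(first_encounter_durations_s)

# Invented day-1 records, one entry per trial across a treatment group
print(primary_success_rate([3.1, 0.8, None, 2.4, 5.0, 1.2, 2.2, None]))  # 50.0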
Statistics
GraphPad Prism 7 was utilized for all statistical analyses. Comparisons between treatment groups consisted of unpaired t-tests when comparing two groups, or one-way ANOVA analyses with Tukey post-hoc analysis to compare between groups when comparing more than two groups. Grubb's tests (GraphPad QuickCalc) were applied to the data to query for extreme outliers, and when found (as noted in figure legends) these outliers were removed from the data analysis.
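The same pipeline — screening for a single extreme outlier with Grubbs' test, then one-way ANOVA with Tukey's post-hoc comparisons — can be reproduced with SciPy and statsmodels, as in the Python sketch below; the group values are made up, and GraphPad Prism, not this code, was used for the actual analyses.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def grubbs_outlier(x, alpha=0.05):
    """Return the index of a single extreme outlier by the two-sided Grubbs test, or None."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.abs(x - x.mean()).max() / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(np.abs(x - x.mean()).argmax()) if g > g_crit else None

# Hypothetical per-mouse values for three groups
veh = [18.2, 17.5, 16.9, 18.8, 17.1]
lo  = [21.0, 22.3, 20.5, 21.8, 22.9]
hi  = [21.5, 20.9, 22.7, 21.2, 35.0]      # last value is an obvious outlier

print(grubbs_outlier(hi))                  # 4
f, p = stats.f_oneway(veh, lo, hi[:-1])    # ANOVA after removing the flagged outlier
print(f, p)
values = veh + lo + hi[:-1]
groups = ["veh"] * 5 + ["3mg"] * 5 + ["10mg"] * 4
print(pairwise_tukeyhsd(values, groups))   # Tukey post-hoc comparisons between groups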
Pharmacokinetic (PK) and Pharmacodynamic properties of CNDR-51657
Among the TPD+ compounds, 51657 (Fig. 1a) was chosen as a prototype for full in vivo characterization. Prior analyses demonstrated that 51657 is orally bioavailable and has excellent brain penetration, with a brain-to-plasma (B/P) exposure ratio of~2.7 at 1 h after i.p. dosing [35]. A more complete PK analysis of 51657 in WT mice confirmed that total brain exposure exceeded that in plasma, with terminal brain and plasma T 1/2 values of~1.0-1.5 h (Fig. 1a). Although the T 1/2 of 51657 is somewhat short, we had previously demonstrated that a single 1 mg/kg i.p. dose increased WT mouse brain AcTub one day after administration, indicating target engagement and increased MT stability [35]. In further analyses, we found that a 10 mg/kg dose of 51657 administered once every 3-4 days over 7 days to WT mice resulted in elevated brain AcTub that persisted for 72 h after the final dosing (Fig. 1b). Given that the compound is eliminated from the brain relatively quickly, this prolonged MT activity suggests that brain MTs retain stability for an extended period after drug clearance. Alternatively, an active metabolite of 51657 may be formed with significantly longer brain retention than the parent compound. However, we have investigated the metabolism of 51657 in mouse and human microsomal studies and found that the molecule is metabolized by several CYP450 enzymes to release an inactive N-dealkylated derivative (Additional file 1: Figure S1A & B). An examination of mouse plasma and brain homogenates confirms the generation of high levels of this inactive metabolite after 51657 dosing (Additional file 1: Figure S1C), suggesting that the extended MT stabilization observed after 51657 dosing was not likely due to the formation of an active metabolite. The prolonged MT-stabilizing effect of 51657 could also result from an irreversible MT interaction. However, the crystal structure of a structurally-related TPD molecule bound to a MT [38], which we have determined is a TPD+ compound [49], reveals no evidence of a covalent interaction, suggesting it is unlikely that 51657 binds covalently to MTs. Finally, it is possible that a small fraction of 51657 remains non-covalently bound to brain MTs after the majority of drug clears from the brain, and that this amount is sufficient to impart increased AcTub. Regardless of the exact mechanism of this extended MT-stabilizing effect, our observations suggest that a long CNS residence time may not be required for meaningful MT stabilization, as was previously postulated for the MT-stabilizing agents EpoD and dictyostatin, both of which had very long brain T 1/2 values [25,50]. It has been reported that TPD molecules that are structurally related to 51657 are cytotoxic to Pgpexpressing cancer cell lines [38,51], indicating that they are not Pgp substrates. This would differentiate such TPDs from paclitaxel and related MT-directed taxanes, as well as many other cancer drugs, which are ineffective against Pgp-expressing cells. We previously demonstrated that 51657 is not a competitive Pgp inhibitor [35], which indicates it does not interact with Pgp. To further verify an absence of Pgp binding, we examined the ability of 51657 to inhibit proliferation of Pgpexpressing ADR-RES cells [52] and compared its activity to paclitaxel, which is a Pgp substrate. As expected, 51657 (1 μM) promoted a significantly greater cytotoxicity than did the same concentration of paclitaxel (Additional file 1: Figure S2). 
This concentration of paclitaxel is at least two orders of magnitude greater than that required for cytotoxic activity in cell lines not expressing Pgp [53], and the greater effect of 51657 reveals that it is an effective cytotoxic agent for Pgp-expressing cells. Given the excellent brain exposure of 51657, these data would suggest that TPD+ compounds of this type might hold promise for the treatment of brain cancers such as astrocytomas or glioblastomas, as many anti-cancer agents have poor brain penetration due to Pgp efflux at the blood-brain barrier, and there is also evidence of Pgpmediated drug resistance in some glioblastomas [54]. However, it is important to note that 51657 was present continuously in the cell culture medium in these cytotoxicity studies, and given the short plasma and brain half-life observed after dosing in mice, it is likely that frequent dosing of this compound would be required to elicit a meaningful anti-mitotic effect. In fact, as discussed further below, we observed no signs of cytotoxicity or anti-mitotic activity when 51657 was dosed twice-weekly in tau Tg mice, although this dosing schedule provided CNS benefit.
Testing of CNDR-51657 in PS19 tau transgenic mice
Given the ability of 51657 to elicit a prolonged increase of AcTub in the WT mouse brain, we subsequently evaluated the compound for efficacy in the PS19 tau transgenic mouse model, which expresses human 1N4R tau harboring the P301S mutation found in inherited FTLD-tau [40]. We previously utilized this mouse model in both prevention and intervention studies to demonstrate the efficacy of EpoD [25,27] and dictyostatin [32]. In these prior studies, only male PS19 mice were utilized because they develop tau pathology more rapidly than female PS19 mice and mixing of age-matched PS19 mice of both sexes results in unacceptably high variability in the amount of tau pathology that can mask treatment effects. Unpublished work from our laboratories revealed that female PS19 mice will also develop appreciable tau pathology, albeit with a 5-6-month delay relative to male PS19 mice. Moreover, young (2-3 month) female PS19 mice can develop tau pathology to an extent comparable to that observed in age-matched male PS19 mice when synthetic tau fibril "seeds" are introduced into the brain to initiate the formation of tau pathology [55,56]. Because female mice can be group-housed to reduce study costs, we opted to examine 51657 in aged female PS19 mice. Groups of 9-month old female PS19 mice (n = 12/ group) received vehicle, or 3 mg/kg or 10 mg/kg of 51657, twice-weekly (i.p.) for a total duration of 3 months. In addition, a group of age-matched non-transgenic female littermates received twice-weekly administration of vehicle. We anticipated that the 9-month old female PS19 mice would be roughly comparable to 3-4-month old male PS19 mice with regard to the extent of brain tau pathology, with the latter showing a low but detectable tau inclusion burden [25,56]. Thus, the study was designed to be a secondary prevention assessment of 51657 efficacy, similar to our prior study in which EpoD was shown to have beneficial effects when dosed in male PS19 mice from 3 to 6 months of age [25]. As in our prior prevention study with EpoD in PS19 mice [25], we examined MT density and axonal dystrophy in the ON, as well as brain tau pathology and cognitive performance. In addition, complete blood cell counts were obtained to determine whether changes in mitotic blood cells were observed upon prolonged treatment with 51657. We were particularly interested in determining whether compound treatment affected neutrophils, as neutropenia is a primary dose-limiting side-effect observed with MT-stabilizing drugs in cancer patients [57,58].
The 3-month treatment of PS19 mice with 51657 appeared to be well-tolerated at both doses, as there were no significant changes in body weight between the vehicle- and 51657-treated PS19 mice. None of the treatment groups showed meaningful body weight loss, and PS19 mice receiving 51657 showed somewhat less body weight loss than did the vehicle-treated PS19 mice, although this difference did not reach statistical significance (Additional file 1: Figure S3). Similarly, there were no differences in organ weights when normalized to body weight between the vehicle- and 51657-treated mice (Additional file 1: Figure S4). Importantly, there was no evidence of compound-mediated changes in blood cells, as total white blood cells (Fig. 2a), red blood cells (Fig. 2b) and neutrophils (Fig. 2c) were unchanged in 51657-treated mice relative to either PS19 or WT mice receiving vehicle only.
To assess whether 51657 improved MT density in the treated PS19 mice, as previously observed with EpoD [25,27] and dictyostatin [32], ON segments were removed from the study mice after perfusion and sacrifice, and were fixed to allow for EM analysis of MTs in cross-sectional images via blinded quantification. As previously observed in 6-month old male PS19 mice [25], 12-month old vehicle-treated female PS19 mice showed reduced ON MT density relative to age-matched vehicle-treated non-transgenic littermates ( Fig. 3a and c). Notably, the PS19 mice receiving either 3 mg/kg or 10 mg/kg of 51657 had a significant increase in MT density that reached the value observed in the WT mice (Fig. 3a). Thus, the twice-weekly dosing scheme with 51657 had the desired effect of abrogating the MT deficit observed in the PS19 mice, with the magnitude of MT enhancement being similar to that previously observed with EpoD [25]. In prior studies with PS19 mice, a reduction of ON MTs coincided with a significant increase in ON axonal dystrophy, with an abundance of swollen and demyelinated axons observed upon EM analysis [25,27,32]. An increase in dystrophic axons was also observed in the 12-month old female PS19 mice from the current study, and both doses of 51657 led to a dramatic lowering of ON axonal dystrophy to the level observed in the vehicle-treated WT mice (Fig. 3b and d). These results provide further evidence of 51657 having the desired effect of increasing CNS MTs and improving axonal integrity and function.
One hemisphere from each brain of the study mice was flash frozen for biochemical measurement of insoluble tau pathology, and the other hemisphere was fixed for immunohistochemical (IHC) evaluation. In our prior prevention study of EpoD in young male PS19 mice, we observed a modest amount of AT8 (pS202/pT205)-positive tau pathology in 6-month old male PS19 mice upon IHC assessment, with a non-significant trend toward reduced pathology in the EpoD-treated mice [25]. A significant reduction of tau pathology was seen in an intervention study in older PS19 mice with greater tau pathology [27]. An examination of AT8-positive tau in the 12-month old female PS19 mice via IHC analysis revealed somewhat less tau pathology than previously observed in 6-month old male PS19 mice, with the amount of AT8 staining being low-to-moderate in vehicle-treated female PS19 mice (Fig. 4a) with significant mouse-to-mouse variability, as previously observed with male PS19 mice [25,27]. Nonetheless, we attempted to quantify the AT8-positive staining, with analysis of the hippocampus and entorhinal cortex where the majority of tau pathology was observed (Fig. 4a). A blinded assessment of three Bregma-matched sections from each study mouse revealed a non-significant trend toward a reduction of combined cortical and hippocampal AT8-positive staining in the 51657-treated mice (Fig. 4b) that resembled the results previously observed in the prevention study with EpoD in PS19 mice. As previously observed in 6-month old male PS19 mice [25], NeuN staining of neurons revealed no evidence of hippocampal CA3 neuron loss in the female PS19 mice of this study (Additional file 1: Figure S5), a brain region where significant neuron loss is observed in PS19 mice with greater pathology [27]. This is consistent with the relatively modest level of tau pathology observed in these mice.
Given the generally low amount of tau pathology observed by AT8-staining in the 12-month old female PS19 mice, a degree of regional variability in the location of AT8-positive tau and the semi-quantitative nature of IHC measurements, we also conducted biochemical assessments of tau pathology since such analyses are generally more quantitative than IHC. The entire cortex and hippocampus from frozen brain hemispheres from each PS19 mouse were subjected to sequential extraction, with homogenization first in high-salt buffer followed by centrifugation, with subsequent extraction of the pellet in RIPA buffer with centrifugation. The remaining high salt- and RIPA-insoluble pellet fraction was solubilized in SDS and analyzed by immunoblotting to determine the amount of total tau, AT8-positive phosphorylated tau, and K280-acetylated tau [44] in the buffer-insoluble fraction.
Fig. 3 PS19 mice treated with 51657 had significantly increased ON MT density and reduced axonal dystrophy. ON sections from vehicle-treated WT mice (n = 10) and PS19 mice treated with vehicle (n = 9) or 3 mg/kg (n = 8) or 10 mg/kg (n = 10) of 51657 were imaged by EM, and the number of MTs and dystrophic axons within treatment-masked images were counted as previously described [25]. a Quantification of MT density in ON sections demonstrates that vehicle-treated PS19 mice have a MT deficit relative to vehicle-treated WT mice, and treatment of PS19 mice with 3 mg/kg or 10 mg/kg of 51657 increases MT density to a level comparable to that of WT mice. b Quantification of ON EM images reveals a significant reduction in axonal dystrophy in PS19 mice receiving either 3 mg/kg or 10 mg/kg of 51657 compared to vehicle-treated PS19 mice. After quantification, a Grubb's test determined there was an extreme outlier within the 10 mg/kg 51657 group that was not used for quantification. Analyses consisted of a one-way ANOVA with Tukey's post-hoc analysis of between-group differences. Error bars represent SEM. c Representative ON images from a vehicle-treated WT and PS19 mouse, with example MTs indicated by arrows. As depicted in the PS19 vehicle image, hexagonal fields of 0.035 μm2 were overlaid on the ON images, with MTs counted within the hexagon and on three of the six borders to avoid repeat counting of MTs on adjacent hexagons (see also [27]). Scale bar represents 0.5 μm. d Representative ON images from a vehicle-treated WT and PS19 mouse, as well as a PS19 mouse that received a twice-weekly dose of 10 mg/kg of 51657. Vehicle-treated PS19 mice have greater axonal dystrophy, as evidenced by fewer intact axons and more axons that are demyelinated or debris-filled, than vehicle-treated WT mice. ONs of PS19 mice treated with 51657 more closely resembled those of vehicle-treated WT mice. Scale bar = 2 μm

Both AT8 and acetyl-K280 tau have been shown to be enriched in pathological tau, with the latter appearing in more mature tau inclusions [59,60]. The samples were blinded prior to gel loading and quantification, and the results revealed that the 3 mg/kg dose of 51657 led to a significant reduction of insoluble total tau (Fig. 5a), AT8-positive tau (Fig. 5b) and acetyl-K280-positive tau (Fig. 5c and Additional file 1: Figure S6). The 10 mg/kg dose of 51657 caused a significant reduction of insoluble acetyl-K280 tau (Fig. 5c and Additional file 1: Figure S6), although the observed decrease in insoluble AT8-positive tau and total insoluble tau did not reach statistical significance (Fig. 5a and b). A comparison of the amount of insoluble AcTau in 9-month old female PS19 mice (i.e., start of treatment) to that within 12-month old female PS19 mice suggests that there is roughly a doubling of the amount of mature pathological tau over the 3-month treatment period (Additional file 1: Figure S7), and thus an ~50% reduction of tau pathology would be expected if 51657 treatment led to a cessation of further pathology development.
This is approximately the effect size observed in the 51657-treated PS19 mice (Fig. 5). As the amount of insoluble AT8-positive and total insoluble tau did not differ significantly between the 3 mg/kg and 10 mg/kg 51657 treatment groups, we cannot conclude that the higher dose was less effective than the lower dose, particularly since both doses significantly improved MT density and reduced axonal dystrophy. The reductions in insoluble tau species in the PS19 mice receiving 51657 were not due to an overall reduction in tau protein expression, as soluble tau levels were not significantly different in the vehicle- and 51657-treated PS19 mice (Fig. 5d). An assessment of GFAP-positive astrocytes and Iba1-positive microglia did not reveal noticeable differences in cell density or morphology in any PS19 mouse treatment group (Additional file 1: Figure S8), suggesting that the reduction of tau pathology in the 51657-treated mice was not due to compound-induced effects on glia. In summary, these data reveal that 51657 treatment led to a reduction of insoluble tau pathology in the PS19 mice, as previously demonstrated for EpoD in an interventional study [27] and for which there was a trend toward reduction upon EpoD treatment in a prior prevention study [25]. These changes in tau pathology are believed to result directly from compound-mediated effects on MTs.

Fig. 4 IHC staining and quantification of tau pathology. Three bregma-matched brain sections from each PS19 mouse receiving vehicle (n = 12), or 3 mg/kg (n = 11) or 10 mg/kg (n = 12) of 51657, were stained with AT8 antibody to visualize tau pathology. a Representative images from a vehicle-treated PS19 mouse with an average amount of tau pathology (Bregma − 2.5). Regions of stained sections encompassing the hippocampus and entorhinal cortex were imaged and a fixed threshold was applied to distinguish AT8-positive staining from background (images on right; yellow, low AT8 signal; orange, moderate AT8 signal; red, high AT8 signal), followed by quantification of AT8-positive tau pathology. b A plot of the combined AT8-positive pathology from the entorhinal cortex and hippocampus from PS19 mice in each treatment group reveals a trend toward reduced tau pathology in the 51657-treated mice
We have previously observed mild cognitive deficits in male PS19 mice as young as 6 months of age [25]. Thus, the female PS19 mice and littermate controls within this study underwent testing in the Barnes maze shortly after receiving their last vehicle or compound administration. As summarized in Fig. 6, there was a trend towards impaired performance by the vehicle-treated female PS19 mice relative to vehicle-treated non-transgenic littermates during the first two days of testing, as measured by their success in identifying an escape compartment, with the 51657-treated PS19 mice performing somewhat better than the vehicle PS19 group on these days. However, group variability was relatively large and these differences did not reach statistical significance. All treatment groups showed nearly complete learning by days 3 and 4 of testing, which in light of the modest amount of tau pathology and absence of neuron loss in the 12-month old female PS19 is not surprising. Thus, the totality of study data reveal that treatment of female PS19 mice with 51657 from 9-to 12-months of age in a secondary prevention model led to significantly improved MT density and reduced axonal dystrophy, with a resulting reduction of tau pathology and a trend toward improved cognitive performance.
Discussion
The concept of utilizing MT-stabilizing agents to treat tauopathies has been discussed for some time [31], and our studies and those from others over the past several years have demonstrated the potential of this therapeutic strategy in tauopathy mouse models [24-27, 32, 61, 62], as well as in other tau model systems [63,64]. Among traditional small molecule MT-stabilizing compounds, both EpoD and the abeotaxane, TPI-287, have progressed to Phase 1b clinical testing, where each was examined in short 2-3 month studies in AD and/or tauopathy patients. Both of these drug candidates appeared to be well tolerated at the tested doses [65]. Interestingly, TPI-287 treatment was reported to result in a significant improvement in AD patient MMSE scores relative to the placebo group after 12 weeks of drug dosing (www.corticebiosciences.com; 11/03/17 press release). However, given the very short duration of these Phase 1b trials and the small number of patients evaluated, we believe caution should be exercised in drawing either positive or negative inferences about the potential of MT-stabilizing agents in neurodegenerative disease from these studies, particularly since diseasemodifying trials for AD are typically at least 18-24 months in length. It is unclear whether either BMS-241027/EpoD or TPI-287 will advance into larger clinical studies of longer duration. Given this and the fact that both of these natural product-derived molecules bind the taxane-site on MTs, we have further investigated the TPD series of small molecule MT-stabilizing molecules which bind to a distinct region on MTs [38], with the goal of identifying alternative and potentially improved candidates for development as disease-modifying drugs for AD and other neurodegenerative diseases. Such molecules have potential advantages over the existing classes of MT-stabilizing natural products, especially in terms of drug-like physicochemical properties and synthetic accessibility. As previously detailed [35], we discovered that the TPDs could be broadly categorized into two distinct groups; those that elicit an undesirable bell-shaped concentration-response profile in in vitro models with an induction of proteasome-mediated tubulin degradation, and a smaller set referred to as TPD+ compounds that are generally characterized by the absence of a para alkoxy side-chain on the phenyl group and which exhibit the desired properties of increasing MT stability and MT mass in cellular models [35].
The in vivo characterization of various TPD+ examples revealed that nearly all have excellent brain exposure [35], and we selected 51657 as a prototype for more complete in vivo testing. Although 51657 was found to have a relatively short plasma and brain half-life, the compound caused a significant increase in brain AcTub that could be observed up to 3 days after cessation of dosing. This suggests that MT stabilization persists after most, if not all, of the compound is cleared from the brain. We are unsure of the mechanism of this lasting effect, but perhaps tubulin acetylation or other tubulin post-translational modifications that occur after initial compound-mediated stabilization contribute to prolonged MT activity [39]. Importantly, these data indicate that long brain half-life may not be a necessity for a beneficial MT-stabilizing effect, as was previously suggested based on the long brain retentions of EpoD and dictyostatin [41,50]. Thus, MT-stabilizing agents with shorter brain half-lives, such as 51657, might provide advantages over the previously examined natural products in that they would still allow for relatively infrequent dosing but with a reduced risk of compound accumulation in the brain and other tissues after repeated dosing.
Fig. 6 Barnes maze testing of WT and PS19 mice treated with vehicle or 51657. The vehicle-treated PS19 mice had a modest deficit relative to vehicle-treated WT mice in successfully identifying the escape compartment in the maze during the first two days of testing, and the PS19 mice receiving 51657 showed a non-significant trend toward improvement on days 1 and 2 compared to the vehicle group. Because of the modest amount of tau pathology and absence of overt neuron loss in the 12-month old female PS19 mice, the behavioral deficits were mild and all treatment groups showed nearly 100% performance by the third and fourth days of testing

The ability of 51657 to improve CNS MT density and reduce axonal dystrophy in PS19 mice with twice-weekly dosing further supports the conclusion that a MT-stabilizing compound with a relatively short half-life can be efficacious. The extent of 51657-mediated improvement in MT density in the PS19 tau transgenic mice was comparable to that previously observed with EpoD [25,27], as was the compound-induced reduction in axonal dystrophy. Moreover, 51657 treatment led to a reduction in insoluble pathologic tau in PS19 mice, as has been observed for both EpoD [24,25] and dictyostatin [32]. The observations of MT-stabilizing agents reducing tau pathology in tau Tg mice are interesting and arguably somewhat unexpected, since compounds that improve MT structure/function would not necessarily be expected to affect the accumulation of misfolded tau. However, it has previously been demonstrated that there is a relationship between impaired axonal transport and tau pathology, as genetically crossing mice with defective kinesin-1 function with tau Tg mice resulted in an exacerbation of tau pathology that may have resulted, at least in part, from JNK pathway activation [66,67]. Thus, it might be expected that, conversely, normalization of MT function and axonal transport in PS19 mice with a MT-stabilizing agent would lead to a reduction of tau pathology. Our data reveal that a brain-penetrant MT-stabilizing TPD+ molecule that is readily synthesized can provide meaningful benefit in an established mouse model of tauopathy. Moreover, the lack of effect on dividing blood cells after twice-weekly 51657 administration indicates that axonal MTs can be modulated to provide CNS benefit while avoiding the untoward side-effects observed when high doses of MT-stabilizing drugs are used for the treatment of cancer. Interestingly, the TPD+ molecules described here bind to MTs at a site that is distinct from the binding site of taxanes, epothilones and dictyostatin, with TPD+ molecules interacting at the vinca alkaloid site on MTs [38]. Notably, the binding of vinblastine and vincristine to this site results in MT depolymerization [68], and thus TPD+ compounds are unique in their ability to stabilize MTs through interaction at this site. Moreover, TPD+ molecules appear to promote stability through longitudinal tubulin contacts in MTs, and importantly, do not enhance lateral MT contacts as observed with compounds that bind the taxane site [38]. Thus, 51657 and related TPD+ molecules provide a promising class of brain-penetrant and orally bioavailable MT-stabilizing agents. In addition, 51657 and other TPD molecules generally do not interact with Pgp, unlike EpoD [35] and many taxanes [53,69]. The absence of Pgp binding provides a potential safety benefit, as this transporter prevents xenobiotics from entering the brain and thus inhibitors of Pgp could increase brain exposure to unwanted molecules.
Moreover, the brain and a number of tumors exclude MT-stabilizing drugs through Pgp efflux, and thus higher or more frequent doses of 51657 or other non-Pgp binding TPD molecules might have utility for the treatment of tumors that are resistant to existing MT-directed drugs, particularly brain tumors.
Conclusions
We demonstrate for the first time that a vinca site-binding TPD+ compound (51657) with favorable drug-like properties is capable of increasing CNS MT stabilization in an established mouse model of tauopathy at a relatively low dose administered twice-weekly, with a resulting reduction of axonal dystrophy and tau pathology in the brain. These beneficial effects are comparable to what we and others have found in mouse tauopathy models with MT-stabilizing natural products like paclitaxel [26], EpoD [24,25,27] and dictyostatin [32] that bind the taxane-site on MTs. These results thus reveal that brain-penetrant TPD+ molecules hold considerable promise for the treatment of AD and related tauopathies, as well as possibly additional neurodegenerative disorders [22].
Additional file
Additional file 1: Figure S1. Metabolism of 51657. Figure S2. ADR-Res cells expressing Pgp are more sensitive to 51657 than to paclitaxel. Figure S3. Normalized WT and PS19 mouse body weights over time while receiving twice-weekly dosing of vehicle or 51657 (3 mg/kg or 10 mg/kg). Figure S4. PS19 mouse organ weights were unaffected by 12 weeks of 51657 dosing. Figure S5. Quantification of NeuN-positive neurons within the CA3 region of the hippocampus of 12-month old female WT mice or vehicle-or 51657-treated female PS19 mice. Figure S6. Composite images of the three blots utilized in quantification of insoluble AcTau as shown in manuscript Fig. 5c. Figure S7. A comparison of insoluble AcTau levels in 9-month old and 12-month old female PS19 mice. Figure S8. Representative 40× images of hippocampal dentate region from brain sections of vehicle-or 51657-treated PS19 mice stained to visualize astrocytes and microglia. Table S1. Crystal data and structure refinement for CNDR-51657. Table S2. Atomic coordinates (× 104) and equivalent isotropic displacement parameters (Å2x 103) for CNDR-51657. | 2018-11-08T14:53:32.078Z | 2018-11-07T00:00:00.000 | {
"year": 2018,
"sha1": "1b64378e7929e752fc116641f2dc65d040f70d8c",
"oa_license": "CCBY",
"oa_url": "https://molecularneurodegeneration.biomedcentral.com/track/pdf/10.1186/s13024-018-0291-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61dbcde342050eae97d330b91e301c8af5edb7ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
268059768 | pes2o/s2orc | v3-fos-license | PEGylation of NIR Cd0.3Pb0.7S aqueous quantum dots for stabilization and reduction of nonspecific binding to cells
Cd0.3Pb0.7S (CdPbS) aqueous quantum dots (AQDs) made with 3-mercaptoproprionic acid (MPA) as a ligand have the advantages of emitting near-infrared light, well above 800 nm, that completely circumvents interference from tissue autofluorescence and have significant amounts of ligands for bioconjugation. However, retaining the right amount of MPA became a challenge when using CdPbS AQDs for bioimaging because retaining too much MPA could lead to significant nonspecific staining in cell imaging while insufficient MPA could cause AQDs instability in biological systems. Here we examined PEGylation (i.e. chemically linking amine-functionalized polyethylene glycol (PEG)) to modify MPA on the AQDs surface to improve AQDs stability and reduce nonspecific staining. In addition, for conjugation with antibodies, a bifunctional PEG with a carboxyl functionality was used to permit chemical linkage of a PEG to an antibody on the other end. It was found that performing PEGylation at the thiol concentration where the zeta potential becomes saturated stabilized the CdPbS AQDs suspension and reduced nonspecific binding to cells. Furthermore, with the bifunctional PEG, the CdPbS AQDs were conjugated with antibodies and the AQD-Ab conjugates were shown to stain cancer cells specifically against normal cells with a signal-to-noise ratio of 8.
Introduction
Bioimaging traditionally uses conventional fluorescent dyes such as fluorescein isothiocyanate (FITC) and the Alexa Fluor family that have low photostability [1,2].FITC had a reduction in fluorescence over 20% in just 80 seconds of exposure under a fluorescent microscope, while Alexa Fluor 568 fluorescence decreased about 15% under the same conditions [3].Quantum dots (QDs), fluorescent inorganic semiconductor nanocrystals, are far superior to traditional dyes in photostability and brightness.QDs can also be excited by any wavelength of light less than their band-gap wavelength; a simpler requirement compared to dyes, which have a narrow excitation spectrum [4,5].There are increasing applications of QDs for bioimaging recently [6][7][8][9][10].
Previously, we developed an aqueous synthesis method in which cadmium lead sulfide (Cd 0.3 Pb 0.7 S, CdPbS hereafter) aqueous QDs (AQDs) could be synthesized using 3-mercaptoproprionic acid (MPA) as a ligand under ambient conditions in water [11], without the ligand and solvent exchange required by traditional organic solvent-based QD synthesis methods. Once made, the surface of the CdPbS AQDs was already functionalized with the ligand, which could potentially be used for further functionalization in an aqueous environment. More importantly, the CdPbS AQDs emit near-infrared (NIR) fluorescence well above 800 nm with a peak at 820 nm. This makes them ideal for bioimaging because their NIR emission is completely separated from any potential interference from tissue autofluorescence in the 280-700 nm range [12,13]. As such, an 800 nm long-pass filter can be used to view or image the CdPbS AQD staining even under room light, because only AQD fluorescence will be captured through the filter for analysis. It should be noted that even though the heavy metal ions Cd 2+ and Pb 2+ are toxic to biological systems, the CdPbS AQD system is intended for ex vivo imaging on excised specimens.
However, when using the CdPbS AQDs for bioimaging it was found that there was significant nonspecific binding to the cell surface including MDA-MB-231, and HT29 cell lines.In addition, the AQDs are less stable at neutral pH with a reduction in photoluminescence (PL) as pH reduced from 12 to 7.This was likely due to the short, carboxyl ligand, MPA.Because MPA did not bind strongly to the AQDs, it was necessary to include excess MPA in the suspension to maintain the AQDs stability [14][15][16].However, MPA on the AQDs could lead to nonspecific binding as the negative charge of MPA could bond to a positive charge of surface proteins.Also, the free MPA in the suspension during conjugation could compete with the MPA on the AQDs thereby reducing the chance of AQDs to link to Ab and increasing the nonspecific binding.
Previously Bentzen et al [17] showed that by coating amphiphilic poly(acrylic acid) ((AMP)-capped) CdSe/ZnS with polyethylene glycol (PEG) (MW 2000) the nonspecific binding to cells was reduced.In the literature, there were several other studies using PEG to functionalize QDs [18][19][20][21].Here we investigate the use of PEG to enhance aqueous CdPbS QD suspension stability and reduce nonspecific binding to cells.PEG is a hydrophilic and flexible ligand [22], and could lead to better suspension stability due to the hydrophilicity of the PEG and the larger separation distance between particles due to the added length of PEG.Nonspecific binding of AQDs to cell surfaces is primarily due to electrostatic and hydrophobic interactions between the AQD and the cell surfaces and AQD self-aggregation [23,24].PEG is hydrophilic and is expected to make the AQD surface less negatively charged by reacting with the carboxyl groups of the MPA.The monofunctional PEG used in the present study (MW 3000) had an amine group to react with the carboxyl functionality of surface capping molecules through carbodiimide chemistry [25].Because AQDs were made with excess MPA, the PEGylation of AQDs is different from PEGylation of QDs made by the organic route, which does not have excess free capping molecules in the solution.As a result, it was necessary to work out the condition of PEGylation that is optimized at the MPA concentration that maximizes the surface zeta potential of the AQDs.The equilibration of surface MPA and free MPA in the solution can be described by Langmuir isotherm [26][27][28] that will be helpful in guiding our experimental approach.Previously, Zhang et al [29] used homo-bifunctional PEG linkers for PEGylation of CdSe/ZnSe/ZnS QDs (emission ∼600 nm) synthesized in organic solvent.In contrast, here we used hetero-bifunctional PEGylation of NIR AQDs.We also study the specific staining of cells by conjugation with antibodies, in addition to reduction of nonspecific staining of cells.
In this study, which is based on chapter 2 of the author's thesis [30], it was shown that PEGylation led to reduced nonspecific binding in cell staining.Furthermore, using bifunctional PEG (MW 700) with a carboxyl group on the other end to conjugate with antibodies created AQD-Ab conjugates.These AQD-Ab conjugates were shown to specifically stain cancer cells against normal cells with a signal-to-noise ratio (SNR) of 8.
CdPbS synthesis
CdPbS with a nominal molar ratio 8:5:1 of [MPA]:[Cd&Pb]:[S] with [S] being 0.6 mM was prepared.First, 21 µl of MPA (Sigma-Aldrich, St. Louis, MO) was added to 47 ml of deionized (DI) water at room temperature and stirred for 5 min.Then, 300 µl of 0.08 M Cd precursor solution (1.23 g of cadmium nitrate tetrahydrate (Sigma-Aldrich) dissolved in 50 ml DI water) was added and stirred for 10 min before adding 700 µl of 0.08 M Pb precursor solution (1.32 g of lead (II) nitrate (Sigma-Aldrich) dissolved in 50 ml DI water) and stirring for 10 more minutes.The pH was adjusted to 11 by adding 510 µl of tetramethylammonium hydroxide (TMAH) (Sigma-Aldrich).This solution was stirred for 10 min before the AQDs were formed upon the addition of 375 µl of 0.08 M S precursor solution (0.96 g of sodium sulfide nonahydrate (Sigma-Aldrich) in 50 ml DI water).This suspension was stirred for 10 more minutes.The molar ratio of [MPA]: [Cd&Pb]: [S] was 8:2.6:1 at this stage.Additional 875 µl of Cd precursor was then added to fulfill the 8:5:1 [MPA]: [Cd&Pb]: [S] ratio, followed by adjusting the pH to 12 with an additional 325 µl of TMAH.The final suspension of CdPbS AQDs, at 0.6 µM concentration by particle, was stirred for 10 more minutes before storing the AQDs at 4 • C in the dark.They were used for PEGylation within 2 d of synthesis.
Excess MPA removal by centrifugation with 10k filter
To optimally react amine-PEG to the MPA on the AQD surface, the excess MPA in the suspension were removed by centrifugation at 6k relative centrifugal force for 12 min with a 10 kDa filter (MilliporeSigma, Burlington, MA) followed by refilling the retentate with DI water to restore the volume.The number of rounds of filtration determined the thiol concentration.
MPA concentration quantification
The MPA concentration was quantified by Ellman's reagent thiol assay.In the following, one may view the thiol concentration as a proxy for MPA concentration as each MPA possesses one thiol.A 20× Ellman's stock solution was first prepared in 5 ml DI water with 4.0 mg Ellman's reagent (Thermo Fisher Scientific, Waltham, MA) and 20.5 mg sodium acetate (Thermo Fisher Scientific) and a 1× Ellman working solution was prepared by diluting the 20× stock solution to 1× with 0.1 M Tris HCl (Thermo Fisher Scientific) [31].
The thiol concentration of the AQD suspension was then measured by diluting the AQDs 100× in 1× Ellman's reagent working solution and incubating for 5 min. The absorbance of the solution was then measured at 412 nm using a UV-VIS spectrometer (Ocean Optics, Orlando, FL). The thiol concentration of the AQD suspension in question was then determined according to a standard curve created using the thiol of cysteamine (Thermo Fisher Scientific) as a model. Zeta potential and size were measured with a ZetaSizer ZS90 (Malvern, Surrey, UK). PL was measured using a Photon Technology International spectrometer (Birmingham, NJ) with excitation at 468 nm.
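As an illustration of this calibration step, the conversion from the measured A412 to a thiol concentration via a linear standard curve can be sketched in Python as below; the standard-curve values and the example absorbance are placeholders rather than data from this study, and only the 100× dilution factor is taken from the protocol above.

import numpy as np

# Placeholder cysteamine standard curve: thiol concentration (mM) vs A412
std_conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # mM thiol
std_a412 = np.array([0.02, 0.11, 0.20, 0.29, 0.38])  # absorbance at 412 nm

# Linear fit A412 = slope * c + intercept
slope, intercept = np.polyfit(std_conc, std_a412, 1)

def thiol_concentration(a412_sample, dilution=100):
    """Convert the A412 of a 100x-diluted AQD sample back to the thiol
    (MPA) concentration of the undiluted suspension, in mM."""
    c_diluted = (a412_sample - intercept) / slope
    return c_diluted * dilution

print(thiol_concentration(0.024))   # approximately 2.2 mM in the stock suspension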
PEGylation
PEGylation is the process of reacting the MPA on the AQDs with the PEG. Because our AQDs are capped with MPA (carboxyl), we used an amine-functionalized PEG (MW 3000) and carbodiimide chemistry [7,25], in which the amine binds to the carboxyl, leaving the hydroxyl end of the PEG free to interact with the outside environment. A summary of the PEGylation process using amine-PEG is shown in figure S1. The concentrations of 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide (EDC) (Thermo Fisher Scientific), sulfo-N-hydroxysuccinimide (sNHS) (Thermo Fisher Scientific), and PEG are important factors in the binding of amine-PEG (Sigma-Aldrich) to the AQD surface.
EDC and sNHS stock solutions were prepared.The 20 mM EDC or sNHS stock solutions were prepared in 1× phosphate buffered saline (PBS) (Thermo Fisher Scientific) immediately before use.The 2× concentrated AQDs, concentrated by 10 kDa filtering, were added to a centrifuge tube so their final concentration will be 0.6 µM by AQD.The 4 mM final concentration of EDC and sNHS were added and incubated for 15 min.This EDC/sNHS concentration was chosen because it had a minimal effect on the PL of the AQDs and did not cause aggregation in the AQDs (figure S2).The pH was raised to 6.8-7 with 10-20 µl, dependent on thiol concentration, of 20× borate buffer (Thermo Fisher Scientific) for a 2 ml total sample volume.Next, amine-PEG was added from a 30 mM stock solution in 1× PBS.The remaining volume was filled with 1× PBS.This mixture was incubated in ambient conditions.The PL stability was tracked over 6 d.After determining the optimal thiol concentration using 2 mM amine-PEG, corresponding to 3300 nominal PEG per AQD (PEG/AQD), the optimal amine-PEG concentration was studied by adding PEG ranging from 1 to 3 mM, corresponding to 1700, 3300, and 5000 nominal PEG/AQD.Samples were filtered to remove excess PEG and EDC/sNHS reagents using a 10 kDa filter.The process with bifunctional PEG (NH 2 -PEG12-proprionic acid) (Sigma Aldrich) was the same as amine-PEG but required optimization of PEG/AQD.This PEGylation process is described in figure S3.The bifunctional PEG had one end of carboxyl and the other end of amine with a MW 700.
Fluorescamine (FA) assay
One tool to determine the binding of the amine-PEG to carboxyl (of MPA) is a FA assay.FA is a molecule that, upon binding to amine groups, becomes fluorescent with excitation at 390 nm and emission at 470 nm.Increased fluorescence is correlated to an increased amine concentration.A standard curve was created using glycine (Thermo Fisher Scientific) from 0 to 5 mM.The FA (Thermo Fisher Scientific) stock was prepared as 0.3 mg ml −1 of acetone.The samples to be measured were diluted 10× with PBS and 25% of the final volume was the FA stock (7.5 µl sample + 67.5 µl PBS + 25 µl FA stock) [5].Samples were read in a 96-well plate using a BioTek Synergy plate reader (Winooski, VA) with excitation 390 nm and emission 470 nm.
Gel electrophoresis
Agarose gel was used to compare sizes of the various PEGylated AQDs.A 1.25% agarose gel was prepared by heating the 1.25% low melting point agarose (Thermo Fisher Scientific) in 1× Tris-borate-EDTA buffer (TBE) (Thermo Fisher Scientific) to 60 • C until agarose was fully dissolved and poured into a horizontal Mini-gel system (Fisher Scientific, Pittsburgh, PA).The gel was left to set for 1 h.Samples were added to wells and covered with agarose to prevent samples from leaking out of the wells.The gel was run in 1× TBE buffer for 15 min at 120 V.The UVP ChemStudio gel imaging system (Analytik Jena GmbH, Jena, Germany) was used for imaging of the AQDs fluorescence using blue light excitation and 707-752 nm emission filter.
PEGylated AQD conjugation with antibody
A schematic of the conjugation with antibody is shown in figure S4. The 1700 bifunctional PEG/AQD PEGylated AQDs were first purified by filtering 3× at 7.5k rpm for 4 min each (Sorvall Biofuge Primo, Marshall Scientific, Hampton, NH) in a 10 kDa filter, replacing with DI water after each filtering. The AQDs were concentrated to 2× (1.2 µM) during this process. EDC and sNHS were added to a final concentration of 0.3 µM each to AQDs at a 0.4 µM final concentration at pH 6.8 and incubated for 30 min. Then, 80 nM anti-Tn antigen antibody (GeneTex, Irvine, CA) was added, the rest of the volume was filled with 1× PBS, and the mixture was incubated for 2 h at room temperature. The conjugate was filtered 3× at 5k rpm in a 300 kDa filter (Sartorius, Göttingen, Germany) and the volume lost was replaced with PBS. The final conjugate was syringe filtered through a 0.22 µm filter (Sartorius).
Cells were cultured on an 8-chamber slide (MilliporeSigma) overnight.They were washed twice with PBS for 2 min each, fixed in 4% paraformaldehyde in PBS (Thermo Fisher Scientific) for 15 min at room temperature, and washed twice in PBS for 2 min each.The cells were blocked with 1% bovine serum albumin (Fisher Scientific) in PBS for 1 h at room temperature and washed 3× in PBST (PBS with 0.1% Tween-20 (Fisher Scientific, Waltham, MA)) for 5 min each before staining with AQD samples or Ab-AQD conjugates for 1 h.Finally, the cells were washed 3× with PBST for 5 min each and counterstained with 4 ′ , 6-diamidino-2phenylindole (DAPI) (Thermo Fisher Scientific) and cover-slipped for examination under a fluorescent microscope (Olympus BX51, Tokyo, Japan).Fluorescent images were analyzed using a MATLAB program, which was written to outline individual cells or groups of cells and return the intensity.The cells included in the analysis are outlined in figure S5.
Optimal MPA concentration for PEGylation
The purpose of PEGylation was for the PEG to react with MPA on the AQD surface, but not the free MPA in the suspension.The amount of MPA adsorbed on the AQD surface is equilibrated with the amount of MPA in the suspension.Qualitatively, we can describe the amount of MPA on the AQD surface as a function of MPA in the suspension by the Langmuir isotherm [26][27][28].As the concentration of MPA (represented by the thiol concentration) in suspension increases, more MPA will be adsorbed on the AQD surface.There will be an onset MPA concentration above which MPA coverage on the surface is saturated.We postulated that the onset MPA on AQDs is favorable for optimal PEGylation because below the onset concentration, there was not enough MPA on the AQD surface to stabilize the AQDs whereas above the onset, the PEG reaction with the free MPA in the suspension may be favored over the surface MPA, which could lead to instability.To identify the onset MPA concentration, we measured the zeta potential of the CdPbS AQDs at various MPA concentrations as determined by the thiol assay as each MPA contains thiol.In figure 1(a), the zeta potential of AQDs in PBS versus the MPA (thiol) concentration is shown as full squares.MPA has a carboxyl terminus, so we expect more MPA on the surface would give a more negative zeta potential [10].As can be seen, the zeta potential indeed increased in magnitude with an increase in MPA (thiol) concentration and saturated at around −30 mV at an MPA (thiol) concentration around 2.3 mM.The behavior is consistent with a Langmuir isotherm, if one considers the zeta potential as a proxy for MPA coverage.For charged nanoparticles like these CdPbS AQDs, the magnitude of the zeta potential was important for suspension stability.The knowledge that the zeta potential of the AQDs started to saturate at around 2.3 mM MPA was important to chart out the conditions for PEGylation.
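To illustrate how the onset of saturation might be located in practice, a Langmuir-type model can be fitted to zeta potential versus thiol concentration; the data points below are placeholders rather than the measured values, and the 90%-of-saturation criterion for the onset is one possible choice, not the procedure of this study.

import numpy as np
from scipy.optimize import curve_fit

def langmuir_zeta(c, zeta_sat, K):
    """Langmuir-type model: zeta potential tracks fractional MPA coverage."""
    return zeta_sat * K * c / (1.0 + K * c)

# Placeholder (thiol concentration in mM, zeta potential in mV) data
c_thiol = np.array([0.5, 1.0, 1.5, 2.0, 2.3, 2.8, 3.5])
zeta = np.array([-12.0, -19.0, -24.0, -28.0, -30.0, -30.0, -31.0])

(zeta_sat, K), _ = curve_fit(langmuir_zeta, c_thiol, zeta, p0=[-35.0, 1.0])
c_onset = 9.0 / K        # concentration where coverage reaches 90% of saturation
print(zeta_sat, K, c_onset)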
To determine the optimal MPA concentration for PEGylation, 3330 amine-PEG per AQD (amine-PEG/AQD) were added to the AQDs at 1.8, 2.2, and 2.5 mM MPA (thiol concentration), around the onset thiol concentration of 2.3 mM for zeta potential saturation shown in figure 1(a) and in the presence of 4 mM EDC and sNHS.The PL, or light emission, of the PEGylated AQDs were plotted versus time in figure 1(b).While all PL decreased with time, overall, the PL of AQDs with PEGylation (dashed lines and full symbols) were higher and showed less decrease with time than those without PEGylation (solid lines and open symbols).Notably, among the PEGylated AQDs, the PL was most retained at 2.2 mM MPA at all time points, indicating that 2.2 mM MPA, which is around the onset of zeta potential saturation, was the optimal MPA concentration for PEGylation.Below this concentration, there was not enough MPA on the surface to stabilize the AQDs while around this concentration, the additional free MPA in the solution could react with amine-PEG and reduced the number of bound PEGs on the surface.In figure 1(a), we also plotted the zeta potential of the PEGylated AQDs versus the MPA (thiol) concentration when PEGylation was performed.As can be seen, the zeta potential of the PEGylated AQDs became significantly less negative as compared to AQDs without PEGylation.This is expected because the -OH end of the amine-PEG carried no charge while the negatively charged carboxyls of the MPA on the AQD surface were consumed to form a peptide bond with the amine end of the amine-PEG.Also shown were the zeta potential of AQDs with 4 mM of EDC and sNHS, but without PEG, at 1.7 mM and 2 mM thiol concentration.Clearly, with only EDC and sNHS, the zeta potential of the AQDs were not much different from without EDC and sNHS.Only in the presence of PEG with EDC and sNHS did the zeta potential became significantly less negative, supporting that PEG was chemically linked to the MPA on the AQD surface.
It is of interest to note that while the zeta potential of PEGylated AQDs became less negative than without PEGylation as shown in figure 1(a) (particularly at MPA = 2.2 mM and 2.5 mM), the PL of the PEGylated AQDs were brighter and decreased more slowly than without PEGylation.This indicates that the stability of the PEGylated AQDs was less due to electrostatic repulsion but more likely due to the extension of the capping molecules on the AQD surface, which prevents the aggregation of AQDs at neutral pH.
Optimal PEG/AQD ratio
Once we obtained the optimal thiol concentration of 2.2 mM, we varied the amount of amine-PEG to find the optimal ratio of PEG per AQD (PEG/AQD) for PEGylation where the molar concentration of AQDs is defined as that of the AQD particles (0.6 µM).In figure 2(a), we show the PL versus time of PEGylated AQDs with PEG/AQD = 0, 1700, 3300, or 5000 at a nominal thiol (MPA) concentration of 2.2 mM.While all PEGylated AQDs had a higher PL than without PEGylation (PEG/AQD = 0), the AQDs with PEG/AQD = 3300 appeared slightly more stable over 6 d as compared to those with PEG/AQD = 1700 and 5000.In figure 2 we show both the PL (2b) in black and the amount of bound amine-PEG (2c) in red versus the nominal PEG/AQD ratio used for PEGylation.The actual number of bound PEG (2c), as measured by FA, increased as the nominal PEG/AQD ratio increased.This indicates that the reaction between the amine-PEG and MPA occurred.The concentration of bound amine-PEG was calculated by subtracting the remaining amine concentration after PEGylation from the starting amine concentration.There was no significant increase in amine-PEG bound to AQD when increasing PEG/AQD ratio from 3300 to 5000.In fact, PEG/AQD = 3300 retained the PL the most over time (2b).This indicates that around 3300 PEG/AQD ratio was the optimal for the reaction in the system with 4 mM EDC and sNHS used.The FA measurement indicated that about 1.5 ± 0.3 mM out of the total MPA (which was around 2.2 ± 0.2 mM) was bound to PEG after the PEGylation process.While the FA measurement did not indicate whether the PEG is bound to the MPA on the AQD surface or to the free MPA, the assumption was that at least some portions of the bound PEG were on the surface.Using a diameter of 5 nm and assuming a lattice constant 0.59 nm and 1 MPA per cation, it was estimated that the AQDs could have up to about 0.6 mM MPA on their surface.As a result, we expect the zeta potential of the AQDs to change after PEGylation. Figure 2(d) shows the zeta potential of AQDs after PEGylation and filtering out the excess.All PEGylated AQDs became less negatively charged.The fact that the binding behavior of PEGylation (figure 2(c)) is similar to that of PL (figure 2(b)) and zeta potential measurements (figure 2(d)), supports the argument that PEGylation occurred on AQD surface rather than with free MPA.Furthermore, the fact that the 3300 and 5000 have similar degree of PEGylation shows that PEGylation saturated in the system at ∼3300 PEG/AQD and the PEGylation was with the particle surface and no further PEGylation is possible due to steric hindrance on the AQD or hydrolysis of EDC/sNHS and inability for more PEG to bind.The zeta potential of AQDs went from about −25 mV to about −12 to −10 mV after PEGylation.The carboxyl of MPA on the surface of non-PEGylated AQDs was more negatively charged at neutral pH, whereas, if PEGylated, the hydroxyl capping was mostly neutral at neutral pH [32].The inserts of figure 2(d) describe the PEGylation of the AQDs.The insert I corresponds to an MPA-capped AQD and was highly negatively charged due to carboxyl surface coverage.However, with the addition of amine-PEG, some of the MPA charge was covered by the neutral charge of the hydroxyl of amine-PEG (inserts II, III, and IV) resulting in less negative zeta potentials.
Nonspecific staining of cells with amine-PEG AQDs
Here we examine how nonspecific binding to the cell surface can be reduced by PEGylation at various PEG/AQD ratios.As mentioned earlier, nonspecific binding of AQDs to cell surfaces is primarily due to electrostatic and hydrophobic interactions between the AQD and cell surfaces [23].The CdPbS AQDs are hydrophilic when capped with MPA, but PEG is a very hydrophilic molecule.It has been demonstrated that the surface charge is less negative after PEGylation.So, we expect a reduction in both electrostatic and hydrophobic interactions with the cell surface after the AQD was successfully PEGylated.
Figure 3 shows the nonspecific staining results of PEGylated AQDs at various PEG/AQD including PEG/AQD = 0 (see figure 3(a)) on HT29 cells for comparison.Visibly, there was a reduction of staining fluorescent intensity in samples stained by PEGylated AQDs with any PEG/AQD ratio (see figures 3(b)-(d)) compared to staining with PEG/AQD = 0 (figure 3(a)).This indicates that PEG was indeed bound on the AQDs and reduced their interactions with the cell surface.The nonspecific staining intensity distributions by the AQDs with various PEG/AQD ratios are summarized in figure 3(e).Clearly, all PEGylated samples (dashed, dashed-dotted, and dotted lines) had a much lower average and standard deviation of the staining intensity.The 5000 amine-PEG/AQD sample (dotted line) showed the lowest average staining intensity, reducing the nonspecific binding by about 55% (after background subtraction) as compared to without PEG (solid line).The 1700 (dashed line) and 3300 (dashed-dotted line) ratios only reduced the nonspecific staining by about 40%.Clearly, for amine-PEG, a greater nominal PEG/AQD yielded better results for reducing nonspecific binding.In addition to offering the most reduction of nonspecific binding, the 5000 PEG/AQD ratio (dotted line) also had the tightest staining intensity distribution with the lowest standard deviation, which could be advantageous if the AQDs are eventually used for targeted imaging.
Nonspecific staining of cells with bifunctional PEGylation
While the amine-PEG showed the proof-of-concept of using PEG to stabilize AQDs and reduce their nonspecific binding, it does not have the functionality to conjugate AQDs to a biomolecule such as an antibody for specific fluorescent staining of cells.Therefore, bifunctional PEG was used for the PEGylation.The bifunctional PEG has a slightly different chemistry compared to the amine-PEG as there is an amine end and a carboxyl end.This also means that the bifunctional PEG may bind to itself.Therefore, the optimal concentration of the bifunctional PEG may not be the same as that of the amine-PEG previously utilized.
Like amine-PEG, all the bifunctional PEG concentrations showed an increase in AQD PL stability (figure 4(a)).The 1700 bifunctional PEG/AQD showed the highest PL retention at ∼450 000 overall, about 140% higher than unmodified AQDs over 6 d (at ∼190 000), with 3300 (at ∼410 000) and 5000 PEG (at ∼390 000) being 120% and 105% higher in PL, respectively.The size measurements in figure 4(b) show an increase in size with an increase in PEG/AQD ratio.Without PEG the AQD diameter was 5 ± 3 nm, but the diameter increased to 22 ± 7 nm, 54 ± 9 = 10 nm, and 73 ± 11 nm for PEG/AQD = 1700, 3300, and 5000, respectively.The PEG alone is about 5 nm.Therefore, a single layer of PEG coating on the AQD could increase the diameter by about 10 nm.The 1700 PEG amount is within the standard deviation of such an increase, supporting that there was a monolayer of PEG on the AQD surface when PEGylated with PEG/AQD = 1700.This is illustrated with insert (II) in figure 4(b) where bifunctional PEG has bound to the MPA, elongating it.As more PEG is added, the size also increases, indicating additional layers of bifunctional PEG binding to itself and creating 2 or 3 layers as shown in inserts (III) and (IV).Figure 4(e) shows size as measured by gel electrophoresis, which is consistent with the ZetaSizer measurements of increasing size with increasing PEG/AQD ratio.Figure 4(c) plots the PL after 6 d as a function of PEG/AQD ratio with the ratio of 1700 having the highest PL over this time, indicating this PEG/AQD ratio created the most stable AQD. Figure 4(d) shows the zeta potential of bifunctional PEG-functionalized AQDs versus PEG/AQD ratio.Again, the unmodified or PEG/AQD = 0 had a zeta potential of −25 ± 5 mV.All PEGylated AQDs had a slightly less negative zeta potential of about −18 ± 3 mV.This indicates bifunctional PEG binding to the surface of the AQD shielded some of the negative charge from the MPA carboxyl and made the zeta potential slightly less negative.The zeta potential results can also be explained by the inserts in figure 4(b).Inserts (II), (III), and (IV) illustrate that not all MPA may be covered with PEG, but instead, the PEG that does bind, whether one layer or several, extends beyond MPA.The PEG is what is measured for zeta potential and because there is less of it compared to unmodified AQDs (PEG/AQD = 0), the zeta potential is slightly less negative.The fact that the 1700 PEG/AQD ratio retained the most PL and had the smallest size indicates that it was the optimal PEG/AQD ratio to cover the AQD surface with a single layer of PEG.The smaller size would benefit conjugation strategies.On the other hand, further increasing the PEG/AQD ratio created a larger AQD, likely with several layers of PEG.The FA assay was not used for bifunctional PEG binding as the PEG can self-react, i.e. carboxyl of one PEG reacts to the amine of another PEG.Results of such measurements would be difficult to quantify binding of PEG to the AQD surface.
We performed TEM characterization of MPAcapped and bifunctional-PEGylated AQD with PEG/AQD = 1700 to learn more about the microscopic feature of the PEGylation.The results are shown in figure 5.It can be seen in figure 5 that PEGylation causes particles to be separated thus improving the stability of AQD.Meanwhile, the MPA-capped AQDs were aggregated.Similarly, the PL spectra of the MPA-capped AQD and that of the bifunctional PEGylated AQDs in PBS over 1 month are shown in figure S6 in the supplemental information.It can be seen that the bifunctional PEGylated AQDs are more stable thus resulting in less nonspecific staining due to aggregation.
While the PL and size measurements suggest that the 1700 PEG/AQD ratio could be optimal, the nonspecific binding results are imperative to the use of PEGylated AQDs in future bioimaging. Figure 6 shows a decrease in nonspecific staining intensity for all PEG/AQD ratios compared to unmodified AQDs (figure 6(a)), like the staining results with amine-PEG. This agreed with the hypothesis that PEGylating the AQDs would reduce nonspecific binding. The 1700 PEG/AQD ratio (figure 6(b)) had the lowest nonspecific staining (at 64 ± 12), reducing it by about 40% compared to 0 PEG/AQD (at 110 ± 17), with the 3300 PEG/AQD (at 67 ± 14) having very similar staining to the 1700 PEG/AQD. The 5000 PEG/AQD ratio reduced the nonspecific binding by about 30% (at 80 ± 17). While the standard deviation of the 3300 PEG/AQD ratio was slightly smaller than that of the 1700 PEG/AQD ratio, the smaller size and higher brightness of AQDs at a 1700 PEG/AQD ratio may make these AQDs more suited for bioimaging applications.
Specific staining of cancer cells with bifunctional PEGylated AQD-Ab Conjugate
With the success of PEGylation in reducing the nonspecific staining of cells, we used the PEGylated AQDs for conjugation with antibody to examine whether specific staining of cancer cells could be achieved. The results of using the PEGylated AQDs to create Tn-antigen specific conjugates to distinguish Tn-antigen [33][34][35][36] presenting MDA-MB-231 cancer cells from MCF12A normal cells are shown in figure 7. The small size of 1700 PEG/AQD allowed multiple AQDs to be conjugated to an antibody without steric hindrance. With a nominal 1:5 Ab:AQD ratio, the staining was specific with an SNR of 8. SNR can be defined in many ways [37]. Here we follow the method described by Gharia et al [38]. Specifically, the signal is defined as the specific cancer cell staining on MDA-MB-231 cells and the noise is taken to be the nonspecific staining on MCF12A cells. The SNR is calculated by dividing the difference in staining intensity between the two cell lines by the standard deviation of the noise (nonspecific MCF12A staining). The SNR is acceptable though not as high as we would like, likely because the standard deviation of the nonspecific staining was quite high. Further filtering of the conjugates may reduce some of the nonspecific staining on the MCF12A cells, thus reducing the standard deviation. This could reduce the overlap of the staining intensity distributions shown in figure 7(c), increasing the specificity of the conjugate. The optimization of the conjugation of the AQDs to antibodies to improve specific staining and minimize nonspecific staining, and its application to tissue slide staining, will be examined in a future study.
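A short sketch of this SNR computation (in Python, with hypothetical per-cell intensities rather than the measured values) is:

import numpy as np

def staining_snr(cancer_intensity, normal_intensity, background=0.0):
    """SNR as defined above: (mean cancer-cell signal - mean normal-cell
    signal) divided by the standard deviation of the nonspecific staining."""
    cancer = np.asarray(cancer_intensity) - background
    normal = np.asarray(normal_intensity) - background
    return (cancer.mean() - normal.mean()) / normal.std(ddof=1)

# Hypothetical per-cell intensities extracted from the fluorescence images
mda_mb_231 = np.array([410.0, 380.0, 450.0, 430.0, 395.0])
mcf12a = np.array([60.0, 110.0, 40.0, 95.0, 25.0])
print(staining_snr(mda_mb_231, mcf12a))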
Conclusions
The results of this study indicated that PEGylation was effective in stabilizing CdPbS AQDs and reducing their nonspecific binding to cells. All PEGylated AQDs showed an increase in PL stability over time compared to unmodified AQDs. Using amine-PEG created a hydroxyl-capped AQD after PEGylation. In this case, the higher the PEG/AQD ratio, the lower the nonspecific binding. Among the ratios used, the 5000 amine-PEG/AQD ratio led to the least nonspecific binding with cells. On the other hand, the bifunctional PEG, which retained the carboxyl functionality on the AQD surface after PEGylation, exhibited minimal nonspecific staining at a nominal 1700 PEG/AQD ratio. Comparing the amine-PEG and the bifunctional PEG, the bifunctional PEG is more efficient in reducing nonspecific binding to cells, with a smaller particle size and a smaller amount of PEG, allowing multiple (nominally 5) AQDs to conjugate to one antibody. The minimal nonspecific binding at the 1700 bifunctional PEG/AQD ratio was related to a single PEG layer on the AQD surface. For targeted staining, our study showed the potential for specific, direct staining of cancerous cells using a PEG-stabilized, NIR-emitting AQD and a cancer-specific antibody. We believe that by further optimizing the antibody-AQD conjugation protocol, further improvement on this metric is possible, which will be examined in a future study. This evidence of conjugation also paves the way for future applications in bioimaging, targeted drug delivery, and disease detection and diagnosis.
Figure 1 .
Figure 1.Effect of thiol (MPA) concentration and PEGylation on AQD stability by zeta potential and photoluminescent (PL) measurements: (a) Zeta potential of modified (EDC/sNHS and PEGylated) or unmodified (MPA-capped) CdPbS AQDs versus thiol concentrations in neutral pH, and (b) Photoluminescence (PL) versus time for PEGylated (full symbols and dashed lines) or unmodified (open symbols and solid lines) AQDs at various thiol concentrations.
Figure 2 .
Figure 2. Effect of amine-PEG per AQD on stability of AQD in PBS in terms of zeta potential and its binding efficiency to carboxyl groups of MPA: (a) photoluminescence (PL) stability versus time for various nominal amine-PEG/AQD ratios added to the AQDs with 2.2 mM thiol concentration, (b) PL of samples 6 d post-synthesis with varied PEG/AQD (c) the actual amount of bound amine-PEG/AQD measured by fluorescamine versus the nominal PEG/AQD added (d) zeta potential of AQD suspensions with increasing PEG/AQD.The inserts (I), (II), (III), and (IV) in (d) illustrate the elongation of PEG as consistent with the AQD size with an increasing PEG/AQD ratio.
Figure 4 .
Figure 4. Bifunctional PEG/AQD effect on (a) PL versus time over 6 d, (b) size by ZetaSizer, (c) PL 6 d post-synthesis, (d) zeta potential, and (e) size by agarose gel electrophoresis.The inserts (I), (II), (III), and (IV) in (b) illustrate the elongation of PEG as consistent with the AQD size with an increasing PEG/AQD ratio.
Figure 7 .
Figure 7. Specific staining of cancer cells using bifunctional PEGylated CdPbS AQD-Ab conjugates on (a) MDA-MB-231 breast cancer cells and (b) MCF12A normal breast cells with (c) distribution of staining intensities for each cell line with background subtracted and signal-to-noise ratio (SNR) of 8. DAPI staining for nuclei in blue, AQD in red.
"year": 2024,
"sha1": "e5b888808f10042ff4d0fd0b5fecedb49a6f7338",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1748-605X/ad2e0e/pdf",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "b41719154fdbb360c716c78d039a90c5a3a37490",
"s2fieldsofstudy": [
"Materials Science",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
Weak lensing tomography with orthogonal polynomials
The topic of this article is weak cosmic shear tomography where the line of sight-weighting is carried out with a set of specifically constructed orthogonal polynomials, dubbed TaRDiS (Tomography with orthogonAl Radial Distance polynomIal Systems). We investigate the properties of these polynomials and employ weak convergence spectra, which have been obtained by weighting with these polynomials, for the estimation of cosmological parameters. We quantify their power in constraining parameters in a Fisher-matrix technique and demonstrate how each polynomial projects out statistically independent information, and how the combination of multiple polynomials lifts degeneracies. The assumption of a reference cosmology is needed for the construction of the polynomials, and as a last point we investigate how errors in the construction with a wrong cosmological model propagate to misestimates in cosmological parameters. TaRDiS performs on a similar level as traditional tomographic methods and some key features of tomography are made easier to understand.
INTRODUCTION
Weak gravitational lensing by the cosmic large-scale structure (Blandford et al. 1991;Schneider et al. 1992;Kamionkowski et al. 1998) is regarded as a very promising way for investigating the properties of the cosmic matter distribution and the measurement of cosmological parameters. The primary tool is the angular spectra of the weak lensing convergence (Jain & Seljak 1997;Hu & Tegmark 1999;Hu & White 2001;Hu & Jain 2004), which has, in the context of dark energy cosmologies with adiabatic fluctuations in the dark matter distribution, the potential to deliver parameter constraints competitive with those from the cosmic microwave background.
The weak cosmic shearing effect of the large-scale structure was detected by four independent research groups a decade ago (Van Waerbeke et al. 2000; Kaiser et al. 2000; Bacon et al. 2000; Wittman et al. 2000), and now parameter constraints derived from weak lensing spectra coincide well with those from the CMB, with a small tension concerning the parameters Ω m and σ 8 (Kilbinger et al. 2009, 2010). These topics are reviewed in detail by Bartelmann & Schneider (2001) and Bartelmann (2010a,b).
Together with the property of weak lensing to map out fluctuations in the cosmic matter distribution in a linear way, tomographic methods have been shown to offer superior precision in the determination of cosmological parameters (Hu 1999, 2002a; Takada & White 2004; Hannestad et al. 2006), in particular concerning the properties of dark energy (Amara & Réfrégier 2007). The idea of tomography is a division of the galaxy sample used for shear estimation into a number of bins in distance. This splitting is the reason for two important advantages of tomography: The weak lensing convergence is a line of sight-integrated quantity of which one measures the angular fluctuation statistics. Angular convergence spectra, however, convey less information compared to the fluctuation statistics of the three-dimensional matter distribution because in the projection process, a mixing of scales is taking place. Secondly, cosmological parameters can have different influences on the amplitude of the weak lensing convergence at different redshifts, which would be averaged out. Combining convergence spectra measured for each subset of galaxies is able to alleviate these two problems. Related developments include shear ratio measurements, where the same tidal fields are measured with galaxy populations at different redshifts (Jain & Taylor 2003; Taylor et al. 2007), cross-correlation cosmography, where part of the galaxies are used for constructing a template on which one observes the shearing effect of more distant galaxies (Bernstein & Jain 2004), and finally three-dimensional weak shear methods, which provide a direct reconstruction of the fluctuations of the cosmic density field (Castro et al. 2005; Heavens et al. 2006; Kitching et al. 2011). Common to these methods are nonzero covariances between measurements of different redshifts. This is a natural consequence of the fact that lensing is caused by the large-scale structure between the lensed objects and the observer, so that the light from different galaxy samples has to traverse partially the same large-scale structure for reaching the observer.
Our motivation is to revisit weak lensing tomography and to construct a weighting scheme for the galaxies as a function of distance, such that the covariance between spectra estimated from differently weighted galaxy samples has a particularly simple, diagonal shape. For this goal, we construct a set of orthogonal polynomials in distance, which provides a diagonalisation of the covariance, and which, when employed in the derivation of the convergence spectrum, projects out statistically independent information about the large-scale structure.
After a brief summary of key formulae for cosmology, structure formation and weak lensing in Sect. 2, we describe the construction of TaRDiS-polynomials and investigate their properties in Sect. 3. Their statistical and systematical errors are evaluated in Sects. 4 and 5, respectively, and we summarise our results in Sect. 6. The reference cosmological model used is a spatially flat wCDM cosmology with Gaussian adiabatic initial perturbations in the cold dark matter density field. The specific parameter choices are Ω m = 0.25, σ 8 = 0.8, H 0 = 100 h km/s/Mpc, with h = 0.72, n s = 1 and Ω b = 0.04. The dark energy equation of state is set to w = −0.9 and the sound speed is equal to the speed of light (c s = c) such that there is no clustering in the dark energy fluid.
Dark energy cosmologies
In spatially flat dark energy cosmologies with the matter density parameter Ω m , the Hubble function H(a) = d ln a/dt is given by
H 2 (a)/H 2 0 = Ω m /a 3 + (1 − Ω m ) exp[3 ∫ 1 a d ln a ′ (1 + w(a ′ ))],
with the dark energy equation of state w(a), for which we use the common parameterisation (Chevallier & Polarski 2001)
w(a) = w 0 + w a (1 − a),
where w 0 = −1 and w a = 0 would correspond to ΛCDM. Comoving distance χ and scale factor a are related by
χ = c ∫ 1 a da ′ /(a ′2 H(a ′ )),
such that the comoving distance is given in units of the Hubble distance χ H = c/H 0 .
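A minimal numerical sketch of these background quantities (not part of the original analysis) could look as follows in Python, using the reference parameter values quoted above:

import numpy as np
from scipy.integrate import quad

Omega_m, w0, wa, h = 0.25, -0.9, 0.0, 0.72
c_light = 299792.458            # km/s
H0 = 100.0 * h                  # km/s/Mpc
chi_H = c_light / H0            # Hubble distance in Mpc

def w_de(a):
    """CPL equation of state w(a) = w0 + wa (1 - a)."""
    return w0 + wa * (1.0 - a)

def hubble(a):
    """Dimensionless Hubble function E(a) = H(a)/H0 for a flat dark energy cosmology."""
    de_exponent = quad(lambda lna: 3.0 * (1.0 + w_de(np.exp(lna))), np.log(a), 0.0)[0]
    return np.sqrt(Omega_m / a**3 + (1.0 - Omega_m) * np.exp(de_exponent))

def comoving_distance(a):
    """Comoving distance chi(a) = c int_a^1 da' / (a'^2 H(a')), in Mpc."""
    return chi_H * quad(lambda ap: 1.0 / (ap**2 * hubble(ap)), a, 1.0)[0]

print(comoving_distance(1.0 / 1.9))   # distance to the median source redshift z = 0.9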
CDM power spectrum
The linear CDM density power spectrum P(k) describes statistically homogeneous Gaussian fluctuations of the density field δ, It is composed from a scale invariant term ∝ k ns and the transfer function T (k), In low-Ω m cosmologies T (k) is approximated with the fit proposed by Bardeen et al. (1986), where the wave vector k = qΓ is given in units of the shape parameter Γ. Γ determines the peak shape of the CDM spectrum and is to first order given by Γ = Ω m h, with small corrections due to the baryon density Ω b (Sugiyama 1995), The spectrum P(k) is normalised to the variance σ 8 on the scale R = 8 Mpc/h, with a Fourier transformed spherical top hat filter function, W R (k) = 3 j 1 (kR)/(kR). j ℓ (x) is the spherical Bessel function of the first kind of order ℓ (Abramowitz & Stegun 1972).
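The linear spectrum with the Bardeen et al. (1986) transfer function, the Sugiyama (1995) shape parameter and the σ 8 normalisation can be sketched as follows (an illustration only; wavenumbers are in h/Mpc and the integration limits of the normalisation integral are arbitrary cut-offs):

import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

Omega_m, Omega_b, h, sigma8, n_s = 0.25, 0.04, 0.72, 0.8, 1.0

# Shape parameter with the Sugiyama (1995) baryon correction
Gamma = Omega_m * h * np.exp(-Omega_b * (1.0 + np.sqrt(2.0 * h) / Omega_m))

def transfer_bbks(k):
    """Bardeen et al. (1986) fit; k in h/Mpc, q = k / Gamma."""
    q = k / Gamma
    return (np.log(1.0 + 2.34 * q) / (2.34 * q) *
            (1.0 + 3.89 * q + (16.1 * q)**2 + (5.46 * q)**3 + (6.71 * q)**4)**(-0.25))

def power_unnormalised(k):
    return k**n_s * transfer_bbks(k)**2

def sigma_R_sq(R, pk):
    """Variance of the density field smoothed with a spherical top hat of radius R."""
    def integrand(k):
        W = 3.0 * spherical_jn(1, k * R) / (k * R)
        return k**2 * pk(k) * W**2 / (2.0 * np.pi**2)
    return quad(integrand, 1e-4, 1e2, limit=200)[0]

norm = sigma8**2 / sigma_R_sq(8.0, power_unnormalised)

def power_linear(k):
    """Normalised linear CDM spectrum P(k), k in h/Mpc."""
    return norm * power_unnormalised(k)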
Structure growth with clustering dark energy
Linear homogeneous growth of the density field, δ(x, a) = D + (a)δ(x, a = 1), is described by the growth function D + (a), which is the solution to the growth equation (Turner & White 1997; Wang & Steinhardt 1998; Linder & Jenkins 2003),
d 2 D + /da 2 + (3/a + d ln H/da) dD + /da = 3Ω m (a)D + /(2a 2 ).
Nonlinear structure formation enhances the CDM-spectrum P(k) on small scales by a factor of ≃ 40, which is described by the fit suggested by Smith et al. (2003), gauged to n-body simulations of cosmic structure formation.
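A possible numerical integration of the growth equation, reusing Omega_m and the hubble(a) helper from the background sketch above, is:

import numpy as np
from scipy.integrate import solve_ivp

def dlnH_dlna(a, eps=1e-4):
    """Numerical logarithmic derivative of the Hubble function."""
    return (np.log(hubble(a * (1 + eps))) - np.log(hubble(a * (1 - eps)))) / (2 * eps)

def growth_rhs(a, y):
    """D'' + (3/a + dlnH/da) D' = 3 Omega_m(a) D / (2 a^2)."""
    D, dD = y
    omega_m_a = Omega_m / (a**3 * hubble(a)**2)
    d2D = -(3.0 / a + dlnH_dlna(a) / a) * dD + 1.5 * omega_m_a * D / a**2
    return [dD, d2D]

# Start deep in matter domination, where D_+(a) is proportional to a
a_init = 1e-3
sol = solve_ivp(growth_rhs, (a_init, 1.0), [a_init, 1.0], dense_output=True, rtol=1e-8)
D_plus = lambda a: sol.sol(a)[0] / sol.sol(1.0)[0]    # normalised to D_+(a=1) = 1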
Weak gravitational lensing
The weak lensing convergence κ provides a weighted line-of-sight average of the matter density δ (for reviews, see Bartelmann & Schneider 2001; Bartelmann 2010b), with the weak lensing efficiency W κ (χ) as the weighting function. q(z) denotes the redshift distribution of the lensed background galaxies and is the approximate forecasted distribution of the EUCLID mission (Amara & Réfrégier 2007); z 0 has been chosen to be ≃ 0.64 such that the median of the redshift distribution is 0.9. With these definitions, one can carry out a Limber-projection (Limber 1954) of the weak lensing convergence for obtaining the angular convergence spectrum C κ (ℓ), which describes the fluctuation statistics of the convergence field.
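The Limber projection can be sketched numerically as below; chi_of_z and a_of_chi are hypothetical helper functions for the (invertible) distance-redshift relation built from the background sketch above, D_plus and power_linear are taken from the previous sketches, the redshift distribution assumes the common form (z/z 0 )² exp(−(z/z 0 )^β) with β = 3/2, and distances are expressed in Mpc/h so that k = ℓ/χ is in h/Mpc:

import numpy as np
from scipy.integrate import quad

chi_H = 2997.92          # Hubble distance c/H0 in Mpc/h
z0, beta = 0.64, 1.5

def q_z(z):
    """EUCLID-like source redshift distribution (unnormalised)."""
    return (z / z0)**2 * np.exp(-(z / z0)**beta)

q_norm = quad(q_z, 0.0, 10.0)[0]

def lensing_G(chi, chi_of_z, z_max=6.0):
    """G(chi) = int dz q(z) (1 - chi/chi(z)) over sources behind chi."""
    integrand = lambda z: q_z(z) / q_norm * max(0.0, 1.0 - chi / chi_of_z(z))
    return quad(integrand, 0.0, z_max, limit=100)[0]

def c_kappa(ell, chi_of_z, a_of_chi, chi_max):
    """Limber projection C_kappa(ell) = int dchi W_kappa(chi)^2 / chi^2 P(ell/chi)."""
    def integrand(chi):
        a = a_of_chi(chi)
        W = 1.5 * Omega_m / chi_H**2 * D_plus(a) / a * chi * lensing_G(chi, chi_of_z)
        return W**2 / chi**2 * power_linear(ell / chi)
    return quad(integrand, 1.0, chi_max, limit=100)[0]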
motivation for tomography
Weak cosmic shear provides a measurement of the weighted, line of sight integrated tidal shears, although it is more convenient to work in terms of the weak lensing convergence and the cosmic density field, which have statistically equivalent properties. Weak lensing therefore provides an integrated measurement of the evolution of the cosmic density field weighted with the lensing efficiency function. Being a line of sight-averaged quantity, the weak lensing convergence is statistically not as constraining as the full 3-dimensional density field, which is caused by the mixing of spatial scales in the Limber-projection. Additionally, parameters which enter the model in a nonlinear way can cause different effects on the weak lensing signal at different redshifts. This sensitivity would be averaged out in a long-baseline line of sight integration and the information would be lost. The solution to this problem is offered by tomographic methods: splitting up the lensing signal from different distances allows a larger signal strength and a higher sensitivity with respect to cosmological parameters because the line of sight-averaging effect is reduced. This comes at the cost of a more complicated covariance between the spectra measured, and a higher shape noise for each of the spectra, because the effective number of galaxies is reduced. There are different methods for exploiting the evolution of the cosmic density field and the lensing sensitivity along the line of sight. Since the inception of tomography for investigating weak lensing convergence spectra (Hu 1999, 2002b) and bispectra (Takada & Jain 2004), there have been many developments leading ultimately to cross-correlation cosmography (Bernstein & Jain 2004), where a lensing template is derived from data and used to predict the weak shear signal on background sources, and to 3-dimensional weak shear methods, which are an unbinned, direct mapping of the cosmological density field (Castro et al. 2005; Heavens 2003; Kitching et al. 2011).
Common to these methods are strong covariances between spectra, which we aim to avoid by employing a set of specifically designed polynomials, which perform a weighting of the observed ellipticities in distance, in such a way that convergence cross-spectra computed with two different polynomials vanish. This ensures that every polynomial projects out information about the deflection field which is statistically independent, and generates a covariance matrix of a particularly simple shape. In a sense, TaRDiS is using a non-local binning of the ellipticities motivated by the anticipated weak lensing signal, which corresponds to the non-local nature of weak lensing.
construction of orthogonal sets of polynomials
We introduce a weighting of the distance distribution of the galaxies q(χ) = q(z)dz/dχ = q(z)H(z) with a function p i (χ), such that the lensing efficiency function W i (χ) is computed with the weighted distribution p i (χ)q(χ) in place of q(χ). A line of sight-weighting of the weak lensing convergence with a function p i (χ) yields the weighted convergence field κ i , from which one obtains the variance between two Fourier modes κ i (ℓ), with the weighted convergence spectrum S i j (ℓ) for homogeneous and isotropic fluctuations of κ i . This expression provides the definition of a scalar product ⟨p i , p j ⟩ between the weighting functions p i (χ) and p j (χ) contained in W i (χ) and W j (χ), respectively, which exhibits the necessary properties of being positive definite (⟨p i , p i ⟩ ⩾ 0 and ⟨p i , p i ⟩ = 0 ↔ p i ≡ 0), symmetric (⟨p i , p j ⟩ = ⟨p j , p i ⟩) and linear in both arguments. Starting point for the construction of orthogonal polynomials which would diagonalise S i j (ℓ) are monomials p ′ i (χ) in the variable χ − χ node , where χ node can be chosen for placing the node of the first polynomial in a sensible way; in our case it is set to the median value of the galaxy redshift distribution, converted into comoving distance. The monomials p ′ i (χ) are subjected to a Gram-Schmidt orthogonalisation procedure, with the normalised zeroth-order monomial as the initial condition and, iteratively, each subsequent polynomial obtained by subtracting from p ′ i (χ) its projections onto the previously constructed polynomials and normalising the result. It should be emphasised that the orthogonalisation procedure needs to be repeated for every multipole ℓ, and that we have omitted an additional index ℓ of the polynomials p i (χ) for clarity. For illustration the polynomials are normalised using the norm induced by the scalar product defined in eqn. (19), which appears in the denominator of eqn. (22) in a natural way. Clearly, the scalar product S i j (ℓ) is equal to the convergence spectrum C κ (ℓ) for i = j = 0, in which case p 0 ≡ 1.
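A minimal numerical sketch of this construction (not the original implementation) is given below. It assumes a user-supplied callable scalar_product(f, g) that evaluates the weighted convergence cross-spectrum S i j (ℓ) for two arbitrary weighting functions at a fixed multipole, for instance by combining the lensing-efficiency and Limber-projection sketches above; monomials in χ − χ node are used as seed functions, following the description in the text.

import numpy as np

def gram_schmidt_polynomials(scalar_product, n_poly, chi_node):
    """Build weighting functions p_0 ... p_{n_poly-1} that are orthonormal
    with respect to the lensing scalar product <p_i, p_j> = S_ij(ell)."""
    seeds = [lambda chi, i=i: (chi - chi_node)**i for i in range(n_poly)]
    basis = []
    for p_prime in seeds:
        # projection coefficients onto the already-constructed (normalised) polynomials
        coeffs = tuple(scalar_product(p_prime, p) for p in basis)
        frozen = tuple(basis)
        def p_new(chi, p_prime=p_prime, coeffs=coeffs, frozen=frozen):
            value = p_prime(chi)
            for c, p in zip(coeffs, frozen):
                value = value - c * p(chi)
            return value
        norm = np.sqrt(scalar_product(p_new, p_new))
        basis.append(lambda chi, p_new=p_new, norm=norm: p_new(chi) / norm)
    return basis

# Hypothetical usage, repeated for every multipole of interest:
# polys_1000 = gram_schmidt_polynomials(lambda f, g: weighted_spectrum(f, g, ell=1000),
#                                       n_poly=10, chi_node=chi_median)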
properties of orthogonal polynomials
Fig. 1 shows the polynomials p i (χ) at a fixed multipole order of ℓ = 1000. They exhibit a sequence of oscillations roughly at the positions where the previous polynomials assume their maximal values. When computing the polynomials for the nonlinear CDM spectrum instead of the linear one, the sequence of oscillations is shifted to smaller comoving distances. The reason for this shift is the larger amplitude of the nonlinear spectrum P(k) for small k = ℓ/χ, which causes a larger contribution of the weak lensing spectrum to be generated at high χ.
The variation of the polynomials p i (χ) with distance χ and multipole order ℓ is depicted in Fig. 2, in this case for the polynomial i = 9. Again, small variations with multipole order ℓ are present, and the oscillations when varying χ are clearly seen.
Figure 1. Polynomials p i (χ), as a function of comoving distance χ, constructed with the Gram-Schmidt algorithm for the weak lensing spectrum at ℓ = 10 3 , with the lowest order polynomials on top, for both a linear (solid line) and a nonlinear (dashed line) CDM spectrum P(k).

Figure 2. Orthonormal polynomials p i (χ) as a function of multipole order ℓ and comoving distance χ. The order of the polynomial has been fixed to i = 9 and was constructed for a nonlinear CDM spectrum. One can see the slow variation of the p i (χ) polynomials with multipole order ℓ.

The orthonormality relation ⟨p i , p j ⟩ of the polynomials p i (χ) for the lensing signal as well as for the galaxy shape noise is given in Fig. 3. The polynomials are constructed to be orthogonal by the Gram-Schmidt procedure, but numerical noise is collected in the iterative process, such that the orthogonality relation is better fulfilled at small i compared to larger i. Deviations from ⟨p i , p j ⟩ = 0 for i ≠ j are of the order of ∼ 10 −15 for low-order polynomials, but increase to values of ∼ 10 −4 at high order. This deterioration in orthogonality is a known drawback of the Gram-Schmidt procedure, in particular when dealing with sets of functions instead of sets of vectors, which means that there is larger numerical noise in the evaluation of the scalar products, but it is acceptable for the number of basis polynomials we are using in this work. One can already notice the main difference between the different approaches in tomography: whereas the noise covariance would be diagonal in classic tomography with a very complicated structure of the signal covariance, in our case the shapes of the signal and noise covariance matrix are interchanged. Fig. 4 gives an impression of the lensing efficiency function W i (χ) modified by the polynomials p i (χ) on an angular scale of ℓ = 1000: in comparison to that of weak shear without tomography, the seemingly messy functions W i (χ) disentangle and approach zero at large distances. Finally, polynomial-weighted weak lensing spectra S ii (ℓ) = ⟨p i , p i ⟩ are shown in Fig. 5. The spectra drop in amplitude, which is mostly an effect of the absence of normalisation, and there are in fact differences in shape when higher-order polynomials are used (compare Fig. A1 in the appendix, where all spectra are scaled such that they assume the same value at a certain multipole). By construction, these spectra provide statistically independent information on the cosmic large-scale structure. In the next sections, we will investigate statistical bounds and systematical errors on cosmological parameters when the information from different spectra is combined.
Figure 5. Polynomial-weighted weak lensing spectra S ii (ℓ) as a function of multipole order ℓ, for both a linear (solid line) and a nonlinear (dashed line) CDM-spectrum P(k). The spectra decrease in amplitude with increasing polynomial order i for unnormalised polynomials.
variances of weighted ellipticities
For deriving the expressions for the signal covariance and the noise covariance needed in forecasting statistical and systematical errors, we derive expressions for the mean and the variance for a weighted set of ellipticities. This derivation is done for a discrete set of weighting coefficients w m and then generalised to the continuous case, where the weighting is done with a polynomial p i (χ). The distribution p(ǫ)dǫ of ellipticities ǫ is assumed to be Gaussian, with a zero mean, variance σ ǫ and the ellipticities are taken to be intrinsically uncorrelated (i.e. intrinsic alignment effects are discarded, see Schäfer 2009, for a review).
From a measurement of a set of ellipticities ǫ m drawn from the parent distribution p(ǫ)dǫ one can estimate the shear by computing the weighted mean ǭ, and the expectation value ⟨ǭ⟩ of the weighted mean if the drawing of ellipticities is repeated, which vanishes if the mean ellipticity vanishes, ⟨ǫ m ⟩ = 0. For the variance ⟨ǭ 2 ⟩ one obtains, under the assumption of intrinsically uncorrelated ellipticities, ⟨ǫ m ǫ n ⟩ = σ 2 ǫ δ mn , an expression which reduces to the classic Poissonian result if w m is either 0 or 1, because w 2 m = w m , and N is defined as the effective number of ellipticities in the sample. The cross-variance for two different sets of weights w m and v n is obtained analogously. In the continuum limit we make the transition Σ m . . . → n̄ ∫ dχ q(χ) . . .
with the unit normalised galaxy distance distribution q(χ)dχ. The discrete weights w m and v m will be replaced by the set of polynomials p i (χ) and p j (χ). For conserving the normalisation of the weighted galaxy distance distribution q(χ)dχ, we normalise the polynomials such that the weak shear spectrum S i j (ℓ) is diagonal by construction, and the noise covariance N i j involves the shape noise σ ǫ and the mean density of galaxies n̄ per steradian, for which we substitute the numbers σ ǫ = 0.3 and n̄ = 40/arcmin 2 projected for EUCLID. It can already be seen from the expression for N i j that the weighting of data with high-order polynomials p i (χ) will be noisy: q(χ) is a slowly varying function and the integrals ∫ dχ q(χ)p i (χ) in the denominator will assume small values if the polynomial is rapidly oscillating. This will ultimately limit the order of the usable polynomials. Again, for i = j = 0, the standard Poissonian expression N 00 = σ 2 ǫ /n̄ is recovered due to the normalisation of q(z), as well as the weak shear spectrum S 00 (ℓ) = C κ (ℓ).
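The noise covariance described above translates into a short numerical routine; the sketch below (illustrative, not the original code) assumes a unit-normalised distance distribution q_chi(χ) and two weighting polynomials given as callables, and converts the EUCLID numbers σ ǫ = 0.3 and n̄ = 40 galaxies per arcmin² to steradians.

import numpy as np
from scipy.integrate import quad

sigma_eps = 0.3
n_bar = 40.0 * (180.0 * 60.0 / np.pi)**2    # galaxies per steradian

def noise_covariance(p_i, p_j, q_chi, chi_max):
    """N_ij = sigma_eps^2 / n_bar * int dchi q p_i p_j /
              (int dchi q p_i * int dchi q p_j)."""
    num = quad(lambda chi: q_chi(chi) * p_i(chi) * p_j(chi), 0.0, chi_max)[0]
    den_i = quad(lambda chi: q_chi(chi) * p_i(chi), 0.0, chi_max)[0]
    den_j = quad(lambda chi: q_chi(chi) * p_j(chi), 0.0, chi_max)[0]
    return sigma_eps**2 / n_bar * num / (den_i * den_j)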
Fisher-analysis
The likelihood function for observing Gaussian-distributed modes κ i (ℓ) of the p i (χ)-weighted weak lensing convergence for a given parameter set x µ is defined as a Gaussian in these modes (see Tegmark et al. 1997; Carron et al. 2011), with the covariance C i j (ℓ, ℓ ′ ) ≡ ⟨κ i (ℓ)κ * j (ℓ ′ )⟩ which is diagonal in ℓ for homogeneous random fields. The χ 2 -functional, L ∝ exp(−χ 2 /2), can be obtained from the logarithmic likelihood, with the definition of the data matrix D i j = κ i (ℓ)κ j (ℓ), using the relation ln det(C) = tr ln(C) and discarding irrelevant multiplicative prefactors. The second derivatives of the χ 2 -functional with respect to cosmological parameters x µ evaluated at the point of maximum likelihood yield the Fisher matrix, with multiplicity 2ℓ + 1 because on each angular scale ℓ there are 2ℓ + 1 statistically independent m-modes. The covariance C i j = S i j + N i j can be split up into the signal covariance S i j and the noise covariance N i j .

Figure 6. Derivatives tr(∂ Ωm ln C) 2 (solid lines) and tr(∂ w ln C) 2 (dashed lines) as a function of multipole order ℓ and cumulative polynomial order q. The derivatives have been weighted with the EUCLID covariance.
We will work in the limit ∂S i j /∂x µ ≫ ∂N i j /∂x µ which is well justified in our case. Fig. 6 gives an example of how the combination of multiple line of sight-weighted measurements helps to avoid weaknesses in the sensitivity towards cosmological parameters. We plot the sensitivity, i.e. the ratio between the derivative of the spectrum with respect to a cosmological parameter and the covariance of the measurement, which corresponds to the contributions to the diagonal elements of the Fisher-matrix per ℓ-mode. The sensitivities of the measurement with respect to the cosmological parameters Ω m and w are shown, which exhibit singularities at certain multipoles for a classical nontomographic measurement. This happens when a parameter affects a certain ℓ-range with a different sign than others and the spectrum pivots around this value when varying a parameter. Clearly, angular scales in the vicinity of that pivot scale do not add much sensitivity to the Fisher-matrix. The inclusion of a second polynomial, however, avoids this: from q ⩾ 1 onwards, singularities are lifted and the derivatives assume larger values with increasing numbers of polynomials, although this effect saturates for very large numbers of polynomials. At large multipoles, the influence of the shape noise can be observed, which causes the derivatives to drop rapidly and not to add significant sensitivity to the Fisher matrix at multipoles beyond ℓ ≃ 3000.
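For concreteness, one possible implementation of the Fisher matrix with the 2ℓ + 1 multiplicity, together with the resulting marginalised Cramér-Rao errors, is sketched below; the per-multipole covariance matrices C i j (ℓ) and their parameter derivatives are assumed to have been computed beforehand.

import numpy as np

def fisher_matrix(cov, dcov, ell_values):
    """F_mn = sum_ell (2 ell + 1)/2 tr(C^-1 dC/dx_m C^-1 dC/dx_n).

    cov[l]     : (q x q) covariance matrix C_ij at multipole ell_values[l]
    dcov[l][m] : (q x q) derivative of C_ij with respect to parameter m
    """
    n_par = len(dcov[0])
    F = np.zeros((n_par, n_par))
    for l, ell in enumerate(ell_values):
        C_inv = np.linalg.inv(cov[l])
        A = [C_inv @ dcov[l][m] for m in range(n_par)]
        for m in range(n_par):
            for n in range(m, n_par):
                contrib = 0.5 * (2 * ell + 1) * np.trace(A[m] @ A[n])
                F[m, n] += contrib
                if n != m:
                    F[n, m] += contrib
    return F

def cramer_rao_errors(F):
    """Marginalised 1-sigma errors sigma_m = sqrt((F^-1)_mm)."""
    return np.sqrt(np.diag(np.linalg.inv(F)))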
statistical errors
From the Fisher-matrix F µν one can obtain the Cramér-Rao errors σ µ = √ (F −1 ) µµ . Fig. 7 summarises statistical errors on cosmological parameters resulting from the Fisher-matrix analysis, cumulative in q and for ℓ = 1000 as well as for ℓ = 3000. As expected, statistical errors drop with larger numbers of polynomials and are smaller if more multipoles are considered. A very fascinating feature of the plot is the approximate scaling of the error with the inverse root of the number of polynomials, σ ∝ 1/ √ q, which one expects from Poissonian arguments because each spectrum S ii (ℓ) adds statistically independent information.
2-dimensional marginalised likelihoods for all parameter pairs are shown in Fig. 8, for ℓ = 1000 and ℓ = 3000 as the maximum multipole considered. In the confidence contours, up to 10 polynomials were combined. Clearly, the uncertainty in cosmological parameters decreases with larger numbers of polynomials used, as well as for an increased multipole range. Additionally, the parameter degeneracies change a little when more polynomials are used, which is caused by differing sensitivities of the lensing signal with distance. A 3-dimensional view of the marginalised likelihood of the parameters Ω m , σ 8 and w is given in Fig. 9. It illustrates nicely the nested 1σ-ellipsoids which become smaller for larger numbers of polynomials combined in the measurement. Comparing the performance of the TaRDiS-polynomials to other tomographic methods shows that they operate on very similar levels of performance, perhaps with a small advantage for the cosmological parameters Ω m and w. This is due to the mechanism that the values in the Fisher-matrix are maximised if the covariance becomes diagonal due to a match between the true cosmology and the assumed cosmology used for constructing the polynomials. This should be taken with a grain of salt, however, because part of the statistical error would be transported to the systematical error budget. The impact of starting off with a wrong cosmological model for construction of the TaRDiS-polynomials on the estimation of parameters will be the topic of the next section.

Figure 8. Constraints on the cosmological parameters Ω m , σ 8 , h, n s and w from EUCLID using tomography with orthogonal polynomials. The ellipses mark 1σ confidence regions and decrease in size with increasing cumulative polynomial order (q = 0 in blue to q = 9 in green), for ℓ max = 1000 (dotted lines) and ℓ max = 3000 (solid lines).
signal to noise-ratio
Analogous to the definition of the Fisher-matrix, we construct the cumulative signal to noise-ratio Σ from the covariance matrices, weighting each ℓ-mode with the multiplicity 2ℓ + 1. The cumulative signal to noise-ratio Σ and the differential contribution dΣ/dℓ of each multipole are summarised in Fig. 10. Clearly, increasing q or ℓ increases the signal up to multipole orders of a few thousand. The growth of dΣ(ℓ)/dℓ ∝ √ 2ℓ + 1 is driven by the reduced cosmic variance until the shape noise limits the measurability of the spectra. Consequently, the integrated signal to noise-ratio levels off at a few hundred; adding statistically independent information by using new polynomials increases the signal strength by almost an order of magnitude and reaches values comparable to the primary CMB temperature anisotropy spectrum (Hu 2002b). These numbers correspond well to those derived by Takada & Jain (2009) if one works in the approximation of a Gaussian covariance; non-Gaussian contributions can significantly lower the signal strength. This can be expected from eqn. (40), as the signal to noise-ratio is invariant under orthogonal transformations diagonalising either S i j or N i j .
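A minimal sketch of how such a cumulative signal to noise-ratio can be accumulated over multipoles is given below; the covariance model and the prefactor convention are simplified assumptions and do not reproduce the paper's eqn. (40):

import numpy as np

def cumulative_sn(S_of_ell, N_of_ell, ell_max):
    """Accumulate a signal to noise-ratio Sigma over multipoles, weighting each
    ell-mode with its multiplicity 2*ell + 1. S_of_ell and N_of_ell are callables
    returning (q x q) signal and noise covariance matrices at multipole ell."""
    sigma2 = 0.0
    for ell in range(2, ell_max + 1):
        S = S_of_ell(ell)
        C = S + N_of_ell(ell)            # total covariance
        Cinv_S = np.linalg.solve(C, S)
        # one common convention for the prefactor; the paper's own definition may differ
        sigma2 += (2 * ell + 1) / 2.0 * np.trace(Cinv_S @ Cinv_S)
    return np.sqrt(sigma2)

# toy example: 2 polynomials with scale-free placeholder spectra and white noise
q = 2
toy_S = lambda ell: np.eye(q) * 1e-9 * (ell / 100.0) ** -1
toy_N = lambda ell: np.eye(q) * 4e-10
print(cumulative_sn(toy_S, toy_N, 1000))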
SYSTEMATICAL ERRORS
Naturally, the set of polynomials used for analysing the data needs to be constructed for a specific cosmology, so the question arises whether an imprecise prior knowledge of the cosmological model has an impact on the parameter estimates from the weighted weak lensing spectra. Any incompleteness or imperfection in the model used for interpreting the data is going to shift the estimated parameter values away from their true values and introduces parameter estimation biases. Specifically, we consider three cases: firstly, a time-varying equation of state (w = −0.8 − 0.2(1 − a)) when in reality the equation of state of dark energy is constant (w = −0.9), i.e. a systematic which is not a degree of freedom of the model; secondly, wrongly assumed Ω m - and σ 8 -values (0.3 instead of 0.25 and 0.85 instead of 0.8, respectively) as a strong systematic. In addition, we compute biases in the estimation of cosmological parameters if the redshift distribution of galaxies substituted in the polynomial construction is not the true one.
parameter estimation bias
For the Gaussian likelihood function L ∝ exp(−χ 2 /2) with a parabolic χ 2 -functional one can identify χ 2 ≡ Σ ℓ tr(ln C + C −1 D) with the covariance C i j and the data matrix D i j . Now, the fit of the true model C t to the data would give rise to the correct χ 2 t -functional, whereas the assumption of a wrong model yields a functional χ 2 f which in general will assume its global minimum at parameter values different from those of χ 2 t . The distance δ between the best-fit values x t of the true model and x f of the false model will then be the parameter estimation bias. This parameter estimation bias can be quantified with a second-order Taylor expansion of the χ 2 f -functional for the wrong model around the best-fit point x t of the true model (see Cabré et al. 2007), where the parameter estimation bias vector is δ ≡ x f − x t . The best-fit position x f of χ 2 f can be recovered by extremisation of the ensemble-averaged χ 2 f , yielding a linear system of equations of the form Σ ν G µν δ ν = a µ , where the two quantities G µν and a µ follow from the derivatives of the averaged χ 2 f -functional, evaluated at x t , with D = C t and the multiplicity 2ℓ + 1 added for each ℓ-mode. The vector a µ reduces to zero if C t = C f . Furthermore, G µν simplifies to the Fisher-matrix F µν in the case of choosing the correct model, such that the parameter estimation bias vanishes. The same happens for q = 0, i.e. if only the polynomial p 0 (χ) ≡ 1 is used. The derivation outlined above makes use of standard identities for the derivatives of tr ln C and of the inverse covariance. This formalism is a generalisation of the case of diagonal covariance matrices (Cabré et al. 2007; Amara & Réfrégier 2008; Taburet et al. 2009; March et al. 2011), for which it has been shown to work well by comparison with results from Monte-Carlo Markov chains (Taburet et al. 2010). We need to employ this more general formalism because the signal covariance S i j (ℓ) ceases to be diagonal if the cosmology used for constructing the polynomials differs from the true cosmology. Other examples of non-diagonal covariances include 3-dimensional cosmic shear (Heavens 2003; Kitching et al. 2008). It should be emphasised that we are only investigating the impact of systematics or a wrongly chosen initial cosmology on the construction of polynomials; systematics control plays a very important role in parameter estimation from weak lensing (among others, see King & Schneider 2002; Huterer et al. 2006; Bridle & King 2007; Semboloni et al. 2011), which we are not touching here.
Figure 9. 1σ-ellipsoids in the space spanned by (Ω m , σ 8 , w), cumulative in polynomial order q, from q = 2 (largest ellipsoid) up to q = 9 (smallest ellipsoid) with the maximum multipole order set to ℓ = 3000. Observational characteristics correspond to those of the EUCLID mission.
Figure 10. Signal to noise-ratio Σ(ℓ) (solid lines) and the contribution dΣ/dℓ by each multipole (dashed lines) as a function of inverse angular scale ℓ and cumulative polynomial order q. The shape noise corresponds to that of the projected EUCLID performance.
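Once the quantities G µν and a µ have been assembled from the derivatives of the averaged χ 2 f -functional as described above, the estimation bias follows from solving the linear system; the sketch below illustrates only this last step, with placeholder numbers for G and a:

import numpy as np

# Placeholder values for G_{mu nu} and a_mu; in practice they are assembled from
# the derivatives of the ensemble-averaged chi^2_f-functional (expressions not
# reproduced here).
G = np.array([[3.0e4, 0.9e4],
              [0.9e4, 2.0e4]])
a = np.array([12.0, -5.0])

# Parameter estimation bias delta = x_f - x_t solves sum_nu G_{mu nu} delta_nu = a_mu
delta = np.linalg.solve(G, a)
print("bias vector:", delta)

# Consistency check from the text: if the assumed model equals the true one,
# a_mu = 0 and the bias vanishes.
print("zero-systematic bias:", np.linalg.solve(G, np.zeros(2)))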
systematical errors
We constructed TaRDiS-polynomials for three wrong assumptions: As the first example, we consider a rather small change in the dark energy model, namely a time varying equation of state instead of a constant one. These two cosmologies have the same average dark energy equation of state, but the degree of freedom of a time varying w is not contained in the true cosmology. For this case, Fig. 11 shows the estimation bias in the true dark energy model, when the model used for constructing the polynomials had been a time varying equation of state with the same average eos-parameter. The figure illustrates that such a mistake has a minor impact on the estimation of parameters, as the biases are smaller than the statistical precision by at least an order of magnitude. Most of the estimation bias can be found in the normalisation σ 8 . Biases in the estimation of cosmological parameters arising if the wrong values for Ω m and σ 8 have been used in the construction of the polynomials are summarised in Fig. 12. This case is a much stronger systematic compared to the previous one. There are strong biases in particular in Ω m and σ 8 , and to a lesser extent in n s , whereas h and w are not strongly affected. Furthermore, the biases in Ω m and σ 8 are in a direction almost orthogonal to the orientation of the degeneracy, indicating a particularly strong impact. These estimation biases, however, can be reduced by including a larger number of polynomials. In that way, a reduction of the bias to values similar to the statistical error is possible.
Finally, biases due to using a convolved redshift distribution in the polynomial construction, whereas the signal is in reality generated by the unconvolved redshift distribution, have been computed in Fig. 13. As a very simple model for the error in the measurement of the galaxy redshift distribution q(z)dz we used the convolution with a Gaussian (Ma et al. 2006), which is of course too coarse to model errors in a proper weak lensing measurement, but will serve as an example. In contrast to the previous example, systematical errors in the parameters increase with larger numbers of polynomials used, and reach values of the order of the statistical error, in particular for Ω m and σ 8 , and to a lesser extent for h. Quite similarly, the direction of the bias moves the best-fit point quickly away from the true cosmology because it is at roughly right angles relative to the degeneracy direction. Given the magnitude of the systematical errors in comparison to the statistical accuracies, parameter estimation biases due to a wrongly chosen cosmology appear unlikely to affect measurements. In particular, if values estimated from unweighted tomography are used, the cosmological parameters are known well enough that TaRDiS-polynomials can be constructed in a reliable way, and consequent parameter estimation biases would always be subdominant in relation to the statistical errors. In addition, it is always possible to iterate between parameter estimation and polynomial construction for narrowing down estimation biases.
cross validation
A possible way of validating the correctness of the assumed cosmology for the construction of the orthogonal set of polynomials would be to estimate the cross spectra C i j (ℓ), i ≠ j, which should be compatible with zero for the correct choice (similarly to Huterer & White 2005), or alternatively, one can take advantage of the non-commutativity [S , N] = S N − NS ≠ 0 of the signal covariance S and the noise covariance N. If the polynomials have been constructed for the true cosmology with the first normalisation variant eqn. (23), S for unit-normalised polynomials is equal to the unit matrix and would always commute with N. In the case of the wrong cosmology, S and N are symmetric matrices with different eigensystems, and [S , N] does not vanish. In Fig. 14 this commutator is given as a function of multipole order ℓ and depending on the number of polynomials used; as a single number quantifying the non-commutativity, one can consider the trace tr [S , N] 2 . Given the minor impact of choosing a wrong cosmological model on parameter estimation from weak lensing spectra weighted with orthogonal polynomials, we point out that it should be possible to employ the polynomials iteratively by alternating between parameter estimation and polynomial construction. At the same time we emphasise the possible usefulness of measuring the cross spectra C i j (ℓ), i ≠ j, or the commutator tr [S , N] 2 for validating the cosmological model used for data analysis.
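The commutator diagnostic is straightforward to evaluate numerically; a small sketch with placeholder covariance matrices is given below:

import numpy as np

def commutator_diagnostic(S, N):
    """Return tr([S, N]^2) as a single-number measure of the non-commutativity of
    the signal and noise covariances. For symmetric S and N the commutator is
    antisymmetric, so this trace is non-positive and equals zero exactly when
    S and N commute."""
    comm = S @ N - N @ S
    return np.trace(comm @ comm)

# For the correct cosmology with unit-normalised polynomials, S is the unit
# matrix and commutes with N, so the diagnostic vanishes.
N = np.diag([1.0, 2.0, 3.0])
print(commutator_diagnostic(np.eye(3), N))        # 0.0

# A wrongly constructed S (different eigensystem than N) gives a nonzero value.
S_wrong = np.array([[1.0, 0.3, 0.0],
                    [0.3, 1.0, 0.1],
                    [0.0, 0.1, 1.0]])
print(commutator_diagnostic(S_wrong, N))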
SUMMARY
The topic of this paper is a novel method of carrying out tomographic weak lensing measurements by line of sight-weighting of the weak lensing convergence with specifically constructed orthogonal polynomials.
(i) The TaRDiS-polynomials have been constructed with a Gram-Schmidt orthogonalisation procedure in order to diagonalise the signal covariance matrix, and to project out statistically independent information of the weak lensing field by performing weak lensing tomography (a minimal numerical sketch of this orthogonalisation principle is given after this list). They differ from traditional tomography and 3-dimensional weak lensing methods in that the signal covariance is diagonal instead of the noise covariance.
(ii) The statistical error forecasts for cosmological parameters were investigated with the Fisher-matrix formalism. Because of the polynomials' property of providing statistically independent information on the weak convergence field, one sees a particularly simple square-root scaling of the signal to noise-ratio and the statistical errors with the maximum multipole considered and the number of polynomials used. Clearly, statistical errors can be reduced when combining polynomials, where a small advantage can be observed in comparison to traditional tomography. This is due to the fact that in the case of a match between the assumed and the true cosmology, the Fisher-matrix assumes the largest possible values because of the diagonalised covariance and consequently provides the smallest statistical errors.
Figure 11. Parameter estimation biases (δ µ , δ ν ) in the parameters Ω m , σ 8 , h, n s and w, superimposed on the 1σ-confidence regions, if the polynomials p i (χ) have been constructed for wCDM with an evolving equation of state parameterised by w 0 = −0.8 and w a = −0.2 instead of wCDM with a constant equation of state with w = −0.9. In the estimation biases, the dot colour is proportional to the cumulative order q of the polynomials, and the bias is plotted again as a function of q, for Ω m (dots), σ 8 (squares), h (lozenges), n s (triangles, pointing up), w (triangles, pointing down). For all computations, the EUCLID survey characteristics were used, and the data was accumulated up to ℓ max = 1000.
(iii) We extended the Fisher-matrix formalism for investigating how the assumption of a wrong cosmology in the construction of the polynomials impacts the estimation of cosmological parameters and to what extent biases are introduced. We assumed three systematically wrong priors: the assumption of a time varying instead of a constant dark energy equation of state parameter, as an example of a degree of freedom not contained in the model; a wrong Ω m - and σ 8 -pair, as being the two most prominent parameters determining the strength of the weak lensing signal; and finally a convolved galaxy redshift distribution. All three systematics had a minor impact on the parameter estimation, with the biases decreasing if a larger number of polynomials was used. With 10 polynomials the biases were of the order of the statistical error even for strong mismatches in the choice of the initial cosmology. Only the assumption of a wrong redshift distribution generates roughly constant biases, which, when normalised to the statistical errors, increase with the number of polynomials used. Our extension of the Fisher-matrix formalism treats the parameter estimation biases in full generality and does not assume a diagonal shape of either the signal or the noise covariance.
(iv) The accuracy needed for constructing viable polynomials corresponds to the statistical error reachable with a simple unweighted convergence spectrum, which is readily available, and can be improved by adding additional priors in the form of the CMB likelihood, or parameter constraints from baryon acoustic oscillations or from supernovae.
(v) Additionally, it is possible to compute diagnostics for the choice of the prior cosmological model used for constructing the polynomials: Examples include the cross-spectra S i j (ℓ) for two different polynomials i ≠ j, which should vanish for correctly constructed polynomials, and the commutator [S , N] = S N − NS between the signal and the noise covariance, which should vanish if the signal covariance is equal to the unit matrix for a certain choice of normalisation for the polynomials.
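As referenced in item (i), a minimal numerical sketch of the orthogonalisation principle follows: a Gram-Schmidt procedure with respect to the inner product defined by a symmetric positive-definite 'signal' matrix, so that the transformed signal covariance becomes diagonal by construction. The matrix values below are placeholders, and the actual line of sight-weighting in comoving distance used for the TaRDiS-polynomials is not modelled here.

import numpy as np

def gram_schmidt_metric(S):
    """Gram-Schmidt orthogonalisation of the canonical basis with respect to the
    inner product <u, v> = u^T S v defined by a symmetric positive-definite matrix
    S. The rows of the returned matrix B satisfy B S B^T = identity, i.e. the
    'signal covariance' is diagonalised (here even unit-normalised) by construction."""
    n = S.shape[0]
    B = []
    for i in range(n):
        v = np.eye(n)[i]
        for b in B:
            v = v - (b @ S @ v) * b      # remove projections onto earlier vectors
        v = v / np.sqrt(v @ S @ v)       # unit-normalise in the S-metric
        B.append(v)
    return np.array(B)

# toy symmetric positive-definite "signal covariance" (placeholder values)
S = np.array([[2.0, 0.6, 0.1],
              [0.6, 1.5, 0.3],
              [0.1, 0.3, 1.0]])
B = gram_schmidt_metric(S)
print(np.round(B @ S @ B.T, 10))         # identity: signal covariance diagonalised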
We plan to extend our research on the usage of orthogonal polynomials to the weak lensing bispectrum in a future paper, and to carry out forecasts on dark energy cosmologies from combined constraints with spectrum and bispectrum tomography.
Figure 12. Parameter estimation biases (δ µ , δ ν ) in the cosmological parameters Ω m , σ 8 , h, n s and w, superimposed on the 1σ-confidence regions, if in the construction of the polynomials Ω m and σ 8 were set too high (0.275 and 0.85 instead of 0.25 and 0.8, respectively) relative to the fiducial cosmology. The colour of the dots indicates the cumulative polynomial order q, for Ω m (dots), σ 8 (squares), h (lozenges), n s (triangles, pointing up) and w (triangles, pointing down). For the plot, EUCLID survey characteristics were assumed, and the signal integrated up to ℓ max = 1000.
Figure 13. Biases (δ µ , δ ν ) in the cosmological parameters Ω m , σ 8 , h, n s and w, superimposed on the 1σ-confidence regions, if the true redshift distribution of galaxies differs from the observed one by a convolution with a Gaussian with σ z = 0.05. The dot colour changes with cumulative polynomial order q, and is replotted as a function of q in the inset for Ω m (dots), σ 8 (squares), h (lozenges), n s (triangles, pointing up) and w (triangles, pointing down). The errors and biases correspond to the EUCLID survey up to the multipole order ℓ max = 1000. | 2011-07-12T08:48:15.000Z | 2011-07-12T00:00:00.000 |
"year": 2011,
"sha1": "df4489afe28ae79f26a5d63f490cb44435607dbd",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/423/4/3445/4904481/mnras0423-3445.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "df4489afe28ae79f26a5d63f490cb44435607dbd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236209033 | pes2o/s2orc | v3-fos-license | Public Awareness, Knowledge of Availability, And Willingness to Use Neurosurgical Care Services in Africa: A Cross-Sectional E-Survey Protocol
Background: Barriers to care cause delays in seeking, reaching, and getting care. These delays affect low- and middle-income countries (LMICs), where 9 out of 10 LMIC inhabitants have no access to basic surgical care. Knowledge of healthcare utilization behavior within underserved communities is useful when developing and implementing health policies. Little is known about the neurosurgical health-seeking behavior of African adults. This study evaluates public awareness, knowledge of availability, and readiness for neurosurgical care services amongst African adults. Methodology: The cross-sectional study will be run using a self-administered e-survey hosted on Google Forms (Google, CA, USA) disseminated from 10th May 2021 to 10th June 2021. The questionnaire will be available in two languages, English and French. The survey will contain closed-ended, open-ended, and Likert Scale questions. The structured questionnaire will have four sections with 42 questions: Sociodemographic characteristics, Definition of neurosurgery care, Knowledge of neurosurgical diseases, practice and availability, and Common beliefs about neurosurgical care. All consenting adult Africans will be eligible. A minimum sample size of 424 will be used. Data will be analyzed using SPSS version 26 (IBM, WA, USA). Odds ratios and their 95% confidence intervals, the Chi-Square test, and ANOVA will be used to test for associations between independent and dependent variables. A P-value <0.05 will be considered statistically significant. Also, a multinomial regression model will be used. Dissemination: The study findings will be published in an academic peer-reviewed journal, and the abstract will be presented at an international conference.
Highlights: The burden of neurosurgical diseases is enormous in low- and middle-income countries, especially in Africa. Unfortunately, most neurosurgical needs in Africa are unmet because of delays in seeking, reaching, and getting care. Most efforts aimed at reducing barriers to care have focused on improving the neurosurgical workforce density and infrastructure. Little or no effort has been directed towards understanding or reducing the barriers to seeking care. We aimed to understand public awareness, willingness to use, and knowledge of the availability of neurosurgical care in Africa. The study findings can inform effective strategies that promote the utilization of neurosurgical services and patient education in Africa.
INTRODUCTION
More than two-thirds of the global population lacks access to appropriate surgical care. An estimated 140 million necessary surgical procedures are left undone, resulting in excessive economic costs and profound disability and death [1]. There is a gross discrepancy in the availability of health resources for surgery, leaving a huge geographical treatment gap; this gap is seen particularly in Africa and Southeast Asia [2]. A huge disproportion exists in healthcare access in low- and middle-income countries (LMICs), where 9 out of 10 people have no access to basic surgical care [3]. Multiple factors increase the delay to surgical care [4,5]. A widely used framework in many areas of global health, including surgery, is the three-delay framework: delays in seeking, reaching, and receiving health care [6,7].
A review by Grimes et al. mentioned that barriers that limit access to surgery in LMICs are cultural (acceptability) and structural (accessibility), and to a significant extent, financial (affordability) [8]. Other studies identified low levels of health literacy and outdated medical facilities as barriers to surgical care [8][9][10][11][12][13][14][15]. Most of the efforts to reduce the barriers to neurosurgical care have been focused on improving the workforce density and the infrastructure via the provision of standard neurosurgical education and training and the provision of neurosurgical infrastructure [16,17]. While this is important, patients who need the services will only use them if they are willing to and are aware that these services exist in their community.
Healthcare-seeking behavior (HSB) is "any action or inaction which is undertaken by individuals who perceive themselves to have a health problem or to be ill to find an appropriate remedy" [18]. HSB can be appropriate or inappropriate. Appropriate HSB refers to "consulting a qualified medical professional or seeking healthcare at orthodox health facilities such as private clinics, primary health centers, and general hospitals during illness episodes or any situation requiring medical attention" [18]. Inappropriate HSB has been linked to worse health outcomes, increased morbidity and mortality, and poorer health statistics [19,20]. In Nigeria, rural dwellers are more likely to exhibit inappropriate HSB than urban dwellers [21]. Other determinants of HSB include geographical proximity, socio-economic class, culture, political affiliation, education level, and the healthcare systems themselves [22,23]. Most rural dwellers use health facilities outside their villages because they lack access to these services in their local areas [24].
Africans attribute metaphysical characteristics, such as witchcraft/sorcery, to neurological diseases, including epilepsy, and this, in turn, causes stigmatization of people with epilepsy, thereby affecting the HSB [24]. In Ethiopia, patients report that their religion plays a critical role in their attitude towards neurosurgical procedures [25]. Similarly, Nigerian patients turn to religion for comfort after learning about their neurosurgical disease [26]. In Kenya, almost 70% of pregnant women within households in the upper socio-economic stratum deliver in health facilities compared with 42% among pregnant women in the middle socio-economic stratum and 38% in the low socio-economic stratum [27].
The lack of knowledge and general misperceptions regarding neurosurgical care are considerable barriers to its use, and further education of the general public would be of great value. To develop effective strategies to promote the utilization of neurosurgical services in Africa, we must explore the demographic characteristics and barriers related to the general public's awareness of neurosurgical services, intentions for use, and knowledge of availability. Knowledge of healthcare utilization behavior within communities is useful when developing health policies and implementing health programs [28]. We therefore aim with this paper to understand the public's awareness of neurosurgical care, willingness to use it, and knowledge of its availability.
AIMS AND OBJECTIVES
Aims: To map out public awareness, knowledge of availability, and readiness for neurosurgical care services amongst adult Africans.
STUDY DESIGN
This cross-sectional study will use a self-administered e-survey to collect data on public awareness, knowledge of availability, and readiness for neurosurgical care services in Africa. A team of African neurosurgeons and patients will be consulted to assess the questionnaire's face validity. This team will check the questionnaire for duplication, confusing terms, and leading questions. We will use the team's feedback to improve the survey. Next, the questionnaire will be piloted among an arbitrary sample of 20 respondents to identify technical issues with the survey. We will use Cronbach's alpha to evaluate the questionnaire's internal consistency and principal component analysis to identify factor loadings. Next, we will make revisions accordingly.
ELIGIBILITY CRITERIA
• Inclusion Criteria: All consenting adult Africans who are not studying towards a healthcare degree or working in the healthcare sector.
• Exclusion Criteria: All medical/dental/nursing/allied health students and workers. This group will be excluded because it is expected that members will have better knowledge and more positive attitudes towards neurosurgical care than the general public.
DATA COLLECTION TOOLS AND TECHNIQUE
Self-administered structured Google Form (Google, CA, USA) questionnaires in French and English will be used for the study. The survey will contain closed-ended, open-ended, and Likert Scale questions. The structured questionnaire will have four sections (A-D):
Section A - Sociodemographic characteristics (7): age, gender, marital status, nationality, occupation, urban/rural settlement, length of living in the region.
Section B - Definition of neurosurgery care (2): respondents will be asked to define neurosurgery in their own words, then asked to select from several options the disorders that can be treated with neurosurgical care.
Section C - Knowledge of neurosurgical diseases, practice, and availability (11).
Section D - Common beliefs about neurosurgical care (12).
Written consent will be obtained from the respondents before the administration of questionnaires.
SAMPLE SIZE
The minimum sample size was determined using the normal approximation to the hypergeometric distribution [29]: n = Nz^2pq / (E^2(N-1) + z^2pq), where n is the minimum required sample size, N the population size, z the standard normal deviate corresponding to the chosen confidence level, p the anticipated proportion, q = 1 - p, and E the tolerable margin of error. Therefore, the sample size (n) is equal to 424.
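A small sketch of the sample-size formula quoted above is given below; the input values (population size, confidence level, anticipated proportion, margin of error) are placeholders rather than the authors' actual inputs, so the output is not expected to reproduce the figure of 424:

import math

def sample_size(N, z, p, E):
    """Minimum sample size from the normal approximation to the hypergeometric:
    n = N z^2 p q / (E^2 (N - 1) + z^2 p q), with q = 1 - p."""
    q = 1.0 - p
    n = (N * z**2 * p * q) / (E**2 * (N - 1) + z**2 * p * q)
    return math.ceil(n)

# Placeholder inputs (NOT the authors' values): population of 1,000,000 adults,
# 95% confidence (z = 1.96), p = 0.5 for maximum variability, 5% margin of error.
print(sample_size(N=1_000_000, z=1.96, p=0.5, E=0.05))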
OUTCOME MEASURES
The primary outcome is the level of knowledge about the availability of the services. Secondary outcomes will include the level of knowledge and beliefs about neurosurgical disease and the knowledge level of neurosurgical practice in Africa.
DATA ANALYSIS
Independent variables will include age, sex, marital status, profession, nationality, urban/rural residency, previous experience with neurosurgery. Dependent variables will include the level of knowledge and willingness to use neurosurgical services.
Data will be analyzed using SPSS version 26 (IBM, WA, USA). The analyzed univariable data will be presented as frequency tables and charts. The levels of knowledge will be stratified into five equally spaced groups: deficient, sufficient, satisfactory, good, and very good. A Q-Q plot will be used to evaluate the distribution of quantitative data. Also, odds ratios and their 95% confidence intervals, the Chi-Square or Fisher's Exact Test, and ANOVA or the Kruskal-Wallis test will be used to quantify associations between independent and dependent variables. Next, the independent variables will be integrated into a multinomial regression model.
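Although the analysis will be run in SPSS as stated, the sketch below illustrates the same kind of association testing with standard Python tooling; the contingency table is made-up illustrative data and the variable coding is hypothetical:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = urban/rural residency, columns = adequate/
# inadequate knowledge of neurosurgical care (illustrative counts only).
table = np.array([[120, 80],
                  [60, 164]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# Odds ratio with a 95% confidence interval via the log-odds standard error.
a, b, c, d = table.ravel()
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = np.exp(np.log(odds_ratio) - 1.96 * se_log_or)
ci_high = np.exp(np.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")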
ETHICS
Approval to carry out this study was obtained from the institutional review board of Bel Campus University of Technology, Kinshasa, DRC (No 2135488). The participants will be informed of the reason and nature of the study. Informed consent will be sought, and the participants will be informed of their right to withdraw at any point of the study. Participants will not be coerced to participate in the study, and confidentiality will be maintained throughout the study.
LIMITATIONS
Internet access remains low in many African countries, making this study inaccessible to a wide audience, especially those who seek or have used neurosurgical facilities in remote areas. This provides an opportunity for researchers to conduct a large population-based survey in the future to gauge the extent of public awareness, knowledge of availability, and willingness to use neurosurgical care services in Africa, by including both the population in remote areas and those who do not speak English or French. Socioeconomic status and the education level of citizens are some of the factors that will limit the generalizability of our results. Citizens in the higher income bracket and those with a higher level of education are perhaps more likely to have more access to the internet and a better understanding of research goals.
CONCLUSION
Although there has been an increase in access to neurosurgical care in Africa, the unmet need remains significant. One reason this might be the case is that African neurosurgical patients experience multiple barriers to seeking care. Therefore, we must understand the role of HSBs in bridging the treatment gap. This study will survey public awareness of neurosurgical care services, intentions for use, knowledge of availability, and readiness for neurosurgical care services in Africa.
DISSEMINATION
We plan to publish this study in a peer-reviewed journal. We may also present it at local and/or international conferences.
FUNDING INFORMATION
There was no funding for the current study protocol.
COMPETING INTERESTS
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-07-25T05:27:27.981Z | 2021-07-13T00:00:00.000 | {
"year": 2021,
"sha1": "f849dccfc2d89a1c215e6cab683eb547f145c8ef",
"oa_license": "CCBY",
"oa_url": "http://www.ijsprotocols.com/articles/10.29337/ijsp.149/galley/235/download/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f849dccfc2d89a1c215e6cab683eb547f145c8ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253664610 | pes2o/s2orc | v3-fos-license | Synchronous hepatocellular carcinoma and renal cell carcinoma in young woman with sarcoidosis: A case report
Hepatocellular carcinoma (HCC) and clear cell renal carcinoma are both frequent cancers, especially in patients with risk factors such as cirrhosis in the first case or genetic mutations such as Li-Fraumeni syndrome in the second case; however, their synchronous appearance is very rare especially in young patients with no apparent predisposing factors. We describe the case of a 33-year-old woman with acute pain onset in right hypochondrium. The ultrasound (US) imaging and the contrast-enhanced computed tomography (CECT) of the abdomen revealed 2 abdominal masses: one in the VI-VII segments of the liver and the other one in the right kidney. The chest CECT study, acquired for staging purpose, detected multiple micronodules with patchy peri-bronchial distribution at both lungs. At the histological examination, the tumor arising from the right kidney was finally diagnosed as clear cell renal carcinoma, whereas the tumor arising from the right lateral hepatic lobe as HCC. The histological examination of lung lesions revealed sarcoidosis granulomas. The patient is still being followed up for the occurrence of lung and lymph node metastases from HCC 14 months later.
Introduction
Multiple synchronous primary malignancies have been reported since the 19th century [1]. The coexistence of hepatocellular carcinoma (HCC) and other primary malignancies ranges from 2.1% to 14.5% and is associated with chronic liver disease, older age and male gender, with a male-to-female ratio of 11:1 [2]. Gastrointestinal tumors are the most common extrahepatic primary malignancies associated with HCC [2].
The lesion in the right hepatic lobe exhibits contrast-enhancement features similar to the renal lesion, that is, significant enhancement in the arterial phase (white arrows in image A), washout in later phases, and an inhomogeneous appearance due to the presence of hypodense areas (curved arrows); it is not always easy to discern primary lesions from secondary lesions.
The incidence of synchronous primary cancers with renal cell cancer (RCC) is about 3.7%, and the most common site are urogenital system and lung [3] .
The coexistence of synchronous HCC and RCC is quite rare. We present the case of a young woman, with no predisposing factors, affected by synchronous HCC and RCC, treated with wedge hepatectomy and right nephrectomy.
Case report
A 33-year-old woman was admitted to the emergency department due to acute onset of abdominal pain in the right side without any reported trauma or other apparent causes.
The US abdominal examination revealed the presence of two masses: one at the right kidney, and the other one at the right lobe of the liver.
A contrast-enhanced computed tomography was then acquired and confirmed the two lesions. The one on the right kidney ( Fig. 1 ) measured 7 cm and showed intense arterial enhancement. The lesion on the VI-VII hepatic segments ( Fig. 2 ) measured 5 cm and showed arterial enhancement with washout in portal phase.
Also, an enhanced magnetic resonance imaging (MRI) study of the upper abdomen with hepato-specific contrast agent was performed and showed the same angio dynamic behavior of the lesions, which also had high signal intensity in DWI with high b-value (800) ( Fig. 3 ).
The physical examination was unremarkable. In her anamnestic history, the patient reported previously treated psoriasis, a mild smoking habit lasting about 15 years, no alcohol abuse, and a varied diet. At the time of symptom presentation, no drug therapy was reported.
Regarding family oncological history, the patient reported bladder cancer in the paternal grandfather and prostate cancer in the father.
After CT finding, further investigations were required for staging purposes: head and chest CT enhanced scan, bone scintigraphy, and liver and kidney biopsy for a definitive diagnosis.
The CT scan reported multiple areas of thickening with a tendency for peri-bronchial distribution to both lung fields; no lymphadenopathies, no brain lesions ( Fig. 4 ). Bone scintigraphy was negative.
To characterize the lung lesions, the patient underwent to a wedge resection through video-assisted thoracoscopic surgery. Subsequent histological analysis showed that the lung lesions were not secondary nodules, as suspected, but sarcoidosis granulomas.
Genetic surveys were also carried out to identify a genetic syndrome that could explain the presence of a double neoplasm in a young patient without familial predisposition and in the absence of hepatopathy. However, no specific genetic syndrome was detected.
The patient was referred to surgeons for surgical treatment of the 2 neoplasms. During the surgery, she underwent an intraoperative hepatic contrast-enhanced ultrasound that depicted additional nodules at the II, III, and VIII hepatic segments of doubtful nature. Thus, she underwent a resection of HCC at VI and VII segments and an additional wedge resection of the above segments plus right nephrectomy. The histological analysis revealed that the lesion at the III segment was focal nodular hyperplasia, whereas the nodules at II and VIII segments turned out to be sarcoidosis granulomas.
Fig. 5 -Histopathology images of HCC at 10 × (A) and 20 × (B-D) magnification. Images A and B show tumor cells with enlarged nuclei with high replicative activity (black arrows); images C and D exhibit a macro trabecular growth pattern (arrowheads); Grade III based on Edmondson Class.
The postoperative course was uneventful, and the patient was discharged on day 9 post-surgery. Approximately 1 year after surgery, a follow-up CT scan showed the appearance of several solid nodular formations in the right lung, characterized by radiopharmaceutical uptake on the PET-CT scan, suspicious for secondary pulmonary nodules.
The patient underwent a wedge resection through video-assisted thoracoscopic surgery for nodule excision, and the histological examination confirmed their secondary nature from HCC. The postoperative course was characterized by the appearance of a pneumothorax (PNX) that resolved in the following days with drainage.
The patient started systemic therapy with Lenvatinib. Four months later, on follow-up CT scan, there was evidence of suspicious mediastinal lymphadenopathy. The patient underwent endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA).
Histological examination was diagnostic of lymph node metastasis from high-grade HCC.
Due to disease progression, the systemic therapy was switched to second line Sorafenib and then to Cabozantinib as third line.
To date, the patient is still being followed up and the disease is stable.
In addition to systemic therapy, the patient started prednisone for sarcoidosis, amlodipine for hypertension, and levothyroxine for hypothyroidism.
Discussion
Multiple primary malignancies are defined as synchronous when they present simultaneously or within 6 months following the first diagnosis [4] .
With the increase in life expectancy and the development of improved diagnostic techniques, the frequency of multiple primary tumors has increased [4] .
According to Warren and Gates' criteria [5] , in the diagnosis of multiple primary malignancies, the occurrence of metastatic cells must be excluded. Cancers must be histologically distinct and present a different nuclear grading status.
Our patient has synchronous HCC and RCC that have been confirmed by histopathological and immunohistochemical examinations after surgery.
Dac Hong et al. [6] reported 8 cases of synchronous HCC and RCC from English literature: all these patients were male, 5 had hepatitis B with or without cirrhosis; 6 of them underwent resection, while in 2 cases, the malignancies were treated with radiofrequency ablation.
Surgery resection with adjuvant chemotherapy still represents the gold standard in multiple primary tumors treatment [4] .
All the reports confirmed that the prognosis of patients with multiple primary malignancies and that of patients with HCC alone do not significantly differ, as the survival rate depends on HCC evolution [4] ( Fig. 5 , Fig. 6 ).
Patient consent
Patient was informed of the publication of the case report and an informed consent was obtained. Furthermore, the patient is aware that the name will not appear in the text of the article. | 2022-11-19T16:07:56.954Z | 2022-11-17T00:00:00.000 | {
"year": 2022,
"sha1": "9fdc482bfbd8574dfd09e73d5d7bc2d8ecae6a28",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "911aee04d3bb59595fad22984c4938924352b696",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117894770 | pes2o/s2orc | v3-fos-license | Superquasicrystals: selfsimilar ordered structures with non-crystallographic point symmetries
We present a systematic method of constructing limit-quasiperiodic structures with non-crystallographic point symmetries. Such structures are aperiodic ordered structures distinct from quasicrystals, and we call them "superquasicrystals". They are sections of higher-dimensional limit-periodic structures constructed on "super-Bravais-lattices". We enumerate important super-Bravais-lattices. Superquasicrystals with strong selfsimilarities form an important subclass. A simplest example is a two-dimensional octagonal superquasicrystal.
have a self-similarity characterized by a Pisot number which is severely restricted by the super-Bravais-lattice. iv) As a simplest example, an octagonal SQC is presented.
We begin with a brief review of QCs (for a finer review, see [10]). The point groups which are important in application to physical QCs include D 8 (octagonal), D 10 (decagonal) and D 12 (dodecagonal) in two dimensions (2D) and Y h (icosahedral) in 3D; all these point groups are isometric. A QC having one of these point groups is a d-dimensional section of a hypercrystal in the 2d-dimensional space E 2d with d = 2 or 3 being the number of dimensions of the physical space E d . The entities of the hypercrystal are not atoms but geometric objects called hyperatoms (atomic surfaces, windows, etc.), which are bounded open domains in the internal space, E ⊥ d , i.e., the orthogonal complement to E d . We consider a hypercrystal constructed by locating one kind of hyperatoms onto the sites of a Bravais latticeΛ in E 2d . There exists only one Bravais lattice for each of the three 2D point groups, but there exist three types of icosahedral Bravais lattices, namely, primitive, face-centered and body-centered types. An infinite number of different QCs are obtained from a single hypercrystal by choosing different sections which are parallel to E d , and they form a so-called locally isomorphic class (LI-class) [3]. The point groupĜ ofΛ is isomorphic with a noncrystallographic point group G in d dimensions. More precisely,Ĝ = G ⊕G ⊥ , where G and G ⊥ (≃ G) are groups acting onto E d and E ⊥ d , respectively. Let π and π ⊥ be the projectors projecting E 2d onto E d and E ⊥ d , respectively. Then, Λ := πΛ and Λ ⊥ := π ⊥Λ are modules with d generators, and are dense sets. They have a scaling symmetry whose scale is a quadratic irrational, τ , where τ = 1 + √ 2, 1 2 (1 + √ 5 ) or 2 + √ 3 for the case of the three 2D point groups, D 8 , D 10 or D 12 , respectively, while τ = 2 + √ 5 for the primitive-icosahedral case but τ = (1 + √ 5 )/2 for the other two icosahedral cases; τ is a Pisot number and also a unit in Z[τ ] := {n + mτ | n, m ∈ Z}. Let ϕ be the scaling operation with the ratio τ . Then, any member of the infinite group, G := G, ϕ , generated by G and ϕ is an automorphism of Λ. A QC is "point diffractive" in the sense that its structure factor is composed only of Bragg spots. The position vectors of the Bragg spots form the Fourier module, which corresponds to the reciprocal lattice for a periodic crystal. The Fourier module is given by Λ * := πΛ * ⊂ E * d withΛ * being the reciprocal lattice ofΛ and E * d the dual space to E d . We have already seen several examples of triads of objects, {X, X ⊥ ,X}, associated with the three worlds, E d , E ⊥ d and E 2d , where X = πX and X ⊥ := π ⊥X . Any member of a triad uniquely determines the remaining two; in particular,X = π −1 X is the lifted version of X. Many relationships are isomorphic among the three worlds, and a relationship in one of the three worlds can be readily translated to those in the other two [4]; if X and X ⊥ above are sets, the symbol "⊥" turns out a bijection from X onto X ⊥ . There exists an important triad of linear maps {ϕ, ϕ ⊥ ,φ} with ϕ ⊥ being a scaling with the ratioτ , the algebraic conjugate of τ , whereasφ = ϕ ⊕ ϕ ⊥ is an automorphism ofΛ:φΛ =Λ. It follows that a QC has a self-similarity with ratio τ [4,11].
Another important triad of linear maps is associated with a bi-similarity transformationσ = σ ⊕ σ ⊥ , where σ and σ ⊥ are similarity transformations acting onto E d and E ⊥ d , respectively, whileσ satisfiesΛ 1 :=σΛ ⊂Λ.Λ 1 is the last member of the triads {Λ 1 , Λ ⊥ 1 ,Λ 1 }, in which Λ 1 = σΛ and Λ ⊥ 1 = σ ⊥ Λ ⊥ are similar submodules (SSMs) of Λ and Λ ⊥ , respectively. Moreover, Λ 1 as well as Λ is invariant against the action of G. We shall denote by m the index of Λ 1 in Λ: m := [Λ : Λ 1 ]. The scaling transformation ϕ is a special similarity transformation since it is invertible and, consequently, m = 1. However, if m > 1, Λ is a larger set than Λ 1 , and hence σ is not invertible in Λ.Λ 1 in this case is a sublattice ofΛ or, in the terminology of crystallography, a superlattice of Λ, and is divided into m equivalent sublattices toΛ 1 . We shall callΛ 1 a quasi-similar superlattice ofΛ. The similarity transformation σ combining an SSM, Λ 1 , with Λ by Λ 1 = σΛ is, however, not uniquely determined by Λ 1 because any member of the set G(Λ 1 ) := σG = Gσ satisfies the same condition. The set of all the similarity ratios of the members of G(Λ 1 ) is given by {|σ|τ k | k ∈ Z}. LetB be the set of all the quasi-similar superlattices ofΛ. Then, it is a member of a triad, {B, B ⊥ ,B}, and B is the denumerable set of all the SSMs of Λ. The set, Σ, of all the similarity transformations associated with B form a semigroup. We may expect that there exists a bijection between B and the quotient semigroup, Σ/G. This is true provided that, for the case of the 2D point group D 12 , G is slightly modified as made in the next paragraph.
Prior to proceeding to the subject of super-Bravais-lattices, we shall investigate in more detail SSMs of Λ for the case of the 2D point group D p with p = 8, 10 or 12. An important member of B is the one written as σ p = |σ p |ρ 2p with |σ p | := 2 cos (π/p) and ρ k being the rotation operation through 2π/k [11,12]. The index m p of the SSM, σ p Λ, is equal to two, five or one for p = 8, 10 or 12, respectively. Since σ 12 is invertible and |σ 12 | = √ τ , we have to redefine the map ϕ for p = 12 by σ 12 and, correspondingly, the automorphism group G of Λ is redefined. There exist two types of SSMs: we call Λ 1 = σΛ a type I or II SSM if σ is chosen to be a simple scaling or not, respectively. A complete discussion for possible SSMs has been made in [12]. A type I SSM is written with a positive number ν ∈ Z[τ ] as νΛ, and its index is given by m = [N(ν)] 2 with N(ν) := νν. A simplest SSM for p = 12, for example, is given by (1 + √ 3 )Λ, whose index is equal to four. On the other hand, type II SSMs are somewhat complicated. We shall call a type II SSM proper if the point group D p of Λ leaves it invariant. If it is not proper, the common point group between it and Λ is equal to C p . In this report, we shall ignore "improper" type II SSMs (cf. [11]). Then, there exist no type II SSMs for p = 12. A simplest type II SSM for p = 8 or 10 is σ p Λ. A general type II SSM is written with ν ∈ Z[τ ] as νσ p Λ and m = m p [N(ν)] 2 . We may write σ := νσ p = |σ|ρ 2p with |σ| = ν|σ p |. Then, σ 2 Λ is identical to the type I SSM, . The modules Λ n := σ n Λ, ∀n ∈ N, satisfy Λ ⊃ Λ 1 ⊃ Λ 2 ⊃ · · · and [Λ : Λ n ] = m n . We shall denote by " n ≡" the equivalence relationship introduced into Λ by the residue module, Λ/Λ n . Its important property is the following: if ℓ As a consequence, Λ becomes a normed module if a non archimedean norm of ℓ ∈ Λ is defined by ||ℓ|| := 1/2 n with n being the largest number satisfying ℓ n ≡ 0. It is just a metric space called an inverse system; different vectors in Λ are "coloured" by different colours, and Λ is regarded as a "coloured module" with an infinite number of colours. This structure can be transferred toΛ, yielding a 2d-dimensional coloured lattice, L. We shall call L a super-Bravais-lattice because different supercrystals are constructed on it as shown shortly. SinceΛ n := π −1 Λ n is superlattice ofΛ, L is, intuitively, a recursive superlattice structure. The map σ as well as the group G acts naturally onto L, and σL ⊂ L. We shall turn our arguments to the dual space (or the reciprocal space). The modules Λ * n := σ −n Λ * , ∀n ∈ Z, satisfy Λ * n = σΛ * n+1 ⊂ Λ * n+1 . The denumerable setΛ * ∞ := π −1 Λ * ∞ with Λ * ∞ := Λ * 0 ∪Λ * 1 ∪Λ * 2 ∪· · · is the dual module to L. It is not finitely generated in contrast to Λ * . It is invariant against the action of the infinite group, G, σ . We can divide Λ * ∞ into the disjoint sets, ∆ n := Λ * n − Λ * n−1 , n ∈ Z, which are not modules. They are invariant against the action of G and satisfy ∆ n = σ∆ n+1 and ∆ n ⊂ Λ * n . We may write as Λ * ∞ = Λ * 0 + ∆ 1 + ∆ 2 + · · · because Λ * 0 is disjoint with ∆ n , n > 0. Any vector in ∆ n is indexed with the generators of Λ * n by 2d-integers. In order to construct a supercrystal on the super-Bravais-lattice, L, we need a set A consisting of all the allowed hyperatoms, whose diameters are assumed to be bounded by a positive number. We can identify a hyperatom with its characteristic function on E ⊥ d , and A is embedded into L 1 , the function space with the p = 1 norm. 
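As a small numerical check of the index formula m = [N(ν)]² quoted above, the sketch below evaluates the algebraic norm for ν = 1 + √3, writing elements of Z[τ] (τ = 2 + √3) in the equivalent form a + b√3 and using the standard quadratic-field conjugation (stated here as an assumption about the notation); it recovers the index of four given for the simplest dodecagonal type I SSM.

def norm_in_Z_sqrt3(a, b):
    """Algebraic norm N(nu) = nu * nubar for nu = a + b*sqrt(3), with the
    conjugate nubar = a - b*sqrt(3)."""
    return a * a - 3 * b * b

# nu = 1 + sqrt(3): simplest type I SSM quoted for the dodecagonal case (p = 12)
N_nu = norm_in_Z_sqrt3(1, 1)
index_m = N_nu ** 2
print(N_nu, index_m)   # prints -2 and 4, i.e. index m = [N(nu)]^2 = 4 as stated in the text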
Let α be a map from Λ into A. Then, a supercrystal is specified by L and α; we shall denote it by S(L, α). The hyperatoms in a supercrystal are not uniform but their shapes, sizes, and/or orientations are determined by the colours of the relevant sites. We assume α to be uniformly-continuous in the sense that, for any ε > 0, there exists an integer n such that ||α ℓ − α ℓ ′ || 1 < ε, ∀ℓ, ℓ ′ ∈ Λ with ℓ n ≡ ℓ ′ , where α ℓ ∈ A stands for the image of ℓ ∈ Λ. The point group of the supercrystal is identical tô G if αG = G ⊥ α and A is invariant against the action of G ⊥ . The map fulfills these conditions if α ℓ is, for instance, a regular p-gon whose size is given by f (||ℓ||) with f (x) being a continuos function bounded from both sides by two positive numbers. Let α be a special map satisfying α ℓ = α ℓ ′ , ∀ℓ, ℓ ′ ∈ Λ with ℓ n ≡ ℓ ′ . Then, S(L, α) degenerates into a hypercrystal whose translational group is given byΛ n . If the supremum norm is introduced into the "function space" of maps, {α}, a supercrystal can be approximated in any precision in this norm by a hypercrystal to be called an approximant hypercrystal. This means that the supercrystal is limit periodic [6].
An SQC obtained from the supercrystal, S(L, α), is parametrized by the phase vector, φ ∈ E ⊥ d specifying the "level" at which the section of the supercrystal is taken. It is represented as S(L, α, φ) := S(L, α) ∩ (φ + E d ), which is a disctrete subset of Λ. If identical point scatters with the unit scattering strength are located on the sites of the SQC, we obtain a scatterer field on E d : with α ℓ (x ⊥ ) being the characteristic function of α ℓ . This form of the scatterer field can be generalized for the case where α is a generic uniformly-continuous map from Λ into L 1 ; the scattering strength of a site may now depend on the "colour" of the site. This generalized scatterer field is point diffractive because it can be approximated in any precision in the supremum norm by one of its quasiperiodic approximant. Thus, an SQC is a perfectly ordered structure with a long-range order. An SQC is, in fact, a model set. This is shown by identifying α with the set {(α ℓ , ℓ) | ℓ ∈ Λ} (⊂ E ⊥ d × Λ), which is basically a window in the theory of the model set [9].
The Fourier module of the SQC is given by Λ * ∞ , and each Bragg spot is indexed by (2d + 1)-integers; the last index specifies the superlattice order n. We should emphasise that the Fourier module of S(L, α, φ) is determined solely by L, the super-Bravaislattice. The Fourier transform of the distribution Eq.1 is a distribution in E * d : where α * stands for a map from Λ * ∞ into L 1 defined on (E * d ) ⊥ . Since A ⊂ L 1 , we can define an average over a set of hyperatoms. Then, for n ∈ N, we can define naturally the n-th order averaged hypercrystal, in which the hyperatom assigned tol ∈Λ n is the average over all the hyperatoms on the residue classl +Λ n . It can be shown readily that the Fourier transform of the scatterer field of the averaged QC is identical to the one obtained from Eq.2 by restricting the summand to Λ * n . This allows us to determine the map α * . An SQC has always a self-similarity in the sense that it includes a subset which is geometrically similar to itself. Its proof is basically similar to the one which was made in [14] for the case of uniform hyperatoms. The key concept in the proof is Pisot maps. A similarity transformation σ is called a Pisot map if |σ ⊥ | < 1; this implies |σ| > 1 because m = (|σ| |σ ⊥ |) d ≥ 1. The map, ϕ, is a Pisot map because |ϕ ⊥ | = τ −1 < 1. Since ϕG(Λ 1 ) = G(Λ 1 ), G(Λ 1 ) includes an infinite number of Pisot maps. A similar subset of S := S(L, α, 0) is written as S 1 := σS with σ ∈ G(Λ 1 ) being a Pisot map [14]. We may say that the SQC, S, is strongly selfsimilar if it and its inflation, S 1 , are mutually locally-derivable (MLD: for MLD see [13]). This is not necessarily the case for a generic α. Regrettably, we have yet found no condition for α such that a strongly selfsimilar SQC is obtained. A weak self-similarity is of no physical interest as discussed in [14].
Since S 1 ⊂ Λ 1 , the inflated SQC come only from one of m submodules into which Λ is divided. More generally, the n-th inflation, S n := σ n S, is a subset of Λ n . This is in sharp contrast to the case of a QC, for which the inflated QC come evenly from different submodules on account of ϕΛ = Λ. The self-similarity ratio |σ| (or its square |σ| 2 ) is a Pisot number in Z[τ ] if the map σ is of the type I (or II). However, it is not a unit in contrast to the self-similarity ratio of a QC. In particular, the smallest value of |σ| is given by |σ 8 |, |σ 10 |τ or 1 + √ 3 for the case of the 2D point group D p with p = 8, 10 or 12, respectively. It should be emphasised that self-similarity of an SQC as well as a QC is a natural consequence of its NCPS.
A limit-quasiperiodic tiling produced by a substitution rule is necessarily strongly selfsimilar. Properties of 2D and 3D tilings of this type have been extensively investigated in [6], and a recursion formula determining the Fourier transform of the relevant scatterer field is presented. A limit-quasiperiodic tiling is considered to be an SQC only if it has an NCPS. However, such a tiling has not been known so far [15]. We have discovered a few examples of substitution rules which generate SQCs. The simplest of them is the octagonal SQC in Fig.1 (cf. [16]). The relevant SSM is given by σ 8 Λ. We have not yet determined the exact form of the relevant map, α, of this SQC. Our preliminary investigation strongly indicates, however, that the hyperatoms are topologically discs with fractal boundaries. Strong Bragg spots of the SQC are located on Λ * , the Fourier module of an octagonal QC, while those on ∆ n , n > 0 are all weak; the latter are superlattice reflections in the terminology of crystallography. Interestingly, we have noticed that the distribution of Bragg spots exhibits a pattern strongly reflecting the symmetry of σ 8 .
The section, S s , of the octagonal SQC in Fig.1 through the horizontal line at the bottom yields a 1D limit-quasiperiodic tiling with two intervals, S and L, whose lengths satisfy |L|/|S| = √ 2 . The tiling is produced by the substitution rule: S → SLS, L → LSSL. It is, alternatively, given as a section of a 2D limit-periodic structure which is a 2D section of the relevant supercrystal, S. On the other hand, the projection, S p , of the octagonal SQC onto the same line is identical to S s / √ 2 , which is a section of a 2D limit-periodic structure given as a 2D projection of S. Conversely, a 2D SQC can be constructed by the grid method from 1D LQPSs. It is readily shown that the relevant hyperatoms are polygons.
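A minimal sketch that generates the 1D limit-quasiperiodic word from the substitution rule S → SLS, L → LSSL stated above, and checks that the total tile length grows by the factor 2 + √2 per inflation step, consistent with the length ratio |L|/|S| = √2; the identification of 2 + √2 as the Perron eigenvalue of the substitution matrix is our own observation, not stated in the text.

import math

RULE = {"S": "SLS", "L": "LSSL"}

def substitute(word, n):
    """Apply the substitution S -> SLS, L -> LSSL n times to a seed word."""
    for _ in range(n):
        word = "".join(RULE[c] for c in word)
    return word

w = substitute("S", 6)
nS, nL = w.count("S"), w.count("L")
# total length of the tiling in units of |S|, with |L| = sqrt(2) * |S|
length = nS + math.sqrt(2) * nL
print(len(w), nS, nL)

# Under one inflation step the total length should grow by the factor 2 + sqrt(2),
# the Perron eigenvalue of the substitution matrix [[2, 2], [1, 2]].
w_next = substitute(w, 1)
length_next = w_next.count("S") + math.sqrt(2) * w_next.count("L")
print(length_next / length, 2 + math.sqrt(2))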
The present theory is readily extended to the case of the icosahedral point group in 3D. Only the type I SSMs concern this case, and the index of an icosahedral SSM, νΛ, is given by m = [N(ν)] 3 .
There exists a bijection between the infinite set, B, and the set of all the super-Bravais-lattices on a single Bravais lattice,Λ, which is the host lattice of QCs. This means that the world of SQCs is far richer than that of QCs, which is contrary to a previous conception. SQCs together with QCs form an important class of aperiodic ordered structures with non-crystallographic point symmetries.
The authors are grateful to O. Terasaki. | 2019-04-14T02:06:50.971Z | 2004-11-05T00:00:00.000 | {
"year": 2004,
"sha1": "91be89bfdfbe42f0faf3e3628a88a96901fabb50",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0411134",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f7ad9fe83c6811a913afd17503809cd26879649f",
"s2fieldsofstudy": [
"Materials Science",
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Materials Science"
]
} |
248641611 | pes2o/s2orc | v3-fos-license | Photocurable Hydrogel Substrate—Better Potential Substitute on Bone-Marrow-Derived Dendritic Cells Culturing
Dendritic cells (DCs) are recognized as the most effective antigen-presenting cells at present. DCs have therapeutic effects in tumor immunity, transplantation immunity, infectious inflammation and cardiovascular diseases, and the activation of T cells is dependent on DCs. However, normal bone-marrow-derived dendritic cells (BMDCs) cultured on conventional culture plates are easily activated during culture, and it is difficult for them to imitate immune function in vivo. Here, we report a novel method of BMDC culturing with a hydrogel substrate (CCHS), in which we synthesized low-substituted Gelatin Methacrylate-30 (GelMA-30) hydrogels and used them as a substitute for conventional culture plates in the culture and induction of BMDCs in vitro. The results showed that a 5% GelMA-30 substrate was the best condition for BMDC culturing. The low level of costimulatory molecules and the level of development-related transcription factors of BMDCs cultured by CCHS were closer to those of spleen DCs, and these cells were better able to promote T cell activation and exert an immune effect. CCHS is helpful for studying the transformation of DCs from the initial state to the activated state, which contributes to the development of DC-T cell immunotherapy.
Introduction
Dendritic cells (DCs), known as "natural adjuvants", have been recognized as the most effective antigen-presenting cells and natural vectors of antigen transmission in vivo. Studies have found that BMDC reinfusion has a therapeutic effect in myocardial infarction (MI) [1], murine polymicrobial sepsis [2] and Sjögren's-like disease [3] in mice. In recent decades, DC vaccines have been developed as promising anticancer therapies, with clinical trials of antigen-pulsed DCs for various kinds of tumors, including lung cancer [4], metastatic nasopharyngeal carcinoma [5], metastatic breast cancer [6], hepatocellular carcinoma [7], and prostate cancer [8]. Meanwhile, despite the advantages of DC vaccines, work is still required to establish their clinical efficacy [9]. Genetic engineering is a promising approach for DC vaccine development, and exogenous genetic manipulations for embedding tumor-associated antigens with MHC proteins could further increase their immunogenicity to strengthen CD4 + and/or CD8 + cytotoxic T cell responses. In short, DCs are of importance in the mechanistic study of diseases and in medical applications. Research on primary cells (such as BMDCs) reflects cell function in vivo better than research on cell lines. However, BMDCs induced and grown on the surface of plastic Petri dishes are not only difficult to transfect [10] and easily activated during culture, but also show differences in their biological characteristics relative to DCs in the internal environment. Thus, it makes significant sense to find a substitute for plastic Petri dishes and explore how to better induce and culture BMDCs in vitro.
BMDCs are prone to activation during culture on plastic Petri dishes, and biomaterials may offer a way to solve this dilemma. Biomaterials, which have undergone rapid development, need to demonstrate good bioactivity and formability to mimic the extracellular matrix environment found in the body and to facilitate further cell development. Some studies have examined the effect of polysaccharide-based hydrogels on the response of antigen-presenting cell lines [11] and strategies to reduce dendritic cell activation using PEG hydrogels loaded with immunomodulators (TGF-β1 and IL-10) during co-incubation [12]. Christine et al. evaluated the ability of a series of engineered and natural collagen matrices either to activate BMDCs or, conversely, to induce BMDC apoptosis in vitro [13]. In short, different biomaterials influence DC function differently [14]; in this study, we designed and synthesized novel biomaterials with the aim of finding the optimal conditions for BMDC culturing.
Gelatin methacrylate (GelMA, EFL-GM Series) [16], used as a base material because of its inherent biocompatibility [15], is a chemically modified gelatin; gelatin is a denatured form of collagen with good formability and mechanical properties. GelMA has shown promise in various areas of tissue engineering, including cardiac [17,18], bone [19,20], and vascular applications [21,22], as well as in the development of tumor microenvironment models [23]. GelMA is evidently compatible with a range of cell types, although one study showed that GelMA culture conditions drive human mononuclear cells to suppress tumor necrosis factor-α (TNF-α) expression [24]. However, there is currently a lack of information regarding its interactions with immune cells, especially the effect of GelMA on murine BMDC culture in vitro.
Chakraborty et al. showed that environmental stiffness, mediated by polydimethylsiloxane (PDMS) hydrogel-coated plates, promotes DC activation [25]. Thus, the easy activation of BMDCs during culturing can be partly attributed to the non-adjustable, stiff substrate of plastic Petri dishes. A better way to induce and culture BMDCs in vitro should bring them closer to the physiological microenvironment and reduce activation during culturing. We therefore synthesized low-substituted GelMA-30 hydrogels with adjustable stiffness and applied them to BMDC culture; this low-replacement-rate GelMA hydrogel was obtained by controlling the grafting reaction of MA with amine and hydroxy functionalities in gelatin (grafting rate of 30%). Owing to its low polymerization substitution, GelMA-30 (EFL-GM-30) is a pliant substrate suitable for cell growth compared to highly polymerized GelMA hydrogels. In this study, firstly, 3%, 5%, 7.5%, and 10% (w/v) GelMA-30 were prepared by dissolving GelMA-30 (EFL-GM-30) in phosphate-buffered saline (PBS) containing 0.25% (w/v) lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP) at 60 °C for 30 min; different concentrations of GelMA-30 correspond to different stiffnesses. Secondly, the GelMA-30 solutions of different concentrations, warmed to 37 °C, were immediately injected into a 24-well plate (300 µL/well) and then irradiated with a 405 nm light source at 3 cm for 30 s for gelation. Then, the BMDC suspension (5 × 10⁵ cells/well, 400 µL) was laid on the solidified gel to form the BMDC culture with hydrogel substrate (CCHS). From the third day, we added fresh culture medium every day until the seventh day. Finally, we aimed to determine the effect of CCHS on BMDCs in vitro and whether these cells have a phenotype and function similar to splenic DCs in vivo.
We studied the effect and potential of hydrogel as a culturing substrate for the growth and development of BMDCs in vitro, focusing on GelMA-30. We found that the phenotype and function of BMDCs induced and cultured in vitro on 5% GelMA-30 approached those of spleen DCs in vivo more closely than did BMDCs cultured under normal conditions. BMDCs cultured on 5% GelMA-30 also showed strong T cell stimulation ability after LPS activation. Therefore, BMDC culturing with a hydrogel substrate (CCHS) is more conducive to studying the biological properties and immune response mechanisms of DCs in vitro and contributes to the development of DC-T cell immunotherapy.
Fabrication of GelMA Hydrogels
First, 10 g of gelatin was dissolved in 100 mL of deionized water and heated at 50 °C with stirring. Then, 0.25 mL of MA and 1% NaOH were slowly added to the above solution at the same time, and the reaction was stirred for 2 h at 50 °C in the dark. Finally, the solution was transferred to a dialysis bag (cut-off molecular weight 10 kDa), dialyzed against deionized water at 40-50 °C for 3 days and then freeze-dried. The samples were stored at 20 °C for later use.
Fabrication of Hydrogel Solutions
First, 0.1 g LAP was dissolved in 40 mL PBS at 60 °C for 30 min to prepare a 0.25% (w/v) LAP solution. Secondly, 3%, 5%, 7.5%, and 10% (w/v) GelMA-30 were prepared by dissolving 0.3 g, 0.5 g, 0.75 g, and 1 g of hydrogel, respectively, in PBS containing 0.25% (w/v) LAP at 60 °C for 30 min. Thirdly, the GelMA-30 solutions of different concentrations were passed through a 0.22 µm filter to obtain sterile liquid GelMA-30 solutions; the liquid solution was then immediately injected into a 24-well plate (300 µL/well) and irradiated with a 405 nm light source at 3 cm for 30 s for gelation. Finally, the cell suspension (5 × 10⁵ cells/well, 400 µL) in RPMI1640 containing 10% FBS, 10 ng/mL GM-CSF and 1 ng/mL IL-4 was laid on the solidified GelMA-30 to form a two-dimensional cell culture. After 7 days of culture, the GelMA-30 was dissolved with a special lysate, and the cells were collected.
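As a quick check of the weight/volume percentages above, the Python sketch below assumes each mass is dissolved to a final volume of 10 mL, which is the volume implied by 0.3 g corresponding to 3% (w/v); the dissolution volume itself is not stated explicitly in the text.

```python
# Sketch of the % (w/v) arithmetic; the 10 mL final volume is an assumption
# inferred from the stated concentrations, not an explicit protocol value.
def percent_wv(mass_g: float, volume_ml: float) -> float:
    """Weight/volume percentage = grams of solute per 100 mL of solution."""
    return mass_g / volume_ml * 100.0

for mass_g in (0.3, 0.5, 0.75, 1.0):
    print(f"{mass_g:.2f} g in 10 mL -> {percent_wv(mass_g, 10.0):.1f}% (w/v)")
```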
Mechanical Testing
The mechanical properties of GelMA-30 were measured with a compressive tester (UTM-2203, Suzhou Yanuotianxia Instrument Co., Ltd., Suzhou, China). To obtain a stress-strain curve, a displacement rate of 1 mm s⁻¹ was applied to photo-crosslinked cylindrical samples with a diameter of 9 mm and a height of 6 mm. The load and displacement data, normalized to the sample cross-sectional area and height, formed the stress-strain curve. The ultimate strength and failure strain were obtained from the point of failure, and the compressive modulus was determined from the slope of the linear region of the stress-strain curve.
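The modulus calculation described above can be sketched as follows; the load/displacement values are invented placeholders and the 0-10% strain window is an assumed linear region, since the authors do not state which portion of the curve they fitted.

```python
# Sketch: compressive modulus from the slope of the linear region of the
# stress-strain curve. Sample geometry (9 mm diameter, 6 mm height) is from
# the text; the load/displacement readings and the 0-10% strain fitting
# window are illustrative assumptions.
import numpy as np

diameter_mm, height_mm = 9.0, 6.0
area_mm2 = np.pi * (diameter_mm / 2.0) ** 2

load_N  = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # hypothetical readings
disp_mm = np.array([0.0, 0.15, 0.30, 0.45, 0.60])

stress_kPa = load_N / area_mm2 * 1000.0           # N/mm^2 (MPa) -> kPa
strain = disp_mm / height_mm                      # dimensionless

linear = strain <= 0.10                           # assumed linear region
modulus_kPa, _ = np.polyfit(strain[linear], stress_kPa[linear], 1)
print(f"compressive modulus ~ {modulus_kPa:.1f} kPa")
```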
2.4. Routine Culture of Mouse Bone-Marrow-Derived Dendritic Cells (BMDCs) on GelMA-30 and Cell Morphology Observation

C57BL/6 mice were killed under anesthesia and soaked in 75% alcohol for 5 min. Bone marrow cells were flushed out with RPMI1640 solution. Red blood cells were lysed with Tris-NH4Cl (1 mL per mouse). After 2 min, RPMI1640 solution was added to stop the lysis, and filter screens were used to remove the remaining tissue mass. After washing twice with RPMI1640, the cells were resuspended in complete medium containing 10 ng/mL GM-CSF and 1 ng/mL IL-4. The culture medium containing cells was evenly spread in a conventional 24-well plate and in a 24-well plate covered with 5% GelMA-30 (0.4 mL per well), recorded as day 0. From 72 h of culture (i.e., the third day), fresh medium was replenished daily, the cells were cultured for 7 days in total, and cell morphology was observed by microscopy. All studies were performed in compliance with the guidelines of the Institutional Animal Care and Use Committee at Zhejiang University College of Medicine.
Isolation and Culture of T Cells from Mouse Spleen
OT2 mice or C57 mice were killed under anesthesia and soaked in 75% alcohol for 5 min. The spleen was isolated, placed in RPMI1640 solution, and ground, and 1 mL of Tris-NH4Cl was added to lyse red blood cells. After 2 min, RPMI1640 solution was added to stop the reaction, and filter screens were used to remove the remaining tissue mass. The spleen cells were resuspended in MACS buffer (fresh RPMI1640 solution containing 2% FBS and 2 mM EDTA) for subsequent experiments. All studies were performed in compliance with the guidelines of the Institutional Animal Care and Use Committee at Zhejiang University College of Medicine.
MACS Magnetic Beads Separate BMDCs and T Cells
The collected spleen cells or BMDCs were counted and centrifuged for 5 min (300× g), and the supernatant was removed. The cells were resuspended in fresh MACS buffer containing CD4 or CD11c beads and incubated at 4 °C for at least 15 min after mixing. The purified cells were obtained by passing the suspension through a column placed in the magnet, and the cells were resuspended in culture medium for subsequent culture and flow cytometry analysis.
DCs Activation Test In Vitro
The cells seeded in 24-well plates were stimulated with LPS in a cell incubator containing 5% CO2 at 37 °C. After 24 h, the supernatant of each cell group was collected and centrifuged, and the secretion of IL-6 and TNF-α in the supernatant was measured by ELISA for comparison between groups. The cells obtained by centrifugation were used for flow cytometry and the T cell proliferation experiment.
The Secretion of Cytokines Was Detected by ELISA
Supernatant samples were collected and processed according to the instructions of the ELISA kit (BioLegend, San Diego, CA, USA).
The Phenotype of BMDCs Was Detected by FACS
The collected BMDCs of each group were resuspended in 100-200 µL of cold PBS, and 0.2 µL CD11c-BV421, 0.15 µL Iab-FITC or 0.15 µL CD80-FITC, 0.2 µL CD40-APC, and 0.15 µL CD86-PE were added to each tube. The antibodies were incubated at 4 °C for 20 min, cold PBS was added to wash away excess antibody, the supernatant was removed by centrifugation, and cold PBS was added to a final volume of 200 µL. BMDC phenotypes were then detected by flow cytometry.
T Cell Proliferation Test
CD4+ T cells were labeled with CFSE to form a T cell suspension; then, OVA323-339 peptide was added (final concentration of 200 nM after co-culture with DCs), and DCs of each group, stimulated for 24 h, were added for co-culture (DC:T = 1:10). After 24-48 h of co-culture, 100 µL of fresh medium was added to each well of the 96-well plate. After 64 h of co-culture, the cells in each well were collected, labeled with the flow cytometry antibodies CD4-APC, 7AAD, CD25-PE, and CD69-PEcy7, and analyzed by flow cytometry. The degree of T cell proliferation was assessed from the dilution of CFSE.
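Proliferation read out by CFSE dilution is usually summarised as the fraction of cells whose CFSE signal falls below a "CFSE-low" gate (each division roughly halves the dye intensity). The sketch below illustrates that summary on simulated intensities; the gate position and the intensity distributions are assumptions, not values from the study.

```python
# Sketch of a CFSE-dilution summary: percent of CD4+ T cells below an
# assumed "CFSE-low" gate counts as proliferated. Intensities are simulated.
import numpy as np

rng = np.random.default_rng(0)
undivided = rng.lognormal(mean=8.0, sigma=0.3, size=3000)   # bright, undivided
divided   = rng.lognormal(mean=6.0, sigma=0.6, size=2000)   # diluted CFSE
cfse = np.concatenate([undivided, divided])

gate = 1500.0                                               # assumed CFSE-low gate
percent_proliferated = 100.0 * np.mean(cfse < gate)
print(f"{percent_proliferated:.1f}% CFSE-low (proliferated) cells")
```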
Real-Time PCR
Total RNA was extracted from cells using TRIzol reagent (15596018, Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. In short, 500 ng of total RNA was reverse-transcribed into cDNA using the PrimeScript RT reagent Kit (RR037A, Takara, Shiga, Japan) according to the manufacturer's instructions. The resulting cDNA was subjected to real-time PCR using SYBR Premix Ex Taq II (Tli RNaseH Plus) kits (RR820A, Takara, Shiga, Japan) on an Applied Biosystems Q7 Fast Real-Time PCR System (ABI, Torrance, CA, USA). PCR was performed under the following conditions: denaturation at 95 °C for 0.5 min, annealing (at the temperature appropriate for each primer) for 0.5 min, and extension at 72 °C for 1 min, with the number of cycles determined from the kinetic profile. β-actin was used as an internal loading control to normalize all PCR products. Primer information is as follows: forward primer, AGATCAAGATCATTGCTCCTCCT; reverse primer, ACGCAGCTCAGTAACAGTCC.
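The text only states that β-actin served as the internal loading control; a common way to turn Ct values into the relative transcription levels reported later (Figure 8d) is the 2^-ΔΔCt method, sketched below with made-up Ct values. Both the method choice and the numbers are illustrative assumptions.

```python
# Sketch of relative quantification normalised to beta-actin using the
# 2^(-ddCt) convention. The method choice and the Ct values are illustrative
# assumptions; the paper only names beta-actin as the loading control.
def relative_expression(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    d_ct_sample = ct_gene - ct_actin          # normalise sample to loading control
    d_ct_ref = ct_gene_ref - ct_actin_ref     # reference group (e.g. plastic plate)
    return 2.0 ** -(d_ct_sample - d_ct_ref)

# Hypothetical Ct values for one development-related factor (e.g. IRF8)
fold_change = relative_expression(ct_gene=26.4, ct_actin=17.2,
                                  ct_gene_ref=24.9, ct_actin_ref=17.0)
print(f"expression relative to the reference group: {fold_change:.2f}-fold")
```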
Phagocytosis Assay and Flow Cytometry
BMDC phagocytosis was assessed by treating BMDCs that had been stimulated with LPS for 24 h with dextran-FITC (100 µg/mL). Cells were divided into two groups: one group was incubated at 4 °C for 2 h and the other at 37 °C for the same time. BMDCs were then washed twice with PBS and bound with CD11c monoclonal antibodies at 4 °C for 20 min to identify DCs by FACS. Flow cytometry data were analyzed with FlowJo.
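Specific dextran uptake in this kind of assay is typically reported as the FITC signal at 37 °C over and above the 4 °C binding control; a minimal sketch of that comparison, with placeholder MFI values rather than measurements from the study, is shown below.

```python
# Sketch: specific dextran-FITC uptake relative to the 4 degC binding control.
# The MFI values are placeholders, not measurements from the study.
mfi_37c = 5200.0   # CD11c+ cells incubated at 37 degC (uptake + binding)
mfi_4c  = 800.0    # CD11c+ cells incubated at 4 degC (binding only)

specific_uptake = mfi_37c - mfi_4c
uptake_index = mfi_37c / mfi_4c
print(f"specific uptake (MFI): {specific_uptake:.0f}, uptake index: {uptake_index:.1f}")
```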
Statistical Analysis
GraphPad Prism 6.0 software was used for statistical analysis. The unpaired Student's t-test was used for comparisons between two samples, and one-way ANOVA was used for comparisons of means among multiple groups. Measurement data are expressed as mean ± SD. Differences were considered statistically significant at p < 0.05.
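For readers without GraphPad Prism, the same two comparisons can be reproduced with SciPy; the group values below are placeholders, not data from the study.

```python
# Sketch of the statistical tests described above: unpaired t-test for two
# groups, one-way ANOVA for three or more. Values are illustrative only.
import numpy as np
from scipy import stats

gel_5   = np.array([12.1, 10.8, 13.0, 11.5])   # e.g. dead-cell ratio, 5% GelMA-30
gel_3   = np.array([15.2, 14.8, 16.1, 15.5])   # 3% GelMA-30
control = np.array([18.4, 17.9, 19.2, 18.8])   # conventional plate

t_stat, p_ttest = stats.ttest_ind(gel_5, control)           # two groups
f_stat, p_anova = stats.f_oneway(gel_3, gel_5, control)     # multiple groups
print(f"t-test p = {p_ttest:.4f}; ANOVA p = {p_anova:.4f} (significant if p < 0.05)")
```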
BMDCs on GelMA-30 Substrates
DCs play a significant role in priming T cell responses, as Figure 1a depicts. To better culture BMDCs and address the problem of easy activation, which is partly due to substrate stiffness during BMDC culturing, we synthesized low-substituted GelMA-30 hydrogels with adjustable stiffness and applied them to BMDC culture. Figure 1b shows the simple and general synthesis steps of GelMA-30. A low-replacement-rate GelMA hydrogel was obtained by controlling the grafting reaction of MA with amine and hydroxy functionalities in gelatin (grafting rate of 30%). Because of the low polymerization substitution, GelMA-30 (EFL-GM-30) provides a pliant substrate suitable for cell growth compared to highly polymerized GelMA hydrogels. Furthermore, different concentrations of GelMA-30 (EFL-GM-30) correspond to different stiffnesses. As shown in Figure 1c, the elastic modulus of GelMA-30 under compression in the fully swollen state varies significantly with GelMA-30 concentration: the higher the concentration of GelMA-30, the greater the mechanical stiffness. Given the semi-suspended, semi-adherent character of BMDCs, we then used GelMA-30 at different concentrations as a substitute for conventional culture plates for the culture and induction of BMDCs in vitro (Figure 1d), an approach named BMDC culturing with hydrogel substrate (CCHS). To find a suitable stiffness for BMDC culturing, 10% GelMA-30 was applied first. A total of 78.2% of the cells died after 6 days of culture (Figure S1), indicating that 10% GelMA-30 was unsuitable for BMDC culture because of the high stiffness resulting from the greater degree of polymerization in the substrate. We then cultured BMDCs on 3%, 5%, and 7.5% GelMA-30. After several days of culture, cells cultured on 5% GelMA-30 survived normally, with a higher total cell number (though without a significant difference) compared to the 3% and 7.5% GelMA-30 groups (Figure 1e) and the lowest dead-cell ratio (Figure 1f,g). However, the number of cells cultured in the 5% GelMA-30 group was still lower than in the conventional culture system (Figure 1e). Therefore, cells could survive on 5% GelMA-30 with the lowest dead-cell ratio, and 5% GelMA-30 is the appropriate concentration and stiffness for BMDC culture.

Figure 1 legend (partial): BMDCs were cultured with GM-CSF (10 ng/µL) and IL-4 (1 ng/µL) from C57 mice on the stiffness-adjustable hydrogel substrate GelMA-30. (e-g) BMDCs cultured by CCHS or under normal conditions were collected, stained with 7AAD-PerCP-Cy5.5, and compared by FACS. These experiments were repeated twice with essentially the same results. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; * p < 0.05, ** p < 0.01, *** p < 0.001, ns: nonsignificant.
BMDCs Were Induced and Cultured Successfully on 5% GelMA-30
Theoretically, CD11c is a specific marker of successful BMDC induction, while mature DCs express the costimulatory molecules CD80, CD86, MHC-II, etc. [26,27]. To further characterize the growth of cells on 5% GelMA-30 and the induction of BMDCs, microscopy showed that cells cultured on the surface of 5% GelMA-30 grew in clusters, similar to the conventional culture group; however, at 7 days post-culture, the rounder cells cultured on GelMA-30 produced fewer dendrites (Figures 2a and S2). The proportion of CD11c+ cells at day 7 exceeded 90% in both the 5% GelMA-30 and normal culture groups after purification by magnetic bead sorting (Figure 2b). Flow cytometry also showed that the proportion of CD11c+ cells did not differ significantly (Figure 3a) between bone marrow cells in conventional culture and cells on 5% GelMA-30 over days 3-7, although the immunofluorescence data revealed that expression of the markers Iab, CD80 and CD86 on CD11c+ cells cultured on 5% GelMA-30 was much lower than on BMDCs in conventional culture (Figure 3(b1,b2)), indicating low expression of these costimulatory molecules on BMDCs cultured by CCHS. Therefore, BMDCs could be successfully induced in vitro by the novel culture method, BMDC culturing with hydrogel substrate (CCHS).

Figure 3 legend (partial): BMDCs were induced and cultured successfully on 5% GelMA-30. (a) BMDCs cultured by CCHS or under normal conditions were collected daily from the third to the seventh day, stained for CD11c, and compared by FACS. (b1,b2) BMDCs cultured by CCHS or under normal conditions were collected daily from the third to the seventh day, stained for Iab, CD80 and CD40, and compared by FACS. These experiments were repeated twice with essentially the same results. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; * p < 0.05, ** p < 0.01, **** p < 0.0001, ns: nonsignificant. MFI, mean fluorescence intensity.
BMDCs on 5% GelMA-30 with Low Expression Levels of Costimulatory Molecules Were Activated by LPS
To determine whether BMDCs cultured on 5% GelMA-30 can be activated by LPS, we examined costimulatory molecule expression and inflammatory cytokine secretion after LPS treatment. The mean fluorescence intensity of the costimulatory molecules Iab, CD40, CD80 and CD86 on BMDCs cultured by CCHS was significantly higher than on BMDCs from the normal culture group after 24 h of LPS stimulation (Figure 4a,b), while interleukin-6 (IL-6) secretion increased in both groups with no significant difference between them (Figure 4c). These results demonstrate that BMDCs cultured on 5% GelMA-30 have better potential for maturation upon LPS stimulation. At resting state, DCs are considered immature, with high endocytic capability and low expression of MHC and costimulatory molecules [28], and dextran is recognized and taken up by macrophages, DCs, LSECs, and some other cell types via specific receptors [29-32]. We found that BMDCs on 5% GelMA-30 showed enhanced dextran phagocytosis compared to the normal culture group (Figure 5a), whereas after 24 h of LPS stimulation, phagocytosis in the CCHS group was inhibited (Figure 5b). Thus, BMDCs cultured by CCHS had greater potential for antigen uptake and were more immature than those from the control group.

Figure 4 legend (partial): BMDCs were collected after 24 h of LPS stimulation (100 ng/mL) and stained for CD11c, Iab, CD80 and CD40 for comparison by FACS. These experiments were repeated twice with essentially the same results. (c) Culture supernatants were collected 24 h after LPS stimulation, and cytokine (IL-6) levels were measured by ELISA. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; *** p < 0.001, **** p < 0.0001, ns: nonsignificant. MFI, mean fluorescence intensity.
Figure 5. BMDCs cultured on 5% GelMA-30 show greater potential for antigen uptake. (a,b) BMDCs cultured by CCHS or under normal conditions were pretreated with 100 ng/mL LPS for 24 h and then incubated with 100 µg/mL FITC-dextran for 1 h in two groups: one incubated at 4 °C as the negative control and the other at 37 °C to allow phagocytosis. BMDCs were then stained with CD11c-BV421, and dextran was detected in the FITC channel by FACS. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; * p < 0.05.
BMDCs on GelMA-30 Approached Spleen DCs
Based on the previous results, are the low expression levels of costimulatory molecules on BMDCs cultured by CCHS closer to the phenotype and function of spleen DCs in vivo than those of BMDCs from routine culture in vitro? To answer this, we compared the expression of costimulatory molecules on spleen DCs and on BMDCs from the different culture substrates. FACS analysis showed that the low expression levels of costimulatory molecules (Iab, CD80, CD86, CD40) on BMDCs cultured on 5% GelMA-30, as well as on 3% and 7.5% GelMA-30, were all similar to those of spleen DCs (Figure 6a,c). Moreover, using the spleen DCs as controls, the ratio of costimulatory molecule expression (Iab, CD80, CD86, CD40) in the 5% GelMA-30 group to that in the spleen DC group approached 1 (Figure 6b,d). This suggests that the low expression of costimulatory molecules on BMDCs cultured on 5% GelMA-30 is closer to the phenotype of spleen DCs in vivo, reflecting the appropriate stiffness and good biocompatibility of GelMA-30 (EFL-GM-30) for BMDC culturing. To further study the corresponding functions of BMDCs cultured on different substrates, as shown in Figure 6e, the activated CD11c+ population expressing CD40 and CD86 among 5% GelMA-30 BMDCs increased significantly after LPS stimulation, and the proportion of activated spleen DCs also rose to some extent, whereas the activated subpopulation of routinely cultured BMDCs did not change significantly after LPS stimulation (Figure 6e). In summary, a larger proportion of BMDCs cultured by CCHS matured upon LPS treatment, which is beneficial for studying the transformation mechanism of DCs from the initial state to the activated state.
Theoretically, DCs present antigens and contribute to the proliferation of T cells, and increased expression of CD25 and CD69 is a marker of T cell activation [33,34]. Therefore, the BMDCs were co-cultured with CD4+ T cells for the T cell proliferation experiment. In the unstimulated state, the proliferation and activation of CD4+ T cells stimulated by BMDCs from the GelMA-30 (CCHS) group approached those of T cells stimulated by spleen DCs (Figure 7a). Figure 7b shows that the ratio of CD4+ T cell proliferation in the 5% GelMA-30 culture group to that in the spleen DC group approached 1, indicating that the function of BMDCs cultured by CCHS at resting state approached that of spleen DCs. The activated populations expressing the costimulatory molecules CD40 and CD86 in the GelMA-30 group grew most markedly after LPS stimulation, which resulted in the largest change in the proliferation ratio of CD4+ T cells (Figure 7c) and the highest proportion of activated T cells expressing CD25 and CD69 (Figure 7d,e). In other words, BMDCs cultured by CCHS and matured with LPS were more capable of triggering T cell responses. In addition, BMDCs cultured on 5% GelMA-30 secreted IL-6, IL-12, and TNF-α at levels similar to the spleen DC group, with no significant difference (Figure 8a-c). In summary, LPS stimulation can also activate BMDCs cultured by CCHS, which contributes to the development of DC-T cell immunotherapy.

Figure 6. BMDCs cultured on 5% GelMA-30 were closer to spleen DCs in phenotype and function in vivo. (a,c) BMDCs were collected from the different culture systems, and spleen DCs were collected by MACS, for comparison of costimulatory molecule expression by FACS. (b,d) The costimulatory molecule specific values were obtained as the ratio of the MFI of BMDCs from each culture system to that of the spleen DC group; "s DC" denotes spleen DCs. (e) BMDCs and spleen DCs were stimulated with LPS (100 ng/mL) for 24 h, then stained for Iab, CD40, CD80 and CD86 and compared by FACS. These experiments were repeated twice with essentially the same results. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001, ns: nonsignificant. MFI, mean fluorescence intensity.
Figure 7. BMDCs cultured on 5% GelMA-30 were closer to spleen DCs, with greater potential to activate T cells. (a,c-e) BMDCs from the different culture systems and spleen DCs were stimulated with 100 ng/mL LPS for 24 h. The collected DCs (10⁴ cells) were then co-cultured for 64 h with CD4+ T cells (10⁵ cells labeled with CFSE-FITC) extracted from the spleens of OT2 mice in the presence of OVA323-339 peptide. After 64 h, the proliferation peak of CD4+ T cells and the CD25+ or CD69+ populations were measured by flow cytometry. Numbers above bracketed lines indicate the percentage of CFSE-low (proliferated) cells; at right, the frequency of rapidly dividing cells among those at left. (b) The CD4+ T cell proliferation specific value was obtained as the ratio of the value for BMDCs from each culture system to that of the spleen DC group; "s DC" denotes spleen DCs. These experiments were repeated twice with essentially the same results. Data are presented as means ± SEM and were analyzed with the unpaired Student's t-test or ANOVA with multiple comparisons; * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001, ns: nonsignificant.
BMDCs on 5% GelMA-30 Were Closer to Spleen DCs in Terms of the Developmental State
Murine DC subsets consist of IRF8-dependent conventional DC1s (cDC1s), IRF4-dependent cDC2s, and monocyte-derived DCs. During the differentiation and development of dendritic cells, cDC1s express high transcription levels of interferon regulatory factor 8 (IRF8) and rely on IRF8 [35,36], basic leucine zipper ATF-like transcription factor 3 (Batf3) [37,38], DNA-binding 2 (ID2) [39,40], nuclear factor, interleukin 3 regulated (Nfil3) [41], and B cell lymphoma 6 (Bcl6) for development [42,43], while cDC2s express interferon regulatory factor 4 (IRF4) and IRF8, although at lower levels than cDC1s. Therefore, we used qPCR to detect and compare the transcription levels of DC development-related factors among spleen DCs and BMDCs cultured by CCHS or on plastic plates. The results showed no significant difference in the IRF4, IRF8, and Batf3 transcription levels between BMDCs cultured on 5% GelMA-30 and spleen DCs, whereas these transcription levels were higher in BMDCs cultured on plastic plates (Figure 8d). Thus, we conclude that the functional phenotype and developmental state of BMDCs cultured by CCHS were closer to those of the spleen DC group, further confirming the appropriate stiffness and good biocompatibility of GelMA-30 (EFL-GM-30) as a better solution to the problem of easy activation during BMDC culture.
Discussion
Recently, with the development of 3D printing, biomaterials have been used increasingly in medicine and cell biology. For example, a biomaterials approach and active devices have contributed to the development and translation of epicardial therapies for myocardial infarction [44], and material stiffness influences the polarization state, function, and migration mode of macrophages [45]. Can novel materials such as GelMA be used for culturing cells? Because of the importance of DCs and the problems of BMDC culture on plastic plates, we applied photocurable GelMA hydrogels to BMDC culture and observed the effect.
In this study, the lack of free amino groups and the high degree of polymerization of highly polymerized GelMA hydrogels result in poor solubility and non-adjustable stiffness. Therefore, we synthesized low-substituted GelMA-30 hydrogels (EFL-GM-30), in which amino groups are substituted by polymerization at a 30% substitution rate (Figure 1b), and used them as a substitute for plastic Petri dishes in the culture of BMDCs in vitro, an approach named BMDC culturing with hydrogel substrate (CCHS). GelMA-30 dissolves in PBS at 60 °C, and different concentrations of GelMA-30 have different stiffnesses. We found that BMDCs on 5% GelMA-30 had a lower dead-cell ratio than the 3% and 7.5% groups, so 5% GelMA-30 was the best concentration and condition for BMDC culture. During the culture period, the generally low expression levels of costimulatory molecules on 5% GelMA were similar to the expression levels of costimulatory molecules on spleen DCs, which demonstrated the high biocompatibility of GelMA-30. Moreover, BMDCs cultured by CCHS had greater potential for antigen uptake.
DCs generally need three signals to activate the T cell immune response: the first signal is the complex formed by the binding of MHC molecules and antigen peptides, which transmits antigen information to T cell surface receptors; the second signal is the interaction between costimulatory molecules on DCs (CD86, CD80, CD40) and T cell surface molecules (CD28); the third signal consists of cytokines secreted by DCs, such as IL-12, TNF-α, and IL-10, which directly affect the direction of T cell polarization [46-48]. Increased expression of CD25 and CD69 is a marker of T cell activation; CD69 is one of the earliest markers upregulated after T cell activation [33], and CD25 is an IL-2 receptor involved in T lymphocyte activation and further IL-2 production [34]. We found that the proliferation and activation of CD4+ T cells after encountering resting BMDCs cultured by CCHS were closer to those induced by spleen DCs; after LPS stimulation, BMDCs cultured on 5% GelMA triggered the greatest CD4+ T cell proliferation and activation. At the same time, qPCR showed that the transcription levels of the development-related factors IRF4, IRF8, and Batf3 in BMDCs cultured on 5% GelMA-30 were similar to those of spleen DCs. These results indicate that the initial state of BMDCs cultured by CCHS in vitro was closer to that of spleen DCs, with low expression of costimulatory molecules and similar development-related factors. LPS stimulation can also drive the maturation of BMDCs cultured by CCHS, which is more conducive to studying the transformation of DCs from the initial state to a matured state. The low-substituted photocurable hydrogel substrate (GelMA-30) is convenient for identifying a suitable substrate stiffness for BMDC culturing, partly because of its adjustable stiffness and good biocompatibility, and 5% GelMA-30 is the appropriate stiffness for BMDC culture because it gave the lowest dead-cell ratio during culture.
The function of DCs mainly depends on their activation and maturation. Controlling the activation and maturation of DCs is of great significance in the treatment of many clinical diseases and in vaccine production [49]. Moreover, the specific mechanism of DCs in the processes of immune tolerance and immune activation is not clear [50]. BMDCs cultured by CCHS, despite their low expression of costimulatory molecules, were also activated by LPS, which is beneficial for promoting the development of DC vaccines. However, the culture system has the drawbacks of a small number of cells and high cost, so the culture materials and conditions still need to be further developed and explored. Most importantly, BMDCs cultured by CCHS were closer to spleen DCs in terms of phenotype and function in vivo and were better able to promote T cell activation and exert an immune effect. CCHS is helpful for studying the transformation of DCs from the initial state to the activated state, which contributes to the development of DC-T cell immunotherapy.
Conclusions
In this study, to address the problem of easy activation, partly due to substrate stiffness during BMDC culturing, we synthesized GelMA-30 and demonstrated the effect of low-substituted GelMA-30 hydrogels on the growth and development of BMDCs. A novel method of BMDC culturing with a hydrogel substrate (CCHS) using low-substituted GelMA-30 was reported. At resting state, BMDCs cultured on the 5% GelMA-30 substrate were closer to spleen DCs; after LPS stimulation, they were better able to promote T cell activation and exert an immune effect, contributing to the development of DC-T cell immunotherapy. The function of DCs mainly depends on their activation and maturation, and controlling DC activation and maturation is of great significance for the treatment of many clinical diseases and for vaccine production. Moreover, the specific mechanism of DCs in immune tolerance and immune activation remains unclear. BMDCs cultured by CCHS, despite their low expression of costimulatory molecules, were also matured by LPS, which is beneficial for the development of DC vaccines.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma15093322/s1. Figure S1: GelMA-30. BMDCs were cultured with GM-CSF (10 ng/µL) and IL-4 (1 ng/µL) from C57 mice on 10% GelMA-30 or under normal conditions, respectively. The BMDCs on day 6 (or day 7 and day 8) were then collected and stained with 7AAD-PerCP-Cy5.5 for observation and comparison by FACS. Figure S2: The morphology of BMDCs on different culture days. BMDCs cultured by CCHS or under normal conditions were collected every day from the third to the seventh day, and their morphology was observed under the microscope. These experiments were repeated twice with essentially the same results.
Conflicts of Interest:
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Photocurable Hydrogel Substrate-Better Potential Substitute on Bone-marrow-derived Dendritic Cells Culturing". | 2022-05-10T15:44:59.847Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "b42ba0d4fed95ac87a18bf0da518fba0774fb886",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/9/3322/pdf?version=1651754009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1c5efcc85dcd4d0cbb61d6cb41303a59be3308e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
213471380 | pes2o/s2orc | v3-fos-license | Copeptin as a Biomarker of Atherosclerosis in Type 1 Diabetic Patients
AIM: To evaluate copeptin as an early marker of atherosclerosis in adolescent type 1 diabetics. METHODS: Sixty-two type 1 diabetic patients and 50 healthy volunteers were enrolled in the study. Serum copeptin, glycosylated haemoglobin (HbA1c), lipid profile, oxidised low-density lipoprotein (OxLDL), urinary albumin/creatinine ratio, carotid intimal medial thickness (cIMT), aortic intimal medial thickness (aIMT) and resistivity index were assessed for all participants in the study. RESULTS: HbA1c, albumin/creatinine ratio, lipid profile, OxLDL, copeptin, cIMT and aIMT were significantly higher in diabetic patients. Copeptin was higher in patients with positive cIMT and aIMT. Copeptin correlated with cIMT and aIMT. Stepwise multiple regression analysis found that copeptin correlated with aIMT. The ROC curve showed that copeptin had 100% specificity for both aIMT and cIMT, with sensitivities of 95.2% and 60.7% for aIMT and cIMT, respectively. CONCLUSION: Copeptin can be used as a marker for early detection of atherosclerosis in type 1 diabetic patients.
Introduction
Diabetic nephropathy (DN) and cardiac complications are considered the most important factors for morbidity and mortality in type 1 diabetes (T1D) [1], [2]. Early detection of coronary artery plaque using coronary artery calcification (CAC) is an indicator of endpoint coronary artery disease (CAD) [2].
As arginine vasopressin (AVP) is small and has a short half-life, it cannot be easily measured [3]. AVP is essential for renal and cardiovascular function as it regulates volume status. Although copeptin and AVP are derived from the same precursor molecule, copeptin is a more stable peptide and is used for evaluation of fluid and osmotic status in various diseases [3], [4]. Type 1 diabetic patients have higher AVP concentrations and an exaggerated response to AVP [5], [6], which stimulates V1a receptors, leading to diabetic cardiovascular complications. Although the relationship between copeptin, CAD and DN has been studied in adults with type 2 diabetes (T2D) [7], [8], to our knowledge very few studies have been done on type 1 diabetic patients.
We aimed to study the association between copeptin and atherosclerosis and diabetic nephropathy.
Patients and Methods
This cross-sectional study was done on 62 type 1 diabetic patients and 50 healthy volunteers. The diabetic patients were selected from the endocrine clinic, Medical Center of Excellence, National Research Centre, and the controls from among healthy children attending the Medical Center of Excellence with their relatives. Ethical committee approval was obtained from the National Research Centre (registration number 19101). Written consent was also obtained from the diabetic patients or their parents and from the controls.
Diabetic patients aged > 14 and < 19 years with a diabetes duration of more than 5 years were selected. Patients with acute diabetic complications or CVD, those taking metformin or multivitamins, and smokers were excluded from the study. Demographic data of the diabetic patients were recorded. General, cardiac, chest and neurological examinations were done for all diabetics and controls. Blood pressure was assessed for all diabetics and controls; it was measured three times after a 5-minute rest in the sitting position using an automatic manometer (Omron M4 Plus, Omron Healthcare Europe, Hoofddorp, the Netherlands), and the mean of the second and third measurements was calculated.
After 12 h of fasting, venous blood was collected for assessment of the lipid profile [11]. Low-density lipoprotein (LDL) cholesterol was calculated using the Friedewald equation. Triglycerides (TG) were measured on a Technicon AutoAnalyzer II (Tarrytown, NY, USA).
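The Friedewald estimate referenced here is LDL-C = total cholesterol − HDL-C − triglycerides/5 (all in mg/dL, valid when triglycerides are below about 400 mg/dL); a small worked example with invented values is sketched below.

```python
# Friedewald estimate of LDL cholesterol (mg/dL); input values are invented.
def friedewald_ldl(total_chol, hdl, triglycerides):
    """LDL-C = TC - HDL-C - TG/5, all in mg/dL (TG should be < 400 mg/dL)."""
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(total_chol=190, hdl=45, triglycerides=120))  # -> 121.0
```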
The mean value of glycosylated haemoglobin (HbA1c) over one year was recorded; HbA1c measurements taken every 3 months were obtained from the patients' files.
Screening for microalbuminuria was performed on fresh morning urine samples by measuring the albumin/creatinine ratio. Microalbuminuria was measured 3 times (2 months apart) and was considered positive if 2 of the 3 samples were positive. If a sample was positive, urine analysis was done to exclude urinary tract infection. Copeptin and OxLDL were measured by ELISA (quantitative sandwich enzyme-linked immunosorbent assay).
Measurement of the aortic intimal medial thickness (aIMT)
The transducer (7.5 MHz) was placed on the upper abdomen for evaluation of the abdominal aorta and aortic bifurcation. The aortic intima-media complex was assessed with a 10 MHz linear array transducer. For the assessment of aIMT, the image was focused on the far wall (dorsal arterial wall of the most distal 15 mm of the abdominal aorta), and gain settings were used to optimise image quality [13]. The average of 3 measurements for each patient was taken for evaluation of aIMT.
Renal colour duplex scanning (Toshiba Xario ultrasound machine, 3-6 MHz transducer) was performed on the right and left renal arteries to measure peak systolic velocities and to exclude renal artery stenosis in all patients by assessing the different segments from their origins to the renal hila. Right and left resistivity indices in the segmental, interlobar and arcuate arteries were also measured [14].
Statistical Analysis
The Statistical Package for the Social Sciences (SPSS) version 20.0 (Chicago, Illinois, USA) was used. The t-test was used for quantitative variables. We evaluated the correlations of copeptin with the demographics, laboratory data, anthropometric data, and imaging findings of the diabetic patients. Pearson's correlation, followed by stepwise multiple regression analysis, was also performed. The receiver operating characteristic (ROC) curve was used to determine the sensitivity and specificity of copeptin with respect to cIMT and aIMT.

[Table title: Stepwise multiple regression analysis of copeptin in relation to demographics, anthropometric data, laboratory data and imaging findings in type 1 diabetic patients]
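The ROC analysis described above can be sketched with scikit-learn instead of SPSS; the copeptin values and aIMT labels below are invented, so the cut-offs printed by the sketch are illustrative and are not the study's thresholds.

```python
# Sketch of a ROC analysis for copeptin against a binary outcome (e.g.
# increased aIMT); data are invented for illustration only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

copeptin = np.array([3.1, 4.2, 5.0, 6.5, 7.4, 8.1, 9.3, 10.2, 11.0, 12.5])
aimt_pos = np.array([0,   0,   0,   0,   1,   1,   1,   1,    1,    1])

print("AUC =", roc_auc_score(aimt_pos, copeptin))
fpr, tpr, thresholds = roc_curve(aimt_pos, copeptin)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"cut-off > {thr:.1f}: sensitivity = {t:.2f}, specificity = {1 - f:.2f}")
```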
Results

Glycosylated haemoglobin, albumin/creatinine ratio, lipid profile, OxLDL, copeptin, cIMT and aIMT were significantly higher in the patients with diabetes (Table 1). Copeptin was significantly higher in diabetic patients with positive aIMT and cIMT (Table 2). Copeptin had a positive correlation with the age of the diabetic patients, cIMT and aIMT (Table 3). Stepwise multiple regression analysis of copeptin in relation to demographic, anthropometric, laboratory and imaging data in type 1 diabetic patients is shown in Table 4. The ROC curve of copeptin for detection of atherosclerosis in relation to carotid intimal medial thickness and aortic intimal medial thickness in type 1 diabetic patients is shown in Table 5.
Discussion
In our study, adolescent type 1 diabetic patients had higher HbA1c, microalbuminuria, dyslipidemia and higher aIMT and cIMT, and aIMT was found to be higher than cIMT in diabetic patients. These results are comparable with previous studies [15], [16]. Järvisalo et al. [17] revealed that young type 1 diabetic patients have an increased incidence of subclinical atherosclerosis. McGill et al. [12] found that the intima of the abdominal aorta is affected before the intima of the carotid artery, and aIMT allows early diagnosis of preclinical atherosclerosis in children [18].
In the current study, copeptin was higher in adolescent type 1 diabetics and in patients with higher cIMT and aIMT. Stepwise multiple regression analysis revealed that copeptin had a significant correlation with aIMT. Our results are comparable with previously reported findings [19]: copeptin had a strong relationship with atherosclerosis and diabetic kidney disease in adult type 1 diabetic patients [19].
In the present study, copeptin had no significant correlation with the albumin/creatinine ratio. As copeptin and AVP are secreted from the neurohypophysis, their levels increase in many medical conditions in type 2 diabetic patients, such as acute myocardial infarction [20], cardiovascular mortality [3] and diabetic kidney disease.
Vasopressin may increase systemic and glomerular blood pressure, as it promotes vasoconstriction, enhances gluconeogenesis and glucagon release, and leads to fat accumulation. Vasopressin aggravates kidney disease in animals and increases proteinuria in humans [2].
Bjornstad et al. [19] revealed that copeptin is high in adult type 1 diabetic patients and is related to albuminuria, impaired glomerular filtration rate (GFR) and increased coronary artery calcification. Increased copeptin levels are associated with cardiorenal complications irrespective of the control of blood glucose, lipid profile and blood pressure. Also, Schiel et al. [21] reported that copeptin might be considered a marker of renal function in children and adolescents with type 1 diabetes mellitus. Moreover, the concentration of copeptin may also be related to stress, behavioural and lifestyle factors, as well as inflammatory activity and the lipid profile.
In the current study, the ROC curve of copeptin showed that the cut-off level of copeptin for detection of increased aIMT was > 7, with specificity and sensitivity of 100% and 95.2%, respectively, and for cIMT the cut-off was > 9, with specificity and sensitivity of 100% and 60.7%. Copeptin therefore shows better sensitivity with aIMT, which is affected earlier than cIMT, and can be used for early detection of atherosclerosis.
In conclusion, copeptin is high in adolescent type 1 diabetic patients irrespective of the control of blood glucose, dyslipidemia and hypertension, and it can be used for early detection of atherosclerosis. Copeptin showed no relation to diabetic nephropathy. AVP receptor antagonists (vaptans) may be considered for adolescent type 1 diabetic patients with cardiorenal complications. A follow-up study with a larger number of patients is recommended to determine whether copeptin plays a causal role in diabetic atherosclerosis. | 2020-01-30T09:06:48.874Z | 2019-12-13T00:00:00.000 | {
"year": 2019,
"sha1": "1e51e7893859c0ab11971e8e6bf75301fb3e8baa",
"oa_license": null,
"oa_url": "https://doi.org/10.3889/oamjms.2019.643",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f9be0015df74893a88cd6d6019f46d8ebd9d7d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17197036 | pes2o/s2orc | v3-fos-license | Improvement of hepatic bioavailability as a new step for the future of statin
Statins (HMG-CoA reductase inhibitors) are a group of highly efficient pharmacological agents used for reducing blood cholesterol level and prevention/treatment of cardiovascular disease. Adverse reactions during statin treatment affect quite significant numbers of patients (reportedly from 5% to 20%), with more side effects occurring at higher doses. Reduced statin dosing can be achieved by improved bioavailability of statins, which is fairly low due to poor aqueous solubility, low permeability and high molecular weight of some members of the statin family. Moreover, since hepatic cholesterologenesis is a main target of statin action and extrahepatic inhibition of HMG-CoA reductase has no effect on plasma lipids, hepatic bioavailability, in our opinion, becomes a new important modality of statins maximizing their potential effect on the plasma lipid profile and diminishing their extrahepatic toxicity. Therefore efficient delivery systems of statins into hepatocytes need to be developed and introduced. Uses of nano-emulsifying statin delivery systems which may include vectors of intrahepatic transport, in particular lycopene, are discussed. As a proof of concept, some preliminary results revealing the effect of a lycopene-containing nanoformulation of simvastatin (designated as Lyco-Simvastatin) on LDL in mildly hypercholesterolemic patients are shown.
Introduction
Cholesterol is an essential component of every living eukaryotic cell required for the formation of cell membranes and cellular organelles, also serving as a major metabolic precursor of all steroid hormones, bile acids and vitamin D [1]. Over 90% of cholesterol is located in the tissues and organs of an animal body, whereas 7-10% is associated with plasma and blood cells. Since cholesterol is a highly insoluble substance, it is transported in blood by particles, called lipoproteins, which are classified according to their density. Low-density lipoproteins (LDL) are known to be a major means of transport of cholesterol in human blood [2]. An overwhelming number of epidemiological studies suggest that elevated levels of LDL and total cholesterol in blood are linked to increased incidence of cardiovascular disease and higher mortality of cardiovascular patients [3,4]. On the other hand, reduced LDL and total cholesterol values confer lower occurrence of atherosclerosis, stroke and myocardial infarction in patients with atherosclerosis and peripheral artery disease [5][6][7]. Therefore, reduction of blood cholesterol, especially its LDL fraction, represents the most popular and modern approach in the prevention and pharmacological management of cardiovascular disease [8,9].
Statins
Approximately 80% of cholesterol in human blood originates from the liver [10]. Therefore a search for effective pharmacological inhibitors of hepatic lipogenesis initiated decades ago remains the most effective strategy in the prevention and treatment of hypercholesterolemia and atherosclerosis. Screening of different xenogenic compounds resulted 30 years ago in the discovery of selective inhibitors of 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), a rate-limiting enzyme of the cholesterol biosynthetic pathway in the liver [11]. Notably, pharmacological inhibition of any other enzyme belonging to the cholesterol biosynthesis pathway in the liver is not as effective in terms of the cholesterol reduction in blood [12]. Subsequently, a new class of pharmacological compounds called statins emerged. This group recently includes lovastatin, fluvastatin, pitavastatin, pravastatin, rosuvastatin, simvastatin and atorvastatin. All of them have a closely related chemical structure and work exclusively by activation of the hepatic clearance of plasma lipoprotein via an LDL-receptor pathway and further elimination of cholesterol from the human body with bile [11][12][13]. Thus, the reduction of cholesterol in the blood of statin-treated patients is a strictly liver-mediated phenomenon [14]. It has to be emphasized that HMG-CoA reductase, the only known molecular target of statins mediating their effects on the lipid profile, is a liver-specific enzyme poorly expressed in other tissues [15]. Despite undisputable health benefits of the statin treatment proven in hundreds of research projects (reduction in heart attack/sudden cardiac death incidence by 60%, and strokes by 17%), there are some significant concerns related to their longterm use in clinical practice [16,17]. It is believed that poor compliance and insufficient persistence in statin treatment does not confer measurable health benefits [18]. However, long-term intake of statins is associated with significant side effects. Adverse reactions during statin treatment affect quite significant numbers of patients (reportedly from 5% to 20%), with more side effects occurring at higher doses [19,20]. Myopathies, memory impairment, neuropathies, increased risk of type 2 diabetes, elevated liver enzymes, general weakness and depression are reported in statin-treated patients [21]. Among them, muscle damage leading in extreme cases to fatal rhabdomyolysis is considered to be the most severe side effect of statin therapy [22,23]. It has to be emphasized that toxic effects of statins develop in a strictly dose-dependent manner and often subside when the dose is reduced. Therefore, avoidance of unnecessarily intense treatment schedules and targeting the statin delivery to the liver might be very useful in the prevention of statin toxicity.
Bioavailability of statins
Reduced statin dosing can be achieved by improved bioavailability of statins, which is fairly low due to poor aqueous solubility, low permeability and high molecular weight of some members of the statin family. As an example, the bioavailability rate for lovastatin is astonishingly low, approximating 5% only, whereas the reported value for simvastatin is much higher, reaching up to 60% [24]. Although limited aqueous solubility of statins is considered to be an important cause of their low bioavailability, solubility enhancement, as well as maximizing intestinal absorption of statins used as a single approach, is not likely to be a successful strategy. A high concentration of statins in the systemic circulation may aggravate the statin side effects and toxicity in the long term.
It can be assumed that since hepatic cholesterologenesis is a main target of statin action, an efficient system of delivery of statins into hepatocyte needs to be developed. Therefore hepatic bioavailability becomes a new important modality of action of statins in the human body, maximizing their potential effect on the plasma lipid profile and diminishing their extrahepatic toxicity.
Intrahepatic availability as a keystone feature of statin action
Occurrence of statins in the systemic circulation upon absorption does not necessarily translate into an immediate lipid-lowering effect. At the onset of their action, statins have to become available for binding with HMG-CoA reductase inside hepatocytes. However, statins tend to be widely distributed among different internal organs and tissues (liver, spleen, adrenal glands, adipose tissue and muscles) after absorption [25]. Once again, the liver is a main target organ for HMG-CoA reductase inhibitors, and statin action in non-hepatic tissues has no known therapeutic benefits. Minimizing deposition of statins in extrahepatic tissues as well as enhancing statin delivery to the liver would be very beneficial for clinical practice. First-pass uptake of statins by hepatocytes is reported to be mediated by different mechanisms including passive diffusion, and active, carrier-mediated transport through the hepatocyte membrane with organic anion transport polypeptide-C is thought to be essential for hepatocellular delivery of hydrophilic statins [24,26]. Strikingly, rosuvastatin, a known champion of hepatoselectivity among statins, whose intra-hepatic delivery rate reaches 90% of the dose absorbed in the intestine, has the most prominent effect on plasma LDL as well as remarkably low toxicity [24].
Taking everything into consideration, it can be stated that an ideal statin delivery system has to meet at least two basic requirements. Firstly it has to provide efficient transport of statins through a gastrointestinal barrier, and secondly it has to be capable of effective intrahepatic delivery of the drug. Moreover, there is a huge and as yet poorly explored potential for use of endogenous receptor-mediated hepatic pathways to promote intrahepatic delivery of pharmaceuticals. Various receptors expressed on hepatocyte membranes have extreme selectivity and efficiency in the internalization of different ligands, in contrast to less efficient passive and active diffusion of xenobiotics through the hepatocyte membrane. Therefore construction of microparticles containing a xenobiotic compound bound to the ligands of receptor-mediated hepatic uptake may represent a new strategy in enhancing hepatoselectivity of statins.
From this standpoint, a novel statin delivery system, designated as Lycostatin, was developed in our work [27]. Lycostatin is a new formulation of statins in which a HMG-CoA reductase inhibitor, in particular simvastatin (named Lyco-Simvastatin), is incorporated into a microemulsifying system using spray drying, ultrasound, and supercritical CO 2 [28]. This system contains lycopene, a hydrophobic compound, which is used not only as a core-forming agent, but also as a vector with high tropism to hepatocytes, which are known to express abundantly a carotenoid receptor. In addition it contains amphiphilic phosphatidylcholine as a chaperone for lycopene, which also has hydrophilyzing as well as emulsifying properties and increases thereby intestinal absorption. In a water-free environment Lyco-Simvastatin is a composition of nano-sized lycosome particles. In experimental settings, the solubilized lycosome particles have an enhanced intestinal absorption rate and ability to bind hepatocyte membranes as compared to unmodified simvastatin (Petyaev et al., unpublished observation). The preliminary clinical results for Lyco-Simvastatin use are shown in Figure 1. As can be seen from our preliminary results (Figure 1), Lyco-simvastatin has superior activity in reducing LDL levels in patients with hypercholesterolemia at the same dosage level (20 mg daily) as compared to the unmodified simvastatin (p = 0.0049).
Although further research related to pharmacology of Lyco-Simvastatin (as well as other lycosome-formulated statins) still needs to be done, these results allow us to assume that higher functional activity of Lyco-Simvastatin could be attributable to enhanced hepatic delivery of the drug arising from the specifics of the nanoparticle composition used. The interface area of lycosome-formulated statin microparticles contains lycopene, a carotenoid utilizing a unique transport system inside the human body. It is well acknowledged that upon absorption lycopene crystals and/or lycopene-containing nanoparticles (lycosomes) become incorporated into chylomicrons to be distributed in the human body by lymph and blood flows [29]. Inside the liver the lycosome-containing chylomicrons are likely to undergo a dual receptor-mediated uptake. Since lycosome-containing chylomicrons include in their core lycopene, a powerful ligand for carotenoid receptors, expressed by hepatocytes, they become more easily internalized by these cells via a carotenoid receptor mechanism, promoting thereby intrahepatic delivery of lycosome-formulated statins. Besides the carotenoid receptor, the enhanced hepatocellular delivery of Lycostatin can be confidently explained by an LDL-receptor mechanism, which represents, in our opinion, a second pathway of intrahepatic uptake. It is well known that chylomicrons and products of their enzymatic degradation (LDL and VLDL) are transported inside hepatocytes using the LDL receptor mediated by ApoB, an intrinsic component of low-density lipoprotein particles [30].
Conclusions
Discovery of statins and their further development started with scrupulous investigation and subsequent chemical modifications of compactin, a single naturally occurring small molecule produced by a fungus from the Penicillium family [31,32]. In recent times the search for new statins has been virtually exhausted, since computational chemistry does not predict any new statin derivative showing inhibitory activity towards HMG-CoA reductase [33]. Therefore, developments in the pharmacology of hypercholesterolemia will be limited in the foreseeable future to the already known statins, while optimization of their delivery systems and bioavailability may offer new therapeutic benefits. However, the projected use of statins is likely to grow over the next decades as new indications for their use become substantiated [19,34].
In these terms, the development of statin formulations with increased hepatic bioavailability would be a significant step forward in the treatment of cardiovascular disease. Incorporation of simvastatin into lycopene-containing microparticles promotes their enhanced absorption and subsequent incorporation into chylomicrons, with further hepatic uptake via a dual carotenoid/LDL receptor mechanism, thereby ensuring targeted delivery of the drug to the liver. It is possible that other vectors promoting efficient hepatic delivery can be used for new statin formulations with enhanced therapeutic efficiency. Redirecting the drug flow to the liver not only allows statin dose reduction but also minimizes exposure of the tissues vulnerable to statin action (muscles, nerve tissue, etc.), thereby reducing adverse effects. This would help to expand the use of this drug class to the broader population and further reduce the prevalence of cardiovascular disease and other clinical complications of atherosclerosis. | 2018-04-03T04:57:48.693Z | 2015-04-23T00:00:00.000 | {
"year": 2015,
"sha1": "ab3bb8d0d4d7100f1b610e1a4207e9df1c15f7b4",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-19/pdf-25022-10?filename=improvement.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab3bb8d0d4d7100f1b610e1a4207e9df1c15f7b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233790473 | pes2o/s2orc | v3-fos-license | On the Computation of Some Interval Reliability Indicators for Semi-Markov Systems
In this paper, we computed general interval indicators of availability and reliability for systems modelled by time non-homogeneous semi-Markov chains. First, we considered duration-dependent extensions of the Interval Reliability and then we determined an explicit formula for the availability with a given window and containing a given point. To compute the window availability, an explicit formula was derived involving duration-dependent transition probabilities and the interval reliability function. Both interval reliability and availability functions were evaluated considering the local behavior of the system through the recurrence time processes. The results are illustrated through a numerical example. They show that the considered indicators can describe the duration effects and the age of the multi-state system and be useful in real-life problems.
Introduction
Reliability measures of repairable systems have been extensively investigated. Specific indicators are used according to the characteristics of the system that the user wishes to understand and to the nature of the system. For example, it is common to come across reliability, availability, and maintainability functions when dealing with general mechanical systems (see, e.g., [1]) or to single-use reliability function for software performance assessment (see, e.g., [2,3]). Frequently, the evolution of the system is conveniently described by multi-state models where the state of the system evolves in time according to a specified probabilistic structure (see, e.g., [4]). One of the most popular choices is for Markov chain models in continuous or discrete-time cases (see, e.g., [5]). Markov models rely on the Markovian property that informally states that the future state of a system is independent of its past evolution given the state occupied at present. Unfortunately, this property is rarely observed on real data in reliability studies as well as in different domains of applications. For this reason, the proposal of more general frameworks is becoming, even more, a rule rather than an exception. This is confirmed by the success achieved by semi-Markov models in different scientific domains such as applied probability [6], financial credit ratings [7], population dynamics [8], asymptotic behavior of random systems [9] risk assessment and evaluation [10], change of measures in credit risk [11] and pricing problems [12].
Semi-Markov processes have been applied in reliability studies by several authors. Studies based on discrete-time semi-Markov processes in homogeneous case (see e.g., [13]) and in the non-homogeneous case (see e.g., [14]) have demonstrated the ability of this class of stochastic processes to describe and represent problems of reliability theory in a more flexible and satisfactorily way as compared to Markovian models. Continuous-time models were considered in [15] as related to the dependability analysis of a semi-Markov system, in [16] for numerical treatment of probabilistic functions in homogeneous case and in [17] for non-homogeneous processes. Models with general state space related to reliability measures were considered in [18] and existence and uniqueness of solutions of Markov renewal equations were investigated in [19]. Recent developments concern indexed semi-Markov models [20] and the development of reliability measures for those systems showing the presence of an indexed mechanism [21].
In semi-Markov processes, the transition probabilities and related indicators are duration dependent, that is, the time the system spends in a state influences its transition probabilities, see e.g., [22,23]. This effect can be shown by computing transition probabilities including the information contained in the recurrence time processes. Recurrence time processes play an important role in describing the local behavior of a renewal process, see e.g., [24]. They are also intimately related to semi-Markov processes, which are a multivariate extension of renewal processes. For this reason, they have been investigated by many authors both in connection with the asymptotic behavior of the process ([25,26]) and with the transient analysis, see e.g., [27]. The recurrence times are of a backward and forward type. The former denotes the time since the last transition of a system or, in other words, the time elapsed in the current state occupied by the system. The latter denotes the time to the next transition. The reason for the existence of this duration dependence resides in the fact that the conditional waiting time distribution functions in the states of the system, i.e., the length of time in a state before making a transition, can be of any type; in particular, distributions that are not memoryless can be used. In this case, the time length spent in the starting state (backward value) changes the transition probabilities, as does the information concerning how long the process will stay in the current state (forward value). The consideration of backward and forward processes at the initial and final times permits us to have complete knowledge of the waiting times at the beginning and at the end of the observation period of the model; this issue has been investigated in discrete time [22], in continuous time models [23] and in relation to the mono-unireducible topological structure [28].
Recently, a stream of research has focused on general performance measures of a system. The proposed measures generalize classical reliability indicators. These measures are interval based in the sense that they refer to properties of the system not in relation to a point in time but rather to an interval of time.
Confining our attention to semi-Markov systems, the first contributions dealing with interval measures, specifically with the interval reliability, are those by [29,30]. In those papers, the author determines a system of integral equations that the interval reliability function should satisfy. The solution gives the probability of the system being operational in a given time interval originating at some time s and of length x. This measure contains, as special cases, the availability function and the reliability function and has also been evaluated for discrete-time systems, see [31,32]. Similar ideas are at the origin of another interval-based measure: the availability of a given window and containing a point. This function is defined as the probability that a repairable system is operational throughout an interval window of length s which contains a point in time x. This interval availability function was introduced by [33] for Markov repairable systems in continuous time and successively generalized by [34] for discrete-time semi-Markovian systems. The results are achieved by determining specific relations concerning the Z-transforms of the working period length, failure period length and whole period length. These Z-transforms are used to get a representation of the corresponding Z-transforms of reliability and availability measures and require the application of inverse Z-transforms to produce numerical results.
In the present paper, we considered several new aspects in the computation of interval-based performability indices. First, we extended the framework from time-homogeneous processes to a more general time non-homogeneous setting. In our case, the indicators depend on the initial time when the evaluation is done. Accordingly, the age of the system is fully taken into account. Second, we computed the indicators involving the recurrence time processes at the initial time. This extension allows us to consider the duration effect properly. Third, we provided a new proof for the availability of a given window and containing a point that does not make use of transform analysis. The proof is based on the introduction of specific random times and on the exploration of the relationship among the duration-dependent transition probability function, the duration-dependent interval reliability and the availability of a given window. The result is a new formula linking the aforementioned indicators.
The next section contains a short description of discrete-time non-homogeneous semi-Markov processes with recurrence time processes. The section after presents the main results of the paper. First, the general framework of performability analysis through multistate systems is presented together with classical reliability indicators. Then, the Duration Dependent Interval Reliability function is presented and explicit formulas are given for its calculation in different cases. The section ends by considering the Duration Dependent Availability of given window and containing a point and a new relation that can be used for its computation. Section 4 presents an example of a repairable semi-Markov system for which all of the considered performance measures are evaluated. In the last section, some concluding remarks are made.
Non-Homogeneous Semi-Markov Models
In this section, discrete-time semi-Markov models are briefly described. Let (Ω, F, P) be a probability space equipped with a filtration F = (F_t, t ∈ N_0 = {0} ∪ N) satisfying the usual conditions. On this probability space, we defined two random variables denoted by J_n and T_n. The variable J_n, n ∈ N, represents the state of the system at the n-th transition and assumes values in a finite state space E = {1, 2, . . . , m}. The random variable T_n, n ∈ N, with state space equal to N_0, represents the time of the n-th transition. The filtration F = (F_t, t ∈ N_0) coincides with the natural filtration generated by the joint process (J_n, T_n)_{n ∈ N_0}. The process (J_n, T_n) is supposed to be a non-homogeneous discrete-time Markov Renewal Process. Accordingly, we assume that:

Q_{i,j}(s, t) := P[J_{n+1} = j, T_{n+1} ≤ t | J_n = i, T_n = s].

The probabilities Q = [Q_{i,j}(s, t)] define the so-called semi-Markov kernel. They can be written as follows:

Q_{i,j}(s, t) = p_{i,j}(s) · G_{i,j}(s, t).

The main difference between a non-homogeneous Markov process and a non-homogeneous semi-Markov process (NHSMP) resides in the family of probability distributions G_{i,j}(s, ·) = P[T_{n+1} ≤ · | J_n = i, J_{n+1} = j, T_n = s]. Indeed, in a Markovian framework, these functions have to be geometrically distributed, while in the semi-Markov case they can be of any type. The probabilities p_{i,j}(s), i, j ∈ E, s ∈ N_0, represent the transition probabilities of the non-homogeneous embedded Markov chain {J_n}_{n ∈ N_0}. They denote the probability of having the next transition in state j given that the system entered state i at current time s. Now, let N(t) = max{n ∈ N | T_n ≤ t} be the number of transitions up to time t; then the discrete-time non-homogeneous semi-Markov chain is defined according to:

Z(t) = J_{N(t)}.

The process Z(t) indicates the state occupied by the embedded Markov chain at its last transition.
Transition probability functions are defined in the following way:

φ_{i,j}(s, t) = P[Z(t) = j | J_{N(s)} = i, T_{N(s)} = s].

Hence, they denote the probability of being in state j at time t given that the system entered state i at time s. They are obtained by solving the following evolution equations, see e.g., [21]:

φ_{i,j}(s, t) = δ_{i,j} [1 − H_i(s, t)] + Σ_{a ∈ E} Σ_{θ = s+1}^{t} [Q_{i,a}(s, θ) − Q_{i,a}(s, θ − 1)] φ_{a,j}(θ, t),     (1)

where H_i(s, t) = Σ_{j ∈ E} Q_{i,j}(s, t). The first part on the right-hand side (RHS) of Equation (1) expresses the probability that the system does not have any transition up to time t conditional on the entrance into state i at time s. The second term represents the probability that the system will enter state a at time θ, given that it entered state i at time s, and then, after the execution of this transition, will follow one of the possible trajectories connecting state a at time θ to state j at time t. This event is considered for all possible values of a ∈ E and θ ∈ {s + 1, . . . , t}.
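To make the evolution equation concrete, the following minimal Python sketch evaluates the transition probabilities by the recursion above. It assumes, purely for illustration, that the cumulative kernel is stored as a NumPy array Q[i, j, s, u] with Q[i, j, s, s] = 0; any equivalent representation would work, and in practice the recursion would be tabulated over θ rather than called recursively.

```python
import numpy as np

def transition_probabilities(Q, s, t):
    """phi[i, j] = P[Z(t) = j | system entered state i at time s].

    Q[i, j, s, u] is the cumulative semi-Markov kernel
    P[J_{n+1} = j, T_{n+1} <= u | J_n = i, T_n = s], with Q[i, j, s, s] = 0 (assumption).
    """
    m = Q.shape[0]
    if t == s:
        return np.eye(m)
    H = Q[:, :, s, t].sum(axis=1)            # probability of leaving state i by time t
    phi = np.diag(1.0 - H)                   # survival term: no transition in (s, t]
    for theta in range(s + 1, t + 1):
        q = Q[:, :, s, theta] - Q[:, :, s, theta - 1]   # jump exactly at time theta
        phi += q @ transition_probabilities(Q, theta, t)
    return phi
```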
Given the process (J n , T n ) n∈N 0 , it is possible to introduce two stochastic processes of recurrence times: the backward process B(t) and the forward process F(t). They are defined according to: (2) Following the general notation adopted in [22] we consider some transition probability functions with recurrence time processes that will be needed to reach our scopes. First, define by: We call Equation (3) the transition probability functions with initial backward and forward. These probabilities can be obtained according to the following equation: Equation (4) reveals that the probability to be in state j at time t depends on the local behavior of the process in the initial time s, i.e., on the state occupied at that time and also on the time since last jump and on the time needed to have next transition. The transition probabilities in (4) can be obtained once Equation (3) is solved. Particular cases of Equation (4) are those where the recurrence time processes are considered separately. Precisely, we can have transition probability with initial backward which satisfy relation: with b φ a,j (0, θ; t) = φ a,j (θ; t), and transition probability with initial forward: which satisfy formula: The last probability of our interest is the one with initial and final backward times, i.e., which satisfy relation: and All of the relations presented before are particular cases of the more general transition probability functions with initial and final backward and forward presented in [22].
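As a small illustration, and assuming the standard representations B(t) = t − T_{N(t)} and F(t) = T_{N(t)+1} − t (which match the verbal description given above), the backward and forward recurrence times can be read off a simulated sequence of jump times as follows.

```python
def recurrence_times(jump_times, t):
    """Return (B(t), F(t)) from a sorted sequence of jump times T_0 <= T_1 <= ...

    B(t) = t - T_{N(t)}   : time elapsed in the current state (backward)
    F(t) = T_{N(t)+1} - t : time until the next transition (forward)
    """
    past = [T for T in jump_times if T <= t]
    future = [T for T in jump_times if T > t]
    if not past or not future:
        raise ValueError("t must lie strictly between the first and the last recorded jumps")
    return t - past[-1], future[0] - t

# Example: with jumps at 0, 3, 7 and 12, at t = 5 the backward and forward times are (2, 2).
```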
Interval-Based Performability Measures
In this section, we present the main results of the paper. First, we described the framework of multistate systems as applied to reliability studies, and successively, we analyzed two performability measures for a repairable system based on semi-Markov process and we derived specific recurrent relations useful for their computation.
The General Framework of Performability Analysis through Multi-State Systems
A general approach to measure the performance of a system is to consider a state space E = {1, 2, . . . , m} as a representation of the different levels to which a system can perform. In some circumstances, it may be opportune to assume an ordering relation on E so that to lower ranks i ∈ E correspond to a lower system's performance. It is frequent to partition the state space E into two disjoint sets U and D such that: The subset U contains all the elements of E which denote that the system is operational (or working well), instead the subset D contains all the states of E in which the system is not well performing or has fault. The system changes its performance in time by migrating from one state to another. According to our working hypothesis, we assume the stochastic behavior of the system can be well represented by a non-homogeneous discrete-time semi-Markov process The overall quality of the system can be measured by introducing specific indicators that we need to remember in a non-homogeneous environment.
The availability function for a non-homogeneous semi-Markov system can be defined by:

A_i(s, t) = P[Z(t) ∈ U | J_{N(s)} = i, T_{N(s)} = s].

This function expresses the probability that a system ranked i at time s will be operational at time t. This indicator can be computed using the following formula:

A_i(s, t) = Σ_{j ∈ U} φ_{i,j}(s, t).

The reliability function for a non-homogeneous semi-Markov system can be defined by:

R_i(s, t) = P[Z(u) ∈ U, ∀u ∈ {s, . . . , t} | J_{N(s)} = i, T_{N(s)} = s].

This function expresses the probability that a system ranked i at time s will never experience a fault (visit to subset D) from time s up to time t. This indicator has been evaluated through a transformation of the semi-Markov kernel that renders the states of D absorbing. In formula:

R_i(s, t) = Σ_{j ∈ U} φ̃_{i,j}(s, t),

where φ̃_{i,j}(s, t) are the transition probabilities computed by using the following kernel transformation:

p̃_{i,j}(s) = p_{i,j}(s) if i ∈ U,   p̃_{i,j}(s) = δ_{i,j} if i ∈ D.     (15)

The transformation (15) defines a new semi-Markov kernel for which all the states of the subset D are changed into absorbing states, see [14].
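The two indicators can be obtained directly from the transition probabilities; the hedged sketch below reuses the transition_probabilities helper introduced earlier and realizes the kernel transformation simply by making every state of D absorbing (an equivalent way of expressing the transformation described above), with U and D given as sets of state indices.

```python
def availability(Q, U, i, s, t):
    """A_i(s, t): probability that the system is in an up state at time t."""
    phi = transition_probabilities(Q, s, t)
    return sum(phi[i, j] for j in U)

def reliability(Q, U, D, i, s, t):
    """R_i(s, t): probability of no visit to D up to time t (modified kernel, D absorbing)."""
    Q_abs = Q.copy()
    for d in D:
        Q_abs[d, :, :, :] = 0.0   # no transition ever leaves a down state, so D is absorbing
    phi = transition_probabilities(Q_abs, s, t)
    return sum(phi[i, j] for j in U)
```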
The availability and the reliability functions have also been generalized by considering the influence of the recurrence time processes B(t) and F(t) as developed for example in [22].
The Duration Dependent Interval Reliability Function
The notion of Interval Reliability has been introduced for continuous-time semi-Markov processes in [29,30] and only recently it has been investigated in relation to discretetime semi-Markov processes in [31,32]. In this subsection, we extended this indicator to the more general non-homogeneous discrete-time semi-Markov framework and we derived formulas that consider the influence of recurrence time processes in different ways.
First, we defined the Non-Homogeneous Interval Reliability IR i (s; t, p), s, t, p ∈ N, s < t, as the probability that the system is working at time t and will continue to work for the next p time units given that at time s the system entered state i. In formula: This measure is of particular interest and includes as special cases both the reliability and availability function. In this regard, it is sufficient to observe that: The calculation strategy adopted in [31] can be adjusted to the time non-homogeneous processes. Thus, it is possible to obtain the following relation: It should be remarked that Equation (16) is of recursive type and can be seen as a Markov Renewal Equation for which well-known computational methods have been proposed to get a solution. The seminal contribution of Erhan Çinlar [26] was followed by an extensive treatment in [27] and some recent results given in [19]. Numerical methods were developed in [35] while general indexed Markov renewal equations were considered in [20] together with their numerical solution. Now, we proceed to compute the Interval Reliability using the incremental information brought by the recurrence time processes in different cases. To this end, we define the Duration Dependent Interval Reliability DIR i (v, s, u; t, p), v, s, u, t, p ∈ N, v < s < u < t < t + p, as the probability that the system is working at time t and will continue to work for the next p time units given that at time s the system occupies state i being entered in this state in the last transition at time v and will exit from this state at time u. In formula: Thus, this function expresses the probability of the same event considered by IR i (s; t, p) but evaluated on an enlarged information set which includes local behavior of the system around the present time s. Any difference between DIR i (v, s, u; t, p) and IR i (s; t, p) is only due to the influence of the recurrence time processes at the initial time s.
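Since the recursive relation (16) requires the whole kernel, it is often convenient to check it against a definition-based Monte Carlo estimate. The sketch below assumes a user-supplied simulator simulate_path(i, s, horizon) (a hypothetical helper, made concrete in the numerical example of Section 4) that returns a list path with path[u] = Z(u) for u = 0, . . . , horizon, given that the system entered state i at time s.

```python
def interval_reliability_mc(simulate_path, U, i, s, t, p, n_paths=100_000):
    """Monte Carlo estimate of IR_i(s; t, p): the probability that Z(u) is in U
    for every u in [t, t + p], given that the system entered state i at time s."""
    hits = 0
    for _ in range(n_paths):
        path = simulate_path(i, s, t + p)
        if all(path[u] in U for u in range(t, t + p + 1)):
            hits += 1
    return hits / n_paths
```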
It should be remarked that while the event {B(s) = s − v} ∈ t , the same does not hold for {F(s) = u − s} which is not t -measurable. Accordingly, the conditioning at time s on possible values of the forward process F(s) may serve as a strategy to build scenario-based perturbations of a reliability indicator.
A slightly more general indicator can be defined by allowing the process F(s) to assume values in a specified time interval, namely [a − s, b − s]. This motivates the following definition. The Duration Dependent Interval Reliability DIR_i(v, s, [a, b]; t, p) is defined as the probability that the system is working at time t and will continue to work for the next p time units, given that at time s the system occupies state i, having entered this state at the last transition at time v, and will exit from this state at a moment belonging to the time interval [a, b]. In formula: The following proposition provides formulas for computing the Duration Dependent Interval Reliability according to five different cases, which we can observe depending on the relationship between the temporal variables.
for a discrete-time non-homogeneous semi-Markov system can be expressed by the following six cases: (i) For s < a < b < t < t + p, we can prove that: (ii) For s < a < t < b < t + p, we have that: For s < a < t < t + p < b, it results that: (iii) For s < t < a < b < t + p, we have that: Proof. We start with the proof of Equation (17) which corresponds to the case when s < a < b < t < t + p. In this eventuality it results that: The denominator of Equation (23) is given by: The numerator of Equation (23) A substitution of Equations (24) and (25) in Equation (23) gives Equation (9). Equation (18) corresponds to the case when s < a < t < b < t + p. Let us consider again Equation (23): The denominator has been calculated in Equation (24), whereas the numerator can now be represented as follows: A substitution of Equations (24) and (26) in Equation (23) gives Equation (18). Equation (19) corresponds to the case when s < a < t < t + p < b. Let us consider again Equation (23): The denominator has been calculated in Equation (24), whereas the numerator can be now represented as follows: A substitution of Equations (24) and (27) in Equation (23) gives Equation (19). Equation (20) corresponds to the case when s < t < a < b < t + p. Let us consider again Equation (23): The denominator has been calculated in Equation (24), whereas the numerator can be now represented as follows: A substitution of Equations (24) and (28) in Equation (23) gives Equation (20). Equation (21) corresponds to the case when s < t < a < t + p < b. The starting point is always Equation (23) for which the denominator has been calculated in Equation (24). The numerator can be now represented as follows: A substitution of Equations (24) and (29) in Equation (23) gives Equation (21). Equation (22) corresponds to the case when s < t < t + p < a < b. In this case, directly from the definition of the Duration Dependent Interval Reliability we get:
Corollary 1.
The Duration Dependent Interval Reliability DIR i (v, s, u; t, p), v, s, u, t, p ∈ N, v < s < u < t < t + p for a discrete-time non-homogeneous semi-Markov system can be expressed by the following three cases: (i) For s < t < t + p < u, we have that: (ii) For s < t < u < t + p, we have that: (iii) For s < u < t < t + p, we have that: Proof. Equation (30) is obtained simply considering Equation (22) with a = b = u.
Equations (31) and (32) are obtained from Equations (20) and (18) with a = b = u and observing that in this case we obtain: The Duration Dependent Interval Reliabilities DIR i (l, s, u; t, p) and DIR i (l, s, [a, b]; t, p) provide important information to the reliability engineer. In particular, the backward value s − l at initial time s permits to include in the model the information related to the time occupancy of the current state of performance of the system. This allows the possibility to differentiate the evaluation of the reliability of the system according to the time elapsed in the current state. This feature is a prerogative of the semi-Markovian models and cannot be reproduced by Markov chain based models. The indicator DIR i (l, s, u; t, p) considers also the impact of the value u of the forward process at time s and permits the measurement of the effect caused by the time in which the first transition after the current time s will happen. Since this time cannot be known at the present time s, any conjecture on its value can be used to build up a scenario analysis of the reliability of the system. Due to the uncertainty on the value of the forward process, the indicator DIR i (l, s, [a, b]; t, p) permits the advancement of even mild belief on the value of F(s) which can now be expressed in interval form. All of the obtained relations (Equations (18)-(32)) express the Duration Dependent Interval Reliabilities as a function of the Reliability and Interval Reliability of the system.
The Duration Dependent Availability of Given Window and Containing a Point
In this subsection, we dealt with another performability measure based on intervals. In particular, we considered the availability of a given window and containing a point. This measure has been introduced for Markov repairable stochastic systems in [33] where the corresponding calculation formula is derived using the Laplace transform technique. A further generalization is provided in [34] where the analysis is extended to discrete-time homogeneous semi-Markov systems. Again the results are obtained using the mathematical apparatus based on transform analysis which requires inverse transformation to get numerical results useful in applications. Here, we extended the investigation to include non-homogeneous discrete-time semi-Markov process with duration dependence effects. We demonstrated how to derive a formula for this indicator without making use of transform analysis and exploiting the relationship between this indicator, the Duration Dependent Interval Reliability DIR i (v, s; t, p) and the duration dependent transition probability To achieve this result we needed to introduce the formal definition of the indicator, some auxiliary random times, and corresponding properties.
We defined the Duration Dependent Availability of the given window and containing a point DA (i,v,s) (τ; x), v, s, τ, x ∈ N, v < s < x , ∀τ , as the probability that a repairable system works throughout an interval window which has at least a length τ and contains a given point x given that at time s the system was in state i with a time elapsed in this state equal to s − v. In formula: We defined the last failure time before time x as: with the convention that S x = +∞ when sup{t < x : Z(t) ∈ D} = ∅. We also defined the excess time of x at the level τ as the random variable E x,τ defined according to: It denotes the minimum time to add to x − S x to reach at least the length τ. Let us introduce two useful subsets of the sample space that are in a direct relation with S x and E x,τ .
First we denote by: and then consider the set: Lemma 1. The subsets W(·, ·, ·) and w(·, ·) satisfy the following relation: Proof. Given three times s, x, τ we can distinguish two cases: Let us first consider the case a). In order to represent the set W(s, x, τ) in this situation we can enumerate all the intervals [c, c + τ] where the system works. The first interval is Thus, the union of all these intervals gives: Set i = c − x + τ to transform Equation (40) into: In the alternative case b), that is when τ > x − s, the enumeration of all possible intervals of length τ covering x starts from the first interval [s, s + τ] which is obtained for c = s and proceeds until the last interval [x, x + τ] which is obtained for c = x. Thus, the union of all these intervals gives: that, after the change of variable i = c − s is transformed into: Equations (41) and (43) can be merged using the max and min operators in a unique expression:
Lemma 2.
The subsetsW(·, ·, ·) and w(·, ·) satisfy the following relation: Proof. Fix the three times s, x, τ such that τ > x − s and x > s and observe that from Lemma 1 we have that: Observe now that for i = 0: then by substitution, Equation (45) becomes: A change of variable posing d = s + i − 1 transforms Equation (47) into: which proves the first part of Equation (44).
Let us consider now the second case which corresponds to times such that τ ≤ x − s and x > s.
From Lemma 1 we have that: s, x)) . Now, observe that: Moreover, from Equation (46) we have: Therefore: Now, set d = x + i − τ − 1, then Equation (49) becomes equal to: In this way by substitution, we obtain that for τ ≤ x − s, W(s, x, τ) w c (s, x) is expressed by the union between the sets, i.e., Proposition 2. The Duration Dependent Availability of given window and containing a point DA (i,v,s) (τ; x), v, s, τ, x ∈ N, v < s < x , ∀τ for a discrete-time non-homogeneous semi-Markov system can be expressed by the following formula: Proof. The Duration Dependent Availability of given window and containing a point can be expressed in term of the set W(s, x, τ) : where the last equality is only considered for introducing a compact notation we shall use extensively. First, consider that: Consider before the first addendum of Equation (51) Now, we distinguish two cases according to whether w(x + i − τ, x + i) and then we have that: Consequently, the first term on the RHS of Equation (52) becomes: On the contrary, for τ > x − s, from Lemma 1 we know that W(s, x, τ) = x−s i=0 w(s + i, s + i + τ) and since: we have that: Thus, the results obtained in the two cases give: It still remains to compute the second addend on the RHS of Equation (51). We proceed by decomposing it according to Lemma 2 as follows: The events {S x = d, E x,τ = d + τ − x} are mutually exclusive for any choice of Now, let us consider the first addendum on the RHS of Equation (55): where the symbol D d−v 0,d−s = {0, 1, . . . , d − s − 1} ∪ {d − v} denotes the set of possible durations (value of the backward process) at time d. Equation (56) can be expressed as follows: Now, notice that: Then, a substitution of the quantities computed in the points (i)-(iv) inside Equation (57) gives the following representation of the first addendum on the RHS of Equation (47): Now, we proceed to compute the second addendum on the RHS of Equation (55). The computations share similar ideas as those generating Equation (58). In particular: Finally, we can compute the third addendum on the RHS of Equation (55). The computations are based again on similar ideas as those which gave Equation (58). In particular: We observed that Equations (58) and (60) may be merged in a unique expression because the indicator function 1 {x−s≥τ} in Equation (60) implies that: Concerning Equation (58), due to the fact that x − s < τ, it can be rewritten replacing the summation as follows: According to Equations (61) and (62), we can write the summation of Equations (58) and (60) by: A substitution of (59) and (63) in (55) and the summation of the results with (54) completes the proof.
A Numerical Example
In this section, we gave a numerical example of the behavior of the Duration Dependent Interval Reliability functions and of the Duration Dependent Availability.
To simplify this application, we considered a repairable system with only two states: state U and state D. When the system is in state U it is working; on the contrary, state D denotes a failure of the system. We modelled a non-homogeneous semi-Markov kernel by fixing a transition probability matrix which makes provision for a system alternating its states between state U and state D, i.e., p_{U,D}(s) = p_{D,U}(s) = 1 for every s, and time-varying sojourn time distributions of Weibull type; among the time-varying parameters, G_{U,D}(s, ·) ∼ W(a = 4.5, b = 1.7) for s ≥ 10. In this way, we were able to model a non-homogeneous semi-Markov process. Then, we used the theoretical relations determined in previous sections to compute the Non-Homogeneous Interval Reliability, the Duration Dependent Interval Reliability and the Duration Dependent Availability of the given window and containing a point. The results have been validated by implementing a Monte Carlo technique based on 100,000 simulated trajectories. The algorithm to simulate the process is described in Figure 1.
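A compact sketch of the simulation scheme summarized in Figure 1 is reported below for the two-state alternating system. Only the Weibull pair quoted above for G_{U,D} with s ≥ 10 is taken from the text; the remaining parameter values were not legible here and are therefore marked as illustrative assumptions in the code.

```python
import numpy as np

rng = np.random.default_rng(0)
U, D = 0, 1   # up / down states of the alternating system

def weibull_params(i, j, s):
    """Scale a and shape b of the sojourn distribution G_{i,j}(s, .).
    Only the (U, D), s >= 10 pair comes from the text; the others are assumptions."""
    if (i, j) == (U, D):
        return (4.5, 1.7) if s >= 10 else (3.0, 1.5)      # s < 10 values assumed
    return (2.0, 1.3)                                      # (D, U) values assumed

def simulate_path(i, s, horizon):
    """Trajectory path[u] = Z(u) for u = 0, ..., horizon, entering state i at time s."""
    path = [i] * (horizon + 1)
    state, clock = i, s
    while clock < horizon:
        nxt = D if state == U else U                       # alternating embedded chain
        a, b = weibull_params(state, nxt, clock)
        sojourn = max(1, int(np.ceil(a * rng.weibull(b)))) # discretized Weibull sojourn
        jump = clock + sojourn
        for u in range(clock, min(jump, horizon + 1)):     # state holds on [clock, jump)
            path[u] = state
        if jump <= horizon:
            path[jump] = nxt                               # Z(jump) is the newly entered state
        state, clock = nxt, jump
    return path
```

Feeding simulate_path into the estimators sketched earlier (interval_reliability_mc and window_covers_point) reproduces, up to Monte Carlo error, the validation exercise described in the text.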
In Figure 2 we plotted the Non-Homogeneous Interval Reliability for initial states U and D; for reference purposes we set s = 0. The behavior is as expected: the probability to work for a time length p is inversely proportional to p itself in both cases, but it is inversely proportional to t when the system is already in state U, while the proportionality with respect to t is the opposite when the starting state is D. The Duration Dependent Availability of the given window and containing a point is shown in Figure 4 for i = U, s = 0, v = 3. The function is plotted depending on x and τ. As expected, the probability decreases when the window length τ increases. We also found a small dependence on x. In fact, for small values and for high values of x the probability is higher.
Conclusions
In this paper, we extended the definition of some important reliability indicators in several directions. First, we considered a discrete-time non-homogeneous semi-Markov repairable model for which interval availability and reliability indicators are defined in such a way as to account for the durational effects in terms of backward and forward recurrence time processes. Then, we analysed the link between these indicators and previously studied measures and we determined new formulas of recurrence type useful for the computation of the new indexes. The results avoid recourse to the transform-analysis apparatus and may be applied in real-life reliability problems. Further developments of this research include the extension of the analysis to indexed semi-Markov models and the application of the results to other applied domains such as finance and renewable energies. A possible further extension consists of the computation of these measures when some of the parameters of the reliability system are expressed in the form of intervals. It is not infrequent, in the study of a mechanical system, to improve the performance evaluation of uncertain systems using interval parameters, see e.g., [36]. Thus, a mixed form of uncertainty can be considered, both probabilistic and engineering in nature. | 2021-05-07T00:02:51.748Z | 2021-03-08T00:00:00.000 | {
"year": 2021,
"sha1": "52bfa79db302df1c464b2bd02aff71bae0467af6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/9/5/575/pdf?version=1615431900",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "48794c5e81626b35b4363cb46289fcb704887dc1",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
216471662 | pes2o/s2orc | v3-fos-license | Classification of All Non-Isomorphic Regular and Cuspidal Arm Anatomies in an Orthogonal Metamorphic Manipulator
This paper proposes a classification of all non-isomorphic anatomies of an orthogonal metamorphic manipulator according to the topology of the workspace, considering cusps and nodes. Using symbolic algebra, a general kinematics polynomial equation is formulated, and the closed-form parametric solution of the inverse kinematics is obtained for the resulting anatomies. The metamorphic design space was partitioned into eight distinct subspaces with a constant number of cusps and nodes by plotting the bifurcating and strict surfaces in a Cartesian coordinate system {θ_π1, θ_π2, d_4}. In addition, several non-singular, smooth and continuous trajectories are simulated to show the importance of this classification.
Introduction
This paper investigates the cuspidality of the anatomies derived by a metamorphic manipulator structure. Metamorphic manipulators are considered as a new special class of serial open-chain manipulators that have been developed to fulfill the high manufacturing demands (high productivity, low maintenance cost, space-saving, high accuracy etc.) [1].
A metamorphic manipulator consists of rigid links, pseudo-joints and active modules that can be easily and quickly assembled into different arm structures. The metamorphosis is achieved through the pseudo-joints [2], which are used to change the arm anatomy [3]. It was shown that a regular metamorphic manipulator system can be used to achieve high kinematic performance adapted to the tasks' requirements [4,5].
The first prototypes of modular, reconfigurable robotic arms started to appear at the end of the previous century. A modular manipulator system was developed, consisting of actuator modules, rigid links and a control unit, and the efficiency of its mechanical hardware and control software was demonstrated [6]. A reconfigurable modular manipulator system (RMMS) with rigid links and intelligent, active joint modules of various sizes was proposed to perform a wide range of simple or more complex tasks thanks to different arm geometries and an embodied sophisticated kinematic, calibration and control software [7]. A rapidly reconfigurable robotic work cell was designed based on component technology with hardware, software and control issues [8], and a fully functional RMMS work cell to perform light machining tasks was introduced [9]. An innovative mechanical conceptual design was proposed for a modular reconfigurable serial manipulator with a unique geometric cubic actuator module design with connecting ports on all faces of the cubic module, so as to minimize the total number of passive or active modules [10].
Presentation of Metamorphic Manipulator
The Metamorphic Manipulator belongs to reconfigurable modular robotic systems that could be reconfigured providing a variety of manipulator anatomies adapted to a wide spectrum of task requirements. The mechanism consists of active modules, pseudo-joints, and link modules (rigid links) [3][4][5]. Active modules (Figure 1a) are mechatronic devices, which are fully equipped with an actuator, harmonic drive, failsafe brake, encoder, power electronics, transmissions and sensors. They are designed with a variety of mechanical standardized coupling connector interfaces that can be easily and quickly assembled into different arm structures. In the present study, the active module is considered as a fully rotational joint.
On the other hand, a pseudo-joint constitutes a versatile connector between successive active joints [2], as shown in Figure 1b. Pseudo-joints are specially designed to receive discrete values in [−90°, 90°] with a step of 15°. Moreover, mechanical interface connectors are developed to align the passive with the active module and couple them together with sufficient strength to transmit the internal forces generated by the load and movement of the arm. The pseudo-joints remain locked in a preselected angular position while the manipulator system is online. The angular position of the passive joint is changed manually offline. The transition from an initial robot metamorphic anatomy to a completely different one is feasible by varying the pseudo-angles without reassembly of the structure. The pseudo-joint angle alteration changes the D-H parameters of the manipulator structure without changing its kinematic topology. In this work, three active modules and two pseudo-joints are used to assemble the metamorphic structure and investigate the cuspidality of the derived anatomies, as shown in Figure 2a. In Figure 2, R_i is the radius of the pseudo-joint and h is the added length of the rotating part of the upper part of the pseudo-joint. The half-widths of the second and third servo-electric modules are α_i, and the length of the second actuator is indicated by d. All lengths are measured in meters. A specific family of 3R orthogonal metamorphic manipulators with five kinematic parameters {d_2, d_3, d_4, r_2, r_3} and mutually orthogonal active joint axes, i.e., α_2, α_3 = ±90°, is studied. The first passive joint axis is placed perpendicular to the first active joint axis and parallel to the second active module joint axis. In the same way, the second passive joint axis is placed perpendicular to the second actuator axis and parallel to the third active joint axis. Moreover, the actuator limits are ignored; therefore it is considered that (θ_1, θ_2, θ_3) ∈ [−π, π]. The D-H parameters of the metamorphic manipulator shown in Figure 2 are presented in Table 1 as functions of the metamorphic parameters θ_π1, θ_π2. Table 1. The DH-parameters of the metamorphic manipulator (A = R_1 + α_1 + h, B = R_2 + α_2 + h, R_1 = R_2 = 0.045 m, α_1 = α_2 = 0.04225 m, d = 0.2735 m and h = 0.08725 m).
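For readers who want to experiment with the arm geometry, the short sketch below composes standard Denavit–Hartenberg transforms for the 3R chain. Since the numerical entries of Table 1 are only partially legible above, the parameter rows passed to the function are to be treated as user-supplied assumptions, and the mapping between the paper's (d_i, r_i) notation and the textbook (a, d) DH parameters is left to the user.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def tcp_position(joint_angles, dh_rows):
    """TCP position for joint angles (theta_1, theta_2, theta_3);
    dh_rows[k] = (d_k, a_k, alpha_k) for joint k (values assumed by the user)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]
```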
The Proposed Method
The metamorphic structure presented in the previous section is used for the introduction of the proposed method. The general, necessary and sufficient conditions for cuspidal 3R serial manipulators are considered to investigate cuspidal anatomies derived from this metamorphic structure. The study focuses on a special family of orthogonal modular manipulators with four distinct kinematic parameters {d_2, d_3, d_4, r_2} depending on two metamorphic variables θ_π1, θ_π2, while the last joint offset equals zero (r_3 = 0). The algebraic parametric polynomial P of fourth degree in t = tan(θ_3/2) that solves the inverse kinematics of the mechanism depicted in Figure 2 must have one or more real triple roots [22]; equivalently, it must be shown that the parametric polynomial system {P, dP/dt, d²P/dt²} has real roots. The above parametric algebraic polynomial system is considered a zero-dimensional system with three equations in three variables {ρ, z, t}, where ρ = x² + y². Thanks to the parametric solution of the polynomial system, the real suitable boundary algebraic polynomials are obtained that depend only on the D-H parameters shown in Table 1, after removing the imaginary polynomials from the discriminant variety of the polynomial system. The derived bifurcation equations divide the metamorphic space into subspaces with a constant number of cusp points (0, 2, 4) in the workspace.
By the annulment of the determinant of the Jacobian matrix, the singular values could be determined. The investigation of det(J) = 0 produces new separating equations [23] that effectively verify the bifurcation equations, which are derived from the solution of the parametric polynomial system. The new separating equations represent intersecting points between singular curves and straight lines in the joint space plane (θ 2 , θ 3 ). Moreover, the topological feature of node is taken into account in this investigation. Node is a point in the workspace where the inverse kinematics admits two double solutions [21]. So, the node-based classification is done with pure geometric reasoning by looking at the continuous deformation of workspace [23]. Extra separating equations are produced, which allow us to enumerate both cusp and node points for the depicted kinematic topology in Figure 2. The results are displayed in 3D graphs with strict surfaces which depend only on the metamorphic design parameters θ π 1 , θ π 2 , d 4 . Finally, illustrative metamorphic anatomy is selected from each subspace of metamorphic design parameters 3D space and it is displayed so the singular curves and straight lines in joint space are mapped as the external and internal boundaries of metamorphic workspace.
A considerable number of scientific works have been devoted to plotting both the internal and the maximum-reach external boundaries in a half cross-section of the total workspace [24,25]. Regional boundaries are singular points in the workspace at which the inverse kinematics solutions admit real roots with multiplicity higher than one [26]. However, the most suitable methods investigate the discriminant of the polynomial that provides the inverse kinematics solutions in order to represent the singular values in a half cross-section of the workspace [27]. Furthermore, a variety of important topological features such as cusps, nodes, accessibility and voids are displayed in the half cross-section of the metamorphic workspace, which assists the engineer in planning discrete or continuous paths in operational space while avoiding regional singularities.
Besides, sample metamorphic anatomies are displayed in the same figure in a half cross-section of metamorphic workspace to emphasize the importance of design parameters or the combination of them in the transformation of the workspace (cusp, node, regular, dexterity, manipulability).
Furthermore, non-singular posture changing trajectories with several numerical examples in generic and non-generic models of the metamorphic manipulator are demonstrated. The notion of aspects (singularity free-regions in joint space) helps us to join two inverse kinematic solutions in joint space [28].
Classification of Orthogonal Kinematic Non-Isomorphic Configurations of 3R Metamorphic Manipulator according to the Topology of Metamorphic Workspace
Exploiting the homogeneity of mechanism R 1 = R 2 , α 1 = α 2 or A = B the geometric analysis of the orthogonal 3R metamorphic manipulator revealed variable DH-kinematic parameters that are dependent on metamorphic design parameters θ π 1 , θ π 2 as is shown in Table 1. Assuming continues variation of the pseudo-joints in − π 2 , π 2 then the link lengths and the joint offset are continuously varied, while the twist angles remain constant and equal to 90 • .
The position of the tool center point (TCP) of the end-effector with respect to the manipulator base is given by Equation (2), where c_i = cos θ_i, s_i = sin θ_i for i = 1, 2, 3. The metamorphic parameters θ_π1, θ_π2 of the manipulator are embodied in the system of Equation (2). It is known that the inverse kinematics of general 3R manipulators can be solved through a fourth-degree polynomial P in the variable of the last active joint. The Groebner Basis Elimination is used to eliminate the first two active joint variables (θ_1, θ_2) [22] and to produce a solution that stands for the orthogonal 3R metamorphic manipulator. Following the method presented in [22], the polynomial is derived in the following form:

P(t) = a_4 t⁴ + a_3 t³ + a_2 t² + a_1 t + a_0 = 0,   with t = tan(θ_3/2).     (3)

The coefficients of the polynomial P depend on the DH-parameters, including the pseudo-joint variables, and the TCP coordinates (x, y, z).
Necessary and Sufficient Conditions to Investigate Cuspidality
The necessary and sufficient conditions to recognize a cuspidal 3R manipulator have been introduced in [28]. A manipulator is considered cuspidal if and only if there is at least one singular point in its workspace such that the inverse kinematics admits a real triple root. Therefore, it is equivalent to prove that P in Equation (3) admits at least one triple root in t. Based on the method introduced in [22], a fourth-degree polynomial P has one or more triple roots if and only if the polynomial system {P, dP/dt, d²P/dt²} admits real roots. In this way, an algebraic parametric polynomial system S is derived to identify and investigate the cuspidal anatomies in orthogonal 3R metamorphic manipulators:

S = { P(t) = 0, dP/dt = 0, d²P/dt² = 0 }.     (4)

The parametric polynomial system is considered a zero-dimensional system of three equations with three unknowns t, z, ρ. Without loss of generality, it is assumed that y = 0, because a complete rotation around the z-axis of the first active joint leaves the system invariant. In addition, d_2 > 0, d_3 > 0, d_4 > 0, r_2 > 0 and r_3 = 0 are the constraints for the solution of S. Since d_2(θ_π1) = A · sin(θ_π1) is a continuous and differentiable function in [−π/2, π/2], d_2(θ_π1) and d_3(θ_π2) are monotonically increasing functions in [−π/2, π/2], and d_2 > 0 always holds under the above constraints. The same stands for d_3(θ_π2), since A and B are positive quantities. Likewise, r_2(θ_π2) is always positive, i.e., d > B · cos(θ_π2) in [−π/2, π/2].
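A brute-force numerical counterpart of this condition is easy to state: once the coefficients of Equation (3) have been computed for a chosen set of design parameters and a workspace point (for instance by the Groebner-basis elimination mentioned above, which is not reproduced here), a triple root can be detected by checking whether P, dP/dt and d²P/dt² share a real zero. The sketch below assumes the coefficients are already available.

```python
import numpy as np

def has_real_triple_root(coeffs, tol=1e-8):
    """coeffs = [a4, a3, a2, a1, a0] of P(t); True if P admits a (numerically) real triple root."""
    P = np.poly1d(coeffs)
    dP, d2P = P.deriv(), P.deriv(2)
    for t0 in d2P.roots:                 # any triple root of P must also annihilate P''
        if abs(t0.imag) < tol and abs(P(t0.real)) < tol and abs(dP(t0.real)) < tol:
            return True
    return False
```

Sweeping such a test over a grid of workspace points, for each choice of the metamorphic parameters, gives a numerical cross-check of the symbolic classification by cusps developed below.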
The design parameter space {d_2, d_3, d_4, r_2} ∈ R⁴ must be divided into subspaces such that the sign of the polynomials in Equation (4) is constant. The set of variables {t, z, ρ} should be eliminated to derive polynomials that depend only on the DH-parameters.
For this reason, general and efficient algorithms have been developed to solve parametric algebraic polynomial systems [29,30]. In this way, the parametric polynomial system shown in Equation (4) is solved and the discriminant variety is obtained. After removing the imaginary polynomials, the desired algebraic polynomials are derived that depend only on the following four DH-kinematic parameters {d_2, d_3, d_4, r_2}. The system of polynomials in Equation (5) indicates the bifurcating equations. However, only three of the five real polynomials, {h_2, h_3, h_5}, in Equation (5) can be used for the classification according to the number of real roots of Equation (4), i.e., cusp points [31].
Last but not least, it is worth mentioning that Equation (5) has the most general form and that, together with the separating equations, it is valid for any 3R orthogonal metamorphic manipulator with the selected four kinematic parameters {d_2, d_3, d_4, r_2}.
In the following sections, the set of Equation (5) is investigated in order to classify the metamorphic manipulator according to the number of cusps and the number of nodes.
Separating Algebraic Equations through Investigation of det(J) = 0
The algebraic set of Equation (5) is used for the classification of open-chain 3R orthogonal metamorphic manipulators according to the number of cusp points. The analysis presented in this section is based on the method introduced in [31], without adopting the assumption that d_2 = 1, since in the considered metamorphic structure this parameter depends on the first pseudo-joint angle θ_π1. The bifurcating surfaces separate the metamorphic design parameter space into subspaces according to the number of cusps, as shown in Figure 3. The discrete transition of the metamorphic parameters is shown only for the positive angles of the two pseudo-joints θ_π1, θ_π2. The classification is the same for all possible combinations of pseudo-joints, exploiting the symmetry for all the distinct kinematic configurations of the mechanism, i.e., 169 kinematic postures. Using Figure 3, the anatomy is derived by selecting the metamorphic parameters based on the number of cusp points.
The bifurcating equation h_5 in Equation (5) is a biquadratic polynomial in d_4, providing the two roots given in Equations (6) and (7). Equations (6) and (7) apply to manipulators with a singular point in the workspace where two cusp points coincide with a node, such that Equation (4) has a quadruple root [32]. Equation (6) defines the transition between binary and quaternary manipulators. The surfaces C_0a and C_0b do not appear in Figure 3 since they are valid for negative values of the metamorphic parameters θ_π1, θ_π2.
The rest of the bifurcating equations are derived from the investigation of the determinant of the Jacobian, det(J) = 0. Since the last joint offset is assumed to be r_3 = 0, Equation (8) can be written as the product of two factors, providing the following equations. Taking into account the expression of s_3, where ε = ±1 for d_3 ≤ d_4, and substituting into Equation (9), the following equation is obtained. As shown in Figure 4a, the transition from subspace 1 to subspace 2 is characterized by a manipulator for which the singular branch (line) E_1 in the joint space is tangent to the singular curve S_1; accordingly, the bifurcating surface C_1 separates the metamorphic anatomies with four and two cusps. As shown in Figure 4b, the transition from subspace 2 with two cusps to subspace 3 with four cusps is characterized by a metamorphic anatomy for which the singular branch (line) E_1 in the joint space is tangent to the singular curve S_2; accordingly, the bifurcating surface C_2 separates the manipulators with two and four cusps. The final bifurcating surface is d_4 = d_3. As shown in Figure 4c, the transition from subspace 2 to subspace 4 is characterized by a manipulator for which the singular branch (line) E_2 in the joint space is tangent to the singular curve S_1; accordingly, the bifurcating surface C_3 separates the manipulators with two and no cusp points (regular workspace topology). The above surfaces C_i, i = 0, 1, 2, 3, can be verified through Equation (5): the separating equation h_2 in Equation (5) is a second-degree polynomial in d_4 and, similarly, the bifurcating algebraic equation h_3 can be simplified. Since d_2 and d_3 depend on θ_π1 and θ_π2 respectively, the bifurcating surface C_1 separates subspace 1 from subspace 2, with four and two cusps respectively, as shown in Figure 3.
Classification According to the Number of Nodes
Another important topological feature is the node, which is a singular point in the workspace where two singular curves (internal or external) intersect and the polynomial P admits two double roots. In the present section, the distinct kinematic anatomies are classified according to the number of nodes in order to show the deformation of the workspace and hence the non-isomorphism of the kinematic topology.
The method to classify 3R orthogonal fixed manipulators according to the number of nodes was introduced in [23], where analytical algebraic expressions of the surfaces in the parameter space were derived; these expressions are used in this paper to classify the anatomies derived from the considered orthogonal metamorphic structure. The bifurcating surfaces subdivide the metamorphic design space {θ π 1 , θ π 2 , d 4 } into eight distinct non-isomorphic subspaces with a constant number of cusps and nodes, shown in Figure 5. The numbers of cusp and node points are indicated in parentheses for each subspace, respectively. The production of these subspaces is based on the following analysis.

Subspace 1 in Figure 3 represents metamorphic anatomies with four cusp points and is divided into three distinct subspaces with different numbers of nodes. The transition between subspace 1.1 (4, 2) and subspace 1.2 (4, 0) is given by the following boundary surface. Figure 6a shows the singularity curves of a representative metamorphic anatomy from subspace 1.1 (4, 2), which includes generic anatomies with four cusps, two nodes, a void, and two subregions with four and one with two inverse kinematic solutions (IKS), respectively, shown in Figure 5. Subspace 1.2 (4, 0) in Figure 6b includes metamorphic anatomies with four cusps, no nodes, one subregion with four IKS and another one with two IKS, respectively, shown in Figure 5. The surface that divides subspace 1.2 (4, 0) and subspace 1.3 (4, 2) is the following. Figure 6c contains metamorphic anatomies with four cusps, two nodes, one region with two and three regions with four IKS, respectively, and five c-sheets, shown in Figure 5.

Moreover, subspace 2 in Figure 3 includes metamorphic anatomies with two cusps and it can be subdivided into two neighboring subspaces. The bifurcating surface is formulated as follows.
Figure 6. Selective arm anatomies of the metamorphic structure and singularities, displayed in joint space and in a half cross-section of the workspace, with the metamorphic parameters: (a) θ π 1 = ±75°, θ π 2 = ±90°, d 4 = 0.07 m; (b) θ π 1 = ±45°, θ π 2 = ±60°, d 4 = 0.1 m; (c) θ π 1 = ±15°, θ π 2 = ±30°, d 4 = 0.11 m.
The transition between subspace 1.3 (4, 2) and subspace 2.1 (2, 1) is expressed by Equation (12). In Figure 4a, subspace 2.1 (2, 1) exhibits non-generic metamorphic anatomies with two cusps, one node, two subregions with four IKS and one subregion with two IKS, respectively, and five aspects. On the other hand, the transition from subspace 2.1 (2, 1) to subspace 2.2 (2, 3) is defined through the boundary surface in Equation (17). In Figure 7a, subspace 2.2 (2, 3) includes metamorphic anatomies with two cusps, three nodes, five c-sheets, and two subregions with four and two IKS, respectively; here the internal boundaries also intersect the external ones. Furthermore, subspace 3 (4, 4) in Figure 4b is a region with four cusps, four nodes, six c-sheets, and three subregions with four and two subregions with two IKS, respectively, shown in Figure 5.
Finally, the regular subspace 4 in Figure 3 is classified into two spaces through the surface in Equation (17). Subspace 4.1 (0, 0) in Figure 7b includes regular metamorphic anatomies with no nodes, four c-sheets, and one region with four IKS and one with two IKS, respectively. Subspace 4.2 (0, 2) in Figure 4c includes regular non-generic metamorphic anatomies with two nodes, two subregions with two IKS and one subregion with four IKS, respectively, and four aspects, shown in Figure 5.
Moreover, three more arm anatomies with at least one zero DH-parameter are exhibited in Figure 7c,d, from subspaces 1.3, 4.1 and 1.3, respectively. The anatomy shown in Figure 7c is regular, with one region of four IKS and two aspects. Similarly, the manipulator anatomy in Figure 7d has one region of four IKS but four aspects. Figure 8a illustrates the transformation of the metamorphic workspace under a variation of θ π 1 with θ π 2 = ±π/2 and d 4 = 0.13 m; the corresponding topologies are discerned with different colors. The number of cusp points remains constant and equal to four. However, the internal singular segments tend to deform along both axes (ρ, z), the location of the cusp points changes, and the ratio between the internal subregion (4 IKS) and the external one (2 IKS) changes as well.
The two DH-parameters d 3 , r 2 depend on the variation of the second passive joint angle θ π 2 , as shown in Table 1. As θ π 2 varies over [−π/2, π/2], d 3 increases, while r 2 decreases over [0, π/2] and increases over [−π/2, 0]. Consequently, the continuous change of the second pseudo-joint, with θ π 1 = ±π/2 and d 4 = 0.13 m, changes the topology of the workspace in a continuous manner, as plotted in Figure 8b. The ratio of internal and external regions varies, the number of cusp and node points changes, and the maximum reach of the end-effector of the mechanism increases. Moreover, it is also feasible to switch from generic to non-generic manipulators. Finally, the perpendicular distance from the first joint axis decreases and, as a result, the total workspace is placed closer to the local coordinate system of the base (see Figure 8b, horizontal axis ρ). Note that the topological transition from cuspidal to regular anatomy is feasible only through activation of the angular rotation steps of θ π 2 .
In conclusion, it is worth mentioning that metamorphosis provides various anatomies from a single structure. Kinematic singularities in the workspace of cuspidal manipulators, especially the internal boundaries, cause serious drawbacks in planning smooth and continuous trajectories and in control. Metamorphic manipulators overcome this limitation, since they provide a wide spectrum of arm anatomies and hence a variety of regular or cuspidal workspace topologies with varied shape or volume. Therefore, engineers can easily select the anatomy required by a given task, based on the classification and analysis introduced in this work. Then, the position of the trajectory or of the points for moving objects can be optimized based on [14,16]. In the next section, examples of these trajectories are presented.
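As a simple illustration of this selection step, the (cusps, nodes) signature of each subspace reported in this section can be stored in a lookup table and queried against the task requirements. The sketch below only encodes the counts listed in the text; it is not a substitute for evaluating the bifurcating surfaces themselves.

```python
# (cusps, nodes) signatures of the non-isomorphic subspaces listed in this section.
SUBSPACE_SIGNATURE = {
    "1.1": (4, 2), "1.2": (4, 0), "1.3": (4, 2),
    "2.1": (2, 1), "2.2": (2, 3),
    "3":   (4, 4),
    "4.1": (0, 0), "4.2": (0, 2),
}

def candidate_subspaces(need_cuspidal: bool):
    """Subspaces whose anatomies have (or lack) cusp points; cuspidal anatomies
    admit non-singular posture-changing trajectories."""
    return sorted(name for name, (cusps, _) in SUBSPACE_SIGNATURE.items()
                  if (cusps > 0) == need_cuspidal)

print(candidate_subspaces(True))    # anatomies suitable for posture-changing closed paths
print(candidate_subspaces(False))   # regular anatomies for simple tasks
```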
Planning Non-Singular Posture Changing Trajectories
In this section, three distinct metamorphic anatomies that perform non-singular, continuous and smooth trajectories are presented. The active joint angles are illustrated as a function of the discrete steps, and the determinant of the Jacobian matrix is also plotted to verify the continuity and smoothness of the executed trajectories. In manufacturing, and particularly in precision engineering, it is important to achieve smooth and very precise trajectories, so the manufacturing engineer can select the proper metamorphic anatomy and place the task so as to avoid activating the manipulator brakes close to singular points. On the other hand, for a different task such as point-to-point motion, the engineer can select a proper metamorphic anatomy using the analysis and results presented in the previous section.
Generic Mechanism
The design of a representative anatomy with four cusps and no nodes is based on Figures 3 and 5. So, a metamorphic anatomy is chosen from subspace 2 (4, 0) to perform a non-singular mode-changing trajectory. The selected anatomy has the metamorphic parameters θ π 1 = ±45°, θ π 2 = ±60°, d 4 = 0.1 m. The desired geometric motion is a circle with center K(x, y, z) = (0.45, 0, 0.1) and radius R = 0.045 m. The circular path lies in the vertical plane passing through the Z-axis of the first joint in order to show the singularity-free motion. The following function is used to define the path, and the trajectory is divided into 100 distinct steps, where λ = ±1 for left-hand or right-hand direction and ρ = √(x² + y²). An inner point in the 4-IKS region is considered as the starting point of the circle, A(x, y, z) = (0.45, 0, 0.055), and the IKP is solved. The respective set of inverse kinematic solutions is shown in Table 2. Left-hand rotation is selected, i.e., λ = 1, and the TCP of the manipulator performs the circular motion in the half cross-section of the metamorphic workspace (see Figure 9b), while the singularity-free path is traced in the joint space (θ 2 , θ 3 ), joining the third inverse kinematic solution to the first one in a continuous manner. Besides, the allocation of the sets of inverse geometric solutions in couples for each aspect is shown in Figure 9a. The joint path is smooth and does not cross any singular curve S 1,2 in aspect A 1 .
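A minimal sketch, not taken from the paper, of discretizing this circular path into 100 steps in the (ρ, z) half cross-section is given below. The phase is chosen so that the first sample coincides with the selected starting point A; the parametrization actually used in the paper (stated explicitly for the non-generic case later) may differ by a phase convention.

```python
import numpy as np

K_rho, K_z = 0.45, 0.10      # circle centre in the (rho, z) half cross-section [m]
R = 0.045                    # radius [m]
A_rho, A_z = 0.45, 0.055     # starting point A on the circle
lam = 1                      # lambda = +1 (left-hand) or -1 (right-hand) rotation
n_steps = 100

phi0 = np.arctan2(A_z - K_z, A_rho - K_rho)            # start the path at A
phi = phi0 + lam * np.linspace(0.0, 2.0 * np.pi, n_steps)
waypoints = np.column_stack([K_rho + R * np.cos(phi),  # rho coordinates
                             K_z + R * np.sin(phi)])   # z coordinates

print(waypoints[0], waypoints[-1])                     # closed path: last point returns to A
```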
Furthermore, the joint angles are plotted in Figure 10 as a function of the discrete steps for the circular path generation. The non-singular posture-changing trajectory is feasible without encountering a singularity (outstretched or folded arm), which can be useful for tasks where collision avoidance is needed. Finally, the determinant of the Jacobian matrix is plotted in Figure 11 as a function of the executed steps for a complete circle. The determinant is smooth and does not change sign or become zero. Moreover, the determinant's morphology at the beginning of the path is different from that at the end of the closed path, showing the change of posture. Consequently, the determinant's behavior verifies the singularity avoidance during the transition from one inverse kinematic solution to another.
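The singularity-avoidance check described above (det(J) never vanishing or changing sign along the executed joint path) can be automated. A sketch follows, assuming a user-supplied jacobian(theta) function for the selected anatomy; the placeholder Jacobian used in the toy test is not the manipulator of the paper, whose Jacobian depends on the DH parameters of Table 1.

```python
import numpy as np

def path_is_singularity_free(joint_path, jacobian, tol=1e-9):
    """joint_path: (n_steps, 3) joint angles along the executed trajectory.
    jacobian(theta): 3x3 positioning Jacobian of the selected anatomy.
    True if det(J) never (numerically) vanishes and never changes sign."""
    dets = np.array([np.linalg.det(jacobian(th)) for th in joint_path])
    if np.any(np.abs(dets) < tol):
        return False
    return bool(np.all(np.sign(dets) == np.sign(dets[0])))

# Toy check with a placeholder Jacobian (NOT the manipulator of the paper):
toy_jacobian = lambda th: np.diag([1.0, 1.0 + 0.1 * np.sin(th[1]), 1.0])
toy_path = np.stack([np.linspace(0.0, 1.0, 100)] * 3, axis=1)
print(path_is_singularity_free(toy_path, toy_jacobian))   # -> True
```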
Figure 11. The determinant of the Jacobian matrix as a function of discrete steps for a perfect circle.
Non-Generic Anatomy
Non-generic 3R arm anatomies are the open-chain manipulator mechanisms that show two extra critical branches (straight lines) in the configuration space (θ 2 , θ 3 ). In this section, two different trajectories are planned for two separate non-generic metamorphic anatomies.
Planning Closed Smooth and Continuous Path
A non-generic anatomy is chosen from the 3D graphs of subspace 2.1 (2, 1) in Figure 5, with two cusps and one node; its singularities are shown in Figure 1. Subsequently, the locations of the cusp and node points are taken into account to plan a closed path free of singularities. Besides, the appearance of the cusp points in the half cross-section of the metamorphic anatomy workspace helps the engineer to locate the task to be performed with non-singular posture changing.
The parameters used are θ π 1 = ±90°, θ π 2 = ±90°, d 4 = 0.23 m. A circular path is chosen to be performed in the metamorphic half cross-section of the workspace (ρ, Z). According to the locations of the cusp and node points in the workspace topology, the center and radius of the circle are defined as K(x, y, z) = (0.48, 0, 0.063) and R = 0.05 m. Then, an inner point in the region with the maximum accessibility is selected as the initial point of the path, A(x, y, z) = (0.48, 0, 0.013), and the inverse kinematics is solved for this point.
The inverse kinematic solutions of Table 3 (the four inverse kinematic solutions at the starting/ending point of the trajectory) are distributed in various aspects in configuration space. Only one couple of solutions exists in a single aspect, namely in aspect A 1 , while the rest of the IK solutions are placed in different c-sheets, and hence a joint path can be executed joining two inverse geometric solutions. The motion of the mechanism's end-effector is described by the desired geometry ρ = R cos(λθ − 3π/2) + K(1, 1) and Z = R sin(λθ − 3π/2) + K(1, 3), where λ = ±1 for left-hand or right-hand direction and ρ = √(x² + y²).
Right-hand rotation is selected (λ = −1) to perform the trajectory in the chosen aspect A 1 . So, taking advantage of the notion of aspects, the circle is performed in the characteristic topology of the metamorphic anatomy workspace, and the corresponding joint path is illustrated in the joint space, as shown in Figure 12. Moreover, it is possible to perform non-singular trajectories even if the end-effector crosses internal boundaries in a radial section of the metamorphic anatomy workspace. The corresponding variation of the active joints is plotted as a function of the discrete steps for a complete circular motion. The variation is smooth and the manipulator begins with one posture and ends in a different one, as shown in Figure 13.
It is also worth mentioning that this allocation of solutions in the joint space does not seem to occur in generic manipulators, as seen in Section 4.2.
Finally, the determinant of the geometric Jacobian matrix is displayed in Figure 14. The determinant's behavior is continuous and smooth. It presents a global minimum at the 59th step of the trajectory, the transition point at which the metamorphic anatomy switches inverse kinematic solution. The metamorphic manipulator thus changes inverse solution during the pre-programmed trajectory and ends the path with a different posture. This is very important for high-precision tasks such as arc welding, cutting or painting of complicated products.
Figure 14. The behavior of determinant of Jacobian for non-generic metamorphic anatomy.
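The transition step reported above, where |det(J)| attains its global minimum and the anatomy switches inverse kinematic solution, can be located programmatically once the determinant has been evaluated along the path, e.g. with the check sketched earlier. The determinant profile used below is synthetic and only illustrates the idea.

```python
import numpy as np

def posture_switch_step(det_values):
    """Step index at which |det(J)| is smallest, i.e. where the manipulator
    passes closest to a singularity and switches inverse kinematic solution."""
    return int(np.argmin(np.abs(np.asarray(det_values))))

# Synthetic determinant profile with a dip at step 59 (illustration only):
steps = np.arange(100)
demo = 0.05 + 0.04 * np.cos(2 * np.pi * steps / 100)
demo[59] = 1e-3
print(posture_switch_step(demo))   # -> 59
```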
Rectilinear Trajectory
The 3R orthogonal arm can perform any arbitrary path in a radial section of a metamorphic anatomy workspace. So, a rectilinear trajectory is performed, which is important for a wide range of industrial applications such as assembly, grinding, welding and object placement.
The anatomy is selected using the 3D graphs in Figure 15, and the kinematic singularities are displayed in the joint space and in the workspace (see Figure 15), respectively. The anatomy belongs to subspace 3 (4, 4) with the design parameters θ π 1 = ±15°, θ π 2 = ±90°, d 4 = 0.6 m. The topological knowledge of the workspace helps the engineer to plan a linear, continuous and smooth path in the metamorphic workspace. For this reason, the region with two IKS is selected for performing straight lines.
The geometry of the path is defined using linear interpolation and the end-effector is driven with the following geometric motion such that z = −0.6ρ + 0.88.
The trajectory is divided into 100 steps and ρ takes values in [0.3, 0.8]. Continuing the study of rectilinear motion, the joint variation is plotted in Figure 16. The active joints have almost linear behavior during the task execution. Finally, the determinant of the geometric Jacobian is plotted in Figure 17 to show the continuous non-singular configuration alteration of the selected anatomy as the end-effector performs the selected rectilinear path in the operational space. In the first 60 steps, the determinant behaves as an increasing function and then decreases until the end of the task.
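A minimal sketch of the rectilinear path just described: ρ is swept over [0.3, 0.8] in 100 steps and z follows the stated line z = −0.6ρ + 0.88. Solving the inverse kinematics at each waypoint is left to a separate routine.

```python
import numpy as np

n_steps = 100
rho = np.linspace(0.3, 0.8, n_steps)       # radial coordinate sweep
z = -0.6 * rho + 0.88                      # stated rectilinear task geometry
waypoints = np.column_stack([rho, z])      # 100 points in the (rho, z) half cross-section

print(waypoints[0], waypoints[-1])         # [0.3, 0.70] -> [0.8, 0.40]
```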
Conclusions
In this paper, the cuspidality investigation of a metamorphic manipulator is introduced. It embodies the fundamental principles of metamorphosis, and the corresponding method allows engineers to select anatomies from a predefined structure according to topological features such as cusps and nodes.
The classification of all non-isomorphic regular or cuspidal metamorphic anatomies revealed novel results with respect to the topological features of the metamorphic workspace (cusps, nodes, shape, volume, accessibility, kinematic dexterity, regularity). This paper also revealed interesting results with respect to kinematic positioning singularities in 3R orthogonal metamorphic manipulators, which can be rapidly reconfigured to execute the desired tasks.
The current study shows that the mechanism can be rapidly reconfigured in its arm geometry in order to perform smooth and continuous arbitrary trajectories. Engineers are able to select any non-isomorphic arm geometry from the divided design parameter space {θ π 1 , θ π 2 , d 4 }, thanks to the closed-form solution for the determination of the bifurcating surfaces presented in this paper. In this way, regular anatomies are always available for simple tasks, while cuspidal anatomies can be chosen especially for closed paths, i.e., non-singular posture-changing trajectories.
The proposed approach enhances flexibility, extensibility, adaptability and versatility, so that manufacturing demands can be easily met for a wide variety of reliable, high-quality products, beyond the limitations of well-known, regular fixed-anatomy robots, which achieve high task performance only for the tasks for which they were designed.
As for future work, after the selection of the desired subspace based on the topological features, the position of the task can be further optimized based on the methods presented in [14,16] for obstacle avoidance as well as for increasing the kinematic dexterity of the metamorphic manipulator. | 2020-04-09T09:12:02.209Z | 2020-04-02T00:00:00.000 | {
"year": 2020,
"sha1": "46089a77f2e1dfde512955fe8029be0c7c007e02",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-6581/9/2/20/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e78293e5cca73f14df9a0a7c7856eb4ae3f821e4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
1863125 | pes2o/s2orc | v3-fos-license | On the Capacity of Cloud Radio Access Networks with Oblivious Relaying
We study the transmission over a network in which users send information to a remote destination through relay nodes that are connected to the destination via finite-capacity error-free links, i.e., a cloud radio access network. The relays are constrained to operate without knowledge of the users' codebooks, i.e., they perform oblivious processing. The destination, or central processor, however, is informed about the users' codebooks. We establish a single-letter characterization of the capacity region of this model for a class of discrete memoryless channels in which the outputs at the relay nodes are independent given the users' inputs. We show that both relaying à la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, and "noisy network coding", are optimal. The proof of the converse part establishes, and utilizes, connections with the Chief Executive Officer (CEO) source coding problem under logarithmic loss distortion measure. Extensions to general discrete memoryless channels are also investigated. In this case, we establish inner and outer bounds on the capacity region. For memoryless Gaussian channels within the studied class of channels, we characterize the capacity region when the users are constrained to time-share among Gaussian codebooks. We also discuss the suboptimality of separate decompression-decoding and the role of time-sharing. Furthermore, we study the related distributed information bottleneck problem and characterize optimal tradeoffs between rates (i.e., complexity) and information (i.e., accuracy) in the vector Gaussian case.
I. INTRODUCTION
Cloud radio access networks (CRAN) provide a new architecture for next-generation wireless cellular systems in which base stations (BSs) are connected to a cloud-computing central processor (CP) via error-free finite-rate fronthaul links. This architecture is generally seen as an efficient means to increase spectral efficiency in cellular networks by enabling joint processing of the signals received by multiple BSs at the CP, thus alleviating the effect of interference. Other advantages include low cost deployment and flexible network utilization [1].
In a CRAN, each BS acts essentially as a relay node, and so it can in principle implement any relaying strategy, e.g., decode-and-forward [2, Theorem 1], compress-and-forward [2, Theorem 6] or combinations of them. Relaying strategies in CRANs can be divided roughly into two classes: i) strategies that require the relay nodes to know the users' codebooks (i.e., modulation, coding), such as decode-and-forward, compute-and-forward [3], [4] or variants of it, and ii) strategies in which the relay nodes operate without knowledge of the users' codebooks, often referred to as oblivious relay processing (or nomadic transmitter) [5]-[7]. This second class is composed essentially of strategies in which the relays implement forms of compress-and-forward [2], such as successive
Wyner-Ziv compression [8]-[10] and noisy-network coding [11]. Schemes combining the two approaches have been shown to possibly outperform the best of the two in [12], especially in scenarios in which there are more users than relay nodes.
In spirit, however, a CRAN architecture is usually envisioned as one in which BSs operate as simple radio units (RUs) that are constrained to implement only the radio functionalities, such as analog-to-digital conversion and filtering, while the baseband functionalities are migrated to the CP. For this reason, while relaying schemes that involve partial or full decoding of the users' codewords can sometimes offer rate gains, they do not seem to be suitable in practice. In fact, such schemes assume that all or a subset of the relay nodes are fully aware (at all times) of the codebooks and encoding used by the users, and the signaling required to convey such information is generally prohibitive, particularly as networks become large. Instead, schemes in which relays perform oblivious processing are preferred. Oblivious processing was first introduced in [5]. The basic idea is that of using randomized encoding to model lack of information about codebooks. For related works, the reader may refer to [6], [13]-[15]. In particular, [15] extends the original definition of oblivious processing of [5], which rules out time-sharing, to include settings in which encoders are allowed to switch among different codebooks and oblivious nodes are unaware of the codebooks but are given, or can acquire, time- or frequency-schedule information, which is generally less difficult to obtain. The framework is termed therein "oblivious processing with enabled time-sharing".
In this work, we consider transmission over a CRAN in which the relay nodes are constrained to operate without knowledge of the users' codebooks, i.e., are oblivious, and only know time- or frequency-sharing information. Focusing on a class of discrete memoryless channels in which the relay outputs are independent conditionally on the users' inputs, we establish a single-letter characterization of the capacity region of this class of channels. We show that relaying à la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, or noisy network coding, is optimal. For the proof of the converse part, we utilize useful connections with the Chief Executive Officer (CEO) source coding problem under logarithmic loss distortion measure [16]. For memoryless Gaussian channels within this class, we characterize the capacity under Gaussian channel inputs. Extensions to general discrete memoryless channels are also investigated. In this case, we establish inner and outer bounds on the capacity region.
Notation: Throughout, we use the following notation. Lower case letters denote scalars, e.g., x; upper case letters denote random variables, e.g., X; boldface lower case letters denote vectors, e.g., x; and boldface upper case letters denote matrices, e.g., X. Calligraphic letters denote sets, e.g., X; and the cardinality of a set X is denoted by |X|. For a set of integers K, the notation X K denotes the set of random variables {X k } with indices k in the set K, i.e., X K = {X k } k∈K .

II. SYSTEM MODEL

Consider the discrete memoryless CRAN model shown in Figure 1. In this model, a set of users communicate with a central processor (CP) through a set of relay nodes that are connected to the CP via error-free finite-rate fronthaul links. Let L = {1, . . ., L} denote the set of users, K = {1, . . ., K} denote the set of relays, and let C k be the capacity of the link connecting relay node k to the CP, k ∈ K. Similar to [6], the relay nodes are constrained to operate without knowledge of the users' codebooks and only know time-sharing information, i.e., oblivious relay processing with enabled time-sharing. The obliviousness of the relay nodes to the actual codebooks is modeled by the transmitters picking their codebooks at random and the relays not being aware of the actual codebook indices. Specifically, the codeword X n (F l , M l , Q n ) transmitted by encoder l, l ∈ L, depends not only on the message M l ∈ [1, 2 nR l ], but also on the index F l , which runs over all possible codebooks of the given rate R l , i.e., F l ∈ [1, |X l | n2 nR l ], and on the time-sharing sequence Q n . Formally, the model is defined as follows. 1) Codebook indices: The index F l is picked at random and shared with the CP, but not with the relays. 2) Time-sharing sequence: All terminals, including the relay nodes, are aware of a time-sharing sequence Q n . 3) Encoding functions: The encoding function at user l, l ∈ L, is defined by a pair (p X l , φ l ), where p X l is a single-letter pmf and φ l is a mapping that maps the given codebook index F l , message m l and time-sharing sequence q n to a channel input x n l (F l , m l |q n ). 4) Relaying functions: Relay k, k ∈ K, maps its received sequence Y n k and the time-sharing sequence q n to an index J k = φ r k (Y n k , q n ), which it then sends to the CP over the error-free link of capacity C k . 5) Decoding function: Upon receiving the indices J K = (J 1 , . . ., J K ), the CP estimates the users' messages by applying a decoder mapping g.
Definition 2. A rate tuple (R 1 , . . ., R L ) is said to be achievable if, for any ε > 0, there exists a sequence of codes such that, for a sufficiently long blocklength n, each user's message can be decoded by the CP at rate at least R l with vanishing probability of error. For given C K , the capacity region R(C K ) is the closure of all achievable rate tuples (R 1 , . . ., R L ).
Due to space limitations, some of the results of this paper are only outlined or given without proofs. The detailed proofs can be found in [17].
A. Class of Discrete Memoryless Channels
In this work, we establish the capacity region of the following class of discrete memoryless CRAN channels with oblivious relay processing and enabled time-sharing. In this class, the channel outputs at the relay nodes are independent conditionally on the users' inputs; equivalently, for every k ∈ K, the Markov chain Y k −− X L −− Y K\k holds.
B. Oblivious Relaying with Enabled Time-Sharing
Similar to [6], the above constraint of oblivious relay processing with enabled time-sharing means that, in the absence of information regarding the indices F L and the messages M L , a codeword x n l (F l , m l |q n ) taken from an (n, R l ) codebook has independent but non-identically distributed entries. Lemma 1. Without knowledge of the selected codebook indices (F 1 , . . ., F L ), the distribution of the transmitted codewords conditioned on the time-sharing sequence is given by the product of the single-letter pmfs p X l evaluated at the corresponding time-sharing symbols.
A. Capacity Region of Studied Class of CRAN Channels
The main result of this paper is a single-letter characterization of the capacity region of the class of channels with oblivious relaying and enabled time-sharing that satisfy (5). The following theorem states the result.
Theorem 1. For the class of discrete memoryless channels given by (5) with oblivious relay processing and enabled time-sharing, a rate tuple (R 1 , . . ., R L ) is achievable if and only if, for all T ⊆ L and for all S ⊆ K, we have Σ t∈T R t ≤ Σ k∈S [C k − I(Y k ; U k |X L , Q)] + I(X T ; U S c |X T c , Q), for some joint measure of the form p(q) Π l∈L p(x l |q) Π k∈K p(u k |y k , q). Proof: The proof of the converse part of Theorem 1 is relegated to Section V. The proof of the direct part can be obtained by applying the noisy network coding (NNC) scheme of [11, Theorem 1]. Alternatively, the rate region of Theorem 1 can also be achieved by a scheme that generalizes that of [7, Theorem 1], which is established in the case of a single transmit node, to the case of multiple users and accommodates time-sharing. In contrast to the NNC scheme, the latter scheme is based on compress-and-forward à la Cover-El Gamal with joint decoding and decompression at the CP (CoF-JD).
Remark 1. A key element of the proof of Theorem 1 is the connection with the chief executive officer (CEO) problem. For the case of m encoders, m ≥ 3, while a characterization of the optimal rate-distortion region of this problem for general distortion measures has eluded information theorists, a characterization of the optimal region in the case of the logarithmic loss distortion measure has been provided recently in [16].
Remark 2. The sum-rate of Theorem 1 can also be achieved by a scheme in which the CP decodes explicitly the compression indices first, and then decodes the users' transmitted messages, i.e., decompression and decoding are not performed jointly. A similar observation is found in [18, Theorem 2].
B. Memoryless Gaussian Model
In this section, we consider a memoryless Gaussian MIMO model of the studied CRAN with oblivious relay processing and enabled time-sharing. The channel output at relay node k, equipped with M k receive antennas, is given by Y k = Σ l∈L H k,l X l + N k , where H k,l is the channel matrix connecting user l to relay node k, and N k ∈ C M k is the noise vector at relay node k, assumed to be Gaussian with N k ∼ CN(0, Σ k ). The transmission of user l is subject to the power constraint Tr(K l ) ≤ P l , where K l = E[X l X H l ] is the covariance matrix of X l . The noises at the relay nodes are assumed to be independent, and so the studied Gaussian model satisfies the Markov chain (5).
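A small numerical sketch of this Gaussian model follows: each relay observes a noisy linear combination of the users' signals, which it would then compress and forward to the CP. The dimensions, channel matrices and covariances below are arbitrary placeholders, and real-valued signals are used instead of complex ones for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
L_users, K_relays = 2, 3        # numbers of users and relays (placeholders)
N_tx, M_rx = 2, 2               # antennas per user / per relay (placeholders)
n = 1000                        # number of channel uses simulated

H = [[rng.standard_normal((M_rx, N_tx)) for _ in range(L_users)]   # H_{k,l}
     for _ in range(K_relays)]
X = [rng.standard_normal((N_tx, n)) for _ in range(L_users)]       # Gaussian inputs
Sigma = [np.eye(M_rx) for _ in range(K_relays)]                    # noise covariances

Y = []
for k in range(K_relays):
    noise = np.linalg.cholesky(Sigma[k]) @ rng.standard_normal((M_rx, n))
    Y.append(sum(H[k][l] @ X[l] for l in range(L_users)) + noise)  # Y_k = sum_l H_kl X_l + N_k

print(Y[0].shape)   # observation that relay 1 would compress and forward to the CP
```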
The result of Theorem 1 can be extended to continuous channels using standard techniques, and so it characterizes the capacity region of the model (7). The computation of this region, however, is not easy, as it requires finding the optimal choices of the involved auxiliary random variables U 1 , . . ., U K . The following theorem characterizes more explicitly the capacity region when the users are constrained to employ Gaussian signaling, i.e., for Q = q, X l,q ∼ CN(0, K l,q ) for all l ∈ L. Theorem 2. If the input vectors are Gaussian, the capacity region of the memoryless Gaussian MIMO model (7) is given by the set of all rate tuples (R 1 , . . ., R L ) satisfying, for all T ⊆ L and all S ⊆ K, the corresponding constraint on Σ t∈T R t for some 0 ⪯ B k ⪯ Σ k −1 , where H k,T denotes the channel matrix connecting the input X T to the output Y k , formed by concatenating the matrices H k,l , l ∈ T , horizontally. Remark 3. Theorem 2 extends the result of [5, Theorem 5] to the case of L users and enabled time-sharing. In addition to showing that, under the constraint of Gaussian input signaling, the quantization codewords can be chosen optimally to be Gaussian, the result of Theorem 2 also means that time-sharing is not needed in the memoryless Gaussian case. Recall that, as shown through an example in [5], if the relays are aware of the users' codebooks, restricting to Gaussian input signaling can be a severe constraint and is generally suboptimal.
Remark 4. In [18], the authors study the questions of optimal fronthaul compression and decoding strategies for uplink CRAN networks without oblivious processing constraints. It is shown that NNC with Gaussian inputs and Gaussian quantization achieves to within a constant gap of the capacity region of the Gaussian MIMO uplink CRAN. In this paper, we show that if only oblivious relay processing is allowed, NNC and CoF-JD are in fact optimal from a capacity viewpoint.
IV. GENERAL DISCRETE MEMORYLESS MODEL
In this section, we focus on general discrete memoryless CRAN channels with oblivious relay processing and time-sharing, i.e., the channel outputs at the relays are arbitrarily correlated and the Markov chain (5) does not necessarily hold. We establish bounds on the capacity region of the model. The results extend those of [5], which only consider a single transmitter and no time-sharing, to the case of multiple transmitters and allowed time-sharing.
The following theorem provides an inner bound on the capacity region of the general DM CRAN model with oblivious relay processing and time sharing.Theorem 3.For general DM CRAN channels with oblivious relay processing and enabled time-sharing, the set of rates (R 1 , . . ., R L ) such that for all T ⊆ L and all S ⊆ K, for some joint measure p(q) We now provide an outer bound on the capacity region of the general DM CRAN model with oblivious relay processing and time-sharing.The following theorem states the result.Theorem 4. For general DM CRAN channels with oblivious relay processing and enabled time-sharing, if a rate-tuple (R 1 , . . ., R L ) is achievable then for all T ⊆ L and all S ⊆ K, for some for some random variable W and some deterministic functions {f k }, k ∈ K.
Remark 5. The inner bound of Theorem 3 and the outer bound of Theorem 4 do not coincide in general. This is due to the fact that in Theorem 3, U 1 , . . ., U K satisfy a Markov chain constraint that does not necessarily hold for the auxiliary random variables of the outer bound. Remark 6. As we already mentioned, the class (5) of DM CRAN channels connects with the CEO problem under the logarithmic loss distortion measure. The rate-distortion region of this problem is characterized in the excellent contribution [16] for an arbitrary number of (source) encoders (see Theorem 3 therein). For general DM CRAN channels, i.e., without the Markov chain (5), the model connects with the distributed source coding problem under the logarithmic loss distortion measure. While a solution of the latter problem for the case of two encoders has been found in [16, Theorem 6], generalizing the result to an arbitrary number of encoders poses a significant challenge. In fact, as also mentioned in [16], the Berger-Tung inner bound is known to be generally suboptimal (e.g., see the Korner-Marton lossless modulo-sum problem [19]). Characterizing the capacity region of the general DM CRAN model under the constraint of oblivious relay processing and enabled time-sharing poses a similar challenge, except perhaps for the case of two relay nodes, results on which will be reported elsewhere.
V. PROOF OF CONVERSE PART OF THEOREM 1
Assume the rate tuple (R 1 , . . ., R L ) is achievable. Let T be a subset of L, S be a non-empty subset of K, J k = φ r k (Y n k , q n ) be the message sent by relay k ∈ K, and let Q = q n be the time-sharing variable. For simplicity, we define Q i = (X i−1 L , X n L,i+1 , Q). From Fano's inequality we have, with ε n → 0 as n → ∞ (for vanishing probability of error), the corresponding bound for all T ⊆ L. We show the following inequality, used below in the proof.
Inequality (16) can be shown as follows.
where (17) follows since the messages m T are independent; (19) follows since m T is independent of Q and F T c ; (20) follows from (14); (24) follows since m T is independent of F L ; (26) follows from the data processing inequality; (28) follows since X n T c , F T c are independent of X n T and since conditioning reduces entropy; and (29) follows due to the corresponding Markov chain. Then, from (29) we obtain (16), where (33) is due to Lemma 1. Continuing from (29), we have the next chain, where (36) follows due to Lemma 1 and (37) follows since conditioning reduces entropy.
On the other hand, we have the following equality, where (39) follows due to a Markov chain that holds since the channel is memoryless.
Then, from the relay side we have the following chain, where (47) follows since J S is a function of Y n S ; (49) follows from (16); (51) follows since conditioning reduces entropy; and (52) follows from (16) and (42).
In general, Q i is not independent of X L,i , Y S,i ; however, due to Lemma 1, conditioned on Q i the required Markov chain holds. Finally, a standard time-sharing argument completes the proof of Theorem 1.
VI. CONCLUDING REMARKS

In this paper, we study transmission over a cloud radio access network under the framework of oblivious processing at the relay nodes, i.e., the relays are not allowed to know, or cannot acquire, the users' codebooks. Our results shed light on (and sometimes determine exactly) the operations that the relay nodes should optimally perform in this case. In particular, perhaps unsurprisingly, it is shown that compress-and-forward, or variants of it, generally performs well in this case, and is optimal when the outputs at the relay nodes are conditionally independent given the users' inputs. Furthermore, in addition to its relevance from a practical viewpoint, restricting the relays not to know or utilize the users' codebooks causes only a bounded rate loss in comparison with the non-oblivious setting (e.g., compress-and-forward and noisy network coding perform to within a constant gap from the cut-set bound in the Gaussian case).
"year": 2017,
"sha1": "87f5d355ffd487feeaf366a627fea5f5f20b0d60",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/3246996/files/IT-65-7-2019-azcs.pdf",
"oa_status": "GREEN",
"pdf_src": "ArXiv",
"pdf_hash": "6f4fad0b4a0bacaa43397b6df3b7a4b1b15b8f60",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
219420799 | pes2o/s2orc | v3-fos-license | Assessment of the Need for Human Resources in the Non-Humanitarian Sphere in the Economy of the Khabarovsk Territory on the Basis of Artificial Neural Networks
The article studies the use of neural networks for the analysis and forecasting of socio-economic processes, both in Russia as a whole and in its individual regions, using the Khabarovsk Territory as an example. The study was carried out by building neural networks to assess, analyze, and forecast labor resources in Russia, in the Far Eastern Federal District, and, as a particular case, in the Khabarovsk Territory. Brief results of the analysis and evaluation of the modern education system in training graduates for different clusters of the national economy yield a forecast of a declining number of labor resources in the Khabarovsk Territory and the Far Eastern Federal District, a trend that threatens the effective development of Russia's eastern territories. The article concludes that neural networks are effective for estimating the target indicators of existing regional programs aimed at education (teaching), employment promotion, and support for filling staffing needs in science-intensive sectors of the economy of the Khabarovsk Territory, as well as at improving the population's ability to use modern information technologies within the digital economy, creating jobs in high-tech industries, and popularizing technical and information-technology education for a successful and effective redistribution of jobs to support the intensive development of the eastern territories of the Russian Federation.
INTRODUCTION
In the context of the development of digital technologies, significant transformations are taking place in the professional sphere. New competencies are becoming sought after among specialists in various sectors of the economy. At the regional level, programs are being adopted to modernize education so that society is ready for major information-technology changes. At the same time, the relevant ministries and departments are faced with the task of not just modernizing the existing systems of economic and social development, but also organizing the training of the young generation so that the knowledge gained in the system of general and specialized education now (today) does not become outdated in the next ten or fifteen years. The uncertainty generated by digital transformation is one of the reasons for the difficulties that arise in predicting the competencies that a professional (specialist) will need in order to successfully implement production and management processes in the digital economy. This problem, in turn, is part of the broader problem of the demand for labor resources in the digital economy. The problem of assessing the economy's need for labor resources has been studied in recent years by a number of authors and from various angles. Moroz D. M., Astafieva M. P., and Pitukhin E. A. outlined the main problems of forecasting the number of employees in the economy and also proposed their own forecasting method [4]. Marien L. S. and Melnikova D. M. [3] considered the reasons and principles that determine the formation of new approaches to forecasting the Russian economy's need for qualified personnel under digitalization. Pitukhin E. A., Kekkonen A. L., and Shabaeva S. V. studied the potential of the education system of the Far East from the standpoint of ensuring the advanced development of the macroregion as a priority territory of Russia, and gave a forecast quantitative assessment of the educational system's ability to meet the personnel needs of the economy of the Far East with graduates of secondary professional and higher education [5]. Okunkova E. A. proposed basic methods and tools for calculating the innovative economy's need for highly qualified personnel over the long and medium term [6]. Peshkova G. Yu. and Samarin A. Yu. identified the problems that the digitalization policy in Russia faces or may face and formulated acceptable solutions to them [7]. A number of authors have also studied and proposed several options for using neural networks in management activities. Kashirina E. A. defines neural networks as the main tool for predicting indicators of socio-economic development. Motrin T. G., Demenko A., and Dolgov I. V. describe the neural network as one of the main methods for forecasting performance indicators [8]. At the same time, the possibilities of using neural networks as a method for assessing and clustering labor needs in the Khabarovsk Territory have not been studied. Thus, the purpose of this paper is to characterize the possibility of using neural networks to assess the need for non-humanitarian labor resources in the economy of the Khabarovsk Territory.
Analysis of the size of the working-age population of the Khabarovsk Territory and the share of graduates of educational institutions by economic cluster of the region
Using neural networks, we analyzed the size of the working-age population of the Khabarovsk Territory and the share of graduates of educational institutions across the clusters of the region's economy. The preliminary official state statistics on which the neural network was built are given in this section.
Intensive development of industrial production in Russia and shortage of labor resources in non-humanitarian areas of its economy
Today, Russia has been actively developing industrial production, from 2000 to the present. During this period, many enterprises (hundreds or even thousands) in Russia were either restored after the collapse or modernized in such a way that they now produce products not only for the domestic market but also for foreign markets (during 1990-2000, the situation was the opposite). If we analyze statistics such as "Graduation of bachelors, specialists, and masters by higher education organizations and scientific organizations by groups of specialties and areas of training," we can see that the trend that began in the 1990s of training ever more "humanitarians" in the humanities, economic sciences, management, culture and art, and the service sector is steadily growing. The authors do not wish to belittle the significance of these areas for the country's economy and society, but over the last fifteen to twenty years, and above all in the coming decades, the trend should shift toward technical fields and healthcare. Taking "economics and management" as the baseline, we calculated the ratio of the other areas of training to it, because the number of graduates in "economics and management" is the highest in the entire Rosstat table, with the "humanities" accounting for the second-largest number of graduates (Table 1).
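The ratio calculation just described can be illustrated with a short sketch; the graduate counts used here are hypothetical placeholders rather than Rosstat figures, and Python is used only as a convenient notation.

# Hypothetical graduate counts (thousands), with "economics and management"
# taken as the baseline because it is the largest group in the table.
graduates = {
    "economics and management": 300.0,
    "humanities": 180.0,
    "engineering and technology": 120.0,
    "healthcare": 60.0,
    "natural sciences": 40.0,
}

baseline = graduates["economics and management"]
for field, count in graduates.items():
    print(f"{field:30s} ratio to baseline: {count / baseline:.2f}")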
1.1.2. Estimation of the share of the working-age population in the Khabarovsk Territory and in Russia.
The proportion of the working-age population in the Khabarovsk Territory is higher than in Russia in general and in the Far Eastern Federal District, yet this population tends to flow out of the Khabarovsk Territory and the Far Eastern Federal District, which is critical for the future development of these territories. Table 2 shows the total working-age population.
Estimation of the share of specialists with higher education employed in socially significant areas of the region's economy.
We see that in the Khabarovsk Territory, as well as in the Far Eastern Federal District, there is a downward trend in the number of workers. Over the past 18 years, the labor force in the Khabarovsk Territory has decreased by 5.4%, which is lower than the rate of decline of the labor force in the Far Eastern Federal District (9.2%). However, it should be taken into account that in the Russian Federation as a whole, the labor force has increased by 4.3 percent over the past 18 years. The low percentage of specialists with higher education employed in the socially significant areas of education and health care, for such large territories as the Far Eastern Federal District and the Khabarovsk Territory, is also a factor affecting the outflow of population (Table 3).
Preliminary conclusions on the assessment of the need for personnel that exists in the economy of the Khabarovsk territory.
Under current conditions, the issue of assessing the personnel needs that exist in the economy of the Khabarovsk Territory, by the professional education system and in terms of professions and specialties/areas of training, becomes topical. It is important to understand how many graduates the system should prepare at each level of education in order to meet the staffing needs of the region's economy, while taking into account the clusters of the labor market [4], [11].
Artificial neural networks as a modern tool for analyzing and predicting socio-economic processes in the digital economy
Artificial neural networks are one of the most promising tools for predicting socio-economic indicators. They open up new approaches to the study of dynamic systems in the field of economics. An artificial neural network is essentially a system of connected, closely interacting, fairly simple processors (artificial neurons). The main advantage of neural networks over traditional computational algorithms is the ability to learn; from a technical point of view, training consists in finding the coefficients (weights) of the connections between neurons. In general, the difference between neural networks and traditional forecasting models consists in taking into account a large amount of information, which increases the accuracy of the forecast. The accuracy of a one-year forecast made using neural networks can exceed 90%. At the same time, the purpose of using neural networks in the economic field is not to replace traditional methods; the goal of data mining is to identify latent patterns and rules in data sets. The model of the process of assessing the need for human resources in the non-humanitarian sphere in the economy of the Khabarovsk Territory based on artificial neural networks may include a number of elements (a minimal illustrative sketch of such a model follows the list):
1. The inputs are indicators such as the number of jobs for specific types of work in the production of goods and services (Table 3), the labor force (according to Table 2), and the number of places by groups of specialties and areas of training in institutions of higher education in the Khabarovsk Territory.
2. The output is a classification label indicating the need to reduce or increase the number of places in institutions of higher education.
3. Limitations of the process of assessing the need for labor resources: the reliability of the output data within 0.05. 4. Supporting mechanisms: the software used to assess the need for labor resources and the mathematical model of the neural network. A similar model can be used to assess the labor needs of non-humanitarian secondary vocational education.
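A minimal sketch of a classifier of the kind described in items 1-4 is given below. It is not the authors' model: the synthetic data, the feature names, and the decision rule used to label the training examples are hypothetical placeholders that only illustrate the structure (indicators in, a recommendation to increase or reduce study places out).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 400  # synthetic "specialty group x year" observations

# Hypothetical input indicators (cf. element 1 of the model above)
jobs = rng.uniform(1_000, 50_000, n)            # jobs for a given type of work
labour_force = rng.uniform(10_000, 300_000, n)  # available labour force
study_places = rng.uniform(100, 5_000, n)       # study places in the specialty group
X = np.column_stack([jobs, labour_force, study_places])

# Hypothetical target (element 2): increase places (1) when jobs clearly
# outstrip the pipeline of graduates, otherwise keep or reduce them (0).
y = (jobs / np.maximum(study_places, 1.0) > 12).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Training a neural network means finding the connection weights between
# neurons; scikit-learn does this with gradient-based optimisation.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")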
Our Contribution
When a neural network was built to forecast labor resources in terms of the population permanently residing in the Khabarovsk Territory and the Far Eastern Federal District, the result was a significant projected reduction in both the permanent population and the workforce. This reflects the negative trends of the last thirty years of population outflow from the Far East, and from the Khabarovsk Territory in particular. Therefore, the measures that the Russian government is taking today to preserve the permanent population in the east of the country, including in the Khabarovsk Territory, are not sufficient either to preserve the region's population or to increase it.
Paper Structure
The rest of the article is organized as follows. Section 2 presents the preliminary data used in this article, including the final result of building a neural network for forecasting labor demand in the Khabarovsk Territory, based on the above analysis and evaluation of state statistics. It then assesses the current legislation on the development of the digital economy in Russia and its implications for the economically active population of the Far Eastern Federal District and its constituent part, the Khabarovsk Territory. Finally, the main conclusions and directions for future research are presented.
CONNECTION OF PROGRAMS FOR THE DEVELOPMENT OF THE DIGITAL ECONOMY IN RUSSIA AND PLANNING TRAINING FOR NON-HUMANITARIAN SECTORS OF THE ECONOMY OF THE KHABAROVSK TERRITORY
Further, considering the programs for the development of the digital economy in Russia and in the Far Eastern Federal District in particular, the legislation on the development of the education system in Russia (including Decree of the Government of the Russian Federation No. 1642 of 26.12.2017 "On approval of the state program of the Russian Federation 'Development of Education'" [9]), the competence-based approach, and the transition to professional standards, the following conclusions were drawn: the task of the relevant ministries and departments is not just to modernize the existing systems of economic and social development, but to educate the young generation so that the knowledge obtained in the system of general and specialized education now (today) is not outdated within the next ten to fifteen years. In other words, it is necessary to develop standards, programs, and forms of education that incorporate not just modern information technologies, but the information technologies that will be introduced into public life over the next ten or fifteen years. In the analysis of the education system, neural networks can significantly help, first of all, to identify the factors that significantly affect the development of education in the country [1, 2].
Analysis of the constructed neural network based on official statistics
The task of developing modern programs and standards that prepare future specialists ahead of the curve, turning today's students into specialists who are able and ready for self-study and self-education, is complex, because it is connected with modern social challenges, the established model of moral and ethical principles, and the psychology and mentality of the modern young generation. In addition, the state is trying to reach the entire population as part of the modernization of education and the training (retraining) of modern specialists. This gives rise to targeted programs for retraining specialists of the older generation (especially those of pre-retirement age). In modern realities, the government of the Russian Federation is developing education and health programs that include components "ahead of" their development in the near future, including the "Digital Economy" program. An attempt is being made to revive the prestige of working professions and to correct the "kinks" in the education and training system of the period 1995-2010, when a significant proportion of graduates of general education institutions chose the future profession of "economist," "manager," or "lawyer"; the labor market in Russia today is oversaturated with these professions, while there is an acute shortage of specialists in "technical" and "engineering" specialties, as well as of doctors and medical workers, and this shortage grows in severity from west to east. Today, the Far Eastern Federal District is experiencing an acute shortage of technical specialists, despite the fact that its territory makes up a third of all of Russia. At the same time, the integration of the modern education system with the requirements and expectations that managers of industrial enterprises have of the skills and abilities of specialists and employees should, in our opinion, take into account: 1) the requirements of the professional standards themselves; 2) the dynamics of changes in the labor market; 3) the preservation of the best traditions and methods of the national education system (including, from the experience of the Russian Empire, the zemstvo schools, and the rich experience of the Soviet period, including professional and secondary special education); 4) the formation of more scientific content in the subjects and disciplines studied in the system of general, secondary, and higher education; 5) special attention in the education system to the disciplines of the natural science cycle (physics, chemistry, biology, mathematics, and information technology); 6) the creation in the socio-cultural environment of a respectful attitude toward trades and the professions of the social sphere, perhaps even at the expense of a slight disadvantaging of professions related to commerce and trade; at the same time, the profession of salesperson (sales floor) or cashier does not require a higher education, because the nature and quality of this work are not commensurate with the labor of a doctor or an engineer, whereas the profession of commodity expert requires special knowledge of markets, materials, and raw materials for the manufacture of quality products, so this issue needs to be approached with reflection and selectively; 7) in the higher education system, the provision of incentives for training technical and engineering specialists, especially physicists, mathematicians, chemists, and biologists, including integrated areas (physical chemistry, biochemistry, etc.), by allocating a greater number of budget-funded places and targeted-admission places for these specialties.
Results of the study
1) The purpose of the above proposals is to balance the labor market so that there is no shortage or surplus of specialists in different sectors. There are many employees with the specialty of "economist," "manager," or "lawyer" in Russia today, so people have to either retrain for new specialties or take jobs requiring lower qualifications than those indicated in their diplomas.
2) Of course, knowledge in the field of law and economics is necessary for every citizen of Russia, and the more literate the population is in these areas, the better it will be for the country as a whole. But when there are not enough specialists in the Russian Federation (engineers, physicists, chemists, biologists of various fields and related specializations, doctors, teachers), this problem must be solved. Regarding teaching: the teacher, as a specialist with deep knowledge of the disciplines that he teaches, must also know the teaching methodology perfectly; teaching is not only a teacher's lectures to his students, in which he explains the material in a comprehensive and multifaceted way, but also the ability to build feedback with students, to require the study of the material, and to conduct quality control of knowledge. Not every specialist in his field is able to be a teacher, just as not every medical worker is able to be a doctor.
3) The use of traditional forecasting models is of limited value for assessing the personnel reserve, as well as for analyzing and evaluating the number of specialists prepared by the education system for economic clusters, because such analysis and forecasting are multi-factor, must take into account a large amount of information, and therefore require high forecast accuracy. Neural networks differ in that they are able to perform multi-factor analysis and forecasting while taking into account a large amount of information; at the same time, the forecast accuracy is very high, above 90%, and modern software tools make it possible to build neural networks quickly. In other words, the use of neural networks is effective for the qualitative analysis and forecasting of complex socio-economic models and is economically feasible in comparison with traditional means of analysis (time series, analysis of variance, etc.); a small illustration of this point is sketched after this list.
4) The measures that the Russian government is taking today to preserve the permanent population in the east of the country, including in the Khabarovsk Territory, are not sufficient either to preserve the region's population or to increase it.
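As a small illustration of item 3), the sketch below compares an ordinary linear model with a small neural network on synthetic multi-factor data in which the target depends nonlinearly on the inputs; the data are invented and do not reproduce the regional statistics or the accuracy figures quoted above.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 4))                                          # four anonymous factors
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=n)   # nonlinear dependence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
).fit(X_tr, y_tr)

print(f"linear model R^2 on held-out data  : {linear.score(X_te, y_te):.2f}")
print(f"neural network R^2 on held-out data: {mlp.score(X_te, y_te):.2f}")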
| 2020-05-21T09:18:08.588Z | 2020-05-12T00:00:00.000 | {
"year": 2020,
"sha1": "bcaea5c88a457c0d3ea0ceec5acb4ef0c0305e04",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/aebmr.k.200509.018",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d8668a10da017e2b3b83aee08ad6144719ecbb98",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
229344507 | pes2o/s2orc | v3-fos-license | Mechanical and Physical Regulation of Fibroblast–Myofibroblast Transition: From Cellular Mechanoresponse to Tissue Pathology
Fibroblasts are cells present throughout the human body that are primarily responsible for the production and maintenance of the extracellular matrix (ECM) within the tissues. They have the capability to modify the mechanical properties of the ECM within the tissue and transition into myofibroblasts, a cell type that is associated with the development of fibrotic tissue through an acute increase of cell density and protein deposition. This transition from fibroblast to myofibroblast—a well-known cellular hallmark of the pathological state of tissues—and the environmental stimuli that can induce this transition have received a lot of attention, for example in the contexts of asthma and cardiac fibrosis. Recent efforts in understanding how cells sense their physical environment at the micro- and nano-scales have ushered in a new appreciation that the substrates on which the cells adhere provide not only passive influence, but also active stimulus that can affect fibroblast activation. These studies suggest that mechanical interactions at the cell–substrate interface play a key role in regulating this phenotype transition by changing the mechanical and morphological properties of the cells. Here, we briefly summarize the reported chemical and physical cues regulating fibroblast phenotype. We then argue that a better understanding of how cells mechanically interact with the substrate (mechanosensing) and how this influences cell behaviors (mechanotransduction) using well-defined platforms that decouple the physical stimuli from the chemical ones can provide a powerful tool to control the balance between physiological tissue regeneration and pathological fibrotic response.
INTRODUCTION
Fibroblasts are cells belonging to the mesenchyme that are capable of producing and modifying extracellular matrix (ECM) components such as fibronectin and collagen (Kanekar et al., 1998). They are present in various tissues. For example, in neonatal and adult heart tissues, fibroblasts arise from endogenous cell populations via epithelial to mesenchymal transition (EMT) and from bone marrow derived cells (Visconti et al., 2006). Cardiac fibroblasts play a crucial role during fetal development and neonatal growth by contributing ECM to several specific structures of the heart (Figure 1) (Manso et al., 2009;Souders et al., 2009). In general, fibroblasts are flat and spindle shaped and can be easily distinguished from other cell types residing in the tissues, as fibroblasts lack tissue-specific functional hallmarks. Returning to the example of the heart tissue, the cardiac fibroblasts lack the basement membrane typical of the other cardiac resident cells (Kanekar et al., 1998).
Despite this, fibroblasts perform various critical functions in tissues and organs, such as generating ECM, actively migrating, and producing or degrading growth factors and cytokines that are fundamental for the inflammatory cell response. Fibroblasts are also key players in several tissue-specific functions, such as ensuring normal heartbeat, where they form and maintain networks of junctions with several cell types, without which the tissue enters a pathological state (Camelliti et al., 2004, 2005; Baudino et al., 2006). As such, understanding their behavior within the tissue is a matter of high relevance, especially given that fibroblasts are a very common cell type throughout the human body. Fibroblasts have been extensively studied in vitro over several decades, partly because they can be easily derived from different tissues and aided by the simplicity of their in-vitro culture. There is evidence that fibroblasts have to be activated to proliferate and migrate during specific pathophysiological conditions such as wound healing and fibrosis, and thus play an important role in the development and repair of tissues (Gabbiani, 1996; Hinz et al., 2007). How the activation, phenotype transition, and migration of fibroblasts take place in the contexts of injury response, tissue regeneration, wound healing, and fibrosis remains a key outstanding question.
One of the first responses to a stress in the tissue, such as in acute injuries, consists of physical changes at the cellular and tissue levels, such as tissue stiffening associated with changes in the ECM composition (Georges et al., 2007). These changes inevitably disrupt the mechanical homeostasis that underlies normal tissue architecture and function (Humphrey et al., 2014). Inflammatory signals such as transforming growth factor beta (TGF-β) and tumor necrosis factor alpha (TNF-α) are released after injury, which can lead to cytoskeletal remodeling that, in turn, alters cell-generated forces and cellular mechanical properties (Wang et al., 2001; Leung et al., 2007; Yang et al., 2011). When the injuries cannot be resolved and repaired, the response switches from wound healing to fibrosis.
FIGURE 1 | Fibroblast-to-myofibroblast transition (FMT). The scheme summarizes the FMT process, the corresponding changes in fibroblast behavior, and the downstream effects at the tissue level. The transition starts from fibroblast activation due to different kinds of stimuli. The activation can sometimes be reversed or can proceed to the apoptosis of the myofibroblasts. When they escape these routes, due either to persistent stimuli or to intracellular misregulation, FMT will lead to changes in the extracellular matrix (ECM) deposition and its architecture, driving the tissue to a pathological state. At the cellular level, FMT results in appreciable changes in the intracellular stress fibers and α-SMA expression.
Fibroblast-mediated fibrosis can affect every tissue of the body and is a frequent pathological feature of chronic inflammatory diseases. During this pathological process, homeostasis is disrupted and a variety of biochemical factors are released by inflammatory cells, which trigger fibroblasts to undergo a phenotypical change to become myofibroblasts, which in turn leads to a notable change in the tissue microenvironment. One critical pathway is the TGF-β pathway (Wynn and Ramalingam, 2012; Rockey et al., 2015). This pathway can strongly impact the transition of fibroblasts to a myofibroblast phenotype, which involves alpha smooth muscle actin (α-SMA) production with a stress-fiber-like appearance, further leading to migration, proliferation, and production of ECM components such as collagen type 1 that change the mechanical and physical properties of the environment. It was shown that increasing matrix stiffness, a phenomenon observed in aging tissue, leads to myofibroblast activation (Wang et al., 2006; van Putten et al., 2016). During dermal wound healing, the stresses within the tissues are reduced, especially inside the wound bed, causing myofibroblasts to enter a quiescent state or initiate the apoptosis pathway (Desmoulière et al., 1997; Hinz et al., 2001b). On the other hand, splitting the wound or exposing the tissue to chronic mechanical stress keeps the myofibroblasts activated, leading to the opposite response, i.e., preventing healing and promoting scar formation (Aarabi et al., 2007; Gurtner et al., 2011). Similarly, during wound healing, myofibroblasts can either be inactivated, moving toward a more quiescent state, or continue their normal functioning, leading the tissue in which they reside along the fibrotic pathway. The risk of myofibroblasts escaping inactivation and overcoming apoptosis control is always present; an example of controlled escape is the CCl4 rodent liver tissue model, where hepatic stellate cells can turn into myofibroblasts (Kisseleva et al., 2012).
As we shall discuss, the evidence indicates a complex, dynamic interplay between fibroblasts and the extracellular matrix in the tissue, where cells alter the properties of the environment and, at the same time, changes in the substrate mechanical and physical cues lead to changes in cellular organization and behavior (Jaalouk and Lammerding, 2009;Kurniawan et al., 2016). Sensing of physical extracellular cues and the subsequent dynamics of the interaction between cells and the ECM regulate the downstream mechanotransductive events, causing a variety of nano-and micro-topography-sensitive cellular behaviors, including cell adhesion, morphology, proliferation, gene expression, self-renewal, and differentiation (Lemischka and Moore, 2003;Kingham and Oreffo, 2013). In light of these, in this review we highlight how a better knowledge of how physical/mechanical stimuli can influence the phenotype transition of fibroblasts can provide us with a better control on this process and allow us to revert in more efficient way the fibrotic tissue response, thereby presenting an important step forward to treat fibrotic pathologies.
FIBROBLAST-TO-MYOFIBROBLAST TRANSITION
A key step in wound healing, but also in fibrotic pathological diseases, is the activation of the fibroblast to become myofibroblast, where they escape the entrance to a quiescent state or the apoptosis pathway (Gabbiani et al., 1971). This phenotype transition is defined as Fibroblast-to-Myofibroblast Transition (FMT). The influence of FMT has received significant attention in the context of diseases such as bronchial asthma (Michalik et al., 2018). Some tissue-specific FMT events have been identified, such as increased collagen deposition within the subepithelial basement membrane in asthma, although these events do not fully explain the variations in the severity of asthma (Chu et al., 1998). Here, therefore, we will focus on the shared features of FMT and factors that promote FMT, drawing examples from different tissues and tissue pathologies.
Broadly, FMT can be subdivided into 2 stages. The fibroblast first become activated to a proto-myofibroblast phenotype, followed by a second stage completing the cell phenotype transition (Tomasek et al., 2002;Hinz and Gabbiani, 2003). During the initial transition stage, distinguishing the normal fibroblasts from proto-myofibroblasts is very difficult, but if the elevated mechanical, physical, and biochemical stresses due to the injuries continue to be present in the tissue, they start the polymerization of α-SMA-containing stress fibers (Figures 2A,B). The hallmarks of the full transition into myofibroblast are the expression of β-cadherins, the formation of mature focal adhesions (FAs), and reduction in migration and proliferation, with increased contractility (Hinz and Gabbiani, 2003;Hinz et al., 2004;Ward et al., 2008). The mechanical tension of the wound and the presence of growth factors further push toward the phenotype transition from fibroblast to myofibroblast (Balza et al., 1988). Interestingly, the formation of stress fibers that promote cell motility can also be induced by the presence of growth factors (Malmström et al., 2003), suggesting the role of environmental humoral stimuli in FMT.
Another kind of FMT-inducing stimulus is related to an elevated influx of immune cells associated with increased vascular permeability, and the subsequent release of cytokines and chemokines. The role of interleukins during the inflammatory response, and how they correlate with FMT, is especially well understood. For example, when stimulated by IL-4 and IL-13, the expression of α-SMA in human lung fibroblasts is increased in a concentration- and time-dependent manner (Hashimoto et al., 2001; Saito et al., 2003).
More recently, mechanical stimuli have also been demonstrated to have a key role in FMT (Tomasek et al., 2002; Balestrini et al., 2012; Hinz et al., 2012; Darby et al., 2016). Several in-vitro studies have shown that mechanical stress within the cellular environment, induced for instance by different mechanical and physical properties of collagen gels, is one of the factors that controls the shift in the fibroblast phenotype and cell fate (Arora et al., 1999; Hinz et al., 2001b; Wang et al., 2003; Choe et al., 2006; Balestrini et al., 2012). However, at the moment, there are still very few results on the direct impact of mechanical factors and their influence on FMT. There is evidence that mechanical stress leads to an increase in ECM protein and proteoglycan content (Breen, 2000; Ludwig et al., 2004; Le Bellego et al., 2009; Manuyakorn, 2014; Manuyakorn et al., 2016), and recent studies have demonstrated that the mechanical properties of the microenvironment in which the cells reside, such as lung, bronchial, neuronal, and cardiac tissues, have an active role in cell fate and development (Tschumperlin, 2013; Michalik et al., 2018; Park et al., 2018). These studies suggest that biochemical, mechanical, and physical factors in the microenvironment take part in a regulatory network that leads to different cell fates by specifically inducing intracellular changes, also at a mechanical level, driving the completion of the phenotypic transition from proto-myofibroblast to myofibroblast.
FIGURE 2 | Key signatures of FMT. (A) Influence of patterned substrates on cytoskeletal arrangement and cardiomyogenic differentiation at different time points. In the first row the cells reside on a flat surface, while the following two rows present the same kind of grooves but with different dimensions. Image was adapted with permission from Gu et al. (2017). (B) Schematic illustration of the mechanostimulation that leads to myofibroblast differentiation. The upper part shows that endothelial cells lose their endothelial markers. The lower part shows that mechanical factors, such as the degradation or production of ECM that alters tissue stiffening, can induce the differentiation, for example in cardiac fibroblasts. Image was adapted with permission from Schroer and Merryman (2015). (C) Different fibroblast activation pathways through biochemical and mechanical factors in asthmatic (AS) and non-asthmatic (NA) patients. Image was adapted with permission from Michalik et al. (2018).
Cell Phenotype Is Driven by Physical and Mechanical Properties of the Environment
In vivo, cells are embedded in a complex ECM during both development and normal homeostatic maintenance, where ECM fibers present chemically and structurally intricate contact interfaces. Within this intricate ECM network, fibroblasts and other cell types drive ECM remodeling by deforming, reorienting, and degrading the ECM fibers, as well as depositing new ECM (Zamir et al., 2000;Shieh et al., 2011). These events are critical for tissue morphogenesis and maintenance. To study these contact interfaces systematically, minimal in-vitro model systems using either microfabricated substrates, controlled deposition of ECM fibers, or structured protein patterns have been developed (Kurniawan and Bouten, 2018).
Experiments performed using adipose stromal cells (ASCs) cultured in collagen matrices of different architectures show that matrices with thicker fibers promote ASC phenotype transition into myofibroblast through regulation of VEGF and IL-8 secretion (Seo et al., 2020). Intriguingly, nanoscale changes in the ligand spacing of model ECM fibers were shown to influence the collective cell behavior and overall characteristics, such as action-potential propagation in cardiac myocytes, on the scale of centimeters, suggesting effects on the ECM organization over six orders of magnitude of length scale (Kim et al., 2010). Recent works from our group using patterning of ECM proteins have furthermore shown that various morphological features of myofibroblasts that are relevant for FMT, such as cell area, shape, elongation, and alignment, are sensitively governed by the ECM patterns in a length-scale-dependent manner (Buskermolen et al., 2019(Buskermolen et al., , 2020. Moreover, the ECM architecture at the microscale induces different cellular events that activate a mechanical feedback loop whereby cell-generated forces lead to matrix remodeling, which in turn induces mechanotransductive processes and thus influencing the cell-generated forces again, by modulating the cell's capability to form and mature FAs as a result of changes in the stiffness of the substrate (Hall et al., 2016;Sapudom et al., 2019). Taken together, these findings clearly show that physical cues from the environment can strongly influence the phenotype of tissue-resident fibroblasts, which in turn can shape tissue homeostasis.
Another way that microenvironmental cues can affect tissue homeostasis is through changes in cell composition due to cellular movements. A relatively well-recognized consequence of the abovementioned mechanical feedback loop is the family of cellular "taxis" responses triggered by the ability of cells to sense chemical, mechanical, electrical stimuli gradients in the environment. These taxis responses include chemotaxis (sensing to spatial gradients of chemical factors) (Devreotes and Janetopoulos, 2003), haptotaxis (sensing of the surface-bound ECM proteins densities) (McCarthy and Furcht, 1984;Isenberg et al., 2009), durotaxis (sensing of substratum rigidity) (Lo et al., 2000), galvanotaxis (sensing of electric fields) (Mycielska and Djamgoz, 2004), and curvotaxis (sensing of cell-scale curvature variations) (Pieuchot et al., 2018). These sensing machineries can be locally activated within the tissue microenvironment, triggering specific mechanotransductive pathways that not only can instigate FMT directly, but also can promote active migration of fibroblasts and myofibroblasts into and out of the tissue. Using artificial engineered substrates that mimic the chemical, mechanical, and physical properties of highly organized ECM fibers, and so controlling their spatial density, it was shown in a recent study that the fiber density variation can be sensed by fibroblasts, and interestingly different cell types exhibit different sensitivities along a density gradient depending on their cortical stiffness (Kim et al., 2012). Interestingly, skin fibroblasts have bidirectional guidance from the highest and the lowest density areas toward an optimal one. This suggests that a topotactical guidance depending on ECM density is present. Indeed, cells tend to move toward the direction that allows them to make the largest contact area with the substrate (Park et al., 2018). Of note, fibroblast sensitivity to the taxis guidance cues can vary with its activation state; indeed, myofibroblast migrate differently depending on the mechanical and physical properties of the ECM (Berk et al., 2007). During wound healing, fibroblasts are directed chemotactically by the presence of TGFβ 1 in the microenvironment where the homeostasis is disrupted and can afterward shift their phenotype to myofibroblasts, leading further to cell migration toward the wound site, thereby increasing the local myofibroblast subcellular population (van Caam et al., 2018).
Through these cellular responses to the physical cues in the environment, pathological features can emerge and progress. A high density of myofibroblasts and a different ratio between ECM components can be found in the bronchial and transbronchial biopsies of advanced asthma patients, compared to those of patients with controlled and treated asthma symptoms (Weitoft et al., 2014). These features are caused by the activation of myofibroblasts within the tissue, which start a positive feedback loop to retain their activated state instead of entering a quiescent state, as well as by the associated regulation of the secretion of matrix metalloproteinases (MMPs) and their regulators, tissue inhibitors of metalloproteinases (TIMPs), thereby allowing the progression of the pathological state.
Interestingly, specific mechanical requirements have been found using a 2D in-vitro platform that the substrate must satisfy in order to initiate FMT. In particular, the substrates have to present a Young's modulus of at least 3 kPa, which allows the cells to produce large, mature integrin clusters that enable the full phenotype transition (Balestrini et al., 2012). Moreover, depending on the cell type studied, it can happen that stiffer culture substrates with a Young's modulus higher than 20 kPa are needed to continue the mechanotransductive machinery required to drive the phenotype transition of fibroblasts. Similarly, during in vitro wound healing assays, FMT requires a stiffness threshold in range of 25-50 kPa (Balestrini et al., 2012). These studies further emphasize the importance of physical and mechanical interactions with the ECM during FMT.
The Contact Events Start the Signal Transduction
Contact events with ECM and maturation of adhesion complexes are the first key steps in cell-ECM interactions that allow the regulation of cell functions such as growth, differentiation, and disease (Hynes, 2002;Geiger et al., 2009). The adhesion complexes arise as nascent adhesions (Alexandrova et al., 2008;Choi et al., 2008) that reach the dimensions of ∼110 nm (Bachir et al., 2014;Changede et al., 2015;Changede and Sheetz, 2017). Their maturation is then promoted through outside-in mechanotransduction mechanism from the matrix (Wolfenson et al., 2016;Saxena et al., 2017a,b).
When cell membrane receptors bind a ligand in the ECM substrate, the intracellular tail of the β subunit of the integrin binds talin, a mechanoprotein, in its closed conformation. Talin links the β subunit with F-actin and, due to the force exerted by myosin II through F-actin, switches to an open conformation, consequently exposing binding sites for vinculin, another protein that confers stability to this complex, the so-called "molecular clutch" (Sheetz, 1974; Geiger et al., 2001; Swaminathan and Waterman, 2016). First, vinculin binds to the binding site along talin, which is in the open conformation, near the link between the integrin β-tail and talin. Subsequently, vinculin binds in the same way along talin, but in proximity to the link between talin and F-actin. If this clutch can support the loading force exerted by myosin II on the cell membrane receptors, the maturation of FAs leads to mechanotransduction that depends on the properties of the environment in which the cell resides (Sheetz, 1974; Geiger et al., 2001; Swaminathan and Waterman, 2016). The maturation of the FAs has been directly linked to ligand spacing (Dalby et al., 2014) as well as substrate stiffness (Oria et al., 2017), showing that, depending on the Young's modulus of the substrate, the minimum ligand spacing necessary to lead to maturation of adhesion complexes can change. The molecular clutch mechanism can therefore instigate FMT through these mechanosensitive responses at the cell-substrate contact interface.
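A toy numerical sketch of a motor-clutch-type model is given below; it is not the formulation used in the studies cited here, and all parameter values are illustrative assumptions. It only shows how force-dependent (Bell-model) bond rupture, combined with a substrate spring, makes the force transmitted to the substrate depend on substrate stiffness.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (order-of-magnitude assumptions)
N_CLUTCH = 50        # number of molecular clutches (integrin-talin-vinculin links)
K_CLUTCH = 1.0       # clutch spring constant, pN/nm
K_ON = 1.0           # clutch binding rate, 1/s
K_OFF0 = 0.1         # unloaded unbinding rate, 1/s
F_BOND = 2.0         # Bell-model characteristic bond force, pN
N_MOTOR = 50         # myosin motors pulling the actin filament
F_MOTOR = 2.0        # stall force per motor, pN
V_UNLOADED = 120.0   # unloaded retrograde actin flow, nm/s
DT = 0.005           # time step, s
T_END = 30.0         # simulated time, s

def mean_traction(k_substrate):
    """Time-averaged force (pN) on a substrate spring of stiffness k_substrate (pN/nm)."""
    bound = np.zeros(N_CLUTCH, dtype=bool)   # engagement state of each clutch
    x = np.zeros(N_CLUTCH)                   # actin-side end position of each clutch, nm
    record = []
    for _ in range(int(T_END / DT)):
        # Substrate position from force balance with the substrate spring:
        # k_sub * x_sub = sum over bound clutches of K_CLUTCH * (x_i - x_sub)
        n_bound = int(bound.sum())
        x_sub = K_CLUTCH * x[bound].sum() / (k_substrate + n_bound * K_CLUTCH) if n_bound else 0.0
        f_clutch = np.where(bound, K_CLUTCH * (x - x_sub), 0.0)
        f_total = max(f_clutch.sum(), 0.0)
        record.append(f_total)
        # Retrograde flow slowed by load (linear force-velocity relation for the motor ensemble)
        v = V_UNLOADED * max(1.0 - f_total / (N_MOTOR * F_MOTOR), 0.0)
        x[bound] += v * DT                   # bound clutches are dragged along with the actin flow
        # Stochastic binding and force-dependent (Bell-model) unbinding
        p_on = 1.0 - np.exp(-K_ON * DT)
        k_off = K_OFF0 * np.exp(np.clip(f_clutch, 0.0, None) / F_BOND)
        p_off = 1.0 - np.exp(-k_off * DT)
        newly_bound = (~bound) & (rng.random(N_CLUTCH) < p_on)
        released = bound & (rng.random(N_CLUTCH) < p_off)
        x[newly_bound] = x_sub               # new bonds form unstretched at the current substrate position
        bound = (bound | newly_bound) & ~released
    return float(np.mean(record))

if __name__ == "__main__":
    for k_sub in (0.01, 0.1, 1.0, 10.0):     # soft to stiff substrates, pN/nm
        print(f"k_substrate = {k_sub:6.2f} pN/nm -> mean traction = {mean_traction(k_sub):6.1f} pN")

In such sketches, the quantity of interest is how the time-averaged traction varies with the substrate spring constant; the absolute numbers carry no biological meaning.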
ECM Composition Regulates Fibroblast Mechanosensing
The ECM composition is also a hallmark of the pathological state of the tissues. Depending on the abundance of its components, the ECM can present different microarchitectures, leading to different mechanical and physical properties. Importantly, interactions between the cells and the different ECM components can directly regulate cell behavior such as migration and development (Park et al., 2018;Changede et al., 2019;Nastały et al., 2020).
One of the main components of the ECM that has been shown to play an important role in promoting FMT is the fibronectin splice variant extra domain A (ED-A-FN), which is upregulated in pulmonary disorders such as asthma (Larsen et al., 2006; Ge et al., 2015). Fibroblasts from the lungs of ovalbumin-treated mice that lack ED-A-FN present a reduced tendency to proliferate and migrate and, very interestingly, display lower α-SMA expression as well as less collagen deposition with impaired TGF-β1 and IL-13 release, which are all hallmarks of the phenotype transition (Kohan et al., 2011). This suggests that the composition of the ECM can influence the mechanical stress in the tissue and thus affect the cellular phenotype transition. Further evidence is the role of fibulin-1, a glycoprotein related to the stabilization of other protein groups in the ECM that is also known to be a marker for bronchial asthma (Lau et al., 2010; Giziry et al., 2017), indicating that the enhanced stability of the ECM increases the propensity of fibroblasts to undergo FMT.
It was demonstrated that enhancing actomyosin-mediated cell contractility can induce stromal cell mechanoactivation, leading adipose stromal cells to turn into myofibroblasts (Seo et al., 2015). This transition, in turn, leads to changes in the cellular environment through the deposition of more fibronectin as well as deformation of the fibronectin network, partially unfolding the fibronectin molecules (Wan et al., 2013; Wang et al., 2017).
Mechanotransduction Leads to Distinct Internal Cellular Rearrangements
Following the mechanosensing events described above, the mechanical signals are transduced to elicit a variety of cellular responses that are also reflected in the alterations of internal cell organizations relevant for the progression of FMT. Here we highlight a few notable findings that exemplify this concept.
It has been recently demonstrated that cell behaviors that are implicated in FMT, such as migration, can be influenced by the curvature of the substrate (Pieuchot et al., 2018; Werner et al., 2019). This is especially interesting as curvature is a common geometrical feature of in-vivo tissues and organs (Callens et al., 2020; Werner et al., 2020). Convex spherical surfaces have been shown to cause a compression of the cytoskeleton on the nucleus, increasing the contact area between cell and substrate (Werner et al., 2017). Thus, cell nuclei were flattened and stretched over the convex surface, even resulting in a bean-like nuclear morphology. This effect on nuclear morphology can further translate to changes at the transcription level. In a recent study, it was observed that, depending on the adhesion area, fibroblasts alter the cytoskeletal tension transmitted to the nuclear envelope; small substrate areas lead to increased histone acetylation levels and a decreased nuclear volume (Alisafaei et al., 2019). Moreover, the motility of cells is regulated by the organization of stress fibers (SFs), but in curved environments fibroblasts present a different SF organization with respect to those on planar substrates. A negative curvature polarizes the cells and directs cell migration (Bade et al., 2018). Therefore, rearrangement of the cytoskeleton through the mechanotransductive machinery leads to changes in the polarity and positioning of the nucleus within the cell, influencing cell migration (Vassaux et al., 2019; Moure and Gomez, 2020). Indeed, both F-actin and focal adhesion distributions were strongly influenced by this repositioning of the nuclear compartment (Nastały et al., 2020). These findings provide intriguing insights that physical reorganization of the intracellular structural components and mechanotransductive players, which can be induced by changes in the tissue morphology, can directly affect FMT.
Further evidence of the importance of the mechanosensing in governing intracellular organizations and cellular response can be observed by direct perturbations to the mechanosensing apparatus. When ASCs that are undergoing FMT are treated with Y27632, which inhibits ROCK and reduces α-SMA levels (Seo et al., 2015), as well as diminishing the capability of the cells to sense the environment by inhibiting the receptors to TGF-β, the cells exhibit a decrease in myofibroblast transition and moreover reduced VEGF and IL-8 secretion. On the contrary, treating these cells with blebbistatin influences their morphology, confining the adhesions to the extreme cell periphery, causing actin stress fiber formation and enhancing contractility, thereby stimulating ASC myofibroblast transition (Seo et al., 2020). Consistent with this link between cell mechanosensing, force generation, mechanical properties, and organization, it is also increasingly recognized that ECM viscoelasticity, non-linear elasticity, and fiber rearrangement play a central role for cell behavior such as proliferation and multilineage differentiation (Baker et al., 2015;Chaudhuri et al., 2016;Das et al., 2016;Xie et al., 2017;Matera et al., 2019;Vining et al., 2019). Tuning the structural and mechanical properties of hydrogels has been shown to lead to different types of cellular organization and responses (Goh and Holmes, 2017;Herum et al., 2017).
With regard to active mechanical cues such as stretching, a recent study showed that stretching myofibroblasts leads to the maintenance of the shifted phenotype through the release of endogenous latent TGF-β1 (Walker et al., 2020). On the other hand, when treated to block the release of TGF-β1, stretched cells maintain their phenotype, exhibiting contractility and stiffness comparable to static cultures. This suggests that cyclic stretching can be responsible for maintaining the myofibroblastic phenotype, leading to chronic fibrosis (Walker et al., 2020). Furthermore, under stretch, TGF-β1-treated cells showed further alignment relative to static conditions, as well as increased gel compaction (Walker et al., 2020). This demonstrates the capability of mechanical stretching of the substrate to change the sensitivity of the cells to biochemical stimuli present within the environment. Stretching causes a downregulation of ECM proteases, leading to an increase in the secretion of collagen-I-associated peptides. This is consistent with the concomitant increase in the inflammatory and fibrotic response of the tissue (Sun et al., 2016; Rogers et al., 2020).
Taken together, these studies highlight the importance of better characterizing and quantifying the involvement of the mechanical and physical properties of the environment in which the cells reside, paying attention not only to the mechanotransductive machinery involved, but also to achieving better control over this machinery through the passive function of the substrates and the active adaptation of the cells.
DISSECTING THE CELL-SUBSTRATE INTERACTION EVENTS IN FMT
The previous sections have unequivocally established the importance of changes at the mechanical level of the cell in FMT, from the first contact event with the substrate to the influence on cellular behavior. During this phenotype transition, a variety of humoral and mechanical cues from the substrate drive the fibroblast to myofibroblast transition via a proto-myofibroblast state. When the transition completes, the myofibroblasts present α-SMA containing stress fibers and an enhanced contractility. To complete such transition, the mechanical and physical properties of the environment play an active role in inducing intracellular changes. These cellular and intracellular events are overall interconnected through the mechanotransductive machinery, starting from the mechanosensing exerted by the fibroblasts.
Thus, the dynamic interplay between the ECM and the cell seems to play a central role in cytogenesis, especially during FMT.
Efforts to understand the cellular mechanosensing and mechanotransduction mechanisms have gained significant attention since it became known that these are involved in cell differentiation processes (Yim and Sheetz, 2012;Dalby et al., 2014;Iskratsch et al., 2014;Murphy et al., 2014). Importantly, differentiation and FMT share many regulatory pathways that direct the expression or the release of factors involved in cell metabolism or cell fate, such as α-SMA production and the release of TGF-β. In the initial contact events with the substrate, mechanosensing and mechanotransduction of the physical signals in the environment leads to the maturation of the FAs only if the environment properties (ligands spacing and stiffness) satisfy the requirements to drive the changes. In addition, cellular sensing of extracellular topographical cues through nanoscale architecture causes a multitude of nanotopographysensitive cellular behaviors, including cell adhesion, morphology, proliferation, gene expression, self-renewal, and differentiation (Lemischka and Moore, 2003;Kingham and Oreffo, 2013). This is possible though integrin-mediated sensing of mechanical and physical features of the microenvironment (Geiger et al., 2009;Dalby et al., 2014;Chen et al., 2015;Humphries et al., 2015) and will lead to intracellular rearrangements of the cytoskeleton and alteration in the mechanical proprieties of the cell, a key step in FMT. At the same time, the cell contractility is enhanced, causing the cell to release latent TGF-β and pushing toward FMT. Thus, dynamic interplay between the cells and the ECM induces cytoskeletal rearrangements that will simultaneously cause changes in the force transmission, cytoskeletal organization, and mechanical properties of the cell and its nucleus, as well as of the ECM in tissues (Jaalouk and Lammerding, 2009;Kurniawan et al., 2016;Nastały et al., 2020).
The importance of considering the cell-substrate interaction events in FMT should also be considered in light of the fibroblast heterogeneity in different tissue microenvironments. Indeed, fibroblasts exhibit differing functional identities, including the composition and expression profile of the intracellular macromolecules, depending on the tissue where they reside (Lynch and Watt, 2018;LeBleu and Neilson, 2020). In-vitro culture entails loss of most of mechanical and physical stimuli normally present in the tissue-specific microenvironments (Lynch and Watt, 2018), which affect not only the mechanical properties of the cells, but also cellular functions such as polarization (Nastały et al., 2020). This suggests a possible involvement of mechanotransduction in the regulation of gene expressions. As will be addressed in section Coupling of Mechanical and Physical Cues of the Substrates, combinations of different local physical and mechanical stimuli that are sensed by cells in a physiological environment, such as roughness, topography, stiffness, and stretching, could influence the functional identities of the fibroblasts, further leading to heterogeneity in the cell population.
Taken together, studying the mechanical interactions between cells and the substrate at the cell and tissue levels is critical, not only to start recognizing how much the environment is involved in physiological processes, but also to better understand how environmental features can be manipulated to speed up or slowdown pathological processes. To do so, in-vitro biomimetic substrates have become an invaluable toolset as a way to simplify the complexity of the in-vivo cell-substrate interactions.
PRODUCTION OF SUBSTRATES WITH WELL-DEFINED PHYSICAL AND MECHANICAL CHARACTERISTICS
To better understand the role of physical and mechanical cues in the environment in FMT, a multitude of versatile substrate fabrication techniques have been developed and applied in the attempt to produce substrates that accurately mimic aspects of the ECM properties. In this section, we summarize commonly used methodologies for creating substrates with welldefined physical and mechanical features of ECM. Here we pay particular attention to the accessible length-scale target and characteristics (Figure 3), while referring readers who are interested in the detailed working principles and practical aspects to the original articles describing the individual methods. Moreover, we highlight the emerging efforts to use a combination of these cues to better mimic the physiological condition of the tissues.
Mimicking ECM Topography
The aim of the approaches that focus on topography is to understand the impact of substrate micro-or nanotopography on mechanotransductive processes, and to exploit these substrates to control cell behavior. Various fabrication technologies have been adapted to the needs of cell biology (Dalby et al., 2014;Chen et al., 2015;Crowder et al., 2016;Chighizola et al., 2019). One of the most commonly applied group of techniques for structuring the surface is based on lithography. In this group, there are three main methodologies: photolithography, electron beam lithography (EBL), and colloidal lithography.
Optical lithography works by transferring the required pattern onto a photosensitive emulsion (photoresist) on a substrate. This approach can be very useful for creating adhesion patterns and controlling cell organization (Dalby et al., 2007;Bettinger et al., 2009). The main limitation of optical lithography is the requirement that the starting layer on which the features will be built must be a stiff, flat surface, which precludes fabrication of 3D structures. Prefabricated structured rigid molds can be used in pattern transfer methods to print the mold features onto other materials with high efficiency and fidelity (Guo, 2004;Pandey et al., 2019). The most common techniques are nanoimprinting and replica molding (Chen et al., 2015). These methodologies can be exploited to transfer patterns with resolution of a few hundred nanometers. In practice, the procedure for replica molding requires a baking step, while one of the bottlenecks of nanoimprinting is controlling demolding after the heating step used to transfer the pattern. Normally these approaches are used in combination with microfluidic devices. Another methodology that does not necessitate templates is surface roughening, which can be achieved, for example, by physical or chemical etching (Boyan et al., 1998;Thapa et al., 2003). Both methods can be applied to large surface areas, but accurate control of the feature size is challenging (Chen et al., 2015). Surface roughening is nonetheless a very useful technique that allows study of cell sensing of substrates with well-defined surface roughness.
In general, the mold must present precisely defined topographies that are transferred to the substrate. For example, for nanoimprint lithography, the molds are produced using PDMS (Odom et al., 2002), polyurethane acrylate (Kim et al., 2003), and "hard-PDMS" (Schmid and Michel, 2000). In addition, there are efforts directed toward the production of soft molds with improved modulus and solvent resistance, although these have the drawback of reduced durability owing to the temperatures used during pattern transfer (Ro et al., 2011). Moreover, these kinds of molds are produced starting from master inorganic templates, normally metals or ceramics (Albrecht et al., 2015). The bottleneck in their production is the complexity of the procedures.
The optical lithography approach can be applied at large scale and high throughput owing to the very quick transfer of topography from the mold to the substrate. A modern lithography tool can pattern up to roughly 300 wafers (300 mm diameter) per hour with roughly 50 nm 2D pattern resolution, achieving a pixel throughput of 1.8 terapixels per second. The achievable resolution of this method is determined by the UV light wavelength, as well as by the capability to reduce diffraction at the mask apertures by reduction lenses that capture higher-order diffraction light. However, pushing below 100 nm resolution is very challenging (Chen et al., 2015).
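As a rough illustration of why sub-100 nm optical features are hard to reach, the attainable half-pitch can be estimated with the standard Rayleigh-type scaling R ≈ k1·λ/NA. The sketch below evaluates this scaling; the specific wavelengths, numerical apertures, and k1 value are illustrative assumptions and are not taken from the studies cited above.

```python
# Rayleigh-type resolution estimate for projection photolithography: R ~ k1 * wavelength / NA.
# Illustrative tool parameters only; not figures from the cited works.

def rayleigh_resolution(wavelength_nm: float, numerical_aperture: float, k1: float = 0.3) -> float:
    """Return the approximate minimum printable half-pitch in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

for wavelength_nm, na, label in [
    (365.0, 0.60, "i-line, dry"),           # older mercury-lamp tool
    (193.0, 0.93, "ArF, dry"),              # deep-UV, dry objective
    (193.0, 1.35, "ArF, water immersion"),  # modern immersion scanner
]:
    r = rayleigh_resolution(wavelength_nm, na)
    print(f"{label:22s} lambda = {wavelength_nm:5.0f} nm, NA = {na:.2f} -> R ~ {r:5.1f} nm")

# The immersion case lands near the ~50 nm figure quoted in the text, and shows why reaching
# well below 100 nm requires shorter wavelengths or electron-beam writing.
```

For the immersion case this gives R ≈ 43 nm, consistent with the order-of-magnitude resolution quoted above.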
In EBL, electrons are used instead of photons in order to improve the spatial resolution of the lithography, enabling a resolution of <10 nm (e.g., for periodic line patterns) (Michishita et al., 2014). This allows very precise mimicry of nanotopographical ECM features (such as collagen fibers with lengths on the order of 10 µm; Buehler, 2006), but at the same time it limits the surface area and throughput that can be fabricated with reasonable time/cost efforts (Chen et al., 2015). The above methodologies are top-down approaches, meaning that the topographies are transferred to the substrate of interest through the use of a mold; in practice, this is relatively time consuming. An alternative is a bottom-up approach, which can produce ordered structures over large areas in a cost-effective manner (Yamada et al., 2017). One such bottom-up approach is colloidal lithography, in which colloidal nanoparticles self-assemble into crystalline arrays on planar surfaces (Yang et al., 2006). These colloidal nanoparticles can then be reduced by etching. With this technique, one can achieve high throughput of nanometric features, but without accurate control of the spatial pattern (Chen et al., 2015).
Furthermore, several material synthesis methods can be exploited for tissue engineering, such as electrospinning, phase separation, anodisation, and sintering, which have been described in detail in dedicated review articles (Zhang and Ma, 1999;Li and Xia, 2004;Park et al., 2007;Smith et al., 2008;Dulgar-Tulloch et al., 2009;Bhardwaj and Kundu, 2010).
Coupling of Mechanical and Physical Cues of the Substrates
More recently, the importance of examining the effects of multiple mechanical and physical cues simultaneously presented to the cells has been increasingly recognized, as efforts are made to bridge minimalistic model systems and complex in vivo situations. For example, Oria et al. studied ligand spacing coupled to substrate stiffness, in this case using simple 2D protein-patterned hydrogels that allow control of both the Young's modulus and the disposition of the ligands (Oria et al., 2017). The results indicate that finding the right combination can lead to activation of mechanotransductive pathways by allowing force loading on molecular clutches.
Fibroblasts have also been shown to sense the mechanical stiffness of the substrate. In fact, it is well-documented that, between the physiological and pathological states of the tissue, there is a significant difference in stiffness. This is caused by changes in the microenvironment composition, which in the pathological state is higher in collagen I and reduced in collagen III (Herum et al., 2017). Culturing fibroblasts on polyacrylamide gels with stiffness mimicking the pathological state of breast tissue (20 kPa) resulted in larger cell spreading area compared to on stiffness mimicking the physiological ECM (1 kPa) (Schwager et al., 2019). Moreover, α-SMA content increased on the stiffer substrate, suggesting that the fibroblast activation can be promoted by matrix stiffness (Schwager et al., 2019). When cardiac fibroblasts were cultured on hyaluronic acid gels with different stiffnesses ranging from the healthy myocardium (8 kPa) to the infarcted state (20-100 kPa), where the in vivo presence of myofibroblast is known to be higher, significantly reduced formation of α-SMA was observed on the softer substrates (corresponding to the healthy myocardium stiffness). In addition, the FAs on these substrates were small and peripheral, whereas on the stiffer substrates the FAs were bigger and distributed throughout the cell membrane (Herum et al., 2017). The mechanical properties of the ECM therefore seems to play an important role in the maintenance of the quiescent fibroblast phenotype and the FMT, highlighting the relevance of coming up with methods to fabricate substrates with tunable stiffness in the range relevant for physiological tissues.
Another kind of physiologically relevant mechanical stimulus that can be recapitulated in vitro is stretching. Tensile testing has been used to stretch silicone substrates on which cells have been allowed to adhere. This technique comprises an electronic control console and a loading frame with a load capacity of 2.5 N in tension or in compression (Boccafoschi et al., 2007). This stretching method can be combined with topographical cues to obtain different cell responses, such as different intracellular rearrangements and adhesion patterns. For example, in the study by Gu et al., the authors analyzed the effects of the simultaneous presence of protein patterns and cyclic stretching on the cardiomyogenic differentiation of hMSCs (Gu et al., 2017). In our group, we have examined the effect of stretch in combination with shear flow in a vascular construct that mimics the mechanical environment in cardiovascular tissues (van Haaften et al., 2018). This approach has revealed the distinct roles of stretch and shear in governing myofibroblast activity and neotissue production, both directly (van Haaften et al., 2018) and indirectly through crosstalk with immune cells (Wissing et al., 2020). Furthermore, a combination of substrate protein patterning, substrate stiffness, and mechanical stretching has been studied to push toward the complete FMT by stimulating the release of latent TGF-β (Walker et al., 2020).
This kind of coupling between mechanical and physical features of the environment appears to be key to directing cells toward different fates in a more controlled way, exploiting only the physical and mechanical characteristics of the substrates.
3D Environments for Studying Fibroblast Activation
Mimicking the physiological environment to better understand cellular responses and gain fundamental insights into preventing pathological outcomes is gaining increasing traction, especially with new methodologies and protocols to mimic the 3D properties of the environment. Recent studies have demonstrated that different geometrical states of the cell, such as its shape and spatial constraints, lead to significantly different transcriptional cellular responses, even when the cells are stimulated by the same biochemical factors (Mitra et al., 2017;Damodaran et al., 2018). In particular, recent efforts have focused on producing 3D in vitro environments that capture factors that are normally neglected or overlooked in 2D studies. Some studies use coculturing in order to reproduce the interplay between different cell types, while simultaneously tuning the 3D culture setup to resemble the desired cues of the tissue microenvironment; an example is using spheroids of collagen matrix to mimic the interplay between fibroblasts and cancer cells (Venkatachalapathy et al., 2020). Interestingly, fibroblasts were shown to sense the biochemical stimuli released by cancer cells, indicating a complex interplay between both cell types that affects the capability of fibroblasts to remodel the ECM and the capability of cancer cells to invade the surrounding tissue (Kalluri, 2016;Erdogan et al., 2017;Richards et al., 2017). A more physiological 3D environment can also be reproduced using gels, such as collagen I hydrogels to model pulmonary fibrotic tissues coupled to a fibrosis-on-chip model (Sundarakrishnan et al., 2018, 2019). It is also noteworthy that such 3D in-vitro microenvironments are generally amenable to long-term culturing, enabling dynamic alteration of their properties to mimic tissue pathologies and study fibroblast activation, which is still missing in 2D approaches. Thus, the development of tunable 3D physiological environments opens new avenues for more in-depth studies into the coordination between different stimuli toward a better understanding and prevention of pathological progression.
FROM UNDERSTANDING TO CONTROLLING FMT
In conclusion, examining the role of mechanical and physical stimuli on cell behavior and fate is critical for understanding the pathophysiological state of the tissue. Specifically, the normal functioning of fibroblasts throughout the organism can sensitively determine the difference between tissue healing or regeneration and progression to disease, such as fibrotic diseases. Injuries, which result in mechanical and physical stresses to the tissue, can induce fibroblasts to switch their phenotype to a myofibroblastic state, characterized by an elongated shape and the production of stress fibers. This phenotype transition causes reduced proliferation and migration, together with increased contractility (Hinz and Gabbiani, 2003;Hinz et al., 2004;Ward et al., 2008). As such, these myofibroblasts, which normally remain in a more quiescent state, are responsible for tissue stiffening in response to injury, as seen in cardiac, lung, and liver diseases.
Physical and mechanical stimuli in the cellular microenvironment, such as topography, ligand spacing, and stiffness, have been identified as passive stimuli that allow the cells to complete the FMT by affecting the formation of FAs through the mechanotransductive pathway. This, in turn, causes cytoskeletal rearrangements, leading to α-SMA production, one of the hallmarks of the myofibroblast phenotype (van Putten et al., 2016). Moreover, there are also active stimuli, such as mechanical stretching, that push myofibroblasts to maintain their activated state even when the biochemical part of the network causing the phenotype transition is switched off (Walker et al., 2020). This evidence unambiguously demonstrates that the environment where the cells reside has a very active role in regulating internal cell processes that determine cell fate; mismatches lead to pathological states.
To study these processes systematically, in-vitro investigations involving the production of substrates that allow researchers to control various combinations of mechanical and physical cues have proven invaluable. At the same time, these efforts also highlight the possibility of using cellular environmental properties not only to gain an in-depth understanding of the FMT process, but also to control and manipulate it. In particular, the formation and maturation of FA complexes, as well as the cytoskeletal rearrangements, can be sensitively tuned using environmental cues and thus present a unique toolset for tweaking the FMT process. We expect that the current rapid advances in technologies to produce substrates with unprecedented control of topographical and mechanical characteristics will further fuel the emergence of new methods and therapies to control cytoskeletal rearrangements, potentially allowing reversal of the phenotype transition in fibrotic tissue and halting the progression of the disease.
AUTHOR CONTRIBUTIONS
MD'U and NK conceived, outlined, wrote, and approved the review. All authors contributed to the article and approved the submitted version.
FUNDING
The authors acknowledge financial support from the European Research Council (grant 851960). | 2020-12-22T14:07:05.212Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "61c13dfd682f96b3dacf713b7f915f387df5d882",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2020.609653/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61c13dfd682f96b3dacf713b7f915f387df5d882",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44795291 | pes2o/s2orc | v3-fos-license | Icehouse – Greenhouse Variations in Marine Denitrification
Long-term secular variation in the isotopic composition of seawater fixed nitrogen (N) is poorly known. Here, we document variation in the N-isotopic composition of marine sediments (δ 15 N sed ) since 660 Ma (million years ago) in order to understand major changes in the marine N cycle through time and their relationship to first-order climate variation. During the Phanerozoic, greenhouse climate modes were characterized by low δ 15 N sed (∼ −2 to +2 ‰) and icehouse climate modes by high δ 15 N sed (∼ +4 to +8 ‰). Shifts toward higher δ 15 N sed occurred rapidly during the early stages of icehouse modes, prior to the development of major continental glaciation, suggesting a potentially important role for the marine N cycle in long-term climate change. Reservoir box modeling of the marine N cycle demonstrates that secular variation in δ 15 N sed was likely due to changes in the dominant locus of denitrification, with a shift in favor of sedimentary denitrification during greenhouse modes owing to higher eustatic (global sea-level) elevations and greater on-shelf burial of organic matter, and a shift in favor of water-column denitrification during icehouse modes owing to lower eustatic elevations, enhanced organic carbon sinking fluxes, and expanded oceanic oxygen-minimum zones. The results of this study provide new insights into operation of the marine N cycle, its relationship to the global carbon cycle, and its potential role in modulating climate change at multimillion-year timescales.
Introduction
Nitrogen (N) plays a key role in marine productivity and organic carbon fluxes and is thus a potentially major influence on the global climate system (Gruber and Galloway, 2008). Variation in marine sediment N-isotopic compositions during the Quaternary (2.6 Ma to the present) has been linked to changes in organic carbon burial and oceanic denitrification rates during Pleistocene glacial-interglacial cycles (François et al., 1992;Altabet et al., 1995;Ganeshram et al., 1995;Haug et al., 1998;Naqvi et al., 1998;Broecker and Henderson, 1998;Suthhof et al., 2001;Liu et al., 2005, 2008). At this timescale (i.e., ∼ 10 5 yr), the marine N cycle is thought to act mainly as a positive climate feedback, but negative feedbacks involving the influence of both N fixation and denitrification on oceanic fixed-N inventories have been proposed as well (Deutsch et al., 2004). Although pre-Quaternary δ 15 N sed variation has been reported, including highly 15 N-depleted (−4 to 0 ‰) Jurassic-Cretaceous units (Rau et al., 1987;Jenkyns et al., 2001;Junium and Arthur, 2007) and highly 15 N-enriched (+6 to +14 ‰) Carboniferous units (Algeo et al., 2008), the Phanerozoic record of marine N-isotopic variation and its relationship to long-term (i.e., multimillion-year) climate change have not been systematically investigated to date (Algeo and Meyers, 2009). Additional study of the marine N cycle is needed to better understand its relationship to organic carbon burial and long-term climate change and to more accurately parameterize N fluxes in general circulation models. In this study, we document variation in δ 15 N sed from 660 Ma to the present, demonstrating a strong relationship to first-order climate cycles, with lower δ 15 N during greenhouse intervals and higher δ 15 N during icehouse intervals (Fig. 1). This pattern suggests that long-term variation in the marine N cycle is controlled by first-order tectonic cycles, and that it is linked to (and is possibly a driver of) long-term climate change.
Fig. 1 (caption fragment): ... (Table 1). The mean long-term trend is given by a LOWESS curve (red line) and uncertainty envelope (± 1σ; green field). The LOWESS curve, which varies over a ∼ 10 ‰ range, accounts for 74 % of total variance in the δ 15 N sed data set. At top, epochs of moderate (light blue) and heavy (dark blue) continental glaciation are from Montañez et al. (2011); the ages of all marine sedimentary units and climate events have been adjusted to the timescale of Gradstein et al. (2012). Tr = transitional interval.
Methods
This study is based on the N-isotope distributions of 153 marine units ranging in age from the Neoproterozoic (660 Ma) to the early Quaternary (∼ 2 Ma) (Fig. 1). Among these units are 35 that were analyzed specifically for this study (see isotopic methods, Appendix A), 33 that were taken from our own earlier research publications, and 85 that were taken from other published reports. For each study unit, we determined the median (50th percentile), standard deviation range (16th and 84th percentiles), and full range (minimum and maximum values) of its δ 15 N distribution (Table 1). We also report organic δ 13 C distributions as well as means for %TOC (total organic carbon), %N, and molar C org : N ratios, where available (Table 1). The ages of all units were adjusted to the 2012 geologic timescale (Gradstein et al., 2012). A LOWESS (LOcally WEighted Scatterplot Smoothing) curve was calculated for the entire data set per the methods of Appendix B.
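For readers who wish to reproduce the per-unit summary statistics and the smoothed trend, the following sketch shows one possible workflow; it is a minimal illustration using hypothetical file and column names and a generic smoothing span, not the exact procedure of Appendix B.

```python
# Minimal sketch of the per-unit summary statistics and LOWESS trend described in the text.
# File name, column names ("unit", "age_ma", "d15N"), and the smoothing fraction are assumptions.
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

samples = pd.read_csv("d15N_samples.csv")  # one row per analyzed sample (hypothetical file)

# Median, 16th/84th percentiles, and full range of each unit's d15N distribution (cf. Table 1).
unit_stats = samples.groupby("unit")["d15N"].agg(
    median="median",
    p16=lambda x: np.percentile(x, 16),
    p84=lambda x: np.percentile(x, 84),
    minimum="min",
    maximum="max",
)
print(unit_stats.head())

# LOWESS trend through the unit medians as a function of age (Ma).
units = samples.groupby("unit").agg(age_ma=("age_ma", "mean"), d15N=("d15N", "median"))
trend = lowess(units["d15N"], units["age_ma"], frac=0.3, return_sorted=True)

# Fraction of total variance accounted for by the smoothed trend (the text reports 74 %).
fitted = np.interp(units["age_ma"], trend[:, 0], trend[:, 1])
resid = units["d15N"] - fitted
print(f"Variance explained by LOWESS trend: {100 * (1 - resid.var() / units['d15N'].var()):.0f} %")
```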
Results
Our δ 15 N sed data set exhibits a mean plus/minus one standard deviation of +2.0 ± 3.1 ‰ with a range of −5.2 to +10.4 ‰ (Fig. 1). The modeled LOWESS curve for the Phanerozoic exhibits a minimum of −2.8 ‰ in the Cambrian and a maximum of +8.0 ‰ in the Mississippian. The uncertainty attached to this mean trend varies from ±0.9 to 2.9 ‰ through the Phanerozoic but is mostly < ±2 ‰ (based on plus/minus one standard deviation). The most abrupt changes in δ 15 N sed are associated with a ∼ 6 ‰ rise during the mid-Mississippian and a ∼ 5 ‰ rise during the Late Cretaceous. The Phanerozoic δ 15 N sed curve shows a strong relationship to first-order climate cycles, with low values during the greenhouse climate modes of the mid-Paleozoic and mid-Mesozoic and high values during the icehouse climate modes of the Late Paleozoic and Cenozoic (Fig. 1). The δ 15 N sed data set exhibits pronounced secular variation (i.e., a range of > 10 ‰) and strong secular coherence (i.e., 74 % of total variance is accounted for by the LOWESS curve). The secular coherence of the data set is significant in view of the relatively short residence time of nitrate in seawater (∼ 3 kyr) (Tyrrell, 1999;Brandes and Devol, 2002), which theoretically offers potential for strong δ 15 N NO − 3 variation at intermediate (10 3 -10 6 yr) timescales (Deutsch et al., 2004). Indeed, sub-Recent marine sediments exhibit a ∼ 14 ‰ range of δ 15 N variation (Tesdal et al., 2012), reflecting local water mass effects linked to (1) strong N fixation, which can lower δ 15 N NO − 3 by several per mille, as in the Cariaco Basin and Baltic Sea, and (2) strong water-column denitrification, which can raise δ 15 N NO − 3 by > 10 ‰, as in upwelling systems in the Arabian Sea and the eastern tropical Pacific (Brandes and Devol, 2002). However, the cumulative δ 15 N distribution for sub-Recent sediments yields a mode of 5-6 ‰ with a standard deviation of ±2.5 ‰ (Tesdal et al., 2012), which conforms well to the isotopic composition of modern seawater nitrate (+4.8 ± 0.2 ‰) (Sigman et al., 2000); note that the mean value of 6.7 ‰ reported by Tesdal et al. (2012) is skewed toward the high side by an overrepresentation of upwelling-zone sediments. Thus, the δ 15 N sed values of paleomarine units (Table 1) can be viewed as a random sample of a population of sediment δ 15 N sed values of a given age, the average of which is close to the δ 15 N NO − 3 of contemporaneous seawater. Although we cannot discount the possibility that some of our units are nonrepresentative of seawater δ 15 N NO − 3 of a given age, the broadly coherent pattern of secular variation recorded by our data set is not consistent with it being primarily a record of random local water mass effects (see Sect. 4.4).
The marine nitrogen cycle
Long-term secular variation in δ 15 N sed and, by extension, in the δ 15 N of seawater fixed nitrogen can be interpreted in terms of dominant processes of the marine N cycle, the main features of which are now well understood. Most bioavailable N is fixed by diazotrophic cyanobacteria with a fractionation of −1 to −3 ‰ relative to the atmospheric N 2 source (δ 15 N air ∼ 0 ‰) (Brandes and Devol, 2002). Apart from assimilatory uptake, the major sinks for seawater fixed N are denitrification within the sediment or in the water column and the anammox process. Denitrification involves the bacterial use of nitrate as an oxidant in the respiration of organic matter with a maximum fractionation of ∼ −27 ‰ (but commonly with an effective fractionation of ∼ −20 ± 3 ‰), resulting in a strongly 15 N-enriched residual seawater nitrate pool. Denitrification in suboxic marine sediments typically yields much lower net fractionation (∼ −1 to −3 ‰) owing to near-quantitative utilization of porewater nitrate (Lehmann et al., 2004). The anammox reaction, in which ammonium and nitrate (or nitrite) are converted to N 2 , may eliminate more fixed N than denitrification in some marine environments (Kuypers et al., 2005), although the isotopic fractionation associated with this process is not well known (Galbraith et al., 2008).
The N-isotopic composition of marine sediment depends on the δ 15 N of seawater fixed N, fractionation during assimilatory uptake, and subsequent alteration during decay in the water column and sediment. Both ammonium and nitrate can be used as N sources in primary production, with fractionations of −10 (± 5) ‰ and −3 (± 2) ‰, respectively (Hoch et al., 1994;Waser et al., 1998). Nitrate is by far the more important source of N for eukaryotic marine algae, but ammonium is utilized by some modern microbial communities (Higgins et al., 2012) and may have been the main N substrate for eukaryotic algae during some oceanic anoxic events (OAEs; Altabet, 2001;Higgins et al., 2012). Assimilatory uptake enriches the residual fixed N pool in 15 N and can result in shifts in the δ 15 N NO − 3 of local water masses (Hoch et al., 1994), but quantitative utilization of fixed N by marine autotrophs at annual timescales normally limits fractionation due to this process (Sigman et al., 2000;Somes et al., 2010). These processes determine the N-isotopic composition of primary marine organic matter, before modification by diagenesis.
Influence of diagenesis on sediment δ 15 N
Diagenesis has the potential to alter the N-isotopic composition of organic matter. First, selective degradation of amino acids can produce shifts of a few per mille in δ 15 N sed (Prahl et al., 1997;Gaye-Haake et al., 2005). Second, aerobic bacterial decomposition of organic matter results in deamination, i.e., the release of isotopically light NH + 4 to sediment porewaters (Macko and Estep, 1984;Macko et al., 1987;Holmes et al., 1999), which results in 15 N enrichment of the organic residue by a few per mille (Altabet, 1988;Libes and Deuser, 1988;François et al., 1992;Saino, 1992;Lourey et al., 2003). Subsequent nitrification can enrich the porewater NH + 4 pool in 15 N by 4-5 ‰, potentially leading to changes in bulk-sediment δ 15 N if NH + 4 diffuses back to the water column (Brandes and Devol, 1997;Prokopenko et al., 2006). However, if NH + 4 generated within the sediment is captured by clay minerals, then the bulk sediment may show little or no change in δ 15 N relative to the organic sinking flux (Higgins et al., 2012). Because decay processes can have variable effects on δ 15 N sed , net fractionation can be either positive or negative relative to the unaltered source material. Surface sediments tend to be enriched in 15 N by 1-5 ‰ relative to particulate organic nitrogen in the water column, possibly because the latter has undergone less extensive deamination (Brandes and Devol, 1997;Gaye-Haake et al., 2005;Prokopenko et al., 2006;Higgins et al., 2010). Differences between the δ 15 N of the sinking and sediment fractions show water-depth dependence, reflecting greater oxic degradation of organic matter settling to the deep-ocean floor, although this effect is relatively small . In contrast, rapid burial of organic matter in continental shelf and shelf-margin settings can yield sediment δ 15 N values that are little modified from those of the organic export flux (Altabet and François, 1994;Altabet, 2001;Robinson et al., 2012).
Studies of N subfractions have been undertaken with the goal of recovering a N-isotopic signature that is comparatively free of diagenetic effects. Chlorin N (Higgins et al., 2010) is 15 N-depleted relative to bulk-sediment N due to a ∼ 5 ‰ fractionation during photosynthesis . Some studies have claimed large (up to 5 ‰ ) shifts in bulk-sediment δ 15 N as a consequence of diagenesis . However, the studies of N subfractions cited above exhibit a systematic offset of 3-5 ‰ between bulk-sediment and compound-specific δ 15 N values that is consistent with the effects of photosynthetic fractionation overprinted by, at most, small (< 2 ‰) diagenetic effects. Following early diagenesis, deeper burial rarely causes more than minor changes in N-isotopic compositions, as shown by (1) δ 15 N variation of only a few per mille over a wide range of metamorphic grades (Imbus et al., 1992;Busigny et al., 2003;Jia and Kerrich, 2004), and (2) δ 15 N values for metamorphosed units that are virtually indistinguishable from those of coeval unmetamorphosed units (e.g., compare the Eocene-Jurassic Franciscan Complex with age-equivalent units; Table 1). Ancient marine sediments are thus considered to be fairly robust recorders of the ambient isotopic composition of seawater fixed N (Altabet and François, 1994;Altabet et al., 1995;Higgins et al., 2010;Robinson et al., 2012).
Influence of organic matter source on sediment δ 15 N
All of the data used in this study represent bulk-sediment N-isotopic compositions, thus including both organic and inorganic nitrogen. The amount of mineral N present in most marine sediments is so small that it typically has little influence on bulk-sediment δ 15 N (Holloway et al., 1998;Holloway and Dahlgren, 1999). In contrast, clay-adsorbed N (principally ammonium) can be quantitatively important, with concentrations of ∼ 0.1-0.2 % in some marine units (e.g., Fig. 3 in Meyers, 1997; Fig. 3 in Lücke and Brauer, 2004; and Fig. S1 in Algeo et al., 2008). However, clay-adsorbed N is mostly derived from sedimentary organic matter, and the organic-to-clay transfer of nitrogen often occurs at a late diagenetic stage, thus limiting translocation of N within the sediment column (Macko et al., 1986). These considerations suggest that the presence of a small inorganic N fraction in the study units is unlikely to affect our results.
In compiling the δ 15 N sed data set used in the present study, our principal concern was that admixture of large amounts of terrestrially sourced organic N might bias the marine δ 15 N record. A number of different procedures can be used to screen samples for the presence of terrestrial organic matter, including petrographic analysis to identify maceral types (Hutton, 1987), biomarker analysis of steroids, polysaccharides, and hopane and tricyclic ratios (Huang and Meischein, 1979;Frimmel et al., 2004;Peters et al., 2004;Grice et al., 2005;Sephton et al., 2005;Wang and Visscher, 2007;Xie et al., 2007;Algeo et al., 2012), and hydrogen and oxygen indices (HI-OI) (Espitalié et al., 1977, 1985;Peters, 1986). Such proxies are generally reliable in distinguishing organic matter sources, subject to some caveats (Meyers et al., 2009a). These types of proxies were available only for a subset of the present study units (Table 1), but, where available, they generally confirmed the dominance of marine over terrestrial organic matter. Studies of modern continental shelf sediments show a rapid decline in the proportion of terrestrial organic matter away from coastlines (Hedges et al., 1997;Hartnett et al., 1998). The study units of Proterozoic to Jurassic age were mostly epicontinental and, hence, deposited close to land areas (Fig. 2), although there was little terrestrial vegetation for export to marine systems prior to the Devonian (Kenrick and Crane, 1997). In contrast, most younger units are from oceanic and distal continent-margin sites that were at a significant remove from land areas (Fig. 2) and, hence, unlikely to have accumulated large amounts of terrestrial organic matter. Sediment C org : N ratios potentially also provide insights regarding organic matter sources (Meyers, 1994, 1997). Terrestrial organic matter is characterized by high C org : N ratios (∼ 20-200) owing to an abundance of N-poor cellulose in land plants (Ertel and Hedges, 1985). In contrast, fresh marine organic matter exhibits low C org : N ratios (∼ 4-10) owing to a lack of cellulose and an abundance of N-rich proteins in planktic algae (Müller, 1977). Diagenesis can result in either lower C org : N ratios through preferential preservation of organic N as clay-adsorbed ammonium, or higher C org : N ratios through preferential loss of proteinaceous components (Meyers, 1994). Covariation between δ 15 N, δ 13 C org , and C org : N ratios can reveal mixing relationships in estuarine (Thornton and McManus, 1994;Ogrinc et al., 2005) and marine sediments (Müller, 1977;Meyers et al., 2009b). In our Phanerozoic data set, δ 13 C org and δ 15 N exhibit no relationship (r 2 = 0.01; Fig. 3), but δ 15 N exhibits moderate negative covariation with C org : N (r 2 = 0.21; p(α) < 0.001; Fig. 4). The source of the latter relationship is uncertain. Although conceivably representing a marine-terrestrial mixing trend, this interpretation is unlikely given that the majority of units with low δ 15 N and high C org : N values come from open-marine settings of Cretaceous-Recent age that presumably contain little terrestrial organic matter. The linkage of higher C org : N ratios (to ∼ 40) with lower δ 15 N values is particularly characteristic of organic-rich sediments deposited under anoxic conditions (e.g., Junium and Arthur, 2007). This pattern has been attributed to enhanced cyanobacterial N fixation under N-poor conditions in restricted anoxic marine basins (Junium and Arthur, 2007) but potentially might be due to enhanced assimilatory recycling of 15 N-depleted ammonium in such settings (Higgins et al., 2012).
Fig. 4 (caption): δ 15 N sed versus C org : N ratio. Average composition of modern marine plankton shown by red star, and approximate compositional range of terrestrial (i.e., soil-derived) organic matter by green rectangle. Note that the pattern of negative covariation between δ 15 N sed and C org : N is not clearly associated with a terrestrial endmember and does not provide evidence of pervasive mixing of marine and terrestrial organic matter in our study units.
The relatively N-poor nature of terrestrial organic matter means that, even if present in modest quantities, it is unlikely to have had much influence on bulk sediment δ 15 N. For example, in a 50 : 50 mixture of marine and terrestrial organic matter, ∼ 80-95 % of total N will be of marine origin because of the lower C org : N ratios of marine organic matter (∼ 4-10) relative to terrestrial organic matter (∼ 20-200) (Meyers, 1994, 1997). Where mixing proportions have been quantified, the terrestrial organic fraction is more commonly in the range of 10-20 % (e.g., Jaminski et al., 1998;Algeo et al., 2008), in which case > 95 % of total N is marine-derived. Although we cannot conclusively demonstrate that our Phanerozoic marine δ 15 N trend (Fig. 1) is uninfluenced by terrestrial contamination, we infer that such influences were probably minimal, and that the observed pattern of secular variation in δ 15 N sed broadly reflects the isotopic composition of contemporaneous seawater fixed N.
Influence of depositional setting on sediment δ 15 N
One important issue is whether our δ 15 N sed data set records variation in a global parameter (i.e., seawater nitrate δ 15 N) or represents mainly local water mass effects in which sediment δ 15 N varied as a function of depositional setting. Owing to unevenness in the distribution of depositional settings in our data set through time, we cannot answer this question definitively, but the following analysis provides some insight as to the relative importance of local versus global controls on sediment δ 15 N.
We classified the 153 study units into five categories of depositional setting: (1) oceanic, i.e., unrestricted deep marine; (2) oceanic-mediterranean, i.e., restricted deep marine; (3) upwelling, i.e., open continental margin/slope with a known upwelling system; (4) shelf, i.e., open continental margin without upwelling; and (5) epeiric sea, i.e., a cratonic-interior shelf or basin. When viewed as a function of time (Fig. 5), it is apparent that there is a major change in depositional settings in the mid-Mesozoic: all pre-Cretaceous units are from either shelf or epeiric-sea settings, whereas Cretaceous to Recent units are mostly oceanic or oceanic-mediterranean with a small number from other settings. The reason for this shift is that nearly all pre-Cretaceous units were collected in outcrop and represent cratonic deposits, whereas the majority of Cretaceous and younger units were collected during deepsea (DSDP, ODP, or IODP) cruises and represent deep-ocean deposits. Thus, the ultimate control on the age distribution of depositional settings in our data set is the age distribution of present-day oceanic crust. Several significant observations can be gleaned from the age distribution of depositional settings (Fig. 5). First, relatively young (i.e., Neogene) oceanic units exhibit an average δ 15 N sed (+4.2 ± 0.8 ‰) that overlaps with and is only marginally depleted relative to the N-isotopic composition of present-day seawater nitrate (+4.8-5.0 ‰) (Sigman et al., 2000). This observation is consistent with the inference that δ 15 N sed is a relatively robust recorder of seawater δ 15 N NO − 3 (Altabet and François, 1994;Altabet et al., 1995;Higgins et al., 2010;Robinson et al., 2012). Second, the range of δ 15 N sed variation shown by Neogene units as a function of depositional setting is limited: on average, upwelling units (+5.5 ± 2.1 ‰) are just 1.3 ‰ enriched and oceanicmediterranean units (+1.2 ± 2.2 ‰) just 3.0 ‰ depleted in 15 N relative to oceanic units (Fig. 3). While these differences are statistically significant (at p(α) < 0.01), they are much smaller than the > 10 ‰ range of δ 15 N sed variation observed through the Phanerozoic (Fig. 1). Third, secular variation in δ 15 N sed is coherent across the mid-Mesozoic "junction" at which pre-Cretaceous epeiric/shelf units yield to Cretaceous and younger oceanic/oceanic-mediterranean units (Fig. 5). This observation is significant because it suggests that different kinds of depositional settings are recording a common signal that shows up in both cratonic interiors and the deep ocean. While local influences are likely to have modified the N-isotopic composition of some study units, the foregoing observations are not consistent with the hypothesis that our Phanerozoic δ 15 N sed record is dominated by such influences. We infer that there is a dominant underlying secular signal present in the δ 15 N sed data set that is independent of setting type and that reflects a global control, i.e., seawater δ 15 N NO − 3 . In our data set, most of the Phanerozoic is characterized by a relatively low density of data (averaging one data point per 4-5 million years). However, a few narrow (≤ 1-Myrlong) time slices are represented by multiple data points, providing a basis for assessing spatial variance at certain times in the past. One such interval is the Cenomanian-Turonian boundary (Fig. 6). 
During this interval, the range of variation in unit-mean δ 15 N sed values is just 1.7 ‰ (i.e., −2.9 to −1.2 ‰) for sites ranging from high northern to high southern paleolatitudes. While the majority of these units represent oceanic-mediterranean settings in the young North Atlantic and South Atlantic basins, similar δ 15 N sed values are nonetheless observed in epeiric (−2.9 ‰ in England) and upwelling settings (−1.6 ‰ in Morocco) (Jenkyns et al., 2007) as well as outside the Atlantic region (−2.6 ‰ on the Kerguelen Plateau) (Meyers et al., 2009a). Thus, these data imply a relatively uniform N-isotopic composition for global seawater nitrate at the Cenomanian-Turonian boundary. Further, almost all of these regions exhibit a +4 to +5 ‰ shift in δ 15 N sed for units of latest Cretaceous to early Paleogene age (Fig. 1; Table 1), consistent with our hypothesis of a global shift in seawater δ 15 N NO − 3 .
Another time slice with multiple data points is the Permian-Triassic boundary (PTB; Fig. 7). The range of δ 15 N sed variation observed at the PTB is somewhat greater than for the Cenomanian-Turonian boundary, but the geographic distribution of units is wider and their setting types are more diverse as well. Late Permian units predating the PTB crisis exhibit a δ 15 N sed range of 4.6 ‰ (i.e., +0.3 ‰ to +4.9 ‰) but show spatially coherent variation: low values characterize the central Panthalassic Ocean (+0.3 ‰), intermediate values the Tethyan region (mostly +2.0 to +3.6 ‰), and high values the northwestern Pangean margin (+3.5 to +4.9 ‰). This pattern is likely to reflect regional variation in the intensity of water-column denitrification, which was higher in the high-productivity oceanic cul-de-sac formed by the Tethys Ocean (Mii et al., 2001;Grossman et al., 2008) and in the northwest Pangean upwelling system (Beauchamp and Baud, 2002;Schoepfer et al., 2012, 2013). Despite major changes in seawater temperature and dissolved oxygen levels in conjunction with the PTB crisis (Romano et al., 2012;Sun et al., 2012;Song et al., 2013), marine units show remarkably little change in δ 15 N across the PTB: Lower Triassic unit means range from −0.4 ‰ to +5.3 ‰, and the magnitude of the PTB shift at individual locales varies from −2.8 ‰ to +0.4 ‰ with an average of −0.9 ‰. Negative δ 15 N sed shifts at the PTB have been attributed to enhanced N fixation rates (Luo et al., 2011). However, these shifts are consistent with our hypothesis of lowered seawater δ 15 N NO − 3 values as a consequence of enhanced rates of sedimentary (relative to water-column) denitrification during greenhouse climate intervals such as that of the Early Triassic (Romano et al., 2012;Sun et al., 2012).
Marine nitrogen cycle modeling
We employed a reservoir box model to investigate possible controls on long-term secular variation in seawater δ 15 N NO − 3 (see Appendix C for model details). Seawater δ 15 N NO − 3 can be approximated from a steady-state isotope mass balance that assumes N fixation (f FIX ) as the primary source and sedimentary (f DS ) and water-column (f DW ) denitrification as the two largest sinks for seawater fixed N (Brandes and Devol, 2002;Deutsch et al., 2004;. We assumed that the marine N cycle is in a homeostatic steadystate condition at geologic timescales (DeVries et al., 2013), and thus that losses of fixed N to denitrification are balanced by new N fixation (i.e., f DW + f DS = f FIX ), which is consistent with the strong spatial coupling of these processes in the modern ocean (Galbraith et al., 2004;Deutsch et al., 2007;Knapp et al., 2008). Our baseline scenario utilized fluxes and fractionation factors based on the modern marine N cycle -i.e., f FIX = 220 Tg a −1 , f DS = 160 Tg a −1 , f DW = 60 Tg a −1 , ε FIX = −2 ‰, ε DS = −2 ‰, and ε DW = −20 ‰ -where ε represents the fractionations associated with N source and sink fluxes (f ), and the photosynthetic fractionation factor (ε P ) linked to nitrate utilization is 0 ‰. This scenario yields an equilibrium seawater δ 15 N NO − 3 of +4.9 ‰ that matches the composition of fixed N in the present-day deep ocean (Sigman et al., 2000).
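The steady-state isotope mass balance behind this baseline scenario can be written compactly: with fixed N entering at approximately εFIX and the two denitrification sinks removing nitrate with effective fractionations εDW and εDS, the equilibrium nitrate composition is δ 15 N NO − 3 ≈ εFIX − [F DW ·εDW + (1 − F DW )·εDS]. The short sketch below evaluates this relation; it is a simplified reading of the model described in Appendix C, not the full reservoir code.

```python
# Simplified steady-state isotope mass balance for seawater nitrate (cf. Appendix C).
# delta_NO3 ~ eps_fix - [F_dw * eps_dw + (1 - F_dw) * eps_ds], with all values in permil.

def delta_no3(f_dw: float, eps_dw: float = -20.0, eps_ds: float = -2.0,
              eps_fix: float = -2.0) -> float:
    """Equilibrium d15N of seawater nitrate for a given water-column denitrification fraction."""
    weighted_sink_fractionation = f_dw * eps_dw + (1.0 - f_dw) * eps_ds
    return eps_fix - weighted_sink_fractionation

# Baseline (modern) scenario: F_dw = 60 / (60 + 160) ~ 0.27 -> ~ +4.9 permil, as in the text.
f_dw_modern = 60.0 / (60.0 + 160.0)
print(f"modern:      F_dw = {f_dw_modern:.2f}, d15N_NO3 = {delta_no3(f_dw_modern):+.1f} permil")

# Peak icehouse composition of ~ +8 permil corresponds to F_dw ~ 0.45.
print(f"icehouse:    F_dw = 0.45, d15N_NO3 = {delta_no3(0.45):+.1f} permil")

# With F_dw = 0 the baseline fractionations bottom out at 0 permil, so the lowest greenhouse
# values require an additional photosynthetic fractionation (eps_P != 0), as discussed below.
print(f"greenhouse?: F_dw = 0.00, d15N_NO3 = {delta_no3(0.0):+.1f} permil")
```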
The most important influence on global seawater δ 15 N NO − 3 variation in our model is the fraction of denitrification that occurs in the water column (F DW , calculated as f DW / (f DW + f DS ); Fig. 8). In our baseline scenario (ε DW = −20 ‰, ε P = 0 ‰), the modern seawater δ 15 N NO − 3 of ∼ +4.9 ‰ corresponds to F DW of 0.27 (point 1, Fig. 8b), which is close to recent estimates of 0.29 (DeVries et al., 2012) and 0.36 (Eugster and Gruber, 2012). However, the same δ 15 N NO − 3 composition can be achieved with other model parameterizations. Laboratory culture studies indicate that ε DW might be as low as −10 ‰ in some marine systems (Kritee et al., 2012). Reducing ε DW to −15 ‰ and −10 ‰ yields F DW of 0.37 and 0.62 (points 2 and 3, Fig. 8b); the former is still within the range of F DW estimates for modern marine systems (cf. Eugster and Gruber, 2012) although the latter is not. Our baseline scenario assumes no net fractionation linked to photosynthetic assimilation of seawater nitrate (ε P = 0 ‰) (Brandes and Devol, 2002;Granger et al., 2010), but uptake of ammonium is accompanied by a significant negative fractionation (Hoch et al., 1994;Waser et al., 1998), and a recent study lends support to the hypothesis that recycled ammonium was a major source of fixed N for eukaryotic algae during some OAEs (Higgins et al., 2012). We modeled the effects of variable photosynthetic fractionation with ε P values of −4 ‰ and −8 ‰, which yield δ 15 N NO − 3 equal to +4.9 ‰ when F DW is 0.48 and 0.72, respectively (points 4 and 5, Fig. 8b). These F DW values are improbably large for the modern (icehouse) marine N cycle, but nonzero values of ε P may have been important during greenhouse intervals (see below).
Fig. 7 (caption fragment): Site-specific changes in δ 15 N sed across the PTB range from −2.8 ‰ to +0.4 ‰ with an average of −0.9 ‰, which is consistent with our hypothesis of intensified sedimentary denitrification during greenhouse climate intervals such as the Early Triassic. Data sources in Table 1 with additional data from S. Schoepfer and T. Algeo (unpubl. data).
The variations in seawater δ 15 N NO − 3 between icehouse and greenhouse climate modes observed in our long-term δ 15 N sed record ( Fig. 1) are an indication of major secular changes in the marine N cycle. In our baseline scenario, the peak icehouse δ 15 N NO − 3 of ∼ +8 ‰ yields F DW of ∼ 0.45 (point 6, Fig. 8b), indicating an increase in water-column denitrification relative to the modern ocean. Although the same δ 15 N NO − 3 can be achieved with ε DW of −15 ‰ and −10 ‰, the resulting F DW values (0.62 and 0.98; points 7 and 8, Fig. 8b) are improbably large. Lack of evidence for ammonium recycling during icehouse modes makes nonzero ε P values unlikely, which in any case would yield equally improbable values of F DW . The minimum greenhouse δ 15 N NO − 3 of ∼ −3 ‰ cannot be achieved in our baseline scenario even when F DW is reduced to 0 (point 9, Fig. 8b). However, evidence for strong ammonium recycling in greenhouse oceans (Higgins et al., 2012) indicates that ε P may have been nonzero at those times. Decreasing ε P to −4 ‰ and −8 ‰ yields F DW of 0.10 and 0.33 (points 10 and 11, Fig. 8b), the former representing a large decrease in F DW relative to the modern ocean. Since ε P of −8 ‰ represents an absolute minimum (i.e., recycling of nearly all seawater N as ammonium), ε P = −4 ‰ is a more reasonable estimate for anoxic marine systems with mixed utilization of recycled ammonium and cyanobacterially fixed N (Higgins et al., 2012). Note that, at low F DW , variation in ε DW has little effect on δ 15 N NO − 3 . In summary, the most likely scenario to account for long-term secular shifts in δ 15 N NO − 3 ( Fig. 1) within existing N-budget and N-isotopic constraints is for (1) F DW to vary between ∼ 0.2 and 0.5 (permissive of ε DW values between −15 and −20 ‰ ) with ε P = 0 ‰ during icehouse climate modes, and (2) F DW to decrease to ∼ 0.1-0.2 with a shift in ε P to ca.
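Inverting the same simplified mass balance gives the water-column fraction implied by an observed composition, F DW = (εFIX − εDS − δ 15 N NO − 3 )/(εDW − εDS). The sketch below approximately reproduces several of the numbered points in Fig. 8b; treating εP as a constant offset between sediment and nitrate δ 15 N is our own reading of the model and is stated here as an assumption.

```python
# Invert the simplified mass balance for the water-column denitrification fraction F_dw.
# All fractionations in permil; assumes d15N_sed ~ d15N_NO3 + eps_p (an assumption, see lead-in).

def f_dw_for(delta_sed: float, eps_dw: float = -20.0, eps_ds: float = -2.0,
             eps_fix: float = -2.0, eps_p: float = 0.0) -> float:
    """Water-column fraction of total denitrification implied by an observed d15N value."""
    delta_no3 = delta_sed - eps_p  # correct the sediment value back to nitrate
    return (eps_fix - eps_ds - delta_no3) / (eps_dw - eps_ds)

# Modern nitrate (~ +4.9 permil) with the baseline fractionations -> F_dw ~ 0.27 (point 1).
print(f"modern, eps_dw=-20: F_dw = {f_dw_for(4.9):.2f}")

# Weaker water-column fractionation requires a larger F_dw for the same composition (points 2-3).
print(f"modern, eps_dw=-15: F_dw = {f_dw_for(4.9, eps_dw=-15.0):.2f}")
print(f"modern, eps_dw=-10: F_dw = {f_dw_for(4.9, eps_dw=-10.0):.2f}")

# Peak icehouse (~ +8 permil) -> F_dw ~ 0.45 (point 6).
print(f"icehouse:           F_dw = {f_dw_for(8.0):.2f}")

# Greenhouse values (~ -2 permil) are reachable only with nonzero eps_p (cf. points 10-11).
print(f"greenhouse, eps_p=-4: F_dw = {f_dw_for(-2.0, eps_p=-4.0):.2f}")
```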
Controls on long-term variation in the marine nitrogen cycle
Although enhanced water-column denitrification has been inferred during the warm climate intervals that produced OAEs (Rau et al., 1987;Jenkyns et al., 2001;Junium and Arthur, 2007), strong 15 N depletion of contemporaneous sediments is inconsistent with globally elevated water-column denitrification rates. Our inference of reduced water-column denitrification during greenhouse climate modes (Fig. 9a) contradicts the existing paradigm linking OAEs to high watercolumn denitrification rates (Rau et al., 1987;Jenkyns et al., 2001;Junium and Arthur, 2007). A reconciliation of these views is possible if rates were high regionally in semirestricted marine basins such as the proto-South Atlantic but reduced on a globally integrated basis. Our results are also at odds with the observation that modern upwelling zones exhibit peak δ 15 N sed values in conjunction with deglaciations rather than glacial maxima (François et al., 1992;Altabet et al., 1995;Ganeshram et al., 1995). While the latter relationship is valid at intermediate timescales, our results indicate that F DW is higher on a time-averaged basis (i.e., integrating glacial-interglacial variation) for icehouse modes than for greenhouse modes. This inference is supported by a study of Plio-Pleistocene sediments in the eastern tropical Pacific, in which δ 15 N sed rose by ∼ 2 ‰ following a cooling event at 2.1 Ma (Liu et al., 2008), consistent with an increase in time-averaged water-column denitrification rates. We infer that transient, albeit repeated, shifts in favor of watercolumn denitrification (i.e., higher F DW ) during the interglacial stages of icehouse climate intervals have resulted in a sustained (i.e., multimillion-year) shift toward higher seawater δ 15 N NO − 3 that has been captured by the long-term δ 15 N sed record (Fig. 1). Such a long-term climate-related shift in seawater δ 15 N NO − 3 can occur if the positive shift associated with each glacial epoch is larger than the negative shift associated with each interglacial epoch, resulting in progressively more 15 N-enriched compositions for icehouse intervals relative to greenhouse intervals.
Several mechanisms might potentially link variations in seawater δ 15 N NO − 3 to long-term climate cycles. Sea-level elevation is known to influence the locus of denitrification in marine systems (Deutsch et al., 2004). High sea-level elevations during greenhouse climate modes favor sedimentary denitrification owing to greater burial of organic matter on continental shelves (Fig. 9a), whereas low sea-level elevations during icehouse climate modes favor water-column denitrification through elevated organic carbon sinking fluxes to the thermocline and expansion of oceanic oxygen-minimum zones (Fig. 9b). A first-order sea-level control on the marine N cycle is consistent with existing records of Phanerozoic eustasy and continental flooding (Fig. 10). Eustasy shows a strong relationship to first-order climate modes, with long-term rises or highstands during greenhouse intervals and long-term falls or lowstands during icehouse intervals. δ 15 N sed exhibits a distinct pattern of negative covariation with eustatic elevation for the Phanerozoic as a whole (r 2 = 0.18; Fig. 11). This relationship is even stronger for Cretaceous-Recent units alone (r 2 = 0.37), probably because both the δ 15 N sed and eustatic records are more securely defined for this interval than for the pre-Cretaceous. δ 15 N sed also exhibits negative covariation with long-term continental flooding records (Fig. 10), although the relationship is not as strong as for eustasy due to several factors: (1) greater vintage of the flooding records, (2) provinciality of the Sloss (1963) record (which represents only North America), and (3) low resolution of the Ronov (1984) record (which provides only 2-3 area estimates for most geologic periods).
Fig. 8 (caption fragment): ... as a function of the fraction of water-column denitrification (F DW ). The dashed diagonal lines represent variable fractionation during water-column denitrification (ε DW ), and the solid diagonal lines variable fractionation during photosynthetic uptake of seawater fixed N (ε P ). Colored fields show the isotopic range of marine δ 15 N sed during greenhouse (green) and icehouse (blue) climate modes as well as modern seawater δ 15 N NO − 3 (gray) (Sigman et al., 2000). Arrows at bottom show F DW of 0.27 (this study), 0.29 (DeVries et al., 2012), and 0.36 (Eugster and Gruber, 2012). The red curve represents our "most likely scenario" of concurrent changes in F DW and ε P as a function of greenhouse-icehouse climate shifts. See text for discussion of numbered points.
A second potential mechanism linking long-term variation in the marine N cycle to first-order Phanerozoic climate cycles may be through tectonic controls. In this scenario, changes in oceanic gateways and circulation patterns can alter the locus of denitrification through changes in upwelling intensity or thermocline ventilation. In our long-term δ 15 N sed record, the mid-Early Mississippian and mid-Late Cretaceous feature as intervals of potentially rapid changes in seawater δ 15 N NO − 3 associated with changes in oceanic circulation patterns. The Early Mississippian was a time of closure of an equatorial seaway in the Rheic Ocean region, which probably led to a change from circum-equatorial to meridional ocean circulation (Saltzman, 2003). The Late Cretaceous coincided with widening of the central and south Atlantic basins and a translocation of deepwater formation into the North Atlantic region (MacLeod and Huber, 1996;Barrera et al., 1997;Frank and Arthur, 1999). These examples show that the marine N cycle is intimately linked to first-order tectonic and climatic cycles, although further investigation will be needed to determine the exact nature of these connections.
The hypothesis that the marine N cycle has been a driver of long-term climate change is speculative but cannot be dismissed entirely. The critical issue is the nature of links between plate tectonics and global climate. Past work has focused largely on the role of the carbon cycle, i.e., changes in atmospheric pCO 2 linked to mantle degassing, rates of uplift and continental weathering, and changes in marine organic carbon burial rates as a function of oceanic circulation and seawater redox conditions (Mackenzie and Pigott, 1981;Raymo and Ruddiman, 1991;Falkowski et al., 2000;Zachos et al., 2001;Berner, 2006a). The marine N cycle is intimately connected to burial of marine organic carbon (Galloway et al., 2004), but whether it is a passive responder to changes in carbon fluxes (as generally assumed) or an active control on such changes is uncertain. One mechanism by which the N cycle might be a driver is through switches between equatorial and polar sites of deepwater formation (Barrera et al., 1997;Frank and Arthur, 1999), with attendant effects on sites of deepwater nutrient upwelling. Even if the marine N cycle is a passive responder to carbon-cycle forcings, it may play an important role as an amplifier of climate change. For example, enhanced N 2 O production in low-oxygen regions of the ocean during extended intervals of climatic warming might serve as a positive climate feedback (Naqvi et al., 1998;Bakker et al., 2013) that promotes a bimodality of long-term climate conditions (i.e., greenhouse versus icehouse modes; Fig. 1).
Fig. 10 (caption fragments): ... (2005) and Haq and Schutter (2008) as given in Snedden and Liu (2010). (c) Continental flooding data from Sloss (1963) and Ronov (1984) as given in Miller et al. (2005). Phanerozoic climate modes at top are from Fig. 1; all ages have been adjusted to the Gradstein et al. (2012) timescale. Tr = transitional interval.
Conclusions
The present analysis of δ 15 N variation in 153 marine sedimentary units ranging in age from the Neoproterozoic to the Quaternary is the first to assess long-term variation in the marine N cycle and controls thereon. Variation in δ 15 N sed , which serves as a proxy for seawater nitrate δ 15 N, exhibits strong secular coherence since 660 Ma, with 74 % of total variance accounted for by a LOWESS trend. This pattern is surprising because the short residence time of fixed N in modern seawater (≤ 3 kyr) suggests that short-term variation in the marine N cycle has the potential to dominate the sedimentary N-isotope record and produce no coherent long-term patterns. Average δ 15 N sed ranges from lower values (∼ −2 to +2 ‰) during greenhouse climate modes of the mid-Paleozoic and mid-Mesozoic to higher δ 15 N (∼ +4 to +8 ‰) during icehouse climate modes of the late Paleozoic and Cenozoic. This pattern suggests that long-term variation in the marine N cycle is controlled by first-order tectonic cycles, and that it is linked to, and possibly a driver of, long-term climate change. We tentatively link long-term variation in the marine nitrogen cycle to global sea-level changes and shifts in the dominant locus of denitrification, with sedimentary denitrification and water-column denitrification dominant during greenhouse highstands and icehouse lowstands, respectively, a relationship confirmed by reservoir box modeling. These results also challenge the widely held idea that oceanic anoxic events (OAEs) were associated with elevated rates of water-column denitrification. Rather, the present study shows that globally integrated water-column denitrification rates must have been lower during greenhouse intervals (when OAEs developed) relative to icehouse intervals.
Fig. C2 (caption): Comparison of output of fully integrated and simplified N cycle models. At t ≤ 0, the model is at equilibrium and represents our "baseline" scenario (f FIX = 220 Tg a −1 , f DW = 60 Tg a −1 , f DS = 160 Tg a −1 , ε FIX = −2 ‰, ε DS = −2 ‰, ε DW = −20 ‰, ε P = 0 ‰, and F DW = 0.27), which yields a seawater δ 15 N NO − 3 value of +4.9 ‰ (i.e., equivalent to modern seawater nitrate; Sigman et al., 2000). At t = 0, changes in F DW to values ranging from 0 to 0.5 result in evolution of δ 15 N NO − 3 at a rate reflecting the response time of the system (which is closely related to the ∼ 3 kyr residence time of nitrate in seawater; Tyrrell, 1999). Differences in δ 15 N NO − 3 between the fully integrated model (blue curves) and simplified model (red curves) are generally < 0.4 ‰, indicating that the output of the simplified model is robust.
We parameterized the model based on the modern marine N budget. The total mass of seawater nitrate is ∼ 8.0 × 10 5 Tg N (Brandes and Devol, 2002). Whether the present-day marine N cycle is in balance is a matter of debate (Codispoti, 1995;Brandes and Devol, 2002). Recent studies have documented strong spatial coupling of cyanobacterial N fixation and water-column denitrification in the modern ocean (Galbraith et al., 2004;Deutsch et al., 2007;Knapp et al., 2008), implying that short-term losses of fixed N are locally compensated. At longer timescales, losses of fixed N to denitrification must be balanced by new N fixation in order to maintain a N : P ratio in global seawater close to that of marine phytoplankton (16 : 1) (Tyrrell, 1999).
Estimates of the fluxes associated with cyanobacterial N fixation (fFIX), sedimentary denitrification (fDS), and water-column denitrification (fDW) vary widely in older literature, although recent analyses of large data sets are beginning to converge on a consensus range of values (DeVries et al., 2012, 2013; Eugster and Gruber, 2012; Großkopf et al., 2012). Estimates of fFIX include 120-140 Tg a−1 (Gruber and Sarmiento, 1997; Galloway et al., 2004; Gruber and Galloway, 2008), 131-134 Tg a−1 (Eugster and Gruber, 2012), and 177 ± 8 Tg a−1 (Großkopf et al., 2012). Total oceanic denitrification has been estimated at 145-185 Tg a−1 (Gruber and Sarmiento, 1997; Galloway et al., 2004), 120-240 Tg a−1 (DeVries et al., 2013), 230 ± 60 Tg a−1 (DeVries et al., 2012), 230-285 Tg a−1 (Middelburg et al., 1996), 240 Tg a−1 (Gruber and Galloway, 2008), and > 400 Tg a−1 (Codispoti, 2007). The relative importance of sedimentary versus water-column denitrification was not well known in the past (Gruber and Sarmiento, 1997), but recent marine N budgets have provided independent estimates of each flux. Estimates for fDW include 52 ± 13 Tg a−1 (Eugster and Gruber, 2012) and 66 ± 6 Tg a−1 (DeVries et al., 2012), while estimates for fDS range from 93 ± 25 Tg a−1 (Eugster and Gruber, 2012) to 164 ± 60 Tg a−1 (DeVries et al., 2012) and > 300 Tg a−1 (Codispoti, 2007).
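As a rough illustration of how these fluxes and fractionation factors interact, the steady-state seawater nitrate δ15N implied by a one-box isotope mass balance can be written out in a few lines of Python. This is only a schematic of the simplified model logic, not the authors' reservoir code; the default values below are taken from the baseline scenario in the Fig. C2 caption (fFIX = 220, fDW = 60, fDS = 160 Tg a−1; εFIX = −2 ‰, εDW = −20 ‰, εDS = −2 ‰), and the function name is illustrative.

def steady_state_d15n(f_dw, f_ds, eps_fix=-2.0, eps_dw=-20.0, eps_ds=-2.0, d15n_atm=0.0):
    """Steady-state d15N of seawater nitrate from a one-box isotope balance.

    At steady state, the d15N of the fixed-N input (N2 fixation,
    d15n_atm + eps_fix) must equal that of the combined denitrification
    output (d15N_NO3 plus the flux-weighted fractionation), so
    d15N_NO3 = (d15n_atm + eps_fix) - eps_weighted.
    """
    f_water_column = f_dw / (f_dw + f_ds)      # fraction of fixed-N loss in the water column
    eps_weighted = f_water_column * eps_dw + (1.0 - f_water_column) * eps_ds
    return (d15n_atm + eps_fix) - eps_weighted

print(steady_state_d15n(f_dw=60.0, f_ds=160.0))    # baseline scenario: about +4.9 permil
print(steady_state_d15n(f_dw=20.0, f_ds=200.0))    # more sedimentary denitrification: about +1.6 permil

Consistent with the box-model results described above, shifting the locus of denitrification from the water column toward the sediments lowers the steady-state δ15N of seawater nitrate.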
The δ15N of seawater nitrate in the deep ocean (i.e., the largest reservoir of fixed N) is +4.8 to +5.0 ‰ (Sigman et al., 2000). The δ15N of present-day atmospheric N2 is 0 ‰ (Mariotti, 1984), a value inferred to have been nearly invariant through time (Berner, 2006b). The fractionation associated with cyanobacterial N fixation of atmospheric N2 (εFIX) is estimated to be −1 to −3 ‰ (Macko et al., 1987; Carpenter et al., 1997). Fractionation during water-column denitrification (εDW) has a maximum value of ∼ −27 (± 3) ‰ (Gruber and Sarmiento, 1997; Barford et al., 1999; Voss et al., 2001; Murray et al., 2005), although the effective fractionation may be closer to −20 ‰ (Brandes and Devol, 2002). Recent culture studies have suggested that this fractionation might even be as low as −10 to −15 ‰ (Kritee et al., 2012), an idea that we explore in our modeling simulations. We did not parameterize the anammox reaction separately owing to significant uncertainties concerning the scale of this process and any associated fractionation. While this reaction is a major sink for seawater fixed N, possibly larger than water-column denitrification in some oceanic regions (Mulder et al., 1995; Kuypers et al., 2005), it is thought that field-based estimates of fractionation due to water-column denitrification have incorporated any effects related to anammox (Thamdrup et al., 2006). Denitrification in suboxic marine sediments (εDS) typically yields a small net fractionation (∼ −1 to −3 ‰) owing to near-quantitative utilization of porewater nitrate (Lehmann et al., 2004; Galbraith et al., 2008). However, this fractionation can range from ∼ 0 ‰ in organic-rich, reactive sediments to as high as −5 to −7 ‰ in organic-lean, unreactive sediments (Lehmann et al., 2007). An estimate of −0.8 ‰ for the global mean fractionation due to sedimentary denitrification (Kuypers et al., 2005) does not take into account effects associated with the upward diffusive flux of 15N-enriched ammonium in reactive sediments (Higgins et al., 2012). The fractionation associated with assimilation of seawater fixed N by eukaryotic marine algae (εP) can be as large as −5 to −8 ‰ for nitrate (Lehmann et al., 2007) but is more typically −1 to −3 ‰ (Macko et al., 1987; Carpenter et al., 1997). We assumed a net fractionation of 0 ‰ based on complete photosynthetic utilization of seawater nitrate at longer timescales. The fractionation associated with ammonium uptake by marine algae is −10 (± 5) ‰ (Brandes | 2017-07-13T12:33:15.841Z | 2013-09-06T00:00:00.000 | {
"year": 2013,
"sha1": "408e994d5aeb5497bd3b6d3e09a9c3ae5c6f403a",
"oa_license": "CCBY",
"oa_url": "https://www.biogeosciences.net/11/1273/2014/bg-11-1273-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "00caa390981df58cf033b90637431d977f9fc96c",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geology"
]
} |
216054643 | pes2o/s2orc | v3-fos-license | Pre operative fitness score accurately predicts uneventful post operative course in gastrointestinal and hepatobiliary surgery
The aim of our study was to analyse whether we can accurately predict an uneventful post-operative course pre-operatively in gastrointestinal and HPB surgery patients. We retrospectively evaluated patients who underwent gastrointestinal and hepatobiliary surgery at our institute in the last 3 years and analysed 90-day mortality and morbidity rates among these patients. We described any 90-day morbidity or mortality as an "event". We performed univariate and multivariate analyses for factors predicting an "event". Then, based on the pre-operative factors that predicted an "event", we formulated a score. Statistical analysis was done using SPSS version 23. A total of 264 patients operated for gastrointestinal and HPB surgeries between April 2016 and May 2019 were evaluated. In total, 45 (17%) events occurred. On univariate analysis, CDC grade, ASA score, operative time, blood products used, emergency surgery and open surgery predicted an event. We developed a score based on the pre-operative factors ASA score, CDC grade of surgery, open surgery and emergency surgery. We proposed that a score greater than 2 was associated with a 90-day event. This score had a sensitivity of 77.78%, a specificity of 81.65%, a low positive predictive value of 46.67%, but a very high negative predictive value of 94.68%. ROC analysis showed an AUROC of 0.797 (p < 0.0001, 95% confidence interval 0.721-0.874). Pre-operative fitness score, open surgery and operative time independently predicted an "event" on multivariate analysis (p = 0.003 and 0.026, respectively). The pre-operative fitness score accurately predicts an uneventful post-operative course in gastrointestinal and hepatobiliary surgery.
These high mortality and morbidity rates make it important to identify pre-operatively which patients have a higher chance of morbidity and mortality, in order to accurately assess prognosis, predict the safety of the procedure, evaluate how aggressively we can treat the disease, and help patients understand the risks involved so that they can make an informed decision.
Aims of the study:
The aims of this study were to evaluate our data for 90-day morbidity and mortality and to identify the factors responsible for them. Based on our statistical evaluation, we tried to formulate a score, derived from pre-operative evaluation, to predict 90-day morbidity and mortality for a given patient pre-operatively, or to predict an uneventful 90-day post-operative course.
Material and Methods:
We retrospectively evaluated patients who underwent gastrointestinal and hepatobiliary surgeries at our institute in the last 3 years and analysed 90-day mortality and morbidity among these patients. We defined morbidity as any grade 3 or grade 4 complication of the Clavien-Dindo classification [5]. We described any 90-day morbidity or mortality as an "event". We performed univariate and multivariate analyses for factors predicting an "event". Then, based on the pre-operative factors that predicted an "event", we formulated a score and evaluated the sensitivity, specificity, positive predictive value and negative predictive value of that score; we also evaluated the ROC curve and again performed univariate and multivariate analysis of an "event" to check whether the score we developed independently predicted the outcome. Statistical analysis was done using SPSS version 23. The Chi-square test was used for categorical variables and the Mann-Whitney U test for continuous variables. Multivariate analysis was done using binary logistic regression. We also evaluated the Kaplan-Meier survival curve with log-rank analysis for 90-day event-free survival.
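A minimal sketch of this workflow in Python is given below, assuming the retrospective data are available as a pandas DataFrame with a binary event column, candidate predictor columns and a follow-up time column; the column names and helper functions are illustrative only, and the original analysis was performed in SPSS rather than Python.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency, mannwhitneyu
from lifelines.statistics import logrank_test

def univariate_categorical_p(df: pd.DataFrame, factor: str, outcome: str = "event") -> float:
    # Chi-square test of a categorical predictor against the binary 90-day event
    table = pd.crosstab(df[factor], df[outcome])
    _, p, _, _ = chi2_contingency(table)
    return p

def univariate_continuous_p(df: pd.DataFrame, factor: str, outcome: str = "event") -> float:
    # Mann-Whitney U test comparing a continuous predictor between event groups
    with_event = df.loc[df[outcome] == 1, factor]
    without_event = df.loc[df[outcome] == 0, factor]
    return mannwhitneyu(with_event, without_event).pvalue

def multivariate_logistic(df: pd.DataFrame, predictors: list, outcome: str = "event"):
    # Binary logistic regression on the factors retained from univariate screening
    X = sm.add_constant(df[predictors].astype(float))
    return sm.Logit(df[outcome], X).fit(disp=False)

def event_free_survival_p(df: pd.DataFrame, group: str = "score_gt_2",
                          time: str = "days_to_event_or_censor", outcome: str = "event") -> float:
    # Log-rank comparison of 90-day event-free survival between score groups;
    # the corresponding curves can be fitted with lifelines' KaplanMeierFitter
    high, low = df[df[group] == 1], df[df[group] == 0]
    return logrank_test(high[time], low[time], high[outcome], low[outcome]).p_value

Applied column by column, these helpers would reproduce the univariate screen, the multivariate model and the Kaplan-Meier comparison described above.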
[Figure 1] Univariate and multivariate analysis after including the pre-operative fitness score.
On univariate analysis after including the pre-operative fitness score, open surgery, a pre-operative fitness score greater than 2, emergency surgery, longer operative time, higher ASA grade and higher CDC grade of surgery predicted an "event". The pre-operative fitness score and open surgery independently predicted an "event" on multivariate analysis (p = 0.003 and 0.026, respectively) [Table 3]. On Kaplan-Meier survival analysis, a score of 2 or less was associated with significantly higher 90-day "event"-free survival rates (p < 0.0001) [Figure 2].
Discussion:
Surgeons are always concerned about outcomes, and safe surgery is always their goal.
Pre-operative prediction of surgical outcomes is something of a holy grail of surgery.
In this manuscript we have evaluated our 90-day morbidity and mortality and the factors responsible for them, and attempted to predict pre-operatively which patients are likely to develop complications and which are likely to have favourable outcomes, and thus to accurately assess the risk-benefit ratio of any procedure.
The CDC graded surgeries and wounds according to the complexity of disease and showed that wound complication and surgical site infection rates are higher with higher grades of surgery [6].
On univariate analysis, CDC grade, ASA score, operative time, blood products used, emergency surgery and open surgery predicted an event [Table 1].

Pre-operative Fitness Score:
Each variable was given 1 point. We proposed that a score greater than 2 was associated with a 90-day event. This score had a sensitivity of 77.78%, a specificity of 81.65%, a low positive predictive value of 46.67%, but a very high negative predictive value of 94.68% [Table 2].
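The scoring rule itself is simple enough to express directly in code. The sketch below is a hypothetical rendering of the rule described above (one point each for ASA score greater than 2, CDC grade of surgery greater than 2, emergency surgery and open surgery, with a total greater than 2 flagging a likely 90-day event); the function and parameter names are illustrative and are not taken from the authors' implementation.

def preoperative_fitness_score(asa_grade: int, cdc_grade: int,
                               emergency: bool, open_surgery: bool) -> int:
    # One point for each adverse pre-operative factor (total ranges from 0 to 4)
    return (int(asa_grade > 2) + int(cdc_grade > 2)
            + int(emergency) + int(open_surgery))

def predicts_90_day_event(score: int) -> bool:
    # A score greater than 2 (at least 3 of the 4 factors) flags a high-risk patient;
    # a score of 2 or less predicts an uneventful 90-day post-operative course
    return score > 2

# Example: an ASA 3 patient undergoing an emergency open procedure of CDC grade 2
score = preoperative_fitness_score(asa_grade=3, cdc_grade=2, emergency=True, open_surgery=True)
print(score, predicts_90_day_event(score))   # 3 True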
Anaesthetists commonly use the American Society of Anesthesiologists (ASA) physical status classification to evaluate the risk of anaesthesia-related complications. Various authors have tried to predict post-operative outcomes from pre-operative factors in various major surgeries [11,12,13,14], but to our knowledge very few have tried to evaluate a score in major gastrointestinal and hepatobiliary surgeries. From the univariate analysis we selected all the pre-operative factors affecting post-operative outcome, gave each factor one point, and evaluated a score greater than 2 (i.e., 3 out of the four factors positive: ASA score greater than 2, emergency surgery, CDC grade of surgery greater than 2, and open surgery) for sensitivity, specificity, and positive and negative predictive values.
Table 2:
Pre-operative fitness score: ASA (American Society of Anesthesiologists), CDC (Centers for Disease Control). A score greater than 2 (3 out of 4 factors positive) was associated with adverse surgical outcomes, while a score of 2 or less predicted an uneventful post-operative course. Sensitivity 77.78%, specificity 81.65%, positive predictive value 46.67% and negative predictive value 94.68%. | 2020-04-22T21:51:04.101Z | 2020-04-17T00:00:00.000 | {
"year": 2020,
"sha1": "9b9605353b999606082f97731cdca5af5c1fd9bd",
"oa_license": "CCBY",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2020/04/17/2020.04.14.20057612.full.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "563772b0ccdf92e9301c6d92d1d4c5e09e28bef3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4496190 | pes2o/s2orc | v3-fos-license | A Social Problem : Individual and Group Rape
The terms that describe non-consensual sexual behavior, such as rape, sexual abuse or sexual violence, differ considerably, as do the negative physical and psychological effects on the victim, which vary with the attacker's approach. The consent of the victim is absent or is obtained by the attacker through physical and/or psychological violence; it is also possible that the victim is unconscious or incapable of understanding. Those guilty of sexual offenses fall into a special category called "sex offenders", within which there is considerable variation in manner, degree and motivation. Research has shown that attackers acting in a group and those acting alone differ in their attitude towards the victim.
Introduction
Rape is defined in most jurisdictions as sexual intercourse, or other forms of sexual penetration, initiated by a perpetrator against a victim without their consent (Smith, 2004).The definition of rape is inconsistent between governmental health organizations, law enforcement, health providers and legal professions (Maier, 2008).It has varied historically and culturally.Originally, rape has no sexual connotation and is still used in other contexts in English.In Roman law, it or raptus was classified as a form of crimen vis, "crime of assault" (Berger, 1953).Raptus described the abduction of a woman against the will of the man under whose authority she lived, and sexual intercourse was not a necessary element.Other definitions of rape have changed over time.In 1940, a husband could not be charged with raping his wife.In the 1950s, in some states, a white woman having consensual sex with a black man was considered rape (Urbina, 2014).Until 2012, the Federal Bureau of Investigation (FBI) still considered rape a crime solely committed by men against women.In 2012, they changed their definition from "The carnal knowledge of a female forcibly and against her will" to "The penetration, no matter how slight, of the vagina or anus with any body part or object, or oral penetration by a sex organ of another person, without the consent of the victim."The previous definition, which had remained unchanged since 1927, was considered outdated and narrow.The updated definition includes recognizing any gender of victim and perpetrator and that rape with an object can be as traumatic as penile/vaginal rape.
The bureau further describes instances when the victim is unable to give consent because of mental or physical incapacity.It recognizes that a victim can be incapacitated by drugs and alcohol and unable to consent.The definition does not change federal or state criminal codes or impact charging and prosecution on the federal, state or local level; it rather means that rape will be more accurately reported nationwide (Russo, 2012).Health organizations and agencies have also expanded rape beyond traditional definitions.The World Health Organization (WHO) defines rape as a form of sexual assault (Krug, 2002), while the Centers for Disease Control and Prevention (CDC) includes rape in their definition of sexual assault; they term rape a form of sexual violence.The CDC lists other acts of coercive, non-consensual sexual activity that may or may not include rape, including drug-facilitated sexual assault, acts in which a victim is made to penetrate a perpetrator or someone else, intoxication where the victim is unable to consent (due to incapacitation or being unconscious), non-physically forced penetration which occurs after a person is pressured verbally (by intimidation or misuse of authority to force to consent), or completed or attempted forced penetration of a victim via unwanted physical force (including using a weapon or threatening to use a weapon) (Markovchick, 2016).Some countries or jurisdictions differentiate between rape and sexual assault by defining rape as involving penile penetration of the vagina, or solely penetration involving the penis, while other types of non-consensual sexual activity are called sexual assault (Kalbfleisch & Cody, 2012;Plummer, 2002).
In criminology, the term "serial rapist" indicates a subject who repeatedly commits the same serious crime of rape against multiple victims. What many serial rapists have in common is primarily high hostility toward women. A rapist is a subject who is violent, angry and markedly aggressive. In addition to these strong violent tendencies, serial rapists may have specific sexual difficulties (Bryden & Lengnick, 1997). The attacker is totally lacking in empathy and emotions, and social relations are almost absent. They can show bizarre behavior and treat the victim as an object that must give pleasure. They bind or gag the victim, take off their clothes, or use weapons to increase the level of control. The victim has the function of bearing the frustration and sense of failure of the aggressor. The attacker feels anger toward himself, and events he has experienced have precipitated the subject into a desolate situation; these are the central themes of his life story. He perceives himself as the tragic hero of an endless drama. Sexual assaults are characterized by active and passive oral sex, verbal abuse, and prior knowledge of the victim (Greig, 2005). The victim is assaulted by a person who believes, in a distorted way, that they have a privileged bond; for example, the rapist could, after the attack, warn the victim to pay attention to someone who might be dangerous. The assailant urges the woman to participate verbally and physically in the relationship, by making compliments and asking personal questions, which are not sexual. Other attackers are motivated by anger and vengeance on women (Newton, 2000). Rape may be the only way that these individuals can have a sexual relationship; often the subject has previously established contact with the victim. Also important is the research on violence and rape against women in Italy (2013-2015, University of Enna "Kore"); its results are to be widely disseminated, also among migrant women. More specifically, 6,788,000 women have been victims of some form of violence, either physical or sexual, during their life, that is, 31.5% of women aged 16-70. 20.2% have been victims of physical violence, 21% of sexual violence, and 5.4% of the most serious forms of sexual violence such as rape and attempted rape: 652,000 women have been victims of rape, and 746,000 have been victims of attempted rape. Further, foreign women are victims of sexual or physical violence on a scale similar to Italian women: 31.3% and 31.5%, respectively. However, physical violence is more frequent among foreign women (25.7% vs. 19.6%, 2012), while sexual violence is more common among Italian women (21.5% vs. 16.2%, 2012). Specifically, foreign women are more exposed to rape and attempted rape (7.7% vs. 5.1%, 2012), with Moldavians (37.3%), Romanians (33.9%) and Ukrainians (33.2%) being the most affected. 62.7% of rapes are committed by the current or a former partner, while the perpetrators of sexual assault are in the majority of cases unknown (76.8%). As for the age of the victim, 10.6% of women have been victims of sexual violence prior to the age of 16. As for women's status, women who are separated or divorced are far more exposed to physical or sexual violence (51.4% vs. 31.5%, 2012).
The situation of women with disabilities or diseases remains of great concern. 36% of women in poor health and 36.6% of those with serious limitations have been victims of physical or sexual violence. The risk of being exposed to rape or attempted rape doubles compared to women without any health problems (10% vs. 4.7%, 2012). On a positive note, cases of sexual and physical violence have been reduced from 13.3% to 11.3% (2012). This is the result of increased awareness of existing protection tools, by women in the first place and by public opinion at large, in addition to an overall social climate of condemnation and no mercy for such crimes. More specifically, physical or sexual violence committed by a partner or a former partner has been reduced (for the former, from 5.1% to 4%, 2012; for the latter, from 2.8% to 2%, 2012). The forms of violence by a non-partner are also more serious. 3,466,000 women (16.1%) have been victims of stalking during their lifetime, of whom 1,524,000 have been victims of their former partner, and 2,229,000 of a person other than the former partner.
Scientific Notes on Human Aggression
To understand the violence and rapes against women, the research analyzed the human aggression.The term "aggression" may indicate a great energy of the personality seen as survival and reproduction, or provision geared to hostilities and to destructive capacity (Luchterhand, 1971).The forms of aggression which are known are: a) hostile or emotional associated impulsivity, defined reactive; b) instrumental, defined proactive; c) defensive or provoked; d) offensive, closely linked to anti-social development; e) relational, which consists in the attitude of oppression and relational difficulties with peers.Longitudinal studies (Nagin & Tremblay, 1999) have shown that violent behavior is expressed with greater frequency and intensity in the first three years of life, because, afterwards, socialization allows for moderation till adolescence, and then it falls again.In particular, if the first evolutionary phase of the diagnosis of oppositional defiant disorder is measured more in boys and less in girls, then it is not found during adolescence.The risk factors and resilience factors (Ingrassia, 1998): develop as early as conception, when the genes start to develop and the report-fusion with the mother are physically carried by the umbilical cord where the possibility exists that the fetus suffers trauma, fetal suffering or infections.The risk factors can be divided into: 1) Individual, organic and/or psychological, including: temperamental difficulty, impulsive behavior, substance use, hyperactivity, disruptive behavior with early onset, withdrawn behavior or closure behavior, low intelligence, and neurological deficits; 2) Family, including: low socio-economic status, poor educational competence, parental anti-social behavior, abuse or violence suffered, and family disintegration; from a psychosocial point of view parenting appears to be the major contributing factor along with child abuse in predicting anti-social behavior; 3) School, including: inadequate academic performance, lack of academic aspirations, lack of interest, and low motivation; 4) Friendships, including: deviant peers, and rejection of the subject.
Among the protective or resilience factors, there are: a) Endogenous factors, including: high levels of behavioral inhibition (harm avoidance) linked to apprehension, anxiety, fear and shyness or conversely lower levels of behavioral activation linked to impulsivity, hyperactivity, aggression or the search for novelty and sensations (novelty and sensation seeking); b) External factors, including: a strong attachment bond to at least one caregiver and a community situation and support from the school.These factors, of risk or protection, can be the basis of development of many healthy or pathological psychological frameworks; for example, as transgression is a characteristic of adolescence, one must be careful not to confuse it with a deviant anti-social behavior because the reasons and stability of delinquent behavior are different.In this context, anti-social behavior of an adolescent during childhood can be explained by the basis of behavioral problems, cognitive, ethical (moral disengagement), narcissistic (low self-esteem) or related to attachment (the resulting emotional difficulties, and in particular the lack of development of the sense of guilt).
Narcissism and anti-social behavior (Kernberg, 1998) are considered as a continuum.Operated discrimination is also interesting (Hare, 1996) regarding the two factors of the psychopathic anti-social behavior nucleus: aggressive narcissism (egocentricity, callousness, lack of remorse and guilt, which are related to low anxiety and low empathy) and an anti-social lifestyle (irresponsibility, impulsivity, N. Malizia sensation seeking, which are related to low intelligence, impaired symbolization, low socio-economic level and low level of education).Aggression, therefore, can be considered in two axes; one is cold and unfeeling predatory-sadist (psychopathic), the other is impulsive in which the aggression is reactive (anti-social).A criminal career (Walters, 1990) includes the following stages: a) pre-criminal phase (between 10 and 18 years old), where the offenses are not specialized, are less severe, and are usually committed in a group looking for excitement; b) initial criminal phase (between 18 and 20 years old) when a criminal career begins for those who continue to commit crimes; the subject starts attending deviant groups whom they often met in prison and acquires new techniques.The frequency of the crimes decreases but the severity increases; the reasons behind such crimes are to satisfy economic needs; c) advanced criminal phase (between 20 and 40 years old), are those who continue to commit crimes and lose the ability to control themselves; d) stage of maturity (over 40 years old).
Neurobiology of Violent Behaviour
In The Error of Descartes, emotions are seen as an expression of the animal part, instinctual-residual in human beings (Damasio, 1995); in recent years, in the same field of neuroscience (LeDoux, 2014) the difference between emotions and feelings was stressed.The study of aggression cannot ignore the neural circuits involved, but, for purely experimental reasons, these circuits are not studied as a whole, but investigated as if they were isolated centers; therefore, circuits of forms of aggression have been found; in this regard (Moyers, 2007) has distinguished different forms of anger: affective, maternal, dominant on the territory (predominantly male), linked to gender (masculine).At a cerebral level the brain reward system is linked to states of addiction and aggressive brain systems; in this sense the gratification block increases aggressive behavior.The first neurological studies on anti-social behavior were conducted by Gall, who was the founder of phrenology, which is a theory that assigned specific mental functions to specific brain areas (Azouvi, 1997).Thanks to studies on animals, and specifically to the search patterns of injury (electric, neurochemical or genetically induced), electrical stimulation and the recording of electrical activity, the limbic system was detected, consisting of amygdala, cingulate gyrus, hypothalamus (especially in mammillary bodies), and thalamus (especially the anterior nuclei); and hippocampus, which regulates the endocrine, autonomic, and emotional and instinctive activities.The limbic system is linked to the prefrontal and temporal cortex.The frontal cortex, the hypothalamus and the amygdala are linked to aggressive and docile behavior (Sabatello & Stefanile, 2016).The study of developmental psychopathology can be approached on the basis of two modes: 1) assuming the continuity/discontinuity of changes in the behavior that provides the continuity of antisocial behavior with different manifestations and behavior during development, which can be considered heterotypic because the various manifestations have a psychological coherence at their base and meet the functional matching criteria; 2) Assuming the continuity/discontinuity in the development of psychopatho-logical features, anti-social behavior persists more if it starts in childhood (with attention, emotional and social deficits) rather than in adolescence.Research by the Pittsburgh Youth Study (Loeber, Stouthamer-Loeber & Raskin White, 1999) argues that the onset of anti-social behavior may range from 14 to 15 years old, often in association with the use or abuse of substances, delinquency and persistent internalization problems.The Pittsburgh Youth Study Group indicated the following types of behavioral problems: a) mixed type: those with internalizing symptoms such as anxiety and depression, and destructive behavior (externalizing) with onset in middle and late childhood; b) internalizing type: those with internalizing symptoms such as anxiety, but especially depression, persistent with onset in late childhood or early adolescence; c) delinquency type: individuals who abuse substances, and persistently commit delinquent acts in the absence of internalizing symptoms with onset in late childhood or early adolescence; d) undeviating type: adolescents who abuse substances without particular internalizing or ex- ternalizing symptoms.
Sexual Offenses Committed by Children
The implementation of sexual abuse by a child may be due to sexual exploration of a personality that is emerging and is not yet mature, or can have a coercive and violent nature.Those over 10 years old (in England this is the minimum imputable age) who commit sexual offenses are defined as juvenile offenders or juvenile sex offenders; below this age children are defined as abusers or perpetrators.
Sexual Offences (Shaw, 2002) can be divided into: sexual assault, which may be accompanied by threats, intimidation, use of force and/or authority; sexually abusive behavior, committed without consent as a result of coercion; sexual crimes, when the abuser violates the law but does not implement physical or psychological harm; paraphilia, recurring sexual fantasies; rape, which is sexual gratification obtained through the use of violence; sexual harassment, which is unwanted sexual conduct.Sexual offenses committed by children are divided into sexual behaviors that involve the presence or absence of physical contact (American Academy of Child and Adolescent Psychiatry, 1993), taking into account that, a person needs to understand what is proposed in order to give consent, and be able to assess the alternatives, have the same decision-making power of the interlocutor and is free to choose; forensically it defines any abusive behavior that occurs without consent.Recent statistics have shown that 60% of abuse involves penetration in England and about 35% of sexual offenses are committed by teenagers with a mean age of 14 years old in Germany, although in recent years this is declining.
The victims are usually women. Adolescent sexual offenders grow up in multi-problematic families. Sexual diseases are frequent among the parents and there is often family violence; therefore, due to inadequate parenting (family models), these adolescents do not learn to inhibit aggression (Keog, 2012). All these factors, together with abuse and neglect suffered, social difficulties, impulsiveness and lack of intimate relationships, may be at the root of the crime and the personality of sexual offenders. Juvenile sex offenders are usually socially withdrawn, not assertive, unable to form intimate relationships especially with the opposite sex, have little empathy and inadequate social skills, have learning difficulties, and display disruptive behavior in class and truancy. Regarding psychiatric disorders, juvenile sex offenders may have narcissistic, borderline or anti-social personality or conduct disorders; according to other authors, they may have mood disorders such as depression, substance dependence and dissociative spectrum disorders, which are particularly important in the legal sphere. The classification of sex crimes committed by adolescents includes: sexual aggression against other adolescents or women, sexual behaviour against children, vaginal penetration, masturbating in the presence of the victim, rape, touching the victim's genitals, digital vaginal penetration of the victim, exhibiting genitals to the victim, kissing the victim against her will, oral sex against the will of the victim, holding the victim captive for a few hours, group rape, stalking and individual rape, cyber-bullying, and cyber-stalking (Rich, 2003).
Rapists and Victims
The terms that describe a non-consensual sexual behavior are quite different: rape, sexual abuse or sexual violence; each of them has negative repercussions, both of a physical and psychic nature, on the victim, but the approaches of the aggressor are different (Walters, Chen, & Breiding, 2013).There is no single definition of rape.
Generally, it identifies a non-consensual sexual act in which the aggressor penetrates parts of the body of the victim, whether the vagina, anus or mouth, with various objects.The consent of the victim, as such, is absent or obtained by the attacker with physical and/or psychological violence; but it is also possible that the victim is unconscious or incapable of understanding (Anderson & Doherty, 2007).Studies show that rape victims rarely turn to the police.This not only limits the possibilities to offer help, but at the same time makes it difficult to study this phenomenon (Finkelhor, 2008) Studies seem to show that there are several contributing factors to the rapist's behavior, including: childhood trauma (Dhawan & Marshall, 1996), aggressive behavior patterns (Longo, 1982), taking non-inhibiting drugs (Norris & Cub-bins, 1992) and the desire to control (Hanson & Morton-Bourgon, 2005).There are three different theories, which were developed between the seventies and eighties, that tried to explain the personalities and motivations of the rapist: feminist theory; social learning theory; and evolutionary theory.According to feminist theory, sexual assault is motivated by the desire to control, dominate and humiliate the other person (Schwendinger & Schwendinger, 1983).Social learning theory, however, argues that the behavior of the rapist is imitative, therefore, the subject has learned aggressive behavior and has gradually become desensitized to the consequences of it, and subsequently believes that sexual gratification can be obtained quickly through the use of coercion (Malamuth, 1981).The theory of evolution claims that men possess an urge to impregnate as many women as possible in view of their heightened instinct of survival of the species, and in the search for such women, he omits their consensus (Ellis & Symons, 1990).The research group at the Massachusetts Treatment Center (MTC) classified six different types of rapists after collecting clinical and experimental data (Knight & Prentky, 1990): 1) The "opportunist criminal" who is characterized by a lack of impulse control and lack of empathy; 2) The "non-sexual and non-sadistic criminal", characterized by distorted representations about women and sexuality, feelings of inadequacy with respect to their sexuality and to the image of himself; 3) The "criminal with pervasive anger", who is characterized by high levels of anger and intense hostility, not directed exclusively towards women and for these reasons, commits crimes of a different nature and has no particular sexual fantasies, unless linked to violence, which they use even if it is not necessary.This profile is the criminal most likely to kill after committing rape; 4) The "nonvindictive criminal", whose anger is directed exclusively towards women and is expressed through forms of psychological and physical violence designed to humiliate, degrade and injure the victim; 5) The "blatantly sadistic" criminal, where sexual assault is often premeditated, driven by violent sexual fantasies, which can also lead to causing serious physical injury to the victim; 6) The "non-sadistic latent criminal" who only turns his rage against women, but does not express it.The category of sex offenders is divided according to the mode, the degree and motivations, and are identified as the individual rapist, group rapists, pedophiles, molesters and, finally stalkers.Sex offenders are different from "normal" people, since there is less control of inhibitions.Simon identified four basic motives, 
as to why a sex offender acts (Simon, Knight, & Prentky 1990): a) for domination; b) out of anger; c) for compensation; d) sadism.Therefore, four basic profiles were drawn up: 1) Exploiter profile, who implements a sexual behavior as an impulsive and predatory act.He sees rape as an act that depends on the occasion or situation, which stimulates and excites him.This type of sex offender looks for a fickle prey, which is easy to exploit and subdue.Within this category of rapists there are two sublevels: a) those who implement such behavior as a response to a hypothetical threat to their manhood; b) those who act in this way because they are pushed by an impulsiveness which is closely linked to their personality; to that effect, studies have
N. Malizia
shown that the two dominant personalities are anti-social and psychopathic.
2) Angry profile, which means having a sexual behavior that tends to express anger and rage.The type of rape falls within the maximum expression of displacement of their own feelings of anxiety and frustration towards the designated object, the victim.Sexual performance is aggressive since the attacker harbours feelings of contempt and hatred of the female sex.
3) Compensator profile, whose sexual behavior is a mere expression of sexual fantasies.Regarding the rape which is carried out, this turns out to be highly premeditated and/or planned.With regard to the motivations that led him to act, they refer to the sexual perversions as exhibitionism or extravagant masturbation.It is also an intrinsic feature of this type of sex offender to start seeing the victim after the attack.4) Sadist profile is the one who carries out a sexual behavior as an expression of aggressive sexual fantasies and reaches the maximum level of excitement by causing suffering to his victim.Among the behaviors and practices there is mistreatment, humiliation and in extreme cases, it may lead to murder.In some cases, it implements a wide range of torture such as flogging, mutilation and strangulation.
b) Victims: psychological and physical consequences of sexual assault
The likelihood that a person suffers suicidal or depressive thoughts increases after sexual violence (Yuan, Koss, & Stone, 2006).94% of women who are raped experience post-traumatic stress disorder (PTSD) symptoms during the two weeks following the rape: a) 30% of women report PTSD symptoms 9 months after the rape; b) 33% of women who are raped contemplate suicide; c) 13% of women who are raped attempt suicide.Approximately 70% of rape or sexual assault victims experience moderate to severe distress, a larger percentage than for any other violent crime.People who have been sexually assaulted are more likely to use drugs than the general public: a) 3/4 times more likely to use marijuana; b) 6 times more likely to use cocaine; c) 10 times more likely to use other major drugs.Sexual violence also affects victims' relationships with their family, friends, and coworkers: a) 38% of victims of sexual violence experience work or school problems, which can include significant problems with a boss, coworker, or peer; b) 37% experience family/friend problems, including getting into arguments more frequently than before, not feeling able to trust their family/friends, or not feeling as close to them as before the crime; c) 84% of survivors who were victimized by an intimate partner experience professional or emotional issues, including moderate to severe distress, or increased problems at work or school; d) 79% of survivors who were victimized by a family member, close friend or acquaintance experience professional or emotional issues, including moderate to severe distress, or increased problems at work or school; e) 67% of survivors who were victimized by a stranger experience professional or emotional issues, including moderate to severe distress, or increased problems at work or school.
Are We Predisposed to Criminal Aggression?
Is one born a rapist, or does one become a rapist?Lombroso identified human violence in biological and genetic models.When his studies about physiognomic finished, his interest turned to the brain areas.Lombroso is also remembered for having developed one of the first lie detectors in the legal field for measuring changes in blood pressure to detect untruthful responses during interrogation (Ingrassia, 1998).Therefore, a new biological explanation of violence was identified.Some genetic polymorphisms would be able to modulate the reactions to environmental variables, among which, in particular, exposure to stressful events and the tendency to react to them with impulsive behaviors.Some studies also conducted in very aggressive populations such as the Maori in New Zealand noted a significant increase in the risk of violent behavior development which could lead to murder in low-activity allele carriers, or could make the subject more prone to expressions of violence, whether provoked or socially excluded.The results of these studies are contradictory and not replicated.According to environmental theories it is social learning which determines the expression of aggressiveness in everyday life, for example, upbringing, learned examples, and not shared models.
The Tendency to Falsify (Simulation or Dissimulation)
Disease simulation is to invent symptoms that do not exist, or exaggerate those that exist, in order to reap benefits on a clinical and forensic legal level.Dissimulation is to otherwise hide the disease and pretend normality to avoid negative measures (disqualification and loss of parental rights).The cognitive and personality tests allow the benefits of some indices that make us suspect that a subject is falsifying (Galati, 2002).
In recent times, psychodiagnostic instruments (IAT) have been created to discover hidden emotions and attitudes, beyond the will of the person to reveal them.
Based tools are not the interpretation of stimuli and responses, but rather on the differences between the response times to certain stimuli and their association.
The basic assumptions of lie detectors are considered important as a result of research in neuroscience, and can record electroencephalographic responses.Larson created the first lie detector, which was then reworked by Reid (Alder, 2007).
Is it enough to prove that a person is aware of a fact, to declare them guilty of having caused it?The latest frontier of neuroscience applications of the law is that of reading the mind by recording brain electrical activity or FMRI (an Italian study which monitored the brain activity of subjects who could answer questions by lying or telling the truth, and it noted that when lies were told, the frontal and prefrontal regions, and the anterior cingulate cortex were activated).
When a person subject to questioning lies, it seems that different brain areas are activated and are more numerous than when telling the truth, due to having to actively suppress truthful information.The Brain Fingerprinting Lab founded by Farwell shows images while recording brain function (Farwell & Makeig, 2005).
If a familiar image is recognized, the brain emits faster signals and this "brain fingerprinting" filed in the memory is decoded, using it for deductions relating to the sincerity of the answers provided orally by the person.However, there are N. Malizia plenty of technical problems to consider.Just move a muscle or the tongue too much, can create variations that make the test unreliable.For the purposes of relations between neuroscience and judicial areas, it can be concluded that certain techniques aimed at discovering the incidence of memories and emotions are currently not as reliable and valid enough to be used without reservations in forensic investigations; and there are strong doubts that they can never be so.They are often based on studies that provide too many variables to obtain accurate results, or that rely on groups rather than on the individual.
Sociological and Scientific Profiles of Individual and Group Rape
In 2009, an important research on sexual offenses was carried out and compared with those committed by individual attackers, and those committed by groups (Hauffe & Porter, 2009).The two researchers, Hauffe and Porter, examined one hundred and twenty cases of rape and the judicial reports, including four cases where the victim had died.The scholars had found some consistent features in the group of individual rapists and those who had acted as a group, as well as the victims of their choice, in particular: a) victims: the average age of the victims attacked by an individual aggressor was 26 years old, while the average age of the victims attacked by groups was 18 years old; b) aggressor: the average age of the attackers in the group was 21 years old compared to the average age of 29 years old of the individual attackers.The individual behavior of sexual assault (Bennell, Alison, Stein, Alison, & Canter, 2001) was deepened by analyzing the statements of the victims, based on the dichotomous model of interpersonal relationship called "Leary's Interpersonal Circumplex", also called the circumplex model, which identifies the relational style of a person (Conte & Plutchik, 1981), developed by T. Leary in 1957.Leary's model can be regarded as part of the structural models because it was built on personality traits (Wiggins & Broughton, 1985).The Leary model assumes that people interact on the basis of a complex continuum of two different dimensions: Cooperation/hostility "this tipology is presented in Table 1" and dominance/submission "this tipology is presented in Table 2".In 1993, Benjamin, proposed to represent each of these dimensions as diagonals of a circle, in which the behaviors at opposite ends are divided geometrically, but also from the conceptual point of view; for example, dominant behavior is placed on the opposite side to submissive behavior.Similar to the self-fulfilling prophecy, according to the circumplex model, the person who believes with conviction to bethe victim of the other's hostile attitude, unconsciously motivates (with the intent to defend one-self), the other to implement the expected hostile attitude of which she is afraid (Alberti, 2000).
This model has proven useful in assessing the manner in which a person interacts with others and has also proved valid for explaining the interaction between the aggressor and the victim.Alison and Stein applied this circumplex model, initially, to study the behavior of individual sex offenders and, later, the model was applied to cases of gang rape.Studies have shown that the attackers acting as a group, compared to those who act alone, differ with respect to cooperation.Group aggressors, in fact, show "affectionate" behaviors more frequently, such as kissing, caressing and gently undressing the victim, to make her "work" (Holmstrom & Burgess, 1980).Alison and Stein found that submissive behavior by the aggressor is not common.
victim has the sensation of controlling the aggressor, which justifies the abuse the attacker is carrying out on the victim "this situation is presented in Table 2" (Porter & Alison, 2004).
Following the principle of complementarity, studies have outlined a relationship between the resistances by the victim (which indicates a dominant position) with the subjugation of the aggressor.Other investigations have shown that the implementation of an abuse-resistant behavior by the victim can be found more frequently in cases of individual aggression than those who act in groups.
Gang rape is driven by complex processes, including interpersonal dynamics of the group members, the social rules of the group, and the arrogance of the group and the typically shared responsibility of group contexts, which minimizes or removes the feeling of guilt; on the contrary, the aggressions of single men reflect the attacker's disease.The hostility of an aggressor within a group, in fact, emphasizes a sense of social identity, which at the same time facilitates de-identification and loss of the sense of identity and personal responsibility (Krahe, Scheinberger-Olwig, & Schütze, 2001).
Sexual assault carried out by an individual aggressor can be an immediate gratification of sexual impulses or the implementation of his own fantasies of entertaining a "romantic" relationship with a woman (this typology is presented in Table 3); on the contrary, sexual abuse by a group enables its members to increase their reputation, express their status and demonstrate their power within the group (Bijleveld, Weerman, Looije, & Hendriks, 2007).
Rape and Victims
Some studies have examined the defensive behaviors of victims of sexual violence, as well as the strategies to be implemented to prevent rape (this typology is presented in Table 4) (Ressler, Burgess, & Douglas, 1988).
In addition, common characteristics of rape victims were highlighted; basically, they are chosen at random and are not previously known by the aggressor (Ressler, Burgess, Douglas, Hartman, & D'Agostino, 1986).The analysis of the confession of rape victims, who were asked to narrate the reactions that they had put in place during and after the violence, and an analysis of the response strategies drawn from literature, have identified six possible response strategies (Hentig, 1948): 1) Escape: it is a strategy that is successful if the victim is not in an isolated place and/or is not attacked by a group of attackers; however, the risk is that it may increase the attacker's aggression.
2) Verbal oppositional resistance: in this case the victim screams, expresses her anger verbally and draws attention to herself; a typical exclamation is "let go".
3) Physical oppositional resistance: in this case the victim manifests physical resistance, using moderate responses (wriggling, writhing) or violent responses, hitting the aggressor. The ability to implement this strategy depends on situational factors such as the scene of the attack, the presence of a weapon, physical characteristics, the degree of violence and the aggressor's force. However, the victim must expect that in many cases her physical resistance will be met with increased violence during the aggression.
4) Non-confrontational verbal responses: in these cases the victim tries to dissuade the aggressor by negotiating or by trying to make him feel empathy or negotiate.This strategy is usually put in place to prepare an escape.
5) Non-confrontational physical resistance: In these cases the victim enacts real or simulated passive resistance techniques.Among the real reactions cited are examples of nausea and crying; while among the simulated reactions, fainting, mutism, epilepsy or seizures.
6) Submission: this is the only strategy which is not offensive or defensive.The victim submits to the fear of the aggressor.However, this behavior could increase the aggressivity.
The Date Rape Drug
The existence of predispositions to becoming a victim is well demonstrated by an array of psycho-analytic literature.The risk of exposure of the subject to criminal behavior of various kinds increases, and many studies have shown the influence of the use of psycho-active substances.The presence of substance abuse in criminal subjects has been frequently observed (Luzzago & De Fazio, 2004); it has been shown, for example, that alcohol promotes the occurrence of dangerous situations by increasing the aggressor's impulsivity and mitigating prudence (Horvath & Brown, 2006).
Although alcohol constitutes the most abused substance in relation to abnormal sexual behaviors, in recent decades there has been a significant increase in crimes with the use of psycho-active substances, which are called by a specific criminal genre, Drug-Facilitated Crimes (Shbair, Eljabour, & Lhermitte, 2010).
Recent studies show that the use of benzodiazepines (unitrazepam) and GHB, which are odorless and tasteless and easily dissolved, causing euphoria and disinhibition, are useful for the attacker to facilitate the implementation of a sexual crime (Bertocco, Brunaldi, & Righini, 2011).These substances also prove functional for the assailant given their amnesic properties, as they can induce anterograde amnesia of the violence experienced (Rossi, Lancia, & Gambelunghe, 2009).
It has been suggested that GHB can increase sexual desire and thus provides an additional feature that would make such a substance functional for the sexual aggressor.After taking GHB, it is absorbed by the intestine and is rapidly distributed throughout the body.The substance also acts on the central nervous system.Finally, it is rapidly metabolized and degraded (LeBeau & Mozayani, 2001).
Conclusion
This research has tried to provide a general overview of rape, by observing it from different angles and making a distinction between rape and other forms of sexual violence which are characterized by physical aggression; it is necessary to distinguish rape from abuse.A number of educational interventions have also been designed for audiences of women only.At this point, there is a rather persuasive body of evidence to suggest that women's participation in risk reduction programs (including self-defense training) decreases their likelihood of being sexually assaulted in the future.Research also documents other positive outcomes resulting from resistance training for women, including increased assertiveness, improved self-esteem, decreased anxiety, increased sense of perceived control, decreased fear of sexual assault, enhanced self-efficacy, improved physical competence/skills in self-defense, decreased avoidance behaviors (restricting activities such as walking alone), and increased participatory behaviors (behaviors demonstrating freedom of action).Because women with assault histories are at increased risk to be sexually assaulted in the future, they merit special consideration in the design of risk reduction programs.Most rape education programs are actually designed for mixed-gender audiences, however, and the primary conclusion from evaluation research is that such programs can be effective in changing rape-supportive beliefs and/or attitudes over the short-term (several months to a year), but they have not generally been successful in changing beliefs and attitudes over the long-term.The research literature also offers suggestions regarding which specific components of educational programs are associated with positive changes.For example, educational programs that are longer appear to have more significant impact than shorter ones, as well as those facilitated by professionals and those that involve repeated exposure to programming.
However, perhaps the most robust conclusion in this area is that single-gender programs are more effective than mixed-sex ones. In fact, many experts have suggested that it violates common sense to provide sexual assault education to mixed-gender audiences, given the very different relation of men and women to the issue.
Unlike rape, sexual abuse does not involve penetration, but describes any non-consensual sexual contact and includes (Jewkes, Sen, & Garcia-Moreno, 2002): a) use of derogatory words; b) refusal to use contraception; c) infliction of physical pain on the partner during sexual intercourse; d) deliberate infection of partners with infectious or sexually transmitted diseases; e) use of objects that cause pain or humiliation to non-consenting partners. Even more general is the concept of sexual violence, describing any sexual activity with a person who is unwilling or unable to consent to the sexual act because of alcohol, drugs or other conditions. The term describes different behaviors including rape, unwanted sexual contact or undesired exposure to naked bodies or sexual acts, sexual abuse and incest, sexual harassment, and all other forms of behavior in which an individual abuses their power or the condition of the victim, even without using expressed violence. Often the victim of sexual assault is manipulated (Kelly, 2013).
Cesare Lombroso led violence back to biological and genetic models, although today the focus has shifted to areas of the human brain, concluding that certain genetic polymorphisms would be able to modulate reactions depending on different environmental variables, including exposure to stressful events and stimuli for impulsive behaviors. According to environmental theories, it is social learning which determines the expression of aggressiveness in everyday life; for example, upbringing, learned examples, and rewards for bad behavior. The victim is chosen randomly by sex offenders, and the victim implements defensive strategies such as running away or submission. There are drug treatments for rapists which are particularly effective, such as non-permanent chemical castration, although a correlation between paraphilic behavior and endocrine function has not yet been demonstrated in either men or women. The natural or surgical castration of sex offenders has often been justified as a treatment to reduce the chance of recurrence. The limitations of this article and directions for future research remain open for discussion. | 2018-04-02T20:06:20.578Z | 2017-03-08T00:00:00.000 | {
"year": 2017,
"sha1": "baa45cda8ed85095d6d895e7699470eee140c5d6",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=74594",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "baa45cda8ed85095d6d895e7699470eee140c5d6",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
16321102 | pes2o/s2orc | v3-fos-license | Laser Guide Star Adaptive Optics Imaging Polarimetry of Herbig Ae/Be Stars
We have used laser guide star adaptive optics and a near-infrared dual-channel imaging polarimeter to observe light scattered in the circumstellar environment of Herbig Ae/Be stars on scales of 100-300 AU. We discover a strongly polarized, biconical nebula 10 arcseconds in diameter (6000 AU) around the star LkHa 198, and also observe a polarized jet-like feature associated with the deeply embedded source LkHa 198-IR. The star LkHa 233 presents a narrow, unpolarized dark lane consistent with an optically thick circumstellar disk blocking our direct view of the star. These data show that the lower-mass T Tauri and intermediate mass Herbig Ae/Be stars share a common evolutionary sequence.
Diffraction-limited optical and infrared astronomy from the ground requires adaptive optics (AO) compensation to eliminate atmospheric wavefront disturbances. Bright stars may be used as wavefront references for this correction, but most astronomical targets lack nearby guide stars. AO observations of these targets from the ground can only be accomplished using artificial laser guide stars (LGS) [1].
Herbig Ae/Be stars are young stars with masses between 1.5 and 10 times that of the sun; they are the intermediate-mass counterparts of the more common T Tauri stars. Excess infrared and millimeter emission shows that Herbig Ae/Be stars are associated with abundant circumstellar dust [2]. Visible and near-infrared (NIR) light scattered from dust is typically polarized perpendicular to the scattering plane [3], making polarimetry a useful tool for probing the distribution of this material [4]. While Herbig Ae/Be stars are intrinsically very luminous, many are so distant or extincted that they are too faint to act as their own wavefront references and thus require LGS AO.
The Herbig Ae/Be stars LkHα 198 and LkHα 233 (Table S1) were observed on 2003 July 22 at the 3-m Shane telescope at Lick Observatory (Fig. S1) using the Lawrence Livermore National Laboratory LGS AO system [5] and the Berkeley NIR camera IRCAL [6]. The atmospheric seeing was 0.8 arcseconds at 550 nm, and the AO-corrected wavefront produced images with Strehl ratios ≈ 0.05 − 0.1 at 2.1 microns and full-width at half maximum resolution of 0.27 arcseconds [7].
IRCAL's imaging polarimetry mode utilizes a cryogenic LiYF₄ Wollaston prism to produce two simultaneous images of orthogonal polarizations. The sum of the two channels gives the total intensity, and the difference gives one linear Stokes polarization parameter. The dominant noise source near bright stars in AO images is an uncorrected seeing halo. Because this halo is unpolarized and thus vanishes in the difference image, dual-channel polarimetry enhances the dynamic range in circumstellar environments [8]. The observing techniques and data reduction methods are based on [9].
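To make the dual-channel arithmetic concrete, the following sketch combines frames from two waveplate settings into Stokes maps and derives the polarized intensity, polarization fraction, and position angle. It is a minimal NumPy illustration rather than IRCAL pipeline code; the waveplate convention, the function names, and the random test frames are assumptions made only for the example.

```python
import numpy as np

def stokes_from_dual_channel(pair_q, pair_u):
    """Combine dual-channel polarimeter frames into Stokes maps.

    pair_q, pair_u: tuples (beam_a, beam_b) of 2-D arrays taken at the two
    half-wave-plate settings assumed here to measure Q and U, respectively.
    """
    aq, bq = pair_q
    au, bu = pair_u
    I = 0.5 * (aq + bq + au + bu)        # total intensity (Stokes I)
    Q = aq - bq                          # difference image -> Stokes Q
    U = au - bu                          # difference image -> Stokes U
    P = np.hypot(Q, U)                   # polarized intensity sqrt(Q^2 + U^2)
    frac = np.divide(P, I, out=np.zeros_like(P), where=I > 0)   # fraction P/I
    theta = 0.5 * np.degrees(np.arctan2(U, Q))                  # position angle
    return I, Q, U, P, frac, theta

# toy usage with random arrays standing in for registered mosaics
rng = np.random.default_rng(0)
frames = [(rng.random((64, 64)), rng.random((64, 64))) for _ in range(2)]
I, Q, U, P, frac, theta = stokes_from_dual_channel(*frames)
```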
LkHα 198 is located at the head of an elliptical loop of optical nebulosity extending 40 arcseconds to the star's southeast [10]. This complex region 600 parsecs distant includes a molecular CO outflow [11] and two Herbig-Haro jets [12], as well as two additional Herbig Ae/Be stars in the immediate vicinity (V376 Cas and LkHα 198-IR) [13] and a millimeter source (LkHα 198-MM) believed to be a deeply embedded protostar [14]. This proximity of sources requires high resolution observations to disentangle the relationships between the various components [15]. We discover a biconical nebula ∼10 arcseconds in diameter (6000 AU), oriented north-south, with polarization vectors concentric with respect to LkHα 198 (Fig. 1). The lobes of the reflection nebula are divided by a dark, unpolarized lane that we interpret as a density gradient towards the equatorial plane of a circumstellar disk and/or a flattened envelope. The north-south orientation indicates that LkHα 198 is unlikely to have created the giant elliptical nebula or the molecular outflow to the southeast. However, it is consistent with LkHα 198 being the source for the Herbig-Haro flow at a position angle (PA, measured east from north) of 160°, a conclusion supported by the extension of the polarized reflection nebula along this PA. This orientation is, however, 20° away from the observed symmetry axis of the nebula, requiring an inner disk axis tilted or precessing with respect to the outer envelope.
The embedded source LkHα 198-IR [16] is detected in our H and K s band data 5.5 arcseconds from LkHα 198 at PA = 5°. A polarized, extremely blue, jet-like feature extends >2 arcseconds (1200 AU) from LkHα 198-IR at PA = 105° [17]. The polarization vectors of this apparent jet are perpendicular to its long axis, indicating that LkHα 198-IR is the illuminating source, not LkHα 198. The jet appears to be half of a parabolic feature opening toward the southeast, with its apex at LkHα 198-IR and its southern side partially obscured by the envelope around LkHα 198. We suggest that the northwest side of a bipolar structure around LkHα 198-IR may be hidden at NIR wavelengths by the dust indicated by millimeter observations [18]. The orientations of circumstellar structures revealed by our images confirm that LkHα 198-IR is the best candidate for the origin of the Herbig-Haro outflow at PA 135°, though based on geometrical considerations we cannot entirely exclude the protostar LkHα 198-MM. By extension, the large elliptical nebula was most likely created by outflow from LkHα 198-IR, although we see it primarily in scattered light from the optically much brighter LkHα 198.
LkHα 233 is an embedded A5e-A7e Herbig Ae/Be star which is associated with a blue, rectangular reflection nebula 50 arcseconds in extent, located in the Lac OB1 molecular cloud at 880 parsecs. Our imaging polarimetry reveals four distinct lobes bisected by a narrow, unpolarized lane with PA ≈ 150° (Fig. 1). The nebulosity around LkHα 233 is extremely blue in the NIR, with its east-west extent decreasing from 6 arcseconds (5300 AU) at J and H to 2 arcseconds (1800 AU) at K s . The orientation of the lobes relative to the dark lane suggests that they are the limb-brightened edges of a conical cavity in a dusty envelope illuminated by a highly extincted star. The radial extent of the dark lane (1000 AU) suggests that it is associated with an equatorial torus characteristic of a flattened infalling protostellar cloud [19], and not a rotationally-supported disk.
We find that the intensity peak of the star is shifted southwest relative to the polarization centroid, with the displacement increasing from 0.15 arcseconds at K s to 0.35 arcseconds at J. This too indicates that the lane consists of optically thick foreground material which has a flattened spatial distribution consistent with a circumstellar disk or infalling protostellar cloud. We do not see the star directly, but instead view a scattering surface above the disk midplane.
Our results are consistent with low-resolution, wide-field optical imaging polarimetry [20], which suggests a circumstellar torus in the northwest-southeast direction perpendicular to a bipolar reflection nebula. Our interpretation is also consistent with the existence of an optical [S II] emission line jet [21] blue-shifted to the southwest. The jet both bisects the nebulosity and lies perpendicular to the proposed disk. The geometry of the reflection nebula indicates that the outflow is only poorly collimated (∆θ ≈ 70°) despite the apparently narrow jet traced by optical forbidden line emission. Because [S II] line emission arises preferentially in regions denser than the critical density for this transition, the surface brightness distributions can take on the appearance of a highly collimated jet, despite the fact that the streamlines collimate logarithmically slowly [22].
Observations of T Tauri stars have led to a general understanding of the origin of solar-type stars [23]: The fragmentation and collapse of an interstellar cloud creates a self-gravitating protostar surrounded by a Keplerian accretion disk fed by an infalling, rotationally-flattened envelope. The disk mediates the outflows common to low-mass young stellar objects, which play a key role in the dispersal of the natal gas and dust.
It has been hypothesized that the more massive Herbig Ae/Be stars and the T Tauri stars form and evolve in similar manners [24], but this remains controversial. On large spatial scales, the disk-like nature of the circumstellar matter around Herbig Ae/Be stars is well established. Flattened structures around several sources have been resolved on 100 AU scales [25], or have Keplerian kinematics [26]. However, the evidence seems to be ambiguous on scales of tens of AU and below, with some authors arguing for a spherical geometry [27] and others favoring disks [28]. LkHα 198 and LkHα 233 are both classified as Hillenbrand Group II Herbig Ae/Be stars: they have infrared spectra that are flat or rising towards longer wavelengths. This means that they are young stars that may or may not possess circumstellar disks but do possess circumstellar envelopes which are not confined to a disk plane. We observe such envelopes around both of our sources in the form of centrosymmetrically polarized biconical nebulosities viewed approximately edge-on to the midplanes.
We compared our observations with radiative transfer models computed for a sequence of circumstellar dust distributions around T Tauri stars [29,30]. These models provide both total and polarized intensity images, which we convolved with a model instrumental point spread function to match our observed resolution (Fig. S2). Bipolar outflow cavities in these models produce a limb-brightened appearance at near-infrared wavelengths, with the brightening stronger in polarized light than in total intensity.
For both LkHα 233 and LkHα 198, the peak polarization differs between the two lobes. These asymmetries may indicate the sign of the inclination of each object. In Whitney's model envelopes, which use dust grain properties fit to an extinction curve for the Taurus molecular cloud, the closer lobe was brighter overall but had lower fractional polarization by several percent. For LkHα 198, the polarization in the northern lobe of the bipolar nebulosity is generally 10-15% lower than the southern lobe at all wavelengths (20% vs. 35% at H, for instance), suggesting that the northern lobe is oriented towards us. For LkHα 233, the southwest lobe is 14% polarized on average versus 26% for the northeast lobe. This indicates that the southern lobe is facing us, consistent with the blueshift of the CO jet in that direction, and the southwest shift of the intensity peak relative to the polarization centroid.
LkHα 233's limb-brightened appearance provides compelling evidence for the presence of cavities swept out by bipolar outflow from the star. Cavity models with an opening angle of 30-40° seen at an inclination of 80° reproduce both the observed polarization fraction of 25-40% and the higher degree of limb brightening seen in the near lobe.
The absence of limb brightening may be evidence that LkHα 198 lacks polar cavities, or at most possesses very narrow ones. The detectability of limb brightening for a given angular resolution depends on the opening angle between the symmetry axis and edge of the cavity. At our resolution, envelope models with cavity opening angles greater than ∼20° predict detectable limb brightening, while we observe for LkHα 198 an opening angle of 45° without limb brightening. Thus our observations do not support the presence of evacuated cavities in the envelope of LkHα 198. The observed morphology can instead be explained by the illumination of a cavity-free, rotationally-flattened envelope by the central star; the bipolar appearance would then arise from light escaping along the path of least optical depth. However, these cavity-free infalling envelope models have opening angles which increase with wavelength, while we observe a constant opening angle, suggesting a geometric rather than optical depth origin for the observed morphology. This discrepancy may be resolvable by varying the dust particle properties.
Based on these observations, LkHα 233 is the more evolved of the two systems, with well-defined cavities swept out by bipolar outflow and bisected by a very dark lane. LkHα 198 is a less evolved system, which is only in the early stages of developing bipolar cavities and possesses lower extinction in the apparent disk midplane.
The observed circumstellar environments are consistent with the rotationally-flattened infall envelope models developed for T Tauri stars, indicating that the process of envelope collapse has similar phases, despite the large disparities in mass and luminosity between these two classes of young stars. This morphological similarity leads us to infer that the conservation and transport of angular momentum is the dominant physical process for both classes of stars. Alternate formation pathways have been suggested for OB stars that invoke new physical mechanisms, such as magnetohydrodynamic turbulence [31] or stellar mergers [32]. The Herbig Ae stars studied here appear to be below the mass threshold at which such effects become important. Plotted from left to right for each object are the total intensity (Stokes I), the polarized intensity (P = √(Q² + U²)) and the polarization fraction (P/I). I and P are displayed using log stretches, while P/I is shown on a linear stretch. Red is K s band (2.1 microns), green H (1.6 microns), and blue J (1.2 microns). Polarization vectors for H band are overplotted on the P/I image; while the degree of polarization changes somewhat between bands, the position angles do not vary much. Integration times per band were 960 s and 1440 s for LkHα 198 and LkHα 233, respectively. The dimmest circumstellar features detected in our polarimetric observations are approximately 1-2 × 10⁴ times fainter than the stellar intensity peaks. The IRCAL polarimeter is sensitive to polarization fractions as low as a few percent, resulting in a signal-to-noise ratio of 5-10 per pixel for the typical polarizations of 15-40% observed around our targets.
Methods and Materials
The Lick Adaptive Optics system was developed at Lawrence Livermore National Laboratory, and can operate in both natural and laser guide star modes [5,33]. In the laser guide star mode, the atmospheric wavefront reference is created by a laser tuned to the sodium D2 line at 589 nm, which excites mesospheric sodium at roughly 90 km altitude. The 589 nm light is generated by a tunable dye laser pumped by a set of frequency-doubled solid-state (Nd:YAG) lasers. Typically, 11-14 W of average laser power is projected into the sky with a pulse width of 150 ns and a pulse repetition rate of 13 kHz. Laser guide star systems are insensitive to tip and tilt, requiring a separate tip/tilt sensor using a natural guide star. For the observations presented here, the science targets served as their own tip/tilt references.
The sodium guide star has an apparent size of 2 arcseconds in 1 arcsecond seeing and a magnitude which depends on the atmospheric sodium density, which varies on all timescales from hourly to seasonally. The sodium level was low during July 2003, decreasing the brightness of the guide star, and forcing the adaptive optics system to operate at its lowest frame rate of 55 Hz. As a result, the Strehl ratios achieved were modest (S ≈ 0.05-0.1) despite the good atmospheric seeing. Correspondingly, the full-width at half-maximum (FWHM) of the point spread function was larger than the FWHM of a diffraction limited beam, 0.27 arcseconds versus 0.15 arcseconds respectively at 2.1 microns.
The science camera used with the Lick AO system is IRCAL [6], which has as its detector a 256 × 256 pixel HgCdTe PICNIC array manufactured by Rockwell. The observations presented in this paper used the standard astronomical J (1.24 micron), H (1.65 micron), and K s (2.15 micron) broad-band filters. IRCAL's plate scale, 0.0754 arcsec/pixel, was chosen to Nyquist sample the diffraction-limited beam at K s . The imaging polarimetry mode of IRCAL utilizes a cryogenic LiYF₄ (frequently called "YLF") Wollaston prism to produce simultaneous images of orthogonal polarizations. YLF was chosen for its excellent achromaticity throughout the near infrared. A rotating achromatic half-wave plate mounted immediately before the camera entrance window modulates the polarization, allowing measurement of both Stokes parameters Q and U.
Each target was observed for the same amount of time in J, H, and K s , divided equally between Stokes Q and U observations. Typical exposures were 30-90 s in duration, with small dithers performed every few exposures. Total integration time was 1440 s per band for LkHα 233, and 960 s per band for LkHα 198. The data were flat-fielded and bias-subtracted in the standard manner for near infrared astronomical data. Sky background frames were obtained in polarimetric mode and subtracted from the data. However, the near infrared sky is nearly unpolarized so this step is not essential. The data from different dither positions were registered together via a Fourier transform cross-correlation code and stacked to produce mosaic Stokes I, Q, and U images. These observing techniques and data reduction methods are based on [9].
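The registration-and-stacking step can be illustrated with a short sketch that locates the peak of the FFT-based cross-correlation between dithered frames and shifts each frame onto the first one. This is a NumPy stand-in for the actual reduction code, with simplifying assumptions: integer-pixel shifts, circular wrapping, and plain averaging.

```python
import numpy as np

def crosscorr_shift(ref, img):
    """Integer-pixel shift (dy, dx) of `img` relative to `ref`, found at the
    peak of their FFT-based circular cross-correlation."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap peaks in the upper half of the frame into negative offsets
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, cc.shape))

def register_and_stack(frames):
    """Shift each dithered frame back onto the first one and average them."""
    ref = np.asarray(frames[0], dtype=float)
    stacked = np.zeros_like(ref)
    for frame in frames:
        dy, dx = crosscorr_shift(ref, frame)
        stacked += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return stacked / len(frames)
```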
The instrumental polarization bias was established through observations of standard stars known to be unpolarized; from the derived bias (2% at a position angle of −85°) we calculate the effective Mueller polarization matrix for the instrument and apply the inverse of this matrix to the Stokes mosaics to remove the bias. V magnitude, distance, luminosity, and mass are from [2]. | 2014-10-01T00:00:00.000Z | 2004-02-25T00:00:00.000 | {
"year": 2004,
"sha1": "49e07d73b95024495f145be4eb060e6e9dc1edbc",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/92307/1/309.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a208efb05e7ba6b572d74110f1ba9463cea2860d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering",
"Physics",
"Medicine"
]
} |
196831600 | pes2o/s2orc | v3-fos-license | Improving Bayesian Local Spatial Models in Large Data Sets
Environmental processes resolved at a sufficiently small scale in space and time will inevitably display non-stationary behavior. Such processes are both challenging to model and computationally expensive when the data size is large. Instead of modeling the global non-stationarity explicitly, local models can be applied to disjoint regions of the domain. The choice of the size of these regions is dictated by a bias-variance trade-off; large regions will have smaller variance and larger bias, whereas small regions will have higher variance and smaller bias. From both the modeling and computational point of view, small regions are preferable to better accommodate the non-stationarity. However, in practice, large regions are necessary to control the variance. We propose a novel Bayesian three-step approach that allows for smaller regions without incurring the increase in variance that would otherwise follow. We are able to propagate the uncertainty from one step to the next without the issues caused by reusing the data. The improvement in inference also results in improved prediction, as our simulated example shows. We illustrate this new approach on a data set of simulated high-resolution wind speed data over Saudi Arabia.
Introduction
The rising popularity of statistical methods for environmental data calls for the development of new methods that are able to capture the underlying varying dependencies and to provide computationally efficient inference for the ever increasing amount of data. Traditional geostatistical approaches are not only computationally intensive but are also based on stationarity assumptions, which is convenient but too restrictive and rarely realistic. For instance, wind at sufficiently small temporal resolution (e.g., hourly or sub-hourly) tends to blow predominantly from specific directions because of atmospheric circulation, thus implying preferred directional dependencies. Additionally, failing to account for how physical processes such as weather patterns vary over time or space can lead to an unrealistic assessment of the dependence, and hence suboptimal inference and prediction.
Traditionally, methods have focused on characterizing the spatial and spatio-temporal nonstationarity explicitly via the covariance function. The deformation method in Sampson and Guttorp (1992) constructs a non-stationary covariance structure from a stationary structure by re-scaling the spatial distance (Sampson and Guttorp, 1992), which was subsequently extended to the Bayesian context in Damian et al. (2001) and Schmidt and O'Hagan (2003). Another class of non-stationary methods is built on the process convolution or kernel smoothing method, introduced by Higdon (1998), which uses a spatially varying kernel and a white noise process to create the covariance structure. Other well-known approaches to model non-stationarity include representing the covariance function as a linear combination of basis functions and modelling the covariance matrix of the random coefficients (Nychka et al., 2002), and to account for the effect of covariate information directly in the covariance function (Schmidt et al., 2011;Neto et al., 2014). For a review on the existing literature on non-stationary methods, see Risser (2016).
Although all of the above methods produce valid models, their computational burden for inference and prediction can be infeasible for large data sets. Indeed, for evaluating a Gaussian likelihood in a data set of size n, O(n²) entries need to be stored and O(n³) flops need to be computed for the log-determinant and matrix factorization. This task is feasible on modern computers only when n is at most a few tens of thousands of points. Additionally, evaluating a non-stationary model implies inference on a larger parameter space, which requires an exponentially increasing number of likelihood evaluations for frequentist inference or posterior sampling (Edwards et al., 2019). To address the difficulties in computation for large data sets, Nychka et al. (2018) used a multi-resolution representation of Gaussian processes to represent non-stationarity based on windowed estimates of the covariance function under the assumption of local stationarity, and successfully used this idea to emulate fields from climate models. Kuusela and Stein (2018) proposed modelling Argo profiling float data using locally stationary Gaussian process regression, where parameter estimation and prediction were carried out in a moving window. Other works related to moving window methods have been developed and applied in Hammerling et al. (2012) and Tadić et al. (2015) to model remote sensing data.
The seminal work of Lindgren et al. (2011) advocated avoiding modeling the covariance function altogether and modeled the data via a Stochastic Partial Differential Equation (SPDE) instead. By considering a spatial field as a solution of an SPDE, and describing the covariance function only implicitly, inference is of the order O(n^{3/2}), thus allowing inference on considerably larger data sets than covariance-based methods. The computational benefits arise from the precision matrix (inverse covariance matrix) resulting from the approximate stochastic weak solutions of the SPDE, which has a Markovian structure where only close neighbours are non-zero (Rue and Held, 2005). By spatially varying the coefficients in the SPDEs, it is also possible to construct a variety of non-stationary models (Bolin et al., 2011). Locally non-stationary fields were considered in Fuglstad et al. (2015a) by letting the coefficients in the SPDE vary with position, and further discussed and generalized for spatially varying marginal standard deviations and correlation structure in Fuglstad et al. (2015b). Another application of the SPDE approach to model non-stationarity is to include covariates directly into the model parameters; see Ingebrigtsen et al. (2014) for an application to annual precipitation in Norway.
The aim of this paper is to develop a new method for modelling large data sets with spatial dependence that not only improves local models in terms of inference and prediction, but is also computationally affordable. As a motivating example, we use the high-resolution simulated wind data from a computer model displayed in Figure 1a. We partition these data into several small disjoint subsets, which we call 'regions', as shown in Figure 1b. Modeling and predicting such a variable over a large region presents several challenges. First, the data structure at this high resolution is very complex, with details and features that are difficult to capture with a single model. As a consequence, the assumption of stationarity for the entire region is inappropriate. Second, because of the large number of locations, we need a method that is computationally efficient. We show that our method is able to address not only the modeling challenges arising from the inherent non-stationarity of hourly wind, but also the computational issues that are implied by the large data size.
When choosing the size of these regions, we face the conflicting issue of bias-variance tradeoff in parameter estimation. Ideally, one wants to choose regions that accurately capture the features in the data (low variance), but also have high predictive out-of-sample skills (low bias).
Indeed, small regions reduce the model bias and allow fast computations, at the expense of low accuracy (high variance) in the parameter estimation. Large regions instead allow a control of the variance but also imply a sub-optimal characterization of the dependence structure, hence a bias.
We propose a novel three-step approach, which simultaneously allows for small regions and low variance. The key is to allow small regions to model the local dependence, and to correct the estimated parameter distribution with a smoothing step that borrows strength from neighboring regions. The smoothing step produces a distribution that represents the adjusted uncertainty of the local parameters, which is then used for refitting the models. Allowing this adjusted uncertainty to be used as a new prior would imply the incorrect premise of the model being influenced by the data twice, hence our approach restricts the information propagation by including it as the new posterior estimates instead. We start with a simple example where the new posterior is the mode of the distribution from the smoothing step. Then, using the wind data in Figure 1a, we show that it is possible to improve the predictive performance by also allowing the uncertainty to propagate from one step to the next.
Our three-step approach is best exemplified by considering a toy data set, where each region consists of an autoregressive process of order one, AR(1). We simulate R time series from this model, where each time series contains T observations, y_r = {y_r(1), . . . , y_r(T)}. For each r, the observations y_r are assumed to be conditionally independent, given the latent Gaussian random field x_r = {x_r(1), . . . , x_r(T)} and the hyperparameter φ_r:

y_r(t) | x_r(t), τ ∼ N(x_r(t), 1/τ),   x_r(t) = φ_r x_r(t − 1) + ε_r(t),   ε_r(t) ∼ N(0, 1),   (1)

where t = 2, . . . , T is an index for time, |φ_r| < 1 and τ is the fixed precision (known and the same for all time series). Figure 2 shows the different values of φ_r used to simulate R = 100 time series from (1), where φ_r changes according to a sine-squared pattern (black squares in Figure 2). For each time series, we set T = 50 and use two different values for the precision: τ = 2 and τ = 1 in Figures 2a and 2b, respectively. In the first step, we estimate local models for each time series (red circles in Figure 2). In the second step, we apply a correction to the parameter estimates from the first step, based on information from neighbouring regions (blue triangles in Figure 2). The third step consists of refitting the model in (1) to each time series, propagating the information from the adjusted posterior estimates from the second step back into the analysis. Figure 2 shows that our correction improves the parameter estimates substantially, not only for the more extreme case where τ = 1 in panel (b), but also when τ = 2 in panel (a). More details on this example will be provided in Section 3. Figure 2: estimated values of φ_r from fitting the AR(1) model to the simulated data (red circles), and estimates after fitting a smoothing spline (blue triangles); the left plot corresponds to simulations with fixed τ = 2 and the right plot to τ = 1.
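As a concrete illustration, the sketch below simulates data of this form in NumPy. The sine-squared schedule for φ_r and its scaling to the interval (0.2, 0.95), as well as the unit innovation variance of the latent AR(1) process, are illustrative assumptions; the actual fits in the paper are carried out with R-INLA rather than with this code.

```python
import numpy as np

rng = np.random.default_rng(42)
R, T, tau = 100, 50, 2.0          # regions, series length, observation precision

# smoothly varying true AR(1) coefficients (illustrative sine-squared schedule)
phi = 0.2 + 0.75 * np.sin(np.linspace(0, 2 * np.pi, R)) ** 2

def simulate_region(phi_r):
    """One latent AR(1) path (unit innovation variance, stationary start)
    plus Gaussian observation noise with precision tau."""
    x = np.empty(T)
    x[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - phi_r ** 2))
    for t in range(1, T):
        x[t] = phi_r * x[t - 1] + rng.normal()
    return x + rng.normal(0.0, 1.0 / np.sqrt(tau), size=T)

y = np.array([simulate_region(p) for p in phi])   # R x T matrix of observations
```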
The remainder of this paper is organized as follows. In Section 2 we provide an overview of the proposed methodology. Further details of our approach using the AR(1) example are given in Section 3. The application to the wind speed data in Figure 1 is presented in Section 4. A comprehensive discussion and conclusions are provided in Section 5.
2 Overview of the proposed methodology
Background
We consider a non-stationary and possibly very large data set, and a partition of the domain into regions where the assumption of stationarity is plausible, defined as Ω_r, r = 1, . . . , R, where each observation is associated with exactly one Ω_r. Each region contains N_r observations, y_r = {y_r(1), . . . , y_r(N_r)}. For each Ω_r, consider the following hierarchical structure

y_r | x_r, θ_r ∼ π(y_r | x_r, θ_r),   x_r | θ_r ∼ π(x_r | θ_r),   θ_r ∼ π(θ_r),   (2)

where x_r = {x_r(1), . . . , x_r(N_r)} is the vector of the latent field that describes the underlying spatio-temporal dependence structure, θ_r is the m-dimensional vector of hyperparameters and π is a generic distribution. The observations y_r are assumed to be conditionally independent, given x_r and θ_r. The resulting joint posterior distribution of x_r and θ_r is given by

π(x_r, θ_r | y_r) ∝ π(θ_r) π(x_r | θ_r) π(y_r | x_r, θ_r).

Our main goal is to extract the posterior marginal distributions for the elements of the latent field, π{x_r(i) | y_r}, and hyperparameters, π{θ_r(j) | y_r}, and use them to obtain predictive distributions at unsampled locations. Calculation of these univariate posterior distributions requires integrating with respect to x_r and θ_r,

π{x_r(i) | y_r} = ∫ π(x_r, θ_r | y_r) dx_r(−i) dθ_r,   π{θ_r(j) | y_r} = ∫ π(θ_r | y_r) dθ_r(−j),   (3)

where θ_r(−j) is the vector with the j-th hyperparameter component omitted. When the integrals in (3) cannot be found analytically, approximations are typically obtained via simulation based methods such as MCMC. Alternatively, Rue et al. (2009) proposed an approximate Bayesian inference approach that has become increasingly popular in the last decade. Approximations for π(x_r(i) | y_r, θ_r) and π(θ_r | y_r) are obtained and denoted by π̃(x_r(i) | y_r, θ_r) and π̃(θ_r | y_r), respectively. These are then used to construct the nested approximation

π̃{x_r(i) | y_r} = ∫ π̃{x_r(i) | y_r, θ_r} π̃(θ_r | y_r) dθ_r.   (4)
Improving the local estimates
We propose a new method for improving the estimation of π̃(θ_r | y_r) in (4), and hence also improving the estimated π̃{x_r(i) | y_r}, for i = 1, . . . , N_r. Since each region is selected to be small enough to approximate the local non-stationarity well, the resulting parameter estimates are likely to have a large variance, and smoothing across the regions is used to reduce it.
The method is based on two extra steps in the estimation procedure from the previous section. In Step 2 we apply a correction to the posteriors π̃(θ_r | y_r) by smoothing the mode of this distribution across r. In Section 3, we show a one-dimensional example with a smoothing spline, while in Section 4.3 we describe the two-dimensional case with a spatial model. We denote by π̃_smooth(θ_r | y) the resulting smoothed distribution for region r in Step 2 of our approach, where y is the combined data set from all regions, i.e., y = (y_1, . . . , y_R). In Step 3, the models are re-fitted to each region and the correction from Step 2 is propagated back into the analysis as the posterior

π̃_smooth(x_r | y_r) = ∫ π̃_smooth(x_r | y_r, θ_r) π̃_smooth(θ_r | y_r) dθ_r,   (5)

where π̃_smooth(x_r | y_r, θ_r) is obtained by plugging in values of θ_r from the π̃_smooth(θ_r | y_r) obtained in Step 2. Step 3 is very computationally efficient, since the posteriors for the hyperparameters have already been estimated, and as in Step 1 the models for each region can be fully parallelized.
Also, as the posterior marginals in (5) are the basis for deriving the predictive distributions, the proposed correction will also have a direct impact on prediction performance.
Here, the vector θ r contains the hyperparameters that need to be smoothed, while the ones that do not require the smoothing are included in x r . In practice, it is more important to smooth hyperparameters that have a higher variability and are harder to estimate.
The information from the smoothing in Step 2 is incorporated directly into the posterior distribution in Step 3, as opposed to being introduced through priors. By doing so, we prevent the estimation in Step 3 from being influenced by the data that were already used in the first step, and thus avoid using the data twice.
Model description
In the Introduction we briefly introduced our method on a simulated example (see Figure 2) using the AR(1) model in (1). Here, we provide all the details about the methodology in light of the steps proposed in the previous section. For the ease of exposition, we fix the precision τ in (1), so that for each region r only the hyperparameter φ r needs to be estimated. No covariates or additional random effects have been included in (1), but the steps below can be easily adapted to account for them.
The model is a special case of the hierarchical framework proposed in (2). Indeed, for the first equation of the hierarchy, the likelihood of the data y_r given the latent field x_r and the hyperparameter φ_r is given by

y_r | x_r, φ_r ∼ N_T(x_r, τ⁻¹ I_T),

where I_T is the T × T identity matrix and τ is the fixed precision, while N_T denotes a T-dimensional normal distribution. For the latent process x_r, we assume that the marginal distribution of x_r(1) is Gaussian with mean zero and variance 1/(1 − φ_r²) to have a stationary process. The joint distribution can be written as

x_r | φ_r ∼ N_T(0, Q_{x,r}⁻¹),

where Q_{x,r} is the tridiagonal precision matrix of an AR(1) process.
The three steps of our approach can be summarized as follows: Step 1: The model fitted to each region. Fit the AR(1) model in (1) with fixed known τ to each time series, y_r, separately. Following the notation in Section 2, we define the variance-stabilizing transformation θ_r = log{(1 + φ_r)/(1 − φ_r)}, and we obtain the posterior marginal distributions for the latent field and for the hyperparameter θ_r, which we denote by π̃{x_r(t) | y_r} and π̃(θ_r | y_r), respectively, for t = 1, . . . , T and r = 1, . . . , R. Inference is performed using the R-INLA package (Rue et al., 2009).
Step 2: Smoothing the hyper-parameter. As in Lindgren and Rue (2008), we assume a continuous spline on a discrete set of knots with a second-order random walk, RW(2). We denote by θ̂_r the mode of π̃(θ_r | y_r) from Step 1, and we assume the normal observation model

θ̂_r | u_r ∼ N(u_r, τ_θ⁻¹),

where τ_θ is the precision and is such that log(τ_θ) = log(1/sd_r²), where sd_r is the estimated standard deviation of the posterior distribution π̃(θ_r | y_r). The vector u = (u_1, . . . , u_R) is assumed to have independent second-order increments,

u_r − 2u_{r+1} + u_{r+2} ∼ N(0, τ_u⁻¹),   r = 1, . . . , R − 2,

where τ_u is the precision parameter and can be used to control the degree of smoothing across regions. Section 3.2 discusses a method for choosing the optimal value of τ_u.
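A minimal NumPy sketch of this smoothing step is given below: it computes the mode of u under the Gaussian observation model and RW(2) penalty written above, using a dense second-difference matrix. The function names and the synthetic inputs are ours; this is a dense-matrix illustration of the idea rather than the spline-on-knots formulation fitted in the paper.

```python
import numpy as np

def rw2_smooth(theta_hat, sd, log_tau_u):
    """Smooth Step-1 modes theta_hat[r] ~ N(u[r], sd[r]^2) under an RW(2)
    penalty on u with precision exp(log_tau_u); returns the smoothed modes."""
    R = len(theta_hat)
    W = np.diag(1.0 / sd ** 2)                     # observation precisions
    D = np.zeros((R - 2, R))                       # second-difference operator
    for r in range(R - 2):
        D[r, r:r + 3] = [1.0, -2.0, 1.0]
    K = np.exp(log_tau_u) * D.T @ D                # RW(2) penalty matrix
    return np.linalg.solve(W + K, W @ theta_hat)   # smoothed modes

# usage with the variance-stabilising transform theta = log((1+phi)/(1-phi));
# phi_hat and sd_hat stand in for the Step-1 posterior summaries
phi_hat = np.clip(np.random.default_rng(1).normal(0.6, 0.1, 100), -0.99, 0.99)
sd_hat = np.full(100, 0.3)
theta_hat = np.log((1 + phi_hat) / (1 - phi_hat))
theta_smooth = rw2_smooth(theta_hat, sd_hat, log_tau_u=3.0)
phi_smooth = (np.exp(theta_smooth) - 1) / (np.exp(theta_smooth) + 1)
```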
Step 3: Re-fit the model to each region using the estimated mode. For each region r, we assume that the posterior distribution for the hyperparameters, namely π̃_smooth(θ_r | y_r), is a point mass concentrated at θ̂_r from Step 2. The marginal posterior for the latent process x_r is then obtained from the first equation in (5). Our choice was dictated by ease of exposition, and in the wind data application in Section 4.3 we will show a more general approach with integration points and weights instead of just the mode.
Step 3 implies a change of the original posterior in Step 1, and hence a change in the prior of the model. While retrieving the appropriate prior is not relevant for our method, it is still possible, and in the appendix we show the steps to do so. Figure 3 shows the posterior distributions of φ_r obtained in Step 1 (solid red) and Step 2 (dashed blue); the vertical line marks the true value φ_r = 0.88.
Sensitivity of prediction to smoothing
There are different approaches to control the degree of smoothness in Step 2. This can be, for instance, dictated by the case study and prior knowledge. Here, we present one possible method, which is based on two metrics: the first focuses on the departure of the estimated posterior against the exact simulated distribution, and the second is based on cross-validation.
To assess the improved accuracy in capturing the true distribution of the latent process x_r, we calculate the Kullback-Leibler Divergence (KL), a widely used metric for comparing two probability distributions. The departure from the true posterior π(x_r | y_r, φ_r) is defined as

KL_r = ∫ π(x_r | y_r, φ_r) log[ π(x_r | y_r, φ_r) / π_appr(x_r | y_r, φ_r) ] dx_r,   (7)

where π_appr represents either π̃ or π̃_smooth. A small KL_r indicates a small departure from the target posterior, and a zero KL_r indicates that the two distributions are the same.
The data are simulated from a known model, and the posterior distribution of the latent process π(x_r | y_r, φ_r) can be easily obtained from the joint distribution π(x_r, φ_r | y_r): π(x_r | y_r, φ_r) ∝ π(x_r, φ_r | y_r) ∝ exp{−½ x_rᵀ P_r x_r + b_rᵀ x_r}, where P_r = Q_{x,r} + τI and b_r = τ y_r. This implies that π(x_r | y_r, φ_r) ∼ N_T(µ_{0,r}, Σ_{0,r}), with µ_{0,r} = P_r⁻¹ b_r and Σ_{0,r} = P_r⁻¹. We also assume that the approximated posterior in (7) is normal, i.e., π_appr(x_r | y_r, φ_r) ∼ N_T(µ_{1,r}, Σ_{1,r}), and we obtain µ_{1,r} and Σ_{1,r} from the sample mean vector and covariance matrix of 10,000 posterior samples. The KL divergence expression in (7) can be simplified in the case of two multivariate Gaussian distributions. Indeed, if the target distribution is N_T(µ_{0,r}, Σ_{0,r}) and the approximation is N_T(µ_{1,r}, Σ_{1,r}), we have

KL_r = ½ [ log(|Σ_{1,r}| / |Σ_{0,r}|) − T + tr(Σ_{1,r}⁻¹ Σ_{0,r}) + (µ_{1,r} − µ_{0,r})ᵀ Σ_{1,r}⁻¹ (µ_{1,r} − µ_{0,r}) ],

where |Σ| denotes the determinant of Σ. Since the KL changes across different orders of magnitude, we opted for a variance-stabilizing estimator, the Expected Mean Log Conditional KL (EMLKL) divergence, defined as EMLKL = exp{ (1/R) ∑_{r=1}^{R} log(KL_r) }. We assess the impact of smoothing on the prediction skills of the estimated process. We use the conditional predictive ordinate (CPO) for leave-one-out cross-validation, defined as CPO_r(t) = π{y_r(t) | y_r(−t)} = ∫ π{y_r(t) | y_r(−t), θ_r} π{θ_r | y_r(−t)} dθ_r.
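The closed-form Gaussian KL divergence and the EMLKL summary are straightforward to evaluate numerically; the following NumPy sketch is a direct transcription of the expressions above (the function names are ours, not the paper's).

```python
import numpy as np

def kl_gauss(mu0, Sigma0, mu1, Sigma1):
    """KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) for multivariate Gaussians,
    matching the closed form quoted above (target = 0, approximation = 1)."""
    T = len(mu0)
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (logdet1 - logdet0 - T
                  + np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff)

def emlkl(kl_values):
    """Expected Mean Log Conditional KL: exp of the average log KL_r."""
    kl_values = np.asarray(kl_values, dtype=float)
    return np.exp(np.mean(np.log(kl_values)))
```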
According to the EMLKL (panel (a), left y-axis) and the EMLCPO (panel (b)), the best fit occurs when log(τ u ) = 3 and log(τ u ) = −5, respectively, and both scores show that there is a clear improvement against a model with no smoothing for log(τ u ) = {−5, −1, 3, 7}. After log(τ u ) = 7 the posteriors are oversmoothed and this worsens the fit compared to no smoothing (high EMLKL and low EMLCPO values). Evidence from this numerical study suggests that smoothing almost always improves the estimation of the latent process and prediction.
Smoothing does not just improve the prediction and decrease the bias, but also results in less variable estimates. Figure 4a (right y-axis) shows the spread of KL r for the different amounts of smoothing, displayed as a boxplot. It is readily apparent that optimal smoothing results in more stable estimates by decreasing the variance across regions.
Application to the WRF data set
We apply our method to a spatial data set of simulated wind speed detailed in Section 4.1. In Section 4.2 we present the local model that is fitted to each region, and in Section 4.3 we explain in detail each step of our approach and present the results.
The WRF data set
We focus on a simulation generated from the Weather Research and Forecasting (WRF) model. We first partition the domain into R regions small enough so that the assumption of stationarity is plausible. The disjoint subsets are obtained using the k-means clustering method, which minimizes the sum of squares from points to the assigned region centers (Hartigan and Wong, 1979). Our partition results in R = 2000 regions (see Figure 1b), with the smallest region containing 26 locations and the largest 62.
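A minimal sketch of this partitioning step is shown below using scikit-learn's k-means on the site coordinates. The synthetic coordinates and the smaller number of clusters are assumptions made only to keep the example fast; the paper uses the actual WRF grid locations and R = 2000.

```python
import numpy as np
from sklearn.cluster import KMeans

# synthetic planar coordinates standing in for the WRF grid locations
rng = np.random.default_rng(0)
coords = rng.random((10_000, 2))

# partition the domain by minimising within-region squared distances to the
# region centroids; a smaller R than the paper's 2000 keeps the toy example fast
R = 200
labels = KMeans(n_clusters=R, n_init=10, random_state=0).fit_predict(coords)

region_sizes = np.bincount(labels)
print(region_sizes.min(), region_sizes.max())   # smallest / largest region
```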
The spatial model
The distribution of wind speed is bounded below by zero and is significantly right-skewed. Therefore, wind speed cannot be directly modeled with the Gaussian distribution. Common transformations for normalizing wind speed data include the logarithmic and the square-root transformation (Taylor et al., 2009). Haslett and Raftery (1989) showed that the square-root transformation is well suited for wind data, as the resulting transformed wind speed resembles the Gaussian distribution. Hence, for each region r we model the square-root transformed wind speed y_r at sampling locations s = (s_1, . . . , s_{N_r}) with a latent Gaussian model, a special case of the hierarchical framework proposed in (2). For each region r, we assume

y_r(s_i) = z_r(s_i)ᵀ β_r + u_r(s_i) + ε_r(s_i),   i = 1, . . . , N_r,   (8)

where z_r is a p-dimensional vector of covariates, and β_r is the vector of the linear coefficients. Here, ε_r = {ε_r(s_1), . . . , ε_r(s_{N_r})}ᵀ ∼ N_{N_r}(0, τ_{ε,r}⁻¹ I_{N_r}) is the iid random noise that accounts for the measurement error.
The aforementioned model can be written in vector form as y_r | β_r, u_r, τ_{ε,r} ∼ N_{N_r}(Z_r β_r + u_r, τ_{ε,r}⁻¹ I_{N_r}), where y_r = {y(s_1), . . . , y(s_{N_r})}ᵀ is the observation vector and the N_r × p design matrix is Z_r = {z_r(s_1), . . . , z_r(s_{N_r})}ᵀ. We consider p = 2, thus two covariates: elevation and distance to the coast. In terms of the hierarchical framework in Section 2.1, (8) is the first equation, i.e., the data level, in (2).
The spatial field u_r(s_i) is assumed to be Gaussian and isotropic, with a covariance described by the Matérn function, a widely popular choice in spatial statistics. For two locations s_1 and s_2 at distance h = ‖s_1 − s_2‖, the Matérn covariance is defined as (Stein, 1999)

cov{u_r(s_1), u_r(s_2)} = C_r(h) = σ²_{u,r} (κ_r h)^{ν_r} K_{ν_r}(κ_r h) / {Γ(ν_r) 2^{ν_r − 1}},

where σ²_{u,r} = 1/τ_{u,r} is the marginal variance and K_{ν_r} is the modified Bessel function of the second kind of order ν_r. The popularity of the Matérn is mainly attributable to the control of the number of mean square derivatives of the underlying process through the parameter ν_r > 0. The range is controlled by κ_r > 0, and ρ_r = √(8ν_r)/κ_r represents the distance at which the spatial correlation is approximately 0.13. We set ν_r = 1, which implies a mean square differentiable process (Stein, 1999).
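The Matérn covariance with ν = 1 and the range parameterization above can be evaluated directly with SciPy's modified Bessel function; the sketch below is an illustrative implementation (the function name and the h = 0 handling are ours).

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(h, sigma2_u, rho, nu=1.0):
    """Matérn covariance C(h) = sigma2_u * (kappa*h)^nu * K_nu(kappa*h)
    / (Gamma(nu) * 2^(nu-1)), with kappa = sqrt(8*nu)/rho so that the
    correlation at distance rho is roughly 0.13."""
    h = np.asarray(h, dtype=float)
    kappa = np.sqrt(8.0 * nu) / rho
    c = np.empty_like(h)
    zero = h == 0.0
    kh = kappa * h[~zero]
    c[~zero] = sigma2_u * (kh ** nu) * kv(nu, kh) / (gamma(nu) * 2.0 ** (nu - 1.0))
    c[zero] = sigma2_u                  # limit as h -> 0 is the marginal variance
    return c

# correlation at distance rho should be roughly 0.13
print(matern_cov(np.array([1.0]), sigma2_u=1.0, rho=1.0, nu=1.0))
```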
The vector of hyperparameters to be estimated is given by the precision of the data, the precision of the latent process, and its range, so that θ_r = (θ_{1,r}, θ_{2,r}, θ_{3,r})ᵀ = {log(τ_{ε,r}), log(τ_{u,r}), log(ρ_r)}ᵀ.
The linear coefficients β r in (8) are less variable, so they are not included in the vector of hyperparameters to be smoothed.
We provide a joint distribution for the range ρ r and the variance σ 2 u,r using the concept of the Penalized Complexity (PC) prior that was recently introduced by Simpson et al. (2017).
PC develops priors that allow shrinkage towards a base model, which is assumed to be the reference. The prior is then built by allowing a control of the KL divergence from the base to the actual model. Following Fuglstad et al. (2019), we assume a base model with infinite range and precision, i.e., a constant, and we assign PC priors to ρ_r and τ_{u,r} that are able to control the tail probabilities: P(σ²_{u,r} > σ²_{0,r}) = α_1 and P(ρ_r < ρ_{0,r}) = α_2. We choose α_1 = α_2 = 0.01, ρ_{0,r} to be 20% of the range of the observations, and σ²_{0,r} the variance estimated from the data at region r. In other words, we assume a prior that bounds the variance to be larger than that estimated from the data with a 1% chance, and the range to be below 20% of the range of the observations with a 1% chance. For r = 1, . . . , R, we assume a vague Gamma prior with parameters 1 and 0.00005 for τ_{ε,r} and a vague Gaussian prior N(0, 1000) for β_r. The R-INLA package is used for model fitting and predictions (Rue et al., 2009).
Results
We now detail our approach with the data and the model described in the previous sections.
The three steps are described as follows: Step 1: The model fitted to each region.
We fit the model outlined in Section 4.2 to each of the R = 2000 regions in Figure 1b separately, and obtain estimates of the posterior distribution for the k-th element of θ_r for k = 1, 2, 3, which we denote by π̃(θ_{k,r} | y_r). We denote by θ̂_{k,r} the mode of π̃(θ_{k,r} | y_r), while the posterior standard deviation is denoted by sd_{k,r}. We show the results for θ_{3,r} = log(ρ_r), since the range is the hardest parameter to identify, and hence the most variable across regions. Figure 5a shows the map of θ̂_{3,r}. Many regions have a considerably higher estimated posterior mode than the neighboring regions, hence smoothing is necessary. Figure 5b shows the map of the posterior standard deviation sd_{3,r}, and it is apparent how the locations with large range values correspond to the ones with low posterior variance. Step 2: Smoothing the hyperparameters.
The modes θ̂_{k,r} from Step 1 are smoothed here independently across k for simplicity and are normalized. With an abuse of notation, we now refer to θ_r and their components as their normalized version. We assume an additive model for smoothing: θ̂_{k,r}(s_c) = u_k(s_c) + ε_{k,r}(s_c), where the locations s_c are the centroids of each region r. The process u_k(s_c) is assumed to be Gaussian and modeled with the Matérn covariance in (9), with variance σ²_{u,k}, range ρ_k, and the iid noise is ε_{k,r} ∼ N(0, τ_{ε,k}⁻¹), for k = 1, 2 and 3.
We assume τ_{ε,k} to be fixed at the value of 1/sd²_{k,r}, r = 1, . . . , R, from Step 1. This ensures that the same degree of smoothness is applied to all three additive models, i.e., the hyperparameters with a larger standard deviation will be smoothed less than the ones with a smaller standard deviation. Here, ρ_k is fixed to half of the domain of the study region. A choice of considerably different values, such as the size of the domain, would result in oversmoothing. The choice of τ_u is performed via cross-validation and will be discussed later. Because θ̂_{k,r}, k = 1, 2, 3, are at the same scale after normalization, we can use the same smoothness and therefore τ_u will not be strongly dependent on k. We use six equally spaced values for log(τ_u), varying from −7.5 to 5. The fitted values from the smoothing are then transformed back from the normalized to the original scale. Figure 5c is an example of the estimated posterior mode of π̃_smooth(θ_{3,r} | y_r) with log(τ_u) = −5.
Step 3: Re-fit the model to each region using integration points.
In the AR(1) simulated example in Section 3, the smoothed hyperparameter posterior was assumed to be a point mass concentrated at the smoothed posterior mode from Step 2, so that calculation ofπ smooth {x r (i) | y r } in (5) was trivial. In this application, we propose a more articulated method which numerically approximates the integral in the first equation of (5).
We use the Gauss-Hermite quadrature, a numerical scheme to approximate integrals of the form

∫ e^{−ξ²} g(ξ) dξ ≈ ∑_{l=1}^{L} ∆^{(l)} g(ξ^{(l)}),

for a fixed L. The abscissas for the quadrature of order L, which are given by the roots of the Hermite polynomials, ξ^{(l)}, and the weights, ∆^{(l)}, both have a closed form expression (not shown).
We operate under the assumption that π̃_smooth(θ_r | y) can be well approximated by a product of marginal normal distributions, π̃_smooth(θ_r | y) ≈ ∏_{k=1}^{3} N(µ_{k,r}, σ²_{k,r}), so that the first expression in (5) becomes a weighted combination of conditional posteriors π̃_smooth(x_r | y_r, θ_r) evaluated at the quadrature configurations, where the latent field x_r = (u_r, β_r) contains the linear coefficients and the spatial process in (8). Using a change of variables, we obtain ξ_{k,r} = (θ_{k,r} − µ_{k,r}) / (√2 σ_{k,r}) ⇔ θ_{k,r} = µ_{k,r} + √2 σ_{k,r} ξ_{k,r}. For this case study, L = 5 integration points in each of the three dimensions provide an approximation that is sufficiently accurate. Thus, the required number of configurations to evaluate the integral is L³ = 5³ = 125. Since each configuration can be evaluated independently, the computations can be easily parallelized.
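A compact NumPy sketch of the Gauss-Hermite mixture over the L³ hyperparameter configurations is given below. The callable standing in for the Step-3 refit at a fixed θ, and the toy values of µ and σ, are placeholders; in the actual analysis each configuration corresponds to a model refit with R-INLA.

```python
import numpy as np
from itertools import product

def gauss_hermite_mixture(posterior_given_theta, mu, sigma, L=5):
    """Approximate an integral of posterior_given_theta(theta) weighted by a
    product of independent N(mu[k], sigma[k]^2) marginals, using an L-point
    Gauss-Hermite rule in each hyperparameter dimension."""
    xi, w = np.polynomial.hermite.hermgauss(L)        # nodes/weights for e^{-x^2}
    result, total_w = None, 0.0
    for idx in product(range(L), repeat=len(mu)):     # L^3 configurations
        theta = mu + np.sqrt(2.0) * sigma * xi[list(idx)]   # change of variables
        weight = np.prod(w[list(idx)])
        value = weight * np.asarray(posterior_given_theta(theta))
        result = value if result is None else result + value
        total_w += weight
    return result / total_w                           # normalised mixture

# toy usage: a dummy function of theta stands in for the refit at fixed theta
mu, sigma = np.array([0.0, 1.0, -2.0]), np.array([0.5, 0.3, 0.4])
mix = gauss_hermite_mixture(lambda th: np.array([th.sum()]), mu, sigma)
```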
Choice of the smoothing parameter
There is no true underlying model here, so the EMLKL in Section 3.2 is not applicable and we only focus on the cross-validation score EMLCPO. We compare the leave-one-out predictive performance using the different degrees of smoothing, as explained previously in Step 2. Figure 6 shows this comparison: lower values of log(τ u ) indicate more smoothing than higher values. The highest value corresponds to the results obtained directly from Step 1. The EMLCPO value attains its maximum at log(τ u ) = −5, and any of the smoothing levels improves the original estimates from Step 1. Differently from the AR(1) case in Section 3, where at some point the smoothing becomes excessive and the scores progressively deteriorate, here the performance is significantly improved even for a large smoothing. We also compare the predictive performances of the integration method against the approach using only the mode as in the AR(1) case. The Gauss-Hermite integration shows marginal improvement, especially for low degrees of smoothing.
For higher degrees of smoothing, the estimated posterior distribution is more narrow, and the effect of the integration is less apparent.
Discussion
In this work, we developed a new three-step approach for analyzing large data sets with spatial dependence that improves local models in terms of inference and prediction. In Step 1, the domain is partitioned into regions, and local models are fit to each region. The size of these regions is a bias-variance trade-off; larger regions will have smaller variance and larger bias, whereas smaller regions will have higher variance and smaller bias. We choose to use smaller regions, thus allowing the capture of local non-stationarities, followed by a correction for the high variance, based on borrowing information from neighboring regions in Step 2. Finally, in Step 3, the model is re-fitted to each region, propagating the uncertainty from the smoothing back into the analysis as the new posterior, thus avoiding problems of using the data twice. The approach allows flexible modeling of complex dependence structures, but is at the same time computationally affordable, as the proposed adjustment is amenable to full parallelization across regions.
In both the AR(1) simulated data and the application, the improvement from our method compared to fitting local models to each region is apparent. Indeed, the smoothing adjustment allows us to recover the true posterior distribution more accurately in the simulation study, and most importantly it yields superior predictive skill. The smoothing can be chosen to achieve the best possible advantage over the uncorrected model. Ad-hoc sensitivity analysis shows that our method is robust with respect to the smoothing technique, with improved results for a wide range of smoothing levels.
Our method is general and can be applied to many settings: space, time, space/time and different domains, as long as a partition is provided. It relies on local models defined through a hierarchical latent process framework, a class large enough to allow a wide range of applications.
If better local models are provided, our method can still be used to correct the variance of the estimated parameters.
The main limitation of this approach lies in the assumption of a domain partition. For some applications such as the wind, the regions imply a discontinuity at the border, and hence prediction at unsampled locations at the border might be suboptimal. An application of our model to spatio-temporal data is possible, but would likely require additional approximations and a careful choice of the regions as the data size and the hyperparameter space will be considerably larger.
Appendix: Retrieving the priors
The re-fitting procedure in Step 3 of our approach uses the information from Step 2 as the new posterior distribution. We show how to retrieve the prior distribution that corresponds to the posterior for the toy example in Section 3.
For each φ_r and corresponding data y_r, with r = 1, . . . , R, let π(y_r | φ_r) be the likelihood of observing data y_r given the hyperparameter φ_r. We denote by π̃(φ_r | y_r) and π̃_smooth(φ_r | y_r) the posterior distributions from Steps 1 and 3, respectively, and by π̃(φ_r) and π̃_smooth(φ_r) the corresponding priors.
2. The conditional distribution π(x_r | y_r, φ_r): We use the fact that the conditional distribution of x_r is just the joint distribution between x_r and y_r, without the terms that do not depend on x_r, since y_r and φ_r are fixed:

π(x_r | y_r, φ_r) ∝ π(y_r, x_r | φ_r) ∝ exp{−½ x_rᵀ Q_{x,r} x_r} × exp{−½ τ (x_rᵀ x_r − 2 y_rᵀ x_r)} = exp{−½ x_rᵀ (Q_{x,r} + τI) x_r + τ y_rᵀ x_r}.

Using the canonical form of the multivariate Gaussian distribution, we can write π(x_r | y_r, φ_r) ∝ exp{−½ x_rᵀ P_r x_r + b_rᵀ x_r}, where P_r = Q_{x,r} + τI and b_r = τ y_r. It follows that x_r | y_r, φ_r ∼ N_T(P_r⁻¹ b_r, P_r⁻¹).
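The conditional posterior above is easy to verify numerically; the sketch below builds the tridiagonal AR(1) precision matrix (assuming unit innovation variance, consistent with the stated marginal variance of x_r(1)) and returns the mean P_r⁻¹ b_r and covariance P_r⁻¹.

```python
import numpy as np

def ar1_precision(T, phi):
    """Tridiagonal precision matrix of a stationary AR(1) latent field with
    unit innovation variance (the Q_{x,r} used above)."""
    Q = np.zeros((T, T))
    idx = np.arange(T)
    Q[idx, idx] = 1.0 + phi ** 2
    Q[0, 0] = Q[-1, -1] = 1.0
    Q[idx[:-1], idx[:-1] + 1] = -phi
    Q[idx[:-1] + 1, idx[:-1]] = -phi
    return Q

def conditional_posterior(y, phi, tau):
    """Mean and covariance of x | y, phi: P = Q + tau*I, b = tau*y,
    so x | y, phi ~ N(P^{-1} b, P^{-1})."""
    T = len(y)
    P = ar1_precision(T, phi) + tau * np.eye(T)
    cov = np.linalg.inv(P)
    return cov @ (tau * np.asarray(y, dtype=float)), cov
```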
Next, from the posteriorsπ(φ r | y r ) andπ smooth (φ r | y r ) on the right hand side of (10) that are computed in Steps 1 and 2, respectively, together with the likelihood term in (15), we can obtain the corresponding priors in (10). The right hand side plot of Figure 3 shows these exact scaled log prior distributions. | 2019-07-16T10:40:55.000Z | 2019-07-16T00:00:00.000 | {
"year": 2020,
"sha1": "7a076936031c06fb59c17eadc7a816125d922588",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1907.06932",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7a076936031c06fb59c17eadc7a816125d922588",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
52005474 | pes2o/s2orc | v3-fos-license | Asymptomatic Plasmodium Parasites among Adults in Eastern Uganda: A Case of Donor Blood Screening at Mbale Regional Blood Bank
Background There is a paucity of data on asymptomatic carriage of Plasmodium parasites among the adult population in Eastern Uganda, an area of perennially high malaria transmission. In this study, we estimated the prevalence of Plasmodium parasites in donor blood units at Mbale Regional Blood Bank (Mbale RBB), a satellite centre of the Uganda Blood Transfusion Service (UBTS). Method This was a cross-sectional descriptive study in which 380 screened donor blood units were examined for the presence of Plasmodium parasites. A systematic random sampling technique with an interval of 7 was used to select the screened blood units for testing. Two experienced malaria slide microscopists (MC1 and MC2) independently examined each thick and thin blood slide under high-power magnification of ×400 and then ×1000, as stated in the study standard operating procedure (SOP). Each slide was examined for 100 oil immersion fields before being declared negative for Plasmodium parasites. The results of each microscopist's examination were tallied separately, and the two tallies were then compared. A third independent microscopist (MC3), blinded to the results from MC1 and MC2, performed quality control by reading a random sample of 38 slides (10% of the total) and was available to examine any slides with inconsistent findings by MC1 or MC2. Results All the microscopists were unanimous in all the slide readings. Five of the thick smears (1.3%) confirmed the presence of Plasmodium parasites among donor blood units. Of these, 4/5 were from male donors. Plasmodium falciparum was identified in 4 positive samples, while Plasmodium malariae was identified in one of the donor units. Conclusion The 1.3% prevalence of Plasmodium parasites in screened donor blood units represents both a risk of transfusion-transmitted malaria infection and a pool of community-transmissible malaria infections.
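For illustration only, the point prevalence reported in the Results (5 positive thick smears out of 380 units) and an exact binomial confidence interval can be computed as in the short sketch below; the function is a generic Clopper-Pearson calculation and is not part of the study's analysis plan.

```python
import numpy as np
from scipy.stats import beta

def prevalence_ci(positives, n, level=0.95):
    """Point prevalence with an exact (Clopper-Pearson) confidence interval."""
    p_hat = positives / n
    alpha = 1.0 - level
    lower = beta.ppf(alpha / 2, positives, n - positives + 1) if positives > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, positives + 1, n - positives) if positives < n else 1.0
    return p_hat, lower, upper

# 5 positive thick smears out of 380 screened donor units
print(prevalence_ci(5, 380))   # roughly (0.013, 0.004, 0.030)
```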
Background
Globally, there has been a substantial reduction in malaria incidence as a result of the scaling up of control and prevention efforts. There is currently renewed interest in initiatives for malaria elimination [1,2] and in accurate mapping of areas with malaria parasite prevalence [3]. This is especially true in this era of malaria epidemiological transition, in which some countries in Africa have documented either no decline [4], an increase in hospitalisations with severe Plasmodium falciparum (P. falciparum) malaria during the same period [5], or a resurgence of severe malaria following a period of sustained control [6]. Worldwide, P. falciparum malaria causes 300-500 million clinical episodes, with up to 445,000 direct deaths attributed to the disease in 2017 [7]. In addition, malaria is a major coinfection with other diseases, contributing to an estimated 3 million deaths annually across the world [8]. Ninety percent of these deaths occur in African children <5 years old [9,10]. Fifteen sub-Saharan African (SSA) countries account for 80% of global malaria cases [7].
Control and/or prevention of malaria using vaccines has so far proven to offer only limited, short-term protection [7]. Thus, in many parts of Africa, single-phased prevention strategies alone will not adequately contribute to a substantial reduction in the disease burden. Strategies to limit malaria transmission in the community by breaking the malaria cycle and disrupting any further transmission have provided promising outcomes in malaria control and elimination efforts in malaria endemic areas [11]. However, these efforts require a combination of approaches rather than a single intervention at a time [12]. Use of the once effective indoor residual spraying with Dichlorodiphenyltrichloroethane (DDT) has been discouraged in some endemic countries because of its potential health and environmental hazards [13], while in other settings growing resistance to DDT has been reported [12]. Malaria control and elimination efforts need to be expanded to other previously recognised but poorly described risks of malaria transmission in communities, for instance, transmission of malaria through blood transfusion, especially in malaria endemic regions of sub-Saharan Africa [3]. Targeting malaria transfusion transmitted infections will enhance the gains being reported from the current malaria control and elimination efforts [7]. Therefore, understanding the prevalence of both symptomatic and asymptomatic malaria would help inform targets for control and prevention efforts geared towards elimination of the disease. In Eastern Uganda, available data demonstrate that spleen rates among children <5 years are high [14] and that malaria transmission is intense [14,15]. There are no data on asymptomatic malaria carriage in adults in Eastern Uganda, just as in the rest of the country. Therefore, descriptions of human populations that harbour malaria parasites are critical. There are some studies evaluating the use of chemotherapy for community control of malaria [16]. Other interventions for possible eradication of malaria parasitaemia in the community are being evaluated. The effectiveness of such interventions would be well informed by an understanding of the pools of asymptomatic carriage of malaria parasites in the community. Robust surveillance for asymptomatic carriage of malaria parasites, however, is largely lacking. Exploration of proxy estimation methods could yield useful data for mapping communities at risk or those that harbour potentially transmissible Plasmodia. We explored the potential of using Mbale RBB in Mbale municipality, with a catchment of 27 districts in the region, as a hub for surveillance for asymptomatic carriage of Plasmodia. The facility receives blood units from two categories of voluntary donors: "walk-in donors", who voluntarily come to the regional facility to donate, and "outreach donors", who are accessed by the Mbale RBB mobile blood collection teams for blood donation in the communities in the catchment area, including students. The blood collection teams are trained in counseling and in the selection of low-risk blood donors in the community. Only healthy asymptomatic donors are recruited, and these therefore comprise the right target population for surveillance for asymptomatic Plasmodia carriage. Before blood is considered safe and issued for transfusion, tests for blood group, viruses (HIV, Hepatitis B, and Hepatitis C), and syphilis (TPHA) are done.
We have previously used data on viral screening from Mbale RBB to describe the epidemiology of Hepatitis B and Hepatitis C in these communities [17]. Tests for haemoparasites such as Plasmodium are not routinely done; therefore, there are no data describing asymptomatic malaria using donor blood in Eastern Uganda. This study was therefore designed to investigate the feasibility of surveillance for asymptomatic malaria in donor blood as a proxy for estimating community asymptomatic carriage of malaria in Eastern Uganda, an area of perennial high malaria transmission.
Materials and Methods
This was a cross-sectional study conducted from 1 June to 31 July 2015 in Eastern Uganda. Screened donor blood units from Mbale RBB were examined for the presence of malaria parasites. The region has a stable high transmission burden for malaria with over 100 infective mosquito bites per person per year [15]. The Mbale RBB handles more than 30000 voluntary blood donations per annum. The blood donor units that met the selection criteria were serially arranged. The blood units selected for testing were determined by referring to the Mbale RBB database. The database showed that out of the 30000 collected each year, 90% (27000) of them would be safe for blood transfusion.
Sampling Criteria.
The study was conducted over 2 months for which a conservative blood collection forecast (monthly minimum 1250-maximum 2750 blood units) was made arriving at a study population of 2530 blood units. The study population of 2530 blood units was divided by the sample size of 380, obtaining a sampling interval of 7. The first unit to be selected was determined by putting pieces of paper containing numbers of the first ten units in a box. One paper piece was then picked at random and the number contained therein was considered as the first number. Every seventh unit, beginning from the currently selected blood unit number, was considered for the assay. Blood samples were extracted from the integral segment of the donor bag for analysis. All blood donor units that showed any form of hemolysis were excluded.
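As an illustration of the sampling scheme described above, the following short Python sketch (hypothetical code, not part of the original study) draws a random starting unit among the first ten serially arranged blood units and then selects every seventh unit thereafter; the function name and variable names are assumptions made for this example.

import random

def systematic_sample(population_size, interval, start_window=10):
    # Pick a random starting unit among the first 'start_window' serially arranged units,
    # then select every 'interval'-th unit thereafter (1-based unit numbers).
    start = random.randint(1, start_window)
    return list(range(start, population_size + 1, interval))

selected_units = systematic_sample(population_size=2530, interval=7)
print(selected_units[:5], "...", len(selected_units), "units selected")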
Data Collection.
Demographic information on the blood donors, without their personal identification information, was collected using a data abstraction tool. From each selected blood donor unit, about 100 µl of blood was collected from the integral tubing into a cryovial with the help of a tube sealer. Thick and thin blood smears were made for each blood unit for identification and typing of Plasmodium parasites, respectively. The thick blood smears were stained with 10% Giemsa at pH 7.2, a stable methanol-based Romanowsky stain. The stained blood smears were examined microscopically under X400 and X1000 magnification. The presence of Plasmodium parasites was noted based on their staining features with Giemsa stain. We further quantified the malaria parasites by microscopic examination under the X100 oil immersion objective. The number of Plasmodium parasites in the positive smears was counted against 500 WBC, and the value obtained was multiplied by 16 in order to obtain the Plasmodium parasitaemia level per microlitre of blood. We identified the Plasmodium species using thin smears. The positive smears were given arbitrary alphabetical letters A, B, C, D, and E for confidentiality. Thin blood smears were stained with 10% Giemsa at pH 7.2 and examined as described above.
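For clarity, the parasite density calculation described above (parasites counted against 500 white blood cells, multiplied by 16, which corresponds to an assumed standard count of 8000 WBC per microlitre of blood) can be sketched as follows; the function name and the example count are illustrative assumptions rather than values from the study.

def parasitaemia_per_microlitre(parasites_counted, wbc_counted=500, assumed_wbc_per_ul=8000):
    # Parasites per microlitre = parasites counted x (assumed WBC/uL / WBC counted);
    # with 500 WBC counted and 8000 WBC/uL assumed, this reproduces the x16 factor used above.
    return parasites_counted * assumed_wbc_per_ul / wbc_counted

print(parasitaemia_per_microlitre(120))  # example: 120 parasites against 500 WBC -> 1920 parasites/uL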
Quality Control.
The quality of results was ensured by using freshly prepared Giemsa stain. The standard operating procedures for staining were observed in order to obtain consistent, quality results. Two laboratory technologists read each pair of thin and thick slide preparations independently. The study SOP provided for a third independent microscopist whose duty was to ensure quality control by randomly sampling and reading 38 (10%) of all the slides and by examining any slides with inconsistent findings by either of the first two microscopists.
Data Management and Analysis.
The collected data were entered into an Excel 2007 spreadsheet and then transferred to SPSS version 20.0 for analysis. Proportions were generated from the datasets.
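As a minimal sketch of the kind of proportion reported here, the snippet below computes a prevalence and an approximate 95% binomial (Wald) confidence interval; the confidence interval is an illustrative addition and is not reported in the study itself.

import math

def prevalence_with_wald_ci(positives, total, z=1.96):
    # Point prevalence with an approximate (Wald) 95% confidence interval.
    p = positives / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), p + z * se

p, lo, hi = prevalence_with_wald_ci(5, 380)  # 5 positive units out of 380 sampled
print(f"prevalence = {p:.1%} (approx. 95% CI {lo:.1%} to {hi:.1%})")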
Ethical Approval.
The study was reviewed and approved by the Mbarara University of Science and Technology, Faculty of Medicine Research Ethics Committee (FRC), permission to conduct the study was granted by the Principal Medical Officer of Mbale RBB, and further ethical review and approval was done by the Mbale Regional Referral Hospital Research and Ethics Committee (MRRH-REC).
Results
Three hundred and eighty thick and thin blood films were made. A majority of the blood donors were males (Figure 1).
The mean age of the blood donors was 21 ± 4 years, range 18-22 years (Table 1).
The Distribution of Samples from Each District in the Catchment Area.
The donor blood units sampled came from more than half of the 27 districts in the catchment area (Figure 2).
The state of Plasmodium parasitaemia among the blood units studied is summarized in Table 2.
Discussion
Our results indicate that the prevalence of Plasmodium parasites among ready-to-transfuse blood units was 1.3%. This was lower than expected in this high malaria transmission region. This is epidemiologically and immunologically plausible, especially given that individuals who had potentially acquired immunity to the disease may have been sampled. Furthermore, the screening criteria used to identify potential blood donors may have been so stringent that they eliminated a large proportion of asymptomatic carriers in the community. Nonetheless, the 1.3% represents a malaria pool with a double chance of transmitting the disease: in the community at the time of donation, that is, through mosquito bites, and in the hospital through blood transfusion. In comparison, our findings are far below those of immunological assays in similar surveys. For instance, a malaria antibody prevalence of 7.6% in Saudi Arabia suggests a possibly lower diagnostic accuracy of blood slides for malaria compared to immunological assays, the lower endemicity notwithstanding. In West Africa, in malaria endemic settings, malaria prevalence in donor blood was 7% [18] and 21% [19], despite the similarity in donor age at these and our sites. This could be due to differences in blood donor screening practices on the one hand, or to differences in malaria seasons or donor immunity on the other.
As expected for this region, the dominant species was P. falciparum, with 4/5 positive samples confirming this species. This is consistent with other series in Africa [20]. The role of asymptomatic parasitaemia in the transmission of malaria, and whether or not it requires treatment, remains poorly understood. We found low parasitaemia, probably noninfectious [21], but how long these parasites survive in immune human beings, reproduce, and eventually reach transmissible thresholds requires additional studies. Our thinking is that all asymptomatic parasitaemia is a potential source of infection in the community. In our study we also found P. malariae, a rare malaria parasite in Uganda. Furthermore, because of its long incubation period and less severe clinical features, it is likely that carriers of P. malariae infection do not present for medication early, or, if they do, they do not receive appropriate medication for this parasite. Consequently, it continues to multiply in the body with the potential, over time, to cause significant prevalence and/or coinfection with P. falciparum. We found that blood group O accounted for 4/5 cases of asymptomatic malaria (Table 2). This is consistent with another series that indicated a strong association between ABO blood group and asymptomatic malaria (P=0.022) and a higher rate of parasitaemia in blood group O (P=0.003) [22]. In our study we did not look at other factors, but elsewhere, in similar malaria transmission settings, thrombocytopaenia (<150 X10^9/L) was associated with asymptomatic malaria among blood donors [19]. The implication of our results with regard to blood transfusion and malaria control is potentially significant, because asymptomatic malaria carries the risk of transmitting malaria parasites to the recipients of these blood units, the majority of whom are pregnant mothers and immunologically naive children. Our observations have demonstrated Plasmodia in screened donor blood. We recommend better and more robust ways of screening for malaria in donor blood to protect blood recipients from these infections [23]. Programmes that target the asymptomatic malaria pool should be included in community malaria control programs.
Conclusion
Malaria-causing parasites were present in 1.3% of the donor blood units. This is of clinical and public health significance since it represents a risk of community transmission of malaria through blood transfusion, as well as through mosquito bites. Use of more efficient, highly sensitive, and specific diagnostic techniques would improve the accuracy of asymptomatic malaria surveillance using donor blood.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Ethical Approval
The authors conducted their research within the provisions of ethical standards in Uganda. The Mbale Regional Referral Hospital Research & Ethics Committee (MRRH-REC) approved the study and local permission to conduct the study was obtained from Mbale RBB.
Disclosure
The Mbale Clinical Research Institute (MCRI, http://www.mcri.ac.ug/), a research entity affiliated to the Uganda National Health Research Organization (UNHRO), permits the publication of this manuscript.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors' Contributions
Simon Peter Inyimai, Mosses Ocan, and Benjamin Wabwire participated in various stages of data collection and participated in writing of the manuscript; Peter Olupot-Olupot conceived the idea and wrote the manuscript. All authors approved the final manuscript. | 2018-08-17T21:20:39.672Z | 2018-07-09T00:00:00.000 | {
"year": 2018,
"sha1": "262f2ddb66804e9d82a31991627908b4be49461a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jtm/2018/6359079.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a55e7bf470626c2a40bf0d09d027660a897e7a2",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221571550 | pes2o/s2orc | v3-fos-license | Enhancement of Contact Lens Disinfection by Combining Disinfectant with Visible Light Irradiation
Multiple-use contact lenses have to be disinfected overnight to reduce the risk of infections. However, several studies have demonstrated that the disinfectants affect not only microorganisms but also ocular epithelial cells, which come into contact with disinfectant residues when the lens is reinserted. Visible light has been demonstrated to achieve an inactivation effect on several bacterial and fungal species, and combinations with other disinfection methods have often shown better results than separately applied methods. We therefore investigated contact lens disinfection solutions combined with 405 nm irradiation, with the intention of reducing the disinfectant concentration of ReNu Multiplus, OptiFree Express or AOSept while maintaining adequate disinfection results through combination benefits. Pseudomonads, staphylococci and E. coli were studied with a disk diffusion assay, colony forming unit (cfu) determination and growth delay analysis. A log reduction of 4.49 was achieved for P. fluorescens in 2 h with 40% ReNu Multiplus combined with an irradiation intensity of 20 mW/cm2 at 405 nm. For AOSept the combination effect was so strong that 5% AOSept in combination with light exhibited the same result as 100% AOSept alone. Combining disinfectants with visible violet light is therefore considered a promising approach, as a reduction of potentially toxic ingredients can be achieved.
Introduction
With approximately 125 to 140 million contact lens wearers worldwide [1][2][3] (numbers from 2004 and 2010) the prevention of lens-related infection is a serious healthcare issue. Several ocular diseases are associated with contact lens wear, such as contact lens acute red eye (CLARE), contact lens peripheral ulcer (CLPU) and infiltrative keratitis [4][5][6][7]. Due to the high numbers of contact lens users, even complications with a rare occurrence will concern a considerable number of patients.
The incidence of contact lens related microbial keratitis is 1.9 per 10,000 for daily wear of soft contact lenses in Australia [8] and 1.8-2.44 per 10,000 in Scotland for all types [9], reaching up to 3.09 per 10,000 in Hong Kong [10]. Estimates of risk appear stable over time, as quantified over a 20-year period [11,12]. Contact lens wearers thus have an approximately five- to seven-fold higher risk of microbial keratitis compared to non-contact lens wearers [9,10], with increasing risk for extended or overnight wear.
One of the problems might be the partially insufficient effectiveness of contact lens disinfection solutions when testing isolates other than the microbial test strains given in the normative standard.

The American Society for Microbiology conscientiously defined experimental procedures for determining synergistic effects, which are disk diffusion assays, E-tests for antibiotic susceptibility, checkerboard assays, post-antibiotic effects (PAE) and the Bliss model for biofilm testing [38]. Others claim that, because synergism is a physicochemical mass-action law issue, it has to be calculated with Combination Index (CI) values [51], based on Loewe Additivity.
Foucquier et al. [45] deliver an overview of the mathematical background for calculations of combination effects. The authors divide approaches into effect-based and dose-effect based. "Response Additivity" is defined as the improvement when comparing the combined effect with the additive effect of both single agents, which would be the colloquial understanding of synergy. This definition belongs to the effect-based group of strategies, which inherit some limitations like, in this case, assumed linear dose-effect curves for both agents. Dose-effect-based strategies, however, rely on the mathematical framework of Loewe Additivity [52] considering non-linear dose-effect curves, determining which concentration of each drug alone produces the same effect as the combination, rather than comparing effects of given concentrations. This approach requires a certain amount of data and can rapidly become demanding. Generally, any defined effect level can be used for comparison [46]. Measurement variable can be any parameter giving knowledge about bacterial condition, such as colony forming units [38,41], change of color [53] or OD 600 (optical density at 600 nm) values after a specified incubation [38,40,54], as terms in the equation are dimensionless quantities [46]. From the results, the Combination Index (CI), also called Fractional Inhibition Concentration (FIC), can be calculated for several concentration/dose combinations, which is considered to be the most suitable analysis for synergy testing [45].
In cases of microbial keratitis associated with contact lens wear, predominantly environmental organisms were isolated as causative agents, with P. aeruginosa being the most frequently recovered organism [55][56][57][58]. The strong association between P. aeruginosa and ocular infections might also be caused by a suitable environment for Pseudomonads in the system of lens and storage cases. Microbial keratitis in contact lens wear is frequently associated with the presence of biofilm in the contact lens case [59]. Pseudomonas species are known to be biofilm builders [7] and the storage case gives a good environment for proliferation [59]. In a study of various Pseudomonas aeruginosa isolates some demonstrated the ability to grow to levels above the initial inoculum in one of the chemical disinfectants examined [15].
For this reason, we chose a Pseudomonas strain for most of our experiments. Since we are not allowed to cultivate pathogenic strains in our facilities, experiments were carried out with Pseudomonas fluorescens. In regard to visible light irradiation, it seems that relatives of the same species act similarly [32,40].
In this study we applied a disk diffusion assay, cfu (colony forming unit) determinations on agar plates and nutrient pads, including different procedures for the post-exposure elimination of the disinfection solution. For analysis on agar plates the calculation of Combination Index values based on Loewe Additivity was performed. Furthermore, the monitoring of growth delay, similar to post-antibiotic effect studies (PAE), was applied as method to investigate combination effects of contact lens disinfection solutions and visible light irradiation at 405 nm.
Pseudomonads were cultivated in 535 medium (30 g tryptic soy broth (Sigma-Aldrich Chemie GmbH, München, Germany) per liter) in an overnight culture of 3 mL at 30 • C and 170 rpm. 200 µL of this pre-culture was cultivated in 30 mL fresh medium at 30 • C and 170 rpm until an optical density of 0.35 in mid-exponential phase was reached. For E. coli and S. carnosus the same procedure at 37 • C was applied with M92 medium (30 g tryptic soy broth (Sigma-Aldrich Chemie GmbH, München, Germany), 3 g yeast extract (Merck KGaA, Darmstadt, Germany) per liter) for S. carnosus and LB medium (10 g tryptone (VWR international, Leuven Belgium), 5 g yeast extract (Merck KGaA, Darmstadt, Germany), 10 g sodium chloride (VWR international, Leuven Belgium) per liter) for E. coli. Bacterial cultures were centrifuged at 7000× g for 5 min and the resultant pellet resuspended in phosphate buffered saline (PBS). After a further washing step in PBS the suspension was diluted to the desired population density for experimental use in PBS.
For disk diffusion assays the bacterial solution was diluted to 0.5 McFarland standard, which was approximately 10 8 CFU/mL. Instead of Müller-Hinton-Broth commonly applied for this assay type, 535 medium was used and poured in equally filled dishes with 10 mL per 90 mm diameter dish. For nutrient pad analysis, the solution was adjusted to 6-8 × 10 7 CFU/mL, as the detection limit for the reduction lies one log beyond the used starting concentration and an approximately 6 log reduction was pre-determined for 100% ReNu Multiplus combined with light. For agar plate assays a concentration of 5 × 10 5 to 10 6 CFU/mL was adjusted, referring to the recommendation of the normative standard for contact lens solution testing [60]. Likewise, samples for growth delay analysis were adjusted to a concentration of 5 × 10 5 to 10 6 CFU/mL for the irradiation/disinfection solution exposure treatment. As medium was added for incubation in the microplate reader in a proportion of 1:10, the final concentration for incubation was diluted by one log. The bacterial concentrations indicated represent the concentration in the well already mixed with different concentrations of contact lens solutions. Bacteria were plated on the same media as applied in the fluid culture. Dey-Engley neutralization broth (DEB, Thermo Fisher Scientific, Waltham, MA, USA) was used to eliminate the effect of disinfection solutions after treatment for agar plate and growth delay assays. For nutrient pad assays pseudomonads were incubated on cetrimide pads 14075-47-N (Sartorius, Göttingen, Germany) after membrane filtration.
Untreated controls were analyzed for each assay type to exclude unintended bacterial reduction by environmental factors. In cases where log reductions of sample results had the same algebraic sign as the control, the absolute value of the control was subtracted, otherwise it was ignored. By this means, reductions caused by environmental factors were taken into account, in a manner not to improve inactivation results.
Contact lens disinfection solutions examined in this study were ReNu Multiplus (Bausch+Lomb, Rochester, NY, USA), OptiFree Express (Alcon, Fort Worth, TX, USA) and AOSept Plus (Alcon, Fort Worth, TX, USA). All solutions were used within expiration date.
Irradiation Setup
For irradiation a LED light source of 405 nm was applied (LZ4-40UB00-00U8 (LED Engin, Inc., San Jose, CA, USA). The emission was measured with a spectrometer (SensLine AvaSpec-2048 XL, Avantes, Appelsdorn, The Netherlands), after a pre-heating interval. The measured peak emission was determined at 405.9 nm with a bandwidth of 19 nm. The LED was mounted to a heat sink, which was actively cooled with a fan during experiments to avoid heating the sample. This package was placed on top of a truncated hollow pyramid with a high reflective inside, which ensured that the sample area was irradiated homogenously (described earlier in [61]). Experiments were performed in 48 well plates placed on a black underground to avoid unintentional potentiation of irradiation by light reflection from the white laboratory table. 1 mL of sample was transferred into several wells of a 48 well microtiter plate and the pyramid placed on top of the plate, covering 3 × 5 wells. The average sample temperature measured with an infrared thermometer (Raytek Fluke Process Instruments GmbH, Berlin, Germany) was 23.8 • C, with a maximum of 26.2 • C. Irradiation intensity depended on the experimental series and was adjusted by means of an optical power meter OPM150 (Qioptiq, Göttingen, Germany).
Disk Diffusion Assay
Disk diffusion assays are a technique anchored in routine clinical microbiology, especially in antibiotic susceptibility testing. The measurement parameter is the formation of circular growth inhibition zones, which are caused by diffusion of the applied drug from impregnated disks through the agar medium. No detailed definition of synergy is given for this method in official guidelines, although Wozniak et al. [38] defined synergy in a disk diffusion assay as an increase of the inhibition zone by 2 mm in combined treatment compared to the single treatment values.
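To make the zone-based synergy criterion mentioned above concrete, the following short Python sketch (an illustration only; the function name, the reading of "single treatment values" as the larger of the two single-treatment zones, and the example zone diameters are assumptions) classifies a combined result:

def is_synergistic_zone(zone_combined_mm, zone_light_mm, zone_solution_mm, margin_mm=2.0):
    # Classify a combined treatment as synergistic if its inhibition zone exceeds
    # the larger of the two single-treatment zones by at least 'margin_mm' (after Wozniak et al. [38]).
    return zone_combined_mm >= max(zone_light_mm, zone_solution_mm) + margin_mm

print(is_synergistic_zone(14.5, 10.0, 12.0))  # True: 14.5 mm >= 12.0 mm + 2 mm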
Dilutions of the examined contact lens disinfection solutions in PBS were prepared directly before use to concentrations of 100, 80, 60, 40, 20 and 5%, respectively. 100% refers to the formulation of the specific disinfection solution that is commercially available. Bacterial solutions were irradiated as described above and plated on 535 agar plates of defined thickness. Irradiation doses used for disk diffusion assays have been 0 J/cm 2 as control, and 35 J/cm 2 , 70 J/cm 2 and 140 J/cm 2 , achieved in different time intervals with an intensity of 20 mW/cm 2 . For the plating technique a volume of 1 mL was distributed by rotary movement of the dish, letting plates air dry afterwards. As the large volume would increase the applied bacterial concentration designed for a 100 µL application, the suspensions were diluted in PBS by 1 log before plating. Soaked disks were placed manually with flamed forceps. After incubation for 24 h at 30 • C, inhibition zones were determined manually by fitting circles to a photograph of the plates in an image processing program. All plates were prepared in duplicates and each experiment was repeated three times. P. fluorescens and all three contact lens solution types were investigated in this assay.
Determination of Bacterial Reduction with Nutrient Pads
Determinations of cfu were performed on P. fluorescens for combinations of ReNu Multiplus multipurpose solution and 405 nm visible light at a dose of 140 J/cm 2 . This dose was chosen as it is easily reachable within an overnight disinfection, even when considering a low-cost LED as a potential irradiation product instead of the high-power LED used in the test setup. On the other hand, this dose exhibits a moderate effect when applied alone so that a combination treatment will still result in bacterial concentrations above the detection limit. Concentrations of 5, 20, 40, 60, 80 and 100% of ReNu Multiplus were tested on P. fluorescens as single treatment and in combination with 405 nm irradiation at 20 mW/cm 2 in a time interval of 2 h, as well as the effect of light alone in PBS (0% ReNu Multiplus). The bacterial starting concentration was 6-8 × 10 7 CFU/mL. 100 µL sample volume was diluted serially in PBS. A volume of 500 µL of the desired dilution was then immediately subjected to membrane filtration to eliminate the disinfection solution. Bacteria remained on the filters with a pore size of 0.45 µm, which were placed on moistened nutrient pads. After incubation at 30 • C for 30 h, disks were photographed and colonies enumerated manually. The resultant count was converted to CFU/mL, and in log reduction referring to the plated starting concentration. Each experiment was performed in triplicates and repeated three times.
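As a minimal sketch of the conversion from colony counts to CFU/mL and log reduction used throughout these assays, the following Python snippet may help; the function names and the example values (87 colonies, a 10^-2 dilution, a 7 × 10^7 CFU/mL start) are illustrative assumptions, not data from this study.

import math

def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    # Convert a colony count back to CFU/mL of the undiluted sample.
    return colonies / plated_volume_ml * dilution_factor

def log_reduction(start_cfu_per_ml, surviving_cfu_per_ml):
    # Log10 reduction relative to the plated starting concentration.
    return math.log10(start_cfu_per_ml / surviving_cfu_per_ml)

survivors = cfu_per_ml(87, plated_volume_ml=0.5, dilution_factor=100)  # 87 colonies, 0.5 mL of a 10^-2 dilution
print(round(log_reduction(7e7, survivors), 2))  # roughly 3.6 log reduction from a 7e7 CFU/mL start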
Determination of Bacterial Reduction with Agar Plates
Just as for cfu determinations on nutrient pads, an irradiation dose of 140 J/cm 2 was chosen. The bacterial starting concentration was 5 × 10 5 to 10 6 CFU/mL, as recommended in the normative standard for contact lens solution testing. In this test series three different irradiation intensities were selected to reach this dose within different time intervals. With 10, 20 and 40 mW/cm 2 the defined dose was reached within 4, 2 and 1 h irradiation time respectively. This will automatically lead to different residence times for the disinfection solution, whereas 4 h is the minimum disinfection time given by the contact lens solution manufacturer. Each experiment for the combination effect was performed in triplicate and repeated three times.
To be able to calculate the CI value, reference experiments for the disinfection procedures applied separately were carried out in triplicate and repeated twice. Irradiation with 405 nm at 10, 20 and 40 mW/cm^2 on bacteria in PBS, as well as the effect of ReNu Multiplus without irradiation over intervals of 4, 2 and 1 h at concentrations of 0, 5, 20, 30, 40, 50, 70, 80 and 100%, served as references for the combined experiments.
100 µL of each sample was transferred to 900 µL Dey-Engley neutralizing broth (DEB) and incubated for at least 15 min at room temperature. DEB samples were diluted to proper bacterial concentrations in PBS and plated manually with a glass spatula. After incubation for 30 h at 30 • C agar plates were photographed and enumerated manually. The resultant count was converted to CFU/mL, and in log reduction referring to the plated starting concentration. Each experiment was performed in triplicate and repeated at least three times.
Based on Loewe Additivity, CI values are then calculated as follows:

CI = a/A + b/B,    (1)

where a and b are the concentrations (or doses) of each agent used in the combination, while A and B are the concentrations (or doses) of the agents that are necessary to reach the same effect when used separately. Combination Indexes are generally reported without any assessment of the degree of certainty [45], but as investigations of biological systems inevitably contain experimental errors, we used the definition from Chou [46] in a conservative way and only categorized results of "moderate synergism" or better as an enhanced outcome.
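To make Formula (1) concrete, the short Python sketch below computes a Combination Index from the combination doses and the single-agent doses required for the same effect; the numerical values are invented for illustration and do not reproduce any entry of Table 1.

def combination_index(a, A, b, B):
    # Loewe-additivity Combination Index: CI = a/A + b/B
    # (CI < 1 indicates synergy, CI ~ 1 additivity, CI > 1 antagonism).
    return a / A + b / B

# Invented example: a combination of 30% disinfectant with a 140 J/cm^2 light dose achieves
# an effect that would require 60% disinfectant alone or 250 J/cm^2 of light alone.
print(round(combination_index(a=30, A=60, b=140, B=250), 2))  # 1.06 -> roughly additive in this example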
Determination of Bacterial Reduction via Regrowth Behavior
In antibiotic testing, where combined testing is frequently performed, post antibiotic effects (PAE) indicate the delay of the regrowth after the exposure to a drug over a certain period and can likewise be used to monitor the differences between single drugs and their combination. The difference to checkerboard assays is that the exposure time is limited, and the drug is removed or eliminated thereafter. As continuous irradiation is not possible inside a microplate reader during incubation, the effect of the disinfection solution equally has to be stopped to achieve comparable results. As this scenario would also represent a realistic application for contact lens care, this method was chosen in place of a checkerboard assay in this study.
The exposure time was set to 4 h as this is the smallest time interval given in manufacturer instructions for contact lens disinfection solutions. This leads to an irradiation intensity of 10 mW/cm 2 to reach a dose of 140 J/cm 2 . Furthermore, higher irradiation intensities of 20 and 40 mW/cm 2 were tested with exposure times of 2 and 1 h, respectively. Contact lens disinfection solution concentrations were tested at 40, 30, 20 and 5% of the commercially available formulation. Besides ReNu Multiplus, another multipurpose solution was examined against P. fluorescens. OptiFree Express has often been reported to achieve high bacterial impact, but at the same time is aggressive to human ocular epithelium [62]. Therefore, it would be desirable to reduce concentration of ingredients through a combined use with light. Besides another multipurpose solution, further strains (S. carnosus and E. coli) were tested with this technique together with ReNu Multiplus and visible light.
After exposure, samples of 100 µL were immediately transferred to 900 µL of DEB to neutralize the effect of the disinfection solution. This was also performed with samples that have only been irradiated in PBS. After incubation for at least 15 min at room temperature 20 µL of each sample was transferred into a 96 well plate and mixed with 180 µL of specific growth medium. The violet color of DEB thereby was diluted by factor 1:10 so that the sample was translucent enough to monitor increasing turbidity through growth in a microplate reader. Microtiter plates were incubated in a Clariostar Plus (BMG Labtech, Ortenberg, Germany) at 30 • C for P. fluorescens and at 37 • C for all other strains for at least 30 h with measurement of OD 600 in 5 min intervals and shaking for 30 s before each measurement, ensuring almost continuous rotary growth conditions. Additionally, sequential ten-fold dilutions of each strain in untreated condition were measured with the same protocol. Each experiment was repeated three times. Depending on how many bacteria were inactivated during exposure of light and/or disinfection solution, the regrowth will be delayed. Based on the untreated dilutions, a calibration curve could be prepared, putting into context the measured OD value at a certain time towards the underlying log reduction.
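A minimal sketch of how such a calibration could be implemented is given below, assuming (purely for illustration) that the time a well needs to cross a fixed OD600 threshold shifts linearly with the log10 of the viable inoculum; the function names, threshold and calibration values are hypothetical and do not reproduce the study's actual calibration.

import numpy as np

def time_to_threshold(times_h, od_values, threshold=0.2):
    # First time point at which the OD600 trace crosses the chosen threshold.
    idx = int(np.argmax(np.asarray(od_values) >= threshold))
    return times_h[idx]

def delay_slope(dilution_logs, crossing_times_h):
    # Fit a line relating log10(inoculum) to time-to-threshold for the untreated dilution series.
    slope, _intercept = np.polyfit(crossing_times_h, dilution_logs, 1)
    return slope

def log_reduction_from_delay(sample_time_h, untreated_time_h, slope):
    # Translate the regrowth delay of a treated sample into a log10 reduction.
    return abs(slope) * (sample_time_h - untreated_time_h)

# Invented calibration: each 10-fold dilution delays threshold crossing by about 1.5 h.
slope = delay_slope(dilution_logs=[0, -1, -2, -3], crossing_times_h=[6.0, 7.5, 9.0, 10.5])
print(round(log_reduction_from_delay(sample_time_h=11.2, untreated_time_h=6.0, slope=slope), 2))  # ~3.47 log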
Disk Diffusion Assay
To assess the antibacterial effect of contact lens disinfectant solutions by disk diffusion assay, we applied ReNu Multiplus and OptiFree Express at concentrations of up to 100% to the agar plates. However, the multipurpose solutions used for this study did not form clear inhibition zones at any concentration tested. As we assumed this to be caused by the molecular structure of the active components, which may be unable to pass through the cross-linked agar, we decreased the agar concentration in the plates down to the limit of solidity in order to achieve greater pore sizes. However, with 110 mg/10 mL agar there was still no enhanced effect on the appearance of inhibition zones formed by the multipurpose solutions alone or in combination with visible light, even at 100%. With unclear inhibition zones, only visible with background lighting (Figure 1aII), disk diffusion results with multipurpose solutions were considered not analyzable. Cfu determinations of contact lens disinfection solutions, however, showed a considerable decrease in bacterial count. With the hydrogen peroxide based solution AOSept, in contrast, clearly visible inhibition zones were detectable (Figure 1aIII).

As can be seen in Figure 1aI, the irradiation dose has to be selected carefully for this technique, as otherwise the semiconfluent growth of colonies required for the development of clearly visible inhibition zones cannot be achieved. The disinfection effect increases with the irradiation dose, as shown in Figure 1b. The highest dose analyzable with a continuous bacterial lawn was 140 J/cm^2 at 405 nm. Likewise, the inhibition zones increase with the percentage of hydrogen peroxide solution. The dotted line on the graph represents the disinfection result when using the disinfectant at 100% concentration, as commercially available. Every data point above this line, leading to greater inhibition zones, shows the benefit of combining conventional contact lens disinfection techniques with visible light irradiation. The combination of 140 J/cm^2 irradiation at 405 nm with 5% of the original concentration of the disinfection solution achieves a similar result to the 100% solution without irradiation. Any higher concentration leads to even better results in combination with 405 nm.

Determination of Bacterial Reduction with Nutrient Pads

Testing the disinfection solution ReNu Multiplus as a single treatment with nutrient pads, nearly no inactivation was observable (Figure 2a). For 100%, which represents the pure commercially available solution, only a 0.48 log reduction was achieved in 2 h at 20 mW/cm^2 with a bacterial starting concentration of 6-8 × 10^7 CFU/mL. In contrast, a combination treatment with ReNu Multiplus 100% and visible light of 405 nm was quite successful, with almost complete inactivation of the bacteria. Even at lower disinfectant concentrations, considerable bacterial reduction was achieved. Plotting the benefit of the combination treatment compared to the sum of the single approaches, colloquially called synergy (Figure 2b), it becomes visible that the relation is not linear. The higher the applied disinfection solution concentration, the greater is the overall combined bacterial inactivation. However, the increase of the benefit slows down for higher concentrations. The most distinct difference in the gradient of achieved log reductions lies between 20 and 40% ReNu Multiplus content, with 2.86 and 4.38 log, respectively. Therefore 40% of ReNu Multiplus in combination with visible light of 405 nm appears to be the best compromise between reducing the disinfectant concentration and achieving optimal inactivation results. All other experiments with different methods were therefore carried out with disinfectant concentrations in this range. Light irradiation with 405 nm alone led to a 1.42 log reduction. With nutrient pad analysis we determined that any combination, and even light alone, produces better results in a 2 h treatment than application of ReNu Multiplus at 100% for a P. fluorescens concentration of 6-8 × 10^7 CFU/mL.
Determination of Bacterial Reduction with Agar Plates
To determine whether the results of the nutrient pad analysis can be confirmed with other methods, agar plate determinations of bacterial inactivation were carried out (Figure 3). In Figure 3, the inactivation effect tested with cfu determinations on agar plates is depicted for different irradiation intensities. The bacterial starting concentration was 5 × 10^5 to 10^6 CFU/mL. Comparing the different investigation methods, the combination results correspond relatively well. For a 20 mW/cm^2 irradiation in PBS without disinfectant, a 1.42 log decrease of bacterial counts is achieved on nutrient pads (Figure 2a). Comparing different irradiation intensities reaching the same dose of 140 J/cm^2 over various exposure times, it becomes evident that the highest irradiation intensity is not necessarily the best choice. At 40 mW/cm^2 for 1 h (140 J/cm^2) the lowest reduction results are achieved, with only a 2.01 log decrease at 40% ReNu Multiplus in combination with 405 nm irradiation. The best combination results were achieved at 20 mW/cm^2 and a 40% concentration of ReNu Multiplus, with a 4.49 log reduction.

Loewe Additivity

For comparing the combination effects with the single-approach results, ReNu Multiplus and 405 nm alone were tested not only at concentrations/doses analogous to the combinations, but over the whole concentration range and over an extended dose range (Figure 4). This allows the calculation of CI values based on Loewe Additivity, which directly gives a benchmark for synergy categorization. Interestingly, there was no large difference between exposure times of 1, 2 and 4 h for ReNu Multiplus at any concentration (Figure 4d). There is a tendency towards better efficacy for longer durations, but the deviation is lower than expected. Only at 80% disinfection solution did the 4 h results exceed the results of shorter durations, with a decrease of 4.60 log compared to 3.49 and 3.38 log at 2 and 1 h, respectively. At 100% ReNu Multiplus there were no colonies visible after any exposure time starting from a concentration of 5 × 10^5 to 10^6 CFU/mL. Uniform behavior over disinfection solution concentration was assumed here, represented by a linear fit. For references at 405 nm irradiation, bacterial solutions in PBS were irradiated with 10, 20 and 40 mW/cm^2 over different intervals reaching a dose of 245 J/cm^2. The typical behavior for visible light photoinactivation, with a mostly linear slope in a half-logarithmic representation and an additional shoulder at the beginning (non-mono-exponential), becomes evident here. Four data points within the linear section were obtained for each irradiation intensity. A linear fit through those points was used as reference for 405 nm irradiation as a single approach to calculate the doses necessary to reach a certain effect. In Table 1 the calculated CI values are presented, based on the combination results shown in Figure 3 and the trendlines from Figure 4, calculated with Formula (1).

For 20 mW/cm^2 and 2 h irradiation all values lie considerably below 0.85, indicating moderate synergism for 20 and 40% and even synergism for 30%. At this irradiation intensity, the absolute log reductions also reach the highest values, with 4.09 log at 30% and 4.49 log at 40%. It is noteworthy that at 10 mW/cm^2 as well as 20 mW/cm^2 the CI values for 30% ReNu Multiplus content are the lowest, showing the best beneficial effect for the combination. This fits with the decrease of the slope for higher concentration values and the leap between 20 and 40% noticed in the nutrient pad results (Figure 2b). None of the combinations examined achieves better results than the samples for ReNu Multiplus at 100% (at least in the test series with agar plates and low bacterial concentrations), as none of the results in column B shows a percentage over 100%. However, good log results were achieved at 30 and 40% for 10 mW/cm^2 and 20 mW/cm^2. Regardless, CI values for 10 mW/cm^2 and also for 40 mW/cm^2 lie close to or even above 1, indicating no synergism. Nevertheless, such a combination approach could be employed in practice regarding overall reductions, taking into account that they have been achieved with clearly lower concentrations. According to the testing on agar plates the overall result cannot be improved, but the concentration of antibacterial ingredients in the formulation of contact lens solutions could be reduced when combining with light to preserve the consumer's epithelial health, while still achieving acceptable results.
Effectiveness Dependency of Multipurpose Solution ReNu Multiplus on Bacterial Concentration
Comparing the inactivation results for ReNu Multiplus as a single method achieved with agar plates (Figure 4d) and nutrient pads (Figure 2a), great differences were observed. While no colonies were visible after 2 h of exposure to 100% ReNu Multiplus and subsequent distribution on agar plates (5.5 log reduction), only a 0.48 log reduction was reached with membrane filtration and nutrient pads. A major difference in the two assays was the bacterial starting concentration with 5 × 10 5 to 10 6 CFU/mL for the agar plate assay and 6-8 × 10 7 CFU/mL for the nutrient pad analysis. We therefore assumed that the effectiveness of ReNu Multiplus was highly dependent on the bacterial concentration. Testing this hypothesis, we could show that the log reduction decreases with increased bacterial concentration (Figure 5a). For a 2 h exposure with a 10 6 CFU/mL starting concentration, again no CFU were observable. Yet, at 5 × 10 7 CFU/mL only 1.9 log reduction, and at 10 8 CFU/mL only 1.4 log reduction, were achieved respectively, referring to the specific starting concentration.
In Figure 5b the log results for the combination approach with visible light are presented in direct comparison between agar plate and nutrient pad assay. At samples where 405 nm was additionally applied to the different ReNu Multiplus concentrations, the differences in the bacterial inoculum do not seem to play a pronounced role. This is contrary to the disinfection results with 100% ReNu Multiplus as single approach, where an increased bacterial load leads to noteworthy loss in effectiveness.
Determination of Bacterial Reduction via Regrowth Behavior

In this experimental series the growth of bacteria after treatment with single or combined disinfection approaches is investigated. In the literature, differences in behaviour towards disinfection techniques are often detected between plated results and analytical methods in fluids. The effect of ReNu Multiplus as a single approach against P. fluorescens (Figure 6a) does not show a high impact, and at 5% and 20% growth can occur instead of a reduction, which was already noticeable in the two other analysis methods. Log reductions at 40%, the highest ReNu Multiplus concentration tested here, varied between 0.35 and 0.64, depending on the exposure time. Compared to the results achieved with agar plates, with 0.75 (1 h), 1.45 (2 h) and 1.39 (4 h) log reduction at 40%, and the 0.0 log achieved with nutrient pads in 2 h at 40%, the values reached with growth analysis lie in the middle. For irradiation with 405 nm in PBS (0%), where 0.52 (1 h/40 mW/cm^2), 0.48 (2 h/20 mW/cm^2) and 0.97 (4 h/10 mW/cm^2) log were measured by growth delay, the values obtained with agar plates, reaching 0.30 (1 h/40 mW/cm^2), 0.85 (2 h/20 mW/cm^2) and 0.82 (4 h/10 mW/cm^2) log reduction, fit acceptably. Here the nutrient pad value of 1.42 log (2 h/20 mW/cm^2) deviates upwards.

For the combination results, a large increase occurred between 30 and 40% disinfectant concentration, at least for the experiments at 4 and 2 h duration, i.e., 10 and 20 mW/cm^2 irradiation intensity. This was already noticed in the nutrient pad analysis. Log reductions for 40% ReNu Multiplus combined with light, determined by growth analysis and agar plates, are 4.48 and 3.86, respectively, for 10 mW/cm^2, and 3.24, 4.49 and 4.38 for 20 mW/cm^2 by growth, agar plate and nutrient pad analysis, respectively. For concentrations of 30% and 20% ReNu Multiplus, log reduction values determined with growth analysis are considerably lower than the values achieved with the two other methods. Nutrient pad and agar plate results, however, match well at 20%, with a 2.76 log reduction (2 h/20 mW/cm^2) on agar plates and a 2.86 log reduction (2 h/20 mW/cm^2) with nutrient pads. It seems that at disinfectant concentrations below 40%, plating techniques deliver more optimistic results than analysis in fluid.

In the experimental series of growth delay analysis, another multipurpose solution besides ReNu Multiplus was tested (OptiFree Express), which is known to be rather aggressive to bacteria as well as to the ocular surface. Results are shown in Figure 6b, where it stands out that, down to 20% of OptiFree Express, all bacteria were killed with the solution alone, with the exception of the 1 h exposure with 20%, where a 3.14 log reduction was measured. At 5%, however, only reductions lower than one log were achieved. The combination with light does not seem to have any influence on the results, neither positive nor negative.
For E. coli ( Figure 6c) the inactivation effect of ReNu Multiplus is comparably stronger than for P. fluorescens. The same applies for irradiation with 405 nm. In each case, the combination approach achieves higher log reductions for E. coli than the specific concentration of ReNu Multiplus alone. Comparing the combination with 405 nm alone, 40%, 30% and 20% show better or equal results. The overall results for the combination of ReNu Multiplus and 405 nm investigated in the growth analysis are lowest for E. coli compared to P. fluorescens and S. carnosus. Nevertheless, with only 40% of ReNu Multiplus, 3.05 log can be achieved when combined with 405 nm at 10 mW/cm 2 for 4 h.
For S. carnosus (Figure 6d) ReNu Multiplus as a single approach achieves better results than for P. fluorescens but less inactivation than for E. coli. The impact of light was comparable with the E. coli results; for 10 mW/cm^2, a reduction of as much as 4.11 log was reached. In the combination approach, the difference between the irradiation intensity of 10 mW/cm^2 and the other intensities is clearly more pronounced than it is in the experiments with P. fluorescens or E. coli. At the same time, it has to be noted that the error bars are comparably large here. There is no indication of synergistic behaviour exceeding the sum of the single approaches. Apart from 5% for 1 and 2 h, the combination clearly exhibits a stronger effect than ReNu Multiplus alone. Compared with light alone, however, there is only a slight increase of 0.86 log at 20 mW/cm^2/2 h with 40% in the combination, while most results are similar to light irradiation alone and sometimes even reach fewer log reductions. For S. carnosus tested with growth delay, the combination with ReNu Multiplus does not seem to lead to an improved outcome compared to irradiation in PBS, but either light approach achieves better results than ReNu Multiplus alone at the concentrations examined.
Altogether, the growth analysis seems to indicate that lower irradiation intensities at longer exposure times are more effective in the combination approach than higher irradiation intensities at shorter durations. For ReNu Multiplus as a single method, Pseudomonas sp. seem to be the microorganism most difficult to inactivate, which matches literature data [53] and was the reason for choosing it as principal strain in this study. The addition of irradiation increases the effectiveness of ReNu Multiplus. At a 40% concentration at least 3 log reduction could be achieved for all bacterial species tested with the combination approach.
Discussion
In this study we tested the combination of contact lens disinfection solutions with visible violet light of 405 nm. Combining different approaches has a long history not only for disinfection techniques, but also for medical therapies. For just as long, experts have been discussing how to quantify these results. Dose-effect-based strategies seem advantageous as is explained in detail in [45], because they do not have limitations through assumptions such as linearity. Furthermore, it is recommended to use several different methods to come to a conclusion. In our investigations, we often achieved varying results for the same parameters, when testing with different analytical methods. Nevertheless, related tendencies are obvious in all test methods.
The combination effect is assumed to increase with light dose. This was observed at disk diffusion testing. With all other test methods, a fixed dose of irradiation (140 J/cm 2 ) was used. Synergism of pure H 2 O 2 combined with blue light of 470 nm has previously been reported in S. aureus [44]. Unfortunately, ReNu Multiplus and OptiFree Express solutions did not form clear inhibition zones on agar plates even with reduced agar concentrations. A positive combination effect could be observed for the hydrogen peroxide solution AOSept, while it is only a presumption that this would also be valid for multipurpose solutions. As high concentrations of bacteria (approximately 10 8 CFU/mL) are applied for disk diffusion assays to produce a dense bacterial lawn, bacterial concentration dependency of multipurpose solutions, as observed in Figure 5 for colony counts, could be the reason for the absence of clearly visible inhibition zones.
As the effect of light irradiation alone increases with the dose [31,40,[63][64][65], a similar dose dependency for a combined application appears likely. However, an important fact in combination testing is that it is not possible to predict the results, as some drugs have several targets or independent antimicrobial mechanisms [51]. A combination of photodynamic therapy and various antibiotics, for example, showed a decrease in development of resistance for some drugs while for other antibiotics resistance was acquired through the combination with PDT [66].
Therefore, any combination of two methods has to be investigated separately, and general statements about synergy cannot simply be assumed [46]. The same two approaches combined at different dose/concentration levels can lead to very different results [39]. Similarly, in Table 1, where CI values are determined, 20 mW/cm 2 and 30% ReNu Multiplus lead to explicit synergy with a CI of 0.66, while 10 mW/cm 2 at the same concentration of ReNu Multiplus is only additive with a CI of 0.92. At a light intensity of 40 mW/cm 2 even moderate antagonism with a CI of 1.28 occurs. At the same time, it is important to clarify that for practical considerations the most important issue is not to attain mechanistic synergy but to achieve a high antibacterial impact. The occurrence of synergy does not necessarily coincide with the best overall results, because the highest antimicrobial effect can occur in the absence of synergy. For product design, the overall reduction is therefore the relevant measurand, whereas the combination index only grades how much the combination gains relative to its single components.
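For readers who want to reproduce such numbers, the sketch below uses the classical Chou-Talalay-style definition CI = d1/D1 + d2/D2, where d1 and d2 are the light dose and solution concentration used together to reach a given effect and D1, D2 are the single-method doses/concentrations reaching the same effect; this definition and all numbers here are illustrative assumptions and are not taken from Table 1.

```python
def combination_index(d1, d2, D1, D2):
    """Chou-Talalay-style combination index.

    d1, d2: dose/concentration of each agent used together for a given effect
    D1, D2: dose/concentration of each agent alone producing the same effect
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / D1 + d2 / D2

# Purely illustrative numbers (not from Table 1): the combination reaches a
# given log reduction with 140 J/cm^2 of light plus a 30% solution, whereas
# 400 J/cm^2 of light alone or a 60% solution alone would be needed.
print(f"CI = {combination_index(140, 30, 400, 60):.2f}")
```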
Concerning the irradiation intensity in combination with the multipurpose solution ReNu Multiplus, our results indicate that lower intensities applied over a longer exposure period lead to higher inactivation than higher irradiation intensities applied for shorter durations reaching the same dose. The results achieved with agar plates show that 40 mW/cm 2 irradiation does not match the inactivation effect of 10 or 20 mW/cm 2 (Figure 3) in combination with ReNu Multiplus. The same tendencies are observable in the growth delay analysis for P. fluorescens, E. coli and S. carnosus. Combining this observation with the assumption of an increasing effect at higher doses, applications with long irradiation intervals seem to be advantageous.
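The intensity-versus-time trade-off follows directly from dose = intensity x time; a quick calculation shows how long each intensity must be applied to deliver the fixed 140 J/cm 2 dose used in the growth delay experiments.

```python
# Exposure time required to reach a fixed radiant dose at different intensities.
dose_J_per_cm2 = 140.0
for intensity_mW_per_cm2 in (10, 20, 40):
    seconds = dose_J_per_cm2 / (intensity_mW_per_cm2 / 1000.0)  # mW -> W, W = J/s
    print(f"{intensity_mW_per_cm2} mW/cm^2 -> {seconds / 3600:.1f} h")
# prints roughly 3.9 h, 1.9 h and 1.0 h, respectively
```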
OptiFree Express, another multipurpose solution with a very potent antibacterial effect, but likewise unhealthy for the consumer [62], shows a totally different reaction pattern than ReNu Multiplus. The solution, which kills all bacteria at a concentration of just 20%, rapidly loses activity at a concentration of 5%, while the effect of ReNu Multiplus decreases continuously with gradual dilution. For OptiFree Express the addition of light does not seem to improve effectiveness. An opposite effect can be observed for ReNu Multiplus against S. carnosus (Figure 6d) where the irradiation with light delivers the main impact. Only for 40% at 20 and 40 mW/cm 2 does the addition of ReNu Multiplus lead to an increase higher than the effect of 405 nm alone. It is therefore recommended to further investigate other contact lens disinfection solutions.
A noticeable finding of this study is the large difference in ReNu Multiplus effectiveness when examined with nutrient pads compared to agar plates. The effectiveness of multipurpose solutions under different experimental conditions may depend on the bacterial inoculum. The normative standard for testing contact lens solutions [60] suggests a starting concentration between 10 5 and 10 6 CFU/mL. The nutrient pad experiments, however, were carried out with an inoculum of 6-8 × 10 7 CFU/mL. In fact, the log reduction decreased considerably with rising bacterial load (Figure 5a) when testing ReNu Multiplus as a single method. This could lead to severe clinical problems, as total viable bacterial counts between 10 6 and 10 8 /mL were found in 13 out of 18 contact lens cases of patients with corneal infiltrative infections using multipurpose solutions [67]. When testing the combination of ReNu Multiplus with 405 nm irradiation, the bacterial concentration did not seem to play a pronounced role. Irradiation procedures with visible light are less dependent on the bacterial inoculum as they are based on endogenous photosensitizers, which increase in parallel with the bacterial concentration. Only absorption and scattering issues seem to limit the effectiveness at high bacterial concentrations [31]. It is remarkable that, in spite of the marginal single-method impact of ReNu Multiplus at high bacterial loads, the combination effect in the presence of ReNu Multiplus increases to values well exceeding the effect of light alone.
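The log reductions discussed throughout are simply log10 ratios of control to surviving counts; the inoculum dependence described above means this ratio shrinks as the starting CFU/mL rises. A minimal sketch with made-up counts:

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """log10 reduction of viable counts relative to the untreated control."""
    return math.log10(cfu_control / cfu_treated)

# Hypothetical counts illustrating how the same treatment can look weaker
# at a higher starting inoculum.
print(log_reduction(1e6, 1e3))   # 3.0 log at a low inoculum
print(log_reduction(7e7, 7e6))   # 1.0 log at a high inoculum
```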
Concerning the plausibility of the investigation of a non-pathogenic surrogate, we evaluated the literature data available for photoinactivation of Pseudomonads. P. aeruginosa strains are among the most often examined microorganisms regarding visible light inactivation [33], but results for other representatives of the genus are scarce. Applying 400 nm at a dose of 100 J/cm 2 on P. fluorescens, Angarano et al. [68] achieved a 0.5 log reduction, which is in good accordance with our results.
Maclean et al. [31] achieved a 1 log reduction of P. aeruginosa (NCTC 9009) at a dose of 42.9 J/cm 2 with 405 nm light of 10 mW/cm 2 irradiation intensity. Fila et al. [40] examined a broad range of P. aeruginosa strains, including wild-type strains, drug-sensitive clinical isolates and multi-drug-resistant clinical isolates, with very similar behaviors of 7 log reduction at around 50 J/cm 2 . Depending on whether the average dose is considered or the shoulder is taken into account, this corresponds to 7-12 J/cm 2 per 1 log reduction. Gupta et al. [69] isolated a P. aeruginosa strain from patients with arthroplasties for which an averaged dose of roughly 123-133 J/cm 2 of 405 nm irradiation was needed per 1 log reduction.
To our knowledge, these are the most extreme examples showing the upper and lower values for P. aeruginosa eradication measured to date, the variation probably caused by differences in the setup or test protocol, as different strains examined with the same protocol react similarly [40]. Still, with 154 J/cm 2 per 1 log reduction of P. fluorescens at 20 mW/cm 2 , our result lies above those values. It seems that P. fluorescens is less susceptible to 405 nm than its pathogenic relative.
An appropriate surrogate for assessing the performance of a disinfection method should be conservative, i.e., it should survive longer than the target organism [70]. As this seems to be the case with our choice of a Pseudomonad representative, we believe that the results can be considered meaningful. Nevertheless, this technique has to be tested with the pathogenic species P. aeruginosa according to the standard DIN EN ISO 14729 for contact lens disinfection equipment prior to routine usage.
The reasons for combination testing can be various with different favorable outcomes, such as increasing the effectiveness or decreasing the dosage while increasing or maintaining the same efficacy to avoid toxicity [46]. Minimizing or slowing down the development of drug resistance can also be a motivation [46]. This study was designed mainly to address the second aspect. Meyer et al. [71] describe the reduction of a concentration/dose to reach the same effect as before when adding a second component as "synergistic potency", which is useful to apply in applications with side effects, in comparison to "synergistic efficacy" where the aim is to enhance the final result by use of the same drug-concentrations as before.
As mentioned before a considerable volume of aggressive ingredients is stored in the polymeric material of contact lenses after disinfection [18]. As contact lens solutions are known to have adverse effects on the patient's eye [19][20][21][22][23][24][25][26], a reduced concentration will decrease the patient's risk of epithelial damage. Since some frequently used contact lens solutions are already at the limit of efficacy, a reduction of the formulation is often not possible. For several multipurpose solutions, including ReNu Multiplus and OptiFree Express, it was shown, however, that diluting the original concentration in PBS led to higher cell viability and integrin expression [72], using concentrations between 1% and 10%, compared to 100%. Another study that investigated dilutions of three different multipurpose solutions on mouse fibroblasts reported that a 25% dilution of all solutions tested could be considered non-toxic [73]. Therefore, reducing contact lens solution concentrations seems favorable. We tried to achieve this by a combination with visible violet light of 405 nm.
It was already proven that light as a single method is usable to meet the criteria of contact lens disinfection and a prototype of an applicable, as well as commercially suitable, system has been developed [37]. The combination of disinfection solution and visible light might not only overcome the problems of efficacy limitation of disinfectants due to biocompatibility issues; the existence of a second technique may also prevent complete failure of a specific lens care product as happened in 2006 with an outbreak of Fusarium solani keratitis in several parts of the world [74]. Maintaining a system with two different disinfection strategies would not so easily lead to complete ineffectiveness.
Following on from the disinfection properties of visible light irradiation, the possible impact on the material characteristics of contact lenses has to be investigated prior to the consideration of translation into routine usage. It has to be ensured that not only the required microbiological parameters are met, but simultaneously the lens itself would not be influenced detrimentally. Preliminary tests, applying irradiation doses simulating cumulative exposure over monthly use, revealed only slight changes concerning transmission, still well within the limits defined by the standard DIN EN ISO 18369-2 (data not shown). However, differences to this test protocol can occur in routine use due to rubbing of the lens, so it is recommended to perform further tests concerning material compatibility, including examination of mechanical stability.
Conclusions
The combination of contact lens disinfection solutions with the application of visible light irradiation could provide the same antimicrobial results as commercially available disinfection systems but with much less toxicity. While only some of the combinations with ReNu Multiplus investigated in this study came close to the disinfection effect of the pure commercial disinfection solution, the results collectively suggest that with a modest increase in concentration or exposure time the same impact as provided by the commercial ReNu Multiplus formulation may be reached. Combination with light was especially effective against Pseudomonas, against which the effectiveness of ReNu Multiplus alone was problematic. For the hydrogen peroxide solution examined, the combination effect with visible light was so strong that even 5% of AOSept was sufficient to reach the same result as the current 100% formulation. It was also shown that an additional disinfection technique might be advantageous, especially because light inactivation, as well as a combination approach, does not | 2020-09-10T10:02:13.976Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "a54f78e62c3a6b3f687e92582ba846a9407a6e0d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/17/6422/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17f1be808b4bf5b6934c7fe563fd83a5a48d0482",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
233296117 | pes2o/s2orc | v3-fos-license | Two-Group Flux Analysis of Neutrons which enter, internally scatter, and often escape a Shielding Layer of Iron -- with respective Group Settings circa one MeV and kilo-eV
It has long been recognized that radiation transport theory is the foundation for the planning and analysis of X-ray (gamma-ray) radiation therapy and for imaging. On less common but appropriate occasions, neutron radiation is used as an alternative to X-rays or gammas in oncological treatments or in imaging of patients. The following work is also potentially of interest to radiation safety planners. Especially in regard to uses of neutron beams, we introduce and present a deterministic and semi-analytical method for doing transport analysis on neutrons, which are judiciously distributed into two energy groups. There are advantages to doing such 2-group and higher multi-group analysis of radiative particles (i.e., neutrons and photons): we can more directly keep track of what percentage of radiative particles remains close to the original high energy and how many are at significantly lower energy. For photons, the profile of any buildup function shows that the function is slightly larger than 1.0 at entry, then rises to perhaps 2 or 3 within roughly one mean free path of the fast primary particles, and finally approaches the asymptote of 1.0 as the penetration depth gets progressively larger. Neutrons deserve a separate treatment. Although it is lengthier, our algorithm and formulation is much more complete than the popular formula used among radiologists in which exponential decay is modified with a buildup coefficient. Moreover, buildup functions for neutron fluxes do not appear to be widely offered in radiological and radiation safety publications. This paper demonstrates a method to predict the ratio of high-energy (circa 1 MeV) neutrons to low-energy neutrons (in a group below 0.201 MeV) which are scattered backwards and forwards out of walls of Fe-56 at various thicknesses.
Introduction
The field of health physics, the essential background topics in medical physics, and the reviews of radiological safety of nuclear reactor operations have been served by various methodically practiced styles of dosimetry/transport calculations for over 70 years. This has been done with clinical caution and practical verifiability in mind, in the main accepted traditions of the disciplines of health physics and "reactor physics" engineering. Radiation shielding calculations, penetration assessments [1], and radiation dosimetry calculations [2] are included among the vital and materially informative calculations. Tying into the physics of the interaction of radiation with matter, a beam comprised of either energetic neutrons or high-energy photons (i.e., gamma rays or hard X-rays) has conventionally been looked upon as a candidate for interception via the standard X-section (i.e., cross-section) inspired models, for which one uses the typical expression I(z) = I(0)·exp(−μz) (1), where z is the depth and μ is the attenuation parameter or 'coefficient'.
The "simplistic" attenuation formula is an appropriate name for this short Eq. (1). Many shielding calculations done by medical physicists and health physicists over the years (since at least 1970) have been done using an extension of this "simplistic attenuation formula" coupled with a buildup factor [3]. The buildup factor is necessary if we are to use an attenuation formula as our principal tool of analysis to account for scattered neutrons or energetic photons [4]. Some of these "n's" and "γ's" are almost (or totally) elastically scattered, but some are down-scattered in energy. In keeping with the aspiration for excellence from the era of the "Space Age", some computer/electrical engineers and nuclear engineers have enhanced these efforts by conducting Monte Carlo simulations of the transport and penetration of neutrons or photons through walls and various barriers with various respected codes (packages with X-section libraries) such as MCNP, EGS4, EGSnrc, and the versatile but highly tedious GEANT4 [5,6,7]. In this short paper, we introduce and present a deterministic and semi-analytical method for doing transport analysis on neutrons and isotropically scattering 'hard' photons which are placed into two energy groups and, with future ambitions, into three energy groups. There are advantages to doing such 2-group and higher multi-group analysis of radiative particles (i.e., neutrons and photons): we can more directly keep track of what percentage of radiative particles are close to the original high energy and how many are at significantly lower energy. An inspection of the profile of any buildup function shows that the function is slightly larger than 1.0 at entry, then it rises to perhaps 2 or 3 within roughly one mean free path of the fast primary particles, and finally approaches the asymptote of 1.0 as the penetration depth gets progressively larger.
Review of buildup and discussion of methods of analysis.
The attenuation formula in the introduction expresses the particulate intensity, not the energetic intensity, in our convention. This corresponds to the choice of analyzing the particulate flux rather than the energetic flux of radiation. For those with a non-nuclear background: "flux" is used by the medical physics community and health physicists with a definition considerably different from that of the flux of electric fields. Our flux has units of "particles" per cm 2 per second. See Frank Attix's text [8] if this is unclear. Particulate intensity is less than or equal to the scalar flux of the particles. Very often intensity is defined as the magnitude of the net current of transported particles per cm 2 of surface per second. Indeed, in a case where equally many particles approach and penetrate a wall bidirectionally, the net current is zero. However, the scalar flux in such an example is much larger than zero. Admittedly, one can make some inferences on the approximate ratio of down-scattered particles at a given depth as a function of position by inspecting the buildup function, should it be available in published tables for a given shape and material. Here the intensity with a buildup coefficient can be expressed as I(z) = B(E, z)·I(0)·exp(−μz) (2). However, the buildup coefficient (i.e., B(E, z)) does not make clear just what percentage of scattered γ-ray or X-ray radiation is scattered so as to retain most of its energy and how much has been "demoted" to photons that have lost 50% or more of their energy per radiative particle. This reality (of radiative particles often undergoing elastic or nearly elastic scattering) holds for 'free' neutrons which scatter off nuclei with an atomic number greater than 4. For example, when an 'n' with a kinetic energy of 1.0 MeV collides with an Fe-56 nucleus, it has a 99% chance of undergoing elastic or quasi-elastic scattering (retaining most of its 1 MeV). This same 'n' has a probability of less than 0.8% of down-scattering to a "low" energy neutron with less than 0.201 MeV of kinetic energy. The chance of capture at 1 MeV is less than one in a thousand. Fast neutrons are not easily captured; they usually are just scattered. This information is based on inspecting generally available two-group data and, more importantly, on our careful MCNP [9] simulations in which we reproduce the conditions of scattering and so-called "buildup" of scattered neutrons in rectangular walls of iron (Fe-56). We are authorized to use and have extensive experience with the very versatile MCNP Monte Carlo code. Regarding neutrons, buildup coefficient data are either not widely published for nuclear engineers or not readily available. Thus, the case of neutron shielding analysis offers a strong motivation for performing multi-group energetic neutron flux and dose calculations. In this paper we stick with 2-group n's (i.e., neutrons). The following formula, Φ(z) = Φ(0)·exp(−Σz) (3), is an overly simplistic and inadequate expression often used for metals with atomic number less than 84, away from the "uranic" family. However, Eq. (3) gives a very incomplete story of the local neutron flux. In analogy to fluxes of photons, Eq. (1), in which I(z) is set to equal I(0)·exp(−μz), also gives an incomplete story of the local current/intensity of X-rays and γ-rays. This occurs in the equations above when one fails to include the buildup factor. The buildup factor is included in Eq. (2) as the expression B(E, z); B(E, z) corrects the local 'flux' of γ-rays or photons at various depths (e.g., z) of penetration.
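To make the role of the buildup correction concrete, a short numerical sketch of Eqs. (1) and (2) is given below; the attenuation coefficient and the stand-in buildup curve are invented for illustration only and are not taken from any published table.

```python
import math

def intensity_no_buildup(I0, mu, z):
    """Eq. (1): narrow-beam exponential attenuation."""
    return I0 * math.exp(-mu * z)

def buildup_stub(z):
    """Invented stand-in for a tabulated B(E, z): ~1 at entry, peaking near one
    mean free path and relaxing back toward 1.0 at depth, mimicking the shape
    described in the text."""
    return max(1.0, 1.0 + 1.5 * z * math.exp(-0.4 * z))

def intensity_with_buildup(I0, mu, z):
    """Eq. (2): attenuation corrected by the buildup factor B(E, z)."""
    return buildup_stub(z) * intensity_no_buildup(I0, mu, z)

mu = 0.5  # cm^-1, illustrative attenuation coefficient
for depth in (0.0, 2.0, 6.0):
    print(depth, intensity_no_buildup(1.0, mu, depth), intensity_with_buildup(1.0, mu, depth))
```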
It is more insightful to replace I(z) from Eq. (1) with Φ(z) in Eq. (3) where flux is more appropriate than 'n' or photon current in the geometry of a wall or box since many of the neutrons or photons no longer travel straight forward along the z-axis after one or two collisions of scatter occur.
If the nuclei/atoms of a medium which is entered by the beam of n's or photons are pure absorbers, then Eq.
(1) and Eq. (3) are acceptable solutions for penetration, and the differential equation which governs the transport of the particle(s) is given by dn_rad(z)/dz = −Σ_absorb·n_rad(z) (4), where n_rad is the number density of the radiative particles. If the medium is Boron-10 and the neutrons are at low energy, then Eq. (4) would be a realistic equation for modeling transport of the neutrons, because B-10 is nearly a pure absorber. However, most materials are not pure absorbers.
If we presume that there are two energy levels for neutrons and photons (i.e., fast n's and slow n's), then it is appropriate to write a double energy-group Maxwell-Boltzmann Transport Equation (MBTE) in order to express what is going on for transmission and for energy demotions (i.e., down-scatters). This pair of equations, Eqs. (5a) and (5b), is written for the transport of neutrons with two possible energy levels [10,11,12]; note the index [2] for fast neutrons (of grp-2). Here Φ[2] is the scalar flux which includes neutrons in the energy grouping of 0.201 MeV up to 10 MeV (at least when we model and analyze iron shielding), and Φ[1] is the scalar flux which includes all neutrons of energy 0.201 MeV and lower. In the two-group MBTE, Eq. (5a) governs the fast group, while Eq. (5b) governs the slow group and is fed by the down-scatter of fast neutrons. Suppose now that the scattering cross sections are equal to zero, as we might imagine for a medium of "super-Boron".
Then, Eq. (5a) reduces to a simple first-order attenuation equation. If our wall is very broad and if the distribution, to good approximation, depends only on the coordinate z, then this simple 1st-order equation is completely equivalent to Eq. (4) with attenuation. In this paper, we presume that scattering is isotropic, which is often a good approximation for the scattering of neutrons. With scattering retained, Eqs. (5a) and (5b) are an example of a pair of integro-differential equations. Generally, it is easier to solve a purely integral equation, such as a Fredholm integral equation [13]. Holding on to the presumption of isotropic scattering, Eqs. (5a) and (5b) can be subjected to a special integral transformation via Green's functions in order to re-express them as a coupled pair of integral equations for the scalar fluxes, Eqs. (6a) and (6b). Eqs. (6a) and (6b) are challenging to solve, but some solutions have been found both by the Russian mathematical academy of the 1950s and by a 'computations' group at Los Alamos in the 1950s. The author(s) have found a method to iteratively solve, numerically and with arbitrary precision, the monoenergetic version of Eq. (6a) and the 2-group Eqs. (6) [14].
Eqs. (6) are more demanding and difficult to solve than the simplistic use of Eq. (2) with the buildup coefficient to calculate relative intensities. However, the information which we get out of Eqs. (6) for Φ[2] and Φ[1] is much richer than what we can get from the calculation of I(z) as a function of penetration into a wall via Eq.
(2). This is especially apparent for a very broad slab of shielding (presuming the breadth of the slab is more than 6 times its thickness). Presuming a beam of neutrons enters from the left at the interface where z = 0, the user would need to read, download, and interpolate a table of buildup coefficient values (or use an approximate local formula) for a slab of the given material (such as Fe, Pb, or concrete). These tables are almost non-existent for neutrons. Such tables do exist for γ-ray and X-ray photons for industrial and medical materials, but they are limited in their range and diversity of examples. Thus, the dosimetrist who is borrowing or using the data is often stuck with having to interpolate from near fits of other examples most similar to the geometry which he or she has chosen to design or assess for predictions or dose verifications. As a reminder of geometrical concepts, the portion of "battered" particulate current density which escapes from the right-hand boundary of a rectangular slab of shielding from "mid-face" equals approximately ½ to 0.6 times the "battered" scalar flux which is present on the boundary of escape from the rectangularly shaped shield. 'Battered' shall be defined as the condition of a particle which has been forced to scatter while keeping some or all of its kinetic energy. "Battering" of a neutron changes its direction and can reduce its energy. For a very thin shield whose thickness is less than ⅛ of a mean free path of a neutron, the first author has verified this escape-factor relation directly.
Computations and Results
Following our earlier work [15], we can iteratively solve integral equations (6a) and (6b) in the case where we have one very broad rectangular slab. If a monoenergetic beam of fast neutrons enters the designated slab of shielding, then before the first iteration we find that Φ[2][0](z) goes as exp(−Σt[2]·z); Φ[2][0](z) is that part of the flux of neutrons at z that have never undergone scattering. The first index, which holds [2], denotes that this is the flux/current of the fast group of neutrons. In the first iteration (upon the event of scatter) we find the formula of Φ[n][1](z) for the n-group's neutrons, where n equals either 1 (slow) or 2 (fast). Upon integration this flux can be found analytically for the n-group of neutrons. Consider Eq. (7), in which Φ[2][m](z) is generated iteratively from the input of Φ[2][m−1](z); the sequence of iterates Φ[2][m](z) is guaranteed to converge. Five to eight iterations have proven to be required for sufficiently convergent solutions. We use the approximation that the current of escaping neutrons which have penetrated the wall is given by the sum of the 'unbattered' flux plus half (i.e., 0.5) of the sum of fluxes comprised of scattered neutrons, where 0.5 is the analytically and geometrically guaranteed minimum. Judiciously, we occasionally replace 0.5 with 0.52 or even 0.55, where this dimensionless factor of escape relates the surface Φ value to the current density of scattered neutrons which depart from the surface.
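The iterative solution just described can be sketched numerically. The code below is not the authors' Maplesoft IntegIterator; it assumes the standard slab integral equation with an exponential-integral kernel for a single (monoenergetic) group, uses placeholder cross sections rather than a genuine two-group Fe-56 library, and applies the 0.5 escape factor discussed above.

```python
# Minimal sketch of fixed-point iteration for the monoenergetic slab equation
#   Phi(z) = exp(-Sigma_t*z) + (Sigma_s/2) * Int_0^d E1(Sigma_t*|z - z'|) Phi(z') dz'
import numpy as np
from scipy.special import exp1

Sigma_t = 1.0 / 2.884      # 1/cm: total cross section implied by the 2.884 cm MFP quoted below
Sigma_s = 0.99 * Sigma_t   # placeholder in-group scattering cross section, not a vetted Fe-56 value
d = 2.884                  # slab thickness in cm (one mean free path)
ncells = 400               # number of spatial cells

h = d / ncells
z = (np.arange(ncells) + 0.5) * h     # cell midpoints
phi0 = np.exp(-Sigma_t * z)           # uncollided flux from a unit, normally incident beam

# Kernel: integral of E1(Sigma_t*|z_i - z'|) over cell j (midpoint rule off the diagonal).
K = exp1(Sigma_t * np.abs(z[:, None] - z[None, :])) * h
# The diagonal cell holds the integrable singularity E1(0); use the analytic
# cell integral int_0^a E1(x) dx = a*E1(a) + 1 - exp(-a) with a = Sigma_t*h/2.
a = Sigma_t * h / 2.0
np.fill_diagonal(K, 2.0 * (a * exp1(a) + 1.0 - np.exp(-a)) / Sigma_t)

phi = phi0.copy()
for _ in range(8):                    # the text reports that 5-8 iterations suffice
    phi = phi0 + 0.5 * Sigma_s * (K @ phi)

# Forward escape per incident neutron: uncollided part plus ~0.5 of the
# scattered flux at the exit face (the escape factor discussed in the text).
transmitted = np.exp(-Sigma_t * d) + 0.5 * (phi[-1] - phi0[-1])
print(f"estimated forward escape fraction: {transmitted:.3f}")
```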
For the sake of space and the context of this paper, we will focus on deterministic predictions of the transmission of neutrons through sample slabs of iron at various thicknesses and on the prediction of the portion of neutrons which are returned backwards (due to back-scatter) from the slab. Our deterministic method distinguishes between the population of fast scattered neutrons and that of slow neutrons (where the choice of '1' for 'm' designates the down-scattered, or slow, neutrons). We also conducted Monte Carlo simulations of neutrons from a beam which approaches the same wall of iron material. The two Monte Carlo (MC) codes used are MCNP (which was developed and updated by LANL) and a custom Monte Carlo code which was developed early in 2014 [17,18] and had proven to be valid for the modeling of isotropic scattering. Three examples are given below for a broad rectangular slab of homogeneous cast iron where a beam of neutrons approaches the barrier at normal incidence. Our deterministic predictions, the predictions of MCNP, and the predictions of the MC code, SMUSKE, are included in Table (1). In the first third of Table (1) the thickness of the rectangular slab of iron is 1 MFP, which is 2.884 cm. According to SMUSKE, out of 10,000 incoming fast neutrons, 3132.8 fast n's travel backward out of the wall, along with some slow n's escaping back toward the source of the beam. Accordingly, SMUSKE predicts that 6,691.7 fast n's escape forwards out through the iron wall, and 84 slow n's escape forward. Comparing the Deterministic Iterator and SMUSKE's predictions, starting with the thick wall at 2.884 cm, we find that for the collective back-scattered n's, which escape out of the wall backward, the percent differences between the Deterministic Iterator and SMUSKE are 45.6% for slow n's (i.e., of grp.1) and 14.6% for fast n's (i.e., of grp.2). For the rate of forward escape, or transmission, the percent differences between IntegIterator and SMUSKE are 37% for slow n's (i.e., of grp.1) and -4.85% for fast n's (i.e., of grp.2). We briefly consider the wall of 1.442 cm, for which Mfpm = ½; here the percentage of disagreement in forward escape, or transmission, between our IntegIterator and SMUSKE is -30% for slow n's and 0.30% for fast n's (i.e., of grp.2). We briefly consider the wall at 0.3 Mfpm, i.e., 0.8652 cm; here the percentage of disagreement in forward escape, or transmission, between our IntegIterator and SMUSKE is 103% for slow n's and -2.03% for fast n's (i.e., of grp.2).
Next, we compare the escapee numbers of SMUSKE to those of MCNP for n's: the rate of backward escape, returning to the beam source, when the wall is one MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -168% for slow n's and 77% for fast n's. For the rate of forward escape, when the wall is one MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -164% for slow n's and 0.448% for fast n's. For the rate of backward escape, returning to beam source, when the wall is ½ MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -154.2% for slow n's and 85.2% for fast n's. For the rate of forward escape, when the wall is ½ MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -158% for slow n's and 0.3738% for fast n's. For the rate of backward escape, returning to the external beam source, when the wall is 3/10 of a MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -172.4% for slow n's and 75.1% for fast n's. For the rate of forward escape, when the wall is 3/10 of a MFP thick, the percentage of disagreement between the predictions of MCNP and SMUSKE are: -172.6% for slow n's and 0.349% for fast n's.
On the other hand, on a more impressive note, there is only a -5.30% disagreement between the respective predictions of our Deterministic Iterator (i.e., IntegIterator) and those of MCNP for the number of forward transmitted fast neutrons through the wall which has 1 MFP (2.884 cm) of thickness. Also, there is only a -3.437% disagreement between the respective predictions of IntegIterator and MCNP for the number of forward transmitted neutrons through the wall with thickness of ½ MFP (1.442 cm). Likewise, there is a -3.738% disagreement between the predictions of SMUSKE and MCNP for forward escape when Mfpm= ½. In addition, there is a -2.387% disagreement between the respective predictions of IntegIterator and MCNP for the number of forward transmitted neutrons through the wall with thickness of 3/10 of an MFP (0.8652 cm).
One can see that our deterministic IntegIterator makes predictions of forward escape and backward escape which are as close as, or almost as close as, those of SMUSKE to the predictions generated by using MCNP alone. IntegIterator and SMUSKE have comparable operation speeds if one is content with 1.5 percent statistical fluctuations of the Monte Carlo results from SMUSKE. However, SMUSKE does not easily lend itself to decisive mapping of internal flux at high resolution. IntegIterator does, by default, offer the feature of internal mapping. We observe the effectiveness of SMUSKE and IntegIterator in spite of the disadvantage of the isotropic cross sections used in these codes. MCNP does consider angular probabilities in great detail for 'n' scattering for all the well-known isotopes on the complete table of nuclides.
Indeed, in a nuclear historical context, the compilation of the 'scatter-kernel' data of MCNP was a project which spanned more than a decade. Thus, it would be very difficult to summarize such a large amount of angular data with approximations of the zeroth, first, and second order Legendre polynomials of the cosine of scatter-angle with any tractable and manageable database which could function without incurring "data strangulation" of an analytical iterator (e.g., our IntegIterator) written in Maplesoft, high-level Python, or similar language/package. Some ask, "why not just give the M.C. 'jobs' to GEANT4 to execute?". With all due acknowledgement of the formidable abilities of GEANT4, such as readily offering options of angular data streaming, the three smaller codes SMUSKE, MCNP, and IntegIterator are all faster and easier to work with than GEANT4 for designated rectangular walls of metal bombarded by n's.
Conclusions
Our deterministic iterative algorithm, IntegIterator, agrees reasonably well with the distributions of energy and overall forward direction generated via our MCNP simulations, despite the deficiency that IntegIterator is not able to process anisotropic scattering in terms of the design of a 'scatter' kernel or helpful Green's function. This can be seen from the results posted in Table (1) and in the summary of Table (1) above. On the other hand, neither our deterministic code nor our isotropically designed SMUSKE agrees extremely well with the predictions of back-scatter results from MCNP. All three methods of calculation agree well on the predicted percentage of neutrons (the sum of quasi-elastically scattered and down-scattered neutrons) which escape from the target slab of our ferrous metal, informatively showing that the population of free neutrons is almost sustained (albeit with records of past collisions and occasional energy reductions). In reference to "surviving" n's, 'free neutron' means a neutron not captured by any atom. It is evident from the slowing down that every one of the iron slabs in Table (1) incidentally functions as a neutron moderator. However, the directionality and ratio of down-scattering are subject to disagreement. As explained above, MCNP has extremely detailed cross section libraries. Iron-56 turns out to be an extremely anisotropically scattering isotope. Apart from the work of Chandrasekhar [19] and his use of H-functions for tracing intensities of scattered photons, there is little analytical work recorded on theoretically predicting flux densities which are solutions to the MBTE when the scattering cross sections are anisotropic. The discrete ordinates method used to solve the MBTE is impressive in its flexibility and is deterministic, but that method is not analytical, being completely numerical at every step, unlike our IntegIterator which offers some "term-by-term" perspective for the theorist. It is 'somewhat' easy to do analysis of solutions of the MBTE when the neutron scatterers (i.e., the nuclei) scatter isotropically. Many experts of reactor physics and shielding analysis approximate the multi-group MBTE as a multi-group coupling of two, three, or even six simultaneous diffusion equations of radiative particles. By its intrinsic nature, it is virtually impossible to do angle-dependent ray tracing of neutrons if one does modeling with a diffusion equation (or equivalently a pair of diffusion equations) rather than the MBTE.
It would be convenient from a clinical radiological treatment planners' point of view to carefully look up the data for B(E,z), where B(E,z) is the buildup factor included in Eq. (2). However, in regard to neutrons, buildup coefficient data either is not widely published or is not available to the broad national/international communities of health physicists or engineers. Moreover, a significant benefit of our deterministic method (and algorithm) of IntegIterator is the superior speed which it offers by its retention of the definitions of chosen geometries per wall to be bombarded and in its output extraction compared to the time and duties required at the conclusion of a corresponding run of an MCNP simulation. Similarly using GEANT4 is often even more involved and tedious than MCNP. Thus our 2-group IntegIterator algorithm and formulation is much faster than MCNP when predicting penetration ratios as well as the distribution of energy of penetrating neutrons.
The simulation of MCNP is sufficiently fast for modeling transmissions and down-scattering of neutrons through rectangular walls. However, the output files generated by the MCNP code require considerable data processing, which is best done either in a UNIX console environment or a DOS console environment. Many of the younger physical engineers have little Linux training and thus tend to rely on a Windows or 'Mac-Windows' environment to process the outputs of their chosen software modelers within Windows, 'Mac-Windows', or X Windows (if using a Linux version of Maplesoft). Maplesoft is the language which we have selected for IntegIterator in order to conduct our local flux calculations. Our 'Maple' version of IntegIterator can operate within the environments of Windows, X Windows of Mac OS, and Linux, as valid versions of Maplesoft can be installed in these OS's. Much of our Maplesoft code can be translated into Matlab code, accommodating the preference of many electrical engineers for the Matlab software environment. Another great benefit of our deterministic code is that one can procure a polynomial approximation of the local neutron flux, and hence dose, at any depth within the metal. The structure of the spatial internal solution for the flux is a combination of log and polynomial terms. With MCNP, such a feat of mapping flux as a function of depth in a wall would require writing an input file which is more than ten-fold more elaborate than the input file for the IntegIterator code for the same slab of metallic material. Internal depth profiling with SMUSKE is also a challenge, but less so than it is with standard MCNP input file declarations.
It is reasonable to anticipate a future effort of analysis of 3-group neutron flux distributions with respect to energy by radiation transport theorists. However, for now, we focus on constructing 2-group databases of neutron cross sections of various important materials besides just iron and boron and subsequently using the algorithm and code(s) of IntegIterator to predict collective forward escape and backward escape of neutrons which initially enter slabs of the respective materials of interest. | 2021-04-20T01:15:53.553Z | 2021-03-20T00:00:00.000 | {
"year": 2021,
"sha1": "2e91a8103a4763acda6ccf55b4eeba5de8ed812b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2e91a8103a4763acda6ccf55b4eeba5de8ed812b",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
271005249 | pes2o/s2orc | v3-fos-license | Patient Perceived Quality of Virtual Group Contraception Counseling
Introduction The study examines the feasibility, quality of counseling, and knowledge change after a virtual Group Contraception Counseling (GCC) session. Methods At an urban academic hospital, we recruited English-speaking pregnant women aged 15–49 who had access to a video-enabled electronic device. Participants engaged in a standardized 45-minute educational session about contraceptive methods in groups of two to five persons conducted over a video conferencing platform. The primary outcome was participant-perceived quality of contraception counseling measured by the Person-Centered Contraceptive Counseling (PCCC) scale. The secondary outcomes were knowledge change before and after counseling, and postpartum contraception uptake. We used an adjusted multivariable linear regression model to analyze knowledge scores. Results Twenty-two participants completed the study. Participants identified primarily as Black or Hispanic/Latinx (78%), in a partnership (50%), having completed college (59%), and having an annual income of less than $50,000 (78%). A total of 77% of participants recorded a perfect score for quality of counseling using the Person-Centered Contraceptive Counseling (PCCC) scale. There was an increase in knowledge after counseling (Mean difference (M)=0.07, p<0.01). Notably, certain subsets of participants had a decrease in knowledge scores after counseling. Participants who used postpartum contraception were more likely to have an increase in knowledge after counseling compared to those who did not (Mean difference (M)=0.09, p<0.01). Conclusion Our findings suggest virtual group contraception counseling is feasible for providing high-quality counseling and can possibly increase contraceptive knowledge.
Introduction
Traditionally, contraception counseling has been a part of routine prenatal and postpartum care and has been performed on an individualized basis [2][3][4][5][6]. Additionally, the availability of Long-Acting Reversible Contraception (LARC) devices in the immediate postpartum period can increase LARC usage in postpartum adolescent mothers [7]. Individualized contraception counseling is the most common in the antenatal setting; however, different methods of contraception counseling have been researched [10][11][12]. Research on patient desires around contraception counseling has shown that developing rapport, personalizing discussions, setting goals, developing action plans and using shared decision-making improve reproductive health outcomes [2]. Group contraceptive counseling (GCC) is an approach to antenatal contraception counseling that brings together small groups of people to discuss contraception options [14][15][16][17][18]. GCC as a separate entity outside of centering pregnancies has limited research on its use and effectiveness. A small randomized pilot study found that GCC of resettled African refugee women increased knowledge acquisition but did not have a significant effect on contraception usage [19]. GCC may therefore offer the potential for improved patient-centered contraception education during the antepartum period. We sought to explore this potential option for patient education while simultaneously prioritizing ease of access by implementing a feasibility study of GCC in the antepartum period. This study explores the participant-perceived quality of GCC and the change in contraceptive knowledge after counseling in the antepartum period.
Materials and Methods
This study was approved by the University of Illinois at Chicago Institutional Review Board (Protocol # 2020-1626) and was conducted at a large urban, academic hospital center. Participants were informed of the purpose of this study, and the study complies with the Declaration of Helsinki. We recruited participants between August 2021 and September 2022. We included participants who received prenatal care at the hospital center, were aged greater than 16 years, were fluent in English and had access to a video device. Participants were recruited with flyers posted in the Obstetrics and Gynecology outpatient office. Participants were consented by a member of the study team. After enrollment, prior to their GCC session, all participants completed an online survey consisting of demographics and baseline contraceptive knowledge questions. The knowledge survey was adapted from the Fog Zone Study [20] and consisted of 22 true-or-false questions on contraception. The Fog Zone study was a national survey conducted in the United States on the perceptions and knowledge of contraception among unmarried young adults. The survey questions were adjusted for relevance to the contraception methods being discussed during the counseling session, and a small internal peer review with six reviewers was conducted prior to survey use. After completion of the surveys, participants were placed into groups of 2-4 persons for GCC.
GCC sessions took place on a HIPAA-compliant, web-based video communication system. Counseling sessions lasted approximately one hour, were recorded, and were all led by a single Obstetrics and Gynecology resident. The counseling sessions consisted of a PowerPoint reviewing postpartum birth control options, including hormonal and non-hormonal intrauterine devices, subdermal implants, female and male sterilization, injectable contraception, oral contraceptives, the contraceptive patch, the vaginal ring, barrier methods, coitus-related methods, abstinence, lactational amenorrhea and fertility awareness. The benefits and adverse outcomes of all options were discussed, including potential effects on breastfeeding. At the conclusion of the informational presentation, the forum was opened to a generalized discussion of questions and comments.
Upon completion of the counseling session, participants were asked to retake the knowledge survey to assess knowledge acquisition. Participants also completed the Person-Centered Contraceptive Counseling (PCCC) survey to assess the quality of contraception counseling [21]. The PCCC is a validated quality improvement tool developed to assess the quality of clinician contraceptive counseling by measuring patient satisfaction after contraceptive counseling [21]. The tool was created with patient input and prioritizes interpersonal connection, adequate information and decision support in contraception counseling [22]. The PCCC is a Likert-scale survey with four statements examining contraception counseling quality by asking about respect, prioritization of preferences, adequate information and valuing of patient input [21]. Participants who completed these steps were given a $20 Target gift card. A final survey was then emailed to participants at 6 weeks postpartum to identify the contraception option chosen. If the survey was completed, participants received an additional $20 Target gift card. All participants received prenatal care at the study hospital center and had the ability to obtain contraception immediately postpartum or at their 2- or 6-week postpartum visits at the hospital center. Contraception options, including sterilization and LARC, were covered by public and private insurance in the postpartum period, with few exceptions.
The primary outcome was patient-perceived quality of GCC, measured with the PCCC scale. Secondary outcomes were knowledge acquisition, measured by score change on the knowledge survey, postpartum contraception uptake and acquisition of the desired contraceptive method. The PCCC survey was analyzed by calculating the total sum of scores out of
a total of 20 possible points. Knowledge survey scores were converted to percentages. The mean change in knowledge score before and after counseling was calculated. A multivariable regression using a generalized estimating equation that predicted score from pre- or post-counseling status was performed, adjusted for race (Black versus other), marital status (single versus other), employment (part-time and unemployed versus full-time employment as reference), education (college versus high school as reference), household income (more or less than $50,000 per year), and whether or not contraception was received following counseling. A t-test was performed to compare patients' perceived quality of contraception counseling scores with the mean change in knowledge score.
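As an illustration only (not the authors' actual code), such a model could be fit in Python with statsmodels' GEE implementation; the long-format data file, column names and the exchangeable working correlation below are hypothetical assumptions.

```python
# Sketch of a GEE model of knowledge score (pre vs post counseling) with
# repeated measures clustered by participant. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gcc_knowledge_long.csv")
# expected columns (one row per participant per phase): participant_id,
# score (0-1), phase (pre/post), race, marital_status, employment,
# education, income, received_contraception

model = sm.GEE.from_formula(
    "score ~ phase + race + marital_status + employment + education"
    " + income + received_contraception",
    groups="participant_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```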
Results
A total of 22 participants in eight groups completed the study. A majority of participants were Black and Latino (78%), and the mean age was 29 years (Table 1). Most participants were employed (64%) and college educated (59%), with an annual household income of less than $50,000 (78%). A majority of participants, 77%, rated the quality of counseling as 20 out of 20 (Figure 1). There was no significant difference in rating of the quality of contraception counseling based on change in knowledge score after counseling (p=0.218).
The mean pre-counseling knowledge score was 78%, and the mean post-counseling score was 85%. Using the multivariable regression, we found that even after adjusting for race, employment, education, marital status, and contraception uptake, scores were higher following counseling (Mean Score Difference (M)=0.07, 95% CI 0.02-0.12, SE=0.02, p<0.01) (Table 2). However, certain groups were noted to have significantly smaller increases, or decreases, in score. Black participants were more likely to see a decrease in knowledge score after counseling compared to non-Black participants (M=−0.12, p<0.01). Compared to those employed full-time, participants who were employed part-time (M=−0.12, p<0.01) and unemployed participants (M=−0.08, p<0.001) were significantly more likely to have a decrease in knowledge score after counseling. Finally, participants who received any form of postpartum contraception were significantly more likely to have a larger increase in knowledge score after counseling (M=0.09, p<0.01) compared to those who did not receive contraception.
A majority of participants (77%) completed postpartum contraception uptake surveys. Of the participants who completed the postpartum survey, 82% received some type of contraception. However, only 88% of participants who completed the postpartum survey received their preferred choice of contraception. Close to half, 47%, of participants chose a LARC device.
Discussion
This study demonstrates that virtual GCC is feasible, high quality, and can increase knowledge of postpartum contraceptive options. Virtual group contraception counseling can be an additional option for antepartum patients seeking postpartum contraception options. Contraception counseling that focuses on patient preferences and shared decision-making can empower women and reduce the risk of unintended pregnancy [22]. Our study also demonstrated an association between increased knowledge score and increased postpartum contraception uptake. A randomized control trial in Ghana found that GCC was non-inferior to individual counseling and noted increased contraceptive knowledge [23]. Other smaller studies have also demonstrated improved knowledge acquisition with group contraception counseling but limited effect on contraception uptake [19]. Our study aligns with prior research showing a modest increase in knowledge score after counseling. These results are not generalizable; however, virtual GCC may prove to be a valuable resource for contraception counseling options. Telehealth services have increased since the COVID-19 pandemic and have increased health care access, especially for underserved populations and those in rural communities [24]. Virtual GCC may provide an option for increasing health equity around contraception counseling while providing the same educational uptake as individual counseling [26][27]. This diversity in contraception needs and desires suggests that multiple contraception counseling options and choices are needed to provide people with the counseling method they prefer. Virtual GCC is not a private setting, but participants may find comfort in having these discussions in a chosen, comfortable place. Virtual GCC should be included as a viable option for contraception counseling, especially for communities that would benefit from increased access through telehealth services. Virtual GCC could be integrated into clinical practice, for example, through monthly meetings offered at a clinic or hospital practice as part of standard prenatal care. The variety of values and desires around contraception counseling also encourages the use of a scale like the PCCC for counseling assessment, which is adaptable to any mode of contraception counseling.
While our study demonstrated an overall increase in knowledge, certain groups were less likely to see this benefit, namely participants who identified as African American or those who were not employed full time. A multitude of factors could have contributed to this. We hypothesize that the knowledge survey used in this study may not be the ideal test to ascertain contraceptive knowledge, particularly for certain groups.
This study was limited in its small number of participants and lack of a comparison group. The Fog Zone study is not a verified measure of knowledge for this particular participant group, and the results for this study establish a trend and not necessarily direct knowledge acquisition. The authors also acknowledge that, although there is an increase in knowledge demonstrated, the increase is notably small. Additional research is needed with validated tools to assess contraceptive knowledge. A notable strength of this study is the use of telehealth for contraception counseling and the centering of patient preferences as a measurement of successful contraception counseling. This study does provide valid and useful insights into contraceptive counseling methods and patient perceived quality of such methods.
Conclusion
Using the PCCC scale as a tool can give objective data on patients' perceived quality of contraceptive counseling. This is an important step away from using LARC uptake as a marker of contraceptive counseling success [28]. By focusing on patient ratings of counseling quality, it is possible to systematically rework the way we counsel patients on contraception and place the focus on how the patient feels about their contraception counseling and their reproductive needs. Virtual GCC may be one potential avenue to improve the way we counsel patients on contraception to ensure understanding of the subject and usage of desired contraceptive options.
Figure 1 Participant PCCC score for group contraception counseling by question.
Table 2
Multivariable Linear Regression Analysis of Participant Characteristics and Mean Change in Knowledge Score After Virtual GCC (2021-2022) | 2024-07-07T15:36:00.469Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "06cc9b75377ce705a715bf35a47b3aad39cf14d1",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8e890c0aa04383e3aaa94b229d4545a3171ecd74",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256482722 | pes2o/s2orc | v3-fos-license | Foreign direct investment by multinational corporations in emerging economies: a comprehensive bibliometric analysis
Purpose – This study introduces a comprehensive bibliometric analysis of the foreign direct investment (FDI) literature by multinational corporations (MNCs) focusing on emerging economies to identify the most influential authors, journals and articles in FDI research and to reveal the field's conceptual and intellectual structures. The purpose of this paper is to address these issues. Design/methodology/approach – The study analyzed 533 articles published between 1974 and 2020 in 226 academic journals indexed in the Web of Science (WoS) and Scopus databases. We used the R language for statistical computing to map author collaboration and co-word networks and to develop a conceptual and intellectual map of the field. Findings – The results show that, although the FDI literature has many authors, few dominate the field. The International Business Review (IBR) and International Journal of Emerging Markets (IJoEM) are the main sources of the publications. Moreover, bibliometric laws show that our dataset follows the Lotka law of scientific productivity and the Bradford law of scattering, identifying the core journals. Finally, research on FDI by MNCs in emerging economies is divided into four sub-research themes related to (1) FDI determinants, (2) entry mode, (3) MNCs and FDI performance and (4) the internationalization process. Originality/value – The current article provides several starting points for practitioners and researchers investigating FDI. It contributes to broadening the vision of the field and offers recommendations for future studies.
Introduction
In the last five decades, foreign direct investment (FDI) and multinational corporations (MNCs) have attracted many scholars' attention, becoming the most researched topic in international business (IB) (Quer et al., 2017; Ramamurti, 2004). Nevertheless, most of the FDI literature typically emphasizes a specific FDI field, such as entry mode strategies, location choice or investment determinants (Busse and Hefeker, 2007; De Beule and Duanmu, 2012; Li and Qian, 2008). More specifically, some authors have paid attention to foreign investments in emerging countries from the point of view of market entry strategies or the relationship of cultural distance with entry mode choice (Tihanyi et al., 2005). Others have focused on international diversification and the speed of entering international markets (Hitt et al., 2016) and on MNC performance.
FDI by MNCs plays a key role in emerging economies' economic development (Danescu and Nistor, 2012; Huber, 2018) and facilitates the entry of new and advanced technologies (Borensztein et al., 1998). Emerging economies also benefit from inward FDI through capital accumulation, which positively affects the host economy's Balance of Payments (BoP) (Amighini et al., 2017; Testas, 2003). Thus, inward FDI to emerging economies enhances the flow of capital and increases production and exports. Moreover, FDI by MNCs boosts international trade by developing new international networks between host and home economies (Gammeltoft and Cuervo-Cazurra, 2021). Hence, developed and emerging economies have a common interest in encouraging FDI flows, even if their goals differ. Host economies attract FDI to improve their economic standards (Khder Aga, 2014) and to develop both economies of scope, which focus on the average total cost of producing a variety of goods and services, and economies of scale, which focus on cost advantages (Mishra et al., 2017). On the other hand, corporate growth, access to natural resources, low labor costs and maximizing revenues are typical targets for MNCs (Bhaumik and Gelb, 2005; Meyer et al., 2009). Moreover, although FDI in emerging economies has been the subject of numerous business studies, it is still unclear why investors prefer to take advantage of emerging economies' opportunities. Emerging economies are considered to be slow in adopting new reforms and to suffer from corruption (Kaufmann et al., 1999; Uhlenbruck et al., 2006). However, some emerging economies will become the world's new developed economies (Danescu and Nistor, 2012).
A large bibliometric production already exists on FDI. Alon et al. (2018) focused on the internationalization of Chinese enterprises by reviewing 206 articles published in different journals over 13 years. Fetscherin et al. (2010) analyzed 422 articles published in 151 journals over 29 years to examine how scholarly research on FDI to China has evolved. Peter and Michele (2020) conducted a systematic and bibliometric review using 41 articles from WoS for 19 years to analyze the influence of taxes on FDI and corporate financing decisions. Moreover, recently, da Silva-Oliveira et al. (2021) carried out a bibliometric analysis covering 806 articles published between 1994 and 2019 to map key elements of the intellectual structure of the body of work on inward and outward emerging economies' FDI. Its objective was to unveil the main thematic topics that characterize the literature to draw several opportunities for future research.
Although some authors were interested in the general topic of FDI and emerging countries, we could not find literature reviews focused only on FDI by MNCs in emerging economies.
Therefore, this article's main objective is to show how the topic of FDI by MNCs in emerging economies is developing through a bibliometric analysis that gives interesting insights and recommendations for future research after presenting an analytic map of the publications over the last 46 years. To the authors' best knowledge, this study's sample compiles the largest selection of FDI by MNCs in emerging economies' articles in different journals.
We analyze the following: What are the field's most productive journals? Which authors and articles contributed most to the field's development and growth? To what extent do the published articles follow the bibliometric laws? What is the intellectual structure of this literature? What are the potential opportunities for FDI research?
Methodology
This article uses the bibliometric analysis methodology, which is the application of statistical and mathematical methods in science communication (Pritchard, 1969).
We employ the popular indicators of analysis (Cancino et al., 2017), such as the number of articles for measuring productivity; leading authors, institutions and countries in this field; scientific production over time; most relevant keywords; application of bibliometric laws; co-word analysis; and collaboration network analysis (Liao et al., 2018).
We use the five-step procedure workflow for conducting science mapping with bibliometric methods proposed by Zupic and Cater (2015): research design, compilation of data, analysis, visualization and interpretation.
Data sources and keywords groups
Bibliometric data were retrieved from the WoS and Scopus databases to obtain a more comprehensive vision.
Three groups of keywords were used. The first group represents the field we are covering; the second, who interacts in the field; and the third, where the activity is taking place: (1) internationali* OR "foreign direct investment" OR FDI AND (2) mnc* OR mne* OR "multinational corp*" OR "multinational enterp*" OR "multinational comp*" OR "international corp*" OR "international enterp*" OR "international comp*" AND (3) mena OR bric OR "emerging econo*" OR "emerging country*" OR "develop* country*" OR "develop* econo*" OR "emerg* market*" OR "transition econo*".
Data collection criteria
This study follows the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart" (PRISMA) to ensure transparency and a complete reporting process ( Figure 1).
(1) Identification: the combined WoS and Scopus searches returned 1,971 records (the sum of the records excluded and retained in the following stages).
(2) Screening: in this stage, we excluded 441 records as duplicates (duplicate records were removed automatically when we merged the records into the R data frame) and 289 records for not meeting the inclusion criteria. Total records eligible for the next stage: 1,241.
(3) Eligibility: Two authors manually checked the abstracts and keywords of all records eligible at this stage to exclude all documents unrelated to the three keyword groups. The third author reviewed the excluded records. In this stage, we excluded 708 records.
(4) Inclusion stage: The eligible articles included in the bibliometric analysis are 533 records.
(5) Visualization and interpretation steps are included in the results and discussion section.
Data analysis
We used the bibliometrix package, a unique tool for science mapping, statistical computing and graphics for the R language. We chose this software because it is an open-source programming environment for statistical analysis and graphic visualization (Liu and Li, 2016). It allows the implementation of various bibliometric tests (Palacios et al., 2021) and performs a more comprehensive science mapping analysis (SMA) compared to other software such as VOSviewer (Moral-Muñoz et al., 2020). Moreover, it can merge the extracted data from WoS and Scopus in one data frame to perform the analysis.
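As a rough illustration of this workflow (not the authors' actual script), the sketch below shows how WoS and Scopus exports could be imported, merged and summarized with bibliometrix; the file names and export formats are hypothetical placeholders.

```r
# Minimal sketch, assuming the standard bibliometrix interface.
library(bibliometrix)

# Convert raw database exports into bibliographic data frames
# (file names are placeholders for the actual exports).
wos_df    <- convert2df("wos_export.txt",    dbsource = "wos",    format = "plaintext")
scopus_df <- convert2df("scopus_export.csv", dbsource = "scopus", format = "csv")

# Merge the two sources into one data frame; duplicated records are dropped here
M <- mergeDbSources(wos_df, scopus_df, remove.duplicated = TRUE)

# Descriptive bibliometric indicators (productivity, sources, citations, keywords)
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 10)
```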
Results and discussion
We organize this section into four parts related to bibliometric laws applied in this study (3.1), descriptive results (3.2), collaboration social network analysis (3.3) and the conceptual structure map of the field (3.4).
Bibliometric laws
Like classical physics laws, bibliometrics has some classical laws that seek to analyze the qualitative literature by mathematical and statistical means. Although the bibliometric scholarship is mature, there is little evidence of any Einsteinian breakthroughs proving that bibliometric laws are concrete laws. However, they are extremely valuable in developing general theories about information and providing data to study further. Three laws can be named "bibliometric laws": Lotka's law of scientific productivity (authors publishing in a particular discipline); Bradford's law of scattering (scattering of articles); Zipf's law of word occurrence (ranking of word frequency). In this study, we apply Lotka's and Bradford's laws.
3.1.1 Lotka law. Lotka's law, or the inverse square law of scientific productivity, is one of the major empirical laws of bibliometrics that is often used in literature to model information about how many authors have written 1, 2, 3 or H articles (Friedman, 2015).
This article follows Cintra et al.'s (2018) procedure of using Lotka's law to verify the number of scientific articles published by various authors working in the field and to indicate the development level of scientific production, along with identifying the most productive authors.
This law deals with authors' frequency of publication in any field. Its basic formula is as follows: $Y = C / X^{N}$, where Y is the relative frequency of authors, X is the number of articles (publications) and N and C are constants depending on the specific field (N ≈ 2) (Savanur, 2015).
Lotka's law analysis shows that 755 authors have published one article (freq. = 82%), while 103 authors have published two articles (freq. = 11%). Figure 2 shows our sample's empirical distribution together with the theoretical distribution (this assumption implies that the theoretical beta coefficient of Lotka's law is equal to 2, or B = 2, the red line). There is a negative logarithmic relationship between the variables X and Y, with C = 0.7099 and goodness of fit (R²) = 0.9493; the estimated beta coefficient is 2.7245, and the Kolmogorov-Smirnov two-sample test provides a p value of 0.269, which means there is not a significant difference between the observed and the theoretical Lotka distributions. Therefore, our dataset follows the Lotka law, where the Y value decreases as the X value increases. Moreover, our sample shows authors' divergence, which leads to enriching and diversifying the field content to meet international needs and offer relevant content to wider stakeholders.
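A minimal sketch of how this test can be run with bibliometrix, assuming its lotka() helper; the object `results` comes from biblioAnalysis() as in the earlier sketch.

```r
# Estimate Lotka's law coefficients and test the fit of the observed
# author-productivity distribution against the theoretical one (beta = 2).
L <- lotka(results)

L$Beta      # estimated beta coefficient (2.7245 in our data)
L$C         # estimated constant C
L$R2        # goodness of fit
L$p.value   # two-sample Kolmogorov-Smirnov test; p > 0.05 means the observed
            # and theoretical Lotka distributions do not differ significantly
```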
Our results are in line with previous studies in different fields, such as the circular economy (Alnajem et al., 2021), accounting (Corbet et al., 2019), risk management (Chun-Hao and Jian-Min, 2012) and business ethics (Talukdar, 2015).
3.1.2 Bradford's law. Bradford's law helps us identify the most highly cited journals (core journals) in a given subject or field using statistical methods. The assumption is that the majority of articles tend to be published in a small number of journals (core zones), while the rest tend to be scattered over a larger number (Venable et al., 2016). It considers that there is a relationship between those zones following the sequence $C : Ck : Ck^{2} : \ldots : Ck^{p-1}$, where C is the number of journals in Zone 1, k is the Bradford multiplier and p is the number of Bradford zones.
The Bradford multiplier and the number of core journals can be estimated as $k = (e^{\gamma} Y_m)^{1/p}$ and $C = T(k - 1)/(k^{p} - 1)$, where e = 2.71828 and γ = 0.5772 (Euler's number and constant, respectively) (Bailón-Moreno et al., 2005), $Y_m$ is the citation number of the highest-ranked journal and T is the cumulative number of journals.
Applying both equations, we can establish the Bradford zones' theoretical citation distribution. Table 1 shows the rank of the journals in our sample (226 journals). The top eight journals are the highly ranked journals in the core (Zone 1). Those eight journals had a cumulative frequency of 182, which is higher than that of the 50 journals in Zone 2 (cumulative frequency of 359) and the 168 journals in Zone 3.
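A minimal sketch of the corresponding computation, assuming the bradford() helper shipped with bibliometrix; it ranks the sources and assigns them to the three Bradford zones.

```r
# Rank journals and split them into Bradford zones; the returned object also
# contains a ready-made plot highlighting the core (Zone 1) sources.
BR <- bradford(M)
head(BR$table, 10)   # source, frequency, cumulative frequency and zone
```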
Moreover, based on the analysis, the core journals include the International Business Review (IBR) and the International Journal of Emerging Markets (IJoEM).
Descriptive results
This section is divided into four subsections related to literature trends over 46 years, journals, authors and countries.
3.2.1 Literature trends. The literature on FDI by MNCs in emerging economies has been growing from 1974 to 2020 (Figure 4). Three different periods can be highlighted. From 1974 to 2006, 81 articles were published, and the contributions dealt mainly with FDI determinants and how MNCs entered foreign markets (Kokko, 1994; Li and Resnick, 2003; Tihanyi et al., 2005). A second period (2007-2014) followed, with contributions such as Li and Yao (2010) and Sun et al. (2012). The third period (2015-2020) counts 241 articles: 65 articles (27%) deal with the determinants of FDI, such as strategic asset seeking (Meyer, 2015) and barriers to FDI (Padilla-Perez and Gomes Nogueira, 2016); 50 articles (21%) with the entry mode of MNCs, such as the employment of outward foreign direct investment (OFDI) for knowledge seeking and integration (Liu et al., 2017) and the relationship between exports and entry mode (Bhasin and Paul, 2016); 31 (13%) deal with the impacts of FDI on innovation performance and on economic growth in developing countries (Huber, 2018); 26 (11%) deal with corporate governance and how the government influences internationalization (Panibratov, 2016); 14 (6%) deal with MNCs' location choice decisions (Shukla et al., 2019); while 55 (23%) deal with other topics, such as geographic diversification effects on firm profitability (Kim et al., 2015) or assessing the source and nature of the competitive advantages of multinationals which expand in emerging economies (Williamson, 2015).
The analysis demonstrates the increase in field contributions since 2007. It shows that academics wrote 92% of the articles, professionals wrote 2% and 6% were joint collaborations. Moreover, 80% of the studies were published later than 2006. This review's literature trend findings align with previous studies (Bretas et al., 2022; da Silva-Oliveira et al., 2021).
3.2.2 Journals' statistics. Based on Bradford's law, Table 2 presents the top eight sources out of the 226 journals in our sample, which have published 186 articles, representing 34.9% of all articles in our sample.
The most relevant journals found in this review are in line with previous studies (Fetscherin et al., 2010;Paul and Singh, 2017).
3.2.3 Author statistics. Most of the 919 authors in our sample published one or two articles. The analysis also revealed that 123 authors produced single-authored documents, while 796 authors contributed to multi-authored documents. Table 3 presents the most productive authors in the field with a minimum of five publications. The top ten authors have a high h-index, indicating their publications' impact in terms of citations received (Hirsch, 2005). Moreover, it should be noted that, due to criticism of the h-index (Koltun and Hafner, 2021), we introduced the g-index, which is based on the distribution of citations received by a given researcher's publications, as an improvement of the h-index that gives more weight to the most cited publications (Egghe, 2006). We also calculated the m-index, which facilitates comparisons between academics with different lengths of academic career and is defined as $m = h/y$, where y is the number of years since the researcher's first publication (West et al., 2013). The most productive authors are Meyer, Buckley and Li with ten articles each, followed by Luo with eight articles. Their research studies focus on MNCs, FDI and internationalization. For example, Buckley et al. (2007, 2012) contribute to the research on the effects of FDI spillovers on local industry and on host-home country linkages and specific advantages as determinants of foreign acquisitions, while Meyer contributes to the field of FDI strategies. Moreover, Li analyzes the effects of OFDI on regional innovation performance and the role of FDI in knowledge transfer to host economies (Ning et al., 2016).
3.2.4 Country statistics. During the past 46 years, 46 countries contributed to the field. Table 4 presents the top ten productive countries with respect to several indicators, such as total published articles, citations and average article citations. The USA published 87 articles (81 were single-country publications (SCPs) and 6 were multiple-country publications (MCPs)), accounting for 16.32% of total published articles. The UK and China rank in the second and third positions, respectively. Moreover, the USA, the UK and China rank top in total citations. Intra-country (SCP) and inter-country (MCP) collaborations are essential factors that allow the development of research directions and can also help identify research networks. Table 4 shows that the publications with the highest rates of SCP and MCP collaboration belong to authors from the USA, the UK, China and Brazil. At the same time, the MCP ratio represents the degree of collaboration between authors and institutions; for example, the USA has 87 articles (SCP = 81 and MCP = 6), so the USA has an MCP ratio of 0.069, an indicator of weak collaboration with authors from other countries, while the MCP ratios for China and the UK are 0.25 and 0.1644, respectively. Therefore, China and the UK have higher author collaboration. It is worth highlighting that, among the top ten most productive countries, three are emerging countries (China, Brazil and India).
Collaboration networks
A collaboration network is a social map where authors are the nodes, and links represent co-authorships. The co-authorship network is one of the most well-documented and tangible forms of scientific and social collaboration (Glänzel and Schubert, 2006). To establish the field's social structure, we analyze two collaboration levels: between authors and institutions.
The primary statistics used to analyze social networks are as follows: (1) Density of the network, a metric measuring the connectivity within the network: it is defined as the ratio of the number of existing links to the maximum number of possible links in a given network (Wasserman and Faust, 1994).
(2) Diameter, which is the length of the longest geodesic distance (the maximum eccentricity over all the actors of the network) (Robins et al., 2007).
(3) The average path length is the average distance between any couple of nodes and may highlight interesting properties of a given graph (Perez and Germon, 2016).
3.3.1 Author collaboration. Figure 5 shows the authors' collaboration network, representing the degree of knowledge exchange between authors through their total joint publications in a specific field. It has a size of 919 authors, a density of 0.002, an average path length of 4.664 and a diameter of 12. Node size reflects the number of an author's publications, and link thickness indicates the intensity of cooperation.
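A minimal sketch of how these network statistics and the map can be produced with bibliometrix, assuming its biblioNetwork(), networkStat() and networkPlot() helpers; the threshold of 50 nodes and the Louvain clustering are illustrative choices rather than the exact settings used here.

```r
# Author collaboration network: build the adjacency matrix, compute summary
# statistics (density, diameter, average path length) and plot the map.
NetMatrix <- biblioNetwork(M, analysis = "collaboration",
                           network = "authors", sep = ";")

netstat <- networkStat(NetMatrix)
summary(netstat, k = 10)   # network-level and node-level statistics

net <- networkPlot(NetMatrix, n = 50, Title = "Author collaboration network",
                   type = "auto", labelsize = 0.7, cluster = "louvain")

# For the institution network, rebuild the matrix with network = "universities".
```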
The graph shows that author collaboration is grouped in nine clusters marked in different colors on the map, where Li J, along with nine other authors (Meyer K, Cui L, Estrin S, Gammeltoft P, Goldstein A, Wang Y, Sutherland D, Filatotchev I and Ning L), occupies the main central cluster (blue). The node and link statistics reveal a very sparsely connected network and a lack of joint collaborative research in the field.
3.3.2 Institution collaboration. The institutions' collaboration network displayed in Figure 6 has a size of 711, a density of 0.002 and a diameter of 8. The size of each node represents the density of the institution's publications, while the line thickness displays the collaboration ties.
The figure reveals five main research communities (clusters) as indicated by colors. The largest cluster, in red, comprises the six most productive universities that produce FDI by MNCs in emerging market-related research. These include China Europe Business School, the National University of Singapore, Simon Fraser University, Copenhagen Business School, the University of Macau and the University of Sussex.
Interestingly, there are two connected (thick gray line) clusters (orange and blue), represented by the University of Texas Dallas, the Chinese University of Hong Kong and Tsinghua University (the orange cluster) and the University of Sydney, London Business School and Copenhagen University (the blue cluster), which reflect what is termed in the literature the "locally-centralized-globally-discrete" type of collaboration (Zou et al., 2018). The figure also demonstrates that European universities are most likely to collaborate with Asian universities on FDI by MNCs in emerging economies research compared to other countries. Moreover, it also shows no collaboration between institutions from developing and developed countries. Overall, our findings confirm the disaggregated nature of the FDI by MNCs in emerging economies research and a lack of knowledge sharing among most research institutions.
The analysis shows that the collaborative coefficient (CC) and modified collaborative coefficient (MCC) indicators were less than 0.5 (0.410 and 0.438, respectively) (Batcha and Chaturbhuj, 2019). Besides, the collaboration index (CI) is 2.02, which indicates a low level of collaboration in the field of FDI by MNCs in emerging economies. Moreover, we conclude that some authors who usually work together tend to stay in the same work group without establishing new relationships outside their groups. This lack of collaboration affects the institutions' relationships. Therefore, research collaborations occur internally within the same cluster and are typically weak.
The findings from the collaboration network analysis are consistent with our findings in Subsection 3.2.4 (country statistics). These findings also align with other studies stating that the author collaboration network is similar to those of other research fields (Alnajem et al., 2021) and that the most productive authors work independently or tend to work within the same institute (Zou et al., 2018).
Conceptual structure map
Each research domain has various prime research themes. In this study, we use the keyword co-occurrence analysis to build a conceptual structure map and identify relationships and trends on the studied topic. This verification is done through multiple correspondence analysis (MCA) algorithms that use dimensionality reduction techniques to draw a conceptual structure map of the field to cluster common concepts.
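A minimal sketch of this step, assuming the conceptualStructure() helper in bibliometrix; the minimum keyword occurrence and the number of clusters are illustrative parameters rather than the exact settings used here.

```r
# Keyword co-occurrence analysis with multiple correspondence analysis (MCA):
# reduce the keyword space to two dimensions and cluster common concepts.
CS <- conceptualStructure(M,
                          field     = "ID",    # Keywords Plus; "DE" for author keywords
                          method    = "MCA",
                          minDegree = 5,       # minimum keyword occurrences
                          clust     = 4,       # number of clusters to extract
                          stemming  = FALSE,
                          labelsize = 8)
```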
The associations among the categories (clusters) are examined in a two-dimensional plot (Figure 7). Dimension 1 accounts for 17.18% of the data variance, and Dimension 2 accounts for 9.44% of the variance between the individual variables (keywords). Table A1 in the Appendix presents the prominent studies for each cluster. Keywords in our dataset were grouped into four clusters according to the association strength method, as follows:
3.4.1 FDI determinants (red). This cluster focuses on FDI determinants, which affect FDI flows into the market. This cluster is considered the most comprehensive one. It gathers studies discussing, among other topics, the effects of knowledge infrastructure and institutional and human resource distance on FDI entry to the market; governance infrastructure and market size effects on FDI; the effects of human capital and skilled labor abundance on the geographic distribution of FDI; the effects of political risk, economic freedom, trade costs, investment costs and corruption on FDI decision-making; and how FDI location choice is determined based on country-specific advantages.
We found two interconnected subclusters, institutional determinants and economic determinants.
3.4.1.1 Governance determinants. This subcluster focuses on institutional factors that affect and attract FDI into a specific economy, such as control of corruption, voice and accountability, political stability and government effectiveness. This subcluster is composed of three empirical articles: those of Estrin et al. (2009), Globerman and Shapiro (2002) and Busse and Hefeker (2007). Globerman and Shapiro (2002) developed indices for examining the effects of governance infrastructure on FDI inflows and outflows for a broad sample of developed and developing countries over 1995-1997. Moreover, they examined other forms of infrastructure, such as human capital, and the environmental effects on FDI flows. The results reveal that governance infrastructure is a significant determinant of FDI flows. Also, investing in governance infrastructure attracts more FDI inflows and helps domestic MNCs to emerge and invest abroad. Busse and Hefeker (2007) examined the effects of political risk and socioeconomic and institutional factors on FDI inflows, using a data sample of 83 developing countries from 1984 to 2003. The results show that government stability, conflicts, law and order, ethnic tensions, and bureaucratic quality highly influence FDI. Moreover, to a lesser degree, corruption and democratic accountability are important determinants of foreign investment flows. Finally, their results show that political risk and institutional variables have the greatest influence when MNCs decide where to invest abroad. In the same vein, Filippaios et al. (2019) found a positive relationship between a country's political governance and its ability to attract FDI. Estrin et al. (2009) investigated the role of human resource distance in foreign investors' entry decisions by combining institutional and resource-based theories. Analyzing a dataset of 55 countries that invest in six emerging economies in Europe, Asia and Africa, their results show that the larger the distance in formal institutions and resource endowment, the more likely a greenfield entry will be the first choice. However, the impact of distance in informal institutions is found to be curvilinear.
3.4.1.2 Economic determinants. This subcluster gathers studies discussing, among other topics, the role of market size, skilled labor abundance, trade costs, geographic distance, infrastructure development in the host country, economic freedom, human capital and location choice in attracting and encouraging FDI inflows. For example, Noorbakhsh et al. (2001) investigated the influence of economic determinants such as market growth, macroeconomic stability, energy availability and human capital in attracting FDI by analyzing a data sample covering the period from 1980 to 1994 for 36 developing countries in Asia, Africa and Latin America. The results show that human capital is the most important determinant affecting FDI inflows and that its importance has increased over time, while Galan et al. (2007) claimed that when MNCs decide to invest abroad, managers usually prioritize strategic asset-seeking factors when the chosen location is in a developed economy. Meanwhile, a group of authors studied how specific advantages affect MNCs' decisions to locate abroad (De Beule and Duanmu, 2012) and how country-specific linkages and advantages could explain MNC location choice in light of foreign acquisitions (Buckley et al., 2012). De Beule and Duanmu (2012) argued that when MNCs choose a location, they generally search for markets that are big and open, while trade openness is an important factor in enabling exports and imports. Moreover, the availability of natural resources is a significant trigger for choosing a location. Furthermore, Buckley et al. (2012) argued that when MNCs choose a location, natural resources play a key role, together with the open economy factor; both variables are important determinants for choosing new locations.
3.4.2 Entry mode (blue). Choosing an entry mode strategy for a new foreign economy is a crucial decision because the future strategic success or failure in that new economy and international expansion is tied to the chosen strategy (Schellenberg et al., 2018).
Studies in this cluster discuss a wide range of topics, such as MNCs' modes of entry strategies and how host economy institutional context influences the entry strategy and MNC performance (Chan et al., 2008;Filatotchev et al., 2008;Meyer et al., 2009;Meyer and Nguyen, 2005), the effects of cultural distance on the mode of entry and MNC performance and diversification (Tihanyi et al., 2005).
Among the studies in this cluster, we found three literature reviews. The first review focuses on the importance of the resource-based view in IB (Peng, 2001). The second is on the importance of studying developing economies' MNCs to extend recent internationalization theories and models (Cuervo-Cazurra, 2012), and the last is on the strategies of MNCs that invest in developing economies (Spencer, 2008). Sun et al. (2012) discussed why emerging economies' MNCs (EEMNCs) (the case of Chinese and Indian MNCs) choose cross-border mergers and acquisitions as a primary mode of internationalization, by developing a new comparative ownership advantage framework. The findings of Sun et al. (2012) show that value creation, institutional facilitation, dynamic learning and national industrial factor endowments trigger the emerging economies' MNCs' competitive advantage and help them improve their skills and capabilities in cross-border merger and acquisition integration. The relationship between institutional distance and ownership strategy is discussed by Liou et al. (2016), who argued that large institutional distances in home-host countries' institutional environments have opposite effects on EEMNCs' ownership strategies: EEMNCs prefer higher ownership control to benefit from governance efficiency when they invest in a host economy with developed formal institutions, while a large informal institutional distance forces EEMNCs to lower their ownership control to reduce legitimacy concerns.
3.4.3 MNCs and FDI performance (green). Studies in this cluster discuss a wide range of topics such as firm performance, FDI performance and innovation; this cluster also proposes a new conceptual framework that helps to explain the determinants of MNCs' performance in emerging economies (Thakur-Wernz and Samant, 2019) and the effects of globalization on firm performance (Sledge, 2006). While some studies focus on innovation determinants in emerging economies (Wang and Kafouros, 2009), how home economy innovation can influence MNCs' investment strategies abroad (Luo and Wang, 2012), the linkages between host economy industrial policies and MNCs' innovation practices (Jormanainen and Koveshnikov, 2012) and the effects of both international experience and interfirm mobility on innovation performance (Liu et al., 2010; Thakur-Wernz and Samant, 2019), others focus on MNC performance (e.g. Chan et al., 2008; Estrin et al., 2016; García-García et al., 2017; Qian et al., 2008).
3.4.3.1 Innovation performance. The literature on internationalization has given an improved understanding of the determinants of innovation performance. However, little research has been done to understand the determinants of innovation performance in emerging economies. Wang and Kafouros (2009) proposed a framework that evaluates international trade, FDI and research and development (R&D) together, for a better understanding of their effects on innovation performance. This framework can help emerging economies to increase FDI inflows by focusing more on R&D. Innovation performance is not only important for attracting FDI inflows to emerging economies but can also play a key role in shaping home economies' FDI outflow strategies and future international expansion (Luo and Wang, 2012). Moreover, Franco et al. (2011) found that innovation performance helps EEMNCs that invest in emerging economies to create positive spillovers in the host economy and a useful transnational network of knowledge.
3.4.3.2 MNC performance. Performance is an important objective because it is the only way that keeps MNCs innovative, expanding, developed and competitive. This subcluster discusses some factors that affect MNC performance, such as host economy institutional development, regional diversification and speed of internationalization.
MNCs prefer to invest in host economies where the economic and political situations are stable, but does host economy development matter? Chan et al. (2008) found a negative relationship between the level of institutional development and the level of foreign affiliate performance. Qian et al. (2008) found that most MNCs internationalize regionally, not globally, to reduce diversification costs. Moreover, the relationship between regional diversification and MNC performance follows a U curve, meaning the more regional diversification, the less performance. The last topic in this subcluster focuses on the speed of internationalization and its effects on MNC performance in the presence of technological knowledge and experiential knowledge in international markets. Based on a knowledge-based view and organizational learning theory, García-García et al. (2017) proposed an integrated theoretical framework to explain the effects of internationalization speed on MNC performance. The findings show that low and moderate speed levels positively affect performance in the long run, while rapid internationalization with high levels of technological knowledge decreases MNC performance. Moreover, when levels of diversification increase, García-García et al. (2017) found a U-shaped relationship between the speed of internationalization and long-term performance. This explains why MNCs that expand abroad rapidly benefit from increasing the diversity of their host-country portfolio.
3.4.4 Internationalization process (purple). The internationalization process theory (IPT) states that firms are more likely to expand into economies not far from their home economy (Amdam, 2009). The relevance of the IPT to IB and international management (IM) is its ability to explain foreign entry under the assumption that firm internationalization results from decisions based on knowledge accumulated over time (Amdam, 2009). Host economy environments differ from the home economy, so some factors will affect the MNC internationalization process. Santangelo and Meyer (2011) found that high institutional voids in the host economy increase the cost of adaptation for subsidiaries and reduce post-entry adjustments, while high institutional uncertainty increases the chance of entrepreneurial opportunity recognition. Amal et al. (2013) and Bonaglia et al. (2007) found that strong brands, product innovation and important networks help MNCs to internationalize in foreign markets. Regarding EEMNCs, Thite et al. (2016) show that selected Indian MNCs have efficiently leveraged their knowledge in regional markets to expand into developed markets and applied the linkage-leverage-learning (LLL) approach in their internationalization process. Contrarily, Cuervo-Cazurra (2008) found that Latin American firms' internationalization process and the building of their competitive advantages at the international level are influenced by the home country's structural reforms to overcome the limitations of generating FDI and becoming MNCs. The rapid development and international expansion of EEMNCs have questioned the IPT and the ability of classical theories to explain whether all MNCs behave the same regarding the internationalization process or whether EEMNCs' internationalization process shapes new strategies that require new theories (Chittoor, 2009; Guillén and García-Canal, 2009; Masiero et al., 2017). Whereas advanced economies' MNCs internationalize by using their knowledge (gradually learning to internationalize), EEMNCs' motive for internationalization is learning itself. This makes EEMNCs expand globally before reaching the maturity stage in their home market, as the internationalization process helps EEMNCs achieve advantages and capacities they lack as latecomers on the international stage (they internationalize to learn) (Girod and Bellin, 2011). Marchand (2018) shows that EEMNCs should accelerate the pace of their internationalization process to compete and expand globally.
To this end, Marchand (2018) used empirical data to test EEMNCs' practices and strategies in order to update and extend some parts of two major IB/IM theories (the internationalization process and post-acquisition integration). The results suggest that the constructs and frameworks of both theories are still valid: the IPT can still answer the internationalization questions of where (location) and how (entry modes), and the post-acquisition integration framework remains useful, as it organizes the two major decisions to be made (levels of organizational and operational integration).
Clusters 1 and 2 (red and blue, respectively) have the most keywords, which reflects researchers' attention to these themes.
The analysis above shows that almost all studies in each cluster have referred to one or more theoretical standpoints. Following Øyna and Alon's (2018) approach, we identify 17 theories and models that are considered as background for researching the FDI by MNCs in emerging economies ( Figure A1 in Appendix).
The most important theoretical background is the institutional theory referred to by all clusters. It is not a surprising finding because MNCs' choices in internationalizing depend on their capabilities, industry conditions, and formal and informal constraints of their institutional environments (Peng et al., 2008). Also, this theory can explain MNCs' strategies during expansion phases (Sahin and Mert, 2022).
The second framework is the ownership, location and internalization (OLI) framework (referred to by three clusters). It explains MNC's IB activities modes and identifies the MNCs' competitive advantage, which allows them to compete in host economies and facilitates the location choices and entry mode strategies into the host economy (Dunning, 1977).
The third framework is the resource-based view (referred to by three clusters), which identifies MNCs' employed resources to expand in the international markets (Øyna and Alon, 2018).
Finally, the internalization theory (referred to by three clusters) states that firms invest in host markets to exploit tangible and intangible assets to benefit from financial market imperfections which lower investment and operating costs and minimize their risk of business failure through greater income diversification (Buckley and Casson, 1976;Rugman, 1981).
The results obtained from the keyword analysis and the application of the statistical algorithms helped us answer the question "What is the conceptual structure of this literature?"
Conclusions
We conducted a bibliometric analysis of FDI by MNCs' publications in the IB field focused on emerging economies. The review's main objectives were to identify the most productive journals, the main authors and the articles that contributed the most to the field development and growth and besides, to show the intellectual structure of this literature and the potential opportunities for further research.
The first contribution of this research concerns the methodology. We analyzed the relationships between authors and keywords using statistical techniques and mathematical algorithms to clarify how the field is structured. A comprehensive bibliometric analysis technique had not been used before on the studied topic to draw a future research agenda using the conceptual structure map. Previous bibliometric reviews were limited to showing the change in the number of articles over time, the articles' relevance through citations, the most productive journals and author collaboration, among other descriptive results. Moreover, they often studied specific topics relating to FDI, such as taxes, inward FDI to China and OFDI from or to developing economies.
The second contribution is the application of bibliometric laws (those of Lotka and Bradford), which have not been previously carried out in this field. In this regard, we can assume that our dataset follows those rules.
Concerning the descriptive part of the study, we can draw some conclusions. First, we find a progressively growing literature on FDI by MNCs in emerging economies research. Specifically, since 2007, academics have increased studies about location choice, the impacts of FDI and the need for model developments and new theories to explain emerging economies' internationalization.
We find that a small number of authors dominate the field, especially regarding citations, as do the countries where the publications are produced. There is a high geographic concentration of this topic's research: in total, 18% of the countries are responsible for 49% of all publications and received 74% of the citations.
Collaborations between research groups are also unusual. Authors who work together typically do it within the same group rather than establishing new networks outside. Moreover, we find a high level of single-authored documents compared to other fields.
On the other hand, journals from different disciplines are interested in this research, mainly economics, international business, and planning and development. However, it should be highlighted that a few journals of high perceived quality are the target for publications.
Another contribution of this review is the co-word analysis theme (the intellectual structure theme). From the conceptual map, we can identify four clusters related to different aspects of the FDI by MNCs in emerging economies: (1) focused on MNCs' internationalization processes and approaches, (2) related to entry mode strategies, (3) focused on FDI and MNCs' performance in emerging economies and (4) related to determinants and factors affecting FDI decision-making for the host economy.
Future research directions and limitations
By mapping the FDI field through the literature analysis, we can answer our last research question: What are the potential opportunities for FDI future research?
More research can be done to examine whether existing internationalization theories can explain the internationalization process of EEMNCs, either in another emerging country or overseas in general. Perhaps changing the object of study can lead to different results. As has been seen, many of the authors addressing the topic are from China, Brazil or India. Perhaps they can delve into this topic by conducting qualitative studies that shed light on this aspect and, furthermore, examine to what extent the "going global together" phenomenon is a relevant entry strategy. Also, more research can be done to evaluate the effects of governmental collaboration between the home country and the host country in reducing market uncertainties, and to investigate recent determinants of FDI in emerging economies, such as exchange rate fluctuations, political and economic risks regarding host economies and the effects of terrorism on attracting FDI from developed economies. Finally, more studies can address the relationship between direct taxes and FDI and investigate the performance of emerging markets' MNCs.
On the other hand, more efforts are needed to increase international scientific collaboration. Multinational partnerships will improve the effectiveness and quality of FDI research, especially on this topic. More studies should also encourage the adoption of collaboration indices (CC, MCC and CI) as measures for evaluating researchers across disciplines.
Despite this review's contributions to IB, it has some limitations. Concerning technical limitations, the results' interpretation might be limited by our human skills as researchers in text and information processing using algorithms and statistical calculations. Moreover, we used a limited number of databases (WoS and Scopus); we could have considered other databases, such as the Journal Storage digital library (JSTOR), Sage publishing (SAGE) or Google Scholar. Finally, regarding the inclusion criterion, we only considered English-language articles, which might underestimate research written in other languages. | 2023-02-02T16:14:35.379Z | 2023-01-31T00:00:00.000 | {
"year": 2023,
"sha1": "2b08b6dd0165cf9cdbaf44395c7f038331fc3aeb",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/IJOEM-12-2021-1878/full/pdf?title=foreign-direct-investment-by-multinational-corporations-in-emerging-economies-a-comprehensive-bibliometric-analysis",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b061397bb88000bf3c1562e5aeac5fed46b4f6fe",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
69171754 | pes2o/s2orc | v3-fos-license | Cellular energy stress induces AMPK-mediated regulation of glioblastoma cell proliferation by PIKE-A phosphorylation
Phosphoinositide 3-kinase enhancer-activating Akt (PIKE-A), which associates with and potentiates Akt activity, is a pro-oncogenic factor that plays a vital role in cancer cell survival and growth. However, PIKE-A's physiological functions under energy/nutrient deficiency are poorly understood. The AMP-activated protein kinase (AMPK) is an evolutionarily conserved serine/threonine kinase that is a principal regulator of energy homeostasis and has a critical role in metabolic disorders and cancers. In the present study, we show that cellular energy stress induces PIKE-A phosphorylation mediated by AMPK activation, thereby preventing its carcinogenic action. Moreover, AMPK directly phosphorylates PIKE-A Ser-351 and Ser-377, which then become accessible for interaction with 14-3-3β, in turn stimulating nuclear translocation of PIKE-A. Nuclear PIKE-A associates with CDK4, disrupts the CDK4-cyclin D1 complex and inhibits the Rb pathway, resulting in cancer cell cycle arrest. Our data uncover a molecular mechanism and the functional significance of PIKE-A phosphorylation in response to cellular energy status mediated by AMPK.
Introduction
Phosphoinositide 3-kinase enhancer-activating Akt (PIKE-A) belongs to the PIKE family, a group of GTPases that interact with phosphoinositide 3-kinase (PI3K) and activate the PI3K/Akt pathway. PIKE-A is a proto-oncogene that has been reported to be upregulated in many cancers, including brain, breast, prostate, colon, ovary, liver, stomach, lung, cervix, and kidney, promoting glioblastoma cell proliferation and invasion [1-4]. Like a variety of known proto-oncogenes, PIKE-A usually acts through association with a multitude of binding partners, such as Akt, Unc-5 Netrin Receptor B (UNC5B), Focal Adhesion Kinase (FAK), cyclin-dependent kinase 5 (CDK5), NFκB, and STAT5a, interacting with multiple signaling pathways to exert its functions in cancer [5-10]. PIKE-A is localized in both the cytoplasm and nucleus, and its cytoplasm-nucleus shuttling correlates with post-translational modification and physiological or pathophysiological functions. It is now clear that the phosphorylation of PIKE-A at S279 by CDK5 regulates nuclear translocation of PIKE-A and mediates growth factor-induced migration and invasion of human glioblastoma cells 8. Our group's previous studies showed that Fyn phosphorylates PIKE-A on both Y682 and Y774, which mediates its interaction with different partners, thereby promoting cell survival and adipogenesis 9,11.
The AMP-activated protein kinase (AMPK) is a crucial cellular energy sensor that plays a key role in adaptive responses to energy stress and in energy homeostasis by promoting catabolic pathways of ATP production 12. AMPK is activated by starvation or other stresses (e.g., glucose deprivation, hypoxia, ischemia, and metabolic poisons treatment) 13. Moreover, the adipokines leptin and adiponectin, cytokines such as interleukin-6 and ciliary neurotrophic factor, plant products such as berberine, resveratrol, and (−)-epigallocatechin-3-gallate (EGCG), and small molecules such as metformin, 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside (AICAR), thiazolidinedione (TZD), and A-769662 can all activate AMPK 14. Upon activation, AMPK, a heterotrimeric Ser/Thr kinase complex, phosphorylates its targets to dramatically stimulate catabolic processes and, at the same time, inhibit anabolic processes to restore cellular energy homeostasis, chronically altering gene transcription and controlling cellular fate 12. AMPK serves as a metabolic tumor suppressor that reprograms cellular metabolism and triggers a metabolic checkpoint on the cell cycle, affecting cell proliferation, cell growth, cell survival, and autophagy through its actions on mTORC1, p53, and other modulators 15. Recently, we provided new evidence supporting that the association between AMPK and PIKE-A is regulated by phosphorylation of PIKE-A mediated by Fyn, which is critical for inhibition of AMPK kinase activity, leading to cell proliferation arrest 3. However, the precise molecular mechanisms of tumorigenesis driven by PIKE-A phosphorylation in the nucleus remain largely unknown.
14-3-3 proteins are highly expressed in human glioma U87 cells, while they cannot be detected in normal human astrocyte SVGp12 cells 16. They have gained a crucial position in cell biology owing to their involvement in many vital cellular processes, such as signal transduction, metabolism, transcription, apoptosis, protein trafficking, and cell cycle regulation 17,18. In general, however, they regulate the subcellular localization, activity, or stability of target proteins. This has raised the hypothesis that they act as crucial anchor proteins in the cytoplasm, blocking their target proteins from being imported into the nucleus. There exist at least seven separate genes that encode seven 14-3-3 isoforms, including β, γ, ε, ζ, σ, τ, and η, in mammalian cells. However, different 14-3-3 isoforms may act as oncogenes or tumor suppressors in different types of cancers 19.
CENTG1, the gene encoding PIKE-A, was observed to be co-amplified with CDK4 20 years ago; a recent report reveals that hsa-miR26a, CDK4, and PIKE-A comprise a functional integrated oncomir/oncogene DNA cluster, which promotes GBM tumorigenesis 23. Qi et al. demonstrated that overexpression of PIKE-A or CDK4 alone in the TP53/PTEN double-knockout GBM mouse model results in longer latency of glioma onset and longer survival relative to co-expression of PIKE-A and CDK4 (ref. 21). These results reveal that PIKE-A acts coordinately with CDK4 amplification or overexpression to drive GBM tumorigenesis. We previously showed that PIKE-A inhibits AMPK by direct interaction, which is mediated by the upstream tyrosine kinase Fyn; in parallel, Fyn phosphorylates the tumor suppressor LKB1. These events coordinately lead to hindering of the tumor suppressive actions of AMPK 24.
In this report, we provide new evidence of a feedback regulation loop between AMPK and PIKE-A, showing that the phosphorylation status of PIKE-A mediated by AMPK is critical for its association with 14-3-3 and ultimately results in PIKE-A nuclear translocation. In the nucleus, the interaction between PIKE-A and CDK4 is enhanced by PIKE-A's phosphorylation status and results in the inhibition of CDK4 kinase activity, leading to cell proliferation arrest. This discovery highlights a previously unappreciated regulation of PIKE-A by cellular energy status.
AMPK phosphorylates PIKE-A on serine 351 and 377 residues
All cells coordinate cellular energy status with cell growth, which is an energy-consuming process. AMPK directly phosphorylates key factors involved in multiple pathways to restore energy balance under energy stress 15,25. We investigated whether PIKE-A could be phosphorylated by AMPK. Active AMPKα1β1γ1 strongly phosphorylated PIKE-A, as determined by an in vitro phosphorylation assay (Fig. 1a). These results suggested that PIKE-A might be a substrate of AMPK. We then performed an in vitro phosphorylation assay using truncations and domains of PIKE-A and validated that the PIKE-A phosphorylation sites are in the PH domain (Fig. 1b and Figure S1A). When exploring the sequence of the PIKE-A PH domain, we found that there are two optimal AMPK consensus substrate motifs around Ser-351/377 (Fig. 1c). Next, we tested whether S351/377 on PIKE-A are the phosphorylation sites. When S351/377 are mutated to Ala, the phosphorylation of PIKE-A by AMPK in vitro is completely abolished (Fig. 1d). To confirm this result, we examined the phosphorylation of PIKE-A using a pan-AMPK substrate antibody (Figure S1B). Indeed, the S351/377A mutant demonstrated no phosphorylation signal in cells transfected with active AMPKα (Fig. 1e). These results suggest that AMPK phosphorylates PIKE-A at Ser-351 and -377.
AMPK phosphorylates PIKE-A and stimulates its nuclear translocation under cellular energy stress
As expected, serum starvation or hypoxia increased phosphorylation of AMPKα and in turn activated AMPK, which directly phosphorylates the PH domain of PIKE-A (Figure S1A and S1B). Previous studies show that PIKE-A is localized both in the cytoplasm and the nucleus 8,26. To explore the physiological consequence of PIKE-A phosphorylation by AMPK, we first monitored PIKE-A subcellular localization in LN229 cells under serum starvation or hypoxia by cytoplasmic and nuclear fractionation. The results showed that PIKE-A was translocated into the nucleus under energy stress (Fig. 2a, b). We blotted PARP and tubulin as the nuclear and cytoplasmic markers, respectively, showing minimal cross-contamination between these fractions.
AMPK directly monitors the cellular ATP/AMP ratio and regulates cell metabolism and growth in response to cellular energy status 15. Therefore, we explored the role of AMPK in PIKE-A subcellular distribution. Notably, knocking down AMPK abolished nuclear PIKE-A (Fig. 2c). We then examined PIKE-A subcellular localization in LN229 cells in the presence or absence of numerous small molecules that can activate AMPK. PIKE-A was usually localized in the cytoplasm but translocated into the nucleus when cells were treated with AICAR, metformin, A23187, or H2O2, with a concomitant increase in phosphorylation of Thr-172 in AMPKα (Figure S2C).
To corroborate these findings, we next performed mutational analysis and generated various serine phospho-deficient S/A mutants and serine phospho-mimetic S/D mutants to determine the subcellular localization of PIKE-A. We found that substitution of S351 or S377, alone or together, with D resulted in increased PIKE-A nuclear translocation (Fig. 2d). Similarly, immunofluorescence staining revealed that the PIKE-A S351A, S377A, and S351/377A mutants were largely present in the cytoplasm. However, the PIKE-A S351D, S377D, and S351/377D mutants were predominantly present in the nucleus (Fig. 2e), suggesting that the phosphorylation of PIKE-A mediated by AMPK leads to accumulation of phospho-PIKE-A in the nucleus. These data indicate that PIKE-A nuclear translocation correlated well with increased PIKE-A phosphorylation in an AMPK-dependent manner.
14-3-3β interacts with AMPK-phosphorylated PIKE-A and stimulates its nuclear translocation
Through a phosphoserine/threonine recognition motif, 14-3-3 proteins act as anchor proteins that play important roles in many regulatory processes, including intracellular protein targeting and translocation 27,28. To explore whether PIKE-A interacts with 14-3-3β, we conducted a co-immunoprecipitation assay and found that PIKE-A physically associated with 14-3-3β and that serum starvation or hypoxia stimulation enhanced this interaction (Fig. 3a, b). Similarly, in the presence of AICAR, metformin, A23187, or H2O2 to stimulate PIKE-A phosphorylation, the binding between PIKE-A and 14-3-3β was increased (Figure S3A). These data suggest that the association between PIKE-A and 14-3-3β is mediated by AMPK phosphorylation.
To further confirm that the serine phosphorylation status of PIKE-A tightly correlates with the interaction between PIKE-A and 14-3-3β, we next performed a binding assay using PIKE-A mutants and 14-3-3β. We found that the phospho-mimetic PIKE-A mutant (S351/377D, SD) increased the interaction slightly, while the phospho-deficient PIKE-A mutant (S351/377A, SA) resulted in decreased interaction compared to PIKE-A WT (Fig. 3c). Notably, the depletion of 14-3-3β abolished the PIKE-A nuclear translocation that was escalated by AICAR, indicating that 14-3-3β is indispensable for AMPK-dependent PIKE-A nuclear translocation (Fig. 3d). Together, our data suggest that 14-3-3β interacts with AMPK-phosphorylated PIKE-A and stimulates its nuclear translocation.
AMPK-mediated PIKE-A phosphorylation stimulates its association with CDK4 in nucleus
Qi's study showed that PIKE-A directly interacts with CDK4 to form a complex, which promotes cell proliferation and GBM tumorigenesis in vitro and in vivo 21. Accordingly, we investigated the pathological consequence of PIKE-A nuclear translocation under energy stress and observed that PIKE-A associated tightly with CDK4 in LN229 cells under serum starvation in whole cells, particularly in the nucleus, but not in the cytoplasm (Fig. 4a, b). Similarly, in the presence of AICAR, metformin, A23187, or H2O2 stimulation, the interaction was enhanced in the nucleus and whole cell, but not in the cytoplasm (Figure S4A and S4B). We next measured the interaction between PIKE-A WT or SA mutants and CDK4 when co-transfected with the constitutively active (T172D) or inactive (T172A) mutant of AMPKα. The GST pull-down assay showed that, compared with the prominent interaction between PIKE-A WT and CDK4, the phospho-deficient PIKE-A mutant SA displayed lower binding affinity to CDK4. AMPKα T172D provoked the PIKE-A WT/CDK4 association, but its stimulatory effect was reduced when PIKE-A SA was employed. As expected, PIKE-A SA barely interacted with CDK4 in the presence of AMPKα T172A (Fig. 4c). Furthermore, when CDK4 was co-transfected with PIKE-A WT, SA or SD, the co-immunoprecipitation results showed that the phospho-mimetic SD mutant of PIKE-A escalated the interaction between CDK4 and PIKE-A, whereas the SA mutant blocked this interaction (Fig. 4d). Hence, these data strongly suggest that AMPK phosphorylation regulates the interaction between PIKE-A and CDK4.
AMPK phosphorylation of PIKE-A prevents CDK4-Rb signaling pathway
To explore the effects of PIKE-A, CDK4, and their combination on downstream signaling cascades, we performed Rb phosphorylation analysis under different AMPK activation conditions. Rb is one of the major downstream targets of CDK4, and p-Rb signals are prominently elevated when CDK4 is overexpressed 22. Our results showed that induction of AMPK activity by serum starvation and the well-characterized stimuli blocked Rb phosphorylation (Fig. 5a, b). In contrast, when we knocked down AMPK, Rb phosphorylation was elevated (Fig. 5c). We then investigated the effect of AMPK-mediated phosphorylation of PIKE-A on the CDK4-Rb signaling cascade. Overexpression of PIKE-A WT or SA, but not SD, strongly elevated Rb phosphorylation relative to GFP empty vector-transfected cells, and these effects were abolished when CDK4 was knocked down (Fig. 5d).
To examine whether PIKE-A directly inhibits CDK4 activity, we performed an in vitro CDK4 kinase assay employing the Rb peptide as a CDK4 substrate. When purified GFP-PIKE-A WT or mutant (SA and SD) recombinant proteins were incubated with the active Cyclin D1/CDK4 complex, the immunoblotting assay showed that PIKE-A SD strongly blocked CDK4 activity compared with PIKE-A WT or SA, suggesting that phosphorylated PIKE-A binds to CDK4 and inhibits its kinase activity (Fig. 5e). Collectively, our data support that PIKE-A phosphorylation suppresses the CDK4-Rb pathway, which is mediated by cellular energy stress-induced AMPK activation.
AMPK-phosphorylated PIKE-A suppresses cell proliferation in GBM cells
AMPK directly phosphorylates PIKE-A on S351/377 and affects its translocation into the nucleus. Therefore, we explored whether S351/377 phosphorylation affects the biological consequences of PIKE-A in glioblastoma cells. We performed cell proliferation, cell viability, and cell cycle assays in LN229 GBM cells transfected with GFP vector, GFP-PIKE-A WT, SA, or SD mutant, respectively. As expected, PIKE-A WT strongly conferred cell proliferation potential. However, the PIKE-A SD mutant, which mimics PIKE-A phosphorylation by AMPK, lost the ability to promote cell proliferation and cell viability (Fig. 6a and Fig. S5A). Furthermore, we observed a similar pattern in the G0/G1 to S phase transition (Fig. 6b). Next, we monitored the effect of PIKE-A WT or the SA mutant on cell proliferation, cell viability, and the cell cycle when co-transfected with active AMPK. The results show that active AMPK strongly inhibited cell proliferation, cell viability, and cell cycle progression in cells expressing PIKE-A WT but not in those expressing the PIKE-A SA mutant (Fig. 6c, d and Fig. S5B). Through survival analysis in GBM (WHO grade IV), we found that low levels of AMPKα phosphorylation (p-AMPKα T172) are significantly correlated with a worse prognosis in the TCPA (The Cancer Proteome Atlas, https://tcpaportal.org/tcpa/) datasets, which are based on reverse phase protein array (RPPA) data from TCGA (Fig. S5C). Moreover, higher mRNA levels of PIKE-A (AGAP2) are correlated with unfavorable clinical outcomes based on the publicly available cBioPortal for Cancer Genomics (http://www.cbioportal.org/) (Fig. S5D). Therefore, our findings indicate that AMPK-phosphorylated PIKE-A induces cell cycle arrest and inhibits cell proliferation.
Discussion
Extensive studies have revealed that PIKE-A has an essential function in promoting cancer cell survival and growth and preventing cell apoptosis [1][2][3][4] . Recent studies have revealed that PIKE-A can be phosphorylated by CDK5, Akt, and Fyn on Ser-279 (ref. 8), Ser-629 (ref. 5)/Ser-472 (ref. 7), and Tyr-682/774 (ref. 11), respectively. In addition, it has been shown that PIKE-A can also be regulated by extracellular signals, such as epidermal growth factor (EGF). In this study, we show that intracellular metabolic/energy stress regulates PIKE-A phosphorylation and nuclear translocation mediated by AMPK activation. Therefore, PIKE-A can integrate and coordinate both extracellular and intracellular signals under energy stress. AMPK detects cellular energy stress, modulates the cellular metabolic balance, and limits cell growth 13 . AMPK accomplishes its regulatory functions either via direct and rapid phosphorylation of metabolic enzymes or by indirectly eliciting target gene expression 15,29 . Our present study identifies that AMPK directly phosphorylates PIKE-A in response to cellular energy stress. It is worth noting that the phosphorylation sites of PIKE-A, Ser-351 and Ser-377, are located in the PH domain (Fig. 1). Notably, our subsequent data indicated that these two phosphorylation sites of PIKE-A display the same biological function. Our previous report showed that the GTPase domain of PIKE-A is responsible for binding AMPK 24 . We propose that this binding is conducive and sufficient for PIKE-A phosphorylation by AMPK. Further evidence that PIKE-A phosphorylation has a physiological function in the cellular energy response is that its nuclear translocation depends on AMPK activation (Fig. 2). The mechanisms underlying the nuclear translocation of PIKE-A and its role in tumorigenesis were not previously well understood. We demonstrate here that the nuclear trafficking of PIKE-A is regulated by 14-3-3, which contains a bipartite nuclear localization signal (NLS), and consequently promotes tight PIKE-A binding to CDK4 in the nucleus (Figs. 3 and 4). This interaction shields CDK4, disrupts CDK4-CyclinD1 complex formation and Rb activity, and further induces cell cycle arrest (Figs. 5 and 6). Therefore, PIKE-A phosphorylation is likely to play a role in maintaining cellular energy homeostasis. Energy stress-induced PIKE-A phosphorylation and nuclear translocation mediated by AMPK activation reduce energy expenditure and cell growth, possibly by inhibiting the CDK4-Rb pathway. Our study uncovers a mechanism by which cellular energy status crosstalks with PIKE-A through AMPK regulation.

Fig. 3 PIKE-A phosphorylation regulates its association with 14-3-3β. a Serum starvation enhances the interaction of PIKE-A and 14-3-3β. HEK293 cells were transfected with GST-PIKE-A and GFP-14-3-3β and then serum starved for 12 h, followed by immunoprecipitation. Quantification is shown at the bottom. b Hypoxia enhances the interaction of PIKE-A and 14-3-3β. HEK293 cells were transfected with GST-PIKE-A and GFP-14-3-3β and then exposed to hypoxia (1% O2) for 2 or 12 h, followed by immunoprecipitation. Quantification is shown at the bottom. c The myc-PIKE-A WT or mutants (S351A, S351D, S377A, S377D, SA, and SD) were co-transfected with GFP-14-3-3β into HEK293 cells. GFP-14-3-3β was immunoprecipitated and the co-precipitated proteins were analyzed using an anti-myc antibody. Quantification is shown at the bottom. d 14-3-3β regulates AMPK-phosphorylated PIKE-A nuclear translocation. 14-3-3β shRNA or control vector-transfected LN229 cells were treated with AICAR or not, followed by subcellular fractionation. The purity of the cytosolic and nuclear fractions was confirmed by the absence of α-tubulin in the nuclear fraction and PARP in the cytosolic fraction. Quantification is shown at the bottom. All results are presented as mean ± SD from three independent experiments. **p < 0.01; ***p < 0.001; ns not significant
Considering the general role of PIKE-A in promoting cell proliferation and inhibiting apoptosis through modifications such as phosphorylation and cleavage, it is not surprising that PIKE-A phosphorylation is coordinated with cellular stress status. When cell conditions are normal or favorable, PIKE-A activates Akt and promotes cell proliferation. However, cell proliferation should not proceed if cellular energy is limited, and such conditions allow AMPK activation to maintain basic survival. Therefore, the direct phosphorylation of PIKE-A in an AMPK-dependent manner provides a mechanism to ensure that cell proliferation occurs only when favorable growth conditions are available. The phosphorylation of PIKE-A by AMPK adds new dimensions to both energy stress-mediated regulation of PIKE-A and the mechanisms by which AMPK controls cell growth. Indeed, phosphorylation of PIKE-A has the significant benefit of blocking cell proliferation and provides a new target for therapeutic intervention in cancer. Thus, high activation of AMPK may pave the way for improved outcomes for cancer patients with high PIKE-A expression.
Materials and methods
Cell culture and reagents
LN229 glioblastoma cells were maintained in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum at 37 °C with a 5% CO2 atmosphere in a humidified incubator. This cell line was a gift from Dr. Keqiang Ye's lab and has been previously described 24,26 . AICAR, Metformin hydrochloride, Calcium Ionophore A23187, antibody against GFP, and GST-CDK4/CyclinD1 protein were from Sigma-Aldrich (St. Louis, MO, USA).

Fig. 4 PIKE-A phosphorylation by AMPK stimulates its association with CDK4. a Serum starvation enhances the interaction of PIKE-A and CDK4. LN229 cells were serum starved for 12 h, followed by immunoprecipitation with anti-CDK4 antibody and immunoblotting using anti-PIKE-A and anti-phospho-(Ser/Thr) AMPK substrate antibodies. Quantification is shown at the bottom. b Serum starvation enhances the interaction of PIKE-A and CDK4 in the nucleus. LN229 cells were serum starved for 12 h, followed by subcellular fractionation. Cytosolic and nuclear cell lysates were then immunoprecipitated with anti-CDK4 antibody and immunoblotted using anti-PIKE-A and anti-phospho-(Ser/Thr) AMPK substrate antibodies. Quantification is shown at the bottom. c HEK293 cells were co-transfected with Flag-CDK4 and GST-PIKE WT or the phospho-deficient GST-PIKE SA mutant in the presence of myc-AMPKα T172D or myc-AMPKα T172A. PIKE-A was pulled down with glutathione beads and co-precipitated proteins were analyzed by immunoblotting with anti-Flag or anti-phospho-(Ser/Thr) AMPK substrate antibody. The expression levels of transfected constructs were analyzed by immunoblotting. Quantification is shown at the bottom. d Different GFP-tagged PIKE-A WT and mutants were transfected into HEK293 cells. Cell lysates were immunoprecipitated with anti-GFP (or anti-CDK4) antibody, and co-precipitated proteins were analyzed by immunoblotting with anti-CDK4 (or anti-GFP) antibody. Quantification is shown at the bottom. All results are presented as mean ± SD from three independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001; ns not significant
In vitro phosphorylation assay
After transfection, 500 μg protein from each sample were prepared and immunoprecipitated by adding 2 μl anti-GFP antibody and 25 μl of protein A-G agarose (Santa Cruz) at 4 °C for 3 h.Phosphorylation reactions were performed with immunoprecipitated GFP-PIKE-A from 500 μg total protein and 0.1 μg of active AMP-Kα1β1γ1 (SignalChem) in a final volume of 50 μl 10× AMPK kinase buffer (5 mM MOPS, pH 7.2, 2. β-glycerophosphate, 1 mM EGTA, 0.4 mM EDTA, 5 mM MgCl 2 , 0.05 mM DTT) and 1 μl [γ-32P]ATP (Perkin Elmer).Selected reactions were carried out in the presence or absence of active AMPKα1β1γ1.After incubation at 30 °C for 30 min, the reactions were terminated by addition of 2.5 μl of 5× sodium dodecyl sulfate (SDS) buffer, and the samples were subjected to 12.5% SDS-polyacrylamide gel electrophoresis (PAGE).The gels were dried with a Model 583 Gel Dryer (Bio-Rad) and phosphorylated proteins were visualized by autoradiography.
Cytoplasmic and nuclear fractionation
LN229 cells were collected and washed once with ice-cold 1× phosphate-buffered saline (PBS). The cell pellet was resuspended in CER I buffer. The cytoplasmic and nuclear fractions were prepared as described in the manufacturer's protocol (PIERCE, Rockford, IL, USA, NE-PER, nuclear and cytoplasmic extraction reagent).
Co-immunoprecipitation and in vitro-binding assays
These methods were performed essentially as described previously 30 .
Cell proliferation and cell viability assay
5 × 10^4 cells were seeded in a six-well plate and cultured at 37 °C for 3 days. Cell proliferation was determined by recording cell numbers 1, 2, and 3 days post-seeding and normalizing to the cell number on day 0. For the viability assay, 5 × 10^3 cells were seeded in a 96-well plate 24 h before the assay started and were cultured at 37 °C for 3 days. Cell viability was determined using the CellTiter 96® AQueous One Solution Cell Proliferation Assay (MTS) (Promega).
Cell cycle assay
LN229 WT, SA, and SD rescue cells (1 × 10^6 cells) were harvested by trypsinization and washed twice with PBS. After centrifugation, the cells were resuspended in 5 ml of 70% ethanol at 4 °C for 4 h. After rinsing with PBS, the fixed cells were resuspended in PBS containing 50 μg/ml RNase A and 50 μg/ml propidium iodide and incubated at 4 °C for 4 h. The stained cells were passed through a nylon-mesh sieve to remove cell clumps and were analyzed by a FACScan flow cytometer.
In vitro CDK4 kinase assay
GFP-PIKE-A WT, SA, or SD was transfected into HEK293 cells, and 1 mg of protein from each sample was prepared and immunoprecipitated by adding 2 μl anti-GFP antibody and 25 μl of protein A-G agarose (Santa Cruz) at 4 °C for 3 h. The CDK4 kinase assay was performed with the GST-tagged recombinant active CDK4/Cyclin D1 complex (Sigma-Aldrich), a recombinant fragment of amino acids 769-921 mapping within the carboxy-terminal domain of Rb (Santa Cruz), and GFP-PIKE-A WT, SA, or SD extract with 10 mM ATP and 25 mM MOPS (pH 7.2), 12.5 mM β-glycerolphosphate, 25 mM MgCl2, 5 mM EGTA (pH 8.0), 2 mM EDTA (pH 8.0), and 0.25 mM DTT, and incubated at 30 °C for 30 min. The reactions were stopped by the addition of sample buffer containing 125 mM Tris-HCl (pH 6.8), 10% β-mercaptoethanol, 9.2% SDS, 0.04% bromophenol blue, and 20% glycerol and boiled for 5 min. Samples were resolved by SDS-PAGE, and phosphorylation of Rb was measured by western blot (WB) analysis.
Statistical analysis
Data are shown as mean ± SD from three independent experiments. P values of less than 0.05 were considered statistically significant (*p < 0.05). Statistical differences were calculated with an unpaired two-tailed Student's t-test using GraphPad Prism software.
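For readers reproducing the statistics outside GraphPad Prism, an equivalent unpaired two-tailed Student's t-test can be run in Python as sketched below; the two arrays are hypothetical triplicate measurements used only for illustration, not data from this study.

```python
from scipy import stats

# Hypothetical triplicate measurements from two conditions (not study data)
control = [1.00, 0.95, 1.08]
treated = [0.62, 0.58, 0.70]

# Unpaired two-tailed Student's t-test with equal variances assumed
t_stat, p_value = stats.ttest_ind(control, treated, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 would be reported as *
```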
Fig. 1 AMPK phosphorylates PIKE-A on serine 351 and 377 residues. a AMPK directly phosphorylates PIKE-A. Purified GFP-tagged PIKE-A recombinant protein was incubated with active AMPK (α1β1γ1). Phosphorylated proteins were detected by autoradiography. b A series of GST-tagged PIKE-A domains were incubated with active AMPK (α1β1γ1) and detected by autoradiography. c Serine 351 and 377 residues are potential phosphorylation sites of PIKE-A. These phosphorylation sites are colored in red and compared with the consensus AMPK substrate motif. d AMPK phosphorylates PIKE-A on serine 351 and 377 residues. Purified GST-tagged PIKE-A PH domain WT and mutants (S351A, S377A, and SA) were incubated with active AMPK (α1β1γ1). Phosphorylated proteins were detected by autoradiography. Quantification is shown at the right. e PIKE-A is a substrate of AMPK in vivo. HEK293 cells were co-transfected with GFP-PIKE-A WT or mutant (S351A, S377A, and SA), and either constitutively active AMPK (myc-AMPK T172D) or inactive AMPK (myc-AMPK T172A). PIKE-A was then precipitated and its phosphorylation was detected using an anti-phospho-(Ser/Thr) AMPK substrate antibody. Quantification is shown at the bottom. All results are presented as mean ± SD from three independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001; ns not significant
Fig. 5 PIKE-A phosphorylation regulates the CyclinD1/CDK4 pathway. a Serum starvation decreases Rb phosphorylation. LN229 cells were serum starved for 12 h. Cell lysates were immunoblotted using anti-p-Rb antibody. Quantification is shown at the bottom. b AMPK activators inhibit Rb phosphorylation. LN229 cells were treated with AICAR, Metformin, A23187, and H2O2. Rb, as a downstream effector of CDK4, was then analyzed by immunoblotting. Quantification is shown at the bottom. c Knockdown of AMPKα increases Rb phosphorylation. Rb phosphorylation was analyzed by immunoblotting in AMPKα shRNA or control vector-transfected LN229 cells. Quantification is shown at the bottom. d PIKE-A phosphorylation suppresses CDK4-mediated Rb phosphorylation. In LN229 cells, CDK4 shRNA or control shRNA was transfected together with GFP-PIKE-A WT and mutants (SA and SD). Cell lysates were immunoblotted using anti-p-Rb antibody. Quantification is shown at the bottom. e PIKE-A phosphorylation inhibits CDK4 activity in vitro. Purified GFP-PIKE-A WT or mutant (SA and SD) recombinant proteins were incubated with the active CyclinD1/CDK4 complex, which was then incubated with Rb peptide in kinase reaction buffer for 30 min at 30 °C. Rb phosphorylation status was analyzed by immunoblotting. Quantification is shown at the bottom. All results are presented as mean ± SD from three independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001; ns not significant
Fig. 6 AMPK-phosphorylated PIKE-A suppresses cell proliferation in GBM cells. a LN229 cells were transfected with GFP-PIKE-A WT and mutants (SA and SD), and cell proliferation was tested by cell counting. b LN229 cells were transfected with GFP-PIKE-A WT and mutants (SA and SD), and cell cycle distributions were analyzed by flow cytometry. Upper: Percentages of cells in the G0/G1, S, and G2/M phases are indicated. Lower: The cell proliferation index (PI = (S + G2/M)/(G0/G1 + S + G2/M) × 100%) was calculated and is shown. c LN229 cells were co-transfected with GFP-PIKE-A WT or SA mutant and the constitutively active mutant of AMPKα, and cell proliferation was determined by cell counting. d LN229 cells were co-transfected with GFP-PIKE-A WT or SA mutant and the constitutively active mutant of AMPKα. Cell cycle distributions were analyzed by flow cytometry. Upper: Percentages of cells in the G0/G1, S, and G2/M phases are indicated. Lower: The cell proliferation index (PI) was calculated as above and is shown. All results are presented as mean ± SD from three independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001; ns not significant
"year": 2019,
"sha1": "e91e1cb30ace301152bad5f5a35dad1bfc253799",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41419-019-1452-1.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0362264a1f1111f4cc772be20ff7b3a3b3d8c09f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
A Land Space Development Zoning Method Based on Resource–Environmental Carrying Capacity: A Case Study of Henan, China
As a key element in China’s spatial planning, the development zoning of land space has become a focus of China’s current activity. During its rapid social and economic development, China has faced severe and diverse challenges regarding sustainable development, such as farmland occupation, environmental degradation, disorderly urban land expansion, etc. Against this backdrop, research on the linkage between resource–environmental carrying capacity (RECC) and the development zoning of land space in the process of sustainable development has received increased attention, and an accurate evaluation of the RECC would provide useful guidance for Chinese policy makers to carry out the development zoning of land space. This paper uses Henan Province as an example to construct a comprehensive evaluation model of “resource carrying capacity (RCC)–eco-environmental carrying capacity (EECC)–socio-economic carrying capacity (SECC)”, which calculates the level of RECC in a provincial area. In addition, this paper designs a correlation model between the RECC and the development zoning of land space, which uses a three-dimensional magic cube evaluation model to analyze the development zoning layout of land space. The results showed that a geographical pattern exists wherein the southwestern areas of Henan Province have a higher RECC than the central and northeastern areas. The results also indicated that the land space patterns of Henan Province can be divided into seven types of areas through a three-dimensional magic cube evaluation model, which can better reflect the spatial differentiation characteristics of the comprehensive index of RECC. The results of this study offer an important reference for policy-makers to make decisions and also provide a scientific and pragmatic basis for the formulation of sustainable development strategies.
Introduction
Industrialization and urbanization have been an important feature in the process of human development throughout history [1]. Due to economic development and population growth, China's land space per capita decreased by a factor of two and a half, and the amount of land under cultivation per capita was also cut in half, over a 65-year period [2]. With the rapid development of China's industrialization and urbanization, all land is becoming scarce due to competing demands for its use, threatening sustainable development. The development zoning of land space is mainly dependent on the concept of land space and is similar to the concept of land space utilization. Issues related to the development zoning of land space, such as the intensity of land space development [34,35], land space development allocation [36,37], and spatial planning [38], have been discussed. Moreover, from the viewpoint of the linkages between RECC and land space planning, Yue and Wang [39] explored the logical relationship between RECC and land space planning and discussed the model of RECC for land space planning. Wu [40] proposed land space optimization and utilization strategies from the perspective of ecological-production-living space. Zhou [41] designed a framework using system science, the entropy weight method, the triangle model, and the coupling coordination degree model for land use multi-functionalization assessment.
In summary, current research mainly focuses on single-factor studies of carrying capacity or on using different models and methods to discuss the development layout of land space [42][43][44], seldom linking the study of RECC with land space planning or zoning. Meanwhile, research on land space development zoning based on the spatial differentiation of regional RECC is still rare, and there is insufficient empirical research at the provincial level. With the continuing emphasis on spatial planning in China, the RECC will become an important link in planning evaluation. In addition, analyzing the regional differences of land space based on the RECC is crucial to addressing China's national and regional development and sustainability goals.
Based on this, we developed a three-dimensional magic cube evaluation model that enables us to extend existing studies on the relationship between RECC and land space zoning. The three-dimensional magic cube evaluation model in this study makes it possible to analyze provincial land space zoning in the sustainable development process. This research intends to fulfill three objectives. First, this study aims to analyze the spatial patterns of RECC at the provincial level, using a new comprehensive index system from the perspectives of resource carrying capacity (RCC), eco-environmental carrying capacity (EECC), and socio-economic carrying capacity (SECC). Second, this paper aims to construct a suitable land space zoning model that assesses land space multi-functionalization, using a three-dimensional magic cube evaluation model to measure the relationship between the RECCs of different evaluation units and the development zoning of land space. Third, this study explores the mode and principles of land space function zoning suitable for sustainable development and presents some zoning suggestions for land space development at the provincial level.
Theoretical Analysis of RECC and Land Space
RECC, as the link between the social system, environmental system, and economic system, is the key to coordinating the population, resources, and the environment; RECC is also an important foundation for sustainable development. There are many definitions of RECC. However, researchers generally agree that RECC is an important criterion for evaluating the coordination degree of resource environments and social economics. Achieving sustainable development is the ultimate goal that planners, managers, and policy makers seek. Thus, addressing the needs of resources and the environment poses a great challenge for RECC research [1]. The evaluation of RECC has changed from single resource evaluation to comprehensive evaluation, and the evaluation methods have been continuously enriched. Meanwhile, the evaluation objects have focused mostly on typical regions [37]. In practice, the evaluation of RECC in most regions remains at the strategic guidance level, and the support for optimizing land space zoning is insufficient [21].
The concept of land space zoning generally refers to the comprehensive division of an area within a certain range. Europe is the birthplace of land space zoning, and its ideas of land space function zoning can be traced back to the end of the 18th century and the beginning of the 19th century [45]. Land space function is the basis for describing the status of land use in a certain area. Thus, land space generally provides the following three functions: a production function, a living function, and an ecological function. These three functions are interrelated [46]. In other words, the function of land space is generally divided into the three aspects of agricultural production (production), urban development (living), and ecological protection (ecology) to ensure the rational development of space resources (Figure 1). The RECC can provide valuable evaluation methods and measurable indicators for assessing the land space function of a region. If the population and economy gradually increase and exceed the threshold of the carrying capacity, there will be negative impacts on the function of land space [47]. As an important basis for guiding spatial resource allocation for sustainable development, the assessment of RECC can support the application needs of land space development. Therefore, in China, a comprehensive evaluation of the RECC for different regions is carried out. Meanwhile, according to the different types of areas of RECC, it is possible to reasonably realize the development zoning of land space and ensure the optimal allocation of land resources.
The RECC system can be divided into three aspects-resources, ecology, and society-which have a circular relationship. Similarly, land space can also be divided into the three aspects of production space, ecological space, and living space. The two systems of RECC and land space thus correspond to each other ( Figure 1). Therefore, establishing the corresponding relationship between the RECC and the development zoning of land space could provide a beneficial reference for the rational division of land space.
Study Area
Henan Province is situated in the middle of China and the middle and lower reaches of the Yellow River ( Figure 2). It has an area of 167,000 km 2 and a total population of 109.06 million, with rich resources and comparatively high levels of socio-economic development. In 2016, the GDP (Gross Domestic Product) of the province was 4.02 trillion Yuan, and the growth rate of major economic indicators was higher than the national level. Meanwhile, the urbanization level was 48.5%. Henan Province is composed of plains and basins, mountains, and hills, accounting for 55.7%, 26.6%, and 17.7% of the total area, respectively. Henan Province belongs to the middle ground of China's economic development from east to west and is an important hub for the comprehensive national transportation network. Meanwhile, Henan Province governs 17 municipalities directly under the Central Government, one city under direct provincial administration, 20 county-level cities, and 84 counties. By placing emphasis on the strategy of "the rise of the central plains", the economy of Henan Province has achieved steady and rapid development. However, at the same time, the area's resources and environment have been damaged to some extent. Based on this phenomenon, Henan Province has gradually realized a transition from the pure pursuit of economic development to the coordinated development of resources, ecology, and economy. As one of the nine provincial-level spatial planning pilots established by the central government, Henan Province has been used for research on the development zoning of land space under the constraints of RECC, which has important guiding significance for the improvement of spatial planning and sustainable development. Above all, with limited land resources, the issues of the rational division of land space and of coordinating the conflict between production space (resources), ecological space (ecology), and living space (society and economy) have become major tasks which are necessary to achieve the sustainable development of Henan Province in the future. Therefore, this study provides a beneficial and timely land space development reference for land planners and policymakers. and living space (society and economy) have become major tasks which are necessary to achieve the sustainable development of Henan Province in the future. Therefore, this study provides a beneficial and timely land space development reference for land planners and policymakers.
Data Sources
This study takes the counties of Henan Province as the research unit and uses data from 2016. The water resource data were obtained from the water resources bulletins of Henan Province and cities in 2016 [48]. The environmental data were obtained from the environmental status bulletin of Henan Province and cities in 2016 [49]. The other data were obtained from the Henan Statistical Yearbook 2017 and the other cities' Statistical Yearbooks for 2017 [50], as well as the statistical bulletin on national economic and social development of Henan and cities in 2016 [51].
Construction of an RECC Evaluation Index System
In the research of an RECC, we must fully consider the feedback and interaction between the bearer object and the hosted object when selecting indicators. RECC should consider how resources, the environment, and human activities interact. Therefore, on the basis of the above principles, we reference the sustainable development index system of the Chinese Academy of Sciences combined with the actual situation of Henan Province. Meanwhile, from the perspective of RCC, EECC, and SECC, 22 indicators were determined. The results are shown in Table 1. Note: "+"means a positive index, "−"means a negative index.
The RCC index mainly reflects resource abundance. This index includes the components of resources, such as construction land, cultivated land, forest resources, water resources, and grain output, which represent the level and carrying capacity of the resources. The EECC index mainly reflects the abundance of eco-environmental factors and the utilization and consumption of human activities on the environment. It covers the ecological land coverage rate, green land coverage rate, atmospheric environment capacity, water environment capacity, COD (chemical oxygen demand) emissions, industrial SO 2 emissions, and dust emissions, which represent the overall quality of the ecological environment and the pressure of human activities on the ecological environment. The SECC index mainly reflects the coordinated development of the society, economy, and population, as well as the consumption of resources by social and economic activities.
Dimensionless Standardization
Considering the differences that exist in both the dimension and magnitude of each of the selected indicators, the data need to be normalized before an analysis can be undertaken. The indicators are grouped into two types: "positive indicators" and "negative indicators" (Table 1). A positive indicator is one whose increasing value improves the RECC, whereas a negative indicator is one whose increasing value deteriorates the RECC [1], such as industrial SO2 emissions, dust emissions, etc.
For the "positive indicators", the data are transformed by Equation (1): For the "negative indicators", the data are transformed by Equation (2): where r ij is the standardized value of the j index in the i region (i = 1, 2, . . . , m, j = 1, 2, . . . , n), m is the number of regions evaluated, and n is the number of indicators evaluated. Meanwhile, max (x ij ) and min (x ij ) represent the maximum and minimum value of indicator j in the i region. All index values are within the scope of (0,1) after treatment.
Determination of Indicator Weight
Considering the complexity of the resource-environment-society system and the uncertainty of the RECC index, in order to increase the objectivity of the carrying capacity evaluation, the entropy method was used to determine the index weight [52,53]. The entropy method is an objective weighting method. It determines the weight of the index by calculating the information entropy, and an index with a large degree of variation has a large weight. Meanwhile, this method is widely used in various fields and has strong research value. Therefore, this study attempts to use the entropy method to determine the index weights according to the sample data's degree of variation and evaluates the RECC at the county level in Henan Province. The specific steps of this method are as follows: (1) Calculate the proportion of the indicator j in the region i.
(2) Calculate the information entropy of indicator j.
(3) Calculate the difference coefficient of indicator j.
(4) Calculate the weight of indicator j.
(5) Calculate the comprehensive evaluation value of RECC in region i.
According to the RECC evaluation results calculated by the entropy weight method, the evaluation results are divided into five levels using the natural break method.
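As a compact illustration of these steps, the sketch below implements the standard formulation of the entropy weight method (proportion, information entropy, difference coefficient, weight, and weighted composite score). The standardized matrix is hypothetical, and the exact equation forms should be checked against the paper's Equations (3)-(7).

```python
import numpy as np

def entropy_weights(R, eps=1e-12):
    """Entropy weights from a standardized matrix R (m regions x n indicators)."""
    m, n = R.shape
    P = (R + eps) / (R + eps).sum(axis=0)          # step (1): proportion p_ij (eps avoids log(0))
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # step (2): information entropy e_j
    D = 1.0 - E                                    # step (3): difference coefficient d_j
    return D / D.sum()                             # step (4): weight w_j

def recc_score(R, w):
    """Step (5): comprehensive RECC value of each region as a weighted sum."""
    return R @ w

# Hypothetical standardized matrix (4 regions x 3 indicators), e.g. output of Equations (1)-(2)
R = np.array([[0.57, 1.00, 0.00],
              [0.00, 0.69, 0.88],
              [1.00, 0.00, 0.25],
              [0.38, 0.48, 1.00]])
w = entropy_weights(R)
scores = recc_score(R, w)
print(np.round(w, 3), np.round(scores, 3))
```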
Three-Dimensional Magic Cube Evaluation Model
In this study, a three-dimensional magic cube model [12,54] is used to associate the RECC with a land space function. The basic idea of the three-dimensional magic cube model is that the element vectors form spatial units with different functions in a three-dimensional space [54]. Each element has an exact position in three-dimensional space and has strong visibility and intuitiveness. This study applies this concept to the three-dimensional magic cube method to construct a model of the relationship between RECC and land space development zoning. By constructing a three-dimensional space, the various element indicators are classified, and the dimensional nodes and classification level are set according to the number of classifications. Then, the node attribute values are clarified, and the main functions embodied by each element are combined and classified; finally, a three-dimensional magic cube is formed.
According to the concept of a three-dimensional magic cube, this study will construct a three-dimensional space and divide each element level by means of its position, node, and dimension, thus forming different functional areas.
3.4.1. Constructing a Three-Dimensional Space for RECC

Based on the relationship between RECC and the function of land space, which correlates RCC, EECC, and SECC with the functions of agricultural production, ecological protection, and construction development, we form a three-dimensional magic cube evaluation model corresponding to the RECC and the development zoning of land space, as shown in Figure 3. The model takes the RCC, EECC, and SECC as the x-axis, y-axis, and z-axis, respectively, of the three-dimensional space. First, nodes 1, 2, 3, and 4 are constructed according to the distance from the node to the origin of the three-dimensional space. At the same time, according to the relationship between the mean value and standard deviation of RECC, the four level intervals are determined. The division criteria are shown in Table 2. The larger the value of RECC, the farther its distance from the origin, and the stronger the RECC level. On the contrary, the smaller the value of RECC, the closer its distance from the origin, and the weaker the RECC level. According to the classification level in Table 2, the three-dimensional space is divided into a 4 × 4 × 4 three-dimensional magic cube, and 64 kinds of land space function types can be obtained. In addition, the regions represented by each combination type have different functions of land space, and the magic cube combination consists of coordinates (a, b, c). At the same time, a, b, and c represent the level of the agricultural production function, ecological protection function, and construction development function, respectively. The three-dimensional magic cube space construction principle is shown in Figure 4.
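A minimal sketch of how each evaluation unit could be mapped to a cube coordinate (a, b, c) is given below; because the exact level intervals of Table 2 are not reproduced in the text, the cut points based on the mean and standard deviation (μ − σ, μ, μ + σ) are an assumed illustration rather than the paper's exact criteria.

```python
import numpy as np

def level(values):
    """Assign levels 1-4 to one RECC component using assumed mean/SD cut points.

    Levels: 1 below mu - sigma, 2 in [mu - sigma, mu), 3 in [mu, mu + sigma),
    4 at or above mu + sigma (an illustrative reading of Table 2).
    """
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std()
    cuts = [mu - sigma, mu, mu + sigma]
    return np.digitize(v, cuts) + 1  # digitize returns 0-3, shift to levels 1-4

# Hypothetical component scores (RCC, EECC, SECC) for five counties
rcc  = [0.12, 0.05, 0.26, 0.09, 0.18]
eecc = [0.06, 0.03, 0.17, 0.07, 0.10]
secc = [0.04, 0.21, 0.02, 0.05, 0.08]
coords = list(zip(level(rcc), level(eecc), level(secc)))
print(coords)  # one (a, b, c) cube coordinate per county
```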
Land Space Zoning Criteria
We determine the dominant development area and development area by means of the coordinate in the cube unit and the size of the unit node (Table 3). When there is only one functional category in the three-dimensional space unit coordinates (a, b, c) with a level of 4, the unit is determined to be the dominant agricultural area (A1), key ecological protective area (E1), or construction development dominant area (C1). When there is only one functional category in the three-dimensional space unit coordinates (a, b, c) with a level of 3, and the other category levels are less than 3, the unit is determined to be the functional agricultural area (A2), functional ecological area (E2), or construction development area (C2). When the levels of all functional categories in the three-dimensional space unit coordinates (a, b, c) fall within the interval [1, 2], the unit is determined to be a potential resource area (R1). If two or more functional categories in coordinates (a, b, c) share the same level, the function of land space is determined according to the value of RECC and the actual situation of the region.
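The zoning rules above can be expressed as a small decision function; the sketch below is a simplified reading of Table 3 in which ties and the "actual situation of the region" fallback are resolved by the comprehensive RECC value, and the zone labels follow the text.

```python
def classify_zone(a, b, c, recc_value=0.0):
    """Map a cube coordinate (a, b, c) to a land space zone label.

    a, b, c are the levels (1-4) of the agricultural production, ecological
    protection, and construction development functions. Ties are broken by
    the comprehensive RECC value as a stand-in for expert judgement.
    """
    levels = {"A": a, "E": b, "C": c}
    dominant = [k for k, v in levels.items() if v == 4]
    if len(dominant) == 1:
        return {"A": "A1 dominant agricultural area",
                "E": "E1 key ecological protective area",
                "C": "C1 construction development dominant area"}[dominant[0]]
    functional = [k for k, v in levels.items() if v == 3]
    if not dominant and len(functional) == 1:
        return {"A": "A2 functional agricultural area",
                "E": "E2 functional ecological area",
                "C": "C2 construction development area"}[functional[0]]
    if max(levels.values()) <= 2:
        return "R1 potential resource area"
    # Two or more categories share the top level: defer to the RECC value
    # (and, in practice, to the actual situation of the region).
    top = max(levels, key=levels.get)
    return f"tied functions, provisionally {top} (RECC = {recc_value:.3f})"

print(classify_zone(4, 2, 1))  # -> A1 dominant agricultural area
print(classify_zone(2, 3, 1))  # -> E2 functional ecological area
print(classify_zone(2, 2, 1))  # -> R1 potential resource area
```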
RECC Evaluation
According to Equations (1)-(7), we calculated the single-factor evaluation and comprehensive evaluation results of RECC in Henan Province and divided the results into five grades (low, lower, medium, higher, and high) using the natural break method. The larger the value, the stronger the carrying capacity; the smaller the value, the weaker the carrying capacity. Figure 5a-d show the RCC, EECC, SECC, and RECC of Henan Province, respectively. The level and spatial distribution of each type of carrying capacity are as follows.
Figure 5a reflects the spatial difference in the RCC of each county in the province. The RCC index of Henan Province is between 0.0140 and 0.2674, and the high-value areas (including the high-value and higher-value areas) are mainly distributed in most western and southern counties, with levels between 0.0942 and 0.2674. In the western high-value area, there are many hills and a smaller population, so the resource carrying pressure is small. On the other hand, southern Henan is one of the province's major grain producing areas, so it has a large area, abundant resources, and a relatively high carrying capacity. The low-value areas (including the low-value and lower-value areas) are mainly distributed in most central and northern counties of Henan, with levels between 0.0140 and 0.0614. In the low-value area, some of the areas belong to the Central Henan urban agglomeration, and the urban space and population density are large, thereby affecting the RCC. Medium-value areas are mainly distributed in some eastern counties and a few southwestern counties of Henan, with levels of 0.0614-0.0942.

Figure 5b reflects the spatial difference in the EECC of each county in the province. The EECC in Henan shows a gradual increase from northeast to southwest, indicating an obvious regional difference. The EECC index of Henan Province is between 0.0250 and 0.1846, and the maximum, minimum, and mean values of the EECC are 0.1846, 0.0250, and 0.0633, respectively. The spatial distribution presents a coexistence of small scattered areas and large centralized areas. Specifically, high-value areas are concentrated in the counties of southwest Henan, with levels between 0.0895 and 0.1846, which indicates that the western and southern regions play important roles in maintaining ecological safety. Meanwhile, low-value areas are concentrated in the central economic development area and the northern plain area of Henan Province, with levels between 0.0250 and 0.0636. In this area, the urban space and population density are large, which is not conducive to the improvement of ecological environment quality. Medium-value areas are scattered in some southern and northwestern counties of Henan, with levels between 0.0636 and 0.0895.

Figure 5c reflects the spatial difference in the SECC of each county in the province. The SECC index of Henan is between 0.0124 and 0.2735, of which the low-value area is obviously larger than the high-value area, indicating that the SECC of Henan is generally in a low state. Meanwhile, the maximum, minimum, and mean values of the SECC are 0.2735, 0.0124, and 0.0435, respectively. This implies that the unbalanced development of Henan is still prominent. Specifically, some central and northern counties are in the high carrying capacity area, while most of the remaining counties are in the low carrying capacity area, which suggests an obvious imbalance and greater polarization in economic development.

Figure 5d reflects the spatial difference in the RECC of each county in the province. The levels of RECC range between 0.0641 and 0.4269 and mainly present a spatial pattern of east-west differentiation. The maximum, minimum, and mean values of the RECC in Henan Province are 0.4269, 0.0641, and 0.1472, respectively.

Overall, the counties with a high carrying capacity mainly form a semi-circular distribution along the southwestern part of Henan; natural resources in this region are better, social and economic development causes less damage to the ecological environment, and the carrying pressure is relatively low. The counties with a low carrying capacity are mainly concentrated in the northern, central, and eastern areas of Henan, where the regional population and resource development density, as well as the carrying pressure, are relatively high. Medium-value areas are scattered in some counties in northwest and southwest Henan.

Development Zoning Analysis of Land Space Based on a Three-Dimensional Magic Cube Evaluation Model

According to the division criteria of RECC in Table 2 and the land space development zoning principles in Table 3, combined with the corresponding relationship between the results of RECC and the development zoning of land space, a functional type of land space can be obtained for each evaluation unit. Based on the current thinking on "production-living-ecological space", this paper divides the types of land space development into an agricultural development area, an ecological development area, a construction development area, and a potential resource area. Meanwhile, according to the actual situation of Henan Province, the types of land space development are refined, and the results form the land space development layout of Henan Province, as shown in Figure 6. Specifically, the agricultural development area is refined into a dominant agricultural area and an agricultural functional area, with the division criteria determined by the value of the RCC. The ecological development area is refined into a key ecological protective area and an ecological functional area, with the division criteria determined by the value of the EECC. The construction development area is refined into a construction development dominant area and a construction development area, with the division criteria determined by the value of the SECC (Figure 6).

The dominant agricultural area is mainly concentrated in the Nanyang Basin and the southeastern plains. The functional agricultural area is distributed around the eastern plains and includes the counties of Xinyang city, some counties of Luohe city, some counties of Kaifeng city, and some counties of Xinxiang city (Figure 6).

The key ecological protective area is mainly concentrated in the northwest and to the south of Henan. The ecological functional area is distributed around the counties of Luoyang city, the counties of Pingdingshan city, and the counties of Nanyang city (Figure 6).
The construction development dominant area includes the cities of Zhengzhou, Luoyang, Xinxiang, Jiaozuo, Pingdingshan, and Puyang. The construction development area mainly includes the economic development zone of central, northern, and eastern Henan ( Figure 6).
The characteristics of the potential resource area are mainly manifested in its low development density and ecological optimization degree, but its development direction is unclear. Meanwhile, when resources in some key development areas are depleted, the potential resource area may have certain advantages. The potential resource area primarily includes some counties in Anyang city, some counties in Xinxiang city, some counties in Puyang city, and some counties in Zhoukou city ( Figure 6).
Land Space Development Zoning Characteristics and Development Suggestions
The development zoning of land space entails a comprehensive study of resources from the three aspects of RCC, EECC, and SECC. Different development zoning areas have distinct influencing factors and development characteristics. The following discussion addresses each type of land space development zoning and offers some reasonable suggestions for the development layout of land space.
The agricultural production area is mainly composed of the Nanyang Basin and Huanghuaihai Plain (Figure 7). In this area, the terrain is flat, water and soil resources are abundant, and the cultivated land area is large. Moreover, the RCC is strong. This is an important grain production area of the province and a modern agricultural demonstration area. As an important grain producing area in China, Henan Province plays an important role in the development of the province and needs to properly protect and optimize agricultural functional areas. Therefore, in terms of zoning construction, the Huanghuaihai Plain and the Nanyang Basin should be the principal focus for protecting cultivated land quality and constructing high-standard basic farmland; then, we should focus on building the functional area for grain production. Secondly, according to local conditions, ecological compound agriculture should be planted, and then the characteristic agriculture advantage area with a reasonable layout should be formed. Finally, there should be a reasonable arrangement of the land for rural living and non-agricultural production. Meanwhile, we should strengthen the guidance of rural planning and promote village rectification and new rural constructions in an orderly manner.
The ecological protection area is mainly situated in the west of Henan, the western part of the northern Henan area, the middle and lower reaches of the Yellow River, and the south of the Qinling Mountains-Huaihe River (Figure 7). The topography and landforms in the ecological protection area are mainly mountainous and hilly, as well as being rich in natural resources and superior in terms of the ecological environment; thus, this is an important ecological conservation area in Henan Province. As a prohibited development zone and a restricted development zone, the ecological protection area should focus on improving environmental quality and protecting ecological barriers and then constructing the ecological security pattern of "three screens, four corridors, and one district". In addition, we should implement the classification control of the ecological protection area and create an urban development layout that is ecological and livable.
The construction development area is the political and economic center of the province, which mainly includes Zhengzhou, Luoyang, Xinxiang, Jiaozuo, Xuchang, and other cities ( Figure 7); this is also the area with the largest urban space and population density. However, the main construction development area has poor resource and environmental carrying capacity, its potential resources are scarce, and the environmental pollution problems are prominent. Therefore, the region should rationally develop economic industries under the premise of protecting the ecological environment. Meanwhile, according to the evaluation results of the carrying capacity and the development intensity requirements, the region should reasonably delimit its urban development boundary and then promote the intensive and efficient development of urban space.
Distribution of Carrying Capacity and Functional Areas of Henan Province
In order to study the distribution characteristics of RECC and the function of land space in Henan Province, and to propose suitable spatial optimization schemes, we need to analyze the regional numbers with different RECC levels in Henan Province. Then, we need to diagnose the quantity status of each functional area under different RECC levels in Henan Province.
A previous study published in 2019 also focused on the RECC of Henan Province [55]. The study found that, from 2005 to 2015, the RECC of each city in Henan Province varied greatly, and the spatial differentiation characteristics were obvious. In addition, there were many areas with a medium RECC in 2015. In our study, as shown in Figure 8, from the perspective of level distribution, most of the carrying capacity levels are distributed at the lower and medium levels. On the contrary, the proportions of the higher and high levels of carrying capacity are lower than those at the other carrying capacity levels, which shows that the pressure on the resource and environmental systems within most areas of Henan Province is high. Therefore, there are similarities between the two studies. Additionally, the carrying pressure of Henan Province decreases from southeast to northwest. This also accords with the economic and population distribution pattern of Henan Province. In our study, we conjectured that a large population scale leads to greater economic development and more intensive human activities, which increases unsustainable resource utilization and resource consumption and further causes the decline of the RECC.
The current study of land space function in Henan Province is mainly aimed at the city level, and there are few studies that focus on the county level [56]. Meanwhile, the existing studies of land space function have paid less attention to the relationship between land space function and RECC. As shown in Figure 9, this study finds a close relationship between the land space function and RECC at the county level of Henan Province. For instance, according to this study, the agricultural production areas were mainly distributed at the level of low RECC and lower RECC. This result demonstrates that we should protect and rationally use agricultural resources and prohibit abuse and unreasonable development. In contrast, the ecological protection areas were mainly distributed at the level of medium RECC and higher RECC, which shows that most of the ecological protection areas in Henan Province are at a better load level. However, there are still some areas at lower carrying levels that require greater attention and protection in the future. Additionally, construction development areas are mainly distributed at the level of lower RECC and medium RECC, which indicates that these areas should rationally develop economic industries under the premise of protecting resources and the ecological environment.
Relationship between Carrying Capacity and Land Space Development Zoning
In recent years, some researchers have mainly focused on single-factor evaluations of carrying capacity or regional carrying capacity analyses [57][58][59], but they have not considered the comprehensive capacity of resources and the environment and have not provided useful support for spatial planning work. Moreover, due to the obvious differentiation of resources across China's national territory, using different regions as evaluation units to explore the spatial differences of RECC and then guide the layout of land space and economic development is the focus of current spatial planning research [60]. Therefore, in this study, the geographical spatial differentiation of RECC was illustrated, through which we could determine where the RECC was higher and lower in Henan Province. It can be seen from the results that the spatial differences of RECC in the regional space are obvious. In addition, the resources in the high carrying capacity area are better, and social-economic development causes less damage to the ecological environment. On the other hand, the low carrying capacity area has a large population density and a high density of resource and environmental development, and therefore the carrying pressure is also relatively high. Thus, controlling the overly rapid development of regional space based on the results of RECC and engaging in reasonable development under the premise of protecting the ecological environment are the key directions required to solve the problem of land space layout.
In addition, the development zoning of land space is an objective reflection of the law of regional spatial differentiation and is an important basis for formulating differentiated land space management policies. As an indispensable basis for optimizing the development layout of land space, RECC can provide an important role for the rational division of land space [21]. In our study, we related the RECC to the functional orientation of land space, which is different from the previous analysis of land space zoning [61]. Moreover, our study discussed the method of regional spatial division from the perspective of carrying capacity. Based on the explanation of the relationship between RECC and the functional type of land space, we used a three-dimensional magic cube evaluation model. In the three-dimensional cube's model space, the RCC, EECC, and SECC, respectively, correspond to the agricultural production function, ecological protection function, and construction development function of the land space. Then, according to the results of RECC, the carrying level under different functions in the region can be judged in order to reasonably lay out the functional orientation of land space. The previous zoning results cannot fully reflect the resource and environmental carrying status in the region [43,62], so our study can compensate for the lack of carrying capacity in the traditional planning results and provide a reference for government departments to carry out research on the layout of national land space.
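To make the correspondence between carrying capacity and functional orientation more tangible, the following minimal Python sketch assigns a county to a land-space function from its RCC, EECC, and SECC scores. The score scale, thresholds, and tie-breaking rules are illustrative assumptions and do not reproduce the magic cube classification actually used in this study.

# Minimal sketch: map county-level carrying-capacity scores to a land-space function.
# The thresholds and the seven-type rules below are illustrative assumptions only.

def classify_county(rcc: float, eecc: float, secc: float) -> str:
    """Return a land-space zone type from RCC, EECC and SECC scores in [0, 1]."""
    scores = {"agricultural": rcc, "ecological": eecc, "construction": secc}
    dominant = max(scores, key=scores.get)
    strength = scores[dominant]
    if strength < 0.3:                        # all capacities weak -> reserve for later use
        return "potential resource area"
    if dominant == "agricultural":
        return "agricultural dominant area" if strength > 0.6 else "agricultural functional area"
    if dominant == "ecological":
        return "ecological key protective area" if strength > 0.6 else "ecological functional area"
    return "construction development dominant area" if strength > 0.6 else "construction development area"

if __name__ == "__main__":
    print(classify_county(0.72, 0.35, 0.28))  # -> agricultural dominant area
    print(classify_county(0.25, 0.20, 0.22))  # -> potential resource area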
Conclusions
During the process of sustainable development, it is important to measure the relationship between the RECC and land space. Using a three-dimensional magic cube evaluation model, this article theoretically improved the shortcomings of traditional land space zoning methods and researched the layout of land space based on RECC levels. The results of this article prove that each county has different RECC and land space functional structures, thereby providing a decision-making basis for Henan Province to find differentiated paths of land space zoning. Based on our results, policy implications for the sustainable development and spatial planning of Henan Province were proposed. The main conclusions of this article are as follows:
(1) The spatial pattern of RECC showed an unbalanced carrying capacity situation in Henan Province. The county area with a high carrying capacity is mainly formed in a semi-circular distribution along the southwestern part of Henan Province. The resources and environment in this area are better, and the damage caused by social and economic development to the ecological environment is small. The county area with a low carrying capacity is mainly concentrated in the northeastern and central part of Henan Province. In this area, the population density is large, the density of resources and environmental development is high, and the carrying pressure is relatively high. Therefore, local governments need to put forward a scientific management program to solve the unbalanced problem of RECC. Specifically, the northeastern and central regions should take the protection of the ecological environment as their premise and change their economic development mode, while the southwestern region could use ecological advantages as its foundation and create a characteristic ecological industry system.
(2) The three-dimensional magic cube model can correlate RECC evaluation with land space zoning. In the three-dimensional space, the RCC, EECC, and SECC can correspond to the function of agricultural production, the function of ecological protection, and the function of construction development. Therefore, land space zoning based on the RECC results can prove whether the region's layout is reasonable. The zoning results fully reflect the spatial patterns of RECC, which avoids subjectivity in planning.
(3) Using the three-dimensional magic cube model, the land space pattern of Henan Province can be divided into seven types of areas-agricultural dominant areas, agricultural functional areas, ecological key protective areas, ecological functional areas, construction development dominant areas, construction development areas, and potential resource areas-which can better reflect the spatial differentiation characteristics of the comprehensive index of RECC. Based on the differences of each functional area, local governments should implement different use control policies for different types of functional regions. In functional agricultural areas, the government should set up a "survival line" and then clarify the cultivated land protection area and scale of water resource development. In functional ecological areas, the government should set up "ecological lines", clarify the scope of protected areas, and improve the level of ecological security. In construction development areas, the government should set up "security lines" that ensure the construction land necessary for economic and social development.
(4) Our study provides reference and application value for spatial planning. However, the selection of indicators depends to some extent on the knowledge and experience of experts in different disciplines and different fields. In addition, because the evaluation results may be affected by subjective factors, a reduction of the uncertainty caused by the results of the index analysis needs to be further developed in a future study. | 2020-02-05T14:36:54.866Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "6aba9526d2dc7bba00594eaf6cae8320ff734c40",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/3/900/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01eadfdf67e8aeeb796d479c5d55ef9f7580a1fe",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
256217922 | pes2o/s2orc | v3-fos-license | Application of the Model of Spots for Inverse Problems
This article proposes the application of a new mathematical model of spots for solving inverse problems using a learning method, which is similar to using deep learning. In general, the spots represent vague figures in abstract “information spaces” or crisp figures with a lack of information about their shapes. However, crisp figures are regarded as a special and limiting case of spots. A basic mathematical apparatus, based on L4 numbers, has been developed for the representation and processing of qualitative information of elementary spatial relations between spots. Moreover, we defined L4 vectors, L4 matrices, and mathematical operations on them. The developed apparatus can be used in Artificial Intelligence, in particular, for knowledge representation and for modeling qualitative reasoning and learning. Another application area is the solution of inverse problems by learning. For example, this can be applied to image reconstruction using ultrasound, X-ray, magnetic resonance, or radar scan data. The introduced apparatus was verified by solving problems of reconstruction of images, utilizing only qualitative data of its elementary relations with some scanning figures. This article also demonstrates the application of a spot-based inverse Radon algorithm for binary image reconstruction. In both cases, the spot-based algorithms have demonstrated an effective denoising property.
Introduction
Imaging based on the scan data of the object under study, and the processing of scattered signals received by sensors, refer to inverse problems. The relevant direct problems are the modeling of wave signals scattered from an object with a known distribution of material properties within it [1,2]. In particular, medical devices that provide such imaging are CT, MRI, microwave tomography, electrical resistance and capacitance tomography, ultrasound imaging, and others [3]. There are other areas of application for such image reconstruction, including radar, ground-penetrating radar, and through-the-wall radar. Geophysics also uses visualization obtained by sounding the earth with the help of sound or electrical impulses, etc. [1,4]. Imaging in all these areas, with the exception of MRI, is associated with the solution of inverse scattering problems in one or another approximation [5].
The fundamental point here is that practically it is impossible to obtain a mathematically exact solution to the inverse problem, however, it is possible to approximately reconstruct an image with a finite spatial resolution. Generally speaking, inverse problems relate to ill-posed mathematical problems, and such property can be explained by a lack of information for an exact solution due to the noise and the finite amount of sensor signals. Therefore, approximate solution methods are used that utilize regularization, filtering, interpolation, and other approaches [3]. For example, this applies to CT, MRI, and ultrasound, as well as to studies on microwave tomography [6][7][8][9][10].
Note that conventional approximate reconstruction methods such as filtered backprojection in CT or simple inverse FFT in MRI are not always adequate and may lead to artifacts. Therefore, new methods based on appropriate models have been developed that more strictly take into account the physics of objects and the real properties of sensors [11][12][13][14]. In a rigorous formulation, the inverse problem is considered a nonlinear optimization with Whitehead in 1929 [30]. On the other hand, the basic ideas of the spots model are also close to the rough set theory [48][49][50][51][52][53][54][55], the formal concept analysis [52,[56][57][58], and the fuzzy set theory [59], including fuzzy geometry [60,61]. Moreover, the suggested concept is in good agreement with the ideology of granular computing [62][63][64][65][66][67][68].
Instead of points, mereotopology uses regions of space as primitive spatial entities and utilizes qualitative information about their relations. Among other areas, it has been applied to geographical information science, and image analysis [38]. One of the important fields of mereotopology is region connection calculus (RCC) [34][35][36], which has two variants. RCC-8 defines eight relations between regions, including overlapping, disconnection, external connection, and connections (touch) of the region's boundaries. Bennett [42], as well as Jonsson and Drakengren [43] considered a shortened version of these relations-RCC-5, which does not consider the boundaries connection. A feature of RCC-5 is the uncertainty of boundaries, since here it is impossible to distinguish internal points from boundary ones.
Although most authors considered spatial relations as logical values, Egenhofer et al. [44,45] encoded them in form of logical tables. Namely, they introduced the concepts of 4-intersection [44] and 9-intersection [45] matrixes, which are logical matrices that encode the spatial relations between spatial regions. Notice that these matrixes are similar but differ from the L4 numbers proposed in [27,28] because authors of [44,45] also consider relations with the boundaries. Clementini et al. [46] generalized 9-intersection matrixes, replacing intersections for the crisp boundary with intersections for broad boundaries. Stell [47] also considered the way of representation for spatial relations using 3 × 1 logical vectors created on the base of notions part and compliment only. Finally, Butenkov et al. [69] introduced 2 × 2 logic tables for Cartesian granules, which are equivalent to L4 numbers for spots, and applied them for spatial data mining algorithms.
The rough set theory suggested by Zdzisław Pawlak [48,50] is a mathematical approach to the representation of the vagueness. One of the main advantages of the rough set approach is the fact that it does not need any preliminary or additional information about data, such as probability in statistics, or membership in the fuzzy set theory. This theory regards sets with incomplete information that does not allow to distinguish elements in some of their subsets, which are called granules. Pawlak's theory introduced such notions as the lower and upper approximations of rough sets, the boundary region, and even the membership function for elements, which is similar to that for the fuzzy sets.
A general formulation and consideration of granules, including the problem of information granulation, which was later called the concept of granular computing, were first carried out by Zadeh in [62]. His definition of granules: "the information may be said to be granular in the sense that the data points within a granule have to be dealt with as a whole rather than individually" corresponds to the equivalence classes of the universe. Zadeh regards both crisp and fuzzy granules and "considers granular computing as a basis for computing with words" [63]. The elements of a granule are indiscernible that "depends on available knowledge" [65]. The importance of the application of granulation and granular computing relates to the fact that such approximation can lead to simplification in solving practical problems.
The graph is a mathematical model convenient for the representation of the structure of links (labeled edges) between elements (nodes or vertexes) of the system under study [70]. Nowadays, the apparatus of graphs is well suited for the analysis and processing of digital images in digital geometry [71], and the analysis and metrics of the structure of the physical connection of brain neurons [72]. On the other hand, graph theory is actively used in AI to model semantic networks in the knowledge bases, which are called knowledge graphs [73][74][75][76][77]. Note that, unlike spots, the graph is only an abstraction for the representation of a structure of the relations between the entities, rather than a spatial object. However, recent works utilize graphs embedded in some continuous space to reduce the dimension of the graph when processing its data [75][76][77].
Despite the ideological closeness to these theories, the proposed model of spots has a significantly different nature, since spots are not based on sets or fuzzy set concepts, and the elements of a spot are not defined. Instead of elements, a spot can have an internal structure that is determined from its spatial relations with other spots. Having elementary spatial properties, spots combine the concepts of discreteness and continuity, while graphs are discrete mathematical objects. Generally, the presence of similar mathematical models allows us to use some approaches and ideas from them.
Definition of Spots and the Apparatus of L4 Numbers
Spots are mathematical objects with elementary spatial properties, for which their inner region, outer region (environment), and a logical connection between these regions for any two spots are defined. The logical connection ab of two spots a, b is determined by two axioms [31,35]:
∀a: aa = 1 (1)
∀a, ∀b: ab = ba (2)
Environments ā, b̄ of the spots a, b are also considered to be spots; therefore, a logical connection is also defined for them, satisfying axioms (1) and (2). Axiomatically, we regard spots as not connecting their environments, that is,
∀a: aā = 0 (3)
In general, the "shape" of spots and the properties of their environment, such as the dimension and curvature of space, are not predetermined but can be evaluated from qualitative information about their elementary spatial relations (ER) to other spots, such as separation, intersection, inclusion, indistinguishability, etc. We consider crisp geometric figures as a special limiting case of spots.
Note that the connection can be defined not only in the case of the existence of a common region of space between two spots but also in a more general sense. For example, two geometric figures can be considered indistinguishable if they can be made to coincide precisely by a rigid motion. In general, any spot mapping can be defined with the help of ER.
Definition of L4 Numbers
The elementary relations can be formalized using logical L4 numbers [27]. For the spots a, b and their environments ā, b̄, the L4 number a|b is defined as the table
a|b = [ ab  ab̄ ; āb  āb̄ ] (4)
where ab, ab̄, āb, āb̄ denote the corresponding logical connections. Such L4 numbers, in general, permit distinguishing up to 16 different ER between spots. Examples of the ER and their corresponding L4 numbers are shown in Table 1. We call these spatial relations elementary relations because they carry the lowest-level qualitative information about spots. However, a large amount of such qualitative data allows for extracting higher-level qualitative information and even numerical information.
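For readers who prefer an executable view, the sketch below encodes an L4 number as a 2 × 2 table of logical connections between a, ā and b, b̄. The truth values assigned to the sample relations follow their verbal descriptions and should be treated as assumptions, since Table 1 itself is not reproduced here.

# Sketch of an L4 number as a 2x2 logical table of connections
# ((ab, ab'), (a'b, a'b')), where a' denotes the environment of a.
# The encodings of the sample relations are assumptions based on their verbal meaning.
from typing import Tuple

L4 = Tuple[Tuple[bool, bool], Tuple[bool, bool]]

SEPARATION: L4     = ((False, True), (True, True))   # no common region, each meets the other's environment
INTERSECTION: L4   = ((True, True), (True, True))    # partial overlap: every pair of regions is connected
INCLUSION_LESS: L4 = ((True, False), (True, True))   # a lies inside b, so a does not reach b's environment
INCLUSION_MORE: L4 = ((True, True), (False, True))   # b lies inside a
INDISTINCT: L4     = ((True, False), (False, True))  # a and b coincide

def transpose(n: L4) -> L4:
    """Swap the roles of the two spots: obtain b|a from a|b."""
    (ab, ab_), (a_b, a_b_) = n
    return ((ab, a_b), (ab_, a_b_))

assert transpose(INCLUSION_LESS) == INCLUSION_MORE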
The mathematical apparatus of the spot model is based on the L4 numbers, rather than on real numbers. As far as the basis of this apparatus is described in more detail in the previous works [27,28], here we will briefly outline the main content and reveal the meanings of the concepts introduced there.
In [27,28], the basis of spots is defined as a collection of "known" spots that can be in some mutual ER. The representation of a spot by their ER on the basis of spots is a mapping or imaging of the spot on this basis. Note that the system of basis spots is analogous to the numerical basis functions, and the orthogonality of the base functions is analogous to the separated basis spots, which we call orthogonal spots. Let us call the collection of spots with a certain ER between them the structure of spots. The structure of the basis spots included in a spot a will also be called the structure of the spot a.
Note that spot mapping is generally similar to the concept of projection for crisp figures. Consequently, the spot is analogous to some volumetric object, which is determined by its projections on different planes. Hence, one can improve knowledge about the structure of the spot by fusion its mappings on different bases into a "volumetric" image.
Let us define the operations of union ∨ and intersection ∧ for spots, which permit the creation of new spots. We suggest definitions (5), which are similar to, but different from, those of set theory; here, the symbol + denotes the logical disjunction operation. Note that, in contrast to sets, (5) does not define the images of the spots c, d absolutely, because they depend on the spots' basis {x_i}. Following the equality cc̄ = 0, see (3), it is possible, for example, to derive Equations (6) from (5). The definitions (5) also permit deriving simple properties (7) for zero spots ∅ and expressing the intersection parts A, B, C, and D of the spots a and b in Figure 1 using the operation ∧ (8).
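A minimal sketch of the union and intersection of spots follows. Because the printed definitions (5) are not reproduced in the text, the elementwise rule used here (disjunction of connections for the union, conjunction for the intersection, with the dual rule for the environments) is an assumed reading motivated by the remark that + denotes logical disjunction.

# Sketch: union and intersection of two spots represented by their connections
# with a common basis of spots {x_i}.  For each spot we store two logical lists:
# conn[i] = does the spot connect x_i,  env[i] = does its environment connect x_i.
# The combination rule is an assumed reading of definitions (5).

def spot_union(a, b):
    conn = [ca or cb for ca, cb in zip(a["conn"], b["conn"])]
    env  = [ea and eb for ea, eb in zip(a["env"], b["env"])]
    return {"conn": conn, "env": env}

def spot_intersection(a, b):
    conn = [ca and cb for ca, cb in zip(a["conn"], b["conn"])]
    env  = [ea or eb for ea, eb in zip(a["env"], b["env"])]
    return {"conn": conn, "env": env}

a = {"conn": [True, True, False], "env": [False, True, True]}
b = {"conn": [False, True, True], "env": [True, True, False]}
print(spot_union(a, b))         # connects every x_i that a or b connects
print(spot_intersection(a, b))  # connects only the x_i shared by a and b (approximately)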
Definition and Geometric Meaning of L4 Vectors and L4 Matrices
As mentioned earlier, information about any spot can be defined by its mapping on some basis. Then it can be encoded as a vector whose coordinates correspond to its ER with the basis spots:
a_X = [ a|x_1, a|x_2, ..., a|x_n ] (9)
where n is the number of spots in the basis X. The L4 vector (9) is similar to a numerical vector, but its elements are L4 numbers. On the other hand, mapping (9) is also similar to the projection of a 3D body on some plane, which can only represent partial information about the body. Papers [27,28] introduce the idealized concept of an atomic basis and atomic spots, which do not intersect each other or other spots. Note that atomic spots are similar to points, pixels, voxels, or elements of sets. Another useful notion is orthogonal spots, whose mutual ER is separation. For example, the intersection parts of spots are orthogonal and can be regarded as some approximation of the atomic basis.
The L4 matrix A = Y|X defines the ER between the spots of two bases, X = {x_i} and Y = {y_j}, and formalizes the mapping from the X basis to the Y basis:
Y|X = [ y_j|x_i ], j = 1, ..., m, i = 1, ..., n (10)
Here, the rows y_j|X are the row L4 vectors of the spots y_j represented on the basis X. Note that the L4 matrix can be used to transform an L4 vector from one basis to another [27], which is similar to a mapping function in topology and geometry. Formally, it can be represented in the form a_Y = Y|X a_X; however, there is no simple solution for defining the rules of such a product, and we will address this issue in the next section. The exception is the special case of the L4 matrix that we call the indistinguishability matrix, which is similar to the numerical identity matrix. The indistinguishability matrix I has diagonal elements corresponding to indistinguishability and all other elements corresponding to separation. Then the multiplication of the L4 matrix I and any L4 vector a corresponds to an identity transformation: I a_X = a_X. There is a special case when all the spots of the two bases are separated, which is analogous to orthogonal coordinates in geometry. It is obvious that in this case it is impossible to obtain a mapping transformation using the product of an L4 matrix and an L4 vector, and it is necessary to obtain additional independent data.
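The role of the indistinguishability matrix can be checked with the toy sketch below, which builds I for a basis of three spots and verifies that applying it to an L4 vector is an identity transformation. The way the product is "applied" here is purely schematic, since the general multiplication rule is only developed in the next subsection.

# Sketch: an L4 matrix as a list of row L4 vectors, and the indistinguishability
# matrix I, whose diagonal is the relation "indistinguishable" and whose off-diagonal
# entries are "separated".  Applying I to an L4 vector is an identity transformation.
INDISTINCT = "indistinguishable"
SEPARATED  = "separated"

def indistinguishability_matrix(n):
    return [[INDISTINCT if i == j else SEPARATED for j in range(n)] for i in range(n)]

def apply_identity(matrix, vector):
    # For each row, copy the coordinate at which the relation is "indistinguishable";
    # this is the only case in which the product has a simple, exact definition.
    return [vector[row.index(INDISTINCT)] for row in matrix]

a_X = ["intersection", "separation", "inclusion(<)"]
I = indistinguishability_matrix(3)
assert apply_identity(I, a_X) == a_X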
Multiplication Rules for L4 Vectors and L4 Matrixes
First, consider the simplest case of an atomic basis A = {u_i}, where the basis spots are orthogonal and do not intersect other spots [27,28]. For it, one can define the ER a|b_A between the spots a and b with respect to the basis A and the "scalar" product of the vectors (a_A, b_A) according to rule (11), where the symbol "·" denotes logical conjunction. We will apply the same rule for an orthogonal basis U = {u_i} consisting of separated spots. Let us regard a spot a, a basis B = {b_i}, and an atomic basis A = {u_i}. We suppose that a and all the b_i spots can be mapped on the atomic basis A. Then the rule for the product of the L4 matrix B|A (10) and the L4 vector a_A (9) can be defined as Equation (12), where the L4 number a|b_i_A is defined in (11); in other words, the product is the L4 vector composed of the relations a|b_i computed on the atomic basis. Note that Equation (12) defines the transformation of the mapping of the spot a from basis A to basis B.
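The following sketch illustrates one plausible reading of rules (11)-(12) for an atomic basis: every spot is stored as the list of atoms it connects, the relation a|b is assembled from per-atom conjunctions, and the product B|A a_A simply collects a|b_i for every basis spot b_i. The exact per-entry formulas are assumptions, since Equations (11)-(12) are not reproduced here.

# Sketch: mapping a spot from an atomic basis A = {u_i} onto another basis B = {b_i}.
# Each spot is stored as the list of atoms it connects (one boolean per atom); an atom
# is assumed to lie either inside a spot or inside its environment, never both.
# The per-entry rule below is an assumed reading of (11)-(12).

def l4_relation(a_atoms, b_atoms):
    """Approximate L4 number a|b from atomic connection lists."""
    ab   = any(x and y for x, y in zip(a_atoms, b_atoms))          # share an atom
    ab_  = any(x and not y for x, y in zip(a_atoms, b_atoms))      # a reaches b's environment
    a_b  = any(y and not x for x, y in zip(a_atoms, b_atoms))      # b reaches a's environment
    a_b_ = any(not x and not y for x, y in zip(a_atoms, b_atoms))  # environments meet
    return ((ab, ab_), (a_b, a_b_))

def map_to_basis(a_atoms, basis_atoms):
    """Product B|A a_A: the L4 vector of a on the basis B."""
    return [l4_relation(a_atoms, b) for b in basis_atoms]

a = [True, True, False, False]
B = [[True, False, False, False], [False, False, True, True]]
print(map_to_basis(a, B))  # the first basis spot lies inside a; the second shares no atoms with a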
For an arbitrary basis X = {x_i}, when the spots x_i can intersect, the definition of a|b_X is more complicated. First, let us consider the orthogonal basis U = {u_k} of all intersections of the spots in X and find the vectors a_U = [a|u_k] and b_U = [b|u_k] on the basis U. Then, we define equality (13) for the calculation of a|b_X and apply rule (11). We define the vectors a_U and b_U using the formal matrix equations a_U = U|X a_X and b_U = U|X b_X, where U|X is the L4 matrix that consists of the u_i|x_j elements and is used for mapping vectors from basis X to basis U.
To determine the transformation rule for (13), we first apply a convenient method of numbering the intersections {u_k} of the spots {x_i}, using a binary code. Namely, generalizing (8), each u_k can be defined in terms of the spots x_i or x̄_j connected by the operator ∧. For example, the binary index k = 101...0₂ corresponds to the following spot u_k [28]:
u_k = x_1 ∧ x̄_2 ∧ x_3 ∧ ... ∧ x̄_n (14)
The ER a|u_k for any spot a and for u_k (14) can be found using the following approximate equation [28]:
a|u_k = [ ax_1·ax̄_2·...·ax̄_n   ax̄_1 + ax_2 + ... + ax_n ; āx_1·āx̄_2·...·āx̄_n   āx̄_1 + āx_2 + ... + āx_n ] (15)
which defines the rule for the product U|X a_X in (13). A similar equation can be written for the spot b to determine the product U|X b_X in (13).
Equation (15) was tested in [28] when solving the problem of reconstruction of the shape of plane figures, processing their ER data with basis figures (Figures 4-6 in [28]). It turned out that (15) gives uncertainty in the form of a blurred boundary. However, it is possible to eliminate it by applying additional rules (16), which correct the ER a|u_k in (15); here the symbols <> and > denote the separation and inclusion (more) relations, respectively (see Table 1). Equations (15) and (16) help to determine the general rule (17) for multiplying an arbitrary L4 vector a_X and an L4 matrix A = Y|X defined on the bases X = {x_i} and Y = {y_j}. Here, the basis U = {u_i} consists of the intersections of the spots x_i, V = {v_i} is the basis of the intersections of the spots y_j, and W = {w_i} is the basis of the intersections of the spots of the U and V bases. Note that Equation (17) should be considered as a series of transformations from one basis to another, namely (18). The product U|X a_X can be calculated using (15) and (16), while the vectors V|W a_W and Y|V a_V can be calculated using (11) and (12), regarding V and W as atomic bases. Finally, we use the natural rule (19) for calculating the product W|U a_U, where the symbol < denotes the relation inclusion (less) (see Table 1). Note that rule (19) is also approximate.
Solution of Inverse Problems
It follows from the definition of the L4 matrix Y|X (10) that its inverse matrix X|Y always exists and is equal to the transposed matrix Y|X with additionally transposed elements (L4 numbers) (20). Therefore, as follows from (17) and (20), formally the solution of the equation a_Y = Y|X a_X can be represented as â_X (21), where, as in (17), the basis U consists of the intersections of the spots in X, the basis V of the intersections of the spots in Y, and W of the intersections of the spots of the U and V bases. Considering that Equations (15), (17), and (19) are approximate, we can conclude that, in the general case, the inverse solution (21) is also approximate.
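Since the remark around (20) states that the formal inverse of Y|X is the transposed matrix with each L4 entry transposed, this step can be written down directly; the sketch below reuses the 2 × 2 tuple representation of L4 numbers introduced above.

# Sketch: formal inverse of an L4 matrix, following the remark around (20):
# transpose the matrix and transpose each of its L4 entries (swap the roles of the spots).

def transpose_l4(n):
    (ab, ab_), (a_b, a_b_) = n
    return ((ab, a_b), (ab_, a_b_))

def inverse_l4_matrix(m):
    rows, cols = len(m), len(m[0])
    return [[transpose_l4(m[i][j]) for i in range(rows)] for j in range(cols)]

Y_X = [[((True, False), (False, True)), ((False, True), (True, True))]]
X_Y = inverse_l4_matrix(Y_X)
assert inverse_l4_matrix(X_Y) == Y_X  # applying the formal inverse twice restores the matrix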
Solving Inverse Problems Using L4 Matrices by Learning Method
As mentioned in the introduction, the practical application of solving inverse problems, especially electromagnetic inverse scattering, requires a large amount of computation time and resources. Alternative approaches involve the use of neural networks to train a solving system, which, after training, can produce an inverse solution for newly measured data. In the spots model, this has an analogy with the situation when the matrix A = Y|X in (17) is unknown and we want to determine it from training examples. Let us evaluate an unknown L4 matrix A on the basis of a set of training examples {x_i, y_i}, using equality (22). We can regard x_i, y_i as L4 vectors for the spots x_i and y_i that form the bases X = {x_i} and Y = {y_i} of the training data. Then, we can compose an L4 matrix Y|X and represent the matrix A in (22) in form (23), where B_X and B_Y are the atomic bases for the L4 vectors x_i and y_i, correspondingly. Obviously, for the testing set {x_i, y_i} the matrix Y|X is equal to the indistinguishability matrix I. Note that Equation (23) is a schematic interpretation of the learning process [29]. Let us consider the application of the trained system (23) to obtain the inverse solution of the corresponding equation. To do this, we can use the transformation of the matrix A (23), similar to (21), to represent the inverse solution in the general form (24). We should especially consider the case when the input data c and/or the output data d are numeric. Then, instead of the matrices Y|B_Y or B_X|X in (24), we have to apply the corresponding operators B_X and B_Y that transform L4 data to numerical data or vice versa. In this case, the forward problem takes a numerical form, and its inverse solution, instead of (24), can be represented in form (25), where B_X^{-1} and B_Y^{-1} are the inverse operators.
Image Reconstruction by Processing Qualitative Data
Although the proposed theory is developed for spots, which in general correspond to vague figures, it is convenient to verify its mathematical apparatus on crisp figures, which are the limiting case of spots. Let us consider the figure under test as a conditionally unknown spot, and the figures which are used for mapping this spot (or "sampling") as the known basis of spots [28]. More specifically, we consider the shape reconstruction of a crisp plane figure, utilizing only qualitative information of its ER with the basis figures, without additional details about these relations. However, it is possible to infinitely refine the reconstructed shape of the unknown figure by increasing the number of samplings and processing all the ER data. It may seem surprising, but it is theoretically possible to reconstruct the shape of an object with absolute precision. This is a consequence of the following theorem.
Theorem 1. Two figures a and b are equal if and only if they have equal ER (i.e., equal connection values) with any figure c of finite size.
Proof. Necessity. As can be seen from the diagram in Figure 2, the condition of equality of the figures a and b is equivalent to the equality of both their intersection parts A and B to the zero figure ∅. Let us suppose there is a figure c that has different ER with a and b, i.e., different connection values with these figures; for example, ac = 0, bc = 1 (Figure 2). Since A = B = ∅, the connection bc = 1 implies that c connects the common part of a and b, and hence ac = 1, which contradicts the assumption. Therefore, equal figures have equal ER with any finite figure.
Sufficiency. Suppose a ≠ b, so that at least one intersection part is not zero (Figure 2). If, for example, B ≠ ∅, then ∃c : {ac = 0, Bc = 1} → bc = 1, which contradicts the condition of equal ER with any finite figure. Therefore, the assumption a ≠ b is false, which proves the sufficiency condition.
It follows from Theorem 1 that all information about the shape of each figure is contained in the infinite set of its ER with all other figures of finite size. Therefore, in principle, it is possible to reconstruct this shape using such qualitative information. However, due to the incomplete, finite amount of ER data, figure shape reconstruction can only be approximate. This corresponds to the fact that the result of such a reconstruction corresponds to a blurry figure, that is, to a spot.
Note that shape reconstruction by processing qualitative data refers to inverse problems. Indeed, its forward problem can be formulated as the mapping (26), where P is a basis of atomic spots-pixels or voxels, X = {x_i} is a basis of scanning figures for testing, and a_X is the L4 vector of ER data for the reconstruction of the figure under test a_P. Following (21), the reconstructed figure â_P is the inverse solution of (26) that, similar to (21), can be represented in the form of Equation (27), where U is the basis of intersections of the spots {x_i}. The mapping a_U = U|X a_X can be found using (15) and (16).
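The following sketch illustrates the spirit of the forward and inverse problems (26)-(27) on a pixel grid: an unknown binary figure is scanned with small squares, only the qualitative relation of each square to the figure (separated, intersected, or included) is recorded, and the figure is re-estimated from these labels. The per-pixel decision rule is a deliberately simplified stand-in for rules (15)-(16).

import numpy as np

# Sketch of reconstruction from qualitative scan data (cf. (26)-(27)).
# The per-pixel voting rule is a simplified stand-in for rules (15)-(16).

def scan_relations(img, size=4):
    """Record the ER of every size x size scanning square with the binary figure."""
    h, w = img.shape
    rels = []
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            patch = img[r:r + size, c:c + size]
            if patch.all():
                er = "included"
            elif patch.any():
                er = "intersected"
            else:
                er = "separated"
            rels.append((r, c, er))
    return rels

def reconstruct(shape, rels, size=4):
    """A pixel is excluded if some covering square reports separation,
    and it is certainly inside if some covering square is fully included."""
    inside = np.zeros(shape, dtype=bool)
    possible = np.ones(shape, dtype=bool)
    for r, c, er in rels:
        block = (slice(r, r + size), slice(c, c + size))
        if er == "separated":
            possible[block] = False
        elif er == "included":
            inside[block] = True
    return inside | possible

original = np.zeros((32, 32), dtype=bool)
original[8:24, 10:22] = True
estimate = reconstruct(original.shape, scan_relations(original))
print("misclassified pixels:", int((estimate ^ original).sum()))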
Inverse Radon Algorithm for Binary Figures
Let us consider scanning figures-squares as the basis X = {x i }, and use the calculated sinograms (projections) S of these squares as training data, which we will assign to the basis Y = {y i }. As before, {x i , y i } will be considered training data for the learning system, and we will determine an algorithm for the inverse solution by learning.
The forward problem is the Radon transform (Sec. 8.7.3 of [2]), which can be written in the form
s = R(a_P) (28)
where P is the basis of pixels, R is the Radon transform operator, and s is the sinogram of the a_P image. Following (25), the inverse Radon solution of (28) can be represented as â_P (29), where U is the basis of intersections of {x_i}. Note that the operator B_Y^{-1} depends on the training sinogram data S, and the matrix Y|X is the indiscernibility matrix I when solving by the learning method, as was mentioned before. Therefore, the solution reduces to the simplified form (30).
Let us find rules for calculating a_Y in (30), defining the ER between the sinograms presented in Figure 3. Here, small spiking sinograms (continuous lines) correspond to relatively small basis squares x_i, and oval-type sinograms (dashed lines) correspond to the ellipse figure a_P (see Section 4.3). The main idea of the suggested algorithm is that if the basis figure x_i has such ER with the figure under test a as separation or intersection, then there are projection angles for which their sinograms are separated or intersected. However, if x_i is included in a, then all their sinograms have the ER inclusion as well. Hence, we can define rules (31) for the ER of the projections s_X(i, j, k) and s_a(i, j, k), which are converted to logical values. Here, ¬ is the logical negation, the i-index corresponds to that of the x_i square, the j-index corresponds to the projection coordinate ξ(j), and the k-index corresponds to the projection angle α_0(k) (Sec. 8.7.3 of [2]). Using (30) and (31), we obtain a spot-based inverse Radon algorithm for the reconstruction of binary images.
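A schematic rendering of the idea behind (30)-(31) is given below: the object and every basis square are compared sinogram by sinogram, a square is accepted only if its sinogram support lies inside the support of the object's sinogram at every angle, and the union of the accepted squares forms the reconstruction. The support test is an assumed simplification of the printed rules, and scikit-image's radon transform stands in for the authors' MATLAB implementation.

import numpy as np
from skimage.transform import radon

# Schematic version of the sinogram-based inclusion test behind (30)-(31).
# A basis square is accepted only if, for every projection angle, its sinogram
# support lies inside the support of the object's sinogram (an assumed simplification).

def sinogram_support(image, theta, eps=1e-6):
    return radon(image.astype(float), theta=theta, circle=False) > eps

def spot_radon_reconstruct(obj_sino_support, shape, theta, size=5, step=4):
    recon = np.zeros(shape, dtype=bool)
    for r in range(0, shape[0] - size + 1, step):
        for c in range(0, shape[1] - size + 1, step):
            square = np.zeros(shape)
            square[r:r + size, c:c + size] = 1.0
            sq_support = sinogram_support(square, theta)
            if np.all(~sq_support | obj_sino_support):  # square support is a subset of object support
                recon[r:r + size, c:c + size] = True
    return recon

theta = np.linspace(0.0, 180.0, 9, endpoint=False)
obj = np.zeros((64, 64))
obj[20:44, 24:40] = 1.0                                  # a simple binary rectangle
recon = spot_radon_reconstruct(sinogram_support(obj, theta), obj.shape, theta)
print("reconstructed pixels:", int(recon.sum()), "of", int(obj.sum()))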
Results of Image Reconstruction
To illustrate the suggested theory, MATLAB programs were written that provide processing of the ER data between the figure under test and the basis spots (crisp figures) x_i, which are scanning (or basis) squares. To obtain a better resolution, we utilize a quite tight distribution of the basis spots, which makes the intersections u_k (14) relatively small. We used a computer with an AMD Ryzen 7 2700X processor, 8 cores, 3.70 GHz, 32 GB RAM, and no GPU.
Reconstruction of Binary Images
The ER data were obtained by scanning with 4 × 4 pixel squares with a scan period of 1 pixel and were processed using (27) and algorithm (15), (16). The number of basis squares was approx. 20,000, and the calculation time was about 9 min in all cases.
To compare the original and reconstructed binary images, we calculated the misfit error mer (32), where N_OI and N_RI are the numbers of pixels that correspond to the inner regions of the original (noise-free) and the reconstructed images, respectively. Figure 4 represents the reconstruction of images of a five-pointed star without and with strong noise, utilizing only data from its ER with the scanning squares. Note that Figure 4d demonstrates the effective denoising capability of the algorithm (15), (16), (27). The image sizes were 128 × 128 pixels, and the misfit errors for the reconstructed images are mer = 0.1% for Figure 4b and mer = 3.1% for Figure 4d.
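Because the explicit formula for the misfit error (32) is not reproduced in the text, the sketch below uses one plausible definition consistent with the verbal description, namely the relative difference between the inner-region pixel counts of the reconstructed and original images.

import numpy as np

# One plausible reading of the misfit error (32): the relative difference between
# the pixel counts of the inner regions of the reconstructed and original images.
# This is an assumption; the exact formula is not reproduced in the text.

def misfit_error(original: np.ndarray, reconstructed: np.ndarray) -> float:
    n_oi = int(np.count_nonzero(original))
    n_ri = int(np.count_nonzero(reconstructed))
    return abs(n_ri - n_oi) / n_oi * 100.0

original = np.zeros((128, 128), dtype=bool)
original[30:90, 40:100] = True
reconstructed = original.copy()
reconstructed[30:90, 98:100] = False          # the reconstruction misses a thin strip
print(f"mer = {misfit_error(original, reconstructed):.2f}%")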
Figure 5 demonstrates the reconstruction of hand-mask images-noise-free and strongly noisy-using similar ER data and rules. Note that Figure 5d also demonstrates the effective denoising capability of the algorithm (15), (16), (27). The image sizes were 120 × 120 pixels, and the misfit errors for the reconstructed images are mer = 3.1% (for Figure 5b) and mer = 4.7% (for Figure 5d).
Reconstruction of Gray Scale Images
To be able to apply the developed reconstruction algorithm (15), (16), (27) to grayscale images, it is necessary to add a new dimension to 2D spots corresponding to their intensity value. In order for this numerical coordinate to be consistent with the general spot ideology, we represent the intensity axes as a linear structure, a chain of intersected spots (Figure 6).
Results of the reconstruction are demonstrated in Figures 7-10, where the intensity axes of 128 × 128 pixels images were divided into 20 layers. The number of basis squares was about 20,000, and the calculation times were 6-9 min.
Results of the reconstruction are demonstrated in Figures 7-10, where the intensity axes of 128 × 128 pixels images were divided into 20 layers. The number of basis squares was about 20,000, and the calculation times were 6-9 min. As before, the ER data were obtained using 4 × 4 pixels squares that were scanned in each of the 20 layers, and their scan period was 1 pixel. The SNR, which is defined for an average image intensity, is 9.7 dB (for Figure 9c), 23.3 dB (for Figure 10c), and 9.3 dB (for Figure 10e). Note that Figures 9d and 10d,f also demonstrate the noise reduction ability of the reconstruction algorithm (15), (16), (27).
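The layer-wise grayscale scheme can be summarized by the following sketch, which splits the intensity range into 20 intervals, reconstructs each binary layer independently (the per-layer reconstruction is only stubbed here), and recombines the layers into a grayscale image. The use of equal-width intervals and interval midpoints for recombination are assumptions.

import numpy as np

# Sketch of the layer-wise grayscale scheme: split the intensity axis into intervals,
# reconstruct each binary layer independently, then recombine into a grayscale image.
# reconstruct_layer is a placeholder; in the paper each layer is processed with the
# spot-based binary reconstruction algorithm (15), (16), (27).

def reconstruct_layer(binary_layer: np.ndarray) -> np.ndarray:
    return binary_layer                      # placeholder for the binary reconstruction step

def layered_reconstruction(img: np.ndarray, n_layers: int = 20) -> np.ndarray:
    edges = np.linspace(img.min(), img.max(), n_layers + 1)
    result = np.zeros(img.shape, dtype=float)
    for k in range(n_layers):
        lo, hi = edges[k], edges[k + 1]
        layer = (img >= lo) & (img < hi) if k < n_layers - 1 else (img >= lo)
        restored = reconstruct_layer(layer)
        result[restored] = (lo + hi) / 2.0   # recombine the layers using interval midpoints
    return result

img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))   # a simple horizontal gradient
out = layered_reconstruction(img)
print("max layering error:", float(np.abs(out - img).max()))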
Inverse Radon Image Reconstruction
We compared a conventional back-projection and a spot-based (31) algorithm for the reconstruction of binary images, which used under-sampled parallel-beam sinograms for 6, 9, and 18 projection angles only. Figures 11 and 12 show the results of this comparison for the 128 × 128 pixels image reconstruction. The sinograms are calculated using the Radon transform, but they imitate the real experimental sinograms of X-ray transmission through the body in a CT system with parallel-beam geometry [3]. Note that typically a CT scanner collects projection signals in approximately 1° increments, and hence the simulated examples in Figures 11 and 12 correspond to strongly under-sampled data.
Application of the back-projection algorithm with the Hann filter is demonstrated in Figure 11 for two images of the ellipse: noise-free (Figure 11a) and strongly noisy (Figure 11e). It is clear that the results of the noisy image reconstruction in Figure 11f-h demonstrate significant blurring of the reconstructed image. Figure 12 shows the results of the same image reconstructions, using the spot-based algorithms (30) and (31) with a 5 × 5 pixels square and a 1 pixel scan period. The number of basis squares was about 20,000, and the calculation times were about 6 min. These results demonstrate that the suggested algorithm allows the reconstruction of unblurred images even for a small number of projection angles. Images in Figure 12f-h also demonstrate a strong denoising effect for the spot-based algorithm, in contrast to the filtered back-projection algorithm. The misfit errors mer (32) were 3.6% (for Figure 12b,f), 4.8% (for Figure 12c,g), and 5.6% (for Figure 12d,h).
Inverse Radon Image Reconstruction
We compared a conventional back-projection and a spot-based (31) algorithm for the reconstruction of binary images, which used under-sampled parallel-beam sinograms for 6, 9, and 18 projection angles only. Figures 11 and 12 show the results of this comparison for the 128 × 128 pixels image reconstruction. The sinograms are calculated using the Radon transform, but they imitate the real experimental sinograms of X-ray transmission through the body in a CT system with parallel-beam geometry [3]. Note that typically a CT scanner collects projection signals in approximately 1 • increments, and hence the simulated examples in Figures 11 and 12 (e) (f) (g) (h) Figure 11. Examples of reconstruction of two ellipses, using their under-sampled parallel-beam sinograms by the back-projection algorithm with Hann filter. (a) Original ellipse; (b-d) Reconstructed images for 18, 9, and 6 projection angles, correspondingly; (e) Original strong noisy ellipse; (f-h) Reconstructed images for 18, 9, and 6 projection angles, correspondingly.
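To make the under-sampling comparison concrete, the sketch below runs the conventional filtered back-projection baseline on a simple two-ellipse phantom using scikit-image. It is an illustrative stand-in, not the code used for Figures 11 and 12; the phantom geometry, the library choice, and the error metric are assumptions.

```python
# Illustrative sketch: conventional filtered back-projection (Hann filter)
# from under-sampled parallel-beam sinograms of a two-ellipse phantom.
import numpy as np
from skimage.draw import ellipse
from skimage.transform import radon, iradon

# Build a 128 x 128 binary image containing two ellipses (assumed geometry).
img = np.zeros((128, 128))
rr, cc = ellipse(50, 60, 20, 35, shape=img.shape)
img[rr, cc] = 1.0
rr, cc = ellipse(85, 70, 12, 25, shape=img.shape)
img[rr, cc] = 1.0

for n_angles in (18, 9, 6):                      # under-sampled projection sets
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=theta)           # parallel-beam Radon transform
    recon = iradon(sinogram, theta=theta, filter_name='hann')  # filtered back-projection
    err = np.mean(np.abs(recon - img))
    print(f"{n_angles} angles: mean abs error = {err:.3f}")
```

Fewer projection angles visibly degrade the back-projection result, which is the behavior the spot-based algorithm is compared against.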
Discussion
In [27], the use of the apparatus of the spots model for creating neural networks of a new type is considered, in which each layer corresponds to an L4 matrix and L4 numbers are used instead of real numbers. Here, the L4 vectors play the role of input and output signals for each layer, and the L4 matrix of each layer plays the role of the weight matrix. For example, Equations (21) and (23) can be implemented using such a neural network, which consists of 4 and 3 layers, respectively. In addition, it is possible to create a neural network in the form of a neuromorphic electronic device built on solid-state elements such as field-effect transistors (FETs), FeFETs, or memristors [27].
The potential advantage of the proposed neural networks over conventional ones is, in particular, that the L4 matrix apparatus does not use real numbers with complex calculations during iterations in the backpropagation algorithm of learning by examples. Instead, Equation (24) uses the inverse matrix product. Although the proposed algorithms are approximate, they are adequate to the fact that it is almost always impossible to obtain an exact solution to inverse problems due to the finite number of measured signals. In addition, the tasks of recognition and classification, in principle, do not belong to the class of tasks that require accurate calculations.
The reconstructed images in Figures 4, 5 and 7-12 demonstrate a good denoising ability of the proposed algorithm. This property relates to the fact that the scanning squares of 4 × 4 or 5 × 5 pixels play the role of a spatial filter and average the sampled data. However, the spatial resolution of the reconstructed image is determined by the scan period of 1 pixel. This can be explained using Formulas (15) and (27), from which it follows that the resolution corresponds to the intersection size of 1 × 1 pixels.
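A minimal numerical check of this spatial-filtering argument (not taken from the paper): averaging unit-variance pixel noise over 4 × 4 scanning squares placed with a 1-pixel scan period reduces its standard deviation by roughly a factor of 4, while every pixel position still receives its own measurement.

```python
# Sanity check that k x k scanning squares act as an averaging (denoising) filter.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(128, 128))    # unit-variance pixel noise
k = 4                                            # scanning square size

# Average the noise over every k x k square placed with a 1-pixel scan period.
averaged = np.array([
    noise[i:i + k, j:j + k].mean()
    for i in range(128 - k + 1)
    for j in range(128 - k + 1)
])

print("raw pixel noise std:   ", noise.std())     # close to 1.0
print("4x4-averaged noise std:", averaged.std())  # close to 1/k = 0.25
```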
An imaging algorithm using the spot-based inverse Radon transformation (Equation (31) and Figure 12) illustrates the processing of qualitative data for solving by learning. Indeed, the sampling figures are basis spots and also relate to the training set, whereas the figure under test corresponds to the test example in the machine learning paradigm [29]. Finally, the reconstructed image, which is mapped on the basis of the intersections, corresponds to the solution of the trained system.
As it was noted in the Introduction, we can draw a general conclusion about the ideological proximity of the models of the spots and rough sets [48][49][50][51][52], although they have a fundamental difference. In addition, there are several close concepts between the spot model and the rough set theory (see Table 2). Table 2. Analogies between the concepts of rough sets and spots.
Concepts of the Rough Set Theory -> Concepts of the Spots Model
elements of the universe -> atomic basis
granules -> spots
attributes -> basis of spots
attribute values -> L4 numbers
boundary region -> boundary
lower approximation -> inner region
upper approximation -> inner region + boundary
As shown in Table 2, spots are similar to granules, which is also the main concept in the granular computing (GC) research area [62][63][64][65][66][67][68]. A comparison of the spot and granule concepts in GC allows us to conclude that both models are also very close in many aspects.
The suggested spots model can be used for a theory of qualitative geometry (QG). For this application, it is necessary to introduce new concepts that are low-level analogues of notions in geometry and topology, including line, surface, dimensionality, curvature of space, etc. Based on the QG, one can introduce the concept of a semantic information space, which is an analogue of an information system characterized by an information table and is used, for example, in the theory of rough sets [50,52].
Conclusions
This article is devoted to the description of the concept and the basis of the apparatus of new mathematical objects-spots, which are introduced to represent and process qualitative data. It can be used to model human mental images, perceptions, and reasoning in AI. Furthermore, this paper demonstrates another application of the developed apparatus-for solving inverse problems by the learning method.
The proposed model uses such qualitative information about spots as elementary relations between them and introduces L4 logical numbers that encode these relations. Based on L4 numbers, the theory introduces L4 vectors and L4 matrices using the analogy with numerical matrix algebra. Although L4 numbers correspond to an elementary level of qualitative data, fusing a large number of them allows one to extract a higher level of information, including numerical information.
Equations have been derived for reconstructing an image using only qualitative information about its elementary relations with a set of base figures. A general scheme for solving inverse problems for L4 and numerical data is proposed, including a learning method for solving.
The introduced apparatus was tested by solving image reconstruction problems using only qualitative data of its elementary relations with the scan figures. The application of spot-based inverse Radon's algorithm for the reconstruction of a binary image was also demonstrated.
Further research in the field of the proposed theory involves the development of algorithms for solving various inverse problems, including inverse electromagnetic scattering. Another goal of the work is to design neural networks based on the proposed spots model, where each layer corresponds to the L4 matrix. | 2023-01-25T16:01:53.034Z | 2023-01-21T00:00:00.000 | {
"year": 2023,
"sha1": "74c26875724b87b72fdb4e58fad57bad2d1b2cc7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/3/1247/pdf?version=1674291947",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcae82c62a4ac2686e300289effc2341cc9f8601",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
220509929 | pes2o/s2orc | v3-fos-license | Seventh order hybrid block method for solution of first order stiff systems of initial value problems
A hybrid second derivative three-step method of order 7 is proposed for solving first order stiff differential equations. The complementary and main methods are generated from a single continuous scheme through interpolation and collocation procedures. The continuous scheme makes it easy to interpolate at off-grid and grid points. The consistency, stability, and convergence properties of the block formula are presented. The hybrid second derivative block backward differentiation formula is concurrently applied to the first order stiff systems to generate the numerical solution that do not coincide in time over a given interval. The numerical results show that the new method compares favorably with some known methods in the literature.
Introduction
The numerical solutions of stiff systems have been one of the major worries for numerical analysts. A numerical method that is potentially good for solving systems of stiff ODEs must have some reliability in terms of its region of absolute stability and good accuracy [1]. Consider the system of initial value problems (IVPs) given by y' = f(t, y), y(t_0) = y_0, (1) where f : R × R^s → R^s; y, y_0 ∈ R^s, and s is the dimension of the system. f satisfies the Lipschitz condition, and the Jacobian ∂f/∂y has eigenvalues with negative real parts of widely differing magnitudes, which characterizes stiffness. Since the work of Dahlquist, several linear multistep methods have been developed, including continuous ones. Continuous multistep methods have been the subject of growing interest due to the fact that continuous methods enjoy certain advantages, such as they have the potential to provide defect control (see Enright [4]) as well as they are able to generate complementary methods, which are applied together as a single block scheme (see Onumanyi et al. [5], Akinfenwa et al. [6]-[11], and Jator [12]-[14]). Most of the block methods proposed in the literature, such as Chartier [15], Shampine and Watts [16], Chu and Hamilton [17], and Suleiman et al. [18], to mention but a few, are usually implemented in the predictor-corrector mode. In this paper, we propose a Hybrid Block Second Derivative Backward Differentiation Formula (HBSDBDF) which simultaneously provides the solution of (1) in each block without the use of predictors (see Akinfenwa et al. [6]-[11], Jator et al. [12], [13], and Yakubu and Markus [19]).
Development of the method
We seek the numerical estimation of the analytic solution y(t) by assuming an approximate solution Y(t) of the form Y(t) = ∑_{j=0}^{7} m_j φ_j(t), (2) where t ∈ [t_0, T_n], the m_j are undetermined coefficients that must be obtained, and the φ_j(t) are basis polynomial functions of degree 7. It is required that the eight equations below be satisfied: the interpolation conditions ∑_{j=0}^{7} m_j t_{n+i/2}^j = y_{n+i/2}, i = 0, 1, ..., 5, (3) and the collocation conditions ∑_{j=1}^{7} j m_j t_{n+3}^{j-1} = f_{n+3} (4) and ∑_{j=2}^{7} j(j-1) m_j t_{n+3}^{j-2} = g_{n+3}, (5) where n is the grid index, y_{n+i} = Y(t_{n+i}) is the numerical estimation of the analytical solution y(t_{n+i}), and g_{n+i} = df/dt(t_{n+i}, Y(t_{n+i})). Equations (3), (4), and (5) provide a system of eight equations whose solution generates the coefficients m_j, which are then substituted into Eq. (2). After some algebraic manipulation, the continuous form of the hybrid second derivative formula (6) is obtained, where α_{i/2}(t), i = 0, 1, 2, ..., 5, β_k(t), and γ_k(t) are continuous coefficients; k = 3 is the step number; and h is the step length. We assume that y_{n+i/2} = Y(t_n + ih/2) is the numerical estimation of the analytical solution y(t_n + ih/2), f_{n+k} = f(t_{n+k}, y_{n+k}) is an approximation to y'(t_{n+k}), and g_{n+k} = df/dt(t, y(t))|_{t_{n+k}}. The main method (7) is generated from (6) by interpolating at t = t_{n+3}, while the complementary methods are obtained by differentiating (6) with respect to t to obtain (8) and interpolating (8) at the remaining grid and off-grid points to give (9)-(13). The hybrid methods are implemented by applying (7), (9)-(13) as a single block method to provide the approximate solutions y_{n+1/2}, y_{n+1}, y_{n+3/2}, y_{n+2}, y_{n+5/2}, y_{n+3}, for n = 0, 3, ..., N − 3, on the partition [t_0, t_3, t_6, ..., t_{N−3}, t_N].
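The sketch below illustrates how the coefficients m_j can be obtained from such an interpolation/collocation system. Because the displayed equations of the original paper are not reproduced in this text, the specific conditions used here (interpolation of y at t_{n+i/2}, i = 0,...,5, plus collocation of f and g at t_{n+3}) are an assumption consistent with the surrounding description, not the authors' exact derivation.

```python
# Hedged sketch: build and solve the 8x8 interpolation/collocation system
# for the polynomial coefficients m_0..m_7 of Y(t) = sum_j m_j t^j.
import numpy as np

def solve_coefficients(tn, h, y_vals, f_end, g_end):
    """y_vals: y_{n+i/2} for i = 0..5; f_end = f at t_{n+3}; g_end = g at t_{n+3}."""
    A, b = [], []
    # Interpolation conditions: sum_j m_j t^j = y_{n+i/2}.
    for i, y in enumerate(y_vals):
        t = tn + i * h / 2.0
        A.append([t**j for j in range(8)])
        b.append(y)
    t3 = tn + 3.0 * h
    # Collocation of the first derivative at t_{n+3}.
    A.append([j * t3**(j - 1) if j >= 1 else 0.0 for j in range(8)])
    b.append(f_end)
    # Collocation of the second derivative at t_{n+3}.
    A.append([j * (j - 1) * t3**(j - 2) if j >= 2 else 0.0 for j in range(8)])
    b.append(g_end)
    return np.linalg.solve(np.array(A), np.array(b))   # coefficients m_j
```

In practice the system is solved symbolically so that the continuous coefficients α_{i/2}(t), β_3(t), and γ_3(t) can be written in closed form; the numerical version above only demonstrates the mechanics.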
Zero stability
The zero stability of the HBSDBDF can be determined from the difference system (14) as h tends to 0. Thus, as h → 0, the method (14) tends to the difference system (15), where W_1 and W_0 are 6 by 6 constant matrices. Hence, from (15), we obtain the first characteristic polynomial π(L) = det(L W_1 − W_0). The HBSDBDF is zero-stable since the roots of π(L) = 0 satisfy |L_j| ≤ 1, j = 1, ..., 6, and for those roots with |L_j| = 1, the multiplicity does not exceed 1.
Local truncation error
Theorem. The HBSDBDF has a local truncation error of order O(h^8), consistent with the order p = 7 of the method.
Consistency and convergence
The HBSDBDF (14) is consistent as each of the schemes has order p > 1. Following Henrici [20], since the HBSDBDF is zero-stable and consistent, then the HBSDBDF is convergent.
The stability domain of the method is defined as δ = {z ∈ C : |R(z)| ≤ 1}, where R(z) is the stability function. To determine the stability of the HBSDBDF, we state the following definitions:
(i) Dahlquist [3]: A numerical method whose stability region contains the entire left half plane is said to be A-stable.
(ii) Ehle [21]: A method that is A-stable and for which lim_{z→−∞} R(z) = 0 is said to be L-stable.
The region of absolute stability (RAS) of the method is plotted using the boundary locus technique. Figure 1 depicts the stability region of the HBSDBDF for the dominant eigenvalue R(z). It can be seen that the HBSDBDF is A-stable because the stability region contains the whole left half of the complex plane; the interval read from the plot is [−3.4, 0]. Moreover, since the HBSDBDF is A-stable and, as in Ehle [21], the requirement that lim_{z→−∞} R(z) = 0 is satisfied, it is L-stable.
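Since the HBSDBDF's stability function R(z) is not reproduced in this text, the sketch below uses the backward Euler stability function R(z) = 1/(1 − z) as a stand-in, simply to show how a region of absolute stability {z : |R(z)| ≤ 1} can be scanned over the complex plane and plotted; it is not the HBSDBDF's region.

```python
# Illustrative plot of a region of absolute stability for a stand-in A-stable method.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 400)
y = np.linspace(-6, 6, 400)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y

R = 1.0 / (1.0 - Z)                              # backward Euler stability function
plt.contourf(X, Y, np.abs(R) <= 1, levels=[0.5, 1.5])   # shade where |R(z)| <= 1
plt.axhline(0, color='k', lw=0.5)
plt.axvline(0, color='k', lw=0.5)
plt.xlabel('Re(z)')
plt.ylabel('Im(z)')
plt.title('|R(z)| <= 1 for a stand-in A-stable method')
plt.show()
```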
Numerical examples
Example 1 This example is a linear system on the range 0 ≤ t ≤ 10 (see [6]).
For the problem, the maximum absolute error was computed on the given interval. It was found that the HBSDBDF of order 7 is more accurate than the BBDF of the same order in [6], although when h = 0.1250 the max error is greater than the error in [6]. Example 2 Our aim here is to demonstrate the accuracy, rate of convergence (ROC), and good stability properties of the HBSDBDF. For different step sizes h, the maximum relative error over the grid points is displayed in Table 2. Clearly, the ROC is consistent with the order of the new block scheme. Example 3 Consider the highly stiff system (see [19]): y_1' = −10^7 y_1 + 0.075 y_2, y_1(0) = 1. The eigenvalues of the Jacobian of the system are approximately λ_1 = −1.000000000562500 × 10^6 and λ_2 = −0.0743749995813. The result of the HBSDBDF is compared with that of Yakubu and Markus [19] using a second derivative method, as displayed in Table 3. Example 4 Next consider the chemistry problem in Gear [22], Cash [23], and Yakubu [19]. We solve this problem in the interval 0 ≤ t ≤ 50 using the HBSDBDF. The result is y_1 in blue, y_2 in brown, and y_3 in red, as shown in Fig. 2, with the numerical values for h = 0.001 at the points T = 10, 20, 30, 40, and 50. See Table 4.
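Relating to the rate of convergence reported for Example 2 above, the observed order is typically estimated from errors at successive halvings of the step size, as in the generic sketch below; the error values shown are hypothetical and the code is not the authors'.

```python
# Generic estimate of the observed order: ROC ~ log2(E(h) / E(h/2)).
import math

def rate_of_convergence(err_h, err_half_h):
    """Observed order from maximum errors at step sizes h and h/2."""
    return math.log2(err_h / err_half_h)

# Hypothetical errors for successive halvings of h (illustration only).
errors = [1.0e-6, 8.2e-9, 6.5e-11]
for e1, e2 in zip(errors, errors[1:]):
    print(f"observed order ~ {rate_of_convergence(e1, e2):.2f}")   # close to p = 7
```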
Example 5 Consider the well-known nonlinear problem (Kaps problem) in the range 0 ≤ t ≤ 10.
This problem is solved using the method in [10] and the new HBSDBDF so as to show the advantage of the new method over that in [10]. The absolute error at the end point t = 10, the number of function evaluations NFE, the step size h, and the number of computation steps N are displayed in Table 5.
Results and discussion
The stability of the newly derived method was obtained by using the boundary locus approach. The technique involves finding the roots of the stability function, which is a rational complex function, and plotting the imaginary root against the real root. The interval of stability read from the plot of the region of absolute stability gives [−3.4, 0]. The result obtained in Example 1 showed the accuracy and stability of the method. However, when h = 0.1250, the max error is greater than the error in [1]. This was due to the fact that the new method converges with 13 correct digits from h = 0.5 to h = 0.1250. The second example was solved to show that the rate of convergence of the method is in agreement with the order of the method. The third example is a highly stiff system, and it is solved to show the effectiveness of the method. Despite the fact that the method is of order 7, it was compared with those of orders 8 and 11 in [19]. Table 3 shows the superiority of the new method over those in [19]. The fourth problem solved is a standard chemistry problem, and the result plotted in Fig. 2 and the numerical solution shown in Table 4 are in agreement with those in the literature. The advantage of the HBSDBDF can be seen in Table 5, where the new method converges even for a very large step size. The low number of function evaluations shows that the new method can save computer memory with reduced computation time.
Conclusion
In this article, a new hybrid second derivative block backward differentiation formula for solving stiff systems of first order initial value problems is reported. The stability analysis has been conducted based on the boundary locus technique to obtain the region of absolute stability. The HBSDBDF is implemented without the need for predictors or starting values, and therefore subroutines that are sometimes complicated are not necessary. Five standard numerical examples, both linear and non-linear stiff systems, have been solved to show the accuracy and efficiency of the method. From the results obtained, the rate of convergence confirmed the order of the method. Detailed results are displayed in Tables 1, 2, 3, 4 and 5. The results have shown that the HBSDBDF is suitable for the solution of stiff problems and converges accurately even for a large step size h. The advantages of the new method are that it is more accurate than those in [10] and [19], and that it requires fewer function evaluations than the methods in [10] and [19], hence reducing the computation time and the use of computer memory. Another advantage is that it converges accurately with a large step size h, while those in [10] and [19] are less accurate, as evident in Tables 3 and 5. This is the goal of numerical analysts. The disadvantage, however, is that the new method converges with fewer digits of accuracy when compared with the method in [10] for problems using a very small h. | 2020-07-02T10:10:42.695Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "98fef06ecf05b23dff3550bcf6761b4e945b6189",
"oa_license": "CCBY",
"oa_url": "https://joems.springeropen.com/track/pdf/10.1186/s42787-020-00095-3",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1ba71540320efe0eadfaf442feda7178c1889929",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
220971209 | pes2o/s2orc | v3-fos-license | Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach
Abstract Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with higher risk of developing breast cancer. The background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories “minimal,” “mild,” “moderate,” and “marked.” The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories. This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction between the first post-contrast volume and the native T1-weighted images. A hierarchic approach was implemented relying on 2 dCNN models for detection of MR-slices imaging breast tissue and for BPE classification, respectively. Data annotation was performed by 2 board-certified radiologists. The consensus of the 2 radiologists was chosen as reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratic Cohen's kappa. Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Over the 4 classes, the BPE classification was reached with mean accuracies of 74% for training, 75% for the validation, and 75% for the real word dataset. As compared to the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2). On the other hand, the reliability for the dCNN model was 0.815. Automatic classification of BPE can be performed with high accuracy and support the standardization of tissue classification in MRI.
Introduction
Magnetic resonance imaging (MRI) is an established technique for breast imaging and it is used for evaluation of the breast tissue in high-risk patients, pre-operative staging, monitoring of chemotherapy effect, evaluation of women with breast implants, or occult primary breast cancer. [1] After administration of the contrast agent, both lesions and normal fibroglandular tissue (FGT) may enhance. [2] In some subjects, the enhancement of the FGT, that is, the background parenchymal enhancement (BPE), may present an asymmetric and non-diffusive distribution, as well as a suspicious dynamic response. In those cases, the BPE can affect the diagnostic accuracy of lesion classification according to the Breast Imaging-Reporting and Data System of the American College of Radiology (ACR BI-RADS). [3][4][5] Not only technical factors (e.g., concentration of the contrast agent, T1-weighted contrast of the sequence) [6] but also the vascular mammary anatomy and the hormonal status are known to affect the BPE levels. [7,8] In young patients and patients undergoing hormonal therapy, BPE is more markedly expressed than in other patients. [9] In order to account for the monthly hormonal changes of the breast, breast MRI is preferably performed during the 7th to 14th day of the menstrual cycle. [10,11] Moreover, to achieve a better standardization of the BPE classification, radiologists are requested to rate the BPE according to the BI-RADS classification [12] as minimal, mild, moderate, or marked. However, the visual rating of the BPE is prone to be reader-dependent; in a study from 2015, Grimm et al reported a fair mean inter-observer reliability in the BPE classification (Cohen's kappa, k = 0.28). [13] Besides its relevance for the diagnostic accuracy of breast MRI, a few studies have claimed an association between BPE and breast cancer risk. [14][15][16] To overcome the problem of the human variability in the classification of the BPE, automatic or semi-automatic methods have been proposed. The reported technical solutions propose a volumetric or quantitative computation of the FGT enhancement. [17][18][19] Although those methods aim at an objective evaluation of the BPE, the association between quantitative parenchymal enhancement (QPE) and the BPE is only fair, due to the lacking possibility of accounting for the intensity of the enhancement or for the presence of spotted BPE patterns. [17] Similarly to the case of the BPE, the human visual classification of the mammographic breast density is reader-dependent. In the case of the mammographic breast density, deep learning has been shown to provide clinically valuable classification by relying on image pattern recognition. [20][21][22] In this study, we propose the use of a deep convolutional neural network (dCNN) for the classification of the BPE in MRI. Performances of the algorithm were clinically validated by comparing the classification of the algorithm on a real-world data set with the consensus of 2 board-certified radiologists.
The breast MRI measurement is usually performed with some margin, so before the BPE classification, the slices containing breasts must be selected. To this end, we propose an auxiliary model that recognizes slices depicting the breast.
Study design and population
This retrospective study was approved by the local Ethics Committee. All patients undergoing a breast MRI at our institution from September 2013 to October 2015 were considered for analysis. A total number of 149 patients was included. The mean patient age ± standard deviation was 49 ± 6 years. Each patient was examined once.
Image acquisition
All breast MRI examinations were performed with the patient in prone position using a 3-T unit (MAGNETOM Skyra, Siemens Medical Solution, Erlangen, Germany) and a dedicated 4-channel breast coil. For each patient, the imaging protocol included an axial T2-weighted short-tau inversion recovery sequence (TR 5600 ms or 8970 ms, TE 70 ms, inversion time 150 ms, flip angle 150°, voxel 1.3 mm  0.6 mm  0.3 mm or 0.7 mm  0.7 mm  2 mm) and an axial diffusion-weighted sequence (TR/TE 4300/89 ms, voxel 2.7 mm  2.7 mm  4 mm, b-values 0, 500, 1000 s/ mm 2 ) before contrast agent injection. Thereafter, a dynamic protocol consisting in the acquisition of a T1-weighted gradientecho three-dimensional fast low-angle shot sequences (TR/TE 11/ 4.89 ms, voxel 0.8 mm  0.8 mm  1.3 mm) before and after contrast agent administration (0, 1, 2, 3, 4, and 5 minutes) was acquired. The dose of the contrast agent was adapted to the weight of the patient (0.1 mmol/kg).
Dataset preparation
The retrieved dataset consisting in 149 studies and 11,769 MR images was used for training 2 models: the breast detection model for the recognition of slices depicting the breast, and the BPE model for the BPE classification. The dataset contained 1169 slices depicting breast with implants and 699 without depicted breast. For the breast detection model, the whole retrieved dataset was split into 3 categories: "breast," "no-breast," and "implants." Each category was randomly split into training, validation, and test set at a ratio of 70%, 20%, and 10%, respectively.
For the BPE model, 9902 single-slice MR images from 124 patients without breast implants were selected and annotated in terms of BPE according to the BI-RADS atlas (3613 as minimal, 4282 as mild, 1556 as moderate, and 451 as marked). To balance the number of images belonging to each BPE category, the data were augmented by random shifting and zooming in the range of ±5% and by horizontal flipping. The catalog structure containing the data sets is presented in Figure 1. Before feeding them into the neural networks, the images were preprocessed by cutting off the bottom third, which does not contain breast tissue, and by normalizing the intensity values to the 0-1 range.
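A hedged sketch of this preprocessing and augmentation, assuming a Keras implementation; the exact crop, normalization, and augmentation calls used by the authors are not given, so the function and parameter names below are illustrative.

```python
# Illustrative preprocessing (crop bottom third, scale to 0-1) and
# +/-5% shift/zoom plus horizontal-flip augmentation.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess_slice(img):
    """img: 2D array of one MR slice."""
    h = img.shape[0]
    cropped = img[: (2 * h) // 3, :].astype("float32")   # drop the bottom third
    return cropped / max(cropped.max(), 1e-6)            # normalize to 0..1

augmenter = ImageDataGenerator(
    width_shift_range=0.05,    # random shift within +/-5%
    height_shift_range=0.05,
    zoom_range=0.05,           # random zoom within +/-5%
    horizontal_flip=True,
)
```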
For the of BPE model, subsets of 87, 25, and 12 patients were randomly selected for the training-, validation-, and testpartitions, respectively. For the test partition, a subset of 100 images (25 for each BPE category) was chosen for the evaluation. The set partitions had been selected before the model was trained.
The images used for the training of the BPE model were annotated in consensus by 2 radiologists with more than 5 years experience in breast imaging. In this study, this assessment is regarded as the gold standard of the BPE class assessment and will be referred to as "reference." BPE scores were assigned slice-wise based on the image volume resulting from the subtraction of the native fat-suppressed T1-weighted images from the first postcontrast volume. In the case of BPE asymmetry between the left and right side, the higher level of BPE was assigned.
Model architecture and training
Both models were implemented by means of a deep convolutional neural network. The network consisted of 2 densely connected layers on top of the convolutional part of the VGG16 [23] network trained on the ImageNet dataset, which has already been successfully applied for the assessment of medical breast images. [24] Both models were trained on an NVIDIA GeForce GTX1080 graphical processing unit for 100 and 120 epochs, respectively, using the Adam optimizer. [25] To avoid overfitting, the training process was stopped as soon as the loss function calculated for the validation set had risen or the difference between the accuracy for the training and validation sets had exceeded 3 percentage points. Moreover, the model state was saved at the epoch with the highest validation accuracy.
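A minimal sketch, assuming a Keras/TensorFlow implementation, of the transfer-learning architecture described above; the dense-layer width, dropout, input shape, and learning rate are assumptions not taken from the paper.

```python
# Illustrative VGG16 transfer-learning model for 4-class BPE classification.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # VGG16 convolutional part as feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),     # first dense layer (width assumed)
    layers.Dropout(0.5),                      # regularization (assumed)
    layers.Dense(4, activation="softmax"),    # 4 BPE categories: minimal..marked
])

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```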
For the breast detection model, the categorical cross-entropy loss function was applied. For the BPE model, the plain cross-entropy does not take advantage of the gradation of the BPE categories (A < B < C < D); to account for this, a custom loss function was applied in which the cross-entropy value for each sample was multiplied by the value of the mean square error.
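One plausible reading of this custom loss, sketched below for Keras: the per-sample cross-entropy is multiplied by the squared distance between the expected predicted class index and the true class index, so that mistakes between distant BPE categories are penalized more than mistakes between adjacent ones. The exact formulation used by the authors may differ.

```python
# Hedged sketch of an ordinal-aware loss: cross-entropy weighted by the
# squared class-index error.
import tensorflow as tf

def ordinal_weighted_loss(y_true, y_pred):
    y_true = tf.cast(y_true, y_pred.dtype)
    ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)      # per-sample CE
    idx = tf.cast(tf.range(tf.shape(y_pred)[-1]), y_pred.dtype)        # class indices 0..3
    pred_idx = tf.reduce_sum(y_pred * idx, axis=-1)                    # expected predicted index
    true_idx = tf.reduce_sum(y_true * idx, axis=-1)                    # true index
    return ce * tf.square(pred_idx - true_idx)
```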
Statistical and clinical validation
For each model, the performances over the validation dataset were quantified in terms of the metrics of the confusion matrix. The performances of each model over the real-world dataset were compared with the reference. Performances were expressed in terms of accuracy, precision, recall, and F1-score. For each model, the confusion matrix was computed.
The output of the algorithm on the real-world data was also used for the clinical validation. In this case, each experienced radiologist was requested to perform the classification on the real-world dataset. The radiologists were blind to the results of the previously performed consensus classification and of the algorithm classification.
Based on the 3 classifications over the same dataset and on the consensus decision taken as a gold standard, the inter-rater reliability was assessed by means of the quadratic Cohen's Kappa coefficient (k). [26]
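For reference, the quadratic-weighted Cohen's kappa can be computed with scikit-learn as sketched below; the labels shown are hypothetical and the tooling is an assumption, not necessarily what was used in the study.

```python
# Quadratic-weighted Cohen's kappa between two raters' ordinal BPE labels.
from sklearn.metrics import cohen_kappa_score

# Hypothetical category labels (0 = minimal ... 3 = marked) for 10 images.
reader_a = [0, 1, 1, 2, 3, 0, 2, 1, 3, 2]
reader_b = [0, 1, 2, 2, 3, 1, 2, 1, 2, 2]

kappa = cohen_kappa_score(reader_a, reader_b, weights="quadratic")
print(f"quadratic Cohen's kappa = {kappa:.3f}")
```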
Results
Statistical validation
The statistical validation of the breast detection and of the BPE models is reported in Tables 1 and 2, respectively. The corresponding learning curves are presented in Figure 2. In the case of the breast model, the training was stopped after 100 epochs, when the accuracy for the validation set reached the plateau of 97.5%. For the BPE model, the training was stopped after 150 epochs. The highest accuracy for the validation set was achieved after the 67-th epoch, so the state of the model at that stage was used.
After the training, both models were validated on the real-world datasets. The accuracy, precision, recall, and F1-score for each class of the breast and the BPE models are reported in Tables 1 and 2, respectively. In the case of the breast model, the overall accuracy was equal to 96%. Only 1 image that presented breast has been erroneously classified to the "no-breast" category and 2 images without breasts have been classified as depicting the breast. The best performance has been achieved in the case of the breast with implants. All the images from the test set that presented implants have been correctly recognized and none has been erroneously assigned to this group. The corresponding confusion matrix is presented in Figure 3 in the form of a heat-map.
In the case of the BPE model, the overall accuracy was equal to 75%. Almost all misclassifications occurred only between adjacent classes, for example, mild and moderate. The confusion matrix corresponding to this model is presented in Figure 4.
In most cases, the confidence of the assignment to a particular class was greater than 99% in the case of the breast model and significantly greater than 90% in the case of the BPE model. For the BPE model, the T1-weighted native image of 1 representative subject was superimposed onto the class activation map (CAM) implemented using the Gradient-weighted Class Activation Mapping (Grad-CAM) approach [27] (Fig. 5). The CAM indicates the regions on which the prediction has been based.
Table 1. Accuracy, precision, recall, and the F1-score of the breast detection model evaluated on the real-world data.
Clinical validation
Reader 1 assessed 69 real-world images in accordance with the reference, while reader 2 correctly assessed 52 images. Therefore, the accuracy of the human experts for the dataset of 100 images was 69% and 52%, respectively. Figure 6 presents the inter-rater reliability expressed by the Cohen's Kappa for the predictions of the BPE model, the assessments done independently by 2 expert human readers (reader 1 and reader 2), and the reference in every possible combination. The lowest value of the kappa (0.679 ± 0.18) was obtained for the reliability of reader 2 and the reference, while the highest one (0.815 ± 0.13) was obtained for the model and the reference. The statistics of the assessments done by each human expert and the model are reported in Table 3. The "±" sign indicates the confidence interval. Figure 7 shows the same results presented in terms of each BPE class separately, where the four-class classification problem was translated to four one-vs-all classifications. The presented kappa coefficients were obtained with regard to the reference. The reliability of both readers was different for different BPE classes and ranged between 0.47 and 0.71 for the first reader and between 0.24 and 0.49 for the second.
Table 2. Accuracy, precision, recall, and the F1-score of the BPE (background parenchymal enhancement) class model evaluated on the real-world data.
Discussion
In this study, we implemented a fully automatic approach for the classification of BPE categories according to the Breast Imaging Reporting and Data System atlas and validated its clinical use by comparing the performances of the algorithm with those of consenting expert human readers. The approach relies on the use of a transfer learning method. To obtain a slice-wise classification of the BPE, a hierarchical approach was implemented, which consisted in the use of 2 computational models: the first intended to detect slices imaging the breast and the second performing the actual BPE classification. The rationale behind the study is that the use of a dCNN algorithm trained on thousands of data labeled by consenting radiologists expert in breast imaging may allow a standardized BPE classification. Although the problem of the automatic assessment of BPE has been already addressed by Ha et al, [28] so far quantitative assessment of BPE based on tissue segmentation has been proposed. To the best of our knowledge, the deep learning approach able to mimic the human evaluation of the whole image pattern for qualitative BPE classification has never been published before.
The breast detection and the BPE models were trained for 150 and 120 epochs, which allowed accuracies on the validation set of 97% and 90%, respectively. The accuracies obtained for the real-world data were similar, which indicates that the models are not over-fitted. The learning curves for the breast detection model (Fig. 2) reach a first plateau after about 35 epochs. Nevertheless, the reduction of the learning rate during the training enabled an increase of the accuracy by an additional 2%, yielding a characteristic inflection of the accuracy and loss curves. In the case of the BPE class model, the accuracy plateau was not reached, so the drop of the learning rate was not applied. The model was saved after the 67th epoch, when it achieved the highest validation accuracy. As a comparison, the accuracy of the human readers was 69% and 52%. Such discrepancies between experienced radiologists confirm the need for standardization. As shown in Figure 6, almost all misclassifications made by the BPE model occurred only for adjacent categories. Since the BPE classification guidelines are subject to human interpretation, there is no ground truth behind a given image, and this kind of disagreement is common between different radiologists and even between 2 assessments by the same specialist. [29] Therefore, a more relevant way to assess the model is to compare the inter-rater reliability expressed by the Cohen's kappa coefficient. The kappa coefficients obtained for the agreement between both experts and of each expert with the model were 0.793 ± 0.15, 0.804 ± 0.14, and 0.768 ± 0.16, respectively. These values are consistent with values assessed in other studies, which ranged between 0.73 and 0.93, [29][30][31] and confirm that expert readers achieve an almost perfect agreement with the consensus classification. As compared to the reference, the inter-rater reliability of the model is higher (0.815) than that of the experienced radiologists (0.78 and 0.679). These findings suggest that deep convolutional neural networks are a reliable and standardized tool in the assessment of the background parenchymal enhancement in MRI. The class activation map presented in Figure 5 shows that the BPE model classifies the images based on the image regions that contain the most important information and ignores the background, which confirms the above conclusion.
The possible source of the bias in the assessment of the BPE class is that the different BPE classes are not equally common. The lower enhancement classes occur more often than the higher ones, as can be seen from the review of 650 breast MRI examinations described by Abramovici and Mainiero [32] and in other studies. [30,32] This fact is reflected in the statistics of the radiologists' answers, as reported in Table 3. The readers classified the images to the lower classes more frequently, even though the BPE classes in the test dataset were equally represented. Since the neural network model has been trained on a balanced dataset, it is free from this kind of bias.
The main limitations of our study are the limited number of studies, which has been mitigated by the application of transfer learning and data augmentation; the possible bias of the human experts in the BPE class assignment, which, to some extent, has been mitigated by taking the consensus decision as the reference; and finally, the fact that all studies were performed in 1 institution using the same MRI scanner. Validation of the model using images from other institutions is proposed for a future study. Another limitation is the relatively small size of the real-world dataset. This representative set was a trade-off between robust statistics and the limited reading time of the human experts. However, in the case of a balanced class distribution, the potential bias is expected to be less severe. [33]
Figure 7. The values of the Cohen's kappa for both radiologists (blue and red) and the model (green) calculated for each BPE class separately. The results were obtained with regard to the reference.
Conclusion
In conclusion, breast MRI images can be effectively classified according to their background parenchymal enhancement by means of a deep convolutional neural network. The neural network is at least as accurate as an experienced radiologist. Moreover, its predictions are standardized and not influenced by intra-reader discrepancy. The convolutional part of the VGG16 network can serve as an effective feature extractor for breast MRI, even though it was not trained on medical images. | 2020-07-23T09:06:17.826Z | 2020-07-17T00:00:00.000 | {
"year": 2020,
"sha1": "7e76ba8287aad3c95a78ddab2265e992b2fcc659",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000021243",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "166bf852cecbc10bc061f8431a332218ea48a258",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255827764 | pes2o/s2orc | v3-fos-license | Identification of human mitochondrial RNA cleavage sites and candidate RNA processing factors
Background The human mitochondrial genome is transcribed as long strands of RNA containing multiple genes, which require post-transcriptional cleavage and processing to release functional gene products that play vital roles in cellular energy production. Despite knowledge implicating mitochondrial post-transcriptional processes in pathologies such as cancer, cardiovascular disease and diabetes, very little is known about the way their function varies on a human population level and what drives changes in these processes to ultimately influence disease risk. Here, we develop a method to detect and quantify mitochondrial RNA cleavage events from standard RNA sequencing data and apply this approach to human whole blood data from > 1000 samples across independent cohorts. Results We detect 54 putative mitochondrial RNA cleavage sites that not only map to known gene boundaries, short RNA ends and RNA modification sites, but also occur at internal gene positions, suggesting novel mitochondrial RNA cleavage junctions. Inferred RNA cleavage rates correlate with mitochondrial-encoded gene expression across individuals, suggesting an impact on downstream processes. Furthermore, by comparing inferred cleavage rates to nuclear genetic variation and gene expression, we implicate multiple genes in modulating mitochondrial RNA cleavage (e.g. MRPP3, TBRG4 and FASTKD5), including a potentially novel role for RPS19 in influencing cleavage rates at a site near to the MTATP6-COX3 junction that we validate using shRNA knock down data. Conclusions We identify novel cleavage junctions associated with mitochondrial RNA processing, as well as genes newly implicated in these processes, and detect the potential impact of variation in cleavage rates on downstream phenotypes and disease processes. These results highlight the complexity of the mitochondrial transcriptome and point to novel mechanisms through which nuclear-encoded genes can potentially influence key mitochondrial processes. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-022-01373-5.
Background
In humans, mitochondria play important roles in many fundamental and interconnected cellular processes, such as thermogenesis, cellular energy production and cell death [1], and mitochondrial malfunction has been associated with a myriad of diverse and complex diseases such as neurodegenerative and metabolic disorders, particularly through the association of mutations in nuclearencoded mitochondrial genes [1][2][3][4][5].
Mitochondria are unique organelles that contain their own independent genome, a remnant of their ancestral bacterial origin [6]. The human mitochondrial genome encodes just 2 rRNA genes, 22 tRNA genes and 13 mRNA genes, the latter coding for essential components of the OXPHOS system [7]. The compact nature of the mitochondrial genome is thought to have arisen through gene transfer to the nuclear genome over evolutionary timescales, and as a consequence, mitochondria now depend on an estimated ~1500 proteins encoded in the nucleus to carry out fundamental mitochondrial processes, including replication, transcription and translation [8,9]. As such, both genomes coordinate to carry out metabolic processes, highlighted by the fact that the expression of numerous nuclear genes correlates with mitochondrial-encoded gene expression [10,11].
The human mitochondrial genome itself is transcribed as polycistronic RNA containing multiple genes, which is then processed under the 'punctuation model' whereby tRNAs that intersperse coding and ribosomal sequences are targeted and cleaved by nuclear-encoded proteins to release gene products [12]. Canonical cleavage sites at the ends of tRNAs are processed by mitochondrial RNase P and RNase Z [13], with the cleavage of the 5′ end preceding that of the 3′ end [14]. Alongside canonical cleavage, varied processes including RNA modifications [15][16][17], non-canonical cleavage events [11,18], RNA degradation [19,20] and translation rates eventually influence the final amounts of mitochondrial proteins that will be available for use in the electron transport chain.
However, the punctuation model does not encompass all RNA cleavage events in the human mitochondria, and it is becoming clear that many other complex processes regulate the production of fully processed RNA. Furthermore, not all mitochondrial genes are flanked by tRNAs (e.g. between MTATP6 and MTCO3), and thus, other proteins and mechanisms are needed to cleave RNA. For example, FASTKD family proteins have been associated with RNA processing at some gene boundaries [21]. Knock down of FASTKD4 (TBRG4) has been associated with the accumulation of ND5-CYB precursors and strong reductions in mature ND3, ND5 and ATP8/6 mRNAs [22], as well as being needed for the stability of a subset of mitochondrial mRNAs [23]. Recent work has also implicated another FASTK protein, FASTKD5, at these junctions with gene knock down experiments leading to an accumulation of precursor RNAs that lack tRNA at both ends [24]. Moreover, rare non-canonical cleavage events have been observed at intra-ORFs, albeit not as frequently as at canonical processing sites, with resulting products having unknown function [11]. Similarly, regulation of these events is thought to be influenced by other factors such as RNA modifications, polyadenylation and translation factors [18,25,26], adding another layer of complexity. Despite this accumulation of knowledge, many genes and processes that influence the levels of fully processed mitochondrial transcripts remain unknown, and closer inspection of mitochondrial RNA cleavage events at both canonical and non-canonical junctions may allow insight into the complex regulation that occurs to influence the availability of key OXPHOS components, particularly in dynamic tissues with high energy demand. These questions become all the more pertinent, since differences in mitochondrial post-transcriptional processes have been linked to diseases such as cancer [4,27] and hypertrophic cardiomyopathy [28].
Here, we develop a computational approach to infer and quantify human mitochondrial RNA cleavage events in standard RNA sequencing libraries by assessing the structure of RNA read placement on the mitochondrial transcriptome. We apply this approach to whole blood RNA sequencing data from over 1000 individuals to quantify variation in cleavage processes on a population scale. We identify known RNA cleavage sites at gene boundaries, but also events at non-canonical sites, that replicate in independent datasets. Comparing rates of inferred mitochondrial RNA cleavage across individuals with genetic and expression data from the nuclear genome, we identify common nuclear genetic variation in known RNA processing genes that modulate these processes across individuals (e.g. MRPP3 and FASTKD5), as well as candidate genes that may play novel roles in mitochondrial RNA processing and function.
Results
Human mitochondrial RNA is initially expressed as polycistronic transcripts that in most cases cover the whole heavy and light mtDNA strands and extensive posttranscriptional processing follows to produce individual mRNAs. Therefore, it is expected that when RNA is collected from biological samples, there will be assorted forms of precursor, intermediate and fully processed transcripts. Although many library preparation techniques also include the enrichment of polyadenylated (polyA) fragments, due to the abundance of mitochondrial RNA in any given sample, non-polyA mitochondrial RNA is also likely to be present. During the standard RNAseq library preparation protocol, this RNA is thought to be cut predominately at random positions to produce fragments of the appropriate size for sequencing. Compared to this random cleavage at internal RNA positions, we would expect that the true 5′ and 3′ ends of the original transcripts will be over-represented, and thus, more of the ends of RNA sequencing reads generated from these libraries will map to genuine cleavage sites on the mitochondrial chromosome (read stacking). Between any given pair of sites along the mitochondrial genome, there will be reads stacking immediately adjacent on either side, and by counting the number of these and then dividing this by the number of reads that fully overlap the site, we can generate a proxy 'cleavage ratio' for that position. Subsequently, any site that represents the start/end of a genuine transcript in the original biological sample should have a higher cleavage ratio, and by identifying these peaks, we can infer both putative RNA cleavage sites and a proxy for the rate of cleavage at these sites ( Fig. 1).
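A hedged sketch of this cleavage-ratio computation using pysam; the file name, contig name, read filters, and exact boundary conventions are assumptions rather than the authors' pipeline.

```python
# Illustrative per-site cleavage ratio: reads whose 5'/3' end falls immediately
# adjacent to the site, divided by reads that fully span the site.
import pysam

def cleavage_ratio(bam_path, contig, pos):
    """pos: 0-based position; the 'site' is the junction between pos and pos + 1."""
    ends, spanning = 0, 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig, max(0, pos - 1), pos + 2):
            if read.is_unmapped or read.is_secondary or read.is_duplicate:
                continue
            if read.reference_end is None:
                continue
            start, end = read.reference_start, read.reference_end   # end is exclusive
            if end == pos + 1 or start == pos + 1:
                ends += 1          # read terminates right at / starts right after the site
            elif start <= pos and end >= pos + 2:
                spanning += 1      # read fully overlaps the site
    return ends / spanning if spanning else float("nan")

# Example call (hypothetical file and contig): cleavage_ratio("sample.bam", "MT", 3106)
```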
Detection and interpretation of mitochondrial RNA cleavage sites
In order to identify putative mitochondrial RNA cleavage sites, we mapped and filtered RNA sequencing datasets from 799 whole blood samples from the CARTaGENE project using parameters designed to keep as much of the genuine RNA fragment as possible (see the 'Methods' section) and calculated cleavage ratios at all sites across all individuals (Additional file 1: Fig. S1). We then identified positions with cleavage ratios greater than 0.1 (the mean average ratio at 100 random sites >50 bp away from known gene boundaries is 0.01, standard deviation 0.01). As many of the obtained cleavage ratios clustered close to each other along the mitochondrial transcriptome, we merged sites that were within 5bp of each other and kept the one with the highest ratio as representative of the 'peak' cleavage site in the region. Finally, we kept any peak site that was present in at least 50% of the 799 individuals and had at least 20x read coverage; this left us with 79 putative RNA cleavage sites (Additional file 2: Table S1). In total, 9 of these occur exactly at known gene boundaries (defined as the region between two mitochondrial genes), and a further 11 occur no further than 10bp from these regions. The remaining 59 occur at various positions within coding and non-coding sequences and may represent novel cleavage junctions. As a sanity check, we also calculated the cleavage ratios at the 28 known mRNA or rRNA gene boundaries and find that 17 are significantly higher than the background rate (P < 0.05/28, one-sided t-test, Additional file 1: Fig. S2, see the 'Methods' section), showing the validity of our approach.
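The site-selection logic described above can be sketched as follows; the data structures (per-sample dictionaries of ratios and coverage) are assumptions made for illustration, not the authors' code.

```python
# Illustrative peak detection: ratio > 0.1, merge candidates within 5 bp keeping
# the highest-ratio site, then require presence in >= 50% of samples at >= 20x coverage.
from collections import Counter

def peak_sites(ratios, min_ratio=0.1, merge_bp=5):
    """ratios: dict mapping mtDNA position -> cleavage ratio for one sample."""
    candidates = sorted(p for p, r in ratios.items() if r > min_ratio)
    peaks = []
    for pos in candidates:
        if peaks and pos - peaks[-1] <= merge_bp:
            if ratios[pos] > ratios[peaks[-1]]:
                peaks[-1] = pos            # keep the higher-ratio "peak" of the cluster
        else:
            peaks.append(pos)
    return peaks

def reproducible_sites(per_sample_peaks, per_sample_coverage, n_samples,
                       min_fraction=0.5, min_coverage=20):
    counts = Counter()
    for sample_id, peaks in per_sample_peaks.items():
        for p in peaks:
            if per_sample_coverage[sample_id].get(p, 0) >= min_coverage:
                counts[p] += 1
    return [p for p, c in counts.items() if c >= min_fraction * n_samples]
```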
To test the reproducibility of our cleavage detection method, we applied the approach to 344 additional whole blood RNAseq samples from the GTEx project and repeated the sample detection methods as above (for distribution of cleavage rates, see Additional file 1: Fig. S1). In total, we reproduce 54 of the original peak cleavage sites (~68%, Additional file 2: Table S1). Within these 54 reproducible sites, 8 occur exactly on known gene boundaries, all of which occur between genes that contain interspersed tRNAs, and 9 further sites were detected no more than 10 bp away from a known gene boundary. As such, our approach identifies a large number of known mitochondrial RNA cleavage sites with high accuracy. Interestingly, one reproducible cleavage site was found only 1 bp away from the MT-ATP6 to MT-CO3 gene boundary, which does not contain an interspersed tRNA and is thought to be processed via other mechanisms. Of the 37 reproducible sites that occur more than 10bp away from known gene boundaries, 18 occur within coding genes, 2 within rRNAs, 15 within tRNAs and 2 within the mtDNA control region. These positions are potential candidates for novel mitochondrial RNA cleavage sites.
To test the validity of the 54 reproducible sites, we performed two additional validation steps. First, both CARTaGENE and GTEx data are generated using polyAenriched RNA fragments. Therefore, to test whether any of the 54 sites could be a result of experiment-specific technical artefacts, we obtained an additional 16 whole blood RNA sequencing datasets from Pineau et al. [29] that were generated from ribosomal RNA-depleted libraries (see the 'Methods' section) and tested whether the 54 putative cleavage sites had cleavage ratios >10% in any of the samples. In total, 47/54 sites were validated in these data, making it unlikely that they are a consequence of specific library preparation approaches. Second, since advances in long-read sequencing technologies now allow for the identification of full-length RNA transcripts, we tested for evidence of overlap between the 54 putative RNA cleavage sites and the 5′/3′ ends of RNA fragments generated from high coverage Oxford Nanopore cDNA and native RNA sequencing data from a human B lymphocyte cell line [30]. Within this study, it was shown that technical features of Nanopore direct RNA sequencing led to truncation events in mitochondrial transcripts 10-15bp from the 5′ end of genes, and as such, we removed putative cleavage sites within 3bp of these regions. Of the 44 putative cleavage sites remaining, we find evidence for validation of 29 sites (see the 'Methods' section). As such, in total, we validate 52/54 putative cleavage sites across ribosomal RNA-depleted and Nanopore datasets (Additional file 2: Table S1; for flowchart of filtering steps, see Additional file 1: Fig. S3). Although some of the 54 putative RNA cleavage sites may still be false positives or not validated because of technical features specific to each platform, since we are also interested in the molecular mechanisms underpinning these events, as well as the potential downstream consequences of variation in such processes, we continue to focus on these 54 sites for all subsequent analyses.
It has been shown in previous work that reads also tend to terminate and 'stack' at sites of RNA modification [31][32][33], and indeed, in our data, five putative RNA cleavage sites occur at known modified sites. Previous work [11] also found evidence of additional RNA cleavage events within the mitochondrial transcriptome that generate short RNAs (sRNAs) that have unknown function, but may play a role in RNA silencing. Three cleavage sites detected in this study occur close to identified sRNA boundaries: one at site 598 within MT-TF (1bp from an sRNA boundary), the second at site 3258 within MT-TL1 (0bp from an sRNA boundary) and the third at site 9157 within MT-ATP6 (1bp from an sRNA boundary). Although stringent criteria were used to identify such sRNA boundaries, these results show that our approach has the potential to identify novel RNA cleavage events with functional relevance.
To test whether RNA cleavage events either influence or co-occur with downstream processes, we compared cleavage ratios to mitochondrial-encoded gene expression levels across individuals, requiring significance (after Bonferroni correction) and the same direction of effect in both discovery (CARTaGENE) and replication (GTEx) datasets. In total, we find 18 significant relationships, involving 6 unique cleavage sites (positions 659, 1682, 9219, 10074, 10479 and 10496) and the expression levels of 9 unique mitochondrial genes (Additional file 2: Table S2 and Additional file 1: Fig. S4). Position 1682 falls 12bp from the 5′ end of MT-RNR2, and putative cleavage ratios at this site are negatively associated with the expression levels of nine different mitochondrial genes (MT-RNR1, RNR2, ND1, ND2, CO2, ATP6, ND4, ND5 and CYB). This may suggest a role for cleavage at position 1682 in modifying the processing and expression of MT-RNR2, which subsequently influences the RNA levels of most other mitochondrial-encoded genes, although this would need to be tested further. The remaining 8 cleavage sites are all associated with the expression levels of four genes (MT-RNR1, RNR2, ND1 and ND2).
Quantitative trait loci mapping
Post-transcriptional processing of the mitochondrial transcriptome is carried out exclusively by nuclearencoded proteins. Therefore, in order to identify common nuclear genetic variants and genes associated with variation in mitochondrial RNA cleavage events across individuals, we used genome-wide genetic data and the cleavage ratio at each of the 54 cleavage sites for 799 individuals in the CARTaGENE dataset to map quantitative trait loci in the nuclear genome.
In total, we identify 26 nuclear genetic variants associated with mitochondrial RNA cleavage rates (unique peak nuclear genetic variant and cleavage site pairs) after correcting for multiple tests (Table 1, P < 9.26 × 10^−10, correcting the standard genome-wide significance threshold of 5 × 10^−8 for the 54 cleavage sites tested). To identify the potential nuclear genes that are modulating mitochondrial RNA cleavage rates, we tested whether each significant peak nuclear variant was either functional or associated with the expression of a nearby gene in the eQTLGen consortium database [34] (P < 5 × 10^−8, selecting the nearest gene if multiple associations were found). Applying this approach, we link a number of known and novel proteins involved in mitochondrial RNA cleavage (Table 1).
Table 1. Nuclear-encoded genetic variation significantly associated with inferred mitochondrial RNA cleavage rates in discovery (CARTaGENE) data and statistics for the same associations in replication (GTEx) data. Variants are linked to genes either through functional (missense) mutations or via eQTL data.
First, a large number of peak nuclear genetic variants are missense and intronic mutations linked with MRPP3, and in all cases, these associations occur for mitochondrial cleavage sites that fall at mRNA-tRNA boundaries, or within a mitochondrial tRNA. MRPP3 is known to cleave the 5′ end of mitochondrial tRNAs at canonical mRNA-tRNA junctions, but results here suggest the gene may also cleave internal tRNA positions that could result in short RNA fragments that are generated from the ends of tRNAs (described above). Second, several peak nuclear genetic variants are intronic mutations linked with FASTKD5 and are associated with mitochondrial RNA cleavage events near to the MTATP6-MTCO3 junction, as well as a site near to the 5′ end of MTCO1. FASTKD5 has been shown to be required for the maturation of precursor mitochondrial RNAs that are not flanked by tRNAs [24] and results here therefore validate this finding. Third, multiple peak nuclear variants are intronic or upstream mutations for TBRG4 (FASTKD4), which are associated with mitochondrial RNA cleavage specifically around the MT-TE-MTCYB junction, but also for a cleavage site close to the MTATP6-MTCO3 junction that is not interspersed by a tRNA. TBRG4 is known to play a role in processing mitochondrial RNA precursors, as well as stabilising several mitochondrial mRNAs, but these results hint that TBRG4 may also be involved in the processing of the non-canonical junction between MTATP6-MTCO3. Fourth, an intronic mutation within SLC25A26 is associated with cleavage rates within MT-TF. SLC25A26 is a mitochondrial carrier protein involved in transporting S-adenosylmethionine into the mitochondria [35,36]. We have previously implicated genetic variants in this protein with variation in mitochondrial RNA modification levels [26,37], and thus, the link we observe here may be modulated through this process. In order to test the robustness of our findings, we attempted to replicate significant associations in an independent whole blood dataset (GTEx, using the same peak variant where present, or the closest variant in high LD, R^2 > 0.9, if not). In total, 11 of the 26 peak nuclear genetic variants show the same direction of effect with the same high-confidence cleavage site in GTEx data at nominal significance (P < 0.05), 7 of which are significant after Bonferroni correction (Table 1, P < 0.00192, corrected for 26 tests).
Furthermore, association betas show strong correlation between datasets (Pearson's R = 0.623, P = 0.0002, Additional file 1: Fig. S6). Associations that replicate include a missense mutation in MRPP3 that is linked to a cleavage event that is exactly at the MTND1-MTTI gene boundary, as well as links between intronic FASTKD5 mutations and cleavage rates at sites near the MTATP6-MTCO3 and the MTY-MTCO1 junctions, thus validating the role of FASTKD5 in these processes.
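For illustration, the replication check described above can be sketched in R; the data frame and column names here (discovery, replication, beta, p) are hypothetical placeholders rather than the study's actual objects.

# Hypothetical data frames: one row per peak variant-cleavage site pair,
# with effect sizes (beta) and P-values from each dataset.
merged <- merge(discovery, replication, by = c("variant", "site"),
                suffixes = c("_disc", "_repl"))

# Same direction of effect and nominal replication (P < 0.05)
merged$replicated_nominal <- with(merged,
    sign(beta_disc) == sign(beta_repl) & p_repl < 0.05)

# Bonferroni replication threshold for the 26 discovery associations
merged$replicated_bonf <- merged$replicated_nominal & merged$p_repl < 0.05 / 26

# Concordance of effect sizes between datasets
cor.test(merged$beta_disc, merged$beta_repl, method = "pearson")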
Mitochondrial RNA cleavage events and nuclear-encoded gene expression
Given the role of nuclear-encoded proteins in mitochondrial RNA processing events, we sought to further explore these complex cross-genome relationships by directly comparing inferred mitochondrial RNA cleavage ratios with nuclear-encoded gene expression in the same individuals. Given the influence of multiple interconnected genetic and environmental factors on variable gene expression, we implemented a stringent set of filtering strategies in order to identify nuclear genes that may be modulating mitochondrial RNA cleavage events.
First, using linear regression, we identified nuclear genes whose expression was associated with inferred mitochondrial RNA cleavage rates at the 54 reproducible sites in both discovery (CARTaGENE) and replication (GTEx) datasets (applying Bonferroni correction for the number of sites and the number of genes, for pairs that were present in both datasets). This approach identified 14,414 gene-site pairs in the discovery dataset (see Additional file 2: Table S3 for all significant associations) and 465 in the replication dataset (see Additional file 2: Table S4 for all significant associations, and Additional file 1: Fig. S7 for P-value distributions from both the discovery and replication data). We then intersected the two lists, keeping only those associations with the same direction of effect, which left 52 gene-site pairs encompassing 43 unique genes (Additional file 2: Table S5). Five of these genes are present in MitoCarta [8] and another five are thought to be RNA binding proteins [38] (non-overlapping sets, except COX5B). To test whether the relationship between each nuclear gene/ mitochondrial RNA cleavage site pair is more likely to be driven by the nuclear gene (rather than caused by mitochondrial RNA processes), we performed mediation analysis by identifying significant peak cis-eQTLs in the nuclear genome for each of the 43 unique genes and testing whether these variants are first associated with the cleavage ratio of the corresponding mitochondrial site (P < 0.05) and second whether this relationship is significantly mediated by the expression of the nuclear gene (P < 0.05/52). In total, 12 of the tests show significant evidence for mediation; 3 of these are for site 1682 in the mitochondrial genome, which falls within MTRNR2 close to a tRNA junction; 2 are for site 9157, which is closest to the MTATP6-MTCOX3 junction; and the remaining 7 are for site 10074, which falls within MTND3 near to a tRNA junction. The 10 unique genes identified through this analysis are thus candidates for being involved in mitochondrial RNA cleavage. They include ATP5E and COX17, both of which form part of the electron transport chain, as well as CXCR2P1, ELOVL7, GNAZ, ITGB5, MAP3K7CL, MYLK, SH3BGRL2 and TUBB1.
Finally, to test whether nuclear genes might be operating through mitochondrial RNA cleavage to influence mitochondrial-encoded gene expression levels, we took all mitochondrial RNA cleavage sites that were significantly associated with both the expression of a nuclear-encoded gene and the expression of a mitochondrial-encoded gene (125 unique cases) and performed a further round of mediation analysis. In each of these cases, we first tested whether the expression of the nuclear-and mitochondrial-encoded genes were correlated (P < 0.05) and then tested whether this relationship is significantly mediated by the inferred cleavage rate of the associated site (P < 0.05/125). In total, 16 of the tests show significant evidence for mediation, implicating 9 unique nuclear genes (ACRBP, CTTN, CXCR2P1, GNAZ, ITGB5, MAP3K7CL, SH3BGRL2, SPARC and TMEM40, Additional file 2: Table S6). None of these genes are present in MitoCarta [8], and as such, they may have unidentified roles in mitochondrial processes, either directly or indirectly through interactions with other genes.
Knock down of candidate novel mitochondrial RNA processing genes
In order to further implicate nuclear-encoded genes in mitochondrial RNA processing, we sought knock down (KD) data for any gene that has been implicated above in quantitative trait loci mapping (four unique genes -MRPP3, FASTKD5, TBRG4 and SLC25A26) and expression correlation analyses (43 unique genes, Supplementary Table 5). In total, two of these genes (TBRG4 and RPS19) have shRNA KD data from the ENCODE project, containing 8 samples in total (4 from KD and 4 from controls) in 2 different cell lines. Using these data, we mapped and filtered samples as above and calculated cleavage ratios at mitochondrial RNA cleavage sites linked to the discovery of each gene. First, TBRG4 (FASTKD4) has been implicated in influencing cleavage ratios around the MTATP6-MTCOX3 junction that is not separated by a tRNA. Using KD data for this gene, we find that there is a decrease in the cleavage ratio in KD samples compared to controls at position 9219 (mean ratio 0.37 for control and 0.29 for KD samples), although this is not significant (P=0.111, one-tailed t-test, Fig. 3). The closest high-confidence site to the exact junction between MTATP6 and MTCOX3 that we detect falls at position 9207, and ratios at this site in KD samples are again lower than in control data, but this difference is also not significant (P = 0.295).
Second, RPS19 is a nuclear-encoded ribosomal gene whose expression is associated with putative cleavage ratios at site 9157, in MTATP6, 49bp from the MTATP6-MTCOX3 junction. Using KD data for this gene, we find that there is a highly significant decrease in the cleavage ratio in KD samples compared to controls (P = 0.00018, one-tailed t-test, mean control ratio = 0.125, mean KD ratio = 0.066, Fig. 3), suggesting that RPS19 may be modulating RNA processing at this site. To test whether RPS19 may be acting more globally across the mitochondrial transcriptome, we tested for differences between control and KD data for all 54 reproducible cleavage sites and find that no other sites are significant after Bonferroni correction, and indeed, the relationship at site 9157 is the only one significant at this level (P < 0.05/54). Collectively, these results implicate a novel gene (RPS19) in modulating mitochondrial RNA cleavage events.
Functional enrichment and potential disease links
In order to identify whether mitochondrial RNA cleavage events might be linked to disease, we first tested whether any peak nuclear genetic variants associated with mitochondrial RNA cleavage rates (identified above) were in linkage disequilibrium (LD) with variants listed in the GWAS catalogue [39] (R 2 > 0.8 in the CEU population from 1000 Genomes data [40], disease associations P < 5 × 10 −8 ). In doing so, we find that both rs4724362 and rs73109897 (which both appear to act through TBRG4 on sites around the MT-TE and MT-CYB junction) are in LD with rs12672022, which is associated with colorectal cancer.
Second, we tested for functional enrichment in GO and KEGG terms for nuclear genes whose expression correlated with mitochondrial RNA cleavage rates across both discovery and replication cohorts (43 unique genes) using gProfiler [41]. After adjusting for multiple tests, no GO terms were significantly enriched for the gene list; however, several KEGG pathways were enriched including oxidative phosphorylation (4 genes, adjusted P = 0.016), but also amyotrophic lateral sclerosis, (6 genes, adjusted P = 0.012), Parkinson's disease (5 genes, adjusted P = 0.019), Prion disease (5 genes, adjusted P = 0.028), Huntington's disease (5 genes, adjusted P = 0.040) and pathways of neurodegeneration -multiple genes (6 genes, adjusted P = 0.039).
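As a minimal sketch of this enrichment step, the gprofiler2 call below assumes two hypothetical vectors of gene symbols (candidate_genes and background_genes); it is not the authors' exact script.

library(gprofiler2)

# candidate_genes: the 43 genes whose expression tracks cleavage rates (hypothetical vector)
# background_genes: all nuclear genes tested in both cohorts (hypothetical vector)
res <- gost(query = candidate_genes,
            organism = "hsapiens",
            custom_bg = background_genes,
            sources = c("GO", "KEGG"),
            correction_method = "g_SCS")   # g:SCS multiple-testing correction

# Significantly enriched terms after correction
subset(res$result, significant)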
Discussion
Due to the polycistronic nature of the transcription of the human mitochondrial genome, post-transcriptional events are particularly important for determining downstream events. Despite this, the genetic and molecular mechanisms modulating variation in these processes across individuals remain poorly understood. In order to elucidate key mitochondrial RNA processing events, we developed an approach to identify putative RNA cleavage sites and rates using standard RNA sequencing data. In doing so, we find 54 mitochondrial RNA cleavage junctions that are reproducible across independent whole blood datasets. Many of these sites align with well-known cleavage boundaries, thus validating our approach, but a substantial fraction also occur at novel sites, opening up the possibility of new mechanisms by which the mitochondrial transcriptome is regulated.
There are several potential limitations to our approach. First, discovery of putative human mitochondrial RNA cleavage sites occurs in RNA sequencing data that has been enriched for polyadenylated RNA. Not all mitochondrial transcripts are polyadenylated [18], and therefore, this RNA preparation step will likely lead to biases in the mitochondrial RNA fragments that are sequenced. However, due to the highly abundant nature of mitochondrial RNA in cells, we observed good coverage of the entire mitochondrial genome in these datasets (e.g. in CARTaGENE samples, 99.5% of all sites across all samples have >100X coverage), suggesting that many mitochondrial-encoded transcripts are well represented. Second, fragmentation of RNA does not always occur randomly during library preparation, with known biases occurring in AT-rich regions for example. Such biases could lead to artefacts in our data that are reproducible across experiments using the same methods. We attempt to alleviate these effects by testing for evidence of replication of putative RNA cleavage sites in alternative datasets, finding that almost all are present either in independent Oxford Nanopore sequencing data or in sequencing data generated using material that has been depleted of ribosomal RNA. Third, 'stacking' of RNA sequencing reads at the starts and ends of genuine RNA fragments tends to show a more gradual decline around known gene boundaries, rather than a clean signal. Although this makes the exact locations of putative RNA cleavage sites more difficult to detect, we attempt to reduce this problem by identifying the strongest signal of cleavage in the local region (see the 'Methods' section). However, since Oxford Nanopore sequencing data also contains prematurely truncated transcripts [30], despite our efforts to focus only on the most abundant transcript terminal sites (see the 'Methods' section), it cannot be ruled out that some of the putative RNA cleavage sites that are validated in these data are a consequence of this technical phenomenon. Fourth, it has previously been suggested that aberrant, partially digested mitochondrial RNAs undergo polyadenylation in humans to promote degradation [42], which could be observed in our data as putative RNA cleavage sites. However, analysis of sequences with intra-gene polyadenylation showed that not only were they reasonably rare events compared to full-length polyadenylated transcripts, but also that the 3′ end of polyadenylated sequences was dispersed throughout each gene and not clustered [42]. As such, it seems unlikely that they would 'stack' at the same sites across the majority individuals, as we observe here.
The putative cleavage sites detected fall across many different regions of the mitochondrial genome, including at or close to known gene boundaries, or directly within different tRNAs, rRNAs or mRNAs. By comparing inferred RNA cleavage rates to mitochondrial-encoded gene expression levels, we see a number of strong relationships that could have important implications for mitochondrial function. Within this, 10 of the sites fall within coding regions and are far away from known gene boundaries (>20bp). These sites are unlikely to be artefacts driven by alternative post-transcriptional events such as RNA edits (since they do not overlap known edit sites [37]) or RNA modifications (see the 'Results' section) and may be particularly interesting as they may modulate mRNA levels directly. Indeed, 3 of these sites show significant associations with mitochondrialencoded gene expression levels in both the discovery and replication datasets. Disentangling the direct downstream functional consequences of novel mitochondrial RNA cleavage sites more generally will require further experimental work.
Cleavage of human mitochondrial RNA at gene boundaries is known to be carried out by the RNase P (MRPP1, MRPP2 and MRPP3) [13,14,43] and Z enzymes (ELAC2) [44], as well as at least one FASTKD protein (FASTKD5) [24], yet the full compendium of genes involved in these processes is yet to be discovered. Using inferred mitochondrial RNA cleavage ratios, we link a number of nuclear-encoded genes to mitochondrial RNA processing through quantitative trait loci mapping. These include genes already implicated in RNA cleavage described above (MRPP3 and FASTKD5), but also FASTKD4 (TBRG4) and SLC25A26. Knock down of FASTKD4 has been shown to influence expression levels of MTATP6 and MTCO3 [22,23], which are not separated by a tRNA and therefore are not processed in the same way as the majority of mitochondrial-encoded mRNAs; however, our results here suggest that the gene may be directly involved in RNA cleavage around the MTATP6-MTCO3 junction. SLC25A26 has previously been linked with mitochondrial RNA modification levels, consistent with its role as a S-adenosylmethionine transporter; therefore, it seems likely that the association we find here is modulated through the gene's effects on RNA modification.
We also further implicate nuclear-encoded genes in human mitochondrial RNA processing by comparing inferred mitochondrial RNA cleavage ratios to nuclearencoded gene expression. In doing so, we find 43 unique genes that show strong relationships with these processes across independent datasets. Within these, we use mediation analysis to show that ten genes are possibly acting in a causal manner, rather than in response to changes in mitochondrial gene expression. These genes include two nuclear-encoded electron transport chain proteins (ATP5E and COX17) that may be acting directly on RNA, but are more likely to be triggering changes in mitochondrial RNA processing through intermediate mechanisms.
The remaining eight are strong candidates for involvement in mitochondrial RNA cleavage events that could be followed up with further functional work.
We validate some of our findings by integrating gene knockdown data from the ENCODE project and find that the expression of RPS19 is not only associated with cleavage rates at site 9157 (49bp from the MTATP6-MTCOX3 junction) in two independent RNA sequencing datasets, but shRNA knock down of the gene in HepG2 and K562 cells causes highly significant changes in the RNA cleavage ratios at the same site. RPS19 is a nuclear-encoded ribosomal gene containing an RNA binding domain [45]. Although RPS19 is not listed in MitoCarta, it is predicted to have a mitochondrial targeting peptide in iPSORT [46]. This may suggest that RPS19 is directly involved in cleaving mitochondrial RNA; however, it also remains possible that the protein indirectly modulates other processes that influence mitochondrial RNA post-transcriptional processes.
Finally, it is possible that human mitochondrial RNA cleavage events play a role in cell function and disease. Previous work has shown that knock down of key mitochondrial RNA binding proteins in mice leads to phenotypes such as obesity [47], cardiomyopathy [14,[48][49][50] and premature death [48], therefore linking post-transcriptional processes in mitochondria to some of the most common human complex diseases. As such, the novel genes we identify here may be good candidates for playing roles in disorders linked to mitochondria. Indeed, we see some evidence of this as genetic variants associated with mitochondrial RNA cleavage rates are in LD with those associated with colorectal cancer. Similarly, nuclear genes whose expression correlates with mitochondrial RNA cleavage rates are enriched for those linked to Parkinson's disease, amyotrophic lateral sclerosis, Prion disease and Huntington's disease. Within this, for genes where we infer the causal direction of association through mediation analysis, ATP5E (a component of the electron transport chain) has been linked to mitochondrial ATP synthase deficiency [51], ELOVL7 (a fatty acid elongase) has recently been associated with Parkinson's disease [52] and other brain-related traits [53], and ITGB5 (an integrin subunit) has been associated with blood pressure [54], a clinically relevant trait that we have previously found to be linked to mitochondrial processes [25,26]. It will therefore be intriguing to further explore these genes in a functional setting.
Conclusions
In summary, our work interrogates large quantities of existing RNA sequencing data using novel approaches to identify putative RNA cleavage sites in mitochondrial RNA. We also use inferred cleavage rates at these sites within QTL and expression cross-correlation analyses to highlight nuclear-encoded genes that potentially influence important mitochondrial post-transcriptional processes. We validate the link between one of these genes, RPS19, and inferred mitochondrial RNA cleavage rates using gene knock down data, and more generally, it will be interesting to interrogate the roles of these genes in mitochondrial function. Since mitochondrial DNA is largely transcribed as polycistronic strands of RNA, identifying post-transcriptional events that influence the expression of key elements of the electron transport chain could lead to valuable insights across multiple strands of fundamental and disease biology.
Data description
RNA sequence and genotype data were obtained from two independent, publicly available projects: CARTaGENE [55]: CARTaGENE is a population-based cohort of healthy individuals aged 40-69, from Quebec, Canada. Whole blood samples were obtained for RNA sequencing and genotyping, generating 100-bp paired-end RNAseq reads and genotypes from the Illumina Omni2.5M genotyping array for 911 individuals. Samples with RNAseq data from multiple sequencing runs were merged before being aligned.
GTEx (Genotype-Tissue Expression) Project [56]: Samples were collected from 354 deceased individuals for RNA sequence analysis and dense genotyping. We used data from both the pilot and midpoint phases of the GTEx project, where samples were genotyped in the Illumina Omni5M and Illumina Omni2.5M genotyping arrays, respectively. RNAseq reads produced by the project varied in length, and we used only samples with 75-bp long reads.
RNA sequencing alignment and cleavage site inference
For each sample, RNAseq reads were trimmed to remove adapter sequences using TrimGalore [v0.4.0] with a stringency parameter value of 3 (www.bioinformatics.babraham.ac.uk/projects/trim_galore) and to remove Poly-A/T sequences >5 bp using PRINSEQ-lite [v0.20.4] [57]. No quality trimming was performed in order to maintain the genuine RNA fragment end. Remaining reads with >20 nucleotides were then mapped to the human reference sequence (1000G GRCh37 reference, which contains the mitochondrial rCRS NC_012920.1) using STAR [2.6.1d] [58] with the EndToEnd alignEndsType flag (again, to avoid read trimming). SAMtools [v1.4.1] [59] was then used to retain only properly paired and uniquely mapped reads. Read start and end positions were then identified with SAMtools (start positions defined as the value of the POS field, 4th column, and end positions as POS+TLEN-1) and used to calculate the cleavage ratios as above.
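The exact cleavage-ratio definition is given earlier in the paper ("as above"); the R sketch below is only an illustrative approximation, in which the ratio at each mitochondrial position is the fraction of reads covering that position that start or end exactly there. The object 'reads' (a data frame with start and end columns parsed from POS and POS+TLEN-1) is a hypothetical input.

mt_len   <- 16569
start_ct <- tabulate(reads$start, nbins = mt_len)   # reads starting at each position
end_ct   <- tabulate(reads$end,   nbins = mt_len)   # reads ending at each position

# Per-position coverage from the read intervals
coverage <- rep(0L, mt_len)
for (i in seq_len(nrow(reads))) {
  idx <- reads$start[i]:reads$end[i]
  coverage[idx] <- coverage[idx] + 1L
}

# Approximate cleavage ratio at every mitochondrial position
cleavage_ratio <- (start_ct + end_ct) / pmax(coverage, 1L)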
To assess cleavage ratios at gene boundaries, we considered a region within 3bp of each GENCODE (v19) annotated boundary for ribosomal and messenger RNA mitochondrial genes. Within each region, we then identified the position with the highest cleavage ratio as representative of the gene boundary and then obtained the distribution of cleavage ratios for this site across all individuals. For the background rate, we randomly selected 100 sites from locations at least 50bp away from a gene boundary region and followed the procedure as above for each site. We used one-sided t-tests to assess if the gene boundary cleavage ratio was higher than the background rate and applied Bonferroni correction to account for the 28 gene boundaries tested.
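A minimal R sketch of this boundary test is shown below; ratio_matrix (individuals by mitochondrial positions), annot_pos and background_ratios are hypothetical inputs, not the study's actual objects.

# Representative boundary position: highest mean ratio within 3 bp of the annotated boundary
window  <- (annot_pos - 3):(annot_pos + 3)
rep_pos <- window[which.max(colMeans(ratio_matrix[, window]))]

boundary_ratios <- ratio_matrix[, rep_pos]           # per-individual ratios at the boundary

# One-sided test: boundary ratios higher than the background rate
t.test(boundary_ratios, background_ratios, alternative = "greater")

p_threshold <- 0.05 / 28                              # Bonferroni over 28 gene boundaries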
To validate mitochondrial RNA cleavage sites using data generated with other library preparation and/or sequencing techniques, we first obtained an additional 16 whole blood RNA sequencing datasets from healthy controls, generated after ribosomal RNA depletion (rather than polyA enrichment) and sequenced on the Illumina Hiseq 4000 platform [29] (GEO accession GSE136371). For each sample, we aligned data and generated RNA cleavage ratios at each site as above and then tested whether any sample had a cleavage ratio of >10% at any site within 3bp of each of the 54 putative RNA cleavage sites identified in CARTaGENE and GTEx data. Next, we obtained publicly available aligned native RNA and cDNA sequencing of NA12878, generated on the Oxford Nanopore MinIon by the Nanopore WGS consortium [30] (https://github.com/nanopore-wgs-consortium/NA12878). Within this study, data was aligned with minimap2 to the GRCh38 human genome reference, which contains the exact same mitochondrial sequence as the reference used here (1000G GRCh37 reference), and data was merged across all sequencing runs to create a single alignment file for each of the native RNA and cDNA data. For each alignment file, we extracted sequencing reads that mapped to the mitochondrial genome, were labelled as the primary alignment and had mapping quality greater than 30, and then removed reads that had segments that aligned elsewhere in the mitochondrial genome and those that aligned elsewhere in the nuclear genome with equal or greater mapping quality score. For the remaining data, we calculated the start and end positions of each read using the CIGAR string (which contains information on sequence matches and insertions/deletions for each read versus the reference sequence). To test for an overlap with putative RNA cleavage sites identified in short read data, we identified start and end positions in long read data that had at least 200 supporting reads and were in the top 1/50 of sites across the mitochondrial genome in terms of the number of reads that started or ended at that position. For validation, we required these positions to be within 3bp of the putative RNA cleavage site.

We also removed samples with discrepant reported and genotypic sex information, those with >5% missing data and those with ambiguous X chromosome homozygosity estimates. SNPs were filtered for violating Hardy-Weinberg equilibrium (HWE) (P < 0.001), for having a minor allele frequency (MAF) < 1% or for having a genotype missingness >5%. SNPs coded according to the negative strand were flipped to the positive strand. SNPs remaining on autosomal chromosomes were phased using default settings within SHAPEIT [v2.r837] [61].
Quantitative trait loci mapping
Quantitative trait loci mapping was carried out for the 54 reproducible cleavage sites identified in both CARTaGENE and GTEx datasets. Analyses were carried out separately for each position (therefore comparing samples that were generated using the same library preparation and sequencing protocols), using linear models in PLINK [v1.9]. Covariates used in the linear model included 5 study-specific genetic PCs and 10 PEER factors calculated from RNAseq data using PEER [v1.0] [64]. PEER factors were calculated per dataset using all genes (nuclear and mitochondrial) that had a mean TPM >2, including no covariates. Additional covariates included in the linear model were sex and RNA sequencing batch information, where available and where relevant.
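The mapping itself was performed in PLINK; the R snippet below is only a conceptual single-variant equivalent of that linear model, with a hypothetical data frame 'dat' (one row per individual) and hypothetical column names.

# ratio  - cleavage ratio at one of the 54 sites
# geno   - additive genotype dosage (0/1/2) at one SNP
# PC1..PC5, PEER1..PEER10, sex, batch - covariates, as in the PLINK model
fit <- lm(ratio ~ geno + PC1 + PC2 + PC3 + PC4 + PC5 +
            PEER1 + PEER2 + PEER3 + PEER4 + PEER5 +
            PEER6 + PEER7 + PEER8 + PEER9 + PEER10 + sex + batch,
          data = dat)

summary(fit)$coefficients["geno", ]   # effect size and P-value for the variant

threshold <- 5e-8 / 54                # genome-wide significance corrected for 54 sites (9.26e-10)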
Nuclear gene expression linear regression
Mitochondrial and nuclear gene expression were generated as in [25]. The lm function within R was used to regress the inferred cleavage ratios against nuclear and mitochondrial-encoded gene expression levels, including 10 PEER factors as covariates, in CARTaGENE and GTEx independently. The obtained regression P-value was adjusted for Bonferroni correction using the p.adjust function, and the two lists of significant associations were intersected (requiring significance in the discovery and replication datasets, with the same direction of effect), identifying 52 significant associations (47 of which remained significant under the same criteria after also including the first 5 genetic principal components in each linear model). Since the discovery dataset identified a significantly larger number of associations (14,414) when compared to the replication dataset (465), we checked the influence of sample size on these results by randomly resampling the discovery dataset without replacement down to the same size as the replication set (n = 344) and repeating the analysis as above. In doing so, we find 2259 significant associations (after Bonferroni correction), which more closely matches the number of associations found in the replication dataset. The remaining differences may be driven by random variation between the samples or systematic differences that may include RNA degradation levels, read length or sequencing depth.
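A minimal sketch of this regression and intersection step is given below; 'cartagene', 'gtex' and 'n_pairs' are hypothetical placeholders for one gene-site pair and the total number of pairs tested.

# One row per individual: cleavage ratio at one site, expression of one nuclear gene, 10 PEER factors
run_assoc <- function(d) {
  fit <- lm(ratio ~ expr + PEER1 + PEER2 + PEER3 + PEER4 + PEER5 +
              PEER6 + PEER7 + PEER8 + PEER9 + PEER10, data = d)
  summary(fit)$coefficients["expr", c("Estimate", "Pr(>|t|)")]
}

res_disc <- run_assoc(cartagene)   # discovery dataset
res_repl <- run_assoc(gtex)        # replication dataset

# Keep the pair only if Bonferroni-significant in both datasets with the same direction of effect
sig_both <- res_disc["Pr(>|t|)"] < 0.05 / n_pairs &
            res_repl["Pr(>|t|)"] < 0.05 / n_pairs &
            sign(res_disc["Estimate"]) == sign(res_repl["Estimate"])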
cis-eQTL identification and mediation analysis
To identify the direction of effect between associated nuclear gene expression values and mitochondrial RNA cleavage rates, we carried out mediation analysis. First, we used PLINK to identify cis-eQTLs within 1MB of the start and end for each of the genes identified as significant in the comparison of nuclear gene expression and mitochondrial RNA cleavage rates using the CARTa-GENE dataset, including 5 study-specific genetic PCs and 10 PEER factors as covariates. We then selected the SNP with the lowest P-value as a representative cis-eQTL and only used associations that were significant after correction for multiple tests (FDR 5%). For nuclear genes/genetic variants that pass these criteria, we tested whether the expression of the nuclear gene significantly mediated the relationship between the peak nuclear variant and associated inferred mitochondrial RNA cleavage ratios using 1000 bootstrapping simulations with the 'Mediation' package in R, correcting the P-value for the number of genes tested using Bonferroni correction. To test whether nuclear genes might be influencing mitochondrial-encoded gene expression levels through mitochondrial RNA cleavage, we obtained all mitochondrial RNA cleavage sites that were significantly associated with the expression levels of both a nuclear-and mitochondrial-encoded gene (criteria outlined above, 125 cases in total) and then performed mediation analysis by first testing whether the nuclear-and mitochondrial-encoded genes were correlated in discovery data (P < 0.05) and then second whether this relationship is significantly mediated by the inferred cleavage rate of the associated site in discovery data (using 100,000 bootstrapping simulations, correcting P-values for multiple tests).
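The first mediation step can be sketched with the 'mediation' package as follows; the data frame 'dat' and its columns (snp, expr, ratio) are hypothetical stand-ins for one gene-variant-site triplet.

library(mediation)

# snp   - peak cis-eQTL genotype dosage for the nuclear gene
# expr  - expression of the nuclear gene (candidate mediator)
# ratio - inferred cleavage ratio at the associated mitochondrial site
med_fit <- lm(expr  ~ snp, data = dat)          # mediator model
out_fit <- lm(ratio ~ snp + expr, data = dat)   # outcome model

med <- mediate(med_fit, out_fit, treat = "snp", mediator = "expr",
               boot = TRUE, sims = 1000)        # 1000 bootstrap simulations, as in the text
summary(med)                                    # ACME P-value, compared against 0.05 / number of genes tested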
Gene knock down analysis
We sought knock down (KD) data for any gene that has been implicated in mitochondrial RNA processing in our analyses, including those linked through quantitative trait loci mapping (four unique genes) and expression correlation analyses (43 unique genes, Supplementary Table 5). In total, two of these genes (TBRG4 and RPS19) had RNA sequencing data available after shRNA knock down (KD) as part of the ENCODE project. For each gene, there were 8 RNA sequencing datasets available in total (4 from KD and 4 from controls) in 2 different cell lines (HepG2 and K562). We obtained raw RNA sequencing datasets for each sample via the ENCODE portal (for accession numbers, see Supplementary Table 7) and then performed sequence alignment and filtering as described above. We then calculated the cleavage ratio (as above) at mitochondrial sites linked to the discovery of each gene and compared control and shRNA KD data using a one-tailed t-test.
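The control-versus-knock-down comparison reduces to a one-tailed t-test on cleavage ratios at the implicated site; the two vectors below are hypothetical placeholders for the 4 control and 4 shRNA KD samples.

# One-tailed test for a reduced cleavage ratio after knock down
t.test(kd_ratios, control_ratios, alternative = "less")

# Bonferroni threshold when scanning all 54 reproducible sites (as done for RPS19)
0.05 / 54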
Gene enrichment analysis
Gene enrichment analysis was performed within the 'gprofiler2' R package [41]. The query list contained nuclear genes whose expression correlated with mitochondrial RNA cleavage rates across both discovery and replication cohorts (43 unique genes). The background gene set was defined as all unique nuclear genes tested for association with mitochondrial RNA cleavage rates across the discovery and replication datasets. The gene set counts and sizes (g:SCS) framework was used for multiple testing correction. | 2022-07-23T13:32:43.747Z | 2022-07-22T00:00:00.000 | {
"year": 2022,
"sha1": "85bce1fea3f2b38afbbf069ced70cb1acc431750",
"oa_license": "CCBY",
"oa_url": "https://bmcbiol.biomedcentral.com/counter/pdf/10.1186/s12915-022-01373-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6b48824fad11a1cb419bfcc0280b7b52b10fc565",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
266728553 | pes2o/s2orc | v3-fos-license | Effects of the COVID-19 pandemic on healthcare utilization among older adults with cardiovascular diseases and multimorbidity in Indonesia: an interrupted time-series analysis
Background The COVID-19 pandemic has disrupted healthcare utilization globally, but little is known about the effects among patients with cardiovascular diseases (CVDs) and other multimorbidities. This study analyzed the impacts of COVID-19 on healthcare utilization for patients aged 30 years and older with cardiovascular diseases (CVDs) with or without other chronic disease comorbidities in Indonesia. Methods We designed a retrospective cohort study based on the Indonesian National Health Insurance (NHI) sample data from 2016–2020. We defined healthcare utilization as monthly outpatient and inpatient visits related to chronic diseases at the hospital and primary healthcare levels per 10,000 NHI members. We used interrupted time series analysis to evaluate how the healthcare utilization patterns had changed due to the COVID-19 pandemic. Results Overall, hospital outpatient visits decreased by 39% when the pandemic occurred (95% Confidence Interval (CI): 0.48,0.76), inpatient visits by 28% (95% CI: 0.62,0.83), and primary healthcare visits by 34% (95% CI:0.55, 0.81). For patients with CVDs and multimorbidity, hospital outpatient and inpatient visit rates were reduced by 36% and 38%, respectively and primary healthcare visits by 32%. Some insignificant differences in the reduction of out-and inpatient visits were observed across diagnosis groups and regions. Conclusion Healthcare utilization among patients with chronic diseases decreased significantly during COVID-19 and consistently across different chronic diseases and regions. To cope with the unmet needs of healthcare utilization in the context of the pandemic, the healthcare system needs to be strengthened to cater to the needs of the population-at-risk, especially for patients with CVDs and multimorbidity. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-17568-6.
What is already known on this topic
• The COVID-19 pandemic has presented significant disruptions and challenges to healthcare systems worldwide.
• Limited research on the impact of the COVID-19 pandemic on healthcare utilization, specifically for patients with cardiovascular diseases (CVDs) and multimorbidity, is available.
What this study adds
• During the COVID-19 pandemic, there was a substantial decrease in healthcare utilization among patients with chronic diseases, observed both in hospital and primary healthcare settings. This reduction was consistent across various diagnostic groups and regions in Indonesia.
• There were some insignificant differences in the reduction of healthcare utilization across groups of diagnoses, but some results point to a more significant reduction in relatively less developed regions.
How this study might affect research, practice or policy
• Implementing an active non-communicable diseases screening program targeting vulnerable populations with chronic diseases to prevent premature mortality from non-communicable diseases during a pandemic, alongside efforts to strengthen the healthcare system and minimize the unmet healthcare needs.
Background
The COVID-19 pandemic has posed significant disruptions and challenges to healthcare systems in many countries. Studies in different healthcare settings (including but not limited to the United States, Canada, Europe, India, and China) have demonstrated a decrease in hospital admissions [1][2][3][4][5] and in healthcare utilization for patients with chronic diagnoses such as stroke [6] during the COVID-19 pandemic. However, little is known about how the pandemic has affected healthcare utilization among patients with cardiovascular diseases (CVDs) and those with multimorbidity.
CVDs were the leading cause of disability and death in Indonesia in 2019 [7]. Patients with CVDs require long-term continuous interactions with the health system to prevent complications and disability. Unmet healthcare needs among patients with CVDs could result in significant long-term clinical and economic burdens [1]. In addition, over one-third of Indonesian adults aged > 40 live with multimorbidity [8], a condition associated with higher healthcare utilization and spending [9][10][11]. Access to healthcare is affected by the availability of services and geographical location [12,13], and healthcare is much less accessible in less developed and remote areas in an archipelagic nation such as Indonesia.
The National Health Insurance (NHI) Program in Indonesia, also known as Jaminan Kesehatan Nasional (JKN), was launched as a mandatory program in 2014 to remove economic barriers to accessing care for the Indonesian population. The NHI covered about 88% of the Indonesian population in 2022. The unreached 12% of the population are informal workers and those unwilling or unable to pay NHI premiums [14]. The NHI covers all medical costs for treatment in primary healthcare centers and hospitals without cost-sharing policies [15]. The NHI reimburses primary healthcare centers based on capitation and hospitals based on Diagnosis Related Groups. The NHI programme has contributed to increasing accessibility and utilization of outpatient and inpatient services in Indonesia since its conception [16,17]. The NHI report indicates that the reimbursement for non-communicable diseases (NCDs) treatment was higher than for all other diagnoses between 2014 and 2019 [18] and is projected to increase in 2023-2026 [19].
In response to the COVID-19 pandemic, the Indonesian government enacted large-scale social restrictions by the end of March 2020 [20]. In April 2020, the government regulated how hospitals and primary healthcare centers should adapt and deliver healthcare services during the pandemic, with a restricted number of visits and provision of Personal Protective Equipment (PPE) for healthcare workers [21]. Although the restriction and COVID-19 response were nationally regulated, implementation and compliance varied across districts [22]. Consequently, the effects of COVID-19 on healthcare provision also varied. The impact of COVID-19 disruption on healthcare utilization among chronic disease patients in Indonesia is unknown.
This study analyzes the impacts of COVID-19 on healthcare utilization among adults aged 30 years and older under the NHI program, specifically those with CVDs and other chronic disease multimorbidity, across regions in Indonesia.
Study population
This retrospective cohort study was based on the NHI sample dataset, covering about 1% of all NHI members in Indonesia [23]. The sample dataset contained NHI membership and utilization data based on capitation-based primary care services, non-capitation-based primary care services, and hospital care services. The membership data contains sociodemographic information, the type of NHI membership, i.e., PBI [government-subsidized members], PPU [formal workers], BP [retired worker members], and PBPU [informal workers], and a sample weight variable. The healthcare utilization data consisted of variables on healthcare visits, ICD-10 diagnosis codes, procedure codes (ICD-9-CM), the tariff of care, length of stay, referral care, and discharge status.
The NHI sample dataset included individuals and households registered up to 2020. The dataset was created based on a stratified random sample of 1% of the total 73,441,160 households enrolled at 22,024 primary healthcare centers across 514 districts in Indonesia. The NHI selected the sample from households registered at each of the primary healthcare centers, based on their healthcare utilization, i.e. (1) households that never utilized healthcare services, (2) households that have ever visited only primary healthcare centers, and (3) households that have ever utilized primary healthcare center and hospital services. If a primary healthcare center served all three types of households, then there were three strata at that center. A total of 10 households were randomly sampled in each stratum, and all the individuals in the household were included.
The sample dataset consisted of 2,200,960 individuals who lived in 823,557 households. We excluded individuals aged below 30 (n = 989,484) and above 108 (n = 79), and those who passed away before 2016 (n = 36,551), resulting in a total of 1,210,924 eligible individuals. In this study, we included 378,495 unique individuals with chronic disease enrolled during the study period (Figure S1). The regional classification comprises five areas, each representing a group of provinces that share the same NHI tariff regulations set by the Ministry of Health (Figure S2).
Measurements
The outcome was healthcare utilization for individuals with chronic disease. We defined healthcare utilization as monthly outpatient and inpatient visits related to chronic disease at the hospital and primary healthcare levels per 10,000 NHI members. We applied sample weights to calculate the monthly outpatient and inpatient rates.
CVDs and chronic diseases [24] were based on the list of ICD-10 codes represented in the Global Burden of Disease [25] (Table S1). Multimorbidity was defined as the presence of two or more chronic diseases [26] other than CVDs. We stratified NHI members into five groups based on the presence of CVDs and/or multimorbidity of chronic diseases: (1) NHI members with no CVDs but with single chronic morbidity, (2) NHI members with no CVDs but with multimorbidity, (3) NHI members with CVDs diagnoses but no comorbidity, (4) NHI members with CVDs and one comorbidity, and (5) NHI members with CVDs and multimorbidity.
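This grouping rule can be sketched in R as follows; the data frame 'members' and its columns has_cvd (any CVD ICD-10 code recorded) and n_other_chronic (count of distinct non-CVD chronic diseases) are hypothetical placeholders.

library(dplyr)

members <- members %>%
  mutate(diagnosis_group = case_when(
    !has_cvd & n_other_chronic == 1 ~ "No CVDs, single chronic morbidity",
    !has_cvd & n_other_chronic >= 2 ~ "No CVDs, multimorbidity",
     has_cvd & n_other_chronic == 0 ~ "CVDs, no comorbidity",
     has_cvd & n_other_chronic == 1 ~ "CVDs, one comorbidity",
     has_cvd & n_other_chronic >= 2 ~ "CVDs and multimorbidity"
  ))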
Statistical analyses
In the descriptive analysis (Table 1), we compared the relative changes in the average healthcare utilization rates before (January 2016-March 2020) and during the COVID-19 pandemic (March 2020-December 2020). We assessed the relative change across the five disease groups and regions.
We used the interrupted time series (ITS) analysis to analyze the healthcare utilization patterns before and during the COVID-19 pandemic. We modeled healthcare utilization using Poisson regression, as the healthcare utilization data was assumed to follow the Poisson distribution, with a scaling adjustment to control for potential over-dispersion [27].
The segmented regression model took the form log(E[Y_t]) = β_0 + β_1 T + β_2 X_t + β_3 (T − T_0) X_t + e_t, where Y_t represents the outcome variable at time t, T is the time elapsed in months from January 2016, and X_t is a dummy variable capturing the pre-COVID-19 period (coded 0) and the COVID-19 period (coded 1). β_0 represents the intercept of the model, β_1 captures the underlying pre-COVID-19 time trend of the outcome variable, β_2 captures the immediate level change in the outcome variable when entering the COVID-19 period, and β_3 indicates the change in the trajectory (slope) of the outcome variable during the COVID-19 period (with T_0 as the beginning of the COVID-19 period). The error term (e_t) at time t represents the random variability not explained by the model. The Incidence Rate Ratio (IRR) was obtained by exponentiating the Poisson regression coefficient (β_2), which provided the change in monthly levels of visits due to entering the COVID-19 period [28].
We also performed the ITS analysis stratified by the five diagnosis groups to examine the heterogeneous effects of COVID-19 disruptions experienced by patients with different diagnoses. For the overall and diagnosis-stratified analyses, we utilized NHI members at the national level as the denominator in calculating the outcome rates. For patients with CVDs and multimorbidity, we stratified the analysis by geographical regions. For this analysis, we used NHI members in each region as the denominator in calculating the outcome variables. All analyses controlled for the seasonal pattern using additive Fourier terms in the Poisson time-series regression models and were carried out following Bernal's procedures [27].
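A minimal R sketch of this ITS specification is given below. The paper describes a Poisson model with a scaling adjustment for over-dispersion; a quasi-Poisson fit is one common way to implement this, and all object names, column names and the COVID-19 start index are hypothetical assumptions.

# 'df': one row per calendar month, with
#   visits  - weighted number of chronic-disease visits that month
#   members - number of NHI members (denominator for the rate)
#   time    - months elapsed since January 2016 (1, 2, ...)
#   covid   - 0 before the COVID-19 period, 1 from March 2020 onwards
t0 <- 51                                   # assumed index of the first COVID-19 month (March 2020)
df$post_slope <- pmax(df$time - t0, 0)     # change-in-slope term, (T - T0) * X
df$sin12 <- sin(2 * pi * df$time / 12)     # annual Fourier (harmonic) terms for seasonality
df$cos12 <- cos(2 * pi * df$time / 12)

fit <- glm(visits ~ time + covid + post_slope + sin12 + cos12 + offset(log(members)),
           family = quasipoisson, data = df)

exp(coef(fit)["covid"])                    # IRR for the immediate level change (beta_2)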
We conducted statistical significance tests (assuming independence between groups) to assess if the impact of the COVID-19 period on utilization differed across diagnosis groups and regions. Group (5), CVDs and multimorbidity, and Region 1 were used as the reference groups.
Ethics approval and consent to participate
This study was approved by the Research Ethics Committee of Atma Jaya Catholic University of Indonesia (Number: 0002S/III/PPPE.PM.1005/03/2023), which also waived the need for informed consent. Written informed consent for participation was not required for the collection and utilisation of administrative health data, in accordance with national legislation and institutional requirements. All research steps were carried out in accordance with relevant guidelines and regulations.

Table 1 Relative change in the monthly rate of healthcare utilization related to chronic diseases before and during the COVID-19 pandemic. Monthly rates are presented per 10,000 NHI members. PHC = Primary Healthcare. a Numerator: number of visits with chronic diseases (overall group); denominator: number of NHI members at the national level. b Numerator: number of visits with chronic diseases (overall group); denominator: number of NHI members at the regional level. c Numerator: number of visits with chronic diseases in each group; denominator: number of NHI members at the national level. d Numerator: number of visits by NHI members with CVDs and multimorbidity per region; denominator: total NHI member population per region. See Supplementary Figure S2 for details about the regions.
Results
Descriptive analysis of healthcare utilization before and during COVID-19
The national average monthly outpatient and inpatient hospital visit rate was lower during the COVID-19 pandemic compared to the period before the pandemic (Table 1). The average visit rates were 34.67% lower for inpatient care and 15.03% lower for outpatient care during the pandemic. The decline in average monthly rates of hospital visits varied across different regions, with a range of 32.10% to 40.32% lower for inpatient visits and 10.75% to 19.10% lower for outpatient visits.
For patients in different chronic disease diagnosis groups, we observed a consistent decrease in inpatient hospital visit rates (Table 1). We also observed reductions in outpatient hospital visit rates for patients with No CVDs but with single chronic morbidity, No CVDs but with multimorbidity, and CVDs but no comorbidity. In contrast, primary healthcare center chronic disease visits increased across all diagnosis groups except for patients with No CVDs but with multimorbidity.
For CVDs and multimorbidity patients, we observed a 29.29% lower inpatient visit rate, but a higher outpatient visit rate of 1.87% at hospitals and 34.34% at primary healthcare during the pandemic. The inpatient visit rate reduction was consistent across regions, ranging from -15.54% to -38.90%. The outpatient visit rate increment was also consistent across regions, ranging from 13.70% to 50.05% for outpatient visit rates at primary healthcare.
Heterogeneity of effects in COVID-19 disruptions among patients with different diagnoses
In general, the reductions in visit rates were similar across patients with different diagnoses, with only a few differences between the groups (Supplementary Table S2).
The reduction in inpatient visit rates ranged from 24% among patients with CVDs and one comorbidity (IRR: 0.76; 95% CI: 0.60, 0.96) to 38% among patients with CVDs and multimorbidity (IRR: 0.62; 95% CI: 0.51, 0.75) (Table 2). A lower reduction in inpatient visit rates was observed among patients with CVDs but no comorbidity compared to patients with CVDs and multimorbidity (mean difference: -0.07; 95% CI: -0.25, 0.11). However, the reductions in inpatient visit rates across other diagnoses were relatively similar compared to patients with CVDs and multimorbidity (Supplementary Table S2). In addition, we also observed a significant slope change in the overall inpatient visit rates during COVID-19, which showed a declining trend compared to the increasing slope trend before COVID-19 (Fig. 2).
Heterogeneity of effects of COVID-19 among patients with CVDs and multimorbidity across regions
We observed differences in the reduction of out- and inpatient visit rates among patients with CVDs and multimorbidity across regions (Fig. 3). The reduction ranged from 38% in Region 3 to 53% in Region 4 for outpatient visit rates and from 34% in Region 2 to 52% in Region 4 for inpatient visit rates (Table 2). We observed a difference in the reduction of outpatient visit rates in Region 4 compared to Region 1 (mean difference: -0.19, 95% CI: -0.36, 0.22), although this was not statistically significant. A relatively similar reduction across regions was also observed for inpatient visit rates and for outpatient visit rates at primary healthcare (Supplementary Table S2).

Fig. 3 Monthly hospital outpatient and inpatient visit rates among patients with CVDs and multimorbidity across regions
Discussion
Our analysis indicates substantial reductions in healthcare utilization at hospitals and primary healthcare centers by patients with chronic disease in Indonesia during the COVID-19 pandemic. These reductions were observed consistently across diagnosis groups and regions in Indonesia.
The COVID-19 pandemic has posed significant challenges for patients to access healthcare due to reduced supply and demand. On the supply side, strict health protocols were implemented at healthcare facilities to minimize the spread of COVID-19, with a restricted number of patients allowed to access care at the hospitals and mandatory protection measures requiring healthcare workers to be provided with PPE. Healthcare facilities that failed to equip their staff with PPE to meet the standard health protocols had to forgo service to the patients [29]. To flatten the pandemic curve and preserve healthcare capacity during the pandemic, healthcare facilities delayed the management of elective cases and patients with non-COVID-19 diagnoses to focus on providing care to COVID patients with severe conditions needing intensive care [30].
On the demand side, the reduction of healthcare utilization was driven by patients' decisions to delay care-seeking due to the risk and fear of COVID-19 infection [4,29]. Healthcare utilization at hospitals and primary healthcare also decreased due to the social restriction policy, which paved the way for the expansion of telemedicine use in the country. Telemedicine-based services were piloted in September 2020, allowing consultations for patients with chronic diseases at the primary healthcare level and reducing the need for in-person care at the hospital [31]. As the use of telemedicine was not part of the current NHI coverage, we could not quantify the extent to which the use of telemedicine has buffered the effects of foregone healthcare utilization during the COVID-19 pandemic.
The effects of the COVID-19 pandemic on healthcare utilization were relatively homogenous among patients with different chronic diseases. The pandemic could affect patients differently depending on the disease's complexity. The pandemic affected patients with less severe diseases, particularly those with averted outpatient visits, as reported in South Korea [32] and eight European countries [33]. Reductions in in-person healthcare utilization were also observed among Chronic Kidney Disease (CKD) patients enrolled in commercial and Medicare Advantage health plans in the USA [34]. Notably, those with stage G4 CKD and a younger age exhibited higher odds of experiencing a greater healthcare deficit [34].
There were some differences in the effects of COVID-19 on the reduction of healthcare utilization among patients with CVDs and multimorbidity across different regions in Indonesia. The observed reduction was less among patients in Region 1 on Java Island, where 56.2% of the population lives [35], 50.5% of the healthcare resources are spent [36], and 50% of referral hospitals are located [37], consequently with better access to healthcare services [18]. This unequal pattern of healthcare utilization, with higher utilization in better-resourced regions, was also observed before the pandemic [13,18]. With better resources, the healthcare system in Region 1 was more resilient in coping with the pandemic, ensuring a sufficient supply of healthcare services [22].
The reduction in healthcare utilization among patients with CVDs during the pandemic has detrimental consequences for those needing continuous treatment. CVDs have consistently been ranked as a leading cause of death in Indonesia [7]. Prior to the pandemic, nearly 70% of patients at risk for CVDs failed to receive appropriate CVD treatments [38]. Non-compliance with treatments for individuals with CVDs increases their long-term health risks, thereby affecting their sustained care requirements and amplifying the economic burdens for the patients, their families and society [39].
Strengths and limitations
This study used a quasi-experimental design to analyze the impact of the COVID-19 pandemic on healthcare utilization using a large national administrative insurance database. Moreover, this study investigated the impact of COVID-19 across different combinations of chronic diagnoses and CVDs and portrayed the heterogeneity of COVID-19 disruption effects across regions.
The quality of this national health insurance administrative data depends on the coding system used across different healthcare facilities. The validity and reliability of the data, which have not been established in any study, could potentially influence the quality of the estimates of disease burden reported in this study. In addition, we cannot compare healthcare utilization among those with unmet needs, as information about needs and whether needs were met does not exist in the NHI database. Moreover, despite the importance of understanding the impacts of foregone healthcare utilization, particularly among individuals with CVDs and multimorbidity, on subsequent mortality, the National Health Insurance database is not readily linked with the death register, rendering investigation of this issue not yet possible.
The costs for COVID-19 patients treated in hospitals were reimbursed directly by the Ministry of Health and not through the National Health Insurance. Consequently, the NHI database does not fully capture the healthcare utilization of COVID-19 patients [40], and therefore, estimates of healthcare utilization rates during COVID-19 based on the NHI database might be underestimated.
Policy implications
Evidence from this study calls for attention to support the vulnerable population, especially patients with chronic disease, during the pandemic. Many patients with chronic disease have been affected by deferred care and foregone health services, which could lead to increased disability and premature mortality [1,41]. Pandemic control policies (including providing COVID-19 vaccination to the population and providing protective devices for healthcare staff) contribute to the recovery of healthcare utilization. This recovery is achieved through improvements in both the supply of and demand for healthcare services [42].
Further studies are needed to explore the effect of regional policy intervention, healthcare readiness, individual factors, unmet needs and social determinants of health to explain the change in health-seeking behavior and healthcare utilization in the population during the pandemic.
Conclusion
In Indonesia, healthcare utilization among patients with chronic disease under the NHI dropped significantly during the COVID-19 pandemic. The reduction was observed in out- and inpatient hospital visits and in primary healthcare center visits. The averted healthcare utilization differed somewhat across chronic diseases and regions, but these differences were not statistically significant. Policy intervention is essential to ensure the availability and accessibility of care and health services in the context of the pandemic. It is important to implement active NCD screening and control programs to reduce the subsequent health and economic burden related to NCDs due to delayed treatment during the pandemic.
Fig. 1 Monthly hospital and primary healthcare visit rates before and during COVID-19 at the national level
Fig. 2 Monthly hospital outpatient and inpatient visit rates before and during COVID-19 across groups of diagnosis
Table 2 Effect of COVID-19 disruptions on monthly hospital and PHC visit rates in Indonesia. The effect of COVID-19 disruption on monthly hospital and PHC visit rates is presented among patients in different diagnosis groups and among patients with CVDs and multimorbidity across different regions in Indonesia. The Incidence Rate Ratio (IRR) was obtained by exponentiating the Poisson regression coefficient (β_2), which provided the change in monthly levels of visits due to entering the COVID-19 period. PHC = Primary Healthcare. a The IRR for patients with chronic diseases at the national level. b The IRR for patients in different diagnosis groups at the national level. c The IRR for patients with CVDs and multimorbidity at the regional level. See Supplementary Figure S2 for details about the regions. | 2024-01-03T14:20:03.375Z | 2024-01-02T00:00:00.000 | {
"year": 2024,
"sha1": "72808ba41f08773011c6c59071c0eeee442b063a",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/counter/pdf/10.1186/s12889-023-17568-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59c214b87e58b64bff2b97f0eb8a3e560e248350",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56353835 | pes2o/s2orc | v3-fos-license | Chemical composition, organic matter digestibility and energy content of apple pomace silage and its combination with corn plant, sugar beet pulp and pumpkin pulp*
The objective of this research was to investigate and compare the quality of apple pomace silage ensiled with corn plant, sugar beet pulp, and pumpkin pulp for nutrient compositions. Fresh samples of apple pomace, corn plant, sugar beet pulp, pumpkin pulp, and their combinations were fermented in glass jars. The treatment groups included i) 100% apple pomace as control, ii) 100% corn plants, iii) 100% sugar beet pulp, iv) 100% pumpkin pulp, v) 50% apple pomace and 50% sugar beet pulp, vi) 50% apple pomace and 50% pumpkin pulp, and vii) 50% apple pomace and 50% whole corn plant. The silage pH was different among treatment groups, ranging from 3.60 to 4.15, being lowest with a combination of apple pomace and pumpkin pulp, and highest with sugar beet pulp. Dry matter (DM) and crude protein (CP) contents of the silages were also different among groups, with corn silage being the highest for both values, namely 29.17% for DM and 9.92% of DM for CP. Although acid detergent fibre (ADF) and crude cellulose (CC) values differed among silages (ADF and CC contents varied between 24.47 and 38.55% of DM and 21.58–28.98% of DM among silages, respectively), neutral detergent fibre (NDF) contents remained similar. In vitro organic matter digestibility of sugar beet pulp silage (74.41% of DM) was highest among all silages, whilst corn silage (55.35% of DM) had the lowest digestibility. Sugar beet pulp silage had the highest metabolizable energy (ME) (2.67 Mcal/kg DM) and net energy lactation (NEL) (1.61 Mcal/kg DM) values among all silages. The results of the current study suggested that nutritive values of the apple pomace silage were comparable with the silages from the other plant sources. In summary, apple pomace silage is a promising feed. ______________________________________________________________________________________
Introduction
To increase the profit in animal production systems, economical feed sources, including agro-industrial residuals, should be regarded as part of ruminant feeding. The use of unconventional feed sources is also a useful way of overcoming the shortage of animal feedstuffs and has been intensively addressed by researchers worldwide (Dixon & Egan, 1987;Ali et al., 2017;Shah et al., 2017a;Shah et al., 2017b;Shah et al., 2018). Apple pomace is a by-product of the fruit juice industry. It contains 26.4% DM, 4.0% DM CP, 3.6% DM sugar, 6.8% DM crude cellulose, 0.38% DM crude ash (CA) (Vasil'ev et al., 1976), 30-48.2% DM NDF, 25-42% of DM ADF (Wolter et al., 1980;Singhal et al., 1991), and 7.7-9.1 MJ/kg DM metabolizable energy (Mafakher et al., 2010). Apple pomace is also an agro-industrial waste, which can be used as animal feed as a way of disposal. Using unconventional silages such as apple pomace in ruminant diets could be a source of low-cost nutrients. Because of the seasonal availability of fresh fruits, ensiling may offer a good solution to preserving their by-products. Pumpkin pulp is a by-product of pumpkin (Cucurbita pepo), which is grown in Turkey for seed production. Ensiling apple pomace with pumpkin pulp could be another economic source of animal feed. Ensiling apple pomace with dry sugar beet (Beta vulgaris L.) pulp also seems an interesting solution. A mixture of common silage maize (Zea mays) and apple pomace may also offer a cheaper source for admixtures.
A few studies have investigated the use of apple pomace silage in ruminant diets (Fontenot et al., 1977;Toyokawa et al., 1977;Tümer, 2001;Ahn et al., 2002). Feeding trials with apple pomace silage in the literature showed positive results. The results of a study showed that feeding dairy cows with apple pomace silage containing 10% wheat straw, 10% alfalfa hay, and 10% rice hulls resulted in an increase in milk yield and milk protein content (Toyokawa et al., 1984). Another study revealed that apple pomace silage could be added to the diet of lactating cows up to 30% (Ghoreishi et al., 2007). However, the combination of apple pomace silage and other types of silage has not been studied adequately. Therefore, the objective of this study was to investigate and compare the quality of apple pomace ensiled with corn plant, sugar beet pulp and pumpkin pulp for nutrient composition and digestibility.
Materials and Methods
Fresh apple pomace and fresh sugar beet pulp were provided by a local fruit juice factory and a sugar factory in Kayseri Province, Turkey. Fresh pumpkin pomace was provided from the field just after harvest for seeds in Develi District in Kayseri Province. Corn was harvested from the fields of the experimental farm of Erciyes University, Kayseri, Turkey. All material was collected on the same day and brought to the laboratory for ensiling processes. Fresh silage materials (1000 g for each silage) were stuffed in 1 litre glass jars and closed tightly for ensiling for 60 days. Before ensiling, the jars were placed upside down to drain water from the holes in the cap of the jar for 24 hours. There were seven treatment groups: i) 100% apple pomace as control, ii) 100% corn plant, iii) 100% sugar beet pulp, iv) 100% pumpkin pulp, v) 50% apple pomace and 50% sugar beet pulp (w/w), vi) 50% apple pomace and 50% pumpkin pulp (w/w), and vii) 50% apple pomace and 50% whole corn plant (w/w). Each treatment contained three jars of the same material.
The jars were opened after 60 days of fermentation. The silage pH was measured right after opening. A total of 25 g of silage from each jar was sampled for pH measurement in 100 ml of distilled water. The content was homogenized in a blender for five minutes. The homogenized sample was filtered through a double layer of cheesecloth and the solution was re-filtered through a filter paper until it became totally clear. The filtrated liquid was used to determine silage pH directly with a digital pH meter.
The DM, CA, CP and EE contents were analysed according to the methods described by AOAC (1990). The ADF and NDF were analysed by a method described by Goering & Van Soest (1970). Crude cellulose (CC) was determined by the method of Bulgurlu & Ergül (1978), and hemicellulose (HC) was calculated with the equation suggested by Rinne et al. (1997). Samples of 0.2 g were used for gas production analysis according to Menke & Steingass (1988). The samples were placed in glass tubes containing 10 ml rumen fluid and 20 ml medium. Rumen fluid was collected from two ruminally fistulated sheep fed twice daily with a diet containing alfalfa hay (60%) and concentrate (40%). Rumen fluid was collected approximately two hours after the morning feeding and transported immediately to the laboratory for use. The medium was prepared by mixing 500 ml distilled H2O, 0.1 ml micro-mineral solution, 200 ml buffer solution, 200 ml macro-mineral solution, and 1 ml resazurin solution (0.1%). The buffer solution contained 4 g ammonium bicarbonate (NH4HCO3) and 35 g sodium bicarbonate (NaHCO3) in 1 L of distilled water. The macro-mineral solution contained 9.45 g disodium hydrogen phosphate dodecahydrate (Na2HPO4.12H2O), 6.2 g monopotassium phosphate (KH2PO4), and 0.6 g magnesium sulphate heptahydrate (MgSO4.7H2O) in 1 L of distilled water. These were prepared freshly before use. The micro-mineral solution contained 13.2 g calcium chloride dihydrate (CaCl2.2H2O), 10.0 g manganese dichloride tetrahydrate (MnCl2.4H2O), 1 g cobalt dichloride hexahydrate (CoCl2.6H2O), and 8 g ferric chloride hexahydrate (FeCl3.6H2O) in 1 L of distilled water.
Metabolizable energy (ME) and net energy lactation (NEL) values of the silages were estimated using equations suggested by Blümmel & Ørskov (1993), and in vitro organic matter digestibility (IVOMD) was calculated using the equation of Menke et al. (1979). In these equations, GP is the 24 h net gas production (ml/200 mg), and CP (crude protein), EE (ether extract) and CA (crude ash) are expressed as % of DM; the energy values were then converted to Mcal/kg DM. All data were analysed using analysis of variance (ANOVA), and means were compared using Duncan's multiple range test and the least significant difference test at P <0.05 if ANOVA showed a significant effect. These analyses were carried out using SPSS (Statistical Package for the Social Sciences, version 14.0, SPSS Inc., Chicago IL, USA) software.
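As a rough illustration of this estimation step, the sketch below implements one commonly cited Menke-type parameterisation; the coefficients (and the hemicellulose relation HC = NDF − ADF) are assumptions for illustration, not the exact equations of Blümmel & Ørskov (1993) and Menke et al. (1979) used in this study.

```python
# Illustrative sketch only: coefficients below follow one commonly cited Menke-type
# parameterisation and are NOT necessarily those used in this study.

MJ_TO_MCAL = 1 / 4.184  # 1 Mcal = 4.184 MJ


def hemicellulose(ndf, adf):
    """Hemicellulose estimated as the NDF - ADF difference (assumed relation)."""
    return ndf - adf


def silage_energy(gp24, cp, ee, ca):
    """Estimate IVOMD, ME and NEL from 24 h in vitro gas production.

    gp24: net gas production (ml per 200 mg DM); cp, ee, ca: % of DM.
    Returns IVOMD (% of DM) and ME, NEL (Mcal/kg DM).
    """
    ivomd = 14.88 + 0.889 * gp24 + 0.45 * cp + 0.0651 * ca      # % of DM
    me_mj = 2.20 + 0.136 * gp24 + 0.057 * cp + 0.0029 * ee**2   # MJ/kg DM
    nel_mj = 0.101 * gp24 + 0.051 * cp + 0.112 * ee             # MJ/kg DM
    return ivomd, me_mj * MJ_TO_MCAL, nel_mj * MJ_TO_MCAL


# Hypothetical example values (not measurements from this study):
print(silage_energy(gp24=40.0, cp=6.14, ee=4.30, ca=3.44))
```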
Discussion
The ranges of pH (3.60-4.15) obtained from the experimental silages in the present work were in agreement with the study by Mafakher et al. (2010), who reported that high-quality silage should have a pH ranging from 3.80 to 4.30. The pH of corn silage in the present work (4.03) was similar to the pH of corn silage at 3.97 at the end of 8 weeks and 3.93 at the end of 16 weeks reported by Bal (2006). The pH of apple pomace silage (3.79) in the present work was within the ranges of pH for apple pomace silage from 3.40 to 4.10 reported by other researchers ( La Van Kinh & Phuong, 1997;Pirmohammadi et al., 2006;Yalçınkaya et al., 2012). The pH values for apple pomace silages from the present work and from the literature ( La Van Kinh & Phuong, 1997;Pirmohammadi et al., 2006;Yalçınkaya et al., 2012) differ because of the varieties of apple pomace from different sources and different technologies (McDonald, 1981).
The DM of apple pomace silage (13.65% as fed) in the present work was in agreement with the DM reported as 12.37 and 14.92% by Yalçınkaya et al. (2012) and La Van Kinh & Phuong (1997). High-quality silage should have 20-35% dry matter (Ergül, 1988). However, combining apple pomace silage with corn silage (50:50) in the present study yielded a DM of 21.32%, which is close to that of good-quality silage. The results of the present work suggest that the DM of apple pomace should be increased to yield good silage. This can be achieved by adding 50-60% corn silage or a proper amount of dry forages such as straw or alfalfa hay (Gürbüz & Kaplan, 2008). Combining the apple pomace silage with pumpkin pulp (50:50) in the current work decreased the DM content of the silage (11.52%), far from that of high-quality silage. The DM of corn silage (29.17%) in the present work was in line with the DM values of corn silage reported by others (Idukut et al., 2009; Arslan & Çakmakçı, 2011).
Ash content of the apple pomace silage of the present work (3.44% of DM) was similar to the values (2.33-3.44% of DM) reported by other researchers (Ahn et al., 2002;Gürbüz et al., 2004;Abdollahzadeh et al., 2010). Ash content as an indicator of the mineral content of the feed material (Arslan & Çakmakçı, 2011) increased in the present work when apple pomace was combined with corn silage, pumpkin pulp, and sugar beet pulp silages. The CP content of the apple pomace silage (6.14% of DM) was similar to values (5.6-7.2% of DM) reported by other researchers (Ahn et al., 2002;Pirmohammadi et al., 2006;Abdollahzadeh et al., 2010). To increase the CP content of apple pomace silage, nitrogen sources should be added during ensiling. In addition, the EE content of the apple pomace silage (4.30% of DM) was similar to the values (4.70-5.49% of DM) reported in the literature (Ahn et al., 2002;Abdollahzadeh et al., 2010). Collectively, the ash, protein, and fat contents of apple pomace silage are similar to average silage quality from other plant sources.
The ADF and NDF contents of the apple pomace silage measured in the present work (34.91 and 46.07% of DM, respectively) were in accordance with the reported values for ADF (34.13-46% DM) and NDF (39.12-56.7% DM) (Ahn et al., 2002;Pirmohammadi et al., 2006;Abdollahzadeh et al., 2010). Wide variations among the reported values were probably due to apple types and processing technologies. In addition, the CC content of the apple pomace was in accordance with those reported in the literature (Gürbüz et al., 2004;Pirmohammadi et al., 2006).
Combining apple pomace with corn silage and pumpkin pulp (50:50) resulted in an increase in the CC content of the silage. Moreover, combining apple pomace with corn silage and sugar beet pulp (50:50) resulted in an increase in HC content of the silage. The fibre components of the apple pomace silage measured in the present work (ADF, NDF, CC, and HC) were close to those of corn silage.
In vitro organic matter digestibility of apple pomace silage was reported to be 57.5% of DM (Pirmohammadi et al., 2006) and 71.4% of DM (Mirzaei-Aghsaghali et al., 2011), which are both in agreement with the results of the present work (62.32% of DM). In the current work, the ME of apple pomace silage was 2.32 Mcal/kg DM (8.37 MJ/kg DM), which is similar to the values of ME for apple pomace silage (9 MJ/kg DM and 10.73 MJ/kg DM) reported by Pirmohammadi et al. (2006) and Mirzaei-Aghsaghali et al. (2011). The findings of the present work for NEL (4.18 MJ/kg DM) also agree with values (6.50 MJ/kg DM) reported by other researchers (Pirmohammadi et al., 2006;Mirzaei-Aghsaghali et al., 2011). Combining apple pomace with sugar beet pulp (50:50) resulted in an increase in the energy content of the silage. In general, IVOMD, ME, and NEL values of the apple pomace silages were comparable with those of the silages from the other plant sources.
The different values reported here in the present work and from the literature are mostly due to the varieties of apple pomace from different apple sources with different technologies. However, independently of the sources of apple types and harvesting technologies, apple pomace silage has the suitable nutrient composition to be used as an economic source of feed compared with silages from other plant sources. Also combining apple pomace and other agro-industrial residue materials such as pumpkin pulp and sugar beet pulp is a useful way to obtain good-quality silages in animal husbandry enterprises.
Conclusion
The results of the present study suggested that DM of apple pomace should be increased to yield good silage. In general, EE, CP, IVOMD, ME, and NEL values of the apple pomace silage were comparable to the silages from the other plant sources. In other words, apple pomace silage is a promising feed. | 2018-12-17T22:46:55.870Z | 2018-05-31T00:00:00.000 | {
"year": 2018,
"sha1": "bf0f5a9a43123182666ae5f3113a7c45b9d33fec",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/sajas/article/download/172388/161797",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bf0f5a9a43123182666ae5f3113a7c45b9d33fec",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
91256754 | pes2o/s2orc | v3-fos-license | Absorption of nutrients by soursop seedlings in response to mycorrhizal inoculation and addition of organic compost 1
Soursop (Annona muricata L.) is considered the second most important anonaceous crop in production and cultivated area in Brazil. Its cultivation has grown considerably in recent years, especially in southern Bahia, due to the favorable edaphoclimatic conditions, and as a profitable alternative to the cacao crop (Lemos 2014). For the production of healthy and vigorous seedlings, the selection of the most suitable substrate is a fundamental step for the generation of plants suitable for field planting and, in this sense, several types of substrates have been tested.
INTRODUCTION
Soursop (Annona muricata L.) is considered the second most important anonaceous crop in production and cultivated area in Brazil. Its cultivation has grown considerably in recent years, especially in southern Bahia, due to the favorable edaphoclimatic conditions, and as a profitable alternative to the cacao crop (Lemos 2014).
For the production of healthy and vigorous seedlings, the selection of the most suitable substrate is a fundamental step for the generation of plants suitable for field planting and, in this sense, several types of substrates have been tested. However, the use of organic compost can promote better results in the production of seedlings, as verified for guava (Oliveira et al. 2014), açaí (Silva et al. 2017a) and papaya (Souza et al. 2015).
The use of organic composts and the inoculation of arbuscular mycorrhizal fungi (AMF) are management alternatives in organic production systems.This study aimed to evaluate the effect of AMF inoculation (Acaulospora scrobiculata, Acaulospora colombiana and without inoculation) and organic compost of cacao bark (0 g dm -3 , 5 g dm -3 , 10 g dm -3 , 20 g dm -3 and 30 g dm -3 ) on the mycorrhizal efficiency and nutrient uptake, in 'Morada' soursop seedlings.The experimental design was completely randomized, in a 3 x 5 factorial arrangement (AMF x organic compost), with four replicates.A higher mycorrhizal efficiency was observed for the A. colombiana isolate, with the addition of 0 g dm -3 , 5 g dm -3 and 10 g dm -3 of organic compost to the soil, in relation to the A. scrobiculata isolate, which differed statistically at the doses of 20 g dm -3 and 30 g dm -3 of organic compost.The AMF inoculation promotes increases in the N, P, K, Zn, Cu, Fe and Mn contents, when compared to plants without inoculation.The organic compost exerts an effect on the inoculation, mainly on the absorption of P. The AMF inoculation, together with the organic fertilization, promotes the growth and nutrition of seedlings.KEYWORDS: Annona muricata L.; organic fertilization; composting.
Arbuscular mycorrhizal fungi (AMF), for their ability to exploit larger substrate volumes and favor the absorption of nutrients, can increase the productivity of nurseries and reduce the production time of seedlings, thus improving their nutritional status and accelerating their growth (Santos et al. 2011, Machineski et al. 2018).In addition, AMF may play a key role in the mineralization process of organic matter (Paterson et al. 2016), increasing the soil fertility (Johnson et al. 2016), and may also be stimulated by the addition of organic matter to the soil (Sheldrake et al. 2017).
In order to verify the viability of the use of organic compost obtained by cacao bark composting, for the production of soursop seedlings, the present study investigated the effect of this compost and the concomitant inoculation of AMF on the nutrient uptake, in 'Morada' soursop seedlings.
The organic compost was produced from crushed cacao bark enriched with natural phosphate (source of P). Composting piles were arranged in a conical geometric configuration and the composting process was of the windrow type. Piles were turned every 15 days during the first 90 days, and the temperature was monitored fortnightly. At 120 days after composting, the material was dried at room temperature, sieved through a 4 mm mesh and chemically analyzed: pH (CaCl2) = 7.41; P = 52.1 g kg -1 ; K = 24.1 g kg -1 ; Ca = 72.4 g kg -1 ; Mg = 6.4 g kg -1 ; S = 2.7 g kg -1 ; Cu = 57 mg kg -1 ; Mn = 185 mg kg -1 ; Zn = 195 mg kg -1 ; Fe = 7,656 mg kg -1 ; B = 16 mg kg -1 . Organic compost doses were calculated based on the P content, and the central dose was based on the response of soursop to fertilization with this nutrient (Barbosa et al. 2003).
The inocula of the arbuscular mycorrhizal fungi Acaulospora colombiana and Acaulospora scrobiculata were obtained from the Embrapa Agrobiologia, located in Seropédica, Rio de Janeiro state, Brazil, and added at a depth of approximately 3.0 cm, at a density of 30 spores per pot.The inoculated material was formed by a mixture of roots and soil.
The soursop seeds were disinfested with 1 % sodium hypochlorite solution (2 min), washed in distilled water and placed to dry in the shade for 24 h.They were then sown in 150 cm 3 tubes containing sterilized expanded vermiculite.At the end of 30 days, seedlings were transplanted to containers with capacity of 5 dm 3 , containing autoclaved soil, with their respective dose of organic compost already incorporated.At the time of transplanting, AMF inoculation was carried out near the root system of plants.
The experiment was conducted in pots, using a completely randomized experimental design, in a 3 x 5 factorial scheme, with three microbiological treatments (without inoculation, inoculation with A. colombiana and inoculation with A. scrobiculata) and five organic compost doses (0 g dm -3 , 5 g dm -3 , 10 g dm -3 , 20 g dm -3 and 30 g dm -3 ), with 4 replicates.
At the end of the experiment, the soursop seedlings were collected and taken to the laboratory, for dry mass evaluation, which was obtained after oven drying under forced air ventilation at 65 ºC, for 72 h.
The dry biomass increment rate promoted by inoculation (EF) was calculated to evaluate the efficiency of the mycorrhizal inoculation, where X denotes the dry biomass of the mycorrhizal plant and Y that of the control plant.
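A plausible form of this index, assuming the conventional definition of mycorrhizal efficiency as the relative dry-biomass increment over the non-inoculated control (consistent with the 487% and 367% increases quoted below), is

$$ EF\,(\%) = \frac{X - Y}{Y} \times 100. $$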
Samples were milled in a Wiley mill with a 20 mesh sieve and stored in sealed vials. Nutrient concentrations were determined after nitric-perchloric and sulfuric digestion. P contents were determined by the molybdate method, in a molecular absorption spectrophotometer; K by flame photometry; Ca, … For the statistical analysis, the Kolmogorov-Smirnov test was performed to evaluate the normality of the data distribution. An analysis of variance was then applied, and the Tukey test (p < 0.05) was used to compare qualitative treatments (AMF inoculation). For quantitative treatments, regression was performed, and the representative model of the biological response, with parameters significant by the t test (p < 0.05), was fitted. Analyses were performed using the SigmaPlot 12.0 software (Systat Software, Inc.).
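The analyses were run in SigmaPlot; purely as an illustration of the equivalent steps, a minimal sketch in Python is given below, assuming a tidy data table with hypothetical column names (amf, dose, response).

```python
# Minimal sketch of the analysis pipeline (illustrative only; column names are hypothetical).
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd


def analyse(df: pd.DataFrame) -> None:
    # 1) Kolmogorov-Smirnov test of normality on the standardised response
    z = (df["response"] - df["response"].mean()) / df["response"].std(ddof=1)
    print(stats.kstest(z, "norm"))

    # 2) Two-way ANOVA for the 3 x 5 factorial (AMF inoculation x compost dose)
    model = smf.ols("response ~ C(amf) * C(dose)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # 3) Tukey test (p < 0.05) for the qualitative factor (AMF inoculation)
    print(pairwise_tukeyhsd(df["response"], df["amf"], alpha=0.05))

    # 4) Regression on the quantitative factor (dose), e.g. a quadratic fit per AMF level
    for amf, sub in df.groupby("amf"):
        fit = smf.ols("response ~ dose + I(dose ** 2)", data=sub).fit()
        print(amf, dict(fit.params))
```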
RESULTS AND DISCUSSION
The F test showed a significant interaction (p < 0.05) with the AMF inoculation and organic compost doses for total dry mass, mycorrhizal efficiency and leaf N, P, K, Mg and Zn contents.Isolated effects were observed for the Ca, Fe, Cu and Mn contents (p < 0.05).
The inoculation with AMF spores belonging to the A. scrobiculata and A. colombiana species stimulated the production of total dry mass of soursop plants, when compared to seedlings without mycorrhizal inoculation.Based on the total dry mass, the mycorrhizal efficiency of each fungal isolate was calculated to determine the maximum response of each AMF in the growth of soursop seedlings (Table 1).
The A. colombiana and A. scrobiculata isolates, without the addition of organic compost, presented a high symbiotic efficiency, providing increases of 487 % and 367 % in the dry matter of plants, respectively, in relation to the control, being A. colombiana statistically superior to A. scrobiculata.On the other hand, in treatments with the addition of 20 g dm -3 and 30 g dm -3 of organic compost, the inoculation with A. scrobiculata was more favorable to a symbiotic efficiency, statistically differing from A. colombiana.Plants respond differently to mycorrhization, and different mycorrhizal fungi species induce diverse effects not only on the colonization level, but also on plant growth (Burleigh & Bechmann 2002).
For the nitrogen content in shoots, increases were observed as the dose of organic compost applied to the soil was increased (Figure 1). The regression analysis showed increases with linear rates, and the maximum value obtained was greater than 35 g kg -1 of dry matter in plants inoculated with AMF, with the addition of 30 g dm -3 of organic compost. Barbosa et al. (2003) reported average contents of 37 g kg -1 in soursop seedlings at the same development stage. This increase is a result of the organic fertilizer efficiency, and a similar result was found by Oliveira et al. (2013) in research with guava seedlings cultivated with the addition of organic material.
Table 1. Average values for total dry mass, at 120 days after planting the soursop seedlings inoculated with AMF and fertilized with organic compost (rows: AMF isolates; columns: compost doses, g dm -3 ). * Means followed by the same letter in the columns do not differ statistically by the Tukey test (p ≤ 0.05).
Regarding the P content, regression equations, as a function of the organic compost doses, presented a quadratic representation for the inoculation with A. colombiana and A. scrobiculata and a linear representation for the treatment without inoculation (Figure 2).
The reduction in the P content in soursop plants inoculated with A. colombiana and A. scrobiculata at higher doses of organic compost may be attributed to the fact that, in richer nutrient substrates, the AMF activity is generally reduced.Dalanhol et al. ( 2016) attributed the negative effect of AMF inoculation on Eugenia uniflora seedlings to the high P content in the substrate.
As for the K content, it was observed that the addition of organic compost doses to the soil increased the concentration of the element in the shoot dry mass of seedlings, with values fitting the increasing linear regression model (Figure 3). This result is similar to that observed by Oliveira et al. (2013) and Pereira et al. (2010), in guava shoots and tamarind seedlings. According to Pimentel et al. (2009), the exchangeable K contents almost always respond to the addition of organic composts, regardless of the compost used, which suggests that this contributed to its ready absorption by the root system.
The evaluation of the AMF interaction at each dose of organic compost for the N, P, K, Ca and Mg contents is shown in Table 2.In general, the inoculation of fungal species promoted the nutritional improvement of soursop seedlings, if compared to uninoculated seedlings.The introduction of the AMF species A. colombiana and A. scrobiculata significantly influenced the N, P and K contents of plant shoots, in all doses of organic compost, with statistically superior responses, when compared to plants without inoculation.
Regarding the N content, plants inoculated with both the AMF species were statistically superior, when compared to the control plants.
A. colombiana and A. scrobiculata did not differ from each other at all organic compost doses.In these treatments, the mean increments were higher than 20 %, when compared to the treatment without inoculation.Santos et al. (2011) reported that the Glomus etunicatum inoculation also provided significant N increases in the shoots of pineapple seedlings.N increments were also reported by Farias et al. (2014), in blueberry seedlings.The contribution of AMF to increased nitrogen uptake may reach 25 %, as a function of the ability to grow beyond the depletion zone that forms near the surface of absorbing roots (Siqueira et al. 2002).Extraradicular hyphae present the ability to absorb ammonium, nitrate and some amino acids and translocate N to the plant (Hodge et al. 2001).
Mycorrhizal inoculant treatments significantly increased the P content of plants inoculated with both A. colombiana and A. scrobiculata at all organic compost levels (0 g dm -3 , 5 g dm -3 , 10 g dm -3 and …). Without the addition of organic compost, plants inoculated with A. scrobiculata differed from those inoculated with A. colombiana. At higher doses, treatments with AMF inoculation did not differ from the control treatment. Similar results were reported for Gigaspora margarita and Glomus clarum inoculation by Samarão et al. (2011) and Farias et al. (2014), respectively in blueberry and soursop seedlings. Silva et al. (2017b) reported that the A. colombiana inoculation provided an increase of 2,400 % in the P content, in relation to the control, in Australian cedar seedlings. Due to its reduced mobility in the soil solution, the transport of P impairs the absorption of this nutrient by plants, and AMF inoculation becomes an alternative, as these microorganisms increase the root exploitation area (Cardoso et al. 2010).
It should be noted that the average K contents in plants inoculated with A. colombiana and A. scrobiculata did not differ from each other at each level of organic compost, being statistically superior to those of plants without inoculation (Table 2). The highest K values in the present study occurred at the highest dose of organic compost (30 g dm -3 ), with the mean levels for A. colombiana, A. scrobiculata and control being 25.68 g kg -1 , 23.34 g kg -1 and 14.46 g kg -1 , respectively. Similar K levels in soursop seedlings, at 120 days after planting (DAP), were reported by Barbosa et al. (2003), being higher than those found by Chu et al. (2001) and Samarão et al. (2011) in soursop seedlings inoculated with AMF, at 150 and 90 DAP.
Plants without inoculation had higher calcium and magnesium levels in shoots than plants inoculated with A. colombiana and A. scrobiculata, not differing from each other (Table 3).The reduction in the concentration of these elements in inoculated plants may be attributed to the effect of tissue dilution, as a function of the increase in vegetative growth observed in colonized plants (Silveira et al. 2002).Nunes et al. (2008) studied peach plants inoculated with Acaulospora sp. and observed significant negative correlations (p ≤ 0.01) between the colonization percentage and the Ca and Mg contents.
The AMF inoculation promoted a higher zinc uptake and, consequently, higher levels of this micronutrient in the plant shoots at all organic compost levels (Table 3). The highest levels occurred in plants inoculated with A. colombiana, but did not differ statistically from A. scrobiculata, with contents within the foliar content range presented by Silva & Farnezi (2009). The mycorrhizal association with A. scrobiculata also increased the levels of these micronutrients in persimmon plants, when compared to the treatment without inoculation (Machineski et al. 2018). This result is related to the performance of the extraradicular hyphae, which promote a greater root area exploration and increase the absorption of nutrients with low mobility in the soil (El-Shaik & Mohammed 2009).
Table notes (rows: AMF isolates; columns: organic compost, g dm -3 ): * Means followed by the same letter in the column do not differ statistically by the Tukey test (p ≤ 0.05). 1 Non-significant AMF x organic compost interaction (p ≤ 0.05).
Figure 1. Nitrogen content in the leaves of soursop seedlings inoculated with AMF, in response to organic compost doses.
Figure 2. Phosphorus content in the leaves of soursop seedlings inoculated with AMF, in response to organic compost doses.
Figure 3. Potassium content in the leaves of soursop seedlings inoculated with AMF, in response to organic compost doses.
Figure 4. Soursop roots of the control treatments and inoculated with Acaulospora scrobiculata and Acaulospora colombiana. A) Overview of a non-mycorrhizal root of the control treatment; B) overview of the root inoculated with A. colombiana, showing the frequency of arbuscules; C) mycorrhizal root of treatment with A. scrobiculata, where conspicuous arbuscules can be observed; D) detail of extraradicular hyphae.
Table 2. Average contents of the macronutrients nitrogen, phosphorus, potassium, calcium and magnesium, in the dry matter of soursop shoots.
Table 3. Average zinc, iron, copper and manganese contents in the dry matter of soursop shoots. | 2019-04-03T13:08:02.643Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "88b36060d41949dee7afdf8c5a3d9857d003bd43",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/pat/v48n3/1983-4063-pat-48-03-0287.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "88b36060d41949dee7afdf8c5a3d9857d003bd43",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
14360525 | pes2o/s2orc | v3-fos-license | Lattice Flavourdynamics
I present a selection of recent lattice results in flavourdynamics, including the status of the calculation of quark masses and a variety of weak matrix elements relevant for the determination of CKM matrix elements. Recent improvements in the momentum resolution of lattice computations and progress towards precise computations of $K\to\pi\pi$ decay amplitudes are also reviewed.
INTRODUCTION
One of the main approaches to testing the Standard Model of Particle Physics and searching for signatures of new physics is to study a large number of physical processes to obtain information about the unitarity triangle and to check its consistency. The precision with which this check can be accomplished is limited by non-perturbative QCD effects and lattice QCD provides the opportunity to quantify these effects without model assumptions. Of course, lattice computations themselves have a number of sources of systematic uncertainty, and much of our current effort is being devoted to reducing and controlling these errors. In this talk I briefly discuss the evaluation of quark masses and weak matrix elements using lattice simulations.
For most lattice calculations of physical quantities, the principal source of systematic uncertainty is the chiral extrapolation, i.e. the extrapolation of results obtained with unphysically large u and d quark masses. Ideally we would like to perform computations with 140 MeV pions and hence with m_q/m_s of about 1/25 (where m_q and m_s are the average light quark mass and the strange quark mass, respectively). In practice values m_q/m_s ≥ 1/2 are fairly typical, so that the MILC Collaboration's simulation with m_q/m_s ≃ 1/8 is particularly impressive [1] and provides a challenge to the rest of the community to reach similarly low masses. Its configurations have been widely used to determine physical quantities with small quoted errors.
The MILC collaboration uses the staggered formulation of lattice fermions and for a variety of reasons it is very important to verify the results using other formulations. With staggered fermions each meson comes in 16 tastes and the unphysical ones are removed by taking the fourth root of the fermion determinant. Although there is no demonstration that this procedure is wrong, there is also no proof that it correctly yields QCD in the continuum limit [2]. The presence of unphysical tastes leads to many parameters to be fit in staggered chiral perturbation theory (typically many tens of parameters) and to date the renormalization has only been performed using perturbation theory. It is therefore pleasing to observe that the challenge of reaching lower masses is being taken up by groups using other formulations of lattice fermions (see e.g. ref. [3]) .
In this talk I will discuss a selection of issues and results in lattice flavourdynamics. I start by describing some new thoughts on improving the momentum resolution in simulations, by varying the boundary conditions on the quark fields. I then review the status of lattice calculations of quark masses, K ℓ3 decays (for which computations have only recently began) and B K . This is followed by a discussion of some of the key issues in the computation of K → ππ decays and in heavy-quark physics.
Improving the Momentum Resolution on the Lattice
Numerical simulations of lattice QCD are necessarily performed on a finite spatial volume, V = L^3. Providing that V is sufficiently large, we are free to choose any consistent boundary conditions for the fields φ(x, t), and it is conventional to use periodic boundary conditions, φ(x_i + L) = φ(x_i) (i = 1, 2 or 3). This implies that the components of momenta are quantized to take integer multiples of 2π/L. Taking a typical example of a lattice with 24 points in each spatial direction, L = 24a, with a lattice spacing a = 0.1 fm so that a^{-1} ≃ 2 GeV, we have 2π/L = 0.52 GeV. The available momenta for phenomenological studies (e.g. in the evaluation of form-factors) are therefore very limited, with the allowed values of each component p_i separated by about 1/2 GeV. The momentum resolution in such simulations is very poor.
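A quick numerical check of the quoted momentum spacing, using only the example parameters above and the conversion constant ħc (a minimal sketch):

```python
import math

HBAR_C = 0.19733  # GeV * fm, converts inverse lengths to energy units

a = 0.1           # lattice spacing in fm (example value from the text)
n_sites = 24      # number of points in each spatial direction
L = n_sites * a   # spatial extent in fm

delta_p = 2 * math.pi * HBAR_C / L  # smallest non-zero momentum component, in GeV
print(f"a^-1 ~ {HBAR_C / a:.2f} GeV, 2*pi/L = {delta_p:.2f} GeV")  # ~1.97 GeV and ~0.52 GeV
```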
Bedaque [4] has advocated the use of twisted boundary conditions for the quark fields, e.g. q(x_i + L) = e^{iθ_i} q(x_i), so that the allowed momentum components become p_i = (2πn_i + θ_i)/L with integer n_i. Modifying the boundary conditions changes the finite-volume effects; however, for quantities which do not involve Final State Interactions (e.g. hadronic masses, decay constants, form-factors) these errors remain exponentially small also with twisted boundary conditions [5]. Since we usually neglect such errors when using periodic boundary conditions, we can use twisted boundary conditions with the same precision. Moreover, the finite-volume errors are also exponentially small for partially twisted boundary conditions, in which the sea quarks satisfy periodic boundary conditions but the valence quarks satisfy twisted boundary conditions [5,6]. This is of significant practical importance, implying that we do not need to generate new gluon configurations for every choice of twisting angle {θ_i}. The use of partially twisted boundary conditions opens up many interesting phenomenological applications, solving the problem of poor momentum resolution. It also appears to work numerically. Consider, for example, the plots in fig. 1, obtained using an unquenched (2 flavours of sea quarks) UKQCD simulation on a 16^3 × 32 lattice, with a spacing of about 0.1 fm. The plots correspond to a value for the light-quark masses for which m_π/m_ρ = 0.7 [7]. The lower (upper) left-hand plot shows the energy of the π (ρ) as a function of the momentum of the meson, and the right-hand plot shows the bare values of the leptonic decay constants f_π and f_ρ. The x-axis denotes (|p|L)^2. The results are beautifully consistent with expectations (particularly for pL ≤ 2π, where lattice artifacts are small); the predicted dispersion relation is satisfied and the extracted decay constants are independent of the momenta. Using periodic boundary conditions only the results at values of p indicated by the dashed lines are accessible. With partially twisted boundary conditions all momenta are reachable.
QUARK MASSES
Quark Masses are fundamental parameters of the Standard Model, but unlike leptons, quarks are confined inside hadrons and are not observed as physical particles. Quark masses therefore cannot be measured directly, but have to be obtained indirectly through their influence on hadronic quantities and this frequently involves non-perturbative QCD effects. Lattice simulations prove to be very useful in the determination of quark masses; particularly for the light quarks (u, d and s) for which perturbation theory is inapplicable.
In order to determine the quark masses we compute a convenient and appropriate set of physical quantities (frequently a set of hadronic masses) and vary the input masses until the computed values correctly reproduce the set of physical quantities being used for calibration. In this way we obtain the physical values of the bare quark masses, from which by using perturbation theory, or preferably non-perturbative renormalization, the results in standard continuum renormalization schemes can be determined.
My current best estimates for the values of the quark masses as determined from lattice simulations are presented in table 1.
K ℓ3 Decays
A new area of investigation for lattice simulations is the evaluation of non-perturbative QCD effects in K → πℓν_ℓ decays, from which the CKM matrix element V_us can be determined. The QCD contribution to the amplitude is contained in two invariant form-factors, f_0(q^2) and f_+(q^2), defined from the matrix element of the strangeness-changing vector current (the standard decomposition is recalled below), where q = p_K − p_π. (Parity invariance implies that only the vector current from the V−A charged current contributes to the decay.) A useful reference value for f_+(0) comes from the 20-year-old prediction of Leutwyler and Roos [12]; the first chiral correction f_2 is well determined, whereas the higher-order terms in the chiral expansion require model assumptions.
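For definiteness, the standard convention assumed here for the vector-current matrix element is

$$ \langle \pi(p_\pi)\,|\,V_\mu\,|\,K(p_K)\rangle \;=\; (p_K+p_\pi)_\mu\, f_+(q^2) \;+\; (p_K-p_\pi)_\mu\, f_-(q^2), \qquad f_0(q^2) \;=\; f_+(q^2) \;+\; \frac{q^2}{M_K^2-M_\pi^2}\, f_-(q^2), $$

so that f_0(0) = f_+(0).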
To be useful in extracting V_us from experimental measurements we need to be able to evaluate f_0(0) = f_+(0) to better than about 1% precision. This would seem to be impossible until one notes that it is possible to compute 1 − f_+(0), so that an error of 1% on f_+(0) is actually an error of O(25%) on 1 − f_+(0). The calculation follows a similar strategy to that proposed in ref. [11] for the form-factors of B → D semileptonic decays (which in the heavy quark limit are also close to 1), starting with a computation of double ratios of matrix elements evaluated with all the mesons at rest, i.e. at q^2_max = (M_K − M_π)^2. Following a quenched calculation by the SPQR collaboration last year [13], in which the strategy for determining the form-factors was presented, there have been 3 very recent unquenched (albeit largely preliminary) results: RBC [14] f_+(0) = 0.955(12), JLQCD [15] f_+(0) = 0.952(6) and FNAL/MILC/HPQCD [16] f_+(0) = 0.962(6)(9), in good agreement with the result of Leutwyler and Roos [12].

B_K

B_K, the parameter which contains the non-perturbative QCD effects in K^0−K̄^0 mixing, has been computed in lattice simulations by many groups. It is defined as the ΔS = 2 four-quark matrix element between neutral kaon states normalised by its vacuum-saturation value (the explicit convention is recalled below); B_K depends on the renormalization scheme and scale and is conventionally given in the NDR-MS scheme at µ = 2 GeV, or as the RGI parameter B̂_K, which is related to it by a multiplicative factor. The dashed lines in fig. 2 correspond to B_K = 0.58(3), which I am happy to take as the current best estimate. The challenge now is to obtain reliable unquenched results; such computations are underway by several groups but so far the results are very preliminary. We will have to wait a year or two for precise results, but I mention in passing C. Dawson's guesstimate [18] (stressing that it is only a guesstimate), based on a comparison of quenched and unquenched results at similar masses and lattice spacings, of B_K^MS(2 GeV) = 0.58(3)(6).
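For completeness, the conventional definition assumed in this discussion is

$$ B_K(\mu) \;=\; \frac{\langle \bar K^0\,|\,\big[\bar s\gamma_\mu(1-\gamma_5)d\big]\big[\bar s\gamma^\mu(1-\gamma_5)d\big]\,|\,K^0\rangle}{\tfrac{8}{3}\, f_K^2\, m_K^2}\,, $$

i.e. the ΔS = 2 matrix element normalised by its vacuum-saturation value.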
K → ππ Decays
A quantitative understanding of non-perturbative effects in K → ππ decays will be an important future milestone for lattice QCD. Two particularly interesting challenges are: i) an understanding of the empirical ΔI = 1/2 rule, which states that the amplitude for decays in which the two-pion final state has isospin I = 0 is larger by a factor of about 22 than that in which the final state has I = 2; ii) a calculation of ε′/ε, whose experimental measurement with a non-zero value established direct CP violation in kaon decays. Quenched lattice calculations of these quantities have been performed [19,20]. Both collaborations obtain a considerable octet enhancement (significantly driven, however, by the chiral extrapolation) and ε′/ε with the wrong sign. A particularly impressive feature of these calculations was that the collaborations were able to perform the subtraction of the unphysical terms which diverge as powers of the ultra-violet cut-off (a^{-1}, where a is the lattice spacing). The results are very interesting and will provide valuable benchmarks for future calculations; however, the limitations of the calculations should be noted, in particular the use of chiral perturbation theory (χPT) only at lowest order. This has the practical advantage that K → ππ matrix elements do not have to be evaluated directly: it is sufficient at lowest order to study the mass dependence of matrix elements of the form ⟨M|O_i|M⟩, where M is a pseudoscalar meson and the O_i are the ΔS = 1 operators appearing in the effective Hamiltonian, in order to determine the low-energy constants and hence the amplitudes. It is not very easy to estimate the errors due to this approximation, but they should be at least of O(m_K^2/Λ_QCD^2). Since for ε′/ε the dominant contributions appear to be from the QCD and electroweak penguin operators O_6 and O_8, which are comparable in magnitude but come with opposite signs, it is not totally surprising that the prediction for ε′/ε at lowest order in χPT has the wrong sign. It should also be noted that in the simulations described in refs. [19,20] the light quark masses were large (the pions were heavier than about 400 MeV) and so one can question the validity of χPT in the range of masses used (about 400-800 MeV).
To improve the precision, apart from performing unquenched simulations and reducing the masses of the light quarks, one needs to go beyond lowest order χPT (for example by going to NLO [21,22]) and, in general, this requires the evaluation of K → ππ matrix elements and not just M → M ones. The treatment of two-hadron states in lattice computations has a new set of theoretical issues, most notably the fact that the finitevolume effects decrease only as powers of the volume and not exponentially. Starting with the pioneering work of Lüscher [23], the theory of finite-volume effects for twohadron states in the elastic regime is now fully understood, both in the centre-of-mass and moving frames, [23] - [28] and I will now briefly discuss this.
Consider the two-hadron correlation function represented by the diagram (with total energy E and loop momentum p), where the shaded circles represent two-particle irreducible contributions in the s-channel. For simplicity let us take the two-hadron system to be in the centre-of-mass frame and assume that only the s-wave phase-shift is significant (the discussion can be extended to include higher partial waves). Consider the loop integration/summation over p (see the figure). Performing the p_0 integration by contours, we obtain a summation over the spatial momenta of the form written schematically below (eq. (5)), where the relative momentum k is related to the energy by E^2 = 4(m^2 + k^2), the function f(p^2) is non-singular and (for periodic boundary conditions) the summation is over momenta p = (2π/L)n, where n is a vector of integers. In infinite volume the summation in eq. (5) is replaced by an integral, and it is the difference between the summation and the integration which gives the finite-volume corrections. The relation between finite-volume sums and infinite-volume integrals is the Poisson summation formula, also written schematically below. If the function g(p) is non-singular, the oscillating factors on the right-hand side ensure that only the term with l = 0 contributes, up to terms which vanish exponentially with L. The summand in eq. (5), on the other hand, is singular (there is a pole at p^2 = k^2) and this is the reason why the finite-volume corrections only decrease as powers of L.
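Written schematically in standard notation (the precise normalisations are assumptions for illustration), the finite-volume sum and the one-dimensional Poisson summation formula referred to above are

$$ \frac{1}{L^3}\sum_{\vec p = \frac{2\pi}{L}\vec n}\frac{f(p^2)}{p^2-k^2} \qquad\text{and}\qquad \frac{1}{L}\sum_{n\in\mathbb{Z}} g\!\left(\frac{2\pi n}{L}\right)=\sum_{l\in\mathbb{Z}}\int_{-\infty}^{\infty}\frac{dp}{2\pi}\, g(p)\, e^{\,i l L p}\,. $$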
The detailed derivation of the formulae for the finite-volume corrections can be found in refs. [23] - [28] and is beyond the scope of this talk. The results hold not only for K → ππ decays, but also for π -nucleon and nucleon-nucleon systems.
For decays in which the two-pions have isospin 2, we now have all the necessary techniques to calculate the matrix elements with good precision and such computations are underway. For decays into two-pion states with isospin 0 there are also no barriers in principle. However, in this case, purely gluonic intermediate states contribute and we need to learn how to calculate the corresponding disconnected diagrams with sufficient precision. In addition the subtraction of power-like ultraviolet divergences requires large datasets (as demonstrated in refs. [19,20] in quenched QCD). For these reasons it will take a longer time for some of the ∆I = 1/2 matrix elements to be computed than ∆I = 3/2 ones.
HEAVY QUARK PHYSICS
Lattice simulations are playing an important role in the determination of physical quantities in heavy quark physics, including decay constants (f_B, f_{B_s}, f_D, f_{D_s}), the B-parameters of B−B̄ mixing (from which the CKM matrix elements V_td and V_ts can be determined), form-factors of semileptonic decays (which give V_cb and V_ub), the g_{BB*π} coupling constant of heavy-meson chiral perturbation theory and the lifetimes of beauty hadrons.
The typical lattice spacing in current simulations, a ≃ 0.1 fm, is larger than the Compton wavelength of the b-quark and comparable to that of the c-quark. The simulations are therefore generally performed using effective theories, such as the Heavy Quark Effective Theory or Non-Relativistic QCD. Another interesting approach was proposed by the Fermilab group [29], in which the action is improved to the extent that, in principle at least, artefacts of O((m_Q a)^n) are eliminated for all n, where m_Q is the mass of the heavy quark Q. Determining the coefficients of the operators in these actions requires matching with QCD, and this matching is almost always performed using perturbation theory (most often at one-loop order). This is a significant source of uncertainty and provides the motivation for attempts to develop non-perturbative matching techniques. I only have time here to consider very briefly a single topic, semileptonic B-decays. For B → π decays, the pion's momentum has to be small in order to avoid large lattice artefacts, so that q^2 = (p_B − p_π)^2 is large (q^2 > 15 GeV^2 or so). There continues to be a considerable effort in extrapolating these results over the whole q^2 range. Recently, as experimental results begin to be presented in q^2 bins, it has become possible to combine the lattice results at large q^2 with the binned experimental results and theoretical constraints to obtain V_ub with good precision [30].
As an example I present a recent result, obtained using the MILC gauge field configurations with staggered light quarks and the Fermilab action for the b-quark [31]: |V_ub| = 3.48(29)(38)(47) × 10^{-3}.
SUMMARY AND CONCLUSIONS
Lattice QCD simulations, in partnership with experiments and theory, play a central rôle in the determination of the fundamental parameters of the Standard Model (e.g. quark masses, CKM matrix elements) and in searches for signatures of new physics and ultimately perhaps will help to unravel its structure. With the advent of unquenched simulations, a major source of uncontrolled systematic uncertainty has been eliminated and the main aim now is to control the chiral extrapolation and reduce other systematic uncertainties. We continue to extend the range of applicability of lattice simulations to more processes and physical quantities. In this talk I have only been able to give a small selection of recent results and developments; a more complete set can be found on the web-site of the 2005 international symposium on lattice field theory [32]. | 2014-10-01T00:00:00.000Z | 2006-01-10T00:00:00.000 | {
"year": 2006,
"sha1": "fc031b1b9aace7d7246b09a8e1b42d3ed3bccfb6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/0601014",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fc031b1b9aace7d7246b09a8e1b42d3ed3bccfb6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245716295 | pes2o/s2orc | v3-fos-license | THE ESSENCE AND SCOPE OF COMPETITIVENESS OF HEALTHCARE ORGANISATIONS IN POLAND
Purpose – The paper presents an analysis of the broadly defined essence of competitiveness in the healthcare sector and the closely related issue of the competitiveness of healthcare organisations accounting for the specifics of key types of competition in healthcare. The objective of the paper is to demonstrate that the essence of competition between healthcare organisations is represented by a better use of resources by some enterprises and enhanced cost efficiency in the conditions of the increased demand for top quality services. Research method – The research methodology applied in the study was literature research, the analysis of available empirical research results and of the laws and regulations applicable to the healthcare market. The analysis focused on the specifics of the Polish healthcare market and the ensuing consequences for the competitiveness of healthcare organisations. Results – The results of the analysis show how complex the issue of competitiveness of healthcare organisations is and how relevant it is for the increase of quality, availability, and innovativeness of healthcare for patients.
Introduction
The current growth in the competitiveness of healthcare results from an intensive expansion of the medical services market. The search for answers to questions about the essence, scope or objectives of competitiveness of medical enterprises is hindered by the specific nature of the healthcare sector. Competitiveness in healthcare is more complex than in other industries [Duncan, 2008, pp. 123-158]. In principle, competitiveness is the activity of market players who, striving to promote their interests, compete against one another in terms of price, quality, and attractiveness of their products and services. In the healthcare market, competition between service providers cannot be based solely on the typical market factors, as healthcare entities must take into account the welfare of patients and are expected to improve the health of the population covered by healthcare [Walsche, Smith, 2011, pp. 23-33]. The fact that the goods offered on this market are of the highest order - they are intended to save lives and maintain human health - is also essential.
The paper presents an analysis of the broadly defined essence of competitiveness in the healthcare sector and the closely related issue of the competitiveness of healthcare organisations accounting for the specifics of key types of competition in healthcare. The objective of the paper is to demonstrate that the essence of competition between healthcare organisations is represented by a better use of resources by some enterprises and enhanced cost efficiency in the conditions of the increased demand for top quality services.
The essence of competitiveness of healthcare organisations
The fundamental objective of the operation of a healthcare organisation is the provision of health services, i.e. in line with the Act on Medical Activity dated 15 April 2011 "the performance of activities aimed at maintaining, restoring and improving health" [Act, 2011, art. 9.1]. According to Arrow [1979], health services are not market commodities. This is because the demand for them is not generated by a wish to satisfy a given want but stems from necessity related to the condition of one's health. It means that the health services sector does not meet the definition of the model of perfect competition, which is characterized by extensive knowledge of the market by the producer and the buyer, the absence of external effects and the stability of demand and supply. In a free market, all entities must adhere to the same rules and all transactions must comply with the legal or customary regulations. Another characteristic of a free market is the extensive knowledge about the goods and services offered and the interaction between demand and supply which affects the final price.
Apart from the above aspects of this quasi-market, as the imperfect healthcare market is sometimes called, other factors limiting open competition include stringent industry regulations. They apply irrespective of the healthcare system model adopted in a given country [Wielicka, 2014].
An elaboration of the problem of market mechanisms limitations in the sector of healthcare services is proposed by Rudawska [2007]. She claims that the regulation of supply takes place primarily via the laws and regulations that healthcare organisations must comply with. The most significant of these are the regulations governing the performance of medical activities i.e. the requirements that must be met in order to start conducting and to continue to conduct business activities in the healthcare market. These include sanitary and epidemiological requirements, guidelines laid down by the registration authority, as well as the necessary approvals for establishing a healthcare organisation and for the provision of contracted services granted by the regional authorities. Registered entities must also meet the healthcare standards defined by the Health Ministry and the Agency for Health Technology Assessment and Tariff System, and obtain additional certificates issued by the Centre for Monitoring Quality in Health Care. The industry regulations are discussed in more detail in sub-chapter 2.2.3. Regulatory conditions, quality standards and ethical and moral aspects.
The issue of a regulated market of healthcare services is even more complex if we take into consideration the fact that it is populated by both public and private healthcare organisations.
In Poland, since 1999 if not earlier, the private market has been clearly dominant in the field of outpatient care both in terms of primary care and outpatient specialist care. The reason behind the dominance was the systemic reform that enabled the transformation of the organisational and legal status of healthcare organisation via establishing the so-called non-public healthcare organisation (NPHO) by local government authorities in pursuance of the Act on Healthcare Organisations and other selected laws concerning the operation of local governments in Poland. These changes encompassed the liquidation of the so-called Independent Public Healthcare Organisations (IPHO) and replacing them with NPHOs or dissolving the organisational unit of an IPHO only to lease the resources and operations to a nonpublic enterprise [Rabiej, 2013]. This marked the beginning of the development of competition on the market of outpatient healthcare in Poland [Tyszko et al., 2007].
In principle, the objective of any privately owned company is to generate profits. Healthcare organisations operate with this goal in mind as well. At the same time, without a doubt, maximising profits is neither the fundamental nor the only criterion for their operation, and doctors have no influence on market demand. In general, healthcare professionals usually have no influence over the price of services [Kowalska, 2005]. As such, we are faced with a paradox in which enterprises offering life- and health-saving services, the value of which cannot be appraised in some cases, have to play by the rules of the market game, be cost-efficient and demonstrate operational effectiveness (acquisition and management of resources and assuring the quality of healthcare services) [Stępiński et al., 2011, pp. 151-159]. However, there exists a host of evidence demonstrating that in the realm of a regulated market of healthcare services the private sector is not only more effective, but also more beneficial for the patient.
The possibility of providing healthcare services by private healthcare organisations is linked with relatively greater freedom in the management and organisation of a medical enterprise. This results in increased efficiency in the management of resources, in the possibility of hiring highly skilled managers, and adopting a professional approach to management based on work incentives, as well as in the ability to skilfully react to market changes [Greenshields, 2000]. A good business model helps to maintain the balance between the elements of demand and supply. When the entire process is integrated, starting from admission and ending at postdischarge care of the patient, enterprises can optimise their processes and costs [Corrigan, Mitchell, 2011]. The foregoing is essential in view of the tendency of public healthcare organisations to incur debt. One of the reasons behind this is the misalignment of the structure of healthcare resources with the needs of the aging population [Pędziński, 2016]. Private enterprises demonstrate a higher propensity to establish relations with other healthcare organisations, as well as with competitors, which is conducive to efficient coordination of care and better redistribution of services. Moreover, this approach improves their capacity to attract and deploy private capital and the activities carried on by private entities prevent the establishment of distribution coalitions. The commercialisation of the healthcare market enforces the adoption of an approach where the patient is treated as a customer whose needs must be catered to. The effect of this is the development of an attractive structure of service providers competing with one another in terms of price and quality [Greenshields, 2000].
Summing up, the results of research demonstrating a positive impact of competition in the healthcare industry are worth mentioning. In Great Britain, Cooper [2011] conducted a famous study aimed at evaluating the change in the quality of services offered in the English National Health Service after the introduction of the option for patients to choose their service provider, which was conducive to the growth of competition between healthcare organisations. The study is of relevance because it was performed on an existing system that had been in operation since 2006, which permitted a detailed empirical analysis of the occurring changes without the risk of mistaking temporal fluctuations for the consequences of the implementation of a new system. An analysis of the results showed that providers facing greater competition decreased their death rates about a third of a percentage point more quickly than monopoly providers. Another clear change observed was the shortening of the pre-surgery length of stay relative to monopoly providers, with no significant difference in the post-surgery length of stay. This means that in the face of greater competition, healthcare organisations improved their efficiency without reducing the number of necessary patient services [Cooper, 2011]. The results of the study also demonstrate that there exists a correlation between the quality of medical services, including the quality subjectively perceived by patients, and health and social outcomes, understood as recovery, the improvement of life quality and the reduction of the burden for society [The Marshal Office of the Voivodeship of Silesia, 2012]. A study conducted by Bloom et al. [2010] also showed that greater competition had an impact on the improvement of management quality in healthcare organisations in Great Britain.
Types of competition in the healthcare sector
As noted above, competition in the healthcare sector is more complex than in other industries. The services provided in the healthcare market are highly specific as they are aimed at saving the lives and health of humans. This is what makes the entire healthcare market so specific and so different from a regular free market. Nevertheless, even such a complex environment should endorse competition. According to Misiński [2007], the perfect competition model manifests itself via:
- competition between insurers;
- competition between insurers and service providers;
- competition between service providers, giving patients the freedom to choose their provider.
The first type of competition refers to the issue of management of the health insurance premium by different insurers, i.e. it assumes the existence of competition between numerous insurance companies for the health insurance premiums of the insured. In Poland there is no room for such competition since the core remitter for healthcare services is the National Health Fund, which is the monopolist on the Polish health insurance market. According to experts, this is one of the drawbacks of the Polish healthcare system, primarily because the prices are set unilaterally by the National Health Fund. There is no room for negotiation that could help establish the rules of entry of other entities into the system [Misiński, 2007].
The second type refers to the competition between healthcare organisations, irrespective of their ownership structure, for the money of insurers, and the competition between insurers for the services provided by healthcare organisations. In this area of the market we are dealing with some degree of competition, namely service providers compete for contracts awarded by the insurer. In Poland, insurers do not compete for the medical services provided by healthcare organisations since, as explained above, the National Health Fund is the only insurer purchasing medical services.
The third type of competition is that between service providers, owing to which patients are given the freedom of choice where to and by whom to have their condition treated. It is worth noting that this is the only truly competitive environment in the Polish healthcare system in terms of outpatient care, which is the subject matter of this paper.
Another division of the types of competition present in the healthcare system is that based on the model of relational capital. In literature, the notion of relational capital is often defined as all relationships with the environment which converts relational capital into financial capital (funds and assets) [Perechuda, Chomiak-Orsa, 2013]. In the healthcare sector, the model allows for an evaluation of customer satisfaction (of both recipients and providers) and their relationships with the institutional environment [Łukasiewicz, 2009;Dobija, 2000].
In reference to the model, it may be found that the essence of competition between healthcare organisations is to accomplish better utilisation of resources by some enterprises and higher cost-efficiency than others. Efficiency is understood here as the achievement of the optimal balance between the effectiveness of business activities and the costs incurred despite the stringent industry regulations. Enterprises can become effective through the right management of resources but also by attracting patients and the best suppliers and via a diversification of the sources of financing of medical operations. Such effectiveness should be paired with high quality services provided by qualified medical staff. Three types of competition in the healthcare sector based on the relational capital model are discussed below.
Competing for the patient
According to the relational capital model, customers determine the growth of competitiveness of an organisation and strengthen its image in the business environment. The processes that take place between medical services providers, or healthcare organisations, and the service recipients, or patients, are fundamental to the market of healthcare services [Wiercińska, 2012], although this is an understatement. As demonstrated by the results of research, the decisive factor in attracting customers (patients) is the broadly defined quality (both in terms of customer service and the services themselves).
The quality of healthcare services is of paramount importance in the primary healthcare sector. Primary healthcare services are funded per capita, which means that organisations receive a fixed fee for each patient who has filed his or her declaration to become a patient of the given organisation. Primary healthcare organisations offering high quality services are chosen by a higher number of patients and as such receive more funding from the state. However, there is a limit on the number of patients that one general practitioner can treat. Therefore, if the number of declarations from patients exceeds the set limit, healthcare organisations might hire new physicians, which triggers the growth of the entire enterprise. In the case of hospital inpatient care and outpatient care, organisations compete for the patient to receive funding in the form of fees for the treatments and procedures performed.
Interestingly, healthcare organisations do not compete for patients with just other organisations. They also have to compete against alternative therapies such as natural medicine, acupuncture, and bioenergetics medicine. Other treatment substitutes include disease prevention via lifestyle changes and healthy living trends such as cutting down on alcohol consumption and smoking and increasing the amount of physical exercise.
To properly understand the issue of competing for patients one must make the distinction between the actual quality of services, manifested in the outcomes for the patient, and patient satisfaction. According to Donabedian [1998], a given standard of healthcare services does not necessarily translate into customer satisfaction. The satisfaction of the service recipient is a broader concept and it extends beyond the clinical process itself, as it is strongly associated with the emotional elements that come with the process. Wiercińska points to the issue of patient experience. Patients attribute quality to medical services based on their observation of the venue of care, the behaviour of staff and the amount and quality of medical equipment available. As such, material evidence and appearance are crucial to how patients perceive a healthcare organisation [Wiercińska, 2012]. Based on research conducted by Lisiecka-Biełanowicz [2001, p. 37], the perception of the quality of medical services by service recipients depends on the competence of the medical staff, the course of the diagnostic and treatment process and on whether the patient feels he or she recovered from a disease or if his or her health has improved. We must bear in mind that in marketing terms, the outcome of a service must be perceived not only as an improvement of health or full recovery, but also as the overall satisfaction of the patient with the treatment process. Of course, full recovery will affect satisfaction, but not all medical services lend themselves well to an evaluation of the quality of the outcome measured based on the criteria of correctly performed procedures or the final diagnosis [Hauke, 1995, p. 10].
Other factors at play, apart from the necessity of making a distinction between the perception of service quality and patient satisfaction, are the trends emerging from the specific nature of healthcare services as a commodity. The results of Polish studies have shown that Polish patients tend to behave irrationally and individual segments of patients demonstrate specific and distinct preferences [Wiercińska, 2012]. In the literature, the emphasis is placed on the asymmetry of the information exchanged between service providers and patients and the uncertainty regarding the healthcare outcomes. Hurley points out that patients may be looking for information regarding the diagnosis and treatment on their own, but it is the service provider that is fully informed about the different treatment options and eventually makes decisions based on their expertise. This results in a situation where demand is triggered in part by doctors. The uncertainty regarding the effectiveness of the treatment gives rise to the necessity of creating a system that would take over this risk. This means that service providers can order certain medical services depending on the insurance status of the patient [Suchecka, 2010].
Competing for personnel
The personnel of a healthcare organisation is defined as a team of doctors, nurses, medical carers and other non-medical staff supporting the treatment process of patients. In the relational capital model, the quality of an entity is determined by the people who work there. Owing to the regulations governing the performance of the medical profession by doctors and nurses, as well as the regulation of compensation (in the case of public entities only), the competition for medical personnel comes down to creating friendly work conditions. In medical practice more emphasis should be placed not on hiring new employees, who often choose their employer based on location and closeness to home, but on establishing a good working relationship with existing employees. This would increase the chances of keeping the employees at the given organisation, which would be perceived as a competitive edge of the given institution and could be considered as an asset by qualified personnel looking for employment. A good employer-employee relationship is defined as equal treatment of all personnel and building mutual trust within the team. In practice, this is manifested in the participation of the low-level personnel in management events or enabling the participation of employees in the decision-making process. One beneficial practice is also endorsing leadership among doctors, who are expected to create teams with other employees with the aim of promoting the growth of the individual divisions or departments. This is one of the main aims of the Healthcare Leadership Model developed by the National Health Service [www 2].
Competing for contracts
In line with the model discussed, the third aspect at play are the relationships with institutions determining the creation of a climate favourable to active participation in the market. The typical mechanisms shaping market relationships are not well suited to the specifics of the healthcare sector owing to the stringent laws and regulations applicable to this market sector.
The National Health Fund is the entity financing the operations of both public and non-public healthcare organisations within the scope specified under the relevant law. The individual organisations compete on the market primarily via the most efficient use of resources to ensure cost-efficiency and secure a competitive edge over other entities. In this context, the National Health Fund does, in fact, create a climate for market competition between healthcare organisations.
In the Polish healthcare system contracts awarded by the public remitter are the fundamental source of financing the healthcare services for the vast majority of healthcare organisations regardless of their ownership structure. At the same time, it is worth noting that the overall public spending on health in Poland is among the lowest among OECD member states. In 2020, health expenditures amounted to 7.1% of GDP [www 3]. The insufficient funding of healthcare is the fundamental problem of both the entire system and the individual healthcare organisations that must compete with one another for contracts awarded by the National Health Fund.
Competing for contracts awarded by a monopolist is not a natural or typical market mechanism and there is no platform for negotiation where entry barriers for new entities could be established. Furthermore, the prices of medical services are set by the National Health Fund alone, often without taking into consideration the actual costs a healthcare organisation must incur to provide the service, including overheads and the costs of administration of the given treatment or diagnostic procedure. Therefore, given the specifics of the Polish healthcare system, where public funding (PLN 121.5 bln in 2020 [www 3]) is the only source of financing healthcare organisations, entities tend to excessively lower their prices and dump services in their contract proposals. Once the contract is awarded, the organisations perform the services but at the same time try to negotiate higher compensation from the Fund via annexes to the contracts. This practice is very risky, but it is something healthcare organisations operating in the Polish healthcare market have to deal with on a daily basis. The Polish healthcare system will soon have to respond to the challenges stemming from demographic changes and the increase of healthcare costs triggered by technological advancement; it must also undergo a transformation to become more cost-efficient. In Poland, there is no law in place that would regulate the operation of additional health insurance options or the cooperation between private insurers and the National Health Fund. The adoption of a new law is necessary in view of the growing market of private insurers, the development of which could streamline the operation of the entire healthcare sector [www 1].
The prevailing mechanisms of competition for contracts are particularly visible in tender procedures for the provision of medical services as part of outpatient and inpatient specialist care. Healthcare organisations file their tenders, which are evaluated based on the criteria of quality and price. However, this procedure gives rise to concerns with respect to private entities that compete for the more highly remunerated procedures, often guided by their economic interests instead of the mission to provide patients with the necessary services.
A detailed presentation of the mechanism of competing for contracts, along with a specification and evaluation criteria, can be found in a memorandum of the National Health Fund President on the evaluation criteria of tenders in contract awarding procedures for the provision of healthcare services [www 4]. The services to be financed by the remitter are listed in a tender that is later evaluated based on price and four other criteria, namely quality, complexity, availability and continuity. The quality criterion covers the competencies of the personnel, the availability of equipment and medical devices, certificates, the implementation of a hospital infection control assessment procedure and an antibiotics policy, as well as the results of the most recent audit conducted by the National Health Fund. The complexity criterion is understood as the possibility of providing healthcare services within a given field throughout the entire process. This criterion takes into consideration the planned structure of healthcare services in the given field or the planned profile of treated cases, access to tests and procedures, having divisions/wards/diagnostic centres within the organisational structure, including centres acknowledged with an entry in the register of entities conducting healthcare activities, and an offer of other types or areas of healthcare services collectively ensuring the continuity of the diagnostic or treatment process. The availability of services is not only assessed in terms of days or hours of work; this criterion also applies to the organisation of patient admissions and the existence of barriers for people with disabilities. The continuity of provision of services is understood as the organisation of the provision of healthcare services ensuring the continuity of the diagnostic or treatment processes, and it covers the organisation of services/treatment stays and the execution of the process of treatment of service recipients as part of the given group of services as at the date of the tender, pursuant to a contract made with the director of the regional division of the National Health Fund. The price criterion is subjective in nature and it is analysed by comparing the price per unit offered by the tenderer, or the final negotiated price, with the price envisaged by the National Health Fund for the given tender procedure [www 5].
Conclusions
Summing up the discussion, the competitiveness of healthcare organisations is a complex issue. Its complexity stems above all from the specifics of the healthcare market which, as was demonstrated in the article, differs significantly from other industries. Apart from the discussed characteristics of this quasi market, as the imperfect healthcare market is sometimes called, the stringent industry regulations are also at play and they considerably limit market competition in the sector.
The paper also features an analysis of the different types of competition present in the healthcare sector. According to Misiński, the perfect competition model is manifested in three different ways, namely as the competition between insurers, between insurers and service providers and between service providers, which provides patients with the freedom to choose their provider.
Despite a high degree of complexity, competition is considered by many stakeholders of the healthcare system as a beneficial activity. The competitiveness of healthcare organisations stemming directly from the discussed problem of sectoral competition may produce many positive effects for service providers, their contractors, industry institutions, and, above all, for patients. | 2022-01-06T16:16:20.059Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "daec1b038fc0f484c2417db6759c526ef88012d6",
"oa_license": "CCBY",
"oa_url": "https://repozytorium.uwb.edu.pl/jspui/bitstream/11320/12336/1/Optimum_4_2021_T_Sikora_K_Kanecki_A_Sikora_M_Bogdan_The_essence_and_scope_of_competitiveness_%20of_healthcare_organisations_in_Poland.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "24404e087a0d0dfc97eee1c7b9bbaf952b1e7a78",
"s2fieldsofstudy": [
"Medicine",
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
246346247 | pes2o/s2orc | v3-fos-license | Experiences on the Improvement of Logic-Based Anaphora Resolution in English Texts
Anaphora resolution is a crucial task for information extraction. Syntax-based approaches are based on the syntactic structure of sentences. Knowledge-poor approaches aim at avoiding the need for further external resources or knowledge to carry out their task. This paper proposes a knowledge-poor, syntax-based approach to anaphora resolution in English texts. Our approach improves the traditional algorithm that is considered the standard baseline for comparison in the literature. Its most relevant contributions are in its ability to handle different kinds of anaphoras differently, and to disambiguate alternate associations using gender recognition of proper nouns. The former is obtained by refining the rules in the baseline algorithm, while the latter is obtained using a machine learning approach. Experimental results on a standard benchmark dataset used in the literature show that our approach can significantly improve the performance over the standard baseline algorithm used in the literature, and compares well also to the state-of-the-art algorithm that thoroughly exploits external knowledge. It is also efficient. Thus, we propose to use our algorithm as the new baseline in the literature.
Introduction
The current wide availability and continuous increase of digital documents, especially in textual form, makes it impossible to manually process them, except for a few selected and very important ones. For the bulk of texts, automated processing is a mandatory solution, supported by research in the Natural Language Processing (NLP) branch of Artificial Intelligence (AI). Going beyond 'simple' information retrieval, typically based on some kind of lexical indexing of the texts, trying to understand (part of) a text's content and distilling it so as to provide it to end users or to make it available for further automated processing is the task of the information extraction field of research, e.g., among other objectives, it would be extremely relevant and useful to be able to automatically extract the facts and relationships expressed in the text and formalize them into a knowledge base that can subsequently be consulted for many different purposes: answering queries whose answer is explicitly reported in the knowledge base, carrying out formal reasoning that infers information not explicitly reported in the knowledge base, etc.
In fact, our initial motivation for this work was the aim of improving the performance of the tool ConNeKTion [1] in expanding automatically the content of the GraphBRAIN knowledge graph [2,3] by automatically processing the literature (e.g., texts on the history of computing [4]).
For our purposes, the system needs to know exactly who the players involved in the facts and relationships are, e.g., given the text "Stefano Ferilli works at the University of Bari. He teaches Artificial Intelligence", the extracted facts might be:
worksAt(stefano_ferilli,university_of_bari).
teaches(he,artificial_intelligence).
So, whilst anaphora and cataphora are clearly disjoint, coreferences are a proper superset of both of them [11]. ConNeKTion already includes an implementation of the anaphora resolution algorithm RAP from [12], but it uses much external knowledge about English, that may not always be useful for technical texts or available for other languages. Thus, we would like to replace it by an algorithm that is more generic and based only on the syntactic structure of the text.
As for most other NLP tasks, the specific steps and resources to carry out an ER activity strictly depend on the language in which the text is written. Different languages have very different peculiarities, some of which have a direct impact on this activity, e.g., while in English, pronouns must always be explicit, in Italian they may be elliptical, implicitly derivable from the verb thanks to the much more varied inflection of verbs than in English. This adds complexity to the task in Italian. On the other hand, in Italian it is often easier than in English to guess the gender and number of a noun or adjective, thanks to the last letter only, which is in most cases decisive to disambiguate cases in which different associations are structurally possible. These considerations were behind the aims of this work:
• Showing that a knowledge-poor, rule-based approach is viable and performant, so that it may be used to deal with languages having a more complex syntax;
• Showing that knowledge about entity gender, which may improve AR performance, may be acquired automatically also for languages where the gender is not obviously detected from morphology alone.
Carrying on a preliminary work started in [13], here we will specifically focus on AR in English, proposing an approach that extends a classical algorithm in the literature in two directions:
1. Improving the set of base rules;
2. Taking into account gender and number agreement between anaphora and referent in the case of proper nouns.
We focused on English because datasets, golden standards and baseline systems are available for it. Still, the approach should be general and easily portable to other languages. The most relevant contributions of our approach are its ability to:
• handle different kinds of anaphoras differently, by extending the rule set of an established baseline algorithm; and
• disambiguate alternate associations, by using automated gender recognition on proper nouns.
The paper is organized as follows. After introducing basic linguistic information about anaphora and AR, and discussing related works aimed at solving the AR task, Section 3 describes our proposed method to improve logic-based AR. Then, Section 4 describes and discusses the experimental setting and results we obtained, before concluding the paper in Section 5.
Basics and Related Work
In this section, we will discuss different types of anaphora, different algorithms developed in the literature to address the AR problem, with their associated strengths and weaknesses, and suitable evaluation approaches for them. In the following, when making examples, we will adopt the convention of using italics for anaphoras and bold for the corresponding antecedents.
Anaphora and Anaphora Resolution
As said, Anaphora Resolution, aimed at finding the references corresponding to anaphoras, is a special case of Entity Resolution. In spite of their etymology, anaphora resolution is sometimes intended as encompassing cataphora resolution, e.g., Reference [14] defines it as "the problem of resolving references to earlier or later items in the discourse. These items are usually noun phrases representing objects in the real world called referents but can also be verb phrases, whole sentences or paragraphs". Additionally, Coreference Resolution is often mistaken for AR, due to their quite similar definitions and aims and to their partial overlapping. However, the difference between them [10] is apparent if we consider that two entities are co-referring to each other "if both of them resolve to a unique referent (unambiguously)" while they are anaphoric "if A is required for the interpretation of B" (differently from coreferences, it is neither reflexive nor symmetric). An example proving that AR is not a special case of CR is provided in [15]: in the sentence "Every speaker had to present his paper", 'his' is anaphoric to 'Every speaker', but it is not co-referring to it. Indeed, by replacing the pronoun with its referent, the resulting sentence "Every speaker had to present [every speaker's] paper" is semantically different from the original one: in the original, each speaker should present one paper (the one he has authored), while in the rewritten sentence each speaker presents the papers of all speakers. Nor is Coreference Resolution a subset of Anaphora Resolution, due to the larger set of references solved by the former, including cataphoric references and other, even more sophisticated, ones.
In the context of AR, the candidate item to be referenced is called anaphora, while the item linked to the anaphora is called reference (or referent). Anaphora can be intra-sentential, when both the anaphora and the reference are in the same sentence, or inter-sentential, if the reference is located in a different sentence than the anaphora. As an example, the two sentences "John took his license when he was 18. He passed his exam at his first attempt." contain several anaphoras (various occurrences of 'he' and 'his'), all referring to entity John. The first two are intra-sentential (located in the same sentence mentioning John), the others are inter-sentential (located in a different sentence).
There are several types of anaphora, that can be classified according to the grammatical form they take. Interestingly, there are also non-anaphoric pronominal references. It is important to recognize them, so as to avoid wrongly resolving false anaphora. We will now briefly discuss each type.
Pronominal Anaphora
The most common type of anaphora, called pronominal anaphora, is expressed by a pronoun (as in the example about John taking his license). Pronominal anaphoras can be classified into four main groups, depending on the type of pronoun [16]: Nominative (he, she, it, they), Reflexive (himself, herself, itself, themselves), Possessive (his, her, its, their), or Objective (him, her, it, them). We would also include Relative pronouns (who, which, that, whose, whom).
Some authors consider as belonging to this category slight variations of pronominal anaphoras, including:
• Discontinuous sets (or 'split anaphora'), first highlighted by Mitkov in [17], where an anaphora corresponds to many entities, to be considered together as a single reference. E.g., in "John and Mary attended a conference. They say it was really inspiring.", the anaphora 'they' refers to both entities 'John' and 'Mary' as a compound entity.
• Adjectival pronominal anaphora [11], an anaphora that refers to an adjectival form of an entity that occurred earlier in the discourse. E.g., in "A meeting of researchers on AI was held in Rome. Such events are very interesting.", the anaphora 'such events' refers to 'A meeting of researchers on AI'.
Non-Anaphoric Pronominal References
A problem affecting ER tasks is the presence of pronouns with no references associated: some of them are simply part of complex sentences and common expressions. Three kinds of references fall into these kinds of non-anaphoric pronouns (a rough heuristic for flagging some of them is sketched after this list):
• Extrapositions are transformations of the texts that cause clauses to be moved (extraposed) earlier or later in the discourse, referencing the subject of the clause with the pronoun 'it'. "It is well-known that in Winter colds are more frequent." 'It' has no corresponding entity as a reference, it just refers to the fact about colds as a whole.
• Clefts are sentences expressing their meaning using a construction more complex than needed, in order to emphasize something. "It was John who wrote the code." 'It' has no reference, it is just there to emphasize John in the sentence meaning, which is simply "John wrote the code", giving him the credit or blame for doing that.
• Pleonastic 'it' is a dummy pronoun with no actual reference, commonly used in natural language in sentences in which there is no subject carrying out the action, e.g., in "It's raining.", 'it' is only needed to express a statement on the weather conditions but there is no one who "is raining".
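To make the idea of filtering such non-anaphoric pronouns concrete, the following minimal sketch flags some of these cases with two hand-written patterns. The patterns are illustrative assumptions of ours, not the detection rules of any system discussed in this paper, which typically exploit richer lexical and syntactic information.

```python
import re

# Purely illustrative patterns (assumptions, not the rules of any cited system):
# extrapositions/clefts like "It is well-known that ..." and weather/time uses of 'it'.
PLEONASTIC_PATTERNS = [
    r"\bit\s*(?:is|was|'s)\s+[\w-]+\s+(?:that|to|who)\b",
    r"\bit\s*(?:is|was|'s)\s+(?:raining|snowing|late|early|cold|hot)\b",
]

def looks_pleonastic(clause):
    """Flag occurrences of 'it' that are unlikely to be anaphoric."""
    clause = clause.lower()
    return any(re.search(pattern, clause) for pattern in PLEONASTIC_PATTERNS)

print(looks_pleonastic("It is well-known that in Winter colds are more frequent."))  # True
print(looks_pleonastic("It's raining."))                                              # True
print(looks_pleonastic("John bought a laptop and it broke."))                         # False
```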
Noun Phrases
A less frequent type of anaphora is expressed by noun phrases. The most common cases in this group are Definite Noun Phrase Anaphora, where the anaphora to be referenced is a noun phrase preceded by the definite article 'the'. "Climate change is endangering the Earth. The problem is well-known". This is one of the most difficult kinds of anaphora to spot since lots of definite noun phrases in the text can have this role, and identifying them requires understanding the semantics of the text (in this case, the knowledge that climate change is a problem).
An even more subtle variant of noun phrases is the Zero Anaphora [11], in which the anaphora is not necessarily definite. A hint to spot this kind of anaphora is knowing that there is always a gap (a punctuation mark such as a colon, semicolon, period, parenthesis, etc.) between the anaphora and the reference. "You have two advantages: your skill and your experience." Both anaphoras refer to the same antecedent.
A more ambiguous kind of noun anaphora than the previous ones is the Bridging Anaphora [11]. It is based on a semantic relation existing between a noun phrase and an antecedent entity. There is no way to spot this relation without knowing about that relation, e.g., in "I wanted to run that program, but I knew that the sort procedure was incomplete", the anaphora refers to a component of the reference, and the bridge is represented by the implicit fact that the specific program includes a sort procedure (including procedures is a semantic property of programs).
Other Anaphoric References
There are cases, not considered by the previous groups, that are still anaphoric when not accompanied by other terms in the same noun phrase. These anaphoras are primarily differentiated according to their lexical role, among which:
• Indefinite pronouns (presuppositions) (all, no, one, some, any, more, most, a lot, lots, enough, less, few, a little, etc.). "The computer is not powerful enough. He should buy a new one."
• Ordinals (first, second, third, last, etc.). "John won't spend any more money in computers. This is the third he buys in a year."
• Demonstratives (this, that, these, those). "If John could run his program on this computer, he should be able to run it on that too."
Anaphora Resolution Algorithms
We will refer to a recent survey on anaphora resolution algorithms reported in [11], selecting some of the approaches to be discussed in more detail based on their closer relationship to the solution we are going to propose in this paper.
Traditionally, approaches to anaphora resolution are rule-based. Whilst more recently approaches based on Neural Networks and Deep Learning (CNN, LSTM, 2D-CNN, transformers) have been proposed, this paper is specifically aimed at showing the behavior, performance and strengths of rule-based approaches to AR. In fact, they carry several advantages:
• Not depending on the distance between the anaphora and its reference, since general rules working on the syntactic structure of the text are used;
• Not requiring huge annotated training sets, just a linguistic expert to formalize the rules for AR in the language;
• Being explainable, since they work in the same way as humans do (also allowing application for educational purposes).
For this reason, we will not delve further into sub-symbolic or deep approaches to AR. Rule-based algorithms can be associated with three major philosophies that have inspired improvements and variants: Syntax-based approach which "works by traversing the surface parse trees of the sentences of the texts in a particular order". The representative of this philosophy is Hobbs' algorithm [18].
Discourse-based approach which relies on spotting "the focus of the attention, choice of referring expression and perceived coherence of utterances within a discourse segment". The representative of this philosophy is Grosz et al.'s Centering Theory [19].
Hybrid approach which relies on combining the previous two approaches. The representative of this philosophy is Lappin and Leass' algorithm [12], that determines the best reference according to its salience value, which depends on recency and grammatical function.
Another relevant distinction is between knowledge-rich and knowledge-poor approaches. Whilst the former rely on external knowledge and resources, the latter try to exploit, as far as possible, only the text itself and the syntactic information that can be derived from it. Most of the algorithms are knowledge-rich, or have optional knowledge-rich components, to improve their performance. This poses the additional problem of obtaining these resources, that are not always available or of sufficient quality for all languages [20]. One of the latest systems based on the knowledge-rich approach is COCKTAIL [21], using WordNet [22] as the external resource. WordNet is a manually developed lexical ontology for English; similar initiatives exist for a few other prominent languages, but do not always match the quality of the original. The most prominent representative of knowledge-poor algorithms is CogNIAC [23]. It adopts a minimalistic approach that focuses on specific types of references, trading recall for higher precision (see Section 2.3 for an introduction to these metrics).
Liang and Wu [16] have learnt the lesson from all the previous works, developing a system based on heuristic rules and on checking several properties based on the ideas of both Lappin and Leass (sentence recency) and the Centering Theory. They improved performance using number, gender and animacy agreement, exploiting WordNet as COCKTAIL previously did, and carrying out pleonastic 'it' detection. After Liang and Wu's work, the research interest started to shift towards Coreference Resolution since it addresses a broader, but different scope than the AR task.
Hobbs' algorithm [18] is widely recognized as one of the most powerful baseline approaches in the rule-based strategies. It has been considered as a reference for comparison by every new approach or improvement of older approaches because of its performance. For this reason, we will take it as the baseline to evaluate performance of our approach, as well. While our approach is much simpler, we will compare it also to Liang and Wu's approach, to check how much we can approach that state-of-the-art performance without bringing to bear so much power or requiring so much previous knowledge. We will now describe in some more detail these approaches.
Hobbs' Naïve Algorithm
Hobbs' algorithm, purposely intended for the AR task, was proposed in 1978 [18]. It works on the parse trees of the sentences in the text, expressing the syntactic structure of the word sequences that make them up in terms of Part-of-Speech (PoS) tags, using as a standard reference the Penn Treebank tagset [24]. We can distinguish two kinds of nodes in the hierarchical structure of the parse tree:
• Internal nodes, that contain PoS tags representing the sub-tree rooted in them;
• Leaf nodes, associated with simple tokens (elementary textual parts of the sentence).
Figure 1 shows the parse tree of the two sentences about John taking his license, proposed in Section 2.1. The most relevant PoS tags for our purposes are 'S', for sentences, and 'NP', for noun phrases. For each anaphora, the algorithm traverses the tree looking for the reference under the NP branches, following these steps:
1. Starting from the NP node immediately dominating the pronoun, climb the tree until the first NP or S. Call this node X, and call the path p.
2. Traverse left-to-right, breadth-first, all branches under X to the left of p. Propose as reference any NP that has an NP or S between it and X.
3. If X is the highest S in the sentence, traverse the parse trees of previous sentences in order of recency. Traverse each tree left-to-right, breadth-first. When encountering an NP, propose it as a candidate reference. If X is not the highest node, go to step 4.
4. From X, climb the tree until the first NP or S. Call it X, and p the path to X.
5. If X is an NP and p does not pass through the node N immediately dominated by X, propose X as a candidate reference.
6. Traverse left-to-right, breadth-first, all branches below X to the left of p. Propose any NP encountered as a candidate reference.
7. If X is an S node, traverse all the branches of X to the right of p but do not enter the subtree under any NP or S encountered. Propose any NP as a candidate reference.
8. Go to step 3.
The algorithm involves two main sections:
• the former (steps 1-2) climbs the syntactic tree for the first time and explores it, to find intra-sentential candidates (a simplified code sketch of this search is given below);
• the latter (steps 4-7) continues climbing, seeking a new NP, and checking whether a potential NP can be the antecedent or just exploring a new path.
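To make the traversal order concrete, the following Python sketch (based on nltk parse trees) approximates only steps 1-3: it climbs from the NP dominating the pronoun to the first NP or S above it, collects the NPs in the branches to the left of the path (omitting, for brevity, the 'intervening NP or S' condition of step 2), and then scans previous sentences breadth-first. It is a didactic simplification, not a full implementation of Hobbs' algorithm.

```python
from nltk import Tree

def climb(tree, path):
    """Step 1/4: from the node at `path`, climb to the nearest NP or S ancestor."""
    for i in range(len(path) - 1, -1, -1):
        ancestor = tree[path[:i]]
        if isinstance(ancestor, Tree) and ancestor.label() in ("NP", "S"):
            return path[:i]
    return None

def nps_breadth_first(tree, start=()):
    """Left-to-right, breadth-first traversal proposing every NP encountered."""
    proposals, queue = [], [start]
    while queue:
        pos = queue.pop(0)
        node = tree[pos]
        if isinstance(node, Tree):
            if node.label() == "NP":
                proposals.append(" ".join(node.leaves()))
            queue.extend(pos + (i,) for i in range(len(node)))
    return proposals

def hobbs_steps_1_to_3(sentences, sent_idx, pronoun_np_path):
    """Simplified steps 1-3: NPs left of the path, then previous sentences."""
    tree = sentences[sent_idx]
    x = climb(tree, pronoun_np_path)
    candidates = []
    if x is not None:
        branch = pronoun_np_path[len(x)]        # index of the branch containing the pronoun
        for i in range(branch):                  # branches under X to the left of the path
            candidates += nps_breadth_first(tree, x + (i,))
    for prev in reversed(sentences[:sent_idx]):  # step 3: previous sentences, most recent first
        candidates += nps_breadth_first(prev)
    return candidates

s1 = Tree.fromstring("(S (NP (NNP John)) (VP (VBD took) (NP (PRP$ his) (NN license))"
                     " (SBAR (WHADVP (WRB when)) (S (NP (PRP he)) (VP (VBD was) (NP (CD 18)))))))")
s2 = Tree.fromstring("(S (NP (PRP He)) (VP (VBD passed) (NP (PRP$ his) (NN exam))"
                     " (PP (IN at) (NP (PRP$ his) (JJ first) (NN attempt)))))")
print(hobbs_steps_1_to_3([s1, s2], 1, (0,)))  # candidates for 'He': 'John' is proposed first
```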
These two sections are delimited by steps (3) and (8), that iterate the process and eventually seek for the first NP encountered in the analysis of the past sentence nearest to the one just visited. Let us show the practical application of the algorithm for the five pronouns in the text in Figure 1.
In all five cases the search ultimately leads to 'John' as the antecedent. For the pronouns of the first sentence, climbing from the NP dominating the pronoun eventually reaches the top S node (2), and the left-to-right, breadth-first traversal of the branches under it to the left of the path yields a single candidate, NP (3), i.e., 'John'; since (2) is the highest S node and there is no previous sentence, the search stops there. For the pronouns of the second sentence, once the highest S node of that sentence is reached, step 3 triggers the left-to-right, breadth-first traversal of the previous sentence, where NP (3) is again the first noun phrase proposed; when no further previous sentences are available, the search stops.
It is apparent that this algorithm is only based on the grammatical role of the sentence components, completely ignoring gender and number agreement of anaphora and referent. This information would be of great help to disambiguate some candidate references, but would require external information and thus would transform the approach into a knowledge-rich one. This issue was considered by Hobbs himself in [18] as a direction for improving his algorithm, and has been an important source of performance improvement for all subsequent works on anaphora resolution (e.g., [12,16,19,21,23,25]).
Indeed, Hobbs subsequently proposed a semantic algorithm that relies on various selectional constraints based on the impossibility of action (e.g., dates cannot move, places cannot move, large fixed objects cannot move, etc.). For our work we started from the basic (naïve) version, rather than the improved one, because we want our results to be primarily derived by rules applied to the grammatical structure of the sentence, so as to be as general as possible. This choice is motivated by the fact that relying on lots of specific semantic rules might cause overfitting, since these kinds of rules are highly specific to their very small range of action. Moreover, the two versions were compared by Hobbs himself, showing that the performance achieved by the semantic algorithm for the adopted metric (Hobbs' metric) is just +3.4% on the manually evaluated texts, which is reasonably low considering that the assessed performance of the base algorithm already reached 88.3%.
Liang and Wu's Approach
Liang and Wu's system [16] was proposed in 2004 as an automatic pronominal anaphora resolution system for English texts. Initially aimed at accomplishing its task using heuristic rules, exploiting the WordNet ontology and obtaining further information about gender and number, its accuracy was subsequently improved by extracting information about animacy and handling pleonastic 'it' pronouns.
The process carried out by this system consists of a pipeline of steps. Once the raw text is acquired, it undergoes PoS tagging and an internal representation is built. An NP finder module finds all the candidate anaphoras to be solved. All pleonastic 'it' pronouns are excluded from the processing by a specific module. Each remaining anaphora generates a candidate set of references to which number agreement is applied. After that, they undergo the gender agreement and animacy agreement checks, leveraging the support provided by WordNet. The agreeing candidates are evaluated by heuristic rules, classified into preference and constraint rules, and the final decision is entrusted to a scoring equation dependent on the rule premises, consequences and the amount of agreement to that rule. The employed heuristic rules include syntactic and semantic parallelism patterns, definiteness ('the' + NP to address people, e.g., "the good programmer" is not an indefinite noun), mention frequency, sentence recency (Lappin and Leass' algorithm highly regarded this factor), non-propositional noun phrase rule (in Lappin and Leass' algorithm, the ranking is: subject > direct object > indirect object) and conjunction constraint (conjunct noun phrases cannot refer to each other).
Evaluation
ER/AR can be considered as a kind of information retrieval task, where the queries are the anaphoras and the results are the references. So, straightforward performance evaluation metrics would be Precision, P = TP / (TP + FP), and Recall, R = TP / (TP + FN), expressing, respectively, the ratio of correct answers among the answers given and the ratio of correct answers over the real set of correct answers, in terms of parameters TP (True Positives, the number of items correctly retrieved), FP (False Positives, the number of items wrongly retrieved), FN (False Negatives, the number of items wrongly discarded) and TN (True Negatives, the number of items correctly discarded).
These metrics require all correct answers for the dataset (the ground truth or golden standard) to be known, and they ignore the fact that, in AR, the queries themselves are not known in advance but the system itself is in charge of identifying the anaphoras (and thus it may misrecognize both the candidate anaphoras, in the first place, and their associated reference, subsequently). For this reason, Hobbs also introduced in [18] a measure which gives insights regarding the precision of AR algorithms specifically: H = (number of anaphoras correctly resolved) / (number of anaphoras the algorithm attempts to resolve). In fact, this metric has been widely adopted in the literature, including in the work by Liang and Wu which we will use for comparison.
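As a small worked illustration of these measures (the exact form given here for the Hobbs measure, correct resolutions over attempted resolutions, is our reading of it, consistent with the description above):

```python
def precision(tp, fp):
    # Ratio of correct answers among the answers given.
    return tp / (tp + fp)

def recall(tp, fn):
    # Ratio of correct answers over the real set of correct answers.
    return tp / (tp + fn)

def hobbs_measure(correctly_resolved, attempted):
    # Share of anaphoras resolved to the right antecedent among those attempted.
    return correctly_resolved / attempted

# Example: 100 anaphoras in the gold standard; the system attempts 90 and gets 72 right.
print(precision(72, 18), recall(72, 28), hobbs_measure(72, 90))  # 0.8 0.72 0.8
```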
A section in survey [11] is purposely devoted to the datasets available in the literature for generic reference resolution. It also includes a discussion on, and a comparison of, the datasets employed by the other research works in the field. The most important publicly available datasets for AR are:
• The Automatic Content Extraction (ACE) corpus [26], developed between 2000 and 2008, containing news-wire articles and labelled for different languages, including English;
• The Anaphora Resolution and Underspecification (ARRAU) corpus [27], developed around 2008 as a combination of several corpora, namely TRAINS [28,29], English Pear [30], RST [31] and GNOME [32].
Both are available through the Linguistic Data Consortium (https://www.ldc.upenn.edu/, accessed on 10 November 2021), but are not free. On the other hand, almost all previous works in the field of AR use books, magazines, manuals, narratives, without specific references. Exceptions are Hobbs [18] and Liang and Wu [16], that both use the Brown Corpus [33] (http://icame.uib.no/brown/bcm.html, accessed on 10 November 2021) for evaluating their algorithms (differently from Liang and Wu, Hobbs uses many sources, including part of this dataset). The main issue with this corpus is that it is not an AR corpus strictly speaking, i.e., with annotated anaphoras, but just an American English corpus, which means that it needed a preliminary annotation step for AR purposes.
The Brown University Standard Corpus of Present-Day American English (or simply Brown Corpus) is a linguistic dataset initially compiled by Kučera and Francis in the 1960s and updated several times until 1979. It consists of 500 samples with 2000+ words each, for a total of 1,014,312 words. Samples are divided into 15 different genres, identified by codes and belonging to two main categories:
• Informative prose (374 samples), including:
- Press texts (reportage, editorial, review);
- Books and periodicals (religion, skills and hobbies, popular lore, belles lettres, biography, memoirs);
- Government documents and other minorities (foundation and industry reports, college catalog, industry house organ);
- Learned texts (natural sciences, medicine, mathematics, social and behavioral sciences, political science, law, education, humanities, technology and engineering).
• Imaginative prose (126 samples), including:
- Novels and short stories (general fiction, mystery and detective fiction, science fiction, adventure and western fiction, romance and love story, humor).
Table 1 reports the performance of the most relevant AR systems as reported in [11], with notes on the experimental setting used, including the dataset and metrics. For the features, we use abbreviations 'Sn' for Syntax, 'D' for Discourse, 'M' for Morphology, 'Sm' for Semantics, and 'Sr' for Selectional rules. Both Hobbs and Liang and Wu used the Brown Corpus as the experimental dataset, and evaluated performance using the Hobbs' metric. Only the basic approach by Hobbs uses just syntax.
Table 1. Performance of the most relevant AR systems [11] (System | Dataset | Performance | Features):
Centering Theory [25] | fiction and non-fiction books from [18], others | H = 77.6% | D
CogNIAC [23] | Narratives | P = 92%, R = 64% | D, Sn
CogNIAC [23] | MUC-6 | P = 73%, R = 75% | D, Sn
Liang and Wu [16] | Brown Corpus (random samples) | H = 77% | Sm, D, Sn
Proposed Algorithm
The AR strategy we propose extends the original algorithm by Hobbs. When deciding the starting point in developing our strategy, we could choose either of the two main AR approaches known in the literature, i.e., the syntax-based approach (Hobbs) or the discourse-based approach (Centering Theory). We opted for the former because it emulates the conceptual mechanism used by humans to find the correct references for anaphoric pronouns, expressed in the form of grammatical rules. Furthermore, Hobbs' algorithm is still highly regarded in AR literature as a strong baseline for comparisons, given its simplicity, ease of implementation and performance [11]. On the other hand, we left mixed (syntactic and discourse-based) approaches, like Lappin and Leass', for possible future extension of the current algorithm. In fact, Lappin and Leass' ideas have already been exploited in many works, while attempts at improving Hobbs' algorithm have often been neglected, which further motivated our choice.
In a nutshell, we propose a syntax-based algorithm for AR that takes Hobbs' naïve algorithm as a baseline and extends it in two ways:
1. Management of gender agreement on proper nouns. Gender can be associated with adjectives and nouns, and in the latter case with common or proper nouns. While common nouns can be found in dictionaries and thesauri, there are less obvious standard resources to obtain the gender of proper nouns. We propose the use of rules and pattern matching, using models built by Machine Learning algorithms, starting from a training set of first names whose gender is known and using the last letters of such names (i.e., their suffixes of fixed size) as the learning features.
2. Refinement of Hobbs rules. Hobbs' algorithm adopts a "one size fits all" perspective, trying to address all anaphora typologies with a single algorithm, but it fails on possessive and reflexive pronouns when looking for their reference intra-sententially: the subject side is never accessed. This flaw has been successfully corrected in our rules.
We consider our study on proper noun gender recognition to be our main novel contribution to the landscape of AR. Our refinement of the rules in Hobbs' algorithm should also be relevant.
GEARS
We called our approach GEARS, an acronym for 'Gender-Enhanced Anaphora Resolution System'. It takes as input a (set of) plain text(s), and returns a modified version of the original text(s) in which the anaphoras have been replaced by their referents. Algorithm 1 describes the overall processing workflow carried out by GEARS.
Algorithm 1 GEARS.
Require: set of documents C; window size w
for all documents (sequences of sentences) to be processed d = s_1, ..., s_n ∈ C do
    for all i = 1, ..., n (sentences in d) do
        resolve all anaphoras in s_i (i-th sentence in d) using as the sliding window of sentences fol(parse(s_{i-w+1})), fol(parse(s_{i-w+2})), ..., fol(parse(s_i))
    end for
end for

Each document is processed separately, since an anaphora in a document clearly cannot refer to an entity in another document. Furthermore, to delimit the search space for references, and ensure scalability, each document is actually processed piecewise, each piece consisting of a sliding window of a few sentences. GEARS is applied iteratively to each sentence in the text, providing a fixed-size window of previous sentences, whose size is a parameter to our algorithm. At each iteration the window is updated, by adding the new sentence to be processed and removing the oldest one (the first) in the current window.
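A minimal sketch of this loop is shown below; parse, fol and resolve are placeholders for the parsing, logical-translation and rule-based resolution components (the names are ours and do not correspond to actual GEARS internals).

```python
from collections import deque

def gears_pass(sentences, w, parse, fol, resolve):
    """Sketch of the Algorithm 1 loop for a single document."""
    window = deque(maxlen=w)          # the oldest sentence is dropped automatically
    resolutions = []
    for sentence in sentences:
        window.append(fol(parse(sentence)))        # logical form of the new sentence
        resolutions.extend(resolve(list(window)))  # resolve anaphoras of the newest sentence
    return resolutions
```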
Actually, GEARS does not work on the plain text of the sentences, but on a logical representation (obtained by applying function 'fol' in the algorithm) of their parse tree (obtained by applying function 'parse' in the algorithm).
So, when moving the sliding window, the new sentence to be added undergoes PoS tagging and its parse tree is extracted. As said, terminal (leaf) nodes in these trees are literals that represent a textual part of the sentence, while non-terminal nodes (internal ones and the root) are PoS tags indicating a phrase type or a syntactic part of the discourse. Then, the parse tree is translated into a First-Order Logic formalism to be used by the rule-based AR algorithm. Each node in the parse tree is assigned a unique identifier and the tree is described as a set of facts built on two predicates: node/2, reporting for each unique node id the corresponding node content (PoS tag or literal), and depends/2, expressing the syntactic dependencies between pairs of nodes in the parse tree (i.e., the branches of the tree). Figure 2 reports an example of FOL formalization for the sentences concerning John and his license, whose parse tree was shown in Figure 1. During processing, facts built on another predicate, referent/2, are added to save the associations found between already resolved anaphoras (first argument) and their corresponding referents (second argument). We assign named tags to punctuation symbols found in the tree nodes as well, since they are not associated with names from the core parser.
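The translation can be sketched as follows for nltk parse trees; the numbering scheme and the argument order of depends/2 are assumptions made for illustration and need not coincide with those used to produce Figure 2.

```python
from itertools import count
from nltk import Tree

def tree_to_facts(tree):
    """Emit node/2 and depends/2 facts describing a parse tree."""
    ids, facts = count(1), []

    def visit(node, parent_id):
        node_id = next(ids)
        label = node.label() if isinstance(node, Tree) else str(node)
        facts.append(f"node({node_id},'{label}').")
        if parent_id is not None:
            facts.append(f"depends({parent_id},{node_id}).")  # parent-to-child (assumed order)
        if isinstance(node, Tree):
            for child in node:
                visit(child, node_id)

    visit(tree, None)
    return facts

for fact in tree_to_facts(Tree.fromstring("(S (NP (NNP John)) (VP (VBD smiled)))")):
    print(fact)
```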
This is all, as far as pre-processing is concerned. Then, actual processing takes place along two main phases:
1. Anaphora Detection: from the acquired parse trees of the sentences, pronouns are extracted and compared to the already resolved pronouns.
2. Anaphora Resolution: for anaphoric pronouns discovered in the previous phase, the number is assessed, and the gender is assessed only for singular pronouns, which might be associated with proper nouns. Then, the rules for AR are applied to all of them in order to find their referents, ensuring that number (and gender for singular anaphoras) match. Note that the referent of an anaphora can in turn be an anaphora, generating a chain of references. In such a case, since the previous references must have been resolved in previous iterations, the original (real) referent is recursively identified and propagated to all anaphoras in the chain.
For each anaphora detected in phase 1, the activities of phase 2 are carried out by a Pattern-Directed Inference System specifying the rule-based AR algorithm described in detail in Sections 3.1.1 and 3.1.2. It works on the facts in the logical representation of the sentences in the sliding window and, for each identified anaphora, it returns a set of 4-tuples of the form (SA, A, R, SR), where SA represents the sentence in which the anaphora is found, A represents the anaphora itself, R represents the referent (or '-' if no referent can be found), and SR represents the sentence in which the referent is found (or '-' if no referent can be found). The 4-tuples corresponding to resolved anaphoras (i.e., anaphoras for which a referent is identified) are added, along with additional operational information, to a so-called 'GEARS table' associated with the text document under processing. Since each anaphora can have only a single referent, the pair (SA, A) is a key for the table entries. Finally, when the generation of the GEARS table is complete, a post-processing phase is in charge of using it to locate in the text the actual sentences including the resolved anaphoras and replacing the anaphoras by their referents found in the previous step, so as to obtain the explicit text. Since the AR core cannot 'see' the actual sentences as plain texts (it only sees their parse trees), it must regenerate the text of SA and SR by concatenating the corresponding leaf nodes in their parse trees. The sentence in the 4-tuple acts as a pattern that, using regular expressions, is mapped onto the parts of text that correspond to the words in the leaves of the parse trees. Then, find and replace methods are applied to the original text, based again on the use of regular expressions. The updated texts are saved in a new file.
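The final substitution step can be illustrated by a simple whole-word replacement; this is only a sketch of the idea and ignores the regular-expression mapping from parse-tree leaves back to the raw text described above.

```python
import re

def replace_anaphora(text, sentence, anaphora, referent):
    """Locate `sentence` in `text` and replace the first whole-word occurrence
    of `anaphora` inside it with `referent`."""
    start = text.find(sentence)
    if start == -1:
        return text                                   # sentence not found: leave the text as is
    fixed = re.sub(rf"\b{re.escape(anaphora)}\b", referent, sentence, count=1)
    return text[:start] + fixed + text[start + len(sentence):]

text = "John took his license when he was 18. He passed his exam at his first attempt."
print(replace_anaphora(text, "He passed his exam at his first attempt.", "He", "John"))
```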
A graphical representation of the overall workflow is shown in Figure 3. From the set of documents on the far left, one is selected for processing and the sliding window (red squares) scans it, extracting the parse tree of sentences and identifying pronouns (red circles in the document and parse trees). Then, our AR strategy (1) processes all these anaphoras, distinguishing them into singular or plural, and then further distinguishing singular ones into proper nouns and others. Pronouns that cannot be associated with any referent are considered as non-anaphoric. Among singular anaphoras, those referring to proper nouns are identified and disambiguated relying on the model for recognizing the gender of proper nouns (2) automatically obtained using Machine Learning approaches. We will now provide the details of our components (1) and (2).
Gender and Number Agreement
GEARS checks gender and number agreement using a set of ad hoc rules, applied separately to the anaphoras and the referents. Priority is given to the number of the anaphora, firstly because the anaphora is found and analyzed before the referent. If its number is plural, then we do not check the gender attribute, since plural nouns of different genders occurring together are infrequent and, in any case, our algorithm gives priority to the closest candidate. So, the chances of failure in this situation are low.
Assessment of Number of Anaphoras
Each anaphora is located in a sub-tree rooted in an NP. If such NP node has a pronoun child, then its number is easily assigned based on the pronoun's number: • 'Singular' for pronouns he, him, his, himself, she, her, hers, herself, it, its, itself; • 'Plural' for pronouns they, them, their, theirs, themselves.
Assessment of Number of Referents
Like anaphoras, each referent is located in a sub-tree rooted in an NP. The number is assigned to the NP node primarily based on its child in the tree, using the following rules (listed by decreasing priority).
• If the child of the NP node is:
  - a plural generic noun (e.g., 'computers'), or
  - a plural proper noun (e.g., 'the United States'), or
  - a singular noun with at least one coordinative conjunction (e.g., 'the computer and the mouse', 'John and Mary'),
  then the number of the referent is plural;
• If the child of the NP node is:
  - a singular generic noun (e.g., 'computer'), or
  - a singular proper noun (e.g., 'John'),
  then the number of the referent is singular;
• If the child of the NP node is a pronoun, then the corresponding number is determined using the rules for anaphoras described in the previous paragraph.
Assessment of Gender of Singular Anaphoras
The gender of (singular) anaphoras is easily determined as for number. Analyzing the NP sub-tree that contains the pronoun, its gender is easily assigned based on the pronoun's gender: • 'Masculine' for pronouns he, him, his, himself; • 'Feminine' for pronouns she, her, herself; • 'Neutral' for pronouns it, its, itself.
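Both agreement checks amount to small lookup tables over closed pronoun classes. The sketch below simply encodes the lists given above in Python; it is an illustration, not the Prolog rules actually used by GEARS.

NUMBER = {
    "singular": {"he", "him", "his", "himself", "she", "her", "hers", "herself",
                 "it", "its", "itself"},
    "plural":   {"they", "them", "their", "theirs", "themselves"},
}
GENDER = {
    "masculine": {"he", "him", "his", "himself"},
    "feminine":  {"she", "her", "herself"},
    "neutral":   {"it", "its", "itself"},
}

def pronoun_number(pronoun):
    p = pronoun.lower()
    return next((n for n, prons in NUMBER.items() if p in prons), None)

def pronoun_gender(pronoun):
    p = pronoun.lower()
    return next((g for g, prons in GENDER.items() if p in prons), None)

assert pronoun_number("They") == "plural"
assert pronoun_gender("herself") == "feminine"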
Assessment of Gender of Referents
The gender is assigned to a referent NP node using the following rules, ordered by decreasing priority. The gender of the referent:
• is 'neutral' if the child of the NP node is neither a proper noun nor a pronoun;
• corresponds to the gender implied by a person-like generic noun relating to a profession or to parenthood, if the NP node has one as child;
• corresponds to the gender of the pronoun, if the NP node has one as child;
• corresponds to the gender of the proper noun, if the NP node has one as child.
The gender for proper nouns is recognized based on the Machine Learning approach described later in this section.
Improvement over Base Rules
As said, Hobbs quite successfully applied a single algorithm to all kinds of targeted pronouns, with remarkably good results. While his theory is correct, in practice the different types of pronouns occur in different ways in text due to their nature, and also follow different rules in natural language. Based on this observation, we developed slight specializations of Hobbs' rules according to the kind of pronoun to be resolved. This resulted in three similar but distinct algorithms for the four types of anaphoras presented in Section 2 (subjective, objective, possessive and reflexive).
For example, we observed in the sentences that both reflexive and possessive pronouns strongly favour recent referents with respect to the anaphora when the algorithm is looking for intra-sentential referents. For this reason, our variant removes the intra-sentential constraint that prevents the NP nearest to S from being considered as a potential candidate. This amounts to the following change in Step 2 of Hobbs' algorithm for the resolution of possessive anaphoric pronouns:
2. Traverse left-to-right, breadth-first, all branches under X to the left of p. Propose as referent any NP (Hobbs: ... any NP that has an NP or S between it and X).
On the example about John taking his license in Figure 1 our algorithm works the same as Hobbs', as shown in Section 2.2.1.
On the other hand, since reflexive anaphoras are necessarily intra-sentential, we designed the following specific strategy for the resolution of reflexive anaphoric pronouns (a code sketch of this search follows the listing):
1. Start at the NP node immediately dominating the pronoun.
2. REPEAT
   (a) Climb the tree up to the first NP or S node. Call this X, and call the path p.
   (b) Traverse left-to-right, breadth-first, all branches in the subtree rooted in X to the left of p. Propose as a candidate referent any NP under X.
3. UNTIL X is the highest S in the sentence.
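The following Python sketch illustrates this search on an NLTK ParentedTree, using a slightly simplified version of the reflexive example discussed next ('John found himself in a small laboratory'). It is only an approximation of the strategy for illustration purposes; the actual implementation is the Prolog rule set working on the node/depends facts.

from nltk.tree import ParentedTree

def candidate_referents_reflexive(pronoun_node):
    """Collect candidate NP referents for a reflexive pronoun by climbing the tree."""
    candidates = []
    path_child, x = pronoun_node, pronoun_node.parent()
    while x is not None:
        # (a) climb up to the first NP or S node; remember the child on the path p
        while x is not None and x.label() not in ("NP", "S"):
            path_child, x = x, x.parent()
        if x is None:
            break
        # (b) breadth-first over the branches of X to the left of the path p
        queue = list(x[: path_child.parent_index()])
        while queue:
            node = queue.pop(0)
            if isinstance(node, ParentedTree):
                if node.label() == "NP":
                    candidates.append(node)
                queue.extend(node)
        if x.parent() is None:          # X is the highest S in the sentence: stop
            break
        path_child, x = x, x.parent()   # otherwise keep climbing
    return candidates

sent = ParentedTree.fromstring(
    "(S (NP (NNP John)) (VP (VBD found) (NP (PRP himself)) "
    "(PP (IN in) (NP (DT a) (JJ small) (NN laboratory)))))")
pronoun = sent[1][1]                    # the NP directly dominating 'himself'
print([" ".join(np.leaves()) for np in candidate_referents_reflexive(pronoun)])
# -> ['John']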
Let us provide two examples (one per intra-sentential kind of anaphora) that this algorithm can successfully solve whereas Hobbs' naïve one cannot:
Reflexive: "John found himself in a small laboratory programming."
Possessive: "Every day the sun shines with its powerful sunbeams."
Their parse trees are shown in Figures 5 and 6, respectively. Let us start from the reflexive example, with anaphora 'himself' (10):
1. The NP node immediately dominating the pronoun is (8).
3. X is already the highest S in the sentence: stop.
Let us now turn to the possessive example, with anaphora 'its' (20):
1. (1) is the highest S node in the sentence, but there is no previous sentence.
4. No branches of X to the right of p.
This experience shows the importance of adopting a rule-based approach over subsymbolic ones: one may understand the faulty or missing parts of the AR strategy and make up for them by modifying or adding parts of the strategy.
Gender Recognition
Recognizing the gender of names is relevant in the context of AR because it can improve the resolution of masculine or feminine pronouns by excluding some wrong associations. Whilst for common nouns a vocabulary might do the job, the task is more complex when the referent is a proper noun. One way of endowing our approach with gender prediction capabilities on proper nouns would be to use online services that, when queried with a proper noun, return its gender. However, such services are usually non-free and would require an external connection. For this reason, we turned to the use of a local model obtained through Machine Learning. This section describes our initial attempt at learning gender models for people's names through the fixed-length suffix approach. Suffixes are a rather good indicator of gender in proper nouns. Indeed, their use as features has already been considered in the literature [34], yielding quite good performance. For example, Italian names that end in '-a' most probably refer to women, while names that end in '-o' are usually male. Of course, in general (e.g., in English) the task is much more complex, justifying the use of Machine Learning to extract non-obvious regularities that are predictive of the name gender.
For this reason, we decided to investigate the predictiveness of suffixes in proper nouns to determine their gender. We tried an approach using a fixed suffix length: specifically, suffixes of 1, 2 or 3 characters. Longer suffixes were not considered to avoid potential overfitting. Using the letters in the suffix as features, we considered different machine learning approaches:
• Logistic regression, as a baseline, since it is the simplest classifier to be tested on a classification task and is commonly used in the field of NLP;
• Decision trees, which we considered an interesting option because the last n characters (n = 1, 2, 3) of the name used as features become tests in the learned tree, which is in this way interpretable by humans;
• Random forests, an ensemble learning method that might improve the performance of decision trees, and especially avoid overfitting, by building multiple decision trees for the same set of target classes.
In the feature extraction step, for names made up of fewer characters than the length of the required suffix (e.g., 'Ed' when extracting suffixes of length 3), the missing characters were replaced by blank spaces.
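A possible implementation of this feature extraction, including the blank-space padding for short names, is sketched below in Python; the feature names are arbitrary choices, not the authors' exact encoding.

def suffix_features(name, length=3):
    """Last `length` characters of the name, left-padded with blanks for short names."""
    padded = name.lower().rjust(length)          # 'Ed' -> ' ed' for length 3
    suffix = padded[-length:]
    return {f"char_{i}": ch for i, ch in enumerate(suffix)}

print(suffix_features("Shrek"))   # {'char_0': 'r', 'char_1': 'e', 'char_2': 'k'}
print(suffix_features("Ed"))      # {'char_0': ' ', 'char_1': 'e', 'char_2': 'd'}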
Implementation and Experimental Results
The GEARS system has been implemented using different languages for different components. The core rule-based algorithm was implemented in Prolog, and specifically SWI Prolog (https://www.swi-prolog.org/, accessed on 10 November 2021). The machine learning algorithms for gender prediction exploited Python, and specifically the libraries Natural Language ToolKit (NLTK, https://www.nltk.org/, accessed on 10 November 2021) and Scikit-learn (https://scikit-learn.org/, accessed on 10 November 2021). All the parameters for the algorithm are specified in a suitable file. The main structure of the system, and the various pre- and post-processing modules, were implemented in Java, and used several libraries, including JPL (to interface Java to Prolog) and the CoreNLP library (along with its models package). The latter carries out PoS tagging and syntactic analysis using the Stanford parser (https://nlp.stanford.edu/software/lex-parser.shtml, accessed on 10 November 2021), a tool with state-of-the-art performance. In this section, we will experimentally evaluate the effectiveness and efficiency of different aspects of our proposed approach, explain our experimental settings and discuss the outcomes.
A first and most relevant evaluation concerned the effectiveness of our proposal. It was assessed and compared to both • Hobbs' algorithm, as the most basic approach in the literature, taken as a baseline in most research works, to understand how much each proposed improvement may improve the overall performance; and • Liang and Wu's approach, as one of the latest contributions in that field, representing the state-of-the-art, to understand and possibly get indications on how the algorithm can be further improved.
Two additional experiments evaluated the efficiency of the GEARS System, by analyzing runtime for each processing phase during the computation, and the gender prediction task, by comparing the different machine learning algorithms applied to different features.
Gender Prediction
We start by discussing the experiments on gender prediction, both because gender models are learned off-line and before the AR computation takes place, and because the performance in this task obviously affects the performance of actual AR.
To evaluate the approach based on fixed-length proper noun suffixes described in Section 3.2, we started from two sets of names, one per gender, and we extracted the features for each name to obtain a workable dataset. Then, we merged and shuffled the set of examples, and ran a 5-fold cross-validation procedure. It is a common setting in the literature, and in our case it allows us to have sufficient data in the test set at each run of the experiment. We generated the folds once, and used them for all the machine learning algorithms, to avoid biases associated with the use of different training sets. After applying each algorithm to the folds, we collected the outcomes and computed their average performance. As said, we considered 3 machine learning approaches: logistic regression, decision trees and random forests.
The use of a Machine Learning approach required a training dataset of English proper nouns labeled with the corresponding gender. A free and reliable dataset for this purpose was the NLTK Corpus 'names' (https://www.nltk.org/nltk_data/, accessed on 10 November 2021). It is very simple and includes nearly 8000 names (2943 male names and 5001 female names); 365 ambiguous names occur in both lists. Whilst its quality is high, the number of entries is rather low, resulting in weak models in some preliminary experiments. So, we decided to expand it with more names. A larger dataset, freely distributed by the US government, is the database of 'popular baby names' (https://www.ssa.gov/oact/babynames/, accessed on 10 November 2021) available from the Social Security Administration. It provides information regarding not only the most popular baby names but also the history of all the names that have been given to babies in the US in the years 1880-2018, ordered by naming trends per year. Data are provided at three levels of granularity: national, state-specific or territory-specific. For the sake of generality, we opted for the national level. The data for each year are in a separate comma-separated values (csv) file including 3 fields: the name, the sex assigned to the name and the number of occurrences of that name with that sex in the selected year, ranked by occurrences. Only names that have at least 5 occurrences in the baby population of that year are reported. We neglected the information about the occurrences and the year. The merger of these two corpora of names included a total of more than 11,000 names, of which 4000+ male names and 7000+ female names.
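The overall experimental setup can be reproduced along the following lines. The Python sketch below uses only the NLTK 'names' corpus (not the merged SSA list) and default hyper-parameters, so it is an approximation of the described configuration rather than the authors' exact code.

import nltk
from nltk.corpus import names
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline

nltk.download("names", quiet=True)

def suffix_features(name, length=3):
    # last `length` characters, left-padded with blanks for short names
    padded = name.lower().rjust(length)
    return {f"char_{i}": c for i, c in enumerate(padded[-length:])}

data = [(n, "male") for n in names.words("male.txt")] + \
       [(n, "female") for n in names.words("female.txt")]
X = [suffix_features(n) for n, _ in data]
y = [label for _, label in data]

folds = KFold(n_splits=5, shuffle=True, random_state=0)   # folds generated once
for clf in (LogisticRegression(max_iter=1000),
            DecisionTreeClassifier(),
            RandomForestClassifier(n_estimators=100)):
    model = make_pipeline(DictVectorizer(sparse=False), clf)
    scores = cross_val_score(model, X, y, cv=folds, scoring="accuracy")
    print(f"{type(clf).__name__}: mean accuracy = {scores.mean():.3f}")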
Regarding the effectiveness of gender prediction, we measured performance using Accuracy (the fraction of correctly classified names), which is a standard metric for the evaluation of machine learning algorithms. The results for different fixed-length suffixes and machine learning algorithms are shown in Table 2. First of all, we note that all machine learning approaches benefit from the use of longer suffixes as features, except logistic regression, where performance using 3-character suffixes is lower than performance using 2-character ones. However, the performance of logistic regression is about 10% lower than the other (tree-based) approaches, and thus we immediately discarded this algorithm as a candidate for use in our AR approach. This is not surprising, since decision trees are known for yielding good performance in tasks involving text (and indeed logistic regression was included in the comparison just to provide a baseline). Still, they often tend to overfit the dataset. Random forests are a variant that is commonly used to avoid this problem, leveraging an ensemble approach that learns a set of trees and combines their outcomes. However, in our case, random forests obtained the same performance as decision trees: they differ only in the second decimal digit. Since their performance is the same, but random forests are more complex models than decision trees, we opted for using the latter in our AR approach. More specifically, we used the decision tree with the highest accuracy among the five learned in the 5-fold cross-validation. We did not learn a new model using the entire training set, to prevent the additional examples from introducing overfitting. So, our AR approach may assume that proper noun gender recognition accuracy is around 80%. While this means that the gender of 1 name in every 5 is misrecognized on average, it is still quite a high performance, which should in turn positively affect the performance of the AR task.
Anaphora Resolution Effectiveness and Efficiency
Moving to the evaluation of our overall AR approach, the choice of a dataset was the first step to carry out. Based on the considerations in Section 2.3, we opted for the Brown Corpus [33], since it is freely available and was used by many previous relevant works, including Hobbs' [18] and Liang and Wu's [16] (differently from Liang and Wu, Hobbs uses many sources, including part of this dataset). Since the corpus is quite large, we selected 3 out of its 15 genres: two from the informative prose section (editorial press texts and popular lore editorials) and one from the imaginative prose section (science fiction novels). Whilst our main purpose is to address informative prose, we also tried our approach on imaginative prose, which is challenging since its nature and style may severely affect gender prediction. Table 3 reports some statistics about the selected subset. The most influential subset for our experiments is lore, since it includes the largest number of words and sentences. Table 3 also details the composition of each genre in terms of text categories (e.g., personal (10), periodicals (25), letters to the editor (7)).

Genre             # sentences   # words      # pronouns   # anaphoric
Science Fiction   ∼1000         ∼10,000      ∼1000        ∼800 (80%)
Editorial         ∼3000         ∼50,000      ∼2000        ∼1500 (75%)
Lore              ∼5000         ∼100,000     ∼4000        ∼3600 (90%)

For the window size, we used 4, which turned out to be the best to retrieve referents based on various experiments we carried out. Indeed, whilst Hobbs [18] and Lappin and Leass [12] considered windows of size 3, experimentally we found that many anaphoras had no referent within 3 sentences, especially in imaginative prose. On the other hand, no significant improvement in performance was obtained for window sizes larger than 4.
For both experiments aimed at evaluating the effectiveness of GEARS on the AR task we adopted the Hobbs' metric, because it is the most widely exploited for rule-based AR systems in the literature, including Hobbs' and Liang and Wu's work, to which we compare our proposal. More specifically, when comparing the system's responses to the ground truth, each anaphora was associated with one of the following values: 'not found', if the anaphora in the ground truth was not found by the system; 'wrong', if the anaphora was found by the system but associated with a wrong reference; 'wrong sentence', if the anaphora was found by the system and associated with the correct referent, but in a different sentence than the ground truth; 'correct', if the anaphora-referent pair returned by the system is correct and the latter is found in the correct sentence.
The former experiment on AR effectiveness is an ablation study that compared the original algorithm by Hobbs to various combinations of our improvements, to assess the contribution that each brings to the overall performance. It aimed at answering the following research questions: Q1 Can (our approach to) proper noun gender recognition, without the use of any vocabulary, bring significant improvement to the overall performance?
Q2 Can our modification to the basic algorithm by Hobbs improve performance, while still avoiding the use of any kind of external resource?
Its results, by genre, are reported in Table 4. The modification of the rules (Hobbs+) actually brings only a slight improvement (around 2%) over the original algorithm. Still, this is more or less the same improvement brought by Hobbs himself with his more complex approach based on selectional constraints, while we still use the sentence structure only. So, we may answer question Q2 positively. Given this result, we propose our rule-based algorithm as the new baseline to be considered by the literature on AR. Much more significant is the improvement given by the application of gender and number agreement (GN), since it boosts the performance by up to 21.13% (+60% over the new baseline) in the best case (Science Fiction), and by 14.06% (+36%) and 11.37% (+28%) in the other cases, which is still remarkably good. So, we may definitely answer question Q1 positively and use the Hobbs + GN version in our next experiments. The second experiment involves the comparison between GEARS and Liang and Wu's approach. Both systems use the Brown Corpus for the experimentation, but with slight differences, shown in the first row of Table 5 along with the results they obtained on the AR task. In this case, our research question is:
Q3 How does the performance obtained using our improvements to Hobbs' algorithm, while still being a knowledge-poor approach, compare to a knowledge-rich state-of-the-art system?
Whilst, as expected, Liang and Wu's system obtains better results, our results are worth appreciation, especially considering that GEARS solved 11+ times more pronouns than its competitor, which obviously increased the chances of failures due to peculiar cases. Furthermore, their experiment was carried out on random samples of texts for all the genres, while GEARS has been intensively tested on all the texts associated with the three selected genres.
For the efficiency evaluation of GEARS, the average runtime of each operation per document is shown in Table 6, obtained on a PC equipped with an Intel Core i5-680 @ 3.59GHz CPU running the Linux Ubuntu Server 14.04 x64 Operating System with 16 GB RAM. We observe that the most time-consuming activities are the PoS tagging of the text, carried out by the Stanford parser, and the execution of the AR algorithm. The latter requires equal or less time than the former (in one case, half), and thus the actual AR execution is faster than its pre-processing step.
Conclusions
Anaphora Resolution, i.e., the task of resolving references to other items in a discourse, is a crucial activity for correctly and effectively processing texts in information extraction activities. Whilst generally rule-based, the approaches proposed in the literature for this task can be divided into syntax-based and discourse-based on the one hand, and into knowledge-rich and knowledge-poor ones on the other. Knowledge-rich approaches try to improve performance by leveraging the information in external resources, which poses the problem of obtaining such resources (which are not always available, or not always of good quality, especially for languages other than English). This paper proposed a knowledge-poor, syntax-based approach for anaphora resolution on English texts. Starting from an existing algorithm that is still regarded as the baseline for comparison by all works in the literature, our proposal tries to improve its performance in two respects: handling different kinds of anaphoras differently, and disambiguating alternative associations using gender recognition on proper nouns. Our approach can work based only on the parse trees of the sentences in the text, except for a predictor of the gender of proper nouns, for which we propose a machine learning-based approach, so as to completely avoid the use of external resources. Experimental results on a standard benchmark dataset used in the literature show that our approach can significantly improve the performance over the standard baseline algorithm (by Hobbs) used in the literature. Whilst the most significant contribution is provided by the gender agreement feature, the modification to the general rules alone already yields an improvement, for which reason we propose to use our algorithm as the new baseline in the literature. Its performance is also acceptable if compared to the latest state-of-the-art algorithm (by Liang and Wu), which belongs to the knowledge-rich family and exploits much external information, especially considering that we ran more intensive experiments than those reported for the competitor. Interestingly, the accuracy of our gender prediction tool is high but can still be improved, with further expected benefit for the overall anaphora resolution performance. Among the strengths of our proposal is also efficiency: it can process even long texts in a few seconds, where more than half of the time is spent in pre-processing for obtaining the parse trees of the sentences.
As future work, we expect that further improvements may come from additional extensions of the rules, to handle more and different kinds of anaphoras, and from an improvement of the gender recognition model, based on larger or more representative training sets. Furthermore, versions of our approach for different languages, with different features as regards syntax and proper noun morphology, should be developed to confirm its generality.
Data Availability Statement:
The datasets used in this work were taken from repositories available on the Internet, and specifically: Brown Corpus (http://icame.uib.no/brown/bcm.html, accessed on 25 January 2022); NLTK Corpus 'names' (https://www.nltk.org/nltk_data/, accessed on 25 January 2022); US government Social Security Administration 'popular baby names' (https://www.ssa.gov/oact/babynames/, accessed on 25 January 2022). The code of the algorithm will be made available upon request to the authors. | 2022-01-28T16:07:20.715Z | 2022-01-26T00:00:00.000 | {
"year": 2022,
"sha1": "8016c27d993e0e4edd9126dbe5ac43d40674a016",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/11/3/372/pdf?version=1643352854",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5e2303e6b96e8a485b016ff7169a26aad6c3f4a8",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": []
} |
258298300 | pes2o/s2orc | v3-fos-license | Timing analysis of Swift J0243.6+6124 with NICER and Fermi/GBM during the decay phase of the 2017-2018 outburst
We present a timing and noise analysis of the Be/X-ray binary system Swift J0243.6+6124 during its 2017-2018 super-Eddington outburst using NICER/XTI observations. We apply a synthetic pulse timing analysis to enrich the Fermi/GBM spin frequency history of the source with the new measurements from NICER/XTI. We show that the pulse profiles switch from double-peaked to single-peaked when the X-ray luminosity drops below $\sim$$7\times 10^{36}$ erg s$^{-1}$. We suggest that this transitional luminosity is associated with the transition from a pencil beam pattern to a hybrid beam pattern when the Coulomb interactions become ineffective to decelerate the accretion flow, which implies a dipolar magnetic field strength of $\sim$$5\times 10^{12}$ G. We also obtained the power density spectra (PDS) of the spin frequency derivative fluctuations. The red noise component of the PDS is found to be steeper ($\omega^{-3.36}$) than the other transient accreting sources. We find significantly high noise strength estimates above the super-Eddington luminosity levels, which may arise from the torque fluctuations due to interactions with the quadrupole fields at such levels.
The source luminosity was estimated to be ∼2 × 10^39 erg s^-1 at the peak of the outburst (Doroshenko et al. 2020). On the other hand, the source distance (id: 465628193526364416) is revised to 5.2±0.3 kpc in the Gaia EDR3 catalogue (Bailer-Jones et al. 2021). When the distance of ∼5 kpc is taken into account, the peak luminosity would be ∼1 × 10^39 erg s^-1, which is still higher than the Eddington limit for such a neutron star (Reig et al. 2020); thus, Swift J0243.6+6124 is classified as an ultraluminous X-ray pulsar (ULXP), the first ever detected in our own galaxy.
Despite the extensive studies, the magnetic field configuration of Swift J0243.6+6124 is not yet clear. Initial studies have demonstrated that the source pulsations are still detectable at luminosities as low as 10^34-10^35 erg s^-1, which indicates that the propeller regime has not yet been attained at such low luminosities; consequently, the pulsar should have a very compact magnetosphere to allow accretion to continue, which confines the upper limit of the magnetic field strength to 3 × 10^12 G (Doroshenko et al. 2020). Phase-resolved spectral analysis of NuSTAR observations at different luminosity levels hints at a thick super-Eddington disc with an inner radius of 2-3 × 10^7 cm and a weakly variable reflection component, signifying a magnetic field strength of ≲3 × 10^12 G if the field is dipolar (Bykov et al. 2022). On the other hand, the discovery of a cyclotron resonance scattering feature (CRSF) in the spectrum of Swift J0243.6+6124 at ∼120-146 keV, which is only visible in certain phases around the peak of the outburst (Kong et al. 2022), implies a magnetic field strength of ∼1.6 × 10^13 G near the surface of the pulsar. Nevertheless, it is suggested that the observed CRSF is actually associated with multipole fields (Kong et al. 2022) and that the dipolar component of the field strength should be in the range of 3-9 × 10^12 G in order to describe the observed properties of the source coherently (Doroshenko et al. 2020). The accretion disc possibly penetrates into the magnetosphere more than expected, and the disc interactions are dominated by multipole components of the field at high luminosities (Doroshenko et al. 2020; Kong et al. 2022).
With its ultraluminous episode and unique properties, the source has been the target of many studies, especially in probing the nature of neutron star accretion at very high luminosities (van den Eijnden et al. 2018; Doroshenko et al. 2018; Wilson-Hodge et al. 2018; Jaisawal et al. 2019; Kong et al. 2020, 2022; Bykov et al. 2022). In this study, we investigate the timing properties of Swift J0243.6+6124, focusing mostly on its moderately luminous stages (∼10^36-10^37 erg s^-1) towards the end of the outburst in 2017-2018, during which the source remained in a subcritical accretion state. We describe the data and the relevant screening processes used for timing analysis in Section 2. In Section 3, we present the pulse timing analyses that are used for measuring spin frequencies and generating pulse profiles. In addition, we also demonstrate our results on the torque fluctuations on different timescales and luminosities. Lastly, in Section 4, we review and discuss the results of our study in the light of the systematic luminosity-dependent evolution of pulse profiles.
DATA
The Neutron Star Interior Composition Explorer (NICER) has been stationed on the International Space Station (ISS) since 2017 June and is operated by NASA. Its primary instrument, the X-Ray Timing Instrument (XTI), consists of an aligned array of 56 X-ray concentrators and focal plane modules (FPMs) collecting photons from a ∼30 arcmin^2 field onto silicon drift detectors in each FPM. These detectors are capable of soft X-ray spectroscopy in the 0.2-12 keV energy range with <300 ns timing precision and ∼1900 cm^2 cumulative effective area at 1.5 keV (Gendreau et al. 2016, Table 1).
Additionally, we make use of the pulse frequency history of Swift J0243.6+6124, which is publicly available from the Fermi/GBM Accreting Pulsars Program (APP). The count rates for Insight-HXMT appear to be consistent with those measured by Swift/BAT, and the Swift/BAT count rates can be roughly converted to bolometric luminosity using a scaling factor of ∼8.2 × 10^38, assuming a source distance of 6.8 kpc. In this article, we utilise the Gaia EDR3 distance (5.2 kpc) and revise the scaling factor for the Swift/BAT count rate-luminosity conversion to ∼4.8 × 10^38 to estimate the bolometric luminosity, unless otherwise stated.
Synthetic pulse timing
During its outburst phase in 2017-2018, the X-ray luminosity of Swift J0243.6+6124 varies by five orders of magnitude. At the same time, the accretion geometry, and consequently the pulse profiles, drastically alter in the different accretion regimes (Wilson-Hodge et al. 2018; Doroshenko et al. 2020). In particular, the pulse profiles are shown to be double-peaked in the subcritical regime (L < L_1), to evolve into a single-peaked shape in the supercritical regime (L_1 < L < L_2), and then again to transform into a double-peaked structure at the highest luminosities (L > L_2). Making use of the refined orbital solution provided by the Fermi/GBM team, we used the following approach to measure the pulse frequencies from the NICER data, which reside within the same time interval as the Fermi/GBM measurements: We first divide the Fermi/GBM pulse frequency measurements into three different segments, each of which is fitted with a different polynomial model to represent the frequency evolution over time, and obtain a synthetic timing solution (see Table 2). Using these timing solutions, we then calculate the deviations of the Fermi/GBM frequencies from the model to extract its residuals, δν(t). Utilising a linear spline interpolation of the Fermi/GBM frequency residual data set, we convert them to a synthetic phase residual model Φ(t) using integration:

Φ(t) = ∫_{t_0}^{t} δν(t') dt',

where t_0 indicates the start time of the segment. Next, we fold the orbitally-corrected NICER light curve with the same synthetic timing solution to generate its phase residuals. Finally, we shift the NICER phase residuals to match the synthetic phase residual model obtained from Fermi/GBM (see Figure 1).
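A minimal sketch of this construction, with placeholder numbers instead of the real Fermi/GBM residuals, is given below: the residual frequency curve is linearly interpolated and numerically integrated to obtain the synthetic phase residuals in cycles.

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import cumulative_trapezoid

t_gbm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # days since segment start t_0
dnu_gbm = np.array([1e-7, 3e-7, 2e-7, -1e-7, 0.0])     # frequency residuals (Hz), toy values

dnu_of_t = interp1d(t_gbm, dnu_gbm, kind="linear")      # linear spline of the residuals

t = np.linspace(t_gbm[0], t_gbm[-1], 1000)              # fine time grid (days)
seconds_per_day = 86400.0
phase_residual = cumulative_trapezoid(dnu_of_t(t) * seconds_per_day, t, initial=0.0)

# phase_residual[i] is the synthetic phase residual Phi(t_i) in cycles, to be
# compared with (and matched to) the folded NICER phase residuals.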
In the first interval, the luminosity of the source changes substantially, resulting in significant deviations from the polynomial description of the rapid frequency evolution. However, there are also phase shifts between the pulse profiles of the different accretion regimes (Doroshenko et al. 2020) that need to be taken into account. Thus, we further allow phase shifts for the pulses in the supercritical regime by Δφ ∼ 0.5, corresponding to the phase difference between the peaks of the double-peaked and single-peaked profiles (see Figure 4 of Doroshenko et al. (2020)), to bring them into accord with the expected synthetic phase residuals (see Figure 1, bottom panel). On the other hand, during the late stages of the outburst (intervals 2, 3 and 4), the source luminosity is rather low (≲8 × 10^37 erg s^-1), and Swift J0243.6+6124 continues to accrete only in the subcritical regime (i.e. L < L_1). Therefore, the synthetic phase residuals reside within a single cycle for the corresponding synthetic timing solutions given in Table 2.
Finally, in order to convert the synthetic timing solutions to pulse frequency measurements, we use each consecutive pair of pulse arrivals in the NICER residual set. Each pair is fitted with a linear function of slope δν = Δφ/(t_2 − t_1), where t_1 and t_2 are the arrival times of the first and second pulses in the pair, and Δφ is the phase difference between the pair used for fitting. Each fit is transformed into a spin frequency measurement at the corresponding interval's midpoint by using the frequency correction over the synthetic timing solutions (for applications, see Çerri-Serim et al. (2019); Serim et al. (2022)). The 1σ error ranges of the slope are used as a gauge of the uncertainty in the spin frequency measurements. Figure 2 demonstrates the spin frequency history measured from the NICER observations, whose results are consistent with the spin frequency history shared by the Fermi/GBM APP team.
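As a toy illustration of this pairwise conversion, with made-up arrival times and phase residuals rather than the actual NICER values:

t1, t2 = 100.0, 103.0            # arrival times of the pair (days)
phi1, phi2 = 0.12, 0.37          # phase residuals (cycles) at t1 and t2

delta_nu = (phi2 - phi1) / ((t2 - t1) * 86400.0)   # Hz, correction over the synthetic solution
t_mid = 0.5 * (t1 + t2)                            # frequency assigned at the pair midpoint
print(f"delta_nu = {delta_nu:.3e} Hz at t = {t_mid} d")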
As can be seen from the frequency history presented in Figure 2, apart from the initial stages of the Type II outburst, Swift J0243.6+6124 also spins up between MJD ∼58470-58510. Afterwards, as the flux diminishes over time, the frequency evolution trend returns back to the spin-down stage with an average frequency derivative of ∼-1.6 × 10^-12 Hz s^-1, which is comparable to the average spin-down rate of ∼-1.8 × 10^-12 Hz s^-1 observed between MJD 58150-58460.
Interestingly, when the pulse profiles obtained from the timing analysis at low flux states are examined, the pulses seem to exhibit single-peaked profiles at very low flux levels. To illustrate this behaviour more clearly, we present the luminosity-sorted pulse profiles (normalized to the [0, 1] range) of all observations after MJD 58300 in Figure 3. A systematic change in the profiles emerges at a luminosity level of ∼7 × 10^36 erg s^-1, marking a potential new transitional level for the alteration of the accretion geometry. As the luminosity decreases, the main peak gradually fades away and the secondary peak grows stronger. It is also interesting to note that the source tends to exhibit spin-down episodes below this luminosity level.
Timing noise
Using the whole frequency history enriched with the NICER measurements, we investigate the temporal noise behaviour of Swift J0243.6+6124. To estimate the amplitude of the timing noise on different timescales, we proceed with the rms-value technique developed by Boynton et al. (1972); Deeter (1984); Cordes & Downs (1985). This technique utilises the rms values of the timing residuals, σ_R(m, T), that are acquired after eliminating a polynomial trend of order m from a data set of duration T. Then, the associated noise strength can be calculated via

S_r = σ_R^2(m, T) / (⟨σ_R^2(m, 1)⟩_u T^(2r-1)),

where r specifies the red noise order and ⟨σ_R^2(m, 1)⟩_u denotes the unit noise strength normalization factor for T = 1 d and S_r = 1. Our calculations are performed with the associated normalization factors gauged through direct evaluations (Deeter 1984, Table 1). We start by estimating the noise strength over the maximal time span of the data and iterate the calculations for successively halved timescales. To check the validity of the power density estimates on each timescale, we also present the corresponding measuremental noise levels (green crosses in Figure 5), computed by taking the measuremental uncertainties of the frequency data set (σ_ν) into account. The measuremental noise level provides a precursor to a noise level at which the measuremental error range becomes dominant over the fluctuations in the data set.
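One ingredient of this procedure, the rms of the residuals left after removing a polynomial trend of order m from a frequency series of span T, can be sketched as follows; the normalisation by the unit noise strength factor and the timescale-halving loop are omitted, and the input series is synthetic.

import numpy as np

def rms_residual(times, freqs, m):
    """rms of the frequency residuals after removing a polynomial trend of order m."""
    coeffs = np.polyfit(times, freqs, deg=m)
    residuals = freqs - np.polyval(coeffs, times)
    return np.sqrt(np.mean(residuals**2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 500.0, 200)                                  # days
nu = 102.0 + 1e-6 * t + np.cumsum(rng.normal(0, 1e-7, t.size))    # toy random-walk signal (Hz)
print(rms_residual(t, nu, m=2))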
In addition to the aforementioned standard method of PDS generation for torque fluctuations, we also follow the approach described in Serim et al. (2022) to see the effects of the accretion torques on the PDS. In principle, this approach offers a different perspective on the same PDS, the only distinction being the minimization of torque fluctuations arising from disc accretion. It should be noted that in both cases, the input frequency data set is already decoupled from orbital Doppler delays using the orbital parameters given in Table 1. Therefore, we assume that the orbital modulations in the frequencies are completely removed and they no longer contribute to the noise strength measurements. In this case, we utilise a simple power law relation between the spin-up rate and the luminosity, modified with a constant spin-down rate (ν̇ = aL^β + ν̇_0), to account for the stable spin-down episodes observed in the frequency history of the source. Then, the luminosity-dependent frequency evolution model is built as (Serim et al. 2022)

ν_model(t) = ν_0 + ∫_{t_0}^{t} [aL(t')^β + ν̇_0] dt',

where ν_0 is the spin frequency at the time of the burst onset t_0. Instead of the polynomial-driven residuals built in the standard approach, the residuals obtained from the elimination of ν_model are assumed to inherit the noise component in this case (see Figure 4).

Figure 5. PDS of the spin frequency derivatives using quadratic polynomial trends (red) and the luminosity-dependent intrinsic spin frequency evolution model (blue), along with the measuremental noise levels (green). The uncertainties of the power density estimates are expressed as 1σ confidence intervals, determined by the number of independent estimates present within. Corresponding fits of the PDSs are shown as maroon and dark blue lines.
In both cases, the generated PDSs of spin frequency derivative fluctuations are modeled with a broken power law, consisting of a red noise component P(f) ∝ f^Γ below the break analysis frequency f_br and a flat (white noise) component above it (Figure 5). The fitting procedure is carried out by orthogonal distance regression (ODR) using the Python library SciPy. We report the uncertainties of the best-fit parameters at the 1σ confidence level.
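An ODR fit of this kind can be set up with scipy.odr as sketched below; the piecewise parametrisation, starting values and data points are placeholders, not the actual PDS values or the exact model used here.

import numpy as np
from scipy import odr

def broken_power_law(params, f):
    gamma, f_break, white = params
    return np.where(f < f_break, white * (f / f_break) ** gamma, white)

f = np.array([1/500, 1/250, 1/125, 1/62, 1/31, 1/16, 1/8])        # analysis frequencies (1/d)
p = np.array([2e-15, 4e-16, 6e-17, 1e-17, 9e-19, 7e-19, 6e-19])   # toy PDS estimates
p_err = 0.3 * p

model = odr.Model(broken_power_law)
data = odr.RealData(f, p, sy=p_err)
fit = odr.ODR(data, model, beta0=[-3.0, 1/40, 7e-19]).run()
print(fit.beta, fit.sd_beta)    # best-fit parameters and their 1-sigma uncertainties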
For the standard approach, in which polynomial trends are used, we find that the PDS of the frequency derivatives evolves as f^(-3.36±0.64) within the range 1/500-1/46 days^-1, which points to a steeper red noise component when compared with the other accreting sources (Bildsten et al. 1997; Baykal et al. 2007; Serim et al. 2022, 2023). The steepness of this red component is comparable to the case of 4U 1626-67 (Bildsten et al. 1997; Serim et al. 2023); however, the timescales within which they are observed are dissimilar. The PDS continuum break occurs at 1/46 days^-1 and evolves toward a flatter continuum at higher analysis frequencies (i.e., becomes a white noise component, S_2 = (6.76 ± 0.16) × 10^-19 Hz^2 s^-2 Hz^-1), implying uncorrelated torque fluctuations at shorter timescales. When the regular frequency evolution model is substituted by the luminosity-dependent model, the power density estimate at the longest timescale is reduced by a factor of >100. The steepness of the red noise component is also reduced, to f^(-0.91±0.38), but it does not completely vanish, unlike the case of 2S 1417-624 (Serim et al. 2022). It implies that either the luminosity-dependent model (at least through a simple power law relation) does not remove all of the red noise component of the PDS, or merely the standard disc component, which generally contributes to PDS spectra as f^-2 (Bildsten et al. 1997; Serim et al. 2023), is subtracted from the PDS continuum. At higher analysis frequencies (f ≳ 1/27 days^-1), the PDS carries the same structure as the former case, with the white noise normalization S_2 = (5.25 ± 0.17) × 10^-19 Hz^2 s^-2 Hz^-1.
In order to understand the nature of the strong red noise component in the PDS of torque fluctuations, we further check the luminosity dependence of the timing noise strengths. Hence, we split the frequency history into ∼15 day long segments and calculate the noise strengths for each of them using the standard method described above. Next, using the Swift/BAT count rates, we calculate the luminosity range for each interval (see Section 2 for the conversion of Swift/BAT count rates to luminosity). The distribution of the noise strength estimates as a function of luminosity is illustrated in Figure 6. The luminosity dependence of the noise strength estimates yields an intriguing distribution. The noise strength amplitudes remain more or less constant up to the transitional luminosity level L_2, with a slight de-escalation between L_1 and L_2, and increase significantly above L_2.
DISCUSSION AND CONCLUSION
We analyse the NICER/XTI data set and enrich the spin frequency history of Swift J0243.6+6124 with new measurements. The late-stage evolution of spin frequency indicates another torque reversal around MJD ∼58510, after which the source entered a new spin-down phase. When the frequency evolution of Swift J0243.6+6124 is examined, the spin-down phases seem to occur systematically at luminosities below ∼7 × 10^36 erg s^-1. It has already been shown that the source pulsations were observable at luminosities down to 10^34-10^35 erg s^-1, implying that the propeller stage is not yet attained at such low levels (Doroshenko et al. 2020), and therefore the spin-down phase is not associated with the propeller regime.
The pulse profile evolution of Swift J0243.6+6124 is very intriguing. At luminosities below ∼7 × 10^36 erg s^-1, the pulse profiles are single-peaked. In the range ∼7 × 10^36 erg s^-1 < L < L_1, a secondary peak component emerges and gains strength with increasing luminosity; thus, the profiles become double-peaked. Furthermore, when L > L_1, the pulse profiles become single-peaked again. The transformation of the pulse profiles around ∼7 × 10^36 erg s^-1 indicates a new transition in the accretion geometry. According to Becker et al. (2012), the critical X-ray luminosity (L_1) specifies the onset of the transition from fan beam to pencil beam; however, the transition does not take place immediately. There is an intermediate accretion regime L_coul < L < L_1 where the final phase of the deceleration of the accreted material is experienced through Coulomb braking in the plasma. In such a regime, a hybrid combination of both fan and pencil beam patterns is expected (Becker et al. 2012; Blum & Kraus 2000). They specify a limiting luminosity below which Coulomb interactions are no longer effective enough to stop the accretion flow. This transition luminosity is given by

L_coul ≃ 1.17 × 10^37 B_12^(-1/3) Λ_0.1^(-7/12) τ_20^(7/12) M_1.4^(11/8) R_10^(-13/8) erg s^-1,

where B_12 ≡ B/10^12 G is the dipolar magnetic field strength of the pulsar, Λ_0.1 ≡ Λ/0.1 is a dimensionless parameter accounting for various physical processes such as the possible role of plasma shielding, τ_20 ≡ τ/20 is the Thomson optical depth, M_1.4 ≡ M/1.4 M_sun is the pulsar mass, and R_10 ≡ R/10 km is the pulsar radius. Below this luminosity, the accretion flow is suggested to be decelerated via a gas-mediated shock near the stellar surface and the radiation from the polar caps fully transforms to a pencil beam pattern. In addition, according to this model, the pencil beam pattern should also persist at lower luminosity levels (L << L_coul). Actually, such a single-peaked pulse profile was observed by Doroshenko et al. (2020) with an 80 ks NuSTAR observation around the luminosity level of ∼3 × 10^34 erg s^-1. If we consider the transition at 7 × 10^36 erg s^-1 as L_coul, neglecting the normalized dimensionless parameters of about unity and using typical neutron star parameters, then the magnetic field of the source can be estimated as 4.7 × 10^12 G. Furthermore, when the previously reported critical luminosity level (Wilson-Hodge et al. 2018; Doroshenko et al. 2020), L_1, of the onset of the transition from hybrid pattern to fan beam is updated for the same distance, it results in a magnetic field strength of 5.3 × 10^12 G. Thus, the magnetic field strength estimations obtained from both transitional levels become consistent at 5.2 kpc. Therefore, we suggest that the dipolar magnetic field strength of Swift J0243.6+6124 can be confined to a range of ∼(4.7-5.3) × 10^12 G.
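For reference, the dipole-field estimate quoted above follows from a one-line calculation once the dimensionless factors are set to unity; this sketch uses only the B-field scaling of the Coulomb-braking luminosity.

# Assuming L_coul ~= 1.17e37 * B_12^(-1/3) erg/s with the dimensionless factors set to unity
L_coul = 7.0e36                       # transition luminosity (erg/s)
B_12 = (1.17e37 / L_coul) ** 3        # from B_12^(-1/3) = L_coul / 1.17e37
print(f"B ~ {B_12:.1f} x 10^12 G")    # ~ 4.7 x 10^12 G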
On the other hand, we also investigated the PDS of the spin frequency derivative fluctuations using the fairly sampled spin frequency data set which is improved with new measurements obtained from NICER/XTI observations. We extract two different PDSs using different models to describe the regular rotational evolution. The first one utilizes the standard polynomial-driven approach, and the second one makes use of a luminosity-dependent spin frequency evolution model. Both PDSs exhibit bimodal behaviour in which the high analysis frequency noise components (f ∼ 1/46 days^-1 for the former, ∼1/27 days^-1 for the latter case) are flat while the low analysis frequency components carry red noise. It should be noted that the observed break frequencies are rather close to the orbital period of the source (∼27.7 d). The white noise components in the PDS of the spin frequency derivative fluctuations are generally attributed to the uncorrelated torque fluctuations generated via wind accretion from the companion (Boynton et al. 1972; Deeter & Boynton 1985; Bildsten et al. 1997; Serim et al. 2023). Hence, the high analysis frequency white noise component of Swift J0243.6+6124 may hint at accretion from the stellar wind of its companion, which is effective on timescales less than the orbital period of the source. Nevertheless, the long-term spin evolution and fluctuations are governed by the disc interactions.
In general, for the sources that are presumed to have an accretion disc, the red noise continuum with f^-2 dependence sets in at long timescales, possibly saturating at viscous timescales (Bildsten et al. 1997; Serim et al. 2023). Even though the number of studies is limited, the PDSs of the torque fluctuations of transient accreting sources demonstrate that the steepness of the red noise components also tends to be ∼f^-2 (e.g., SAX J2103.5+4545, Baykal et al. (2007); 2S 1417-624, Serim et al. (2022)). Utilizing the standard PDS generation method, we find that the steepness of the red noise component of Swift J0243.6+6124 is significantly higher (∼f^-3.36) when compared with other accreting sources (Serim et al. 2023). Such a steep red noise component is only observed in the ultra-compact binary system 4U 1626-67 (Bildsten et al. 1997; Serim et al. 2023) and in several magnetars (Woods et al. 2002; Çerri-Serim et al. 2019); however, the timescales on which the component arises differ from the case of Swift J0243.6+6124. To be more specific, the red noise component of the PDS of Swift J0243.6+6124 develops approximately on the orbital timescales, whereas the red noise in 4U 1626-67 is present on timescales longer than ∼1000 days (Bildsten et al. 1997; Serim et al. 2023). In the case of SGR 1806-20 and SGR 1900+14, the red noise components are observed on timescales longer than ∼100 days, and the onset timescale of the red noise components is attributed to a threshold beyond which these magnetars become burst-active (Woods et al. 2002). Therefore, we believe that the strong red noise component observed in Swift J0243.6+6124 originates from physical processes different from those in the aforementioned cases.
To understand the nature of this component, we used the procedure described in (Serim et al. 2022) where the rotational evolution is prescribed by a simple torque model. In the case of 2S 1417-624 (Serim et al. 2022), this model almost completely eliminates the red noise component associated with disc accretion; however, for Swift J0243.6+6124, the results are slightly peculiar.
The steepness is reduced from ∼f^-3 to ∼f^-1, but the red noise structure does not entirely vanish.
This situation may originate from different factors. First, it is possible that the model used in Figure 4 provides an oversimplistic view of the torque-luminosity correlation; thus, more complex models (e.g., Karaferias et al. (2022)) are required to eliminate this component. Secondly, if the f^-1 dependence has a physical origin, then it may indicate that the f^-2 dependence observed for the disc component is subtracted. Noting that the steepness and strength of the torque fluctuations are generally attributed to the nature of the magnetic field (Woods et al. 2002; Çerri-Serim et al. 2019), it is possible that the remaining red noise component might be of magnetic origin. Therefore, we further investigate the luminosity dependence of the noise strength estimations to inspect the nature of torque fluctuations at different levels (see Figure 6). We find that the noise strengths remain roughly constant up to the critical luminosity level L_2, above which the RPD accretion disc regime sets in (Doroshenko et al. 2020). When the source luminosity exceeds L_2, the noise strength estimates suddenly increase by a factor of 10, which suggests a possible change in the nature of torque fluctuations above this level.
Moreover, it has recently been shown that the torque-luminosity relation of Swift J0243.6+6124 flattens in the RPD regime (Karaferias et al. 2022; Liu et al. 2022a). Hence, as the luminosity increases, the torque exertions become less efficient and more noisy, which may originate from interactions with the quadrupole components of the field (Long et al. 2007). In addition, the observed CRSF was evident only in certain pulse phases at the peak of the outburst, and it is attributed to the multipole component of the field (Kong et al. 2022). Thus, the excess noise strength above the transitional level L_2 bolsters the idea that multipole components should play an important role in torque interactions at super-Eddington luminosity levels (Doroshenko et al. 2020; Kong et al. 2022).
This work is supported by TÜBİTAK (The Scientific and Technological Research Council of Turkey) through the research project MFAG 118F037. The authors also thank Prof. Dr. Sıtkı Çağdaş İnam for his insightful comments.
DATA AVAILABILITY
All X-ray data used in this study are publicly available. NICER/XTI data can be obtained through the HEASARC archive. | 2023-04-25T01:16:22.781Z | 2023-04-24T00:00:00.000 | {
"year": 2023,
"sha1": "609e071a23125fade1c75304f826e0ef4770a064",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "609e071a23125fade1c75304f826e0ef4770a064",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
133806102 | pes2o/s2orc | v3-fos-license | Estimation of Urban Air Pollutant Levels using AERMOD: A Case Study in Nakhon Ratchasima, Thailand.
This research aims to study the possibility of using AERMOD air quality model to estimate the air pollutant levels of a selected city – Nakhon Ratchasima Municipality, Thailand. Four pollutants were studied: PM10, CO, SO2, and NOx. The measurement data were obtained from a PCD automatic pollutant monitoring station for comparison with the model’s results. The air pollution sources used were residential, furnace, traffic and industrial sources. Emission factors were used to estimate the concentration of pollutants from the activities of the population in the area. The estimation was based on the Top Down Approach method (TDA). Results shows that the pollutant concentrations obtained from the model were lower than the measurement values in all parameters. The values obtained from the model were 4.14% to 77.88% of the measurement values. The cause may be due to incomplete account of sources. However, the model can be useful for assessing the carrying capacity of the urban area.
Introduction
The city of Nakhon Ratchasima in the north-eastern part of Thailand is a city of significant growth. The expansion of the business and industrial sectors has been good for the economy but not so for the environment. One of the problems is air pollution, which is mostly due to human activities. Nakhon Ratchasima Municipality had only one automatic pollutant monitoring station, owned and operated by the Pollution Control Department (PCD). Based on recorded measurements, some air pollutants were exceeding standards. However, the fact that measurement results are available from only one station in the city area makes it difficult to gain an understanding of the overall level of pollutants. Since the AERMOD air quality model is popular for use in air quality monitoring, its capabilities could help solve this problem. Using the AERMOD air quality model [1] with estimated area and line sources representing the city's emissions [2] can help predict the air pollutant concentrations in the form of concentration isopleths. The model application can reduce the time and cost of actual measurements while providing a visual interpretation of air pollution levels throughout the study area. The objective of this research is to study the possibility of estimating air pollution concentrations in the city area using the AERMOD air quality model. The study area was selected to be Nakhon Ratchasima Municipality. Estimates of pollutant levels at the PCD monitoring station in the study area were compared with actual measurements. Four pollutants were studied: particulate matter smaller than 10 microns (PM10), carbon monoxide (CO), sulphur dioxide (SO2), and oxides of nitrogen (NOx). The results can be used as an example of the application of AERMOD in air quality management of urban areas.
Study Area and Pollutant Sources
The study area, Nakhon Ratchasima municipality, can be divided into residential areas based on the 4 constituencies as shown in Figure 1. The evaluation of air pollution was carried out using appropriate emission factors for 4 major air pollution sources: residential, furnace, traffic, and industrial sources.
Instrumentation
The air pollutant measurement data used were secondary data obtained from PCD. The data came from an automatic air quality monitoring station of Nakhon Ratchasima Municipality. Its location is at 14.973785N, 102.085730E, which is approximately at the center of the city. This study used the measurement data from the year 2015 for comparison with the pollution levels estimated from the model. Four principal ambient air pollutants were studied, namely particulate matter smaller than 10 microns (PM10), carbon monoxide (CO), sulphur dioxide (SO2), and oxides of nitrogen (NOx).
Meteorological data
This study uses the meteorological data of the year 2015 obtained from meteorological stations. The raw meteorological data was arranged into the required format of the AERMOD model: the hourly surface data was arranged into the SCRAM format, and upper air data was arranged into the FSL format. The data arrangement resulted in the forms of SFC File (*.SFC) and PFL File (*.PFL). The meteorological data was classified for every hour in the total period of one year for all parameters.
AERMOD Modeling Option
The air quality model used, AERMOD, consists of two subprograms, AERMET and AERMAP. They are programs that prepare meteorological information and geological terrain information, respectively, for the main program AERMOD. The two sub-programs act as pre-processors for data going into the main program. For this study, AERMAP used the terrain map type SRTM3/SRTM1 from WebGIS; the data were based on SRTM3 (Global 90m) and the horizontal datum was WGS 84. AERMET used the meteorological data from monitoring station measurements, which were arranged into the SCRAM and FSL formats. The surface station primary tower (anemometer) was 186.6 m. In the AERMOD model, 4 pollution sources were entered: residential, furnace, traffic, and industrial sources. The receptors for the study area were defined by a uniform Cartesian grid system covering an area of 12,255.6 m × 7,387 m. One discrete Cartesian receptor was defined for the PCD automatic pollutant monitoring station, at coordinates 14.973805, 102.085718.
Emission Factors Determination
Emission factors represent the relationship between pollutant emissions and pollution-related activities. In this study, the emission factors were based on the research on emission inventory in Nakhon Ratchasima Municipality, as shown in Table 1 [3].
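As a schematic illustration of such a top-down estimate (activity data multiplied by an emission factor), with entirely made-up numbers rather than the values of Table 1:

# Toy illustration of a top-down emission estimate; activity levels and
# emission factors below are invented, not the Nakhon Ratchasima inventory values.
activity = {"traffic_vehicle_km_per_day": 1.2e6, "households": 35000}
emission_factor = {"traffic_vehicle_km_per_day": 0.61, "households": 2.4}  # g of PM10 per unit

pm10_g_per_day = sum(activity[k] * emission_factor[k] for k in activity)
print(f"Estimated PM10 emission: {pm10_g_per_day / 1000:.1f} kg/day")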
Pollution levels estimated by AERMOD
The estimated annual average pollutant concentrations from the AERMOD air quality model are shown in Figures 2-5. The concentrations of NOx, SO2, CO and PM10 were found to be higher along the road network. The highest NOx, SO2, CO and PM10 concentrations, in microgram/m3, were 344.24, 12.53, 414.43 and 30.46, respectively. The maximum concentration of all pollutants was at the same point, at coordinate 184246.47, 1657665.85. At the discrete Cartesian receptor, which is the location of the PCD air quality measurement station, the levels were found to be 13.78, 27.72, 0.60 and 4.63, respectively. From Figures 2-5, the dispersion of PM10 concentrations is the most apparent. The distribution of SO2 concentrations is minimal. From Figure 3, it can be seen that SO2 is highly concentrated near the pollution source.
Comparison between AERMOD results and measurement data
The estimates of annual pollution levels using the air quality model AERMOD were compared with the pollutant concentration measurements at the same location. The estimated levels were lower than the actual measurements, which may be due to pollutant sources that are unaccounted for, such as dust resuspension from traffic and ground surfaces and other non-point sources.
Conclusions
In this study, we investigated the possibility of using AERMOD to assist in predicting air pollution concentrations in a city. Comparison of the results with actual measurements at the same coordinate showed that the pollutant concentrations obtained from the model were lower for all parameters. The cause may be incomplete pollutant sources incorporated in the analysis. Overall, the most concentrated area was along the boundary road [4]. However, the results showed promising ability for the AERMOD model to be used as a tool for air quality management of a city, possibly for assessing the carrying capacity of the urban area. | 2019-04-27T13:10:01.084Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "3329483c172d8e28dfd44120fae73a4e47f30779",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/164/1/012024",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1a75f00713b9aa4aec4039b5bef1d4b6b457d389",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
62832818 | pes2o/s2orc | v3-fos-license | The Swampland, Quintessence and the Vacuum Energy
It has recently been conjectured that string theory does not admit de Sitter vacua, and that quintessence explains the current epoch of accelerated cosmic expansion. A proposed, key prediction of this scenario is time-varying couplings in the dark sector, induced by the evolving quintessence field. We note that cosmological models with varying couplings suffer from severe problems with quantum corrections, beyond those shared by all quintessence models. The vacuum energy depends on all masses and couplings of the theory, and even small variations of parameters can lead to overwhelmingly large corrections to the effective potential. We find that quintessence models with varying parameters can be realised in consistent quantum theories by either: 1) enforcing exceptional levels of fine-tuning; 2) realising some unknown mechanism that cancels all undesirable contributions to the effective potential with unprecedented accuracy; or 3) ensuring that the quintessence field couples exclusively to very light states, and does not backreact on heavy fields.
I. Introduction
An important question in fundamental physics is what distinguishes general effective field theories from those that can be consistently realised in quantum gravity. Inspired by examples of compactifications from string theory, the authors of [1] conjectured that quantum gravity severely restricts the effective scalar potential, V, of the low-energy theory: |∇V| ≥ c V, (1) for a positive constant c ∼ O(1) and in units where M_Pl = 1/√(8πG) = 1. If true, equation (1) has far-reaching implications [1][2][3][4][5][6][7]. Most notably, equation (1) forbids local de Sitter critical points (see also [8]) and forces the current period of accelerated expansion to be realised through particular models of quintessence [9]. Reference [2] argued that such models can be naturally realised in string theory, where slowly rolling moduli fields can support the accelerated expansion.
Some well-known restrictions on quintessence were discussed in [2,4,6,7,10]. Very light scalar fields coupled to the Standard Model can mediate long-range forces, which are severely constrained by precision tests of the equivalence principle. Moreover, scalar fields that modify the masses and couplings of the Standard Model are constrained by astronomical observations. Finally, models of quintessence require not only that the value of the scalar potential is very small, but so must its gradient.
In reference [2], the absence of observed variations in the Standard Model parameters was interpreted as evidence for comparatively stronger couplings between the quintessence scalar and some fields in the dark sector. This is not a direct consequence of equation (1), but is arguably natural, as such a scenario can be realised in string theory through branes, e.g. of type IIB or F-theory. For example, the quintessence field may control the volume of the cycle where dark matter originates, so that its evolution leads to variations in dark matter couplings. In the cosmology literature, models realising dark energy/dark matter interactions are usually referred to as 'interacting dark energy' [11].
The purpose of this note is to recall that a cosmic scalar field, φ, that causes variations in couplings and masses suffers from severe problems when considered in quantum field theory [12][13][14][15]. The basic argument (reviewed in detail below) is that small variations in couplings cause large variations in the vacuum energy. For example, a variation in a fine-structure constant α(φ) = ᾱ + δα to which matter with large mass M is coupled leads to a variation of the vacuum energy that is schematically of the form δρ_vac ∼ δα M⁴, up to loop factors and logarithms. This is a contribution to the low-energy effective potential of φ that can overwhelm any naive quintessence potential. This makes it very challenging to promote cosmological models of varying 'constants' into consistent quantum theories.
In this note, we apply these arguments to the recently proposed quintessence models of [2], and find that they can only be realised under certain restrictive conditions.
II. The vacuum energy and varying parameters
The one-loop Coleman-Weinberg potential for a general field theory in four-dimensional flat space is given by [16][17][18], where µ is the scale parameter and Λ the cut-off scale. The supertrace is given by STr(M^n) = Σ_i (−1)^(2j_i) (2j_i + 1) m_i^n, where j_i is the spin of the different particles with mass eigenvalues m_i. The first term is always field-independent, vanishes for spontaneously broken supersymmetric theories, and is only relevant for the original cosmological constant problem. In spontaneously broken supergravities, the supertrace is generically nonvanishing for n > 0, but in some special 'no-scale' supergravities, the n = 2 term can vanish even after supersymmetry breaking [19]. In this note, we will conservatively consider only the third term, which is only logarithmically sensitive to the model-dependent cutoff.
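The displayed one-loop expression referenced in the paragraph above did not survive extraction. As a hedged reconstruction, the standard cutoff-regularised form consistent with the surrounding description (three terms: a field-independent quartic divergence, a quadratic divergence multiplying STr M², and a logarithmic term) reads, with O(1) coefficients and the placement of µ versus Λ in the logarithm possibly differing from the original:

\[
V_{1\text{-}\mathrm{loop}} \;\simeq\; \frac{1}{64\pi^2}\left[\,\mathcal{O}(\Lambda^4)\,\mathrm{STr}\,M^0 \;+\; 2\Lambda^2\,\mathrm{STr}\,M^2 \;+\; \mathrm{STr}\!\left(M^4\left(\ln\frac{M^2}{\Lambda^2}-\frac{1}{2}\right)\right)\right],
\qquad
\mathrm{STr}\,M^n \equiv \sum_i (-1)^{2j_i}(2j_i+1)\, m_i^n .
\]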
Including also higher loop-order corrections, we may write these contributions as, where the coefficients c_i(α) depend on the coupling constants of the theory, and absorb any logarithmic factors. Accounting for loop factors, ∂_α^p c_i ∼ O((4π)^(−p)). The effective quintessence potential below the scale Λ is then given by the sum of the bare contribution V_0(φ) and the loop corrections: It is now easy to understand the particular problems associated with quintessence models with varying parameters. A change in the Standard Model fine-structure constant of the order of the current observational limit, δα/α ∼ 10⁻⁶, leads to a change in the vacuum energy of the order of (cf. [12]), where we have taken Max(m_i) = m_top = 173 GeV and the vacuum energy is ρ_0 = (2.3 × 10⁻³ eV)⁴. Since α is field dependent, this is a highly disruptive contribution to the effective potential of φ [20].
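The displayed numerical estimate in the paragraph above was also lost in extraction. A rough, independent back-of-the-envelope version (assuming δα ≈ 10⁻⁶ in absolute terms and a single loop suppression of 1/(4π)²) gives:

\[
\delta\rho_{\mathrm{vac}} \;\sim\; \frac{\delta\alpha}{(4\pi)^2}\, m_{\mathrm{top}}^4
\;\approx\; \frac{10^{-6}\,(173\ \mathrm{GeV})^4}{16\pi^2}
\;\approx\; 6\times 10^{36}\ \mathrm{eV}^4
\;\approx\; 10^{47}\,\rho_0 .
\]

If δα is instead read as 10⁻⁶ × ᾱ, the result is roughly two orders of magnitude smaller, but still exceeds the observed vacuum energy by more than forty orders of magnitude.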
III. The flatness of the quintessence potential
Even if the quintessence is completely decoupled from the Standard Model, small changes in the parameters of the dark sector can lead to overwhelmingly large contributions to the quintessence effective potential. An important question is then: given the vacuum energy contribution of equation (4), how can the quintessence potential be sufficiently flat? Excessive fine-tuning is one option. In order for the potential to be sufficiently flat, not only the value of the potential and its gradient need to be tuned, but also higher orders in the Taylor expansion around the present-day value. Suppose that an evolution of the quintessence field by δφ causes a variation, δα_tot, in a dark sector coupling constant under which matter of mass m_i is charged. Imposing that the value of the potential over this range does not exceed ρ_0 then requires cancellations of the k-th order in a Taylor expansion to at least one part in [13], for k ≤ k_max = floor[ ln( m_i⁴ / ((8π)² ρ_0) ) / ln( 4π/δα_tot ) ]. For example, if the dark matter has mass m = 100 GeV and is charged under the gauge group with a coupling constant that changes by δα_tot = 10⁻², the quintessence potential needs to be tuned up to order k_max = 16. The total fine-tuning (on top of that required by the original cosmological constant problem) is the product of the required fine-tuning of the individual operators and is given by [13]: In the absence of this tuning, φ could get stuck in a de Sitter minimum or rapidly evolve towards a crunch. A second option is that a new symmetry or mechanism cancels all undesirable contributions to the potential with exceptional accuracy. Unbroken global supersymmetry sets STr(M^n) = 0 [21], but this is no longer true in supergravity [22,23]. A new cancellation mechanism may be intrinsically string theoretic and appear unexpected from a supergravity viewpoint. While this is an intriguing possibility, no such mechanism has yet been identified in the low-energy theories arising from string compactifications. The discovery of such a mechanism through careful string theory calculations would strengthen the case for the cosmology proposed in [2], and possibly for the correctness of equation (1).
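A quick numerical check of the k_max expression as written above; this sketch simply reproduces the m = 100 GeV, δα_tot = 10⁻² example quoted in the text, and the helper name and units are my own choices.

```python
import math

# Order of the Taylor expansion up to which tuning is required
# (m in eV, rho0 in eV^4, dalpha dimensionless).
def k_max(m_eV, dalpha, rho0_eV4=(2.3e-3) ** 4):
    ratio = m_eV ** 4 / ((8 * math.pi) ** 2 * rho0_eV4)
    return math.floor(math.log(ratio) / math.log(4 * math.pi / dalpha))

# Example quoted in the text: m = 100 GeV, delta-alpha_tot = 10^-2.
print(k_max(100e9, 1e-2))  # -> 16
```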
A third option is that φ couples exclusively to very light states, so that equation (4) gives a negligible contribution to the effective potential of φ (cf. [15] for an example). This may be realised rather naturally if m_i⁴(φ) ≪ V_0(φ) as φ descends the quintessence potential. In the present era, such fields should be no more massive than m_i ≲ 4 × 10⁻² eV to allow for O(1) changes in parameters without spoiling the quintessence potential.
This solution is comparatively appealing, but has two important caveats. First, to convincingly realise such a mechanism one must demonstrate that the contribution from the second term of the Coleman-Weinberg potential (3) is negligible. If the cutoff Λ is close to the Planck scale, this may require the stricter limit m_i ≲ O(H_0), in which case φ can only couple to other quintessence fields.
Second, the parametrically large hierarchy between particle physics mass scales and the vacuum energy requires that φ interacts extremely weakly with other moduli. For example, if the evolution of φ changes the total volume of the compactification or the string coupling constant, the spectrum of massive states will change, and the vacuum energy problem is re-introduced.
To illustrate how sharp this decoupling must be, suppose that a Standard Model gauge coupling, g, is controlled by a volume modulus, V SM = 1/g 2 . For concreteness, we take α = 1/25, as is appropriate for Grand Unified Theories. As φ evolves, V SM must stay fixed to a high accuracy, or the Standard Model vacuum energy corrections dominate over the quintessence potential. This requires, where we have again set m = m top and require δV SM < ρ 0 . Such a rigidity of the Standard Model cycle can be challenging to realise when all fields are (at least gravitationally) coupled, and φ evolves substantially.
In closing, we recall that the conjecture (1) is violated if the potential for φ is additively combined with the Standard Model Higgs potential, V_H = λ_H (|H|² − v²/2)², and evaluated at H = 0 [4]. After first identifying this issue, reference [4] considered a simple modification of the coupling between the Higgs field and φ that avoids this problem: We note that equation (9) leads to substantial variations in the Higgs sector parameters, and consequently to large quantum corrections to the potential. These models must then realise either of the first two options identified in this paper to explain the present-day accelerated expansion through quintessence.
IV. Conclusion
The drastically simple condition (1) has been proposed to delineate the 'swampland' of theories that cannot be embedded into any consistent theory of quantum gravity. The current status of this conjecture is highly uncertain and controversial [3][4][5][6][7][24][25][26][27][28], in particular as detailed calculations demonstrating the failures of apparent counter examples are still lacking. Equation (1) excludes de Sitter vacua, but is compatible with certain models of quintessence. A key prediction of reference [2] is that such models cause cosmological variations in the couplings of dark matter and other dark sector fields. In this note, we have considered the theoretical implications of this proposed cosmology, and we have shown that they suffer from severe quantum instability problems. Variations in the couplings of massive states lead to large contributions to the vacuum energy that must be cancelled to an incredible accuracy. This instability problem is distinct from the cosmological constant problem as well as the regular fine-tuning problem of quintessence models.
We have shown that if the quintessence models of [2] are realised in nature, one out of three conditions must hold: 1) the theory is incredibly fine-tuned; 2) there is a new, fantastic mechanism that surpasses even supersymmetry in taming dangerous quantum corrections; or 3) the quintessence field couples only to light states.
These conditions severely restrict the realisations of these models in any quantum theory, including string theory. | 2018-09-03T21:41:25.000Z | 2018-09-03T00:00:00.000 | {
"year": 2019,
"sha1": "dfd9e47619d509564fe79c295100e3a7cad51dc5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2018.11.001",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "dfd9e47619d509564fe79c295100e3a7cad51dc5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14050566 | pes2o/s2orc | v3-fos-license | A network pharmacology approach to establish the pharmacological mechanism of JiaWeiXianJiTang on inflammatory bowel disease
The aim of the present study was to characterize the molecular mechanism of the effective components of JiaWeiXianJiTang (JWXJT), a traditional Chinese medicine, on inflammatory bowel disease (IBD) using network pharmacology technology. Data regarding natural molecules of JWXJT, targets of IBD, and interactions between natural molecules and IBD targets were screened. Networks of the interactions between natural molecules and IBD targets were drawn using Cytoscape v3.2.1. As a result of screening, 205 interactions were identified between 673 natural molecules and 76 targets of IBD. By analyzing the effective components, the complex effect mechanism of JWXJT on IBD was identified to be via enhancement of the immune system and inhibition of novel blood vessel growth through 15-hydroxyprostaglandin dehydrogenase, 5′ AMP-activated protein kinase, interleukin-2 and macrophage colony-stimulating factor-1, resulting in inflammation inhibition. In addition, JWXJT ameliorated IBD by antagonizing various molecular targets of IBD through its effective components, inhibiting inflammatory reactions, improving patient quality of life, as well as reducing the incidence of cancer.
Introduction
Tumor-promoting inflammation, recognized as the eighth hallmark of cancer (1), is highly involved in tumor growth, invasion and metastasis. Inflammatory bowel diseases (IBDs), including ulcerative colitis (UC) and Crohn's disease (CD), are associated with chronic relapsing inflammation of the intestinal tract of unknown etiology (2). IBD is somewhat common in western countries, with a prevalence rate of 100-200/100,000. Over the past 10 years, the incidence of IBD has increased by approximately 10-20 times in China (3). Given the serious harm and higher malignant transformation probability associated with IBD, increasing numbers of individuals are becoming concerned that this disease is a type of precancerous lesion (4). Currently, it is proposed that IBD is caused by the multifactorial interactions between environment, gene, infection and immune disorders (5). Activation of the intestinal immune system and non-specific immune system, which leads to immunoreaction and the inflammatory response, is important in the pathogenesis of IBD.
XianJiTang, which has marked curative effects on chronic diarrhoea, was created by Zhu et al (6). Long-term clinical practice and research have demonstrated its efficacy (7,8). JiaWeiXianJiTang (JWXJT), which is comprised of XianJiTang and JianPi Chinese medicine, is administered to patients for diarrhoea, colorectal cancer or advanced colorectal cancer surgery. As they are inherently complex and are comprised of numerous components, a characteristic of traditional Chinese medicine (TCM) recipes is that they affect multiple targets and interact with other herbal medicines. Studies investigating TCM at the cell and molecular levels are difficult to conduct due to the unknown effective components and mechanisms of action, as well as the unstable herbal quality. The present study analyzed the effective components of JWXJT and the therapeutic targets of IBD, and constructed a relational network between them. Based on the results of network analysis, the therapeutic targets of IBD and the mechanisms of JWXJT were characterized at the cell and molecular levels, and the regulatory interactions were analyzed in their entirety.
TCM-potential target database (PTD). The TCM-PTD (http://pharminfo.zju.edu.cn/ptd), which is dedicated to providing accurate potential targets of TCM predicted using state-of-the-art machine learning approaches, comprises three databases: 'compounds', 'target', and 'total-relationship'. These three databases describe the sources of the compounds, the targets, and the interactions between the compounds and targets of the herbs.
Data analyses. The compounds of each herb in JWXJT and the targets of IBD were screened in the TCM-PTD database. Following virtual screening in the TCM-PTD database, repetitive entries and entries with docking scores of less than five were discarded, and the final data were used for further network construction. Network construction was performed as follows: i) A compound-compound target network was established by linking chemical compounds and corresponding targets; ii) a herb-compound target-IBD target network was constructed by connecting the 13 JWXJT herbs, the corresponding compound targets, and the IBD targets that interacted with the compound targets. The networks were generated using the network visualization software Cytoscape version 3.2.1 (http://cytoscape.org/), which is used to visualize biological pathways and networks of molecular interactions, and to interact with these networks via profiles of gene expression, annotations, as well as other state data. The software also offers a basic set of features for data integration, analysis and visualization for complicated network analysis.
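A minimal sketch of how such a compound-target edge list could be assembled into a network and exported for Cytoscape follows; the score threshold of five is taken from the description above, while the input file name, column layout, and output format are hypothetical.

```python
import csv
import networkx as nx

def build_network(edge_csv="tcm_ptd_hits.csv", min_score=5.0):
    """Build a compound-target graph from screened TCM-PTD records.

    Assumes a CSV with columns: compound, target, docking_score (hypothetical layout).
    """
    g = nx.Graph()
    with open(edge_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            score = float(row["docking_score"])
            if score < min_score:          # discard weak docking hits, as in the text
                continue
            g.add_node(row["compound"], kind="compound")
            g.add_node(row["target"], kind="target")
            g.add_edge(row["compound"], row["target"], score=score)
    return g

if __name__ == "__main__":
    net = build_network()
    print(net.number_of_nodes(), "nodes,", net.number_of_edges(), "interactions")
    # Simple interaction table that Cytoscape can import as an edge list.
    with open("network_for_cytoscape.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["source", "target", "score"])
        for u, v, d in net.edges(data=True):
            writer.writerow([u, v, d["score"]])
```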
Compound-compound target network analysis. The compound-compound target network is shown in Fig. 2, and includes 205 interactions between 673 compounds in JWXJT and 76 compound targets. In the network, certain targets demonstrated more interactions with compounds than others. This indicated that a large number of targets may be regulated by multiple compounds rather than just one. For example, M-CSF and PGDH were regulated by multiple JWXJT ingredients, including Saussurea lappa and Glycyrrhiza uralensis. In addition, AMPK, interleukin (IL), CYSLTR, MAPK, NO, PTGE, prostaglandin reductase 2 and TGF are regulated by more than one compound.
Herb compound target-IBD target network analysis. The herb compound target-IBD target network was constructed to identify the interactions between 13 herbs in JWXJT, and the corresponding compound targets and IBD targets. The network was composed of 205 interactions (13 herbs, 673 compound targets and 76 IBD targets; Fig. 3). Glycyrrhiza uralensis demonstrated the highest degree of distribution followed by Saussurea lappa, Codonopsis pilosula and Poria cocos, the interactions of which with other herbs were more than one, thus, demonstrating their significance in the network.
Herb compounds, which exert greater effects on 15-PGDH, AMPK, IL-2 and M-CSF-1, and lesser effects on CYSLTR, MAPK, NO and PTGE, may inhibit and reduce the expression levels of inflammatory mediators and inflammatory lesions in tissues. This explains the effect of JWXJT on the immune system, which may be referred to as FuZheng therapy according to TCM, whereby FuZheng refers to enhancing physical fitness and improving the body's resistance to disease, accompanied by other methods, including appropriate nutrition and functional exercises, so as to overcome disease and restore health.
The key molecule for negative regulation of inflammation is 15-PGDH. It degrades prostaglandin and antagonizes cyclooxygenase (COX)-2 in vivo (11). AMPK represses expression levels of inflammation inhibitory genes, such as TNF-α, IL-1β, IL-6 and inducible nitric oxide synthase (iNOS), reduces the bioactivity of nuclear factor (NF)-κB, promotes the expression of nicotinamide phosphoribosyltransferase and peroxisome-proliferator-activated receptor γ coactivator (PGC)-1α, increases NAD + content, and enhances the acetylation enzyme activity of Sirtuin 1 (12)(13)(14)(15)(16)(17). In the early stages of IBD, p53 protein overexpression and microsatellite instability are demonstrated, which are early events in the occurrence and progression of IBD (18,19). However, p53 represses the inflammatory response, which is regulated by NF-κB (20). The expression of iNOS has been identified to be positively correlated with the expression of hypoxia-inducible factor-1 in IBD patients (21), and a negative correlation was identified between IBD and ILs (22,23). Granulocyte-macrophage colony-stimulating factor (GM-CSF) and TGF are highly expressed in IBD (24,25). GM-CSF therapy reduces inflammation in the colon of mice (26) and TGF-β promotes restoration of the intestinal mucosa (27). MAPK, an important regulatory factor, limits inflammatory responses and promotes the resolution of inflammation (28). Furthermore, anti-TNF monoclonal antibody therapy effectively blocks IBD (29). Certain types of TCM treatment enhance the immune system function of IBD patients by reducing TNF-α expression levels (30).
The most common type of chronic intestinal inflammation associated with colorectal cancer is IBD, consisting of UC and CD. In younger patients, the inflammation is more severe and the development of IBD into colorectal cancer is considered to be more dangerous. Colorectal cancer caused by IBD is referred to as colitis-associated cancer (CAC), which accounts for 15% of the cause of mortality in IBD patients (31). The study by Kassam et al (31) revealed that the formation of CAC is an 'inflammation-dysplasia-carcinoma' sequential pathological process, and multiple immune cells, cytokines and other immune mediators are involved in the carcinogenesis of colorectal cancer. CAC is predominantly caused by aggravation of IBD by immunological dysfunction, and abnormal activation of the NF-κB and Janus kinase/signal transducers and activators of transcription signaling pathways. A large quantity of inflammatory cells and pro-inflammation cytokines, such as TNF, IL-6, IL-17 and IL-23, in the intestines activate reactive oxygen species and reactive nitrogen intermediates, and alter biological processes, including cellular growth, apoptosis and proliferation (32). Furthermore, consistent chronic inflammation induces the dysregulation of signaling pathways, such as the Wnt, p53, KRAS and TGF signaling pathways, resulting in tumor development (19,20). Interferon-γ, TGF-β and IL-17 in patients with DC and colon cancer exert inflammatory and immune regulation functions (33). In addition, NF-κB, TNF-α, IL-1, IL-6, IL8 and IL-27 have significant roles in CAC (34)(35)(36). The expression of protease activated receptor-2 (PAR-2) positively correlated with the expression of COX-2 in patients with UC, and the increased expression levels of PAR-2 and COX-2 promoted the occurrence and development of colon cancer by cooperating with TF and PGE-2 (37). A previous study demonstrated that granulocyte-colony stimulating factor (G-CSF) expression in a mouse model increased during the course of CAC development. Furthermore, inhibiting the secretion of G-CSF hindered the process of CAC (38).
Figure 4. Chinese herbal medicine and target network construction. Green rectangles represent ingredients of JWXJT and the yellow circles represent the screened targets. JWXJT, JiaWeiXianJiTang. 15-PGDH, 15-hydroxyprostaglandin dehydrogenase; PRKAA1, 5'-AMP-activated protein kinase catalytic subunit α-1; PRKAB1, 5'-AMP-activated protein kinase subunit β-1; CYSLTR2, cysteinyl leukotriene receptor 2; MAP2K1, dual specificity mitogen-activated protein kinase kinase 1; IL1R1, interleukin-1 receptor type I; IL2RA, interleukin-2 receptor α chain; IL2RB, interleukin-2 receptor subunit β; CSF1R, macrophage colony-stimulating factor 1 receptor; MAPK8, mitogen-activated protein kinase 8; MAPKKK9, mitogen-activated protein kinase kinase kinase 9; NOR, nitric oxide reductase; NOS2A, nitric oxide synthase, inducible; PTGER4, prostaglandin E2 receptor EP4 subtype; PTGR2, prostaglandin reductase 2; TGFB1, transforming growth factor β-1; TNFRSF1B, tumor necrosis factor receptor superfamily member 1B.
In conclusion, the network pharmacology analyses revealed that the therapeutic effect of JWXJT on IBD functioned by affecting molecular targets of IBD via the effective compounds of JWXJT, for example, by antagonizing various pathogenic links of IBD, as well as inhibiting inflammatory responses in IBD. In addition, the scientific theory of the TCM, Fuzheng Quxie, used modern biological evidence to explain the TCM principles of monarch, minister, assistant and guide. The present study characterized, in detail, the action mechanism of JWXJT, which included enhancing the immune system and restraining novel blood vessel growth by regulating 15-PGDH, AMPK, IL-2 and M-CSF-1, as well as inhibiting adverse inflammatory reactions. Network pharmacology is able to predict the underlying mechanisms of TCM, and the current results provide a direction for future complex basic biological studies, which will, in part, prevent waste of resources. | 2018-04-03T05:46:47.679Z | 2017-02-06T00:00:00.000 | {
"year": 2017,
"sha1": "a231ab2e15ed01e826e696ce8c97777c5f9ac29d",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/br/6/3/272/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a231ab2e15ed01e826e696ce8c97777c5f9ac29d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
267227648 | pes2o/s2orc | v3-fos-license | Three Rounds of Read Correction Significantly Improve Eukaryotic Protein Detection in ONT Reads
Background: Eukaryotes' whole-genome sequencing is crucial for species identification, gene detection, and protein annotation. Oxford Nanopore Technology (ONT) is an affordable and rapid platform for sequencing eukaryotes; however, the relatively higher error rates require computational and bioinformatic efforts to produce more accurate genome assemblies. Here, we evaluated the effect of read correction tools on eukaryote genome completeness, gene detection and protein annotation. Methods: Reads generated by ONT of four eukaryotes, C. albicans, C. gattii, S. cerevisiae, and P. falciparum, were assembled using minimap2 and underwent three rounds of read correction using flye, medaka and racon. The generated consensus FASTA files were compared for total length (bp), genome completeness, gene detection, and protein annotation by QUAST, BUSCO, BRAKER1 and InterProScan, respectively. Results: Genome completeness was dependent on the assembly method rather than on the read correction tool; however, medaka performed better than flye and racon. Racon performed significantly better than flye and medaka in gene detection, while both racon and medaka performed significantly better than flye in protein annotation. Conclusion: We show that three rounds of read correction significantly affect gene detection and protein annotation, which are dependent on assembly quality in preference to assembly completeness.
Introduction
Oxford Nanopore Technology (ONT), a third-generation sequencing technology, serves as a platform to sequence small to large and multiplex genomes and is currently widely used globally, especially in low-and mid-income countries, due to its simplicity, feasibility, and sustainability in both medical research and clinical settings [1,2].The main advantage of ONT is the generation of real-time analysis using the user-friendly interface, EPI2ME Agent, with no bioinformatic expertise required, allowing rapid and fast detection of microbe identification and antimicrobial resistant genes (AMR) [3,4].The agile and simple library preparation for ONT sequencing without the biased PCR amplification step is another significant advantage [5].Furthermore, ONT overcomes the problems observed in next-generation sequencing (NGS) in sequencing genomic repeats and the production of incompletely assembled genomes [6].ONT sequencing generates 'long-enough' reads to exceed the length of repeated regions and generates near-complete assemblies in which the location of resistant genes can be detected-i.e., chromosomal vs. plasmid [7,8].
Despite the advantages of ONT and the rapid advancement of the technology since its development, the major shortcoming of this technology is the production of relatively high error rates (~10-15%) compared to NGS, when using R9 flow cells [9].Although increasing the depth of ONT reads can produce contiguous assembled genomes, the errors accumulate as the sequencing depth increases [10].ONT reads often require read correction with short reads to generate complete and robust genome assemblies.The hybrid genome assemblies produced using both long and short sequencing reads (with sufficient depth of both short and long reads), enhance the accuracy of assembled genomes for downstream analysis [11].However, having access to both long and short sequencing platforms and the performance of two sequencing experiments on a single sample is impractical-especially in low-and middle-income countries and in clinical settings where prompt diagnoses are important.Therefore, there is a need for alternative low-cost methods to obtain more accurate genome assemblies from ONT reads.
Computational and bioinformatics tools analysing ONT reads are freely available and rapidly expanding. These tools can be counted as a reasonable and low-cost option to reduce error rates post-assembly. These tools use varied algorithms that are designed to identify and resolve sequencing errors to produce not only a complete but also an accurate genome assembly, though the output of the read correction step is reliant on the applied methods and their specific parameters [12]. Several studies are benchmarking freely available read correction tools and their impact on downstream analysis [13][14][15][16]. Among the several available read correction tools, flye, medaka and racon are most commonly used for ONT reads. While the flye read assembly and correction tool is based on the generalized de Bruijn graph, medaka and racon are tools created to outperform graph-based methods, generating genomic consensus in much faster time [12,13,16]. The process of benchmarking freely available read correction tools holds significant importance within the scientific community, as it plays a pivotal role in advancing the research domain, allowing improved analytical precision and resolving critical issues.
Most benchmarking studies focus on prokaryote genome assemblies rather than eukaryote.Whilst ONT has become an important platform for eukaryotic DNA sequencing, allowing an in-depth analysis of complex eukaryotic DNA sequences for virulence factors and gene annotation, there is a need to benchmark the impact of read correction tools on eukaryotic genomes and their downstream analysis.
In this study, we retrieved ONT sequencing reads from the Sequencing Read Archive (SRA)-NCBI of four pathogenic eukaryotes: Candida albicans, Cryptococcus gattii, Saccharomyces cerevisiae, and Plasmodium falciparum, and evaluated the impact of applying three read correction tools: flye, medaka, and racon, on genome length, fragmentation and completeness, and accurate gene structure, and analysed and classified eukaryotic functional proteins.The selection of these organisms was primarily motivated by the availability of high-quality sequencing data in the SRA-NCBI database through ONT methods.This choice was further supported by their significance as model organisms, as exemplified by S. cerevisiae, and their significance as pathogens.
The quality of the generated consensus FASTA files from minimap2, flye, medaka, and racon (n = 24 per species, n = 96 in total) was assessed by QUAST (version 5.0.2) using the LG parameter. The total length (bp), total aligned length (bp), and GC% were evaluated [22].
Statistical analysis was performed with Bonferroni's multiple comparison one-way ANOVA by GraphPad Prism (Boston, MA, USA) (version 8.0.1) to determine significant differences (p < 0.05, p < 0.001) existing among the consensus FASTA files generated by minimap2 before and after read correction with flye, medaka and racon, in QUAST-based assembly statistics, gene and protein detection/prediction by BRAKER1 and InterProScan.
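A minimal sketch of the statistical comparison described above (one-way ANOVA followed by Bonferroni-corrected pairwise tests); the group labels reflect the four processing steps, but the toy values are placeholders, not data from this study.

```python
from itertools import combinations
from scipy import stats

# Placeholder measurements (e.g., total length in Mbp) per correction step.
groups = {
    "minimap2": [12.1, 12.4, 12.2, 12.3],
    "flye":     [11.8, 11.9, 12.0, 11.7],
    "medaka":   [12.3, 12.5, 12.4, 12.6],
    "racon":    [12.2, 12.4, 12.3, 12.5],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with a Bonferroni correction (multiply p by the number of comparisons).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```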
Results and Discussion
Eukaryotic whole genome sequencing provides comprehensive insights into their complex genomes.ONT sequencing is a practical long-read sequencing platform that enables rapid and cost-effective identification of strains, and detection of virulence factors and proteins in both research and clinical settings.However, the relatively higher error rates produced by ONT reads require computational and bioinformatics efforts to produce contiguous and accurate eukaryotic genome assemblies.In this study, we examined the effect of three rounds of read corrections using flye, medaka, and racon after assembling ONT reads to a reference genome using minimap2.The evaluation was based on the genome total length (bp) and GC% produced by QUAST, genome completeness detected by BUSCO, gene prediction by BRAKER, and protein annotation by InterProScan.We used default parameters and datasets provided by the bioinformatic tools.
QUAST analysis assessed the quality and accuracy of genome assemblies pre- and post-three rounds of read correction. The total length (bp) was significantly (p < 0.05) higher after read alignment with minimap2 against the reference genomes than post-read correction of C. gattii, S. cerevisiae, and P. falciparum (Table 1). Nevertheless, the median total length after read correction was the lowest after correction with flye and significantly (p < 0.05) improved with the second and third rounds of correction with medaka and racon, respectively (Table 1). The improvement of assemblies' total length is a common feature. Studies have reported improvements of up to 57% in genome assemblies; however, in this study, we noticed improvements of only 9.36% [8,16]. The variation in improvement percentage depends upon various factors, such as the organism sequenced, DNA library preparation, genome assembly, and read correction tools used. Although the total aligned length (bp) was highest after minimap2 assembly, it was not significant (p > 0.05) (Table 1) when compared to assemblies after read correction. The total aligned length was the highest after the second round of read correction with medaka and was the lowest after the third read correction with racon. The GC% was significantly higher (p < 0.05) (Table 1) after read correction with flye and decreased after the second and third rounds of read correction. In line with other studies, we previously noticed similar outcomes; although medaka and racon had significantly lower GC%, both read correction tools performed better in the overall genome assembly, especially when combined [16,[29][30][31]. BUSCO provides a quantitative measure of genome completeness to evaluate the quality of genome annotation. Among the four eukaryotic species examined in this study, medaka showed improvement over minimap2 only in C.
albicans assembled genomes regarding genome completeness (Figure 1a). When comparing the read correction tools, medaka was also superior to flye and racon in genome completeness in all four species samples (Figure 1). While the usage of medaka for diploid cells has been controversial because of the diploid nature of yeast, we found that the newer version of medaka provided more accurate assemblies. These results are in line with Sigova et al. [32]. In their study, they reported that read correction with medaka is superior to read correction with racon in fungal pathogens. In addition, the percentage of genome completeness significantly decreases (by ~40%) when a reference is added, even after using six read correction tools [32]. Moreover, Zhang et al. showed that medaka performance was superior against other read correction/polishing tools in which medaka improved the continuity and reduced mismatches in S. cerevisiae-assembled genomes [33]. In all species, except P. falciparum, flye was superior to racon in genome completeness and duplication rates (Figure 1). The rate of the fragmented genome was comparable in all species for all three rounds of read correction (Figure 1). Genome completeness is majorly affected by sequencing methods and genome assembly tools rather than read correction tools [33]. The higher number of genome
BRAKER1 is a bioinformatic tool commonly utilized for gene prediction in eukaryotic genomes using GeneMark-ET.Ideally, eukaryotic genome assemblies are combined with RNA-seq data to improve gene prediction accuracy.However, the ability to combine both DNA and RNA-seq data is not often available in real scenarios.Here, we performed BRAKER1 analysis on assembled and corrected genomes to evaluate the total number of CDs, forward CDs, reverse CDs, mRNA, and introns (Figures 2-5).The total numbers of CDs, forward CDs, and reverse CDs were significantly higher after the third round of read correction with racon (p < 0.05 vs. minimap2, p < 0.001 vs. flye, and p < 0.05 vs. medaka) (Figures 2-6).Surprisingly, the total number of CDs increased after the first round of read correction with flye but decreased after the second round of read correction with medaka (Figures 2-5).In the samples of C. albicans, C. gattii, and P. falciparum, the total number of CDs after read correction with racon was higher than flye by 55273, 176705, and 63178, respectively.However, the total number of CDs in the samples of S. cerevisiae was lower after read correction with racon.The effect of genome assembly and read correction pipelines on the S. cerevisiae genome has been well characterised [33].The authors concluded that although read correction improved contiguity and coverage, sequencing depth and choice of sequencing method affect S. cerevisiae genome annotation [33].The number of introns showed a parallel significance pattern to the total number of CDs.The total number of introns was significantly higher after read correction with racon (p < 0.05 vs. minimap2, p < 0.001 vs. flye and medaka) (Figures 2-6) in the samples of C. albicans, C. gattii and P. falciparum, but not S. cerevisiae.Similarly, Shin et al. 
[35] found that applying the Nanopolish read correction tools to reads assembled by the Canu-SMARTdenovo method increased the detection of CDs and introns when using MAKER2 as an annotation tool.Interestingly, the number of introns after the first round of read correction with flye was significantly higher (p < 0.05) than after genome assembly with minimap2 (Figure 6).On the contrary, the number of mRNA coding genes was the highest after genome assembly with minimap2.Among the three rounds of read correction, the highest number of mRNA coding genes was detected after the second round of read correction with medaka, which was only significant against racon (p < 0.05) (Figures 2-6).Given the size of mRNA coding gene, which is ~1500 nucleotides in average, detecting mRNA coding genes is very critical [36,37].Like other coding genes, these genes undergo quality control and trimming steps to remove low-quality and/or adapters present in the sequencing reads.Hence, the trimming process by read correction tools can generate even smaller gene sizes which no longer map to the reference genomes in the databases.Although the number of mRNA coding genes was lower after the third round of read correction with racon, this may result from removing all false-positive genes detected post-genome assembly with minimap2.Based on BRAKER1 gene prediction accuracy results, we investigated the effect of read correction tools on protein annotation by InterProScan with ProSiteProfiles analyses, describing protein domains, families, and functional sites.The overall hits of protein annotation were improved with each round of read correction in all four species, with racon being the top-performing read correction tool (Figure 7a).Several protein annotations were only detected after applying a read correction to the assembled genomes, such as TGF-beta binding (IPR017878), colipase family (IPR001981), and Cytochrome c class II (IPR002321) in C. gattii samples; streptavidin (IPR005468), Cytochrome c, class II (IPR002321), and GATA-type zinc finger (IPR000679) in S. cerevisiae; and platelet-derived growth factor (PDGF) (IPR000072), coronaviridae zinc-binding (CV ZBD) (IPR000072), Based on BRAKER1 gene prediction accuracy results, we investigated the effect of read correction tools on protein annotation by InterProScan with ProSiteProfiles analyses, describing protein domains, families, and functional sites.The overall hits of protein annotation were improved with each round of read correction in all four species, with racon being the top-performing read correction tool (Figure 7a).Several protein annotations were only detected after applying a read correction to the assembled genomes, such as TGF-beta binding (IPR017878), colipase family (IPR001981), and Cytochrome c class II (IPR002321) in C. gattii samples; streptavidin (IPR005468), Cytochrome c, class II (IPR002321), and GATA-type zinc finger (IPR000679) in S. cerevisiae; and platelet-derived growth factor (PDGF) (IPR000072), coronaviridae zinc-binding (CV ZBD) (IPR000072), GATA-type zinc finger (IPR000679), and C-terminal cystine knot (IPR006207) in P. falciparum samples (Figure 7a).Protein annotation hits of IPR002321 detected by medaka were significantly (p < 0.05) higher than minimap2, flye, and racon in C. 
albicans, whereas protein annotation hits of IPR00724 and detected by medaka were significantly (p < 0.05) higher than minimap2, and protein annotation hits of IPR002321 detected by medaka and racon were significantly (p < 0.05) higher than minimap2 and flye (Figure 7b).In S. cerevisiae samples, protein annotation hits of IPR007112 detected by racon were significantly higher than hits detected by minimap2 (Figure 7b).Protein annotation hits of IPR001938 detected by medaka were significantly (p < 0.05) higher than hits detected by flye in P. falciparum samples (Figure 7b).
GATA-type zinc finger (IPR000679), and C-terminal cystine knot (IPR006207) in P. falciparum samples (Figure 7a).Protein annotation hits of IPR002321 detected by medaka were significantly (p < 0.05) higher than minimap2, flye, and racon in C. albicans, whereas protein annotation hits of IPR00724 and IPR001002 detected by medaka were significantly (p < 0.05) higher than minimap2, and protein annotation hits of IPR002321 detected by medaka and racon were significantly (p < 0.05) higher than minimap2 and flye (Figure 7b).In S. cerevisiae samples, protein annotation hits of IPR007112 detected by racon were significantly higher than hits detected by minimap2 (Figure 7b).Protein annotation hits of IPR001938 detected by medaka were significantly (p < 0.05) higher than hits detected by flye in P. falciparum samples (Figure 7b).To our knowledge, this is the first study to evaluate the effect of read correction tools for long-reads on gene prediction using BRAKER1 and protein annotation using Inter-ProScan.Although BUSCO analysis showed superior genome completeness to uncorrected assemblies, we found that read correction tools offer advantages over uncorrected assemblies in BRAKER1 gene detection and protein annotation using InterProScan with ProProfiles analysis.In this study, we showed that genome accuracy after three rounds of read correction is more vital for gene prediction and protein annotation than genome completeness.We proved that gene prediction accuracy relies on the quality of assembled genomes after read correction rather than the quantity or the number of present genes after genome assembly.In other words, a more accurate genome assembly leads to more reliable gene prediction and protein annotation [38,39].However, the gene completeness analysis could still be improved.The development of more robust read assembly and read correction tools and pipelines is still an area to explore.Studies have shown that the usage of mix-and-matched freely available read assembly and read correction tools significantly improves not only assembly parameters, but also antimicrobial resistant genes detection, plasmid identification and pan-genome analysis with and without using short sequencing To our knowledge, this is the first study to evaluate the effect of read correction tools for long-reads on gene prediction using BRAKER1 and protein annotation using InterProScan.Although BUSCO analysis showed superior genome completeness to uncorrected assemblies, we found that read correction tools offer advantages over uncorrected assemblies in BRAKER1 gene detection and protein annotation using InterProScan with ProProfiles analysis.In this study, we showed that genome accuracy after three rounds of read correction is more vital for gene prediction and protein annotation than genome completeness.We proved that gene prediction accuracy relies on the quality of assembled genomes after read correction rather than the quantity or the number of present genes after genome assembly.In other words, a more accurate genome assembly leads to more reliable gene prediction and protein annotation [38,39].However, the gene completeness analysis could still be improved.The development of more robust read assembly and read correction tools and pipelines is still an area to explore.Studies have shown that the usage of mix-and-matched freely available read assembly and read correction tools significantly improves not only assembly parameters, but also antimicrobial resistant genes detection, plasmid identification and pan-genome 
analysis with and without using short sequencing reads for read correction [14,16,[40][41][42].In addition, adjusting the read assembly and/or read correction tools parameters could be beneficial.Schiavone et al. [43] has docu-mented the importance of applying 'tailored' bioinformatics analysis.Obtaining complete sequences of chromosome and plasmid of Salmonella enterica was possible by modifying corErrorRate and corMincoverage parameters in Canu assembler [43].
In addition, improving the sequencing platform itself can reduce sequencing error rates and increase accuracy, which has been observed since the development of ONT from the production of R6 flow cells until now [44].ONT has recently introduced the flow cells (R.10.4.1) with a quality score >20.The preliminary outcome of these flow cells is very encouraging [45].The performance of the R10 flow cells outperforms the R9 flow cells, achieving a genome accuracy of >99% [45,46].However, to achieve near-complete genomes, short reads may still be required for read correction [47].The performance of the new R20 flow cells is still being investigated, and their combination with different read assembly and read correction tools is yet to be investigated.
Conclusions
The rapid development of whole-genome sequencing platforms has revolutionised their usage and application in research and clinical settings.Using both short-and longsequencing reads to produce hybrid genome assemblies is a very robust method for gene detection and protein annotation.However, access to both short-and long-sequencing platforms is an unrealistic scenario, especially in low-and mid-income countries.ONT serves as a reliable and relatively inexpensive long-reading sequencing platform.However, the major burden of this sequencing platform is the relatively higher error rate.Therefore, improving the sequencing reads generated by ONT by computational and bioinformatics tools is a logical and cost-effective option.
Numerous long-read correction tools are regularly generated aiming to achieve robust genome assemblies.These tools often use different bioinformatic algorithms.Benchmarking the freely available read correction tools is very important and drives the research field to better analysis resolution.This study showed that genome quality is more important than genome completeness.Although genome completeness was significantly higher in pre-read correction steps, significant improvement in gene prediction and protein annotation in eukaryotic genomes was noticeable after the second and third rounds of read correction.However, the assembled genomes can still be improved for better outcomes.Therefore, the investigation of several read correction tool combinations is required along with the improvement of ONT-sequencing technology.
Figure 6.Heatmap statistical analysis for BRAKER1 results.Bonferroni's multiple comparison one-way ANOVA was performed to determine significant differences (p < 0.05, p < 0.001) among minimap2 before and after read correction with flye, medaka and racon.
Figure 7. InterProScan analysis using ProProfile analysis for protein annotation in C. albicans, C. gattii, S. cerevisiae, and P. falciparum, (a) number of hits detected, and (b) the significant differences among read correction methods.Bonferroni's multiple comparison one-way ANOVA statistical analysis was performed to determine significant differences (p < 0.05, p < 0.001) existing among the different groups.
Table 1. Total length (bp), total aligned (bp), and GC% of ONT-sequencing reads aligned with minimap2 before and after applying the read correction tools.
QUAST-based assembly statistics including for C. albicans, C. gattii, S. cerevisiae, and P. falciparum assembled genomes with minimap2 pre-and post-read correction with flye, medaka, and racon.Bonferroni's multiple comparison one-way ANOVA statistical analysis was performed to determine significant differences (p < 0.05, p < 0.001) existing among the different groups. | 2024-01-26T16:10:38.195Z | 2024-01-24T00:00:00.000 | {
"year": 2024,
"sha1": "a85104567efe1ba4140ddc4a4f66133e79c17f7b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/12/2/247/pdf?version=1706101122",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc37209e5fd99a2555b0561a114b018e32cea178",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212965577 | pes2o/s2orc | v3-fos-license | The effect of water clover (Marsilea crenata) extract addition in egg yolk and skim milk extender on frozen goat semen quality
Frozen semen has a longer storage time, but the effect of cold shock during processing reduces semen quality. Antioxidants in water clover extract are expected to be able to maintain the quality of frozen semen. The study aimed to evaluate the effect of the addition of water clover extract (WCE) to the base semen diluent (egg yolk-skim milk) on the quality of post-thawing semen. The material used in this study was goat semen. There were four treatments: T0 as control (base dilution + 0% WCE), T1 (base dilution + 1% WCE), T2 (base dilution + 3% WCE), and T3 (base dilution + 5% WCE), with ten replications each. This study used a randomized complete design with ANOVA, followed by Duncan's Multiple Range Test. The results showed that the addition of water clover extract to the egg yolk skim milk extender had a significant effect (P < 0.05) on individual spermatozoa motility, viability and membrane integrity, but did not have a significant effect (P > 0.05) on sperm abnormalities. The research concludes that the addition of 3% water clover extract to the egg yolk skim milk extender was the best concentration to maintain the quality of frozen goat semen.
Introduction
Artificial Insemination (AI) is a reproductive technology that can improve the genetic quality of livestock and increase populations. The use of frozen semen for AI is advantageous because it can be stored for long periods in liquid nitrogen and distributed to various regions. Low quality of frozen semen is one of the factors causing low pregnancy rates. The addition of an appropriate diluent to the semen can maintain spermatozoa quality, because spermatozoa are damaged during the dilution, freezing, and low-temperature storage processes [1]. The pregnancy rate of artificial insemination using frozen semen in goats varies from 7 to 79% [2].
The quality of diluents will affect the quality of semen after thawing. In general, diluent for cryopreservation of sperm includes a non-penetrating cryoprotectant, a penetrating cryoprotectant, a buffer, one or more sugars, organic acid and antibiotics [3][4]. Skim milk used as an extender is often combined with egg yolk. Egg yolks have lipoprotein and lecithin which function is to protect spermatozoa from cold shock. Previous research revealed that goat semen diluted with egg yolk and skim milk produced better spermatozoa survival than coconut yolk, but did not provide good fertility results. Reduced fertility is thought to be related to the lack of ability of skim milk extender to prevent the damage of spermatozoa membrane. This damage is caused by the presence of free radicals produced by the metabolic process of spermatozoa, which continue to be active during cold storage. Free radicals that react with oxygen will produce reactive oxygen species (ROS) so that it will produce lipid peroxidation, and if there is excessive production of free radicals on the spermatozoa, it will cause oxidative stress.
Water clover extract contains flavonoid bioactive components, and previous research showed that it has antioxidant activity. The antioxidant content of water clover extract is expected to inhibit free-radical damage to spermatozoa. Through the antioxidant activity of its flavonoids, the supplemented semen extender is expected to protect semen quality against the effects of freezing. The present research aimed to evaluate the effect of water clover extract addition to an egg yolk-skim milk extender on frozen goat semen quality.
Semen collection and treatments
Semen was collected twice a week for 5 weeks from a healthy buck, aged 2.5 to 3 years with a body weight of 48 kg, using an artificial vagina after stimulation with an estrus doe. The quality standard for sample semen was individual motility ≥ 70% and sperm abnormality ≤ 10%. Fresh semen was diluted in egg yolk-skim milk as a base extender at a ratio of 1:10 (v/v). The semen sample was divided into four treatments, as follows: T0 (egg yolk-skim milk + 0% WCE), T1 (egg yolk-skim milk + 1% WCE), T2 (egg yolk-skim milk + 3% WCE), and T3 (egg yolk-skim milk + 5% WCE). The WCE preparation procedure was based on Sokunbi et al. [5].
Evaluation of semen quality and freezing-thawing process
Sperm motility, viability, plasma membrane integrity, and abnormality were observed before freezing and post-thawing. Sperm motility was assessed under a light microscope with 400x magnification based on the percentage of progressive sperm motility [6]. The sperm viability and abnormality were evaluated using an eosin-nigrosine staining procedure [7][8]. The hypo-osmotic swelling test evaluated the functional membrane integrity. Freezing and thawing process according to Wahjuningsih et al. [9].
Statistical analysis
This study used a completely randomized design with ten replications of each treatment. The data on semen quality traits were analyzed using analysis of variance, followed by Duncan's multiple range test to determine differences among the treatments. Statistical significance was set at P < 0.05.
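For context, a minimal Python sketch of this analysis pipeline is shown below. Duncan's multiple range test is not available in scipy/statsmodels, so Tukey's HSD is used here as a stand-in post-hoc test, and the motility values are invented placeholders rather than the study's data.

```python
# Hedged sketch: completely randomized design analysed by one-way ANOVA,
# followed by a post-hoc comparison (Tukey's HSD as a stand-in for Duncan's MRT).
# All values are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
data = {                                # post-thaw motility (%), 10 replicates each
    "T0_0%WCE": rng.normal(38, 3, 10),
    "T1_1%WCE": rng.normal(41, 3, 10),
    "T2_3%WCE": rng.normal(46, 3, 10),
    "T3_5%WCE": rng.normal(40, 3, 10),
}

# Omnibus ANOVA across the four treatments
f_stat, p_value = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post-hoc comparisons
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```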
Results and discussions
Sperm motility, viability, plasma membrane integrity, and abnormality before freezing and post-thawing decreased at all water clover extract (WCE) concentrations compared with fresh semen. A study of mangosteen peel extract supplementation in a Tris-aminomethane base diluent carried out by Isnaini [10] also found a decrease in post-thawing semen quality compared with pre-freezing and fresh semen. Sperm motility and viability before freezing and post-thawing with 0%, 1% and 5% WCE supplementation were statistically lower than with 3% WCE.
The decrease in semen quality after the freezing process is caused by spermatozoa experiencing cold shock due to the drastic drop in temperature during freezing [11][12][13]. The post-thawing decrease in the percentage of plasma membrane integrity occurs because plasma membranes consist of unsaturated fatty acids, which readily undergo lipid peroxidation. Antioxidants from water clover extract added to the skim milk-egg yolk diluent can maintain plasma membrane integrity: the antioxidants take up free-radical attacks, interrupting or preventing the action of free radicals and blocking the chain lipid peroxidation reactions that can damage the cell's plasma membrane. The integrity of the plasma membrane is essential because a damaged plasma membrane affects the metabolic processes associated with spermatozoa motility and viability [14]. The results showed that 3% WCE supplementation in the egg yolk-skim milk-based extender was the best for preserving semen quality before freezing and post-thawing compared with the other concentrations. The antioxidant content of the water clover extract can inhibit free-radical damage to spermatozoa. Free radicals cause lipid peroxidation reactions that can damage the lipid matrix structure [15]. Lipid peroxidation changes the structure of spermatozoa, especially the acrosome, and leads to loss of motility, rapid metabolic changes, and coating of intracellular components [16]. The antioxidants added to the diluent must be at an optimum concentration to protect the spermatozoa against lipid peroxidation, as excessive antioxidant administration is harmful to sperm quality. Tables 1 and 2 show that when water clover extract supplementation exceeds 3%, semen quality decreases. The antioxidant concentration added can affect the rate of oxidation, but at too high a concentration antioxidant activity becomes pro-oxidant [9].
Conclusion
The addition of 3% water clover extract to the egg yolk-skim milk base extender was the best concentration for maintaining the quality of frozen goat semen. | 2019-12-12T10:50:11.496Z | 2019-12-05T00:00:00.000 | {
"year": 2019,
"sha1": "70ee7bb1f1be917446e37819e482faf7ef7f3208",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/387/1/012103",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3fa42b4bb78719b49775578446ae38a69fae18d2",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
53231704 | pes2o/s2orc | v3-fos-license | Heterologous Expression of the Nybomycin Gene Cluster from the Marine Strain Streptomyces albus subsp. chlorinus NRRL B-24108
Streptomycetes represent an important reservoir of active secondary metabolites with potential applications in the pharmaceutical industry. The gene clusters responsible for their production are often cryptic under laboratory growth conditions. Characterization of these clusters is therefore essential for the discovery of new microbial pharmaceutical drugs. Here, we report the identification of the previously uncharacterized nybomycin gene cluster from the marine actinomycete Streptomyces albus subsp. chlorinus through its heterologous expression. Nybomycin has previously been reported to act against quinolone-resistant Staphylococcus aureus strains harboring a mutated gyrA gene but not against those with intact gyrA. The nybomycin-resistant mutants generated from quinolone-resistant mutants have been reported to be caused by a back-mutation in the gyrA gene that restores susceptibility to quinolones. On the basis of gene function assignment from bioinformatics analysis, we suggest a model for nybomycin biosynthesis.
Introduction
Actinobacteria represent a prominent source of natural products with potential industrial applications. The genus Streptomyces is especially well known to produce a diverse spectrum of compounds with antibacterial, antifungal, antitumor and even insecticide and herbicide activity [1][2][3]. The increasing amount of sequenced microbial genomes has provided insight into the unprecedented potential of actinobacteria to biosynthesize natural products [4,5]. Generally, dozens of various secondary metabolite clusters are encoded in their genomes. However, these clusters are often poorly expressed under standard cultivation conditions or even remain silent, thus preventing the isolation and characterization of the encoded compounds. Such uncharacterized clusters with unknown biosynthetic products are usually regarded as cryptic. Different approaches can be used to characterize cryptic clusters, including changing cultivation parameters (OSMAC approach), expression of pleiotropic regulatory genes, introduction of antibiotic-resistant mutations, and refactoring of the biosynthetic pathways [6][7][8][9]. Currently, characterization of the cryptic gene clusters encoding natural products often relies on expression of their biosynthetic pathways in the optimized surrogate strains called heterologous hosts or chassis strains. The heterologous expression approach has a number of advantages compared to other cluster characterization methods. The simplified metabolic background of the chassis strains facilitates the identification of natural products; fast DNA-recombineering methods in E. coli and DNA transfer methods into streptomycetes simplify biosynthetic studies, and high production yields enable product supply for structure elucidation and biological activity studies.
In this study, we report the identification and characterization of the previously uncharacterized nybomycin gene cluster from the marine strain Streptomyces albus subsp. chlorinus NRRL B-24108 through heterologous expression in Streptomyces albus Del14. Nybomycin was first isolated in 1955; however, the unique biological activity of the antibiotic was discovered only recently [10]. Nybomycin inhibits growth of quinolone-resistant Staphylococcus aureus by targeting the mutated enzyme gyrase. Interestingly, the intact gyrase encoded by the gyrA gene without the resistance mutation is not inhibited by the antibiotic. The rare nybomycin-resistant mutants derived from quinolone-resistant S. aureus have all been reported to contain the reverse mutation in the gyrA gene, causing loss of quinolone resistance. Despite this interesting mode of action, the biosynthetic gene cluster leading to nybomycin production remains unknown. Based on the cluster analysis, we also propose the biosynthetic route leading to the production of nybomycin.
Identification of the Nybomycin Gene Cluster through Its Heterologous Expression
In the course of systematic activation of cryptic secondary metabolite clusters from Streptomyces albus subsp. chlorinus NRRL B-24108 [3], a cluster annotated by the antiSMASH genome mining software [11] as "fatty acid metabolism cluster" was expressed in the heterologous host strains. For this purpose, a BAC 4N24 containing the cluster was isolated from the previously constructed genomic library of S. albus subsp. chlorinus and transferred into Streptomyces albus Del14 and Streptomyces lividans TK24 [12,13]. The obtained exconjugant strains Streptomyces albus 4N24 and Streptomyces lividans 4N24 as well as the corresponding control strains without the BAC S. albus Del14 and S. lividans TK24 were fermented in the production medium. LC-MS analysis of the exconjugant strains containing the heterologous cluster confirmed its successful expression in S. albus 4N24, as indicated by a new peak that was observed in the extract of the strain (Figure 1A,B and Figure S1). Expression of the cluster in S. lividans 4N24 did not lead to the production of any new compounds compared with the control strain.
Analysis of the S. albus 4N24 extract by high-resolution MS revealed that the identified peak corresponded to the compound with an [M + H]+ of 299.102 m/z and the deduced molecular formula C16H15N2O4 (Figure 1C). A search in a natural product database revealed that the identified compound might correspond to the antibiotic nybomycin (Figure 2). To verify this, a nybomycin standard (Santa Cruz Biotechnology, Inc., Dallas, TX, USA) was used. LC-MS analysis of the pure nybomycin, the S. albus 4N24 extract, and the S. albus 4N24 extract spiked with pure nybomycin confirmed that the new compound corresponded to nybomycin; the retention time and the mass spectrum of the new compound were identical to those of the pure standard (Figure S2). Additionally, to validate the identity of the detected compound as nybomycin, the former was purified. For this purpose, S. albus 4N24 was inoculated into 10 L of DNPM medium, and the culture broth of the strain was extracted with ethyl acetate. The compound was purified from the extract using normal-phase, size-exclusion, and reverse-phase chromatography. The purified compound, as well as the pure nybomycin standard, was used for NMR measurements. The recorded 1H-NMR spectra of the purified compound and of nybomycin were identical (Table S1; Figure S3), which unambiguously proved the identity of the former as nybomycin.
A sequence similarity search revealed that nine open reading frames within this region shared homology at the protein level with the genes involved in the biosynthesis of streptonigrin, which is structurally related to nybomycin (Figure 2).
Sequence analysis of the DNA fragment cloned in BAC 4N24 revealed the putative streptomycin-resistant gene, a hypothetical gene, a gene encoding an ATP-binding protein, and four additional hypothetical genes followed by the nybA gene encoding a putative 3-carboxy-cis,cis-muconate cycloisomerase at its 5' end (Table 1), which implied that the nybA gene might constitute the 5' end of the nybomycin cluster. The 3' end of the cloned region comprised the genes encoding a putative methyltransferase, two isopenicillin N synthases, a transporter, three transcriptional regulators, and a hypothetical protein (nybS, nybT, nybU, nybV, nybW, nybX, nybZ, and nybY, respectively). To clarify whether these genes were part of the nybomycin biosynthetic cluster, two BACs 4M14 and 6M11 that partially overlap with BAC 4N24 were isolated from the genomic library of S. albus subsp. chlorinus and expressed in S. albus Del14. Both the isolated 4M14 and 6M11 BACs completely covered the 5' end of the fragment cloned in the original BAC 4N24, which led to nybomycin production. Compared with BAC 4N24, BAC 4M14 lacked the region downstream of the nybR gene, while BAC 6M11 lacked the region downstream of the nybL gene ( Figure 3). The obtained strains S. albus 4M14 and S. albus 6M11 were analyzed together with S. albus 4N24 for nybomycin production. During the LC-MS analysis, no nybomycin was detected in the extracts of either the S. albus 4M14 or S. albus 6M11 strains ( Figure S4). Nybomycin was readily detectable in the extract of S. albus 4N24. These results give evidence that the 3'-terminal region of 4N24, which contains the genes downstream of nybR (nybS to nybZ), is essential for nybomycin production. Taken together, our results suggest that the genes from nybA to nybZ might constitute the nybomycin gene cluster, which is further supported by the fact that the genes from nybA to nybF and nybN to nybP share high levels of homology with genes in the biosynthetic cluster of streptonigrin (Table 1), a compound that is structurally similar to nybomycin (Figure 2).
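As a brief numerical aside on the compound identification above (not part of the original work), the deduced formula of the protonated ion can be checked against the observed m/z using standard monoisotopic atomic masses:

```python
# Hedged sketch: checking that the deduced formula of the [M + H]+ ion,
# C16H15N2O4, matches the observed m/z of 299.102. Monoisotopic atomic masses
# are standard values; this is an illustration, not the authors' code.
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}
ELECTRON = 0.00054858

formula = {"C": 16, "H": 15, "N": 2, "O": 4}   # composition of the protonated ion
mass = sum(MONO[el] * n for el, n in formula.items())
mz = mass - ELECTRON                           # singly charged cation
print(f"calculated m/z = {mz:.3f}")            # ~299.103, consistent with the observed 299.102
```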
Discussion
The antibiotic nybomycin was discovered in 1955 in the culture broth of streptomycetes A 717 [14]. The structural features of nybomycin as a fused pyridoquinolone ring system and an angularly fused oxazoline ring are of particular biosynthetic interest as they have not been reported for other natural products [15]. Despite the unique structure of nybomycin, its biosynthetic cluster and biosynthetic route have remained elusive. Only the results of feeding studies imply that acetate, methionine, and some non-identified shikimate-type intermediates serve as biosynthetic precursors for nybomycin [15,16]. In this article, we have described the identification of the nybomycin biosynthetic gene cluster from the marine streptomycete S. albus subsp. chlorinus NRRL B-24108 through its expression in the cluster-free heterologous host S. albus Del14.
Sequence analysis of the DNA fragment cloned in BAC 4N24 has revealed that a number of genes within this fragment are highly homologous to the genes involved in biosynthesis of the antibiotic streptonigrin (Table 1) [17]. Direct comparison of nybomycin and streptonigrin has shown distinct structural similarity, with both structures containing a diamino-substituted, six-membered ring (Figure 2). The structural similarity and the partial homology of the gene clusters suggest that nybomycin and streptonigrin biosynthetic routes can have some similar biosynthetic intermediates and enzymatic reactions.
Based on the sequence analysis and the results of BAC expression, we propose that the genes nybA to nybZ constitute the nybomycin gene cluster. The nybA gene, encoding a putative 3-carboxy-cis, cis-muconate cycloisomerase, is homologous to the streptonigrin biosynthetic gene stnL. The nybZ gene encodes a putative transcriptional regulator that might also participate in nybomycin biosynthesis. Similar to the streptonigrin pathway, the 3-deoxy-D-arabinoheptulosonate 7-phosphate (DAHP) synthase encoded by nybF is likely to catalyze the first reaction in the nybomycin biosynthetic route. DAHP synthase is responsible for the first reaction of the shikimate pathway-biosynthesis of 3-deoxy-D-arabino-hept-2-ulosonate 7-phosphate (DAHP) from phosphoenolpyruvate and D-erythrose 4-phosphate ( Figure 4) which is supported by the results of feeding experiments that have demonstrated that the carbons of the central ring of nybomycin are derived from a shikimate-type intermediate [15]. Catalyzing the first reaction, DAHP synthase regulates the amount of carbon entering the shikimate pathway and therefore can be responsible for its upregulation to provide sufficient amounts of biosynthetic precursors for nybomycin. The genes encoding enzymes responsible for the conversion of DAHP into chorismate are absent in the nybomycin cluster and in the streptonigrin pathway [17]. Therefore, the host's primary metabolism enzymes most likely overtake these biosynthetic steps. We propose that the second reaction catalyzed by the enzymes encoded in the biosynthetic cluster is the conversion of chorismate into 4-aminoanthranilic acid (Figure 4), which is a key intermediate in both nybomycin and streptonigrin biosynthesis (Figure 2). Isolation of 4-aminoanthranilic acid from the culture broth of the streptonigrin producer Streptomyces flocculus supports this suggestion [18]. Furthermore, 4-aminoanthranilic acid has also been shown to incorporate into streptonigrin [18]. We propose that the products of the nybC, nybD, nybE, and nybL genes might be responsible for the conversion of chorismate into 4-aminoanthranilic acid. The nybL gene, encoding putative amidohydrolase [19], has no homologue in the streptonigrin biosynthetic pathway. We suggest that the protein product of nybL may provide enough supply of ammonia for the amination of chorismate by anthranilate synthase [20], encoded by nybD, during the biosynthesis of 4-aminoanthranilic acid. In the streptonigrin biosynthesis, the function of the nybL gene may be overtaken by the one of the host's primary metabolism enzymes.
After formation of 4-aminoanthranilic acid, it is then hydroxylated in the third position and decarboxylated, generating 2,6-diaminophenol (Figure 4). The nybP gene encoding putative salicylate hydroxylase might be responsible for the hydroxylation reaction. Then, two acetoacetate units are attached to the amino groups of 2,6-diaminophenol through the action of the N-acetyltransferase encoded by the nybK gene (Figure 4). The putative acetoacetyl-CoA synthase encoded by the nybM gene catalyses the formation of acetoacetyl-CoA from acetyl-CoA and malonyl-CoA. The NybM enzyme is likely responsible for the production of sufficient amounts of acetoacetate, which is used as a precursor in nybomycin biosynthesis. The precursor role of acetoacetate in nybomycin biosynthesis is supported by the results of the feeding experiments, which have unequivocally defined acetate as the source of the exterior carbons of the pyridone rings [16]. We speculate that after attachment of the acetoacetate units, the putative cyclase encoded by the nybN gene catalyses the closure of the pyridone rings, leading to formation of intermediate 1 (Figure 4). The methylation of nitrogen within the pyridone rings is likely to be catalyzed by the SAM-dependent methyltransferase encoded by nybS. We hypothesize that the next reaction in nybomycin biosynthesis is a closure of the oxazoline ring ( Figure 4). This step might be catalyzed by the product of nybT or nybU, which code for isopenicillin N synthases (IPNS). IPNS is responsible for the biosynthesis of isopenicillin N through a bicyclization reaction-first, the formation of a C-N bond generates a β-lactam ring; then, the closure of a five-membered thiazolidine ring is accomplished by the formation of a C-S bond [21]. Sulfur and oxygen have similar properties [22]. Therefore, in a similar way to thiazolidine ring closure, IPNS could be able to catalyze the formation of a C-O bond between the carbon atom of the N-methyl group and the oxygen atom of the OH group present in intermediate 2, generating an oxazoline ring ( Figure 4). Finally, a hydroxylation reaction takes place at C-8', possibly catalyzed by the oxidoreductase NybB, giving rise to nybomycin final structure. The compound is then secreted to the extracellular space, most likely by the membrane transporter encoded by nybV. Expression of secondary metabolite biosynthesis genes is commonly regulated by activators and repressors coded by genes that are located within the same cluster. We hypothesize that the products of nybW, nybX, and nybZ, which encode putative transcriptional regulators, might control the expression of the genes involved in nybomycin biosynthesis. In this paper we report the identification of the gene cluster encoding production of the structurally unique antibiotic nybomycin. Biological activity of nybomycin is also of particular interest as it inhibits growth of quinolone-resistant Staphylococcus aureus, dormant Mycobacterium tuberculosis, and other Gram-positive and Gram-negative bacteria [10,14,23]. Interestingly, nybomycin targets solely the mutant, quinolone-resistant DNA gyrase with a Ser84Leu substitution, while it is inactive against the wild-type, quinolone-sensitive form of the enzyme [10]. The mutation described thus far to cause nybomycin resistance is a reverse Leu84Ser mutation within the gyrA gene that restores quinolone sensitivity. 
The identification of the nybomycin cluster presented in this paper enables biosynthetic studies of nybomycin production, generation of new nybomycin derivatives, and optimization of its production as well as a nybomycin supply for further biological studies. Together, these works might help fight the development of quinolone resistance and revive quinolones as an effective class of antibiotics.
General Experimental Procedures
All strains and BACs used in this work are listed in Table S2. Escherichia coli strains were cultured in LB medium [24]. Streptomyces strains were grown on soya flour mannitol agar (MS agar) [25] for sporulation and conjugation and in liquid tryptic soy broth (TSB; Sigma-Aldrich, St. Louis, MO, USA). For secondary metabolite expression, liquid DNPM medium (40 g/L dextrin, 7.5 g/L soytone, 5 g/L baking yeast, and 21 g/L MOPS, pH 6.8) was used. The antibiotics kanamycin, apramycin and nalidixic acid were supplemented when required.
Metabolite Extraction and Analysis
Streptomyces strains were grown in 10 mL of TSB for 1 day, and 1 mL of each culture was used to inoculate 50 mL of production medium. Cultures were grown for 7 days at 28 °C. Metabolites were extracted with ethyl acetate from the supernatant, evaporated, and dissolved in methanol. One µL of each sample was separated using a Dionex Ultimate 3000 UPLC (Thermo Fisher Scientific, Waltham, MA, USA), a 10-cm ACQUITY UPLC® BEH C18 column, 1.7 µm (Waters, Milford, MA, USA) and a linear gradient of 0.1% formic acid solution in acetonitrile against 0.1% formic acid solution in water from 5% to 95% in 18 min at a flow rate of 0.6 mL/min. Samples were analyzed using an amaZon speed mass spectrometer or maXis high-resolution LC-QTOF system (Bruker, USA). Data were collected and analyzed with the Bruker Compass Data Analysis software, version 4.1 (Bruker, Billerica, MA, USA). Monoisotopic mass was searched in the natural product database DNP (Dictionary of Natural Products [27]).
Nybomycin Isolation and 1 H-NMR Spectroscopy
Streptomyces albus 4N24 was grown in 30 mL of TSB for 1 day, and 1 mL of the preculture was used to inoculate 100 flasks containing 100 mL of DNPM medium. Cultures were incubated at 28 °C for 7 days. Metabolites from the supernatant were extracted as described above. The crude extract was fractionated by normal phase chromatography on a prepacked silica cartridge (Biotage, Uppsala, Sweden) using hexane, dichloromethane, ethyl acetate, and methanol as the mobile phase. Fractions containing nybomycin were detected by LC-MS analysis. They were pooled together, evaporated, and dissolved in methanol. The sample was further separated by size-exclusion chromatography on an LH 20 Sephadex column (Sigma-Aldrich, USA) using methanol as the solvent. Finally, the sample was separated by semipreparative HPLC (Dionex UltiMate 3000, Thermo Fisher Scientific, USA) using a C18 column (Synergi 10 µm, 250 × 10 mm; Phenomenex, Aschaffenburg, Germany) and a 0.1% formic acid solution in acetonitrile as the mobile phase to obtain nybomycin (0.1 mg). Individual peaks were collected and analyzed by LC-MS as described above. The 1H-NMR spectra were recorded on a Bruker Avance 500 spectrometer (Bruker, BioSpin GmbH, Rheinstetten, Germany) at 300 K equipped with a 5 mm BBO probe using deuterated trifluoroacetic acid (Deutero, Kastellaun, Germany) as solvent containing tetramethylsilane (TMS) as a reference. The chemical shifts were reported in parts per million (ppm) relative to TMS. All spectra were recorded with the standard 1H pulse program using 128 scans.
Genome Mining and Bioinformatics Analysis
The S. albus subsp. chlorinus genome was screened for secondary metabolite biosynthetic gene clusters using the antiSMASH [11] online tool (https://antismash.secondarymetabolites.org/#!/start) and the software Geneious 11.0.3 [28]. The DNA sequence of the nybomycin gene cluster was deposited into GenBank under accession number MH924838.
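Since the cluster sequence was deposited under accession MH924838, a minimal Biopython sketch (assuming network access; the e-mail address below is a placeholder) for retrieving and inspecting the record might look like:

```python
# Hedged sketch: fetch the deposited nybomycin cluster record (MH924838) from
# GenBank with Biopython and count its annotated coding sequences.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # NCBI requests a contact address; placeholder here

handle = Entrez.efetch(db="nucleotide", id="MH924838", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp")
cds_features = [f for f in record.features if f.type == "CDS"]
print(f"{len(cds_features)} annotated coding sequences")
```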
Conflicts of Interest:
The authors declare no conflict of interest. | 2018-11-15T17:36:51.742Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "b4959e1b4181fb82275d2795f76111e6a59412de",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/16/11/435/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4959e1b4181fb82275d2795f76111e6a59412de",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269788007 | pes2o/s2orc | v3-fos-license | New Canary Islands Roman mediated settlement hypothesis deduced from coalescence ages of curated maternal indigenous lineages
Numerous genetic studies have contributed to reconstructing the human history of the Canary Islands population. The recent use of new ancient DNA targeted enrichment and next-generation sequencing techniques on new Canary Islands samples have greatly improved these molecular results. However, the bulk of the available data is still provided by the classic mitochondrial DNA phylogenetic and phylogeographic studies carried out on the indigenous, historical, and extant human populations of the Canary Islands. In the present study, making use of all the accumulated mitochondrial information, the existence of DNA contamination and archaeological sample misidentification in those samples is evidenced. Following a thorough review of these cases, the new phylogeographic analysis revealed the existence of a heterogeneous indigenous Canarian population, asymmetrically distributed across the various islands, which most likely descended from a unique mainland settlement. These new results and new proposed coalescent ages are compatible with a Roman-mediated arrival driven by the exploitation of the purple dye manufacture in the Canary Islands.
The Canary Archipelago is located approximately 108 kms off Morocco's southwestern Atlantic coast.It is made up of seven oceanic islands geographically and administratively divided into two provinces.The eastern province includes three of the islands named Gran Canaria, Lanzarote and Fuerteventura, and the western province the four remaining islands of Tenerife, La Gomera, La Palma and El Hierro.The eastern islands are geologically older and, due to their proximity to the Sahara desert, drier than the western ones but also more accessible by sea.Since the European maritime expansion along the Atlantic Africa in the fourteenth century, the Canary Islands attracted special attention as the only Archipelago of the area inhabited by indigenous people with a late Neolithic culture.The numerous and multidisciplinary studies carried out on this population have recently been reviewed from archaeological 1 and genetic perspectives 2 .New radiocarbon dates based on short-life samples, allowed the construction of a robust chronological model for the islands hypothesizing a permanent settlement on the Archipelago around the turn of the epoch 3 .On the other hand, new sequencing methodologies have revolutionized the analysis of ancient DNA improving success for sequencing mitogenomes and whole genomes from archaeological specimens 4 .Applying these techniques to indigenous remains from the Canary Islands, a northern African origin of their most recent ancestors has been redefined [5][6][7] .However, the bulk of the data from the indigenous remains of the Canary Islands have been obtained with Polymerase Chain Reaction (PCR) techniques and subsequent classic Sanger sequencing 6,[8][9][10] .Regrettably, these techniques are prone to contamination and sequencing artefacts 11 .The potential existence of such disturbing phenomena were suspected when the ancient haplotypes were contrasted with the largest (n = 896) sample studied so far of extant whole mtDNA Canarian genomes 12 .In addition, a disparity exists between archaeological and genetic ages, with the latter being much older 13 .Perhaps, the biggest failure of studies about the indigenous settlement of the Canary Islands is the absence of a model capable of integrating the data gathered from the different scientific disciplines in a coherent framework.
The aims of the present study are: (a) To perform a critical re-analysis of the published mtDNA indigenous haplotypes in order to clear up those contaminant types that obscured correct results; (b) To apply updated
Persistency and phylogeographic origin of the Canary Islands indigenous haplotypes
It was recently found 12 that around 50-60% of the Canary Islands indigenous mtDNA lineages are extant in the current Canary Island populations.However, when all the lineages detected in the historic and present-day samples were re-evaluated a slow decreasing trend was discerned, since the historic times indigenous lineages represented 37.9% of the total, whereas in present-day samples they account only for 26.5% of all the lineages observed (Table S1).Because all the published results from various disciplines point to a Canarian indigenous' North African origin, the abundance of exclusive matches of indigenous haplotypes to Europeans compared with North Africans was surprising in our results (p = 0.006).A graphical representation, including sub-Saharan Africa populations, showed that most indigenous haplotypes matched to both North Africans and Europeans, while those from sub-Saharan African were in the minority (Fig. 1).
About half of the indigenous haplotypes detected in El Hierro matched to Europeans alone. A partition of the 14 indigenous haplotypes shared exclusively with Europe (Fig. 2) suggested that the contribution of the Iberian and Italian peninsulas pair might be greater (20.6%) than that of Iberia and France (11.5%), but was not statistically significant (p = 0.17).
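Count comparisons such as those above (for example the p = 0.006 and p = 0.17 values) can be approximated with a simple two-sided exact binomial (sign) test; the counts below are illustrative placeholders, and the original study may have used a different procedure:

```python
# Hedged sketch: two-sided exact binomial (sign) test comparing two categories of
# exclusive haplotype matches. Counts are invented placeholders, not the study's data.
# Requires scipy >= 1.7 (older versions expose stats.binom_test instead).
from scipy.stats import binomtest

exclusive_europe = 14        # haplotypes matching Europe alone (illustrative)
exclusive_north_africa = 3   # haplotypes matching North Africa alone (illustrative)

n = exclusive_europe + exclusive_north_africa
result = binomtest(exclusive_europe, n, p=0.5, alternative="two-sided")
print(f"{exclusive_europe} vs {exclusive_north_africa}: p = {result.pvalue:.4f}")
```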
The relative affinities of the indigenous Canarian to the northern African regions (Table S1), suggested that 67.3% of the matches occurred to both northwest and northern Africa.However, exclusive matches with the northwest (26.5%) were significantly higher (p = 0.01) than to the northern region (6.1%).Of the 81 indigenous haplotypes examined, 33 (41%) were not detected in historic or contemporary samples from the Canary Islands (Table S1).Of these, 7 haplotypes (9%) exclusively matched to European regions (marked with an asterisk in Table 3 and with two asterisks in Table S1).These haplotypes could have not yet been detected in northern Africa, but it is also possible that they were brought to the islands by European males and, since mtDNA is transmitted by females, they went extinct on the Canary Islands.Similarly, there are also exclusive matches of indigenous haplotypes with the Middle East and sub-Saharan Africa that lack a congruent explanation (Table S1).There are haplotypes detected solely in the Canary Islands and South America, most probably due to the post-conquest forced migration of Canarian natives to that continent (Table S1).Another interesting case are those haplotypes derived from the autochthonous haplogroup U6b1a with prominent implantation in western islands 6 that although lacking exact matches, still have their closest counterparts in the Moroccan sister clade U6b1b 19 .Putatively sub-Saharan African haplotypes of the haplogroup L3b1a12 detected in the eastern island of Gran Canaria 5,6,20 , whose HVSI region (16223-16278-16311-16362), exact matches were within haplogroup L3b1a11 from Madagascar 21 .However, the complete sequencing of several L3b1a12 indigenous mtDNA genomes 5,6,20 revealed that the Canarian haplotypes differ from their putative African counterparts by six unique transitions in their coding region (8697, 9947, 10646, 11257, 14136, 14553), differing from the phylogenetic identity inferred from the HVSI analysis.Taken together, this denotes the need to study complete mitogenomes to obtain reliable genetic matches.
Divergence of indigenous genetic pool among islands
Present-day Canary Islands insular populations suggest that genetic differentiation between the western (Tenerife, La Gomera, La Palma, and El Hierro) and eastern (Gran Canaria, Lanzarote, and Fuerteventura) islands may date back to pre-colonial times 22 , as corroborated by recent mitogenome and whole genome analyses of the indigenous populations [5][6][7] . Pair-wise match-distances between islands are given in Table S3, and a graphical representation of their respective relationships and genetic affinities with their putative continental colonizers is shown in Fig. 3.
In principal coordinates' analysis (Fig. 3a), the genetic match distances between continental regions are based on their respective sharing of Canarian indigenous haplotypes exclusively.The coordinate 1 axis clearly separates all the continental regions samples from those of the Canary Islands, meaning that they primarily share the same ancestral haplotypes.Coordinate 2 axis, in turn, separated the western from the eastern Canary Islands, with the least sampled eastern islands of Fuerteventura and Lanzarote and the westernmost island of El Hierro, showing the greatest genetic drift effects 23 .On the other hand, Gran Canaria was the island that shared most indigenous lineages with their putative continental maternal sources.In Fig. 3b, the genetic distances between samples from the continental regions were based on the respective sharing of their own set of continental haplotypes 12 .In this case, the closest genetic affinities between samples from regions within continents separated the Northern African regions far from European regions along the X coordinate axis.Again, the eastern and western islands were separated along the Y-axis although now Gran Canaria showed the greatest genetic affinity to northern Africa while the western islands showed a closer genetic relationship to the European regions.A sign test based on the number of haplotypes shared between groups and those unique to each group showed that the groups statistically differ (p = 0.001).However, due to the high haplotype diversity of the total indigenous sample, it is uncertain whether the eastern and western islands samples originated from different populations.In the following analysis on haplotype differences between the two groups of islands, it was assumed of northern African provenance all the haplotypes with matches in North Africa although they were also present in other regions, and of European provenance those haplotypes with exclusive matches in Europe.Prominent or exclusive haplogroups in the eastern islands included H1 (16239), H1ao (16278), H3r (16126), H4a1e (16362), T2c1d3, U5, U6a, U6c, M1, and L3b1a12 (Table S1).U6a, U6c and M1 have a pan-Mediterranean range and U6a and M1 have been in Northwest Africa since the Pleistocene 24 , which also implies H1 (16239) and H3r (16126) 25 .A recent study has extended the geographic range of H4a1e to southern Egypt prior to Roman and Greek influx 26 .In addition, some T sequences have localized matches: T1a (16126-16154-16163-16186-16189-16294) in Algeria 27 , T2c1d3 (16092-16126-16292-16294) in Morocco 28 or T2c1d3 (126-292-294-362) in the Near East.However, basal U5b1 haplotypes are present in a broad geographic range from the Western Sahara 29 and Mauritania 30 to Mediterranean Africa (Table S1).On the opposite side were the haplotypes of haplogroup L3b1a12, whose African location of origin remains unknown 6 .In relation to haplotypes having probable European origin, H1e1a9 (13934) and HV (16316) stand out for their exclusive matches in Italy.Nevertheless it has to be mentioned that an ancestral type of H1e1a was detected in Chalcolithic-Middle Bronze Age samples from Portugal 31 .In the western group, northern African heritage was represented by several haplotypes derived from the H1 haplogroup (Table S1).The H1cf complete mtDNA haplotype had its closest relative in Algeria 9 .The majority of J haplotypes in the indigenous population were from the western islands, and the J2a2d1 branch seemed to originate from northwest Africa, and was present on all western 
islands except El Hierro (Table S1). Haplotypes of haplogroup U6b1a primarily showed the greatest northern African contribution to the western islands, having highest incidence in La Gomera 10 and being absent from El Hierro. Although not detected on the African continent, U6b1a has its closest sister clade (U6b1b) in Morocco 19 . The U6b1a haplotypes trace to South America and the Iberian Peninsula after the forced migration of indigenous Canarian people after the conquest (Table S1). The high incidence of haplogroup H haplotypes indicates primary European contributions (Table S1). For example, the H1 (16292) haplotype was detected in all the western islands except La Gomera, with matches in the Iberian Peninsula and Italy. Of the J haplotypes other than J2a2d1, J1c3 and J1c2c2 present in Tenerife had exact matches in the Iberian Peninsula and in Italy and France, respectively (Table S1). La Gomera had an enigmatic N1b1a7 lineage with an exact match in the Middle East alone 32 . La Palma also harbored two haplotypes of macrohaplogroup N. W1e1 had matches in the current populations of the Iberian Peninsula and Italy, being detected since the Neolithic in Catalonia 33 , indicating its ancient Iberian Peninsula presence. The other, a specific derivative of X3a (16111-16189-16223-16278), also had a unique match in the Iberian Peninsula (Table S1). Finally, El Hierro had a rare U5a1b4 haplotype solely found in France and a rare U7 haplotype (16309-16318T) whose nearest matches were in the Iberian Peninsula and Italy, but that also is in Egypt 34 . Around 12% of the original indigenous lineages traced to sub-Saharan Africa, although some also were found in northern Africa, and approximately 30% had exclusive matches to regions where the Portuguese slave trade peaked (Table S1). This resembles the current populations of the Macaronesia Islands of Madeira and the Canarian archipelago, where about 40% of their sub-Saharan L sequences have exact matches in Cape Verde and Sao Tomé and Principe, which were main outposts of the Portuguese Atlantic slave trade 17 .
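The principal coordinates analysis summarized in Fig. 3 can be reproduced in outline from any symmetric distance matrix. The sketch below (a toy 4 × 4 matrix and labels, not the study's match-distance data) shows the classical double-centring computation:

```python
# Hedged sketch: classical principal coordinates analysis (PCoA) on a pairwise
# distance matrix, as used for the island/continent comparisons above.
# The matrix and labels are toy placeholders, not the study's data.
import numpy as np

D = np.array([
    [0.00, 0.30, 0.55, 0.60],
    [0.30, 0.00, 0.50, 0.58],
    [0.55, 0.50, 0.00, 0.25],
    [0.60, 0.58, 0.25, 0.00],
])
labels = ["Gran Canaria", "Tenerife", "NW Africa", "Iberia"]  # illustrative only

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # double-centred Gower matrix
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]           # sort eigenvalues in descending order
coords = eigvec[:, order[:2]] * np.sqrt(np.maximum(eigval[order[:2]], 0))

for name, (x, y) in zip(labels, coords):
    print(f"{name:>12}: PCo1 = {x:+.3f}, PCo2 = {y:+.3f}")
```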
New coalescence ages for the Indigenous lineages
Some of the first radiocarbon dates placed the indigenous settlement of the Canary Islands back to late Neolithic times, which agreed with their cultural level 35 and with the first coalescent age estimations obtained for the Canary Islands mtDNA autochthonous lineages U6b1a and U6c1a around 5000 ya 36,37 . However, those old radiocarbon dates have recently been reconsidered due to the inappropriate material used. New and revised archaeological dates and demographic inferences have concluded that a permanent settlement on the islands prior to the first millennium AD is highly improbable 3 . In parallel, studies of the mtDNA evolutionary rate 15,38 have found that it is dependent on the population size and that a rate of one mutation every 3624 years, extensively used in human phylogenetic analysis 13 , is inappropriate to apply to relatively recent events. For shallow phylogenetic trees that concur with the time frame studied here, an alternative evolutionary rate of one mutation every 1400 years was proposed 14 , which has been used in the present study. Notably, this predicted fast evolutionary rate for recent times has recently been empirically corroborated by an extended pedigree analysis using the entire mtDNA genome, obtaining a mutation rate of 5.8 × 10-8 (95% CI 3.10-10.8 × 10-8) mutation/site/year that nicely overlaps with the one used here of 4.33 × 10-8 (95% CI 3.90-4.82 × 10-8) mutation/site/year 39 . Applying this evolutionary rate to the phylogenetic trees (Figs. S1 to S6) of the 16 indigenous lineages that are supported by complete mtDNA sequences, for the indigenous 6,20 and current populations 12 of the Canary Islands (Table S4), the coalescence ages ranged from 2,333 (95% CI 2300-2368) ya for the H1cf (16260) clade to 382 (95% CI 361-401) ya for the Gran Canaria autochthonous lineage L3b1a (@16124). It deserves mentioning that H1cf and H1e1a, the oldest lineages, both belonged to the European haplogroup H1. For the former, the closest sequence to the Canary cluster was an Algerian sequence 9 , and for the latter an Italian sequence (Table S1). These haplogroups were followed in age by J2a2d1a and U6b1a, with main introductions in the western islands, and U6c1, limited to the eastern islands, whose ages located them in the Canarian archipelago between the second and the fifth centuries AD. At first, this apparently continuous range of ages could be compatible with a permanent flux of migrants to the Archipelago. However, this contrasts with the important genetic drift effects observed in the islands of La Gomera 10 and El Hierro 23 and the relatively high genetic differentiation found between the main islands of Tenerife and Gran Canaria 6 . These results are more in line with successive but discrete migrations that did not affect all of the islands equally. Thus, taking into account the relative proximity of their respective ages, we subdivided the indigenous lineages into three discrete time intervals (Table 4). The oldest group comprises the five lineages (H1cf, H1e1a, J2a2d1, U6b1a, and U6c1) commented above. Lineages of the middle-aged group (W1e1, X3a, and U5a1b4) may have arrived in the Archipelago at the beginning of the twelfth century, affecting only the western islands and coinciding in time with internal population growth marked by the autochthonous U6b1a1 lineage. These three lineages would have had a European origin instead of an Arab one. The third and most recent group coincides with the period of the European colonization of the Archipelago (from 1402 to 1496). In it U6a* represents
a set of current Canarian sequences belonging to subgroups U6a1a1 (16239), U6a3a1, and U6a7a1b, all also detected in the indigenous sample (Table S1). These three clades had Chalcolithic expansions in Europe 18 . Among them, the case of U6a7a1b is particularly interesting, as it is related to the Sephardic radiation and historical diffusions to the American continent 18 . Clades H4a1, T2c1d3 and T2c1d1c could signal the post-conquest Moorish slave trade 6 , while the L sub-Saharan African members seemed to result from the Atlantic slave trade practiced by Portuguese traffickers 12 . Predictably, age differences between groups 1 and 2 (p = 0.0007) and between groups 2 and 3 (p = 0.0026) were highly significant.
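To make the age calculation concrete, the sketch below converts per-lineage mutation counts into a coalescence age with the rho statistic and the one-mutation-per-1,400-years rate adopted above; the counts are invented placeholders and the standard error assumes a star-like genealogy:

```python
# Hedged sketch: coalescence age from the rho statistic (mean number of mutations
# separating each sampled sequence from the inferred founder type), using the rate
# of one mutation per 1,400 years. Counts are invented placeholders.
import math

YEARS_PER_MUTATION = 1400
mutations_from_founder = [1, 2, 0, 3, 1, 2, 2, 1]   # hypothetical lineage sample

n = len(mutations_from_founder)
rho = sum(mutations_from_founder) / n
sigma = math.sqrt(rho / n)                           # standard error under a star genealogy

age = rho * YEARS_PER_MUTATION
ci = 1.96 * sigma * YEARS_PER_MUTATION
print(f"rho = {rho:.2f}  ->  age ~ {age:.0f} years (95% CI ± {ci:.0f} years)")
```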
Contamination problems in ancient DNA studies
Due to the availability of many human mtDNA sequences in data banks, for which the recent contribution of Canarian samples is remarkable 12 , rare or incomplete indigenous haplotypes published in earlier studies on ancient DNA from the Canary Islands 8 appear related to lineages sampled in the current population, highlighting their potential authenticity.Paradigmatic are the cases of H* (16290) in La Palma, J1c2e2 (16069-16126-16278-16366) in Tenerife, L3d1b3a (16124-16223-16256-16311) in La Gomera, or U5a1b4 (16093-16192-16256-16270-16362) in El Hierro 12 .Remarkable are also other indigenous types detected in South America regions with demographic ties in the Canary Islands (Table 2), and those identified in continental areas where their potential ancestors originated (Table 3).The absence of matches with any published mtDNA sequences of some indigenous haplotypes might be due to insufficient sampling in their putative areas of origin but, in some cases, as evidenced here, may indicate contamination, mixed up types or incomplete sequencing, which has led to the identification of the most probable indigenous haplotype and its contaminant (Table 1).Finally, some indigenous haplotypes, with potential relatives in Europe that are not detected in historical or present-day Canarian populations, may represent pre-conquest male limited incursions that did not transmit this maternal marker.Other empirical data appear to support this hypothesis.The Y-chromosome haplogroup I-M170 is a predominant European male-lineage.It has a frequency 40 that also significantly differs from northern Africa (p = 0.0097).These results could indicate a male-mediated European gene-flow on the indigenous population before the Spanish Conquest or, alternatively, a strong contamination/admixture of the indigenous remains with potential European remains.Although more recent techniques of enrichment and sequencing of ancient DNA make it easier to identify contamination, the reassessment of doubtful sequences with the panel of publicly available sequences as performed in this study will continue to be a useful strategy.
Lack of date and context of archaeological samples
Donated archaeological samples should be accurately dated and contextualized following precise radiocarbon hygiene protocols.Regrettably, this was not the case in the first ancient DNA studies on Canarian indigenous material, in which the samples consisted of non-individualized, loose teeth, theoretically obtained from indigenous sites roughly dated around 1000 ya.Thus, in order not to duplicate samples, geneticists opted to use a single tooth type, preferably the left canine 8 for all DNA extractions.Although molecular results from that material yielded important information, including the presence in the indigenous sample of several predicted founder lineages as U6b1a 22 , the critical re-analysis performed here suggests that those putative indigenous samples contained a jumble of samples that, in addition to indigenous ones, included European remains from the conquest period, remains of Moorish and sub-Saharan Africans brought to the islands by the Europeans as forced labor and, probably, remains of fugitive Sephardic people.Thus, the supposedly high genetic diversity found in the Indigenous sample 8 was in part the result of heavy archaeological contamination.This seems to be confirmed by more recent molecular studies on dated and contextualized archaeological material, for which observed genetic diversity is appreciably lower 6,7 .Another effect of the absence of precise archaeological dating is that the longdebated hypotheses of one or more colonization waves to the Canary Islands depends on the coalescent age of those indigenous lineages that remain represented in the current population 12 .
Molecular age for a permanent indigenous settlement
The mtDNA evolutionary rate of humans may have accelerated in recent times 14 .Applying this faster rate to calculate the coalescent ages for those indigenous lineages that remain represented today (Table S4), revealed molecular ages between 2300 and 2185 years ago for the two oldest lineages, H1cf and H1e1a (Table S4), that is, two or three centuries BC.These molecular ages are earlier than the recent archaeological estimates, dating the first settlement of the Canary Islands to two or three centuries AD 41 , but are much closer to each other than those previously proposed 13 .On the other hand, age differences among lineages, and their heterogeneous settlements on the islands, provides clues to address questions such as whether the Archipelago was colonized during one or several immigration waves, or whether the pre-conquest settlers arose from one or more genetically heterogeneous populations.Focusing first on the oldest group (Table 4), two lineages (H1e1e and U6c1) showed a wide Mediterranean geographic range, including Italy and northern Africa, who exclusively settled on the eastern-Canary islands.On the other side, three lineages (H1cf, J2a2d1, and U6b1a) showed a prominent or exclusive trace to the western islands, of which at least two (H1cf and U6b1a) appeared restricted to northwestern Africa.
As the range of their ages did not allow us to significantly separate these lineages, alternative possibilities may involve only a single heterogeneous wave, or coetaneous heterogeneous waves, of settlers who colonized different groups of islands. This contradicts an earlier suggestion that the H1e1a, H4a1e, L3b1a, and U6c1 clades had an asymmetrical implantation in the eastern islands that may signal a late secondary settlement on these islands 6 .
It was further inferred that most sites where these lineages were sampled had radiocarbon dates around the thirteenth century. However, the late age of the sites sampled does not guarantee that those lineages did not settle on the islands earlier, as suggested by their coalescence ages (Table S4). The second group of lineages could indeed point to the existence of a second wave of colonizers affecting the western islands in an interval from the end of the tenth to the beginning of the twelfth centuries, although, if it occurred, this wave had a minor impact on the maternal genetic pool of the islands' population. However, the incorporation of those maternal lineages into the western islands may have been due to early pre-conquest sporadic European landings. The third group is a set of lineages that likely became incorporated into the Canarian population during the European colonization period. As previously mentioned, the clades T2c1d3 and T2c1d1c, although not detected in indigenous remains, suggested an autochthonous radiation, which could indicate the post-conquest forced Moorish incorporation in the eastern islands 6 , while the sub-Saharan African L haplotypes incorporated during the same period could have resulted from the Portuguese Atlantic slave trade 12 . Note, however, that the eastern islands' L3b1a lineage should likely be excluded from this post-conquest input, as it was detected in individualized remains radiocarbon-dated to 1,116 ± 26 years BP 20 . Because of this, the shallow coalescence age obtained for the clade (Table S4) may be attributed to a possible loss of some divergent haplotypes due to genetic drift. Future knowledge of the place from where the L3b1a and U6b1a lineages came to the islands will help to resolve the precise origin of the indigenous Canarian settlers. Finally, since the lineages of the second and third groups mainly belonged to the western islands, their relative genetic closeness to those from European regions (Fig. 3b) is likely not due to differentiation between indigenous populations but rather due to contamination of the archaeological samples.
With the available ancient mtDNA data, it could not be discerned whether more than one wave of pre-conquest colonizers occurred, as some archaeological investigations suggested 42 , but it does seem that a genetically heterogeneous population or populations likely colonized the Canary Islands in an asymmetric way around the first millennium AD. Earlier studies on the physical anthropology of the Canary Islands indigenous people already pointed to the existence of a physically heterogeneous population. In one of those, a clear sub-Saharan African component was detected 43 , although it was ruled out after analyses of dermatoglyphics and haptoglobin types in the extant population, which did not reveal any sub-Saharan African affinities. To explain the discrepancy, it was suggested that some sub-Saharan African skulls, from the post-conquest slave trade, could have been included in the analysis inadvertently 44 . However, in this regard, it should be noted that, due to genetic recombination, a sub-Saharan African immigrant genome would have been diluted into the recipient population in a few generations, whereas a mtDNA lineage would retain its African roots without modification. More thorough analyses concluded that the skulls of the first islanders might be explained as mixtures, in varying proportions, of two ancestral types: the robust Cromagnoid from northwestern Africa and the gracile Mediterranean Capsian 45 . Both types were present in the main islands of Tenerife and Gran Canaria, with the Cromagnoid features being more prominent in the northern and mountainous regions and the Mediterranean ones along the coasts; in addition, the Cromagnoid type was best preserved in La Gomera 45 . In contrast, a more recent study based on dental morphological measures for the same indigenous populations of La Gomera, Gran Canaria, and Tenerife found that inter-island dental differentiation was so minor that it did not require any hypothesis of separate founding populations 46 . The accumulated biological data on the first islanders are still far from forming a coherent body, and their coupling with the archaeological data reaches only some specific agreements, such as that their ancestors came from northern Africa and that a permanent settlement on the islands cannot go back much further than the beginning of the first millennium AD. Nevertheless, the ancient mtDNA information reanalyzed here is already enough to support some of the several hypotheses formulated to explain where the first settlers originated, how they arrived at the archipelago, and how they settled on the different islands.
In support of a Roman-mediated indigenous settlement of the Canary Islands
The first question about the indigenous Canarian population that seems to be resolved is when they arrived on the islands, since both the archaeological and genetic data place it around the first millennium AD, questioning previous hypotheses proposing Neolithic or Phoenician-Punic settlements 47 . The genetic support for a settlement in Roman times is the lack, in the indigenous 6 , historical 48,49 , and current Canarian populations 12 , of indigenous lineages with coalescent ages older than this epoch. However, earlier arrivals to the islands that did not leave a genetic trace cannot be ruled out. Indeed, there is archaeological evidence that Romanized people landed on the eastern islands and established a purple dye extraction workshop on the islet of Lobos 50 . The high economic benefit that the purple trade achieved in Rome provides additional support for such far-flung and costly maritime voyages. However, for this business to be profitable, a small workshop like the one discovered on the islet of Lobos would not yield enough production. Stramonita haemastoma, the mollusk from which the purple dye was extracted in Lobos, is also abundant and easy to collect on some coasts of the other Canary Islands 51 . Thus, although the main exploitation centers must have been in the eastern islands, where the frequency of the Mediterranean mtDNA lineages was greater, it seems likely that other purple workshops, still not detected, were established along the Archipelago at the same time. Furthermore, due to depletion of the raw material, migration among islands likely became common. However, except for a few potential Latino-Roman rock scripts on the eastern islands, there is no trace of Roman culture in the Canarian pre-Hispanic archaeology 52 . Because the coalescent ages (Table 4) of the mtDNA haplotypes with ancestry concentrated in northwestern Africa (H1cf and U6b1a) are similar to those of the haplotypes with a Mediterranean range (H1e1a and U6c1), they might have coexisted on the islands with little cultural or genetic exchange, which raises the possibility of independent arrivals for each group at the same time. However, from the beginning of the conquest, written records indicated that the native islanders, although good swimmers, appeared to lack navigation skills and that there was no communication among islands 53 . This led to the widespread idea that they might have been voluntarily or involuntarily transported to the islands by people with the maritime capacity to do so 54 . In favor of the first option is the fact that these island settlers brought livestock and seeds with them for their future subsistence, implying that it was a programmed migration, which presupposes previous knowledge of the destination. But if this was the case, why did they not bring with them other technological advances already in use in northern Africa at that time? These include bronze or iron tools and weapons, the Roman plow, and the ceramic lathe, to mention just a few. The second option, that they were forced to migrate, resolves these questions and could explain the genetic heterogeneity of the indigenous population. The exploitation of purple was a hierarchical business. At the top were the elite, who had the economic and technological power to carry out this undertaking. Following were the artisans specialized in dyeing the fabrics, then the workforce capable of extracting the dye and, finally, the slaves who had to collect the mollusk; the latter two groups were likely brought to the Canary Islands. Most likely, the dye
extractors were recruited from the already settled Mediterranean purple dye workshops, while the slaves, for economic reasons, would have been captured or bought in the vicinity of the Archipelago in places such as the Atlantic Moroccan port of Mogador (Fig. 4).
When the purple dye industry ceased being profitable, those people were likely left to fend for themselves on the islands. For subsistence reasons, goats and barley had accompanied people on their previous inter-island transfers, making their subsequent adaptation possible. Notably, the continuous cultivation of the indigenous barley since the pre-Hispanic colonization of the islands 55 and the persistence of indigenous goat breeds 56 suggest that there were no major intrusions into the islands until their European conquest.
Samples
Partial and complete mtDNA sequences of the indigenous 6,[8][9][10]20,23 , historical 48,49 , and present-day Canary Islands population samples 12,17,22 were compiled from prior published studies (Table S1 and supplementary bibliography). To find the closest matches to the indigenous Canarian sequences, rare nucleotide variants and the co-occurrence among point variants were used to search within known haplogroups, and short sequences, including total or partial haplotypes, were used to query the whole dataset in the following databases: NCBI GenBank (http://www.ncbi.nlm.nih.gov/genbank/), Mitomap (http://www.mitomap.org/MITOMAP) 57 , Ian Logan 2020 (http://www.ianlogan.co.uk/sequences_by_group/haplogroup_select.htm), the EMPOP database (http://www.empop.online/haplotypes) 58 , and AmtDB (http://www.amtdb.org). Mutations that had not been previously found in any haplotype of a given haplogroup were considered putative contaminant mutations and were further analyzed in other haplogroup contexts. Rare mutations that appeared on different haplogroup backgrounds were considered phantom mutations. In total, 336 indigenous mtDNA sequences were reanalyzed, of which 288 were partial HVS-I sequences.
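As a rough illustration of this matching step only (not the published pipeline), the sketch below scores an observed HVS-I haplotype, given as a set of variant positions minus 16,000, against a few reference motifs and reports variants outside the best match as candidate contaminant or phantom mutations. The reference entries reuse the example haplotypes quoted earlier in the Discussion purely as stand-ins; a real search would query the databases listed above.

# Illustrative sketch only: rank candidate haplogroups for an observed set of
# HVS-I variant positions; the reference motifs are stand-ins, not database entries.
REFERENCE_MOTIFS = {
    "L3d1b3a": {124, 223, 256, 311},
    "U5a1b4":  {93, 192, 256, 270, 362},
    "J1c2e2":  {69, 126, 278, 366},
}

def rank_haplogroups(observed, reference=REFERENCE_MOTIFS, top=3):
    """Rank haplogroups by shared variants, penalizing missing and private ones."""
    scores = []
    for hg, motif in reference.items():
        shared  = len(observed & motif)
        missing = len(motif - observed)        # motif variants absent in the sample
        private = sorted(observed - motif)     # candidate contaminant/phantom mutations
        scores.append((shared - missing - len(private), hg, private))
    scores.sort(key=lambda s: s[0], reverse=True)
    return scores[:top]

sample = {124, 223, 256, 311, 189}             # 16124-16223-16256-16311 plus one extra variant
for score, hg, private in rank_haplogroups(sample):
    print(hg, score, private)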
Figure 4 .
Figure 4. Putative routes followed by the indigenous carriers of the northern and northwestern African haplotypes to the Canary Islands.
Table 1 .
Possibly contaminated and/or incomplete Canarian native haplotypes, based on HVS-I variants (positions in the 16,000 to 16,400 range, minus 16,000). Hg1 and Hg2 denote the haplogroup classification before and after the analysis, respectively.
Table 2 .
Indigenous mtDNA haplotypes present in the historic or current Canarian population but absent in North Africa.
Table 3 .
Indigenous mtDNA haplotypes absent in the historic and/or current Canarian population.
Table 4 .
Settlements on the Canary Islands based on coalescence age and phylogeography of Indigenous mtDNA lineages. | 2024-05-17T06:17:42.475Z | 2024-05-15T00:00:00.000 | {
"year": 2024,
"sha1": "45858f566d0f9f7ea03eb0caf6c9b528584c476f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-61731-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f5c0b724283ee958b941670260fb0014c138ce3",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214641171 | pes2o/s2orc | v3-fos-license | Continuous quantum error correction for evolution under time-dependent Hamiltonians
We analyze the continuous operation of the bit flip code aimed to protect the coherent evolution in the code space due to an encoded Hamiltonian. To detect errors in real time, we filter the output signals from continuous measurement of the error syndrome operators and use a double thresholding protocol for error diagnosis, while correction of errors is done as in the conventional operation. We optimize our continuous operation protocol for evolution under quantum memory and under quantum annealing, by maximizing the fidelity between the target and actual logical states at a specified final time. In the case of quantum memory we show that our continuous operation protocol yields a logical error rate that is slightly larger than the one obtained from using the optimal Wonham filter for error diagnosis. The advantage of our protocol is that it can be simpler to implement. For quantum annealing, we show that our continuous operation protocol can significantly reduce the final logical state infidelity when the continuous measurements are sufficiently strong relative to the strength of the time-dependent Hamiltonian. These results suggest that a continuous implementation is suitable for quantum error correction in the presence of encoded time-dependent Hamiltonians, opening the possibility of many applications in quantum simulation and quantum annealing.
Introduction
Quantum error correction (QEC) is an essential component of quantum information processing. The need to either avoid or correct errors on quantum states due to imperfect quantum operations or decohering interactions with the environment places stringent requirements on realization of the promise of quantum computation and quantum simulations. Various tools have been developed to mitigate the effect of such errors, including encoding into decoherence free subspaces or subsystems [1,2], addition of penalty Hamiltonians [3][4][5], dynamical decoupling methods [6][7][8] and other applications of pulse sequences [9], as well as the use of quantum error correcting codes (QECC) that delocalize the errors over multiple physical qubits, combined with error recovery operations [10][11][12][13]. The latter provides a powerful approach to systematically correct errors that can also be made fault tolerant [14]. In this work we shall develop a theory of quantum error correcting codes that act continuously in time, in contrast to the discrete operation of conventional QECC.
The canonical operation mode for quantum error correction codes [15][16][17][18] employ projective measurements and discrete recovery operations to provide reduction of errors that are treated as discrete events occurring at a specified rate. The formalism of QEC has been developed to provide firm guarantees of protection in terms of reduced scaling of the logical error rate for an encoded state. However in practice, few measurements can be described as projective, and are instead better described as finite strength weak measurements that are characterized by a gradual collapse of the measured system wave-function [19][20][21][22][23][24][25][26][27][28][29][30][31]. A continuous quantum error correction code, i.e., a CQEC, is based on the continuous quantum measurement of the error syndrome operators of the conventional QEC code. Previous theoretical work on such continuous quantum error correction has been devoted primarily to analysis of the continuous operation performance of stabilizer [32][33][34][35][36][37][38][39][40][41][42][43] and subsystem [44,45] QEC codes for quantum memory, where the Hamiltonian of the encoding physical qubits is disregarded in the analysis. In contrast, in this work we focus on protecting the coherent evolution of an encoded qubit system evolving under a time-dependent Hamiltonian, against environmental decoherence. This problem is particularly important for the development of quantum error correction for a broad range of quantum information applications employing continuously varying Hamiltonians. These include quantum annealing and adiabatic quantum computation [46], and quantum simulation [47].
A major challenge for application of either discrete or continuous QEC to protect coherent evolution of an encoded qubit system is that perfect identification and correction of errors (in the example studied here, these will be bit-flip errors) does not imply absence of logical errors [48]. We can understand this difficulty by thinking of the action of errors on the Hamiltonian instead of on the quantum state-a perspective somewhat similar to the Heisenberg picture. In this picture, an error causes the Hamiltonian to effectively change from H(t) to EH(t)E, where E is the operator associated to the error that occurred and is assumed to be a single-qubit Pauli operator. Subsequent coherent evolution is due to the new Hamiltonian EH(t)E, until the moment when the error that occurred is detected and corrected. During this period of error diagnosis and correction, logical errors will accrue if the original Hamiltonian does not commute with the error operators, i.e., if H(t) = EH(t)E. Since Hamiltonians that commute with all error operators are difficult to implement [48], this problem has constituted a major stumbling block for the development of quantum error correction for quantum annealing and for analog quantum simulation in general. This is precisely the situation that we address in this work.
We consider here the continuous operation of a quantum code that is designed to protect the coherent evolution of the encoded qubit system. As a specific example we take the three-qubit bit flip code [18], which is a stabilizer code [17] with two commuting stabilizer operators that constitute the measurement operators. We propose and analyze an error detection protocol based on time-averaging (filtering) of the bare readout signals from simultaneous continuous monitoring of the error syndrome operator, together with a double error thresholding scheme that is applied to the filtered readout signals in order to explicitly diagnose errors. Unlike previous schemes [33,36], partial errors are not acted on-the error diagnosis is acted on only when occurrence of a complete, i.e., discrete, error has been diagnosed with high probability. Filtering is necessary in the protocol to reduce (but not eliminate) the amount of noise in the filtered readout signals, while double error thresholding is essential to reduce the probability of mis-identification of single bit-flip errors that affect several readout signals at the same time [45]. We show how an accurate open quantum system model can be developed to describe the evolution of the encoded qubit system in the presence of both bit-flip errors and CQEC. This model can then be used to optimize our proposed CQEC protocol to minimize the logical error rate for quantum memory, which is shown to be slightly larger than the logical error rate obtained from using the linear variant [42] of the optimal Wonham filter [40] for error diagnosis. The advantage of our continuous CQEC protocol is that it can be simpler to implement. We also show that the resulting optimized double thresholding error diagnosis scheme is very effectively combined with discrete recovery operations to obtain the reduced scaling of the logical error rate that is necessary for a valid quantum error correcting code. The open-system quantum model for the encoded qubit system is then extended to include coherent evolution due to an encoded Hamiltonian H(t) that commutes with all measurement operators at any time. We use this model to optimize the performance of our continuous QEC protocol for operation under quantum annealing. In this case, the performance of our protocol depends on the relative strength of three parameters; namely, the error rate γ, the Hamiltonian strength parameter Ω 0 , and the measurement strength Γ m from continuous measurements. We find that our CQEC protocol yields a significant reduction of the final logical state infidelity if continuous measurements are sufficiently strong relative to the strength of the Hamiltonian.
To demonstrate the capability of our proposed CQEC approach, we present detailed results for one logical qubit and then show that a high level of protection is also obtained for two logical qubits. Finally, we discuss how to generalize the approach to many encoded logical qubits.
Results Continuous operation of the three-qubit bit flip code.-In contrast to the discrete operation of the threequbit bit flip code [18], in the continuous operation, the error syndrome operators (stabilizer generators), are continuously measured at the same time. In Eq. (1), Z 1 represents the Pauli z operator that acts on the first physical qubit; that is, where the set of states |q 1 q 2 q 3 with q 1 , q 2 , q 3 = {0, 1} defines the computational basis. Similar definitions hold for the Pauli z operators Z 2 and Z 3 . The corresponding normalized readout signals are given by (k = 1, 2) where ρ(t) is the 8×8 density matrix of the three physical qubits and τ k is the so-called "measurement time" to distinguish between the ±1 eigenvalues of the stabilizer generator S k with a signal-to-noise ratio (SNR) of 1 [49]. Note that the detector readout signals I k (t) are given by the sum of the "signal part" Tr[S k ρ(t)] and the noise part ξ k (t), which has a vanishing mean. In the Markovian approximation, the noises ξ k (t) are assumed Gaussian and white with a two-time correlation function: where · denotes average over an ensemble of noise realizations. The evolution of the three-qubit quantum state ρ(t) in the absence of environmental decoherence is described by (in Itô interpretation [21]) The first line of Eq. (4) describes the coherent evolution of the three physical qubits due to the Hamiltonian ( = 1) where the frequency parameter Ω 0 sets the energy scale of the above Hamiltonian, and the coefficients a(t) and b(t) are functions of time with magnitudes smaller than 1. The operators X L and Z L denote the logical X and Z operators, given by where X q represents the Pauli x operator that acts on the qth physical qubit. Note that the system Hamiltonian (5) and the stabilizer generators (1) exhibit a block-diagonal matrix representation in the computational basis. The second line of Eq. (4) describes the measurementinduced quantum back-action on the three-qubit quantum state that is due to simultaneous continuous measurement of the stabilizer generators Z 12 and Z 23 . Each measurement channel is characterized by the measurement time parameter τ k and the measurement-induced ensemble dephasing rate Γ k , which are related via the quantum efficiency η k as follows τ k = 1/(2Γ k η k ) [49]. For ideal detectors, the quantum efficiency is unity, while for nonideal detectors the quantum efficiency is less than one. For simplicity of notation, we shall assume below that both detectors have identical parameters: Γ k = Γ m , τ k = τ m , and η k = η (k = 1, 2). (7) (This assumption can be readily removed and the analysis continued with different parameters for each detector.) Encoding with the three-qubit bit flip code effectively divides the full eight-dimensional Hilbert space of the three physical qubits into four two-dimensional subspaces, where the stabilizer generators Z 12 and Z 23 have definite ±1 values. As usual, the two-dimensional subspace where both stabilizer generators have values +1 is referred to as the code space, denoted as Q 0 , while the two-dimensional subspaces where (Z 12 , Z 23 ) have values (−1, +1), (−1, −1) and (+1, −1) are referred to as the error subspaces, denoted as Q 1 , Q 2 and Q 3 , respectively. The code space is spanned by the zero and one logical states, which are expressed in the computational basis as respectively. 
In the absence of errors, the (target) logical wavefunction evolves according to the following Schrödinger equation for the probability amplitudes of the zero (α L ) and one (β L ) logical states: In the above equation, h L (t) represents the Hamiltonian of the logical qubit and is given by the 2 × 2 diagonal sub-matrix of H(t) that corresponds to the code space, where σ x and σ z denote the conventional Pauli x and z matrices, and the coefficients a(t) and b(t) are the bwcoefficients given in Eq. (5). (In this work, we shall use the notation |ψ L (t) to denote the column matrix [α L (t) β L (t)] T .) We emphasize that evolution of the target logical wavefunction (9) is not affected by measurement, because the system Hamiltonian (5) and the stabilizer generators (1) commute with each other; i.e., there is no quantum Zeno effect (unlike the non-commuting situation, e.g., [50]). The error subspace Q 1 is spanned by the computational states |1 0 0 = X 1 |0 L and |0 1 1 = X 1 |1 L ; the error subspace Q 2 is spanned by the computational states |0 1 0 = X 2 |0 L and |1 0 1 = X 2 |1 L ; and the error subspace Q 3 is spanned by the computational states |0 0 1 = X 3 |0 L and |1 1 0 = X 3 |1 L . In addition, the 2 × 2 diagonal sub-matrices of H(t) that correspond to these error subspaces are all identical and equal to Note the factor of 1/3 in the above equation. This derives from the action of the system Hamiltonian H(t), Eq. (5), on a state with support in one of the error subspaces. For instance, for the system state, |ψ(t) = α X 1 |0 L + β X 1 |1 L , which is in error the error sub- In contrast, this factor of 1/3 does not appear when the system Hamiltonian H(t) acts on (code space) logical states, Eq. (9). We can therefore say that when the system state is in the error subspaces, coherent evolution in those subspaces is due to the spurious Hamiltonian (12), instead of the intended logical Hamiltonian (11).
In the presence of bit-flip errors, the (mixed) three-qubit state ρ(t) evolves according to the evolution equation that results from adding to the right-hand side of Eq. (4) the following decoherence terms, dρ_decoh(t)/dt = Σ_{q=1,2,3} γ_q [X_q ρ(t) X_q − ρ(t)], (13) where γ_q denotes the bit-flip error rate of the qth physical qubit. Thus, in the presence of bit-flip errors, the full three-qubit state evolves as dρ(t)/dt = [right-hand side of Eq. (4)] + Σ_{q=1,2,3} γ_q [X_q ρ(t) X_q − ρ(t)]. (14) Our analysis of logical errors presented below is based on the jump/no-jump method [18] for bit-flip errors. In this method, gradual decoherence due to the terms (13) is described as the average effect of bit-flip errors X_1, X_2 or X_3 that occur at random times, as follows. In the infinitesimal time interval (t, t + δt), a bit-flip error X_q occurs with probability δtγ_q. If this error occurs, the system state "jumps" from ρ(t) to ρ(t + δt) = X_q ρ(t) X_q; otherwise, the system state continuously evolves according to Eq. (4), without environmental decoherence. On averaging over many instances of the bit-flip errors, the jump/no-jump approach reduces to the open quantum system model (14), where errors continuously change the mixed system state ρ(t).
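As a minimal illustration of the jump part of this unraveling (errors only; the coherent and measurement-induced evolution of Eq. (4) is not simulated), one trajectory of bit-flip events can be sampled as sketched below. The rates, time step and duration are illustrative choices, not values prescribed in the text.

import numpy as np

# Sketch: in each interval dt, qubit q suffers a bit-flip with probability
# gamma[q]*dt; otherwise no error event is recorded for that step.
rng = np.random.default_rng(seed=0)
gamma = np.array([1.25e-3, 1.25e-3, 1.25e-3])   # bit-flip rates (illustrative, units of Gamma_m)
dt, t_final = 0.05, 2000.0                       # time step and duration (units of 1/Gamma_m)

def sample_error_record(gamma, dt, t_final, rng):
    """Return a list of (time, qubit_index) bit-flip events for one trajectory."""
    events = []
    n_steps = int(t_final / dt)
    for n in range(n_steps):
        flips = rng.random(gamma.size) < gamma * dt   # independent flip attempts
        for q in np.flatnonzero(flips):
            events.append((n * dt, int(q)))
    return events

events = sample_error_record(gamma, dt, t_final, rng)
print(len(events), "bit-flip events; first two:", events[:2])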
Our goal is to maximize the fidelity between the target logical wavefunction (9) and the true (mixed) logical state (15), at some final time, where the evolution includes the decoherence effect of bit-flip errors as well as the effect of the spurious coherent evolution due to an added Hamiltonian. To counteract the latter two effects, we introduce the double threshold CQEC protocol described in the following subsection.
The double threshold CQEC protocol.-In the three-qubit bit flip code, the error correction operations are C_op = X_1, X_2 and X_3, (16) which are applied on the physical qubits when the error syndrome (defined as the values of the stabilizer generators Z_12 and Z_23, in this order) is equal to (−1, +1), (−1, −1) or (+1, −1), respectively. To apply these error correction operations in the continuous operation, we have to estimate the error syndrome from the noisy readout signals I_k(t) given in Eq. (2). To do this, we filter the latter to obtain smoother signals Ī_k(t) (the overbar denotes the filtered signal) that obey the following filter equation: τ dĪ_k(t)/dt = I_k(t) − Ī_k(t), (17) where τ plays the role of an averaging-time parameter. The initial condition for Eq. (17) is discussed below. In practice, the filtered readout signals Ī_k(t) can be obtained, e.g., by passing the bare readout signals I_k(t) through a resistor-capacitor circuit (RC lowpass filter [51]). Note that the SNRs of the filtered readout signals can be increased by choosing a larger value of τ. For instance, in the absence of bit-flip errors, the filtered readout signals fluctuate around the value +1 in the stationary regime (t ≫ τ), and their SNRs are equal to 2τ/τ_m. The averaging-time parameter τ should not be chosen arbitrarily large; there is an optimal value that is obtained below.
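For concreteness, the filter (17) can be integrated with a simple Euler update, as sketched below for a synthetic raw signal whose signal part switches from +1 to −1 at a bit-flip event. The white-noise normalization of the raw signal (standard deviation sqrt(tau_m/dt) per step) is one common discretization convention and is assumed here for illustration, as are the parameter values.

import numpy as np

# Euler discretization of the low-pass filter (17):
#   Ibar[n] = Ibar[n-1] + (dt/tau) * (I_raw[n] - Ibar[n-1])
rng = np.random.default_rng(seed=1)
tau_m = 0.5                    # measurement time (illustrative; tau_m = 1/(2*Gamma_m*eta))
tau = 2.0 * tau_m              # averaging-time parameter
dt = 1e-3
n_steps = 20000

signal_part = np.ones(n_steps)
signal_part[n_steps // 2:] = -1.0            # a bit-flip error at the midpoint flips the syndrome

# Raw readout: signal part plus white noise (normalization assumed for illustration).
I_raw = signal_part + np.sqrt(tau_m / dt) * rng.standard_normal(n_steps)

I_bar = np.empty(n_steps)
I_bar[0] = 1.0                               # filter initialized at the no-error value
for n in range(1, n_steps):
    I_bar[n] = I_bar[n - 1] + (dt / tau) * (I_raw[n] - I_bar[n - 1])

print(round(I_bar[n_steps // 4], 2), round(I_bar[-1], 2))   # roughly +1 before the flip, roughly -1 after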
To diagnose the error syndrome, we use a double thresholding scheme that is applied to the filtered readout signals Ī_1(t) and Ī_2(t). We introduce two error threshold parameters Θ_1 and Θ_2 (Θ_1 < Θ_2) that define the interval [Θ_1, Θ_2], which is referred to as the "syndrome uncertainty region", see Fig. 1.
Figure 1. Example of filtered readout signals Ī_1(t) and Ī_2(t) when a bit-flip error X_2 occurs. This error is detected by the CQEC protocol (see main text) at the moment when both filtered readout signals have exited the "syndrome uncertainty region" below the lower error threshold Θ_1. The filtered readout signals Ī_k(t) are discontinuous since the CQEC protocol resets them to the value +1 at the moment when the occurred error is diagnosed.
If at least one of the filtered readout signals lies within this interval, we
say that we are not certain about the value of the error syndrome, and do nothing. More precisely, the double thresholding scheme works as follows. If I 1 (t) and I 2 (t) are both larger than Θ 2 , the diagnosed error syndrome is (+1, +1) and no error correction operation is applied, since the system quantum state is most likely in the code space. If I 1 (t) < Θ 1 and I 2 (t) > Θ 2 , the diagnosed error syndrome is (−1, +1) and the error correction operation to be applied is C op = X 1 , since the system quantum state is most likely in the error subspace Q 1 . If I 1 (t) and I 2 (t) are both smaller than Θ 1 , the diagnosed error syndrome is (−1, −1) and the error correction operation to be applied is C op = X 2 , since the system quantum state is most likely in the error subspace Q 2 . If I 1 (t) > Θ 2 and I 2 (t) < Θ 1 , the diagnosed error syndrome is (+1, −1) and the error correction operation to be applied is C op = X 3 , since the system quantum state is most likely in the error subspace Q 3 .
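The diagnosis rules just listed can be transcribed directly into a small helper, sketched below; the function name and signature are illustrative choices, and the arguments are the current values of the two filtered signals and the two thresholds.

def diagnose_syndrome(I1_bar, I2_bar, theta1, theta2):
    """Double-threshold diagnosis: return 1, 2 or 3 for the qubit to flip, else None."""
    # Do nothing while either filtered signal lies inside the syndrome uncertainty region.
    if theta1 <= I1_bar <= theta2 or theta1 <= I2_bar <= theta2:
        return None
    if I1_bar > theta2 and I2_bar > theta2:
        return None        # diagnosed syndrome (+1, +1): code space, no correction
    if I1_bar < theta1 and I2_bar > theta2:
        return 1           # diagnosed syndrome (-1, +1): apply X1
    if I1_bar < theta1 and I2_bar < theta1:
        return 2           # diagnosed syndrome (-1, -1): apply X2
    if I1_bar > theta2 and I2_bar < theta1:
        return 3           # diagnosed syndrome (+1, -1): apply X3
    return None            # defensive fallback; unreachable given the checks above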
The error correction operations C op must now be applied immediately after an error is detected. Note that this contrasts with the situation in operation of a quantum memory, where correction of errors can be delayed to the end of the continuous operation of the code [40,42,45]. In the present analysis, we shall assume that the error correction operations are applied instantaneously on the physical qubits, changing the three-qubit state from ρ(t) to C op ρ(t)C op when the error correction operation C op is applied.
Finally, the filtered readout signals Ī_k(t) are reset to the initial condition +1 at the moment when an error is diagnosed (see Fig. 1). Their subsequent values are dictated by the filter equation (17) until the next error is diagnosed, and so on. Figure 1 depicts an example showing how the filtered readout signals Ī_1(t) and Ī_2(t) are affected by the occurrence of a bit-flip error X_2 at the moment in time t_err = 162Γ_m^{−1}. Before this error occurs, the system state is in the code space, so the filtered readout signals fluctuate around 1. After the occurrence of the error X_2, the "signal part" of the filtered readout signals becomes (for t ≥ t_err) Ī_k(t) = −1 + 2 exp[−(t − t_err)/τ]. (19) Equation (19) is the solution of Eq. (17) with I_k(t) replaced by −1, which is the "signal part" of the bare readout signal Eq. (2) after the error X_2 occurs. Even though both filtered readout signals have the same "signal part", we see in Fig. 1 that these signals follow different paths due to noise. This indicates that if we had used a single error threshold to detect errors, the error X_2 would most likely have been misdiagnosed, because the filtered readout signals do not cross the given error threshold at the same time, see Fig. 1. In contrast, our double thresholding scheme performs well unless relatively large fluctuations occur in the filtered readout signals. For instance, in the example considered above, the error X_2 would be diagnosed as X_1 if a relatively large positive fluctuation (of magnitude of the order of Θ_2 − Θ_1) had made the filtered readout signal Ī_2(t) lie above the upper error threshold Θ_2 at the moment when the other filtered readout signal Ī_1(t) is below the lower error threshold Θ_1. We will show below that the probability to misdiagnose errors in our double thresholding scheme can be made exponentially small by both increasing the length of the "syndrome uncertainty region" and increasing the averaging-time parameter τ, see Fig. 3. Generally speaking, errors that affect several error syndrome signals Ī_k(t) at the same time (e.g., the error X_2 in the three-qubit bit flip code) are the most difficult to detect under continuous monitoring, and the performance of the continuous operation critically depends on suppressing misdiagnosis of such errors [45]. Effective open-system model for the logical qubit.-In this section we develop an approximate evolution equation for the mixed logical state ρ_L(t) that describes the combined action of both bit-flip errors and the above CQEC protocol, and the action of an applied time-dependent Hamiltonian. We are particularly interested in the limit of sufficiently small bit-flip error rates γ_q, where single bit-flip errors are the most probable, followed by two bit-flip errors, and so on. In this regime there are three different scenarios that can give rise to logical errors during the time evolution: a single misdiagnosed bit-flip error, spurious coherent evolution in an error subspace following a correctly diagnosed bit-flip error, and two bit-flip errors that are misdiagnosed as one. We analyze each of these in turn below.
For the following analysis it is convenient to introduce a timestep ∆t such that where t det denotes the characteristic time to detect a bitflip error by our CQEC protocol, and t op is the operation time of the continuous implementation. Because of the second inequality of Eq. (20), we assume below that at most two bit-flip errors occur within each timestep ∆t. We shall eventually send ∆t to zero, to obtain an effective evolution equation for the encoded density matrix L (t).
We consider first the scenario where a single bit-flip error that occurs in the time interval (t, t + ∆t) is misdiagnosed by the CQEC protocol. In this case, a wrong error correction operation is applied to one of the physical qubits: this incorrect operation transfers the system state to another error subspace, instead of back to the code space. For instance, if the actual error is X 2 but the diagnosed error syndrome is (−1, +1) instead of (−1, −1), the error correction operation that will be applied is C op = X 1 instead of C op = X 2 . This will incorrectly transfer the system state from error subspace Q 2 to error subspace Q 3 , resulting in a logical X error, since X 1 X 2 = X 3 X L and X L is the logical X operator.
The system state will be returned to the code space by the next iteration of the CQEC protocol if this iteration successfully diagnoses the new error syndrome. We shall assume that the probability to misdiagnose a bit-flip error is small enough that a series of two consecutive misdiagnoses is unlikely, and the next iteration does indeed return the system state to the code space. After completion of the next (successful) iteration of the CQEC protocol, the system state at the moment t + ∆t is equal to X L ρ(t) X L , which implies that the 2×2 logical density matrix at that moment is The probability of this scenario is given by where p (Xq) misdiag denotes the probability to misdiagnose the bit-flip error X q . We show in the Methods section that this probability depends exponentially on the parameters of the CQEC protocol, as is illustrated in Fig. 3. This scenario results in a contribution ∆ (1) L to the actual logical state L (t + ∆t) at the moment t + ∆t (Eq. (36)), with Note that in the argument leading to Eq. (21), we have disregarded the coherent evolution of the system state in the error subspaces because this leads to correction terms of the order of (t det Ω 0 ) 2 (∆tΩ 0 ) 2 , which can be neglected since we are interested in keeping terms only up to first order in ∆t in Eq. (23).
The second scenario corresponds to the case of a single bit-flip error that is correctly diagnosed by the CQEC protocol. The probability for this scenario is In contrast to the first scenario, logical errors are now due only to spurious coherent evolution in the corresponding error subspace during the time that it takes to diagnose and correct the occurred error. Let us assume that the bit-flip error X q occurs at the instant t ∈ [t, t + ∆t]. We shall denote the time to detect such an error as t (q) det , where the upper index q indicates that in general the error detection time may depend on the bitflip error type, X q = X 1 , X 2 or X 3 . The system density matrix at the moment t + ∆t is where the integral evaluates the average over the error instant t , and U(t 1 , t 2 ) with t 1 ≤ t 2 denotes the unitary evolution operator associated to the system Hamiltonian (5). If we read the right-hand side of Eq. (26) from right to left, the first X q operator accounts for the error that occurred, and the second X q operator accounts for the application of the error correction operation, which is C op = X q since the occurred error is correctly diagnosed. We now seek to approximate ρ scn-2 (t + ∆t) to first order in ∆t. Because the integral (25) is over a time interval of duration approximately equal to ∆t, we may write ρ scn- where the integrand of Eq. (25) has been evaluated at t = t. In addition, the operator V q (t) may be replaced by its zero-order approximation in ∆t: Note that the 8 × 8 matrices X q H(t)X q and H(t) exhibit a similar block-diagonal matrix representation in the computational basis, since both commute with the stabilizer generators. This blockdiagonal structure consists of 2 × 2 diagonal submatrices for each subspace Q . In particular, the 2 × 2 diagonal submatrices of X q H(t)X q and H(t) that correspond to the code space are given respectively by the spurious Hamiltonian h spurious (t) and by the logical Hamiltonian h L (t) that are defined in Eqs. (11)- (12). This implies that the 2 × 2 diagonal submatrix of V q (t) that corresponds to the code space can be approximated as Up to first order in ∆t, the logical state at the moment t + ∆t is then given by Equation (27) provides an effective parameterization of the effective action of the logical error operation V q (t) due to spurious coherent evolution in an error subspace during detection of a single bit-flip error, in terms of the error detection time t (q) det . We can estimate this time from the "signal part" of the filtered readout signals I k (t), i.e., disregarding the noise. In this noiseless approximation, the error-detection time is the same for all bit-flip errors; i.e., t (q) det = t det , so we may consider a particular case. Let us consider the bit-flip error X 2 . If we apply the CQEC protocol to the "signal part" of the filtered readout signals, the error X 2 will be diagnosed when I k (t err +t det ) = Θ 1 for k = 1, 2. From this condition and Eq. (19), we obtain the error-detection time More generally, the presence of noise in the filtered readout signals will make the error-detection times random.
For simplicity, and to obtain analytic estimates, we shall assume in this work that they are deterministic and given by Eq. (29). For this second scenario, Eqs. (27)-(28) show that t (q) det then provides a parametrization of the effect of the code space logical error operation V q that is due to spurious coherent evolution in the error subspaces. The contribution of this scenario to the logical state L (t + ∆t) at the moment t + ∆t is (see Eq. (36)) The third scenario is the case of two errors that occur sufficiently close in time that they are not individually diagnosed by the CQEC protocol; instead, the protocol diagnoses a different (false) error. Now it is clear that if two consecutive errors occur sufficiently far apart in time, both errors will be correctly diagnosed. On the other hand, if these errors occur sufficiently close in time, the CQEC protocol can fail, since our protocol determines the error syndrome from the filtered readout signals I k (t), which are slow and take some time (proportional to the averaging-time parameter τ ) to exit the "syndrome uncertainty region", as evident in Fig. 1. Let us denote ∆t qq as the time window in which two consecutive errors, first X q and then X q , are misdiagnosed as the false error X q (q = q = q ). Neglecting spurious coherent evolution in the error subspaces, application of the wrong error correction operation C op = X q effectively induces a logical X operation on the system state ρ(t) since C op X q X q = X q X q X q = X L , and then the logical density matrix changes from L (t) to at the moment t+∆t (see also Eq. (21)). The probability for this scenario is given by where the time windows ∆t 12 , ∆t 23 and ∆t 13 can be easily evaluated in the noiseless approximation, by an analogous procedure to that above for t This yields ∆t 12 = ∆t 23 = τ ln 2 1 + Θ 1 and ∆t 13 = τ ln 1 + Θ 2 1 + Θ 1 .
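In this noiseless approximation, both the detection time of Eq. (29) and the two-error misdiagnosis windows of Eq. (33) reduce to simple logarithms of the threshold parameters, since the filtered signal part decays as −1 + 2 exp(−t/τ) after an error (Eq. (19)). The small helper below evaluates them; the threshold values are those quoted for Fig. 4, while the value of τ is an illustrative choice.

import numpy as np

# Noiseless estimates: after an error the filtered signal part decays as
# Ibar(t) = -1 + 2*exp(-t/tau), so it crosses the lower threshold theta1 after
# t_det = tau*ln[2/(1+theta1)]; the two-error windows of Eq. (33) follow similarly.

def detection_time(tau, theta1):
    return tau * np.log(2.0 / (1.0 + theta1))

def two_error_windows(tau, theta1, theta2):
    dt_12 = dt_23 = tau * np.log(2.0 / (1.0 + theta1))
    dt_13 = tau * np.log((1.0 + theta2) / (1.0 + theta1))
    return dt_12, dt_23, dt_13

tau, theta1, theta2 = 1.0, -0.54, 0.8        # tau illustrative (units of 1/Gamma_m)
print(detection_time(tau, theta1))            # ~1.47*tau for theta1 = -0.54
print(two_error_windows(tau, theta1, theta2))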
The factor of 2 in Eq. (32) is due to the fact that the time window ∆t qq is the same as ∆t q q , which is the corresponding time window for the case where the error X q occurs before the error X q . The contribution of this scenario to the logical state L (t + ∆t) at the moment t + ∆t is (see Eq. (36) Finally, if none of the above three scenarios occur, the logical state at the moment t + ∆t is given by the time evolved state under the logical Hamiltonian h L (t) of Eq. (11) and is equal to where we have disregarded terms of order (∆t) 2 .
The logical state at the moment t + ∆t that takes into account all of the above four scenarios is then given by Inserting the approximations Eqs. 32) and (35) into Eq. (36) and then taking the limit ∆t → 0, we obtain the following effective evolution equation for the logical state L (t): Here is now the logical X error rate for quantum memory operations [42,44]. The initial condition for Eq. (37) reads as Equation (37) is the main result of this section. To the best of our knowledge, the last term at the right-hand side of Eq. (37) has not been previously discussed in the context of QEC for quantum simulation or quantum annealing. This term quantifies the logical errors due to spurious coherent evolution in the error subspaces.
We now estimate the probabilities p (Xq) misdiag that the CQEC protocol misdiagnoses the bit-flip errors X q . Note that the bit-flip errors X 1 and X 3 are equivalent in the three-qubit bit flip code. Thus we expect that p misdiag , which is numerically verified in Fig. 3. averaging-time parameter τ 2τ m (see Fig. 7), the probability to misdiagnose the X 1 or X 3 errors is much smaller than the probability to misdiagnose the X 2 error. Thus, we may not only assume that p misdiag , but we can also neglect these terms in Eqs. (37)- (38), i.e., we can set In addition, the probability p (X2) misdiag to misdiagnose the error X 2 can be approximated as where the coefficient c = 1.607 is obtained from the fit shown in Fig. 3. The exponential dependence of the probability p (X2) misdiag on the parameters of the CQEC protocol is derived in the Methods Section.
Using these estimates for p (Xq) misdiag , the logical X error rate formula Eq. (38) can be rewritten in terms of all relevant parameters as Note that Γ L implicitly depends on the efficiency of the measurement, η, via the explicit dependence on measurement time τ m = 1/(2Γ m η). In a given experimental setup, the parameters τ, Θ 1 , Θ 2 would constitute a minimal set of tunable parameters. Figure 4 shows the non-monotonic dependence of Γ L on the time-averaging parameter τ , for fixed values of the error threshold parameters Θ 1 = −0.54 and Θ 2 = 0.8, and equal bit-flip error rates γ q = γ = 1.25 × 10 −3 Γ m . Note that, in the limit of relatively small τ , the logical X error rate increases exponentially because the SNR of the filtered readout signals decreases, leading to more frequent false diagnoses of X 2 errors. In this limit, the first term of Eq. (42) is dominant. In the opposite limit of relatively large τ , the logical X error rate increases linearly in τ , due to misdiagnosis of two errors that occur sufficiently close in time. We see that measurement inefficiency η ≤ 1 affects the logical error rate only for small averaging times τ and has no effect at large τ . This reflects the fact that while the mis-diagnosis of single qubit errors that dominates Γ L at small τ depends on measurement efficiency via τ m (measurement time parameter), the mis-diagnosis of two errors occurring close in time was evaluated in the noiseless approximation and does not depend on η.
The numerical calculations presented at the end of this Section show that the effective open-system model for the logical qubit [Eq. (37)] together with the estimates Eqs. (42), (27) and (40)-(41) for the parameters Γ L (logical X error rate), V q (t) (logical error operation parameterized in terms of error-detection times t (q) det , see Eq. (29)) and p (Xq) misdiag (probability to misdiagnose bit-flip error X q ) provide a good description for the true evolution of the logical state L (t) that is encoded into the full system state ρ(t), which evolves according to Eq. (14).
Final logical state fidelity.-The figure of merit that we aim to maximize under evolution due to a timedependent Hamiltonian is the final fidelity F between the target (9) and the true (15) logical states, defined as Using the effective evolution equation (37) for the logical state L (t), we can derive the following analytical expression for the final logical state infidelity which is expressed in terms of the coherent evolution of the target logical state |ψ L (t) . The first term on the right-hand side of Eq. (44) is the usual term in quantum memory, i.e., Γ L t op , generalized here to the case of a finite and time-dependent logical Hamiltonian (11). Note that the time integral accounts for the accumulated loss of fidelity due to logical X errors on the time-evolving logical state. The second term is due to the spurious coherent evolution in the error subspaces. Note that this term is positive, i.e., contributes a finite infidelity, because the operator V q , given in Eq. (27), is unitary. Equation (44) is the main result of this section.
To obtain this result in Eq. (44), we have applied the jump/no-jump method in Eq. (37) to estimate L (t op ) as follows: whereγ q = γ q 1−p (q) mis ,γ tot =γ 1 +γ 2 +γ 3 and U L (t 1 , t 2 ) is the unitary evolution operator associated to the error free Schrödinger evolution equation (10). When the jump/no-jump approach is applied to Eq. (37), we see that logical errors come in two forms. First, the usual logical X errors that change the logical wavefunction from |ψ L (t) to σ x |ψ L (t) (second term in Eq. (45)). These occur at the logical X error rate Γ L given in Eq. (42). Second, logical errors that are characterized by the logical error operation V q given in Eq. (27) (third term in Eq. (45)). This new type of logical errors is specifically due to spurious coherent evolution in the error subspaces. Such errors change the logical wavefunction from |ψ L (t) to V q |ψ L (t) and occur at the rateγ q . In addition, we also have the coherent no-jump evolution that is described by the unitary evolution operator U L (t 1 , t 2 ) (first term in Eq. (45)). Note that in Eq. (45) we have disregarded cases where there are more than one logical error occurrences during the continuous operation duration t op . This approximation is valid in the limit of small bit-flip error rates γ q that we assume here.
Optimization of the CQEC protocol for operation under quantum memory and quantum annealing.-In this section we derive the optimal parameters (Θ opt 1 , Θ opt 2 and τ opt ) of the CQEC protocol that maximize the final logical state fidelity (43). The optimization will be specific to a particular choice of Hamiltonian evolution, i.e., to the choice of h L (t), since the temporal dependence of |ψ L (t) is determined by this. We shall consider here the particular case of quantum annealing with a linear schedule in addition to quantum memory. In this case the logical Hamiltonian h L (t) is given by Eq. (11) with the coefficients a(t) and b(t) equal to In the context of quantum annealing we shall assume that the adiabatic limit holds, t op Ω 0 1, so that we may approximate the target logical wavefunction as the instantaneous ground state, which reads as where θ(t) = arctan a(t)/b(t) . Inserting Eq. (47) into Eq. (44), we obtain for the final logical state infidelity which is the cost function that we use in the optimization procedure. We emphasize that the result (48) applies to the special case of quantum annealing with a linear schedule, and note also that we have included terms up to second order in Ω 0 t (q) det . More generally, the final infidelity 1 − F for arbitrary annealing schedule parameters a(t) and b(t) can also be easily obtained, as long as these coefficients also satisfy the adiabatic condition |ȧ(t)|, |ḃ(t)| Ω 0 . This can be accomplished by writing the first integrand of Eq. (44) as 1 − | ψ L (t)|σ x |ψ L (t) | 2 = cos 2 θ(t) and the sec- det , whereθ(t) = arctan 3a(t)/b(t) andΩ(t) = Ω 0 a 2 (t) + b 2 (t)/9 is half the instantaneous energy gap of the spurious Hamiltonian (12). To obtain a final numerical value for the infidelity the integrals of Eq. (44) would have to be evaluated numerically for evolution under a specific annealing Hamiltonian.
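For numerical work with an arbitrary schedule, the instantaneous target state can also be obtained by directly diagonalizing the 2×2 logical Hamiltonian at each time, rather than using the closed form (47). The sketch below does this for a logical Hamiltonian of the form h_L(t) = −Ω_0[a(t)σ_x + b(t)σ_z], as suggested by Eq. (61); the linear schedule a(t) = 1 − t/t_op, b(t) = t/t_op and the parameter values are assumptions made purely for illustration, not the schedule defined in Eq. (46).

import numpy as np

# Instantaneous ground state of a 2x2 logical annealing Hamiltonian of the
# assumed form h_L(t) = -Omega0*(a(t)*sigma_x + b(t)*sigma_z), with an
# illustrative linear schedule a = 1 - t/t_op, b = t/t_op.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def instantaneous_ground_state(t, t_op, Omega0):
    a, b = 1.0 - t / t_op, t / t_op
    h = -Omega0 * (a * sigma_x + b * sigma_z)
    vals, vecs = np.linalg.eigh(h)
    return vecs[:, 0]                          # eigenvector of the lowest eigenvalue

Omega0, t_op = 0.1, 1000.0                     # illustrative (units of Gamma_m and 1/Gamma_m)
targets = [instantaneous_ground_state(t, t_op, Omega0) for t in np.linspace(0.0, t_op, 5)]
print(np.round(targets[0], 3), np.round(targets[-1], 3))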
Since our formulation also applies to the special case of quantum memory, Ω 0 = 0, we first present results for the optimal parameters τ opt , Θ opt 1 and Θ opt 2 in this case before presenting results for quantum annealing (Ω 0 = 0). For simplicity, we discuss here the case of equal bitflip error rates In quantum memory operation, the final logical state infidelity 1 − F is given by the first term of Eq. since the second term exactly vanishes. Assuming that the initial logical state is |ψ L (0) = |0 L or |1 L , we find that 1 − F reduces to Γ L t op , because the target logical evolution is trivial in the quantum memory case (|ψ L (t) is constant). In addition, we may assume that the operation duration t op is fixed. Then minimization of the final infidelity in quantum memory is equivalent to optimization of the logical X error rate Γ L in Eq. (42).
The numerical factors and exponents in the above equation are obtained from fitting for γ ∈ [10 −6 Γ m , 10 −4 Γ m ]. The approximate quadratic scaling of Γ opt L with γ indicates that the double threshold CQEC protocol is both effective and accurate in diagnosing single bit-flip errors. Figure 5 also shows the logical X error rate for the linear variant of the optimal Wonham filter, Γ Wonham L = 3γ 2 τ m ln(2/γτ m ), that was obtained in Ref. [42]. We point out that our optimized logical error rate Γ opt L is very close to that of the linear variant of the optimal Wonham filter.
In addition, we find that the discrete and continuous operations can exhibit similar performances if the cycle time ∆t cycle from the discrete operation is related to the strength Γ m of the continuous measurements as follows: The above results are obtained from the relation Γ opt L = Γ disc L , where Γ disc L = 3γ 2 ∆t cycle is the logical X error rate for the discrete operation [33].
We now discuss the results of optimizing the double threshold error detection parameters in the specific case of quantum annealing. To quantify the effectiveness of the CQEC protocol in correcting logical errors, we introduce here the ratio of the infidelity for an unencoded calculation, to the infidelity for an encoded calculation using the optimized double-thresholding parameters. This ratio, R(γ, Ω 0 ), is defined for a given error rate and annealing Hamiltonian, which we shall denote here only by its strength Ω 0 . Specifically, where F opt is the value of the final logical state infidelity Eq. (48), optimized with respect to τ, Θ 1 , Θ 2 (see below), and is the fidelity between the final target logical state |ψ L (t op ) and the final state ρ unenc (t op ) of an unencoded qubit subject to bit-flip errors with rate γ and coherent evolution due to a Hamiltonian h L (t). We refer to R(γ, Ω 0 ) as the "reduction factor" of the final logical state infidelity, since by construction it shows by how much the infidelity is reduced by encoding together with optimization of the error detection. It is easy to see, using the jump/no-jump method, that the unencoded final infidelity F unenc is approximately given by the first term of Eq. (44), with Γ L replaced by γ. In addition, for the quantum annealing problem with a linear schedule, Eq. (46), that is considered here, we have (54) Figure 6 shows the dependence of the reduction factor (52) on the physical qubit error rate γ. We see that R(γ, Ω 0 ) increases as γ decreases, saturating at the value R plateau in the limit of small bit-flip error rate γ. This plateau value increases with decreasing Ω 0 as follows Γ m /90, Γ m /30 and Γ m /10. Note that, in the quantum annealing operation considered here, the duration operation t op and the frequency parameter Ω 0 have to satisfy the adiabatic condition, Ω 0 t op 1, which allowed us to use the instantaneous ground state (47) as the target logical state. Assuming that this condition is satisfied, the reduction factor (52) of the final logical state infidelity due to CQEC is independent of t op .
Finally, we summarize the optimized parameters τ opt , Θ opt 1 , Θ opt 2 employed in Figure 6. Figure 7 depicts the results for the optimal averaging time τ opt that minimizes the logical X error rate Γ L in the case of quantum memory (black lines) and the final logical state infidelity (48) in the case of quantum annealing for Ω 0 = Γ m /90 (green lines), Γ m /30 (blue lines), 0.1Γ m (purple lines) and 0.3Γ m (red lines). We see that the optimal averaging-time parameter τ opt generally increases when the measurement quantum efficiency η decreases, due to the additional noise at the output of the readout signals (2) [49]. In the particular case of quantum memory (Ω 0 = 0), we obtain The above results are obtained from fitting τ QM opt for the range of error rates indicated in Fig. 7. In the case of quantum annealing, for a fixed and finite Ω 0 , the optimal We point out that in our optimization procedure we have imposed two constraints: −1 ≤ Θ 1 ≤ 0 and 0 ≤ Θ 2 ≤ 0.8. The reason for the constraint on Θ 2 is that our analytical estimates for the logical error rate Γ L , see Eq. (42), and the error detection time t det , see Eq. (29), are not accurate when Θ 1 approaches 1. The optimization finds that the optimal position of the upper error threshold should be as close to 1 as it is allowed. If we instead use the constraint 0 ≤ Θ 2 ≤ 1 (with the same previous constraint on Θ 1 ), the optimization finds that Θ opt 1 ≈ −0.4 and Θ opt 2 = 1.0. This indicates that the optimal position of the lower error threshold is robustly located around −0.5.
Overall performance of the double threshold CQEC protocol.-To quantify the effectiveness of the double threshold CQEC protocol in correcting logical errors during the entire continuous operation, we introduce the time-dependent reduction factor R t of the logical state infidelity. This is defined analogously to Eq. (52); where F unenc. (t) is now the time-dependent unencoded fidelity, defined as in Eq. (53) with the operation time t op replaced by t ∈ [0, t op ], and F t is the time-dependent logical state fidelity (60) Figure 8 shows the time dependence of the logical state infidelity, 1−F t = 1− ψ L (t)| L (t)|ψ L (t) , obtained using two approaches: "full numerics" and "effective model". In the first approach, the logical state L (t) is obtained by projecting out the code space components from the full system density matrix ρ(t), where the latter evolves according to the evolution equation (14), together with the action of the instantaneous error-correction operations C op [Eq. (16)] that are applied to the physical qubits whenever an error is diagnosed by the double threshold CQEC protocol. The ensemble average of Eq. (15) is generated over an ensemble of 20,000 realizations, using the techniques described in the Methods section. The results of this approach are depicted in Fig. 8 by the solid lines, for Hamiltonian strength parameters Ω 0 = 0.1Γ m , 0.2Γ m and 0.3Γ m . The second approach is that of our effective model derived in the previous subsections. Here the logical state infidelity is obtained from the numerical solution of the effective open-system model given by Eq. (37). The results of this approach are depicted in Fig. 8 by the dotted lines. The good agreement between the solid and dotted lines in Fig. 8 demonstrates that the effective open-system model accurately describes the evolution of the logical qubit during the entire continuous operation. This validates our analysis above for the optimized performance of the double threshold CQEC protocol. The inset of Fig. 8 shows the reduction factor R t for the logical state infidelity during the entire duration of the continuous operation for Ω 0 = 0.1Γ m . Here also, good agreement is found between the full numerics and the effective model approaches. Although in this specific example the reduction factors of the logical state infidelity are modest (varying from 5 to 15), larger reduction factors can be readily achieved with stronger continuous measurements. This can be seen explicitly in Fig. 6, where the increase in R is evident for Γ m larger than 10Ω 0 .
We now discuss how to generalize the effective open-system model for one logical qubit, Eq. (37), to the general case of multiple logical qubits. In this general case, we again have logical errors that come in two forms: logical X errors, and logical errors that are characterized by a logical error operation V (l) q , where q now labels the three physical qubits that encode the lth logical qubit. The logical error operations V (l) q are again given by Eq. (27), where h L (t) (logical Hamiltonian) and h spurious (t) (spurious Hamiltonian) are now specified respectively by the code space diagonal submatrices of the system Hamiltonian H(t) and X q H(t)X q . Logical X errors acting on the lth logical qubit occur at a rate Γ (l) L that is also given by Eq. (42). Note that the set of parameters of the double threshold CQEC protocol (τ , Θ 1 and Θ 2 ) can differ for different logical qubits, so Γ (l) L can take different values for different logical qubits. The rate at which the logical error operation V (l) q enters the effective model (Eq. (37)) is approximately equal to the bit-flip error rate γ q of the qth qubit, since the probability p (Xq) misdiag to misdiagnose the error X q is typically much smaller than one (see Fig. 3). As an example, we consider two logical qubits encoded by the physical qubits q = 1, 2, 3 (logical qubit with label l = 1) and q = 4, 5, 6 (logical qubit with label l = 2). Consider the two-qubit logical Hamiltonian h̃ L (t), which contains transverse-field terms −Ω 0 a(t) σ (l) x and a coupling term involving σ (l) z with coefficient b(t); here σ (l) x and σ (l) z are the Pauli x and z operators corresponding to the lth logical qubit (l = 1, 2), and the quantum annealing coefficients a(t) and b(t) are given in this example by Eq. (46). We will assume that the initial condition for the target logical state evolution is |ψ L (0)⟩ = (|0⟩ L + |1⟩ L ) ⊗ (|0⟩ L + |1⟩ L )/2. For this example of two logical qubits, the effective open-system model reads as in Eq. (62), where the V (l) q are the logical error operations defined above. Figure 9 shows that the logical state infidelity obtained from the effective open-system model for two logical qubits [Eq. (62); dotted blue line] agrees very well with the corresponding infidelity obtained from the full numerical calculations (solid red line). This indicates that the effective model can be used to accurately estimate and optimize the performance of our CQEC protocol in order to protect the coherent evolution of several logical qubits. Most importantly, both the effective model and the full numerical calculations show that the CQEC protocol provides a significant reduction in the final state infidelity by a factor of ∼ 14 relative to the value obtained without error correction. For the two-logical-qubit Hamiltonian considered here, this reduction is similar to that obtained for the corresponding single logical qubit in Fig. 8. However, in general, this may not be the case since the reduction depends on the form of the coupling between the logical qubits. This is because if the coupling term is changed, the logical state infidelity can change since both h̃ L (t) and the logical error operations V (l) q are modified.
Discussion
We have analyzed the continuous operation performance of the three-qubit bit flip code, aimed at preserving the coherent evolution of the logical qubits against decoherence from bit-flip errors. Error detection is carried out using a relatively simple and nearly optimal protocol that consists of filtering (time-averaging) the noisy bare readout signals and using a double thresholding scheme to diagnose the error syndrome in real time from the filtered readout signals. In addition, immediately after diagnosing an error, discrete (i.e., instantaneous) error correction operations are applied to the physical qubits, as in the conventional code operation.
We have shown that this combination of continuous detection of errors in real time with discrete correction of errors is very effective and yields, e.g., in the case of quantum memory operation, a logical X error rate that exhibits a nearly quadratic scaling with the physical error rate and has a magnitude that is slightly larger than the logical X error rate of the linear variant of the optimal Wonham filter [42]. The advantage of our double threshold CQEC protocol is that it can be simpler to implement.
Spurious coherent evolution of the system state in the error subspaces [48], due to a (time-dependent) encoded Hamiltonian, leads to a new type of logical error, for which we have found the corresponding effective Kraus logical error operators, V q (t), that act on the instantaneous logical state. The Kraus logical error operator V q (t) is parametrized by the time t (q) det that the CQEC protocol takes to detect the error X q , see Eq. (27). The time t (q) det should be as small as possible in order to minimize the detrimental effect of logical errors due to spurious evolution on the performance of the double threshold CQEC protocol. For this protocol, t (q) det is estimated to be proportional to the averaging-time parameter τ (Eq. (29)), which, however, cannot be made arbitrarily small without degrading the ability of the CQEC protocol to correctly diagnose single bit-flip errors X 1 , X 2 or X 3 .
We have developed an effective open-system model for the logical qubit state [see Eq. (37)] that accounts for the two types of logical errors that are relevant for, e.g., quantum simulation and quantum annealing applications: logical errors due to spurious coherent evolution in the error subspaces and the usual logical X errors of quantum memory operation. This effective model is very useful because it allows us to readily estimate and optimize the performance of the double threshold CQEC protocol without performing computationally expensive numerical calculations on the full encoding qubit system (full numerics). We have shown that the effective model accurately describes the actual logical state during the continuous operation, see Fig. 8. In addition, we have discussed how to generalize the effective model for multiple logical qubits, where we have again found excellent agreement with the more cumbersome and computationally expensive full numerics approach, see Fig. 9.
Using the effective open-system model for one logical qubit, we have analyzed the performance of the double threshold CQEC protocol to preserve the coherent evolution of the logical qubit due to a quantum-annealing type Hamiltonian with a linear schedule. We have introduced the reduction factor R of the final logical state infidelity, see Eq. (52), as a measure of the performance of the CQEC protocol. The performance depends on the relative magnitudes of three problem-specific parameters; namely, the bit-flip error rate, γ, the strength of the logical Hamiltonian, Ω 0 , and the strength of the continuous measurements of the code stabilizer generators, characterized by the measurement strength parameter Γ m . For a given ratio Ω 0 /Γ m , the reduction factor R increases as we decrease the physical error rate γ until R reaches a plateau level R plateau that only depends on the relative magnitude of the logical Hamiltonian strength Ω 0 and the strength of continuous measurements Γ m , see Fig. 6. For instance, we obtain R plateau ≈ 37, 184 or 1002 for measurement strengths Γ m = 10Ω 0 , 30Ω 0 or 90Ω 0 , respectively, assuming that continuous measurements are performed by ideal detectors (η = 1). These reduction factors become R plateau ≈ 15, 66 or 340, respectively, if the measurement efficiency is η = 0.5 (nonideal detectors).
It is possible to further increase the reduction factors R of the final logical state infidelity by using the modified error correction operations C̃ op (t) given in Eq. (63), instead of the conventional error correction operations C op = X 1 , X 2 or X 3 of the three-qubit bit flip code. In Eq. (63), t is the time at which the error X q is diagnosed, in which case C op = X q . The purpose of the second exponential factor in Eq. (63) is to lessen the effect of the spurious coherent evolution in the corresponding error subspace. Note that we cannot fully eliminate such spurious evolution, since the time to detect the error X q is actually random.
We estimate values for the error-detection time t (q) det in the noiseless approximation, given in Eq. (29). Numerical simulations indicate that using these modified error correction operations C̃ op (t) yields reduction factors R that are higher by approximately 20%.
The continuous time quantum error correction protocol presented in this work can be readily applied to any subspace stabilizer QEC code, such as the three-qubit repetition code studied here, and can also be extended to subsystem stabilizer codes [45]. An important direction for further work is to apply the CQEC protocol to other error models. Clearly, arbitrary single-qubit errors can be corrected using this approach with larger stabilizer codes. Of particular interest for quantum annealing is the correction of thermal errors. This can be achieved by implementing the present CQEC protocol in an adiabatic (co-moving) frame and combining this with error suppression techniques in which an energy penalty consisting of the negative of the bit flip code stabilizer operators is added to the time-dependent Hamiltonian [52]. More generally, one would like to develop CQEC protocols for architecture-specific errors, such as biased noise. In the future, developing error correction diagnostics for physical errors encountered in realistic devices may be assisted by the use of machine learning techniques [53] or filters for non-Markovian noise [54]. In addition, experimental implementations frequently see drift of key parameters such as the measurement rate Γ m and efficiency η, as well as slow temporal variations of the offset of the measurement signals. Exploring the use of machine learning techniques to track these parameters and adjust the CQEC protocol accordingly during an experiment would be a useful direction for further work.
The favorable performance of the CQEC protocol seen for the quantum annealing application presented here, in particular the lack of any significant decrease in performance going from one to two logical qubits, indicates the potential viability of modular approaches to quantum error correction for quantum simulation and for quantum annealing in particular. For quantum computation and simulation on near-term quantum machines, it is advantageous to use encodings that generate only low-weight logical operators, while also requiring only low-weight measurement operators. Since the weight of the logical operators of stabilizer codes, whether subspace or subsystem, always grows with the number of encoding qubits, small codes are therefore highly attractive from this perspective. Indeed, quantum annealing Hamiltonians of the Ising spin glass form, i.e., containing only terms of the form σ (i) z and σ (i) z σ (j) z , that are encoded with the three-qubit stabilizer code result in logical operator terms of only weight two and three. The three-qubit code thus presents an attractive modular option for implementing error correction of quantum annealing with large numbers of logical qubits.
Additional directions for further study based on this CQEC approach include the extension of the ideas presented here to fault-tolerant error correction, in addition to the use of machine learning methods to diagnose errors.
Methods
Numerical method to generate discretized readout signals and density matrix evolution
We describe here the numerical approach used to generate discretized realizations of the readout signals I 1 (t) and I 2 (t) [see Eq. (2)], the filtered readout signals I 1 (t) and I 2 (t) [see Eq. (17)], and the system density matrix ρ(t), with a timestep dt. ρ(t) evolves according to the combined action of Eq. (14) and the error correction operations C op [Eq. (16)] that are applied on the physical qubits whenever an error is diagnosed by the double threshold CQEC protocol.
We use the Bayesian update method of Ref. [49] to obtain the discretized readout signals Ī k (t + dt) that correspond to the averages of I k (t) during the time interval (t, t + dt) and hence to measurement of the stabilizer generators S k [see Eq. (1)]. Ī k (t + dt) is obtained from Eq. (64), where s k = ±1 is a binary random number that takes the value +1 with probability equal to ρ 000,000 (t) + ρ 001,001 (t) + ρ 110,110 (t) + ρ 111,111 (t) for k = 1 (i.e., S 1 = Z 1 Z 2 ) and with probability equal to ρ 000,000 (t) + ρ 011,011 (t) + ρ 100,100 (t) + ρ 111,111 (t) for k = 2 (i.e., S 2 = Z 2 Z 3 ), and ζ k is a Gaussian random number with zero mean and variance 1. We employed a timestep dt = 5 × 10 −3 Γ −1 m in all our numerical calculations. The quantum state of the system is then updated, according to the information Ī k (t + dt) obtained from this measurement of S k , as given by Eq. (65). Here p i (I) = exp[−(I − ⟨i|S k |i⟩) 2 /2D]/ √ N is the conditional probability density for the output signal I given that the system is in the state |i⟩, where |i⟩ indicates one of the three-qubit computational states, i.e., ⟨i|S k |i⟩ = ±1, D = τ m /dt, and N is a normalization constant. In addition, γ ij = Γ m (1 − η) (⟨i|S k |i⟩ − ⟨j|S k |j⟩) 2 /4. Note that for ideal measurements (η = 1), we have γ ij = 0. The denominator of Eq. (65) is the probability distribution of the continuous random variable I, defined by p(I) = Σ i=0,1,...,7 ρ ii (t) p i (I), where the sum is over all three-qubit computational states |i⟩. Equations (64)-(65) provide Bayesian updates for the discretized readout signals Ī k (t) as well as the corresponding conditional state ρ(t), which is conditioned on the recorded readout signal Ī k (t), at all times t = n dt, n = 0, 1, . . . , n op , where n op dt = t op .
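A sketch of one such measurement update step is given below. Since Eqs. (64)-(65) are not reproduced in this excerpt, the code uses the standard quantum Bayesian form consistent with the description above: the diagonal elements are reweighted by the likelihoods p i (Ī) and the off-diagonal elements acquire the extra decay factors e^(−γ ij dt); the specific signal model Ī = s k + √D ζ is likewise an assumed reading of Eq. (64).

```python
import numpy as np

def stab_eigs(k):
    # Eigenvalues <i|S_k|i> of S_1 = Z1*Z2 or S_2 = Z2*Z3 on the 8
    # three-qubit computational states |i> = |q1 q2 q3>, i = 0..7.
    vals = np.empty(8)
    for i in range(8):
        q1, q2, q3 = (i >> 2) & 1, (i >> 1) & 1, i & 1
        z1, z2, z3 = 1 - 2 * q1, 1 - 2 * q2, 1 - 2 * q3
        vals[i] = z1 * z2 if k == 1 else z2 * z3
    return vals

def bayesian_update(rho, k, dt, tau_m, Gamma_m, eta, rng):
    # One measurement step of stabilizer S_k in the standard quantum
    # Bayesian form (assumed here as the reading of Eqs. (64)-(65)).
    s = stab_eigs(k)
    D = tau_m / dt
    # Draw the binary outcome s_k = +1 with the population of the +1 eigenspace.
    p_plus = float(np.real(np.sum(np.diag(rho)[s > 0])))
    s_k = 1.0 if rng.random() < p_plus else -1.0
    I_bar = s_k + np.sqrt(D) * rng.standard_normal()   # assumed form of Eq. (64)
    # Likelihoods p_i(I_bar) and Bayesian update of the conditioned state.
    p_i = np.exp(-(I_bar - s) ** 2 / (2.0 * D))
    norm = float(np.real(np.sum(np.diag(rho) * p_i)))
    gamma = Gamma_m * (1.0 - eta) * (s[:, None] - s[None, :]) ** 2 / 4.0
    rho_new = rho * np.sqrt(np.outer(p_i, p_i)) / norm * np.exp(-gamma * dt)
    return rho_new, I_bar

# Example: one update of S_1 on the maximally mixed three-qubit state.
rng = np.random.default_rng(0)
rho0 = np.eye(8, dtype=complex) / 8
rho1, signal = bayesian_update(rho0, k=1, dt=5e-3, tau_m=1.0,
                               Gamma_m=1.0, eta=0.5, rng=rng)
print("Tr rho =", np.trace(rho1).real, " I_bar =", signal)
```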
The discretized filtered readout signals I k (t) are then readily obtained from the discretized readout signals Ī k (t) using the discretized form of Eq. (17). The system quantum state ρ(t) also evolves due to the Hamiltonian H(t) and to decoherence (bit-flip errors in this work). The state update due solely to Hamiltonian-induced evolution during the timestep dt is obtained as in Eq. (67), where the unitary evolution operator U(t, t + dt) is approximated using the first-order Magnus expansion [55]. The state update due only to decoherence is evaluated as in Eq. (69), where ρ decoh (t) is given in Eq. (13).
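The filtering and Hamiltonian steps can be sketched as follows. The exponential moving average with time constant τ is an assumed discretization of the filtering in Eq. (17), and the trapezoidal form of the first-order Magnus step is one common choice; neither is reproduced verbatim from the paper.

```python
import numpy as np
from scipy.linalg import expm

def filter_step(I_filt, I_bar, dt, tau):
    # Exponential moving average with time constant tau: an assumed
    # discretization of the filtering in Eq. (17).
    return I_filt + (dt / tau) * (I_bar - I_filt)

def hamiltonian_step(rho, H_t, H_tdt, dt):
    # First-order Magnus step (hbar = 1): U = exp(-i * integral of H),
    # with the integral approximated by the trapezoidal rule over dt.
    U = expm(-1j * dt * 0.5 * (H_t + H_tdt))
    return U @ rho @ U.conj().T
```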
To account for all three processes of measurement, coherent evolution and decoherence at each timestep, we apply the quantum Bayesian update twice (once for measurement of S 1 = Z 1 Z 2 and once for measurement of S 2 = Z 2 Z 3 ), followed by state update due to Hamiltonian-induced evolution [Eq. (67)], and then state update due to decoherence [Eq. (69)]. After this we use the double threshold CQEC protocol to determine whether or not we need to apply an error correction operation C op to the system state at the moment t + dt: ρ(t + dt) → C op ρ(t + dt)C op . For example, if I 1 (t + dt) < Θ 1 and I 2 (t + dt) > Θ 2 , then the diagnosed error syndrome is (Z 12 = −1, Z 23 = +1), the diagnosed error is X 1 and so we have to apply the error correction operation C op = X 1 . After error correction, we also reset the filtered readout signals: I k (t + dt) → +1 for k = 1, 2. If there is no error correction operation in this timestep, the filtered readout signals are not reset.
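Putting the thresholding logic into code, a minimal sketch of the diagnosis-and-correction step might look like the following; the exact trigger condition (e.g., whether a correction waits until both filtered signals have left the uncertainty region) is an implementation assumption.

```python
import numpy as np

def pauli_x(q):
    # Pauli X on physical qubit q (q = 1, 2, 3) in the 8-dimensional space.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    ops = [I2, I2, I2]
    ops[q - 1] = X
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def diagnose_and_correct(rho, I1_filt, I2_filt, theta1, theta2):
    # Double-threshold syndrome diagnosis on the filtered signals.
    # A signal below Theta_1 is read as -1, above Theta_2 as +1, and
    # anything in between is treated as undecided (no correction).
    syndrome_to_qubit = {(-1, +1): 1, (-1, -1): 2, (+1, -1): 3}
    z12 = -1 if I1_filt < theta1 else (+1 if I1_filt > theta2 else 0)
    z23 = -1 if I2_filt < theta1 else (+1 if I2_filt > theta2 else 0)
    q = syndrome_to_qubit.get((z12, z23))
    if q is not None:
        C = pauli_x(q)                 # apply C_op = X_q
        rho = C @ rho @ C
        I1_filt, I2_filt = +1.0, +1.0  # reset the filtered signals after correction
    return rho, I1_filt, I2_filt
```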
Probability of misdiagnosing bit-flip error X 2 . We derive here the result Eq. (41) for the probability p (X2) misdiag to misdiagnose the bit-flip error X 2 . In contrast to the conventional implementation of the bit-flip QEC, in the continuous operation misdiagnosis of single bit-flip errors occurs when relatively large fluctuations affect one or both filtered readout signals I k (t). It is however more likely that only one of the filtered readout signals exhibits a large fluctuation, so we consider this situation to obtain an estimate for the probability p (X2) misdiag . The bit-flip error X 2 is misidentified as X 1 if, at the moment when the filtered readout signal I 1 (t) exits the "syndrome uncertainty region" by crossing the lower error threshold Θ 1 (see Fig. 1), the filtered readout signal I 2 (t) is above the upper error threshold Θ 2 due to an unusually large positive fluctuation of size larger than Θ 2 − Θ 1 . The probability that this situation occurs is given by the probability that ∆I(t) ≡ I 2 (t) − I 1 (t) ≥ Θ 2 − Θ 1 . From Eq. (17), we obtain the expression for ∆I(t) given in Eq. (70), where the noises ξ 1 (t) and ξ 2 (t) are the uncorrelated noises of the bare readout signals I k (t) (see Eqs. (2)-(3)). Note that Eq. (70) is valid both before and after the occurrence of the bit-flip error X 2 because the "signal parts" of the readout signals I 1 (t) and I 2 (t) cancel each other in ∆I(t). Specifically, before (after) occurrence of the error X 2 , the signal parts of I 1 (t) and I 2 (t) are both equal to +1 (−1). This implies that the probability that ∆I(t) ≥ Θ 2 − Θ 1 can be obtained from the stationary probability distribution, p st (∆I), of ∆I(t). From Eq. (70), we obtain p st (∆I) = [τ /(2πτ m )] 1/2 exp[−(∆I) 2 τ /(2τ m )].
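As a quick sanity check of this stationary distribution, the tail probability P(∆I ≥ Θ 2 − Θ 1 ) implied by p st (∆I) can be evaluated both in closed form and by Monte-Carlo sampling; the parameter values below are purely illustrative.

```python
import numpy as np
from scipy.special import erfc

tau, tau_m = 10.0, 1.0          # illustrative averaging time and measurement time
theta1, theta2 = -0.5, 0.8      # illustrative thresholds
gap = theta2 - theta1

# Tail probability of the stationary Gaussian p_st(dI) with variance tau_m/tau
sigma = np.sqrt(tau_m / tau)
p_exact = 0.5 * erfc(gap / (np.sqrt(2) * sigma))

# Monte-Carlo cross-check
rng = np.random.default_rng(1)
samples = rng.normal(0.0, sigma, size=2_000_000)
p_mc = np.mean(samples >= gap)

print(f"P(dI >= Theta2-Theta1): exact={p_exact:.3e}, Monte-Carlo={p_mc:.3e}")
```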
The probability that ∆I(t) is larger than Θ 2 − Θ 1 is then given by Eq. (72), where erf(·) is the error function and the approximation applies in the limit of large averaging-time parameters τ . The result (72) is our estimate for the probability that the bit-flip error X 2 is misdiagnosed as the error X 1 . The same result is also obtained for the probability that the error X 2 is misdiagnosed as the error X 3 . Therefore, the probability that the error X 2 is misdiagnosed is given by Eq. (73). The numerical coefficient c that follows from the above analysis is (2/π) 1/2 ≈ 0.7979. By fitting our numerical results to Eq. (73), we obtain that the coefficient c is larger; specifically, fitting to the data in Fig. 3 yields c ≈ 1.607. Equation (73) with c ≈ 1.607 has been successfully tested against numerical results for various values of the error threshold parameters Θ 1 and Θ 2 , in addition to the values indicated in Fig. 3. | 2020-03-26T01:00:59.555Z | 2020-03-25T00:00:00.000 | {
"year": 2021,
"sha1": "7291cbb72a16daa13d09208b0dab514173d132f5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.11248",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8c58d869b1a90571b311308be469bb87ea6060c9",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
196573377 | pes2o/s2orc | v3-fos-license | Effect of combined antioxidant treatment on oxidative stress, muscle damage and sport performance in female basketball players
SUMMARY Introduction/Objective We determined the impact of antioxidant supplementation by GE132® on sports performance, oxidative stress markers, and muscle enzyme activities in professional female basketball players. Methods Repetitive strength, explosive power, anaerobic endurance, and agility performance were measured before/after the 45-day supplementation period. The FORT (Free Oxygen Radicals Test) and FORD (Free Oxygen Radical Defense) analyses were assessed before/after a basketball-specific exercise bout, at the beginning/end of the observational period. The grade of muscle damage was evaluated by aspartate aminotransferase (AST), creatine kinase (CK) and lactate dehydrogenase (LDH). Results After the supplementation period, no significant difference was recorded regarding the basic motor skills tests. The basketball-specific exercise bout induced a significant increase in FORT (p < 0.05) only at the beginning of the supplementation period. Both FORT and FORD significantly decreased over the observational period (p < 0.001, p < 0.01, respectively). CK and LDH were markedly lower at the end of the observational period (p < 0.05), compared to the baseline. Conclusion Exogenous supplementation with protective nutraceuticals such as those found in GE132® could reduce acute/chronic oxidative stress and muscle damage, but had no effect on sport performance in basketball players.
INTRODUCTION
Moderate physical activity, associated with a balanced diet, provides numerous health benefits. However, exhaustive and/or intense training sessions are associated with increased production of reactive oxygen species (ROS) and might lead to oxidative stress (OS) in skeletal muscles, blood, and perhaps other tissues [1,2]. An increasing body of evidence implicates OS in the pathogenesis of numerous diseases, including diabetes, certain cancers, and cardiovascular disease. Importantly, exercise-induced OS might be associated with fatigue, muscle damage, and increased recovery time, which can all affect exercise performance [3][4][5][6].
The body contains an antioxidant defense system that depends on dietary intake of antioxidant vitamins and minerals and the endogenous production of antioxidant compounds such as antioxidant enzymes and numerous non-enzymatic antioxidants, involved in the quenching or removal of free radicals [7]. Physical training may enhance the antioxidant defense system to offset the barrage of ROS generated during exercise [8,9]. However, the body's natural antioxidant defense system might not be sufficient to counteract the increase in ROS production during high-intensity or prolonged intermittent aerobic or anaerobic exercise [10,11]. Additionally, a large number of athletes failed to ingest sufficient quantities of fruits and vegetables, which also suggest suboptimal intake of various antioxidants.
We investigated the effect of proprietary nutraceutical blend GE132® on OS, muscle damage and sport performance in female basketball players. This blend contains several components with various biological effects including antioxidant properties. Each capsule contains 100 mg of Ganoderma lucidum extract (20%), 130 mg of royal jelly (26%), 80 mg of resveratrol (16%), 30 mg of shark protein complex (6%), 80 mg of green tea extract (16%), and 80 mg of rosehip extract (16%). It was shown that these components have many therapeutic effects including anticarcinogenic and antihypertensive effects, immunomodulatory effect, rheumatism alleviation, and hepatic disease prevention, health benefits related to gastrointestinal disorders, metabolic diseases, and allergies [23][24][25][26].
The efficacy of this blend has never been tested before. Because of the great interest in using antioxidant nutrients as a preventive and therapeutic tool in clinical medicine and in physical activity, the aim of the present study was to determine the effects of this dietary supplement on OS, muscle damage and sport performance in female basketball players during competitive half season. It was hypothesized that athletes would demonstrate lower OS and muscle damage in response to exercise and training after supplementation period.
Subjects
Fourteen senior female basketball players, who play in the first league club Red Star, Belgrade, Serbia, participated in this study. Athletes gave written consent after explanation of the purpose, demands, and possible risks associated with the study. The protocol was in accordance with the Declaration of Helsinki for Research on Human Subjects and it was approved by the Ethical Committee of Sport Medicine Association of Serbia. All participants passed sports medical examination and were eligible for participation in competitive sport. None of the subjects reported any serious injury or disease six months prior to or during the study. All subjects were non-smokers and did not use oral contraceptives, anti-inflammatory drugs, or dietary supplements (i.e. antioxidants) one month before and during the study. Subjects were instructed to restrain themselves from making any drastic changes in the diet. All of them had regular menstrual cycles and none of them were in the menstrual phase at the time of blood sampling.
Study design and supplementation
The study was conducted during a competitive half season, over the 45-day period. During this period, athletes were engaged in a controlled training program, and participation in the study did not have any effect on previously determined training and competition schedule. Subjects completed two basketball specific exercise bouts, at the beginning and at the end of the observational period. Supplementation started after the first exercise bout and continued for 45 days. Before and after each exercise bout, capillary blood samples were collected for OS measurement. Venous blood samples were collected at the beginning and at the end of observational period for muscle enzyme analysis. In addition, all players performed basic motor skills tests at the beginning and at the end of the study, in order to evaluate strength, endurance, and agility as the most commonly used motor skills in basketball.
All subjects received antioxidant complex supplement, GE132®, during 45 days. Athletes were told to comply with supplementation protocol and to take two capsules daily, one before lunch and the other before dinner. Capsules were counted upon return of the capsule bottles to assess compliance.
Procedures and measurements
Baseline measurements: Prior to enrolling in the study, all subjects completed a body composition assessment, standard blood chemistry screening, medical history, and physical activity questionnaire. Anthropometric and body composition characteristics were determined by using Seca height measuring instrument (Seca GmbH, Hamburg, Germany) and Tanita scale BC-418MA (Tanita Corp., Tokyo, Japan).
Basketball-specific exercise bout: Each exercise bout consisted of a general warm-up and stretching (approx. 10 min), technical-tactical training (approx. 30 min), heavy training, including training of counterattacks and simulated full- or half-court basketball games (approx. 40 min), and finally a cool-down phase (approx. 10 min). Each subject served as their own control to eliminate any biological variability in the response to antioxidant supplementation. The exercises were carried out under the same conditions, in the same place and at the same time of day to avoid circadian variations.
Basic motor skills tests
Strength: In order to evaluate repetitive strength, players performed push-ups and sit-ups to failure. Push-ups were done by placing hands just wider than shoulders. Subjects were told to keep their elbows fairly close to body and point them back and not to flare them out to the sides. They lowered until their chests were just above the floor, paused for a split second, and then pressed themselves back up. In order to do sit-ups, subjects were told to raise the torso from a supine to a sitting position and then lie back down again without moving the legs. Knees were bent at an angle of 90°and arms were held crossed behind the neck during the test. Only correct repetitions were taken into account.
Explosive strength was assessed by Globus Ergo Tester Platform. Squat jump (SJ), countermovement jump (CMJ), vertical jump (VJ), left leg (LL) and right leg (RL) jumps values were obtained. The subjects stood on the contact platform connected to a digital timer that recorded the flight time and height of all jumps. The timer was triggered by the release of the player's feet from the platform, and stopped at the moment of touchdown. SJ was performed from a starting position of 90° knee angle without allowing any counter movement. The subjects were told to jump as high as they can without performing a countermovement. The hands were held on the hips during the jump, thus avoiding any arm swing. During the CMJ, subjects were in the position with knees slightly bent and moved into a semi-squat position before jumping. Subject's hands also remained on the hips throughout CMJ. One leg jumps (RL and LL) were performed in the same way as the CMJ. The only difference was that subjects were jumping from and landing on the same leg. Considering the VJ, the similar technique was used like in the CMJ, but subjects were able to perform the arm swing during the jump. Each player performed three jumps and the highest values achieved were recorded.
Endurance: Anaerobic endurance, as an important aspect of basketball, was evaluated by 300-yard shuttle test. Marker cones and lines were placed 25 yards apart to indicate the sprint distance. Stopwatch was used to record results of the test. Athletes were told to run as fast as they can to the opposite 25-yard line, touch it with their foot, turn and run back to the start. This was repeated six times without stopping (covering 300 yards total). After a 5-minute rest, the test was repeated. The average values of the two 300-yard shuttles were recorded.
Agility: Agility performance of basketball players was assessed by agility t-test. Four cones were set in the court (5 yards = 4.57 m, 10 yards = 9.14 m), and the subjects started at cone A. When the times sounded off, the subjects sprinted to cone B and touched the base of the cone with their right hand. Then, they turned left and shuffled sideways to cone C, also touching its base, this time with their left hand. The subjects were then shuffling sideways to the right to cone D, touching the base with the right hand. At last they shuffled back to cone B, touching it with the left hand, and ran backwards to cone A. The stopwatch was stopped as they passed the cone A. The test was performed three times and average value of all three attempts was taken into account.
Oxidative stress and biochemical measurements: In order to evaluate OS status, approximately 15 minutes before and 15 minutes after each basketball specific exercise bout, capillary blood samples were collected for FORT (free oxygen radicals test) and FORD (free oxygen radical defense) measurements. The free radical analysis system FORM PLUS 3000 (Callegari S.P.A., Parma, Italy), incorporating a spectrophotometric device reader, was used to measure these parameters. Test kits used with this instrument are highly reliable, rapid, and user-friendly for the global evaluation of the oxidative status (radical-induced damage index and the total antioxidant capacity) in the body from capillary blood.
FORT assay provides an indirect measurement of hydroperoxide, which are intermediate oxidative products of lipids, amino acids and peptides and therefore useful measure of OS. It is a colorimetric test based on the ability of transition metals, such as iron, to catalyze the breakdown of hydroperoxide into derivative radicals. These derivative radicals are then preferentially trapped by a suitably buffered chromogen: 4-Amino-N-ethyl-N-isopropylaniline hydrochloride and develop, in a linear kinetic based reaction at 37°C, a colored fairly long-lived radical cation spectrophotometrically detectable at 505 nm. The intensity of the color correlates directly with the quantity of radical compounds, which is related to the oxidative status of the sample [27].
FORD test provides an estimation of the overall antioxidant capacity of blood plasma. This test is based on the ability of antioxidants present in plasma to reduce a preformed radical cation. A stable colored cation (photometrically detectable at 505 nm) is formed in the presence of an acidic buffer (pH = 5.2) and an oxidant (FeCl 3 ). Antioxidant compounds present in the analyzed sample, reduce the radical cation of the chromogen, quenching the color and producing a discoloration of the solution, which is proportional to their amount in the sample [27].
To examine the extent of muscle damage, venous blood samples were collected from the antecubital vein of athletes in serum separator tube using vacutainer system (Greiner Bio-One, Kremsmünster, Austria). The serum separator tubes were placed on ice and left to stand for 30 minutes to facilitate clotting before being centrifuged at 3500 g for 15 minutes to obtain serum. Creatine kinase (CK), lactate dehydrogenase (LDH) and aspartate aminotransferase (AST) were determined in a clinical laboratory using current bioassays based on methods by Johnson et al. [28]. LDH activity determination is based on the conversion of pyruvate to L-lactate by monitoring the nicotinamide adenine dinucleotide (NADH) oxidation. AST is assayed in a coupled reaction with malate dehydrogenase in the presence of NADH. In the determination of CK activity, the enzyme reacts with creatine phosphate and adenosine diphosphate to form adenosine triphosphate, which is coupled to the hexokinase/guanosine diphosphate reaction generating NADPH.
Statistical analysis
Statistical analyses were performed with the software IBM SPSS Statistics version 20.0 (IBM Corp., Armonk, NY). All data were assessed for normality (one-sample Kolmogorov-Smirnov test). FORT and FORD were analyzed using two-way analysis of variance (ANOVA) with repeated measures. Significant changes in muscle enzyme activities at rest, as well as values of the basic motor skills tests obtained at the beginning and at the end of the study, were analyzed using paired-sample t-tests. Data were expressed as mean ± SD. A p-value < 0.05 was considered statistically significant.
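For readers without SPSS, the same normality-then-paired-comparison workflow can be reproduced with open-source tools; the sketch below uses SciPy with hypothetical creatine kinase values, so the numbers carry no relation to the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical paired measurements for 14 players (pre/post supplementation)
ck_pre = rng.normal(250, 60, 14)          # creatine kinase, U/L
ck_post = ck_pre - rng.normal(40, 20, 14)

# Normality screen on the paired differences, then a paired comparison
# (mirrors the Kolmogorov-Smirnov + paired t-test workflow described above).
diff = ck_post - ck_pre
print("normality p =", stats.kstest(stats.zscore(diff), "norm").pvalue)
print("paired t-test p =", stats.ttest_rel(ck_pre, ck_post).pvalue)
```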
RESULTS
Descriptive characteristics of the basketball players are shown in Table 1. All subjects consumed the appropriate amount of product throughout the study period. None of the subjects reported adverse effects related to the dietary supplement.
Statistically significant difference was not recorded regarding the results of basic motor skills tests after 45-day supplementation period. The obtained results are shown in Table 2.
The basketball-specific exercise bout induced a significant increase in FORT at the beginning of the observational period (p < 0.05) (Figure 1). However, these changes were not recorded after 45 days of supplementation. In addition, FORT significantly decreased after the 45-day supplementation period (p < 0.001). Repeated-measures ANOVA revealed a significant decrease in FORD over the observational period (p < 0.01).
We established the overall OS status of the athletes based on both the FORT and FORD results, according to the manufacturer's directions. Five major profiles have been defined [27]: normal status (NS), latent OS, compensated OS, at risk of OS, and OS in progress. The number of athletes with OS in progress was reduced from 71% (10/14 athletes) to 0% (0/14 athletes), and the number of athletes with NS increased from 14% (2/14 athletes) to 86% (12/14 athletes) as a result of antioxidant supplementation.
The CK and LDH levels at rest, as indicators of muscle damage, significantly decreased after 45 days of supplementation (p < 0.05); while no changes were detected, regarding the AST levels ( Table 4).
DISCUSSION
In the present study, female basketball players were supplemented with complex antioxidant supplement GE132® during 45 days. The study was performed just before the start of regular basketball season and after completion of basic conditioning training period. This period was chosen since preseason training is highly demanding for athletes because they are engaged in both frequent and high intensity workouts with little or no time to recover. This training program allows neuromuscular and endocrine systems to adapt after the loads placed to them and potential redox status adaptations occur [7,9,29,30].
The major findings of this study indicate that: 1. single basketball training session can increase OS in trained females; 2. the antioxidant combination treatment with GE132® used in this study can significantly attenuate the rise of blood OS markers and muscle damage after basketball exercise and training; 3. GE132® supplementation does not provide benefit for enhancing motor skills of female basketball players. Based on FORT assay measurement, we found that OS was significantly increased in response to basketball specific exercise bout before supplementation. Basketball is one of the mixed sports that include aerobic phases (intermittent running at different intensity) and anaerobic phases (jumps, sprints). Therefore, the increased free radical generation in basketball can occur via several pathways: mitochondrial respiration, oxidase enzymatic activity (NADPH oxidase, xanthine oxidase), via phagocytic respiratory burst, a loss of calcium homeostasis and/or the destruction of iron containing proteins [2,31]. The increase in ROS production resulting from any of the above sources, could lead to oxidative changes of different biomolecules, and increased levels of OS. The antioxidant supplementation attenuated this increase after exercise bout at the end of study period, as evidenced by the non-significant changes in FORT levels. Additionally, the overall decreased FORT levels indicate less oxidative damage after 45 days of supplementation. The attenuated OS response is consistent with other studies using antioxidant supplementation [12,15,18,[32][33][34][35][36]. On the other hand, similar changes might occur as a result of adaptive response to chronic exercise [7,9]. However, since present study was conducted after the conditioning preseason training, which allowed redox status adaptations, the reduced OS observed in female basketball players might be the result of antioxidant supplementation alone.
The antioxidant capacity of plasma, measured by the FORD test, depends on individual and synergic effects of different molecules, such as proteins, glutathione, vitamin E, ascorbate, carotenoids, and phenolic compounds. We detected no changes in FORD in response to exercise either at the beginning or at the end of the observational period. Mobilization of tissue antioxidant stores into the plasma is an accepted phenomenon that would help maintain antioxidant status in plasma at a certain level and protect the body against ROS [37]. In addition, soluble plasma antioxidants work synergistically to defend against oxidant production, meaning that when one antioxidant nutrient is lacking at a particular period in time, another could substitute for it or it may be regenerated by another that is in abundance [38]. These rapid, dynamic responses in order to maintain redox homeostasis could be the reason for the non-significant changes observed in FORD after the exercise. However, plasma levels of antioxidants, measured by FORD, decreased over the entire observational period in response to supplementation. This may not increase athletes' susceptibility to OS, since supplementation decreased oxidative modifications of various biomolecules, as indicated by the FORT test. In addition, the OS status of the female basketball players was improved after supplementation, judging by the increased number of athletes with NS and the decreased number of athletes with OS in progress. Therefore, this antioxidant supplement may provide protection against the negative health consequences of free radicals produced during training.
Although the mechanisms behind exercise-induced muscle damage are not precisely known, it is believed that along with initial mechanically induced disruption, secondary damage is caused by the free radical production and subsequent OS [39]. Some markers, such as AST, CK, and LDH, have been used as a way to indicate the grade of muscle cell damage, especially after playing a sport, since microfiber breakdown releases cell content [5,40]. The supplementation with antioxidants significantly reduced plasma muscle enzyme activities (CK and LDH), suggesting the involvement of oxidant mechanisms on tissue injury induced by the exercise. This finding can be explained by protective effect of antioxidants against lipid peroxidation, resulting in less muscle membrane damage. Our results are in accordance with several studies, which reported beneficial effects of antioxidants in terms of minimizing the rise in muscle enzyme activity in response to exercise [16-19, 41, 42].
There has been a general inconsistency of outcomes when investigating the role of antioxidant supplementation in exercise performance, with the majority of the studies reporting no benefits. In accordance, in the present study no statistically significant difference was observed regarding repetitive or explosive strength, endurance, or agility performance after supplementation period in comparison to baseline.
The limitations of the study include the small number of subjects and the short duration of supplementation. However, particular strength of this study is the fact it was conducted during a regular competitive half season, reflecting habitual conditions of nutrition and training program. In addition, this is the first study examining the effects of nutraceutical blend GE132®.
CONCLUSION
Previous studies showed that professional athletes are exposed to increased OS over the periods of intensive training. Exercise-induced OS might be associated with fatigue, muscle damage, and increased recovery time that can all affect exercise performance. For that reason, reliable and quick tests for OS status measurements, such as FORD and FORT assays, might be very useful for training and supplementation planning. Exogenous supplementation with protective nutraceuticals such as those found in GE132®, could reduce acute and chronic OS during high intensity efforts, and provide beneficial effect on muscle function recovery. The results of the present study suggest that GE132® supplementation does not enhance performance of female basketball players, but rather provide protection against detrimental health consequences of ROS produced during training. | 2019-07-15T22:29:27.479Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "9a7d20db9e25fec9cc8f35582287ab3bd4b58c73",
"oa_license": "CCBYNC",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0370-81791900063B",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "52f2923f83edd7546a16db27be46226616bb4049",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237896318 | pes2o/s2orc | v3-fos-license | Artículo Original / Original Article Increased postoperative fasting time aggravates the nutritional status in patients with gastrointestinal tract neoplasia
Surgical patients with gastrointestinal cancer often suffer from malnutrition. This study aimed to evaluate the influence of fasting time on the nutritional status of patients hospitalized with preoperative and postoperative gastrointestinal tract neoplasms. Observational, longitudinal, and prospective study conducted in the surgical unit at a public-school hospital. The patients were divided into groups: upper (UGIT) and lower (LGIT) gastrointestinal tract. Follow-up started within 72 h of hospitalization with reassessment 72 h after surgery. Data collected: sex, age, type and duration of surgery, preoperative (compared with 8 h) and postoperative (compared with 24 h) fasting time, food acceptance, Subjective Global Assessment, anthropometry, and laboratory tests. Analyses: Student t, Wilcoxon, and chi-square tests. Fifty-one patients were followed up, 29 (57%) UGIT and 22 (43%) LGIT. The preoperative fasting time was 8.2±2.8 h in UGIT and 8.1±2.2 h in LGIT groups, respectively; however, postoperative fasting times in UGIT (60.4±40.7 h) and LGIT groups (57.6±38.2 h) were longer than 24 h (P<0.001). Although eutrophic in the preoperative period, in the postoperative most patients in the UGIT and LGIT groups presented, respectively, malnutrition (71%; 59%; P<0.001), severe weight loss (79%; 80%), a significant correlation between triceps skinfold and postoperative fasting time (r= -0.306; P= 0.03), and hemoglobin and albumin values (r= 0.633; P<0.001), additionally low dietary acceptance, especially in the UGIT group. Prolonging postoperative fasting time worsened the nutritional status of surgical patients, especially in the UGIT group.
INTRODUCTION
The gastrointestinal (GI) tract is comprised of the upper (mouth, pharynx, esophagus, and stomach) and lower (small and large intestines, rectum, and anus) GI tracts¹. GI tract neoplasms are a major cause of mortality worldwide. In Brazil, with the exception of non-melanoma skin cancer, in males, bowel (8.1%) and stomach (6.3%) cancers are, respectively, the third and fourth most prevalent, falling behind prostate (31.7%) and lung (8.7%) cancers. In females, bowel cancer (9.4%) is second only in prevalence to breast cancer (29.5%)².
Patients with GI cancer often suffer from malnutrition and cachexia caused by inflammatory processes due to malignancy and therapeutic intervention³. When submitted to resection and anastomosis surgery, worsening postoperative malnutrition 4,5 may be associated with increased infections, longer hospital stays, morbidity, and mortality 6,7 .
In addition, during the perioperative period, prolonged fasting may be associated with a higher risk of malnutrition. Established by Mendelson 8 , a 6-8 hour preoperative fast was proposed to prevent pulmonary complications associated with vomiting and gastric aspiration during anesthetic induction. Throughout the 1950s, the practice was extended to elective surgery 8,9,10 . However, there are no reports on the exact time of postoperative refeeding and previously depended on the appearance of air-fluid noises and gas elimination, which may end up prolonging fasting by two days 11 . This increase can cause complications in bodily functions and metabolic conditioning, increasing hunger, thirst, and nausea, and may trigger biochemical reactions, such as gluconeogenesis, lipolysis, and proteolysis, contributing to increased blood glucose and longer hospital stays 12,13,14 .
Since the 1990s, the prolongation of pre-and postoperative fasting has been disputed. Following evidence-based medicine, fast-track or multimodal protocols such as the Enhanced Recovery After Surgery (ERAS) and Aceleração da Recuperação Total Pós-operatória (ACERTO) programs have combined several perioperative interventions to hasten postoperative recovery 15,16 . They have shown that abbreviation of the preoperative fast with clear fluid intake within 2 h before the anesthetic procedure and postoperative 24-h refeeding in the presence of hemodynamic stability do not pose risks to the patient 17 . Highly respected guidelines, such as the American Society of Anesthesiologists 18 , also recommend these practices, as there are no scientific reasons for prolonged fasting.
Due to the high prevalence of malnutrition in surgical and neoplastic patients and the consequent worsening of their nutritional status through practices such as prolonged preoperative and postoperative fasting, as well as the lack of updated protocols for postoperative fasting, the objective of this study was to evaluate the influence of fasting time on the nutritional status of patients hospitalized with preoperative and postoperative GI tract neoplasms.
Study design and ethical considerations
This was an observational, longitudinal, prospective study conducted among patients admitted to the surgical unit of a public teaching hospital in central Rio Grande do Sul, Brazil from August 2018 to March 2019. This study included female and male patients, aged ≥18 years, admitted to surgical units for elective surgery of the upper and lower GI tract for neoplasms, such as coloproctology, digestive, cancer and general surgeries; patients who received oral, enteral, or parenteral diet; and lucid patients. Regarding the exclusion criteria, patients at the head and neck, cardiovascular, traumatology, thoracic, and urology clinics; patients without neoplasms; with unmarked surgeries; discharged or death before reassessment; and patients who underwent chemotherapy and radiotherapy treatment less than 30 days before surgery were excluded.
Experimental design
Data were collected through interviews with patients or caregivers and electronic medical record review. Follow-up started within 72 h after hospital admission, and re-evaluation was performed 72 h after the surgical procedure. The patients were divided into upper and lower GI tract groups (UGIT and LGIT, respectively).
The following data were collected: sex (male and female), age (adult and elderly (≥ 60 y) 19 , surgical duration refers to the length of this procedure (II -from 2 to 4 h, III -from 4 to 6 h), and type of surgery performed. Preoperative fasting time was counted from midnight before surgery to the beginning of the anesthetic procedure according to the routine of the service and compared with the traditional fasting time of 8 h 9 . Postoperative fasting time was calculated from the end of surgery to the first meal offered to the patient (liquid, pasty, or solid) or initiation of Enteral Nutrition Therapy (ENT), and compared with the 24 h period established by modern guidelines 20 . Serum infusion and oral water intake were not considered.
Nutritional risk screening was performed by Subjective Global Assessment (SGA), which is considered the gold standard by the ACERTO Project, validated for evaluation of surgical patients and subsequently applied and adapted to other clinics 16 . Patients were evaluated 72 h after postoperative and classified as well-nourished and malnourished 21,22 .
Anthropometric evaluation including triceps skinfold thickness (TST, mm) and subscapular skinfold thickness (SST, mm) was performed with a measuring tape and scientific plicometer (Cescorf © ). Perioperative body mass index (BMI) was classified as adult 22 and elderly 23 . BMI was not calculated postoperatively because most patients had intracellular edema. A percentage of involuntary weight loss (%WL) >10% of usual weight was classified as severe 24 . Regarding nutritional status, the adductor pollicis muscle (APM), which is the measure of the muscle between the hand index finger and thumb, was performed to check for skeletal muscle depletion. The percentage of APM (%APM) was classified as no depletion (100%), mild depletion (90%-99%), moderate depletion (60%-90%), and severe depletion (<60%) 25 .
Oral hospital acceptance was assessed by means of a 24-h food recall, twice preoperatively and twice postoperatively. It was not possible to perform three recalls, as most patients were hospitalized 2 d before surgery. Home recall was not used because there is no comparative acceptance pattern. Meals were divided into snacks (breakfast, morning snack, afternoon snack, and supper), lunch, and dinner. Acceptance was classified as good (when the patient consumed more than half), medium (half), or low (less than half), compared with what was offered by the hospital's nutrition service. For patients using ENT, volume adequacy was assessed 26 , and adequacy ≥80% of volume was considered satisfactory in both groups 27 . All patients on ENT received high-calorie and high-protein diets, as prescribed by the nutrition team. In addition, no patient received immunonutrition.
Statistical analysis
Collected data were tabulated and stored in Excel (Microsoft) and analyzed with Statistical Package for the Social Sciences (SPSS) version 25. Initially, normality was analyzed by using the Shapiro-Wilk test. Then, one-sample and paired t-tests were applied to continuous variables; Wilcoxon and chi-square tests were used for categorical variables. Pearson and Spearman correlations were applied to quantitative variables and nonparametric correlations, respectively; results were considered statistically significant at p<0.05.
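As an illustration of the correlation analyses described here, the following SciPy sketch screens two variables for normality and then chooses between Pearson and Spearman coefficients; the data are simulated stand-ins (loosely echoing the reported negative association between triceps skinfold and postoperative fasting time) and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical data for 51 patients: postoperative fasting time (h) and
# triceps skinfold thickness (mm); a negative association is simulated.
fasting_h = rng.normal(59, 39, 51).clip(12, 180)
tst_mm = 14 - 0.02 * fasting_h + rng.normal(0, 2.5, 51)

# Normality screen, then the appropriate correlation coefficient
if stats.shapiro(tst_mm).pvalue > 0.05 and stats.shapiro(fasting_h).pvalue > 0.05:
    r, p = stats.pearsonr(fasting_h, tst_mm)
else:
    r, p = stats.spearmanr(fasting_h, tst_mm)
print(f"r = {r:.3f}, p = {p:.3f}")
```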
RESULTS
In total, 58 patients underwent elective GI tract surgeries. Nevertheless, seven patients were excluded in the postoperative period because two patients were discharged and two patients died before postoperative reassessment, two patients were in anasarca, and one patient was not submitted to surgery. Therefore, 51 patients were followed up preoperatively and postoperatively: 29 (57%) UGIT patients and 22 (43%) LGIT patients. Table 1 presents the clinical characteristics of the population. In the UGIT group, 55.2% (n= 16) were elderly and 75.9% (n= 22) were male; in the LGIT group, 72.7% (n= 16) were elderly and 54.5% (n= 12) were female. The average age in both groups was 62±12.8 years (20-82 years). Most patients were admitted for digestive surgery (62.1%) in the UGIT group and coloproctology (68.2%) in the LGIT group. The most frequently performed surgery and most prevalent neoplasia were gastrectomy on account of malignant stomach neoplasia in the UGIT group (41.4%) and colectomy on account of malignant colon cancer in the LGIT group (50.0%). Surgical duration refers to the time of the procedure; in the UGIT group, 79.3% (n= 23) were size III, and in the LGIT group, 50.0% (n= 11) were size III and 50.0% (n= 11) size II. The preoperative fasting time (8.2±2.8 h in the UGIT group and 8.1±2.2 h in the LGIT group) did not differ significantly from the traditional 8-h fast 9 . However, the postoperative fasting time in the UGIT group was 60.4±40.7 h and in the LGIT group it was 57.65±38.2 h, significantly higher (P<0.001) in both groups compared to the fasting time of up to 24 h, in relation to the modern guidelines of the ACERTO project 16 .
According to the SGA classification (Table 2), most patients in both groups had malnutrition in the preoperative and postoperative periods. In the UGIT group, there was an increase in malnourished patients in the postoperative period compared with preoperative period (P<0.001). In the LGIT group, there was a predominance of malnourished patients in the preoperative and postoperative periods (P<0.001).
Through preoperative BMI classification, 48% (n= 14) of the patients in the UGIT group and 45% (n= 10) of the patients in the LGIT were eutrophic. However, 45% (n= 13) of the patients in the UGIT group and 41% (n= 9) of the patients in the LGIT group had severe preoperative weight loss. Even though the anthropometric data (Table 3) were not statistically significant, there was a tendency for SST and APM to decrease postoperative period in the UGIT group, and for APM to decrease postoperative period in the LGIT group. In addition, a significant correlation was observed between TST and postoperative fasting time (r= -0.306; p= 0.03).
Food acceptance in the UGIT group was performed with 17 patients in the preoperative period and 16 in the postoperative period, since the remaining patients used ENT. In the LGIT group, acceptance was performed with 20 patients in the preoperative and postoperative periods. In the preoperative period of the UGIT group, most patients had good acceptance of all evaluated meals (snacks= 76%; LGIT group presented better postoperative acceptance than the UGIT group, since most patients in the LGIT group had good acceptance at all meals in the preoperative (snacks= 70%; lunch= 60%; dinner= 70%) and postoperative periods (snacks= 50%; lunch= 45%; dinner= 45%).
ENT was observed only in the UGIT group, in which four patients used it in the preoperative period and 11 patients in the postoperative period. In the preoperative period, 50% (n= 2) of the patients presented satisfactory adequacy (>80%) and 50% (n= 2) unsatisfactory adequacy (<80%). However, in the postoperative period 82% (n= 9) of the patients presented unsatisfactory adequacy, and just 18% (n= 2) presented satisfactory adequacy.
According to laboratory tests, most patients in the UGIT group had mild anemia in the preoperative period (44.5%; n= 12) and moderate anemia postoperative period (63.0%; n= 17). Comparing preoperative hemoglobin (11.9±1.9 g/ dL) with the postoperative hemoglobin (10.2±1.2 g/dL) in the UGIT group, a statistically significant decrease was observed (p= 0.001). In the LGIT group, most patients had moderate anemia in the preoperative and postoperative periods, although with no statistically significant difference (10.6±1.8 g/dl; 9.9±1.1 g/dl; p= 0.12). In addition, a significant correlation (r= 0.334; p= 0.02) between hemoglobin and APM was observed in the preoperative period.
Preoperative serum albumin values were obtained in 16 patients in the UGIT group and seven in the LGIT group. According to the classification, most patients presented some degree of protein depletion, predominantly mild depletion in 41% (n= 9) in the UGIT group and 57% (n= 4) in the LGIT group. There was also a significant correlation between hemoglobin and albumin values in the preoperative period (r= 0.633; p<0.001).
According to the TLC in the preoperative period, most patients did not present depletion in the UGIT (33.3%; n= 8) and LGIT groups (43.7%; n= 7). However, in the postoperative period, most patients presented severe depletion in the UGIT group (50.0%; n= 12) and moderate depletion in the LGIT group (43.7%; n= 7). In addition, regarding serum creatinine levels, 72% (n= 18) of patients in the UGIT group and 74% (n= 14) in the LGIT group presented values within the normal range in the preoperative period.
DISCUSSION
Among the malignant neoplasms, the stomach is the fifth most diagnosed, the colon and rectum are the third most diagnosed, and the esophagus is the eight most frequent 32,33 . These data corroborate our study, in which the most prevalent malignancies in the UGIT were gastric and esophageal, and those in the LGIT were colonic and rectal.
The postoperative fasting time in the UGIT and LGIT groups was longer than 24 h. Although traditional postoperative refeeding is offered after the return of peristalsis, in other words, by the appearance of air-fluid noises and gas elimination 11 , the patients' refeeding in the present study was considered a cause for concern, since the fasting time was longer than two days. According to Aguilar et al. 13 , most Brazilian hospitals prolong their fasting time by adopting these traditional fasting guidelines. These data are in accordance with our results, since the postoperative fasting was based on traditional guidelines, through the appearance of air-fluid noises and gas elimination, thereby increasing the fasting time, although modern guidelines currently exist that demonstrate the benefits of early refeeding in surgical patients 17 . Early oral or enteral intake in the postoperative period minimizes the risks that worsen the nutritional status of surgical patients, such as excessive weight and muscle mass loss 3,17 .
Even though patients were eutrophic in the preoperative period according to their BMI, most of them presented malnutrition according to the SGA classification, with severe weight loss, muscle depletion, anemia, and hypoalbuminemia, all of which worsened in the postoperative period. These findings are associated with higher surgical risk, as is prolonged fasting time 34 . In addition, BMI alone is not adequate to identify the nutritional status of these patients 35 ; it is therefore important to evaluate them through a combination of SGA, laboratory tests, and food acceptance. Hypoalbuminemia is a risk factor for impaired wound healing 36 , and anemia is associated with increased morbidity and mortality in surgical patients 37 . Additionally, weight loss and anorexia are frequently observed symptoms of malignant neoplasms 3,17 , as observed in our study, especially in the UGIT group; associated with prolonged postoperative fasting, they further aggravate the nutritional status of these patients. Another factor that may contribute to nutritional impairment is low postoperative food acceptance, observed in the present study in the UGIT group. Low food acceptance may be related to prolonged fasting, as this factor increases thirst, hunger, nausea, and vomiting and negatively impacts the response to surgical trauma 13,38,39 .
Scientific evidence 20,40 indicates that prolonged fasting worsens the nutritional status of surgical patients, especially when they are malnourished, increasing the length of hospital stay and contributing to morbidity and mortality. The same authors asserted that postoperative oral or enteral refeeding should begin early, within 24 h of surgery, even in cases of digestive anastomosis, provided that the patient is hemodynamically stable. In addition to being safe, early refeeding limits the deterioration of nutritional status, shortens the length of stay, favors the healing of intestinal anastomoses, and improves the well-being of patients receiving a diet 41,42 .
CONCLUSIONS
In conclusion, the results of this study indicate that prolonging postoperative fasting worsens the nutritional status of surgical patients, especially that of UGIT patients. Thus, monitoring by the multidisciplinary team and the incorporation of new multimodal protocols based on scientific evidence, such as ACERTO, can minimize perioperative complications and improve the nutritional status of patients undergoing GI tract surgery. | 2021-09-01T15:31:32.237Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "7fee5ce017242ac2b146c4f9b25aa8c254241ee3",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.cl/pdf/rchnut/v48n3/0717-7518-rchnut-48-03-0329.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "515156fd9da2aa77bb787dcbe817f1fb1907e81b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249625917 | pes2o/s2orc | v3-fos-license | Indian Legal Text Summarization: A Text Normalisation-based Approach
In the Indian court system, pending cases have long been a problem: more than 4 crore cases are outstanding. Manually summarising hundreds of documents is a time-consuming and tedious task for legal stakeholders. Many state-of-the-art models for text summarization have emerged as machine learning has progressed. However, domain-independent models do not perform well on legal texts, and fine-tuning them for the Indian legal system is problematic due to a lack of publicly available datasets. To improve the performance of domain-independent models, the authors have proposed a methodology for normalising legal texts in the Indian context. The authors experimented with two state-of-the-art domain-independent models for legal text summarization, namely BART and PEGASUS, evaluating them on extractive and abstractive summarization, respectively, to understand the effectiveness of the text normalisation approach. The summarised texts are evaluated by domain experts on multiple parameters and using ROUGE metrics. The results show that the proposed text normalisation approach is effective for legal texts with domain-independent models.
I. INTRODUCTION
Text summarization is the process of constructing a concise, cohesive, and fluent summary of a lengthy text document [1]; it provides a brief overview of the content. Statutes (established laws) and precedents (prior cases) are the two primary sources of law in countries that follow the Common Law System, like India [2]. Hence, there are hundreds of prior cases that lawyers must go through. Legal documents can be lengthy, and the abbreviations and terminology used in them differ from standard language. Manual drafting of case summaries is a time-consuming process. Automatic summarization of texts is possible because of advances in Artificial Intelligence (AI) and Machine Learning (ML). Thousands of hours of manual labour can be saved with the help of state-of-the-art machine learning models. Summaries are also useful for beginners and ordinary citizens who want to understand a judgement. More than 4.70 crore cases are pending in various courts in India [3], and automatic text summarization can help reduce this backlog significantly. Several legal text summarization techniques and tools have been reported in the past based on UK [4], Canadian [5] and Australian [6] court judgements. As each country has its own structure and abbreviations in its legal documents, those tools and techniques are not suitable for other countries. To train a model for domain-specific text summarization, a large amount of data is needed. As there is no publicly available dataset for summarization of Indian legal documents, we have proposed a different methodology to summarize them. Text summarization techniques are classified into two categories: abstractive summarization recognises the language of the text and adds novel words to the summary if necessary [7], [8], whereas in extractive summarization a summary is formed from a subset of sentences [9]. In this paper, Indian legal texts are first normalised as general text based on our proposed methodology, and two domain-independent models are then used for summarization: extractive summarization with BART [10] and abstractive summarization with PEGASUS [11]. Section II presents the related work behind the proposed methodology. Section III discusses the proposed methodology in detail. Section IV presents the evaluation of the work compared to the traditional one. Finally, Section V concludes the paper along with its future applications.
II. RELATED WORK
Legal case judgements are usually lengthy and complicated due to the use of many domain-specific abbreviations [2]. A vast amount of research on legal text summarization has been conducted in countries like the US, Canada, Australia, and the UK. Most of these works tried to train deep-learning models using a supervised or semi-supervised approach for legal text summarization. J. W. Yingjie and Ma in [12] selected, for each topic, the three sentences that best represent that topic; the proposed summarization method blends term description with sentence description for each topic using LSA (Latent Semantic Analysis). B. Samei et al. in [13] introduced a model for multi-document summarization using graph-based and information-theoretic concepts. A. Farzindar and G. Lapalme in [14] proposed legal document summarization based on the exploration of the documents' architecture and thematic structures. A. Joshi et al. in [15] describe a summary based on three sentence-selection metrics: "content relevance", "sentence novelty", and "sentence position relevance"; sentence content relevance is measured using a deep autoencoder network. S. Polsley et al. in [16] introduced a tool which takes advantage of standard summary methods based on word frequency and generates automated summaries of legal documents; the tool is evaluated using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and human scoring. Vijayasanthi et al. in [17] proposed a hybrid system for automatic text summarization of legal documents, involving key phrase matching and case-based techniques. M. Saravanan and B. Ravindran in [18] proposed a system for labelling sentences with their rhetorical roles and described the application of probabilistic models for extraction of the key sentences. N. Bansal et al. in [19] introduced a Fuzzy Analytical Hierarchical Process (FAHP) based feature weighting scheme for producing summaries of legal judgements, which they found more promising than other traditional approaches. State-of-the-art domain-independent models have not been tried in past research on legal text summarization. The objective is to find out how effective state-of-the-art domain-independent models are for domain-specific tasks such as legal case summarization. The findings will help in legal and other sectors where models cannot be properly trained or fine-tuned due to a lack of data. In the next section, the authors discuss the detailed methodology for text summarization using state-of-the-art domain-independent models.
III. METHODOLOGY
The public records of the Indian judiciary are disorganised and noisy [20], and there is no publicly available dataset for Indian legal document summarization. The authors therefore propose a methodology that does not require such a dataset; it consists of eight steps. In step 1 we collected Indian legal documents from sources such as SCI, IndianKanoon, Manupatra and ILDC [21]. In step 2, we extracted the texts from the documents using Optical Character Recognition (OCR); noise is then removed from the extracted text using basic clean-up techniques, e.g., whitespace removal and spelling correction. After that, in step 3, the additional information at the beginning of the document, before the actual judgement, is removed. The legal texts are then normalised in step 4. After normalisation, the text is divided into small fragments in step 5, and those fragments are given to the model in step 6. If the entire document is supplied at once, the model is unable to understand and extract key points from it; thus the inputs are provided to these models in small fragments, and the outputs are later merged into the actual summarised document. We have used two models. The first one is BART, used for extractive summarization. BART is described as a denoising autoencoder implemented as a sequence-to-sequence model with a bi-directional encoder [10]. BART can be used in many downstream applications. K. M. Hermann et al. in [22] proposed a new methodology to fine-tune BART for comprehension, translation and natural language generation with a very small amount of training data; the model used here for summarization of English texts is based on that paper. The second model is PEGASUS, a model pre-trained with extracted gap-sentences for abstractive summarization. Good abstractive summarization performance can be achieved across a broad domain with very little supervision by fine-tuning PEGASUS [11]. Next, in step 7, the model outputs for all the small fragments are merged, and in step 8 we finally obtain the summarised document. The model outputs are then evaluated by different legal experts on multiple parameters.
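A minimal sketch of steps 5-7 (fragmenting the normalised text, summarising each fragment, and merging the partial outputs) is given below. The checkpoint names, the fragment size and the length limits are illustrative assumptions; the paper does not specify the exact settings used.

```python
# Sketch of steps 5-7: fragment the normalised judgement, summarise each
# fragment, and merge the partial summaries. Checkpoints and fragment size
# are assumptions, not the authors' reported configuration.
from transformers import pipeline

def summarise_document(text, model_name="facebook/bart-large-cnn",
                        words_per_fragment=400):
    summariser = pipeline("summarization", model=model_name)
    words = text.split()
    fragments = [" ".join(words[i:i + words_per_fragment])
                 for i in range(0, len(words), words_per_fragment)]
    partial_summaries = [
        summariser(fragment, max_length=150, min_length=30,
                   truncation=True)[0]["summary_text"]
        for fragment in fragments
    ]
    return " ".join(partial_summaries)   # step 7: merge fragment summaries

# normalised_judgement = ...  # output of the normalisation step (step 4)
# print(summarise_document(normalised_judgement))
```

Swapping the checkpoint for a PEGASUS model (for example google/pegasus-xsum) would give the abstractive variant of the same pipeline.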
IV. EVALUATION
The traditional way of evaluating summaries is to use ROUGE scores or expert evaluation. We calculated ROUGE-1, ROUGE-2, ROUGE-3 and ROUGE-L scores by comparing the model summaries generated from raw texts and from normalised texts with the summaries provided by legal experts. Table II and Table III show the performance of summarizing the samples in terms of ROUGE-1, ROUGE-2, ROUGE-3 and ROUGE-L precision, recall and F-scores. Comparing these two tables, we can observe that the average scores on all three measures increase for the normalised texts. The comparison is done using BART outputs, as PEGASUS was evaluated poorly by the legal experts, as discussed below. Perfect scores for extractive summarization are both theoretically and computationally very difficult to achieve using ROUGE [23]. Thus we also had our results evaluated by experts. Random samples were taken from our data sources and processed as per our proposed methodology, and those outputs were sent to legal experts for evaluation. We asked them to rate the summaries on three parameters on a scale of 1 to 10: conciseness, accuracy, and detail preservation. The PEGASUS model was able to summarize the contents with more than 7-point conciseness and accuracy in only 20% of the samples, and in some samples it gives completely out-of-context summaries. In the case of BART, over 60% of the samples received more than a 7-point score on all three parameters from the legal experts. Fig. 2 depicts the average scores given by legal experts on the different parameters for our samples.
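A sketch of the ROUGE computation against an expert reference is shown below. The rouge_score package is one common choice; whether the authors used this package, and whether stemming was applied, is not stated, so both are assumptions.

```python
# Sketch: compare a model summary with an expert-written reference summary.
# The rouge_score package and the use of stemming are assumptions here.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rouge3", "rougeL"], use_stemmer=True
)

expert_summary = "..."   # gold summary provided by a legal expert
model_summary = "..."    # e.g. BART output on the normalised text

scores = scorer.score(expert_summary, model_summary)
for name, result in scores.items():
    print(f"{name}: P={result.precision:.3f} "
          f"R={result.recall:.3f} F={result.fmeasure:.3f}")
```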
Both normalised and raw legal texts were provided to the BART model, and the summary outputs were then shown to naive readers. They judged the summaries generated from the normalised texts to be more concise and easier to understand. Table IV compares the raw and normalised legal text summaries.
As shown in Fig. 3, summarization of the normalised text with BART reduced its length by 75% on average across our samples. This can significantly improve the efficacy of the legal system. Beginners and ordinary citizens can also take advantage of it to understand different aspects of a legal case without any specific domain knowledge.
V. CONCLUSION
In this paper, the authors have experimented with two state-of-the-art machine learning models for Indian legal text summarization. BART performed well in summarizing legal texts and decreased the length of the documents by up to 75%. PEGASUS was used for abstractive summarization, but it did not work well most of the time. Thus, the normalisation methodology is effective for extractive summarization but not nearly as useful for abstractive summarization. The ROUGE metrics and expert evaluation show that, even without domain-specific training, state-of-the-art machine learning models can be used in various domain-specific fields after normalising the raw texts. All the supplementary files are available at https://github.com/SATYAJIT1910/ILDS. | 2022-06-14T06:41:09.539Z | 2022-06-13T00:00:00.000 | {
"year": 2022,
"sha1": "e7d9733429ce6b976159295d39987e9097c44ad5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ede23b7037fa4ace94d5a4a7d8c20adf982147c8",
"s2fieldsofstudy": [
"Law",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
6677149 | pes2o/s2orc | v3-fos-license | Syntactic transformations for Swiss German dialects
While most dialectological research so far focuses on phonetic and lexical phenomena, we use recent fieldwork in the domain of dialect syntax to guide the development of multidialectal natural language processing tools. In particular, we develop a set of rules that transform Standard German sentence structures into syntactically valid Swiss German sentence structures. These rules are sensitive to the dialect area, so that the dialects of more than 300 towns are covered. We evaluate the transformation rules on a Standard German treebank and obtain accuracy figures of 85% and above for most rules. We analyze the most frequent errors and discuss the benefit of these transformations for various natural language processing tasks.
Introduction
For over a century, dialectological research has focused on phonetic, lexical and morphological phenomena. It is only recently, since the 1990s, that syntax has attracted the attention of dialectologists. As a result, syntactic data from field studies are now available for many dialect areas. This paper explores how dialect syntax fieldwork can guide the development of multidialectal natural language processing tools. Our goal is to transform Standard German sentence structures so that they become syntactically valid in Swiss German dialects. These transformations are accomplished by a set of hand-crafted rules, developed and evaluated on the basis of the dependency version of the Standard German TIGER treebank. Ultimately, the rule set can be used either as a tool for treebank transduction (i.e. deriving Swiss German treebanks from Standard German ones), or as the syntactic transfer module of a transfer-based machine translation system.
After the discussion of related work (Section 2), we present the major syntactic differences between Standard German and Swiss German dialects (Section 3). We then show how these differences can be covered by a set of transformation rules that apply to syntactically annotated Standard German text, such as found in treebanks (Section 4). In Section 5, we give some coverage figures and discuss the most common errors that result from these transformations. We conclude in Section 6.
Related work
One line of research in natural language processing deals with parsing methods for dialects. Chiang et al. (2006) argue that it is often easier to manually create resources that relate a dialect to a standard language than it is to manually create syntactically annotated resources for the dialect itself. They investigate three approaches for parsing the Levantine dialect of Arabic, one of which consists of transducing a Standard Arabic treebank into Levantine with the help of hand-crafted rules. We agree with this point of view: we devise transformation rules that relate Swiss German dialects to Standard German.
In the case of closely related languages (for which it is, in any case, difficult to establish strict linguistic criteria), different types of annotation projection have been proposed to facilitate the creation of treebanks. See Volk and Samuelsson (2004) for an overview of the problem. In a rather different approach, Vaillant (2008) presents a hand-crafted multi-dialect grammar that conceives of a dialect as some kind of "agreement feature". This allows identical rules to be shared across dialects and differentiated only where necessary. We follow a similar approach by linking the transformation rules to geographical data from recent dialectological fieldwork.
Another line of research is oriented towards machine translation models for closely related languages. It is common in this field that minor syntactic differences are dealt with explicitly. Corbí-Bellot et al. (2005) present a shallow-transfer system for the different Romance languages of Spain. Structural transfer rules account for gender change and word reorderings. Another system (Homola and Kuboň, 2005) covers several Slavonic languages of Eastern Europe and confirms the necessity of shallow parsing except for the most similar language pair (Czech-Slovak).
In contrast, statistical machine translation systems have been proposed to translate closely related languages on a letter-by-letter basis (Vilar et al., 2007;Tiedemann, 2009). However, the word reordering capabilities of a common phrase-based model are still required to obtain reasonable performances.
The main syntactic features of Swiss German dialects
A general description of the linguistic particularities of Swiss German dialects, including syntax, can be found, for example, in Lötscher (1983). Some syntactic case studies within the framework of Generative Grammar are presented in Penner (1995). Currently, a dialectological survey, under the name of SADS (Syntaktischer Atlas der deutschen Schweiz), aims at producing a syntactic atlas of German-speaking Switzerland (Bucheli and Glaser, 2002). Some preliminary results of this project are described in Klausmann (2006). There are two main types of syntactic differences between Swiss German dialects and Standard German. Some of the differences are representative of the mainly spoken use of Swiss German. They do not show much interdialectal variation, and they are also encountered in other spoken varieties of German. Other differences are dialectological in nature, in the sense that they are specific to some subgroups of Swiss German dialects and usually do not occur outside of the Alemannic dialect group. This second type of differences constitutes the main research object of the SADS project. In the following subsections, we will show some examples of both types of phenomena.
Features of spoken language
No preterite tense Swiss German dialects do not have synthetic preterite forms and use (analytic) perfect forms instead (1a). 4 Transforming a Standard German preterite form is not trivial: the correct auxiliary verb and participle forms have to be generated, and they have to be inserted at the correct place (in the right verb bracket).
Standard German pluperfect is handled in the same way: the inflected preterite auxiliary verb is transformed into an inflected present auxiliary verb and an auxiliary participle, while the participle of the main verb is retained (1b). The resulting construction is called double perfect.
(1) a. Wir gingen ins Kino → Wir sind ins Kino gegangen. 'We went to the cinema.' b. als er gegangen war → als er gegangen gewesen ist 'when he had gone'

No genitive case Standard German genitive case is replaced by different means in Swiss German. Some prepositions (e.g. wegen, während 'because, during') use dative case instead of genitive. Other prepositions become complex through the addition of a second preposition von (e.g. innerhalb 'within'). Verbs requiring a genitive object in Standard German generally use a dative object in Swiss German unless they are lexically replaced. Genitive appositions are converted to PPs with von 'of' in the case of non-human NPs (2a), or to a dative-possessive construction with human NPs (2b).
(2) a. der Schatzmeister der Partei → der Schatzmeister von der Partei 'the treasurer of the party' b. das Haus des Lehrers → dem Lehrer sein Haus 'the teacher's house', litt. 'to the teacher his house' Determiners with person names A third difference is the prevalent use of person names with determiners, whereas (written) Standard German avoids determiners in this context: (3) a. Hans → der Hans 'Hans' b. Frau Müller → die Frau Müller 'Miss M.'
Dialect-specific features
Verb raising When two or more verbal forms appear in the right verb bracket, their order is often reversed with respect to Standard German. Several cases exist. In Western Swiss dialects, the auxiliary verb may precede the participle in subordinate clauses (4a). In all but Southeastern dialects, the modal verb precedes the infinitive (4b).
Verb raising also occurs for full verbs with infinitival complements, like lassen 'to let' (4c). In this case, the dependencies between lassen and its complements cross those between the main verb and its complements, as in mich einen Apfel lässt essen.

Verb projection raising In the same contexts as above, the main verb extraposes to the right along with its complements (4d), (4e).
(4) a. dass er gegangen ist → dass er ist gegangen 'that he has gone' b. dass du einen Apfel essen willst → dass du einen Apfel willst essen 'that you want to eat an apple' c. dass du mich einen Apfel essen lässt → dass du mich einen Apfel lässt essen 'that you let me eat an apple' d. dass du einen Apfel essen willst → dass du willst einen Apfel essen 'that you want to eat an apple' e. dass du mich einen Apfel essen lässt → dass du mich lässt einen Apfel essen 'that you let me eat an apple'

Prepositional dative marking In Central Swiss dialects, dative objects are introduced by a dummy preposition i or a (5a). However, this preposition is not added if the dative noun phrase is already part of a prepositional phrase (5b).

(5) a. der Mutter → i/a der Mutter 'the mother (dative)' b. mit der Mutter → mit (*i/a) der Mutter 'with the mother'

Article doubling In adjective phrases that contain an intensity adverb like ganz, so 'very, such', the determiner occurs either before the adverb as in Standard German, or after the adverb, or in both positions, depending on the dialect: (6) ein ganz lieber Mann → ganz ein lieber Mann → ein ganz ein lieber Mann 'a very dear man'

Complementizer in wh-phrases Interrogative subordinate clauses introduced by verbs like fragen 'to ask' may see the complementizer dass attached after the interrogative adverb or pronoun.

Relative pronouns Nominative and accusative relative pronouns are substituted in most Swiss German dialects by the uninflected particle wo. In dative (7a) or prepositional (7b) contexts, the particle wo appears together with an inflected personal pronoun.

Final clauses Standard German allows non-finite final clauses with the complementizer um . . . zu 'in order to'. In Western dialects, this complementizer is rendered as für . . . z. In Eastern dialects, a single particle zum is used. An intermediate form zum . . . z also exists.

Pronoun sequences In a sequence of accusative and dative pronouns, the accusative usually precedes in Standard German, whereas the dative precedes in many Swiss German dialects.

Predicative adjectives In Southwestern dialects, predicative adjectives agree in gender and number with the subject: (9) er / sie / es ist alt → er / sie / es ist alter / alte / altes 'he / she / it is old'

Copredicative adjectives A slightly different problem is the agreement of copredicative adjectives. A copredicative adjective relates as an attribute to a noun phrase, but also to the predicate of the sentence (see example below). In Northeastern dialects, there is an invariable er-ending for all genders and numbers. In Southern dialects, the copredicative adjective agrees in gender and number. Elsewhere, the uninflected adjective form is used, as in Standard German.
The SADS data
The SADS survey consists of four written questionnaires, each of which comprises about 30 questions about syntactic phenomena like the ones cited above. They were submitted to 3185 informants in 383 inquiry points. 7 For each question, the informants were asked to write down the variant(s) that they deemed acceptable in their dialect.
The SADS data give us an overview of the syntactic phenomena and their variants occurring in the different Swiss German dialects. It is on the basis of these data that we compiled the list of phenomena presented above. More importantly, the SADS data provide us with a mapping from variants to inquiry points. It suffices thus to implement a small number of variants (between 1 and 5 for a typical phenomenon) to obtain full coverage of the 383 inquiry points. Figure 1 shows the geographical distribution of the three variants of prepositional dative marking.
For a subset of syntactic phenomena, two types of questions were asked:
• Which variants are acceptable in your dialect?
• Which variant do you consider the most natural one in your dialect?
In the first case, multiple mentions were allowed. Usually, dialect speakers are very tolerant in accepting also variants that they would not naturally utter themselves. In this sense, the first set of questions can be conceived as a geographical model of dialect perception, while the second set of questions rather yields a geographical model of dialect production. According to the task at hand, the transformation rules can be used with either one of the data sets.
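A very small sketch of how this variant-to-inquiry-point mapping can be represented is given below. The place names and acceptability judgements in it are invented for illustration; the actual SADS database is richer and distinguishes the perception and production data sets mentioned above.

```python
# Illustrative data structure for the variant-to-inquiry-point mapping.
# Towns and judgements below are invented examples, not SADS data.
dative_marking = {
    "no_preposition": {"TownA", "TownB"},   # dative as in Standard German
    "i":              {"TownC", "TownD"},   # dummy preposition i
    "a":              {"TownD"},            # dummy preposition a
}

def accepted_variants(town, variant_map):
    """Return the set of variants accepted at a given inquiry point."""
    return {variant for variant, towns in variant_map.items() if town in towns}

print(accepted_variants("TownD", dative_marking))   # {'i', 'a'}
```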
4 Transformation rules
The Standard German corpus
The transformation rules require morphosyntactically annotated Standard German input data. Therefore, we had to choose a specific annotation format and a specific corpus to test the rules on. We selected the Standard German TIGER treebank (Brants et al., 2002), in the CoNLL-style dependency format (Buchholz and Marsi, 2006; Kübler, 2008). This format allows a compact representation of the syntactic structure. Figure 2 shows a sample sentence, annotated in this format. While we use the TIGER corpus for test and evaluation purposes in this paper, the rules are meant to be sufficiently generic so that they apply correctly to any other corpus annotated according to the same guidelines.
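For readers unfamiliar with the format, the sketch below reads such a file into per-sentence lists of token records. The field layout assumed here follows the CoNLL-X convention; the exact columns used by the authors are not described in detail, so this is only an approximation.

```python
# Sketch: read CoNLL-style dependency annotations into per-sentence token lists.
# The column layout (CoNLL-X: ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD,
# DEPREL, ...) is an assumption about the exact format used.
from dataclasses import dataclass

@dataclass
class Token:
    tok_id: int    # position in the sentence (1-based)
    form: str      # surface form
    lemma: str
    pos: str       # fine-grained part-of-speech tag, e.g. ART, ADV, ADJA
    feats: str     # morphological features
    head: int      # id of the governing token (0 = root)
    deprel: str    # dependency relation label (the DEPREL field)

def read_conll(path):
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                       # blank line ends a sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            cols = line.split("\t")
            current.append(Token(int(cols[0]), cols[1], cols[2], cols[4],
                                 cols[5], int(cols[6]), cols[7]))
    if current:
        sentences.append(current)
    return sentences
```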
Rule implementation
We have manually created transformation rules for a dozen of syntactic and morphosyntactic phenomena. These rules (i) detect a specific syntactic pattern in a sentence and (ii) modify the position, content and/or dependency link of the nodes in that pattern. The rules are implemented in the form of Python scripts.
As an example, let us describe the transformation rule for article doubling. This rule detects the following syntactic pattern (an article ART, an intensity adverb ADV drawn from {ganz, sehr, so, ...} and an attributive adjective ADJA preceding a head X):

ART ADV ADJA X

The rule then produces the three valid Swiss German patterns - as said above, the transformation rules may yield different output structures for different dialects. One of the three variants is identical to the Standard German structure shown above. In a second variant, the positions of the article and the adverb are exchanged without modifying the dependency links:

ADV ART ADJA X

This transformation yields non-projective dependencies (i.e. crossing arcs), which are problematic for some parsing algorithms. However, the original TIGER annotations already contain non-projective dependencies. Thus, there is no additional complexity involved in the resulting Swiss German structures. The third variant contains two occurrences of the determiner, before and after the intensity adverb. We chose to make both occurrences dependents of the same head node:

ART ADV ART ADJA X

As mentioned previously, the SADS data tell us which of the three variants is accepted in which of the 384 inquiry points. This mapping is nondeterministic: more than one variant may be accepted at a given inquiry point.
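A sketch of how the article-doubling variant could be implemented over token records such as those produced by read_conll above is given below. The intensity-adverb list and the decision to attach both article occurrences to the same head are taken from the description in the text; the matching conditions and function names are illustrative assumptions rather than the authors' actual scripts.

```python
# Sketch of the article-doubling variant: when an article is followed by an
# intensity adverb and an attributive adjective, and the article and adjective
# share the same head noun, a second copy of the article is inserted after
# the adverb. A full implementation would also renumber token ids and head
# indices; that bookkeeping is omitted here.
INTENSITY_ADVERBS = {"ganz", "sehr", "so"}

def apply_article_doubling(sentence):
    """sentence: list of Token records in surface order (see read_conll)."""
    out, i = [], 0
    while i < len(sentence):
        tok = sentence[i]
        out.append(tok)
        nxt = sentence[i + 1] if i + 1 < len(sentence) else None
        nxt2 = sentence[i + 2] if i + 2 < len(sentence) else None
        if (tok.pos == "ART" and nxt is not None and nxt2 is not None
                and nxt.pos == "ADV" and nxt.form.lower() in INTENSITY_ADVERBS
                and nxt2.pos == "ADJA" and tok.head == nxt2.head):
            out.append(nxt)      # the intensity adverb
            out.append(tok)      # duplicated article, same head as the first
            i += 2               # the adverb has already been emitted
            continue
        i += 1
    return out
```

Applied to a sequence like ein ganz lieber Mann, this yields ein ganz ein lieber Mann, corresponding to the doubling variant in (6).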
Corpus frequencies
In order to get an idea of the frequency of the syntactic constructions mentioned in Section 3, we started by searching the TIGER treebank for the crucial syntactic patterns of the respective phenomena. This preliminary study led us to exclude phenomena that could not be detected reliably because the morphosyntactic annotations in TIGER were not precise enough. For example, TIGER does not distinguish between copredicative (11a) and adverbial (11b) uses of adjectives. Therefore, it is impossible to automatically count the number of copredicative adjectives, let alone perform the necessary dialectal transformations.
10 These figures should be taken with a grain of salt. First, the TIGER corpus consists of newspaper text, which is hardly representative of everyday use of Swiss German dialects. Second, it is difficult to obtain reliable recall figures without manually inspecting the entire corpus.
'The pots frequently hang on the kitchen wall.'
Results
For each syntactic construction, a development set and a test set were extracted from the TIGER treebank, each of them comprising at most 100 sentences showing that construction. After achieving fair performance on the development sets, the held-out test data was manually evaluated. We did not evaluate the accusative-dative pronoun sequences because of their small number of occurrences. Predicative adjective agreement was not evaluated because the author did not have native-speaker intuitions about this phenomenon. Table 2 shows the accuracy of the rules on the test data. Recall that some rules cover different dialectal variants, each of which may show different types of errors. In consequence, the performance of some rules is indicated as an interval. Moreover, some dialectal variants do not require any syntactic change of the Standard German source, yielding figures of 100% accuracy.
The evaluation was performed on variants, not on inquiry points. The mapping between the variants and the inquiry points is supported by the SADS data and is not the object of the present evaluation. The overall performance of the transformation rules lies at 85% accuracy and above for most rules. Four major error types can be distinguished.
Annotation errors
The annotation of the TIGER treebank has been done semi-automatically and is not exempt of errors, especially in the case of outof-vocabulary words. These problems degrade the performance of rules dealing with proper nouns. In (12), the first name Traute is wrongly analyzed as a preterite verb form traute 'trusted, wedded', leading to an erroneous placement of the determiner.
(12) Traute Müller → *traute die Müller / die Traute Müller

Imperfect heuristics Some rules rely on a syntactic distinction that is not explicitly encoded in the TIGER annotation. Therefore, we had to resort to heuristics, which do not work well in all cases. For example, the genitive replacement rule needs to distinguish human from non-human NPs. Likewise, adding a complementizer to wh-phrases overgenerates because the TIGER annotation does not reliably distinguish between clause-adjoined relative clauses and interrogative clauses introduced as complement of the main verb.
Conjunctions Many rules rely on the dependency relation type (the DEPREL field in Figure 2). According to the CoNLL guidelines, the dependency type is only encoded in the first conjunct of a conjunction, but not in the second. As a result, the transformations are often only applied to the first conjunct. However, it should not be too difficult to handle the most frequent types of conjunctions.
Word order errors Appositions and quotation marks sometimes interfere with transformation rules and lead to typographically or syntactically unfortunate sentences. In other cases, the linguistic description is not very explicit. For example, in the verb projection raising rule, we found it difficult to decide which constituents are moved and which are not. Moving polarity items is sometimes blocked due to scope effects. Different types of adverbs also tend to behave differently.
An example
In the previous section, we evaluated each syntactic transformation rule individually. It is also possible to apply all rules in cascade. The following example shows an original Standard German sentence (13a) along with three dialectal variants, obtained by the cascaded application of our transformation rules. The Mörschwil dialect (Northeastern Switzerland, Canton St. Gallen) shows genitive replacement and relative pronoun replacement (13b). The Central Swiss dialect of Sempach (Canton Lucerne) additionally shows prepositional dative marking (13c), while the Guttannen dialect (Southwestern Switzerland, Canton Berne) shows an instance of verb raising (13d). All transformations are underlined. Note again that the transformation rules only produce Swiss German morphosyntactic structures, but do not include word-level adaptations. For illustration, the last example (13e) includes wordlevel translations and corresponds thus to the "real" dialect spoken in Mörschwil.
Conclusion and future work
We have shown that a small number of manually written transformation rules can model the most important syntactic differences between Standard German and Swiss German dialects with high levels of accuracy. Data of recent dialectological fieldwork provides us with a list of relevant phenomena and their respective geographic distribution patterns, so that we are able to devise the unique combination of transformation rules for more than 300 inquiry points.
A large part of current work in natural language processing deals with inferring linguistic structures from raw textual data. In our setting, this work has already been done by the dialectologists: by devising questionnaires of the most important syntactic phenomena, collecting data from native dialect speakers and synthesizing the results of the survey in the form of a database. Relying on this work allows us to obtain precise results for a great variety of dialects, where machine learning techniques would likely run into data sparseness issues.
The major limitation we found with our approach is the lacking precision (for our purposes) of the Standard German treebank annotation. Indeed, some of the syntactic distinctions that are made in Swiss German dialects are not relevant from a purely Standard German point of view, and have therefore not been distinguished in the annotation. Additional annotation could be added with the help of semantic heuristics. For example, in the case of copredicative adjectives (11), a semantic resource could easily tell that pots can be sparkling clean but not frequent.
The purpose of our work is twofold. First, the rule set can be viewed as part of a transfer-based machine translation system from Standard German to Swiss German dialects. In this case, one could use a parser to analyze any Standard German sentence before applying the transformation rules. Second, the rules allow us to transform the manually annotated sentences of a Standard German treebank in order to automatically derive Swiss German treebanks. Such treebanks - even if they are of lower quality than manually annotated ones - could then be used to train statistical models for Swiss German part-of-speech tagging or full parsing. Moreover, they could be used to train statistical machine translation models to translate out of the dialects into Standard German. Both lines of research will be tested in future work. In addition, the rules presented here only deal with syntactic transformations. Word-level transformations (phonetic, lexical and morphological adaptations) will have to be dealt with by other means.
Furthermore, we would like to test if syntactic patterns can be used successfully for dialect identification, as this has been done with lexical and phonetic cues in previous work (Scherrer and Rambow, 2010b).
Another aspect of future research concerns the type of treebank used. The TIGER corpus consists of newspaper texts, which is hardly a genre frequently used in Swiss German. Spoken language texts would be more realistic to translate. The TüBa-D/S treebank (Hinrichs et al., 2000) provides syntactically annotated speech data, but its lack of morphological annotation and its diverging annotation standard have prevented its use in our research for the time being. | 2014-07-01T00:00:00.000Z | 2011-07-31T00:00:00.000 | {
"year": 2011,
"sha1": "d5d8fd694b3ab75197699c9869b3c4879f0f2b88",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "8179a2fbd15bbbc7d2a4c932f62c4366e1ea1289",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53981580 | pes2o/s2orc | v3-fos-license | Using reflexive photography to study a principal's experiences of the impact of professional development on a school: a case study
The changing social and economic environment has a direct impact on schools and their effective management. School principals have to deal with issues hitherto unknown to them in historical school cultures. This article attempts to describe a South African principal's experience of the way in which professional development (PD) impacted on the development of the school and the way in which his PD - and that of his staff - manifests itself in the functioning of the school. An exploratory qualitative study employing visual ethnography was deemed appropriate for the study. Convenient and selective sampling was used in the study, identifying a school principal who proved to be an exemplar of a principal placing a high premium on his own continuing professional development and that of others. Data were collected by means of reflexive photography, the principal's writings and a photo-elicitation interview. The following categories emerged from the data: the commitment and attitude of the principal to professional development; the head start: receiving the inviting school award; be positive (B+); a focus on client service (doing more than is expected; the blue and orange card system for learners; and inculcating a value system); and what do we do differently?

Opsomming Die gebruik van refleksiewe fotografie om 'n skoolhoof se ervarings van die impak van professionele ontwikkeling te bestudeer: 'n gevallestudie Die veranderde sosiale en ekonomiese omgewing het 'n direkte impak op skole en die effektiewe bestuur daarvan. Vir skoolhoofde is dit nodig om onbekende kwessies te hanteer. Hierdie artikel poog om 'n Suid-Afrikaanse skoolhoof se ervarings van die wyse waarop professionele ontwikkeling die ontwikkeling van die skool en dié van personeel in die funksionering van die skool beïnvloed het, te bestudeer. 'n Verkennende, kwalitatiewe studie deur middel van visuele etnografie is vir die studie geskik gevind. Gerieflike en selektiewe steekproefneming is gedoen om 'n skoolhoof te identifiseer wat as voorbeeld kon dien van 'n skoolhoof wat 'n hoë premie plaas op sy/haar eie en ander se voortgesette professionele ontwikkeling. Data is deur middel van refleksiewe fotografie, die skoolhoof se skrywes, sowel as 'n foto-verduidelikende onderhoud ingesamel. Die volgende kategorieë het uit die data na vore gekom: 'n toegewyde en positiewe gesindheid van die skoolhoof teenoor professionele ontwikkeling; die voorsprong: ontvangs van die uitnodigende skooltoekenning; wees positief (B+); 'n fokus op kliëntediens (doen meer as wat verlang word; die blou-en-oranjekaartstelsel vir leerders; die vaslegging van waardes); en wat doen ons anders?
Introduction
Principals and educators are facing tough and challenging times in working effectively in schools (Fennell, 2005:145; Hess & Kelly, 2005:2; Rodrigues-Campos et al., 2005:309; Vick, 2004:10). Moreover, numerous impassioned calls for school improvement have escalated inside and outside schools (Darling-Hammond & Richardson, 2009; Levine, 2005:68; Southworth & Du Quesnay, 2005:219). Varying expectations of role-players arising from the substantial changes in the management and acquisition of knowledge, changes in human interaction and the composition of families all contribute to extreme pressures on schools to perform better. If the focus is on the improvement of schools, principals need to play a key role in these improvements (Donaldson, 2009; Houle, 2006; Mestry & Grobler, 2004; Vick, 2004). Many studies confirm the importance of leadership in the development and improvement of organisations (Chappuis et al., 2009; Olivier & Hipp, 2006).
The role of principals has undergone rapid change (Cardno, 2005;Graczewski & Holtzman, 2009;Houle, 2006;Kinney, 2009) and they should possess certain leadership abilities in order to achieve and maintain quality schools in complex environments (Donaldson, 2009;Rodrigues-Campos et al., 2005;Vick, 2004).It is also important for principals to understand leadership as a process and to develop the necessary human relation skills to promote joint action and ensure an improvement in school effectiveness and student learning (Jabal, 2006;McClay & Brown, 2003).Elaborating on this view, Houle (2006:145) asserts: "The tension created in shifting views on the principals requires attention to the professional development (PD) needs of principals in the light of their new roles."A focus on the professional development of leaders is also crucial to the delivery of education across schools (Cardno, 2005).Furthermore, this emphasises the fact that professional abilities require "rich, continual performance in which to grow" (Donaldson, 2009).
According to Mestry and Grobler (2004:3), education management development should be seen as a process whereby individual development and the achievement of organisational goals need to be synchronised. The individual's management development is placed within the school context and becomes a fundamental part of the daily management of schools (Mestry & Grobler, 2004; McClay & Brown, 2003). The process of development is mainly concerned with equipping principals to acquire and improve the competencies necessary to manage their schools effectively. This is in line with a literature survey covering 1990 to 2001 by Gonzales et al. (2002), which found 125 studies describing the relationships between leadership practices and achievement; 60 of these studies provided evidence of the influence of leadership on student achievement. Similarly, Cardno (2005:293) believes that one "aspect of leadership in its broadest sense is the capacity of key individuals to exert influence that results in positive change for the school ... and ultimately for the benefit of students". However, it is important to note that the link between principals' professional development and improved school performance is complex and very dynamic (Donaldson, 2009). Nevertheless, it is important to indicate how a principal as a key individual can impact the development of a school for the benefit of students.
The Urban Principals' Academy (UPA) addresses three areas of principals' professional development: instructional leadership, capacity building and personal renewal (Houle, 2006:150).As regards the latter area, principals who attend to themselves as individuals also acknowledge the fact that their professional authenticity is closely linked to their self-efficacy (Houle, 2006).This is confirmed by Hoy and Smith (2007:160) who assert, "leaders with high selfefficacy in their ability to influence others are likely to be effective in that endeavour".For principals to lead effectively, they require a great deal of interpersonal learning to help them understand how principals' "words, behaviours and moods are shaped by those of the people" they attempt to lead (Donaldson, 2009:16).Moreover, school leadership develops over an extended period of time that also includes complex processes of socialisation (Weindling, 2003).
It is widely known that principals in South Africa have a huge task to create an effective learning environment in schools (Mestry & Grobler, 2004:2). In this regard, Bloch (2008:19) and Paton (2006:1) state that South African schools do not meet the requirements for a developing country - they are recognised to be in crisis and in a state of disaster. Although there is some empirical evidence regarding the impact of professional development programmes on school leaders, sustained studies on the impact of leadership development are not visible in the literature (Brundrett, 2000). This article attempts to describe a South African principal's experience of the way in which his PD - and that of his staff - impacted the development of the school and manifests itself in the functioning of the school.
Theoretical framework
Both constructive-developmental theories and adult learning theories are used in the study to understand adult development and growth (Drago-Severson, 2007). Adults bring their life and work experiences, needs, personalities and learning styles to learning, and these also influence their views on learning and PD (Drago-Severson, 2007; Knowles, 1984). The andragogy theory of Knowles is an attempt to develop a theory specifically for adult learning. Knowles (1984) puts emphasis on adults as self-directed learners who expect to take responsibility for decisions. Andragogy makes the following assumptions about learning: adults want to know why they need to learn something; adults need to learn through experience; adults learn best when the topic is of immediate significance and value; and adults approach their learning as problem-solving. Hirsh (2005) and Lee (2005) are of the opinion that these beliefs and assumptions regarding adult learning need to form the foundation of PD programmes.
As regards constructive-developmental theories, this paper focuses primarily on the social constructivist theory, which is used as a lens through which to view how a principal's meaning system shapes the learning challenges he faces (Drago-Severson, 2007:80). According to this view, an individual (the principal) searches for an understanding of the world in which he/she lives and works (Creswell, 2007:20). The individual develops subjective meanings of his/her experiences, which are varied and multiple (Creswell, 2007:20). The aim of such a study is therefore to rely predominantly on a participant's view of a particular phenomenon, in this case PD. Social constructivism also illuminates the developmental foundations of the principal's practice and the interaction between the principal's developmental capacity and his engagement in school practice. According to social constructivist learning theories, learning is constructive, and learners construct and build new conceptualisations and understandings by using what they already know (Chalmers & Keown, 2006; Mahoney, 2003). Its impact is manifested within a specific environment, in this case a primary school in Gauteng, South Africa.
• The constructed meaning of knowledge and beliefs
This is a process whereby an individual discovers new knowledge, skills and approaches and then personally interprets their significance and meaning.
• The situated nature of cognition
This aspect recognises the fact that PD has to be strongly linked to the actual contexts and situations of the individual school. This is also in line with Engestrom's (1987) model of expansive learning, which postulates that human beings do not live in a vacuum but are embedded in their sociocultural context (Paavola et al., 2004), and that their behaviour cannot be comprehended independently of this context.
• The importance of ample time
New developments and change take time to be implemented.
There is a need for practices informed by constructivist theories about how PD can positively impact schools.This study also illuminates how principals can play a key role within their school contexts.
The key role of principals
School principals play a key role in creating and maintaining effective school environments for the sake of student performance (Cardno, 2005;Chappuis et al., 2009;Hale & Moorman, 2003;Hammersley-Fletcher & Brundrett, 2005;Lin, 2005;Olivier & Hipp, 2006;Vick, 2004).Principals also act as lynchpins facilitating change and creating an effective learning environment in schools (McClay & Brown, 2003;Van der Merwe, 2003;Southworth & Du Quesnay, 2005), such as is the case with inviting schools (Egley, 2003).It means that their leadership may have an impact on relationships and outcomes within the school (Berry, 2004).Facilitating learning for individual school leaders as well as the members of organisations is viewed as the primary goal of leadership (Amey, 2005).
When conceptualising leadership as learning, the objective is to uncover mental models that affect the way in which educational leaders view the world and act within their contexts (Amey, 2005).
Wegenke (2000) believes that continuing PD for principals is necessary in order to maintain a positive school environment.Through their PD principals can influence the school's effectiveness by encouraging a culture of renewal and change (Rodrigues-Campos et al., 2005).PD is an "effective way for principals to gain new knowledge, develop new skills, and model self-renewal" (Rodrigues-Campos et al., 2005:312).According to Benjamin (as quoted in Kent, 2002:214) there are four objectives for leadership development: to develop individual leadership effectiveness, to improve career transition into leadership positions, to instil the vision, values and mission of the organisation, and to develop knowledge and skills to implement the long-term strategic objectives of the organisation.
Transformational forms of leadership fundamentally aim to make events meaningful and to cultivate professional development and higher levels of commitment to organisational goals (Yu et al., 2000).These include: • Identifying and sharing a vision (Hoy & Smith, 2007;Kassissieh & Barton, 2009;Sternberg, 2005;Vick, 2004).The first characteristic of effective educational leaders is their ability to align vision and personal, professional and organisational values with the particular context in the school (Jabal, 2006;Hoy & Smith, 2007;Kassissieh & Barton, 2009).Charisma is a characteristic of leaders who are able to exert a profound influence on the school's performance and climate by the force of their personality, abilities, personal charm, magnetism, inspiration and emotion (Sternberg, 2005;Vick, 2004).
• Offering intellectual stimulation.Such stimulation creates a gap between the current and desired practices and could enhance emotional arousal processes (Kassissieh & Barton, 2009).
• Setting an appropriate example.Through active involvement in continuing PD, principals set examples for staff to follow (Chappuis et al., 2009;Rodrigues-Campos et al., 2005;Southworth & Du Quesnay, 2005).The school leader's role is "grounded in shared ideals where the leaders serve as the head follower by modelling, teaching, and helping others to become better followers" (McKerrow et al., 2003:2).Furthermore, by doing so principals also set an example for staff and learners to continue their own learning.Staff should also be convinced about the expertise of their principals (Hoy & Smith, 2007).
• A positive attitude and commitment to development.Studies consistently show the importance of attitudes and commitment to ensure effective development and change (Dell, 2003;Gray, 2005).However, Bernat (2000) maintains that, while it is not easy for attitudes to change, it is necessary that they change before any meaningful and permanent learning can take place.This is in line with Ottoson's view expressed in Smith and Gillespie (2007).Ottoson believes that pre-existing attitudes are among the factors that can affect the implementation of training and development.
• Strengthening school culture.Principals set the climate and tone of schools: they influence the overall culture of schools (Cardno, 2005;Chappuis et al., 2009).Invitational Education (IE) as an example of a positive school culture, aims to "make school a more exciting, satisfying, and enriching experience for everyone -all students, all staff, all visitors" (Purkey & Novak, 2008:19).
Certain key assumptions that underpin IE are intended to foster the development of human potential.These assumptions are the following (Kok & Van der Merwe, 2002;Novak & Purkey, 2001;Purkey & Siegel, 2003): − Respect: this assumption acknowledges that every person is an individual of worth (Day et al., 2001).It also supports the principle that all individuals are able, valuable and responsible and that they should be treated accordingly.
− Optimism: people possess untapped potential for development and growth (Day et al., 2001).
− Trust: it is essential for education to involve everyone to promote empowerment and interdependency.
− Intention: it is an intentional decision to act in a particular way and to achieve and carry out a set goal (Day et al., 2001).
In her exploratory study on principals' perceptions of their experiences and the impact of their effectiveness, Berry (2004) identified positive experiences and also insights principals gained. In another study, Reeves et al. (1998) indicate two interlinked aspects of learning to change practices:
• Internalisation: this refers to the personal sense individuals make of a concept or system of concepts, linked to reflection (Lease, 2002). It also refers to the way in which individuals build experiences into their understanding of the concept as part of their professional knowledge.
• Externalisation: this refers to the process whereby individuals in their interaction with others use the new concepts or system of concepts to mediate their individual and joint practice (Olivier & Hipp, 2006;Chappuis et al., 2009).It is also related to individuals' belief that the application of concepts or systems of concepts may bring about change in their working environment.
Research design
An exploratory qualitative study employing visual ethnography was deemed appropriate to determine the principal's experiences of the impact of PD within his school.A case study design helped the researcher to get a better understanding of the phenomenon (the impact of PD) in its natural setting with an emphasis on the experiences of the principal regarding the impact of PD (Creswell, 2007;Meadows, 2003).As such it involved the exploration and description of a bounded system, in this case a particular primary school in Gauteng (Creswell, 2007:73).In recent years, visual empirical methods have been applied to many studies that have not previously been considered to be visual (Denzin & Lincoln, 2008;Harper, 2008;Schulze, 2007;Zenkov & Harmon, 2009).Furthermore, visual empirical methods help with retrospection of lived experiences of participants and, by combining photographs with other forms of data collection, ensure contextual validity through triangulation.Atkinson and Delamont (2008) believe that there are numerous social phenomena that can be captured visually and analysed in terms of their manifestations.However, a particular social phenomenon should not be separated from the social setting in which it is generated and interpreted -a rule which was also adhered to in this study.
Sample
Convenience and selective sampling were used in the study. The author has been involved in a number of previous studies in the school since 1992 (Steyn, 1994;2006;2007;2008;2009), a status that also earned the author the principal's trust. During these studies the participant proved to be an exemplary principal, placing a high premium on his own and others' continuing professional development.
The importance of the principal's own PD for the development of the staff and school was described in Steyn (2006), where he specifically mentioned the effect of a professional development programme that confirmed his commitment to PD as follows: The first law, 'The law of the lid', in The 21 laws of leadership by Maxwell actually gave me a wake-up call because I realised that if I did not develop and grow then my school wouldn't either. (Steyn, 2006:5.) Previously the principal acknowledged the importance of PD, but did not realise the effect that his own PD could have on the PD of staff and the development of the school. He has since placed such a focus on PD that he postponed his retirement in order to develop the staff and the school.
The school is an urban, Afrikaans primary school with approximately 1 400 learners and 80 staff members (administrative and academic).
It is located in a middle-class community of affluent families, and 8% of the learners are exempted from school fees.
When participants interpret photographs they uncover their subjective meanings (Schulze, 2007), and these photos also assist the researcher to understand the participants' subjective experiences of relevant social concerns (Zenkov & Harmon, 2009). The significant advantage of using photographs in the study is to clarify understanding of the phenomenon, the impact of PD, and to provide a context for discussion with the participant.
Data collection
Denzin and Lincoln (1994) identify two distinct ways in which photography is used in qualitative research, namely as images generated by researchers and as images generated by participants. Following a "photo walk" introduction to the study and instructions for the use of the camera (Zenkov & Harmon, 2009:577), the principal of a primary school in Gauteng was requested to take at least twenty pictures using the researcher's camera to illustrate the manifestation and impact of PD in his school. After a week he produced reflexive photographs as part of the data collection, described the photos in paragraph-length writings (Zenkov & Harmon, 2009) and participated in a photo-elicitation interview (Harrington & Schibik, 2003;Harper, 2008). He also provided a DVD, "In Hennops se voetspore - die kaalvoet-pret-prestasieskool" (In Hennops's footsteps - the barefoot-fun-achievement school), that explains all the recent developments in the school. After listening to the DVD, the information on it was transcribed. By using qualitative reflexive photography, the principal's writings, the DVD and the photo-elicitation interview, the researcher attempted to examine the principal's perceptions of the impact of PD in the school. The photo-elicitation interview specifically provided additional insight into the intrinsic meaning of the photographs. The idea is to evoke comments and discussion, since the principal can describe his own experiences via photographs and interviews (Schulze, 2007). The study explores two questions: How does the principal perceive the impact of professional development on the school? What role has the principal played in the manifestation of PD in the school?
As in previous studies at the school, the principal gave his informed consent to participate in the study. He agreed to take the photographs and to participate in a semi-structured photo-elicitation interview after the photos had been developed by the researcher. The photos were processed the day after the principal had taken the required number, and the interview was conducted the day after that.
Permission was granted to record the interview. The interview was conducted in Afrikaans in the natural setting of the school (the principal's office) and it was transcribed verbatim. The quotations from the photo-elicitation interview used in this study were translated according to the original transcription and the paragraph writings of the principal. Member checking was done by giving the principal a copy of the draft article with the findings of the study. The principal agreed with the findings, but elaborated on certain aspects of the findings.
The following serve as examples of questions developed for the photo-elicitation interview: Which photos best reveal the impact of PD on the school? Why have you chosen these photos? How do the photos reflect your attitude to PD? How do you see the role of the principal in PD? What guidelines do you suggest for principals who are implementing PD in schools?
Analysis of data
The transcribed verbatim data of the photo-elicitation interview, the DVD and the principal's writings were analysed. The following categories emerged from the data: the commitment and positive attitude of the principal to professional development; the head start: receiving the inviting school award; be positive (B+); a focus on client service (doing more than is expected, the blue and orange card system for learners, and inculcating a value system); and 'What do we do differently?'
Findings
Professional development is a continuous process in the school. On Wednesdays the school management team meets to identify and positively plan new innovations at the school. They also budget for PD programmes. Suitable non-departmental programmes are identified and the school selects staff in critical areas to attend workshops and seminars. On their return, participants have to convey what they learnt to the rest of the staff. The school only has an administrative meeting in the first term, while the rest of the terms are dedicated to development.
In the photo-elicitation interview it was clear that the principal placed a high premium on his commitment and attitude towards PD. On the question of which photos best exemplify the manifestation of PD in the school, he admitted that it was difficult, but he nevertheless chose three photos: the one on invitational education, the "be positive" sign and the tea tray (client service).
The commitment and attitude of the principal to professional development
Responding to the question of how the photos explain the principal's commitment to PD, he said: "Growth begins with the manager, and we [the school] cannot go without it [growth]." About six years ago he realised that he would be retiring and that he should further empower and develop his staff. He read John Maxwell's 21 laws of leadership. The first law, "The law of the lid", explicitly states that if you are a 4/10 leader, the organisation to which you belong will only be a 4. "If you do not grow, you will create a ceiling." He began to grow with the staff, and made an aggressive effort to attend courses of world standard. On certain days during the school term the school even has two staff development sessions. The school also budgets for courses, as he said - "not departmental". In addition, the principal selects staff in critical areas to attend appropriate courses. On their return, they present the programme to the rest of the staff. The current timetable also allows for staff development during the last week or two of a term. The majority of staff meetings are in the form of staff development sessions. This is the "way we can survive and function well" as a school, he commented.
On the question of how the photos reveal the principal's attitude to PD he said: "This is very difficult. A number of factors are involved." He referred to the famous story of people who came to a pastor and enquired about the congregation. They were interested in joining the church and wanted to know how things were going in the church. "Where we come from people complain about the pastor and they quarrel." The pastor said that this happened there too. After another year other people came who were searching for a church to join. They said, "We are so happy where we come from."
The commitment and attitude of leaders to their own and others' PD is also supported in the literature (Chappuis et al., 2009;Olivier & Hipp, 2006;Rodrigues-Campos et al., 2005;Wegenke, 2000). Their attitude and commitment to PD also serve as an example for other role players to continue their own learning. As regards constructivist theories, the principal acknowledged that PD needs to be closely linked to the context of the school and that development occurs over time (Chalmers & Keown, 2006;Hodkinson & Hodkinson, 2005). Social constructivism also illuminates the developmental foundations of the principal's practice and the interaction between the principal's developmental capacity and his engagement in school practice. This also explains why he selects staff to attend specific non-departmental PD programmes, why a timetable for PD is set, and why the school budgets for PD each year. The principal's commitment and attitude towards personal growth and the growth of the school is explicitly explained in the way he led the school to become an inviting school.
The head start: receiving the inviting school award
The school received the prestigious Inviting School Award from The International Alliance for Invitational Education in 1993 (Fig. 1). A concerted effort, by means of a number of PD sessions for staff members and other role players, was required to introduce the Invitational Education (IE) approach initially. For the sake of its maintenance, continuous follow-up PD programmes on IE and related topics have been conducted at the school.
Figure 1
According to the principal, this was extremely encouraging and gave them a head start: "The award is like being lowered into starting blocks. It's what you do with the award afterwards that makes the difference." For him, one of the important things for the "race" towards continuous improvement is to constantly learn from schools of excellence throughout the world. Throughout the interview the principal referred to the many changes that had taken place since the school received the inviting award (some discussed in this article).
Although humble, he is very proud of all these achievements. Egley (2003) believes that the influence of principals on education is greater than the influence of any other factor. Leadership permeates all levels of schooling and has a positive (or negative) effect by creating an environment conducive to teaching and learning (Cardno, 2005;Chappuis et al., 2009). Under the leadership of the principal, the school in the study has succeeded in sustaining the inviting culture in the school since 1993 (Steyn, 2006). As regards constructivist theories, the principal and staff succeeded in acquiring new knowledge, skills and approaches, in particular that of IE, and in interpreting their meaning and significance within the school situation (Chalmers & Keown, 2006;Wenger, 2007;Hodkinson & Hodkinson, 2005).
A school principal can therefore influence the climate of a school and can have a significant effect on the overall culture of a school.
The B+ climate of the school serves as another example to explain the culture of the school.
Be positive (B+)
About eight years ago the principal arranged a training programme for staff at the school.
Professor Eugene Cloete, currently Dean of the Faculty of Natural Sciences at Stellenbosch, was the presenter. His whole approach - to our country, to education - was so positive in nature. His total stance as a human being was catching. It made us realise that this should be the aim and point of departure of the school: Be Positive (B+) (Fig. 2).
The principal admitted that unfortunately he often experiences difficulties with inculcating this positive approach, since the "people of our time are fairly negative about things".
Figure 2
Professor Cloete also offered other presentations at the school and "he was very positive about everything that he presented".On one occasion when the principal accompanied Cloete to his car after one of his presentations, the principal asked him if there was anything that ever bothered him."Professor Cloete did not even answer me."This motivated the principal to become even more positive.As a result of this, the principal put up a number of B+ signs in the school."I do think that it helps here and there to make people positive."Moreover, this was the beginning of his attempt to influence the attitudes of all role-players at the school: "I spent a year trying to change the attitudes of people."During that year he read a lot about attitudes and every week he wrote something on attitude in their school's newsletter.During the course of the year, he also gave many talks to staff, learners and even parents on the effect of attitude.The principal showed me the powerpoint presentations of a number of these workshops.In summary he said, "Attitudes can change your whole life … A great attitude produces a lot in life." The principal referred to the importance of a principal's positive attitude to others.It should be obvious that a principal cares for others.
In the private sector they are focused on production.But people need to be happy at a place; staff should be happy; children should be happy.Organisations pay too little attention to people and therefore the production is not good enough … One could ask the question: Is it about people or products?
Inculcating a B+ atmosphere in the school shows that human beings do not live in a vacuum, but are embedded in a particular socio-cultural context (Paavola et al., 2004). Furthermore, the development of such a culture indicates that its development is strongly linked to the actual context of the school (Chalmers & Keown, 2006;Darling-Hammond & Richardson, 2009:47). The findings also support the externalisation aspect in the study of Reeves et al. (2005), which shows how the principal, staff and learners in their interaction used the B+ concept to bring about change in their school environment (Olivier & Hipp, 2006;Chappuis et al., 2009).
As mentioned before, leadership plays a key role in creating a positive teaching and learning environment.The positive school climate where people want to be and where everybody is happy also implies a focus on serving people in the school.
A focus on client service
The school's focus on clients was revealed in a number of ways: doing more than is expected, the blue and orange card system for learners, and inculcating a value system.All these aspects were the result of different PD programmes to focus on clients in the school.
Doing more than is expected
In his writings, the principal indicated how Professor Cloete emphasised the value of a focus on clients in the school. Cloete mentioned, among other things, that one should not merely offer a visitor a cup of tea or coffee, but provide a biscuit with it as well - implying that one should always do more than is expected of you. This single comment inspired the principal and the staff so much that they attended more programmes on client service, and staff members were trained accordingly. One of the workers has taken the initiative and places flowers in a vase whenever she serves a tray of tea or coffee to administrative staff in the school (Fig. 3).
Figure 3
On the day of my first visit to the school for the new study, two learners welcomed me at the gate and guided me to my parking spot where a note bearing my name was attached to the pole.On my arrival, I also received a gift from the learners who then accompanied me to the principal's office.When I commented on these welcoming efforts the principal explained: "All strangers or visitors should be cared for and they need to be welcomed." The findings show how the principal as adult is self-directed and has taken responsibility for developing staff towards a client focus (Knowles, 1984).Staff as adult learners on the other hand, accepted that this approach -client focus -is of immediate significance and value.The literature also confirms that every educational output has clients and quality is unlikely to improve without our recognising this (Kayser, 2003:173).In terms of quality management a client-focused organisation's primary goal involves determining who the client is and seeking to meet and exceed the client's needs (De Bruyn, 2003).The client-focus is also exemplified by the reward system for learners, the internal clients of the school.
The blue and orange card system for learners
This blue and orange card system, basically a reward system, is based on the invitational education approach, an example of a PD programme. William Purkey, the co-founder of Invitational Education, introduced the blue and orange card system (Paxton, 2003) (Fig. 4). According to Purkey, blue cards carry a message that a person is able, valuable and responsible, whereas orange cards inform the person that he/she is unable, worthless and irresponsible. Any positive action or behaviour of a learner is indicated on the blue side of the card. In this study, the school's approach to acknowledging and rewarding people was also motivated by a friend of the school after his visit to Disneyland. This friend noticed how employees put their awards on their office walls and how proud they were of them.
The principal calls it the Disneyland reward system (Fig. 5).
Figure 4
Figure 5
Previously the school used a black file system and "we just worked very hard". In this file all the transgressions of children were recorded. The school moved away from that system completely and introduced the blue and orange card system. The principal ascribed the shift to the reinforcement of positive behaviour: We could not flourish on such a negative thing, and it cannot work - especially when children see punishment in such a negative light ... If one looks at the whole file now, you see that most of the entries are positive. There are very few on the orange side. Between 96 and 97 percent of children have entries only on the blue side. The children work for them. Everyone wants a pat on the back, even me - and I am 65 years old.
Learners receive a stamp on a diploma for six blue entries on their blue and orange card. After six stamps they receive a certificate for their outstanding performance. After the seventh stamp, parents receive a letter to congratulate them on the noteworthy distinction their child has attained. The principal explained this system in his writings: learners can cancel a negative comment (orange) by accumulating six positive comments (blue). They still need to accumulate a total of six blue comments to receive a stamp. According to him the system works excellently, since learners are very positive about working for their stamps. On Fridays there are opportunities for learners to bring their blue and orange cards to the principal for his signature. Implementing this reward system required the necessary training of staff. All new staff members are also introduced to this award system.
The positive reinforcement theory of motivation is one of the best ways of influencing and modifying people's behaviour in the right way (Anon., 2009;Champoux, 2000;Gerson & Gerson, 2006;Prinsloo, 2003).The theory is based on the law of effect: those activities that are met with enjoyable consequences tend to be repeated, while those activities that are met with unpleasant consequences will probably not be repeated.
To reward learners for their achievements is noble, but it should not be at the expense of a particular value system.
Inculcating a value system
The idea of the "fruit of the spirit" comes from a parent in the school. Figure 6 reveals the "fountain of love". The following words, which are translated for the purpose of this figure, appear on the board next to it: Hennopspark - the school where God reigns. Words - a fountain that gives life. The principal explained: It is extremely important to inculcate values in the children, values like love, friendliness, humility, neatness. These values are also part of the school programme and are inculcated during the life-skills period. People become so focused on the academics that they neglect their values. Take for example Nelson Mandela, who was an excellent leader with an extremely strong and noble value system. Then you get other leaders without any value systems. At the core of many things is a value system: pride, respect, love, humility. It is for that reason that our school is so focused on values. The value system should prepare learners for a successful place in society. Even at prize-giving ceremonies learners receive awards for values that they show, like friendliness or diligence, or good manners or neatness. We give awards for that ... You should work with children to help them feel that they are valuable. I also want to empower the child. Children will become the leaders of tomorrow.
Figure 6
Another way in which the school shows its commitment to inculcating a value system is the report given to learners in the third term (Fig. 7). This report does not include any academic, cultural or sports achievements, but reflects the dignity and self-worth of each child (Steyn, 2006). The principal explained: During the third term learners do not receive any academic reports. They receive a hand-written report from the teacher on all their positive characteristics. Each teacher only gives an account of the good qualities of the learner. This idea originated in a class in America.
The teacher requested learners to write a positive comment about every other child in the class. She then collected each child's comments and gave them to him/her. During the Vietnam war one of the soldiers who had been in her class was killed, and when they emptied his pockets they discovered the crumpled report that his teacher and fellow classmates had given him many years before. It was then that the principal actually realised how "few good things we say to each other".
Figure 7
The importance of inculcating values is also supported by invitational education (Kok & Van der Merwe, 2002;Purkey & Siegel, 2003). People are able, valuable and responsible and should therefore be treated accordingly. By accepting such values, staff as adult learners show that the approach is meaningful and that they are willing to take responsibility to implement them in practice (Knowles, 1984).
Apart from the focus on the client, the school has accepted the challenge to constantly do things differently.
What do we do differently?
In his writings on doing things differently and his comments on continuous improvement, the principal wrote: In 2007 I attended a conference led by Professor Gideon Maas on futuristic trends and modern principles of management. Among the things he said was that businesses (practices) that do not think differently will be on the downgrade. This challenged us to devote 10 minutes each week to talk about new schools of thought. This is a standing item on our agenda. This compels us to constantly think innovatively.
In 2008 his presentation to prospective grade one parents was based on the principles of what the school does differently from other schools. "We had one of the best enrolments ever." A DVD was also made (and is constantly adapted) as a result of doing things differently. The following words, translated here, appear on the DVD: "In Hennops' footsteps: the barefoot-fun-performance school." He added that continuous change and development are necessary for a school to prosper.
According to Van der Merwe (2003:44), it is the principal's task "to initiate, facilitate and implement change". In accordance with constructivist theories, the findings reveal that new developments and change take time to be implemented successfully (Chalmers & Keown, 2006;Darling-Hammond & Richardson, 2009;Hodkinson & Hodkinson, 2005). The principal's response to the importance of continuous development is also supported in the literature (Crum & Sherman, 2008;Gokçe, 2009;Richardson, 2003).
Throughout the photo-elicitation interview it became clear that this principal has a definite stance on PD. He also explicated some guidelines for the effective implementation of PD.
Guidelines for implementing professional development effectively
On the question of what guidelines the principal would recommend for the effective implementation of PD, he made three suggestions.
• The principal's commitment and attitude to PD is very important.
I'll begin with myself. If I do not have fire in my heart, I will not be able to transfer it to others. In order to empower others, you have to empower yourself. I attended courses and read at least 10 minutes per day, as is recommended … You have to keep abreast in the world. There is an unbelievable amount of material and research available. You simply cannot stay behind. It [PD] begins with yourself.
• Once the principal him-/herself is empowered, the next step is to "look at the people around you, their development and their happiness".
• Finally the principal explained the importance of learners.The main aim of professional development should be the well-being of learners and the improvement of their performance as he explained: The primary client is the child … We should look more at the child.Children should be happy.Many people only think about themselves and do not take the children into consideration.It is always important to think carefully before decisions are made: Is the decision in my interests or the children's?You cannot ignore the children in decisions.I am now taking my courage in both hands when I talk about primary schools where learners are dressed as if they are going to church, with cheese cutters and blazers, et cetera.This is an excellent illustration of thinking about oneself -not about children.Children want to dress in what they feel comfortable in … Children want to go barefoot; they want to play.I am, however, still in favour of neatness.
The guidelines for PD reflect the fact that the principal learnt through experience (Knowles, 1984) and that he has been searching for an understanding of the world in which he lives and works (Creswell, 2007;Drago-Severson, 2007). As a result, he has developed subjective meanings of his experiences regarding PD (Creswell, 2007).
Conclusion
Although they provide insight, many data-collecting methods are inadequate to produce the depth of information required to measure the impact of PD programmes. Using reflexive photography to study the impact of a principal's perceptions of PD on a school has shed additional light on the development of meaning that the principal ascribes to his perceptions and experiences through, and as a function of, his social interaction within the school (Harrington & Schibik, 2003:36). The photos that the principal took symbolise the significance and interpretation of the principal's interaction with his social and physical environment. This technique has provided authentic examples of the impact that a principal's PD and his commitment and attitude to PD can have on a school. This information is presented in both the participant's pictures and his words. They also reveal the importance of principals' positive attitude towards their own PD and the PD of their staff for the sake of continuous school development and improvement.
Finally, the ever-changing character of principalship requires that principals constantly update their knowledge and skills to assist their schools to face new challenges. The core objective of professional programmes for educational leaders should be to promote high quality learning among all learners in the schools (Kent, 2002). This implies that principals need to understand current approaches to learning and engage the staff in them.
| 2018-12-01T16:59:54.279Z | 2009-07-26T00:00:00.000 | {
"year": 2009,
"sha1": "57781a0260e865051a77a8b87bdbc85df725bc22",
"oa_license": "CCBY",
"oa_url": "https://www.koersjournal.org.za/index.php/koers/article/download/133/102",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bb10f9fe61c8f2f17d2177c2754ce897d56eadcc",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
77861766 | pes2o/s2orc | v3-fos-license | Laparoscopic-assisted vaginal myomectomy: a case report and literature review.
The purpose of this article is to present a case of laparoscopic myomectomy (LM) that led to the identification of a new minimally invasive technique [laparoscopic-assisted vaginal myomectomy (LAVM)] for removing multiple transmural uterine myomas and facilitating uterine suturing. In addition, we reviewed the literature to (1) describe the history leading up to LAVM, (2) relate the benefits of this technique to other more widely performed myomectomy procedures [LM and laparoscopic-assisted myomectomy (LAM)], and (3) identify criteria for LM and LAVM.
INTRODUCTION
Laparoscopic myomectomy (LM) is a minimally invasive surgical procedure for the removal of uterine myomas. It was first described in the late 1970s by Semm. 1 Subsequently, instruments have been developed to enhance the procedure. Laparoscopic myomectomy requires advanced laparoscopic skill and expertise in suturing and tissue removal.
Laparoscopic-assisted myomectomy (LAM), a procedure that combines operative laparoscopy and minilaparotomy, was described by Nezhat et al 2 in 1994. The procedure was initially developed to remove single and multiple large myomas. Nezhat reports that in addition to providing a route (via the minilaparotomy incision) for removal of the myoma(s), LAM is "technically less difficult than LM, allows better closure of the uterine defect, and may require less time to perform." 2 Goldfarb and Pelosi have independently worked on a variant of this procedure in which the dominant myoma is removed laparoscopically and the uterus is delivered (via colpotomy) into the vagina for removal of secondary uterine myomas and uterine closure. Pelosi's 3 laparoscopic-assisted transvaginal myomectomy (LATM) was described in 1997. This paper discusses Goldfarb's laparoscopic-assisted vaginal myomectomy (LAVM) technique. 4
CASE REPORT
The patient was 26 years old (gravida 2, para 1) and complained of menorrhagia. A transvaginal ultrasound revealed three transmural myomas; the dominant myoma measured 7 cm. Because the patient was attempting pregnancy, myomectomy, rather than myolysis, was deemed the appropriate procedure. 5 The procedure was then performed in the routine manner. The myoma was grasped with a tenaculum and removed vaginally.
During this part of the procedure, it was noted that the dominant myoma extended into the uterine cavity, the uterus was mobile, and the vagina was parous. A 5-mm laparoscopic grasper was used to guide the uterus to the colpotomy site. T-clamps were placed on the edges of the wounds, and the fundus of the uterus was delivered, via the colpotomy incision, into the vagina. The 2 additional myomas were palpated digitally and removed transmurally by electrocautery and sharp dissection. The uterus was sutured in 3 layers (endometrial, myometrial, and serosal). The repaired uterus was returned to the abdominal cavity, and the colpotomy incision was sutured. The abdomen was re-explored laparoscopically and thoroughly lavaged. An oxycellulose barrier (Interceed) was placed on the uterus.
DISCUSSION
Since this case, Goldfarb has performed 11 additional LAVM procedures. The indications and outcomes are listed in Table 1. Four patients experienced minor postoperative complications: 3 patients had urinary retention (the Foley catheter remained in place for 1 week) and 1 patient was febrile (additional antibiotics were prescribed). One patient had a follow-up laparoscopy that revealed minimal adhesions. Follow-up has not been long enough to discuss fertility.
CRITERIA FOR LM AND LAVM PROCEDURES
Dubuisson et al 7 caution that LM can be a lengthy and difficult procedure and should be reserved for experienced surgeons with a thorough familiarity with endoscopic sutures. Parker 8 suggests that not all women with symptomatic myomas are candidates for LM. He notes that the procedure, in some cases, results in excessive blood loss, prolonged operating time, the need to convert to laparotomy, or a combination of these. In addition, it has been reported 9 that laparoscopic suturing of the myometrium may contribute to uterine dehiscence.
Parker suggests the following criteria for deciding whether a patient is likely to be managed successfully by LM: (1) No individual myoma should be larger than 7 cm; (2) If multiple myomata are present, the uterine size should not be greater than 14 weeks; (3) No myoma should be near the uterine vessels or tubal cornua. At least 50% of the myoma should be subserosal. Operative hysteroscopy is the preferred procedure for removal of submucous myomas.
For success with LAVM, we suggest surgeons consider the following: (1) Removal of the dominant myoma must render the uterus mobile enough to be delivered to the colpotomy site; and (2) The vagina and cul-de-sac must be ample enough to allow for generous colpotomy (parous preferred).
LITERATURE REVIEW
Laparoscopic Myomectomy
Prior to Semm's description of laparoscopic myomectomy, laparotomy or hysterectomy were the main treatment options for uterine myomas.Since Semm, several clinicians [10][11][12] have reported success with LM.Nezhat et al 10 report that "laparoscopic myomectomy can be a safe and cost-effective alternative to laparotomy when performed by a skilled operative laparoscopist." In this series, 154 women, with symptomatic uterine leiomyomata, underwent laparoscopic myomectomy.In total, 347 intramural or subserosal leiomyomata were removed, ranging in size from 2 to 15 cm.The majority of the myomas were morcellated and removed through a 10-mm suprapubic anterior abdominal wall trocar incision or the operating channel of the operative laparoscope.In about 20% of the cases, the myomas were removed from the abdominal cavity via posterior colpotomy.The procedure ranged from 50 to 190 minutes (with a mean of 116 minutes), the blood loss was estimated at between 10 and 600 cc, and the duration of hospitalization ranged from 7 to 48 hours (with a mean of 19.6 hours).
The authors report 2 major perioperative complications.
One patient developed fluid overload postoperatively.
The authors attribute this to the hysteroscopic portion of the procedure. The other patient had intraabdominal bleeding that resulted from laceration of the epigastric vessels. The authors note that the damaged vessels were near the left suprapubic puncture, the site used for removal of the myoma.
Other important findings are that "sutured sites of intramural or deep subserosal leiomyomata healed more completely than the unsutured sites, but were associated with a greater incidence of adhesion formation." The authors conclude that in selected patients (ie, those with few and relatively small myomas), LM can replace laparotomy for the treatment of uterine myomas. They caution that (1) LM can be a difficult endoscopic procedure, (2) the strength of the uterus following LM remains unknown, and (3) postoperative adhesion formation may impair fertility.
Comparison between LM and Laparotomy
In addition to Nezhat, 10 several clinicians
Adhesion Formation
In response to concerns about postoperative adhesions following LM, Bulletti et al 16 conducted a case-control study, with 32 patients, to compare the frequency of adhesion formation after LM with that of laparotomy. The mean size of myomas was 7.4 cm for laparotomy versus 7.3 cm for laparoscopy. The authors found that the number of incision sites with adhesions and the extent of adhesions were significantly lower in women who underwent laparoscopy. In addition, they found that suturing myomas with a depth of myometrial penetration of less than 50% provided no advantage over not suturing them (ie, adhesion formation was not significantly reduced by suturing).
Laparoscopic Assisted Myomectomy
In 1994, Nezhat et al 2 described laparoscopically-assisted myomectomy (LAM), a procedure that combines operative laparoscopy and minilaparotomy for the removal of single and multiple large leiomyomas. In this retrospective study of 57 patients, with uteri ranging from 8 to 26 weeks' gestational size and myomas ranging from 28 g to 998 g, the authors report that operative time ranged from 40 to 285 minutes (mean 127 minutes) and blood loss ranged from 50 mL to 1,600 mL (mean 267 mL). They conclude that LAM is a safe alternative to myomectomy by laparotomy. In addition, as compared with LM, they conclude that LAM is technically less difficult, allows better closure of the uterine defect and may require less time to perform.
Uterine Dehiscence and Laparoscopic Suturing
Uterine dehiscence during pregnancy is a concern after LM. Harris 9 was the first to suggest this complication. He reports that a 24-year-old woman, who conceived after laparoscopic myomectomy, experienced uterine dehiscence at 34 weeks' gestation. He notes that with laparoscopic suturing it is more difficult to reapproximate the layers of the uterus. This likely creates a weak spot in the uterus, which, if stressed, as in pregnancy, causes the uterus to rupture.
Since Harris, 9 at least 3 other authors [17][18][19] have reported cases of uterine dehiscence following LM. In the most recent case report, Pelosi and Pelosi suggest that electrosurgical dissection, because it disrupts blood flow to the wound site, may also contribute to suboptimal healing of the myomectomy site, weaken the uterus, and lead to dehiscence. They suggest that electrosurgical dissection be used sparingly and sharp dissection used instead. In addition, they advance the use of endoscopic suturing or suturing by minilaparotomy or colpotomy.
Also, since Harris, 9 more sophisticated laparoscopic suturing tools (eg, the Endo Stitch laparoscopic suturing device 20,21 and the laparoscopic cannula cone 22 ) have been developed to aid surgeons in reapproximating the uterine layers and preventing the complication of uterine dehiscence. In a retrospective chart review of 50 laparoscopic myomectomies, Stringer et al report that the Endo Stitch Laparoscopic Suturing Device (Auto Suture Company, division of US Surgical Corp, Norwalk, CT) combined with a running, locked suture technique enables the surgeon to achieve a secure multiple-layer closure of deep defects via laparoscopy. The authors suggest that repairing the uterine defect this way reduces the likelihood of uterine rupture.
LAVM
In 1997, Pelosi and Pelosi 19 described laparoscopic-assisted transvaginal myomectomy. In their retrospective chart review, the authors report 21 cases in which they combined traditional laparoscopic myomectomy with posterior colpotomy. They conclude that this combination allows for digital repair and inspection of the uterus while maintaining the benefits of minimally invasive surgery.
CONCLUSION
The LAVM procedure offers advantages over both the LM and the LAM. Compared with the LM, LAVM provides the control and safety of direct suturing along with the advantages of digital palpation to detect and remove smaller, less obvious myomas. In comparison with the LAM, the LAVM requires a smaller incision and avoids cutting through several layers of fascia and muscle. It is less traumatic and requires less recovery time than LAM. In addition, the literature reports 16,23 fewer postsurgical adhesions following laparoscopy as compared with laparotomy.
As Pelosi points out in a 1996 editorial, 24 "operative colpotomy, an easily performed surgical option, in combination with laparoscopy permits a much greater number of patients to benefit from both minimally invasive surgery and a traditional layered uterine repair. The technique requires only standard laparoscopic and transvaginal instrumentation." Goldfarb agrees: colpotomy, rather than minilaparotomy, is a better way to remove large transmural myomas, inspect the myoma cavity, and repair the uterine defect. Furthermore, transvaginal uterine repair results in minimal blood loss because of the acute angulation of the uterine blood vessels.
Table 1. LAVM Procedure:* Indications† and Outcomes‡
*This table represents data from 11 patients. †All patients had symptomatic uterine myomas. The myomas were associated with excessive menstrual blood loss, were large or fast-growing, or caused significant pelvic pain. ‡Four patients experienced minor postoperative complications: three had urinary retention and one was febrile. One patient was found to have minimal adhesions by follow-up laparoscopy.
| 2014-10-01T00:00:00.000Z | 2001-01-01T00:00:00.000 | {
"year": 2001,
"sha1": "b78236279e7a0d00bc6ad5a1376e4a80d726621f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4c7c23e428812f7e78b266b051f6d744dcb325ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259770131 | pes2o/s2orc | v3-fos-license | Detection for COVID-19 Chest X-ray Based on Convolutional Neural Network
COVID-19 has been the most serious public health problem of the past decade. To date, the pandemic has taken a huge toll on the globe in terms of human lives lost, economic impact and increased poverty. However, due to its viral characteristics, determining whether a patient carries COVID-19 is not easy. RT-PCR methods are the gold standard for detecting COVID-19, but their time cost, as well as the need for specific equipment and instrumentation, limit their ubiquity in some medically underdeveloped areas. Chest X-rays, a test with high ubiquity and rapid results, require a certain number of professionals to read and determine whether a patient is likely to have COVID-19. Therefore, it is important to have a system to assist in the determination in areas where professionals are lacking. In this experiment, a convolutional neural network-based machine learning technique was used to create a model for recognizing COVID-19. Although there is no clinical evidence to prove its effectiveness, the model can assist professionals in judgment to a certain extent.
Introduction
COVID-19 has been the most serious public health issue of the past decade. To date, the pandemic has cost the globe dearly in terms of human lives lost, economic impact, and increased poverty. The world has been affected and changed profoundly. For example, during the pandemic, more than 94% of students worldwide were educationally impacted by the closure of their learning spaces. Some schools even shut down face-to-face instruction and switched courses to online instruction to ease restrictions on venues [1].
Regarding COVID-19 symptoms, according to Ciotti et al., a large number of patients present with symptoms such as fever and cough, although most infections are asymptomatic [2]. Chest X-rays usually show multiple spots and ground-glass opacities [3]. COVID-19 has a high rate of misdiagnosis, and misdiagnosis carries very high costs [4]. Thus, reducing the misdiagnosis rate has become a major research direction. The most popular method for detecting COVID-19 is RT-PCR, although the test can take up to two days to complete [5]. However, it is not readily available in many areas around the world due to cost and operational requirements. Due to this limitation, doctors widely use radiology-based methods for initial screening of suspected cases. Guan et al. reported that COVID-19-positive cases show imaging abnormalities such as ground-glass opacities, bilateral abnormalities, and interstitial abnormalities on chest X-ray and CT images [6]. Notably, chest X-ray image analysis may have better sensitivity than RT-PCR-based diagnosis [7]. Additionally, although RT-PCR testing is regarded as the most effective standard for detecting COVID-19, the ubiquitous availability of chest X-rays makes them an appealing choice for rapid and extensive screening compared to the equipment needed for RT-PCR testing [8]. It is worth noting that most current chest X-ray methods for COVID-19 are not intended to replace RT-PCR, but rather to serve as an adjunctive diagnostic measure that helps doctors quickly screen patients for the disease [7]. However, the drawback of chest X-rays is obvious: only trained and experienced professionals can read them, and most medically underdeveloped regions do not have enough of these professionals. If artificial intelligence and software can assist local doctors in reading X-rays and determining symptoms, the level of medical care in these areas will improve. Therefore, it is advantageous to use software and artificial intelligence to assist physicians in image interpretation and diagnosis.
This paper presents a COVID-19 recognition network built on a convolutional neural network. It is designed to detect COVID-19 cases from chest X-ray images: it can quickly analyze a chest X-ray film and determine whether the patient is likely to have COVID-19, thus assisting physicians in making diagnostic judgments. In addition, since most of the data are derived from open-source datasets, the stability of the source and its clinical reliability cannot be fully assessed. Therefore, this study used data augmentation in this network to increase the accuracy and robustness of the model.
Dataset description and preprocessing
The dataset used in this experiment is drawn from two publicly available datasets [6,7]. In addition to chest X-ray photos of patients with COVID-19 infection, the original data also include images of bacterial infections, common pneumonia, and other viral diseases including SARS. Therefore, in order to perform the COVID-19 classification task, the data from both datasets were organized and merged into two classes, COVID and Non-COVID, to facilitate the subsequent model training. Two examples belonging to these two groupings are shown in Figure 1. As shown in the figure, chest radiographs are grayscale images: compared with RGB images, which have 3 channels, grayscale images have only 1 channel, where 255 is all white and 0 is all black. Therefore, the amount of data is much smaller compared to RGB images, which is convenient for training. To unify the image size, a standard size of 180 × 180 is used to scale the dataset images in this experiment.

CNN model
Machine learning (ML) has recently increased in popularity in research and has been integrated into a wide range of applications, for instance image classification, object detection, and email spam detection. The convolutional neural network (CNN) is one of the most well-liked and frequently utilized deep learning networks [9]. The main advantage of CNN over previous machine learning algorithms is that it can automatically detect important features without any human supervision. This ability for feature extraction makes CNN one of the top choices for detection and differentiation tasks and one of the most commonly used deep learning algorithms today. Thus, this research mainly focused on using a CNN to construct the model to solve the problem. In this experiment, the features of the images are extracted using a 3-layer convolutional neural network. In order to keep the gradient from vanishing and to make the model more nonlinear, the ReLU function is selected as the activation function. A max-pooling layer is placed after each convolutional layer. This step enlarges the receptive field and increases the model's invariance; importantly, it strengthens the expression of significant features while filtering out some of the weak ones. After that, dropout is used to increase the robustness of the model [10]. Finally, fully connected layers with 128 units and 2 units parse the features extracted from the convolutional layers and classify the images to determine whether they match the COVID-19 features.
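As a rough illustration of the architecture described above, the following Keras sketch stacks three convolution/ReLU/max-pooling blocks, dropout, and dense layers with 128 and 2 units. The filter counts, kernel size, rescaling step and dropout rate are not given in the text and are assumptions made here for illustration; this is not the authors' actual code.

```python
# Minimal Keras sketch of the 3-block CNN described above.
# Filter counts, kernel size and dropout rate are illustrative assumptions;
# the paper only specifies 3 conv layers, ReLU, max-pooling after each
# conv layer, dropout, and dense layers with 128 and 2 units.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_covid_cnn(input_shape=(180, 180, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),          # map 0-255 grayscale values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.5),                   # assumed rate; not stated in the paper
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(2),                       # logits for COVID / Non-COVID
    ])
    return model

model = build_covid_cnn()
model.summary()
```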
Implementation details
The TensorFlow toolkit was originally implemented by Google researchers and is one of the most widely used deep learning development libraries. In this research, the TensorFlow library was used to construct the CNN model. In the training process, the learning rate was set to 0.0001 and the Adam algorithm was chosen as the optimizer; training ran for a total of 10 epochs. The cross-entropy function was chosen as the loss function, and accuracy was used as the evaluation metric.
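The training setup described here (Adam with a learning rate of 0.0001, cross-entropy loss, accuracy as the metric, 10 epochs) could be wired up roughly as follows. The directory layout, batch size and the specific choice of sparse categorical cross-entropy are assumptions made for the sketch, not details taken from the paper.

```python
# Sketch of data loading and training with the stated hyper-parameters
# (Adam, learning rate 0.0001, cross-entropy loss, 10 epochs).
# The directory layout ("data/train", "data/val" with one sub-folder per
# class) and the batch size are assumptions for illustration only.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=(180, 180),
    color_mode="grayscale",
    batch_size=32,
    label_mode="int",
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val",
    image_size=(180, 180),
    color_mode="grayscale",
    batch_size=32,
    label_mode="int",
)

# `build_covid_cnn` is the sketch defined after the model description above.
model = build_covid_cnn()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    # 2 output logits with integer labels -> sparse categorical cross-entropy
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

history = model.fit(train_ds, validation_data=val_ds, epochs=10)
```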
Experimental Results
After training the model, the training results are shown in Figure 2 and Figure 3. It can be observed that with each epoch the accuracy almost always increases, accompanied by a decrease in the loss. Meanwhile, the accuracy in the last epoch reached about 98%. This shows that the model has learned this dataset well. Test loss, test accuracy, validation loss and validation accuracy can be seen in Table 1.
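Curves like those in Figures 2 and 3 can be drawn from the history object returned by Keras during training; the following is a minimal, purely illustrative plotting sketch and not the authors' own figure code.

```python
# Illustrative plotting of accuracy/loss curves similar to Figures 2 and 3,
# using the `history` object returned by model.fit in the sketch above.
import matplotlib.pyplot as plt

epochs = range(1, len(history.history["loss"]) + 1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(epochs, history.history["accuracy"], label="training accuracy")
ax1.plot(epochs, history.history["val_accuracy"], label="validation accuracy")
ax1.set_xlabel("epoch"); ax1.set_ylabel("accuracy"); ax1.legend()

ax2.plot(epochs, history.history["loss"], label="training loss")
ax2.plot(epochs, history.history["val_loss"], label="validation loss")
ax2.set_xlabel("epoch"); ax2.set_ylabel("loss"); ax2.legend()

fig.tight_layout()
fig.savefig("training_curves.png", dpi=150)
```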
Discussion
The results show that the CNN model recognizes this COVID-19 dataset well. The algorithm is intended to be developed into a GUI application that can be run directly on computers and mobile devices, which would, to a certain extent, help doctors and patients to quickly understand the findings. Although the model is already able to identify basic COVID-19 infected radiographs, more complex and tricky cases may be encountered in real clinical practice.
There is still a long way to go before it can be used clinically. The method still needs improvement, such as increasing the amount of training data. Further optimization of the robustness of the model, for example through additional regularization, may be needed in the future to improve its ability to handle complex clinical situations.
Conclusion
By learning from chest radiographs of COVID-19 patients, this project builds a machine learning method that can recognize some signs of COVID-19 infection. The technique, which is based on a 2D convolutional neural network, achieves above 90% accuracy in the experiment. However, due to the limitations of the dataset, there is still room for improving the generalizability and robustness of the model. In the future, more real clinical data will be collected as training data in cooperation with professional medical institutions, which may address this problem.
Figure 2. The training accuracy and loss during the training process.
Figure 3. The validation accuracy and loss during the training process.
Table 1.
| 2023-07-12T15:37:03.996Z | 2023-05-25T00:00:00.000 | {
"year": 2023,
"sha1": "fe8d51e9c4d25503cfa8f7f56b339dfd5aa92aab",
"oa_license": "CCBY",
"oa_url": "https://tns.ewapublishing.org/media/8d18a090d817419d907cbdc18034e11f.marked_2ebzwji.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7ec8533f43f5d33b2ebe85ee8dff010d24c6c11b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
3738355 | pes2o/s2orc | v3-fos-license | miEAA: microRNA enrichment analysis and annotation
Similar to the development of gene set enrichment and gene regulatory network analysis tools over a decade ago, microRNA enrichment tools are currently gaining importance. Building on our experience with the gene set analysis toolkit GeneTrail, we implemented the miRNA Enrichment Analysis and Annotation tool (miEAA). MiEAA is a web-based application that offers a variety of commonly applied statistical tests such as over-representation analysis and miRNA set enrichment analysis, which is similar to Gene Set Enrichment Analysis. Besides the different statistical tests, miEAA also provides rich functionality in terms of miRNA categories. Altogether, over 14 000 miRNA sets have been added, including pathways, diseases, organs and target genes. Importantly, our tool can be applied for miRNA precursors as well as mature miRNAs. To make the tool as useful as possible we additionally implemented supporting tools such as converters between different miRBase versions and converters from miRNA names to precursor names. We evaluated the performance of miEAA on two sets of miRNAs that are affected in lung adenocarcinomas and have been detected by array analysis. The web-based application is freely accessible at: http://www.ccb.uni-saarland.de/mieaa_tool/.
INTRODUCTION
miRNAs are small non-coding RNA molecules of approximately 22 nucleotides in length. Since their discovery in Caenorhabditis elegans, they have been established as important regulators of biological and pathological processes (1,2). miRNAs regulate messenger RNAs (mRNAs) post-transcriptionally by either halting translation or degrading the mRNA molecule. In addition to their importance in biological processes, varying miRNA expression levels in specific diseases make them valuable biomarker candidates, such as for Alzheimer's disease (AD) (3). The overall importance of miRNAs is emphasized by a study of Lewis et al., which estimates that as much as 30% of human genes are regulated by miRNAs (4). As the amount of data and applications in miRNomics is increasing rapidly, driven by the fast advances in next-generation sequencing (NGS), corresponding tools supporting the analysis are being implemented. These include miRNA prediction tools for NGS reads (5,6), prediction of miRNA targets (7)(8)(9), and miRNA enrichment (10,11) and annotation tools (12). Most tools that provide enrichment analyses for miRNAs first convert them to their targets and then perform the analysis on the target genes (13)(14)(15)(16). Recently, it has been shown that this approach is biased and leads to inaccurate results (17). To overcome this bias, Godard and van Eyll suggested converting the gene categories into miRNA categories and then performing the enrichment analysis directly at the level of miRNAs. Among the most widely used tools implementing this approach is the 'Tool for annotations of human miRNAs' (TAM) (18), which is based on over-representation analysis (ORA) and miRNA categories collected from databases such as HMDD (19) or miRBase (20) and additional publications.
Here, we present miEAA, which relies on the established statistical framework of the gene set analysis toolkit GeneTrail (21). GeneTrail finds significantly enriched categories for gene sets and annotates them accordingly. Like GeneTrail, miEAA also provides the two most common statistical analyses, ORA and GSEA (gene set enrichment analysis) (22). In contrast to GeneTrail, miEAA is tailored for miRNA input as it supports miRNA precursor names as well as mature miRNA names. To offer broad functionality and applicability, we collected about 14 000 different miRNA sets from the literature and various important miRNA databases and integrated them into miEAA. Furthermore, we added tools for the conversion of miRNAs and precursors into different miRBase versions, as well as a converter between miRNA names and precursor names. To exemplify the functionality of miEAA, we analyzed a set of lung adenocarcinoma related miRNAs and also compared the findings to an analysis with TAM. miEAA is designed as a web-based application and is freely accessible at http://www.ccb.uni-saarland.de/mieaa_tool/.
Workflow
The workflow of our miEAA tool is presented in Figure 1. We differentiate between miRNA precursor names and mature miRNA names as input, because we collected the data specific for those entities if possible. Since our annotations rely on the miRBase v21 names, users can also convert their miRNA/precursor names from earlier miRBase versions to version 21 using an additional tool linked on the miEAA homepage. In addition, users can also convert their miR-NAs to precursor names or vice versa. After choosing the input type, the user can upload their test set and then select between ORA or GSEA (gene set enrichment analysis). The ORA also demands a reference/background set that the user can also upload. If this option is skipped, all annotated miRNAs/precursors we collected are used as reference set. If performing a GSEA, the input set of miRNAs/precursors must be sorted by some criterion, e.g. expression level. In a last step, the user can choose the miRNA/precursor categories that they want to analyze with miEAA, as well as set some statistical parameters such as significance threshold and P-value adjustment. After miEAA has finished the computation, the results are illustrated in tabular form on an HTML page.
Integrated resources
Since miEAA is intended to serve as a miRNA annotation and enrichment tool, we collected information from different miRNA-specific tools and databases (miRBase, HMDD2, miRWalk, miRTarBase) (20,(23)(24)(25) and our own publications (3,(26)(27)(28). An overview of the collected data for miRNAs and precursors is presented in Table 1. We included only subcategories in miEAA that contained at least two miRNAs or precursors, respectively. For mature miR-NAs, our tool offers 10 categories and 13 962 subcategories in total. For precursors, miEAA includes five categories and 792 subcategories. From our own publications, we collected the data sets diseases (3,28), age/gender (26) and immune cells (27). These collected literature miRNAs stem from our own studies, where miRNAs were analyzed as biomarkers in peripheral blood. For the disease category, we provide three data sets per disease: miRNAs found deregulated in this disease (significant P-value), miRNAs significantly upregulated in the disease, miRNAs significantly downregulated in the disease. The immune cell data set comprises the miRNAs we found expressed in at least three individuals for the respective cell type (CD14, CD15, CD19, CD3, CD56). In addition, we assembled the immune cell specific miRNAs showing an expression in at least three individuals of one cell type, but not in the others. For the age-and genderdependent miRNA set, we provide the miRNAs found generally correlated with age, as well as those that are positively and negatively correlated, and those that are deregulated between male and female, as well as those that are upregulated in male and upregulated in female. From miR-Base (version 21), we collected the information about chro-mosomal location, families, cluster (50 kB) and conserved miRNAs. A conserved miRNA is in our case a miRNA that has the same sequence in at least five different species. The other data sets from miRWalk 2.0 (downloaded in 2015/05), HMDD2 and miRTarbase (Release 4.5: Nov. 1, 2013) were downloaded from their homepage and the miRNA names were converted to the current miRBase v21. For miRWalk we downloaded the data from the 'validated miRNA-target interactions' (http://zmf.umm.uni-heidelberg.de/apps/zmf/ mirwalk2/holistic.html). Afterward, miRNAs were annotated to belong to a category if they were annotated by miR-Walk to target at least one gene in that category. The enrichment analysis is performed directly at the level of miR-NAs or precursors for all collected categories. The collected data sets can be downloaded from our home page (http: //www.ccb.uni-saarland.de/mieaa tool/downloads/). Additional information about the data sets can also be found in Supplementary Table S1.
Statistical analysis
Regarding the statistical analysis, miEAA implements the two most common approaches: the miRNA set based ORA and GSEA. ORA calculates the significance of categories for a test set and shows if the specific category is over-represented or under-represented for the test set with respect to a reference set. P-values are computed by applying Fisher's exact test. Given a test set of which k miRNAs belong to a certain category C and l do not belong to this category, and a reference set of which j miRNAs belong to C and m miRNAs do not belong to this category, we would expect to find k̂ = j · (k + l)/(j + m) elements in the test set belonging to C by chance. If k < k̂, the considered category is under-represented for the test set; otherwise it is over-represented. A standard significance threshold of 0.05 is applied to check if the computed P-value is statistically significant. In miEAA, the user can adjust this threshold arbitrarily.
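The following Python sketch illustrates this over-representation computation for a single category; it assumes, as one common convention, that the reference/background set contains the test set, and it uses the one-sided hypergeometric tail, which corresponds to the one-sided Fisher's exact test for enrichment.

```python
# Minimal sketch of an over-representation analysis (ORA) for one category.
from scipy.stats import hypergeom

def ora_pvalue(test_set, reference_set, category_members):
    test = set(test_set)
    ref = set(reference_set)            # background, assumed to include the test set
    cat = set(category_members) & ref   # only annotated background miRNAs count

    N = len(ref)            # background size
    K = len(cat)            # background miRNAs in the category
    n = len(test)           # test-set size
    k = len(test & cat)     # test-set miRNAs in the category

    expected = n * K / N    # expected overlap by chance (k-hat in the text)
    direction = "under-represented" if k < expected else "over-represented"
    # one-sided enrichment P-value: P(X >= k) for a hypergeometric draw
    p_enrich = hypergeom.sf(k - 1, N, K, n)
    return k, expected, direction, p_enrich
```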
While ORA relies on a partitioning of miRNAs into a test and a reference set, GSEA considers only a sorted list of miRNAs/precursors as input. Assume the sorted input list contains m miRNAs, of which l belong to a category C and m-l do not. While traversing the sorted input list from top to bottom, the running sum is increased by m-l whenever the considered miRNA belongs to C and decreased by l otherwise. The P-value is computed as the fraction of permutations that have a higher absolute maximum of the running sum than the considered test set. We have implemented a dynamic algorithm, which computes the exact P-value for this unweighted GSEA variant (22). After uploading a plain text file containing these names, the user can choose the enrichment method: ORA or GSEA. Depending on this choice, the user can provide their own reference set for ORA. For GSEA, the input list must be sorted by a meaningful criterion. In the last step, the user can choose the categories that should be analyzed, as well as the P-value significance threshold and adjustment method. After the computation, the results are presented in a tabular format on an HTML web site and can also be downloaded as an Excel sheet or a tab-separated text file. For both enrichment approaches, the user can set a lower threshold value for the number of miRNAs from the test set that must be contained in the categories. This parameter has no influence on the P-value computation and adjustment and is only applied afterward to reduce the number of categories displayed in the output. If a category contains fewer miRNAs from the test set than this threshold, the category is not displayed. Furthermore, we provide the option to adjust P-values for multiple testing by two standard approaches, Bonferroni and Benjamini-Hochberg (29). However, the user can also perform the computation without P-value adjustment.
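A minimal Python sketch of the unweighted running-sum statistic described above is shown below; it estimates the P-value with random permutations, whereas miEAA computes this P-value exactly with the dynamic-programming algorithm of (22).

```python
# Unweighted GSEA running-sum statistic with a simple permutation P-value.
import random

def running_sum_max(sorted_mirnas, category):
    members = [m in category for m in sorted_mirnas]
    l = sum(members)                # miRNAs in the category
    n = len(sorted_mirnas)          # total input miRNAs (m in the text)
    step_in, step_out = n - l, -l   # increments for members / non-members
    s, best = 0, 0
    for is_member in members:
        s += step_in if is_member else step_out
        best = max(best, abs(s))
    return best

def gsea_pvalue(sorted_mirnas, category, n_perm=10000, seed=0):
    observed = running_sum_max(sorted_mirnas, category)
    rng = random.Random(seed)
    shuffled = list(sorted_mirnas)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if running_sum_max(shuffled, category) >= observed:
            hits += 1
    return observed, hits / n_perm
```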
Results representation
miEAA visualizes the computed results on a clearly arranged web page in tabular form. In addition, we provide the possibility to download the results as a tab-separated text file or as an Excel file. The respective results table contains the category, subcategory, P-value, expected and observed number of miRNAs, and the respective miRNAs per subcategory (Figure 2). Figure 2 shows an example ORA for a miRNA input set; the tabular output contains the category (e.g. target genes from miRTarBase), the subcategories (e.g. a certain target gene), the P-value, the miRNAs from the test set that are contained in the subcategory, the type of enrichment, the number of miRNAs that we would expect to find and the number of miRNAs that we actually observed. A red or green arrow in the ORA output provides a visual indication of over-representation or under-representation, respectively. Furthermore, the results table is freely sortable and filterable. The filters can also be combined, e.g. the user can filter all results having a P-value '<0.001' and an observation value '>4'. In addition, we provide a link to a view where we list for each miRNA its significant subcategories, sorted by the miRNA with the most significant annotations on top (see Supplementary Figure S1). Furthermore, we also collected sequence properties for the miRNAs/precursors in the input list (Supplementary Figure S2), e.g. the minimum free energy (MFE) as computed by RNAfold (2.1.9) from the Vienna package (30). If a miRNA has several precursors, we list all of them.
Test data for enrichment analysis
For testing the utility of our tool, we downloaded a publicly available data set from GEO (GSE48414). This data set contains array data of lung adenocarcinoma patients and normal controls and was published by Bjaanaes et al. in 2014 (31). We downloaded the raw data of the Agilent arrays (miRBase v16) and extracted the gTotalProbeSignal, which is the average of all the background-corrected signals for each replicated probe. These values were summed up to calculate the total expression value for each miRNA per sample. Normalization was applied by using the quantile normalization implemented in the preprocessCore package of the programming language R. Finally, we performed a log2 transformation of the data. For computing the differentially expressed miRNAs, we used 20 lung cancer samples and 20 matched controls. We computed the median fold changes, Wilcoxon-Mann-Whitney P-values and AUC values for all miRNAs that showed expression in at least half of the samples of one group (434 miRNAs, Supplementary Table S2). After conversion of the miRBase v16 miRNAs into v21, 423 miRNAs remained. We sorted this list by descending AUC values and used this list of 423 miRNAs for miRNA set enrichment analysis. As a second example, we extracted miRNAs with a fold change of more than 1.5 in tumor samples compared to normal samples and a significant adjusted two-tailed P-value (<0.05) in the Wilcoxon-Mann-Whitney test. This list contained 49 miRNAs and was further converted into precursors with our miRNA-precursor conversion tool. For this conversion, we allowed non-unique mappings, which resulted in a set of 55 precursors.
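The per-miRNA statistics described above can be reproduced along the following lines; the sketch below uses a hypothetical log2 expression matrix and scipy/scikit-learn instead of the original R workflow, and the exact definition of the median fold change is an assumption.

```python
# Minimal sketch: median fold change, Wilcoxon-Mann-Whitney P-value, and AUC per miRNA.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
tumor = rng.normal(8.0, 1.0, size=(434, 20))    # log2 expression, 20 tumor samples
normal = rng.normal(7.5, 1.0, size=(434, 20))   # 20 matched controls (hypothetical)

results = []
for i in range(tumor.shape[0]):
    t, n = tumor[i], normal[i]
    fold_change = 2 ** (np.median(t) - np.median(n))      # one reading of "median FC" on log2 data
    pval = mannwhitneyu(t, n, alternative="two-sided").pvalue
    labels = np.r_[np.ones_like(t), np.zeros_like(n)]
    auc = roc_auc_score(labels, np.r_[t, n])
    results.append((i, fold_change, pval, auc))

# e.g. sort miRNAs by descending AUC to build the input list for GSEA
results.sort(key=lambda r: r[3], reverse=True)
```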
RESULTS AND DISCUSSION
As mentioned in the Introduction, many enrichment tools are already available, at least for miRNA target genes. Some popular tools such as DIANA miRPath (16), miTalos (32) and miRTar (15) work on predicted or validated targets of miRNAs, an approach that has been shown to be biased (17). In addition, they mostly provide only one of the two standard enrichment analyses or have restrictions on the input size (e.g. 100 miRNAs for DIANA miRPath). Some other tools such as miSEA (10) or miTEA (11) provide alternative approaches, but they require specialized input. The input for miTEA is a list of ranked genes, from which it identifies the miRNAs that are significantly associated with these genes. miSEA requires the upload of a control and a treatment file with expression data and, judging from the example files, seems to work on a mixture of miRNA names and precursor names. It is not obvious how, or if, a conversion of miRNAs to precursors is done. Other tools such as miRNApath (13) and CORNA (33) are only available as R packages and are not easily applicable for non-expert users. Therefore, in this study we primarily provide a comparison of our tool to TAM (18), which is the most similar in functionality.
As a first use case, we explored a set of 49 miRNAs we found significantly upregulated (FC > 1.5, adjusted P-value < 0.05) when comparing 20 lung cancer samples and 20 matched controls (Supplementary Table S2). With these miRNAs we performed an ORA with miEAA using the default parameters and uploading the human miRNAs spotted on the Agilent array as a reference set. This analysis resulted in 59 significant categories (Supplementary Table S3). Among the most significant categories, we found for example the target genes HOXB5, KLF11, ZEB1, RASSF2 and PTPRD. The HOX genes encode transcription factors that regulate embryonic morphogenesis and have previously been described to be deregulated in lung cancer (34). KLF11 is also a transcription factor and tumor suppressor, which plays an important role in TGF-β-induced growth inhibition in pancreatic cells (35). ZEB1 and ZEB2 are key regulators of the epithelial-to-mesenchymal transition, a process that contributes to the formation of metastases in cancer, and have also been described in lung cancer (36-39). Loss of RASSF2 is associated with an enhanced tumorigenicity of lung cancer cells (40). PTPRD is a candidate tumor suppressor gene in lung cancer (41). Besides these target genes, we also found the pathways 'EGF EGFR signaling' and 'small cell lung cancer' significantly enriched.
To compare our tool to TAM, we performed the analysis with this tool for the same set of 49 miRNAs with the following parameters: annotation set version 2, FDR adjustment, and the same reference set as above. For these data, the TAM analysis finds in total 14 categories significantly enriched: Learned Helplessness, mir-8 family, Pain, Carcinoma, Cell cycle related, carbohydrate metabolism, Glomerulonephritis, Breast Neoplasms, hsa-mir-200a cluster, hsa-mir-182 cluster, Nephrosclerosis, Carcinoma Renal Cell, Carcinoma Spindle Cell, and Cholangiocarcinoma, which seem to be rather unspecific and not necessarily related to lung cancer. The different annotation and data handling can explain the differences in the results between TAM and miEAA. TAM converts the input into precursor names by cutting off the -3p/-5p/* ending of the name and renaming 'miR' into 'mir'. Thus, if data sets contain two mature miRNAs of the same precursor, these would be merged, introducing a potential bias. Furthermore, this way of conversion may cause challenges if mature miRNAs stem from different precursors; e.g. in our data set of 49 miRNAs, we have four cases with several potential precursors. As an example, the miRNA hsa-miR-9-3p can stem from the precursors hsa-mir-9-1, hsa-mir-9-2 or hsa-mir-9-3, and the miRNA hsa-miR-7-5p from the precursors hsa-mir-7-1, hsa-mir-7-2 or hsa-mir-7-3. Another problem we noticed is that TAM sometimes does not recognize official precursor names. When using hsa-let-7f-1 and hsa-let-7f-2 as input, it seems that TAM does not recognize them correctly, although it provides annotations when hsa-let-7f is used as input. Therefore, when using TAM these conversion steps have to be kept in mind and users should take care of the conversion before uploading the data. To facilitate mapping tasks we implemented conversion tools between different miRBase versions, as well as a tool that converts miRNA names to precursor names and vice versa. These supporting tools can be accessed from the miEAA homepage. To show that there are still many differences between miEAA and TAM, even if we provide TAM with correctly converted precursors and although these tools have an overlap of annotation resources, we converted the 49 miRNAs into their corresponding 55 precursors, as well as the reference set, and repeated the above analysis with the precursor input for both tools. While TAM still finds the same 14 categories significant, miEAA finds 77 categories significantly enriched. Of course, there is now an overlap of the findings of miEAA and TAM, but miEAA also finds categories such as 'carcinoma, non-small-cell lung' and 'lung neoplasms' significantly enriched (Supplementary Table S4).
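A conversion step of this kind can be sketched as a simple lookup that optionally keeps non-unique mappings; the mapping table below lists only the example precursors mentioned above plus one additional single-precursor case for illustration, whereas a real conversion would be derived from the full miRBase v21 tables.

```python
# Minimal sketch of a mature-miRNA -> precursor conversion with non-unique mappings.
mature_to_precursors = {
    "hsa-miR-9-3p": ["hsa-mir-9-1", "hsa-mir-9-2", "hsa-mir-9-3"],
    "hsa-miR-7-5p": ["hsa-mir-7-1", "hsa-mir-7-2", "hsa-mir-7-3"],
    "hsa-miR-21-5p": ["hsa-mir-21"],
}

def to_precursors(mature_names, allow_non_unique=True):
    precursors = []
    for name in mature_names:
        hits = mature_to_precursors.get(name, [])
        if not allow_non_unique and len(hits) > 1:
            continue  # drop ambiguous mappings when a unique mapping is required
        precursors.extend(hits)
    # deduplicate while preserving order
    return list(dict.fromkeys(precursors))

print(to_precursors(["hsa-miR-9-3p", "hsa-miR-21-5p"]))
# ['hsa-mir-9-1', 'hsa-mir-9-2', 'hsa-mir-9-3', 'hsa-mir-21']
```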
In contrast to ORA, GSEA is a threshold-free approach that does not require a reference set as background distribution. The analysis described above is a frequently applied procedure for extracting miRNAs from array experiments that are upregulated according to an arbitrary threshold. The results of the ORA depend largely on this chosen threshold. To overcome this issue, a GSEA can be performed by sorting the expressed miRNAs on the array, e.g., by their AUC values or fold changes. As mentioned in the Methods section, we sorted the list of 423 miRNAs from the same lung cancer study by their AUC values and performed a GSEA using the default settings in miEAA. In total, this analysis yielded 148 significant categories, with most hits in pathways (69), Gene Ontology terms (34), diseases (21) and targets (11) (Supplementary Table S5). Among the pathways, we find some that are directly associated with specific cancers (Non-small cell lung cancer, Melanoma, Glioma, Prostate cancer) or are known to be often influenced in cancer development and progression (ErbB signaling pathway, p53 pathway, Focal adhesion, Jak STAT signaling pathway, p38 MAPK Signaling Pathway, PI3 kinase pathway, Apoptosis, Wnt signaling pathway, Adherens junction). The GO terms having the most annotated miRNAs are involved in transcription processes. In this context, we also find transcription factors among the significant targets (HOXB5, TCF7L1, KLF11). Another interesting finding is that most of the miRNAs that are downregulated in the tumor tissue are conserved miRNAs.
Summarizing our analysis of the lung cancer data set of Bjaanaes et al., we showed that the ORA as well as GSEA in miEAA yielded interesting results and highlighted some pathways and targets that may be influenced in lung cancer development.
CONCLUSION
The development of miRNA enrichment tools is rapidly gaining importance. First solutions are already available, including the TAM tool, which offers ORA for miRNA precursors. Here, we presented miEAA, a comprehensive miRNA enrichment tool in terms of statistical tests and miRNA/precursor categories. While the TAM tool contains 362 categories only for precursors, miEAA includes over 14 000 categories, defined both for precursors and mature miRNAs. | 2018-04-03T02:38:06.899Z | 2016-04-29T00:00:00.000 | {
"year": 2016,
"sha1": "dd610dad87413df9644d3ff6ac1d1233257ec181",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/44/W1/W110/18787730/gkw345.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd610dad87413df9644d3ff6ac1d1233257ec181",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
]
} |
267264437 | pes2o/s2orc | v3-fos-license | Design of Preamplifier for Ultrasound Transducers
In diagnostic ultrasound imaging applications, preamplifiers are used as first-stage analog front-end amplifiers for ultrasound transducers because they can amplify weak acoustic signals generated directly by ultrasound transducers. For emerging diagnostic ultrasound imaging applications, different types of preamplifiers with specific design parameters and circuit topologies have been developed, depending on the type of ultrasound transducer. In particular, the design parameters of the preamplifier, such as the gain, bandwidth, input- or output-referred noise components, and power consumption, have a trade-off relationship. Guidelines on the detailed design concept, design parameters, and specific circuit design techniques of preamplifiers used for ultrasound transducers are outlined in this paper, aiming to help circuit designers and academic researchers optimize the performance of ultrasound transducers used in diagnostic ultrasound imaging applications and to suggest research directions.
In diagnostic ultrasound imaging applications, the ultrasound systems are categorized into transmitters, receivers, and transducers [8][9][10].Figure 1 shows the transducer, transmitter, and receiver in the ultrasound system used to describe the locations of the components of the preamplifier and time-gain compensation amplifier [11,12].The computer-controlled digital-to-analog converter (DAC) produces low-voltage single or multiple-cycle pulse signals [13,14].High-voltage pulse signals, amplified by the power amplifier in the transmitter, trigger the transducer through an expander or switch [15].A limiter or switch protects the receiver from high-voltage or high-power signals generated by the power amplifiers because of the shared path between the transmitter and receiver [16].
The preamplifier is one of the first-stage receiver electronic devices after the transducer that amplifies weak acoustic signals with fewer noise effects [17]. Considering that a transducer with low sensitivity requires a high input dynamic range of the preamplifier, the preamplifier used for ultrasound applications is a Class-A-type amplifier that continuously conducts voltage and current [18]. This preamplifier operates continuously during pulse transmission and echo reception; therefore, switches are utilized to block unwanted pulse signals and reduce power consumption; switches in the IC are normally implemented using voltage-controlled metal-oxide-semiconductor field-effect transistor (MOSFET) switches to save occupied chip space [19,20]. The low-voltage, current, or power signals received from the transducer are amplified by the preamplifier and time-gain-compensation amplifier (TGCA) in the receiver and then digitized with an analog-to-digital converter (ADC) to obtain the images [21]. The TGCA needs to amplify the weak signals further because the ultrasound signals are exponentially attenuated depending on the target distance [22]. In Figure 1, the transmitting and receiving beamforming components of the ultrasound transducer array are excluded to simplify the description of the entire ultrasound system. In a diagnostic photoacoustic system, the transmitter side is replaced by light-generating sources such as lasers, light-emitting diodes, or radio frequency sources [23-25].
The output of the capacitive micromachined ultrasonic transducer (CMUT) device is current; thus, a transimpedance amplifier is used to convert the current generated from the CMUT at the input to a voltage at the preamplifier output [26]. The output of the piezoelectric transducer is a voltage; therefore, a low-noise voltage operational amplifier was used [27]. Therefore, the preamplifiers were designed as voltage and current (transimpedance) amplifiers for the piezoelectric transducer and CMUT, respectively. The preamplifier for piezoelectric transducers is also known as a low-noise amplifier (LNA) [26].
Section 2 describes the design parameters of the preamplifiers, such as voltage or current gain, bandwidth, direct current (DC) power consumption, and input- or output-referred noises or noise figures. Section 3 presents the topology, design parameters, and circuit design techniques of previously reported preamplifiers for specific diagnostic ultrasound imaging applications such as CMUT, piezoelectric transducer, and imaging. Section 4 discusses the design topologies and criteria for the currently developed preamplifiers used for diagnostic ultrasound imaging applications and summarizes this review.
Design Parameters of the Preamplifiers for Ultrasound Transducer Types
The design parameters of preamplifiers for ultrasound transducer types are described in this section. Figure 2 shows the relationship between the design parameters of the preamplifiers used for diagnostic ultrasound imaging applications, because design engineers for ultrasound components or systems need to consider the trade-off relationships at the design level. The design parameters of the preamplifiers were based on information from several textbooks on analog circuits, ICs, amplifiers, and ultrasound systems [28-33]. These design parameters are useful for circuit design engineers because some ultrasound systems require specific performance parameters.
The gain of the preamplifier is an important parameter because the weak echo signal generated by the transducer must be amplified. The voltage or current gain parameter indicates the extent to which the input signals are amplified [34]. Owing to the limited space for intravascular ultrasound (IVUS) applications, most research has focused on developing capacitive micromachined ultrasonic transducer devices with integrated circuits (ICs) closely attached between the CMUT and the IC [35]. For IVUS, small ultrasound transducers are required because of the limited space, so the received echo signals are very weak and a high preamplifier gain is preferable. The bandwidth of the preamplifier is typically at least twice that of the transducer or higher because the harmonic imaging mode requires the use of second- or higher-order harmonic components to improve the image resolution [36]. The bandwidth can be increased, while the gain is reduced, if the preamplifier has an operational amplifier topology [37]. A preamplifier design with a high gain has high power consumption because a high gain requires a high biasing current in the preamplifier [38,39]. While a preamplifier with high linearity is desirable for handling extremely weak acoustic signals from transducers, these signals affect the maximum gain performance of the amplifier.
The input third-order intercept point (IIP3) or the output third-order intercept point (OIP3) is the intercept point at which the component at the fundamental frequency and third-order intermodulation distortion points meet [40].They are useful parameters to show the linearity of the preamplifiers.The higher the IIP3 or OIP3, the more linear the preamplifier works.Therefore, the circuit designers can increase the voltage gain before the intermodulation distortion is started [40].In the harmonic imaging mode in the diagnostic ultrasound machine, high linearity is preferable because the unwanted harmonics need to be filtered out [26].
The direct current (DC) power consumption parameter was used because the preamplifier is a power-intensive electronic component when considering the receiver construction in a wireless ultrasound machine [41,42]. In addition, considering that the preamplifier needs to enhance the weak signals, it needs to obtain a high gain while sacrificing DC power consumption and occupied area [43,44]. For smartphone-based ultrasound systems with array transducers, area and power consumption are critical issues owing to the limited space and structures, because unnecessary heat generation causes performance degradation during stable operation [45].
The input- and output-referred noises are the noise voltage and currents that generate the same output noises as the practical preamplifier generates if the ideal noise source is an input signal of a noise-free preamplifier [28]. The output-referred noise voltage of the preamplifier can be obtained by multiplying the gain and the input-referred noise voltage of the preamplifier. The parameters of the input- and output-referred noise currents indicate the noise components of the preamplifier [46-48] and are useful for demonstrating the noise contribution when amplifying weak echo signals through the preamplifier. Instead of input- or output-referred noise currents, a noise figure can be used [49]. The preamplifier design is important because the gain of the first-stage amplifier determines the noise contribution of the subsequent stages in the receiver of the ultrasound system [50]. The noise figure (NF) equation is widely used in preamplifier design because it describes the noise contribution of the preamplifier [51]. The cascaded noise figure can be written as

NF_total = NF_1 + (NF_2 - 1)/A_1 + (NF_3 - 1)/(A_1 A_2) + ... + (NF_n - 1)/(A_1 A_2 ... A_(n-1)) (1)

As shown in (1), A_1 needs to be as high as possible to reduce the overall NF of the receiver [52].
where NF 1 and NF n are the noise figures of the first-and n-stage preamplifiers, respectively; A 1 and A n−1 are the gains of the first-and n − 1-stage preamplifiers, respectively.
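The practical consequence of Equation (1) can be illustrated with a short Python sketch of the standard cascaded (Friis) noise-figure relation; the stage noise figures and gains used here are illustrative values, not measurements from any cited design.

```python
# Cascaded noise figure of a receive chain: the noise contribution of every
# stage after the first is divided by the gain preceding it, so a high
# preamplifier gain A1 dominates the overall NF.
import math

def cascaded_nf_db(stage_nf_db, stage_gain_db):
    """stage_nf_db[i], stage_gain_db[i]: noise figure and gain of stage i in dB."""
    f = [10 ** (nf / 10) for nf in stage_nf_db]     # linear noise factors
    g = [10 ** (a / 10) for a in stage_gain_db]     # linear power gains
    total, gain_so_far = f[0], 1.0
    for i in range(1, len(f)):
        gain_so_far *= g[i - 1]
        total += (f[i] - 1) / gain_so_far
    return 10 * math.log10(total)

# illustrative chain: preamplifier, TGC amplifier, ADC driver
print(cascaded_nf_db([3.5, 8.0, 12.0], [24.0, 20.0, 0.0]))  # about 3.5 dB, set by stage 1
```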
The following section presents a detailed schematic of the preamplifiers used in previously published articles on ultrasound applications.
Design Analysis of the Preamplifiers for Ultrasound Transducers
This section describes the design and schematic analysis of the design parameters of preamplifiers for specific ultrasound transducers, such as the CMUT, the piezoelectric transducer, and imaging applications. The labels and symbols sometimes differ between the selected articles; therefore, all schematic diagrams of the preamplifiers in this review paper were re-labeled and re-sketched, with some of the preamplifier designs also simplified so that academic ultrasound researchers or design engineers can understand the operating mechanisms more clearly. In the following sections, the same labels are used for input and output. B, N, and P indicate bipolar, N-channel metal-oxide-semiconductor (NMOS), and P-channel metal-oxide-semiconductor (PMOS) transistors, respectively, while R, C, and I represent resistors, capacitors, and current sources, respectively.
Preamplifiers for CMUT Applications
Figure 3 shows a schematic of the CMUT device preamplifier.The preamplifier was constructed using a common-source amplifier (N 1 and I DD1 ), followed by a source follower (N 2 and I DD2 ) with a feedback resistor (R 1 ).The measured gain and bandwidth of the preamplifier were 215 kΩ and 25 MHz, respectively [53].
Figure 4 shows a schematic of the operational amplifier with resistor feedback loops (R2 and R3) of the CMUT device. The 0.8-µm CMOS process was used; thus, the DC supply voltage (VDD) is 5 V [54]. This operational amplifier comprises two stages. In the first stage, a differential cascade amplifier (B1, B2, N1, N2, and P1) is used. In the second stage, a source follower (P2 or N3) was used to reduce the output impedance of the amplifier. A resistor (R1) and a capacitor (C1) were used to reduce the phase shift of the frequency response [55]. The measured bandwidth, DC power consumption, and input noise voltage were 11 MHz, 2 mW, and 6.45 nV/√Hz, respectively [54].
Figure 5 shows the schematic of the common-source amplifier followed by the source follower with resistor feedback for CMUT array transducer applications. The 1.5-µm CMOS process was used; thus, the DC power supply is 5 V [56]. MOSFET switches are used to turn off the power [57]. The amplifier comprises a common-source amplifier (N1 and P3), followed by a source follower (N2 and N4). A source follower was used to reduce the impedance, thus increasing the amplifier bandwidth [57], which can be expressed by Equation (2).
where C1 is the feedback loop capacitance combined with the input parasitic capacitance.
The amplifier gain depends on the feedback resistance. The input-referred noise is inversely proportional to the feedback resistance (R1); therefore, a large R1 value is preferable [56]. However, the bandwidth is then reduced. The bandwidth can be increased by decreasing the feedback resistance (R1) and the feedback loop capacitance combined with the input parasitic capacitance (C1) [56]. However, the input-referred noise current density is proportional to √(4kT/R1); thus, a relatively large feedback resistor is desirable if the input-referred noise current is an important design parameter [58]. The measured gain, input-referred noise current, bandwidth, and DC power consumption were 4.3 kΩ, 1.2 to 2.1 mPa/√Hz, 10 MHz, and 4 mW, respectively [56]. Figure 6 shows a schematic of the common-source amplifier (N1), followed by a source follower (N2 and N3) with a transistor feedback loop (N4 and N5) for the CMUT device. The 0.18-µm CMOS process was used [59]. The source-connected NMOS transistors (N4 and N5) were used for the transistor feedback loop to function as resistances controlled by the DC voltage (Vc). This topology is useful for reducing the chip area because physical resistors require large chip space [60-62]. The measured transimpedance gain, bandwidth, and input-referred noise are 951 dBΩ, 12 MHz, and 3.5 pA/√Hz, respectively [59].
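The R1 trade-off described above for the resistive-feedback preamplifier can be made concrete with a small numerical sketch; the thermal-noise current density sqrt(4kT/R1) follows directly from the text, while the single-pole bandwidth estimate 1/(2*pi*R1*C1) is an assumed simplification rather than the exact form of Equation (2).

```python
# Feedback-resistor trade-off in a resistive-feedback preamplifier:
# lower noise (larger R1) versus wider bandwidth (smaller R1*C1).
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K

def feedback_resistor_tradeoff(r1_ohm, c1_farad):
    noise_density = math.sqrt(4 * K_B * T / r1_ohm)       # A/sqrt(Hz)
    bandwidth_hz = 1.0 / (2 * math.pi * r1_ohm * c1_farad) # assumed single-pole model
    return noise_density, bandwidth_hz

for r1 in (10e3, 100e3, 1e6):                              # example feedback resistances
    i_n, bw = feedback_resistor_tradeoff(r1, c1_farad=1e-12)
    print(f"R1 = {r1:8.0f} ohm  i_n = {i_n:.2e} A/sqrt(Hz)  BW = {bw/1e6:.2f} MHz")
```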
Figure 6. Common-source amplifier followed by a source follower with a transistor feedback loop for the CMUT device (biasing circuits are not shown for simplified analysis). Adapted with permission from Ref. [59]. Copyright 2014, IEEE.
Figure 7a,b show the schematics of the operational amplifier with a resistor feedback loop for CMUT device applications.The 0.18-µm CMOS process was used [63].The preamplifier was constructed using five operational amplifiers with a feedback resistor (R 1 ) and a Miller capacitor (C 1 ), as shown in Figure 7b.The Miller capacitor compensates for the pole and zero in the frequency response [64,65].A current mirror (P 1 , P 2 , and P 3 ) was used to reduce the power supply noise [66].The output nodes (R 2 and C 2 ) are the input resistance and capacitance of the next stage of the electronics (ADC), respectively [63].The input-referred current noise of the operational amplifier with resistor and capacitor feedback loop can be expressed in Equation (3) [63].
where R CU is the equivalent resistance of the CMUT and C input is the combined equivalent capacitance of the CMUT and the input parasitic capacitance at the input port.
The input-referred current noise is inversely proportional to the feedback resistance (R1) and input capacitance (Cinput). The measured bandwidth, DC power consumption, and input-referred noise current were 4.5 MHz, 370 µW, and 1.5524 pA/√Hz, respectively [63].
Figure 8 shows a schematic of the operational amplifier with a feedback loop (R1 and C1).The 0.18-µm CMOS process was used [67].Several MOSFET switches were used to reduce DC power consumption if needed.Therefore, the active DC power consumption is 14.3 mW, whereas the inactive DC power consumption is 1.5 mW [67].The transimpedance gain of the preamplifier (A Z ) is expressed by Equation (4) [67].
where Zinput and Zfeedback are the input and feedback loop impedances, respectively; f and A are the operating frequency and open-loop gain of the operational amplifier, respectively. The width of the NMOS transistors (N1 and N2 = 2.3 mm) was sufficiently large to obtain a high current in the biasing circuit [67]. Differential pairs and cascade stages were used to boost the gain and reduce the power supply noise, respectively, to achieve the high transimpedance gain (96.6 dBΩ) [67]. The Miller compensation capacitance (C2 = 5.4 pF) was used to increase the bandwidth; thus, the measured −3 dB bandwidth was 5.2 MHz [67]. The source follower (N6 = 135 µm/0.18 µm and N7 = 50 µm/0.63 µm) was used to reduce the output impedance, thus reducing the signal reflection to the next-stage component [67].
If the open loop gain of the amplifier (A) is large, the gain of the operational amplifier with feedback loop is dependent on the values of the resistance (R 1 = 76 kΩ) and capacitance (C 1 = 0.45 pF).The NF of the operational amplifier with feedback can be expressed by Equation (5) [67].
where Rinput and R1 are the input and feedback loop resistances, respectively; Iinput and Ioutput are the input and output currents, respectively; and Vinput is the input voltage.
In Equation (5), a large feedback loop resistance (R1) is desirable to reduce the NF value. The measured NF of the operational amplifier with a feedback loop was 10.3 dB at 3 MHz [67].
Figure 9 shows a schematic of the operational amplifier with a voltage-controlled resistance (N 5 ) for CMUT device applications.
Voltage-controlled resistance was implemented using the NMOS transistor to save space [69,70]. The resistance can be expressed using Equation (6)
Figure 9 shows a schematic of the operational amplifier with a voltage-controlled resistance (N5) for CMUT device applications.Voltage-controlled resistance was implemented using the NMOS transistor to save space [69,70].The resistance can be expressed using Equation (6) where µ N is the carrier mobility, C ox is the unit-area gate capacitance, W and L are the channel width and length of the transistor, respectively, V C is the bias voltage, V OUPUT is the output voltage, and V TH is the threshold voltage of the transistor.The input-referred current noise of the amplifier can be expressed by Equation (7) [68].
where C in and C PR are the input and parasitic interconnect capacitances of the amplifier, respectively; C CU and R CU are the CMUT equivalent circuit capacitance and resistance, respectively; g m is the transconductance; T is room temperature; and i d and i CU are the spectral densities of the current noise squares of the operational amplifier transistors and CMUT, respectively.The input-referred current noise of the preamplifier is proportional to the input and parasitic interconnect capacitances of the preamplifier and the CMUT equivalent circuit capacitance but is inversely proportional to the voltage-controlled resistance [68].
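As an illustration of the voltage-controlled feedback resistance of Equation (6) above, the following sketch uses the usual triode-region approximation R ≈ 1/(µnCox(W/L)(VC − VOUTPUT − VTH)); the device parameters are assumed values for illustration only.

```python
# Voltage-controlled resistance of an NMOS device operated in the triode region.
def triode_resistance(mu_cox, w_over_l, vc, voutput, vth):
    overdrive = vc - voutput - vth        # effective gate overdrive voltage
    if overdrive <= 0:
        raise ValueError("device is not turned on in the triode region")
    return 1.0 / (mu_cox * w_over_l * overdrive)

# illustrative values only: mu_n*Cox ~= 300 uA/V^2, W/L = 0.1, 0.3 V of overdrive
print(triode_resistance(300e-6, 0.1, vc=1.0, voutput=0.2, vth=0.5))
# -> about 111 kOhm; raising Vc lowers the resistance, lowering Vc raises it
```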
The transimpedance gain of the preamplifier (AZ) is expressed in Equation (8).
where ω 0 and Q are the radian bandwidth and quality factor of the amplifier.
As shown in Equations ( 7) and ( 8), a high resistance (R N5 ) can lower the input-referred current noise and increase the transimpedance gain of the preamplifier.The measured DC power consumption, transimpedance gain, bandwidth, and input current noise density were 6.6 mW, 3 MΩ, 20 MHz, and 90 fA/ √ Hz, respectively [68]. Figure 10 shows a schematic of the two-stage operational amplifier with a capacitive feedback loop (C 2 and C 3 ) for the CMUT applications.The 0.35-µm CMOS process was used; thus, the DC power supply (V DD ) is 3.3 V [71].In the first stage, an operational amplifier was constructed using NMOS (N 1 ) and PMOS (P 1 ) transistors.In the second stage, the source follower was constructed using NMOS (N 2 ) and PMOS (P 3 ) transistors.The operational amplifier comprises a capacitor feedback loop (C2 and C3).Therefore, the transfer function of the amplifier with a capacitor feedback loop (IZ) is expressed as Equation ( 9) [71].
where s is the complex operating frequency, ω 0 is the radian bandwidth of the operational amplifier, g N2 is the transconductance of the MOSFET of N 2 , A is the open-loop voltage gain, and C 1 is the combined capacitances of the CMUT and parasitic interconnection.
The gain of the operational amplifier with a capacitor feedback loop can be expressed by Equation (10) [71].
The measured −3 dB bandwidth, transimpedance gain, and DC power consumption were 40 MHz, 200 kΩ, and 0.8 mW, respectively.The input-referred spectral density of the amplifier current noise is expressed as Equation ( 11) [71].
where k is a process-dependent constant, g m is the MOSFET transconductance, i d is the spectral density of the current noise square of the operational amplifier transistors, and i db is the spectral density of the current noise square of the current-bias circuit.
The transconductance (gN2) and load resistance (R1) must be high to reduce the input-referred spectral density of the amplifier current noise. The measured input-referred noise at 20 MHz was 0.31 pA/√Hz [71].
Preamplifiers for Piezoelectric Transducer Applications
Figure 11 shows a schematic of the operational amplifier, followed by a source follower with a capacitor feedback loop, for piezoelectric micromachined ultrasonic transducer (PMUT) array applications. The 0.13-µm CMOS process was used [72]. In the first stage, an operational amplifier was constructed using NMOS (N1) and PMOS (P1) transistors. In the second stage, a source follower was constructed using NMOS (N3) and PMOS (P3) transistors.
The output of the operational amplifier with a capacitive feedback loop (C1) can be simplified using Equation (12) if the open-loop gain of the amplifier is high [72].
where Q E is the electric charge produced by the PMUT device and C 1 is the feedback loop capacitance.The input-referred current noise of the amplifier is proportional to the input capacitance of the operational amplifier (C in ) and feedback capacitance (C 1 ); thus, it can be expressed using Equation (13) [72].
where Cin is the input capacitance of the operational amplifier; gN1 and gP1 are the transconductances of MOSFETs N1 and P1, respectively; and inn and inp are the root-mean-square current noises of MOSFETs N1 and P1, respectively. The voltage gain, bandwidth, DC power consumption, and input-referred noise of the preamplifier were 21.8 dB, 22 MHz, 0.3 mW, and 7.1 nV/√Hz at 3 MHz, respectively [72]. Figure 12 shows a schematic of the low-noise amplifier (LNA) used for high-frequency piezoelectric transducer applications. The 0.18-µm BiCMOS process was used [73]. The LNA was constructed using a cascade amplifier (N1 and N3), followed by a common-source amplifier (N4) with a resonant load (R3, C2, L2, L3, and R4) owing to the high-frequency characteristics of the piezoelectric transducer [73].
Figure 12. LNA for piezoelectric transducer devices (electrostatic discharge device (ESD) and electrostatic capacitors not shown for simplified analysis). Adapted with permission from Ref. [73].
The voltage gain of the amplifier can be expressed as Equation ( 14) [73]: where g N1 , g N3 , and g N4 are the transconductances of MOSFET N 1 , N 3 , and N 4 , respectively, and C ESD and C gsN1 are the ESD and gate-source parasitic capacitances of the MOSFET N 1 .
The voltage gain of the amplifier is related to the load impedances (R3, R4, L2, L3, and C2), the transconductances (gN1, gN3, and gN4), and the ESD and gate-source parasitic capacitances of MOSFET N1. The measured voltage gain, bandwidth, and DC power consumption of the LNA were 24.08, 73, and 43.57 mW, respectively [73].
The noise figure (NF) of the preamplifier can be expressed using Equation ( 15) [73].
where rN1 is the gate resistance of the MOSFET N1.
The NF of the LNA can be improved by a large transconductance (gN1) and by low input, ESD, and gate-source parasitic capacitances of MOSFET N1 (C1, CESD, and CgsN1). The measured NF of the amplifier was 3.51 dB [73].
Figure 13 shows a schematic of the preamplifier used in piezoelectric transducer applications.The acoustic signals from the ultrasound transducer were sent to the Input-1 port, with the common ground of the transducer connected to the Input-2 port [74].The voltage gain depends on the variable resistors (R 1 and R 2 ) and the transistor sizes (N 1 , P 2 , N 2 , and P 3 ).The measured gain, bandwidth, and NF of the preamplifier were 20, 75, and 10 dB, respectively [74].
Figure 14 shows a schematic of an LNA. The 0.18-µm CMOS process was used; thus, the DC supply voltage (VDD) is 3 V [75]. The LNA is constructed using a three-stage common-source amplifier. The transistors (N1 and P2) were biased to obtain the 600 µA current (IDD1) and 800 µA current (IDD2), respectively [75]. Resistor R1 can prevent leakage currents for long-cycle pulse signals, and capacitor C1 can be programmed with 6-dB steps [75]. Therefore, the gain of the LNA (AI) can be expressed by Equation (16) [75].
The measured center frequency, bandwidth, and input-referred noise current were 13 MHz, 21 MHz, and 4 nA/ √ Hz [75]. Figure 15 shows a schematic of the variable LNA with a resistor feedback loop for the piezoelectric transducer because the LNA topology is preferable for low impedance [76].A variable LNA with a resistor feedback loop was used because of the signal attenuation of echo signals in deep areas [77].The 0.18-µm CMOS process was used [77].This two-stage variable LNA structure had a feedback loop composed of two variable resistors (R 1 and R 4 ).The first stage of the LNA is a cascade amplifier composed of an NMOS (N 1 and N 2 ) and PMOS (P 1 , P 2 , and P 3 ), and the second stage is the source follower (N 4 and P 5 ).A variable Miller capacitor (C 1 ) is used to increase the bandwidth by improving the phase margins [78].A MOSFET switch composed of transistors (N 5 and P 6 ) was used to reduce the power consumption of the LNA during the period when the driving pulse signals were applied.
The voltage gain of the LNA can be expressed as Equation (17) [77]: The measured gain, bandwidth, and input-referred noise voltage of the variable LNA with resistor feedback loop were 32 dB, 11 MHz, and 4.1 nV/ √ Hz, respectively [77].
Figure 15. Variable LNA for piezoelectric transducer device. Adapted with permission from Ref. [77].
Preamplifier for Ultrasound Imaging Applications
Figure 16 shows a schematic of the LNA for ultrasound imaging applications.The 0.18-µm CMOS process was used [79].The LNA was constructed using a three-stage operational amplifier with feedback resistors (R 7 and R 8 ) and variable input resistors (R 1 and R 2 = 0.2, 0.4, 0.8, and 1.6 kΩ) [79].The PMOS inputs were used to reduce the noise of the preamplifier.The gain of the LNA was dependent on the variable resistors (R 1 , R 2 , R 7 , and R 8 ).The measured gain, bandwidth, OIP 3 , and input-referred noise current of the LNA were 15.6 dB, 10 MHz, 2.64 Vp-p, and 6.3 nV/ √ Hz, respectively [79].
Discussion and Conclusions
This review provides guidance on the design characteristics of preamplifiers for ultrasound transducer applications. For ultrasound applications, the most commonly used IC fabrication processes are currently 0.13 µm, 0.18 µm, and 0.8 µm, because the supply voltages of the 0.13 µm, 0.18 µm, and 0.8 µm IC fabrication processes are 1.8 V, 3.3 V, and 5 V, respectively. Below the 0.13-µm process, the supply voltage is lower than 1.8 V; as a result, the maximum achievable gain of the preamplifier could be limited even though a high dynamic range of the preamplifier is desirable. Therefore, a newer IC fabrication process may not be desirable even if the feature sizes of the newer processes are smaller.
The primary design parameters of the preamplifier are gain, bandwidth, noise figure (or input- or output-referred noise), power consumption, and IIP3 or OIP3 [80,81]. These design parameters have trade-off relationships; therefore, circuit or system designers must consider the parameter specifications required by the ultrasound transducer. For example, the bandwidth of a preamplifier should be larger than that of the ultrasound transducer, and the input-referred noise of the preamplifier must be similar to or lower than that of the transducer. The gain of the preamplifier should be high if the sensitivity of the transducer used in IVUS applications is low. However, for an operational-amplifier-type preamplifier, the bandwidth can be increased if the required gain is decreased [82]. While a high biasing current can increase the gain of the preamplifier, it causes unnecessary DC power consumption; therefore, an appropriate current must be chosen at the design level [83]. A preamplifier with a wide bandwidth can increase the number of unwanted harmonic components of the acoustic signals generated by the ultrasound transducer. While high linearity of the preamplifier can be obtained if a MOSFET-based current-biasing circuit is used, this causes high DC power consumption [84].
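As a rough illustration of the gain-bandwidth trade-off mentioned above, the short Python sketch below estimates the closed-loop bandwidth of an operational-amplifier-type preamplifier under a single-pole approximation. The gain-bandwidth product and the example gains are hypothetical values chosen only to show the inverse relationship; they are not taken from any of the cited designs.

def closed_loop_bandwidth_hz(gbw_hz: float, gain_linear: float) -> float:
    # Single-pole approximation: bandwidth shrinks as the closed-loop gain grows.
    return gbw_hz / gain_linear

GBW = 500e6  # hypothetical gain-bandwidth product (500 MHz), an assumed example value

for gain_db in (15, 25, 35):
    gain_lin = 10 ** (gain_db / 20)
    bw = closed_loop_bandwidth_hz(GBW, gain_lin)
    print(f"gain = {gain_db} dB -> bandwidth ~ {bw / 1e6:.1f} MHz")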
For CMUT device applications, a transimpedance amplifier (an operational amplifier with a feedback loop composed of resistors or capacitors) is preferred because of the high impedance of the CMUT [85,86]. For piezoelectric devices, the LNA is preferable because of the low impedance of the piezoelectric transducer [72].
To increase the gain of the preamplifier, circuit designers use a common-source amplifier with a large width of the first transistor connected to the input port or use a cascade topology to obtain a high current from the biasing circuit [28]. However, this causes relatively high DC power consumption. In the last stage, the source follower is used to reduce the output impedance, thus smoothly passing the amplified signal to the next-stage amplifier or ADC. In an operational amplifier with a resistor feedback loop, the feedback resistor affects the gain and bandwidth of the preamplifier.
For the input-referred noise or NF, the transconductance value of the MOSFET is important because it can affect the noise of the preamplifier [87]. For an operational amplifier with a feedback resistor loop, the feedback resistor can affect the noise parameters [66]. In addition, the open-loop gain of the operational amplifier must be large to reduce the input-referred noise [28]. MOSFET switches can be used to reduce power consumption during the driving pulse period when transmitted signals are applied [88]. Instead of resistors, voltage-controlled MOSFETs for amplifier design could help reduce the chip area [88]. However, this scheme may require an integrated preamplifier design with a more complex and accurate timing period after pulse transmission. In particular, this technique can help reduce power consumption in wireless ultrasound systems. Miller capacitors in the output port are sometimes used to increase the bandwidth by moving the pole and zero locations [30]. An operational amplifier with a capacitive feedback loop was used for the PMUT device, which has a lower impedance than that of the CMUT [72]. An LNA with a resonant load was developed for high-frequency piezoelectric transducers [73]. The LNA constructed using a three-stage common-source amplifier used resistors to prevent leakage currents for a long-cycle transmission period and a capacitor to provide a 6-dB step gain [77].
Table 1 summarizes the design parameters of the previously published preamplifiers used for ultrasound transducers. As shown in Table 1, the gain of the transimpedance amplifier is expressed in dBΩ or kΩ units because the input is a current and the output is a voltage. The input-referred noise (IRN) can be expressed in current or voltage units, and the NF can be expressed on the dB scale. Topologies can be classified into common-source, operational-amplifier, or low-noise-amplifier types. While several review papers on IC components for ultrasound systems have been published, they did not provide specific design guidelines for preamplifiers used in ultrasound transducer applications. Therefore, this is the first review paper of preamplifiers to provide design guidelines for ultrasound transducer applications, such as capacitive micromachined ultrasonic transducer (CMUT), piezoelectric transducer, and ultrasound imaging applications.
In ultrasound imaging, preamplifiers are required to amplify weak acoustic signals and obtain images for diagnostic purposes. However, their performance is limited because of transistor requirements. Therefore, the design parameters of gain, bandwidth, input- or output-referred noise, and DC power consumption are described to explain the design concepts of the preamplifiers, because they have a trade-off relationship when designing the preamplifier components used for diagnostic ultrasound imaging applications. Recently, with the emergence of new ultrasound applications such as photoacoustic imaging, smartphone touch sensors, wireless ultrasound machines, brain stimulation, and ultrasound-combined positron emission tomography, academic researchers have used commercial components or system ICs for these emerging applications. However, further performance optimization is possible if ultrasound transducers with appropriate electronics selection, or a design topology that considers the trade-off relationships, are developed. As such, the knowledge of preamplifier design in this review paper is expected to be helpful in this regard.
Figure 1. Block diagram of the transducer and ultrasound transmitter and receiver used for diagnostic ultrasound imaging applications.
Figure 2. Design parameters of the preamplifiers used for ultrasound transducers.
Figure 4. Operational amplifier with a resistor feedback loop for the CMUT device. Adapted with permission from Ref. [54]. Copyright 2005, IEEE.
Figure 5. Common-source amplifier followed by a source follower with resistor feedback for the CMUT device. Adapted with permission from Ref. [56]. Copyright 2008, IEEE.
Figure 10. Feedback-capacitor-based operational amplifier for CMUT devices used in the IVUS and ICE device applications. Adapted with permission from Ref. [71]. Copyright 2014, IEEE.
Figure 11. Operational amplifier with a capacitor feedback loop for the PMUT device. Adapted from Zamora et al. [72] with permission under the terms of the CC BY 4.0 License, Copyright 2020 MDPI AG.
Figure 14. Schematic of an LNA fabricated in the 0.18-µm CMOS process; the DC supply voltage (VDD) is 3 V [75].
Table 1. Summary of the preamplifiers currently used for ultrasound transducer research. | 2024-01-27T16:04:08.346Z | 2024-01-25T00:00:00.000 | {
"year": 2024,
"sha1": "1266e8ec51b8a1eb7647f4fc84a439687ea17f93",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/3/786/pdf?version=1706176527",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8123c61a1acea975ab07889614f28165b99add3",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
238211596 | pes2o/s2orc | v3-fos-license | Shuffling-SinGAN: Improvement on Generative Model from a Single Image
Recently, SinGAN took the use of GANs into a new realm: unconditional generation learned from a single natural image. Following the SinGAN architecture, we propose Shuffling-SinGAN, an efficient unconditional generative model that is trained on a single natural image for general image manipulation. Our network consists of a pyramid of fully convolutional GANs, in which each layer is responsible for learning the patch distribution at a different scale of the image. The network can generate new samples with variability while maintaining the texture and global structure of the original image; the new random images generated after training differ from the original image in their details. Inspired by SinIR, we added random pixel shuffling to the network and found, through experiments, that the modified model generates more varied new samples. Shuffling-SinGAN allows generating new samples of arbitrary size and aspect ratio that have significant variability yet maintain both the global structure and the fine textures of the training image. User tests confirm that the generated samples are commonly confused with real images. With quantitative evaluation, we show that Shuffling-SinGAN has competitive performance on random image generation.
1. Introduction
Nowadays, Generative Adversarial Networks (GANs) [1] have achieved impressive results in many visual processing tasks, such as image super-resolution [2], inpainting [3], image editing [4,5,6,7,8] and image style transfer. Judging from actual results, GANs seem to produce high-quality generated samples. Researchers are now interested in training on a single image and then using the trained model for image processing operations. Strict requirements on GPU memory size are avoided, and the preprocessing code needed for large-scale datasets is reduced. Training on a single image is reasonable because it contains abundant information that can serve as a powerful prior for solving various problems. Several internal deep learning methods have been proposed and achieve performance comparable to methods trained externally on large-scale datasets, for tasks such as super-resolution, restoration [9], reflection removal [10], deblurring, segmentation, dehazing, denoising and inpainting.
However, these methods have serious limitations that prevent deep internal learning from being applied in practice. First, with the exception of MGANs [11], most of these methods are image-specific: the trained model can only operate on the original training image, and a separate model must be trained for any other image. Second, most of these methods are task-specific, which means that the trained model can only perform one specific image manipulation. SinGAN [12] takes the use of GANs into a new realm: unconditional generation learned from a single natural image. The internal statistics of patches within a single natural image typically carry enough information for learning a powerful generative model. It does not need to rely on images of the same type, and it allows us to process general natural images containing complex structures and textures. Because unconditional image generation is a relatively difficult problem, a complex loss function [13] is used in the original SinGAN to achieve better convergence, and multiple GANs are trained for a large number of iterations.
One improvement in this article is the addition of an attention mechanism to each layer. Because only random noise is input at the bottom layer, the pictures generated at the bottom layer are often implausible, with multiple objects blended together. The attention mechanism helps ensure that the newly generated pictures do not contain such mixed objects. We also found a second problem: at high scales of SinGAN, the newly generated image is the same as the training image. This contradicts SinGAN's goal of generating a new random image from an original image, so we added random pixel shuffling from SinIR [14], which does not significantly increase the computational cost and provides additional control over the operation.
Model
The ResNet [15] structure can speed up the training of a neural network and improve the accuracy of the model. ResNet can also be used directly within the Inception network [16]. The main idea of ResNet is to add a direct (skip) connection channel to the network, as in the Highway Network. The generator structure in our Shuffling-SinGAN network (SSGAN) is a ResNet. The model is composed of two main modules, generators G_N, ..., G_0 and discriminators D_N, ..., D_0. This model is similar to a traditional GAN, except that the training samples here are patches of a single image instead of whole images from a database. We process common natural images.
Our model is trained scale by scale, from the coarsest scale to the finest scale. The parameters of each scale are independent and remain fixed after being trained. The training loss of the nth GAN comprises an adversarial term and a reconstruction term.
Loss
The training loss at each scale can be written as

min_{G_n} max_{D_n} L_adv(G_n, D_n) + α L_rec(G_n).    (1)

Figure 1. Shuffling-SinGAN's multi-scale structure. Our model's training and inference are done in a coarse-to-fine fashion using a pyramid of GANs. At each scale, the input to G_n is a random noise image z_n together with the image generated at the previous scale, which is passed through random pixel shuffling and upsampled to the current resolution.
G_n learns to fool an associated discriminator D_n, which tries to distinguish between patches in the generated samples and patches in the real image x_n. CBAM [17] includes two independent sub-modules, the Channel Attention Module (CAM) and the Spatial Attention Module (SAM), which perform channel and spatial attention, respectively. This not only saves parameters and computing power, but also allows CBAM to be integrated into existing network architectures as a plug-and-play module. After introducing CBAM, the learned features cover more parts of the object to be recognised, and the probability of correctly discriminating the object is higher. CBAM allows the network to learn to focus on key information. CAM: the input feature map F is passed through global max pooling and global average pooling over its width and height, producing two 1 × 1 × C feature maps. These are fed into a shared two-layer neural network (MLP); the number of neurons in the first layer is C/r (r is the reduction ratio) with a ReLU activation, and the number of neurons in the second layer is C. The two MLP outputs are combined by element-wise addition, and a sigmoid activation generates the final channel attention map M_c. Finally, M_c and the input feature map are multiplied element-wise to generate the input features required by the spatial attention module. SAM: the feature map output by the channel attention module is used as the input feature map of this module. First, channel-wise global max pooling and global average pooling produce two H × W × 1 feature maps, which are concatenated along the channel dimension. A 7 × 7 convolution (7 × 7 works better than 3 × 3) then reduces the result to a single channel (H × W × 1). A sigmoid generates the spatial attention map M_s. Finally, M_s and the input feature of this module are multiplied to obtain the final refined feature. The structure of CBAM can be seen in Figure 2.

Figure 2. Overview of CBAM. This module has two sequential sub-modules: channel and spatial. The input feature map is refined through this module. This module can be used in every convolutional block of deep networks.
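The CBAM block described above can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration, not the authors' implementation: the class name, the use of nn.Linear layers for the shared MLP, and the default reduction ratio of 16 are our own assumptions, while the pooling operations, the shared MLP with C/r hidden units, and the 7 × 7 spatial convolution follow the description in the text.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    # Minimal sketch of the Convolutional Block Attention Module (channel then spatial attention).
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared two-layer MLP for channel attention (C -> C/r -> C).
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 7x7 convolution over the concatenated channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c = x.shape[:2]
        # Channel attention map M_c from global average- and max-pooled features.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        m_c = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * m_c
        # Spatial attention map M_s from channel-wise mean and max maps.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        m_s = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * m_s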
Random pixel shuffling
We added a random pixel shuffling [18] module before upsampling. We did not randomly set some pixels to black as in a denoising autoencoder, because we use more complex natural images. Instead, we randomly shuffle some pixels of the given single image so that the network can learn a more robust relationship between adjacent pixels. Random pixel shuffling is applied to the image generated at the previous scale before it is used as the input of the next layer. During training, the generated images can be adjusted by tuning the shuffling ratio p. Following SinIR, we set p to 0.005; to confirm the function of the random pixel shuffling module, we also trained with p = 0.3 as a reference case. When the value is too high, the generated picture becomes distorted. Figure 3 shows the influence of the parameter p on the network. Figure 4 shows the generator network after our modifications: the previous-scale output is upsampled and added to the noise input; the result is sent into the network, and after passing through 5 conv layers and CBAM the output is a residual image, which is added back to the upsampled previous-scale output to give the nth-scale result. Finally, this result is passed through random pixel shuffling and sent to the next scale.
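A minimal sketch of how the random pixel shuffling step could be implemented in PyTorch is given below. It assumes that p is the fraction of pixel positions whose values are randomly permuted within the image; the function name and the exact shuffling scheme are illustrative and may differ from the SinIR implementation.

import torch

def random_pixel_shuffle(img: torch.Tensor, p: float = 0.005) -> torch.Tensor:
    # Randomly permute a fraction p of the pixel positions of img with shape (C, H, W).
    c, h, w = img.shape
    flat = img.reshape(c, h * w).clone()
    n = max(1, int(p * h * w))                          # number of pixels to shuffle
    idx = torch.randperm(h * w, device=img.device)[:n]  # positions to shuffle
    perm = idx[torch.randperm(n, device=img.device)]    # shuffled order of those positions
    flat[:, idx] = flat[:, perm]                        # values at perm positions move to idx positions
    return flat.reshape(c, h, w)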
Adversarial loss
Each of the generators G_n is coupled with a discriminator D_n. The role of the discriminator is to classify each overlapping patch of its input as real or fake. In the training process, we use the same WGAN-GP loss as SinGAN, which improves training stability. In the WGAN-GP loss, the final discrimination score is the average over the patch discrimination map, and we define the loss over the whole image rather than over random crops; this allows the network to learn boundary conditions. The patch size of D_n is the same as that of G_n because the architecture of D_n is the same as that of G_n. The network's receptive field (patch size) is 11 × 11.
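For reference, the gradient penalty term of the WGAN-GP loss can be computed as in the PyTorch sketch below. The helper only shows the standard gradient-penalty computation; the discriminator, the penalty weight (0.1 in the training settings reported below), and how the term is combined with the critic loss are assumed to be handled by the surrounding training loop.

import torch

def gradient_penalty(discriminator, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Standard WGAN-GP penalty: ((||grad D(x_hat)||_2 - 1)^2) averaged over the batch.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()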
Reconstruction loss
The purpose of the reconstruction loss is to ensure that there is a specific set of input noise maps for which the final output image is the original image. At the beginning, we followed SinGAN and used an MSE (mean squared error) loss. However, MSE loss tends to produce blurred images [19], and since the point of the model is to generate new images while preserving the information of the original image, we modified the original loss by adding an SSIM (structural similarity) loss to the MSE loss.
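A possible form of the combined reconstruction loss is sketched below in PyTorch. The relative weighting lam and the use of the third-party pytorch_msssim package for the SSIM term are our own assumptions; the exact combination used by the authors is not given in the text.

import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # third-party SSIM implementation (assumed available)

def reconstruction_loss(recon: torch.Tensor, target: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    # recon and target are expected as (N, C, H, W) tensors scaled to [0, 1].
    # MSE term plus (1 - SSIM) term; lam is an illustrative weight, not the authors' value.
    mse = F.mse_loss(recon, target)
    ssim_term = 1.0 - ssim(recon, target, data_range=1.0)
    return mse + lam * ssim_term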
Datasets
We have tested our method qualitatively and quantitatively on a wide range of images covering a large number of scenes including urban and natural scenery as well as artistic and texture images. The images we used are taken from Berkeley Segmentation Database (BSD), Places and the Web.
Architecture
We used SinGAN as the baseline and added two new modules to the generator network: random pixel shuffling and the Convolutional Block Attention Module (CBAM). The size of the training pictures was adjusted to 250 px.
Training.
To make a fair comparison, the parameters are the same as in SinGAN. We set the scale factor r to 4/3 and the minimum and maximum dimensions to 25 px and 250 px; the number of scales N is calculated from these parameters. Shuffling-SinGAN is trained for 2000 iterations at every scale. The optimizer is Adam with momentum parameters beta1 = 0.5 and beta2 = 0.999; the learning rate of both the generator and the discriminator is 0.0005 (decreased by a factor of 0.1 after 1600 iterations); the reconstruction loss weight is 10; and the gradient penalty weight of the WGAN-GP loss is 0.1. Our code uses the PyTorch environment, and training was performed on a single NVIDIA RTX 3070 GPU.
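The number of scales N implied by these parameters can be computed as in the short sketch below; the ceiling-based rounding convention is an assumption, since the paper does not spell out how N is derived from r and the minimum and maximum dimensions.

import math

def num_scales(max_dim: int = 250, min_dim: int = 25, r: float = 4 / 3) -> int:
    # Number of downscaling steps by a factor r needed to go from max_dim down to min_dim.
    return math.ceil(math.log(max_dim / min_dim, r))

print(num_scales())  # pyramid depth implied by 250 px, 25 px and r = 4/3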
Testing
Our testing process is similar to the training process: the input image is put into the trained network to obtain the result. An advantage of our model is that the generation can be started from different scales to control the generated pictures. The results have been evaluated qualitatively and quantitatively, as shown in the following section.
The purpose of our improvement to SinGAN is to optimize it, so our goal is the same as SinGAN's. An AMT perceptual study is used in SinGAN, and we use a similar method. We followed the protocol of [20,21] and set up real-vs-fake experiments. We conducted a user study using randomly sampled images from the dedicated dataset provided by Luan et al. (2017); 20 subjects with computer vision experience helped us complete the experiment, and workers were asked to select the fake images. (1) Unpaired (either real or fake): workers were presented with a single image for 1 second and were asked whether it was fake. In total, 50 real images and 50 fake images generated by SinGAN were presented in random order to each worker. (2) Paired: workers were presented with a sequence of 50 trials, in each of which a fake image generated by SinGAN was shown against the real training image for 1 second, and workers were asked to pick the fake image. We repeated these two protocols for two types of generation processes: starting the generation from scale N and from scale N-1. In this way, we evaluated the realism of the results at two different scales. We then ran the same tests using the fake images generated by Shuffling-SinGAN. To ensure that our model generates diverse images, we selected images from the database that differ strongly from each other, such as mountains, hills, desert and sky (Figure 5(a)). We also computed a diversity measure for the generated images: for each training example we calculated the standard deviation (std) of the intensity values of each pixel over 100 generated images, averaged it over all pixels, and normalized it by the std of the intensity values of the training image. For generation starting at scale N-1 we achieved better results, with 50% of workers judging the generated pictures to be real; Table 1 shows the results. A common metric for GAN evaluation is the Frechet Inception Distance (FID) [22], which measures the deviation between the distribution of deep features of generated images and that of real images. In our Shuffling-SinGAN there is only a single real image, so we used the SIFID proposed in SinGAN, which uses the internal distribution of deep features at the output of the convolutional layer before the second pooling layer. SIFID is the FID between the statistics of those features in the real image and in the generated sample; the smaller the SIFID value, the closer the generated image is to the real image. Again, generation was started either from the Nth (coarsest) scale or from scale N-1, so the realism of the results is assessed at two different levels. Table 2 compares the two models at these two starting scales. The results show that the image samples generated by the method in this paper are closer to the real image samples; the model effectively captures the detailed information in the image and the dependence between feature channels, and generates high-quality images. Figure 6 shows samples generated at scale N for different kinds of images with large differences between them.
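The diversity measure described above can be computed directly from a stack of generated samples; a small NumPy sketch is given below, where the array shapes and the function name are our own illustrative choices.

import numpy as np

def diversity(generated: np.ndarray, training_image: np.ndarray) -> float:
    # generated: (100, H, W[, C]) stack of generated samples; training_image: (H, W[, C]).
    # Per-pixel std over the samples, averaged over all pixels,
    # normalised by the std of the training image's intensity values.
    per_pixel_std = generated.std(axis=0)
    return float(per_pixel_std.mean() / training_image.std())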
Conclusion
We introduced Shuffling-SinGAN, an unconditional generative model that is learned from a single natural image and based on SinGAN. Our training objective is the same as SinGAN's, and the results show that our model outperforms SinGAN at the coarsest scale. Compared with externally trained generation methods, internal learning is inherently limited in terms of semantic diversity, and our model does not surpass SinGAN in this respect: it cannot generate images containing new information that does not exist in the training image. For example, if the training image contains a zebra, our model cannot generate a cat. However, we do surpass SinGAN when performing the same tasks. For future work, although we only used one training image in this work, we can further explore whether it is feasible to use multiple images for training. This may offer an effective way of dealing with extreme situations where the training images contain few internal recurrences. We hope our work can bring valuable contributions and inspiration to future research. | 2021-09-29T20:08:02.532Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "da66090d43f2822934f32f7594eab8340fb4840f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2024/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "da66090d43f2822934f32f7594eab8340fb4840f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
53030186 | pes2o/s2orc | v3-fos-license | Precipitation with polyethylene glycol followed by washing and pelleting by ultracentrifugation enriches extracellular vesicles from tissue culture supernatants in small and large scales
ABSTRACT Extracellular vesicles (EVs) provide a complex means of intercellular signalling between cells at local and distant sites, both within and between different organs. According to their cell-type specific signatures, EVs can function as a novel class of biomarkers for a variety of diseases, and can be used as drug-delivery vehicles. Furthermore, EVs from certain cell types exert beneficial effects in regenerative medicine and for immune modulation. Several techniques are available to harvest EVs from various body fluids or cell culture supernatants. Classically, differential centrifugation, density gradient centrifugation, size-exclusion chromatography and immunocapturing-based methods are used to harvest EVs from EV-containing liquids. Owing to limitations in the scalability of any of these methods, we designed and optimised a polyethylene glycol (PEG)-based precipitation method to enrich EVs from cell culture supernatants. We demonstrate the reproducibility and scalability of this method and compare its efficacy with more classical EV-harvesting methods. We show that washing of the PEG pellet and re-precipitation by ultracentrifugation remove a large proportion of PEG co-precipitated molecules such as bovine serum albumin (BSA). However, supported by the results of the size exclusion chromatography, which revealed a higher purity in terms of particles per milligram protein of the obtained EV samples, PEG-prepared EV samples most likely still contain a certain percentage of other non-EV associated molecules. Since PEG-enriched EVs revealed the same therapeutic activity in an ischemic stroke model as the corresponding cells, it is unlikely that such co-purified molecules negatively affect the functional properties of obtained EV samples. In summary, while it may not be the purification method of choice if molecular profiling of pure EV samples is intended, the optimised PEG protocol is a scalable and reproducible method, which can easily be adopted by laboratories equipped with an ultracentrifuge to enrich for functionally active EVs.
different biologically active molecules. These include proteins involved in cell adhesion (e.g. intercellular adhesion molecules [ICAMs] and integrins), intercellular cell signalling (e.g. cytokines, interleukins and chemokines) and membrane organisation (e.g. tetraspanins and flotillins) as well as coding and noncoding RNAs (including microRNAs), and different types of lipids [1,[3][4][5]. The molecular organisation of their surface provides a kind of address code allowing them to selectively interact with specific target cells [6,7]. Thus, depending on their origin and their cargo, EVs can exert specific functions. For instance, B-cell derived EVs expressing Major Histocompatibility Complex (MHC)-class II molecules are able to induce specific T-cell responses (Raposo et al. 1996). Also, EVs derived from other immune cell-types have been demonstrated to promote pro-inflammatory responses [8]; for example, EVs from mature dendritic cells (DCs) that had been pulsed with tumour-specific antigens can induce antitumour responses in mouse and man [9][10][11][12].
Although the EV field has significantly progressed within the last few years, there is no consensus on optimal isolation and purification methods. Differential (ultra)centrifugation remains the standard technique to harvest EVs from tissue culture supernatants as well as from primary body fluids [27][28][29]. In addition, amongst others immunoprecipitation techniques [30], ultrafiltration [31] and sizeexclusion chromatography [32] are used to enrich for EVs. Recently, increasing numbers of commercially available polymeric precipitation reagents allow for the precipitation of nanosized EVs at low speed centrifugation. However, all of these techniques are more suitable for preparations of small rather than large sample volumes. For example, the largest rotors for ultracentrifugation can process less than 400 mL sample volume in one run. Thus, larger-scale preparation approaches are required. Aiming to prepare exosome-sized EVs (sEVs; 70-150 nm) for therapeutic applications we searched for a novel, cost-effective method that allows harvesting of sEVs from larger sample volumes (up to several litres).
In terms of size and molecular content, EVs and viruses share a number of common features and use parts of the same endosomal machinery for their assembly and release [33]. Owing to these parallels, a discussion had been initiated as to whether some viruses, especially retroviruses, can be considered as malignant exosomes [34]. Independent of the evolutionary relationship viruses and EVs indeed share, this discussion led us to the assumption that technologies allowing purification of viruses may provide feasible technologies to purify sEVs as well. Since it is a well-established procedure to concentrate viruses via polyethylene glycol (PEG) precipitation [35][36][37], we tested the efficacy of PEG precipitation to concentrate sEVs from cell culture supernatants in both small and large scales. PEG precipitation is affected by the molecular weight of the PEG [38]; thus, we at first compared the efficacy of PEG to precipitate sEVs in relation to these parameters. After selecting suitable conditions, we compared the yield obtained with the small-scale PEG precipitation to that obtained with other methods. Finally, we assessed the reproducibility and the scalability of the established PEG protocol and as a proof of principle investigated the usability of prepared sEVs in downstream applications, i.e. miRNA profiling and proteomic analysis.
Generation of CD63-eGFP transduced HEK293T cells
The coding region of the tetraspanin CD63 was amplified via polymerase chain reaction (PCR) using HEK293T cell cDNA as template. The oligonucleotides used for the PCR reaction were flanked by XhoI or EcoRI restriction site sequences, respectively (5ʹ ACCGATCTCGAGCAATGGCGGTGGAAGGAGGAATG; 3ʹ ACCGATGAATTCTCACCTCGTAGCCACTTCTGATACT). Of note, the 3ʹ-primer was designed without the stop codon of the CD63 gene. XhoI/EcoRI digested PCR products were transferred into the XhoI/EcoRI site of the transient expression vector pEGFP-N1 (Takara Bio Europe/SAS, Saint-Germain-en-Laye, France). The obtained expression cassette was confirmed by Sanger sequencing. To test for the appropriate subcellular distribution of the encoded CD63-eGFP fusion protein, the obtained pEGFP-N1-CD63 plasmid and the original pEGFP-N1 plasmid were transfected into HEK293T cells using Jetpei (Polyplus, Illkirch Cedex, France) transfection reagent according to the manufacturer's recommendations.
Next, the CD63-EGFP ORF was transferred as NheI/ BsrGI restriction fragment from the pEGFP-N1-CD63 plasmid into the lentiviral vector pCL6IEGwo containing the coding region of eGFP (kindly provided by Helmut Hanenberg, University Hospital Essen). The expression cassette of the resulting pCL6-CD63-eGFP lentiviral plasmid was confirmed by Sanger sequencing.
Lentiviral particle containing supernatants were obtained following simultaneous co-transfection of HEK293T cells applying the Jetpei transfection reagent together with the lentiviral plasmid pCL6-CD63-eGFP, the helper plasmid pCD/NL-BH [39] and the codonoptimised, human foamy virus envelope encoding plasmid pcoPE01 [40]. The gene expression from the human cytomegalovirus immediate-early gene enhancer/promoter was induced with 10 mM sodium butyrate (Merck, Darmstadt, Germany) 24 h post transfection. Supernatants containing lentiviral particles were collected 48 h after transfection. Following filtration through 0.45 µm filters (Sartorius, Göttingen, Germany) and concentration by centrifugation at 25,000 × g for 90 min at 4°C in an Avanti J-26 XP centrifuge using a JA 25.50 rotor (Beckman Coulter, Krefeld, Germany) pellets of lentiviral particles were resolved in 2.5 mL Iscove's Modified Dulbecco's Medium (IMDM) . Aliquots of 250 µL were stored at −80°C.
HEK293T cells raised in Dulbecco Modified Eagle Medium (DMEM) high glucose supplemented with 10% FBS, 100 U/mL penicillin, 100 U/mL streptomycin and 100 U/mL glutamine (all Life Technologies, Darmstadt, Germany) were transduced by overnight exposure to virus stocks. Successfully transduced cells were purified via fluorescent cell sorting on a fluorescence-activated cell sorting (FACS) Aria I cell sorter (BD Bioscience, Heidelberg, Germany).
Culturing of HEK293T cells and collection of media for the EV preparation
HEK293T-CD63-eGFP cells were cultured with DMEM high glucose supplemented with 10% FBS, 100 U/mL penicillin, 100 U/mL streptomycin and 100 U/mL glutamine (Life Technologies). As soon as the mycoplasma-free cells reached approximately 50% confluency, media for the EV purification were collected every other day until the cells reached 80-90% confluency. Media were centrifuged at 2,000 × g. Supernatants were either used immediately or filtered through a 0.22 µm filter (Sartorius, Göttingen, Germany) and stored at −20°C until usage. After thawing, aliquots were pooled to 500 mL batches. All cells were tested weekly for mycoplasma contamination.
For the analyses of EV uptake, equivalents of 1 × 10^8 particles of the EV-enriched samples were added to the cells. After incubation for 14-16 h at 37°C, the medium with residual particles was removed and fresh culture media were added. At first, cells were analysed by fluorescent microscopy on an Axio Observer.D1 microscope platform with Plan-Apochromat 20×/0.8 lenses (Zeiss, Oberkochen, Germany). To harvest cells for flow cytometric analysis, cells were treated with 0.25% trypsin (Lonza) for 5 min at 37°C. The enzymatic reaction was stopped by the addition of fresh culture media. Cells were pelleted by centrifugation for 5 min at 800 × g, re-suspended in isotonic solution for flow cytometry (Beckman Coulter) and analysed on a Cytomics FC500 flow cytometer (Beckman Coulter) for their eGFP-intensity. The mean fluorescence intensity was measured for all samples in comparison to untreated N-KM cells.
EV preparation from conditioned cell media
Preparation by direct ultracentrifugation 10 mL of freshly harvested conditioned media were centrifuged at 110,000 × g for 2 h at 4°C in an Optima L7-65 ultracentrifuge using a SW40 swingout rotor (k factor 299, Beckman Coulter). Obtained pellets were re-suspended in 1 mL phosphate-buffered saline (PBS; Life Technologies). Obtained EVs were either processed immediately, kept on −20°C for short-term or −80°C for long-term storage.
Preparation by differential centrifugation 10 mL conditioned media (freshly harvested or frozen and thawed) were centrifuged at 10,000 × g for 45 min in a 5810R centrifuge (Eppendorf, Hamburg, Germany). Supernatants were transferred to 10 mL Ultra-Clear centrifuge tubes (Beckman Coulter), and EVs were precipitated at 110,000 × g for 2 h at 4°C in an Optima L7-65 ultracentrifuge using a SW40 swing out rotor (k-factor 299, Beckman Coulter). For the comparison of the different methods, obtained pellets were re-suspended in 1 mL PBS, for the comparison of the reproducibility in 250 µL 0.9% sodium chloride (NaCl) (Braun, Melsungen, Germany). Fractions were washed with 11 mL PBS or 10 mL 0.9% NaCl, respectively, and re-precipitated by ultracentrifugation exactly as previously performed. Obtained EVs were either processed immediately, kept on −20°C for short-term or −80°C for long-term storage.
Preparation with PEG Small-scale preparation. 10 mL conditioned media (freshly harvested or frozen and thawed) were centrifuged at 10,000 × g in a 5810R centrifuge (Eppendorf). Obtained supernatants were supplemented with 50% w/v stock solutions of PEG 6000, 8000 or 20000 (Sigma-Aldrich, Taufkirchen, Germany) to final concentrations of 6, 8, 10, 12 and 15% PEG and with 3.75 M NaCl (Sigma-Aldrich) to final concentration of 75 mM NaCl (calculated to 15 mL). Samples were mixed gently by inverting the tubes three times. Unless indicated otherwise, samples were stored at 4°C for up to 14 h (overnight). EVs were concentrated by centrifugation at 1,500 × g for 30 min at 4°C in a 5810R centrifuge (Eppendorf, Hamburg, Germany). Supernatants were removed and pellets re-suspended in 250 µL PBS for initial experiments and 1 mL PBS for the comparison of different methods or 250 µL 0.9% NaCl by pipetting, respectively. To remove residues of PEG from the suspension, the EV-enriched fractions were washed with 11 mL PBS or 10 mL 0.9% NaCl, respectively, and centrifuged at 110,000 × g for 2 h at 4°C in an Optima L7-65 ultracentrifuge using the swing out rotor SW40 (k factor 299, Beckman Coulter). The resulting pellet was re-suspended in 250 µL/1 mL PBS or 250 µL 0.9% NaCl, respectively. Obtained EV samples were kept on −20°C for short or −80°C for long-term storage.
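For readers reproducing the precipitation step, the volumes of the 50% (w/v) PEG stock and the 3.75 M NaCl stock needed to reach a chosen final concentration follow from the usual C1·V1 = C2·V2 dilution relation. The short Python helper below only illustrates that arithmetic and is not part of the published protocol.

def stock_volume_ml(final_conc: float, stock_conc: float, total_volume_ml: float) -> float:
    # Volume of stock needed so that final_conc is reached in total_volume_ml (C1*V1 = C2*V2).
    return final_conc * total_volume_ml / stock_conc

# Example: 10% PEG (from a 50% w/v stock) and 75 mM NaCl (from a 3.75 M stock) in 15 mL total
peg_ml = stock_volume_ml(10, 50, 15)        # -> 3.0 mL of 50% PEG stock
nacl_ml = stock_volume_ml(0.075, 3.75, 15)  # -> 0.3 mL of 3.75 M NaCl stock
print(peg_ml, nacl_ml)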
Large-scale preparation. After freezing and thawing 360 mL conditioned media were centrifuged at 6,000 × g in 500 mL conical centrifuge flasks (Beckman Coulter) in an Avanti J-26 XP centrifuge using the swing-out rotor JS-5.3 (Beckman Coulter). Obtained supernatants were transferred into new 500 mL conical centrifuge flasks and supplemented to a final concentration of 10% PEG 6000 and 75 mM NaCl. After overnight incubation at 4°C, EVs were precipitated at 1,500 × g in an Avanti J-26 XP centrifuge using the swing-out rotor JS-5.3 (Beckman Coulter) for 30 min at 4°C. Pellets were re-suspended in 60 mL 0.9% NaCl and transferred in 70 mL polycarbonate centrifuge bottles (Beckman Coulter). Fractions were precipitated at 110,000 × g for 2 h at 4°C in an Optima L7-65 ultracentrifuge using the tight angle rotor Ti45 (k factor 244, Beckman Coulter). Obtained pellets were re-suspended in 1 mL 0.9% NaCl and stored at −80°C until usage.
Preparation with sucrose density gradient EVs from 10 mL conditioned media were at first prepared by differential centrifugation exactly as described earlier. The obtained fractions (each 1 mL) were loaded onto an 11 mL sucrose density gradient (2.5, 5, 10, 20, 30, 40, 50, 60, 70 and 75%) and centrifuged at 110,000 × g for 16 h at 4°C in a SW45 swing-out rotor in an Optima L7-65 ultracentrifuge (k factor 244, Beckman Coulter). Fractions of 1 mL each were taken from the top to the bottom. After transferring 2 µL onto microscopic slides, drops were checked for the presence of eGFPlabelled particles by fluorescence microscopy using the Axio Observer.D1 platform (Zeiss). Fractions containing fluorescent particles were pooled and diluted with PBS to 12 mL total volume and centrifuged at 110,000 × g for 2 h at 4°C in an Optima L7-65 ultracentrifuge using the swing out rotor SW40 (k factor 299, Beckman Coulter). Resulting pellets were re-suspended in 250 µL PBS and used for EV uptake experiments and Nano particle Tracking Analysis (NTA).
Preparation by size-exclusion chromatography EVs from 10 mL conditioned media were at first prepared by differential centrifugation exactly as described earlier. The obtained fractions (each 1 mL) were loaded onto columns of 16/60 HiLoad Superdex 200 prep grade (GE Healthcare Europe GmbH, Freiburg, Germany), which were pre-equilibrated with a buffer containing 0.05 M sodium phosphate (pH 7.2) and 0.15 M NaCl using an ÄKTAexplorer 10 (GE Healthcare Europe GmbH, Freiburg, Germany). Fractions of 1 mL were collected at a flow rate of 1 mL/min. The first six elution fractions corresponding to the known retention volume for EVs were pooled and concentrated to 1 mL by ultracentrifugation.
Analysis of obtained EV samples
Fluorescence drop analysis 1-2 µL of the obtained HEK293T-CD63-eGFP samples were transferred to microscopic slides and analysed by an Axio Observer.D1 fluorescent microscope with a Plan-Apochromat 20×/0.8 objective (Zeiss).
Protein content
The protein content of the EV samples was determined using the bicinchoninic acid BCA protein assay kit (Pierce, Rockford, IL, USA). Protein analysis was performed according to the recommendations of the manufacturer using the 96-well plate procedure.
To rinse off residual primary antibodies, membranes were washed three times (5, 10 and 20 min) in PBS-T/TBS-T. The following secondary antibodies and detection substrates were used: Peroxidase-AffiniPure F(ab')2 Fragment Donkey Anti-Mouse IgG (polyclonal, 1:10,000, 1 h room temperature, Jackson ImmunoResearch Laboratories, West Grove, PA, USA, 715-036-150); Peroxidase-AffiniPure F(ab')2 Fragment Donkey Anti-Rabbit IgG (polyclonal, 1:10,000, 1 h [for Tsg101 2 h] room temperature, Jackson ImmunoResearch Laboratories, 711-036-152); Substrate: SuperSignal® West Femto Maximum Sensitivity Substrate, Thermo Fisher, 34095. Sample buffer conditions for gel run: non-reducing (denatured but without DTT) for detection of CD9, CD63, CD81; reducing for detection of HSP70, Tsg101, Syntenin, Prohibitin; detection of BSA works with both conditions.
Nanoparticle tracking analysis (NTA)
Average size distribution and particle concentration analyses of the EV samples were performed by NTA. At the beginning of the studies, a Nanosight LM10 instrument equipped with the NTA 2.0 analytical software was used, exactly as described previously [42]. Each sample was measured three times. The 50% median value (D50) and the standard deviation were calculated. For comparison of the differential centrifugation and the PEG 6000 precipitation procedures, both small and large scales, NTA was performed on the ZetaView platform (Particle Metrix, Meerbusch, Germany). The following dilutions and settings were used:
Electron microscopy
Drops of 4 µL of selected EV samples were loaded on formvar or carbon coated 300 mesh copper grids (Plano, Wetzlar, Germany) for 2 min. The grids were washed in double distilled water and contrasted with either 1% uranyl acetate (SPI Supplies, West Chester, USA) or 0.75% uranyl formate (SPI Supplies) for 1 min. Grids were air dried and examined either on a JEM1400 Plus electron microscope (JEOL, Tokyo, Japan) equipped with a LaB 6 cathode at 120 kV using a TemCam-XF416® FastScan CCD camera system (TVIPS, Gauting, Germany) for image acquisition or an EM 902A (Zeiss, Oberkochen, Germany) electron microscope at 80 kV with a Morada slow scan CCD camera connected to a PC running ITEM 5.2 capturing software (Olympus SIS, Münster, Germany).
PEG concentration
The PEG concentration of the samples was measured by a barium-iodide approach, which was analysed by the ODYSSEY CLx 096A infrared imaging system (LI-COR, Bad Homburg, Germany). In detail, 2-12 µg of the EV samples were re-suspended in SDS sample buffer (Thermo Fisher), heated for 10 min at 75°C and fractionated together with a PEG 6000 concentration series on 4-12% Bis-Tris NuPAGE gels (Thermo Fisher) via SDS PAGE. After electrophoresis for 40 min at 200 V in MOPS buffer (Thermo Fisher), gels were washed twice with distilled water. To visualise the PEG, gels were incubated for 10 min in freshly prepared 5% barium chloride (ROTH, Karlsruhe, Germany) and developed with 0.1 M iodide solution (ROTH, Karlsruhe, Germany) [43,44]. After the positive detection of the PEG signals, the solution was exchanged for distilled water. PEG signals were quantified with the ODYSSEY® CLx 096A infrared imaging system and Image Studio software (LI-COR, Bad Homburg, Germany).
miRNA analysis
Total RNA (including miRNA-fraction) was extracted from volumes of 400 µL of conditioned cell culture medium or from EV samples either obtained by the direct ultracentrifugation, differential centrifugation or the PEG method utilising the mirVana PARIS TM kit (Ambion, Austin, USA). All steps were performed according to the manufacturer's instructions.
Relative miRNA quantification was performed using the miScript system (Qiagen, Hilden, Germany) according to the manufacturer's instructions. All samples, including non-RT (without reverse transcriptase) and no-template controls, were assayed in duplicates. Mean Ct values and deviations between the duplicates were calculated. Samples with a deviation >0.5 within the duplicates or with any evidence of melting curve abnormality were repeated. Spike-in normalisation with a synthetic C. elegans derived cel-miR-54 sequence was performed to allow relative comparison across the analysed samples [45]. A normalised Ct value (Ctnorm) for let-7 or miR-16 was determined relative to the syn-cel-miR-54 signal (Ctnorm = Ct(miR-of-interest) − Ct(syn-cel-miR-54)). Denoted relative abundance values correspond to 2^(−Ctnorm).
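The spike-in normalisation described above reduces to a simple ΔCt calculation; the small Python sketch below implements the formula given in the text, with placeholder Ct values.

def relative_abundance(ct_mir_of_interest: float, ct_spike_in: float) -> float:
    # Relative abundance = 2 ** -(Ct_miR-of-interest - Ct_syn-cel-miR-54), as defined in the text.
    ct_norm = ct_mir_of_interest - ct_spike_in
    return 2 ** (-ct_norm)

print(relative_abundance(ct_mir_of_interest=25.0, ct_spike_in=20.0))  # placeholder values -> 0.03125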
Proteome analyses
Sample preparation for mass spectrometry of EV samples. For the in-gel digestion, 20 µg of EV samples obtained by the large-scale PEG method were separated by SDS PAGE. Staining was obtained by Imperial Stain according to the manufacturer's instructions (Thermo Fisher). After electrophoresis gels were washed three times for 5 min with distilled water and stained for 2 h. Subsequently, each gel lane was cut in five equal pieces. Destaining, alkylation, reduction, tryptic digestion and peptide extraction was performed according to Schrotter and colleagues [46]. FASP (filter-aided sample preparation) and in-solution digestion. 10 µg of EV samples obtained with the large-scale PEG method were diluted with 0.9% NaCl to volumes of 50 µL. Subsequent steps, i.e. carbamidomethylation, alkylation and tryptic digestion, were performed exactly as described before [47,48].
Peptide mass spectrometry (MS). All samples for MS measurements were purified via C18 tips as described by the manufacturer (Thermo Fisher). An equal peptide concentration for all samples was adjusted by Nanodrop analysis (Peqlab, Erlangen, Germany) and monolithic HPLC analysis (Agilent, California, USA). LC-ESI MS/MS measurements were performed on an LTQ Orbitrap Velos instrument (Thermo Fisher) combined with a Dionex UltiMate 3000 Rapid Separation Liquid Chromatography System (Thermo Scientific). For pre-concentration of peptides, a reversed-phase trapping column (Acclaim PepMap RSLC 100 μm × 2 cm, 3 μm particle size, 100 Å pore size, Dionex) in 0.1% TFA was used. Peptides were separated on a 75 μm RP column (RSLC 75 μm × 25 cm, 2 μm particle size, 100 Å pore size) using a gradient (A: 0.1% formic acid (FA); B: 0.1% FA, 84% ACN) ranging from 5 to 50% of solution B at a flow rate of 300 nL/min over 90 min. MS survey scans were acquired from 300 to 2,000 m/z at a resolution of 30,000 using the polysiloxane m/z 371.101236 as lock mass [49].
Results
HEK293T-CD63-eGFP cells release eGFP-labelled EVs that can easily be detected by fluorescence microscopy
To establish a scalable method to enrich sEVs from tissue culture supernatants, we initially sought a quick way to analyse the amount of precipitated EVs on a qualitative level. To this end, we created expression plasmids containing the coding region for CD63-eGFP fusion proteins, either in a transient expression plasmid (pEGFP-N1-CD63) or in a lentiviral vector (pCL6-CD63-eGFP). Upon comparison of the subcellular distribution of eGFP in HEK293T cells either transfected with the pEGFP-N1-CD63 plasmid or the empty pEGFP-N1 vector, a localised eGFP pattern was observed at the plasma membrane and within the endosomal compartment of HEK293T cells that were successfully transfected with the pEGFP-N1-CD63 plasmid. In contrast, successfully transfected pEGFP-N1 HEK293T cells showed a uniform eGFP distribution throughout the cells. To test for the presence of eGFP in EV-enriched fractions, supernatants of transfected cells were harvested and processed by direct ultracentrifugation. Drops of 1 to 2 µL of the dissolved pellets were transferred onto slides and analysed by fluorescence microscopy. In contrast to the drops of eGFP-transfected HEK293T cells, drops of CD63-eGFP transfected cells showed a high concentration of eGFP+ particles especially enriched at the drop edges, the contact zone of glass, air and liquid, that could easily be seen at 200 × magnification (Figure 1). Thus, we concluded that CD63-eGFP fusion proteins but not eGFP itself are efficiently targeted into EVs secreted by HEK293T cells. Subsequently, we transduced HEK293T cells with CD63-eGFP encoding lentiviral particles and observed a subcellular eGFP distribution comparable to the pEGFP-N1-CD63 transfected HEK293T cells. Furthermore, the direct ultracentrifugation pellets of corresponding supernatants were highly enriched in eGFP-labelled particles as well. To obtain a permanent cell source for eGFP labelled EVs, successfully transduced HEK293T-CD63-eGFP cells, which were identified as cells with high eGFP expression, were purified by fluorescent cell sorting. Their conditioned media were used as EV source for all experiments described below.
PEG precipitates EVs from conditioned cell culture medium
Next, we evaluated whether PEG is an appropriate reagent for the precipitation of CD63-eGFP labelled sEVs from HEK293T-CD63-eGFP cell conditioned media. For the initial setting, PEG 6000, PEG 8000 and PEG 20000 were used either at 6, 8, 10 and 12% or 15% final PEG concentration. Together with NaCl (75 mM final concentration) corresponding amounts of 50% (w/v) PEG stock solutions were added to 10 mL conditioned media that after harvesting as cell supernatants had been filtered through 0.22 µm filters before. Following incubation overnight at 4°C, the samples were centrifuged for 30 min at 1,500 × g. To remove residual PEG and reduce the amount of nonvesicle-associated proteins that might had been co-precipitated, we washed the EV fractions in PBS and reprecipitated them by ultracentrifugation. As a control, EVs from the same volume were precipitated by direct ultracentrifugation.
After confirming the presence of CD63-eGFP labelled EVs by fluorescence microscopy, the particle concentration and their size distribution were analysed by NTA [42,50]. As depicted in Figure 2(a), analysis of the particles harvested per mL conditioned media (CM) revealed PEG concentration dependence for all PEG variants. For PEG 6000, the highest particle yield was obtained at a concentration of 10-12%, for PEG 8000 and PEG 20000 at 8-10% (Figure 2(a)). At these PEG concentrations, the obtained particle numbers were almost in the same range as in the pellets obtained by ultracentrifugation (UC).
Independent of the molecular weight and the concentration of the PEG, the average sizes of the measured particles were in comparable ranges, with an average diameter of 143 ± 20 nm (Figure 2(b)). In terms of protein concentration within the recovered fraction, the protein content increased with the PEG concentration (Suppl. Table 1). The ratio of particles per mg protein was calculated and considered as purity index of the obtained EV samples (Figure 2(c)). All PEG enriched EV samples were of higher purity than those obtained by direct ultracentrifugation. On average, the highest purities were obtained with 12% PEG 6000, 8% PEG 8000 and 8% PEG 20000.
Since NTA cannot discriminate between vesicles and other particles, western blots for the exosomal marker protein Tsg101 were performed. To test for EV recovery rather than for purity, equal volumes of the obtained EV samples were loaded here (Figure 2(d)). In good agreement with the results of the NTA, the samples with the highest particle numbers were found to contain the highest amount of Tsg101. To further test for the presence of EVs within the obtained PEG samples, transmission electron microscopy (TEM) images were taken. Vesicle-like particles were found in all samples analysed (data not shown). In summary, 10-12% PEG 6000, 8-10% PEG 8000 as well as 8-10% PEG 20000 appeared appropriate to quantitatively precipitate sEVs. Since dissolved PEG 20000 has a higher viscosity and is more difficult to handle without revealing any advantages regarding recovery and purity, PEG 20000 was not considered in the following experiments.
Figure 2. Comparison of different PEG precipitation conditions. Conditioned media were diluted with PEG 6000 (P6), PEG 8000 (P8) or PEG 20000 (P20) to final concentrations of 6, 8, 10, 12 or 15% (v/v), respectively. Following incubation overnight at 4°C, EVs were precipitated by centrifugation at 1,500 × g for 30 min (4°C). EVs precipitated by direct ultracentrifugation (UC) served as control. Obtained pellets were re-suspended in PBS and re-precipitated at 110,000 × g for 2 h (4°C). After dissolving in 250 µL PBS, the particle concentration (n = 5; SD) (a) and their average size distribution (n = 5; SD) (b) were assessed by NTA (LM10). The purity of the obtained samples was determined as particle numbers (measured by NTA) per µg protein content (measured by the BCA assay) (n = 5; SEM) (c). The presence of the exosomal marker protein Tsg101 was analysed by western blot; 20 µL of each fraction were loaded per lane (d). Conditioned media were diluted with PEG 8000 to a final concentration of 10% PEG 8000 (v/v). Following incubation for 1, 2, 4 and 8 h or overnight (16 h) at 4°C, respectively, EVs were precipitated by centrifugation at 1,500 × g for 30 min (4°C). Obtained pellets were re-suspended in PBS and re-precipitated at 110,000 × g for 2 h (4°C). After dissolving in 250 µL PBS, the particle concentration was measured by NTA (n = 3; SD) and the presence of the exosomal marker protein Tsg101 was analysed by western blot; 5 µg protein of each fraction was loaded per lane (e).
The efficacy of PEG precipitation increases over time, reaching the maximum after 8 h
Following the initial experiments in which the sEVs were precipitated overnight, we aimed to reduce the preparation time and compared the EV yield of 10 mL of HEK293T-CD63-eGFP conditioned media upon reducing the precipitation time to 1, 2, 4 or 8 h at 4°C. Overnight precipitation served as control. As a representative of the selected PEG precipitation conditions, we chose the 10% PEG 8000 precipitation. We observed green fluorescent particles under all conditions. NTA and western blot analyses for Tsg101 showed increasing particle numbers and Tsg101 intensity with increasing precipitation times. The maximum recovery was observed in both analyses (NTA and western blot) after 8 h (Figure 2(e)).
Comparison of different methods for EV enrichment
In order to assess the efficacy of the PEG precipitation method, we decided to prepare sEVs from HEK293T-CD63-eGFP conditioned media with different EV purification technologies. We performed differential centrifugation, with and without subsequent sucrose gradient centrifugation or size exclusion chromatography using a 1.5 × 45 cm Sepharose CL-2B column, respectively. In relation to the differential centrifugation protocol, 10% PEG 6000 and 10% PEG 8000 precipitation were performed with supernatants that had been centrifuged at 10,000 × g. Direct ultracentrifugation served as control.
All methods were accomplished with a starting volume of 10 mL of freshly harvested conditioned media of HEK293T-CD63 cells.
All obtained samples were analysed by NTA and their protein content was determined. We noticed a significant loss in particle numbers following sucrose density gradient and size exclusion chromatography (Figure 3(a)); simultaneously, a slight reduction of the average particle sizes was observed in these samples (Figure 3(b)). In addition, their protein content was very low. However, these preparations revealed the highest particle content per protein amount, which indicates a higher purity of these samples compared with the EV samples that were obtained by PEG precipitation, ultracentrifugation or differential centrifugation, respectively (Figure 3(c)). The yield we obtained from 10 mL conditioned medium was not sufficient to perform all analyses for each sample. In particular, western blots and cellular uptake experiments were not performed with the size exclusion samples and only with reduced amounts of the sucrose gradient samples.
The western blot analyses of 10 µg of the tested EV preparation revealed similar intensities for Tsg101 in the UC, DC and PEG samples. A stronger Tsg101 signal was obtained for the sucrose gradient fraction (Figure 3(d)). Differences in the morphology of the EVs prepared by the different methods have not been identified (Figure 3(e)). Thus, as already suggested by the particle to protein ratio, the sucrose gradient preparation allowed for the highest sEV purification amongst the methods tested here.
PEG does not recognisably affect the uptake of enriched sEVs by their target cells
Aiming to establish a scalable method that can be used for the purification of functional sEVs, it is important that the purification method does not affect their biological properties. Considering that uptake by target cells is an essential prerequisite for EVs to fulfil their function, uptake experiments were performed. To this end, HEK293T-CD63-eGFP sEVs prepared with the different methods were added to cells of the human mesenchymal stromal cell line N-KM. After 18 h, the cells were analysed via fluorescence microscopy (Figure 3(f)) and flow cytometry (Figure 3(g)).
Both technological platforms demonstrated that PEG-precipitated EVs were incorporated in virtually all N-KM cells. The eGFP tagged protein was concentrated in the perinuclear region of the HEK293T-CD63-eGFP EV treated N-KM cells (Figure 3(f)). Flow cytometric data revealed comparable eGFP intensities in N-KM cells that were incubated with EV samples of the PEG precipitation or the differential centrifugation. N-KM cells that had been incubated with sEVs obtained from direct ultracentrifugation, however, showed a more intense eGFP labelling (Figure 3(g)). Most likely, this high intensity was caused by non-incorporated eGFP-positive aggregates that had been bound to the extracellular surface of corresponding cells (Figure 3(f)). In contrast, N-KM cells cultured in the presence of sucrose density gradient centrifugation purified HEK293T-CD63-eGFP EVs hardly showed any eGFP label (Figure 3(f,g)).
Since comparable amounts of eGFP + particles were detected by fluorescence microscopy in sample drops (data not shown), our findings suggest that sucrose alters the EVs' physiological features during the preparation process. In contrast, PEG precipitation appears as a reliable method to enrich for EVs without affecting their uptake by selected target cells. Based on our results, we concluded that size exclusion chromatography and sucrose density centrifugation in the applied form are not appropriate to prepare EVs in larger amounts for subsequent functional studies.
The PEG-precipitation procedure is a reproducible EV preparation method

So far, the PEG precipitation appeared to be a reliable method to enrich sEVs from cell culture supernatants. To finally qualify a robust PEG protocol, we slightly changed the protocol. Since PEG 20000 is very viscous and more difficult to handle than lower molecular weight PEG variants, we considered that a lower molecular weight of PEG facilitates its removal from given sEV samples. Thus, as we had not detected significant differences between PEG 6000 and PEG 8000 precipitation, we decided to continue with PEG 6000 only. To keep the concentration as low as possible, we utilised 10% instead of 12% final PEG concentration. To test for the reproducibility of the PEG method, we used conditioned media from four different, single cell-derived CD63-eGFP HEK293T cell lines (clones C5, F5, E7 and B2). Furthermore, we compared the PEG method to the differential centrifugation method. To this end, 60 mL of frozen and thawed conditioned media of each HEK293T-CD63-eGFP cell clone were split into six aliquots, 10 mL each. Three aliquots per clone were processed according to the differential centrifugation protocol. The other three samples per clone were processed according to the PEG protocol and re-suspended in a final volume of 250 µL 0.9% NaCl. Of note, calcium phosphate is poorly soluble in water (2.07 × 10⁻³³ mol/l) and efficiently forms nano- to micro-sized crystals (Ca₅(PO₄)₃OH), which can be detected by NTA [51] (Suppl. Figure 1). As biological samples often contain Ca²⁺ ions, which may end up in prepared EV samples, we decided to replace PBS and any other phosphate-based buffer with clinical grade 0.9% NaCl as solvent and washing solution for this and the following experimental part.

Figure 3. Comparison of different methods to enrich nano-sized EVs. EVs from 10 mL HEK293T-CD63-eGFP (single-cell clones C5, F5, E7 and B2) cell conditioned media were either prepared by direct ultracentrifugation (UC), differential centrifugation (DC), PEG 6000 or PEG 8000 precipitation, density gradient centrifugation (DG), or size-exclusion chromatography (SE). The particle concentration (a) and the average size distribution (b) were assessed by NTA (NanoSight LM10) (n = 3; SD). The purity of the obtained samples was determined as particle numbers (measured by NTA) per µg protein content (measured by the BCA assay) (n = 3; SD) (c). The presence of the exosomal marker protein Tsg101 was analysed by western blot; 10 µg of each fraction were loaded per lane (d). Electron micrographs of samples from UC, DC, DG and PEG enriched EVs were acquired; scale bars 200 nm (e). To test for potential impacts on the physiology of the obtained EVs, uptake experiments were performed: 1 × 10⁸ particles as estimated by NTA were supplemented to the media of N-KM cells. After 14-16 h, pictures were taken (f) before the amount of the eGFP labelling was quantified as the mean fluorescence intensity (MFI) by flow cytometry (n = 3; SD, t-test, *p > 0.05) (g). Scale bars 10 µm.
All obtained samples were analysed by NTA to calculate the total particle numbers and the average size of harvested particles; in addition, the total protein content of all EV samples was determined, allowing calculation of the EVs' purity as particles per mg protein (Figure 4(a-c)). In terms of particle numbers, higher yields and purities were obtained with the PEG precipitation protocol than with the differential centrifugation protocol (Figure 4(a,c)). Particle sizes and amounts were comparable between each set of the three samples of the conditioned media obtained from the four different HEK293T-CD63-eGFP clones when the PEG method was used (Figure 4(a,b)). Except for one EV sample derived from the supernatant of HEK293T-CD63-eGFP clone F5 cells, the purity within the triplicates was comparable as well (Figure 4(c)). To a larger extent, inter-experimental differences concerning the particle numbers were observed in the samples obtained by the differential centrifugation method (Figure 4(a)). Thus, in our hands, the PEG method provides higher particle yields and purities and is more reproducible than the differential centrifugation method.

Figure 4. PEG-precipitation versus differential centrifugation (DC). EVs from 10 mL HEK293T-CD63-eGFP cell conditioned media were either prepared by PEG precipitation or differential centrifugation. The particle concentration per fraction (a) and the average size distribution (b) were assessed by NTA (ZetaView) (n = 3; SD). The purity of the obtained samples was determined as particle numbers (measured by NTA) per mg protein content (measured by the BCA assay) (c). The presence of the EV marker proteins HSP70, CD63, CD81 and CD9 within the individual PEG samples was analysed by western blot; 5 µg proteins were loaded per lane (d). The content of CD81, CD9 and BSA was compared in final EV samples harvested from HEK293T-CD63-eGFP clones C5, F5 and E7, either harvested by PEG 6000 precipitation (PUP) or differential centrifugation (DCP). Proteins obtained from supernatants of the final PEG pellets (PUS) or the ultracentrifugation pellet (DCS) obtained in the differential centrifugation method served as controls; 2.5 µg proteins were loaded per lane (e). Of note, following analyses of CD81, blots were stripped and re-analysed for the presence of BSA.
To confirm the reproducibility of the PEG method at the molecular level, western blot analyses with anti-HSP70, anti-CD-63, anti-CD81 and anti-CD9 were performed (Figure 4(d)). For all tested antigens, the signal intensities of the bands were comparable in each of the triplicates (Figure 4(d)). To test for contamination by serum proteins and as HEK293T cells were cultured in the presence of 10% FBS, we compared the BSA content with that of CD81 and CD9 in representative samples. To this end, one of the final PEG and one of the final differential centrifugation samples of the HEK293T-CD63-eGFP clones C5, F5, E7, either obtained by PEG 6000 precipitation or differential centrifugation, were analysed. For comparison, proteins of corresponding final supernatant samples of the PEG precipitation and supernatant samples of the first ultracentrifugation step of the differential centrifugation procedure were analysed as well. CD81 and CD9 were specifically detected in all final samples analysed, but in none of the supernatants (Figure 4(e)). A clear BSA signal was obtained in the ultracentrifugation supernatant samples of the differential centrifugation method, and a very weak signal in ultracentrifugation supernatants of the PEG method. In a comparable manner, BSA signals were hardly detected in the final PEG samples, but faint bands were recognised in the final differential centrifugation samples (Figure 4(e)).
Thus, the western blot data confirm a high reproducibility of the final PEG protocol. Furthermore, the results demonstrate that PEG-precipitated sEVs are less contaminated with BSA than sEVs prepared with the differential centrifugation method.
The PEG-precipitation procedure is a scalable method
To test for the scalability of the PEG method, 360 mL conditioned media of each of the four different clonal HEK293T-CD63-eGFP cell lines were processed with the scaled PEG method in 500 mL conical centrifuge vials. Owing to speed limitations of the vials and the JS-5.3 swing-out rotor, thawed conditioned media were centrifuged at 6,000 × g instead of 10,000 × g. PEG and NaCl were added in a scaled manner. Final EV pellets were resuspended in 1 mL 0.9% NaCl and used for all downstream analyses.
The yield of particles per 1 mL conditioned medium as measured by NTA was roughly in the same range of magnitude for all four preparations as that of the corresponding low-scale PEG preparations (Figures 4(a) and 5(a)). The average size distribution and purity were comparable as well (Figures 4(b,c) and 5(b,c)). Thus, these data support the scalability of the PEG method. Since we had enough material for downstream analyses here, we comprehensively analysed the obtained samples. Comparable to the low-scale PEG samples, all final samples showed clear bands in western blot analyses probed with anti-HSP70, anti-Tsg101, anti-CD63, anti-CD81 and anti-CD9 antibodies, respectively. Additionally, clear bands were obtained in western blots for Syntenin but not for BSA, which was probed on the same blot as Syntenin (Figure 5(d)). To test for the presence of contaminating proteins, western blots for BSA and the mitochondrial protein Prohibitin were performed. Prohibitin and BSA were below the detection levels in all final samples (Figure 5(d)).
To document enrichment of the EVs during the PEG 6000 precipitation procedure, western blot analyses for CD81, CD9, HSP70 and Tsg101 were performed on samples of the conditioned media from cells of all four HEK293T-CD63-eGFP clones, on the supernatants and the pellets of the 6,000 × g centrifugation step, the PEG pellets and on the supernatants and the pellets of the final ultracentrifugation procedure. CD81, CD9 and Tsg101 were detected in all PEG pellets and in a more enriched manner in the final samples (Suppl. Figure 2A). Parts of the HSP70 were also found in the supernatant of the ultracentrifugation pellet, resulting in comparable HSP70 concentrations in PEG and ultracentrifugation pellets (Suppl. Figure 2A). In a representative manner, corresponding samples obtained from HEK293T-CD63-eGFP clones E7 and B2 conditioned media were also probed with anti-BSA antibodies. Here, an opposite picture was obtained. BSA was detected in high amounts in the conditioned media and the supernatants and pellets of the 6,000 × g centrifugation step. Massively reduced BSA levels were detected in the PEG pellet and the supernatant of the final ultracentrifugation step but not in the final EV fraction, which, on the same blot, showed strong enrichment of CD81 (Suppl. Figure 2B). The mitochondrial protein Prohibitin was used as a representative marker to detect contaminations caused by cell organelles. It was detected in cell lysates but not in any of the conditioned media fractions (data not shown).
Our analyses demonstrate that PEG 6000 precipitation results in a specific enrichment of EV-associated proteins. Furthermore, co-precipitated contaminants within obtained PEG 6000 pellets, such as BSA, can effectively be reduced from the fraction of the PEG pellet by an additional ultracentrifugation step. Cellular organelle contaminations were not detected. Thus, the large-scale PEG precipitation appears to be a reliable method to highly concentrate sEVs.
To test whether enriched EVs can be taken up by target cells, obtained EVs were added to N-KM cells as described above. Fluorescence microscopy and flow cytometric analyses confirmed incorporation of eGFP into the N-KM cells (Figure 5(e)), implying that purified CD63-eGFP labelled EVs still retain their properties to be taken up by N-KM cells. Using the same protocol to prepare sEVs from supernatants of MSCs, we demonstrated that these MSC-EVs were able to exert therapeutic effects in different animal models and in a human GvHD patient [14,16,52,53]. Since these PEG-prepared sEV samples exerted the same therapeutic effects in a murine ischemic stroke model as corresponding MSCs [16], our results demonstrate the usability of the PEG-precipitation method as a scalable method for the enrichment of biologically active sEVs.
PEG is efficiently depleted during the washing procedure
In order to quantify the residual PEG concentration in the final HEK293T-CD63-eGFP EV samples that might interfere with down-stream applications, different approaches were performed. Initially, a chloroform/ammonium-iron thiocyanate extraction method was used to quantify pure PEG [54]. Although PEG could be quantified in PEG-containing control samples, the PEG concentration within the final HEK293T-CD63-eGFP EV samples (clone B2 and C5 EVs were tested) was below the optimal detection range of this method (data not shown). Thus, a more sensitive barium-iodide staining technique [43] was selected to analyse HEK293T-CD63-eGFP EV samples. For the quantification, different dilutions of pure PEG were separated on an SDS gel in parallel to EVs harvested from supernatants of HEK293T-CD63-eGFP clone B2 and C5 cells. Following the barium-iodide staining, the PEG bands of the final EV samples showed similar intensities as the bands of the 0.1% PEG dilutions. The analyses of the PEG signals from the EV samples and from the PEG dilution series revealed a PEG concentration of 0.02% in both final HEK293T-CD63-eGFP EV samples tested (Figure 6(a,c)). To visualise proteins, the gels were counterstained with the protein-specific imperial stain. In contrast to the PEG dilutions, EV sample containing lanes were positively stained (Figure 6(b)). Thus, only very low PEG levels were detected in both final HEK293T-CD63-eGFP EV samples, demonstrating that the PEG used for the EV precipitation was efficiently washed out during the final steps of the applied PEG precipitation procedure.
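The quantification step described here rests on a linear calibration of band signal against loaded PEG amount, which is then inverted for the EV lanes (see the calibration equation reported in the Figure 6 caption). The short Python sketch below illustrates that calibration-and-inversion logic; all intensity values are hypothetical, and the resulting fit coefficients will differ from those reported for the actual gels.

```python
import numpy as np

# Calibration sketch for the barium-iodide PEG stain: fit a line to PEG
# standards (band signal vs. loaded amount), then invert it to estimate
# residual PEG in EV sample lanes. All numeric values are hypothetical.

std_peg_ug = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])       # PEG standards loaded (µg)
std_signal = np.array([5.1, 7.4, 11.2, 19.3, 42.7, 81.9, 160.5])   # densitometry readout

slope, intercept = np.polyfit(std_peg_ug, std_signal, deg=1)        # signal = slope*amount + intercept

def peg_from_signal(signal: float) -> float:
    """Invert the calibration line to estimate the PEG amount (µg) in a sample lane."""
    return (signal - intercept) / slope

sample_signal = 6.0   # hypothetical band intensity of a final EV lane
print(f"Estimated residual PEG: {peg_from_signal(sample_signal):.2f} µg")
```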
Following FASP, PEG samples can be effectively analysed by proteomic profiling
Owing to the presence of some residual PEG in final samples, its potential interference with proteomic profiling was investigated in a proof of principle experiment. In order to deplete residual PEG, an SDS-PAGE of 20 µg of the final HEK293T-CD63-eGFP clone C5 EV sample was run, including a proteolytic in-gel digestion. Owing to the sensitivity of modern MS/MS devices, PEG-originated signals were still detectable (Suppl. Figure 3A), which interfered with the peptide counts of corresponding peptides, resulting in a decrease of the sensitivity of the proteome analysis. To eliminate the residual PEG, a FASP was performed. After FASP, purified samples hardly contained any PEG remnants detectable in subsequent MS/MS analyses (Suppl. Figure 3B). Thus, in combination with the ultracentrifugation-based washing procedure and the FASP, the applied PEG procedure can be combined with proteomic analyses.
PEG-precipitation does not interfere with the enrichment and detection of vesicle-associated microRNAs
To test whether residual PEG interferes with the enrichment and quantitative assessment of EV-associated microRNAs (miRNAs), a proof of principle miRNA analysis on a PEG-precipitated HEK293T-CD63-eGFP sEV fraction was performed in comparison to corresponding sEV samples obtained from differential centrifugation or direct ultracentrifugation, respectively. To this end, the abundance of the EV-associated miRNA let-7a and of the Argonaute 2-associated miR-16, reported as being excluded from EVs [45,55], was analysed.
Using the PCR-based StepOnePlus™ system, both microRNAs were shown to be expressed in HEK293T-CD63-eGFP cells. In RNA fractions extracted from 400 µL HEK293T-CD63-eGFP cell-conditioned medium, a robust miR-16 level was confirmed in a proof of principle experiment, whereas let-7a was hardly detectable. In contrast, the sEV samples prepared from volumes of 50 mL HEK293T-CD63-eGFP cell-conditioned medium contained increased levels of let-7a, but hardly any miR-16 (Suppl. Figure 4). The highest let-7a levels were found in the sEV fraction prepared with the PEG method. Thus, the PEG-precipitation apparently does not interfere with the qualitative and quantitative assessment of EV-associated microRNAs.
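Comparisons of this kind are commonly expressed as relative abundances derived from qPCR Ct values. As a purely illustrative sketch, with hypothetical Ct values and a normalisation strategy that is not taken from the study, the 2^(-ΔCt) form can be computed as follows:

```python
# Relative qPCR abundance sketch (2^-ΔCt form). Ct values are hypothetical and
# only illustrate how let-7a enrichment and miR-16 depletion in an sEV fraction
# versus conditioned medium could be expressed; equal PCR efficiency is assumed.

def relative_abundance(ct_sample: float, ct_reference: float) -> float:
    """Fold difference of a target between two samples (lower Ct = more template)."""
    return 2.0 ** (ct_reference - ct_sample)

ct = {
    "let7a_sEV": 24.0, "let7a_medium": 30.5,
    "miR16_sEV": 33.0, "miR16_medium": 26.5,
}

print("let-7a, sEV vs medium:", relative_abundance(ct["let7a_sEV"], ct["let7a_medium"]))
print("miR-16, sEV vs medium:", relative_abundance(ct["miR16_sEV"], ct["miR16_medium"]))
```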
Discussion
To establish a cost-effective method allowing preparation of sEVs from larger sample volumes (up to several litres) for functional studies, we have optimised a PEG precipitation protocol. To efficiently concentrate sEVs, we finally added PEG 6000 and NaCl to final concentrations of 10% PEG and 75 mM NaCl in conditioned cell culture media. Following incubation for at least 8 h at 4°C, sEVs were precipitated at 1,500 × g. The presence of contaminants was reduced by washing with 0.9% NaCl and re-precipitation of the sEVs by ultracentrifugation. Our results confirm the reproducibility and scalability of the developed method. In proof of principle experiments, we show that sEVs prepared with the PEG method are effectively incorporated into target cells. Furthermore, we have successfully used the optimised PEG method to prepare sEV samples for the preclinical and clinical setting from several litres of MSC-conditioned media [14,16,52,53]. Upon comparing the therapeutic impacts of such MSC-EV samples with those of corresponding MSCs in a murine ischemic stroke model, we did not detect any functional difference [16]. We also did not detect any impact on the average particle size of sEVs. Thus, PEG, which under certain conditions can promote membrane fusion, apparently does not interfere with the integrity of EVs and allows preparation of functional sEVs from cell culture supernatants.
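For readers who want to translate the final conditions (10% PEG 6000, 75 mM NaCl) into pipetting volumes, the sketch below solves the simple dilution arithmetic for an arbitrary volume of conditioned medium. The stock concentrations used here (50% w/v PEG 6000, 5 M NaCl) are assumptions chosen for illustration and are not taken from the paper; the calculation also ignores salts already present in the medium.

```python
# Sketch: stock volumes needed to bring a given volume of conditioned medium to
# 10% PEG 6000 and 75 mM NaCl. Stock concentrations (50% w/v PEG, 5 M NaCl) are
# illustrative assumptions; salts already present in the medium are ignored.

def peg_nacl_additions(media_ml: float,
                       peg_stock_pct: float = 50.0, final_peg_pct: float = 10.0,
                       nacl_stock_mm: float = 5000.0, final_nacl_mm: float = 75.0):
    """Return (peg_ml, nacl_ml) so that the final mixture reaches the target
    PEG and NaCl concentrations, accounting for the added stock volumes."""
    a = final_peg_pct / peg_stock_pct      # PEG stock as fraction of final volume
    b = final_nacl_mm / nacl_stock_mm      # NaCl stock as fraction of final volume
    total_ml = media_ml / (1.0 - a - b)    # final volume after both additions
    return a * total_ml, b * total_ml

peg_ml, nacl_ml = peg_nacl_additions(360.0)   # e.g. one scaled 360 mL preparation
print(f"Add {peg_ml:.1f} mL PEG stock and {nacl_ml:.1f} mL NaCl stock")
```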
As described, sEVs seem to exert most therapeutic functions of MSCs [16,56]. However, in other cell systems larger vesicles have been found to also mediate functional activities [57]. Although we have not tested this, larger EVs very likely can be co-enriched if the 10,000/6,000 × g centrifugation and the 0.22 µm filtration steps are dropped. In the presented experiments, we used unprocessed FBS to raise the HEK293T-CD63-eGFP cells. As FBS is known to contain sEVs, non-metabolised FBS sEVs were very likely included in the obtained sEV samples. It was not our aim to remove them or estimate their content. However, if prepared sEV samples should not contain any medium-derived sEVs, the usage of EV-reduced/depleted/free media is required to raise the cells. The PEG method certainly will not allow separation of different sEV types from each other.
In addition to functional studies, obtained sEV samples can be used for proteomic and miRNA analyses. Indeed, Andreu and colleagues compared several commercial and non-commercial methods to enrich EVs from frozen serum samples to subsequently extract EV-associated miRNAs and obtained the best result with our optimised PEG protocol [58]. In addition, other groups have enriched EVs via PEG precipitation and successfully used these EVs for proteomic analyses [59,60]. Thus, we and others have demonstrated that, in addition to functional assays, PEG precipitation is a reliable method to enrich EVs for molecular downstream analyses. However, upon comparison of sEV samples prepared with the optimised PEG protocol with sEV samples prepared by size exclusion chromatography or sucrose density gradient centrifugation, we learned that higher purity indices can be obtained with the latter methods. Accordingly, we would like to conclude that, despite their functional activities, PEG-prepared sEV samples contain several non-EV associated molecules, which have not been identified yet. Especially if sEVs are enriched from plasma samples or other protein-rich body liquids, non-EV associated molecules might be far more strongly concentrated than the sEVs, questioning whether PEG precipitation should be the method of choice for the sEV preparation from such liquids. Similarly, although PEG-prepared samples can be used for molecular analyses, other low-scale methods able to enrich EVs to higher purities might be preferred for down-stream molecular analysis. In this context, we and others have demonstrated that the combination of different methods or immune-capturing technologies allows the separation of sEVs from other components frequently co-purified with sEVs and provides several surprises, e.g. that several miRNAs considered to be EV-associated are eluted in fractions other than those containing exosomal marker proteins [61][62][63][64].
Regarding the reproducibility of the optimised PEG protocol, we learned that the results obtained with the PEG protocol were much more reproducible than those obtained with the conventional differential centrifugation (DC) protocol. Following UC in the DC protocol, EV pellets appeared very fragile and immediately partially dissolved in the supernatant if they were not handled very carefully. To avoid EV loss, some residual supernatant is commonly left on the pellet. Especially non-trained experimenters tend to leave higher volumes of supernatant on the pellets than trained ones. Since supernatants have very high protein contents, residual amounts of the supernatant significantly affect the purity of obtained sEV samples, even after a washing step. Maybe by means of some residual PEG, the UC pellet following PEG precipitation is much more solid and was found to hardly get lost even if non-trained experimenters performed the PEG precipitation. Remaining supernatant could easily be rinsed off without affecting the integrity of these UC pellets. Although intensive training helped to increase the purity of DC-prepared sEV samples over time, even the best trained experimenters hardly reached the same purity with the DC method as with the optimised PEG protocol. BSA contaminations were always detected in final DC sEV samples but hardly in any of the final PEG sEV samples (Figures 4(e) and 5(d)). Thus, purer sEV samples can be obtained with the optimised PEG method compared to the conventional DC method.

Figure 6. Detection of residual PEG. Two gels were loaded with PEG 6000 in decreasing amounts (10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.01 µg), protein marker, and 6 µg of the final EV samples of HEK293T-CD63-eGFP clone B2 and C5 cells. Gels were run under identical conditions. PEG was detectable after barium-iodide staining (a); proteins following the protein-specific imperial stain (b). Signals in (a) were recorded with Image Studio™ Software (LI-COR) and plotted against the PEG concentrations of the calibration standards (c). The resulting linear calibration, y = 15.692x + 3.4984, was used to calculate the residual PEG concentration of 0.02% in the final EV samples.
Several companies provide polymer-based EV precipitation reagents, commonly also based on PEG. Washing of the resulting pellets is usually not part of the proposed protocols. As demonstrated in Suppl. Figure 2, PEG pellets prepared with our method contain detectable levels of BSA before ultracentrifugation. Following ultracentrifugation, the BSA is mainly recovered in the supernatant but not in the final sEV fraction. Although we have not tested any of the commercial EV precipitation polymers in the present investigation, we predict that a washing procedure will also help to increase the purity of these samples. Similar to our results, Rider and colleagues demonstrated that, following PEG precipitation, ultracentrifugation increased the purity of the obtained EV samples [59].
Upon comparing sEV samples prepared by PEG precipitation with EV samples prepared by sucrose density gradient centrifugation or size exclusion chromatography, respectively, we learned that higher purities (indicated as particle numbers measured by NTA per mg protein within the EV fraction) are obtained with the latter methods. However, in our hands, the recovery rates were not very high and sucrose gradient purified EVs did not get incorporated into target cells. Apparently, sucrose density gradient centrifugation affects the physiological properties of the prepared EVs. Thus, although the PEG method may not purify EVs to the highest level, EVs apparently retain their physiological activity. Furthermore, the method is scalable; for example, for our pre-clinical studies we purified up to 10 L of MSC-conditioned media within 2 days with the optimised protocol. Thus, for the first time, the PEG precipitation method provided the chance to process volumes required to obtain enough EVs for the clinical setting [14]. Although we and others currently focus on closed systems to prepare EVs for the clinical setting, such as tangential flow filtration, possibly in combination with other methods [61,65,66], given their confirmed therapeutic effects, PEG-prepared EVs will for the moment serve as a reference regarding purity and activity. | 2018-11-10T06:29:28.076Z | 2018-10-17T00:00:00.000 | {
"year": 2018,
"sha1": "eb2cff484f6f67b833d6323a323747dbb56f779b",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20013078.2018.1528109?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb2cff484f6f67b833d6323a323747dbb56f779b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
240549944 | pes2o/s2orc | v3-fos-license | A rein tension signal can be reduced by half in a single training session
Rein tension signals are, in essence, pressures applied on the horse's mouth or nose, via the bit/noseband, by a rider or trainer. These pressures may feel uncomfortable or even painful to the horse and therefore it is important to reduce rein tension magnitude to a minimum. The aim of this study was to investigate the magnitude of a rein tension signal for backing up, using negative reinforcement. We wanted to assess how much the magnitude of rein tension could be reduced over eight trials and if the learning process would differ depending on headstall (bridle/halter). Twenty Warmblood horses were trained to step back from a rein tension signal with the handler standing next to the horse, holding the hands above the horse's withers. As soon as the horses stepped back, rein tension was released. The horses were either trained with a bridle first (first treatment, eight trials) and then with a halter (second treatment, eight trials), or vice versa in a cross-over design. All horses wore a rein tension meter and behavior was recorded from video. The sum of left and right maximum rein tension from onset of the rein tension signal to onset of backing (signaling rein tension) was determined for each trial. Mixed linear and logistic regression models were used for the data analysis. In both treatments, signaling rein tension was significantly lower in trials 7-8 than in the first trial (p < 0.02). Likewise, signaling rein tension was significantly lower (p < 0.01), and the horses responded significantly faster (p < 0.001), in the second treatment compared to the first, regardless of headstall. The maximum rein tension was reduced from 35 N to 17 N for bridle (sum of left and right rein) and from 25 N to 15 N for halter in the first eight trials. Rein tension was then further reduced to 10 N for both bridle and halter over the eight additional trials in the second treatment, i.e. to approximately 5 N in each rein. There was no significant difference in learning performance depending on headstall, but the bitted bridle was associated with significantly more head/neck/mouth behaviors. These results suggest that it is possible to reduce maximum rein tension by half in just eight trials. The findings demonstrate how quickly the horse can be taught to respond to progressively lower magnitudes of rein tension through the correct application of negative reinforcement, suggesting possibilities for substantial improvement of equine welfare during training.
Introduction
At present, training horses to perform various tasks is mainly accomplished through the use of pressure signals to elicit desired responses from the horse. Pressure signals used in horse riding are generally rein tension creating mouth/nose pressure, leg pressure on the horse's belly, weight shifts in the saddle, and/or tapping with the whip.
The structures of the horse's mouth and head are sensitive and mouth injuries related to bridles and bits are common in horses (Björnsdóttir et al., 2014; Uldahl and Clayton, 2019; Tuomola et al., 2021). Whereas the noseband and the type of bit have been found to influence the occurrence of mouth injuries (Björnsdóttir et al., 2014; Uldahl and Clayton, 2019), it is likely that the magnitude of rein tension is an even more important factor for the development of oral lesions (Mellor, 2020). Likewise, research suggests that even naive horses may find pressure from the bit in the mouth aversive (Christensen et al., 2011) and it seems that horses prefer lower levels of rein tension than what riders generally apply (Piccolo and Kienapfel, 2019). Well-informed horse trainers are aware of the principles of operant conditioning (McLean and Christensen, 2017) and systematically use pressure and timely release to train and maintain responses through negative reinforcement (Brown and Connor, 2017), i.e. releasing the pressure at the moment the horse performs the correct behavior, which increases the likelihood that the behavior will appear again if the same stimulus is repeated (Pearce, 2008). Likewise, well-informed horse trainers will be aware of the associative learning principles of classical conditioning. In essence, by being consistent in starting each pressure signal with a light pressure, the initial light pressure becomes a conditioned stimulus, a signal, predicting the arrival of the increasing pressure (Baragli et al., 2015). Over repetitions, the horse will make an association between the initial, light pressure and the subsequent escalating pressure and will respond already at the light pressure signal (McGreevy and McLean, 2007). There are, however, knowledge gaps regarding the correct application of the learning principles among riders and horse trainers (Warren-Smith and McGreevy, 2008; Brown and Connor, 2017), e.g. the importance of the timing of the release of pressure and of always starting with a light signal (McLean and Christensen, 2017). Relentless pressure or unpredictable pressure signals can cause stress and discomfort for the horse (McLean and McGreevy, 2010) and therefore further education of equestrians in the principles of operant and classical conditioning is needed (Telatin et al., 2016). Moreover, it has been found that training horses through negative reinforcement can lead to a negative perception of humans (Sankey et al., 2010) and stress-related behaviors (Hendriksen et al., 2011; Freymond et al., 2014). Subsequently, training horses to respond to rein tension signals using negative reinforcement may pose welfare risks and is important to investigate further.
While negative reinforcement is an operant learning principle that has been recognized and quantified since the experiments conducted by Skinner (1938), Ahrendt et al. (2015) was, to our knowledge, the first to conduct a standardized test for investigating negative reinforcement learning in horses. They trained horses to yield the hindquarters by applying pressure on the horses' hindquarters using an algometer. Inspired by Ahrendt et al. (2015), this study was designed to learn more about negative reinforcement learning of rein tension signals in horses.
Our hypothesis was that through the correct application of negative reinforcement, the magnitude of a rein tension signal can be substantially reduced over the course of a single training session. Further, it was hypothesized that the type of headstall used (bridle/halter) would not affect learning performance. The aim of this study was therefore to investigate the magnitude of a rein tension signal over repeated trials. We wanted to assess how much the magnitude of a rein tension signal could be reduced over eight trials of a backing up exercise and if the learning process would be similar regardless if the horse was trained with a bitted bridle or a halter.
Materials and methods
This study was carried out over three days in May 2019 at an Equestrian Center in Sweden. The Animal Ethics Board in Uppsala, Sweden, had given an ethical approval for the study, Dnr 5.8.18-02567/2019.

A study describing the characteristics of the rein tension signal has been published previously using the same data collection. Materials and methods will thus be summarized, and further details can be found in Eisersiö et al. (2021).
Twenty Warmblood horses participated in the study: 10 young horses (4-5 years old, five mares, five geldings) and 10 mature horses (>7 years old, mean 10.3 years ± 2.65, four mares, six geldings). The young horses had been in training under saddle 1-2 years. The mature horses had been trained in dressage and jumping more than 4 years. The horses were school horses at an Equestrian Center. The horses had all been backed up before, as part of normal handling and training, but the riding exercise rein back had not been specifically trained in the young horses. All horses were checked for soundness and health by a veterinarian and the staff at the Center on a regular basis. Oral exams of all horses were conducted two weeks prior to the data collection by a veterinarian with expertise in equine dentistry. The veterinarian considered all horses fit to participate in the experiment.
In brief, the experiment was conducted in an aisle (7 × 2 m) in a grooming area at the Equestrian Center. The order in which the horses entered the experiment was based on the daily activities of that horse, e.g. participation in riding lessons. Every other horse was placed in Group 1 (four young, six mature horses) and every other in Group 2 (six young, four mature horses). Group 1 was first fitted with the bridle and then the halter, and Group 2 was tested with the opposite order of headstalls. Each horse was tested once and on one day. For the bridle treatment, the horses wore their own bridle and bit (noseband removed). Eleven horses wore three-piece snaffles, five horses wore two-piece snaffles, and four horses had straight bits. The bits had a diameter of 13-20 mm close to the bit-rings. The halter treatment used the same full size, standard, nylon halter for all horses (noseband 35 mm wide) (Fig. 1). The staff at the Equestrian Center and the research team were in agreement that the bridles, bits and the halter fitted the horses properly.
The rein tension meter
To collect rein tension data, a custom-made rein tension meter was used. The rein tension meter for each rein consisted of a load cell (Futek, CA, USA) wired to amplifiers and an IMU (x-io technologies, Bristol, UK). The load cell had a measuring range of 0-500 N and weighed 20 g. The IMU had 10 bit resolution, a 3.1 V battery and weighed 46 g. The sampling rate was 100 Hz. The load cells were attached to flat leather reins, with leather stoppers (15 mm wide), close to the bit. The amplifier and IMU were taped together and fastened on the crown piece of the headstall using tape (Fig. 1). The rein tension meter was attached to the headstall before it was fitted on the horse. The reins were attached to the side rings of the halter to exert pressure on the bridge of the nose during the halter treatment and to the rings of the bit to exert pressure on the oral tissues during the bridle treatment. Before and after the experiment, the rein tension meter was calibrated using ten known weights ranging from 0 to 10 kg suspended from each meter to confirm stability of voltage output.
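A calibration of this kind maps the load-cell voltage onto force via a fit against the suspended weights (force in newtons = mass × g). The Python sketch below illustrates that step; the voltage readings are hypothetical, and the actual calibration procedure and conversion used in the study may have differed.

```python
import numpy as np

# Load-cell calibration sketch: fit voltage readings against known forces
# (suspended weights), then convert raw readings to newtons. Voltage values
# below are hypothetical; the study's actual calibration data are not shown.

masses_kg = np.linspace(0, 10, 11)          # known weights from 0 to 10 kg
forces_n = masses_kg * 9.81                 # gravitational force per weight
voltages = 0.012 * forces_n + 0.05          # hypothetical sensor response (V)

slope, intercept = np.polyfit(voltages, forces_n, deg=1)   # force = slope*V + intercept

def voltage_to_newtons(v: float) -> float:
    """Convert a raw load-cell voltage to rein tension in newtons."""
    return slope * v + intercept

print(f"{voltage_to_newtons(0.29):.1f} N")   # example raw reading
```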
Experimental setup
During the treatments (bridle and halter), the handler (author M.E., right-handed) was standing on the horse's left side near the horse's withers. Each treatment began with a 2 min rest period where the horse was standing still on the aisle with the handler next to the withers. After 2 min had passed, the handler shortened the reins while lifting the arms and positioned the hands above the horse's withers. Rein tension was then gradually increased, by the handler closing the hands to exert tension on the reins, until the horse took a step back. The handler stepped back along with the horse, staying next to the horse's withers. The handler released rein tension by opening and lowering the hands. The release of rein tension was given immediately when the horse stepped back with a front leg. For each horse and in each treatment, the criterion for release of rein tension started with one step backwards. During the course of the treatment, if the handler felt that the horse was responding immediately to a light rein tension signal, the criterion was raised to one additional step. Rein tension was released for each step back by the handler slightly lowering or moving the hands forward. The criterion was lowered again (to fewer steps) if the horse resisted, hesitated or seemed to have difficulty stepping back. After each backing event, the horse and handler stood still on the aisle until one minute had passed since the onset of the previous rein tension signal. The rein tension signal and rest period were repeated eight times. After the eighth time, there was a 2 min recovery period standing still on the aisle. The horse was then led to a grooming stall to change headstall and the above described procedure was repeated.
Data extraction
The horse was video recorded from a left side view during the treatments (25 Hz, Canon Legria HF R806, Canon Inc, Tokyo, Japan). Each treatment began and ended by synchronizing the rein tension meter with the video camera. This was done by pulling on the left rein tension meter five times (not affecting the horse's mouth) while counting out loud.
From the video, the different events within each treatment were annotated on a frame-by-frame level using the video editing program Adobe Premiere Elements (Adobe, CA, USA). The point in time when the handler had shortened the reins and positioned the hands above the horse's withers was recorded as the onset of the rein tension signal, while the release of the rein tension was annotated when the handler started to lower the hands. The moment when the horse's chest started to move backwards, i.e. a weight shift to the rear, was noted in the protocol as the beginning of the backing response. The onset of backing was recorded at the moment when the horse's first front hoof was lifted off the floor to step back. The number of front limb steps back for which the handler applied the rein tension signal was recorded.
An equine ethologist (author M.E.) recorded the horse's behavior from the video. Behavior was recorded in the form of one-zero sampling (Martin and Bateson, 2009) during the time interval between onset of the rein tension signal and onset of backing, as well as between onset of the rein tension signal and the release of rein tension. The ethogram can be found in Table 1. The horse's backing responses were defined as successful or unsuccessful. A successful response was one where the horse started to back within two seconds, to a light rein tension signal (visible slack in the rein), without showing any other behaviors. Videos 1-4 show the experimental setup and the application of rein tension signals. All videos show the same mature horse, who started with the bitted bridle as first treatment.
Data analysis
Behavior, event records and rein tension data were imported into Matlab (2019b, MathWorks Inc., MA). Descriptive variables were calculated using custom-written code. Response latency was determined by calculating the time duration from onset of the rein tension signal to the onset of backing. Signaling rein tension was taken as the sum of left and right maximum rein tension during the response latency period. Response rein tension was taken as the sum of left and right maximum rein tension during the time period between the beginning of the backing response and onset of backing. If the horse started to back when the handler shortened the reins, before the rein tension signal was given, only response rein tension was recorded for that trial, and response latency had a negative value. The resulting dataset was imported into RStudio (version 1.2.5019, RStudio, MA, USA) for statistical analysis. To describe the horses' learning process, descriptive statistics (median, IQR) were calculated for response latency, signaling rein tension, response rein tension, and the number of behaviors other than backing that the horses showed during the application of the rein tension signal, by headstall, trial number, and order of treatment. The horses' behaviors were divided into two categories: head/neck/mouth behavior and inattentive behavior (Table 1). Additionally, the number of trials and horses with successful responses were summarized.
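The original extraction was done with custom Matlab code that is not reproduced here; the Python sketch below only illustrates the definitions of response latency and signaling rein tension (sum of left and right maximum tension between signal onset and backing onset) on hypothetical, synthetic time series.

```python
import numpy as np

# Sketch of the outcome definitions, on synthetic data. In the study these
# quantities were derived with custom Matlab code from 100 Hz rein tension
# recordings synchronised with video annotations; all values here are invented.

fs = 100                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of recording
left = 5 + 12 * np.exp(-((t - 3.0) ** 2))   # hypothetical left rein tension (N)
right = 4 + 10 * np.exp(-((t - 3.2) ** 2))  # hypothetical right rein tension (N)

signal_onset_s = 1.5                        # annotated from video
backing_onset_s = 3.6                       # first front hoof lifted

response_latency = backing_onset_s - signal_onset_s

i0, i1 = int(signal_onset_s * fs), int(backing_onset_s * fs)
signaling_rein_tension = left[i0:i1].max() + right[i0:i1].max()

print(f"latency = {response_latency:.1f} s, "
      f"signaling rein tension = {signaling_rein_tension:.1f} N")
```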
Linear mixed models were used for the statistical analysis of rein tension and response latency (RStudio, packages lmerTest, lme4). The three outcome variables, response latency, signaling rein tension, and response rein tension, were not normally distributed and were therefore transformed along the ladder of powers; i.e. response latency and signaling rein tension (sum of left and right rein) were log-transformed, and response rein tension was sqrt-transformed (sum of left and right rein).
Video S1. The first trial in the first treatment (in this case bridle). Rein tension applied for one step back. The behavior head forward is present. All videos are of the same mature horse. A video clip is available online. Supplementary material related to this article can be found online at doi:10.1016/j.applanim.2021.105452.

Video S2. The seventh trial of the first treatment (bridle). Rein tension applied for two steps back. Head forward and open mouth are present. A video clip is available online. Supplementary material related to this article can be found online at doi:10.1016/j.applanim.2021.105452.
The explanatory variables were headstall (bridle/halter), age group (young/mature), order of treatment (first treatment/second treatment) and trial (1-8). All explanatory variables were analyzed as categorical variables. Horse was included as a random variable. Plotting of Pearson residuals was used for the normality check of the models. Interactions between the four explanatory variables, headstall, age group, order of treatment, and trial number, were tested. Non-significant interactions were sequentially removed based on the type III p-value of < 0.05. Non-significant explanatory variable main effects were forced into the models.
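The models themselves were fitted in R with lmerTest/lme4 and are not reproduced here. As a rough illustration of the model structure (log-transformed outcome, categorical fixed effects, horse as a random intercept), the following Python/statsmodels sketch fits an analogous mixed model to synthetic data; variable names, effect sizes and the simplified fixed-effect set are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Analogous mixed-model sketch in Python (the study used lmerTest/lme4 in R).
# Synthetic data: 20 horses x 2 headstalls x 8 trials, log-transformed outcome.
rng = np.random.default_rng(1)
rows = []
for horse in range(20):
    horse_effect = rng.normal(0, 0.2)                    # random intercept per horse
    for order, headstall in enumerate(["bridle", "halter"]):
        for trial in range(1, 9):
            log_srt = 3.5 - 0.05 * trial - 0.3 * order + horse_effect + rng.normal(0, 0.3)
            rows.append(dict(horse=horse, headstall=headstall,
                             order=order + 1, trial=trial, log_srt=log_srt))
data = pd.DataFrame(rows)

model = smf.mixedlm("log_srt ~ C(headstall) + C(order) + C(trial)",
                    data, groups=data["horse"])          # horse as random effect
result = model.fit()
print(result.summary())
```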
Video S3. The first trial of the second treatment (in this case halter). Rein tension applied for one step back. The behavior head upward is present. A video clip is available online. Supplementary material related to this article can be found online at doi:10.1016/j.applanim.2021.105452.

Video S4. The seventh trial of the second treatment (halter). Rein tension applied for three steps back. No other behaviors present. A video clip is available online. Supplementary material related to this article can be found online at doi:10.1016/j.applanim.2021.105452.

Logistic regression models were used for the statistical analysis of behaviors, with head/neck/mouth behavior and inattentive behavior (present/absent) as outcomes and headstall, order of treatment and trial number as explanatory variables. Horse was included as a random variable. The model with inattentive behavior as outcome did not converge with all the explanatory variables included, and trial number was therefore omitted. Model fit was evaluated using ROC and AUC (RStudio, package pROC).
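Again as a rough, hypothetical illustration rather than the authors' R code: a logistic regression evaluated with ROC/AUC can be sketched as below. Note that this simplification omits the random horse effect that the original models included, and all data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Simplified logistic-regression sketch on synthetic data (no random horse
# effect, unlike the original mixed models). All values are hypothetical.
rng = np.random.default_rng(2)
n = 320
df = pd.DataFrame({
    "headstall": rng.choice(["bridle", "halter"], n),
    "order": rng.choice([1, 2], n),
    "trial": rng.integers(1, 9, n),
})
logit = -0.5 + 0.8 * (df["headstall"] == "bridle") - 0.6 * (df["order"] == 2)
df["behavior"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # present/absent outcome

fit = smf.logit("behavior ~ C(headstall) + C(order) + C(trial)", df).fit(disp=0)
auc = roc_auc_score(df["behavior"], fit.predict(df))          # ROC/AUC model-fit check
print(fit.summary())
print(f"AUC = {auc:.2f}")
```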
Results
All horses completed the experiment and met the criterion for a backing response to the rein tension signal in each trial. Twenty horses backing up eight times with the bridle and eight times with the halter resulted in a total of 320 observations, 20 observations for each trial number (1-8) and order of treatment (first, second). For the outcome variable signaling rein tension, there were no data to be recorded for 56 rein tension signals since in those trials the horse started backing before the rein tension signal was applied (18% of the observations, distributed among 17 horses). To avoid missing values for these observations, signaling rein tension was substituted by response rein tension for the same trial, in both analytical and descriptive statistics. This was deemed adequate since the response rein tension was recorded during the initiation of the backing response, and thus equivalent to signaling rein tension in these cases.

Descriptively, the median response latency was reduced from 6 s for bridle and 5 s for halter in the 1st trial of the first treatment (IQR 5-6 s bridle, 3-7 s halter) to 2.6 s for bridle and 2.3 s for halter in the 8th trial of the first treatment (IQR 0.25-5 s bridle, -0.05 to 3 s halter). In the 8th trial of the second treatment the median response latency was 1.6 s for bridle and 1.6 s for halter (IQR 0.3-3.5 s bridle, 0.1-4 s halter) (Fig. 2).

The median maximum rein tension (sum of left and right rein) during application of the rein tension signal (signaling rein tension) was 35 N for bridle and 25 N for halter in the 1st trial of the first treatment (IQR 28-46 N for bridle, 19-36 N for halter). In the 8th trial of the first treatment, it was 17 N for bridle and 15 N for halter (IQR 4-25 N bridle, 12-19 N halter) and then further decreased to 10 N for bridle and 10 N for halter in the 8th trial of the second treatment (IQR 3-16 N bridle, 3-27 N halter) (Fig. 3). See supplementary materials for more details on signaling rein tension magnitude.

Of all the backing responses (total 320), 36% (115) were successful (Table 2). In 95% of the responses labeled as successful, rein tension was below 20 N (sum of left and right rein). A higher percentage of the trials generated a successful response in the second treatment, 43% (46% bridle, 40% halter), compared to the first treatment, 29% (both treatments). Overall, each horse responded within one second from the onset of the rein tension signal in at least two trials (mean 5 ± 3). When the criterion was raised to two or three steps, other behaviors than backing were present in 41% of these trials (52/126 trials), i.e. the horses showed head/neck/mouth behavior in a total of 38 trials and inattentive behavior in 25 trials when taking two or three steps back (Table 2).
The results from the linear models are shown in Table 3, with further details in the supplementary materials. From the 5th trial onward, the horses responded significantly faster to the rein tension signal than during the first trial. Also, the horses responded significantly faster in the second treatment compared to the first treatment, regardless of headstall (Table 3).

At the 7th and 8th trials, signaling rein tension was significantly lower than during the first trial, regardless of headstall (Table 3, Signaling RT). Likewise, signaling rein tension was significantly lower in the second treatment compared to the first treatment.

Response rein tension was significantly lower during the second treatment (Table 3, Response RT) than the first treatment. Compared to the first trial, response rein tension tended to be lower from the 5th trial onward, and from the 7th trial response rein tension was significantly lower.

The logistic regression model of head/neck/mouth behavior demonstrated that these behaviors were significantly less common in the 7th and 8th trials compared to the first trial and during the second treatment compared to the first treatment. Head/neck/mouth behavior was also less common during the halter treatment compared to the bridle treatment (Table 4). Inattentive behavior was less common in the second treatment.
Discussion
Our results support our hypotheses, i.e. rein tension magnitude could be substantially reduced during a single training session, regardless of headstall used.By the 7th trial of the first treatment, both response latency and magnitude of the rein tension signal were reduced by half compared to at the first trial, with further reduction during the second treatment.This was consistent regardless of whether the bridle or the halter was the first treatment.In other words, the horses generalized the learning between the first and second treatment and the order of the treatments, i.e. the total number of trials, turned out to be far more important than the headstall used.Further, head/neck/mouth behaviors were significantly less common at the end of treatment than at the beginning of each treatment, and both head/neck/mouth behavior and inattentive behavior were significantly less common during the second treatment.The reduction of head/neck/mouth and inattentive behaviors was likely a key factor in reducing rein tension, as it was shown in Eisersiö et al. (2021) that horse behavior has a large influence on the magnitude of rein tension.
Rein tension was higher before the onset of backing compared to when the horse commenced the backing response. This relates to the lag between the application of pressure by the handler, the horse perceiving the pressure and deciding to step back, and the handler noting that the horse is responding and releasing the pressure. In fact, the pressure motivating the horse to step back occurs some time before the horse starts to shift their weight back. It is interesting to elaborate on whether this lag can be reduced and to what extent this influences the horse's learning. Perhaps the horse can achieve a reduction in rein tension by shifting the weight back, despite that the handler had not yet initiated the release of rein tension.
The experimental design used was selected to study the learning process of rein tension signals while keeping other influential variables like gait, stride and rider influence at a minimum.Our decision to use eight trials of the rein tension signal in each treatment was based on data from two pilot studies, using four different horses who all reached the learning criterion of backing up promptly to a light rein tension signal within eight trials (unpublished data).Fenner et al. (2017) used a similar design in their study and also used eight trials of backing up using rein tension signals.Other experiments on equine learning have used between 5 and 20 trials per session (McCall et al., 1993;Ahrendt et al., 2015;Valenchon et al., 2017).It is, to our knowledge, not known how many trials a horse needs to form a conditioned response, but based on the statistical results, perhaps seven or eight trials on average is an appropriate number for teaching horses a new criterion or signal.The number of trials of an exercise will of course have to be adjusted depending on the level of physical and mental exertion the horse is subjected to.Moreover, as horses have various backgrounds, different temperamental traits and emotionality, as well as a diverse level of motivation to respond to negative reinforcement, learning performance will differ considerably between individuals (Lansade and Simon, 2010;Valenchon et al., 2017).
The structures of the horse's head and mouth are sensitive (Mellor, 2020) and warrant the usage of light rein tension signals during training sessions. While it may be necessary to escalate rein tension in the first few trials when teaching the horse a new exercise, horse trainers should aim to quickly reduce rein tension magnitude by proper use of negative reinforcement (pressure is released with the correct timing) and classical conditioning (a light signal predictably precedes escalating pressure). This study demonstrates that rein tension can be reduced substantially over the course of eight trials in a single training session. Given that horses find pressure from the bit aversive (Christensen et al., 2011) and since horses seem to prefer lower rein tension than what riders generally apply (Piccolo and Kienapfel, 2019), this finding is important from an equine welfare perspective as it demonstrates how quickly the horse can be taught to respond to progressively lower magnitudes of pressure. The fact that the horses in our study generalized the exercise between bridle and halter emphasizes that it is the proper application of the learning principles that is crucial for successful training and not the equipment used.

Fig. 3. Signaling rein tension across trials. Group median signaling rein tension (rein tension during the time interval from onset of the rein tension signal to onset of backing) for the eight trials of the rein tension signal in the first (left) and the second (right) treatment, color and shape by bridle and halter. Data for 20 horses responding to a rein tension signal for backing up (eight times with a bridle and eight times with a halter, generating 320 rein tension signals in total).

Table 2
Number of trials/rein tension signals (RTS) and number of horses where the handler asked for one, two or three steps back. The number of RTS followed by a successful backing response and the number of horses that responded successfully in at least one trial. The number of RTS where head/neck/mouth behavior and/or inattentive behavior was shown and the number of horses that showed those behaviors in at least one trial. The experiment included 20 horses, responding to eight RTS with a bitted bridle and eight RTS with a halter, resulting in a total of 320 RTS. Group 1 (four young, six mature horses) started with the bridle and group 2 (six young, four mature horses) started with the halter.
Training a horse successfully, using negatively reinforced pressure signals, rely on clear communication through timely and frequent release of pressure.As the release of pressure provides information to the horse about what behavior pays off, timely and consistent releases are most informative to the horse.In this experiment, the release of rein tension was given within one second from the first front hoof lifting to step back (see Eisersiö et al., 2021).However, an often forgotten variable in animal training is the criteria the trainer has set up.Low criteria, i.e. simple tasks and low requirements for precision, are more likely to be met by the horse (McCall, 1990), while increasing the demands too quickly, without a sufficient number of successful attempts at the initial level, will inevitably make it more difficult for the horse to figure out what behavior that pays off/leads to release of pressure.Releases will then be less frequent as it takes longer time for the horse to figure out the correct response.One could argue that with a too high criterion, information about the correct response is withheld from the horse, prolonging the duration from the onset of the signal to its release.This is unfortunate since lengthy application of the rein tension signal may disassociate the aimed conditioned stimuli, the initial light pressure, and the reinforcing release and thus associative learning of the light rein tension signal will fail (McGreevy and McLean, 2007).
In our study, the criterion for release of rein tension was one step back.In hindsight, the learning process would probably have benefitted from applying a lower criterion to begin with.Even in the second treatment only 43% of rein tension signals resulted in a successful response, i.e. in 57% of the trials it took longer than 2 s before the horse responded to the rein tension signal or the horse showed evasive behavior before stepping back (Table 2).Some of the horses seemed to struggle to understand how to get relief from the pressure applied.Particularly in the bridle treatment, several horses performed numerous head/neck/mouth behaviors before taking the first step back.Perhaps the learning process would have been more effective if the criterion was gradually increased from shifting the weight back to a step back, as suggested by McCall (1990).Further, it is possible that the criterion was raised too soon for some horses.To be able to sustain a continuous learning process throughout the eight trials in each of the two treatments in all horses, the handler was given the possibility to ask the horse for an additional step back, above the one step back that was asked initially.Even though the handler only applied the rein tension signal for additional steps if the horse responded with a short latency to a light signal and showed no other behaviors during signaling rein tension, the additional steps included other behaviors than backing in 41% of the trials.This finding indicates that the horses were still searching for what behavior would lead to the release, even though they were already backing or had started to step back.By staying longer on the criterion of one step back, the backing response would likely have been more firmly established and less other behaviors would probably have been shown when the criterion was raised.The head/neck/mouth behaviors did, however, decrease over trials for the horses as a group, implying that the experimental setup was effective in training the horses to respond correctly to the rein tension signal.In future research on equine learning it would be interesting to study criteria levels in relation to learning efficacy.
Practical applications
The results from this study show that the correct use of the learning principles of negative reinforcement and classical conditioning can reduce the magnitude of a rein tension signal within a single training session. These results can likely be applied to other (ridden) exercises as well and are thus relevant for the average rider. Horses may, however, try several other behaviors before making the correct response, and it seems that the bitted bridle provokes more trial-and-error behavior than a plain halter. During the initial training of a new response, it may thus be an advantage to use a soft, non-aversive headstall to avoid complicating the learning process. Reducing rein tension magnitude, in combination with a timely release, is crucial to avoid stress and discomfort due to uncomfortable sensations from the bit or bridle. A broad application of the results from this study, when training responses to rein tension signals, would largely benefit equine welfare.
Table 3
Results for mixed linear models of response latency, signaling rein tension (Signaling RT) and response rein tension (Response RT). The Estimate and SE are on the log scale for response latency and signaling RT, and on the square root scale for response RT. A negative estimate indicates that this level of the explanatory variable predicted a decrease in the value of the outcome variable. The models each include data from 20 horses, responding to eight rein tension signals fitted with a bitted bridle and eight rein tension signals fitted with a halter (320 observations in total). Bold p-values are considered significant (p < 0.05). Baseline categories and estimates for trials 2-4 have been omitted. Model output and R code can be found in the supplementary materials.
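For readers who want to reproduce this kind of analysis, the sketch below shows one plausible way to fit mixed models of the type summarised in Table 3, with horse as a random intercept and the transformations mentioned in the caption. It is not the authors' supplementary R code; the file name and the column names (horse, headstall, treatment_order, trial, latency_s, response_rt) are hypothetical.

```python
# Illustrative sketch, not the authors' supplementary code, of a mixed model like those in Table 3.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rein_tension_trials.csv")   # hypothetical file: 20 horses x 16 signals = 320 rows

# Table 3 reports estimates on the log scale for response latency and signaling rein
# tension, and on the square-root scale for response rein tension.
df["log_latency"] = np.log(df["latency_s"])
df["sqrt_response_rt"] = np.sqrt(df["response_rt"])

# Random intercept per horse; trial treated as categorical so trials 2-4 get their own estimates.
m_latency = smf.mixedlm("log_latency ~ headstall + treatment_order + C(trial)",
                        data=df, groups=df["horse"]).fit()
print(m_latency.summary())
```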
Conclusion
Investigating the magnitude of a rein tension signal (with a timely release) used to train the horse to step back, it was found that both response latency and the magnitude of rein tension could be significantly reduced during a single training session consisting of eight trials. The reduction in rein tension magnitude likely reflects the increased frequency of the horses promptly stepping back, instead of trying other behaviors, in response to the rein tension signal. There was no significant difference between the bitted bridle and the halter in terms of learning performance. However, the bitted bridle was associated with significantly more head/neck/mouth behaviors, regardless of whether the bridle was the first or the second treatment. Both response latency and maximum rein tension were reduced by half over the first eight trials and then further reduced during the second treatment. The findings demonstrate how quickly horses can be taught to respond to progressively lighter rein tension signals through the correct application of negative reinforcement. These results are important from a horse welfare point of view, as pressures applied to the horse's mouth and/or nose can cause discomfort or even pain.
Contributions to study
The idea for the study was initiated by M.E. All authors contributed to refining and improving the experimental design. M.E., J.Y., P.B. and A.E. conducted the data collection. The rein tension analysis was performed by A.E., and the behavioral recordings were done by M.E. The statistical analysis was performed by M.E., A.E. and A.B. The manuscript was drafted by M.E., and all authors contributed to improving the content.
Fig. 1 .
Fig. 1. The headstalls. The headstalls used in data collection with the rein tension meter attached. Bitted bridle to the left and halter to the right. The horses also wore a halter underneath the treatment headstall.
Fig. 2 .
Fig. 2. Response latency across trials. Group median response latency (s) for the eight trials of the rein tension signal in the first (left) and second (right) treatment, color and shape by bridle and halter. Data are from 20 horses responding to a rein tension signal for backing up (eight times with a bridle and eight times with a halter, generating 320 rein tension signals in total).
Table 1
The ethogram used for behavioral observations during the application of the rein tension signal. Originally
Table 4
Odds ratios (OR) and 95% confidence intervals from the logistic regression models with head/neck/mouth behavior or inattentive behavior as outcomes. Bold p-values indicate that the explanatory variable had a significant influence (p < 0.05) on the outcome. The results presented are the presence/absence of behavior from the onset of the rein tension signal to the onset of backing (20 horses, 2 treatments, 8 trials in each treatment). Baseline categories and estimates for trials 2-4 (non-significant) have been omitted. Model output and R code can be found in the supplementary materials.
| 2021-10-19T16:35:53.053Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "e06743f900e8d3c213e6513259621cdbd7d3fd88",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.applanim.2021.105452",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "509388e4a260dea1e436cc9dcf385173fd0f5710",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
249965429 | pes2o/s2orc | v3-fos-license | Optimal Design of Ferronickel Slag Alkali-Activated Material for High Thermal Load Applications Developed by Design of Experiment
The development of an optimal low-calcium alkali-activated binder for high-temperature stability based on ferronickel slag, silica fume, potassium hydroxide, and potassium silicate was investigated based on Mixture Design of Experiment (Mixture DOE). Mass loss, shrinkage/expansion, and compressive and flexural strengths before and after exposure to a high thermal load (900 °C for two hours) were selected as performance markers. Chemical activator minimization was considered in the selection of the optimal mix to reduce CO2 emissions. Unheated 42-day compressive strength was found to be as high as 99.6 MPa whereas the 42-day residual compressive strength after exposure to the high temperature reached 35 MPa (results pertaining to different mixes). Similarly, the maximum unheated 42-day flexural strength achieved was 8.8 MPa, and the maximum residual flexural strength after extreme temperature exposure was 2.5 MPa. The binder showed comparable properties to other alkali-activated ones already studied and a superior thermal performance when compared to Ordinary Portland Cement. A quantitative X-ray diffraction analysis was performed on selected hardened mixes, and fayalite was found to be an important component in the optimal formulation. A life-cycle analysis was performed to study the CO2 savings, which corresponded to 55% for economic allocation.
Introduction
The world is currently seeking to reduce and, if possible, minimize carbon emissions. Each year, the construction sector consumes approximately 1.6 billion tons of cement, which translates to the release of 1.28 billion tons of CO2 [1]. The cement industry is the second-largest producer of carbon emissions, amounting to 5-8% of total global emissions [2]. While Portland cement is, and will be for many years, the primary material of choice for construction, there is growing concern regarding the sustainability of this material [3], and thus there is a need to investigate alternative, more sustainable binders. A low-calcium alkali-activated material (AAM) comprises an aluminosilicate source that is low in calcium and an alkaline solution. This type of material has the potential to develop high mechanical strength and excellent fire and chemical resistance [4]. Low-calcium AAM technology also has the potential to reduce carbon emissions by 80% [5] compared to OPC. The increase in the popularity of low-calcium alkali-activated materials in recent decades can be explained by their ability to consume readily available industrial by-products, with the consequence of reducing problems related to by-product storage and the shortage of natural resources [6].
The exposure of an OPC-based paste to high thermal loads (higher than 330 °C) results in the degradation of its mechanical properties due to changes in the chemical composition of the hydration products and micro-cracks produced by differential thermal stresses in the matrix. The deterioration of chemical compounds and bonds occurs as calcium hydroxide groups begin to decompose between 330 °C and 400 °C. Calcium carbonate decomposes at 700 °C and melts at 800 °C [7]. The main difference and advantage in the fire resistance of low-calcium AAM is the absence or low presence of C-S-H gel. The decomposition of this gel after 300 °C is the primary reason for mechanical strength degradation in OPC-based products [8][9][10]. Zhang et al. [3] found that the compressive strength of hardened OPC paste after exposure to 800 °C for 1 h was completely lost. An identical outcome was reported by Mendes et al. [11], who subjected OPC paste cylinders to 750 °C for 1 h followed by 800 °C for an additional hour. For low-calcium AAM, the residual strength after a fire depends on the precursor and activator solutions, with the results varying greatly. After exposure to the same temperature, Kong et al. [12] reported a residual compressive strength of 49% of the initial one for a metakaolin-based low-calcium AAM. Kong and Sanjayan's [13] results showed an increase in compressive strength of 53% after a similar high-temperature treatment. Martin et al. [14] reported higher mechanical strength and better fracture performance for alkali-activated materials when compared to OPC-based ones after a high thermal load (HTL).
Ferronickel slag (FNS) is a by-product of the production process of ferronickel alloy. The process is carried out in an electric arc furnace, which reaches a temperature of 1300-1500 °C. The liquid slag flows out of the furnace at high temperatures and is led over a seawater jet under pressure. There, it is instantly turned into finer particles, which are cooled and transported by the water stream to a collection pond. Because of the instantaneous cooling to ambient temperature, the slag is obtained in a practically amorphous phase. The granules' size is under 4 mm, with a prevailing size between 0.6 and 1.5 mm [15]. The yearly production of ferronickel slag was approximately 2 million tons for Greece (reported in 2022 [16]), and only 20-30% of this amount is being put back into the economy, by using it as a sandblasting material and as a raw ingredient for cement production [17]. The remaining slag must be disposed of in surface locations or under the sea. The cost of disposal was reported to be EUR 650,000 per year in 2007 [18]. A sustainable production model for the metal industry necessitates action in the management of the residual slags. It is essential for this industry to find applications for ferronickel slag [17].
Ferronickel slag is only an example of a type of metallurgical slag produced every year from the ferrous, nonferrous, and steel industries. Millions of tons of these slags are disposed of on/in land, causing several environmental impacts [18,19]. Thus, the creation of a circular economy-based scheme that valorizes them and develops new applications of added value is necessary to improve the sustainability of the metallurgical sector and minimize environmental damage. This study, like others before [20][21][22], represents an effort to put these million tons of slag back into the economy. The focus is given to analyzing the potential of FNS as a primary ingredient to produce high thermal load-resistant "cement", thus reducing the consumption of raw materials and waste allocation efforts.
Sakkas [23] demonstrated the applicability of ferronickel slag for the development of construction materials with fire resistance. The author reported formulations with compressive strengths as high as 120 MPa and thermal conductivities at 300 K as low as 0.27 W/mK (not for the same formulations). This suggested that FNS AAMs have potential for the development of high thermal load-resistant construction materials. The FNS mixes developed in this study belong to the group of low-calcium alkali-activated materials, which are known to have high resistance to fire. Possible applications of fire-resistant low-calcium AAMs include critical constructions such as tunnels, underground edifices [24], and tall structures [25]. Another advantage of low-calcium AAMs is their high resistance to chemical attack. Komnitsas et al. [26] studied the durability of FNS AAMs and found the product to possess high resistance to corrosive environments such as sulfuric and hydrochloric acid solutions after 30 days of exposure. A study by the same author on the toxicity of FNS AAM and raw FNS revealed that the raw materials' toxicity levels exceed the allowable limit only for Ni and Cr. After alkali activation, the low-calcium AAM exhibited zero toxicity. FNS binders therefore not only have potential as fire- and chemical-resistant products but can also be used to trap potentially hazardous elements and reduce their toxicity [27].
AAM properties are strongly dependent upon the type and content of the chemical activators and precursors. While AAMs can provide benefits such as those previously mentioned, the design of an AAM requires a delicate balance, which can be achieved by employing statistical tools such as Design of Experiment (DOE). DOE is a systematic approach for determining cause-and-effect relationships. DOE is commonly used in industry for the optimization of industrial processes (Factorial DOE) or the formulation of mixes (Mixture DOE). The procedure applies to any process with measurable inputs and outputs. Until 1980, DOE was mainly used in the chemical, food, and pharmaceutical industries; in recent decades, the methodology has found a place in the concrete industry as well. This study makes use of the advantages of DOE for the formulation of low-calcium AAM binders focused on high thermal load applications and the reduction of CO2 emissions. This is not the first time that DOE has been used for low-calcium AAM design. In 2016, Mohd Basri et al. [28] used a factorial DOE to optimize the mechanical properties of a low-calcium AAM matrix by considering different quantities of the alkaline activator (AA), the AA/precursor ratio, curing temperature, curing time, and sodium hydroxide (NaOH) concentration. While a factorial design is preferred for optimization problems with categorical variables, it is mixture DOE (the optimization of ingredient quantities varying within a range) that better qualifies for mix design optimization, and this was thus the selected approach for this study. In this type of design, a fixed amount of ingredients is selected, and only their proportions change. Komnitsas used a DOE factorial design to optimize the compressive strength of FNS-based low-calcium AAM. Once again, this type of experiment design was focused on factors (with discrete values) such as aging, curing temperature, and curing time rather than ingredient amounts [18]. At the time this paper was written, the authors found no literature corresponding to the mixture DOE of FNS AAMs.
Mixture DOE permits the user to find the proportions of ingredients for a multi-response optimization. The ingredients used were ferronickel slag, potassium hydroxide (KOH), potassium silicate (KS), silica fume (SF), and water. The studied responses were flow spread, mass loss, shrinkage/expansion, and compressive and flexural strength before and after a high thermal load; additionally, cost and CO2 emission estimates of the binder formula were considered in the selection of the optimal formula. The result was a numerically optimized formulation to produce an alkali-activated binder with a high performance, a minimal cost, and reduced CO2 emissions with respect to OPC alternatives. The reduction of chemical activators was key to reducing the cost and CO2 emissions and to making the product safer and more comfortable to work with for operators in potential future construction field applications. Considering the need for easy in situ application, it was decided to cure the pastes only at ambient temperature.
Raw Materials
Ferronickel slag was selected as the main bulk component of the paste and was kindly provided by The General Mining and Metallurgical Company SA in Larissa, Greece, better known as LARCO. Ferronickel slag and silica fume were analyzed in terms of particle size distribution using laser diffraction; more specifically, the analysis was carried out using a Malvern Mastersizer 2000. The slag was ground in a ball mill to reduce its d50. The potassium silicate used was a solution called Geosil® 14517 (modulus 1.6, 45% dry content). Potassium hydroxide pellets with a 90% purity were selected. Potassium products were preferred over more traditional sodium alkaline activators, as they have been reported to produce more thermally resistant low-calcium AAMs with higher compressive strengths [29]. In support of this, Kovalchuk and Krivenko [30] provided a very clear example by comparing the fusion temperature (Tf) of two almost identical mineral phases: orthoclase (K2O·Al2O3·6SiO2) with Tf = 1170 °C and albite (Na2O·Al2O3·6SiO2) with Tf = 1118 °C, where the only difference was the alkali metal.
The chemical composition data of ferronickel slag and silica fume (Table 1) were gathered through X-ray fluorescence (XRF), revealing major (SiO2, Al2O3, CaO, MgO, MnO, Fe2O3, K2O, Na2O, P2O5, TiO2) and minor elements. An amount of 1.8 g of dried ground sample was mixed with 0.2 g of wax (acting as a binder) and pressed on a base of boric acid into a circular powder pellet 32 mm in diameter. Analyses were performed with a RIGAKU ZSX PRIMUS II spectrometer, equipped with an Rh anode running at 4 kW, for major and trace element analyses. The spectrometer was equipped with the following diffracting crystals: LIF (200), LIF (220), PET, Ge, RX-25, RX-61, RX-40, and RX-75. Table 2 includes additional important information for each binder ingredient (referred to as "component"), such as the estimated cost (by weight of the ready-to-mix component) and the environmental impact quantified through both CO2 emissions (in CO2-eq. terms) and energy consumption for component production. The cost of the chemical activators and silica fume corresponds to laboratory-scale purchases; the authors expect a significant drop in price for a large-scale operation. The cost of the ground slag was estimated to be similar to that of ground granulated blast furnace slag, as reported by Hendrik G [31]. The CO2-eq. emissions were estimated using the SimaPro v8.5 software linked to the Ecoinvent v3.4 database. In Table 2, each component was assigned a single-letter code (A to D), which will appear in figures throughout the text. Component C (KS) data always refer to the dry part (45%) of the KS solution available in the form of Geosil® 14517.
Sample Preparation
A chemical activator solution was prepared one day in advance of paste preparation. Faucet water was added to a glass jar, followed by potassium silicate in the form of Geosil® 14517; finally, potassium hydroxide pellets were dissolved in the solution and mixed with an agitator. Silica fume and ferronickel slag were dry-mixed by hand until homogenization. The activator solution was added and, once again, the paste was formed by hand-mixing the ingredients for approximately half a minute. Once the dry mix had absorbed all the activator and the risk of spilling the solution was low, the mixing continued by means of an electric hand mixer for 1 min at a low speed, immediately followed by 2 min at a high speed. The consistency of the fresh paste samples was assessed by a flow table test and expressed in terms of the mean diameter of the paste after jolting the table 15 times (as per EN 1015-3). Mortar prisms of 40 × 40 × 160 mm were cast and tested after 42 days for flexural and compressive strength according to EN 1015-11. This age was deemed adequate for the mechanical characterization of the prisms aimed at a comparative study of different formulations. Once filled, the molds were vibrated for 30 s using an electric shaking table. The molds were covered with a thin plastic sheet until the paste was hard enough to demold, which was between 5 h and 24 h after mixing. The prisms were cured in two airtight plastic bags to prevent the loss of moisture. The temperature during curing was approximately 20 ± 5 °C; no heat curing was used. The specimens were tested after 42 days of curing in airtight plastic bags.
Heat Treatment
All prisms programmed for heating were exposed to a thermal load comprising a ramp of 5 °C/min up to 900 °C, followed by a constant temperature step spanning 2 h; then, the oven was turned off and the specimens were left to cool down in the oven (with the door shut) until the next day. For heating, an electric 500 mm3 oven was used, with a maximum heating capacity of 1100 °C. This heating regime was adopted to allow for comparison with previous studies that followed the same protocol [32,33] and with others varying only the ceiling temperature, which was kept constant for 1 h [34][35][36]. The specimens' dimensions were measured with a vernier caliper, and the shrinkage/expansion was evaluated as the difference between pre- and post-high-temperature-exposure measurements. The specimens were also weighed before and after heating, using an electronic scale to calculate the mass loss.
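The bookkeeping described above is simple arithmetic; the sketch below shows how mass loss and apparent volumetric change could be computed from the pre- and post-heating measurements, together with the duration implied by the 5 °C/min ramp. The measurement values used are placeholders, not data from the study.

```python
# Minimal sketch of the post-heating bookkeeping for one 40 x 40 x 160 mm prism
# (the numbers below are placeholders, not study data).
def mass_loss_pct(mass_before_g, mass_after_g):
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g

def volume_change_pct(dims_before_mm, dims_after_mm):
    """Positive = expansion, negative = shrinkage, from caliper measurements."""
    v0 = dims_before_mm[0] * dims_before_mm[1] * dims_before_mm[2]
    v1 = dims_after_mm[0] * dims_after_mm[1] * dims_after_mm[2]
    return 100.0 * (v1 - v0) / v0

# Furnace schedule: 5 °C/min from room temperature (~20 °C) to 900 °C, then a 2 h hold.
ramp_minutes = (900 - 20) / 5          # = 176 min of heating before the 120 min hold

print(mass_loss_pct(540.0, 460.0))                              # ~14.8 % mass loss
print(volume_change_pct((40, 40, 160), (39.2, 39.1, 157.0)))    # ~-6 % (shrinkage)
print(ramp_minutes)
```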
The position of the prisms in the oven was found to be quite important. Preliminary testing showed that placing the prisms in an upright position resulted in differential thermal cracking, with higher damage in the upper part of the specimens. Prisms placed closer to the heating elements of the oven (in the preliminary phase) showed higher thermal damage. Based on these findings, it was decided to heat only four prisms at a time, placed symmetrically as shown in Figure 1. All of the specimens were thus exposed to a similar thermal load. The prisms were placed on ceramic blocks to prevent overheating.
Testing of Mechanical Properties
The prisms were tested for flexural and compressive strength according to EN 1015-11. The loading rate was equal to 0.003 mm/s. For each mix, two prisms were tested for flexural strength before high thermal load exposure and one prism was tested after; all of the testing was performed under ambient conditions. The results in the form of average strength values are provided in a following section (average of two values of unheated flexural strength, four values of unheated compressive strength, and two test results of heated compressive strength). The mechanical properties assessed after heating and cooling down are typically lower than the corresponding values measured while the specimens are exposed to a high thermal load (higher than 800 °C) [37]. Thus, this methodology was deemed suitable for a conservative estimation of the compressive and flexural strength of heated low-calcium AAM products. The heated samples were removed from their plastic bags one day in advance to measure their dimensions and mass and were then placed in the oven for heat treatment. The mass and dimensions were measured again after cooling down to determine the shrinkage/expansion and mass loss.
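As a point of reference, the strength values reported later follow from the usual EN 1015-11 relations for 40 × 40 × 160 mm prisms, sketched below. The 100 mm support span and the example peak loads are assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the strength calculations behind the reported values,
# following the usual EN 1015-11 relations for 40 x 40 x 160 mm prisms. The 100 mm span
# and the example loads are assumptions, not study data.
def flexural_strength_mpa(peak_load_n, span_mm=100.0, width_mm=40.0, depth_mm=40.0):
    # Three-point bending: f = 1.5 * F * l / (b * d^2)
    return 1.5 * peak_load_n * span_mm / (width_mm * depth_mm ** 2)

def compressive_strength_mpa(peak_load_n, platen_area_mm2=40.0 * 40.0):
    # Prism halves loaded through 40 x 40 mm platens: f = F / A
    return peak_load_n / platen_area_mm2

print(flexural_strength_mpa(3750))        # ~8.8 MPa, of the order of the highest unheated flexural strength
print(compressive_strength_mpa(128_000))  # 80 MPa, roughly the optimal mix before heating
```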
XRD Method
Parts of the fired prisms (≈30 g) were milled for 30 s into a fine powder for X-ray powder diffraction with a Retsch RS200 vibratory ring mill (Haan, Germany) at 1000 rpm. A Bruker D2 Phaser measured the diffraction patterns in the 5-70° 2θ range with a 0.02° 2θ step size and a step time of 0.6 s. CuKα radiation at 30 kV and 10 mA was used. The quantitative analysis was carried out using the Topas® Academic software V5 (created by Allan A. Coelho, Brisbane, Australia) [38].
Design of Experimental Matrix
The test matrix was designed with the aim of studying the effects of different proportioning scenarios of four dry components, namely FNS, KOH, KS (solid part), and SF, on selected wet and hardened properties of the pastes. For each binder blend, the sum of all the solid ingredients was kept equal to 3000 g, whereas 528 g of water was used to form the paste. The water-to-binder (w/b) ratio was fixed at 0.176 based on preliminary trials (not described herein) to determine the lowest water-to-binder ratio that allowed for the formation of a workable alkali-activated paste. Table 3 shows the 34 combinations of ingredient quantities (mixes). For each mix, (i) the components' quantities have been scaled up so that they sum to 1 ton of dry binder, and (ii) the total water (accounting for the sum of the mixing water and the liquid part of the KS) is fixed at 176 kg so as to keep the w/b ratio at 0.176. The sum of all the ingredients is kept constant in order to isolate and study the effect of their proportions. This type of equality constraint is a prerequisite for the formulation of a mixture DOE problem.
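The water bookkeeping implied by this constraint can be made explicit with a short calculation: since Geosil 14517 is only 45% dry potassium silicate, part of the fixed 176 kg of total water per ton of dry binder arrives with the KS solution and the rest is added as mixing water. The sketch below assumes exactly this accounting; the 100 kg dry-KS example is hypothetical.

```python
# Sketch of the water bookkeeping used to hold the w/b ratio at 0.176 (per ton of dry binder).
# Assumption: the Geosil 14517 solution is 45 % dry potassium silicate and 55 % water,
# and that water counts towards the fixed 176 kg of total water.
def mixing_water_kg(ks_dry_kg, dry_binder_kg=1000.0, w_b=0.176, ks_dry_fraction=0.45):
    total_water = w_b * dry_binder_kg                  # 176 kg per ton of dry binder
    ks_solution = ks_dry_kg / ks_dry_fraction          # mass of Geosil solution delivering ks_dry_kg
    water_from_ks = ks_solution - ks_dry_kg            # its 55 % water content
    return total_water - water_from_ks                 # faucet water still to be added

print(mixing_water_kg(ks_dry_kg=100.0))  # hypothetical mix with 100 kg dry KS -> ~53.8 kg added water
```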
The DOE matrix production and post-processing of the results were performed using Design-Expert software v11.1.2.0 (Stat-Ease Inc., Minneapolis, MN). Each component of the binder was assigned a letter to represent it: FNS = A, KOH = B, KS = C, and SF = D (see Table 2). As a first step in the design of the test matrix by DOE, single-component constraints were defined. The constraints for the lower and upper bounds of each ingredient (given in Table 2) aimed at achieving a certain range of potassium hydroxide and potassium silicate molar concentrations. The former was set to vary between 2 M and 7 M (the volume of reference being that of the chemical solution). Previous work on the activation of ferronickel slag [29] reported the attainment of high compressive strength when the KOH concentration ranged between 4 M and 8 M, and slightly lower strength at 2 M. The potassium silicate was set to vary from 0 M to 3.5 M. Higher molar concentrations were not possible due to the use of the commercial product Geosil 14517, which comes with a water content of 55%, making it impossible to increase the concentration of the silicate. The silica fume content varied between 0% and approximately 15% of the binder blend by weight. An I-optimal mixture design, which minimizes the average variance of prediction over the design region, was used [39]. A special cubic Scheffé model was selected to account for three-component blending effects. The design comprised 34 runs: 14 design points were required to fit the model, whereas 10 additional points, 5 lack-of-fit points, 4 replicate points, and 1 additional center point were added in order to improve the prediction efficiency. Special care was given to keeping the various work processes (paste mixing and sampling) as consistent as possible between runs: mixing time, vibrating time, and curing procedure. The experimentally evaluated responses (that is, wet and hardened paste properties) considered in the study were: shrinkage/expansion, mass loss, cost, and mechanical properties (flexural and compressive strength) before and after high thermal load exposure.
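The constrained mixture region underlying this design can be sketched as follows. The bounds used here are placeholders standing in for the molarity-derived limits described above, and the brute-force grid is only meant to illustrate the constraint structure; the actual I-optimal point selection and the model fitting were done in Design-Expert.

```python
# Not the Design-Expert I-optimal algorithm, just a sketch of the constrained mixture
# region: candidate blends of FNS (A), KOH (B), KS (C) and SF (D) that sum to 3000 g
# and respect illustrative single-component bounds (placeholders, not the study's bounds).
import itertools

TOTAL_G = 3000
BOUNDS = {"A": (1900, 2900), "B": (60, 400), "C": (0, 500), "D": (0, 450)}  # placeholder bounds
STEP = 50  # grid resolution in grams

candidates = []
for b, c, d in itertools.product(range(BOUNDS["B"][0], BOUNDS["B"][1] + 1, STEP),
                                 range(BOUNDS["C"][0], BOUNDS["C"][1] + 1, STEP),
                                 range(BOUNDS["D"][0], BOUNDS["D"][1] + 1, STEP)):
    a = TOTAL_G - b - c - d                      # the mixture constraint: components sum to 3000 g
    if BOUNDS["A"][0] <= a <= BOUNDS["A"][1]:
        candidates.append((a, b, c, d))

print(len(candidates), "feasible candidate blends on this grid")
```

An I-optimal design would then pick a subset of such candidates that minimizes the average prediction variance of the chosen Scheffé model over the feasible region.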
Life Cycle Assessment (LCA) Method
The LCA method was applied to estimate the environmental impacts of each FNS AAM formulation's life cycle. Following the guidelines of ISO 14040, the LCA includes the following stages: goal and scope definition, life cycle inventory analysis, and environmental impact assessment and interpretation. To evaluate the environmental impacts associated with the developed mixes, a functional unit of 1 ton of the FNS AAM dry binder was defined. The system boundaries were limited to the production stage of the raw mix constituents in a "cradle-to-gate" framework. As the AAM mixes were produced at ambient temperature, the environmental impacts associated with the manufacturing of the mixes are assumed to be similar across the specimens. The global warming potential (GWP) of the FNS AAM, measured in CO2-equivalent emissions, was estimated using the IPCC 2013 method. The life cycle inventories (LCI) were collected from the literature, as well as from the Ecoinvent v3.4 database accessible through the SimaPro 8.5 software. The consideration of the ferronickel slag as a waste material or industrial by-product determines the choice of the emission allocation method between the main process of ferronickel production and slag production through smelting in an electric arc furnace. For comparison, no allocation, mass allocation, and allocation based on economic value were examined. Additionally, the grinding of the slag requires approximately 60 kWh/t at an industrial scale [40]. The environmental impacts associated with silica fume and potassium hydroxide production were estimated using Ecoinvent data. There is a lack of data on the LCI of potassium silicate in the literature and in the LCI databases. Assuming similar production routes for the potassium and sodium silicates, we estimated the environmental impacts of potassium silicate by using the LCI data of sodium silicate production through a hydrothermal process, where potassium hydroxide is used instead of sodium hydroxide as an input material.
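A stripped-down version of this GWP accounting is sketched below. All emission factors and the example mix proportions are placeholders (the study used Ecoinvent v3.4 data through SimaPro), but the structure shows how the allocation choice for the slag propagates to the binder's CO2-eq. total.

```python
# Illustrative GWP accounting per ton of dry binder. All emission factors below are
# placeholders, and the slag factor depends on how furnace emissions are allocated
# between ferronickel and slag.
EF_KG_CO2_PER_KG = {             # hypothetical cradle-to-gate factors, kg CO2-eq per kg
    "FNS_no_allocation": 0.02,   # essentially only grinding electricity (~60 kWh/t)
    "FNS_economic":      0.05,
    "FNS_mass":          0.30,
    "KOH":               1.90,
    "KS_dry":            1.50,
    "SF":                0.03,
}

def gwp_per_ton(mix_kg, slag_scenario="FNS_economic"):
    """mix_kg: dict with FNS, KOH, KS_dry, SF masses summing to 1000 kg of dry binder."""
    slag = mix_kg["FNS"] * EF_KG_CO2_PER_KG[slag_scenario]
    others = sum(mix_kg[k] * EF_KG_CO2_PER_KG[k] for k in ("KOH", "KS_dry", "SF"))
    return slag + others

optimal_like = {"FNS": 800.0, "KOH": 40.0, "KS_dry": 100.0, "SF": 60.0}  # hypothetical proportions
print(gwp_per_ton(optimal_like), "kg CO2-eq per ton (economic allocation, placeholder factors)")
```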
Results and Discussion
The responses for all the DOE runs are provided in Table 4. These results were subsequently used for the derivation of regression equations (or models) in an effort to express each response (except for spread) as a function of the ingredients' contents (see Section 3.1). Normal Plot, Residual Versus Predicted Plot, and Cook's Distance were the tools used to identify the outliers for each response dataset. The outliers removed from a specific response dataset for the derivation of that response's model are shown struck through in Table 4. The maximum spread measured was that of the table diameter (300 mm). Mixes with a reported spread of 300 mm are likely to have a real spread higher than this value. Some replicate mixes, such as 28, 29, and 30, provided unexpectedly different results. The large variability could be reduced by producing more samples per mix for test purposes. Moreover, the variability in the results of the replicate mixes can be attributed to the inherent inconsistency of the precursor (no quality control over slag production, leading to a varying chemical composition).
Derivation of Models
Regression equations (or models) were derived for all responses (except for spread), and their coefficients are reported in Table 5. Equations for the prediction of each response can be obtained by multiplying the coefficients in Table 5 with the amount in grams of the respective ingredient (A: FNS, B: KOH, C: KS, D: SF, as provided in Table 3). It must be noted that the regression equations predict the properties of a hardened paste based on the combination of the ingredients A, B, C, and D and an amount of water suitable to keep the water-to-binder ratio at 0.176. The presence of binary and tertiary coefficients (i.e., AB, ABC, etc.) indicates that nonlinear blending effects are relevant in this mix. The spread was measured as the diameter of the mortar sample after being jolted 15 times. The table diameter was 300 mm, which limited the maximum measured values. Several mix formulations spread beyond the flow table border, and the real spread diameter was not measured. Considering the lack of data for a big portion of the formulations, it was decided that the derived regression equation was not representative of this response. Table 6 contains the mean response, the standard deviation (Std. Dev.), and the coefficient of variation (C.V.) of all the results for each response. The regression analysis was designed to fit a special Scheffe model. The final regression equations for each response were selected on an individual basis by utilizing an automatic model selection provided by Design Expert ® software. The criterion for term selection was to filter those that were significant for an associated significance level of 0.05 and simultaneously maximize the adjusted R 2 . The adjusted R 2 was chosen to prevent the bias that comes with the pure R 2 , which is higher as more terms are added to the model, with the risk of overfitting the model by adding terms that are unrelated to the response.
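As an illustration of the model form, the sketch below fits a Scheffé-type mixture polynomial (no intercept, linear blending terms plus a few interaction terms) to one response. The file and column names are hypothetical, and the term set shown is only an example; in the study, terms were retained based on significance and adjusted R2 in Design-Expert.

```python
# Sketch of fitting a Scheffe-type mixture model to one response, assuming the DOE runs
# are available in a CSV with hypothetical columns A, B, C, D (grams) and heated_fc (MPa).
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.read_csv("doe_runs.csv")   # hypothetical file with the 34 DOE runs

# "- 1" removes the intercept, as required for Scheffe mixture polynomials; only a few
# interaction terms are shown here, whereas the software selected terms by adjusted R^2.
model = smf.ols("heated_fc ~ A + B + C + D + A:B + A:C + B:C + A:B:C - 1", data=runs).fit()
print(model.params)          # coefficients comparable in spirit to Table 5
print(model.rsquared_adj)    # criterion used for term selection
```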
For all the models, the difference between the predicted R 2 and the adjusted R 2 is less than 0.2, with the exception of the model that correlates ingredients' contents and flexural strengths after high-temperature exposure. The high discrepancy between these two statistics indicates that the average value of heated flexural strength predicts the response better than the proposed model. All of the formulations subjected to high temperatures developed extensively distributed shrinkage/expansion cracking. Cracking damage had a remarkable impact on the flexural strength and promoted the early failure of all the samples. This early failure suggests that the real flexural strength potential of each formulation was not observed. It was hypothesized that preventing cracking of the prisms by utilizing fine aggregates or fibers would allow for higher heated flexural strengths to develop. A follow-up DOE in which fine aggregate and fibers were added to reduce thermal shrinkage/expansion followed this work and will be considered for publication.
The adequate precision for all of the models was higher than 4, which is considered the minimum desirable ratio. This statistic measures the ratio of the signal (an effect in the response due to a change in the components) to the noise (random irregularities in the measured response). Contour and 3D surfaces were used to visualize the models and graphically study the impact of each component (total sum 3000 g). Figure 2a shows the heated compressive strength as a function of components A, B, and C, while component D was set to its minimum value (SF = 0). In this graph, the contour lines indicate an increase in strength as the content of component C (KS) increases. This suggests that removing silica fume from the mix results in a need for silicates to enhance the heated compressive strength. In Figure 2b, the silica fume has been set to its maximum amount (SF = 441 g). Any more siliceous oxide in the form of KS would result in the deterioration of the residual compressive strength. This is visible in the contour lines; as the lines get closer to the KS vertex (mixes with a higher content of KS), the strength decreases. The colors in the contour plot correspond to the level of residual compressive strength. The blue color shows the zone where mix combinations with a high content of KS would result in low compressive strengths (around 5 MPa) compared to the high-strength region displayed in green (15 to 20 MPa).
The green area in the ternary plot contains the combination of three ingredients corresponding to a minimum value of KS and a higher concentration of KOH.
A similar visual analysis can be done in the 3D surfaces, which allows for the identification of local peaks more easily; such is the case for thermally induced volumetric change, as shown in Figure 3. The low level of silica fume resulted in shrinkage, with an average of -5%, as observed by the surface in Figure 3a. On the contrary, an excess of SiO2 for a scenario in which the silica fume is set to its upper bound yields varying volumetric change results (Figure 3b). The response surface model predicts that, at high silica fume contents (441 g), an increase in KS would result in a remarkable increase in volume.
Optimal Paste Mix
Finally, after all the mix properties (with the exception of spread) were associated with a regression model (Table 5), it was possible to look for a combination of ingredients for an optimal formulation. The search of the "sweet spot" was executed numerically through the construction of a desirability function. To build the desirability function with the Design-Expert ® software, each response was assigned an importance factor and a criterion of optimization (minimization or maximization). Importance coefficients in Design-Expert ® are assigned on a scale from 1 to 5. The heated compressive strength was assigned the highest level, whereas the unheated compressive and flexural strengths were assigned level 4. All other responses were set at level 3. Heated flexural strength was assigned an importance of zero, as the regression model for this attribute was found to not be reliable due to the negative predicted R 2 . A negative R 2 is produced when the average of the results is a better predictor of the response than the numerical model. By visual inspection of all the heated samples, it was clear that they showed extensive thermal cracking. As mentioned before, this induced premature flexural failure, making it unrealistic to try to correlate ingredients' contents with residual flexural strength values.
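The desirability approach can be sketched in a few lines: each response is mapped to a [0, 1] desirability, the individual desirabilities are combined as an importance-weighted geometric mean, and the combined value is maximised over the feasible mixture region. The response surfaces, ranges, and weights below are toy placeholders, not the fitted models or the settings used in Design-Expert.

```python
# Sketch of a Derringer-Suich style desirability search; all response surfaces, ranges
# and importance weights are placeholders, only the overall structure is the point.
import numpy as np
from scipy.optimize import differential_evolution

def desirability(value, lo, hi, maximize=True):
    d = (value - lo) / (hi - lo) if maximize else (hi - value) / (hi - lo)
    return float(np.clip(d, 0.0, 1.0))

def objective(x):
    a, b, c = x
    d_comp = 3000.0 - a - b - c                      # mixture constraint, grams of SF
    if not (0.0 <= d_comp <= 441.0):
        return 1.0                                   # infeasible -> worst value (we minimize)
    # toy response surfaces standing in for the fitted regression models
    heated_fc = 0.004 * a + 0.02 * c - 1e-5 * c * c
    unheated_fc = 0.02 * a + 0.1 * b
    cost = 0.0003 * a + 0.002 * b + 0.0015 * c
    d1 = desirability(heated_fc, 5, 35, maximize=True)        # importance 5
    d2 = desirability(unheated_fc, 12, 100, maximize=True)    # importance 4
    d3 = desirability(cost, 0.2, 2.0, maximize=False)         # importance 3
    weights = np.array([5.0, 4.0, 3.0])
    d_all = np.array([d1, d2, d3])
    if np.any(d_all == 0.0):
        return 1.0
    overall = np.exp(np.sum(weights * np.log(d_all)) / weights.sum())  # weighted geometric mean
    return 1.0 - float(overall)

result = differential_evolution(objective, bounds=[(1900, 2900), (60, 400), (0, 500)], seed=0)
print(result.x, 1.0 - result.fun)   # best blend (A, B, C in grams) and its overall desirability
```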
The optimal dry binder composition is reported in Table 7 (ingredients adding up to 1 t). The performance markers reported in Table 7 correspond to a hardened paste prepared with a quantity of mixing water suitable to ensure a w/b ratio of 0.176. The optimal ingredients' contents were found to be practically identical to those of Mix 13. Therefore, the measured responses of this mix were used for comparison with the predicted ones for the Optimal Mix proposed by Design-Expert®. The difference in performance markers (also given in Table 7) was found to be low for all responses (<9%), excluding the unheated compressive strength. The large error in prediction was anticipated, as the predicted R2 of the unheated compressive strength model was the lowest (0.18 out of 1). It was expected, before setting the experiment constraints, that only a few local maxima would be encountered. In the case of unheated compressive strength, the number of local optima is higher than what can be captured by the models chosen while designing the experimental matrix. There are few combinations of ingredients that result in a compressive strength that is either very low (local minima, suggested by the results measured for mix 4 (16.44 MPa), mix 5 (23.49 MPa), and mix 15 (13.88 MPa)) or very high (local maxima, for mix 13 (80 MPa), mix 25 (99 MPa), and mix 30 (94.5 MPa)). The lack of fit of these models could have been prevented if the lower and upper limits of the components had been more restricted, resulting in a design space containing fewer local optima and in models with a higher predictive capability. In the following sub-sections, the results presented in Table 4 are discussed per response. Additionally (and also per response), the optimal formulation is compared against both low-calcium alkali-activated pastes and cement-based ones. For the sake of a fair comparison, only works in which tests were executed at a thermal load between 800 °C and 1000 °C, kept constant for 1 to 2 h, were considered. All values in the following bar charts correspond to samples cured at ambient temperature. Finally, the authors provide visual inspection notes on the specimens, along with the cost, LCA, and XRD results, and relevant commentary.
Mass Loss
Mass loss was relatively consistent, varying from 12.5% to 17.5%. The amount of mixing water in each formulation was approximately 15% by weight. A large part of the mass loss likely corresponded to the release of physical water (free water) between 20 °C and 100 °C, followed by the expulsion of chemically bonded water between 100 °C and 300 °C. Furthermore, between 200 °C and 650 °C, hydroxyl groups have been found to evaporate [41], adding to the total amount of mass loss due to water loss. The dehydroxylation of the Al-OH, Si-OH, and Ca-OH groups represents the second major reason for mass loss, right after the evaporation of free water [42]. Nath et al. [43] and Rakhimova et al. [44] reported an additional mass loss after 750 °C due to the decomposition of carbonate groups.
A comparison of the mass loss for low-calcium alkali-activated and cement-based pastes after exposure to a thermal load is reported in Figure 4. The mass loss of the optimal formulation is highlighted in the figure with a darker color. Each bar label corresponds to the main author and the year of publication of the bibliographic source, the precursors (MK: metakaolin, FA: fly ash), and the precursor fineness (d50, or specific surface). The figure shows that the FNS optimal mix performs equally well or better than cement-based pastes and much better than low-calcium alkali-activated ones.
Thermal Shrinkage/Expansion
The thermally induced volumetric change of the ferronickel mixes in this test campaign ranged from −8.3% (negative for shrinkage) to 21.7% (positive for expansion); the latter is atypical for alkali-activated materials, for which the trend is a decrease in volume. After high thermal load exposure (900 °C), all of the prisms developed cracks of various intensities and patterns. In some cases, the cracks were abundant and wide, which resulted in an apparent increase in the size of the sample. The wider the cracks, the higher the apparent expansion value. Mineral phase transformation occurs as the temperature increases from the outside of the sample towards the core. The rate of transformation is thus not uniform throughout the cross-section of the specimen. The temporal incompatibility of the phases would result in differential stress throughout the cross-section, followed by crack formation. This is illustrated in Figure 5, which shows the expansion behavior of mix 31 to be an effect of the opening of cracks in the specimen. The appearance of cracks after high-temperature exposure is not unique to FNS AAM. Kong et al. [12] reported microcracks of 0.1-0.2 mm appearing on the surface of a metakaolin AAM after exposure to a thermal load of 800 °C. It has been observed in metakaolin [45] and fly ash [46] low-calcium AAMs that the addition of silica fume can increase the residual mechanical properties after a high thermal load by reducing the thermally induced volumetric change. A similar phenomenon was observed in FNS AAM. At low contents of KS, the increase in SF translates into a decrease in shrinkage. At high levels of KS, the shrinkage is not only reduced but reversed, and thermal expansion is recorded.
A comparison of the thermal volumetric behavior at the paste level for several low-calcium AAMs after exposure to a thermal load is reported in Figure 6. The thermal shrinkage of the optimal FNS mix (7%) is comparable to the shrinkage values reported by Zhang et al. (6.5%, [47]) and Rovnanik et al. (5%, [48]) for MK and FA AAM, respectively. The values are high compared to ordinary Portland cement paste, which shrinks down to around 1.7% at 800 °C [49]. It is possible to achieve low-shrinkage geopolymer mixes, as Zhang et al. [47] demonstrated by studying the optimal combination of FA and MK, which yielded a thermal shrinkage of 1.05%, even lower than that of OPC.
Figure 6. Thermal shrinkage of low-calcium AAM pastes after exposure to a thermal load between 800 and 1000 °C.
The intense shrinkage of low-calcium AAM pastes is a common characteristic of this family of materials. The thermal shrinkage of FNS corresponds to a typical behavior already documented by Rickard et al. [50] and Provis et al. [51], who found that alkali-activated materials with low calcium have an overall volumetric decrease produced mainly by the loss of free water between 100 and 300 °C, the dehydroxylation between 250-600 °C, and the densification by sintering typically between 550-900 °C. The volumetric contraction has been associated with an increase in the surface energy of the low-calcium AAM gel as water is released and a consequent partial collapse of the gel network takes place [41]. Bernal et al. [52] proposed that this partial collapse would likely induce damage in the form of microcracks, which would result in an overall decrease in strength. These microcracks may be responsible for the strength drop of the optimal mix from 80 MPa to 16 MPa. Bakharev [53] proposed that volumetric stability should be a fundamental characteristic of a fire-resistant low-calcium AAM. The reduction of the considerable shrinkage of FNS AAM represents a challenge that will be undertaken in future studies.
Compressive Strength
The compressive strength before high thermal load (900 °C) exposure ranged between 12.28 MPa and 99.6 MPa, with an average of 51.2 MPa. Runs 13, 14, 25, and 30 are of particular interest due to their high compressive strength exceeding 80 MPa. From these mixes, runs 26 and 31 are unsuccessful due to excessive thermal cracking, which is similar to what is pictured in Figure 5. Runs 13 and 14 have approximately the same composition, with mix 14 having a slightly higher content of silicates. The latter showed a higher compressive strength value, suggesting a correlation between silicate content and strength development. A similar trend was observed in runs 26 and 31 which had a considerably high amount of silicate in the form of both SF and KS. Runs 4,5,15,17,18, and 33 have a compressive strength under 25 MPa. These mixes were designed with zero potassium silicate contents, which again highlights the importance of silicates as a key ingredient for strength development.
Si/Al ratios have previously been associated with the strength of low-calcium AAM both before and after high-temperature exposure [12,54]. Increasing the Si/Al ratio implies more Si-O-Si bonds, which are stronger than Si-O-Al bonds and would thus result in a higher (unheated) strength [55]. In this study, such a correlation was not found, as depicted in Figure 7. The calculated coefficients of determination between the Si/Al ratios and compressive strengths for ferronickel slag low-calcium AAM were found to lie below 0.22 and indicate very poor correspondence. The lack of correlation between the compressive strength and Si/Al ratio has also been reported in other studies [35]. The addition of silica fume in the mixture likely had a strong influence on the mechanical properties of the mix, which obscures the role that the Si/Al ratio has on the measured responses. Additionally, the range of the Si/Al ratio studied herein (2.8 to 4.3) does not coincide with that in studies reporting a correlation between Si/Al and compressive strength (see Kong et al. [12] (1.4 to 2.3) and Duxson, Lukey, & van Deventer [54] (1.15 to 2.15)). Finally, the Si/Al ratios in this study were calculated based on the assumption that the oxides of silicon and aluminum have fully reacted in the AAM matrix. This is unlikely, since there will be a portion of Si- and Al-containing particles that remain undissolved after the matrix formation [56].
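For clarity, the bulk Si/Al molar ratio referred to here follows directly from the oxide contents, under the same full-reaction assumption. The sketch below uses placeholder oxide percentages, not the measured XRF composition.

```python
# Sketch of how a bulk Si/Al molar ratio can be estimated from XRF oxide contents, under
# the full-reaction assumption discussed above. Oxide percentages are placeholders.
MOLAR_MASS = {"SiO2": 60.08, "Al2O3": 101.96}

def si_al_ratio(oxide_wt_pct):
    mol_si = oxide_wt_pct["SiO2"] / MOLAR_MASS["SiO2"]            # 1 Si per SiO2
    mol_al = 2.0 * oxide_wt_pct["Al2O3"] / MOLAR_MASS["Al2O3"]    # 2 Al per Al2O3
    return mol_si / mol_al

print(round(si_al_ratio({"SiO2": 40.0, "Al2O3": 8.0}), 2))  # ~4.2, inside the 2.8-4.3 range studied
```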
Several studies indicate that an increase in compressive strength is linked to a higher concentration of the alkaline activator (NaOH and KOH) [57][58][59]. Further studies have confirmed this trend but also observed that, at high levels of the alkaline activator, a strength drop occurs, so the previously monotonic relationship becomes nonlinear [60,61]. Previous studies on ferronickel slag-based low-calcium AAM have confirmed the latter observation, and the inflection point was found at a NaOH molar concentration of 7 M. In this study, the nonlinear behavior becomes even more complex due to the addition of silica fume. The linear and even the nonlinear correlations become less evident, as some mixes with a low chemical concentration showed some of the highest values of compressive strength. Mixes 13, 14, and 25, with a KOH concentration of 2 M, had compressive strength values after 42 days of 80 MPa, 85 MPa, and 99 MPa, respectively, while there were mixes with moderately high KOH concentrations (7 M) that resulted in low strengths (mixes 12 and 15, which reported 16.7 MPa and 13.9 MPa, respectively). The nonlinear interaction of ingredients brings forth the possibility of achieving high mechanical properties with low chemical concentrations, which improves the sustainability, safety of use, and economic feasibility of ferronickel slag low-calcium AAM.
After heating, the visible reduction in strength is a consequence of phase transformations and damage produced by pore pressure effects [62,63]. Phase transformations have been reported to result in large cracks and a consequent strength reduction in fly ash-based low-calcium AAM [64,65]. Large cracks were also observed in FNS-based low-calcium AAM, which suggests a similar damage mechanism. The compressive strength after a high thermal load varied from 2.8 to 37.5 MPa. Mixes 20, 21, and 23 showed the highest values of residual compressive strength after HTL exposure. These runs had almost the same content of SF. It must be noted that some of the mixes have a high residual compressive strength despite the high iron content, which has been previously correlated with negative effects on the high-temperature performance of low-calcium AAM [50,66].
Previous works (between 800 °C and 1000 °C) associated lower Si/Al ratios with higher residual strengths [14,35]. On the contrary, other authors like Kong [12] correlate higher Si/Al ratios with smaller reductions in strength after a high thermal load, or even with increases in residual strength [67]. Despite this, while not observable in the present FNS AAM (see Figure 7), a vast amount of literature indicates that the Si/Al ratio is a critical parameter that affects the mechanical properties and phase formations of low-calcium AAM materials after high-temperature exposure [53,68,69].
Authors such as Rashad & Zeedan [70] studied the effect of increasing chemical activator content in low-calcium fly ash AAM exposed to a range of temperatures between 200 °C and 1000 °C. The authors found that an increased content of waterglass resulted in a further decrease in residual compressive strength. Once again, this trend was not found in the ferronickel slag low-calcium AAM developed in this investigation. Likely, the reason for this is the upper limit of the KOH concentration (7 M), which is low when compared with concentrations studied by other authors, reaching 10 M. In the range studied herein (2 M to 7 M), an increase in KOH was found to improve residual compressive strength. Previous studies on FNS found higher mechanical strength values with lower NaOH concentrations, specifically 6 M and 8 M, with a better performance than low-calcium AAM prepared with 10 M and 12 M [26]. Komnitsas et al. [29] reported that an excess of 10 M KOH resulted in decreased strength for FNS AAMs. The compressive strength of the binder after exposure to a high thermal load (900 °C) ranked above average in the comparison chart presented in Figure 8. Again, the figure includes only results from studies not employing heat curing. All the pastes in Figure 8 had unheated strengths of at least 35 MPa (hence, they could be used as the paste phase in a prospective concrete formulation). The high compressive strength of the FNS paste (darker color in Figure 8) is partially provided by an optimal percentage of silica fume. Sivasakthi [71] studied the effect of silica fume on low-calcium AAM mixes from 0 to 10% and found the highest gain at 5%, not far from the 6% value obtained in this study. As mentioned before, FNS AAMs showed extensive damage due to thermal cracking; thus, the authors believe that the mechanical strength of the FNS binder in compression after a high thermal load could be much higher if the cracking is controlled with fibers or by using fine fillers. By expressing residual compressive strength as a percentage of the initial (unheated) one, it can be seen that both alkali-activated binders in Figure 8 are similar (14% [52] and 20% for FNS), whereas the cement-based ones result in zero or very low residual strengths [3,11,72]. The strength before high-temperature exposure was found to be one of the highest among the low-calcium AAM binder results compiled. High strength, as reported by Duxson et al. [73], is a consequence of the chemical bonds formed in the aluminosilicate gel and the physicochemical interaction between the unreacted particles and the gel.
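As a quick arithmetic illustration of the strength retention mentioned above (the MPa values are taken directly from the text; the snippet itself is not part of the study):

```python
# Strength retention of the optimal FNS mix after 900 °C exposure,
# using the unheated (80 MPa) and residual (16 MPa) values quoted in the text.
unheated, residual = 80.0, 16.0
print(f"residual/unheated = {residual / unheated:.0%}")  # -> 20%, as stated for FNS
```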
Flexural Strength
Runs 13 and 14 showed the highest unheated flexural strength of all the combinations (6.7 MPa and 7.3 MPa, respectively). In general, most mixes (65%) have an unheated flexural strength of 3 MPa or higher. The residual flexural strength after exposure to a high thermal load (900 °C) was found to be low in all specimens. The strength loss is probably a result of the cracking damage suffered during the heating phase. The cracks substantially affected the flexural strength by reducing the effective cross-section of the specimen. Thermal expansion has been associated with cracking in metakaolin low-calcium AAM and is also likely the reason for the occurrence of cracks in FNS AAM (also supported in [56]).
The flexural strength before exposure to a high temperature was found to be comparable to fly ash and metakaolin alkali-activated materials, as shown in Figure 9. Values were selected for comparison based on a criterion of a minimum flexural strength of 5 MPa. Only non-heat-cured geopolymer results were included. The strength of FNS after HTL exposure was zero due to the cracks formed in the prismatic specimens. Similar results were reported for the OPC samples [3,72]. The MK mix by Rovnanik [48] barely yielded any strength after high temperature exposure (0.2 MPa), while the work of Kovarik showed an outstanding residual flexural strength using MK as precursor [74].
As was mentioned in the previous section, there is a poor correlation between the Si/Al ratios and the compressive strengths of the FNS AAMs produced in this study. While this is likely due to the interference of silica fume and its double role as both a source of silica and a micro filler, there is another parameter that, while less popular in the literature, is relevant for the analysis of low-calcium AAM at high temperature: the K/Al ratio. Kohout et al. [36] studied the relationship between the K/Al ratios and post-fired properties of low-calcium alkali-activated systems. The authors recommended keeping materials intended for fire resistance at a K/Al ratio between 0.55 and 0.70. Remarkably, the optimal mix produced by the DOE design of mixtures has a K/Al ratio of 0.66, falling right in the middle of this desirable range.
Appearance
In Figure 5, the change in color from greenish to red can be noticed. This phenomenon has also previously been observed in other low-calcium alkali-activated materials and is usually associated with the oxidation of the iron species [10,50,75], which have been confirmed to exist in abundance in ferronickel slag.
Cost
The authors found no correlation between (either unheated or residual) flexural or compressive strength and cost. This lack of correlation indicates that the formulation can be optimized to increase the mechanical properties without directly implying an increase in cost. The same behavior was observed for the residual mechanical properties after a high thermal load. Figure 10 shows the scatter plots of mechanical properties as a function of cost. Notice the values of R², which, in all cases, fall below 0.124.
Figure 10. Correlation between cost and: (a) unheated flexural strength; (b) flexural strength after a high thermal load; (c) unheated compressive strength; (d) compressive strength after a high thermal load. For all samples, the water-to-binder ratio equals 0.176, and the tests occurred after 42 days of curing in ambient conditions, preventing moisture loss.
Life Cycle Analysis
The GWP of the examined mixes is presented in Figure 11. The activator contributes the most towards the GWP of the analyzed mixes. Potassium silicate is responsible for up to 69% of the total CO2 eq. emissions (mix 31). The GWP of mix 13 is 99 kg CO2 eq., where the activator contributes 76% and FNS 31%.
When FNS is classified as a waste material (and therefore only emissions attributable to the grinding are accounted for), the total environmental impacts of the AAM are the lowest compared to the economic allocation. In this study, the economic allocation is based on the ferronickel price of EUR 11,036/t [76] and of the FNS of EUR 31/t. Mix 13 is compared to 1 ton of OPC, as a similar mechanical performance can be expected by using these materials [19]. The production of 1 ton of OPC is associated with 870 kg CO2 eq. However, according to the environmental product declarations, there is variability in the reported environmental impacts across cement production plants [77], which is illustrated in Figure 12. The GWP of Mix 13 is 99 kg CO2 eq., whereas, considering the economic allocation approach, it is 388 kg CO2 eq. In comparison with the average OPC impact, the environmental footprint of Mix 13 is 89% lower when FNS is considered as a waste material and remains 55% lower when FNS is considered as a by-product and economic allocation is applied (Figure 12).
Figure 12. Impact of the allocation method on the total GWP of FNS AAM compared to OPC.
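The reported reductions can be checked directly from the GWP values quoted above; the following sketch only reproduces that arithmetic and is not the authors' LCA model:

```python
# Quick check of the reductions relative to OPC, using figures given in the text.
gwp_opc = 870.0           # kg CO2-eq per tonne of OPC (average, from the text)
gwp_mix13_waste = 99.0    # kg CO2-eq, FNS treated as waste
gwp_mix13_econ = 388.0    # kg CO2-eq, FNS treated as by-product (economic allocation)

for label, gwp in [("waste", gwp_mix13_waste), ("economic allocation", gwp_mix13_econ)]:
    reduction = (gwp_opc - gwp) / gwp_opc * 100
    print(f"{label}: {reduction:.0f}% lower than average OPC")
# -> roughly 89% and 55%, matching the values reported above.
```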
XRD
The main crystalline phases for all samples (Figure 13) were magnetite (Fe3O4) and the pyroxene solid-solution (Ca,Mg,Fe)2(Al,Si,Fe)2O6, which crystallizes from the slag and binder, as iron-containing glasses are prone to crystallization, especially in air. Leucite (KAlSi2O6) crystallized for the samples with a lower SiO2/K2O molar ratio (i.e., samples with a higher addition of the alkaline solution). Fayalite ((Mg,Fe)2SiO4) was also found to be a minor crystalline compound in the samples (except in sample 13, where it was the major phase). Fayalite crystallizes from the unreacted parent slag and binder phase and decomposes further to hematite/magnetite and silica in air [78]. Additionally, hematite (Fe2O3) was present in most of the samples, which is the stable form of the iron oxide phase at this temperature; however, the porosity and microstructure of the sample will determine the oxidation rate of fayalite and magnetite, which crystallizes initially from the binder. No correlations between the mechanical performance after firing and the phase assemblage were found. The mechanical strength after firing depends more on the cracks/defects forming during firing, as well as on how well the samples densify (porosity after firing), than it does on the specific phase assemblage formed after firing.
Conclusions
The Design of Experiment was proven to be useful in finding the combination of components (SF, FNS, KS, KOH) that enhanced the mechanical properties before and after firing and simultaneously minimized the chemical activator content. This resulted in a low CO2 emissions recipe for AAM for high temperature applications.
The mass loss of ferronickel slag low-calcium AAM activated by a potassium silicate and potassium hydroxide solution has shown a similar or lower value compared to well-known metakaolin and OPC formulations, while still underperforming when compared to fly ash alkali-activated materials.
The thermal shrinkage/expansion of the FNS optimal formulation was high in comparison to OPC and the other AAM. The shrinkage/expansion might be reduced by the addition of fine aggregates, which requires a study that will follow this paper.
KOH and KS are fundamental parameters that are necessary to fine-tune a low-calcium AAM mix for high temperature applications. The mix design was optimized to minimize chemical activators, and it was found that FNS can be activated with molar concentrations as low as 2 M KOH and 1.36 M KS to produce compressive and flexural strengths as high as 80 MPa and 6.8 MPa, respectively.
Silica fume has an important role in improving the thermal performance of FNS-based low-calcium AAM; the optimal ratio was found to be 6%.
The DOE proved to be a powerful tool for low-calcium AAM mix design, as the optimal formulation not only resulted in a high strength but also in the minimization of cost and environmental impact.
There is no strong correlation between mechanical properties such as flexural and compressive strength and the cost of the formulation. This implies that improving the performance of a low-calcium AAM formulation does not necessarily translate to a higher cost.
The optimal formulation would cut CO2 emissions by up to 55% if FNS is considered as a by-product and by up to 89% if FNS is categorized as a waste. In addition to this reduction in manufacturing emissions, there is a further reduction in environmental impact, as the slag would not be disposed of in landfills.
It is possible to produce a geopolymer mix for ambient and high temperature applications with a low concentration of chemical activators and without the need for heat curing. This reduces the environmental and safety cost of utilizing AAM in the construction industry.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-06-24T15:21:48.903Z | 2022-06-21T00:00:00.000 | {
"year": 2022,
"sha1": "b172c79dc49e3d017d0693bb505bbf9def56634d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/13/4379/pdf?version=1655891194",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7953b6efc5aff7b7d1c7f2b1c87153c4c4cab974",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
63647418 | pes2o/s2orc | v3-fos-license | Lung function measurements as a clue to aetiological diagnosis
A boy was born prematurely in the 31st week of pregnancy, with a birth weight of 1,470 g and length 42 cm. Due to respiratory complications, he was mechanically ventilated for 5 days and, thereafter, he received continuous positive airways pressure (CPAP) for 2 weeks. He needed oxygen supplementation for 10 weeks.
At the age of 1 yr he developed pneumonia with a major atelectasis and was mechanically ventilated for 1 day. At the age of 1.5 yrs he suffered respiratory syncytial virus (RSV)-positive bronchiolitis. Following this, he was treated with regular daily nebulised budesonide (Pulmicort) and salbutamol (Ventoline) until 6 yrs of age. Nebulised treatment was then discontinued. Respiratory symptoms reoccurred and he was then started on regular treatment with inhaled fluticasone (Flutide) and salmeterol (Serevent) by Discus.
When the patient was able to perform lung function testing, it was noted that he had a reduction in maximum expiratory flow/volume loops. This did not respond to trials of increased doses of inhaled steroids. He did not experience acute exacerbations of his respiratory symptoms, but he had marked limitation of his ability to take part in physical activity, which was attributed to exercise-induced asthma.
Due to reduced lung function and exercise-related respiratory symptoms, he was referred to Voksentoppen BKL at the age of 10.5 yrs.
Clinical examination
The patient presented as a moderately overweight 10-yr-old boy, with an increased anteroposterior diameter of the thorax, breathing with moderately elevated shoulders. On auscultation, he had slightly and symmetrically reduced respiratory sounds over the posterior lower thorax.
Diagnostic steps
Lung function measurements
A maximum expiratory flow/volume loop was obtained for the patient and is shown in figure 1.
Exercise test
There was no reduction in forced expiratory volume in one second (FEV1) observed after running on a treadmill. Marked audible inspiratory stridor during maximum exercise was noted. The assessment of maximum oxygen uptake was 60% of the predicted value.
Bronchial responsiveness measured by methacholine inhalation
The dose of methacholine which caused a 20% fall in FEV1 (PD20,met) was measured as 0.19 µmol. This corresponds to marked bronchial hyperresponsiveness. The flattened initial part of the maximum expiratory flow/volume loop and markedly reduced peak expiratory flow rate indicate a centrally located airways obstruction. In addition, there was no evidence of reversibility to inhaled salbutamol.
Task 4.
What additional diagnostic steps should be taken?
Answer 1.
As shown below, diagnostic steps that should be taken include: 1. lung function measurements, 2. exercise tests and 3. bronchial responsiveness assessments.
Task 2.
How would you interpret the flow/volume loop?
Additional diagnostic steps
X-ray of the trachea and HRCT of the thorax
X-ray of the trachea revealed no pathological findings (not shown). A high-resolution computed tomography (HRCT) scan of the thorax revealed minor areas with air trapping and a slight volume reduction in the right lower lobe; however, these changes were very minor and do not explain the symptoms which were observed during exercise (fig. 2).
Answer 3.
The lung function measurements and exercise test results suggest a centrally located extra-thoracic bronchial obstruction as the most probable explanation. Bronchial hyperresponsiveness may be related to reactive airways after bronchopulmonary dysplasia in the newborn period.
Answer 4.
As shown below, X-ray of the trachea and HRCT of the thorax, fibreoptic laryngotracheoscopy, and CT of the trachea with virtual reconstruction. Figure 5 shows a virtual "bronchoscopic" reconstruction of the CT trachea, which demonstrates a thin web formation in the trachea.
Follow-up
After 3 months the patient experienced improved physical capacity with much less exercise limitation, including improved participation in physical activity ("new life").
Lung function measurements were repeated and the results are shown in figure 6; there was marked improvement in the shape of the curve, although not normalisation.
Reversibility to salbutamol was demonstrated; FEV1 increased by 16% (up to 86% of the predicted value). The curve with measurement after inhaled salbutamol suggests bronchial obstruction, in agreement with the history of bronchopulmonary dysplasia. Tracheobronchoscopy was repeated and demonstrated minor scarring of no obvious clinical significance.
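As a small worked example of the reversibility calculation used here (the baseline FEV1 value below is hypothetical; only the 16% change is taken from the case):

```python
# Illustrative bronchodilator reversibility calculation (not the clinic's software).
def reversibility_percent(fev1_pre, fev1_post):
    """Percentage improvement in FEV1 after bronchodilator."""
    return (fev1_post - fev1_pre) / fev1_pre * 100

fev1_pre = 1.50   # litres, hypothetical pre-bronchodilator value
fev1_post = 1.74  # litres, a 16% increase as reported for this patient
print(f"{reversibility_percent(fev1_pre, fev1_post):.0f}% improvement")
```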
Answer 5.
Surgical removal of the tracheal web and scarring by argon laser.
Diagnosis:
Tracheal web after tracheal intubation in infancy. Reactive airways disease after bronchopulmonary dysplasia.
Task 5.
What is your suggested treatment? | 2019-02-16T14:30:41.908Z | 2004-09-01T00:00:00.000 | {
"year": 2004,
"sha1": "e4a7c3f8d1c339f76d600eb4d736665641315bfb",
"oa_license": "CCBYNC",
"oa_url": "http://breathe.ersjournals.com/content/1/1/61.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "c4bd06e4e8d8fbbc2b80b9ab9e56242ab7827cae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
52935651 | pes2o/s2orc | v3-fos-license | Phase II trial of palbociclib in patients with metastatic urothelial cancer after failure of first-line chemotherapy
Background The majority of urothelial cancers (UC) harbor alterations in retinoblastoma (Rb) pathway genes that can lead to loss of Rb tumour suppressor function. Palbociclib is an oral, selective inhibitor of CDK 4/6 that restores Rb function and promotes cell cycle arrest. Methods In this phase II trial, patients with metastatic platinum-refractory UC molecularly selected for p16 loss and intact Rb by tumour immunohistochemistry received palbociclib 125 mg p.o. daily for 21 days of a 28-day cycle. Primary endpoint was progression-free survival at 4 months (PFS4) using a Simon’s two-stage design. Next-generation sequencing including Rb pathway alterations was conducted. Results Twelve patients were enrolled and two patients (17%) achieved PFS4 with insufficient activity to advance to stage 2. No responses were seen. Median PFS was 1.9 months (95% CI 1.8–3.7 months) and median overall survival was 6.3 months (95% CI 2.2–12.6 months). Fifty-eight percent of patients had grade ≥3 hematologic toxicity. There were no CDKN2A alterations found and no correlation of Rb pathway alterations with clinical outcome. Conclusions Palbociclib did not demonstrate meaningful activity in selected patients with platinum-refractory metastatic UC. Further development of palbociclib should only be considered with improved integral biomarker selection or in rational combination with other therapies.
BACKGROUND
Urothelial cancer (UC) is a common malignancy with limited treatment options and poor outcomes in patients with metastatic disease. For decades, platinum-based combination chemotherapy was the only effective treatment option for metastatic UC. More recently, with the FDA approval of several immune checkpoint inhibitors, the landscape of treatment options has broadened significantly. However, immune checkpoint inhibition remains effective in only a subset of patients with metastatic UC, and tumour progression remains inevitable in most patients. Additional effective agents are needed for the treatment of this deadly disease and molecularly targeted therapies hold promise for patients with disease progression after platinum-based chemotherapy and/or immunotherapy. For example, the irreversible ErbB family receptor blocker afatinib had significant activity in patients with platinum-refractory UC with HER2 or ERBB3 amplifications. 1 Additionally, the VEGF2-R antagonist ramucirumab in combination with docetaxel improved progression-free survival (PFS) compared with docetaxel alone in unselected patients with platinum-refractory UC. 2 The Cancer Genome Atlas (TCGA) has expanded our knowledge of the molecular landscape of urothelial carcinoma and has demonstrated the frequent alteration of retinoblastoma (Rb) pathway genes in UC. Cyclin-dependent kinase inhibitor 2A (CDKN2A) alteration is the most common focal deletion in UC and is found in up to 20-30% of tumours. 3,4 Loss of CDKN2A leads to upregulation of cyclin-dependent kinase (CDK) 4 and 6 activity, thus phosphorylating and inactivating the tumour suppressor Rb, leading to cell cycle progression and tumour growth. Other Rb pathway alterations include loss of function mutations in CDKN1A (p21) in 9% and amplification of E2F3 in 12% of tumours. Together, these molecular alterations suggest that therapeutic agents targeted to the Rb pathway may have activity in the treatment of metastatic UC.
Palbociclib is an oral, highly selective inhibitor of CDK 4 and 6. Inhibition of CDK4 and CDK6 acts to restore the tumour suppressor role of Rb and promote cell cycle arrest. Intact Rb is critical to the mechanism of CDK4/6 inhibition in cancer treatment, and CDKN2A loss with intact Rb mechanistically predicts sensitivity to CDK4/6 inhibitors. Preclinical data in bladder cancer cell lines have shown inactivation of RB1 confers resistance and inactivation of CDKN2A confers sensitivity to palbociclib. 5 Several CDK4/6 inhibitors have recently been FDA-approved for metastatic, hormone-receptor-positive breast cancer in combination with hormone therapy with impressive prolongation of PFS observed in patients on these agents. 6,7 Prior studies in UC have found striking similarities between ER-positive and luminal breast cancers and the luminal subtype of UC, including the discovery of estrogen receptor signaling and enrichment in luminal breast cancer-specific gene signatures and pathways. Given these similarities in gene expression, as well as the overall molecular landscape of UC and preclinical data, CDK4/6 inhibition is a promising treatment strategy for metastatic UC. 8,9 We hypothesised that palbociclib would demonstrate clinical activity in patients with UC with Rb pathway alterations who had progressed after standard first-line chemotherapy. We therefore conducted this phase II trial of palbociclib in molecularly selected patients with platinum-refractory UC.
Patients
Patients aged ≥18 years with metastatic histologically confirmed UC of the bladder, urethra, ureter, or renal pelvis who had progressed after prior platinum-based chemotherapy in the perioperative or metastatic setting were enrolled. Immunohistochemistry (IHC) was performed on archival tumour tissue and patients were deemed eligible if the tumours were positive for Rb and negative for p16 as determined by the central study genitourinary pathologist (S.J.M.) in a CLIA-certified laboratory at the lead study site (University of North Carolina). Eligible patients had metastatic disease that was not amenable to curative surgery or radiation, including at least one measurable disease site. Perioperative chemotherapy must have been received within 1 year and no more than two prior cytotoxic chemotherapy regimens were allowed. Inclusion criteria included Eastern Cooperative Oncology Group (ECOG) performance status ≤ 2, absolute neutrophil count ≥ 1500/μL, hemoglobin ≥ 8 g/dL, platelets ≥ 75,000/μL, total bilirubin ≤ 1.5 times the institutional upper limit of normal (ULN), AST/ALT ≤ 2.5 times ULN, serum creatinine ≤ 2.5 times ULN, life expectancy > 3 months, and the ability to provide informed consent. Patients were ineligible if they had received a prior CDK4/6 inhibitor, had active brain metastases, were pregnant or breastfeeding, had uncontrolled systemic disease or uncontrolled infection, or were unable to swallow oral medications. All patients provided written informed consent.
Study design and treatment
This was an open-label, single-arm multicentre (University of North Carolina, University of Michigan, Vanderbilt) phase II trial to evaluate the efficacy of palbociclib in patients with previously treated UC with p16 loss and intact RB by IHC. Patients received palbociclib 125 mg orally once daily with food on a 21 days on/ 7 days off schedule in 28-day cycles. Cycles were repeated until disease progression, death, or intolerability. All patients were monitored for toxicity by history, physical examination, and complete blood counts and serum chemistry analysis every 2 weeks for the first two cycles, then every 4 weeks. Dosing was held for any grade 3 hematologic toxicity on day 1 of each cycle or any grade 3 nonhematologic toxicity at any time. Dose reductions were allowed for intolerability with initial dose reduction to 100 mg/day and a subsequent dose reduction to 75 mg/day. Palbociclib was permanently discontinued in patients that required <75 mg/day. Dose intensity was calculated as the dose administered during the total time receiving palbociclib divided by the standard dose intensity specified in the protocol.
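A minimal sketch of one plausible reading of this dose-intensity definition is given below; the patient values are hypothetical and this is not the trial's analysis code:

```python
# Dose intensity relative to the protocol schedule (125 mg daily, 21 of 28 days).
def dose_intensity(total_mg_received, days_on_study,
                   planned_mg_per_cycle=125 * 21, cycle_days=28):
    """Delivered dose per day on study divided by protocol-specified dose per day."""
    delivered = total_mg_received / days_on_study   # mg per day on study
    planned = planned_mg_per_cycle / cycle_days     # 125 mg x 21 days / 28 days
    return delivered / planned

# Example: a patient who completed two full cycles at 125 mg with no missed doses
print(dose_intensity(total_mg_received=2 * 21 * 125, days_on_study=56))  # -> 1.0 (100%)
```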
Radiologic disease evaluation was performed every 8 weeks with assessment of disease response based on RECIST version 1.1. All adverse events from the start of treatment to 28 days after treatment cessation were graded according to the Common Terminology Criteria for Adverse Events (CTCAE), version 4.0.
Statistical analysis
The primary endpoint was progression-free survival at 4 months (PFS4) defined as time from treatment initiation to disease progression or death due to any cause. A Simon's two-stage design was planned to allow for early stopping for futility. The null median PFS was assumed to be 2.5 months, 10 and a 2-month improvement in PFS was felt to be clinically meaningful, translating to PFS4 improvement from historical rate of 33 to 54%, assuming an exponential distribution. The null hypothesis that the true percentage is 33% was tested against a one-sided alternative. In the first stage, 15 patients were planned to be accrued and if there were five or fewer patients with PFS4 the study would be stopped. Otherwise, 21 additional patients were planned to be accrued for a total of 36. This design yields a type I error rate of 0.046 and power of 0.81, when the true percentage of PFS4 is 54%. It was planned to enroll 40 total patients to allow for potential dropouts and nonevaluable patients. Secondary endpoints were overall response rate, median PFS, and median overall survival (OS) estimated by the Kaplan−Meier method. Study and safety analysis included all patients that received at least one dose of palbociclib.
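The stage-1 early-stopping behaviour of this design can be illustrated with a short calculation; the design parameters are taken from the text, but the code below is only a sketch and not the trial's statistical software:

```python
# Stage-1 operating characteristics of the Simon two-stage design described above:
# accrue 15 patients and stop for futility if 5 or fewer reach PFS4.
from scipy.stats import binom

n1, r1 = 15, 5          # stage-1 sample size and futility boundary
p0, p1 = 0.33, 0.54     # null and alternative PFS4 rates

pet_null = binom.cdf(r1, n1, p0)  # P(early stop) if the true PFS4 rate is the null rate
pet_alt = binom.cdf(r1, n1, p1)   # P(early stop) if the drug is truly active

print(f"P(early stop | p = {p0}) = {pet_null:.2f}")
print(f"P(early stop | p = {p1}) = {pet_alt:.2f}")
```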
Biomarker eligibility and analysis
Eligibility criteria included tumour CDKN2A loss and intact Rb, assessed via IHC of p16 and RB respectively, at the primary study site. IHC staining for RB using the purified mouse anti-human RB protein (BD Biosciences Clone G3-245) was validated as a sensitive and specific biomarker for molecular alteration in Rb prior to study initiation. For validation, IHC for Rb was performed on 19 unrelated cancer cases and results were confirmed using prior targeted RB gene sequencing results. Thresholds for positive and negative staining were assigned based on staining observed in cases with known intact or deleted Rb. IHC for p16 was performed as per institutional protocol on clinical specimens using a Ventana monoclonal antibody for p16, clone E6H4, epitope retrieval 2/10 with commercial positive and negative controls. For RB, positive staining was >5% of tumour cells showing at least moderate nuclear staining or 20% of tumour cells showing at least weak nuclear staining. Staining less than this was considered negative. For p16, positive staining was >70% of cells showing positive, diffuse expression with strong intensity, with both nuclear and cytoplasmic staining. Staining less than this was considered negative. 11,12 Based on prior sequencing data in UC, it was estimated that 40% of patients would be eligible for the trial, with approximately 100 patients to be screened to accrue the planned 40 patients. Planned exploratory analyses included correlation of the IHC results with high-throughput sequencing results for DNA alterations in the Rb pathway.
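A minimal sketch of the stated IHC eligibility rule is shown below; the function names and example percentages are illustrative, and reading the weak-staining criterion as "at least 20%" is an assumption where the text says "20%":

```python
# Screening rule sketch: eligible tumours are RB positive (intact Rb) and p16 negative.
def rb_positive(pct_moderate_nuclear, pct_weak_nuclear):
    """RB intact: >5% cells with at least moderate nuclear staining,
    or >=20% cells with at least weak nuclear staining (assumed reading)."""
    return pct_moderate_nuclear > 5 or pct_weak_nuclear >= 20

def p16_positive(pct_strong_diffuse):
    """p16 positive: >70% of cells with strong, diffuse nuclear and cytoplasmic staining."""
    return pct_strong_diffuse > 70

def biomarker_eligible(rb_moderate, rb_weak, p16_strong):
    return rb_positive(rb_moderate, rb_weak) and not p16_positive(p16_strong)

print(biomarker_eligible(rb_moderate=40, rb_weak=60, p16_strong=5))   # True -> eligible
print(biomarker_eligible(rb_moderate=0, rb_weak=2, p16_strong=90))    # False -> ineligible
```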
Somatic mutation and copy number analysis was performed on available archival formalin-fixed paraffin-embedded tumours for 11 of the 12 enrolled patients. A pathologist assessed each tumour block for percentage of viable tumour using hematoxylin and eosin-stained slides. Targeted exon sequencing was conducted through the UNCSeq pipeline (v8) to analyse nearly 800 genes associated with cancer as previously described. 13 Samples were sequenced on the Illumina platform on NextSeq 500 sequencers using a commercial customised targeted Agilent SureSelect panel (UNCseq v8). The samples were aligned to hg19 with additional viral reads by BWA (0.7.9a), then sorted, indexed, and duplicates removed by biobambam. Reported analyses are confined to mutations predicted to have a high or moderate impact on protein function through UNCseq. Pathway mutation frequency was calculated based on the number of samples that contained at least one mutation in the gene list associated with that pathway. The cell cycle pathway was represented by CCND1, CCNE1, CDK4, CDK6, CDKN1A, CDKN1B, CDKN2A, CDKN2B, E2F3, and RB1. Statistically significant CNAs (p < 0.05) were selected and categorised based on the standard deviation of the gene-associated logR values (amplification ≥ 2, gain ≥ 1 and <2, shallow deletion ≤ −1 and > −2, and deep deletion ≤ −2). Genes and copy number alterations that were previously reported to be significantly mutated in TCGA analysis of UC are reported here, as well as all relevant cell cycle pathway genes.
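The copy-number categorisation quoted above can be expressed as a simple rule; the sketch below is illustrative only and assumes the input is the gene-level logR value expressed in standard-deviation units:

```python
# Categorisation of copy number alterations using the thresholds stated in the text.
def classify_cna(logr_sd):
    if logr_sd >= 2:
        return "amplification"
    if logr_sd >= 1:
        return "gain"            # 1 <= value < 2
    if logr_sd <= -2:
        return "deep deletion"
    if logr_sd <= -1:
        return "shallow deletion"  # -2 < value <= -1
    return "neutral"

for value in (2.4, 1.3, -0.2, -1.5, -2.7):
    print(value, classify_cna(value))
```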
Patient characteristics
Of 34 patients screened, 25 were eligible based on tumour IHC demonstrating p16 loss and intact Rb (Fig. 1). Of those, 12 patients were enrolled between April 2015 and January 2017. Reasons that eligible patients did not enroll included: decline in performance status (2), received other treatment (2), had not yet progressed on current therapy (1), closure of the study before enrollment could occur (4), and unknown (4). Baseline patient characteristics are included in Table 1. Most patients were male (67%), age > 65 (67%), and had bladder as the primary site of their UC (58%). Postplatinum prognostic factors included hemoglobin less than 10 g/dL, 17%; liver metastases, 0%; and ECOG performance status more than 0, 58%. All patients received prior platinum-based chemotherapy and ten patients had definitive surgical treatment of their primary tumour (eight had cystectomy, two had nephroureterectomy). Most patients (58%) had only received one prior systemic therapy regimen. Two patients (17%) had received any prior radiation and two patients (17%) had received prior immune checkpoint inhibitor treatment. Three patients (25%) had lymph node-only metastatic disease. Median time from prior chemotherapy to study treatment was 5.0 months.
Efficacy
Two of the first 12 enrolled patients met the primary endpoint of PFS4 and the best response was stable disease. Since only 2 patients achieved PFS4 in the first 12 patients, it was not possible to meet the criterion for study continuation to stage 2 and thus the study was terminated. The overall response rate was 0% and median PFS was 1.9 months (95% confidence interval 1.8-3.7 months). Median OS was 6.3 months (95% CI 2.2-12.6 months). Eleven of 12 patients have expired; no deaths were felt to be treatment-related. Three patients received subsequent anticancer therapy after protocol treatment.
Dose intensity and adverse events
The median duration of treatment with palbociclib was 8 weeks (range 2.6-32 weeks). Only four patients were treated beyond cycle 2. The mean dose intensity was 91% and two patients had a dose intensity of <80% (one due to AEs, one due to noncompliance). All patients discontinued the study drug, most commonly due to disease progression (83%). Two patients discontinued the study drug due to adverse events. One patient had a dose reduction due to hematologic toxicity. Seventy-five percent of study participants experienced grade 3/4 treatment-related AEs and 92% of participants experienced any grade treatment-related AEs (Table 2). Almost all the clinically significant toxicity was hematologic and seven (58%) patients had grade 3 hematologic toxicity (no grade 4). The most common AEs included anemia, leukopenia, lymphopenia, neutropenia, and thrombocytopenia. Serious AEs (any grade) suspected to be related to the study drug did not occur in any patients.
Genomic alterations
Sequencing data were available on 11 of the 12 enrolled patients. The most frequently observed somatic mutations were ARID1A, MLL2, PIK3CA, and TP53 (55% of patients for each). Other common mutations included TSC1, EP300, and ERCC2 (Fig. 2a). There were no patients with CDKN2A deletions (Fig. 2b). Alterations in the PI3K pathway were common with 82% of patients having a somatic mutation in either PIK3CA or TSC1. Although 82% of patients also had alterations in cell cycle genes (Fig. 2c), only 36% of patients had alterations that would predict sensitivity to palbociclib ( Table 3). The two patients who responded to treatment did not harbor alterations predicted to confer sensitivity.
DISCUSSION
Single agent palbociclib did not demonstrate meaningful activity in molecularly selected patients with platinum-treated UC. Palbociclib has been shown to be active in some cancers with CDKN2A loss, although impressive single agent efficacy is limited to case reports and it is in general primarily cytostatic. 14 Preclinical data support the potential for benefit in cancers with Rb pathway alterations, 15 but clinical trials in liposarcoma, 16 ovarian cancer, 17 nonsmall cell lung cancer, 18 and colon cancer 19 have demonstrated stable disease in a subset of patients with few tumour responses. Our study similarly showed a minority of patients had stable disease without any tumour responses.
Despite the limited efficacy seen in our study with single agent CDK4/6 inhibition, combination therapy could be considered for further investigation. Heilmann et al. demonstrated that despite loss-of-function CDKN2A mutations in approximately 80% of sporadic pancreatic ductal adenocarcinomas, these tumours are inherently resistant to CDK4/6 inhibition, but that this resistance can be overcome with the addition of mTOR inhibition. 20 In addition, single agent PI3K inhibitors have had only modest activity across cancers, but laboratory studies show that combined CDK4/6-PI3K inhibition can synergistically reduce cell viability. 21 Interestingly, the patients in our study demonstrated a surprising number of alterations in the PI3K/mTOR pathway, perhaps suggesting a mechanism of resistance. It is likely, however, that tumour resistance to palbociclib occurs by multiple mechanisms, as evidenced by the breast cancer experience with the PALOMA-3 trial, which showed no differential sensitivity to palbociclib based on PIK3CA mutation status. 22 Additionally, the efficacy of CDK4/6 inhibitors in breast cancer occurred in combination with hormonal therapy, potentially highlighting the need to investigate combination therapy in UC. Palbociclib had a tolerable safety profile in our study, with predictable and manageable AEs. Hematologic toxicity was common but only one patient required dose reduction. Neutropenia occurred frequently, but as seen in breast cancer, did not result in any episodes of febrile neutropenia. Anemia was slightly more common in our study than has been previously seen in breast cancer, which may be expected based on the patient population enrolled on this study compared to the patients typically enrolled on breast cancer clinical trials.
The lack of clinical efficacy in our trial could suggest that our inclusion criteria did not select for the patients most likely to receive benefit from palbociclib. In the PALOMA-1 trial of hormone-receptor-positive breast cancer that treated women with palbociclib plus letrozole or letrozole alone, the demonstrated improvement in PFS with palbociclib was less pronounced in the molecularly selected cohort 2 compared with the unselected cohort 1 patients, despite the presence of alterations predictive of CDK4/6 inhibitor response in cohort 2. In that trial, molecular selection was slightly different than the current study and included amplification of cyclin D1 (CCND1), loss of CDKN2A, or both. The authors postulated that the relative amount of Rb protein could explain their results, and we therefore include Rb positivity by IHC as required eligibility in our trial. The results in breast cancer, however, may also simply reflect the relative importance of CDK4/6 activity in breast cancer cell proliferation independent of Rb pathway genomic alterations. UC tumours may be inherently less sensitive to the cytostatic cell cycle arrest induced by palbociclib, either due to less dependence on CDK4/6 activity or resistance mechanisms from other pathway members that are less evident in breast cancer. The palbociclib experience in our study does highlight the difficulty in molecularly selecting patients for clinical trials via an integral biomarker at screening. Although we demonstrated the feasibility of screening patients in this setting with IHC biomarkers, IHC is a marker of protein expression and not a direct measurement of underlying molecular alterations. Molecular validation of IHC for RB in UC was performed prior to study enrollment and IHC for p16 is in widespread clinical use and correlates with CDKN2A loss reliably in other cancers. 23 We accurately selected patients without loss of Rb (0 of 12 enrolled patients had Rb loss determined with NGS). However, there were no patients with CDKN2A loss in our study and more than the expected number of patients were eligible by our IHC screen. Prior studies in other tumours demonstrate a high sensitivity of p16 expression by IHC for underlying CDKN2A deletion, but also a substantial proportion of cases without CDKN2A deletion with p16 immunonegativity, suggesting our integral biomarkers did not select for CDKN2A deletion. 24 Recent data confirm that CDKN2A loss is less frequent than reported in the original TCGA analysis and is now estimated at around 20-30%. 25 Our findings suggest that the lack of p16 protein expression by IHC was not due to alterations in CDKN2A itself, but instead due to alterations in other Rb pathway members or by methylation of CDKN2A as a cause of decreased protein expression, as has been seen in other cancers. [26][27][28][29] Alternatively, the lack of CDKN2A loss by NGS could be secondary to the noted difficulties in documenting copy number variation in single samples by NGS. 30 This may also explain why an unexpectedly high number of patients were eligible for the study by IHC criteria. Prior studies have found that CDKN1A mutations and CDKN2A loss are mutually exclusive in bladder cancer, a finding recapitulated in our small dataset. 31
We did not see a correlation between alterations that we predicted to confer sensitivity to palbociclib (CDKN2A inactivation, CCND1 amplification, CDKN2B inactivation) or resistance to palbociclib (Rb loss, CDKN1A inactivation, CCNE1 amplification, or E2F amplifications) and efficacy of palbociclib in our patients. Similarly, in the phase 1 trial of ribociclib (another CDK4/6 inhibitor) in solid tumours and lymphoma, CDKN2A loss by next-generation sequencing was not associated with chance of remaining on drug ≥8 weeks, although CCND1 alterations were associated with response (our study had no patients with CCND1 amplifications). 32 One consideration is whether our trial population had more aggressive disease than a standard second-line UC population. Prior work has shown that UC tumours with CDKN2A alterations are associated with worse cancer-specific survival compared with other UCs. 33,34 However, the median PFS in our study of 1.9 months is in keeping with other studies in this patient population and reflects the inherent aggressive nature and difficulty in treating metastatic UC after platinum-based chemotherapy. Our trial included a slightly higher percentage of upper tract UC, which could have selected for more aggressive disease, although it included no patients with metastases to the liver. 35 There are several limitations to the current study. Firstly, the integral biomarker inclusion was designed to enrich for patients more likely to respond to treatment, but does not allow for an analysis of differential response based on presence or absence of Rb pathway alterations. We are therefore unable to conclude if our inclusion criteria excluded patients that could have benefited from treatment. The only biomarker that is clinically used for CDK4/6 inhibitor treatments is hormone receptor positivity in breast cancer. Given the lack of clinical activity seen in our study, it remains unknown whether the molecular subtype of UC (i.e., basal vs. luminal) could also be a predictive biomarker in UC. An additional limitation is that sequencing was done on available archival tumour tissue, which was not confirmed to represent prechemotherapy specimens, and biopsy of metastatic lesions for genomic analyses was not required, which can influence detectable alterations.
In conclusion, palbociclib did not demonstrate significant activity in molecularly selected patients with platinum-refractory metastatic UC. Further development of palbociclib in UC could include combination regimens with PI3K pathway inhibitors. Additionally, if palbociclib is developed further in UC, we do not recommend IHC-based molecular selection of included patients. | 2018-10-22T06:13:30.609Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "4f1d67c6957ba5f0de3fdb34454f05fd2e586d95",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41416-018-0229-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f1d67c6957ba5f0de3fdb34454f05fd2e586d95",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208284717 | pes2o/s2orc | v3-fos-license | M AINTAINING C LOUD P ERFORMANCE U NDER D DOS A TTACKS
The popularity of cloud computing has been growing, and the cloud has become an attractive alternative to classic information processing systems. The distributed denial of service (DDoS) attack is one of the best-known attacks against cloud computing. This paper proposes a Multiple Layer Defense (MLD) scheme to detect and mitigate DDoS attacks that cause resource depletion. The MLD consists of two layers. The first layer contains an alarm system that sends alarms to the cloud management when a DDoS attack starts. The second layer includes an anomaly detection system that detects which VMs are infected by the DDoS attack. The MLD is also tested with different DDoS attack ratios to show the stability of the scheme. The MLD is evaluated by energy consumption and overall SLA violations. The results show the strong effect of the MLD in reducing energy consumption and overall SLA violations for all datasets. The MLD also shows acceptable stability and reactivity under different DDoS attack ratios.
INTRODUCTION
The pay-as-you-go (PAYG) model is an innovative paradigm designed by cloud computing providers to supply applications, platforms, services, and computing resources to users [1]. Various Quality of Service (QoS) aspects, such as performance, availability, and reliability, are used to measure the performance of the different services provided by a cloud computing platform. These performance metrics are specified in a Service Level Agreement (SLA) negotiated between users and cloud providers. Cloud services are classified as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS is a software deployment model in which an application is hosted as a service and delivered to users across the Internet; the term usually refers to business software rather than consumer software and is associated with Web 2.0. Because there is no need to install and run the software on a user's own computer, SaaS is a way for businesses to obtain the same benefits as commercial software with a smaller cost outlay. Cloud providers face growing pressure to deliver services, and PaaS is a form of cloud computing that helps developers design, write, and test web application services offered to customers. Many vendors, such as Salesforce.com (Force.com), Microsoft (Azure), and startups such as Wave Maker, have contributed to the rise of online development environments. These platforms typically rely on a single development language or methodology, which is an advantage for the enterprise.
The cloud computing environment is characterized by high volatility, yet it is expected to provide a high level of availability and reliability for applications and services. Availability is the ability of a system to perform its function when services are requested, i.e., when it is neither failed nor undergoing repair. Reliability in cloud computing is defined with respect to the occurrence of security, resource, and service failures: it is the ability of the components and parts of a system to perform their required jobs over a period of time with a certain level of confidence.

The relation between reliability and availability is governed by a third component called maintainability, which is the ability to perform a successful repair action within a certain time. Reliability, Availability, and Maintainability (RAM) encompass the essential features of an SLA. They are correlated in such a way that both high reliability and good maintainability are necessary to achieve high availability [2]; at constant maintainability, the relation between reliability and availability is a positive one.
One of the most important features of the cloud is highly available service for customers: cloud computing aims to let users access information anywhere, at any time. Availability refers not only to software and data but also to hardware provided on demand to authorized users. The availability of cloud computing services is targeted by resource-based attacks such as Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks [3,4]. The DDoS attack is considered one of the most serious attacks in the cloud environment. A DDoS attack is a cyberattack in which the attacker tries to make a machine's resources unavailable to its intended users by temporarily or permanently disrupting the services of a host connected to the Internet, typically using a group of machines directed at a particular service. DDoS attacks aim to consume a system's resources to the point where its capability to offer the intended service is compromised, rendering it unreachable [5].
Many DDoS attack cases have gained attention in the research community. For example, Lizard Squad attacked the cloud-based gaming services of Microsoft and Sony, taking the services down on Christmas day in 2015; the cloud service provider Rackspace was targeted by a massive DDoS attack on its services; and Amazon EC2 cloud servers also faced a massive DDoS attack. DDoS attacks have caused heavy downtime, business losses, and many long-term and short-term effects on the business processes of victims [6]. The economic losses per hour at peak times were 470% higher than in the previous year. In March 2015, Greatfire.org was targeted by a heavy DDoS attack that cost it an enormous bill, reaching 30,000 USD per day on the Amazon EC2 cloud [7]. In [8], the authors reported an average total financial damage of up to 444,000 USD per DDoS attack. DDoS attacks are classified into bandwidth-based and resource-based attacks. A bandwidth attack devours the bandwidth of the victim or target system by flooding it with unwanted traffic to prevent legitimate traffic from reaching the victim network; bandwidth attacks are further divided into flooding attacks and amplification attacks. Resource-depletion attacks, on the other hand, aim to exhaust the victim system's resources; their two main branches are protocol exploit attacks and malformed packet attacks [9].
A DDoS attacker harnesses the most important advantages of cloud computing, such as the pay-as-you-go model, auto-scaling, and multi-tenancy, to achieve its goal. Under the pay-as-you-go model, cloud instances are rented on an hourly basis, so the minimum renting period is an hour, and a virtual machine (VM) owner can update its resources on the fly as required. In addition, cloud computing offers better hardware utilization, and a consumer does not need to provide power, space, cooling, or maintenance. Pricing and accounting play a vital role in DDoS attacks in the cloud, since attackers only need to pay for the hours during which their VMs are active [6]. Multiple providers support auto-scaling [6,10], which permits allocation of additional CPUs, memory, storage, and network bandwidth to a VM when resources are required and their removal when the VM no longer needs them; a VM can also be transferred from one host to another. Multi-tenancy, an architecture in which a single instance of a software application serves multiple customers, is exploited by DDoS attackers as well: if an attacker succeeds in attacking a single VM that serves many applications, all of those applications will be put out of order [6].
The effects of DDoS attacks are categorized into two sets: direct and indirect effects [6,11,12,13]. Direct effects include service downtime, auto-scaling-driven resource and economic losses, business and revenue losses, and economic losses due to the downtime of the victim service. Indirect effects on the cloud include energy consumption costs, reputation and brand image losses, attack mitigation costs, collateral damage to cloud components, and the effects of recent smoke-screening attacks. The economic losses have multiple facets across the direct and indirect effects: the economic losses caused by a DDoS attack are known as an Economic Denial of Sustainability (EDoS) attack, or a Fraudulent Resource Consumption (FRC) attack. A DDoS attack takes the shape of an EDoS attack when the victim service is hosted in the cloud [6,11].
Defense systems for DDoS attacks in the cloud are categorized into two main modes: the proactive defense mode and the reactive defense mode. The proactive defense mode covers DDoS attack prevention, whereas the reactive defense mode covers attack detection as well as attack mitigation and recovery [6,9]. In the proactive mode, DDoS attack prevention methods are based on one or more functions such as challenge-response, hidden servers or ports, restrictive access, and resource limits.
In the reactive defense mode, the DDoS attack detection method operates in a situation where the attack has already begun: it starts to run according to signals sent from the cloud management system announcing that an attack has started and that cloud performance will degrade. DDoS attack detection is built on one or more functions such as anomaly detection, source and spoof tracing, count-based filtering, botcloud detection, and resource usage analysis. Anomaly detection is the process of recognizing unanticipated patterns or events in datasets that deviate from normal patterns or events. It is classified into three groups: supervised, semi-supervised, and unsupervised anomaly detection. Supervised anomaly detection is similar to other supervised methods and requires labeled training and test data, whereas unsupervised anomaly detection is highly sensitive to outliers.
For attack mitigation and recovery, the proposed methods support an infected server so that it can keep serving requests in the presence of an attack. The published methods are based on one or more functions such as victim migration, OS resource management, Software Defined Networking (SDN), and DDoS mitigation as a service. Cloud computing is very vulnerable to DDoS attacks because of the structural features of the cloud system: since logical resources are delivered through a virtualization layer on top of physical resources, a set of virtual machines sharing one physical resource can all be affected by a DDoS attack on that resource. A DDoS attack consumes system resources, so users can no longer receive reliable services, which results in SLA violations. At the same time, a DDoS attack depletes system resources and causes high CPU and memory consumption; CPU and memory are two of the components that contribute most to energy consumption. DDoS attacks therefore have an indirect effect on energy consumption, by raising CPU and memory utilization and keeping these resources busy all the time. This paper proposes the Multiple Layer Defense (MLD) scheme. The proposed scheme sends alerts to the cloud management system to notify it about an attack and mitigates the effects of the DDoS attack on the cloud system, focusing on resource-depletion DDoS attacks. The scheme consists of two layers. The first layer aims to alert the cloud system when an attack starts and is based on a prediction method that relies on a ridge regression learning model. The prediction method forecasts the workload size requested by users for cloud services, based on the requested workload size in the previous stage, the day, and the time. The predicted workload size is used as a dynamic threshold: if the actually requested workload size is larger than the predicted one, the cloud manager announces that the cloud is exposed to an attack.
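To illustrate the layer-one logic described above, the following minimal Java sketch shows how the predicted workload size could serve as a dynamic threshold for raising an alarm. The class and interface names (LayerOneAlarm, WorkloadPredictor, CloudManager) are hypothetical and are not taken from the paper.

```java
// Minimal sketch of the layer-one alarm logic, assuming a hypothetical
// ridge-regression predictor and cloud-management interface.
public final class LayerOneAlarm {

    private final WorkloadPredictor predictor; // hypothetical ridge-regression predictor
    private final CloudManager cloudManager;   // hypothetical management endpoint

    public LayerOneAlarm(WorkloadPredictor predictor, CloudManager cloudManager) {
        this.predictor = predictor;
        this.cloudManager = cloudManager;
    }

    /**
     * Compares the workload actually submitted in the current hour with the
     * predicted size (the dynamic threshold). If the real workload exceeds the
     * threshold, an alarm is raised so that layer two (OCSVM detection) can run.
     */
    public boolean check(int day, int hour, double previousWorkload, double realWorkload) {
        double dynamicThreshold = predictor.predict(day, hour, previousWorkload);
        boolean suspectedAttack = realWorkload > dynamicThreshold;
        if (suspectedAttack) {
            cloudManager.raiseDdosAlarm(hour, realWorkload, dynamicThreshold);
        }
        return suspectedAttack;
    }
}

interface WorkloadPredictor { double predict(int day, int hour, double previousWorkload); }

interface CloudManager { void raiseDdosAlarm(int hour, double realWorkload, double predictedWorkload); }
```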
To handle the false positive and false negative alarms that the first layer can produce, the second layer aims to detect the anomalous patterns that DDoS attacks exhibit. The detection process relies on the behavior of VM resource utilization during the lifetime of a VM, and the second layer uses a one-class support vector machine model to detect DDoS attacks. Most published papers studied the effect of DDoS attacks on cloud computing through response time; this paper instead gauges the effect of DDoS attacks in cloud computing through SLA violations and energy consumption. The key contributions of this paper are: (1) a prediction model based on ridge regression for predicting the workload size in cloud computing; (2) a new dynamic threshold for the number of requested jobs; and (3) the use of SLA violations and energy consumption to evaluate the performance of the cloud with the MLD scheme under DDoS attacks.
The organization of the paper is as follows: Section 2 reviews related work. Section 3 explains and analyzes the dynamic threshold for the workload size. Section 4 discusses and analyzes the one-class SVM. Section 5 explains the workload clustering process. Section 6 presents the experiment design and the evaluation metrics. The results and their analysis are explained in Section 7, and the conclusion and future work are discussed in Section 8.
RELATED WORK
As described in the previous section, the defense system classes for DDoS attacks in the cloud are attack prevention, attack detection, and attack mitigation and recovery. This paper focuses on DDoS attack detection and mitigation, with particular attention to approaches based on count-based filtering, resource usage methods, and OS-level resource management. The next section discusses a set of published works in the area of cloud computing workload prediction.
Workload Prediction in Cloud Computing
This section discusses previously proposed workload size prediction approaches. To develop more accurate predictions, data mining and machine learning techniques, including regression, decision trees, neural networks (NNs), fuzzy logic, genetic algorithms (GAs), and support vector machines (SVMs), have been applied to predict the workload submitted to cloud computing. A new service cloud architecture was presented in [14], where a linear regression model was applied to predict the workload size trace. The workload prediction aimed to forecast the size of the services requested in the next time interval. The authors used a linear regression model (LRM) and compared it with other models such as the autoregressive moving average (ARMA) filter, mean workload prediction, and max workload prediction; the LRM outperformed the other methods. The workload used in that paper consists of a video service measured in six-hour time intervals in a service cloud. In the published literature on workload prediction, different regression methods, such as linear regression (LR) and the Auto-Regressive Integrated Moving Average (ARIMA), are most often used to predict the size of the requested workload in cloud computing. The main shortcoming of LR is that it is not well suited to the cloud computing environment, where the size of the requested workload changes strongly. Another disadvantage is that the ARIMA model selection process depends greatly on the competence and knowledge of the analysts to yield the targeted results [15].
A Cloud Resource Prediction and Provisioning scheme (RPPS) is proposed in [16]. RPPS automatically estimates future demand and achieves proactive resource provisioning for cloud applications. It is based on an autoregressive integrated moving average (ARIMA) model to forecast workloads in the near future, combines coarse-grained and fine-grained resource scaling under various situations, and adopts a VM-complementary migration approach. RPPS can resolve the predictive resource provisioning challenge that enterprises face when confronting demand variations in the cloud data center. The RPPS model was assessed with traces collected by the authors using typical CPU-intensive applications as well as workloads from a real data center. A new prediction approach is suggested in [17]. It classifies the workload and assigns different prediction models according to the workload features. The key idea is to convert workload classification into a 0-1 programming problem, formulate an optimization problem to maximize the prediction precision, and then present an optimization algorithm. The proposed approach was tested with real traces of typical online services to evaluate its prediction accuracy.
In [18], the authors extended the research of [17] by discussing the classified prediction approach from the perspective of the IaaS layer. They analyzed the problems in a large-scale heterogeneous cloud environment and assessed the classified prediction method with real trace data from a Google cluster. An adaptive categorical prediction scheme was proposed based on factors in the IaaS layer. As in [17], the approach categorizes workloads into different sets corresponding to different prediction methods; in addition, feedback from workload monitoring was applied to adjust the prediction scheme in a timely manner, and an integer programming model was established to classify workloads optimally.
Three models to predict the workload based on analyzing monitoring data are proposed in [19]. In [20], the realization of a cloud workload prediction module for SaaS providers is presented.
The prediction model proposed in [20] is based on the autoregressive integrated moving average (ARIMA) model. The accuracy of future workload prediction was evaluated using real traces of requests to web servers, and the effect of the achieved accuracy on resource utilization efficiency and quality of service (QoS) was assessed. A Bayesian model is proposed in [21] to predict the virtual resource requirements of applications in the short and long term. The factors considered for prediction include the day, whether it is a weekday or weekend, the time interval of application access, the workload, benchmarks, and the availability of virtual machines; dependencies between related parameters were identified. The model was assessed using cross-validation on training, validation, and test datasets reflecting CPU-intensive transactional requests. The SamIam Bayesian network simulator was used to build the model, which was verified against workload traces of Amazon EC2 and Google CE data centers in real-time scenarios. The main limitations of these approaches are their static nature and the data size they consider: many papers ignore the dynamism of the cloud computing environment, the data sizes used are smaller than the real data sizes of cloud computing, and the accuracy of the proposed approaches remains low.
DDoS Attack Detection & Mitigation
This section discusses published methods for the detection and mitigation of DDoS attacks. In [22], the authors proposed a novel detection and mitigation technique against EDoS attacks in cloud computing called EDoS-Shield. EDoS-Shield aims to confirm whether user requests come from legitimate persons or are generated by bots. It maintains two lists: a whitelist for legitimate users and a blacklist for illegitimate ones (DDoS attackers). The detection mechanism works by forwarding the first request to a verifier node in the EDoS-Shield architecture; this verifier node is responsible for the detection process and for updating the white and blacklists based on the detection results. Subsequent requests sent by bots are blocked by a virtual firewall, and their IP addresses are placed on the blacklist, whereas subsequent requests from legitimate users are sent directly to the target cloud service and their IP addresses are placed on the whitelist. In [23], the authors extended the work published in [22] by addressing IP spoofing within the EDoS-Shield architecture. In the enhanced EDoS-Shield architecture, the time-to-live (TTL) value found in the IP header is used to detect spoofed IP packets; for mitigation, these spoofed packets are dropped before reaching the protected server. When a V-Node completes the detection of a request, it records the TTL value associated with the source IP address, so that both the IP address and the TTL value are stored in the white or blacklists. This information later helps to distinguish packets with spoofed IP addresses, which can then be selectively filtered out by a virtual firewall (VF).
In [24], the authors used a request count threshold to detect DDoS attacks, with the threshold set on the basis of human behavior over a time period; for mitigation, all subsequent requests from the same IP are dropped for a finite period. A detection and mitigation system is proposed in [25]. The defense system includes a virtual machine monitor and an isolated environment. The virtual machine monitor (VMM) detects DDoS attacks based on the resources consumed. Once a DDoS attack is detected, the VMM creates an isolated environment for the running application by duplication. Once created, the isolated environment no longer communicates with other components, because it has no I/O function; it simply keeps executing the tagged applications. After the DoS attack stops, the VMM restores the OS state as well as the tagged applications from the isolated environment back to the VM, and the isolated environment can be shut down by the VMM.
A dynamic resource allocation strategy is proposed in [26] to counter DDoS attacks against individual cloud customers. A DDoS attack is declared when the time a packet spends in attack mode exceeds the constant time a packet spends in non-attack mode, and an Intrusion Prevention System (IPS) is responsible for monitoring packet times. To relieve the effect of DDoS attacks, the proposed method automatically and dynamically allocates additional resources from the available cloud resource pool, and fresh VMs are replicated from the image file of the original IPS using current replica technology. All IPSs then work together to filter attack packets out while guaranteeing QoS for benign users. When the volume of DDoS attack packets decreases, the method automatically reduces the number of IPSs and releases the additional resources back to the available cloud resource pool; the number of IPSs required depends on the volume of attack packets.
A defensive framework called ATOM is proposed in [27]. ATOM operates across the IaaS layer and provides automated tracking, orchestration, and monitoring of resource utilization for a large number of VMs running on an IaaS cloud, in an online mode. ATOM introduces an online tracking module running at each Node Controller (NC) that continuously tracks several performance metrics and resource utilization values for every VM. The Cloud Controller (CLC) acts as the tracker and the NCs act as the observers. The two main objectives of tracking and monitoring are (1) to provide the CLC with an up-to-date view of the system status at minimum overhead, and (2) to analyze the resource utilization data reported by the online tracking module to discover anomalies. ATOM includes a naïve method to define a threshold value for any metric of interest chosen by a user. The enhanced ATOM is able to defend against dynamic and complex attacks and anomalies in cloud computing: it applies a dynamic online monitoring mode based on Principal Component Analysis (PCA) to mine the resource data and generates anomaly information to assist further analysis by the orchestration component when an anomaly occurs. The orchestration component in ATOM leverages virtual machine introspection (VMI) tools; VMI is a process that permits indirect inspection and manipulation of the state of virtual machines. The monitoring component provides the VMI tool with a priori knowledge of what might have gone wrong and acts as a trigger telling the VMI tools when and where to perform introspection. With this information, the overhead of using VMI techniques is greatly reduced.
The authors in [28] proposed a DDoS-aware resource allocation strategy in which overloaded VMs are not directly flagged for a resource increase; instead, the traffic is separated and resources are increased only on the basis of the demands of genuine flagged requests. A number of published papers use machine learning techniques to detect DDoS attacks. An effective DDoS attack detection approach based on K-nearest-neighbor traffic classification with correlation analysis (CKNN) is proposed in [29]. The approach benefits from correlation analysis of the training data: hidden relations can be found in training data from a data center, which improves classification accuracy and is not affected by the density of the training data. To reduce the overhead of KNN, the authors map the training data onto a grid, so that test examples are only compared with training examples in neighboring cells rather than with all training data; applying the r-polling method further decreases the overhead of CKNN. Moreover, the CKNN method is less affected by the mass of training data, which directly affects the effectiveness and precision of a traditional KNN classifier. The proposed approach was tested with Internet traffic, a data center traffic trace, and the KDD'99 dataset.
A DoS attack detection system based on Multivariate Correlation Analysis (MCA) is proposed in [30]. MCA can extract the geometrical correlations between network traffic features, which helps obtain a more accurate characterization of network traffic. The presented MCA-based DoS attack detection method uses the principle of anomaly-based detection for attack recognition, which enables the solution to discover both known and unknown DoS attacks efficiently by learning the patterns of legitimate network traffic. A triangle-area-based method is presented to improve and speed up the MCA process. The effectiveness of the detection method was assessed using the KDD Cup 99 dataset, and the effects of both non-normalized and normalized data on the performance of the detection system were observed. A profile-based network intrusion detection and prevention system is presented in [31]. The system aims to secure the cloud against malicious insiders and outsiders, and it mixes fine-grained data analysis with a Bayesian approach to detect DDoS attacks using an unsupervised learning algorithm; its goal is to detect network attacks such as TCP SYN flooding.
The authors in [32] designed a three-stage anomaly detection approach to detect DDoS attacks. The first stage is a monitoring stage that uses a rule-based system to preprocess known DDoS attack patterns. The second stage offers lightweight anomaly detection by forecasting the future load on each customer interface using time series modeling; the traffic volume over the network is divided into large and small volumes along the time axis, and a Bayesian technique is applied to analyze DDoS attack candidates on the network topology. The last stage focuses on anomaly detection to identify both known and unknown DDoS attack patterns using an unsupervised learning algorithm. The main limitations of the previous approaches are that they are static, depend on user communication, and are network-focused. Static methods are not suitable for the dynamic behavior of cloud computing: if a malicious user succeeds in imitating the behavior of a real user, the cloud system becomes the victim of a huge number of attacks. Many of the published papers focus on detecting DDoS attacks based on the behavior of network devices; in addition, IP spoof tracing methods are still evolving and require further development for detecting DDoS attacks across the network. Because of these shortcomings, the proposed scheme detects DDoS attacks on the cloud side, meaning that DDoS attacks are detected by monitoring cloud computing performance, which helps avoid the challenges of IP spoofing. Moreover, the proposed scheme does not depend only on the network: CPU and memory resource utilization are also considered. An approach for visualizing network attack data using clustering is proposed in [33]. The approach is based on the K-means algorithm applied to the KDD Cup 1999 network dataset to evaluate the performance of an unsupervised learning method for anomaly detection. It consists of three stages: after loading the corrected KDD dataset, the first stage groups the 37 attacks found in this dataset into four general categories (DoS, Probe, R2L, and U2R); the second stage uses the Cluster 3.0 tool to apply the k-means technique to cluster the attacks; and the third stage uses the TreeView visualization tool to visualize the k-means results. The evaluation showed that a high detection rate can be achieved while maintaining a low false alarm rate.
MULTIPLE LAYER DEFENSE (MLD) SCHEME
The MLD scheme aims to reduce the harmful effects of DDoS attacks in a cloud computing environment. It focuses on reducing SLA violations as a direct effect of DDoS attacks; in this study, SLA violations are caused by reduced availability and reliability. The MLD also aims to reduce energy consumption as an indirect effect of DDoS attacks, since energy consumption rises when cloud resources become highly utilized. The two layers of the MLD are explained in the following sections. In the first layer, the comparison between the workload actually requested by users and the predicted workload determines whether alarms are sent to cloud management. In the second layer, detection and mitigation are run: a VM whose resource utilization keeps increasing is detected, and the mitigation process handles it.
Layer One: Dynamic Threshold for Workload Size
Many published papers use a static threshold to detect DDoS attacks, even though the cloud computing environment is highly dynamic. This paper proposes a method to establish an alarm system based on a dynamic threshold for the size of requested jobs. The comparison between the actually requested jobs and the dynamic threshold helps to detect DDoS attacks: if the actual number of requested jobs is higher than the threshold, the alarm system sends alerts to cloud management to notify it that a DDoS attack has started. The proposed method is based on a ridge regression learning model, which has two main advantages. First, ridge regression has a penalty term that reduces overfitting, using the L2 penalty term shown in Equation 1. Second, ridge regression handles the correlation between the model features, which improves the overall performance of the model. The objective of ridge regression is to minimize the cost function J, as expressed by Equation 1.
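The formula for Equation 1 did not survive text extraction. The standard ridge-regression objective with an L2 penalty, which the description above appears to correspond to, can be written as follows; the symbols used here are the conventional ones and are an assumption rather than a verbatim reproduction of the paper's notation.

```latex
J(\mathbf{w}) \;=\; \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - \mathbf{w}^{\top}\mathbf{x}_i\bigr)^{2} \;+\; \lambda \,\lVert \mathbf{w} \rVert_{2}^{2}
```

Here $\mathbf{x}_i$ is the feature vector for sample $i$, $y_i$ is the observed workload size, $\mathbf{w}$ is the weight vector, and $\lambda$ is the L2 penalty coefficient that reduces overfitting; minimizing $J$ by gradient descent yields the workload predictor.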
The day, the hour, and the previously requested workload size are used as features for the ridge regression learning model. Two preprocessing operations are applied to shorten the learning time and improve model performance. First, because regression performance improves with a smaller number of features, the hour and the day are combined into one feature. Second, normalization is applied to all features to keep them in the same range between 0 and 1, which speeds up convergence. The leave-one-out cross-validation technique was applied in the training stage.
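A minimal Java sketch of such a ridge-regression predictor is shown below, assuming two features (a combined day-hour index and the previous workload size), min-max normalization to [0, 1], and batch gradient descent on the ridge objective. The hyper-parameter values and the omission of leave-one-out cross-validation are simplifications for illustration; this is not the authors' implementation.

```java
import java.util.Arrays;

// Sketch of the layer-one ridge-regression predictor. Features: a combined
// day-hour index and the previous workload size, both min-max normalised to [0, 1].
public final class RidgeWorkloadModel {

    private final double[] w = new double[3]; // bias + 2 feature weights
    private final double lambda;              // L2 penalty coefficient (assumed value supplied by caller)
    private final double learningRate;

    public RidgeWorkloadModel(double lambda, double learningRate) {
        this.lambda = lambda;
        this.learningRate = learningRate;
    }

    /** Min-max normalisation of one feature column to the range [0, 1]. */
    static double[] normalise(double[] column) {
        double min = Arrays.stream(column).min().orElse(0.0);
        double max = Arrays.stream(column).max().orElse(1.0);
        double range = (max - min) == 0 ? 1.0 : (max - min);
        return Arrays.stream(column).map(v -> (v - min) / range).toArray();
    }

    /** Batch gradient descent on the ridge objective. x[i] = {dayHourIndex, previousWorkload}. */
    public void fit(double[][] x, double[] y, int iterations) {
        int n = x.length;
        for (int it = 0; it < iterations; it++) {
            double[] grad = new double[w.length];
            for (int i = 0; i < n; i++) {
                double error = predictNormalised(x[i]) - y[i];
                grad[0] += error;               // bias term
                grad[1] += error * x[i][0];
                grad[2] += error * x[i][1];
            }
            for (int j = 0; j < w.length; j++) {
                double penalty = (j == 0) ? 0.0 : 2.0 * lambda * w[j]; // no penalty on the bias
                w[j] -= learningRate * (grad[j] / n + penalty);
            }
        }
    }

    /** Predicted workload size for already-normalised features. */
    public double predictNormalised(double[] features) {
        return w[0] + w[1] * features[0] + w[2] * features[1];
    }
}
```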
Layer Two: One Class Support Vector Machine (OCSVM)
The difference between anomaly detection and classification methods is that anomaly detection can be used on unlabeled data, taking only the internal structure of the dataset into account [34]. This paper implements a semi-supervised method, so labeled data are not required; at the same time, a semi-supervised method is less sensitive to outliers. The method presented here detects and mitigates DDoS attacks and relies on a one-class support vector machine (OCSVM) model. Its main objective is to reduce the cost of the false positive and false negative alarms that can be sent from layer one.
The OCSVM model transforms the input data into a high-dimensional feature space by applying a kernel and iteratively finds the maximal-margin hyperplane that best separates the training data from the origin. The OCSVM can be seen as a regular two-class SVM in which all the training examples lie in the normal class and the origin is taken as the only member of the abnormal class. A separate model is created for each resource utilization type, the Gaussian kernel is used in the OCSVM model, and the LibSVM library for Java is used to create the models.
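Since the paper states that the OCSVM models were built with the LibSVM Java library, a minimal sketch of how such a one-class model can be trained on per-VM resource-utilization samples is given below. The specific parameter values (nu, gamma) are illustrative assumptions, not the values used in the paper.

```java
import libsvm.*;

// One-class SVM sketch using the LibSVM Java API. Each training row is one
// resource-utilisation sample of a VM (e.g. {cpuUtil, memUtil}); training uses
// only "normal" behaviour, and prediction returns +1 for normal, -1 for anomalous.
public final class OcsvmDetector {

    private svm_model model;

    public void train(double[][] normalSamples) {
        svm_problem prob = new svm_problem();
        prob.l = normalSamples.length;
        prob.y = new double[prob.l];            // labels are ignored by ONE_CLASS but must exist
        prob.x = new svm_node[prob.l][];
        for (int i = 0; i < prob.l; i++) {
            prob.x[i] = toNodes(normalSamples[i]);
            prob.y[i] = +1;
        }
        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.ONE_CLASS;
        param.kernel_type = svm_parameter.RBF;  // Gaussian kernel, as in the paper
        param.gamma = 0.5;                      // assumed value
        param.nu = 0.05;                        // assumed expected outlier fraction
        param.cache_size = 100;
        param.eps = 1e-3;
        model = svm.svm_train(prob, param);
    }

    /** Returns true when the sample is flagged as a DDoS-like anomaly. */
    public boolean isAnomalous(double[] sample) {
        return svm.svm_predict(model, toNodes(sample)) < 0;
    }

    private static svm_node[] toNodes(double[] sample) {
        svm_node[] nodes = new svm_node[sample.length];
        for (int j = 0; j < sample.length; j++) {
            nodes[j] = new svm_node();
            nodes[j].index = j + 1;             // LibSVM uses 1-based feature indices
            nodes[j].value = sample[j];
        }
        return nodes;
    }
}
```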
WORKLOAD CLUSTERING
K-means is a well-known unsupervised learning algorithm. It allocates N data points to k distinct clusters, where the number of clusters k must be known a priori. This paper uses k-means to cluster the sizes of the requested workloads; the objective of the workload clustering is to simulate the real number of VM types requested by users. Different numbers of clusters were tested, from 1 to 10, and a heuristic approach called the Sum of Squared Distances (SSD) was applied to determine the best number of clusters. The SSD is the sum of the squared distances between points and their cluster centers [35]; low SSD values indicate that the clusters are coherent. Figure 2 shows the relation between the SSD and the number of clusters, from which it is clear that the best number of clusters is 4.
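The SSD (elbow) heuristic described above can be illustrated with the following plain-Java sketch for one-dimensional workload sizes. The toy workload values, the random initialization, and the fixed iteration count are illustrative assumptions rather than the paper's actual implementation.

```java
import java.util.Random;

// Sketch of choosing k by the SSD (elbow) heuristic for one-dimensional workload sizes,
// using a simple Lloyd-style k-means for each candidate k.
public final class WorkloadClustering {

    public static double kMeansSsd(double[] workloads, int k, long seed) {
        Random rnd = new Random(seed);
        double[] centers = new double[k];
        for (int c = 0; c < k; c++) {
            centers[c] = workloads[rnd.nextInt(workloads.length)];
        }
        int[] assignment = new int[workloads.length];
        for (int iter = 0; iter < 100; iter++) {
            // Assignment step: nearest centre for each point.
            for (int i = 0; i < workloads.length; i++) {
                int best = 0;
                for (int c = 1; c < k; c++) {
                    if (Math.abs(workloads[i] - centers[c]) < Math.abs(workloads[i] - centers[best])) {
                        best = c;
                    }
                }
                assignment[i] = best;
            }
            // Update step: move each centre to the mean of its assigned points.
            double[] sum = new double[k];
            int[] count = new int[k];
            for (int i = 0; i < workloads.length; i++) {
                sum[assignment[i]] += workloads[i];
                count[assignment[i]]++;
            }
            for (int c = 0; c < k; c++) {
                if (count[c] > 0) centers[c] = sum[c] / count[c];
            }
        }
        double ssd = 0.0;
        for (int i = 0; i < workloads.length; i++) {
            double d = workloads[i] - centers[assignment[i]];
            ssd += d * d;
        }
        return ssd;
    }

    public static void main(String[] args) {
        double[] workloads = {500, 520, 980, 1010, 1950, 2030, 2480, 2510}; // toy MIPS demands
        for (int k = 1; k <= 10 && k <= workloads.length; k++) {
            System.out.printf("k = %d, SSD = %.1f%n", k, kMeansSsd(workloads, k, 42));
        }
    }
}
```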
EXPERIMENTAL DESIGN
To evaluate the MLD scheme in a practical cloud scenario, the CloudSim simulation toolkit was used. CloudSim is the most popular simulation tool available for the cloud computing environment; it is an event-driven simulator built upon the core of the grid simulator GridSim, its base programming language is Java, and it is open source, so its modules are easy to extend. A set of experiments tests the MLD over different real workloads mixed with DDoS attacks, and the MLD is also tested under different DDoS attack ratios. The MLD focuses on resource-depletion DDoS attacks.
Experiment Setup
For the hardware, the experiments were run on a simulated data center with 800 heterogeneous physical nodes. The data center environment is heterogeneous, with two types of hosts: half of the hosts are HP ProLiant ML110 G4 servers with 1,860 MIPS per core, and the other half are HP ProLiant ML110 G5 servers with 2,660 MIPS per core. Each server has 2 cores, 4 GB of memory, and 1 GB/s of network bandwidth. The power consumption of the two active server types in the simulation is derived from the corresponding figures published by the Standard Performance Evaluation Corporation (SPEC) [36]. Table 1 summarizes the data center configuration.
According to the result of the workload clustering, the best number of clusters is 4, so four different virtual machine types are used, following the Amazon EC2 instance types [37]: High-CPU Instance (2500 MIPS, 0.87 GB), Extra Large Instance (2000 MIPS, 1.74 GB), Small Instance (1000 MIPS, 1.74 GB), and Micro Instance (500 MIPS, 613 MB). At the beginning of the simulation, VMs are placed on hosts according to the resource needs they define. Table 2 summarizes the virtual machine configurations.
For the software, a real-world workload represents the VM utilization. The MLD defense is tested with three different days extracted from the Google Cluster Data real workload. The GCD workload consists of resource utilization data from the Google Cluster Data (GCD) dataset covering a 29-day period in May 2011 [38]. The GCD workload includes 670,983 jobs, each with one or more tasks, for a total of 144,841,618 tasks, and it contains the normalized values of the average number of used cores and the utilized memory. To create the CPU and memory utilization of the VMs, the tasks of each job were aggregated by summing their CPU and memory consumption every five minutes over a period of 24 hours. The experiment workload was extracted from the GCD workload by selecting computing jobs (high priority and without missing values). Computing jobs have high resource utilization in the GCD compared with other jobs; for example, if a machine's resources are heavily used but 90% of the utilization is attributed to low-priority jobs, the machine is considered idle [39]. Table 3 summarizes the characteristics of the workload submitted on three days at different dates and hours.
Evaluation Metrics
The utilization of resources is measured every 5 minutes over 24 hours, which is the system lifetime. The proposed framework is evaluated by the following metrics. The power of a host depends on the maximum power of the host and its CPU utilization:

$$P_i(u) = k\, P_i^{max} + (1 - k)\, P_i^{max}\, u \qquad (2)$$

The energy consumption is obtained by accumulating the host power over time:

$$E_i = \int_{t_0}^{t_1} P_i\big(u_i(t)\big)\, dt \qquad (3)$$

where $P_i^{max}$ is the maximum power consumption of host $i$, $k$ is the fraction of power consumed when host $i$ is in the idle state, $u_i(t)$ is the CPU utilization of the host at time $t$, and $E_i$ is the energy consumed by host $i$ from start time $t_0$ to end time $t_1$. The SLA violation level is measured by two components [40]: the SLA violation Time per Active Host (SLATAH) and the Performance Degradation due to Migrations (PDM). SLATAH is the percentage of time during which active hosts experienced a CPU utilization of 100%:

$$SLATAH = \frac{1}{N} \sum_{i=1}^{N} \frac{T_{s_i}}{T_{a_i}} \qquad (4)$$

PDM is the overall performance degradation of VMs due to migrations:

$$PDM = \frac{1}{M} \sum_{j=1}^{M} \frac{C_{d_j}}{C_{r_j}} \qquad (5)$$

The reasoning behind SLATAH is that if a host serving requests experiences 100% utilization, the performance of the requests is bounded by the host capacity, so the VMs are not being provided with the required performance level. The overall SLA violation is defined as

$$SLAV = SLATAH \times PDM \qquad (6)$$

where $N$ is the number of hosts, $T_{s_i}$ is the total time during which host $i$ experienced a utilization of 100% leading to an SLA violation, $T_{a_i}$ is the total time host $i$ was in the active state, $M$ is the number of VMs, $C_{d_j}$ is the estimated performance degradation of VM $j$ caused by migration, and $C_{r_j}$ is the total CPU capacity requested by VM $j$ during its lifetime.
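A compact Java sketch of how these metrics could be computed from simulation traces is given below. It follows the definitions above; the linear host power model and the trace layout are simplifying assumptions for illustration, not the paper's code.

```java
// Sketch of the evaluation metrics, following the definitions above.
public final class CloudMetrics {

    /** Host power for CPU utilisation u in [0, 1]: k*Pmax + (1 - k)*Pmax*u. */
    public static double hostPower(double pMax, double idleFraction, double cpuUtil) {
        return idleFraction * pMax + (1.0 - idleFraction) * pMax * cpuUtil;
    }

    /** Energy of one host: sum of power over utilisation samples taken every intervalSec seconds. */
    public static double hostEnergyJoules(double pMax, double idleFraction,
                                          double[] cpuUtilSamples, double intervalSec) {
        double energy = 0.0;
        for (double u : cpuUtilSamples) {
            energy += hostPower(pMax, idleFraction, u) * intervalSec;
        }
        return energy;
    }

    /** SLATAH: mean, over hosts, of (time at 100% CPU) / (time in the active state). */
    public static double slatah(double[] timeAtFullUtilisation, double[] timeActive) {
        double sum = 0.0;
        for (int i = 0; i < timeAtFullUtilisation.length; i++) {
            sum += timeAtFullUtilisation[i] / timeActive[i];
        }
        return sum / timeAtFullUtilisation.length;
    }

    /** PDM: mean, over VMs, of (degradation caused by migration) / (total CPU requested). */
    public static double pdm(double[] migrationDegradation, double[] totalCpuRequested) {
        double sum = 0.0;
        for (int j = 0; j < migrationDegradation.length; j++) {
            sum += migrationDegradation[j] / totalCpuRequested[j];
        }
        return sum / migrationDegradation.length;
    }

    /** Combined SLA violation metric: SLAV = SLATAH * PDM. */
    public static double slav(double slatah, double pdm) {
        return slatah * pdm;
    }
}
```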
RESULTS & ANALYSIS
In this section, the results of our experiments are discussed. The experiments are divided into three classes. In the first class, the workload prediction method of the first layer of the MLD is evaluated and discussed, and the one-class support vector machine model in the second layer is analyzed. In the second class, the performance of the MLD scheme is evaluated with the different real workloads listed in Table 4, mixed with resource-depletion DDoS attacks at an attack ratio of 10% of the real workload. In the third class, the MLD is tested under various attack ratios: 10%, 20%, and 50% of the real workload. In all experiments, a static threshold is used to detect over-utilized hosts; the static thresholds for CPU and memory are 0.8290 and 0.7651, respectively. All experiments also use the Random Selection (RS) VM selection policy proposed in [39], which was chosen to avoid the overhead that other VM selection policies can cause.
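To make this configuration concrete, the following sketch shows a static over-utilization check using the stated CPU and memory thresholds, together with a Random Selection policy for choosing a VM to migrate. SimpleHost and SimpleVm are hypothetical stand-ins for the corresponding CloudSim abstractions.

```java
import java.util.List;
import java.util.Random;

// Sketch of the static over-utilisation check (CPU > 0.8290 or memory > 0.7651)
// and the Random Selection (RS) VM selection policy.
public final class OverUtilisationPolicy {

    static final double CPU_THRESHOLD = 0.8290;
    static final double MEM_THRESHOLD = 0.7651;
    private final Random random = new Random();

    public boolean isOverUtilised(SimpleHost host) {
        return host.cpuUtilisation() > CPU_THRESHOLD || host.memUtilisation() > MEM_THRESHOLD;
    }

    /** Random Selection policy: pick any VM on the over-utilised host for migration. */
    public SimpleVm selectVmToMigrate(SimpleHost host) {
        List<SimpleVm> vms = host.vmList();
        return vms.isEmpty() ? null : vms.get(random.nextInt(vms.size()));
    }
}

interface SimpleHost {
    double cpuUtilisation();
    double memUtilisation();
    List<SimpleVm> vmList();
}

interface SimpleVm { }
```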
MLD layers Evaluation
Both MLD layers are evaluated. For the ridge regression model, the metrics are the prediction accuracy (R2), the root mean square error (RMSE), and the Percentage of Predictions within 25% (PRED(25)); the best values for R2 and PRED(25) are close to one and the worst values are close to zero, while the best value for RMSE is 0. For the second layer, which contains the one-class support vector machine model, the metrics are precision (P), recall (R), F-score, and accuracy.
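For reference, the regression metrics named above can be computed as in the short sketch below; the 25% tolerance in PRED(25) follows the usual definition and is assumed to be what the paper means.

```java
// Sketch of the layer-one regression evaluation metrics: RMSE and PRED(25),
// the fraction of predictions whose relative error is at most 25%.
public final class PredictionMetrics {

    public static double rmse(double[] actual, double[] predicted) {
        double sumSquares = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double diff = predicted[i] - actual[i];
            sumSquares += diff * diff;
        }
        return Math.sqrt(sumSquares / actual.length);
    }

    /** PRED(25): share of predictions within 25% of the real value. */
    public static double pred25(double[] actual, double[] predicted) {
        int within = 0;
        for (int i = 0; i < actual.length; i++) {
            double relativeError = Math.abs(predicted[i] - actual[i]) / Math.abs(actual[i]);
            if (relativeError <= 0.25) within++;
        }
        return (double) within / actual.length;
    }
}
```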
MLD layer one: Dynamic Threshold for Workload Size
Figure 3 shows the performance of the proposed prediction model over the day. The method was tested on the Google Cluster Data (GCD) published in [38]. In Figure 3, the predicted and real values are similar most of the time, and at the end of the curves the penalty term succeeds in avoiding overfitting, which is the advantage of ridge regression. The Google cluster is used in different parts of the world, and the volume of jobs requested from Google varies with people's behavior and culture, so increases or decreases in requests can happen suddenly; this explains why the largest differences between the predicted and real values occur when the requests change abruptly. The one-class support vector machine model in the second layer is able to handle this problem. Table 4 gives the numerical values of the evaluation metrics. Based on the previously requested workload size and its time, the method predicts the size of the workload requested in the current hour. If the dynamic threshold is lower than the actually submitted workload, the cloud system is assumed to be suffering from a DDoS attack; when DDoS attacks start, we recommend increasing the time jobs spend in the scheduling stage to avoid job failures there. The next section discusses how the DDoS attacks are detected.
MLD Performance under DDoS Attack
This section compares cloud computing performance with and without the MLD scheme. The MLD is tested with three different real workloads mixed with DDoS attacks, called D1, D2, and D3, and is evaluated according to the number of VM migrations, the energy consumption, the overall SLA violation, and the number of hot hosts.
The Number of Hot Hosts
When the number of VM migrations increases, the number of hot hosts increases as well. The main reason is that the VMs involved in the DDoS attack look for free capacity to host them; if there is no free capacity, an inactive host is woken up, so the number of hot hosts grows. Figures 5a, 5b, and 5c demonstrate the effect of the proposed MLD scheme on the number of hot hosts for the various datasets. It is clear that the MLD succeeds in reducing the number of hot hosts, and that the behavior of the hot hosts under the MLD is more stable than without it. In Figure 5, the number of hot hosts increases in the last hour, which suggests that the cloud system is heading toward failure in the near future. On average, the MLD reduced the number of hot hosts by 40.9%, 24%, and 24% for D1, D2, and D3, respectively.
Energy Consumption
Both the number of VM migrations and the number of hot hosts have a strong impact on energy performance. According to Equations 2 and 3, the rate of change of power drives energy consumption: a small number of VM migrations keeps the rate of change of power low, so energy consumption stays low, and a small number of hot hosts likewise reduces the amount of energy consumed. Figures 6a, 6b, and 6c show the effect of the proposed MLD scheme on the energy consumption for the various datasets. Figure 6 shows that dataset D2 has the highest energy consumption because it has the highest number of VM migrations and the highest average number of hot hosts. The MLD succeeds in reducing energy consumption by 38.6%, 29.1%, and 29% for D1, D2, and D3, respectively.
DDoS Attack Ratio
In this section, the stability and reactivity of the MLD are evaluated under various DDoS attack ratios: 10%, 20%, and 50%. Figures 8a, 8b, and 8c show the effect of the different attack ratios on the number of VM migrations, the energy consumption, and the overall SLA violation for dataset D1. Figure 8 demonstrates that the MLD is able to maintain good performance under the various attack ratios. For the number of VM migrations, it is clear that without the MLD scheme the number of migrations increases with the attack ratio at 10% and 20%, while at the 50% attack ratio the number of VM migrations decreases. For energy consumption, the MLD succeeds in reducing energy consumption at all attack ratios. The energy consumption of the cloud system without the MLD increases with the attack ratio, whereas with the MLD it decreases as the attack ratio grows: the MLD detects and removes the DDoS attack VMs, so only the benign VMs keep running, and a higher attack ratio means fewer benign VMs, which is why energy consumption with the MLD becomes lower at high attack ratios. For the overall SLA violation, the MLD remains stable across the different attack ratios and reduces the SLA violation at all of them. Without the MLD, the overall SLA violation grows with the attack ratio because the number of attacking machines exceeds the number of benign machines. The MLD scheme detects and removes the DDoS attacks early for all attack ratios, so the hosts reach lower utilization early.
CONCLUSION AND FUTURE WORK
This paper discussed the DDoS attack detection and mitigation problem, focusing on the resource-depletion category of DDoS attacks. The MLD scheme was proposed to detect and mitigate these attacks. The MLD consists of two layers. The first layer contains an alarm system whose main objective is to send alerts to cloud system management when a DDoS attack starts. The alarm system is based on predicting the size of the requested workload; the predicted workload is used as a dynamic threshold and compared with the workload actually requested at the current time, and depending on the result of this comparison the alarm system does or does not send its alerts. When a DDoS attack starts, we recommend extending the duration of the scheduling stage to avoid VM failures there, since the size of the requested jobs is larger than normal. The second layer includes an anomaly detection system based on a one-class SVM, whose main benefits are robustness, low sensitivity to outliers, and the fact that no labeled training data are required. The MLD was tested on a variety of real workloads of different sizes mixed with DDoS attacks. The number of VM migrations, the number of hot hosts, the energy consumption, and the overall SLA violation were used to evaluate the performance of cloud computing under the MLD scheme. The MLD provides substantial help in detecting and mitigating DDoS attacks: the results show that it reduces the number of VM migrations, the number of hot hosts, the energy consumption, and the overall SLA violation for all the real workloads used in the evaluation. In addition, the MLD was tested with DDoS attack ratios of 10%, 20%, and 50% and showed good stability and reactivity for all of them. In future work, we aim to add a new layer containing a prevention method, which would elevate the MLD to a complete defense system, and to test the proposed scheme in a real cloud system.
Figure 1. The MLD architecture. In the first stage, the prediction model receives the input parameters, namely the date and time of the previous workload and the size of the requested workload; the comparison between the workload actually requested by users and the predicted workload determines whether alarms are sent to cloud management. In the second stage, detection and mitigation are run: a VM whose resource utilization keeps increasing is detected and the mitigation process handles it.
Figure 2. The relation between the SSD and the number of clusters.
The evaluation metrics are: 1) the number of hot hosts during the 24 hours; 2) the energy consumption of all hosts during the simulation time; 3) the number of VM migrations; 4) the overall SLA violation.
Figure 3. The performance of the proposed prediction model.
The best values for P, R, F-score, and accuracy are close to 1, and the worst values are close to zero. The P-value (precision) is the ratio of true positives to the sum of true positives and false positives; when false positives are minimized, precision is high, which indicates a good OCSVM model. The R-value (recall) is the ratio of true positives to the sum of true positives and false negatives; when false negatives are minimized, recall is high. According to the P and R values, the proposed OCSVM model succeeds in reducing false positive and false negative alarms. The F-score is a weighted average of precision and recall, and accuracy measures how similar the real and predicted values are.
Figure 4. The performance of the MLD for the number of VM migrations with various datasets: (a) dataset D1; (b) dataset D2; (c) dataset D3.
Figure 5. The performance of the MLD for the number of hot hosts with various datasets: (a) dataset D1; (b) dataset D2; (c) dataset D3.
Figure 7. The performance of the MLD for the overall SLA violation with various datasets: (a) dataset D1; (b) dataset D2; (c) dataset D3.
Figure 8. The performance of the MLD under several attack ratios for dataset D1: (a) the number of VM migrations; (b) energy consumption; (c) overall SLA violation.
The first prediction model in [19] uses a time series approach to analyze monitoring data. The authors compared a set of time series methods, including Moving Average (MA), Auto-Regression (AR), ARIMA, a Difference Model (DM), and a Median Model (MM). The time series approach can analyze and predict CPU utilization from historical data, but it sometimes misses the real data and has low prediction accuracy. A Kalman filter model is proposed as the second prediction model to forecast the cloud workload; it can estimate the true data from the observed data and works in a two-step process consisting of a prediction step and an update step. The third prediction model is pattern matching, which matches the current sequence against historical patterns; it is based on a string matching algorithm and the Euclidean distance and includes two steps, preprocessing and matching. All prediction models were evaluated by the Mean Absolute Percentage Error (MAPE), and the results showed that the Kalman filter has the lowest MAPE of all the prediction models.
Table 1. Data center configuration.
Table 4. The evaluation metrics of the proposed prediction model.
Table 5 shows the results of the evaluation metrics for the CPU and memory OCSVM models. In Table 5, the results of the CPU model and the memory model are very similar.
Table 5. The evaluation performance of the CPU and memory OCSVM models.
The Number of VM Migrations

According to Table 5, the three datasets have different sizes of requested workload; the largest dataset is D1 and the smallest is D2. Figures 4a, 4b, and 4c display the effect of the proposed MLD scheme on the number of VM migrations for the various datasets. Figure 4 shows that the number of VM migrations produced on the smallest dataset, D2, is higher than on the other datasets. The main explanation is that the cloud environment then has many empty slots able to receive the VMs being migrated; in addition, the majority of VMs in D2 are micro VMs, which migrate easily. The MLD has a strong effect in detecting and mitigating the DDoS attacks: it reduced the number of VM migrations by 76.3%, 86.7%, and 84.9% for D1, D2, and D3, respectively. The huge number of VM migrations arises because DDoS attack VMs consume more resources than they have, so VMs migrate from host to host looking for the required resources. When the MLD succeeds in detecting and removing the DDoS attacks, the VMs causing them are removed and the number of VM migrations decreases.
6.2.4. Overall SLA Violation

Two main parameters affect the SLA violation: the degradation due to VM migration and the time during which hosts are fully utilized, according to Equation 6. The degradation due to VM migration is determined by the memory of the migrated VMs, the network bandwidth, and the number of VM migrations; since network bandwidth in cloud computing has become highly developed, the dominant factors are the memory of the migrated VMs and the number of VM migrations. The number of VM migrations covers both VMs migrated from a host to reduce its over-utilization and VMs migrated from a host so that it can become inactive. Figures 7a, 7b, and 7c show the effect of the proposed MLD scheme on the overall SLA violation for the various datasets. According to Table 5, dataset D1 is larger than datasets D2 and D3. The overall SLA violation of D1 is the largest because the number of extra-large and high-CPU VMs in D1 is higher than in D2 and D3, and these VM types have more memory than micro and small VMs; therefore, dataset D1 has a higher SLA violation than the others. D2 and D3 have similar SLA violations because they have a similar total number of extra-large and high-CPU VMs. Although dataset D3 has a huge number of micro VMs, its SLA violation is lower than that of D1, so a large number of micro VMs has little effect on the degradation due to VM migration and on the overall SLA violation. The MLD succeeds in reducing the overall SLA violation by 99.73%, 98.8%, and 97.6% for D1, D2, and D3, respectively. | 2019-09-16T23:08:29.275Z | 2019-11-30T00:00:00.000 | {
"year": 2019,
"sha1": "997e66d98a3f99d25004249970097da36190f26a",
"oa_license": null,
"oa_url": "https://doi.org/10.5121/ijcnc.2019.11601",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "997e66d98a3f99d25004249970097da36190f26a",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
28733028 | pes2o/s2orc | v3-fos-license | Myricetin: a potent approach for the treatment of type 2 diabetes as a natural class B GPCR agonist
The physiologic properties of glucagon‐like peptide 1 (GLP‐1) make it a potent candidate drug target in the treatment of type 2 diabetes mellitus (T2DM). GLP‐1 is capable of regulating the blood glucose level by insulin secretion after administration of oral glucose. The advantages of GLP‐1 for the avoidance of hypoglycemia and the control of body weight are attractive despite its poor stability. The clinical efficacies of long‐acting GLP‐1 derivatives strongly support discovery pursuits aimed at identifying and developing orally active, small‐molecule GLP‐1 receptor (GLP‐1R) agonists. The purpose of this study was to identify and characterize a novel oral agonist of GLP‐1R (i.e., myricetin). The insulinotropic characterization of myricetin was performed in isolated islets and in Wistar rats. Long‐term oral administration of myricetin demonstrated glucoregulatory activity. The data in this study suggest that myricetin might be a potential drug candidate for the treatment of T2DM as a GLP‐1R agonist. Further structural modifications on myricetin might improve its pharmacology and pharmacokinetics.—Li, Y., Zheng, X., Yi, X., Liu, C., Kong, D., Zhang, J., Gong, M. Myricetin: a potent approach for the treatment of type 2 diabetes as a natural class B GPCR agonist. FASEB J. 31, 2603–2611 (2017). www.fasebj.org
Type 2 diabetes mellitus (T2DM) remains a serious global health threat, and its prevalence is increasing at an epidemic rate (1). The evidence suggests that the incidence of T2DM is increasing in children and adolescents (2)(3)(4)(5). T2DM can lead to metabolic syndrome, which is a cluster of metabolic abnormalities that includes glucose intolerance, hypertension, hyperlipidemia, and a noninfective inflammatory state (6)(7)(8). Additionally, the metabolic syndrome induced by T2DM can increase the risk of cardiovascular disease (CVD), which is the primary cause of premature death in patients with type 2 diabetes (9)(10)(11). The development of novel antidiabetic medicines has not ceased in the decades since the discovery of the utility of insulin. Developments in molecular biology and cell biology have contributed to novel drugs that act on updated targets or mechanisms of T2DM, such as glucagon-like peptide 1 (GLP-1).
GLP-1 was discovered in 1990 and is gut hormone that is released from intestinal L cells after oral glucose administration (12). The function of GLP-1 is to stimulate the secretion of insulin, which balances abnormal blood glucose levels (13)(14)(15)(16)(17). Compared with the direct administration of insulin, GLP-1 is an intelligent approach to achieving blood glucose control in a blood glucose level-dependent manner because GLP-1 stimulates the secretion of insulin based on increased glucose levels. In healthy conditions, the function of GLP-1 is halted to prevent the risk of hypoglycemia, which is a main side effect of the use of insulin by patients with type 2 diabetes (18,19). GLP-1 has been demonstrated to regulate blood glucose concentrations by mechanisms that include enhancing insulin synthesis/secretion, suppressing glucagon secretion, slowing gastric emptying, and enhancing satiety (20). GLP-1 is also capable of inhibiting the apoptosis of b-cells, which suggests that GLP-1 might recover the b-cell functions that are impaired in patients with T2DM (21). The distinct clinical utility of GLP-1 makes it a potent therapeutic strategy for T2DM. However, the poor stability of GLP-1 (3-5 min) has significantly limited its clinical utility because of the rapid degradation, which is catalyzed by the enzyme dipeptidyl peptidase IV (DPP-IV) (22). The extremely poor stability renders the therapeutic administration of GLP-1 impractical; therefore, many efforts have focused on altering the pharmacokinetic properties of GLP-1 and identifying a synthetic GLP-1 receptor (GLP-1R) agonist.
Myricetin is a flavonoid extracted from the leaves of Myrica rubra (Lour.) Sieb. et Zucc (23). Accumulating evidence suggests that myricetin possesses antidiabetic properties that are mediated via regulation of the transport of glucose through the function of glucose transporter-2 in Xenopus laevis oocytes (24). It has also been hypothesized that the glucose-regulating effects of myricetin are partially attributable to associations with glucose transporter-4 (25,26). In addition to the associations with glucose transporters, myricetin might exhibit effects on other glucose regulation mechanisms (27). Myricetin has been observed to increase the activity of glycogen synthase 1 in the hepatocytes of rats with diabetes (28,29). Recent evidence suggests that myricetin administration might be beneficial for increasing insulin sensitivity and inhibiting islet b-cell apoptosis (30). However, the effects of myricetin on glucose regulation in animal models of type 2 diabetes are not fully understood.
Myricetin has been hypothesized to be a natural GLP-1R agonist in this study because the physiologic characteristics of myricetin described in the literature are similar to those of GLP-1, including the inhibition of β-cell apoptosis, glucoregulation, and the prevention of hypoglycemia. In the present study, the effects of myricetin on GLP-1R and the role of myricetin in glucose clearance were investigated in vitro and in vivo.
Materials
The cAMP kits were purchased from CisBio Bioassays (Bedford, MA, USA). The rat INS-1 cell line was obtained from Ying Li (Tianjin Medical University General Hospital). The rat insulin and leptin detection kits were purchased from Phoenix Technology, Inc. (Beaverton, OR, USA). A one-touch blood glucose meter and filters were purchased from Abbott (Shanghai, China). Myricetin (HPLC purity >98.0%) was obtained from Sigma-Aldrich (6760; St. Louis, MO, USA). The other chemicals were purchased from Sigma-Aldrich unless otherwise specified.
Animals
Kunming mice, male ZDF (fa/fa) rats, lean male ZDF rats, and male Wistar rats were purchased from Shanghai Laboratory Animal Co. (China Academy of Sciences, Shanghai, China). Male GLP-1R-knockout (GLP-1RKO) mice were derived from heterozygous mating pairs (AppTech.12630.TJIPR; WuXi, Shanghai, China). These GLP-1R-deficient mice are a knockout model in which the transgenic construct contains a PGK-neo cassette replacing 2 coding exons of the GLP1R gene in the same transcription orientation, along with 4.8 and 3.5 kb of the GLP1R sequence 5′ and 3′ to the PGK-neo sequence. The loss of these exons equates to absence of the first and third transmembrane domains and the intervening sequence of the GLP-1R.
Ethics statement
All animal studies were performed in accordance with the approved guidelines of the Animal Experiments Inspectorate of China. All experimental protocols were approved by the Tianjin Institute of Pharmaceutical Research committee (TJIPR0267-003-02).
Receptor binding assay
Binding affinity measurements were carried out in 96-well microtiter plates in buffer containing 25 mM Hepes/0.1% bovine serum albumin (pH 7.4). Myricetin was dissolved in buffer. ¹²⁵I-GLP-1(7-37) (80 kBq/pmol) (APP TEC; WuXi) was dissolved in buffer and added at 50,000 cpm per well. Nonspecific binding was determined with 1 µM GLP-1 solution. Buffer (165 µl) with or without GLP-1 was added to each well, followed by 10 µl myricetin/25 µl plasma membrane/25 µl tracer. The plates were then incubated at 37°C for 1 h. The bound tracer and the unbound tracer were separated by vacuum filtration (Millipore vacuum manifold; Millipore, Billerica, MA, USA). Prism software (GraphPad, La Jolla, CA, USA) was used for all curve fittings. The binding curves were fitted as 1-site competition, and the saturation data were fitted as 2-site binding.
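The curve fitting above was performed in GraphPad Prism. For readers who want to reproduce a one-site competition fit with open tools, the short Python sketch below shows one way to do it; the model form is the standard log-competition equation, and the concentrations and counts are invented for illustration only, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_competition(log_conc, top, bottom, log_ic50):
    # Bound tracer falls from `top` to `bottom` as competitor concentration rises.
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_conc - log_ic50))

# Illustrative data: log10 of competitor concentration (M) and bound tracer (cpm).
log_conc = np.log10([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6, 1e-5])
bound_cpm = np.array([48500, 47800, 45200, 40100, 31000, 20500, 12200, 7800, 6100])

params, _ = curve_fit(one_site_competition, log_conc, bound_cpm,
                      p0=[bound_cpm.max(), bound_cpm.min(), -6.5])
top, bottom, log_ic50 = params
print(f"Fitted IC50 ~ {10.0 ** log_ic50 * 1e9:.0f} nM")
```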
cAMP stimulation assay
For the cAMP assay, INS-1 cells (1.0 × 10⁵ cells; passage number, 2) were seeded and plated in 96-well opaque white plates. After 24 h, the medium was replaced with RPMI 1640 medium containing 500 µM 3-isobutyl-1-methylxanthine (an inhibitor of cAMP phosphodiesterase). Subsequently, myricetin was titrated into the incubations to a final concentration of 3 µM. The assay plate was titrated with 2.5 µl/well cAMP and an equal volume of anti-cAMP conjugate after 1 h of incubation. The homogeneous time-resolved fluorescence signal was read on a SpectraMax M5 (Molecular Devices, Sunnyvale, CA, USA) microplate reader after 60 min. The ratio of the absorbances at 665 and 620 nm (× 10,000) was calculated and plotted.
The insulin secretion was examined by performing a static incubation assay in EBSS culture (Thermo Fisher Scientific). The islets were incubated in Earle's balanced salt solution containing 3.3 mM glucose at 37°C for 30 min. Three size-matched islets were then selected and incubated in 0.3 ml of Earle's balanced salt solution containing the indicated amount of glucose (normal blood glucose condition of 3.3 mM or high blood glucose condition of 16.7 mM) in the presence of 3 µM myricetin at 37°C for 90 min. The supernatants were collected and assayed for secreted insulin using a rat insulin RIA kit (Meso Scale Diagnostics, Rockville, MD, USA).
Insulin secretion assay in Wistar rats
The insulin secretion assays were performed by administering the compounds to male Wistar rats (n = 6 per group; ~300 g). GLP-1 (100 mg/kg body weight) was subcutaneously injected, and myricetin (250 mg/kg body weight) was administered orally. Glucose (2 g/kg body weight, a standard dose for glucose administration) was administered intraperitoneally 30 min after the peptide injection. Blood samples were collected via tail vein incisions at the indicated times after glucose administration. The blood samples were assayed for insulin levels using a rat insulin RIA kit.
Glucose tolerance test
To clarify whether myricetin possessed glucose-regulating activity, single-dose glucoregulatory assays were performed. In these assays, myricetin was administered once, and the blood glucose levels were monitored over 48 h. We presumed that the apparent half-life of myricetin could be estimated with this single-dose glucose tolerance test (GTT), and this information would be beneficial for determining the appropriate administration frequency for future long-term GTTs. After a period of food withdrawal, myricetin (250 mg/kg body weight) was administered orally to male Wistar rats (n = 7 per group; ~300 g) 30 min prior to glucose administration. GLP-1 and liraglutide were injected into the control animals. The rats were given 2 g glucose per kilogram body weight via intraperitoneal injection. Blood was drawn from the tail vein, and the glucose levels were measured using a glucometer 30 min after glucose administration. Repeated glucose injections (2 g/kg body weight) were administered 30 min prior to each blood glucose measurement time point during the 48-h experimental period. After the glucose clearance activity of myricetin had been observed, the dosage-effect relationship of myricetin was investigated after 12 h of treatment with dosages of 5, 50, 250, and 500 mg/kg.
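As a worked illustration of how such a single-dose GTT time course can be summarized numerically, the sketch below computes the area under the blood-glucose curve and the time spent below an arbitrary glucose threshold. All values, including the 9 mmol/L threshold, are assumptions for demonstration and are not taken from the study.

```python
import numpy as np

# Invented 48-h blood glucose time course (mmol/L) after a single oral dose.
times_h = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24, 36, 48])
glucose = np.array([12.1, 9.0, 8.2, 7.8, 7.6, 8.4, 10.5, 11.8, 12.0, 12.2])

# Trapezoidal area under the curve, a common summary metric for a GTT.
dt = np.diff(times_h)
auc = float(np.sum((glucose[1:] + glucose[:-1]) / 2.0 * dt))   # mmol/L * h

# Rough estimate of the glucoregulatory window: hours spent below the threshold.
below = (glucose < 9.0).astype(float)
window_h = float(np.sum((below[1:] + below[:-1]) / 2.0 * dt))

print(f"AUC(0-48 h) = {auc:.0f} mmol/L*h; time below 9 mmol/L ~ {window_h:.1f} h")
```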
HbA1c and body weight measurement
Based on the glucoregulatory properties exhibited by myricetin, the long-term glucose tolerance of ZDF rats was investigated to determine the antidiabetic activity of the treatments. The HbA1c levels were assessed using a DCA 200 analyzer (Bayer Diagnostics, Tarrytown, NY, USA), and the body weights and glucose levels were monitored during the experimental period. Male ZDF rats (n = 10 per group) were treated with myricetin (250 mg/kg/12 h) for the entire experimental period (40 d). The control groups were injected with wild-type GLP-1 (100 mg/kg/12 h) and liraglutide (300 mg/kg/24 h).
Myricetin measurements in GLP-1RKO mice
We further characterized the effect of myricetin in GLP-1RKO mice to provide direct evidence supporting myricetin as an agonist of GLP-1R. A 24-h GTT was performed in GLP-1RKO mice (n = 7 per group; ~26 g) upon the administration of myricetin. In this experiment, myricetin was administered either orally or by intravenous injection. Glucose was given at a dosage of 2 g/kg body weight 30 min before the administration of myricetin. Blood glucose levels were monitored throughout the 48-h experiment.
Real-time PCR
RNA was isolated from the harvested INS-1 cells treated with myricetin (10 nM, 100 nM, 500 nM, 1 µM, and 3 µM) for 24 h using the RNeasy Mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. cDNA was produced from the extracted RNA using a cDNA synthesis kit (Fermentas, Waltham, MA, USA). Each forward and reverse primer (10 pmol, 0.2 µl) and 2 µl cDNA were added to the PCR reaction mix (final volume, 25 µl) containing 12.5 µl of SYBR Green Master Mix (Fermentas) and 10.1 µl DNase-free water. The primer sequences of Glut-2 were as follows: forward, CAGCTGTCTTGTGCTCTGCTTGT; reverse, GCCGTCATGCTCACATAACTCA. PCR amplification was performed over 40 cycles using the following program: 95°C for 10 min, then 95°C for 15 s, 5°C for 30 s, and 60°C for 34 s. Data were analyzed using the 2^(-ΔΔCt) method. Expression values were corrected for the housekeeping genes β-actin and glyceraldehyde-3-phosphate dehydrogenase (Gapdh). The β-actin gene produced similar results to those obtained with Gapdh.
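The 2^(-ΔΔCt) calculation referred to above is simple enough to show as a worked example. The sketch below uses invented Ct values (Glut-2 versus Gapdh, treated versus untreated cells), chosen only so the arithmetic is easy to follow; they are not the study's measurements.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak and Schmittgen)."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                    # treated relative to control
    return 2.0 ** (-ddct)

# Invented Ct values: Glut-2 vs. Gapdh in treated and untreated INS-1 cells.
fc = fold_change_ddct(ct_target_treated=24.1, ct_ref_treated=17.9,
                      ct_target_control=26.6, ct_ref_control=18.1)
print(f"Glut-2 fold change ~ {fc:.1f}x")   # ~4.9x with these example numbers
```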
Statistical analyses
Student's t tests were used to analyze the data. Unless otherwise stated, the results are reported as the means ± SE. Values of P < 0.05 were considered significant.
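For completeness, a minimal sketch of the kind of comparison described (an unpaired Student's t test with results reported as mean ± SE) is given below; the group values are invented for demonstration.

```python
import numpy as np
from scipy import stats

# Invented blood glucose values (mmol/L) for two groups of rats.
vehicle = np.array([12.3, 11.8, 12.9, 13.1, 12.0, 12.6, 11.9])
treated = np.array([8.1, 7.9, 8.6, 7.4, 8.8, 8.0, 7.7])

t_stat, p_value = stats.ttest_ind(vehicle, treated)

def mean_se(x):
    # Mean and standard error of the mean.
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

for name, grp in (("vehicle", vehicle), ("treated", treated)):
    m, se = mean_se(grp)
    print(f"{name}: {m:.2f} +/- {se:.2f}")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```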
Myricetin is a small chemical agonist for GLP-1R
To assess myricetin as a potential GLP-1R agonist, a receptor binding assay against GLP-1R was performed first. In radioligand-binding experiments, myricetin bound to the receptor in a concentration-dependent manner, and the EC50 was calculated to be 465.75 ± 33.24 nM (Fig. 1). GLP-1 is capable of stimulating the secretion of insulin after its binding to GLP-1R in the islet. Accordingly, GLP-1R agonists are presumed to act as insulin secretagogues in the presence of high glucose levels. To examine this property of myricetin (structure in Fig. 1), pancreatic islets isolated from Wistar rats were incubated in either normal-glucose (3.3 mM) or high-glucose (16.7 mM) medium. Neither GLP-1 nor myricetin stimulated the secretion of insulin into the medium after 90 min of incubation at the normal glucose level (3.3 mM) (Fig. 2A). However, in this condition, the incubation of islets with the sulfonylurea glibenclamide (5 mM) led to a significant increase in the insulin concentration in the incubation mixture.
As an agonist of GLP-1R in islets, GLP-1 exhibited the expected insulinotropic properties at the high glucose concentration of 16.7 mM. As predicted, the treatment of the islets with myricetin in the high-glucose condition caused a 3-fold increase in the insulin level (Fig. 2B). The ability of myricetin to cause cAMP accumulation in the INS-1 cells (Fig. 3A), in combination with its glucose-dependent insulinotropic activity, supports the conclusion that myricetin is a GLP-1R agonist.
Because glucose-dependent insulin secretion is considered an important characteristic of a GLP-1R agonist (31), whether a novel agonist exhibits this property is a crucial indicator in evaluating candidate GLP-1R agonists or GLP-1 derivatives. To explore the in vivo insulinotropic effects of myricetin, glucose-stimulated insulin secretion was measured in Wistar rats undergoing a GTT. In this study, the insulinotropic properties of myricetin were ascertained in Wistar rats, and GLP-1 (100 mg/kg body weight) was used as a control (Fig. 3B). In the rats treated with wild-type GLP-1 only, glucose administration dramatically increased the insulin level to 745.30 ± 42.48 pM at 5 min, and this level returned to baseline (210.45 ± 31.94 pM) by 30 min. Myricetin, which possesses physiologic insulinotropic characteristics, exhibited similar temporal features (Fig. 3B). The insulin secretion response induced by myricetin observed in this assay was consistent with the effects demonstrated by peptide GLP-1R agonists, which supports the results obtained from the islet assays demonstrating that myricetin is a glucose-dependent insulin secretagogue.
Blood glucoregulatory properties of myricetin
To examine the blood glucose clearance activity of myricetin, GTTs were performed over 48 h following single-dose administrations of myricetin, GLP-1, or liraglutide to male Wistar rats (n = 6 per group). The peptides were injected subcutaneously, myricetin was orally administered 1 h before the first measuring point, and glucose was administered 30 min before the first measuring point. The glucose concentrations in the rats treated with GLP-1 remained at a plateau of approximately 12 mM throughout the 48-h experimental period (Fig. 4). Presumably, the GLP-1 had been degraded before the first measuring point (1 h after GLP-1 administration). However, the rats given myricetin exhibited better glucose tolerance in this single-dose experiment than those treated with GLP-1. Treatment with myricetin resulted in blood glucose levels that were maintained at 7.5-8.5 mM over 8 h, and these results were similar to those after liraglutide administration. Additionally, myricetin exhibited a similar glucoregulatory duration (0-8 h), which suggests that myricetin might possess a half-life similar to that of liraglutide (11.3 h). In a combined dose-efficacy assay, it was suggested that the administration frequency of myricetin could be reduced to two oral doses per day at a dosage of 250 mg/kg body weight (Fig. 4B).
Long-term treatment with myricetin
To evaluate the physiologic effects of long-term treatment with myricetin, the glucose levels, HbA1c levels, and body weights of ZDF rats were monitored over 40 d (Fig. 5). In this trial, the blood glucose levels and body weights were monitored every 3 d. After 40 d of treatment with myricetin, the HbA1c levels decreased by 0.97 ± 0.01% from 9.1 ± 0.12% (blank group), whereas treatment with GLP-1 alone failed to reduce the HbA1c levels owing to its rapid proteolysis. Liraglutide induced a 1.01 ± 0.02% decrease, which agrees with reports in the literature. Combined with the changes in the glucose levels and the body weight indices, these results clearly indicate that myricetin exhibits efficient antidiabetic effects while possessing the unique advantages of oral administration and a natural origin.
Gene expression of Glut-2 upon the myricetin treatment in INS-1 cells
To determine whether myricetin induces changes in gene expression in pancreatic cells (rat INS-1 cells), Glut-2 expression was assessed using real-time PCR. Low expression of Glut-2 was detected in untreated cells (Fig. 6). However, expression of Glut-2 in the myricetin-treated cells increased nearly 4.9-fold (P < 0.01).
(Figure caption: stimulation of insulin secretion by native GLP-1 and myricetin in Wistar rats after oral glucose administration. GLP-1 (100 mg/kg body weight) and myricetin (250 mg/kg body weight) were given, glucose was administered intraperitoneally (2 g/kg), and the insulin concentration was measured using a rat insulin detection kit.)
DISCUSSION
Class B GPCRs are activating receptors for many endocrine peptide hormones (32,33), including glucose-dependent insulinotropic polypeptide, glucagon, parathyroid hormone, vasoactive intestinal peptide, secretin, corticotropin-releasing factor, calcitonin, and GLP-1. The endogenous ligands of class B GPCRs are typically 30-40 amino acids in size, which limits their clinical applications owing to poor stability and the requirement for injection (32), as is the case for the GLP-1 peptide. Unfortunately, traditional small chemical molecules only minimally modulate the activities of class B GPCRs because of the unique structural architectures and activation mechanisms used by these GPCRs (33). Although many efforts have been made to screen for small-molecule class B GPCR agonists, much difficulty has been encountered in identifying small organic molecules that are recognized by class B GPCRs. Recently, substantial progress was made in the generation of small-molecule agonists of GLP-1R, such as substituted quinoxalines, pyrimidine derivatives, and a cyclobutane derivative (Boc-5) (34)(35)(36). The quinoxaline derivatives exhibit chemical similarities to the pyrimidine derivatives in terms of their manner of activation of GLP-1R. Quinoxaline-derived GLP-1R agonists have been hypothesized to bind in a pocket formed by the first and second extracellular loops of the juxtamembrane regions of GLP-1R (34). Another GLP-1R agonist, Boc-5, has physiochemical properties that probably make it unsuitable for oral administration (34,36).
The known physiologic functions of GLP-1 suggest that it plays a critical role in the regulation of glucose homeostasis and is thus a feasible candidate target in the treatment of T2DM (37,38). In addition to its potential role in the treatment of T2DM, GLP-1 is also presumed to affect cardiovascular function and CVD because GLP-1R has been identified in the heart, kidneys, and blood vessels. Preclinical studies of GLP-1 derivatives approved by the U.S. Food and Drug Administration provided evidence that GLP-1 favorably affects endothelial function, sodium excretion, recovery from ischemic injury, and myocardial function in animals. Preliminary data also suggest that GLP-1 reduces markers of CVD risk, such as C-reactive protein and plasminogen activator inhibitor-1. Ongoing studies are examining the effects of the administration of GLP-1 to patients who are at risk of CVD, postangioplasty patients, post-CABG patients, and patients with heart failure (39). Despite its attractive physiologic characteristics, the therapeutic potential of native GLP-1 is limited by its short lifetime (<2 min) in vivo. This rapid in vivo clearance is due primarily to enzymatic inactivation by DPP-IV (40) and renal clearance (<10 min) (41). To provide clinical utility, therapeutic derivatives of human GLP-1 require extended half-life properties or oral administration. To date, many efforts have focused on altering the pharmacokinetic properties of GLP-1 and on screening novel chemical agonists of GLP-1R. Along with the development of novel drug carriers, oral preparations of GLP-1 have also been developed, such as TTP-054 and TTP-273 from VTV Therapeutics (High Point, NC, USA) and Exendin-4 from Oramed (Jerusalem, Israel) (42,43).
The findings of the present study demonstrate that the modulation of GLP-1R with myricetin is feasible. Myricetin is a bioflavonoid that is abundant in tea, berries, fruits, and vegetables (44). Myricetin has been described as a promising therapeutic agent for the treatment of T2DM, but the effects of myricetin in animal models of T2DM are not fully understood (45). Evidence has demonstrated that the injection of myricetin enhances insulin activity in rats receiving fructose-rich chow (46). Chang et al. (47) found that dietary myricetin decreases body weight and improves the blood lipid profiles of rats fed a high-fat diet. We found that myricetin induced glucose-dependent insulin secretion in vitro and in vivo. The single-dose glucoregulatory assay revealed the blood glucose clearance activity of myricetin, and the time frame of this effect was approximately 8 h, which suggests that this dosage of myricetin could be administered twice daily. Notably, myricetin failed to exert its glucoregulatory activity in GLP-1R-deficient GLP-1RKO mice, strongly demonstrating that myricetin is an agonist of GLP-1R (Fig. 7). The physiologic profiles of the ZDF rats after long-term treatment with myricetin indicated that the glucose-regulating and body weight-controlling activities of myricetin were similar to those of other GLP-1R agonists, such as exenatide, liraglutide, and albiglutide. In addition, myricetin did not inhibit the proteolytic activity of DPP-IV, the main endogenous protease acting on GLP-1: incubation with myricetin had no effect on the in vitro degradation of GLP-1 by DPP-IV (Fig. 8). Moreover, treatment of Wistar rats with myricetin did not induce leptin secretion, in contrast to GLP-1 (Fig. 9). Together, these findings support the notion that myricetin activates GLP-1R and subsequently stimulates the secretion of insulin. The glucose- and body weight-controlling properties of myricetin make it a potent, natural-origin, noninjection antidiabetic drug candidate for T2DM treatment.
In summary, myricetin was determined to be a small-molecule chemical agent that activates GLP-1R, and its physiochemical properties suggest that myricetin could be the first orally administrable natural agonist of GLP-1R. However, the potency and pharmacokinetic properties of myricetin might require optimization for further clinical development. Additionally, many structural details regarding the binding of myricetin to GLP-1R require clarification via mutation studies or computer simulations. An improved understanding of the binding pocket and the mechanism of activation will facilitate molecular modeling strategies for the development of more potent small-molecule GLP-1R agonists. Structural modifications of myricetin might be beneficial to our understanding of the interactions between myricetin and the receptor and thus | 2018-04-03T01:17:38.635Z | 2017-03-07T00:00:00.000 | {
"year": 2017,
"sha1": "e66f8f90aeb21184d31a1f24e03c230cd690ad0c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1096/fj.201601339r",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "c997d68eba14f4ff915ddb2fc0e8be9af6e9e3f8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236999919 | pes2o/s2orc | v3-fos-license | EURODICAUTOM
The phraseological approach is very important in the scientific and technical field where we have to deal with so-called special languages. Specialisation by our translators is not always possible. For this reason we have to offer full information on the use of the terms in this particular field and very often also a lot of purely technical information. If these phrases or sentences are well chosen they can cater for both the linguistic and technical information needed.
For subtle distinctions between scientific or technical terms a definition is very often the best solution, but it is not always easy to find definitions for all terms appearing in a translation.
Although mere term-to-term equivalents are often dangerous in the hands of a "multi-purpose" translator, we did not want to be cut off from this important source of scientific and technical terminology. We believe that being able to accept terminological information in any form of presentation is an ideal situation for exchanges with other centres.
What is the origin of the information we put into a terminology bank? Our own terminological information is the result of an analysis of original documents in each language. The comparative study of these original documents gives real equivalents of professional language usage. This means that the equivalency is very often at the level of the phrase and not necessarily between corresponding words. The semantic content of the corresponding phrases is the same, but it can be distributed in a different way from one language to another among the morphemes. But we also use terminology compiled by specialized institutions such as AFNOR, the French standards institute, the International Welding Institute, and the European Brewery Convention, which is concerned not only about the quality of beer but also about its terminology.
In what order is this information presented? In our system each information unit is called an entity. I understand that in recent congresses on applied linguistics the French word "fiche" has been generally and internationally accepted for the same concept.
A "fiche" in automated terminology contains purely terminological information, documentation data, and information needed for the electronic data processing and general organisation of the data base and the whole system. In the EURODICAUTOM system each "fiche" is determined by three elements: NI: identification number (as we started from French, the letters are the wrong way round, of course); BE: "bureau émetteur", which is the terminology office which created this terminological information or assumed the responsibility for its input; TY: "type", which is a three-letter code describing an homogeneous collection of terminology or a collection of individual cards designated by the code TFI (fiche individuelle, fiche isolée). This is what we call the "BETYNI" of a fiche.
These three elements determine exactly the terminology unit, the "fiche". This apparently unimportant detail has important consequences. It means that a network of terminology centres can feed EURODICAUTOM and, independently of one another, can use their own organisation, as far as the choice of subjects for their terminology collections and their own numbering systems are concerned. There is no risk that it might interfere with the collections of other centres belonging to the network.
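To make the structure of a fiche and its BETYNI key concrete, here is a small, modern data-structure sketch. It is only an illustration of the scheme described in the text: the field names, example codes, and values are invented and do not reproduce the original EURODICAUTOM implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FicheKey:
    """BETYNI: bureau emetteur (BE), collection type (TY), identification number (NI)."""
    be: str   # terminology office responsible for the entry
    ty: str   # three-letter collection code, e.g. "TFI" for isolated fiches
    ni: int   # identification number within that office's own numbering

@dataclass
class Fiche:
    key: FicheKey
    terms: dict                 # language code -> term, phrase or context
    reliability: int = 3        # 5 = standardised/official text; lower = less certain
    subject_codes: tuple = ()   # alphanumeric subject codes used for retrieval
    scope_note: str = ""        # NT: restricted usage, peculiar plural form, etc.

# Offices number their fiches independently; the composite key stays unique.
example = Fiche(
    key=FicheKey(be="LUX", ty="TFI", ni=10432),
    terms={"en": "roll-on-roll-off ship", "fr": "navire roulier"},
    reliability=4,
)
print(example.key)
```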
In practice this means decentralisation of information collection, centralised storage and distribution. This is important for, as you may know, the European Communities have several institutions each with their own translation department and usually a terminology office.
When we further examine this EURODICAUTOM fiche we see that it also mentions the author. We respect intellectual property. Nevertheless this possibility has not been used so far. Terminologists are modest people. Or could it be that they are not quite sure of the overall value of their inventions? In most cases we stick to professional usage. Our creative function in terminology is only called upon when there is no other way out.
Then we have a reliability code which, in fact, has nothing to do with reliability, because it is not a real measure of the reliability of the information. Terminologists can also have terminology problems of their own.
The so-called reliability code also indicates whether the information has the value of a standard. This is the case of international or national standardized terminology.
Extracts from the European Treaties or regulations are of the same type. In quotations they have, of course, to be reproduced literally as they stand in the official texts. This type of fiche bears the code 5.
If all the language versions are supported by solid sources, the fiche will be given the code 4. As soon as perfection is no longer guaranteed the figure is reduced.
This does not mean that the information is bad. We simply are not sure, for lack of bibliographical information. Consequently we know that we still have to work on this fiche to bring it up to the standards of good, reliable terminological information. So it still has something to do with reliability.
Scientific, technical, political and economic terminology, as well as a certain dose of EEC home-jargon represents an enormous mass of linguistic information. Some classification would therefore do no harm. This is why my colleague Dr. LENNOCH developed on the basis of the UDC a three-digit alphanumeric subject code system. It is now used not only for EURODICAUTOM but also for the Target-project of the Carnegie Mellon University in Pittsburgh, USA and by the terminology office of the World Bank in Washington. The multiple coding in this system results in a high degree of refinement.
Subject codes give a supplementary retrieval parameter for polysemic terms. They are especially useful for extracting miniglossaries on certain special fields from the corpus of the terminology bank. This material can serve as a valuable aide-mémoire for interpreters having to prepare for a meeting of experts on some exotic new technique.
All these documentary and data processing data are there to highlight and support the real heart of the matter, the purely terminological information in "vedette" or in context, with or without definition.
If there is something more to say about the "vedette" that cannot be part of a definition, such as nationally or geographically limited usage, peculiar plural form etc., this can be done in a scope note NT.
Until now I have only spoken about the organisation of the fiche, the presentation of the terminological unit with its explanatory documentation. Let us now come to the retrieval stage.
As I said in the beginning, EURODICAUTOM should offer the translator a working tool which allows him to achieve higher efficiency and quality without implying fundamental changes in his working methods .
After an extremely short and simple sign-on procedure, the terminal invites the user: "Type your question".
Although we have only 130,000 "fiches" or entities at the moment, we have taken into account from the beginning the necessity of having many more, 1 million perhaps, and the difficulties arising then because of polysemy etc.
That is why we have incorporated a weighting system. The idea was to give first of all the best answer according to the principle of the longest match.
If a multiterm ABC is the subject of a question, the system first gives ABC if it is in the corpus, and then AB, BC or AC. This "partial" information can be useful. If not, the translator just stops the interrogation. This reveals again our basic concept: a working tool for a specialist, not a "penny-in-the-slot machine" for anybody.
Let me give an example: a translator is looking for the translation of the technical expression "relative cinematic viscosity". The first answer gives the translation of this multiterm. But if the user continues with the interrogation he will obtain consecutively "relative viscosity" and "cinematic viscosity".
If the full expression had not been available, it would have been very easy to reconstruct it from these two "partial answers".
It is perfectly clear that the higher the number of terms in the expression looked up, the more likely a partial answer is to give useful information. With two terms the risk of irrelevant information is much greater, but as the system is made for translators they must be capable of judging immediately whether the partial information is useful or not.
To improve the system we shall reduce the partial answers to those containing not less than n-2 of the terms contained in the question.
Furthermore the partial answers to the question AB (two terms) will give alternately answers with A only and answers with B only. Let us assume that you are looking for the translation of "roll-on-roll-off ship". If the system gives you a series of partial answers with the term "ship" it is highly unlikely that this will prove useful information. On the other hand any partial answer with the term "roll-on-roll-off" will give a useful hint for the right translation of the original expression. So, to avoid a long series of poor information containing the term "ship", the system will give alternately both elements of the expression. "Roll-on-roll-off craft", for example, would be helpful for translating the original "RO/RO ship".
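A rough sketch of the longest-match retrieval strategy just described is given below. It is a simplification for illustration only: query terms are scored against each stored entry, exact or longest matches are returned first, and partial answers are kept only if they contain at least n-2 of the query terms. The corpus entries and translations are invented examples, and the real system's weighting and alternation logic is richer than this.

```python
def retrieve(query_terms, corpus):
    """Return (entry, translation) pairs, longest matches first.

    `corpus` is a list of (entry_text, translation) pairs; partial answers are
    kept only when they contain at least n-2 of the n query terms.
    """
    query = [t.lower() for t in query_terms]
    n = len(query)
    scored = []
    for entry, translation in corpus:
        entry_terms = set(entry.lower().split())
        hits = sum(1 for t in query if t in entry_terms)
        if hits >= max(n - 2, 1):
            scored.append((hits, entry, translation))
    scored.sort(key=lambda item: item[0], reverse=True)   # best match first
    return [(entry, translation) for _, entry, translation in scored]

corpus = [
    ("relative cinematic viscosity", "viscosité cinématique relative"),
    ("relative viscosity", "viscosité relative"),
    ("cinematic viscosity", "viscosité cinématique"),
    ("dynamic viscosity", "viscosité dynamique"),
]
for entry, fr in retrieve(["relative", "cinematic", "viscosity"], corpus):
    print(f"{entry} -> {fr}")
```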
Congress attenders of the clever type will have realised that there is a retrieval problem with the phraseological entries, because the words in the phrases are not always in the standard form. This is especially true of languages like German with its numerous inflections and Danish because of the suffixation of the article. To solve this problem we use the truncation device. If, for example, a phrase contains a plural form, truncation will still allow the information to be obtained. Even in Italian it provides the possibility of asking for a form ending in CA or CO and obtaining as an answer the plural form ending in CHE or CHI.
We can do even better: the expression "in and outgoing ships" is a form which is not very frequent in English but more so in German and in Dutch.
A fiche containing the expression "Stuetz- und Bewegungsapparat" can be the answer to a question requesting the translation of "Stuetzapparat".
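The truncation device can likewise be illustrated with a few lines of code. The sketch below matches a truncated query stem against the words of an entry, so that, for instance, an Italian singular in -ca retrieves its plural in -che; the compound-matching behaviour mentioned above ("Stuetz- und Bewegungsapparat" answering "Stuetzapparat") is not reproduced here. The examples are invented.

```python
import re

def truncation_match(pattern, entry_text):
    """True if the (possibly truncated) query term matches a word in the entry.

    A trailing '*' marks the truncation point, so inflectional endings are ignored.
    """
    words = re.findall(r"\w+", entry_text.lower())
    if pattern.endswith("*"):
        stem = pattern[:-1].lower()
        return any(word.startswith(stem) for word in words)
    return pattern.lower() in words

print(truncation_match("viscosit*", "relative cinematic viscosities"))  # True
print(truncation_match("banc*", "le banche italiane"))                  # True: -ca retrieved as -che
print(truncation_match("banca", "le banche italiane"))                  # False without truncation
```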
For a polysemic term or a document concerning a very specific subject field, the interrogation can be made after introducing one or more subject codes.
This should not eliminate other information corresponding to the question asked but without the subject code asked for.
Coding is often a very subjective matter but wouldn't it be a pity to lose information because of a mere coding error? On the other hand some terms can be common to different fields and have the same equivalent in other languages. Forming techniques in plastics are partly the same as in metals. A terminologist introducing this information on the basis of a document on metal forming could forget to also assign it the general code for mechanical treatment. A user asking about a document dealing with plastics forming might make the same mistake while composing his interrogation parameters. | 2017-08-11T03:01:18.245Z | 1978-01-01T00:00:00.000 | {
"year": 1978,
"sha1": "98da6ca848580ffbcce605fde15ea8cf7242126b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "98da6ca848580ffbcce605fde15ea8cf7242126b",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": []
} |
23569758 | pes2o/s2orc | v3-fos-license | A Neurological Rehabilitation Unit: Audit of Activity and Outcome
A clinical audit was carried out to determine the impact of multidisciplinary rehabilitation in a specialist neurorehabilitation unit, and to demonstrate how outcome measurement can be incorporated into routine clinical audit. The study describes and interprets the results of one year's activity and outcome in a neurorehabilitation unit. A total of 138 patients were admitted to the 18 bedded unit between April 1994 and March 1995. The main outcome measures were: length of inpatient stay, admission and discharge destination, disability as measured by the Barthel Index and Functional Independence Measure, handicap as measured by the Environmental Status Scale and the Handicap Assessment Scale, and the time spent undertaking the audit. Improvement in disability was demonstrated in 112 (83%) patients and in handicap in 89 (66%) patients. The time taken to analyse the data on a quarterly basis was reduced from 20 hours for the first quarter to 4.5 hours for the last quarter. The results show that multidisciplinary inpatient neurorehabilitation leads to functional improvement in the majority of neurologically impaired patients. Outcome measurement and data collection can be incorporated into routine clinical practice once a sound methodology has been established.
Audit involves the systematic critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources and the resulting outcome and quality of life for the patient [1]. The aim of outcome measurement is to provide health care providers (clinicians and managers) and purchasers with objective information on the effectiveness of health care intervention [2].
It has been traditional to use mortality rates to describe the outcome from acute illness, but they are inadequate for describing the health care problems of people with chronic conditions. The International Classification of Impairments, Disabilities and Handicaps (ICIDH), developed by the World Health Organisation [3], has provided a framework for describing the long-term consequences of illness. At the neurorehabilitation unit of the National Hospital for Neurology and Neurosurgery (NHNN), outcome measurement using standardised measures of impairment, disability and handicap has been part of routine clinical activity since 1990.
Information gathered from audit activity is used locally but rarely distributed widely. Consequently, it is not available to other units to aid evidence-based decision-making. Provided the structure and process are described, the information obtained from measuring outcome should be generally applicable. The purpose of this paper is to describe our unit's patient activity and multidisciplinary team assessment of outcome over a period of one year and comment on the methodology used to incorporate these measures into routine practice.
Methodology
The neurorehabilitation unit at the NHNN is an 18 bedded unit which specialises in the rehabilitation of patients with neurological disease. For three months only 12 beds were available due to building development.
Patients were referred from within the National Hospital by consultant staff, directly by consultants from surrounding teaching and district hospitals, and by general practitioners. Assessment was performed prior to admission by a multidisciplinary team (consultant neurologist, a clinical nurse specialist, a senior physiotherapist and occupational therapist and, when appropriate, a speech and language therapist and psychologist) to determine the main purpose of the admission. Patients were admitted only when medically stable and likely to improve functionally or were in need of 'set-up' in the community.
Within 24 hours of admission to the unit, patients were assessed at a joint meeting by all members of the treating team. The core members of this team always included a nurse, occupational therapist and physiotherapist. A psychologist, social worker and speech and language therapist were involved as appropriate. The patient and close family/carers actively participated in this joint assessment. At the end of the week of admission the treating team jointly listed impairments, disabilities and handicaps, set short and long-term goals which were agreed by the patient, and used standardised assessments to score, by consensus, the patient's level of disability and handicap. Patients participated in a structured multidisciplinary programme which specifically addressed the problems identified on assessment. This typically included efforts to improve functional independence, mobility, bladder and bowel function, and communication. Advice and education regarding work and leisure pursuits, muscle tone management, fatigue management and strategies to compensate for memory dysfunction were also regular components of the rehabilitation process. On completion of this programme the same outcome measurements were repeated at the time of each patient's discharge report. The team took approximately 45 minutes each for the admission assessment and discharge report. For all patients basic demographic details were recorded including age, sex, admission and discharge destination, diagnosis and length of stay. The time taken to perform the audit was also documented.
The following measures of disability and handicap were recorded: Barthel Index (BI), Functional Independence Measure (FIM), Environmental Status Scale (ESS) and a Handicap Assessment Scale (HAS). Both the FIM and BI are widely used measures of disability which have been assessed in terms of their psychometric and clinical properties. The BI [4] is an ordinal scale with a range of 0-20, an increasing score indicating less disability. It was designed to assess the ability of the patient to care for himself, and has been used as a measure of disability in clinical research for many years. Although used widely, this instrument appears to be less sensitive to clinically relevant change in patients with moderate to severe disability [5]. The FIM provides a more comprehensive and sensitive assessment not only of self-care activities and mobility but also of communication and cognitive function. It is an 18 item instrument which measures and scores disability in terms of burden of care, addressing both motor and cognitive function [6].
Few handicap scales are available and fewer are fully evaluated. The ESS was developed as a measure of handicap for the Minimal Record of Disability in Multiple Sclerosis [7]. But there is some concern that its validity is limited, it mixes disability and handicap, and has a misleading scoring system [8]. The HAS was developed at the NHNN to overcome these difficulties, and is currently undergoing reliability studies. Like the ESS it comprises six items, each with a score of 0-5, a decreasing score indicating a reduction in handicap; the categories comprise: productivity, financial status, personal residence, transportation, social activity and autonomy.
All data were stored and analysed on an IBM-compatible computer using a commercially available statistical software package [9]. Patients were divided into subgroups on the basis of their diagnosis. For the Rasch-analysed FIM subscales [10], parametric statistics were used to determine group changes in mean score from admission to discharge. Non-parametric statistics were used to analyse the changes in the ordinal scales. The central tendency of such scores is most appropriately represented by the median, and this is quoted together with the range.
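As an illustration of the non-parametric paired comparison described for the ordinal scales, the sketch below applies a Wilcoxon signed-rank test to invented admission and discharge Barthel Index scores and reports medians with ranges; it is not the study's analysis or data.

```python
import numpy as np
from scipy import stats

# Invented paired Barthel Index scores (0-20) at admission and discharge.
admission = np.array([8, 10, 6, 12, 9, 14, 7, 11, 10, 13])
discharge = np.array([12, 14, 9, 15, 13, 16, 10, 14, 12, 17])

stat, p = stats.wilcoxon(admission, discharge)   # paired, non-parametric

for label, scores in (("admission", admission), ("discharge", discharge)):
    print(f"{label}: median {np.median(scores):.0f} (range {scores.min()}-{scores.max()})")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, P = {p:.4f}")
```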
Results
In the year 1 April 1994 to 31 March 1995, 138 patients (66 men, mean age 44, range 16-87) were admitted. Three patients were transferred back to their referring hospitals within one week because they were medically unstable and were therefore excluded from subsequent analyses.
Eighty-one patients were married, 37 were single, 11 were either separated or divorced and 6 widowed. In total 104 people lived with a spouse or family and 31 lived alone. Eighty-four patients required assistance with their care. In 53 cases this was provided by a family member or friend, and in 31 cases care was paid for. Twenty-three patients were full-time homemakers and 6 were students; 40 patients were retired, 33 of them on medical grounds. Among the 46 unemployed patients, 38 were not currently seeking employment. The mean duration of stay was 33 days; its relationship to diagnosis is outlined in Table 1. The shortest duration of stay was for patients with multiple sclerosis and the longest was for stroke and neuropathies; within these groups the longer durations related to those with recent infarct and Guillain-Barré syndrome.
Of the 135 patients, 59 patients were admitted from acute hospitals, 60 from home and 15 from other rehabilitation units. One patient was admitted from a residential unit. On discharge 126 patients returned home, 7 to an acute hospital, 1 to another rehabilitation unit and 1 to a nursing home.
Disability and handicap scores
Admission and discharge scores on the BI and FIM were available for all 135 patients. The scores on the BI improved in 106 patients, worsened in 5 and were unchanged in 24. For the FIM motor subscale, 112 patients improved and 9 deteriorated. On the cognitive subscale of the FIM, 57 patients improved and 28 deteriorated. Scores on the ESS were available on 30 patients of whom 18 improved and 2 deteriorated. The HAS was carried out on 105 patients of whom 71 improved and 8 deteriorated. Table 2 shows the change in scores for each scale from admission to discharge. Case history Jhis is best illustrated by a case presentation: Mr C is a 50-year old man with a 25-year history of multiple sclerosis. He has been wheelchair bound for the past 12 years. Before admission to hospital he had been living alone at home in an adapted ground floor council flat, receiving home help service three times per Week to assist with shopping, laundry and housework. He was in regular contact with his three adult children. Over the past 12 months he had experienced a steady deterioration in his function and was struggling to maintain independence in many self-care activities.
Eventually Mr C was admitted to his local hospital on 3 June 1994 because of frequency of micturition, con- By the end of the first week of admission, the following long-term goal was established in agreement with Mr C: 'to return home with minimal assistance for selfcare, independent in all transfers (including car) and relevant domestic tasks, sitting with improved posture, independent in performing a home exercise programme, and with appropriate bladder, bowel and tone management'. It was anticipated that this goal would be achieved within eight weeks. A series of measurable short-term goals was also set and monitored.
Throughout the admission period Mr C participated in an intensive programme involving joint input from the neurologist, nursing staff, occupational therapist, psychologist, psychiatrist, physiotherapist, social worker and continence adviser. Management included:
• Re-education in self-care activities, transfers, sitting balance, and domestic activities including food preparation
• Education regarding pressure care, prevention and treatment of urinary tract infections, and self-medication
• A regime of suppositories and regular aperients
• Temporary adaptations to the wheelchair, with recommendations to the local wheelchair service regarding provision of a lightweight wheelchair and a pressure-relieving cushion
• Assessment by psychiatrists, with antidepressant medication begun
• Advice regarding strategies to compensate for memory dysfunction, and education in relaxation techniques to cope with anxiety and stress
• Advice regarding leisure activities, including referral for a full driving assessment
• Assessment of the home environment prior to discharge, with recommendations for rails to be fitted beside his toilet and repositioning of the intercom system
Mr C was discharged home after two months, having achieved his long-term goal. Close liaison with community services was crucial throughout the rehabilitation process to ensure safety on return home and carryover of the improvements gained. Referral was made to the district nurses, community physiotherapist, social services, occupational therapist and local wheelchair service, and for review by the psychiatrist, social worker and general practitioner.
(Fig 2 caption: Changes in impairment, disability and handicap between admission and discharge for Mr C. An increase in disability scores (a and b) denotes an improvement in overall function, while a decrease in impairment and handicap scores (c and d) denotes improvement in these dimensions.)
Discussion
Clinical audit is essential for the continuing evaluation of neurological rehabilitation. There is evidence to suggest that careful measurement of any activity results in higher standards of observation, documentation and response, thus improving the quality of care [12]. Previous studies have documented the feasibility of using the BI to monitor disability in the acute management [13] and rehabilitation [14] of elderly people. This study demonstrates that the measurement of outcome can be incorporated into routine clinical practice in a high turnover, intensive inpatient neurorehabilitation unit.
In this unit, scoring of disability and handicap scales takes place during routine assessment and discharge meetings. The scales provide a structure for multidisciplinary assessment of the patient's problems. Use of the scales in this way is time efficient but demanding. Its success requires continued commitment from all staff who understand the value of outcome measurement as an integral part of clinical practice.
As in many units, junior staff rotate on a regular basis and staff training is therefore essential. This maintains accuracy of scoring and ensures that new members learn to appreciate the relevance of outcome measurement. A standardised record of the audit data has been developed to ensure that clinical staff document the information in a consistent manner which enables non-clinical staff to perform the audit. The results of the audit are discussed within the unit on a quarterly basis. Staff share the credit for the success (or failure) of data collection, are free to comment on the results, and are encouraged to instigate changes in practice where necessary.
The proportion of patients requiring assistance with their daily care reflects the severity of the disability and handicap in this population where over half the population have a progressive neurological impairment. There is little scientific evidence to prove that disability and handicap improve with rehabilitation. The results indicate that functional improvement occurs between admission and discharge with inpatient re-habilitation in most of our patients. The results are complementary to those described in a previous study [15]. An anomaly appears in the worsening of FIM cognitive subscale scores, probably because initial assessment often underestimates the extent of cognitive and psychosocial difficulties.
The results of this audit focus on outcome but this is just one aspect of monitoring the quality of patient care. Review of the process of service delivery and goal achievement is also important. Integrated Care Pathways (ICPs) originally established within the acute sector to monitor service delivery, have been developed within this unit as a method of auditing the rehabilitation process [16]. The combination of auditing process and outcome ensures delivery of efficient and effective health care.
These audits were performed quarterly. The first took 20 hours to complete while the last required only 4.5 hours. This reduction in time was achieved by establishing a systematic filing system, improving recording and collation of data through staff education and identifying a single coordinator.
Clinicians tend to feel that outcome measurement uses time that could be better spent in direct patient contact. This approach prevents collection of information which providers and purchasers could use to raise the standards of patient care. We have shown that data collection can be incorporated into patient focused activity and that regular analysis can be performed by non-clinical staff. | 2018-04-03T01:31:04.115Z | 1996-01-01T00:00:00.000 | {
"year": 1996,
"sha1": "30669a44ca9631aabc25cb5d444a06c46c0a2ed9",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0cdadedbcf3f040ebf428164f2b605f2231bbcf6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267699677 | pes2o/s2orc | v3-fos-license | Pre-treatment 68 Ga-PSMA-11 PET/CT Prognostic Value in Predicting Response to 177Lu-PSMA-I&T Therapy and Patient Survival
Purpose To assess the prognostic value of pre-treatment [68Ga]Ga-PSMA-11 PET/CT and other baseline clinical characteristics in predicting prostate cancer (PCa) patients response to [177Lu]Lu-PSMA (PSMA-I&T), as well as patient survival. Procedures In this retrospective study, 81 patients who received [177Lu]Lu-PSMA-I&T between October 2018 and January 2023 were reviewed. Eligible patients had metastatic castration-resistant PCa, underwent pre-treatment [68Ga]Ga-PSMA-11 PET/CT, and had serum prostate-specific antigen (PSA) levels available. On PET/CT images, SUVmax, SULmax, SUVpeak, and SULpeak of the most-avid tumoral lesion, as well as SUVmean of the parotid gland (P-SUVmean) and liver (L-SUVmean), were measured. Also, whole-body PSMA tumour volume (PSMA-TV) and total lesion PSMA (TL-PSMA) were calculated. To interpret treatment response after [177Lu]Lu-PSMA-I&T, a composite of PSA values and [68Ga]Ga-PSMA-11 PET/CT findings were considered. The outcomes were dichotomised into progressive versus controlled (stable disease or partial response) disease. Then, the association of baseline parameters with patient response was evaluated. Also, survival analyses were performed to assess baseline parameters in predicting overall survival. Results Sixty patients (age:73 ± 8, PSA:185 ± 371) were included. Patients received at least one cycle of [177Lu]Lu-PSMA therapy (median = 4). Overall, half of the patients showed disease progression. In the progressive versus controlled disease evaluation, the highest SULmax, as well as SUVmax and SULmax to both backgrounds (L-SUVmean and P-SUVmean), were significantly correlated with the outcome (p-values < 0.05). In the multivariate analysis, only SULmax to the L-SUVmean remained significant (p-value = 0.038). The best cut-off was 8 (AUC = 0.71). With a median follow-up of 360 days, 11 mortal events were documented. In the multivariate survival analysis, only SULmax to P-SUVmean (cut-off = 2.4; p-value = 0.043) retained significance (hazard ratio = 4.0). Conclusions A greater level of PSMA uptake, specifically higher tumour-to-background uptake in the hottest lesion, may hold substantial prognostic significance, considering both [177Lu]Lu-PSMA-I&T response and patient survival. These ratios may have the potential to be used for PCa patient selection for radioligand therapy.
Thus, through these comprehensive investigations, it has been shown that [177Lu]Lu-PSMA is of benefit in mCRPC patients overall. However, it seems that since 20-30% of patients may not respond to this therapy, there is a need for predictors that can further prognosticate treatment response and optimize patient selection. This will maximize therapeutic efficacy, preventing the blind administration of this beneficial therapy to mCRPC patients in the clinic merely based on the diagnosis of mCRPC [8][9][10][11]. In this regard, several baseline factors, including laboratory and clinical parameters (e.g. serum chromogranin A and lactate dehydrogenase levels, age, and pain experience), have been discussed to estimate the patient's survival and response to treatment [12,13]. Nevertheless, it is essential to note that these factors have some limitations in predicting response to treatment.
[68Ga]Ga-PSMA positron emission tomography/computed tomography (PET/CT) is a reproducible, robust modality playing a crucial role in the diagnosis and treatment planning of advanced PCa [14,15]. Despite varying inclusion criteria based on baseline [68Ga]Ga-PSMA PET/CT across clinical trials and treatment facilities globally, experts recommended utilizing [68Ga]Ga-PSMA PET/CT for patient selection. Recent trials, such as VISION and TheraP, have employed different criteria to assess tumour uptake. In the VISION trial, qualitative thresholds were used to assess tumour uptake relative to liver uptake. The cut-off used in the TheraP trial for the lesions' maximum standardized uptake value (SUVmax) was 20, which is significantly higher, approximately 2-3 times greater than liver uptake and relatively similar to the uptake observed in the parotid gland [6,7]. Thus, although these trials have shown the efficacy of [177Lu]Lu-PSMA therapy in highly PSMA-avid patients based on their inclusion criteria, their findings may not be entirely compatible with our daily routine observations in the clinic to decide whether patients would really benefit from receiving [177Lu]Lu-PSMA or not.
Moreover, as another gap in the current literature, most previous studies assessed treatment response only by measuring biochemical changes, primarily PSA levels. While PSA reduction is widely used in clinics due to its simplicity, there is ongoing debate regarding whether it is the most precise response evaluation criterion. Although the literature on imaging-derived predictors using pre-treatment [68Ga]Ga-PSMA-11 PET/CT is limited, existing studies demonstrated a correlation between high PSMA expression (as evaluated by SUVs) and favourable results [8, 15-17]. The utilization of molecular response assessment in PSMA-targeted imaging is currently under investigation and has recently been endorsed by a joint consensus [16]. A recent investigation showed that the response evaluation criteria in PSMA PET/CT (RECIP) classification could be robust, not only quantitatively but also when interpreted qualitatively [17]. Furthermore, Gafita et al. introduced PSA + RECIP as a novel composite-based approach for evaluating treatment response [18]. This composite response classification system (PSA + RECIP) had a higher prognostic accuracy for OS, being superior to relying solely on PSA measurements or RECIP criteria.
Hence, in this study, we aimed to assess the potential of pre-treatment [68Ga]Ga-PSMA-11 PET/CT, as well as other baseline characteristics, in predicting response to [177Lu]Lu-PSMA (PSMA-I&T), considering a composite of both PSA and [68Ga]Ga-PSMA-11 PET/CT findings based on the state-of-the-art response assessment framework [18]. To our knowledge, our study is the first to assess the prognostic value of pre-treatment PSMA PET/CT and baseline clinical characteristics for predicting treatment response classified by this novel method. In addition, we performed survival analyses and evaluated the prognostic value of the baseline measurements in predicting patients' OS.
Study Population
In this retrospective single-centre study, we identified 81 patients who received treatment with [ 177 Lu]Lu-PSMA-I&T between October 2018 and January 2023. Eligible patients had mCRPC, underwent [ 68 Ga]Ga-PSMA-11 PET/CT before treatment, and had serum PSA levels available at baseline and after each cycle of treatment. We excluded patients who discontinued treatment because of major adverse events, such as renal failure, or who had a second malignancy. Finally, 60 patients were included (Fig. 1). Prior to PSMA radioligand therapy (RLT), all patients had undergone standard second-line androgen deprivation therapy (ADT) with enzalutamide or abiraterone and third-line chemotherapy. Local radiotherapy and systemic [ 223 Ra]Ra-dichloride RLT had been performed in 41 (68%) and 2 (3%) patients, respectively. Blood testing was carried out upon patient admission for every RLT session.
PSMA Preparation and PET/CT Acquisition
The [ 68 Ga]Ga-PSMA-11 was prepared using a commercially available cold kit (Telix Pharmaceuticals, Inc., Australia) and a commercial 68 Ge/ 68 Ga generator (Galli Ad®) manufactured in compliance with good manufacturing practice (GMP). Quality control was performed by thin-layer chromatography (TLC) to ensure radiochemical purity > 95%. Imaging was conducted systematically following standard procedure guidelines [19], scanning from the base of the skull to the proximal femur on two PET/CT scanners (Philips Ingenuity TF, Amsterdam, the Netherlands, and Siemens Biograph mCT, Erlangen, Germany). The [ 68 Ga]Ga-PSMA-11 PET acquisition time was 2.5 min per bed position, with a mean interval of 60 min (SD ± 14.4 min) between tracer administration and the start of imaging. The mean injected activity was 2.15 MBq per kg body weight. For attenuation correction and localization, a non-contrast-enhanced low-dose CT scan was performed (Siemens: Care Dose 4D, Care kV, slice thickness 1.2 mm, pitch 1.5; Philips: 100 kV, 33 mAs, slice thickness 1.5 mm, pitch 0.8). The reconstructed slice thickness was 3 mm, using iDose level 3 (Philips Ingenuity TF) and SAFIRE level 3 (Siemens Biograph mCT), respectively. Both PET/CT scanners are EARL/EANM accredited, and thus their performance is assumed to be comparable.
PET/CT Analysis and Interpretation
Quantitative analysis was performed using the syngo.via platform (Siemens Healthineers, Erlangen, Germany). The visual evaluation was conducted by two experienced nuclear medicine physicians, who reached diagnostic decisions by consensus. The readers were blinded to the clinical information of patients, including serum PSA levels, and were only aware of the PCa diagnosis. Structures with physiologic PSMA uptake (e.g., salivary glands) or known PSMA false-positive findings (e.g., celiac ganglia) were excluded. Lesions with visually higher uptake than the lumbar vertebral body were rated as PSMA-positive, indicating metastases [20]. SUVmax, SULmax, SUVpeak, and SULpeak of the most-avid lesion, as well as the SUVmean of the parotid gland (P-SUVmean) and of healthy liver tissue (L-SUVmean) as background references, were measured using a standard volume of interest (VOI).
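As a point of reference for the uptake metrics named above, the sketch below shows how SUV and SUL values are conventionally derived from the measured activity concentration. The Janmahasatian lean-body-mass formula and the helper names are illustrative assumptions; the study does not state which lean-body-mass model the scanner software applies.

```python
def suv_bw(activity_conc_bq_ml: float, injected_activity_bq: float, weight_kg: float) -> float:
    """Body-weight SUV: tissue activity divided by (decay-corrected) injected activity per gram of body weight."""
    return activity_conc_bq_ml / (injected_activity_bq / (weight_kg * 1000.0))


def lean_body_mass_kg(weight_kg: float, height_cm: float, male: bool) -> float:
    """Janmahasatian lean-body-mass estimate (an assumption; other LBM models exist)."""
    bmi = weight_kg / (height_cm / 100.0) ** 2
    return (9270.0 * weight_kg / (6680.0 + 216.0 * bmi) if male
            else 9270.0 * weight_kg / (8780.0 + 244.0 * bmi))


def sul(activity_conc_bq_ml: float, injected_activity_bq: float,
        weight_kg: float, height_cm: float, male: bool) -> float:
    """SUL: SUV normalized to lean body mass instead of total body weight."""
    lbm_kg = lean_body_mass_kg(weight_kg, height_cm, male)
    return activity_conc_bq_ml / (injected_activity_bq / (lbm_kg * 1000.0))
```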
Volumes of interest (VOIs) were delineated using isocontours set at two different thresholds: 45% of the maximum uptake [20] and a fixed SUVmax of 3 [21,22]. These contours were drawn for all PSMA-positive lesions, and the contoured volumes were summed for each patient. Subsequently, the PSMA tumour volume (PSMA-TV) and total lesion PSMA (TL-PSMA; calculated as PSMA-TV multiplied by SUVmean) were determined and reported separately at the aforementioned thresholds (generating PSMA-TV-45% and PSMA-TV-3, as well as TL-PSMA-45% and TL-PSMA-3). The number of metastatic lesions and the prominent sites of disease (prostate, lymph nodes, bone, and viscera) were also recorded.
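A rough outline of how PSMA-TV and TL-PSMA can be accumulated from segmented lesions is given below. This is a schematic NumPy sketch with assumed array inputs and function names, not the syngo.via implementation used in the study.

```python
import numpy as np


def lesion_metrics(suv: np.ndarray, lesion_mask: np.ndarray, voxel_volume_ml: float,
                   fixed_thr: float = 3.0, rel_thr: float = 0.45):
    """Return {threshold_name: (PSMA-TV in mL, TL-PSMA)} for one PSMA-positive lesion."""
    lesion_suv = suv[lesion_mask]                       # SUVs of voxels belonging to this lesion
    results = {}
    for name, thr in (("SUV3", fixed_thr), ("45%max", rel_thr * lesion_suv.max())):
        contour = lesion_suv >= thr                     # voxels inside the isocontour
        tv_ml = contour.sum() * voxel_volume_ml         # PSMA tumour volume
        suv_mean = lesion_suv[contour].mean() if contour.any() else 0.0
        results[name] = (tv_ml, tv_ml * suv_mean)       # TL-PSMA = PSMA-TV x SUVmean
    return results


def patient_totals(per_lesion_results):
    """Sum PSMA-TV and TL-PSMA over all PSMA-positive lesions of a patient."""
    totals = {}
    for res in per_lesion_results:
        for name, (tv, tl) in res.items():
            t, l = totals.get(name, (0.0, 0.0))
            totals[name] = (t + tv, l + tl)
    return totals
```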
[ 177 Lu]Lu-PSMA Therapy
Prior to the treatment infusion, each patient received intravenous hydration (500 mL 0.9% NaCl) and cooling of the salivary glands, starting 30 min before the infusion. The [ 177 Lu]Lu-PSMA-I&T solution was administered intravenously via a perfusion system over 20 min.
Response Assessment
All patients had serum PSA measurements after each cycle, within 6-8 weeks. PSA values were interpreted according to the Prostate Cancer Working Group 3 (PCWG3) criteria. Blinded to these interpretations and to patients' serum PSA levels, we evaluated the [ 68 Ga]Ga-PSMA-11 PET/CT findings according to RECIP (version 1.0). Finally, for patients who underwent follow-up PET/CT imaging, we synthesized these findings using the novel composite response evaluation framework (PSA + RECIP) [18]. The final interpretation of each patient's response to therapy was made using clear-cut definitions for each response group (partial response: PSA decline ≥ 50% or RECIP-PR; progressive disease: PSA increase ≥ 25% or RECIP-PD; stable disease: stable in both evaluations). Detailed definitions are provided in Table 1. We then dichotomised the results into progressive versus controlled disease (stable disease or partial response), according to the patient's outcome [16,17].
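The composite rule quoted above can be written as a small decision function. The precedence given to progression when the PSA and RECIP criteria disagree is our assumption; Table 1 holds the authoritative definitions.

```python
def psa_recip_response(psa_change_pct: float, recip: str) -> str:
    """Composite PSA + RECIP classification (sketch).

    psa_change_pct: percentage change from baseline PSA (e.g. -60 means a 60% decline).
    recip: one of "CR", "PR", "SD", "PD" from the RECIP 1.0 reading of follow-up PET/CT.
    """
    progression = psa_change_pct >= 25 or recip == "PD"        # PSA increase >= 25% or RECIP-PD
    response = psa_change_pct <= -50 or recip in ("PR", "CR")  # PSA decline >= 50% or RECIP-PR
    if progression:          # assumed precedence when the two criteria conflict
        return "progressive disease"
    if response:
        return "partial response"
    return "stable disease"


def dichotomize(label: str) -> str:
    """Progressive versus controlled disease (stable disease or partial response)."""
    return "progressive" if label == "progressive disease" else "controlled"
```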
Patient Follow-up
The follow-up period started from the date of the first RLT cycle. Typically, patients underwent monthly laboratory testing during this period. For patients who died during treatment, the date of death was recorded.
Statistical Analysis
All parameters were analyzed at the patient level. Continuous and categorical variables are presented as mean ± standard deviation (SD) and frequency (%), respectively. Differences in the clinical and PET/CT parameters between response groups were evaluated using the chi-square test for categorical variables and Student's t-test for continuous variables. Next, we evaluated the association of serum PSA and the pre-treatment [ 68 Ga]Ga-PSMA-11 PET/CT semiquantitative parameters with the response to treatment using logistic regression. A cut-off for SULmax was sought using the Youden index (maximizing the sum of sensitivity and specificity on the receiver operating characteristic curve). The dichotomized variables entered the multivariate analysis to identify the most significant predictor.
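A minimal sketch of the Youden-index cut-off search described above, using scikit-learn's ROC utilities; the study itself ran the analysis in SPSS, so this is only an illustrative equivalent and the variable names are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score


def youden_cutoff(y_controlled: np.ndarray, sulmax: np.ndarray) -> dict:
    """Find the SULmax cut-off maximizing sensitivity + specificity - 1 (Youden's J)."""
    fpr, tpr, thresholds = roc_curve(y_controlled, sulmax)
    j = tpr - fpr                                  # Youden's J at each candidate threshold
    best = int(np.argmax(j))
    return {
        "cutoff": thresholds[best],
        "sensitivity": tpr[best],
        "specificity": 1 - fpr[best],
        "auc": roc_auc_score(y_controlled, sulmax),
    }
```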
For the prognostic evaluation, the continuous variables that were significantly associated with response were converted to categorical variables, again using the Youden index. Regarding OS, univariate analysis was performed using the Kaplan-Meier method, and the significance of differences was assessed with the univariate Mantel-Cox log-rank test. Parameters significant in the univariate analysis entered multiple Cox regression and are reported with their hazard ratios. All data were gathered and analyzed using SPSS software (IBM, ver. 22). Statistical significance was set at a two-sided p-value of less than 0.05.
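The survival workflow (Kaplan-Meier estimation, Mantel-Cox log-rank comparison, and multiple Cox regression) could be reproduced along the following lines with the lifelines package; the column names and the dichotomized predictor are illustrative assumptions, not the study's SPSS setup.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test


def analyse_os(df: pd.DataFrame):
    """df columns (assumed): 'months' = follow-up time, 'death' = 1 if event, 'high_ratio' = dichotomized predictor."""
    high, low = df[df.high_ratio == 1], df[df.high_ratio == 0]

    # Univariate Kaplan-Meier estimate for one group (repeat for the other to plot both curves)
    km = KaplanMeierFitter()
    km.fit(high.months, event_observed=high.death, label="ratio >= cut-off")

    # Mantel-Cox log-rank test between the dichotomized groups
    lr = logrank_test(high.months, low.months,
                      event_observed_A=high.death, event_observed_B=low.death)

    # Multiple Cox regression with the significant baseline covariates
    cox = CoxPHFitter()
    cox.fit(df[["months", "death", "high_ratio"]], duration_col="months", event_col="death")
    return lr.p_value, cox.hazard_ratios_
```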
Response Prediction in the Final Assessment
Overall, 30/60 (50%), 4/60 (7%), and 26/60 (43%) of patients showed disease progression, stable disease, and treatment response in the final assessment, respectively. Regarding the pre-treatment differences between the progressive and controlled disease groups, ISUP GG > 3, therapy cycles > 2, the pre-treatment highest SULmax, the SUVmax-to-background (parotid and liver) ratios, and the SULmax-to-background (parotid and liver) ratios differed significantly between the two groups. Details are provided in Table 3.
Association between [ 68 Ga]Ga-PSMA-11 PET/CT parameters and the response to treatment
In the progressive versus controlled disease evaluation, the highest SULmax (p-value = 0.046), highest SUVmax to the L-SUVmean (p-value = 0.024), highest SULmax to the L-SUVmean (p-value = 0.021), highest SUVmax to the P-SUVmean (p-value = 0.023), and highest SULmax to the P-SUVmean (p-value = 0.020) were significantly correlated with the outcome. In the multivariate analysis, only the highest SULmax to the L-SUVmean remained significant (p-value = 0.038). The highest SULmax to the L-SUVmean had an AUC of 0.71 for identifying controlled-disease patients. The best cut-off was 8, showing a sensitivity of 67% and a specificity of 74%.
Response Prediction in the Fourth Cycle
In this step, we limited the response prediction to the responses achieved in patients with ≥ 4 cycles of [ 177 Lu]Lu-PSMA therapy (n = 46) and again calculated the differences in the variables noted previously in Table 3. The details are provided in Table 4.
To visualize the application of the SULmax-to-background cut-offs in response prediction, representative cases are shown (Fig. 2). A higher SULmax ratio was associated with a better response to [ 177 Lu]Lu-PSMA therapy.
Discussion
In this study, we evaluated the prognostic value of [ 68 Ga]Ga-PSMA-11 PET/CT parameters alongside other clinical factors for predicting response to treatment, as well as OS, in mCRPC patients who received [ 177 Lu]Lu-PSMA-I&T therapy. Treatment response, defined as the combined molecular imaging and biochemical response (PSA + RECIP), was assessed at two time points: after the termination of all cycles (final assessment) and after completion of the 4th cycle of treatment (the minimum number of cycles recommended to reach therapeutic efficacy). We found that the highest SULmax to the L-SUVmean had the highest AUC for detecting controlled disease, with a best cut-off value of 8. In the fourth-cycle assessment, SULmax to the P-SUVmean was the only variable that remained significant in the multivariate analysis for predicting controlled disease, with a best cut-off value of 2.7. Notably, tumour volume and site of disease did not predict response to treatment. In the survival analysis, the highest SULmax to the P-SUVmean (cut-off = 2.4) was the only significant variable in the multivariate analysis for predicting OS.
Regarding the clinical parameters predicting response to treatment, the study conducted by Ferdinandus et al. revealed that younger age and higher Gleason scores had a negative impact on treatment response [13]. However, our findings did not reach significance for these parameters. On the other hand, they demonstrated that basal PSA did not reach significance as a predictor, which was in line with our results [13]. Similarly, Rathke et al. showed that baseline PSA had no significant prognostic value for predicting treatment response [12]. Considering the value of PSMA PET/CT-derived factors, our findings were consistent with the results of a study conducted by Emmett et al. [8] on PSMA PET/CT predictive parameters for response assessment in a limited cohort of 14 patients. They likewise showed that maximal PSMA intensity is a reliable predictor of response to treatment. However, our study expanded on these findings by including a larger sample size and evaluating additional PET/CT parameters such as SUVmax, SULmax, and their ratios to background. In another study, van der Sar et al. also demonstrated a significant correlation between the SUV of the most-avid metastases and response to treatment [23].
Another valuable finding of our study is the lesser importance of the site of metastases for therapy response. In routine clinical practice, there is sometimes concern that the objective response may differ among patients with different metastatic patterns (e.g., bone or visceral involvement), and there are reports that visceral metastasis can negatively affect patients' disease course [9,24]. However, our results, like those of some previous studies [8,23], demonstrated no significant difference between sites of metastasis, at least when compared with more prominent factors such as maximal PSMA uptake. Moreover, PET-based parameters such as PSMA-TV and TL-PSMA, although significantly different between groups (in the fourth-cycle assessment), did not retain their significance when compared with the stronger parameters (e.g., tumour-to-background ratios) in therapy response prediction. A study conducted by Widjaja et al. was consistent with our findings, observing that PSMA expression from pre-therapeutic PET/CT performed better than PSMA-TV and TL-PSMA [25], although in the clinic, larger tumour volumes may initially seem to be associated with a lower likelihood of treatment response. Hence, distinguishing between metastatic patients based on their sites of metastases, or solely on pre-treatment involvement volumes, may not be of high value for patient selection prior to RLT compared with the intensity of involvement.
Considering OS, the proportion of deaths recorded in our study was broadly similar to that reported in the literature [26,27]. Our results showed that the hottest lesion's SULmax relative to the P-SUVmean could predict patient survival. In the most recent international multi-centre study [28], similar to our findings, the authors revealed that this ratio (notably, SUVmean to the P-SUVmean) was prognostic for therapy response and patient survival. They used a cut-off of > 1.5 to predict a higher survival rate, which is lower than our cut-off of 2.7; this difference may partly stem from our use of the lesions' SULmax instead of SUVmean. Notably, it has been shown that the tumour-to-background ratio could also be predictive of progression-free survival (PFS) [5]. In another recent study, Karimzadeh et al. demonstrated that adhering to the patient selection criteria outlined in TheraP (PSMA-positive disease with a minimum SUVmax of 20 at the site of disease and SUVmax greater than 10 at all other sites of measurable metastatic disease) resulted in improved treatment responses and overall outcomes [29]. Notably, their inclusion criteria closely mirrored criteria based on the parotid uptake threshold, which may suggest that our findings are in alignment with the conclusions of their study in this regard. Although we followed a different methodology than TheraP, we also showed that uptake more intense than background can help identify responsive patients, and that uptake higher than parotid tissue can predict better OS when patients are treated with RLT. Thus, their higher response rate compared with ours may be explained by the different selection criteria. Moreover, previous studies noted that PSA level (even PSA doubling time), Gleason score, and sites of metastases cannot predict patient survival [30,31]. This is important when selecting patients who may benefit from [ 177 Lu]Lu-PSMA-I&T therapy, so that patients with high PSA levels are not excluded on the presumption of highly extensive or aggressive disease.
This study has some limitations, the most important being the inclusion of a rather small, heterogeneous patient cohort receiving varying numbers of [ 177 Lu]Lu-PSMA-I&T therapy cycles. To address this problem to some extent, we reselected patients who underwent at least four cycles of therapy and re-evaluated the studied parameters. However, this was a double-edged sword: although it reduced the heterogeneity to some extent, it also made the study population smaller. Another limitation is the retrospective design of the study, which may affect the findings, particularly the assessment of some laboratory parameters (e.g., alkaline phosphatase, chromogranin A). Moreover, the statistical analysis might be affected by multiple comparisons. In addition, calculating the SULmax-to-SUVmean (background) ratio could be of some concern because SUL and SUV differ in their normalization (lean body mass versus body weight); however, it was a more robust predictor than SUV-to-SUV ratios and retained its significance alongside other variables in the multivariable analyses. We therefore decided to report our own experience and recommend further investigations in this regard. Regarding OS, we could not follow all patients long enough to record all deaths, resulting in censoring at the termination of follow-up. Lastly, we did not include patients' therapy response class in the survival analysis. Although this may be considered a limitation, we did so intentionally in order to report factors that can guide patient selection prior to treatment initiation.
In conclusion, our study showed that higher PSMA uptake, best represented by a high tumour-to-background ratio in the hottest lesion, has the greatest prognostic value in PCa patients receiving [ 177 Lu]Lu-PSMA-I&T therapy. Additionally, we showed that a higher tumour-to-background ratio is associated with improved patient survival, possibly reflecting a better response to treatment. These ratios can also be used for more robust patient selection, including the identification of subjects who are more likely to benefit from novel combination therapies. Future long-term prospective studies are strongly recommended to refine the reported cut-offs for patient selection.
[ 177 Lu]Lu-PSMA-I&T was administered based on the recommendations of a multidisciplinary team, including board-certified nuclear medicine physicians, urologists, oncologists, radiologists, and pathologists. All patients received at least one cycle of [ 177 Lu]Lu-PSMA-I&T RLT, with a mean interval of 6-8 weeks between consecutive cycles. The standard therapy protocol comprised 4-6 cycles of [ 177 Lu]Lu-PSMA-I&T RLT unless a patient experienced a major adverse event or showed significant progression, leading to therapy termination by multidisciplinary team consensus. In each cycle, 7.4 GBq of [ 177 Lu]Lu-PSMA-I&T was administered, with an activity reduction of approximately 20% if the patient exhibited decreased renal or haematological function.
Fig. 2 [ 68 Ga]Ga-PSMA-11 PET/CT maximum intensity projection (MIP) images of representative cases with (a) a low tumour burden (patients A and B) and (b) a high tumour burden (patients C and D). Patient A exhibits a low SULmax-to-parotid ratio (< 2.7), whereas patient B falls into the high group (≥ 2.7); therefore, despite having a low tumour burden, patient A did not respond to the therapy.
Table 1 Response to treatment framework. PCWG3, prostate cancer working group 3; PSA, prostate-specific antigen; PR, partial response; PD, progressive disease; SD, stable disease; RECIP, response evaluation criteria in PSMA PET/CT; CR, complete response
Table 2 Comparison
Table 5 Detailed results of the survival analysis; [ 68 Ga]Ga-PSMA PET/CT parameters to predict overall survival | 2024-02-17T06:17:02.632Z | 2024-02-15T00:00:00.000 | {
"year": 2024,
"sha1": "fca6cb2f2466024ea24b45f55a881766426c7455",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11307-024-01900-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3368dcc545a148474f041dbfa6aab4da5b78722a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248866179 | pes2o/s2orc | v3-fos-license | Effects of Noninvasive Brain Stimulation Combined With Antidepressants in Patients With Poststroke Depression: A Systematic Review and Meta-Analysis
Objective: To evaluate the efficacy and safety of noninvasive brain stimulation (NIBS) combined with antidepressants in patients with poststroke depression (PSD). Methods: Seven databases were searched to identify randomized controlled trials of NIBS combined with antidepressants in the treatment of PSD based on the international classification of diseases (ICD-10) criteria and exclusion criteria. The retrieval time was from database establishment to 31 October 2021. Two researchers independently screened the identified studies through the search strategy, extracted their characteristics, and evaluated the quality of the included literature. The Cochrane Collaboration's tool was used to assess risk of bias. RevMan 5.3 software was applied for the meta-analysis. Results: A total of 34 randomized controlled trials were included, involving 2,711 patients with PSD. Meta-analysis showed that the total effective rate was higher with the combined therapy than with antidepressants alone [odds ratio (OR): 4.33; 95% confidence interval (CI): 3.07 to 6.11; p < 0.00001]. The Hamilton depressive scale (HAMD) score was significantly lower with repeated transcranial magnetic stimulation (rTMS) (≤10 Hz) combined with antidepressants than with antidepressants alone [standard mean difference (SMD): −1.44; 95% CI: −1.86 to −1.03; p < 0.00001]. No significant difference was seen for rTMS (>10 Hz) combined with antidepressants versus antidepressants alone (SMD: −4.02; 95% CI: −10.43 to 2.39; p = 0.22). In addition, combination therapy improved the modified Barthel index (MBI) scale more strongly than antidepressants [mean difference (MD): 8.29; 95% CI: 5.23–11.35; p < 0.00001]. Adverse effects were not significantly different between the two therapies (OR: 1.33; 95% CI: 0.87 to 2.04; p = 0.18). Conclusion: Low-frequency rTMS (≤10 Hz) combined with antidepressants tends to be more effective than antidepressants alone in patients with PSD, and there are no significant adverse effects. In addition, combined therapy may enhance quality of life after stroke. Combination therapy with high-frequency rTMS (>10 Hz) showed no advantage in treating PSD. Transcranial electrical stimulation (TES) combined with antidepressants might be more effective than antidepressants alone, but this needs to be confirmed by more clinical trials, since only two of the included studies examined this combination.
INTRODUCTION
Stroke is now the third leading cause of death worldwide (Benjamin et al., 2017). About 795,000 people in the United States experience a new or recurrent stroke every year and, on average, a person has a stroke every 40 s (Benjamin et al., 2019). In addition to dyskinesia, patients with stroke often have psychological and emotional problems. One of the most common psychiatric complications of stroke is poststroke depression (PSD), whose incidence in the first year after stroke is as high as 33% (Hackett and Pickles, 2014). PSD severely affects the rehabilitation process after stroke and also places a heavy burden on patients' families and on society. Despite their prevalence, depression and other mood-related deficits generally receive the least attention. Accordingly, mood disorders need to be addressed during stroke rehabilitation to improve quality of life.
Antidepressants are currently the mainstay of treatment for PSD, but certain adverse reactions are inevitable (Hackett et al., 2008;Coupland et al., 2011). For example, tricyclic antidepressants (TCAs) and selective serotonin reuptake inhibitors (SSRIs) increase the risk of cardiovascular and anticholinergic adverse effects. Fluoxetine has also been reported to be unable to improve PSD symptoms (Robinson et al., 2000;Rice et al., 2021), and some patients with stroke do not respond to antidepressants (Anderson et al., 2004;Hackett et al., 2008). Thus, an effective combination therapy for PSD is urgently required.
Repeated transcranial magnetic stimulation (rTMS) and transcranial electrical stimulation (TES) have been proven to be effective in boosting upper limb rehabilitation and improving aphasia after stroke (Vines et al., 2008; Szaflarski et al., 2011; Hu et al., 2018; Lefaucheur et al., 2020; Kuzu et al., 2021). More and more clinical trials have recently focused on noninvasive brain stimulation (NIBS) treatment of PSD (George et al., 1995; Jorge et al., 2008; Tang et al., 2018), and most have identified positive effects. We have found that the clinical effect of NIBS combined with antidepressants may be better than that of antidepressants alone, and many studies have also mentioned this possibility (Slotema et al., 2010; Brunoni et al., 2013). The current meta-analysis evaluated the efficacy and safety of NIBS combined with antidepressants in the treatment of PSD to provide evidence-based information for clinical decision-making and guideline recommendations.
Search Strategy
Relevant randomized controlled trials (RCTs) of NIBS combined with antidepressants in the treatment of PSD were retrieved from the following databases: PubMed, EMBASE, Web of Science, CNKI, Cochrane Library, Biology Medicine Disc (CBM), and the Wanfang database. The retrieval time was from database establishment to October 2021. Search criteria were formulated according to different databases. The keywords included "noninvasive brain stimulation," "repeated transcranial magnetic stimulation," "transcranial direct current stimulation," "transcranial magnetic stimulation," "antidepressant," "antidepressant drugs," "western medicine," "after stroke," "poststroke," and "depression". Only English and Chinese articles were considered.
Inclusion Criteria
The included literature conformed to the following inclusion criteria: (I) participants: patients diagnosed with PSD, including those with ischemic or hemorrhagic stroke, with no limit on the degree of depression; the diagnosis of PSD met the international classification of diseases (ICD-10) criteria for organic mental disorder (Brämer, 1988) with a Hamilton rating scale for depression (HAMD) score exceeding 7 (Hamilton, 1960), the depressive episode was the first onset, the diagnosis of stroke was confirmed by magnetic resonance imaging (MRI) or computed tomography (CT), and patients presented with low mood, fatigue, and lack of interest; (II) study type: RCT; (III) interventions and comparisons: studies comparing the combination of noninvasive brain stimulation and antidepressants with antidepressants alone, such as fluoxetine, paroxetine, sertraline, fluvoxamine, citalopram, maprotiline, imipramine, amitriptyline, doxepin, and chlorimipramine, with only rTMS and TES considered as noninvasive brain stimulation in this analysis and no frequency limit for rTMS; (IV) primary outcomes: total effective rate and Hamilton depressive scale (HAMD) score; and (V) secondary outcomes: adverse effect rate and modified Barthel index (MBI) scale score.
Exclusion Criteria
Exclusion criteria of this study were as follows (I) language: non-English or non-Chinese studies; (II) study type: not RCTs, such as animal experiments, reviews, retrospective studies, case reports, conference, and comments; (III) duplicate records, those with incomplete, unclear or inconsistent outcomes, or those with missing information that could not be obtained from the authors; and (IV) studies without a control group or with placebo stimulation or NIBS at a different frequency to the control group.
Data Extraction and Management
Two researchers independently searched and browsed the databases according to the retrieval strategy, then carefully read the full articles and extracted the characteristics of the included studies.
Quality Assessment
Cochrane Collaboration's risk of bias tool was used to assess the quality of the included studies. The tool considers six items: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other biases. Each item was judged as one of three levels: low risk, unclear risk, or high risk.
Statistical Analysis
We used RevMan 5.3 software to perform this meta-analysis. The weighted mean difference was used for continuous variables, whereas the odds ratio (OR) was used for dichotomous variables. All data were calculated with 95% confidence intervals (95% CIs). Heterogeneity analysis and sensitivity analysis were also performed using RevMan 5.3. The random-effects model was selected if significant heterogeneity was identified (p < 0.05 or I² > 50%). Subgroup analysis and investigation of heterogeneity within subgroups were conducted when necessary. The fixed-effects model was selected if heterogeneity was low (p ≥ 0.05 or I² ≤ 50%). Reporting bias was assessed by funnel plot, with asymmetry indicating significant reporting bias.
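The pooling logic summarized above can be sketched as follows with inverse-variance weights, Cochran's Q/I² heterogeneity, and a DerSimonian-Laird random-effects fallback. RevMan's Mantel-Haenszel weighting for dichotomous outcomes differs in detail, so this is an illustrative approximation rather than the software's exact method.

```python
import numpy as np


def pooled_or(events_t, n_t, events_c, n_c):
    """Inverse-variance pooled odds ratio with Q, I2 and a DerSimonian-Laird random-effects estimate.

    events_t/n_t: events and totals in the combined-therapy arms; events_c/n_c: control arms.
    """
    a, b = np.asarray(events_t, float), np.asarray(n_t, float) - np.asarray(events_t, float)
    c, d = np.asarray(events_c, float), np.asarray(n_c, float) - np.asarray(events_c, float)
    zero = (a == 0) | (b == 0) | (c == 0) | (d == 0)
    a, b, c, d = (x + 0.5 * zero for x in (a, b, c, d))  # 0.5 continuity correction for zero cells

    log_or = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    w = 1 / var

    fixed = np.sum(w * log_or) / np.sum(w)                       # fixed-effect estimate
    q = np.sum(w * (log_or - fixed) ** 2)                        # Cochran's Q
    k = len(log_or)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0     # I^2 in percent

    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DerSimonian-Laird
    w_re = 1 / (var + tau2)
    random = np.sum(w_re * log_or) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    ci = (np.exp(random - 1.96 * se_re), np.exp(random + 1.96 * se_re))
    return {"OR_fixed": np.exp(fixed), "OR_random": np.exp(random), "CI_random": ci, "I2": i2, "Q": q}
```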
Selection of Results
A total of 555 records were identified in the electronic databases. Of these, 248 records remained after the two researchers read the titles. After the deletion of duplicates and the exclusion of 71 studies due to inconsistent primary standards after abstract screening, the full texts of 50 articles were read for further assessment. Finally, 34 studies were selected for analysis. Figure 1 shows the flow diagram of the article selection. The characteristics of the 34 selected studies (Sun and Song, 2013; Hm, 2014; Ma and Ma, 2015; Wang and Ding, 2015; Xing and Wang, 2016; Tan and Zhou, 2017; among others) are summarized in Supplementary Table S1, including the authors' names, publication year, sample size, participant age, type of stroke, intervention, control, outcome indicators, and stimulation frequency, intensity, orientation, control, and duration. There was no significant difference in the baseline data between the two groups. The quality assessment of the included studies is shown in Figures 2 and 3.
Meta-Analysis Results
The main indicators were the HAMD score and the total effective rate after treatment. The secondary outcome indicators were the MBI score and adverse effects after treatment.
Adverse Effect Rate
The adverse effect rate was reported as an outcome indicator in 12 of the included studies (Zhu, 2018; Zhang, 2019; Yang and Hu, 2020; Wang and Wu, 2020; Tian, 2018; Sun and Song, 2013; Liu, 2015; Liu and Wang, 2020; Li and Liang, 2016; Hl, 2021; Fj, 2020), which involved 981 patients. Heterogeneity analysis of the included articles gave I² = 47% (p = 0.04), and the fixed-effects model was used to combine the results. The meta-analysis showed no significant difference in the adverse effect rate between the two groups (OR = 1.33; 95% CI: 0.87-2.04; p = 0.18) (Figure 7). Adverse reactions mainly included behavioral toxicity and nervous, digestive, and cardiovascular system abnormalities. Behavioral toxicity included somnolence and epilepsy; nervous system abnormalities commonly included headache; and digestive system abnormalities included nausea, vomiting, and indigestion. In the NIBS combined with antidepressant group, 36 patients had headache, three had insomnia, three had thirst, eight had nausea, 12 had vomiting, and two had cardiovascular system abnormalities. In the antidepressant group, four patients had headache, three had insomnia, four had thirst, five had nausea, 13 had vomiting, one had fatigue, and one had cardiovascular system abnormalities.
Sensitivity Analysis
Sensitivity analyses of each outcome indicator were performed by excluding individual articles one by one to test the effect of each study on the pooled effect size. In the meta-analysis of the HAMD score, heterogeneity decreased from 92% to 34% after deleting the study by Liu FJ from 2020 (Fj, 2020), indicating that this heterogeneity was mainly attributable to that study. There was no qualitative change in the combined effect for any outcome indicator. Thus, the pooled results of the included studies were stable.
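The leave-one-out procedure described above simply re-runs the pooling with each study removed in turn. Reusing the pooled_or helper sketched earlier (and assuming dichotomous outcome data; the same idea applies to the continuous HAMD outcome with inverse-variance weights on the SMD), it could look like this; the study labels are illustrative.

```python
def leave_one_out(studies):
    """studies: list of dicts with keys 'label', 'events_t', 'n_t', 'events_c', 'n_c'.

    Requires the pooled_or helper from the previous sketch to be in scope.
    """
    results = {}
    for i, left_out in enumerate(studies):
        rest = studies[:i] + studies[i + 1:]
        pooled = pooled_or([s["events_t"] for s in rest], [s["n_t"] for s in rest],
                           [s["events_c"] for s in rest], [s["n_c"] for s in rest])
        results[left_out["label"]] = (pooled["OR_random"], pooled["I2"])
    return results  # inspect how the pooled OR and I2 shift when each study is excluded
```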
Publication Bias
Funnel plot analysis was used to assess publication bias for the HAMD score, total effective rate, and adverse effects. There was no obvious publication bias for the total effective rate or adverse effects. For the HAMD score, poor symmetry of the funnel plot indicated publication bias attributable to the study by Liu LB from 2020 (Liu and Wang, 2020). After deleting this study, the combined effect was unchanged, but the total heterogeneity decreased to 79%. The publication bias results for the HAMD score analysis are shown in Figure 8.
DISCUSSION
Our meta-analysis included 34 studies of the effects of NIBS combined with antidepressants for patients with PSD. The results showed that the combination of NIBS and antidepressants might have a better effect on PSD and could improve the depression scale score and quality of life compared with antidepressants alone. Guidelines recommend rTMS for the treatment of major depression, and many meta-analyses have shown that TMS intervention for PSD is beneficial (Shen et al., 2017; Liu et al., 2019; Shao et al., 2021), but a growing number of studies have recommended multi-module combination therapy and population-specific personalized treatment (Nestor and Blumberger, 2020), which warrants further research on the frequency and site of TMS intervention. In a study by Schaffer et al., low-frequency TMS significantly improved depression with cognitive impairment, suggesting that low-frequency TMS is more effective in specific populations (Schaffer et al., 2021). Compared with previous reviews (Bucur and Papagno, 2018; Liu et al., 2019), our analysis had the following advantages: (I) it examined combined NIBS and antidepressant therapy; (II) it used internationally recognized depression assessment scales; (III) its results differed from previous studies owing to the negative outcomes of high-frequency TMS combined with antidepressants; and (IV) it included more than 30 studies. No previous meta-analysis has evaluated NIBS in combination with antidepressants for PSD, although combination therapy is more clinically appropriate. Therefore, this meta-analysis may have greater reference value than previous reviews. According to the results of this analysis, combined NIBS and antidepressant therapy reduced the HAMD score of PSD more than antidepressants alone. However, this result was highly heterogeneous. We grouped the studies by a variety of clinically relevant factors, including age, intervention frequency and intensity, drug type, type of stroke, and stimulation orientation and duration. In the final analysis, rTMS combined with fluoxetine (less than 1 Hz and between 5 and 10 Hz) was more effective than fluoxetine alone, but the effect was not better at frequencies exceeding 10 Hz. TES combined with antidepressants improved the HAMD score more than antidepressants alone, although only two included articles examined this combination. After deleting the study by Liu FJ from 2020 (Fj, 2020) due to its high heterogeneity, rTMS combined with paroxetine was also more effective in reducing the HAMD score. This study by Liu FJ (Fj, 2020) was probably a retrospective study, given its vague description, and was excluded from the pooled effect. Arns et al. (2010) also commented that different rTMS stimulation frequencies may have differential effects, although many studies have concluded that high-frequency rTMS has the same effect as low-frequency rTMS or antidepressants (Berlim et al., 2013).
The results of 17 studies (Zhu, 2018; Zhang, 2019; Xy, 2014; Xing and Wang, 2016; Wei, 2021; Wang and Qin, 2020; Hm, 2014; Hl, 2021; Wang and Li, 2019; Tian, 2018; Lu and Yang, 2016; Liu and Wang, 2020; Li and Chen, 2019; Li and Liang, 2016; Cheng, 2011; Fj, 2020) also indicated that NIBS combined with antidepressants was better than antidepressants alone regarding the total effective rate. Moreover, for the MBI score, seven studies (Xy, 2014; Xj, 2018; Tan and Zhou, 2017; Li and Pan, 2013; Hl, 2021; Cheng, 2011) showed that the combination therapy has potential benefits in patients with PSD. Combination therapy may thus be able to improve quality of life after stroke. Because only a few included articles reported MBI scores, meta-regression was not conducted. Subgroup analyses were added based on clinical characteristics, including frequency, intensity, and location of intervention, degree of depression, and course of disease, but heterogeneity still could not be reduced to a reasonable range. Sensitivity analysis identified no single article causing the high heterogeneity, so a random-effects model was adopted; this result is stable and conservative. Some studies (Zhu, 2018; Zhang, 2019; Yang and Hu, 2020; Ma and Ma, 2015; Fj, 2020) have reported headache, nausea, vomiting, insomnia, thirst, and fatigue in both control and experimental groups; these adverse reactions may be caused by the antidepressants. Twelve studies (Zhu, 2018; Zhang, 2019; Yang and Hu, 2020; Wang and Wu, 2020; Tian, 2018; Sun and Song, 2013; Liu, 2015; Liu and Wang, 2020; Li and Liang, 2016; Hl, 2021; Fj, 2020) demonstrated consistent and stable results for adverse reaction rates, suggesting that combined NIBS and antidepressant therapy is safe. The relative advantages and disadvantages of different NIBS frequencies remain contradictory, and the effects of different frequencies are still disputed. Different frequencies of rTMS have been shown to reduce fluorodeoxyglucose F18 (18F-FDG) uptake in the dorsal cortical region while simultaneously increasing 18F-FDG uptake in the ventral region (Parthoens et al., 2016). rTMS decreased glucose metabolism in the stimulated temporal region, with increases in the bilateral precentral, ipsilateral superior and midfrontal, prefrontal, and cingulate gyri, suggesting that 1 Hz rTMS could induce cortical regulation and extensive changes in the neural network through long-range neuronal connectivity (Lee et al., 2013). Studies have also shown that low-intensity TMS mainly stimulates low-threshold inhibitory neurons (Duan et al., 2018). High-frequency TMS caused greater activation than low-frequency TMS in healthy humans. However, oxidative stress, lipid peroxidation, and protein oxidation have been found in the neural tissue of stroke patients, and any of these pathophysiological processes may be related to PSD (Nabavi et al., 2015). Kimbrell et al. (1999) also reported that the antidepressant response to rTMS might depend on pretreatment cerebral metabolism and the stimulation frequency. Thus, patients with PSD may be more sensitive to low-frequency TMS. Given the abnormal expression of amine neurotransmitters and cytokines after stroke, the combination of antidepressants with the mild stimulation of low-frequency rTMS may be more effective.
The mechanism of PSD is still unclear and may involve neurobiological pathways as well as inflammation and apoptosis mechanisms (Robinson and Jorge, 2016; Medeiros et al., 2020). Robinson and colleagues (Robinson et al., 1984; Narushima et al., 2003) suggested that lesions in the left frontal lobe or left basal ganglia were associated with PSD, and focal brain stimulation using rTMS was only effective when administered to the left dorsolateral prefrontal cortex in patients with vascular depression (Jorge et al., 2008). A meta-analysis (Carson et al., 2000) showed that stroke site was not associated with depression, whereas one study (Wei et al., 2015) suggested a significant association between stroke in the right hemisphere and the incidence of depression. Some studies have hypothesized that the stroke lesion area is related to the degree of depression, which could be explained by pro-inflammatory factors (Spalletta et al., 2006). For example, an increase in IFN-γ leads to a cascade of other pro-inflammatory cytokines (IL-6, IL-1β, and TNF-α), which aggravates depression. Secondly, IFN-γ can affect the HPA axis (Capuron et al., 2003), leading to increased adrenocortical hormone and cortisol levels and thus increased reactive oxygen species (Altieri et al., 2012; Ferrari and Villa, 2017), which further cause cell damage and death. Pro-inflammatory factors also stimulate the activity of indoleamine 2,3-dioxygenase, which degrades tryptophan, the biological precursor of serotonin, into a toxic metabolite (Bansal and Kuhad, 2016). Compared with common depression, PSD is associated with focal ischemia, which leads to programmed cell death, cell swelling, or cell necrosis and a series of complex events involving cellular and molecular mechanisms (Brouns and De Deyn, 2009). Whether the neuroanatomical location of stroke affects depression remains controversial. It also remains unknown whether the severity of stroke is positively correlated with the severity of depression, or whether depression differs at different times after stroke. It is hoped that more RCTs will be designed in this direction in the future.
LIMITATIONS AND PROSPECTS
This meta-analysis emphasized the clinical efficacy and improvement of depression achieved by combination therapy in PSD patients, and it also examined quality of life and safety. However, all included RCTs were from China, which may indicate publication bias. Funnel plot analysis revealed that the study by Liu LB (Liu and Wang, 2020) contributed substantially to publication bias, owing to selective reporting of outcomes; accordingly, the corresponding result should be treated with caution. This meta-analysis was not registered, which may introduce some deviation, but we still strictly followed systematic review procedures. In addition, some indicators showed significant heterogeneity, so caution is required when interpreting these findings. More basic studies are needed to determine the mechanism underlying the effect of low-frequency TMS combined with antidepressants on depression after stroke. Moreover, large multicenter studies are needed to establish the optimal stimulation frequency and type of antidepressant, to promote the translation of combination treatment into daily clinical practice and guidelines.
CONCLUSION
Our analysis demonstrates that low-frequency rTMS (≤ 10 Hz) combined with antidepressants tends to be more effective than antidepressants alone in patients with PSD, with no significant adverse effects. In addition, combined therapy may boost quality of life after stroke. Combination therapy with high-frequency rTMS (> 10 Hz) showed no advantage in treating PSD. Transcranial electrical stimulation (TES) combined with antidepressants may be more effective than antidepressants alone. More randomized controlled studies with detailed designs for different stroke periods, depression levels, and stroke locations are needed to verify these conclusions.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
HC conceived and designed the study. JL searched the studies and performed the meta-analytic statistics. JL, JF, and YJ searched the studies, performed the data extraction, and prepared the tables and figures. JH was involved in the study selection and did the data extraction. HZ prepared the references and provided detailed critical comments. JL and JF wrote the manuscript. All authors approved the final version of the manuscript. | 2022-05-19T13:34:45.453Z | 2022-05-19T00:00:00.000 | {
"year": 2022,
"sha1": "6e1f64fa01a371eb07e9f6d4e6836f4e7f2b83da",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "6e1f64fa01a371eb07e9f6d4e6836f4e7f2b83da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18132698 | pes2o/s2orc | v3-fos-license | The immunological identity of tumor
By means of well-characterized autoimmunity models, we comparatively probed the “selfness” of malignant cells and their normal counterparts. We found that tumors activate self-tolerance mechanisms much more efficiently than normal tissues, reflecting a status of immunoprivileged “self.” Our findings indicate that potent autoimmune responses can eradicate established malignancies, yet the collateral destruction of healthy tissues may prove difficult to circumvent.
Recent clinical trials testing immunotherapeutic anticancer regimens have generated exciting results. [1][2][3] The ultimate success of such interventions, however, will likely depend on the immunological identity of tumors. Adaptive immunity is characterized by fine specificities, owing to a lymphocyte repertoire that is capable of discriminating "self" from "nonself" tissues. Tumors pose a dilemma for this dichotomy. Cancer cells indeed originate from the malignant transformation of healthy cells, i.e., they have a self origin. However, neoplastic cells are also characterized by genomic instability 4 and hence presumably generate an array of new antigens (neoantigens) that may not be perceived as self by the immune system. A long-standing premise of tumors as "altered self" entities posits that malignant cells bear sufficient antigenic changes to elicit immunosurveillance. 5 However, the identification of bona fide tumor-specific antigens (TSAs) in humans is difficult, and the clinical benefits of anticancer immunotherapy are often paralleled by robust autoimmune reactions, 6 suggesting that tumor cells, no matter how malignant they are, remain for the most part self entities.
To examine how immune effectors specific for self antigens deal with tumors, we used CD4 + or CD8 + effector T (T eff ) cell clones that are fully capable of driving spontaneous autoimmune responses. 7 These CD4 + and CD8 + autoimmune T eff cells were tested in vivo for their efficacy against insulinoma or lymphoma cells as well as against normal cells expressing the same antigens within the same animals. A few observations from this study have profound implications for anticancer immunotherapy. First, autoimmune T eff cell clones were able to eradicate established tumors even in the presence of myeloid-derived suppressor cells (MDSCs), provided that immunosuppressive cells of the adaptive immune system were absent. Second, a suboptimal fraction of self antigen-specific, Foxp3 + regulatory T (T reg ) cells that failed to protect normal tissues from autoimmune T eff cells was sufficient to exert prominent immunosuppressive effects blocking tumor-targeting immune responses, in both adoptive T-cell transfer and acute T reg depletion experiments. Third, in an adoptive T-cell transfer setting, the depletion of cytotoxic T lymphocyte antigen 4 (CTLA4) by RNA interference (RNAi) could substantially boost the efficacy of autoimmune T eff cells against tumors. 7 We concluded that tumor represents an immunoprivileged self entity, based on the observation that malignant cells could employ self-tolerance mechanisms more efficiently than their normal counterparts to avoid autoimmune responses. 7 The concept of immunoprivilege has long been used to explain the status of increased protection from immune responses exhibited by a few critical organs, such as the brain, eyes and testes. The traditional view of immunoprivilege involved the exclusion of immune cells from the privileged sites. However, recent studies have demonstrated that immunoprivileged tissues rather exhibit increased levels of immune regulation. 8 Along similar lines, it would be tempting to speculate on the existence of an exclusion-based immunoprivilege for some types of cancer, e.g., lung carcinoma, and an immunoprivilege mainly mediated by in situ immune regulation for other neoplasms, e.g., melanoma.
Of note, a large body of evidence from experimental tumor models indicates that cancer-specific immunity can be readily achieved, and that antitumor immune responses can eradicate neoplasms in the absence of prominent autoimmune reactions (reviewed in ref. 9). Our study does not contradict these findings. 7 Its focus was indeed to test how potent autoimmune T cells respond to an established tumor, beginning from when the tumor size is very small, and our experiments did not address the potential role of autoimmune T eff cells in immunosurveillance at oncogenesis. Thus, the study was not a direct refutation of the "altered self" view or the immunosurveillance hypothesis. 5 Likely, both a situation of "altered self" and one of "immunoprivileged self" could be represented in the natural history of spontaneous tumors.
Figure 1. Tumor as an "altered self" or "immunoprivileged self" entity. The hypothesis that self epitopes are abundant in the antigenic repertoire of tumor cells is based on the facts that tumor-specific antigens (TSAs) are difficult to identify and that antitumor immune responses often target self antigens. Blue dashes depict the immunosuppressive microenvironment that is often associated with tumors. Oval areas reflect overall tumor burdens and do not necessarily represent individual tumor sites. Ab, antibody; TIC, tumor-initiating cell.
Nevertheless, the premises of tumor as an "altered self" or an "immunoprivileged self" entity have distinct implications for antitumor immunity and immunotherapy (Fig. 1). On one hand, according to the "altered self" view, genetic changes in tumor-initiating cells (TICs) generate an array of neoantigenic epitopes. Tumors evade the attack of the immune system by establishing a microenvironment constituted by immunosuppressive cells and factors; targeting tumor-specific antigens while blocking immunosuppressive factors can reduce the tumor burden and eventually eradicate neoplastic lesions. On the other hand, according to the "immunoprivileged self" view, despite substantial genetic and epigenetic changes, neoantigens would account for a minimal fraction of the antigenic repertoire of TICs as compared with self antigens. Thus, established tumors are largely "self" in their immunological identity. Furthermore, immunosuppressive elements orchestrated by self antigen-specific Treg cells form a local microenvironment that can inhibit even potent autoimmune responses. In this setting, neoantigen-specific antitumor immunity edits the antigenic identity of neoplasms to limited extents, leaving the tumor's immune privileges untouched.
Is cancer an immunological problem or an oncological one? 10 The "immunoprivileged self" hypothesis would suggest that cancer is an immunological problem at its root, yet the eradication of this problem would be beyond the reach of immunology in the absence of oncological interventions. "But the worst enemy you can meet will always be yourself…," as the nineteenth-century German philosopher Friedrich Nietzsche wrote in Thus Spoke Zarathustra, which also stated, "You must be ready to burn yourself in your own flame…." Tumor as an "immunoprivileged self" entity may constitute the worst possible challenge for the immune system. Autoimmune inflammatory reactions could be effective as the body's own "flame," but only if the "burns" are not life-threatening. Therefore, the impact of immunotherapy by itself may be limited, unless the tumor antigenic repertoire is substantially altered or its immunoprivilege is eliminated by physical interventions such as surgical removal, radiation therapy, or chemotherapeutic agents.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed. | 2018-04-03T01:01:45.002Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "4ee073d642223b6fd50e1092715c86481b6e801b",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/onci.23794?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ee073d642223b6fd50e1092715c86481b6e801b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221372724 | pes2o/s2orc | v3-fos-license | Acute phase protein expressions in secretory and cistern lining epithelium tissues of the dairy cattle mammary gland during chronic mastitis caused by staphylococci
Background Mastitis is the most common disease in dairy cattle and the costliest for the dairy farming industry, as it lowers milk yield and quality. Mastitis occurs as a result of interactions between microorganisms and the individual genetic predispositions of each animal. Thus, it is important to fully understand the mechanisms underlying these interactions. Elucidating the immune response mechanisms can determine which genetic background makes an animal highly resistant to mastitis. We analyzed the innate immune responses of dairy cows naturally infected with coagulase-positive staphylococci (CoPS; N = 8) or coagulase-negative staphylococci (CoNS; N = 7) causing persistent mastitis (after several failed treatments) versus infection-free (i.e., healthy [H]; N = 8) dairy cows. The expression of the acute phase protein genes serum amyloid A3 (SAA3), haptoglobin (HP), and ceruloplasmin (CP) was analyzed in the tissues most exposed to pathogens: mammary gland cistern lining epithelial cells (CLECs) and mammary epithelial cells (MECs). Results We found constitutive and extrahepatic expression of the studied genes in both tissue types. HP expression in the MECs of the CoPS-infected group was higher than in the H group (p ≤ 0.05). Moreover, higher SAA3 expression was found in the CoPS and CoNS groups than in the H group (p = 0.06 and 0.08, respectively). No differences in SAA3 or HP expression in CLECs were revealed, regardless of the pathogen type. However, higher expression of CP (p ≤ 0.05) was noted in the CoPS group than in the H group. Conclusions The expression of the selected acute phase proteins was similar between CLECs and MECs, which means that CLECs are not only a mechanical barrier but also contribute to the biological immune response. Our findings agree with the results of other authors describing the immunological response of MECs during chronic mastitis, but the results for CLECs are novel.
Background
Over the course of many years, breeding programs in the dairy industry have selected mainly for the traits of high milk yield and quality. This selection has resulted in highly productive dairy cattle; however, these animals are prone to many infections, especially those related to the mammary gland. Bovine mastitis is the most common disease in dairy cattle worldwide and is the costliest disease for the dairy industry [1]. Very complex processes occur during mastitis. Briefly, the disease is present when microbes overcome anatomical barriers, enter the udder, and activate cellular and soluble factors within the mammary gland. Occasionally, mastitis symptoms may occur in response to chemical, mechanical, or thermal trauma to the udder [2]. Moreover, toxins released by some bacteria damage the milk-secreting tissue and milk ducts, resulting in reduced milk yield and quality [3]. This damage can even lead to animals becoming unable to produce milk, which in turn results in animal culling [4].
The major role of lymphocytes, macrophages, neutrophils, and natural killer (NK) cells in response to mastitis has been well recognized. Mammary epithelial cells (MECs), which form the secretory tissue, also play a very well-known immunological role during udder inflammation. MECs are responsible for secreting a number of factors related to the host's defense against pathogen invasion in ruminants (e.g., lactoferrin and antimicrobial peptides) [8,9].
However, the udder's first line of defense against pathogens is the lining epithelial cells, which line the whole teat canal and gland cistern. Rich in keratin, these cells are also located around the teat canal end and form the keratin plug that protects the teat canal entrance from pathogen intrusion; however, up to half an hour after milking, the canal stays open. The cells of this lining epithelial tissue are tightly connected, building a strong mechanical barrier that prevents pathogens from passing through. Yet, despite their importance, there is still very limited information about cisternal lining epithelial cells (CLECs) as a biological barrier. Suppression of a bacterial infection often depends on a quick immune response, mostly by the innate immune system, which occurs within hours after pathogen entrance [10]. Thus, if the CLECs release antimicrobial agents, such as acute phase proteins (APPs), their secretion at the initial step of the infection probably helps protect the udder against pathogens.
APPs are one of the organism's first lines of defense against infection during the systemic reaction to inflammation. They belong to a heterogeneous group of proteins in terms of their structure, function, and mode of action, and are produced mainly in the liver. Moreover, their properties can differ significantly: some are antiinflammatory, while others are pro-inflammatory. The concentration of these proteins is substantially altered during inflammation, trauma, or infection (i.e., acute phase reaction) as a result of complement system activation or after the release of various pro-inflammatory mediators [11]. Human APPs are the most recognized of these proteins, and they can be divided, in general, into two groups: 1) positive APPs, whose concentration increases after trauma, e.g., serum amyloid A (SAA), haptoglobin (HP), ceruloplasmin (CP), fibrinogen (FB), C-reactive protein (CRP), lipopolysaccharide binding protein (LBP), ferritin (FT), or lactoferrin (LF), and 2) negative APPs, whose reaction is opposite during acute phase reaction, e.g., albumins, transferrin (TF), or transthyretin (TTR). However, for different animal species, various other proteins have been recognized as APPs [12]. As of now, the following APPs have been recognized for cattle: SAA, HP, CP, FB, CRP, LBP, FT, LF, bovine cluster of differentiation 14 (CD14), and calcitonin gene-related peptide (CGRP) during different diseases, such as mammary gland infections, uterine infections, lameness, or fatty liver syndrome [12]. SAA and HP are the most valuable biomarkers of diseases, especially during mastitis [13]. CP is very well known as an inflammatory indicator in cattle, as it protects tissues from iron-mediated free radical injury [14]. It has been assessed as a marker of animal welfare and health. Moreover, researchers have also recognized its role during mastitis [12].
The aim of the study was to determine the expression of the SAA3, HP, and CP genes in MECs and CLECs during chronic subclinical mastitis caused by coagulase-positive staphylococci (CoPS) and coagulase-negative staphylococci (CoNS), compared with bacteria-free (healthy, H) udder samples, in order to compare the immune response of both tissues to staphylococcal infection.
Results
In this study, we found constitutive expression of the studied genes within the analyzed tissues. Transcripts for all of the studied genes were found even in the infection-free samples of both types of tissues. However, HP expression in the MEC samples infected with CoPS was approximately four times higher than that of the H group (p ≤ 0.05). Moreover, we found an approximately 1.8-fold higher expression of SAA3 in CoPS and an approximately 1.5-fold higher expression in CoNS than in the H group (p < 0.06 and p < 0.08, respectively). No differences in CP expression in MECs were found, regardless of the animal's health status (Fig. 1).
Furthermore, no differences in SAA3 or HP expression in CLECs were revealed, regardless of the pathogen type. However, we found a higher expression of CP (p ≤ 0.05) in the CoPS-infected samples than in the H samples (approximately 25 times higher) (Fig. 2).
In the comparison of the selected APPs' expression levels between CLECs and MECs for a given pathogen type, no differences in SAA3, HP, or CP expression were found.
However, strong positive correlations between the mRNA levels of the CP and SAA3 (p ≤ 0.01) and the CP and HP (p ≤ 0.01) genes in CLECs were found, while no correlation between the transcript levels of the SAA3 and HP genes was observed (Table 1). Similar observations were made in MECs, i.e., positive correlations between the SAA3 and CP gene mRNA levels (p ≤ 0.05) and between the CP and HP gene expression levels (p ≤ 0.01), with the addition of a positive correlation between the SAA3 and HP expression levels (p ≤ 0.01). Moreover, an analysis of the selected APP gene expression patterns revealed a negative correlation for CP and HP between the two tissues (p ≤ 0.05).
Discussion
The higher expression of the HP gene in MECs infected with CoPS compared to that of the H group (p ≤ 0.05) may suggest that HP plays an important role as a major APP in cattle during chronic mastitis, which is consistent with results obtained by another research team [13]. The main role of HP is the binding of free hemoglobin. Iron-utilizing pathogens have developed many mechanisms for extracting iron from free hemoglobin. The binding of hemoglobin protects iron ions from being used by harmful bacteria. Moreover, free hemoglobin is highly toxic to tissues due to the lipophilic character of heme (i.e., it interacts with the cellular lipid bilayer), and the iron present in heme facilitates the generation of reactive oxygen species [15]. Our research illustrates the extrahepatic nature of HP synthesis by MECs, a finding that is in line with other scientific reports [16][17][18].
The higher expression of the SAA3 gene in samples infected with CoPS and CoNS compared to the control group may imply a crucial role for SAA, together with HP, during chronic mammary gland inflammation. Although the main roles of SAA3 are the binding and transportation of lipoproteins, it also plays an important role in the immune system; e.g., SAA3 activates neutrophils and macrophages and their migration, stimulates T cell adhesion, participates in monocyte chemotaxis, and also facilitates lymphocyte and endothelial cell proliferation [19]. The elevated level of SAA3 and HP mRNA transcripts in MECs during mastitis found in our study is in accordance with a study conducted by Eckersall et al. [20]. That team found increased HP and SAA3 transcript levels, at the trend level, in mammary gland tissue, specifically in MECs as well as in CLECs, after an experimental infusion of S. aureus (the animals were kept for about 30 days and then euthanized 48 h after the last bacterial infusion). Furthermore, in their immunocytochemical study, the researchers found a higher concentration of SAA3 within the infected tissues.
We have shown the presence of CP mRNA transcripts within both infected and bacteria-free samples, which implies the constitutive expression of this gene. However, in CLECs, a higher expression level within tissue infected with CoPS compared to H was found (p ≤ 0.05), while no differences in MECs were observed. The presence of mRNA transcripts of this gene in both analyzed tissues indicates its extrahepatic expression; however, although several studies have reported CP expression in mammary gland secretory tissue, the liver remains the main source of this APP [19]. The higher level of CP transcription in the CLECs of the CoPS vs. the H samples may suggest an increased presence of reactive oxygen species during this type of infection [21]. The elevated level of HP and CP gene expression within the tested tissues may be related to their main role, which is the binding of free iron ions. As part of the immune response, these proteins keep iron ions tightly bound, which inhibits pathogens from utilizing the iron, thus protecting the organism. This function prevents bacteria from propagating, as they need iron to grow in the udder [12]. An elevated level of these proteins during chronic infections could be related to their protective role against tissue damage caused by pathogens. Moreover, CP exhibits oxidase activity, e.g., catechol or amine oxidase activity, toward different substituted organic compounds. In addition, this protein may act as a scavenger of reactive oxygen species, such as singlet, superoxide, and hydroxyl radicals [21].
In our study, all tested samples from the CoPS and CoNS groups were obtained from dairy cattle suffering from naturally infected mammary glands after several failed antimicrobial therapies. The animals from the experimental groups were culled because of recurrent, chronic, and incurable udder inflammation. Therefore, the obtained results show the importance of SAA3 and HP during persistent infection in MECs and of CP in CLECs. During acute inflammation, the concentration of APPs rises up to 100-fold in the first 48 h, while during chronic inflammation, which usually follows acute inflammation, the concentration of APPs is only about 10-fold higher than in healthy tissues, as the concentration decreases after the acute phase [22]. These findings are broadly consistent with our results; however, the approximately 24-fold higher expression of the CP gene in CLECs infected with CoPS exceeds this expectation. Other research teams have obtained results similar to ours, but considering the unique character of our analysis, it was challenging to find research data on mRNA transcript levels to compare with ours. However, some studies have been conducted at the protein level. Moreover, it was difficult to find any other studies with the same animal model as the one we used. Horadagoda et al. [23] observed elevated levels of SAA3 and HP proteins in the blood serum of cows diagnosed with different types of chronic, as well as acute, inflammation (including mastitis), showing higher SAA3 and HP concentrations during the acute state compared to the chronic one. However, some of our results differ from the literature because of differences in the research model. In general, most knowledge about APPs has been gained from laboratory-induced models, and much of the research has been conducted over a short period of time, e.g., 24-72 h after bacterial challenge.
MECs are very well known for their biological role during mammary gland inflammation. This secretory tissue is not only responsible for milk secretion but is also, upon interaction with invading bacteria, able to produce different pro-inflammatory or anti-inflammatory mediators such as cytokines, chemokines, and APPs, as well as antimicrobial peptides and proteins (β-defensins, cathelicidins, and LF) [24,25]. Moreover, these cells may be involved in recruiting neutrophils and lymphocytes to milk [26]. In contrast, CLECs, present in the teats, milk cistern, and ducts, have always been considered to form a strong mechanical barrier against bacterial invasion of the gland due to the strong and tight connections between cells. This is the very first tissue that comes into contact with pathogens invading the udder and is the most exposed to contact with microorganisms. Knowledge of the role of CLECs in the innate immune system is very limited. A number of studies have revealed that other cell types are involved in the immune activity within the udder, such as MECs [27,28], and probably CLECs as well, as shown in our study.
Our research shows elevated expression of SAA3 (p < 0.1) in MECs for the CoPS and CoNS groups compared to the H group, which may suggest a comparable role of this APP, regardless of pathogen type, during chronic infection caused by Gram-positive bacteria. This finding may indicate that SAA3 is among the main APPs in dairy cattle. However, our study revealed high individual variation within the groups; thus, further research with larger experimental groups is needed to fully verify this hypothesis.
CoPS have always been considered major pathogens. The pathogenicity of this group of bacteria is related to the virulence factors of the causative agent, and eventually, resistance to commonly used antibiotics develops [29]. In contrast, until recently, CoNS have been regarded as environmental microorganisms that are harmless until specific conditions occur (e.g., a temporary weakening of the immune system). Formerly, mastitis caused by CoNS was often left untreated because the spontaneous cure rate of this condition is considered high (16-70%) [30]. However, recently scientists have started to recognize CoNS as a potential threat to animal health. Infection caused by CoNS could be persistent during lactation, similar to mastitis caused by CoPS [31]. It should be stressed that CoNS have been found to be the most common udder infection-causing bacterial pathogen isolated from milk samples, thus they could be described as emerging pathogens (despite their initial classification) [6]. Moreover, CoNS infection is usually connected with a mild increase in somatic cell count, while CoPS bacteria usually cause a much higher increase. However, as it was mentioned above, an organism with strong immunity is able to cope with CoNS by itself, while it is usually not able to fight CoPS bacteria, especially S. aureus [1,6].
In summary, we observed a higher expression level of CP within the CLECs of the CoPS samples, as well as a higher level of HP mRNA transcription within the MECs of the CoPS samples compared to those of the H group (p ≤ 0.05), which may suggest different infection and iron acquisition mechanisms in these two groups of pathogens. Almost all living organisms require iron to live, and staphylococci are no exception. There is a lack of easily accessible iron in vertebrate tissue due to the presence of high-affinity iron-binding proteins, such as TF and LF. However, some S. aureus isolates may exhibit hemolytic activity to obtain iron from heme. These strains grow well under iron-restricted conditions due to their ability to produce siderophores (high-affinity iron-binding molecules). Conversely, only a small number of CoNS isolates produce siderophores, and most grow poorly in an iron-limited environment [32]. Thus, during mastitis caused by staphylococci, there is an urgent need to protect iron from being scavenged by bacteria, and this is probably the reason for the elevated expression of the genes encoding iron-binding APPs in the CoPS group in our study.
We did not note any differences between MECs and CLECs in the transcript levels of the selected APPs; the expression of the selected APP genes was comparable in both analyzed tissue types for the same pathogen. This finding may suggest that both tissues react similarly during mastitis. It may also imply that during chronic mammary gland inflammation, when bacteria are present for a longer time in the whole udder, the tissues produce different molecules, such as APPs, to combat the infection and reduce pathogen propagation; however, these molecules are produced at a lower level than during acute inflammation to ensure that the host's cells are not permanently harmed. These findings are in line with various publications describing the immunological response of MECs during mammary gland inflammation [10,17,24,25], but the results for CLECs obtained in this study have not been described before.
The strong positive correlations between the transcripts of the CP and SAA3 genes and between those of the CP and HP genes in CLECs imply the combined action of their encoded proteins against infection within these cells during chronic mastitis. Similarly, the correlations between the mRNA levels of the CP and SAA3, the CP and HP, and the SAA3 and HP genes within MECs may suggest the combined action of the genes' protein products to combat persistent inflammation, especially since the proteins encoded by these genes participate in iron metabolism. However, the negative correlations found for CP and HP gene expression between the two studied tissues may suggest their different iron-binding properties during bacterial infection in these two tissues.
It should be noted that this study was conducted on a relatively small group of animals (23 cows, with two types of tissue per cow). However, the cows were homogeneous in terms of breed, keeping conditions, and stage of lactation. During the study, only the two types of staphylococci were present; no E. coli or streptococci were detected. The cows were naturally infected with staphylococci and culled because of recurrent incurable mastitis. Thus, the time since infection was different for each cow, and the animals had undergone a different number of therapies. Moreover, the study was focused specifically on the chronic form of the inflammation; thus, cows with clinical mastitis were excluded from the experimental groups to eliminate acute responses to infection. However, all cows were assigned to their experimental groups approximately 1 month after their last therapy, while their somatic cell count in milk remained elevated: the median value was 2.44 × 10⁶/mL for CoPS and 4.61 × 10⁵/mL for CoNS.
Conclusions
In summary, in this study, the constitutive and extrahepatic expression of the studied APP genes was demonstrated. We also found that the transcript levels of the selected APPs were similar between CLECs and MECs, which suggests that CLECs not only act as a mechanical barrier but also play an important biological role during chronic mastitis. Thus, APPs secreted by these cells may play a crucial role in the protection of the udder against pathogens.
Animals and tissue samples
The study was conducted on 23 Polish Holstein-Friesian dairy cows of the Black-and-White variety, which were maintained on the Experimental Farm at the Institute of Genetics and Animal Breeding in Jastrzębiec, near Warsaw, Poland. The cows were between their first and fourth lactation, and those from the experimental groups had naturally occurring chronic mastitis. The conditions in which the cows were kept were previously described by Kościuczuk et al. [25]. The herd participates in the milk recording system, and complete information was available on the milk parameters of the animals, including somatic cell count, as well as on the number of treatments of each cow over the course of her life. All animals were under constant veterinary supervision. According to herd management, the animals were culled due to recurrent udder problems, diagnosed by a veterinarian based on clinical or subclinical signs of mastitis (flakes or clots in milk, somatic cell count) and milk microbiological examination, and after several failed therapies with antimicrobials (elevated somatic cell count during the entire last lactation despite therapies). The samples for the experimental groups were derived from these animals; however, those with clinical signs of mastitis (acute inflammation) were excluded from the study. Eight cows were culled due to reproduction problems (no pathogenic bacteria in milk and a somatic cell count below 15 × 10⁴ cells/mL during the whole lactation). The samples derived from these animals served as the control group. All cows were culled at the end of lactation (approximately 280 days, SD = 25). They were slaughtered at a registered, certified slaughterhouse under constant monitoring by the authorities, at least 1 month after their last antimicrobial administration, using a two-stage non-ritual slaughter process: electrical stunning to render the animal unconscious, followed by exsanguination. MEC and CLEC samples were obtained immediately after the animals were slaughtered. CLEC samples were taken from the bottom part of the gland cistern, in close proximity to the teat cistern. The MEC samples were collected from deep inside the secretory tissue of the gland. One sample of each tissue type, taken from a single udder quarter of each cow, was used in the present study. The samples were immediately washed in ice-cold phosphate-buffered saline (PBS; pH 7.2) to remove any blood and milk contamination and then were instantly frozen in liquid nitrogen. Tissue samples were stored at −80 °C prior to subsequent analysis. The samples were derived from all cows culled from the herd in 2010-2013, and all eligible samples were included in the analysis. The assessors were blinded at all stages of the methodological process to limit the occurrence of conscious and unconscious bias in the conduct of the experiments and the interpretation of outcomes.
Microbiological analysis of milk
The samples of 'first milk' were collected aseptically from each quarter of the udder 2 days before slaughter and were tested for the presence of bacteria. To identify microorganisms, 100 μl of the milk sample was inoculated on Columbia agar supplemented with 5% sheep blood (BioMaxima, Lublin, Poland). The plates were incubated at 37 °C for 24 h. All isolates were assessed for phenotype, colony morphology, and biochemical properties (API, analytical profile index; bioMérieux, Craponne, France). Coagulase production ability was tested via the rabbit plasma tube test. Furthermore, a SLIDEX Staph Kit (bioMérieux) was used for S. aureus identification.
RNA isolation
RNA was isolated from tissue samples with the commercially available RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Qualitative and quantitative analyses of RNA were performed using a NanoDrop 1000 spectrophotometer (Spectro-Lab, Warsaw, Poland) and a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, USA). Samples with an RNA integrity number greater than seven were selected for further analysis. Reverse transcription reactions were performed using the Transcriptor First Strand cDNA Synthesis Kit (Roche, Meylan, France) following the manufacturer's protocol.
Gene expression analysis
Expression levels of the SAA3, HP, and CP genes were analyzed by RT-qPCR on a LightCycler 480 system (Roche, Meylan, France) on 96-well plates with the SYBR Green technique, according to the manufacturer's protocol. Primer sequences designed by Whelehan et al. [33] were used for the qPCR analysis. Information on the primer sequences, amplicon sizes, annealing temperatures, and GenBank accession numbers is shown in Table 2. Three types of negative controls were included: no template addition, no reverse transcriptase addition, and no polymerase addition. The presence of the product of interest was confirmed by electrophoresis in 2% agarose gel (G:BOX visualization system, Syngene, Cambridge, UK). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as the reference gene. The process of housekeeping gene (HKG) selection in MECs has been described by Kościuczuk et al. [25], and GAPDH was one of the genes with an M value below 0.5. The present study was conducted on samples derived from the same animals as in the above-mentioned study.
The relative expression values were calculated from the qPCR crossing-point (CP) data according to a modified version of Pfaffl's formula [34]:

ratio = E_target^(ΔCP_target) / E_ref^(ΔCP_ref)

where:
CP (crossing point): the qPCR cycle at which the fluorescence rises above the background fluorescence (defined threshold);
ΔCP_target: the mean CP minus the CP of the target gene in a given sample;
ΔCP_ref: the mean CP minus the CP of the reference gene in a given sample;
mean: the arithmetic mean of the CP values from all reactions for the studied gene (in the numerator) or for the reference gene (in the denominator);
sample: the CP value for the studied gene (in the numerator) or for the reference gene (in the denominator) in each sample;
E_target, E_ref: the amplification efficiencies of the target and reference genes, respectively.
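As an illustration only, the calculation defined above can be sketched numerically as follows; the CP values and the assumed amplification efficiencies (2.0, i.e., 100%) are hypothetical and are not taken from the study.

```python
import numpy as np

def relative_expression(cp_target, cp_ref, e_target=2.0, e_ref=2.0):
    """Pfaffl-style relative expression ratios, one per sample.

    cp_target, cp_ref: crossing-point (CP) values for the target gene and
    the reference gene (GAPDH), one value per sample.
    e_target, e_ref: assumed amplification efficiencies (2.0 = 100 %).
    Uses delta-CP = mean CP - sample CP, as defined in the text above.
    """
    cp_target = np.asarray(cp_target, dtype=float)
    cp_ref = np.asarray(cp_ref, dtype=float)
    d_cp_target = cp_target.mean() - cp_target   # exponent of the numerator
    d_cp_ref = cp_ref.mean() - cp_ref            # exponent of the denominator
    return (e_target ** d_cp_target) / (e_ref ** d_cp_ref)

# Hypothetical CP values for three samples (not the study's data):
print(relative_expression([24.1, 22.8, 25.0], [18.2, 18.0, 18.4]))
```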
Statistical analysis
All collected samples were divided by tissue type (MECs or CLECs) and infection status (CoPS-infected (N = 8), CoNS-infected (N = 7), or bacteria-free (H; N = 8)). Altogether, 46 tissue samples were obtained: 23 samples per tissue type. To search for differences in gene expression levels, analyses of variance were performed using the ANOVA procedure with the post-hoc Tukey-Kramer test (SAS/STAT 2002-2012, ver. 9.4), taking into account the fixed effect of the interaction between the microbiological status of the milk (CoPS-infected, CoNS-infected, H) and the tissue type, with the error as random.
The normality of the distribution of all traits was checked using the UNIVARIATE procedure (SAS/STAT, 2002-2012, ver. 9.4), and the values for gene expression at the mRNA level were transformed to the natural logarithmic scale.
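The analysis described above was run in SAS; a rough equivalent in Python (statsmodels), assuming a hypothetical long-format table with columns `ln_expr`, `status`, and `tissue`, might look like the sketch below. It is not the authors' script, and the data are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per sample with the natural-log
# transformed expression value and the two fixed factors.
df = pd.DataFrame({
    "ln_expr": np.log([1.2, 1.5, 0.8, 0.7, 4.1, 3.6, 3.2, 2.9,
                       1.1, 1.3, 0.9, 1.0]),
    "status": ["H", "H", "H", "H", "CoPS", "CoPS", "CoPS", "CoPS",
               "CoNS", "CoNS", "CoNS", "CoNS"],
    "tissue": ["MEC", "MEC", "CLEC", "CLEC"] * 3,
})

# Two-way ANOVA including the status x tissue interaction.
model = ols("ln_expr ~ C(status) * C(tissue)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc pairwise comparisons of the combined groups (Tukey HSD; the
# authors used the Tukey-Kramer adjustment, which handles unequal group sizes).
groups = df["status"] + "_" + df["tissue"]
print(pairwise_tukeyhsd(df["ln_expr"], groups, alpha=0.05))
```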
The authors chose the following cut-off points for significance: the values differ significantly at p ≤ 0.01 (indicated as A, B); the values differ significantly at p ≤ 0.05 (indicated as a, b); the values differ at the trend level at 0.05 < p < 0.1 (indicated as 1, 2); and the values do not differ significantly at p ≥ 0.1. The final statistical model did not include the lactation number because in a prior analysis, lactation number did not influence the expression of the selected genes. | 2020-08-31T14:00:49.120Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "a3b56736a3aa8d88fa8a044c3bd49321d4ef90b6",
"oa_license": "CCBY",
"oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/s12917-020-02544-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3b56736a3aa8d88fa8a044c3bd49321d4ef90b6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
253237280 | pes2o/s2orc | v3-fos-license | Deep Learning-based Protoacoustic Signal Denoising for Proton Range Verification
Objective: Proton therapy offers an advantageous dose distribution compared to photon therapy, since it deposits most of the energy at the end of the range, namely the Bragg peak (BP). The protoacoustic technique was developed to determine BP locations in vivo. However, it requires a large dose delivered to the tissue to obtain an averaged acoustic signal with a sufficient signal-to-noise ratio (SNR), which is not suitable in clinics. We propose a deep learning-based technique to acquire denoised acoustic signals and reduce BP range uncertainty with much lower doses. Approach: Three accelerometers were placed on the distal surface of a cylindrical polyethylene (PE) phantom to collect protoacoustic signals. In total, 512 raw signals were collected at each device. Device-specific stacked autoencoder (SAE) denoising models were trained to denoise the input signals, which were generated by averaging 1, 2, 4, 8, 16, or 32 raw signals. Both supervised and unsupervised learning training strategies were tested for comparison. Mean squared error (MSE), signal-to-noise ratio (SNR) and the Bragg peak (BP) range uncertainty were used for model evaluation. Main results: After SAE denoising, the MSE was substantially reduced, and the SNR was enhanced. Overall, the supervised SAEs outperformed the unsupervised SAEs in BP range verification. For the high accuracy detector, a BP range uncertainty of 0.20 ± 3.44 mm was achieved by averaging over 8 raw signals, while for the other two low accuracy detectors, BP uncertainties of 1.44 ± 6.45 mm and −0.23 ± 4.88 mm were achieved by averaging 16 raw signals, respectively. Significance: We have proposed a deep learning-based denoising method to enhance the SNR of protoacoustic measurements and improve the accuracy of BP range verification, which greatly reduces the dose and time required for potential clinical applications.
Introduction
Proton therapy is an ion therapy that is receiving increasing interest in both research studies and clinical applications in radiation oncology. The linear energy transfer (LET) of protons dramatically increases at small velocities, and thus most of the particle energy is deposited at the end of its trajectory before the particle is completely stopped in the material, resulting in a so-called Bragg peak (BP) (Newhauser and Zhang, 2015; Mohan and Grosshans, 2017). Due to the large LET at the end of the range, little or no dose is deposited beyond the range (Wilson, 1946). Compared with photon therapy, whose dose distribution decreases exponentially in deep tissues, proton therapy provides a well-characterized depth dose and a more conformal dose. Such good conformity makes it possible to accurately deliver a high dose to tumors while largely sparing the adjacent organs at risk (OARs). Therefore, the range verification of the BP within tissues is critical in treatment. Several noninvasive techniques, including positron emission tomography (PET) (Parodi et al., 2007; Parodi, 2015) and prompt gamma imaging (Min et al., 2006), have been proposed to localize the BP in vivo during real-time radiation therapy (Knopf and Lomax, 2013). However, both methods depend on bulky and complex instrumentation, only reveal indirect information about the BP position, and can hardly achieve an accuracy of a few millimeters for clinical application.
Protoacoustic determination of the BP range has been actively studied for the localization of the BP, since a direct correlation between the acoustic signals and the BP location can be utilized (Sulak et al., 1979; Hayakawa et al., 1995; Albul et al., 2001; Bychkov et al., 2008; Jones et al., 2016). When a pulsed proton beam deposits energy in a medium, the energy dissipates in fast heat expansion and emits acoustic waves due to the thermoacoustic conversion of heat to pressure. It has been proposed that proton range verification can be performed directly based on the time-of-flight (TOF) of the generated acoustic waves. Another advantage of the protoacoustic technique is that it may be used to monitor the proton dose distribution in a patient in real time owing to its relatively simple device setup compared to PET or prompt gamma imaging. However, it remains a challenging task, since the proton acoustic signal is very weak and noisy, and it typically requires delivering a large number of pulses to a single spot in the medium to obtain high-quality proton acoustic signals with a high signal-to-noise ratio (SNR). Jones et al. (Jones et al., 2015) averaged as many as 2048 single pulses to obtain stable and accurate measurements, Nie et al. (Nie et al., 2018) averaged over 1024 signals, and Kellnberger et al. (Kellnberger et al., 2016) used a lower number of 512 averages for their 3D ionoacoustic scans. The large number of signals required to average for the final measurement results in a high dose (e.g., averaging over 2048 proton pulses (Jones et al., 2015) corresponds to 38.9 Gy) delivered to the medium as well as a long beam delivery time, hindering the clinical application of protoacoustic range verification. If the BP can be identified with as few measurements as possible, both the delivered dose and the time can be reduced, and that is the way forward to the clinic. Therefore, an appropriate denoising method to improve the acoustic signal acquisition is essential for applying the protoacoustic technique in clinics.
In the previous literature, denoising techniques such as low-pass filtering (Freijo et al., 2021) and wavelet-based transformation (Sohn et al., 2020) were employed to eliminate high-frequency noise. However, such methods rely on the choice of various thresholding methods, such as hard and soft thresholding. Moreover, though a large portion of the noise is removed in the reconstructed signal, the residual noise, which is often complex with an unknown distribution in the frequency domain, needs to be treated by a more advanced approach. Recently, data-driven approaches with deep learning-based methods have demonstrated success in denoising applications, specifically the stacked autoencoder (SAE) paradigm for local denoising and feature learning (Vincent et al., 2010; Vincent et al., 2008). An SAE is composed of an encoder that learns useful higher-level representations and a decoder that reconstructs the denoised signal, thus removing the noisy components of the original corrupted input. In the medical field, SAEs have been widely used for electrocardiogram (ECG) denoising and have significantly enhanced signal-to-noise ratios (Nurmaini et al., 2020; Xiong et al., 2016; Xiong et al., 2015; Liu et al., 2021). Our hypothesis is that the proton acoustic signals, which are corrupted with noise in a pattern similar to that of ECG signals, could also be denoised with an SAE network while keeping the number of measurements as low as possible. In addition, a patch-based method was utilized for data augmentation to address overfitting. The long signals were first cut into smaller sections with moving origins and large overlaps, and then the small sections were fed into the SAEs to produce denoised output. Finally, the denoised small patches were put back at their corresponding origins, and a long merged signal was obtained by averaging these small overlapping sections. In this work, we used three detectors to collect numerous protoacoustic signals generated by proton pulses in a plastic phantom, and then utilized SAEs to denoise the proton acoustic signals while preserving the BP signal with minimized signal acquisitions.
Experiment setup and signal acquisition
To collect protoacoustic signals, three detectors (accelerometers) were placed on the distal surface of a cylindrical polyethylene (PE) phantom (diameter = 20.88 cm, length = 33.58 cm), as illustrated in Figure 1(a). A 226 MeV proton beam was first attenuated by 2 cm of solid water (Gammex 457-CTG, Middleton, WI, USA), and then the proton pulse was incident onto the PE phantom from one end and generated a BP inside the phantom, emitting protoacoustic signals. The acoustic signals were measured by the three detectors placed on the other end. The three detectors were two accelerometers of relatively low accuracy (Type 4374, Denmark) and one accelerometer of high accuracy (Brüel & Kjær, Type 4017-C), referred to as L1, L2, and H, respectively. The electric charge signal was amplified (65 dB, ×1780 gain) and filtered (10 Hz high-pass and 100 kHz low-pass filter) by a Nexus 2692 charge amplifier before being recorded with a 4-channel digital oscilloscope (PicoScope 5444B, PicoTech, UK).
In total, 512 proton pulses were incident onto the PE phantom, and each proton pulse produced a BP inside the phantom. An incident proton pulse has an average current of 490 nA and consists of 5.7 × 10⁷ protons, which is equivalent to 2.36 cGy of dose delivered to the BP in the phantom. For one BP, each of the three detectors independently collected an acoustic signal emitted from the BP; that is, we collected altogether 3 × 512 raw acoustic signals with the three detectors, with each detector collecting 512 raw signals. By averaging the 512 single raw waves, we can obtain a clean and stable signal for each detector. The averaged signals of the three detectors, as well as the proton beam pulse, are shown in Figure 1.
BP range verification
Protoacoustics is a straightforward technique for measuring the location of the BP directly from the time-of-flight (TOF) between the proton pulse and the arrival of the acoustic wave. As shown in Figure 2, the TOF is characterized by the time elapsed between the minimum of the proton pulse and the first maximum of the acoustic wave. The BP position can be calculated through the equation below:

d_i = c · (t_i − τ_i)

where t_i is the acoustic TOF for the i-th detector, c is the speed of sound in the PE phantom (2.07 mm/µs), d_i is the distance between the BP location and the i-th detector, and τ_i is the unique response time of each detector. τ_i can be calibrated through systematic phantom experiments, as described in previous studies, but the calibration of the detector response time is not of interest in this work. Our goal is to locate the first maximum of the acoustic wave, which characterizes the arrival of the acoustic signal, since the identification of the acoustic arrival from a noisy signal is one of the most difficult tasks in reducing the number of pulses needed for protoacoustic measurements, and yet it is potentially feasible with the SAE reconstruction method, as we will show in the following sections.
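As a simple numerical illustration of this relation (with hypothetical TOF and response-time values, since the detector calibration is outside the scope of this work):

```python
SPEED_OF_SOUND_PE = 2.07  # mm/us, speed of sound in the PE phantom

def bp_distance_mm(tof_us, response_time_us=0.0):
    """Distance between the BP and a detector, d = c * (TOF - tau).

    tof_us: time between the proton pulse minimum and the first acoustic
    maximum, in microseconds.
    response_time_us: detector-specific response time tau (assumed to be
    calibrated elsewhere; the value used here is purely illustrative).
    """
    return SPEED_OF_SOUND_PE * (tof_us - response_time_us)

# Hypothetical numbers: a 30 us TOF and a 2 us response time give ~58 mm.
print(bp_distance_mm(30.0, 2.0))
```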
Workflow and training strategy
After obtaining the 512 raw signals for each detector (L1, L2, and H), SAE networks were trained to reconstruct denoised signals from a smaller number of raw signals. Since each detector is a unique device (e.g., it has a specific response time), we trained device-specific models for each detector. For each detector, we randomly chose 256 raw signals for model training and kept the remaining 256 raw signals for external testing.
For the training dataset, the grand ground truth (GT) is obtained by averaging over the 256 raw signals. We then generated the input noisy signals by averaging a small number (e.g., 1, 2, 4, 8, 16, or 24) of raw waves, and generated the corresponding clean signals by averaging 192 out of the 256 raw waves, including the raw waves used in the noisy signals. We employed two training strategies to learn denoising models using SAEs. The first strategy is supervised learning that trains on the input noisy signals with the clean signals as the learning target, and the other is self-supervised learning that minimizes the reconstruction error between the input noisy signals and the output denoised signals, without any reference to the generated clean signals (unsupervised learning) (Vincent et al., 2010). These two training strategies are named SAE_clean and SAE_self, respectively. During the training phase, for SAE_clean, the inputs are noisy signals and the ground truth consists of the clean signals, while for SAE_self, the inputs are noisy signals but the target ground truth is also the input noisy signal. The differences between the two training strategies are reflected in the loss function, as discussed in the following section. For the other 256 raw signals in the testing dataset, a similar treatment was applied to obtain the grand GT signal, the noisy test signals, and the clean target signals. After training the SAEs, a new noisy signal from the test dataset was fed into the trained SAE model to obtain the denoised signal, which was subsequently compared to the grand GT for model performance evaluation.
Due to limited resources, we only performed the protoacoustic measurements at one energy level of 226 MeV, so the patterns of the acoustic waves are highly limited, basically containing only three intrinsically independent patterns, since we had three detectors. If we only consider the whole long signals (more than 450 µs) as training objectives, the model is prone to overfitting. To address the overfitting issue, our SAEs were trained in a patch-based manner. By cutting the long signals into much smaller sections with shifting origins, we were able to greatly augment the size and variation of the training and testing datasets. The reconstructed long signals were obtained by merging the SAE output patches. The workflow of denoising the acoustic signals is sketched in Figure 3.
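A minimal sketch of this patch-and-merge scheme is given below (NumPy, with the patch length and stride expressed in samples and chosen arbitrarily for the example; the actual patch size and origin shift used in this work are given in the experiment-design section).

```python
import numpy as np

def extract_patches(signal, patch_len, stride):
    """Cut a long 1-D signal into overlapping patches with a moving origin."""
    starts = range(0, len(signal) - patch_len + 1, stride)
    return np.stack([signal[s:s + patch_len] for s in starts]), list(starts)

def merge_patches(patches, starts, total_len):
    """Put denoised patches back at their origins and average the overlaps."""
    acc = np.zeros(total_len)
    count = np.zeros(total_len)
    for patch, s in zip(patches, starts):
        acc[s:s + len(patch)] += patch
        count[s:s + len(patch)] += 1
    count[count == 0] = 1  # avoid division by zero outside covered regions
    return acc / count

# Round-trip check on a hypothetical signal: merging unmodified overlapping
# patches reproduces the covered part of the original signal exactly.
x = np.sin(np.linspace(0, 20, 1000))
patches, starts = extract_patches(x, patch_len=128, stride=16)
x_rec = merge_patches(patches, starts, len(x))
print(np.allclose(x[:starts[-1] + 128], x_rec[:starts[-1] + 128]))
```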
Network architectures
The architectures of the proposed SAEs are shown in Fig. 3, comprising a compressing encoder path and an expanding decoder branch. All one-dimensional noisy input signals are first normalized into [0, 1] and cut into small patches x̂. Similarly, the patches ŷ of the corresponding clean training signals are obtained. Through the encoder layers, the input x̂ is encoded into the hidden representation ĥ. This step serves to extract higher-level useful representations of the signals and to eliminate the redundant noisy components of the input. Then, the useful latent features ĥ are expanded to the output x̂′ via the decoder layers. After that, the aim of the SAEs is to minimize the reconstruction error either between x̂′ and ŷ for SAE_clean, or between x̂′ and x̂ for SAE_self.

Each denoised signal x̂′ is reconstructed from the noisy signal x̂ and is expected to resemble either the clean signal ŷ (SAE_clean) or the input x̂ (SAE_self). Thus, for SAE_clean the learnable parameters are optimized by minimizing the reconstruction error

L_clean = ‖x̂′ − ŷ‖₂²,

while for SAE_self the learnable parameters are optimized by minimizing the reconstruction error

L_self = ‖x̂′ − x̂‖₂²,

where the squared L2 norm measures the difference between x̂′ and the learning target.
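The sketch below is a generic dense autoencoder in PyTorch with assumed layer widths and an assumed number of samples per patch; it is not the exact architecture of Fig. 3, but it illustrates how the two training strategies differ only in the regression target.

```python
import torch
import torch.nn as nn

class SAE(nn.Module):
    """Generic 1-D stacked autoencoder: the encoder compresses the noisy
    patch, the decoder reconstructs a denoised patch of the same length."""
    def __init__(self, patch_len=330, hidden=(256, 128, 64)):
        # patch_len and hidden are assumed values, not the paper's settings.
        super().__init__()
        dims = (patch_len,) + hidden
        enc, dec = [], []
        for i in range(len(dims) - 1):
            enc += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        for i in reversed(range(len(dims) - 1)):
            dec += [nn.Linear(dims[i + 1], dims[i]), nn.ReLU()]
        dec[-1] = nn.Sigmoid()  # outputs stay in [0, 1], like the input
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

mse = nn.MSELoss()
model = SAE()
noisy = torch.rand(8, 330)   # batch of normalized noisy patches (hypothetical)
clean = torch.rand(8, 330)   # corresponding clean-target patches (hypothetical)

out = model(noisy)
loss_clean = mse(out, clean)  # SAE_clean: supervised target = clean signal
loss_self = mse(out, noisy)   # SAE_self: target = the noisy input itself
```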
Experiment design and training parameters
To organize all of the experiments mentioned above, we performed multiple experiments covering both the SAE_clean and SAE_self strategies, the different numbers of averaged input raw signals (n_avg), and the three detectors. Thus, we name the series of experiments as clean/self-detector name-n_avg. For example, "clean-L1-2" means the experiment used pressure waves collected on detector L1 (one of the low accuracy detectors), averaged 2 raw signals to obtain the noisy input, and employed the SAE_clean strategy, while "self-H-16" means the experiment used pressure waves collected on detector H (the high accuracy detector), averaged 16 raw signals as the noisy input, and employed the SAE_self strategy to train the SAE model. During the training phase, 100 long noisy input signals were generated by randomly averaging 1, 2, 4, 8, 16, or 24 raw signals, and the patch size used for data augmentation was 66 µs with an origin shift of 3.2 µs. After data augmentation, we obtained 13400 signal patches. Among the 13400 patches, 2/3 of the data was used for training and 1/3 was used for validation of the SAE models.
Model evaluation and BP range verification
Two metrics were used for performance measurement, the mean squared error (MSE) and the SNR:

MSE = (1/N) Σ_i (s_GT[i] − s[i])²,

SNR = 10 · log₁₀( Σ_i s_GT[i]² / Σ_i (s[i] − s_GT[i])² ),

where s_GT and s are the long grand GT signal and the long noisy/denoised signal from the test dataset (the remaining 256 raw signals outside the training dataset), respectively, and N is the number of samples in a signal. Both MSE and SNR were computed for the noisy and denoised signals to evaluate the efficacy of the SAEs.
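A sketch of how these metrics, together with the peak-shift-to-range conversion described in the next paragraph, can be computed is given below; the 10·log10 power-ratio form of the SNR and the sampling interval are assumptions made for illustration, not the authors' exact constants.

```python
import numpy as np

SOUND_SPEED_PE = 2.07  # mm/us

def mse(gt, sig):
    return float(np.mean((np.asarray(gt) - np.asarray(sig)) ** 2))

def snr_db(gt, sig):
    # Assumed power-ratio definition of SNR in decibels.
    gt, sig = np.asarray(gt, float), np.asarray(sig, float)
    noise = sig - gt
    return 10.0 * np.log10(np.sum(gt ** 2) / np.sum(noise ** 2))

def bp_shift_mm(gt, sig, dt_us, window_us=192.0):
    """Range shift from the displacement of the first pressure maximum,
    searched within [0, window_us] as described in the text."""
    n = int(window_us / dt_us)
    shift_samples = int(np.argmax(sig[:n])) - int(np.argmax(gt[:n]))
    return shift_samples * dt_us * SOUND_SPEED_PE

# Hypothetical example: a signal whose peak is 2 samples late at an assumed
# 0.4 us sampling interval -> 2 * 0.4 * 2.07 ~= 1.66 mm range shift.
t = np.arange(0, 400, 0.4)
gt = np.exp(-((t - 120) / 5) ** 2)
den = np.roll(gt, 2)
print(round(bp_shift_mm(gt, den, dt_us=0.4), 2), round(snr_db(gt, den), 1))
```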
Moreover, we evaluated the BP displacement between the signals before and after denoising by calculating the time differences with respect to the grand GT. As discussed in previous sections, in this work we focus only on identifying the first maximum of the pressure wave, which characterizes the arrival of the acoustic signal. We used the first peak of the grand GT (averaged over all 256 raw signals) as the reference timepoint of acoustic arrival. For simplicity, we automatically located the maximum between 0 and 192 µs of a merged long acoustic signal to identify the arrival of the first pressure peak. For both noisy and denoised signals, their first peaks can be shifted from the reference timepoint. The time shift of the TOF multiplied by the sound speed (about 2.07 mm/µs in the PE phantom used in the experiment) corresponds to the uncertainty of the BP range verification. To estimate the BP range uncertainty before and after denoising, we used two metrics to quantify the shifts, the mean error of the shifts (ME_BP) and the mean absolute error of the shifts (MAE_BP). This step is helpful to intuitively examine the feasibility of applying SAE denoising in clinics.

Table 1 shows the MSE and SNR values of the L1 detector signals before and after denoising on the test dataset, using the SAE_clean training strategy. The metrics are calculated from the first 400 µs of the merged long signals. Table 2 shows the MSE and SNR changes in percentage for all experiments on the test set. Figure 4 shows example noisy input signals, reconstructed denoised signals, and the grand GT signal of the H detector, for both SAE_clean and SAE_self using various n_avg. For both training strategies, the post-denoising MSE values were dramatically reduced while the SNR was enhanced. However, at low n_avg (e.g., 1, 2, and 4), SAE_clean outperformed SAE_self in both metrics.
BP range uncertainty
From our experiments, the BP uncertainties began to show obvious improvement at n_avg greater than 2 for detector H and at n_avg greater than 4 for detectors L1 and L2, especially considering the largely decreased standard deviations of ME_BP and MAE_BP. Table 3 shows the ME_BP and MAE_BP representing the BP range verification uncertainties before and after denoising, with n_avg starting at 4. The values have been converted to mm by multiplying the TOF peak shifts with the sound speed in the PE phantom (2.07 mm/µs). Figure 5 shows examples of enhanced BP locations after SAE_clean denoising with n_avg of 4, 8 and 16, on all three detectors. To better observe the correction of peak localization, Figure 5 zooms in on the range between 100 and 192 µs. Among the three detectors, the high accuracy detector H is the most accurate in identifying BP positions, with the smallest range uncertainties. Besides, with increasing n_avg of raw signals in the input, the BP range verification improved both before and after denoising.
Discussion
Protoacoustic method has been actively investigated for its feasibility of in vivo BP range verification.
Though the method has unique advantages such as a relatively simple instrumentation setup and a straightforward correlation between the pressure/acoustic wave and the BP localization, the key measured objectives, the acoustic signals, are susceptible to various sources of noise (thermal, scattering, and internal electric (Assmann et al., 2015; Ahmad et al., 2015; Jones et al., 2016)), which in turn requires a large number of measurements to achieve high SNRs. Such numerous measurements result in a high dose delivered to the tissue and are not applicable for clinical use. We have proposed an SAE-network-based deep learning denoising method to reconstruct clean signals from only a small number of measurements, as well as to optimize the BP range verification by reducing the localization uncertainty.
We have tested two training strategies with SAE networks, namely the supervised SAE_clean and the self-supervised SAE_self. Overall, both models achieved a substantial reduction of the MSE and enhancement of the SNR for low n_avg and essentially provided high-quality denoised signals. From Table 2, for SAE_clean, the MSE was reduced by about 80% at an n_avg of 1 and by about 50-60% at an n_avg of 24, and the SNR increased by 60-85% at an n_avg of 1 and by 10-20% at an n_avg of 24, showing a steadily weaker enhancement of the signals for larger n_avg. This trend correlates with the fact that using more raw waves for averaging yields more stable and accurate signals. However, this trend is almost reversed for the SAE_self models. According to Table 2, the MSE reduction and SNR increase were worse for low n_avg values of 1, 2, and 4, but better for high n_avg values of 8, 16, and 24. Such differences between the SAE_clean and SAE_self models arise from the training philosophy behind them.
SAE_self tries to map back to the input signals rather than to any given ground truth. As shown in Figure 4, when n_avg equals 1, the input signal was very noisy and lacked any obvious pattern; SAE_self had no knowledge of the underlying patterns and would simply treat those noises as some embedded truth to mimic during denoising, producing an output resembling those noises. But for SAE_clean, which is a supervised learning strategy that knows the clean ground truth, the model was able to distinguish the noisy components in low-n_avg signals. As n_avg increased from 1 to 16, the input signals began to contain more information and showed obvious underlying patterns, so SAE_self was also able to capture representative features apart from the random noise, and both SAE_self and SAE_clean achieved similar results. In many SAE denoising applications, ground truth labels are missing or unavailable, but in this specific denoising task, we were fortunate to have the averaged signals serving as ground truth and were thus able to change the training strategy from self-supervised learning to fully supervised learning, achieving better results, especially for small n_avg.
One of the most important tasks in protoacoustic measurement is to localize the BP in tissues. From Table 3, we see that the BP range uncertainty represented by MAE_BP and ME_BP was substantially reduced after SAE denoising, especially the standard deviations of MAE_BP and ME_BP. Comparing the three detectors, overall, the high accuracy detector (H) performed better than the two low accuracy detectors (L1 and L2), both before and after denoising. For the high accuracy detector, the ME_BP decreased from -10.93 ± 40.36 mm to 0.68 ± 5.54 mm by averaging only 4 raw signals, and decreased from -2.12 ± 22.52 mm to 0.20 ± 3.44 mm when averaging over 8 raw signals, while for L1 and L2, the uncertainty at n_avg = 8 is well above 5 mm. For L1, the ME_BP decreased from -12.09 ± 37.88 mm to 1.44 ± 6.45 mm by averaging 16 raw signals. For L2, the ME_BP decreased from -6.48 ± 25.65 mm to -0.23 ± 4.88 mm by averaging 16 raw signals. Such dramatic improvement suggests that with the aid of deep learning techniques, it is possible to achieve accurate BP range verification with just a few acoustic signals, which brings down the dose and time required in protoacoustic measurements.
Previous studies have reported the BP range uncertainty in both simulation and experimental settings. The reported BP range uncertainty varied from submillimeter to 5 millimeters (Otero et al., 2019; Paganetti, 2012; Kellnberger et al., 2016; Assmann et al., 2015; Jones et al., 2018), and in some extreme cases, the standard deviation was up to 10 mm (Paganetti, 2012). Moreover, Jones et al. found that the BP range uncertainty depends on the proton pulse width (Jones et al., 2016). For an extremely narrow, Dirac-delta-function-like (FWHM < 4 μs) proton pulse, the systematic error of BP determination is <2.6 mm, but for longer, non-δ-function-like beams (FWHM = 56 μs), a systematic error of up to 23 mm can be expected. A narrower proton pulse favors the production of acoustic pressure waves; however, the amplitude of the acoustic signal is also limited by the energy deposited by the proton pulse, assuming the peak proton beam current remains at the same level. Therefore, the desired proton pulse width for a typical medical proton cyclotron is ~10-14 μs. The proton beam used in our experiment was modulated by a function generator, and a pulse width (FWHM) of ~14-18 μs was achieved. Thus, the expected BP range uncertainty should be around a few millimeters, much greater than the optimal submillimeter records (Assmann et al., 2015), which is consistent with our denoised results. When treating patients with proton beams, typically a margin of 3.5% of the range plus 1-3 mm is allowed to account for the uncertainty of the BP range (Paganetti, 2012), corresponding to an uncertainty of 3-5 mm for the BP depth of around 50-60 mm in our experiment. Our denoised results for the high accuracy detector fall well within that range, and those for the low accuracy detectors fall marginally within that range.
The main idea of this work is to reduce the dose delivery without compromising the accuracy of BP range verification. In our measurement, each raw acoustic signal was generated by a proton pulse equivalent to 2.36 cGy, so the dose delivery would be 24.2 Gy to average 512 signals. After denoising, we can obtain the BP localization with 16-24 signals for the two low accuracy detectors and with only 4-8 signals for the high accuracy detector, corresponding to 37.8-56.6 cGy and 9.4-18.9 cGy, respectively. The required doses for BP range verification were largely reduced with the aid of deep learning denoising techniques.
Our study was performed with a proton beam of one specified energy of 226 MeV, incident on a uniform PE phantom, while in clinics patient tissues are heterogeneous and behave in a more complicated way than the phantom. Though we tested our SAE denoising method on a relatively simple setup, to the best of our knowledge, this is the first trial of applying deep learning SAE models to denoise protoacoustic signals, and it achieved satisfactory results. However, as discussed in the above paragraphs, the proton beam we used did not consist of δ-function-like pulses and could intrinsically increase the BP range uncertainty. Besides, the high accuracy detector outperformed the low accuracy ones in all metrics. Other techniques, such as an acoustic array with a time-reversal algorithm, can be used to further reduce the BP range uncertainties, but this is beyond the scope of this study (Yu et al., 2021). To push for better results in the future, possible solutions include using narrower proton pulses as the dose deposition source and using the highest-accuracy acoustic wave detectors available.
Conclusion
We have demonstrated that SAE networks can be used to denoise protoacoustic signals for BP range verification. Besides the commonly used self-supervised training strategy, we introduced fully supervised learning using the averaged signals (averaging 192 raw signals) as ground truth and achieved better results. Decreased MSEs and increased SNRs were obtained for all experiments after SAE denoising. For the high accuracy detector, the denoised BP uncertainty was well within 5 mm by averaging only 8 raw signals. For the low accuracy detectors, the denoised BP uncertainty was marginally within 5 mm by averaging 16-24 raw signals. Deep learning denoising techniques can be integrated into the data acquisition for signal processing during protoacoustic measurements, to reduce the dose required for obtaining stable and clean signals.
Conflict(s) of interest
None. | 2022-11-01T01:16:13.518Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "b1f2e73daa5f6068231f4ce975d30ac554375d19",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1f2e73daa5f6068231f4ce975d30ac554375d19",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
250238555 | pes2o/s2orc | v3-fos-license | The Health-Promoting and Sensory Properties of Tropical Fruit Sorbets with Inulin
Inulin is a popular prebiotic that is often used in the production of ice cream, mainly to improve its consistency. It also reduces the hardness of ice cream, as well as improving the ice cream's organoleptic characteristics. Inulin can also improve the texture of sorbets, which are gaining popularity as an alternative to milk-based ice cream. Sorbets can be an excellent source of natural vitamins and antioxidants. The aim of this study was to evaluate the effect of the addition of inulin on the sensory characteristics and health-promoting value of avocado, kiwi, honey melon, yellow melon and mango sorbets. Three types of sorbets, two with inulin (2% and 5% wt.) and one without, were made using fresh fruit with the addition of water, sucrose and lemon juice. Both the type of fruit and the addition of inulin influenced the sorbet mixture viscosity, the content of polyphenols, vitamin C, acidity, the ability to scavenge free radicals using the DPPH reagent, melting resistance, overrun and the sensory evaluation of the tested sorbets (all p < 0.05). The addition of inulin had no impact on the color of the tested sorbets; only the type of fruit influenced this feature. In the sensory evaluation, the mango sorbets were rated the best and the avocado sorbets were rated the worst. Sorbets can be a good source of antioxidant compounds. The tested fruit sorbets had different levels of polyphenol content and ability to scavenge free radicals. Kiwi sorbet had the highest antioxidant potential among the tested fruits. The observed free-radical-scavenging ability and polyphenol content confirmed the beneficial properties of the sorbets, particularly as a valuable source of antioxidants. The addition of inulin improved the meltability, which may indicate an effect of inulin on the consistency. Further research should focus on making sorbets only from natural ingredients and comparing their health-promoting quality with the ready-made sorbets that are available on the market, which are made from ready-made ice cream mixes.
Introduction
Sorbets are low in calories (60-120 kcal/100 g of the product), mainly due to the lack of added fat, milk, cream and animal protein [1]. Due to the high water content, they are a very light and refreshing dessert, eagerly eaten, especially in the summer months when the air temperature is high. Moreover, they are a suitable type of ice cream for people suffering from allergies or intolerance to the ingredients of milk-based ice cream. Sorbets are made of fruits and/or vegetables, juices and water, with the addition of sweeteners and stabilizers; they should have an attractive taste, low energy value and be easily digestible.
Inulin is a natural prebiotic. A prebiotic was described as "a non-digestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon, and thus improves host health" [2]. This definition remained almost unchanged for more than 15 years. According to this definition, inulin can be classified as a prebiotic due to its selective promotion of the growth and activity of beneficial colonic bacteria. Inulin has also been reported to exert hepatoprotective effects, as well as activity against hypothyroidism and immune-modulatory action [25][26][27][28][29]. Kiwifruit (Actinidia) is a nutrient-dense fruit that is exceptionally high in vitamin C and contains nutritionally relevant levels of dietary fiber, potassium, vitamin E and folate, as well as various bioactive components, including a wide range of antioxidants, phytonutrients and enzymes that provide functional and metabolic benefits. The kiwifruits of commercial cultivation (A. deliciosa and A. chinensis, the "green" and "gold" cultivars, respectively) are large-fruited selections of predominantly green kiwifruit and an increasing range of gold varieties [30]. The flesh of the green Hayward cultivar is described as tangy, sweet and sour, a unique flavor combination, whereas the gold cultivar is described as having a sweet and tropical taste [30][31][32]. The mango (Mangifera indica L.) is a tropical fruit originally from southern Asia, and it is available worldwide today. Mango is rich in vitamin A and contains reasonable amounts of vitamins B and C, as well as minerals, mainly calcium, phosphorus and iron. Mango fruits provide energy, dietary fiber, carbohydrates, proteins and fats. Mango is also a particularly rich source of polyphenols [33][34][35]. Mangoes have a short shelf-life and are often processed to facilitate exportation and to preserve the fruit past its season. The main and directly consumable part of the mango is the pulp, which accounts for 50 to 60% of the total weight of the fruit and is used to prepare various products such as juice, jam, puree and nectar [16,33,34,36,37]. The aim of the work was to produce tropical fruit sorbets with and without the addition of inulin, and to evaluate the impact of the fruit and inulin on selected health-promoting and sensory properties of these sorbets.
Raw Materials for Sorbets
The basic ingredients of the sorbets were fresh fruits which were obtained at edible ripeness from a local grocery store. The choice of these fruits was guided by popularity and their worldwide availability, as well as by their health value, due to being excellent sources of vitamins and microelements, and having medicinal properties, such as analgesic, anti-inflammatory, antioxidant, antiulcer, anticancer, antimicrobial, diuretic, and antidiabetic properties [29,38,39]. The research material was sorbets that were made of selected tropical fruits: avocado, kiwifruit, 2 varieties of melon (cantaloupe and honeydew) and mango with the addition of water, sucrose and lime juice. Three types of products were produced: with the addition (fruit replacement) of inulin (2 and 5% wt.) and without inulin. Chicory inulin was purchased in a local health food store. According to the information given on the package, the product was produced in Belgium and contained 89 g of fiber per 100 g of product.
Preparation of the Sorbets
A total of 15 sorbets were made (five fruits, each with three inulin levels). The sorbets with inulin were prepared by replacing 2% or 5% of the fruit with inulin. Fruits (without peels and seeds) with the addition of water, sugar, lemon juice and inulin (Table 1) were mixed in a Thermomix TM5. The sorbet mixes were prepared as given in Table 1. Each sorbet mixture was mixed separately, stored at 4 °C for 24 h and then frozen in an Unold 48840 ice cream machine. The sorbets were stored at −25 °C in the laboratory of the Gdynia Maritime University and evaluated after two days of storage.
Physico-Chemical Parameters
The viscosity of the ice cream mixture was determined by a Fungilab Viscolead Pro Viscometer VL321003.
The melting resistance was measured by determining the amount of melted sorbet that drained from a given sample volume. The method consisted in determining the volume of sorbet leachate collected after 60 min at room temperature. This is an indicator of the resistance of ice cream to melting. Procedure of the determination: cylinder-shaped samples were taken from the frozen sorbet using a metal cylinder-shaped mold with a capacity of 24.73 cm³. The entire contents of the mold were transferred to a sieve mounted on a funnel. The funnel was placed in a measuring cylinder. After 60 min, the volume of leakage was read from the cylinder. The test was carried out at a temperature of 20 °C.
The leakage was expressed as a percentage of the initial sample volume, i.e., leakage (%) = (volume of leachate / volume of the sample) × 100. The overrun was measured by determining the amount of air in a given volume of the sample. A defined volume of the sample (cut from the product with a cylinder) was transferred to a volumetric flask and made up to the mark with distilled water. Based on the known volume of the volumetric flask, the volume of the cylinder, and the amount of added water, the overrun of the sorbets was calculated.
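As a worked illustration of the two calculations just described, the sketch below is not part of the original study: the 24.73 cm³ mold volume comes from the text, while the example volumes and the exact overrun convention are assumed for demonstration only.

```python
# Illustrative sketch of the leakage (melting) and overrun calculations described
# above; only the 24.73 cm3 mold volume is taken from the text, all other values
# and the overrun convention are assumed.

SAMPLE_VOLUME_CM3 = 24.73  # volume of the cylinder-shaped mold

def leakage_percent(leaked_volume_cm3: float) -> float:
    """Volume of leachate collected after 60 min, as a percentage of the sample."""
    return 100.0 * leaked_volume_cm3 / SAMPLE_VOLUME_CM3

def overrun_percent(flask_volume_cm3: float, added_water_cm3: float) -> float:
    """Overrun estimated from the volumetric-flask step.

    The melted sorbet matter occupies flask_volume - added_water; the rest of
    the original sample volume is air, expressed here relative to the matter.
    """
    matter = flask_volume_cm3 - added_water_cm3
    air = SAMPLE_VOLUME_CM3 - matter
    return 100.0 * air / matter

print(leakage_percent(6.2))          # e.g. 6.2 cm3 drained -> about 25%
print(overrun_percent(100.0, 78.5))  # flask 100 cm3, 78.5 cm3 of water added
```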
The total phenolic content (TPC) was determined using Folin-Ciocalteu reagent and a standard curve of Gallic acid (AR), and the results were expressed as mg Gallic acid equivalent (GAE)/100 g sample [40].
The vitamin C content was determined by Tillmans' method [42]. Titratable acidity was determined according to the AOAC method [43]. All analyses were conducted three times.
Sensory Evaluation
The sensory assessments were performed by 10 semi-trained judges (7 female, 3 male, age 30-49, employees of the Department of Quality Management, Faculty of Management and Quality Science, Gdynia Maritime University, Gdynia, Poland). The coded samples were served to panelists under normal daylight. Each panelist received samples at approximately −12 °C for evaluation (this temperature is recommended while serving ice cream). A 5-point hedonic scale was employed for the evaluation of color, odor, taste, consistency and overall preference of sorbets (Table 2). After receiving the evaluation results of all products, the average values of the individual evaluations were calculated.
Statistical Analysis
A two-way analysis of variance (ANOVA) was used for the statistical analysis. The type of fruit and the effect of the added inulin were considered. Calculations were made using Microsoft Excel MSO 2016 (ver. 2205, Redmond, Washington, USA) and Statistica 13 software (ver. 13.1.336.0, StatSoft, Palo Alto, California, USA). The post hoc Tukey's procedure was used to find patterns and relationships between the subgroups. Differences among the groups were determined as statistically significant at a level of p ≤ 0.05.
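A minimal sketch of how the two-way ANOVA with Tukey's post hoc test described above could be run in Python; this is not the authors' code, and the file name, column names and data layout are assumed.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed layout: one row per measurement with the fruit type, inulin level
# (0, 2 or 5 % wt.) and the measured value of a single discriminant.
df = pd.read_csv("sorbet_measurements.csv")

# Two-way ANOVA with interaction: fruit, inulin and fruit x inulin effects.
model = ols("value ~ C(fruit) * C(inulin)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post hoc procedure on one factor (repeat for the other groupings).
print(pairwise_tukeyhsd(endog=df["value"], groups=df["fruit"], alpha=0.05))
```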
Results
The first assessed feature is the appearance and color of the product, then the smell is assessed, followed by the taste and consistency. Additionally, the panelists were asked to give an overall rating (the "overall preference"). The odor of the avocado sorbets was not significantly different to the cantaloupe sorbets. The taste of the avocado with inulin sorbet was not significantly different to the cantaloupe sorbets, and in addition was not significantly different to the sorbets of kiwi and mango. Despite the high health value, these avocado features in terms of sorbet production were not high. The best rated was the mango sorbet with the addition of inulin, which could have contributed to the improvement of the consistency of the product ( Table 3). The consistency of this sorbet was rated the highest. The smell of the melon sorbets was rated at 3.5-4.0 points, while their consistency was rated the worst. This could be because the melons contain a lot of water and all the mixtures in the experiment were prepared in the same way. The amount of water that was added into the sorbet mixtures influenced the sensory evaluation, as well as the melting resistance of the sorbets. The color of the mango sorbets was not significantly different to the kiwi with inulin sorbet, and these products were rated the best. Inulin addition did not influence the color estimation. In overall preference, the mango and kiwifruit sorbets were evaluated as the best products. Regarding the overall preference, the sorbets of mango, kiwi, honeydew melon, and cantaloupe melon were statistically equal, although the mango and kiwifruit sorbets were evaluated with the highest points estimation. The addition of inulin generally improved all the features that were estimated in the organoleptic assessment. A slight odor deterioration of the cantaloupe melon sorbets with inulin was found; nevertheless, the difference of odor estimation was statistically not significant. The addition of inulin (both 2 and 5% wt.) did not statistically affect the organoleptic features of the sorbets from the given fruit. The melting resistance of the tested sorbets ranged from 11.46 to 38.41%, and the addition of inulin influenced this parameter, depending on the type of fruit and the amount of the additive ( Table 4). The addition of 5% of inulin influenced the meltability of all the sorbets, while the addition of 2% of inulin affected the melting resistance of the kiwifruit sorbet. The highest melting resistance values were obtained by the kiwifruit sorbets, as well as the melon sorbets, which could have been caused by the high water content in these fruits. Inulin addition decreased the melting resistance of the sorbets. Ice cream overrun is influenced by the protein and fat content. The overrun of sorbets is therefore lower than the overrun of milk-based ice cream and is usually at the level of 10-40%. This parameter is influenced by the composition of the mixture. In the tested sorbets, the overrun ranged from 12.33 to 19.67%, which was influenced by both the type of fruit and the inulin addition. The addition of inulin influenced the overrun of cantaloupe melon and mango sorbets, but the amount of added inulin did not significantly influence that parameter. The amount of added inulin significantly influenced the overrun of the kiwifruit sorbets, whereas only 5% of added inulin influenced the overrun of the yellow honeydew melon sorbet. This parameter was not influenced in the avocado sorbets. 
The total polyphenols in the sorbets ranged from 4.83 to 10.97 (mg GAE/g product), depending on the type of sorbet. The highest total polyphenol content was found in the avocado sorbets, and the lowest in the honeydew melon sorbets. The kiwifruit sorbets were also high in total polyphenols. The addition of inulin influenced this parameter in the avocado and mango sorbets, but this was probably due to the smaller amount of fruit that was used in the production of the product. The DPPH assay showed the highest antioxidative activity in the kiwifruit and mango sorbets, while the lowest antioxidative activity was assessed in the sorbet with the lowest content of total polyphenols (honeydew melon). The largest amount of vitamin C was determined in the kiwifruit and mango sorbets, and the lowest in the melon sorbets. The inulin sorbets contained lower amounts of vitamin C and total polyphenols and showed lower antioxidative activity. The determination of titratable acidity showed that the acidity of all the sorbets was in the range of 2.2-4.5 °SH; the kiwifruit sorbet showed the highest acidity, and the addition of inulin increased or decreased this parameter, depending on the fruit. The two-way ANOVA showed whether the type of fruit and the addition of inulin influenced the examined features. The type of fruit had a significant impact on all the examined discriminants (p = 0.0000). The addition of inulin had no significant effect only on the acidity of the tested sorbets (p = 0.0949), while fruit plus inulin had a significant effect on the content of polyphenols (p = 0.0031), vit. C (p = 0.0000) and the acidity of sorbets (p = 0.0001). In the case of the sensory evaluation, the statistical analysis showed that the type of fruit influenced all the assessed characteristics (p = 0.0000), while the addition of inulin influenced the taste of the sorbets (p = 0.0187), their consistency (p = 0.0242) and overall preference (p = 0.0149). The consistency of the product affects the mouthfeel and overall preference, and in sorbets, it is highly influenced by the time of freezing and the viscosity of the ice cream mixture. The addition of inulin, both 2 and 5%, statistically influenced the viscosity of the prepared sorbet mixtures. The addition of inulin increased the viscosity, and the greater the addition, the more the viscosity increased.
The addition of inulin (2% wt.) to the avocado sorbet (replacing 2% wt. of fruit with inulin) only influenced the viscosity of the prepared mix, while 5% of inulin in the mixture had a statistically significant effect on the improvement of the melting resistance; however, the total polyphenols content and antioxidative properties of this sorbet significantly decreased. The addition of inulin (both 2 and 5% wt.) increased the viscosity of all the sorbets. Inulin in the cantaloupe melon sorbets improved the overrun and melting resistance but decreased the acidity. Inulin addition improved the melting resistance of the kiwifruit, mango and yellow honeydew melon sorbets. The mango sorbets with inulin had lower total polyphenols content, probably due to the lower fruits content, and in these sorbets the difference was significant.
Discussion
The quality of fruits and their products are influenced by four characteristics: color/appearance, flavor, texture, and nutritional value. Consumers first evaluate the color and appearance of the fruit before deciding whether to eat it, and then its flavor helps to determine if they will consume it again [44]. The same principle applies to many products, especially fruit and vegetable preserves. Products such as sorbets must have an acceptable, attractive color. Sorbets should be made of fresh and good quality fruits or vegetables. Consumers are willing to pay more for high-quality products, comparable to the fresh and minimally processed items [13]. The level of fruit ripeness is an important key to ensuring an appropriate level of quality, both sensory and pro-health. Harvesting and storage conditions are crucial determinants of the quality of the fruit food materials that are used fresh and in various functional-type food products and supplements [45]. Despite those factors preceding sorbet preparation, freezing process conditions can affect the flavor, chemical composition, and the color of fruit products. Nevertheless, freezing is recognized as the postharvest technique that best preserves fruit flavor.
Consumption of tropical fruits, such as melons, kiwifruit, avocado or mango, can help fight several diseases. Phytochemicals can fight human diseases and help in preventing cancer, fighting depression, preventing ulcers and removing dandruff, as well as stimulating the immune system [46,47]. However, the vitamins and polyphenol concentration vary not only in different plant species, but also between cultivars, and may be affected by harvesting and storage conditions [48].
Freezing is regarded as a technique that has little damaging effect on the phenolic content of fruits. Some authors reported an increase in the concentration of phenolic compounds after freezing [49], while other studies have shown a significant reduction. Freezing can reduce the concentration of phenolics with no effect on the total antioxidant capacity of the juice, due to the high stability of L-ascorbic acid. This difference could be a consequence of the thawing process [50,51]. Frozen storage for one year did not cause significant losses in the content of polyphenolic antioxidants in most of the analyzed fruits (strawberries, raspberries, red currants, cherries, sour cherries, hawthorn, cornelian cherries, blackberries, white and red grapes) [52]. The investigated sorbets had high antioxidant activity, although the amount of total polyphenols was at a low level (5-11 mg GAE/g product). The avocado and mango sorbets that were studied by Palka [53] contained much higher total phenols (over 260 mg GAE/g product). Such a difference could have been due to the different quality of the fruit that was purchased for research. This could be further evidence that the initial quality of the fruit is crucial for products with the health-promoting features of functional food.
Freezing usually leads to losses in vitamin C content. These losses depend both on the method of freezing (time and temperature of the process) and on the storage conditions (temperature and its fluctuation). A reduction in vitamin C content is linked to drip losses after freezing, and not to a true degradation process [54].
A high total vitamin C content is an attribute of kiwifruit (between 80 and 120 mg/100 g fresh weight for the green Hayward cultivar, while 18 mg/100 g is given for melon and 27.7 mg/100 g for fresh mango). This natural variation in the amount of vitamin C in fruit, including kiwifruit, is due to numerous factors, including growing region and conditions, time and maturity at harvest, and storage conditions [32,55]. The vitamin C content in the sorbets was significantly lower (<0.5 mg for melon sorbets, <9.5 mg for mango sorbets and <15 mg for kiwifruit sorbets) due to the lower content of fruit in the products, but also due to the low initial content of vitamin C in the fruits that were used in the sorbets' production. Therefore, it is important to use good quality fruits in the production of fruit preserves.
The lower content of vitamin C and polyphenols in the tested sorbets with the addition of inulin could be the result of the lower content of fruit in the mixture. The addition of lemon juice to all the sorbets could also have affected the results that were obtained.
The flavor of mango cultivars changes during the production of sorbet; therefore, sorbet manufacturers should select cultivars based on their properties in sorbet. To assist in selecting cultivars for mango puree and sorbet production, consumer studies should be conducted to determine which attributes influence preference [36]. The flavor of sorbets is influenced not only by selected fruits, but also by the addition of sugar, lemon, orange or lime juice. The flavor can also be influenced by the addition of inulin.
Pintor et al. [56] investigated the effect of inulin on the melting resistance and textural properties of low-fat and sugar-reduced ice cream. In their research, higher apparent viscosity resulted in a more stable system with a higher overrun, where inulin controlled the available water. The improvement in melting properties reflected the stable state of the air bubbles-emulsified fat-ice crystals matrix, where the putative effect of inulin to retain water compensating solids and fat reduction retarded the melting of the ice crystals. Inulin retained free water when butyric fat and sugar were reduced, resulting in smaller ice crystals, reflecting a softer texture. Inulin is a functional ingredient (soluble fiber and prebiotic) and can be employed to reduce 30% of the butyric fat content and 12% of the sugar content in the formulation of low-fat reduced-sugar ice cream [56]. The addition of inulin significantly influenced the acidity and color parameters of sorbets in Przybylski et al.'s studies [9]. The content of inulin in frozen ice cream desserts should be determined individually, depending on the type of ice sorbet. The results of the research show that the addition of inulin to fruit and vegetable ice sorbets allows for obtaining products of a satisfactory sensory quality; good physicochemical properties impacted most of the studied discriminants.
Since many studies have demonstrated the high pro-health value of fruit peels, it is also advisable, in order to reduce waste, to use fruit with peel for the production of sorbets. Several studies have proven the presence of a wide range of bioactive compounds in various fruit industrial by-products, which are essentially pomace, peels and seed fractions. These compounds consist mainly of carbohydrates, secondary metabolites, lipids and proteins. Generally, seeds are rich in polyphenols and bioactive lipids, whereas peels are considered as a rich source of dietary fibers. Bioactive compounds are present in fruit by-products in various concentrations and combinations [2,39,57,58]. Avocado peel has antioxidant activity and the potential to be developed as a functional food [59]. In this study, all the fruit peels were removed; however, where possible, retaining the fruit peel should be considered when manufacturing sorbets. Some peels can be difficult to crush into pieces small enough not to be detectable in an organoleptic evaluation; therefore, the method of preparing the mixture is crucial.
Palka [60] verified consumers' opinion of sorbets that were produced from three old apple varieties; all the apples were processed into sorbets, including the peel. The sensory characteristics of traditional varieties were well accepted by consumers [60]. The same author [53] studied the best proportions of ingredients to produce sorbets based on avocado, with a mango flavoring and color enhancer. The organoleptic assessment and physicochemical characteristics of the sorbets confirmed that it was possible to create sensory-attractive sorbets with pro-health properties [36]. Avocado improves the bioavailability of nutrients from other plant-based foods. Therefore, consuming avocados with other fruit and vegetables can be beneficial to human health [22].
The amount and type of sweeteners that are added have a large influence on the consistency and hardness of ice cream and sorbets. The addition of sucrose causes a harder and more brittle consistency; therefore, in addition to sucrose, glucose and maltodextrin are most often added to ice cream, which improves its consistency. The improvement in consistency can also be achieved by using other sucrose substitutes such as xylitol. Naknaen and Itthisoponkul studied the influence of xylitol on the texture of cantaloupe jam and found that increasing the xylitol content caused an apparent decrease in hardness, meaning xylitol could be used as a low-calorie substitute for sucrose [27]. The use of various fruits, as well as various sweeteners, can improve the health-promoting properties of sorbets. These properties are also influenced by the times and techniques of freezing. Pavlyuk et al. [61] scientifically substantiated and experimentally proved the possibility of using the cryogenic "shock" freezing and cryomechanolysis methods for the finely dispersed shredding of fruits and vegetables as an innovative method for structure formation, and for obtaining fruit and vegetable ice cream-sorbets with a record content of BAS [61].
Considering the different starting compositions of the fruits that were used in the experiment, it is not surprising that the sorbets' consistencies were different, and thus their sensory evaluation. In this study, it was proved that for each type of fruit, the ice cream mixture should be composed in such a way as to ensure the appropriate sensory experience, without a significant loss of the pro-health values.
Conclusions
The addition of inulin improved the consistency and melting resistance of the sorbets and influenced their physicochemical properties. The addition of 5% of inulin was more effective than a 2% addition. The overall preference of the assessed sorbets was rated as good (4.1) to almost very good (4.8) in the sensory evaluation, with the exception of the avocado sorbet. Many scientific reports claim that sorbets can be a valuable source of vitamins and polyphenols. The conducted research suggests that in the case of the tested fruits, a combination of different fruits should be prepared in order to obtain a sorbet of excellent quality. Due to their high pro-health and technological value, avocados could be added to ice cream and sorbets that are made of other fruits, but they should not be used as a base for sorbets due to their sensory qualities. The addition of inulin (both 2 and 5% wt.) did not statistically affect the organoleptic features of the sorbets, but improved the physical features (mixture viscosity, overrun of finished products and melting resistance). Therefore, inulin in fruit sorbets could either be replaced by other ingredients that improve both the organoleptic and pro-health features of the sorbets, or it should be added in place of part of the water rather than of the fruit. | 2022-07-03T15:04:41.777Z | 2022-06-30T00:00:00.000 | {
"year": 2022,
"sha1": "7f1d0a3e3cf10cb22b88635cdd474723d794426f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/13/4239/pdf?version=1656604026",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7eb60b5496037738285b9faff50142a1d9d58555",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216271727 | pes2o/s2orc | v3-fos-license | Modeling of a Mill for Processing a Turbine Blade
The aim of the study is to model the cutting edges, front and rear surfaces of the cutter with a constructive feed for machining a turbine blade. In the modeling process, we used the method of compiling mathematical models of technological processes for obtaining surfaces of parts using cutting. As a result, equations of the cutting edge of the mill and equations of the producing surface were obtained, parametric equations of the rear and front surfaces of the cutting teeth were constructed, and it was also proposed to introduce corrections into the insertion motion parameter to reduce the profile distortion after regrinding.
Introduction
An important step in the design process of a shaped cutter with a variable radius is the modeling of the cutting elements of the tool.
The form of the cutting edge equation depends on the type of the front surface (flat, helical) and the angle of inclination of the flat front surface (ω).
The final solution is not the cutting edge equation, but an array of coordinates, because the solution was presented not in an analytical form, but in numerical form.
For the studied case of a curved front surface, the cutting edge is the result of the intersection of the producing surface with a plane passing through the axis of the tool rotation and located at a certain angle θ with respect to the X axis. The resulting edge does not have an inclination angle and belongs to the same plane of the radial section of the cutter, while it is not linear.
Materials and Methods
The equation of the leading surface of the cutting edge can be obtained by solving the equation with respect to the parameters θ and z of the vector r. In order to express this equation, it is necessary to perform the following steps. We express the equation of the front surface using the installation vector, where:
- п — the equation of the front surface plane of the i-th cutting edge in its own coordinate system;
- (0 0 0 1) — a unit vector of zero length;
- п — the installation matrix of the front surface plane;
- γ — the rotation matrix taking into account the rake angle of the cutting edge;
- θ — a parameter defining the angular position of the cutting edge;
- γ — the rake angle.
The installation matrix of the front surface plane can be defined as shown in Fig. 2, where:
- А — a function that determines the position of the axes represented in the coordinate system of the cutter;
- a vector defining the position of the first axis, represented in the coordinate system of the cutter;
- a vector defining the position of the axis O з, represented in the coordinate system of the cutter;
- θ — a parameter defining the angular position of the cutting edge.
Fig. 2. Rear surface plane installation diagram.
Also, for the rear surface, where:
- з — the equation of the rear surface plane of the i-th cutting edge in its own coordinate system;
- зд — the adjustment (installation) matrix of the rear surface plane (Eq. (8));
- α — the rotation matrix taking into account the rear angle of the cutting edge;
- зA — the installation matrix of the plane of the rear surface at α = 0;
- α — the rear angle.
Results
Based on the obtained solutions, it becomes possible to simulate the normal to the front and back surfaces.
Due to the fact that a straight cutting edge with ω = 0 is simpler and cheaper to manufacture in the future, we will build the model for this particular option.
The coordinates of the resulting tool surface depend on the coordinates of the sections of the part. The purpose of the calculation is to create cutting edges whose angular pitch will not depend on these coordinates.
We create an array of angles, that is, we break the surface into edges, where each edge has its own angular position. Then, for each edge, we express its equation [1][2][3][4][5].
To begin with, solving the equation of arccosines, we display the values of the angles of the planes passing through the origin and a point on the curve of the first section of the tool (Fig. 4). As can be seen from the obtained graph, a correction is necessary in solving this expression to ensure a monotonic change in the values of the angles. Therefore, we start the process of converting angles, which will choose a solution that does not violate the construction logic (Fig. 5). We display the values of the angles of the planes passing through the origin and the point on the curve of the first section of the tool, taking into account the adjustment (Fig. 6).
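The branch correction described above can be sketched as follows; this is an assumed re-implementation in Python, not the authors' original calculation. Because arccos alone only returns angles in [0, π], the raw values are adjusted so that the angles of consecutive section points change monotonically.

```python
import numpy as np

def raw_angles(points_xy: np.ndarray) -> np.ndarray:
    """Angle of the plane through the origin and each section point (arccos form)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    theta = np.arccos(x / np.hypot(x, y))   # always in [0, pi]; the sign of y is lost
    return np.where(y < 0.0, -theta, theta)

def monotonic_angles(theta: np.ndarray) -> np.ndarray:
    """Choose the 2*pi branch of each angle so the sequence does not decrease."""
    corrected = theta.astype(float).copy()
    for i in range(1, corrected.size):
        while corrected[i] < corrected[i - 1]:
            corrected[i] += 2.0 * np.pi
    return corrected
```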
Discussion
To set the edge in the secant plane, it is necessary to deprive the angles depending on the coordinates of the blade sections. To do this, we move from the angular position of the point on the first section of the tool surface to the new coordinates, which will be in the same plane relative to the coordinate system of the tool surface (Fig. 7).
Then we express the surface function through the obtained coordinates, thereby obtaining the equation of the cutting edge (Eq. (9)). Next, we define the function for calculating the matrix elements and vector moduli that determine the position of the axes represented in the coordinate system of the cutter.
Using the obtained function, we create the installation matrices of the front and rear surfaces of the cutting tooth and set the rear and front angles to γ = 20° and α = 15°; the equations of the surfaces are expressed relative to the coordinate system created by the installation matrices.
Entering the values of the front and rear angles, we express the equations of the front and rear planes as functions of the angular parameter θ, the section parameter, and the angles γ and α, respectively.
Next, we calculate the angles in the first section of the tool with an equal step relative to the coordinates of the given sections of the blade (Eq. (10)).
Conclusions
From the obtained images it is seen that the rear and front surfaces are not flat, that is, the rear and front angles are variable. The cutting edge itself lies in the plane, but is not straight [10][11][12][13][14][15]. This work was supported by a scholarship from the President of the Russian Federation for 2019-2021. Project number -SP-2950.2019.1. | 2020-03-26T10:40:23.933Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "b593c7f509459c26aba48e63095da87956b3b350",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/17/e3sconf_ktti2020_01017.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "391c5c1cdb7472287d966a09a0b9e110958591aa",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4550946 | pes2o/s2orc | v3-fos-license | The eigenvalues of stochastic blockmodel graphs
We derive the limiting distribution for the largest eigenvalues of the adjacency matrix for a stochastic blockmodel graph when the number of vertices tends to infinity. We show that, in the limit, these eigenvalues are jointly multivariate normal with bounded covariances. Our result extends the classic result of Füredi and Komlós on the fluctuation of the largest eigenvalue for Erdős-Rényi graphs.
1. Introduction. The systematic study of eigenvalues of random matrices dates back to the seminal work of Wigner (1955) on the semicircle law for Wigner ensembles of symmetric or Hermitean matrices. A random n × n symmetric matrix A = (a ij ) n i,j=1 is said to be a Wigner matrix if, for i ≤ j, the entries a ij are independent mean zero random variables with variance σ 2 ij = 1 for i < j and σ 2 ii = σ 2 > 0. Many important and beautiful results are known for the spectral properties of these matrices, such as universality of the semi-circle law for bulk eigenvalues (Erdős et al., 2010;Tao and Vu, 2010), universality of the Tracy-Widom distribution for the largest eigenvalue (Soshnikov, 1999), universality properties of the eigenvectors (Tao and Vu, 2012;Knowles and Yin, 2013), and eigenvalue and eigenvector delocalization (Erdős et al., 2009).
In contrast, much less is known about the spectral properties of random symmetric matrices A = (a ij ) n i,j=1 where the entries a ij are independent but not necessarily mean zero random variables with possibly heterogeneous variances. Such random matrices arise naturally in many settings, with the most popular example being perhaps the adjacency matrices of (inhomogeneous) independent edge random graphs. In the case when A is the adjacency matrix for an Erdős-Rényi graph where the edges are i.i.d. Bernoulli random variables, Arnold (1967) and Ding and Jiang (2010) show that the empirical distribution of the eigenvalues of A also converges to a semi-circle law. Meanwhile, the following result of Füredi and Komlós (1981) shows that the largest eigenvalue of A is normally distributed when E[a ij ] = µ and Var[a ij ] = σ 2 for i < j.
Theorem 1 (Füredi and Komlós (1981)). Let A = (a_ij) be an n × n symmetric matrix where the a_ij are independent (not necessarily identically distributed) random variables uniformly bounded in magnitude by a constant C. Assume that for i > j, the a_ij have a common expectation µ > 0 and variance σ². Furthermore, assume that E[a_ii] = v for all i. Then the distribution of λ_1(A), the largest eigenvalue of A, can be approximated in order n^{−1/2} by a normal distribution with mean (n − 1)µ + v + σ²/µ and variance 2σ², i.e.,

λ_1(A) − ((n − 1)µ + v + σ²/µ) → N(0, 2σ²) in distribution

as n → ∞. Furthermore, with probability tending to 1,

max_{2 ≤ i ≤ n} |λ_i(A)| ≤ 2σ√n + O(n^{1/3} log n).

In the case when A is the adjacency matrix of an Erdős-Rényi graph with edge probability p (so that µ = p, σ² = p(1 − p) and v = 0), Theorem 1 yields

λ_1(A) − ((n − 1)p + 1 − p) → N(0, 2p(1 − p)) in distribution

as n → ∞.
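The fluctuation in Theorem 1 is easy to check numerically. The following sketch is not from the paper; the graph size, edge probability, number of replicates and the loop-free convention v = 0 are assumed. It samples Erdős-Rényi adjacency matrices and compares the empirical variance of the centered largest eigenvalue with 2p(1 − p).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 1000, 0.1, 100
lam1 = np.empty(reps)
for t in range(reps):
    upper = np.triu((rng.random((n, n)) < p).astype(float), k=1)  # Bernoulli(p) edges
    A = upper + upper.T                                           # simple graph: zero diagonal
    lam1[t] = np.linalg.eigvalsh(A)[-1]                           # largest eigenvalue

centered = lam1 - ((n - 1) * p + (1 - p))                         # mean from Theorem 1 with v = 0
print(centered.mean(), centered.var(), 2 * p * (1 - p))
```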
A natural generalization of Erdős-Rényi random graphs is the notion of stochastic blockmodel graphs (Holland et al., 1983) where, given an integer K ≥ 1, the a_ij for i ≤ j are independent Bernoulli random variables with E[a_ij] ∈ S for some set S of cardinality K(K + 1)/2. More specifically, we have the following definition.

Definition 1 (Stochastic blockmodel graph). Let K ≥ 1 be an integer, let π be a probability vector of length K, and let B be a symmetric K × K matrix with entries in [0, 1]. We say (τ, A) ∼ SBM(π, B) if the block assignments τ_1, …, τ_n are i.i.d. with P[τ_i = k] = π_k and, conditional on τ, the a_ij for i ≤ j are independent Bernoulli random variables with E[a_ij] = B_{τ_i τ_j}.
The stochastic blockmodel is among the most popular generative models for random graphs with community structure; the nodes of such graphs are partitioned into blocks or communities, and the probability of connection between any two nodes is a function of their block assignment. The adjacency matrix A of a stochastic blockmodel graph can be viewed as A = E[A] + (A − E[A]), where E[A] is low-rank and (A − E[A]) is a generalized Wigner matrix whose elements are independent mean zero random variables with heterogeneous variances. We emphasize that our assumptions on A − E[A] distinguish us from existing results in the literature. For example, Péché (2006); Knowles and Yin (2014); Bordenave and Capitaine (2016); Pizzo et al. (2013) consider finite rank additive perturbations of a random matrix X, i.e., matrices of the form X + P, under the assumption that X is either a Wigner matrix or is sampled from the Gaussian unitary ensembles. Meanwhile, in Benaych-Georges and Nadakuditi (2011), the authors assume that X or P is orthogonally invariant; a symmetric random matrix H is orthogonally invariant if its distribution is invariant under similarity transformations H → W^{−1}HW whenever W is an orthogonal matrix. Finally, in O'Rourke and Renfrew (2014), the entries of X are assumed to be from an elliptical family of distributions, i.e., the collection {(X_ij, X_ji)} for i < j are i.i.d. according to some random variable (ξ_1, ξ_2) with E[ξ_1 ξ_2] = ρ.
The characterization of the empirical distribution of eigenvalues for stochastic blockmodel graphs is of significant interest, but there are only a few available results. In particular, Zhang et al. (2014) and Avrachenkov et al. (2015) derived the Stieltjes transform for the limiting empirical distribution of the bulk eigenvalues for stochastic blockmodel graphs, thereby showing that the empirical distribution of the eigenvalues need not converge to a semicircle law. Zhang et al. (2014) and Avrachenkov et al. (2015) also considered the edge eigenvalues, but their characterization depends upon inverting the Stieltjes transform and thus currently does not yield the limiting distribution for these largest eigenvalues. Lei (2016) derived the limiting distribution for the largest eigenvalue of a centered and scaled version of A. More specifically, Lei (2016) showed that there is a consistent estimate Ê[A] = (â_ij) of E[A] such that the matrix Ã = (ã_ij) with entries ã_ij = (a_ij − â_ij)/√((n − 1)â_ij(1 − â_ij)) has a limiting Tracy-Widom distribution for its largest eigenvalue, i.e., n^{2/3}(λ_1(Ã) − 2) converges to Tracy-Widom.
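A sketch of the centering and scaling used in Lei (2016) as quoted above; how the estimate â_ij is obtained is not shown and is assumed to be given.

```python
import numpy as np

def lei_statistic(A: np.ndarray, ahat: np.ndarray) -> float:
    """n^(2/3) * (largest eigenvalue of the normalized matrix - 2)."""
    n = A.shape[0]
    A_tilde = (A - ahat) / np.sqrt((n - 1) * ahat * (1 - ahat))
    lam1 = np.linalg.eigvalsh((A_tilde + A_tilde.T) / 2)[-1]  # enforce symmetry
    return n ** (2.0 / 3.0) * (lam1 - 2.0)
```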
This paper addresses the open question of determining the limiting distribution of the edge eigenvalues of adjacency matrices for stochastic blockmodel graphs. In particular, we extend the result of Füredi and Komlós and show that, in the limit, these eigenvalues are jointly multivariate normal with bounded covariances.
2. Main results. We present our result in the more general framework of generalized random dot product graph where E[A] is only assumed to be low rank, i.e, we do not require that the entries of E[A] takes on a finite number of distinct values. We first define the notion of a (generalized) random dot product graph (Young and Scheinerman, 2007;Rubin-Delanchy et al., 2017).
Definition 2 (Generalized random dot product graph). Let d be a positive integer and let p ≥ 1 and q ≥ 0 be integers such that p + q = d. Let I_{p,q} denote the diagonal matrix whose first p diagonal elements equal 1 and whose remaining q diagonal entries equal −1. Let X be a subset of R^d such that x^T I_{p,q} y ∈ [0, 1] for all x, y ∈ X. Let F be a distribution taking values in X. We say that (X, A) ∼ GRDPG(F) with signature (p, q) if X = [X_1 | X_2 | … | X_n]^T where X_1, X_2, …, X_n are i.i.d. F and, conditional on X, the entries a_ij of the symmetric matrix A are, for i ≤ j, independent Bernoulli random variables with P[a_ij = 1] = X_i^T I_{p,q} X_j. We therefore have

P[A | X] = ∏_{i ≤ j} (X_i^T I_{p,q} X_j)^{a_ij} (1 − X_i^T I_{p,q} X_j)^{1 − a_ij}.

When q = 0, we say that (A, X) ∼ RDPG(F), i.e., A is a random dot product graph.
Remark. Any stochastic blockmodel graph (τ, A) ∼ SBM(π, B) can be represented as a (generalized) random dot product graph (X, A) ∼ GRDPG(F) where F is a mixture of point masses. Indeed, suppose B is a K × K matrix and let B = UΣU^T be the eigendecomposition of B. Then, denoting by ν_1, ν_2, …, ν_K the rows of U|Σ|^{1/2}, we can define F = Σ_{k=1}^{K} π_k δ_{ν_k} where δ is the Dirac delta function. The signature (p, q) is given by the number of positive and negative eigenvalues of B, respectively. Similar constructions show that degree-corrected stochastic blockmodel graphs (Karrer and Newman, 2011) and mixed-membership stochastic blockmodel graphs (Airoldi et al., 2008) are also special cases of generalized random dot product graphs.
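The construction in this remark can be written out directly. In the sketch below the values of B are illustrative (not from the paper); the latent positions ν_k are the rows of U|Σ|^{1/2}, and the signature is read off from the signs of the eigenvalues of B.

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.2, 0.3]])                  # illustrative block probability matrix

evals, U = np.linalg.eigh(B)
keep = np.abs(evals) > 1e-12                # d = rank(B) nonzero eigenvalues
evals, U = evals[keep], U[:, keep]

nu = U * np.sqrt(np.abs(evals))             # rows nu_k of U |Sigma|^{1/2}
p, q = int(np.sum(evals > 0)), int(np.sum(evals < 0))
I_pq = np.diag(np.sign(evals))              # plays the role of I_{p,q} for this ordering

# The block probabilities are recovered as nu_k^T I_{p,q} nu_l:
assert np.allclose(nu @ I_pq @ nu.T, B)
```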
Remark. We note that non-identifiability is an intrinsic property of generalized random dot product graphs. More specifically, if (X, A) ∼ GRDPG(F) with signature (p, q), then for any matrix W satisfying W I_{p,q} W^T = I_{p,q} the latent positions XW give rise to the same distribution over adjacency matrices, since (XW)I_{p,q}(XW)^T = XI_{p,q}X^T; such a W is said to be an indefinite orthogonal matrix. For the special case of random dot product graphs where q = 0, the condition on W reduces to that of an orthogonal matrix. With the above notations in place, we now state our generalization of Füredi and Komlós (1981) for the generalized random dot product graph setting.
Theorem 2. Let (X, A) ∼ GRDPG(F) be a d-dimensional generalized random dot product graph on n vertices with signature (p, q). Define the second moment matrix ∆ = E[XX^T] where X ∼ F and suppose that ∆I_{p,q} has p + q = d simple eigenvalues. Let P = XI_{p,q}X^T and for 1 ≤ i ≤ d, let λ̂_i and λ_i be the i-th largest eigenvalues of A and P (in modulus), respectively. Let λ_i(∆I_{p,q}) and ξ_i denote the i-th largest eigenvalue and associated (unit-norm) eigenvector pair for the matrix ∆^{1/2}I_{p,q}∆^{1/2}. Let µ = E[X] and denote by η the d × 1 vector whose elements are given in Eq. (2.3); also let Γ be the d × d matrix whose elements are given in Eq. (2.4). Then the vector (λ̂_i − λ_i − η_i)_{i=1}^{d} converges in distribution to a mean-zero multivariate normal with covariance matrix Γ as n → ∞.

When A is a d-dimensional random dot product graph, Theorem 2 simplifies to the following result.
Corollary 1. Let (X, A) ∼ RDPG(F) be a d-dimensional random dot product graph on n vertices and suppose that the second moment matrix ∆ = E[XX^T] has d simple eigenvalues. Let P = XX^T and let λ_i(∆) and ξ_i denote the i-th largest eigenvalue and associated (unit-norm) eigenvector of ∆. Let µ = E[X] and denote by η the d × 1 vector and by Γ the d × d matrix whose elements are the specializations of Eq. (2.3) and Eq. (2.4) to the case q = 0. Then (λ̂_i − λ_i − η_i)_{i=1}^{d} converges in distribution to a mean-zero multivariate normal with covariance matrix Γ as n → ∞.
To illustrate Corollary 1, let A be an Erdős-Rényi graph with edge probability p; then F is the Dirac delta measure at p^{1/2} and hence ∆ = p, ξ_1 = 1, and λ_1(∆) = p. We thus recover the earlier result of Füredi and Komlós that λ̂_1 − np is asymptotically normal with variance 2p(1 − p).

When the eigenvalues of ∆I_{p,q} are not all simple eigenvalues, Theorem 2 can be adapted to yield the following result.

Theorem 3. Let (X, A) ∼ GRDPG(F) be a d-dimensional generalized random dot product graph on n vertices with signature (p, q). Let P = XI_{p,q}X^T and for 1 ≤ i ≤ d, let λ̂_i and λ_i denote the i-th largest eigenvalues of A and P (in modulus), respectively. Also let v_i be the unit norm eigenvector satisfying (X^T X)^{1/2} I_{p,q} (X^T X)^{1/2} v_i = λ_i v_i for i = 1, 2, …, d, and let u_i = X(X^T X)^{−1/2} v_i. Denote by η = η(X) the d × 1 vector with elements η_i as given in Eq. (2.7) and by σ² = σ²(X) the d × 1 vector whose elements are σ_i² = Var[u_i^T(A − P)u_i]. Then, for each i = 1, 2, …, d, σ_i^{−1}(λ̂_i − λ_i − η_i) converges in distribution to a standard normal as n → ∞.
The main differences between Theorem 3 and Theorem 2 are that (1) we do not claim that the quantities η_i and σ_i² in Theorem 3 (which, for (X, A) ∼ GRDPG(F), are functions of the underlying latent positions X) converge as n → ∞ and (2) we do not claim that the collection (λ̂_i − λ_i)_{i=1}^{d} in Theorem 3 converges jointly to multivariate normal.

[Figure: example with η and σ² specified as in Theorem 3; note that ∆ has repeated eigenvalues, i.e., the eigenvalues of ∆ are 11/30, 2/30 and 2/30.]

The above differences stem mainly from the fact that when the eigenvalues of ∆I_{p,q} are not simple eigenvalues, then n^{−1}X^T X → ∆ as n → ∞ but v_i does not necessarily converge to ξ_i, the corresponding eigenvector of ∆^{1/2}I_{p,q}∆^{1/2}, as n → ∞.
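The O(1) fluctuations asserted by Theorems 2 and 3 can be visualized with a small Monte Carlo experiment. In the sketch below the block sizes, the matrix B and the number of replicates are assumed, and the diagonal of A is sampled like the other entries; it compares the top eigenvalues of A with those of P for a two-block SBM.

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi = 1000, np.array([0.6, 0.4])
B = np.array([[0.5, 0.2],
              [0.2, 0.3]])

tau = rng.choice(2, size=n, p=pi)
Z = np.eye(2)[tau]                                   # one-hot block memberships
P = Z @ B @ Z.T                                      # conditional edge probabilities
lam = np.sort(np.linalg.eigvalsh(P))[::-1][:2]       # the two nonzero eigenvalues of P

diffs = []
for _ in range(50):
    upper = np.triu((rng.random((n, n)) < P).astype(float), k=0)
    A = upper + upper.T - np.diag(np.diag(upper))    # symmetric, Bernoulli diagonal
    lam_hat = np.sort(np.linalg.eigvalsh(A))[::-1][:2]
    diffs.append(lam_hat - lam)

diffs = np.array(diffs)
print(diffs.mean(axis=0), diffs.std(axis=0))         # O(1) fluctuations, not growing with n
```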
A sketch of the proof of Theorem 2 and Theorem 3 is as follows. First we derive the following approximation ofλ i − λ i by a sum of two quadratic forms u i (A − P)u i and u i (A − P) 2 u i , namelŷ Now, the term λ −1 i u i (A − P) 2 u i is a function of the n(n + 1)/2 independent random variables {a rs − p rs } r≤s and hence is concentrated around its expectation, i.e., where the expectation is taken with respect to A. Letting η i = E[λ −1 i u i (A − P) 2 u i ], we obtain, after some straightforward algebraic manipulations, the expression for η i in Eq. (2.7). When the eigenvalues of ∆I p,q are distinct, we derive the limit η i a.s. −→ η i where η i is defined in Eq. (2.3). Next, with u is being the s-th element of u i , is, conditional on X, a sum of independent mean 0 random variables and the Lindeberg-Feller central limit theorem yield When the eigenvalues of ∆I p,q are distinct, then σ 2 i a.s. −→ Γ ii as defined in Eq. (2.4). The joint distribution of (λ i − λ i ) d i=1 in Theorem 2 then follows from the Cramer-Wold device. We now provide detailed derivations of Eq. (3.1) through Eq. (3.3).
Proof of Eq. (3.1) For a given i ≤ d, we have Now suppose thatλ i I − (A − P) is invertible; this holds with high probability for sufficiently large n. Then multiplying both sides of the above display by u i (λ i I − (A − P)) −1 on the left and using the von Neumann identity (I − X) −1 = I + ∞ k=1 X k for X < 1, we have We first assume that all of the eigenvalues of ∆I p,q are simple eigenvalues. The eigenvalues of P = X I p,q X are then well-separated, i.e., min j =i |λ i − λ j | = O P (n) for 1 ≤ i = j ≤ d. The Davis-Kahan sin Θ theorem (Davis and Kahan, 1970;Yu et al., 2015) therefore implies, for some constant C, We can thus divide both side of the above display by u iû i to obtain Equivalently, (3.7) 7 Now λ −1 iλ j = O P (1), and by Hoeffding's inequality, u j (A−P)u i = O P (1). Since u jû i = O P (n −1/2 ), we have We thus have The above bounds then implieŝ Similar to the derivation of Eq. (3.8), we also show that with high probability and thus Eq. (3.9) and Eq. (3.10) implŷ (3.11) and Eq. (3.1) is established.
We now consider the case where the i-th eigenvalue of ∆I p,q has multiplicity r i ≥ 2. Let S i be the indices of the r i eigenvalues λ j of P = XI p,q X that is closest to nλ i (∆I p,q ), i.e., Denote by U S i the n×r i matrix whose columns are the eigenvectors corresponding to the λ j , j ∈ S i . We note that i ∈ S i with high probability for sufficiently large n. Furthermore, |λ j − λ k | = O P (n) for all j ∈ S i and k ∈ S i . Therefore, by the Davis-Kahan theorem, u iû k = O P (n −1/2 ) for all k ∈ S i . We now consider u iû j for j ∈ S i , j = i. We note that (3.12) By Hoeffding inequality, u i (A − P)U S i = O P (1) with high probability. Since j ∈ S i , we have (I − U S i U S i )û j = O P (n −1/2 ) by the Davis-Kahan theorem. We then boundλ j − λ j using the following result of (Cape et al., 2016, Theorem 3.7) (see also O'Rourke et al. (2013, Theorem 23)). 8 Theorem 4. Let A and be a n × n symmetric random matrix with A ij ∼ Bernoulli(P ij ) for i ≤ j and the entries {A ij } are independent. Denote the d + 1 largest singular values of A by 0 ≤σ d+1 <σ d ≤σ d−1 ≤ · · · ≤σ 1 , and denote the d + 1 largest singular values of P by 0 ≤ σ d+1 < σ d ≤ σ d−1 ≤ · · · ≤ σ 1 . Suppose that Υ = max i j P ij = ω(log 4 n), σ 1 ≥ CΥ, σ d+1 ≤ cΥ for some absolute constants C > c > 0. Then for each k ∈ {1, 2, . . . , d}, there exists some positive constant c k,d such that as n → ∞, with probability at least 1 − n −3 , we have We thus have .
We now analyze n −1/2 (λ j − λ i ). We can view P = XI p,q X as a kernel matrix with symmetric kernel h(X i , X j ) = X i I p,q X j where X i , X j ∼ F . As h is finite-rank, let (φ i , λ i (I p,q ∆)) d i=1 denote the eigenvalues and associated eigenfunctions of the integral operator K h : L 2 (F ) → L 2 (F ), i.e., Then, following Koltchinskii and Giné (2000), let Ψ i denote the r i × r i random symmetric matrix whose half-vectorization vech(Ψ i ) is (jointly) distributed multivariate normal with mean 0 and r i (r i + 1)/2 × r i (r i + 1)/2 covariance matrix with entries of the form with a slight abuse of notation, the collection {φ s } s≤r i denote the r i eigenfunctions of K h associated with the eigenvalue λ i (I p,q ∆). A simplification of the statement of Theorem 5.1 in Koltchinskii and Giné (2000), to the setting where h is a finite-rank kernel, yields n 1/2 (λ j /n − λ i (I p,q ∆)) j∈S i → λ i (I p,q ∆) × (λ s (Ψ i )) 1≤s≤r i as n → ∞; here we use the notation λ s (M) to denote the s-th largest eigenvalue, in modulus, of the matrix M. Thus, the joint distribution of {n −1/2 (λ s −nλ i (∆I p,q )} s∈S i converges to a non-degenerate limiting distribution and hence the limiting distribution of n −1/2 (λ i − λ j ) is also non-degenerate. We therefore have Finally, we note that there exists an orthogonal matrix W such that U Û − W = O P (n −1 ). Hence, for any i ≤ d, d j=1 (u iû j ) 2 = 1 + O P (n −1 ); hence, from our bounds for u iû j for j = i given above, we have u iû i = 1 − o P (1). In summary, when the eigenvalues of I p,q ∆ are not all simple eigenvalues, we have (in place of Eq. (3.5) and Eq. (3.6)), the bounds (3.13) Thus Eq.(3.7) still holds and the remaining steps in the derivation of Eq. (3.11) can be easily adapted to yieldλ To derive Eq. (3.2), we show the concentration Z around E[Z] (where the expectation is taken with respect to A, conditional on P) using a log-Sobolev concentration inequality from Boucheron et al. (2003). More specifically, let A = (a rs ) be an independent copy of A, i.e., the upper triangular entries of A are independent Bernoulli random variables with mean parameters {p rs } r≤s . For any pair of indices (r, s), let A (rs) be the matrix obtained by replacing the (r, s) and (s, r) entries of A by a ij and let Z (rs) = λ −1 i u i (A (rs) − P) 2 u i . Then Theorem 5 of Boucheron et al. (2013) states that Theorem 5. Assume that there exists positive constants a and b such that Then for all t > 0, , (3.14) The main technical step is then to bound r≤s (Z − Z (rs) ) 2 . An identical argument to that in proving Lemma A.6 in for some constants a and b. Theorem 5 therefore implies as desired.
We now evaluate Let ζ rs denote the rs-th entry of E[(A − P) 2 ]. We note that ζ rs is of the form We therefore have, Let λ i and v i be an eigenvalue/eigenvector pair for the eigenvalue problem (3.16) (X X) 1/2 I p,q (X X) 1/2 v = λv.
We note that if λ i and v i satisfies Eq. (3.16) then λ i and u i = X(X X) −1/2 v i are an eigenvalue/eigenvector pair for the eigenvalue problem XI p,q X u = λu; λ = 0.
Conversely, if λ i and u i are an eigenvalue/eigenvector pair for XI p,q X then λ i and v i = (X X) −1/2 X u i satisfies Eq. (3.16). In addition, if the vectors {v i } d i=1 are mutually orthonormal then the vectors are also mutually orthonormal. We therefore have By the strong law of large numbers as n → ∞. In addition, when λ i (I p,q ∆) is a simple eigenvalue, then we also have v i → ξ i as n → ∞. We therefore have, when λ i (∆I p,q ) is a simple eigenvalue, that as desired.
Proof of Eq. (3.3) We recall that, conditional on P, u i (A − P)u i is a sum of mean zero random variables. Therefore, by the Lindeberg-Feller central limit theorem for triangular arrays, we have σ −1 i u i (A − P)u i converges to standard normal; here σ 2 i = Var[u i (A − P)u i ]. All that remains is to evaluate σ 2 i . Since A − P is symmetric, we have k l (X k (X X) −1/2 v i ) 2 (X l (X X) −1/2 v i ) 2 X k I p,q X l (1 − X k I p,q X l ) + o P (1) = 2 k (X k (X X) −1/2 v i ) 2 X k I p,q l (X l (X X) −1/2 v i ) 2 + o P (1) − 2tr k (X k (X X) −1/2 v i ) 2 X k X k I p,q l (X l (X X) −1/2 v i ) 2 X l X l I p,q When λ i (I p,q ∆) is a simple eigenvalue, the strong law of large numbers and Slutsky's theorem implies, σ 2 i → E[ξ i ∆ −1/2 XX ∆ −1/2 ξ i X] I p,q E[ξ i ∆ −1/2 XX ∆ −1/2 ξ i X] − tr E[ξ i ∆ −1/2 XX ∆ −1/2 ξ i XX ]I p,q E[ξ i ∆ −1/2 XX ∆ −1/2 ξ i XX ]I p,q = Γ ii | 2018-03-30T17:43:16.000Z | 2018-03-30T00:00:00.000 | {
"year": 2018,
"sha1": "d4483509a253be94addddce26aa61e6b115e56b5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d4483509a253be94addddce26aa61e6b115e56b5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
214113039 | pes2o/s2orc | v3-fos-license | Design a web-based asset management information system using the straight line method for private universities
Universities cannot be separated from the assets they own; therefore, an information system is needed as a tool in the asset management process. Most asset management in private universities is still done with spreadsheets, and asset management is also not centralized. These problems can be solved by creating a web-based asset management information system. The purpose of this research is to design and build an asset management information system for a private university. The straight-line method is used to calculate the depreciation of assets and to track the useful life of each asset. This system can help private university administrative staff manage assets and calculate asset values.
Introduction
Universities that have been established for a long time usually have many assets in varying conditions, so that assets may no longer be recognized or located, or may even be lost. Asset management is a series of activities that are carried out by identifying what assets are needed, identifying funding needs, acquiring assets, providing logistical and maintenance support systems for assets, and removing or renewing assets so that they can effectively and efficiently fulfill their objectives [1].
This asset management information system is created to manage asset data easily, effectively and accurately, covering the recording, searching, recapitulation and deletion of asset data. In addition, this system also provides an asset valuation function, so that asset value recapitulations, including depreciation values, residual values, estimated useful lives and net book values, can be assessed and processed automatically [2].
The depreciation of fixed assets applied in this system uses the straight-line depreciation method, in accordance with the provisions of the tax regulations set out in Law Number 7 of 1983. This relates to one of the objectives of preparing an asset value report in each company, namely to fulfill one of the data requirements of the taxpayer, whose payment is made annually at the end of the company's accounting period [3,4].
Methodology
Assets are anything that has economic value, commercial value or exchange value owned by business entities, institutions or individuals [5]. Assets usually take the form of physical goods such as land certificates, apartments and others, but can also be intangible, such as stocks, copyrights and trademarks [6].
The stages of developing software with the prototype method are as follows. Communication: the developer and user define the overall objectives and describe the parts that will be needed. Quick Plan: the design is done quickly and represents all known aspects of the software, and this design is the basis for making prototypes. Modeling Quick Design: focuses on the representation of software aspects that can be seen by the user; modeling the quick design tends to lead to making a prototype. Construction of Prototype: build a framework or design prototype of the software that will be built. Deployment, Delivery and Feedback: the prototype that has been made by the developer is distributed to the user to be evaluated; the user then provides feedback that is used to revise the requirements of the software to be built. This process is repeated [7,8].
The advantages of the prototype include: users can request changes while the system is still in prototype form; it provides results that are more accurate than previously estimated, because the desired functions and complexity become well known; and users feel satisfied. First, the user learns early about the computer and the application that will be made for him. Second, the user is directly involved from the start, which builds enthusiasm that supports the analysis during the project [8].
A Data Flow Diagram is a diagram used to describe an existing system or a new system to be developed in a logical, structured and clear way, and to explain the flow of data from input to output [9,10]. The straight-line method involves three different processes for calculating asset values, namely the calculation of depreciation per year, depreciation in the first year, and depreciation in the last year [11].
In the first year, asset depreciation is calculated based on how long (in months) the asset is used starting from usage until the end of the first accounting period. In the last year of the asset's life, the depreciation value is the book value of assets, which means that the asset value is fully depreciated in the final year of use [12].
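As a concrete illustration of the three cases just described, the sketch below is a hypothetical re-implementation with assumed field names and example figures, not the code of the system itself: it prorates the first year by the months in service, applies the full annual amount in between, and books the remaining value in the final year.

```python
from datetime import date

def straight_line_schedule(cost: float, salvage: float,
                           useful_life_years: int, in_service: date):
    """Return (year, depreciation amount) pairs under the straight-line method."""
    depreciable = cost - salvage
    annual = depreciable / useful_life_years
    months_first_year = 12 - in_service.month + 1     # months in use in year one

    schedule, remaining = [], depreciable
    year, amount = in_service.year, annual * months_first_year / 12
    while remaining > 1e-9:
        amount = min(amount, remaining)               # last year: remaining book value
        schedule.append((year, round(amount, 2)))
        remaining -= amount
        year, amount = year + 1, annual               # full annual amount afterwards
    return schedule

# Example: a 12,000,000 asset, no residual value, 4-year life, used from April.
print(straight_line_schedule(12_000_000, 0, 4, date(2019, 4, 1)))
```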
Results and discussion
The design of this asset management information system aims to illustrate the system before the application is built. In this section we explain the system design to be developed and the steps involved. The design used in the application is a structured design using the Context Diagram in Figure 1. The Context Diagram of the asset management information system contains two external entities, namely staff and leader. The Level 1 DFD describes the data flow activities that occur in the asset management information system. In this diagram there are two entities and four processes, among them the login and logout processes.
Construction of prototype
The home page is the start page shown after login; it displays information about the total number of assets, the total number of rooms, the total number of buildings and the total value of assets, as shown in Figure 3. Interface design is one of the important parts of software development, because users evaluate software largely through its interface or appearance. Figure 4 shows the interface design for the asset data page, where asset data can be added, edited and deleted. Figure 5 shows the interface for the room data page, which contains data on the rooms or spaces in the university. Figure 6 shows the maintenance data page, which serves to record the assets that have been serviced and to determine the next treatment.
Testing
Testing used black box testing, which focuses on the functions of the software system without knowledge of the program's internal structure. Testing was conducted to check the final results in the form of asset depreciation. The system's depreciation calculations were compared against manual calculations for 117 data records.
Conclusion
Applying the straight-line method in this system involves three different calculation categories, namely the annual depreciation calculation, the depreciation calculation for assets in their first year of use, and the depreciation calculation for assets in the last year of their useful life. In addition, based on the results of tests conducted on 117 data records, the system produces an accuracy rate of 88.03%, indicating that the system is functioning properly. | 2019-12-19T09:18:06.272Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "f49dabc6087952b5dac1b201346f44eeb2aa06be",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1402/6/066057",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ebb4b0218823cad6aa79e096e8937e3c61d87c1b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
8531371 | pes2o/s2orc | v3-fos-license | Chiral Lagrangians from lattice gauge theories in the strong coupling limit
We derive nonlinear sigma models (chiral Lagrangians) over symmetric spaces U(n), U(2n)/Sp(2n), and U(2n)/O(2n) from U(N), O(N), and Sp(2N) lattice gauge theories coupled to n flavors of staggered fermions, in the large-N and g^2 N limit. To this end, we employ Zirnbauer's color-flavor transformation. We prove the spatial homogeneity of the vacuum configurations of mesons by explicitly solving the large-N saddle point equations, and thus establish the above patterns of spontaneous chiral symmetry breaking without any assumptions.
Introduction. The concept of spontaneous symmetry breaking introduces an essential parallelism between the spectral statistics of disordered condensate [1] and the low-energy dynamics of strongly coupled gauge field theories [2], involving a juxtaposition of the lightest and weakly coupled Goldstone particles in these theories: diffusion modes associated with spontaneous breakdown of the symmetry between advanced and retarded Green's functions, and pions associated with spontaneous breakdown of the chiral symmetry. However there is a considerable difference at the practical level. An ensemble averaging can be analytically performed for disordered Hamiltonians, so that the Goldstone manifold and the nonlinear σ model over it describing the low-frequency regime of the theory manifest themselves without ambiguity. On the other hand, gauge theories with propagating gluons do not admit such an analytic computation. Accordingly, in order to identify a low-energy effective theory one needs to appeal to nonperturbative theorems, namely Vafa-Witten's theorem [3] that forbids spontaneous breaking of the vector symmetry in a vectorial theory, and 't Hooft's anomaly matching [4] between fundamental and effective theories. Relying on these theorems, non-standard patterns of chiral symmetry breaking have been studied [5] in the context of technicolor models.
The symmetry-based approach puts a strong restriction on operators that are allowed in the effective chiral Lagrangian, but the setback is that their coupling constants are left as free parameters that can by no means be related to those of the fundamental theory. To overcome this difficulty, attempts [6] were made to derive an effective theory directly from a microscopic model under an extreme condition, namely a U(N ) lattice gauge theory in the strong coupling limit, combined with the large-N limit. This serves as a basis on which a strong coupling expansion is performed and the plaquette action can be taken into account perturbatively in (g 2 N ) −1 [7].
Recently Altland and Simons [8] have renewed interest in this subject. Instead of computing a one-link integral via the Brézin-Gross formula [9] valid only in the large-N limit, they have adopted an alternative method, Zirnbauer's color-flavor transformation [10,11], which has a clear advantage of being valid at finite N . By converting the integrations over strongly coupled gluonic variables into supposedly weakly coupled mesonic variables, they have provided a transparent derivation of the low-energy effective action corresponding to fermions in the complex representation. However, their and all the previous works have relied on a crucial and far-from-evident assumption that the mesonic fields in the vacuum are homogeneous.
The purpose of this paper is twofold: by applying the color-flavor transformation to strongly coupled lattice gauge theories with three classical gauge groups and with staggered fermions, we shall i) demonstrate the spatial homogeneity of the mesonic fields in the vacuum, thereby establishing the chiral symmetry breaking rigorously, and ii) extend the method of Altland and Simons, valid for quarks in the complex representation of the gauge group, to the other two classes of representations, so as to derive non-standard types of chiral Lagrangians [12].
Lattice Gauge Theory with Staggered Fermions. We consider a lattice gauge theory coupled to staggered fermions on an even-dimensional lattice $\mathbb{Z}^d \ni x$ (d even), which is split into odd and even sublattices labeled by • and ×, respectively. The set of 2d odd (even) sites adjacent to an even (odd) site x will be denoted as $\bullet_x$ ($\times_x$). We write the partition function with an emphasis on the bipartite nature (the lattice spacing is set to unity): Here the link variable $U^{ij}_{xy}$ takes its value in the gauge group G = U(N), O(N), or Sp(2N).
The Haar measure of G is denoted by dU, and $S[U] = g^{-2}\sum_{xyzw} \mathrm{tr}\, U_{xy} U^{\dagger}_{zy} U_{zw} U^{\dagger}_{xw} + \mathrm{c.c.}$ is the plaquette gauge action. The site variables $\psi^{i,a}_x$ and $\bar{\psi}^{i,a}_x$ are independent Grassmann numbers. The color indices $i, j, \ldots$ run from 1 to N for G = U(N), O(N) or to 2N for G = Sp(2N), and the flavor indices $a, b, \ldots$ run from 1 to n. The phase is defined by $\eta_{x\,x\pm\hat{\mu}} = (-1)^{x_1+\cdots+x_{\mu-1}}$, and the quark mass matrix is given by $m = \mathrm{diag}(m_1, \ldots, m_n)$.
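The displayed formula for the partition function itself did not survive extraction. A schematic reconstruction, assuming the standard staggered-fermion conventions implied by the phase factor and mass matrix just defined (the precise normalization used by the authors is an assumption here), reads:
$$
Z \;=\; \int\!\prod_{\langle xy\rangle} dU_{xy}\; e^{S[U]} \int\!\prod_{x} d\bar\psi_x\, d\psi_x\;
\exp\Big[\sum_{x}\bar\psi^{\,i,a}_x m_{ab}\,\psi^{i,b}_x
+\sum_{x\in\times}\sum_{y\in\bullet_x}\eta_{xy}\big(\bar\psi^{\,i,a}_x U^{ij}_{xy}\psi^{j,a}_y-\bar\psi^{\,j,a}_y (U^{\dagger}_{xy})^{ji}\psi^{i,a}_x\big)\Big].
$$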
As the fundamental representation of O(N ) (Sp (2N )) is real (pseudoreal), i.e. its symmetric (antisymmetric) product contains an invariant, the chiral symmetry group is extended to U(2n) for these cases [5]. This can be made apparent by introducing 2n-flavored site variables , resp., and writing the fermionic action as Extended mass matrices m AB in the above are defined by Color-Flavor Transformation. Henceforth we concentrate on the strong coupling limit g 2 N → ∞. The theory is thus referred to as a non-Abelian random flux model.
In this case, η xy can be absorbed into the redefinition of the link variables. We can reexpress the partition function of this model in terms of an integration over flavor-singlet link variables into the one over color-singlet link variables, by Zirnbauer's color-flavor transformation. That for the U(N ) link variable coupled to fermions reads [10,11], up to an irrelevant numerical factor, The integration on the RHS is over complex n × n matrices, N = GL(n, C). Although the case with the O(N ) link variable coupled to fermions was not explicitly provided (see Ref. [11] for its bosonic counterpart), one can show that The integration on the RHS is over the set of complex 2n × 2n antisymmetric matrices, which is diffeomorphic to N = GL(2n, C)/Sp(2n, C). The case with the Sp(2N ) link variable coupled to fermions reads [10] Sp(2N ) The integration on the RHS is over the set of complex 2n × 2n symmetric matrices, which is diffeomorphic to N = GL(2n, C)/O(2n, C). Applying these transformations to each link and then integrating over fermions, we obtain Here ǫ = 0, Homogeneity of the Saddle Point. So far the manipulation applied to our non-Abelian random flux models was exact. We finally make an approximation, by taking the N → ∞ limit. The saddle point equations read (6) (x ∈ ×, y ∈ • x ) for G = U(N ). Those for G = O(N ) or Sp(2N ) are identical as the above, with Z xy constrained to be antisymmetric or symmetric. Now we shall demonstrate that the unique solution to Eq.(6) at m = 0 is given by i.e. the vacuum configuration of the mesonic variable is homogeneous. In other two cases, we simply need to restrict U to be an antisymmetric (G = O(N )) or a symmetric (G = Sp(2N )) unitary matrix, which in turn can be parametrized by another unitary matrix V ∈ U(2n) by the canonical projection U = V JV T (J = iσ 2 ⊗ 1 1 n ) or U = V V T , respectively.
The set of saddle point equations involving an even site x read, after using abbreviations Z p ≡ Z xyp for y 1 , . . . , y 2d ∈ • x , Here Z 1 , . . . , Z 2d ∈ GL(n, C). Eq.(8) clearly leads to We make singular value decomposition of the matrices where U p , V p ∈ U(n) and s a . Without loss of generality we choose (U p , V p ) so that σ (p) 1 ≥ · · · ≥ σ (p) n . From Eq.(9), we have From the uniqueness of the eigenvalues, we find that the diagonal matrix S −1 p + S p is independent of p, and thus is simply denoted as S −1 p + S p = diag (σ 1 , . . . , σ n ) . Then Eq.(12) in a componentwise notation reads As σ b + σ a is nonzero, we find Using Eq. (15) in Eq. (11), we obtain Note that W † qp = W pq . In terms of W qp , Eq.(8) reads Let us now suppose σ 1 = · · · = σ r > σ r+1 ≥ · · · ≥ σ n and consider possible two cases separately.
In the case r = n, we have We rewrite the saddle point equations (8) where p, q = 1, . . . , 2d and p = q. Since W † rp S r W rp are positive-definite, the LHS of Eq.(19) is positive-definite. Therefore S −1 p and W † qp S q W qp are ordered hermitian matrices [13] so that the largest and smallest diagonal elements (eigenvalues) of S −1 p are larger than the largest and smallest diagonal elements of S q , respectively. Suppose the largest diagonal element of S −1 p is s. From Eq.(18), the diagonal elements of S p and S q are s or 1/s. The largest diagonal element of S q is 1/s and s > 1, since it must be smaller than s. Therefore all the diagonal elements of S q are 1/s. Then the smallest diagonal element of S −1 p has to be s, because it must be larger than the smallest diagonal element of S q . Consequently all the diagonal elements of S −1 p are s and we conclude In the case r < n, we obtain from Eqs. (14) and (16), (W qp ) ab = 0 for a = 1, . . . , r, b = r + 1, . . . , n a = r + 1, . . . , n, b = 1, . . . , r for an arbitrary pair of indices p and q. It means that the saddle point equation (17) is decomposed into those with smaller ranks r and n − r. Thus we can inductively show Z 1 = · · · = Z 2d , using the argument given in the previous case for each irreducible component. Substituting Z 1 = · · · = Z 2d into Eq.(8), we obtain As this proof applies for each of even sites, or odd sites with Z xy substituted by Z † xy , we finally establish Eq.(7). Chiral Lagrangian. Next we take into account a small deviation from the chiral limit, as well as fluctuations of the mesonic field around the vacuum configuration that has been proven in the last section to be homogeneous.
The terms to be collected are of lowest nontrivial orders in masses and momenta, which are O(m 1 ) and O(∂ 2 ), respectively. In the case of G = U(N ) [8], we split the mesonic variable into massive and Goldstone modes as where P xy ∈ GL(n, C)/U(n) and U xy ∈ U(n). In the case of G = O(N ) or Sp(2N ), the mesonic variable is constrained to be antisymmetric or symmetric, respectively, so we must employ parameterizations where V xy is the Goldstone mode over M = U(2n)/ Sp(2n) or U(2n)/O(2n), resp., and P xy is the massive mode over N /M. Then we expand the Goldstone mode U xy = exp(∂ xy )U x up to quadratic order in the directional derivatives ∂ xy , and Gaussian-integrate over the massive mode P xy = exp(X xy ) by retaining up to quadratic order in X xy . We reinstate the lattice spacing a and employ dimensionful continuum notations Then the resulting effective partition function reads where U = V, V JV T , V V T for G = U(N ), O(N ), Sp(2N ), respectively, and DV = x∈× dV x . The coefficient C of the singlet (η ′ ) kinetic term is a constant of O(1) depending on d and G [8], but we suppress its explicit form as η ′ should decouple from the Goldstone sector if the U(1) A anomaly were properly taken into account. Consequently the pion decay constant squared and the chiral condensate (the coefficients of the first and the second terms) are derived from the first principle. Conclusion. We have rigorously derived three classes of low-energy chiral Lagrangians from microscopic theories: non-Abelian random flux models, or lattice gauge theories with staggered fermions in the strong coupling limit. These nonlinear σ models have symmetric spaces M = U(n), U(2n)/Sp(2n), U(2n)/O(2n) as their target manifolds, depending on the complexity, reality, and pseudoreality of the defining representations of the gauge groups G = U(N ), O(N ), Sp(2N ), respectively. These results are anticipated from Vafa-Witten argument [12] or equivalently from the anti-unitary symmetries of the staggered Dirac operators in Eq.(1), whose matrix elements are trivially complex, real, and quaternion real for these three gauge groups. For the last two cases of the gauge groups, the anti-unitary symmetries of lattice Dirac operators are interchanged from those of the continuum Dirac operators, due to the absence of the charge conjugation matrix acting on spinor indices, that squares to −1 [14]. The puzzle of if and how the crossover between different classes of effective theories could occur, originally posed for staggered fermions in the adjoint of SU(2) (= fundamental of O(3)) and in the fundamental of SU(2) (≈ fundamental of Sp(2)), remains unsolved. Although the Wess-Zumino term in d = 2 dimensions is of the same order as the kinetic term, our models involving staggered fermions do not yield it.
Verbaarschot [15] has conjectured that the three patterns of chiral symmetry breaking induce the spectral fluctuation of the Dirac operators to obey three universality classes of chiral random matrices [16]. If our parameters are restricted to the 'ergodic' domain where the size of the lattice is much smaller than the Compton length of pseudo-Goldstone bosons, the chiral Lagrangians are dominated by their zero modes, i.e. become finite-dimensional integrals involving only the mass terms in Eq.(24) [17]. As these three 'finite-volume' partition functions are known [16,18] to be equivalent to chiral random matrix ensembles at β = 2, 1, 4 in the large matrixdimension limit, we have also proven his conjecture. | 2017-09-16T20:23:06.522Z | 2000-12-29T00:00:00.000 | {
"year": 2000,
"sha1": "593135d0e8a6d3091faafd4d5f51edf4ca7aaac0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/0012029",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ccb5a4d23578727b9302ad59e63c0ab578cf8ff1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118727004 | pes2o/s2orc | v3-fos-license | A metapopulation model with local extinction probabilities that evolve over time
We study a variant of Hanski's incidence function model that accounts for the evolution over time of landscape characteristics which affect the persistence of local populations. In particular, we allow the probability of local extinction to evolve according to a Markov chain. This covers the widely studied case where patches are classified as being either suitable or unsuitable for occupancy. Threshold conditions for persistence of the population are obtained using an approximating deterministic model that is realized in the limit as the number of patches becomes large.
Introduction
A metapopulation is a collection of local populations of a single species occupying spatially distinct habitat patches. This division of the population may be due to natural variation in the landscape or artificial fragmentation of the habitat. Although the local populations are geographically separated, they still interact through colonising patches that no longer support a local population. This process enables the species to persist despite local extinction events.
The aim of much of metapopulation ecology is to identify and quantify extinction risks. This is often achieved using Stochastic Patch Occupancy Models (SPOMs), which are well established in the ecology literature [13]. A SPOM is a discrete-time Markov chain that models the presence/absence of the focal species at each habitat patch in the metapopulation. The simplest example of a SPOM is the stochastic logistic model [37,28], which provides a model of the number of occupied patches under very strong assumptions. Hanski [11] proposed a more realistic SPOM called the Incidence Function Model (IFM), which, since its inception, has been widely employed in empirical studies [30].
One of the most useful properties of the IFM is that the colonisation and extinction probabilities are parameterised in terms of landscape characteristics such as distance between patches and patch area. It is implicitly assumed that, when applying the IFM, the landscape is static. However, for many metapopulations, the dynamics of the landscape play an important role in the persistence/extinction of the species [34]. As an example, Hanski [12] mentions the marsh fritillary butterfly (Eurodryas aurinia) whose host plant (Succisa pratensis) occurs in forest clearings that are between two and ten years old.
The metapopulation of sharp-tailed grouse (Tympanuchus phasianellus), which occupies areas of grassland, is similarly affected by landscape dynamics [2,10]. For this species, fire opens new grassland areas and prevents the encroachment of forests. Other examples include metapopulations of the perennial herb (Polygonella basiramia) [5] and metapopulations of the beetle (Stephanopachys linearis), which breeds only in burned trees [32].
In these examples, the landscape dynamics are driven by secondary succession, and this is often the case regardless of whether the focal species depends on a seral community or the climax community.
There have been a number of approaches proposed to model metapopulations in dynamic landscapes. Several authors [18,33,38,39] have incorporated habitat dynamics into the stochastic logistic model by allowing each patch to alternate between being suitable or unsuitable for supporting a local population according to some Markov chain [see also the related approach in 7]. Others [35,16] have attempted to deal with landscape dynamics by incorporating the time elapsed since a patch was colonised. A third approach is to model the evolution of the relevant characteristics of the landscape and use these in the colonisation and extinction probabilities of the metapopulation model [12].
In this paper, we adopt this third approach.
Starting with a variant of the IFM, we model the landscape dynamics by allowing the probability of local extinction to evolve according to a continuous-state Markov chain.
By modelling the landscape dynamics in this way, the approach of classifying patches as being suitable or unsuitable is included as a special case. Our aim is to derive threshold conditions for metapopulations with dynamic landscapes comparable to those available in the static landscape case [for example , 29]. To this end, a 'law of large numbers' is derived which shows that the stochastic model can be well approximated by a certain deterministic model when the number of patches is large. This deterministic model is then used to derive the threshold condition. The work presented here builds on our previous analyses of metapopulations models [21,23,26]. All proofs are given in the Appendix.
Model description and main assumptions
As previously noted, the transition probabilities of the IFM [11] are determined by characteristics of the patches. For the i-th patch, these characteristics are its location z i , a weight a i related to the size of the patch, and the probability s i that a population occupying this patch survives a given period of time. For an n-patch metapopulation, its state at time t is described by the binary vector X n t = (X n 1,t , . . . , X n n,t ), where X n i,t = 1 if patch i is occupied at time t and X n i,t = 0 otherwise. Assuming a static landscape and conditional on the patch characteristics, the evolution of the metapopulation follows a discrete time Markov chain. It is assumed that the colonisation and extinction events occur in phases with observations of the state of the metapopulation made after the extinction phase. This type of phase structure has previously been used in [1,8,14,21,23]. Conditional on X n t and the patch characteristics, the X n i,t+1 (i = 1, . . . , n) are independent with transitions given by P X n i,t+1 = 1 | X n t , z n , a n , s n = s i X n i,t + s i f n −1 where D(z,z) ≥ 0 is a measure of the ease of movement between patches located at z andz, and f : [0, ∞) → [0, 1] (called the colonisation function). We note that although X i,t appears in the colonisation probability for patch i, it provides no contribution since patch i can only be colonised if X i,t = 0. Further explanation of this point can be found in McVinish and Pollett [26].
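The displayed transition probability above is truncated in the extraction, so the update implemented below (connectivity-weighted colonisation followed by survival with probability s_i, our reading of Eq. (2.1)) should be treated as an assumption; the colonisation function f(x) = x/(1+x), the dispersal kernel, and all parameter values are likewise illustrative. A minimal simulation sketch of one colonisation-extinction step of this incidence-function-type SPOM:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Colonisation function f: [0, inf) -> [0, 1]; f(x) = x / (1 + x) is an illustrative choice.
    return x / (1.0 + x)

def spom_step(X, a, z, s, D):
    """One colonisation-then-extinction phase of the SPOM (our reading of Eq. (2.1))."""
    n = len(X)
    # Connectivity-weighted colonisation pressure on each patch.
    pressure = (D(z[:, None], z[None, :]) * (a * X)).sum(axis=1) / n
    colonise_prob = f(pressure)
    # Occupied after the colonisation phase: already occupied, or newly colonised.
    occupied = np.where(X == 1, 1, rng.random(n) < colonise_prob).astype(int)
    # Extinction phase: the local population survives with probability s_i.
    return (occupied * (rng.random(n) < s)).astype(int)

if __name__ == "__main__":
    n = 200
    z = rng.uniform(0, 1, size=(n, 2))                    # patch locations in the unit square
    a = rng.uniform(0.5, 1.5, size=n)                     # patch weights (areas)
    s = rng.uniform(0.6, 0.95, size=n)                    # survival probabilities
    D = lambda u, v: np.exp(-5.0 * np.linalg.norm(u - v, axis=-1))  # dispersal kernel
    X = (rng.random(n) < 0.3).astype(int)                 # initial occupancy
    for t in range(50):
        X = spom_step(X, a, z, s, D)
    print("occupied fraction after 50 steps:", X.mean())
```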
Although there are several ways in which landscape dynamics can be incorporated into the model defined by transition probabilities (2.1), we only consider the case where the local population survival probabilities evolve over time, but the patch areas and connectivity remain static. For each i, let s i,t denote the probability that the population occupying patch i survives from time t to time t + 1. The transition probabilities for X n t are now given by P X n i,t+1 = 1 | X n t , z n , a n , s n t = s i,t X n i,t + s i,t f n −1 Our analysis of the model (2.2) is based on a number of assumptions. The first four are essentially the same as those used in McVinish and Pollett [26].
(C) D(z,z) defines a uniformly bounded and equicontinuous family of functions on Ω.
That is, there exists a finite constantD such that for all z 1 , z 2 ∈ Ω, |D(z 1 , z 2 )| ≤D, and for every ǫ > 0 there exists a δ > 0 such that for all z 1 , Furthermore, D(z,z) > 0 for all (z,z) ∈ Ω × Ω. Although the assumption that the patches evolve independently has been previously used in metapopulation models with dynamic landscapes [18,33,38,39], sometimes only implicitly, it must be noted that independence excludes some important forms of landscape dynamic. In particular, disturbances that affect multiple patches instantaneously, such as widespread fires or droughts, are excluded by this assumption. The Markov chain model for the survival probabilities can incorporate the suitable/unsuitable approach to landscape dynamics. Patches that are unsuitable at time t are equated with those patches for which s i,t = 0; for any patch that is colonised with s i,t = 0, the local population immediately goes extinct. Suitable patches are those for which s i,t > 0. To recover the type of dynamic typically used, the Markov chain for the survival probabilities reduces to a Markov chain with two states 0 and s * > 0; the transition kernel is given by for some p 0 , p 1 > 0. For x ∈ {0, s * }, P (x, dr) can be set to ensure the weak Feller property holds.
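For the suitable/unsuitable special case just described, each patch's survival probability follows a two-state Markov chain on {0, s*}. Because the kernel formula itself is missing from the extraction, the directions assigned below to p0 (unsuitable to suitable) and p1 (suitable to unsuitable) are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def landscape_step(s, s_star, p0, p1):
    """Update survival probabilities: 0 -> s* w.p. p0, s* -> 0 w.p. p1 (assumed directions)."""
    u = rng.random(len(s))
    restored = (s == 0) & (u < p0)          # unsuitable patch becomes suitable
    destroyed = (s == s_star) & (u < p1)    # suitable patch becomes unsuitable
    s = s.copy()
    s[restored] = s_star
    s[destroyed] = 0.0
    return s

if __name__ == "__main__":
    n, s_star, p0, p1 = 1000, 0.9, 0.05, 0.02
    s = np.full(n, s_star)
    for t in range(500):
        s = landscape_step(s, s_star, p0, p1)
    # Long-run suitable fraction should approach p0 / (p0 + p1), about 0.71 here.
    print("suitable fraction:", (s == s_star).mean())
```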
The last of our main assumptions concerns the initial variation in the landscape. Let The measure σ n,t describes the landscape of the n patch metapopulation model at time t.
It is purely atomic placing mass n −1 a i at the point determined by patch i's location and its survival probability at time t. We assume that σ n,0 satisfies the following: (F) As n → ∞, σ n,0 d → σ 0 for some non-random measure σ 0 , that is [17,Theorem 16.16] h(s, z)σ n,0 (ds, dz) Assumption (F) is satisfied if, for example, the random vectors (z i , a i , s i,0 ) are independent and identically distributed. Although this assumption only concerns the initial variation in the landscape, it implies a similar 'law of large numbers' for the landscape at all subsequent times.
Law of large numbers
Consider the array of random measures µ n,t constructed from the Markov chain X n t by The measure µ n,t has a similar structure to σ n,t , but only involves those patches that are occupied at time t. These measures can be used to determine quantities such as the proportion of occupied patches in a given area weighted by the patch size. The following theorem describes the behaviour of the metapopulation as the number of patches tends to infinity.
For Theorem 3.1 to provide useful information on the evolution of the metapopulation, it is necessary that the limiting proportion of occupied patches is positive. If only a finite number of patches are initially occupied, then as n → ∞, the µ n,0 will converge to the null measure, and, since f (0) = 0, it will follow that µ t is the null measure for all t ≥ 0.
A different type of analysis is required to analyse the evolution of the metapopulation when it is very close to extinction [Section 4 of 26, provides an example of this type of analysis with a static landscape].
A consequence of Theorem 3.1 is that the occupancy status of a single patch converges to a Markov chain with time dependent transition probabilities.
The proof of Corollary 3.2 uses the same arguments as in the proof of Corollary 1 of McVinish and Pollett [23].
We may simplify recursion (3.3) by simplifying the evolution of the landscape. This is done by assuming that the landscape is in an equilibrium.
For some landscape dynamics, σ t will converge to an invariant measure. If the landscape has existed for a long time, then Assumption (G) should be reasonable. Applying the same arguments as in McVinish and Pollett [24,Lemma 5], it can be shown that, for all t ≥ 0, µ t is absolutely continuous with respect to σ. Therefore, one might hope to obtain a recursion for the Radon-Nikodym derivative of µ t with respect to σ. Define the measure ν such that, for any measurable subset A of [0, 1], ν(A) := σ(A×Ω).
From Lemma 2.1, ν is an invariant measure for P. Assuming that the transition kernel P is reversible with respect to ν, it is possible to interchange the order of integration in (3.3) to obtain a recursion for the Radon-Nikodym derivative. However, this assumption can be avoided by using the dual kernel of the Markov chain. The dual kernel has been used by various authors studying Markov chains and processes [see 4, and references therein].
As we have been unable to find anything in the literature dealing explicitly with the case of interest here, we state the definition of the dual kernel and some basic results. In the following, (S, Σ) denotes a general measure space.
Definition 3.1. Let P be a sub-transition kernel on (S, Σ) and let π be a σ-finite measure on (S, Σ). If there exists a sub-transition kernel P * such that for all A, B ∈ Σ, then P is said to be reversible with respect to π.
Remark. We shall see later that if π is a subinvariant measure for P, then the dual of P with respect to π is determined uniquely π-almost everywhere, in that, for all A ∈ Σ, Notice (setting B equal to S in (3.6)) that if P is reversible with respect to π, then π is an invariant measure for P. More generally, we have the following.
To apply the dual kernel, it is necessary to construct a Markov chain on Therefore, the Radon-Nikodym derivative of µ t with respect to σ satisfies the recursion In addition to providing a simplified recursion for the measure µ t , the Radon-Nikodym derivative has a nice interpretation as the probability of a given patch being occupied when the number of patches in the metapopulation is large.
for all t ≥ 0.
Equilibrium: Long run occupancy level
We would like to study the equilibrium behaviour of recursion (3.3) using the simpler recursion (3.7). To see why this is possible, let µ∞ be a stable fixed point of recursion (3.3), that is, µ_t → µ∞ weakly as t → ∞. As the support of σ is compact by Assumption (B) and ∂µ_t/∂σ is bounded by one almost everywhere for all t, we can show that µ∞ is absolutely continuous with respect to σ using a similar argument to McVinish and Pollett [24, Lemma 5]. Hence, the Radon-Nikodym derivative ∂µ∞/∂σ exists. Furthermore, by Scheffé's lemma, if the sequence of densities ∂µ_t/∂σ given by recursion (3.7) converges almost everywhere as t → ∞, then this limit must be ∂µ∞/∂σ. Therefore, the stable fixed points of the two recursions are equivalent.
The recursion (3.7) has some nice monotonicity properties which suggest the application of the powerful cone limit set trichotomy [15]. If it could be applied, then much of the difficulty in determining the threshold condition for the persistence of the metapopulation would be resolved as it would enable us to make very strong statements concerning the existence and stability of fixed points. Unfortunately, the operator defined by the right-hand side of (3.7) does not satisfy the necessary compactness property, so a slightly different approach is required. Our first step is to characterise the fixed points of recursion (3.7) in such a way that allows the cone limit set trichotomy to be used. This gives conditions for the existence and uniqueness of a non-zero fixed point. The stability of the fixed points is studied using a similar approach to [6].
Discussion
We have determined a threshold condition for the extinction/persistence of a metapopulation in a dynamic landscape. The applicability of our result hinges on the validity of the assumptions made in the analysis. While most are technical assumptions, satisfied by typical choices of parameters, the assumptions concerning the landscape dynamics will necessarily limit the range of metapopulations to which our result applies. In these concluding paragraphs, we discuss how these assumptions can potentially be relaxed and what tools will be needed for our work to be extended.
We have assumed that the only temporal variation in the landscape is due to the evolution of the local extinction probabilities at each patch; the patch areas and connectivity are assumed constant. It seems possible that variation in the patch areas could be incorporated into the model by allowing them to evolve following some Markov chain, and the analysis could be carried out using essentially the same arguments. On the other hand, allowing for temporal variation in the connectivity of patches, relevant to certain marine species [36], would require a different analysis and possibly involve techniques from the study of random graphs [9].
As previously noted, the independence assumption excludes from consideration certain forms of disturbance such as widespread fire and drought. A first step in weakening the independence assumption would be to allow local spatial interaction in the landscape dynamics. For sufficiently weak spatial interactions, we would still expect a 'law of large numbers' result to be possible under appropriate technical assumptions. However, such weak spatial interaction is not going to provide a realistic model for widespread disturbances. As an extreme case of strong spatial interaction, suppose that for each time all patches had the same local extinction probability. In that case, we would expect any limiting process to still depend on the realization of the local extinction probability process. Tools from random dynamical systems [3] may prove useful in the analysis of the limiting process in this case. 6. Proofs [17,Theorem 16.16]. We use induction on t to prove weak convergence of the random measures σ n,t to non-random measures σ t . By Assumption (F), σ n,0 d → σ 0 for some non-random measure σ 0 . The conditional expectation of h(s, z) σ n,t+1 (ds, dz) given (s n t , a n , z n ) is E h(s, z)σ n,t+1 (ds, dz) | s n t , a n , z n = n −1 n i=1 a i h(s, z i )P (s, ds) = h(s, z)P (s, ds) σ n,t (ds, dz).
Suppose that σ n,t d → σ t for some non-random measure σ t . If h(s, z)P (s, ds) ∈ C + ([0, 1]× Ω), then lim n→∞ E h(s, z)σ n,t+1 (ds, dz) | s n t , a n , z n = h(s, z)P (s, ds) σ t (ds, dz). induction on t to prove weak convergence of the random measures µ n,t to non-random measures µ t . By assumption µ n,0 d → µ 0 for some non-random measure µ 0 . Suppose that µ n,t d → µ t for some non-random measure µ t . Then E h dµ n,t+1 | X n t , s n t , a n , z n = n −1 |X n t , s n t , a n , z n = s h(r, z)P (s, dr) µ n,t (ds, dz) where |ǫ n,t (h)| ≤ C h(r, z)P (s, dr)σ n (ds, dz) sup z∈Ω D(z,z)µ n,t (ds, dz)− D(z,z)µ t (ds, dz) , for some constant C > 0 as f is Lipschitz continuous. Applying a small modification of Theorem 3.1 of [31] and Assumption (C), it follows that if µ n,t d → µ t , a non-random measure, then We need both s h(r, z)P (s, dr) and s h(r, z)P (s, dr)f ( D(z,z)µ t (ds, dz)) to be in for some non-random measure µ t . Therefore, E h dµ n,t+1 | X n t , s n t , a n , z n p → s h(r, z)P (s, dr) µ t (ds, dz) The conditional variance of h(s, z)µ n,t+1 (ds, dz) can be bounded by n −1 A 2 sup |h(s, z)| 2 .
6.4. Proof of Theorem 3.4. Let P be a sub-transition kernel on (S, Σ) and let π be a σ-finite measure on (S, Σ). We first show that if π is a subinvariant measure for P , then there exists a sub-transition kernel P * satisfying Definition 3.1. Suppose π is subinvariant It is a measure on (S, Σ) because P (x, ·) is a measure on (S, Σ). It is also clear that η A is absolutely continuous with respect to π, because if N ∈ Σ is any π-null set then So, by the Radon-Nikodym theorem, there exists a function P * : S × Σ → [0, ∞) such that P * (·, A) is a Σ-measurable function, and for all B ∈ Σ, Hence, P * is determined uniquely π-almost everywhere by equation (3.5). It remains to show that, for π-almost all x ∈ S, P * (x, ·) is a measure on (S, Σ) with P * (x, S) ≤ 1.
For any A ∈ Σ, P * (·, A) is the Radon-Nikodym derivative of η A with respect to π. As η ∅ is the null measure, P * (x, ∅) = 0 for π-almost all x ∈ S. To show that P * (x, ·) is countably additive, let {B k } be a sequence of pairwise disjoint sets in Σ. We want to show that the Radon-Nikodym derivative of η ∪ k B k with respect to π is k P * (·, B k ). For Hence, P * (x, ∪ k B k ) = k P * (x, B k ) for π-almost all x ∈ S. Finally, since π is subinvariant for P , we have, for any A ∈ Σ, A π(dx)P * (x, S) = S π(dx)P (x, A) ≤ π(A).
We now show that if there exists a dual P * for P with respect to π, then π is subinvariant. Since P * is a sub-transition kernel, P * (x, S) ≤ 1 for all x ∈ S. On setting B equal to S in equation (3.5) we see that that is, π is subinvariant for P . This completes the proof of the first part of Theorem 3.4.
To prove the second part we note that if P * is a transition kernel then P * (x, S) = 1 for all x ∈ S. In that case, inequality (6.11) becomes equality, and π is seen to be invariant.
On the other hand, if π is invariant for P , then for all A ∈ Σ. Therefore, P * (x, S) = 1 for π-almost all x ∈ S, and P * is a transition kernel. The final part is proved in similar vein.
Let K denote the reproducing cone of non-negative functions on Ω and letK denote the interior of K. The cone K is equipped with the partial ordering for all z ∈ Ω. The cone limit set trichotomy can be applied if H has the following properties: (i) continuity; (ii) order compactness; for any χ 1 , χ 2 ∈ K, H maps the set {φ : χ 1 ≤ φ ≤ χ 2 } to a relatively compact set.
We now proceed to show that these properties hold.
(i) continuity: The operator H is continuous if for each (s, z) ∈ [0, 1] × Ω. Since D is uniformly bounded, continuity of H follows from the dominated convergence theorem.
As D is equicontinuous, so is the sequence of functions Hφ 1 , Hφ 2 , . . . Therefore, H is order compact.
By Assumption (D), for any (z,z) ∈ Ω × Ω, D(z,z) > 0. Also, Assumption (H) implies that f (λx) − λf (x) > 0 for all x > 0, λ ∈ (0, 1). Therefore, H(λφ) − λHφ ∈K for any λ ∈ (0, 1), φ ∈K. Hence, H is strongly sublinear. The conditions of the cone limit set trichotomy are satisfied. Therefore, either (i) ψ = 0 is the only fixed point of H, or (ii) H has a unique non-zero fixed point and this fixed point must be inK, or (iii) for every φ = 0, successive applications of the operator H leads to an unbounded sequence. In proving order compactness, we have shown that H is bounded. Therefore, (iii) is excluded as a possibility. We can conclude that H has at most one non-zero fixed point. | 2014-12-01T22:55:30.000Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "e916615cb9167fc2c15123ce986902e82b2465a1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1412.0719",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e916615cb9167fc2c15123ce986902e82b2465a1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Biology"
]
} |
205601406 | pes2o/s2orc | v3-fos-license | Correlative Evaluation of Mental and Physical Workload of Laparoscopic Surgeons Based on Surface Electromyography and Eye-tracking Signals
Surgeons’ mental and physical workloads are major focuses of operating room (OR) ergonomics, and studies on this topic have generally focused on either mental workload or physical workload, ignoring the interaction between them. Previous studies have shown that physically demanding work may affect mental performance and may be accompanied by impaired mental processing and decreased performance. In this study, 14 participants were recruited to perform laparoscopic cholecystectomy (LC) procedures in a virtual simulator. Surface electromyography (sEMG) signals of the bilateral trapezius, bicipital, brachioradialis and flexor carpi ulnaris (FCU) muscles and eye-tracking signals were acquired during the experiment. The results showed that the least square means of muscle activity during the LC phases of surgery in an all-participants mixed effects model were 0.79, 0.81, and 0.98, respectively. The observed muscle activities in the different phases exhibited some similarity, while marked differences were found between the forearm bilateral muscles. Regarding mental workload, significant differences were observed in pupil dilation between the three phases of laparoscopic surgery. The mental and physical workloads of laparoscopic surgeons do not appear to be generally correlated, although a few significant negative correlations were found. This result further indicates that mental fatigue does markedly interfere with surgeons’ operating movements.
Surgeons encounter musculoskeletal strain and disorders resulting from long periods of muscle tension and awkward poses [1][2][3] . Injuries to surgeons include pain in specific areas of the body, vertebral disk prolapse and carpal tunnel syndrome 4,5 . These issues are closely related to the mental and physical workloads of surgeons during surgery 2 . In terms of mental workload, surgeons can suffer from impaired concentration and slow reactions after long operations. Furthermore, job dissatisfaction of surgeons is considered to be significantly associated with burnout 6 . As muscle fatigue and attention deficit may contribute to failed surgeries, risk monitoring and risk reduction measures should be implemented if a surgeon is experiencing physical or mental overload or fatigue. Surgeons' mental and physical workloads have been a focus of operating room (OR) ergonomics over the last few decades.
To assess surgeons' mental and physical workloads, laparoscopic box trainers and virtual reality simulators are usually employed and are comparable in most aspects 7,8 . Some studies have suggested that virtual simulators may be more reliable and convenient, and peg transfer, ball pick-and-drop, and cutting and suturing are commonly simulated procedures 2 . Most previous studies have focused on either mental workload or physical workload but have seldom performed comparative analyses [9][10][11][12] . Taking into account psychophysiological causes and related literature, the relationship between the two types of workloads should be considered.
Metrics used to assess surgeon workload include subjective measures of workload, physiological indices of workload, objective performance, and other methods including comprehensive evaluations. Scales and questionnaires, such as the NASA Task Load Index scale 13 and the Subjective Workload Assessment Technique scale, have become among the most popular tools, especially for surgical procedures [14][15][16] . Various physiological indices, such as heart rate, blood pressure, eye movements, EMG, and EEG signals, etc. refs 17,18 , change corresponding to changes in workload; heart rate is generally used to evaluate body load, and eye movements, and EEG are generally used to assess mental workload. In particular, EEG can characterize the dynamics of functional coupling among different brain areas across surgeons performing laparoscopic tasks with different approaches 19 . In addition, workload status can be deduced through tasks and the associated performance. These different workload evaluation methods each have their own advantages, and physiological indices of workload are more prominent in accuracy and objectivity.
A previous study presented an interesting finding that the mental workload of bank staff is significantly correlated with musculoskeletal disorders 20 . The mental workload of nurses is also associated with musculoskeletal disorders 21 . This previously reported conclusion is based on different types of work and different work contents, and those surveyed enjoyed a certain autonomy while working 22,23 . In contrast, for a surgeon's workload, the equipment used, the working time and the processes are severely restricted during an operation. In addition, surgeons must meet high mental and physical demands, maintain high operation accuracy, and make accurate judgements and decisions. Mental status is associated with muscle activity in some work situations. Schleifer et al. 24 discovered that mental stress results in increased EMG activity of the upper limbs during computer work. Given the differential changes in heart period and end-tidal carbon dioxide across working conditions, mental stress elicits greater psychophysiological activation, and smaller effects are attributable to the biomechanical demands of work. Furthermore, high mental workload tasks predispose individuals to increased psychological and physiological activation. Mental fatigue also influences muscle endurance, recovery and EMG activity 25,26 .
The interactive effects of mental and physical workload have received growing attention, and negative correlations between mental workload and physical workload have been reported 22 . In the foregoing cited study, subjective self-report rating assessment tools, the Borg CR10 Scale and NASA-TLX, were adopted to assess physical and mental workloads, respectively. The dual-task methodology consisted of a physical lifting task (no load, 8%,14% and 20% of body mass) and a mental arithmetic task (no load, addition, subtraction, and multiplication) with a total of 15 combinations of conditions. This approach has also been commonly used in other studies 23,24,27 . Compared with the interactive effects of mental and physical workloads that have been assessed for different types of tasks, laparoscopic surgeries contain both heavier mental and physical loads.
Results
The calculated muscle activity levels are shown in Tables 1, 2 and 3. Descriptive statistics are shown in Table 1, the fixed effects of the characteristics on the results are given in Table 2, and statistics for the various phases are listed in Table 3. The physical workload patterns during the 3 phases were generally similar, with minor differences between the left and right trapezius muscle and bicipital muscle and large differences in the brachioradialis and FCU. The most significant finding was that the activities of the eight muscles in the AC phase (disinterring the bile duct and the cystic artery) and SC phase (sealing and cutting the bile duct and the cystic artery) were quite similar (mean difference = 0.02, p < 0.05) and significantly lower than the muscle activities in the DI phase (detaching the gallbladder from the hepatic bed and inspecting the hepatic bed) (p = 0.01 and 0.03, respectively). Interestingly, the left brachioradialis %MVC was nearly twice that of the right brachioradialis, and the bilateral FCU exhibited the opposite trend, with the exception of the DI phase. Figure 1 demonstrates the change in the participants' pupil diameter during the 3 LC phases. The extent of pupil dilation during the SC phase (mean = 0.12, median = 0.13) and DI phase (mean = 0.13, median = 0.13) was greater than that in the AC phase (mean = 0.05, median = 0.04). Moreover, the pupil diameter increased during each individual phase.
The results of a correlation analysis between sEMG and eye-tracking measurements are shown in Table 4. We found that the sEMG and eye-tracking measurements during the different phases were largely uncorrelated. The activities of the left brachioradialis and the left FCU in the SC phase, however, were significantly negatively correlated with mental workload (r = −0.68, p = 0.01 and r = −0.53, p = 0.05).
Discussion
Our study is the first to address the significant concern regarding the relationship between the two types of workloads on laparoscopic surgeons. The experimental platform and tasks were carefully considered. An LC surgery was divided into 3 approximately equal phases in terms of the time and process. This partitioning method was effective for our study and is convenient for acquiring and comparing sEMG and eye-tracking signals 18 . Table 3. Least square means and multiple comparisons of the LC phases in an all-participants mixed effects model. Table 4. Correlation analysis between mental and physical workload during the 3 LC phases.
For evaluating physical workload, similarities in muscle activity among different phases can be determined. This phenomenon can be explained by similar gestures and movements. Notably, we found a difference between the sides of the brachioradialis and the FCU. Fine operations are usually carried out by the dominant hand (i.e., the right hand), and fine movements rely heavily on the wrist and fingers. The left brachioradialis was employed more during usual motions, while the FCU was utilized for finer motions, which reflects muscle movement compensation.
The mean changes in pupil size during the 3 phases are shown in Fig. 1. In this experiment, the pupil was able to characterize mental workload according to expectations. A high mental workload within a short time does not cause mental fatigue and thus does not result in a cumulative effect, which is consistent with the conclusions of other studies 28 . Other factors influencing pupil size include anxiety, stress, fatigue, and intelligence. In our experimental design, we attempted to eliminate the effects of these factors on pupil size through various methods: allowing the participants to relax, preventing participants from performing tests in a fatigued state, and adjusting lighting brightness of the scene.
Our experimental findings suggest that the mental and physical workloads of the laparoscopic surgeon were non-synchronous and were generally negatively correlated, although insignificant. Mental workload during low-level static work has been verified to adversely affect muscle activity. Laparoscopic surgery involves low-level strength and high-level mental workload. The surgeons' physical workloads in the AC and SC phases were almost equal and were much lower than the physical workload during the DI phase. In contrast, the surgeons' mental workload in the AC phase was lower than the mental workload in the SC and DI phase, which corresponded to similar workload levels. The relationship between the workloads can be explained using physiology. Studies of the brain have indicated that mental fatigue and physical fatigue are closely linked. When people are physically fatigued, blood oxygenation in the bilateral prefrontal cortex is reduced, which aggravates mental fatigue 29 . Muscle activity is directly related to neural activity, as proven by neuroimaging techniques, and the brain possesses a self-adjusting function to maintain physical performance, even when falling into a state of fatigue [30][31][32] .
Our experimental results showed that there was no significant negative correlation between the workloads, which is not entirely consistent with previous studies. We attributed this discrepancy to the following reasons: (1) surgical procedures involve both mental and physical workloads, unlike the individual mental and physical tasks employed in previous studies, and the two workloads are not completely independent. Co-existence of the workloads indicates that their relationship is not entirely interactive. (2) We selected a representative group of muscles as research targets but did not include all muscles used during an operation performed by a surgeon, which may result in bias.
Objective evaluations of the workload and ergonomics of laparoscopic surgeons are vital and meaningful. More studies are needed to compensate for the limitations of this study. Physical and mental workload levels are complex and cannot be characterized in a general manner. Multi-means, indices and subjective methods combined with objective techniques will be the most promising approaches going forward 33 . Workload threshold and ergonomics guidelines should be elaborated to prevent ergonomic problems during laparoscopic surgery.
This experiment was based on a simulation, which holds obvious limitations compared to actual operations. Participants may not be as careful when using a simulator because they may perceive that there will be another chance to repeat the procedure without repercussions. Another limitation of this study was that the LC operation duration was not long enough to induce surgeon fatigue, and therefore, surgeon workloads during a fatigued state could not be evaluated. We plan to study the working status of laparoscopic surgeons and its impact on operation safety and outcomes under different workload conditions in the near future. The concept of a surgeon's total workload should be established, which would provide a general description of the physician's fatigue status for quantitative and intuitive monitoring.
Methods
Participants. The procedures of this study were carried out in accordance with approved guidelines. This study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IORG No: IORG0003571), and was performed in a simulated operating room with proper lighting conditions and other requirements for an operating environment, according to related standards and manufacturer recommendations. Informed consent was obtained from all participants. In this study, 14 male volunteers were recruited. Four of the participants were laparoscopic surgeons, and the other 10 were predoctoral students. All individuals had laparoscopic surgery experience or training experience and were familiar with the experimental platform prior to the experiment. All participants were right-handed, and they ranged in age from 25 to 35 years (mean age = 28.7, SD = 3.8). The participants' body mass index (BMI) and elbow height were measured and used as references to adjust the experimental set-up.
Experimental platform and tasks. The experiment was executed in a laparoscopic virtual simulator, which can provide feedback on the operation performance of the participants, including haptic feedback. Statistics of the participants performance on the task were provided when the task was completed.
LC is one of the most common laparoscopic surgeries. LC has been used in many studies as a sample procedure to study the working status of laparoscopic surgeons with respect to OR ergonomics [34][35][36] . In previous research, surgical videos, combined with other tools such as rapid upper limb assessment (RULA), have been employed to study physician gestures and stress statics 36,37 . In contrast to other studies, our study aimed to explore both the physical and mental workloads of laparoscopic surgeons in different LC phases and the correlative relationship between these workloads. All participants were required to complete an LC surgery using the simulator. According to the operation process and the operation simulator's setting, the LC should be completed via the following five phases: Phase 1: create the pneumoperitoneum and place the trocars; phase 2: based on the anatomy of Calot's triangle, disinter the bile duct and the cystic artery (AC phase); phase 3: seal and cut the bile duct and the cystic artery (SC phase); phase 4: detach the gallbladder from the hepatic bed and inspect the hepatic bed (DI phase); and phase 5: remove the gallbladder and complete the operation. Phase 1 and phase 5 were executed automatically, and the participants were required to complete the AC, SC and DI phases. We advised the participants to allocate 5 minutes to each of the three phases and to finish the surgery in 15 minutes, if possible. Intervals of approximately 3 minutes were included between the phases to allow the participants' muscles to relax and to provide feedback.
Workload assessment protocol. Data analysis. An overall 14 × 4 × 2 × 3 (14 participants × 4 muscles × 2 hand sides × 3 phases) analysis of variance was used to analyse the data. A mixed effect model was used for statistical analysis in SAS 9.4, with the significance level set at p = 0.05. Variables with random effects were selected based on the smallest Akaike information criterion (AIC) and the Bayesian information criterion (BIC), with a positive definite G matrix for the intercept, muscles, location and phase.
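The mixed-model analysis above was run in SAS 9.4. A rough Python analogue, fitting a linear mixed model with a random intercept per participant and reporting AIC/BIC for model comparison, is sketched below; the data file, column names, and formula are hypothetical placeholders rather than the study's actual variables.

```python
# Hypothetical re-analysis sketch using statsmodels; column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Expected long-format data: one row per participant x muscle x side x phase.
data = pd.read_csv("semg_long_format.csv")  # hypothetical file

# Fixed effects for muscle, side and LC phase; random intercept per participant.
model = smf.mixedlm("pct_mvc ~ C(muscle) + C(side) + C(phase)",
                    data=data, groups=data["participant"])
fit = model.fit(reml=False)  # ML fit so AIC/BIC comparisons across models are valid

print(fit.summary())
print("AIC:", fit.aic, "BIC:", fit.bic)
```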
Eye-tracking data. According to many studies, mental workload can be evaluated by participants' eye movements, particularly pupil dilatation [38][39][40] . The Tobii Glasses 2 Eye Tracker (Tobii Technology, Danderyd, Sweden) was used as an eye-tracking instrument in our study. Before starting the procedure, the participants were equipped with the eye tracker and asked to stare at black dots printed on a paper card for the calibration process. The physiological parameters of the participants' eyes were recorded during the calibration process.
Laparoscopic surgery requires a high degree of attention, and therefore, eye movement is more able to reflect the physiological state of the surgeon and surgical conditions. Here, pupil size was analysed as the focal index to measure mental workload during operation. In the 1960s, pupil dilation was found to be sensitive to task difficulty and workload 41,42 . Pupil dilation can be used as a peripheral indicator of brain noradrenergic activity and mental workload in a testing situation. The measurement of pupil diameter has been deemed a promising method for assessing mental workload 38,43,44 . Task-evoked pupillary responses (TEPRs) have been suggested for exploring the inherent relationship between a task and pupillary dilation 44 . Generally, larger pupil sizes indicate greater mental workload [45][46][47] . In this study, the baseline pupil size (initial diameter) was assessed after the calibration period 48 . A change in pupil size from baseline, measured as the mean pupil diameter change (MPDC), was observed, consistent with the expected effect. sEMG data. Physical workload was evaluated by surface myoelectricity, which was captured using a Delsys Trigno Lab sEMG system (Delsys, Inc., Boston, MA) and analysed with standard software. The muscle groups analysed included the bilateral trapezius, biceps, brachioradialis, and FCU. The sEMG sampling frequency was 512 Hz. These data were full-wave rectified and then filtered to obtain a spectrum band ranging from 20 Hz to 250 Hz.
At the beginning of the experimental session, we measured the maximum voluntary contraction (MVC) of each target muscle and normalized the sEMG data to the MVC during data processing 49 . In this study, we used %MVC, the percentage of MVC, as a measure of muscle workload and characterized the level of muscle contraction per unit time 2 . For data processing, iEMG was obtained by first integrating the sEMG; the ratio of iEMG to MVC was taken as %MVC 2 49 .
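A condensed sketch of the two signal-processing steps described in this section: normalizing rectified, band-pass filtered (20-250 Hz) sEMG to %MVC, and expressing pupil diameter as a change from the calibration baseline (MPDC). The filter order, the synthetic signals, and the averaging step are illustrative assumptions; only the 512 Hz sampling rate and the 20-250 Hz band are taken from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512  # sEMG sampling rate stated in the text (Hz)

def percent_mvc(emg, mvc_emg, fs=FS, band=(20, 250)):
    """Band-pass, rectify and average sEMG, then normalize to the MVC recording."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    def iemg(x):
        x = filtfilt(b, a, x)        # 20-250 Hz band
        return np.mean(np.abs(x))    # full-wave rectified amplitude per unit time
    return 100.0 * iemg(emg) / iemg(mvc_emg)

def mpdc(pupil_diam, baseline_diam):
    """Mean pupil diameter change from the calibration baseline."""
    return np.nanmean(pupil_diam) - baseline_diam

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    task = rng.normal(0, 0.4, FS * 10)   # 10 s of synthetic task sEMG
    mvc = rng.normal(0, 1.0, FS * 5)     # 5 s of synthetic MVC sEMG
    print("%MVC:", round(percent_mvc(task, mvc), 1))
    print("MPDC (mm):", round(mpdc(np.array([3.1, 3.3, 3.2]), baseline_diam=3.0), 2))
```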
Conclusions
Observations of surgeons operating during different phases of LC and measurements of their mental and physical workload indicated that the two workloads are non-synchronous, with a general non-significant negative correlation. This study evaluated the workload imposed on surgeons during laparoscopic surgery by physiological (sEMG and eye movement) analysis and objectively demonstrated that while some laparoscopic phases require equal levels of physical work and others do not, significant disparities exist among the mental workloads of those phases. Synthetic and dynamic monitoring of surgeon workload levels is thus highly important in OR ergonomics. | 2018-04-03T02:42:05.675Z | 2017-09-11T00:00:00.000 | {
"year": 2017,
"sha1": "ee818611fa1bb636b876ffef4ad89840d8f54dba",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-11584-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b236a36776934b0be183ffaeaf31ed5ed20fc420",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18063218 | pes2o/s2orc | v3-fos-license | A prospective population-based study of maternal, fetal, and neonatal outcomes in the setting of prolonged labor, obstructed labor and failure to progress in low- and middle-income countries
Background This population-based study sought to quantify maternal, fetal, and neonatal morbidity and mortality in low- and middle-income countries associated with obstructed labor, prolonged labor and failure to progress (OL/PL/FTP). Methods A prospective, population-based observational study of pregnancy outcomes was performed at seven sites in Argentina, Guatemala, India (2 sites, Belgaum and Nagpur), Kenya, Pakistan and Zambia. Women were enrolled in pregnancy and delivery and 6-week follow-up obtained to evaluate rates of OL/PL/FTP and outcomes resulting from OL/PL/FTP, including: maternal and delivery characteristics, maternal and neonatal morbidity and mortality and stillbirth. Results Between 2010 and 2013, 266,723 of 267,270 records (99.8%) included data on OL/PL/FTP with an overall rate of 110.4/1000 deliveries that ranged from 41.6 in Zambia to 200.1 in Pakistan. OL/PL/FTP was more common in women aged <20, nulliparous women, more educated women, women with infants >3500g, and women with a BMI >25 (RR 1.4, 95% CI 1.3 – 1.5), with the suggestion of OL/PL/FTP being less common in preterm deliveries. Protective characteristics included parity of ≥3, having an infant <1500g, and having a BMI <18. Women with OL/PL/FTP were more likely to die within 42 days (RR 1.9, 95% CI 1.4 – 2.4), be infected (RR 1.8, 95% CI 1.5 – 2.2), and have hemorrhage antepartum (RR 2.8, 95% CI 2.1 – 3.7) or postpartum (RR 2.4, 95% CI 1.8 – 3.3). They were also more likely to have a stillbirth (RR 1.6, 95% CI 1.3 – 1.9), a neonatal demise at < 28 days (RR 1.9, 95% CI 1.6 – 2.1), or a neonatal infection (RR 1.2, 95% CI 1.1 – 1.3). As compared to operative vaginal delivery and cesarean section (CS), women experiencing OL/PL/FTP who gave birth vaginally were more likely to become infected, to have an infected neonate, to hemorrhage in the antepartum and postpartum period, and to die, have a stillbirth, or have a neonatal demise. Women with OL/PL/FTP were far more likely to deliver in a facility and be attended by a physician or other skilled provider than women without this diagnosis. Conclusions Women with OL/PL/FTP in the communities studied were more likely to be primiparous, younger than age 20, overweight, and of higher education, with an infant with birthweight of >3500g. Women with this diagnosis were more likely to experience a maternal, fetal, or neonatal death, antepartum and postpartum hemorrhage, and maternal and neonatal infection. They were also more likely to deliver in a facility with a skilled provider. CS may decrease the risk of poor outcomes (as in the case of antepartum hemorrhage), but unassisted vaginal delivery exacerbates all of the maternal, fetal, and neonatal outcomes evaluated in the setting of OL/PL/FTP.
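The abstract reports group contrasts as relative risks with 95% confidence intervals. A minimal sketch of how such an estimate can be computed from a 2 x 2 table, using the standard log-RR normal approximation; the counts below are hypothetical and are not the study's data.

```python
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk with a 95% CI via the log-RR normal approximation."""
    r1 = events_exposed / n_exposed
    r0 = events_unexposed / n_unexposed
    rr = r1 / r0
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

if __name__ == "__main__":
    # Hypothetical counts: stillbirths among deliveries with and without OL/PL/FTP.
    rr, lo, hi = relative_risk(900, 29000, 4600, 237000)
    print(f"RR {rr:.2f} (95% CI {lo:.2f} - {hi:.2f})")
```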
Background
Obstructed labor (OL) is a common cause of maternal mortality, accounting for approximately 6% of maternal deaths worldwide and substantial long-term maternal morbidity [1]. Maternal mortality from OL is caused by ruptured uterus, postpartum hemorrhage, and puerperal sepsis, while maternal morbidity includes secondary infertility, vaginal scarring and stenosis, severe anemia, musculoskeletal injury, urinary incontinence and obstetric fistula [2,3]. OL also has implications for the fetus or neonate; it frequently results in asphyxia that can lead to stillbirth, neonatal demise, cerebral palsy, and developmental disabilities [4].
According to the World Health Organization, labor is obstructed when the presenting part of the fetus cannot progress into the birth canal despite strong uterine contractions [1]. The etiology is often cephalo-pelvic disproportion (CPD), which is defined as a mismatch between the size of the fetal presenting part and the mother's pelvis [2]. Often, in developing countries, CPD is due to stunted growth of the maternal pelvic bones from malnutrition, early childbearing before the growth of the pelvis is complete, or abnormalities of the shape of the pelvis due to rickets or osteomalacia [5].
While there is literature on maternal mortality resulting from OL, the complexity of isolating OL as the cause of any individual maternal, fetal, or neonatal death makes data collection and analysis difficult and often of poor quality. A comprehensive literature review for stillbirth and neonatal outcomes related to OL identified only two small, single-institution studies that evaluated perinatal outcomes in pregnancies complicated by OL [6,7]. Thus, we sought to undertake a review of a large, prospective study on pregnancy, the Global Network's Maternal Newborn Health Registry (MNHR). Reviewing the experience illustrated by the MNHR data will shed light on both maternal and perinatal morbidity and mortality associated with OL in low- and middle-income countries.
Methods
This data analysis was conducted on information from a prospective population-based observational study conducted in 106 communities at six sites in five low-income countries on births from January 1, 2010 through December 31, 2013 (Chimaltenango, Guatemala; Nagpur, India; Belgaum District, India; western Kenya; Thatta District, Pakistan; and Lusaka, Zambia) and at one site in a middle-income country (Corrientes, Argentina). These seven sites were selected by the Eunice Kennedy Shriver National Institute of Child Health and Human Development in the United States of America (NICHD), a governmental organization that supports the Global Network for Women's and Children's Health Research (GN), which is a network of research institutions in the aforementioned sites that enrolls women during pregnancy and collects data through 6-weeks postpartum to assess pregnancy outcomes.
The prospective community-based registry, called the Maternal and Newborn Health Registry (MNHR), includes outcomes from rural or semi-urban geographical areas served by government health services. Each site includes between six and 24 distinct communities. The methods of the MNHR have been published [8]. In general, each community represents the catchment area of a primary healthcare center, and about 300 to 500 births take place annually in each locale. Beginning in 2009 and 2010, the study investigators at each site initiated an ongoing, prospective maternal and newborn health registry of pregnant women for each community. The objective is to enroll pregnant women by 20 weeks' gestation and to obtain data on pregnancy outcomes for all deliveries that take place in the community. Each community employs a registry administrator who identifies and tracks pregnancies and their outcomes in coordination with community elders, birth attendants, and other health care workers.
The primary purpose of the MNHR is to quantify and analyze trends in pregnancy outcomes in defined low-resource geographic areas over time in order to provide population-based statistics on pregnancy outcomes, including stillbirths, neonatal, and maternal mortality. This analysis utilizes the MNHR to determine maternal and fetal outcomes in the setting of dysfunctional labor and to compare these outcomes to a reference population, also from the registry, that did not experience this labor complication. In these settings it is difficult to define dysfunctional labor because it is nearly impossible to distinguish clinically between obstructed labor, prolonged labor, and/or failure to progress in labor, so for the purposes of data collection, these outcomes were combined into a single overall outcome called obstructed labor/prolonged labor/failure to progress (OL/PL/FTP).
The definition of OL/PL/FTP in the MNHR is, "a situation when the descent of the presenting part is arrested during labor due to an insurmountable barrier. This occurs in spite of strong uterine contractions and further progress cannot be made without assistance. Obstruction usually occurs at the brim but it may occur in the cavity or at the outlet of the pelvis". This definition is adapted from the World Health Organization's definition, noted in the introduction. All sites involved in this analysis used the same definition for OL/PL/FTP.
Other co-variates were defined in accordance with the WHO definitions, described elsewhere [9]. Specifically, body mass index (BMI), in kg/m², was calculated based upon weight and maternal height taken at the antenatal care visit (the Kenya site did not obtain BMI measurements and was omitted from analyses involving BMI). Gestational age (GA) at delivery was determined as term (≥37 weeks gestation) or preterm (<37 weeks) for all deliveries, based on last menstrual period (LMP) or ultrasound, when available, and finally, birth weight was the weight of the live birth or stillbirth taken at the delivery visit. Data were collected and entered into research computers at each study site and transmitted through secure methods to a central data coordinating center (RTI International). All analyses were done with SAS version 9.3 (SAS Institute, Cary, NC, USA). Analyses included descriptive statistics. Relative risks were computed using generalized estimating equations, accounting for study clusters. In addition, because the findings related to education were unexpected, an additional regression analysis was run to better understand the relationship between OL/PL/FTP and maternal education.
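As an illustration of the cluster-adjusted approach described above, the sketch below shows how relative risks could be estimated with generalized estimating equations in Python; the simulated data, column names (outcome, ol_pl_ftp, cluster), and Poisson/log-link specification are assumptions for illustration only and do not reproduce the study's actual SAS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the registry analysis file (column names are assumptions).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "cluster": rng.integers(0, 100, n),   # study community used as the GEE cluster
    "ol_pl_ftp": rng.integers(0, 2, n),   # OL/PL/FTP indicator
})
# Outcome risk roughly doubled when OL/PL/FTP is present (illustrative only).
df["outcome"] = rng.binomial(1, np.where(df["ol_pl_ftp"] == 1, 0.02, 0.01))

# A Poisson working model with exchangeable GEE correlation gives cluster-robust
# relative risks without the convergence problems of log-binomial models.
model = sm.GEE.from_formula(
    "outcome ~ ol_pl_ftp",
    groups="cluster",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
rr = np.exp(result.params["ol_pl_ftp"])
ci_low, ci_high = np.exp(result.conf_int().loc["ol_pl_ftp"])
print(f"RR = {rr:.2f} (95% CI {ci_low:.2f} - {ci_high:.2f})")
```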
The appropriate institutional review boards/ethics research committees of the participating institutions and the ministries of health of the respective countries approved the MNHR. Prior to initiation of the study, approval was sought from the participating communities through sensitization meetings. Individual informed consent for study participation is requested from each study participant. Monetary reimbursements are not provided to study participants nor to the communities participating in the study. A Data Monitoring Committee, appointed by the NICHD, oversees and reviews the study at annual meetings.
Results
Between 2010 and 2013, 266,723 of 267,270 records (99.8%) included data on whether or not the woman experienced OL/PL/FTP. For the women with information on OL/PL/FTP, 62% of deliveries were in Southeast Asia, 23% at the African sites, and 15% of the deliveries took place in Latin American sites. In the population studied, the vaginal delivery rate was 86.2%, the operative vaginal delivery rate was 1.6%, and the cesarean section rate was 12.2%. In the setting of OL/PL/FTP, the rate of operative vaginal delivery increased from 0.9% to 6.6%, and the cesarean section rate increased from 7% to 53%, which represented seven- and eight-fold increases over no OL/PL/FTP, respectively. Figure 1 graphically represents the OL/PL/FTP rate in each community, with an overall rate of 110.4/1000 deliveries in the whole cohort. The rates of OL/PL/FTP ranged from 41.6/1000 births in Zambia to 200.1/1000 in Pakistan. Table 1 illustrates the demographic characteristics of the women involved in the study. In the subpopulation of women experiencing OL/PL/FTP as well as in the general population, the age distribution was similar, with about 84% aged between 20 and 35, about 12% younger than 20, and the remainder over 35. The youngest women (age <20 years) had a 30% (RR 1.3, 95% CI 1.2 - 1.3) increased risk of experiencing OL/PL/FTP as compared to the 20 - 35 age group, which encompassed the majority of women. Compared to women who had one or two prior deliveries, women in their first pregnancy were 80% (RR 1.8, 95% CI 1.7 - 2.0) more likely to experience OL/PL/FTP; conversely, women who had already had three or more prior deliveries were 20% (RR 0.8, 95% CI 0.7 - 0.9) less likely to experience obstruction. Interestingly, unlike parity, more education was associated with increased risk of a woman experiencing OL/PL/FTP. Compared to the referent group of women with a primary school education, the risk of having OL/PL/FTP was almost twofold higher in the most highly educated women, those with a university-level education (RR 1.8, 95% CI 1.7 - 1.9). Women with no formal education had a reduced risk of OL/PL/FTP (RR 0.7, 95% CI 0.7 - 0.8).
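To make the relative-risk figures above concrete, the short sketch below computes a crude (unadjusted) relative risk and its Wald 95% confidence interval from 2x2 counts using the standard log-RR standard error; the counts are invented placeholders, not values from Table 1, and the published estimates additionally account for community-level clustering.

```python
import math

def crude_rr(a, n_exposed, c, n_unexposed, z=1.96):
    """Crude relative risk and Wald 95% CI from 2x2 counts.

    a: events among exposed (e.g., deaths in OL/PL/FTP deliveries)
    c: events among unexposed
    """
    risk_exposed = a / n_exposed
    risk_unexposed = c / n_unexposed
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR), assuming independent observations
    se_log_rr = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Placeholder counts for illustration only.
print(crude_rr(a=72, n_exposed=29_000, c=310, n_unexposed=237_000))
```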
As this was an unexpected finding, an additional regression analysis adjusting for maternal demographics was performed on these data. With respect to birthweight, which can also be seen in Table 1, OL/PL/FTP was more common with larger fetuses. Fetuses <1500g were less likely to be associated with OL/PL/FTP (RR 0.7, 95% CI 0.5 - 0.8), and fetuses ≥3500g were more likely to be associated with OL/PL/FTP than fetuses with a birthweight of 2500 - 3499g (RR 1.2, 95% CI 1.1 - 1.3). Deliveries categorized as preterm were about 10% less likely to be complicated by OL/PL/FTP. Table 2 illustrates that across all seven sites and all outcomes related to maternal morbidity and mortality, every complication was significantly increased in women who experienced OL/PL/FTP, except for maternal mortality in Latin America. This result in Latin America is likely due to small sample size, as only one labor was complicated by a maternal death attributed to OL/PL/FTP (RR 0.4, 95% CI 0.1 - 2.1). The outcomes of interest included 42-day maternal mortality, maternal infection, and antepartum and postpartum hemorrhage. All outcomes were about twice as likely to occur in labors complicated by OL/PL/FTP as in those that were not. Of particular interest in this analysis is the fact that African women with OL/PL/FTP experienced more morbidity and mortality than women in Asia and Latin America who also had OL/PL/FTP, with relative risks ranging from 3.4 (in the case of infection) to 9.1 for antepartum hemorrhage.
Similar to the results shown in Table 2, Table 3 also shows that stillbirths, neonatal mortality, and neonatal infection occurred more often in women with OL/PL/FTP than in those without this diagnosis, with RRs of 1.6 (95% CI 1.3 - 1.9), 1.9 (95% CI 1.6 - 2.1), and 1.2 (95% CI 1.1 - 1.3), respectively. Additionally, the data again showed poorer outcomes in African women in the case of stillbirth (RR 4.8, 95% CI 3.7 - 6.1) and neonatal mortality (RR 3.6, 95% CI 3.0 - 4.4), but not in neonatal infection, where neonates in each location born of a labor complicated by OL/PL/FTP experienced a 20% increased risk of infection (RR 1.2, 95% CI 1.1 - 1.4). Table 4 displays the outcomes of women experiencing OL/PL/FTP by method of delivery, which includes spontaneous vaginal delivery, operative vaginal delivery, and cesarean section. The analysis shows that delivery by cesarean section only improves maternal antepartum hemorrhage in the setting of OL/PL/FTP, but does not have an association with maternal mortality, maternal infection, postpartum hemorrhage, the stillbirth rate, neonatal mortality, or neonatal infection. Women who were designated as having OL/PL/FTP but eventually delivered vaginally without assistance (e.g., without the use of forceps or vacuum) were more likely to experience every single adverse outcome. Women with spontaneous vaginal births after OL/PL/FTP were about three times more likely to die, to have a stillbirth, and to have a neonatal death (RR 3.0, 95% CI 2.0 - 4.5; RR 3.3, 95% CI 2.8 - 3.9; RR 3.0, 95% CI 2.5 - 3.6), 60% more likely to have maternal infection (RR 1.6, 95% CI 1.3 - 2.1), almost five times more likely to experience antepartum hemorrhage (RR 4.7, 95% CI 3.4 - 6.7), about four times more likely to have a delivery complicated by postpartum hemorrhage (RR 3.9, 95% CI 2.7 - 5.6), and 40% more likely to have a neonate with an infection (RR 1.4, 95% CI 1.2 - 1.6).
Discussion
This population-based study of more than 260,000 births provides estimates of the rate of OL/PL/FTP at 7 sites in 6 countries. Women with OL/PL/FTP were more likely to be primiparous, younger than age 20, with a BMI >25 kg/m² and of higher education, and with a fetal birthweight of >3500 g. Women with this diagnosis were more likely to experience a maternal, fetal, or neonatal death, antepartum and postpartum hemorrhage, and maternal and neonatal infection. Outcomes were often worse in women experiencing OL/PL/FTP in Africa compared to the other locations.
Our literature review for maternal and perinatal outcomes related to obstructed labor found small, single-institution studies that evaluate perinatal outcomes in pregnancies complicated by obstructed labor [6,7]. One study from Nigeria that evaluated 120 perinatal outcomes in the setting of OL found a 23% stillbirth rate and a 6.7% early neonatal death rate [6]. Our analysis, which assessed the outcomes of more than 29,000 labors complicated by OL/PL/FTP, found a stillbirth rate of 46.8/1000 deliveries and 44.2 neonatal deaths per 1000 live births. In a study from Sudan, which reported on the outcomes of 42 women experiencing OL, the rate of sepsis (not specified as maternal or neonatal) was 7.1%, postpartum hemorrhage 11.9%, maternal death 4.8%, stillbirth 26.2%, and early neonatal death 9.5%. Our MNHR data show maternal and fetal sepsis rates of 1.4% and 11%, respectively, postpartum hemorrhage rates of 5.8%, and a maternal death ratio of 246/100,000 deliveries. Since our study is population based and the others were not, a direct comparison between these studies is not possible, but the direction of the findings is similar.
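For readers comparing these figures, the sketch below converts the reported rates into approximate absolute counts for a cohort of roughly 29,000 OL/PL/FTP-complicated labors; the arithmetic is illustrative only and does not reproduce the registry's exact denominators.

```python
# Illustrative back-of-envelope conversion of the reported rates into counts.
ol_labors = 29_000                 # approximate number of OL/PL/FTP labors analyzed
stillbirth_rate = 46.8 / 1000      # per delivery
neonatal_death_rate = 44.2 / 1000  # per live birth (denominator approximated here)
maternal_death_ratio = 246 / 100_000

print(f"~{ol_labors * stillbirth_rate:.0f} stillbirths")
print(f"~{ol_labors * neonatal_death_rate:.0f} neonatal deaths")
print(f"~{ol_labors * maternal_death_ratio:.0f} maternal deaths")
```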
The strengths of this study include its large sample size, varied community-based sites on three continents, prospectively collected data, and a pre-specified composite outcome combining prolonged labor, obstructed labor, and failure to progress that was used at all sites. The data were collected by a registry administrator, who often interviewed the mother and/or her family and the delivery attendant, who could have been a traditional birth attendant, nurse, nurse midwife, or physician. The registry administrator also reviewed the medical record for additional data, if available. Differentiating between OL, PL, and FTP at the sites would have been difficult if not impossible given the clinical and diagnostic limitations of these settings. The complexity of isolating OL/PL/FTP clinically as the cause of any individual maternal, fetal, or neonatal death makes data collection and analysis difficult, so this analysis, which reports data from 7 sites, is intended to be descriptive and not definitive in terms of the actual prevalence of OL/PL/FTP and outcomes related to this condition. For example, why Zambia experienced a lower rate and Pakistan experienced a higher rate of OL/PL/FTP relative to the other sites may reflect the true rate of OL/PL/FTP in those geographic regions, or, perhaps more likely, reflects some difference in how OL/PL/FTP was clinically defined and recorded in the field. A few other findings were notable with respect to this analysis. First, while overall maternal, fetal, and neonatal outcomes were significantly worse in the setting of OL/PL/FTP, the excess risk was compounded up to fourfold in the African sites. Whether these increased risks in the setting of obstructed labor reflect an access-to-care issue versus some pathophysiologic or clinical etiology is not clear from this analysis, but warrants further investigation given the significantly increased burden of morbidity and mortality observed with OL/PL/FTP in the African sites. Pakistan also had a notably higher rate of OL/PL/FTP, which we believe reflects poorer quality maternal and child healthcare in that setting as compared to other registry sites [12].
The second interesting finding is that this analysis is at odds with other previously published papers regarding the demographics of women experiencing OL with respect to education. Previous analyses report that poor educational status is a risk factor for OL, but in this study, the opposite was seen. Given that this finding could reflect confounding factors, a regression analysis including adjustment for maternal demographics was performed; it did not change the direction of the original finding. The explanation for this result remains unclear.
The final notable finding of this analysis is that women delivering preterm had a reduction in OL/PL/FTP of about 10%. Gestational age is difficult to define accurately in these settings since many women do not know the dates of their last menstrual period, and few had a dating ultrasound. Acknowledging that the MNHR gestational age data are imprecise, we nevertheless found a trend toward significance suggesting that women with preterm deliveries are less likely to experience OL/PL/FTP. OL/PL/FTP puts maternal, fetal, and neonatal lives at significant risk for a wide variety of adverse outcomes. This analysis suggests that vaginal delivery exacerbates all of the maternal, fetal, and neonatal outcomes evaluated in the setting of OL/PL/FTP, while cesarean section appears to reduce these adverse outcomes, although not as much as might be expected. This is likely attributable to delays in diagnosis, at which point delivery by cesarean section may be too late to impact outcomes from a prolonged dysfunctional labor. In terms of the results regarding attendant at delivery and delivery location, it appears that many women with OL/PL/FTP are eventually arriving at appropriate delivery settings and being delivered by skilled attendants. However, it is likely that women with OL/PL/FTP are arriving in these settings too late to affect the primary outcomes. The overall conclusion of this analysis is that labor should take place in the presence of an experienced provider from the outset who can recognize the signs of OL/PL/FTP and determine whether or not further intervention is necessary to prevent the excess maternal, fetal, and neonatal morbidity and mortality that occurs in untreated cases.
Peer review
Reviewer reports for this article can be found in Additional file 1.
Competing interests
The data and presentation of information have not been influenced by any personal or financial relationship of the authors with other people or organizations. The authors have no financial or other competing interests to disclose.
Authors' contributions
MH conceived of the study, participated in its design and coordination, and drafted the manuscript. RLG participated in its design and edited the manuscript. OP, SS, FA, EC, WAC, AG, NFK, KMH, SSG, BK, RJD, AP, PLH, FE, EAL, MKT, DDW, EMM and RLG designed and monitored the MNHR study quality. SA, SSG, OP, SS, FA, AG, AP, MB, AM, AM, and FE oversaw field activities and quality monitoring. Data analysis was conducted by JM, DDW, and EMM with input from RLG. All authors read and approved the final manuscript. | 2016-05-04T20:20:58.661Z | 2015-06-08T00:00:00.000 | {
"year": 2015,
"sha1": "a51940b6d0014ba3892f2baee7f3ca11a6794b15",
"oa_license": "CCBY",
"oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/1742-4755-12-S2-S9",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bac79b1fff75667e4d579b27253f3b85d4a09294",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246311742 | pes2o/s2orc | v3-fos-license | Network-Compatible Unconditionally Secured Classical Key Distribution via Quantum Superposition-Induced Deterministic Randomness
Based on the addressability of quantum superposition and its unitary transformation, a network-compatible, unconditionally secured key distribution protocol is presented for arbitrary networking in a classical regime with potential applications of one-time-pad cryptography. The network capability is due to the addressable unitary transformation between arbitrary point-to-point connections in a network through commonly shared double transmission channels. The unconditional security is due to address-sensitive eavesdropping randomness via network authentication. The proposed protocol may offer a solid platform of unconditionally secured classical cryptography for mass-data communications in a conventional network, which would be otherwise impossible.
Introduction
Due to the exponential growth of information traffic in fiber-optic communications backbone networks over the last thirty years, the information traffic rate has tripled every two years and is expected to reach its theoretical upper bound of 100 Tbps within a decade [1]. The more information traffic increases, the more data security should be emphasized. Current information security relies on computational complexity [2] and is thus vulnerable to both classical [3] and quantum attacks [4][5][6]. In classical cryptography such as public key cryptography [7], the key length has gradually increased over decades to protect data from potential eavesdropping, mostly relying on computing power [8]. As a result, secured data transmission in a classical (unsecured) regime becomes inefficient as the key length increases due to the tradeoff between security and the key generation rate [8]. Especially for big data-based artificial intelligence applications such as unmanned vehicles and Internet of things applications such as drones, data security must be carried out in an efficient way [9]. Thus, fundamental innovation in cryptography is required to overcome vulnerabilities in both classical attacks relying on algorithms or computing powers [3] and quantum attacks relying on quantum parallelism of superposition [4].
On the contrary, quantum cryptography [10] has been intensively studied for unconditionally secured quantum key distribution (QKD) over a quantum channel ever since the first QKD protocol of BB84 [11]. Due to imperfect single-photon detectors and quantum channel losses resulting in quantum loopholes however, QKD is also vulnerable to quantum attacks from a practical point of view [12]. The detection loopholes affect all QKD protocols, including decoy states [13] for single photons and Bell states [14] for entangled photon pairs. For transmission distance, QKD is strongly limited by the no-cloning theorem prohibiting duplication or amplification [15], unless quantum repeaters are implemented [16]. Moreover, there are no commercially available deterministic single-photon or entangled-photon pair generators yet, resulting in an extremely low QKD rate [10]. Besides, the key must be used only once to keep the unconditional security guaranteed by quantum mechanics [17]. Quantum networking among many parties is much harder to realize due to the limitations of multipartite entangled photon-pair generation [18]. Based on these practical issues, quantum cryptography seems to have a long way to go for commercial network applications such as e-commerce, including online banking and IoT via both wired and wireless communications [19], even though some point-to-point QKD protocols have already been launched for a testbed [20,21]. Further, QKD is incompatible with conventional information infrastructures in the classical domain such as wired and wireless networks, and thus severely limits its applications in mass data communications such as artificial intelligence based on big data [22].
To overcome the limitations of classical and quantum cryptographies, an entirely different method of unconditionally secured classical key distribution (USCKD) has been proposed for both wired [23] and wireless [24] transmissions using a pair of transmission channels forming a Mach-Zehnder interferometer (MZI) via quantum superposition between the MZI channels and its unitary transformation, resulting in deterministic randomness. This deterministic randomness represents no eavesdropping due to measurement indistinguishability caused by quantum superposition in the MZI channels, as well as the deterministic key distribution between two remote parties via unitary transformation. As demonstrated, the key generation determinacy in USCKD [23] is well understood in the coherence optics of MZI in terms of the directional determinacy [25]. The basis of eavesdropping randomness in USCKD has also been understood as measurement indistinguishability caused by channel superposition, as in Young's double-slit experiments [26]. Here, a network-compatible USCKD (NC-USCKD) protocol is presented, analyzed, and discussed for arbitrary networking in the classical domain, where a commonly shared pair of transmission lines of MZI plays a key role in both the physics and infrastructure. The classical channel represents a lossy and unsecured transmission line, resulting in open access by anyone. In the proposed NC-USCKD scheme, unconditional security is achieved coherently via addressable quantum superposition between two arbitrary parties in a network through the shared MZI channels. For the robustness of the MZI system, real-time phase stabilization has already been experimentally demonstrated for a few km ranges in both wired [27] and wireless schemes [28].
For the network addressability of the present NC-USCKD, addressable quantum superposition between two arbitrary remote parties is presented as a building block of unconditionally secured classical networking. Compared with the original point-to-point transmission scheme of USCKD [23], the addressability in the present NC-USCKD is due to the linear expansion of orthogonal bases through the shared MZI channels for N-to-N networking. For the unconditional security of NC-USCKD in a classical network, we also present an authentication protocol via network initialization between any two arbitrary parties. The practical advantages of NC-USCKD include high-speed key distribution, addressable networking, and compatibility with conventional optical systems relying on the wave nature of coherence optics. Owing to the coherence optics of MZI [25,26], NC-USCKD is naturally compatible with classical systems such as optical switches, optical routers, and even optical amplifiers. The phase locking in an optical amplifier such as an erbium-doped fiber amplifier is technically assured due to its coherence optics for regeneration in fiber-optic communications networks [29]. This classical compatibility offers a great benefit to the currently bottlenecked big-data applications based on CMOS technologies and can lead to a breakthrough in present mass data communications networks.
Materials and Methods
Numerical calculations in the Results were conducted with a homemade MATLAB program, where the underlying equations are derived analytically in the main text.
Results
Figure 1 shows a schematic of the proposed NC-USCKD based on a shared pair of round-trip MZI transmission channels in an N-party network, where the addressable remote parties are called Alice and Bob. Here, the round-trip configuration of the MZI is the same as symmetrically coupled double MZIs, where Bob (Alice) controls the first (second) MZI. For an N-party network, the number of arbitrary pairs for networking is N(N-1)/2, which is a quadratic expansion. This quadratic scalability in networking may be solved via multi-party superposition, which is beyond the present scope (discussed elsewhere). Each party has dual phase shifters to encode/decode one's phase bases, represented by, for example, ϕ1 and ϕ2 for the phase shifters Φ1 and Φ2 at Bob's side, or ψ1 and ψ2 for Ψ1 and Ψ2 at Alice's side, respectively. The MZI scheme in Figure 1 has nothing to do with the phase-encoded BB84 protocol [30]; USCKD uses a pair of transmission channels for deterministic randomness via quantum superposition and its unitary transformation [23]. For NC-USCKD, the phase controllers Φ2 and Ψ2 are added to the original scheme of USCKD for the purpose of addressable networking, where the original phase controllers (Φ1 and Ψ1) are used for unconditional security via deterministic randomness in the doubly coupled MZIs. In USCKD without Φ2 and Ψ2 [23], the round-trip MZI results in deterministic randomness if ϕ1 = ψ1 is satisfied, where ϕ1 and ψ1 have the same set of orthogonal phase bases: ϕ1, ψ1 ∈ {0, π}. The opposite case of ϕ1 ≠ ψ1 also works for the key distribution if bit-by-bit network initialization is performed [23]. Here, we briefly seek an N-party addressable condition in the NC-USCKD scheme of Figure 1, where the phase basis '0' ('π') represents the key '0' ('1').
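As a quick illustration of this quadratic scaling (not part of the original analysis), the pair count N(N-1)/2 can be tabulated for a few network sizes:

```python
def pair_count(n: int) -> int:
    """Number of distinct point-to-point connections among n parties."""
    return n * (n - 1) // 2

for n in (2, 4, 8, 16, 32):
    print(n, pair_count(n))  # 1, 6, 28, 120, 496
```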
According to the unitary transformation in the round-trip MZI configuration of Figure 1, the returned light (E9 and E10) at Bob's side must satisfy the identity or inversion relation if no network error occurs (Equation (1)). Here, the added phases (ϕ2, ψ2) are the assigned address parameters for their sites. As discussed already in USCKD [23], unconditional security is performed with the phase bases ϕ1 and ψ1. For a fixed address set (ϕ2, ψ2), Bob randomly prepares a key with his phase basis ϕ1 and sends it to Alice: this is the key preparation stage. Relative to Bob's prepared lights (E3, E4), Alice's phases (ψ1, ψ2) are transparent. Likewise, Bob's phases (ϕ1, ϕ2) are also transparent to the returned light (E7, E8). Alice measures her visibility VA to copy Bob's choice of ϕ1 (see Table 1 in [23]). Then, Alice randomly chooses her phase basis ψ1 to shuffle Bob's phase choice and sends the light back to Bob: this is the key selection stage. If the returned light E9 (E10) hits detector B3 (B4), the identity (inversion) relation is satisfied for the unitary transformation of the MZI matrix in Figure 1 (Section A of the Supplementary Information). If Alice chooses the same (opposite) basis as Bob, this results in the identity (inversion) relation. Unlike QKD, the key distribution of USCKD is fully deterministic without the need for sifting, owing to the MZI directionality; in QKD, sifting is used to induce eavesdropping randomness and the unconditional security is provided by the no-cloning theorem of quantum mechanics [15]. Thus, the random phase shuffling by Alice corresponds to the sifting of QKD for eavesdropping randomness. Here in NC-USCKD, eavesdropping randomness is achieved by network initialization (see Table 1). Depending on the key distribution strategy, the inversion case (Section B of the Supplementary Information) can also be included (see Table 2) for the key distribution. From Equation (1), the phase relationships between Alice and Bob for the identity and inversion relations are obtained in Equations (2) and (3), respectively. For a deterministic key distribution according to the MZI physics of transmission directionality, the control phase bases (ϕ1, ψ1) in Equations (2) and (3) must be shifted by the address phases (ϕ2, ψ2). For example, the modified phase basis in Figure 1 becomes ϕ1' = ϕ1 + ϕ2, where ϕ1 is the original binary basis (0, π); similarly, ψ1' = ψ1 + ψ2, where ψ1 is also the original binary phase basis (discussed in Figure 2). Due to the phase matching condition, the identity case of ψ1 = ϕ1 requires ψ2 = ϕ2; in a similar analogy, for the inversion case of ψ1 = ϕ1 ± π, the modified condition becomes ψ2 = ϕ2 ± π (Section B of the Supplementary Information). Owing to the network addressability with ψ2 = ϕ2 or ψ2 = ϕ2 ± π for the identity or inversion case, NC-USCKD works for any arbitrary phase address.
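The address-shifted basis bookkeeping described above can be sketched numerically as below; the modulo-2π phase arithmetic and the mapping from phase difference to the identity/inversion outcome follow the textual description of Equations (2) and (3) and are an illustration, not the paper's actual formulas.

```python
import math

def outcome(phi1: float, psi1: float, phi2: float, psi2: float) -> str:
    """Classify the round-trip result from the total phase difference.

    phi1, psi1: binary control bases (0 or pi) chosen by Bob and Alice
    phi2, psi2: address phases assigned to Bob's and Alice's sites
    """
    # Modified (address-shifted) bases, as described in the text.
    phi1_mod = phi1 + phi2
    psi1_mod = psi1 + psi2
    diff = (phi1_mod - psi1_mod) % (2 * math.pi)
    if math.isclose(diff, 0.0, abs_tol=1e-9) or math.isclose(diff, 2 * math.pi, abs_tol=1e-9):
        return "identity"       # same basis choice -> detector B3
    if math.isclose(diff, math.pi, abs_tol=1e-9):
        return "inversion"      # opposite basis choice -> detector B4
    return "address mismatch"   # psi2 differs from phi2 (or phi2 +/- pi)

delta = 2 * math.pi / 5          # an arbitrary shared address phase
print(outcome(0, 0, delta, delta))             # identity
print(outcome(0, math.pi, delta, delta))       # inversion
print(outcome(math.pi, math.pi, delta, 0.3))   # address mismatch
```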
Thus, Figure 1 functions as a basic building block of network-compatible USCKD in the classical domain. Table 1 is for a π-added δ; for a non-π-added δ, see [23]. The order number shows random cases, and "O" ("X") represents a correct (wrong) one.
In more detail, the control phase ϕ1 depends on ψ2 (= ϕ2) for arbitrary networking with a particular address δ at Φ2, where ϕ1' = ϕ1 + ψ2 (= ϕ1 + δ). Obviously, the ϕ1' value varies with the assigned address δ at Φ2. As a result, the corresponding phase ψ1 at Alice's side also becomes shifted by δ, satisfying Equation (2) and resulting in ϕ1' = ψ1' for the identity relation; otherwise, ϕ1' = ψ1' ± π for the inversion relation. Here, the address phase δ at Φ2 plays a key role in addressable networking in NC-USCKD, where δ can be considered as a continuous phase variable (CPV). This is the generalization of USCKD for networking without changing the original physics of USCKD. Keeping this in mind, we investigate the CPV property in NC-USCKD for the network addressability. Table 2. A key distribution procedure for NC-USCKD in Figure 1. The phase ϕ1 is denoted without the addition of ϕ2 for simplicity, and so is ψ1. Red indicates a network error. Each 'order' needs the network initialization in Table 1; otherwise, sifting for the identity relation is needed.
Figure 1 shows a paired party assigned to the address set (ϕ2, ψ2) through a shared pair of MZI transmission channels in the N-party network (Section C of the Supplementary Information). The coherent (bright) input light pulse E1 in Figure 1 is launched from a coherent laser (LD) through an optical modulator (OM) by Bob. A random phase basis ϕ1 ∈ {0, π}, controlled by the phase shifter Φ1, is added to the split light E4. The other split light E3 is encoded by the address phase shifter Φ2 with a phase variable ϕ2, where 0 ≤ ϕ2 ≤ π. As explained above, only the ϕ2-corresponding receiver (Alice) with the ψ2 address satisfies Equations (2) and (3) for the deterministic randomness of USCKD through the commonly shared pair of MZI transmission channels. Here, the MZI determinacy represents the phase-dependent transmission directionality: if ϕ1 = 0 (ϕ1 = π), assuming no network errors, detector A1 (A2) always clicks with E6 (E5) for ϕ2 = ψ2 = 0. The ψ1-controlled returned light E8, along with E7 sent by Alice, is also governed by the same MZI transmission directionality, resulting in the identity or inversion relation (discussed in Figures 2 and 3). For the return lights E7 and E8, both phases ϕ1 and ϕ2 are invisible, as mentioned above. Likewise, ψ1 and ψ2 are invisible to E3 and E4, respectively.
Figure 2 shows numerical calculations of the MZI determinacy for the output lights E5 and E6 on Alice's side, as well as the measurement randomness (IN5,6) in the shared pair of transmission channels. The related matrix representation [MZ]ϕ1,ϕ2 of the directionality for E5 and E6 at the MZI interferometer is given by Equation (4). The added phase ϕ2 (δ) causes a δ-phase shift in E3 in the lower transmission line. To compensate for this phase shift, ϕ1 must be adjusted accordingly for E4 in the upper transmission line. Thus, the modified phase at Φ1 must be ϕ1' = ϕ1 + δ, where ϕ2 = ψ2 = δ, 0 ≤ δ ≤ π, and ϕ1 is the binary phase basis {0, π}. With this modified phase, Equation (4) can easily be proved for the MZI determinacy (directionality) with an arbitrary value of δ for ϕ2.
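To make the directionality argument concrete, the toy model below propagates a single input field through a phase-encoded MZI using the standard lossless 50/50 beam-splitter matrix; this convention is an assumption for illustration and may differ from the paper's unextracted matrix definition in Equation (4), but it reproduces the described behavior: the output port is set by the relative phase, and a basis shifted by the address phase δ restores deterministic routing.

```python
import numpy as np

BS = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])  # lossless 50/50 beam splitter

def mzi_output(phi_upper: float, phi_lower: float) -> np.ndarray:
    """Output intensities of a single MZI for input in port 1."""
    phase = np.diag([np.exp(1j * phi_upper), np.exp(1j * phi_lower)])
    e_in = np.array([1.0, 0.0])                # E1 enters one input port only
    e_out = BS @ phase @ BS @ e_in
    return np.abs(e_out) ** 2                  # detector intensities

delta = 2 * np.pi / 5                          # address phase on the lower arm (E3)
for phi1 in (0.0, np.pi):                      # Bob's binary key basis
    naive = mzi_output(phi1, delta)            # basis not shifted by the address
    shifted = mzi_output(phi1 + delta, delta)  # modified basis phi1' = phi1 + delta
    print(f"phi1={phi1:.2f}  unshifted={naive.round(2)}  shifted={shifted.round(2)}")
# With the shifted basis the light exits a single, phase-determined port (1.0 / 0.0),
# whereas the unshifted basis splits between the two ports (reduced visibility).
```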
For the numerical demonstrations of the ϕ2-dependent MZI determinacy mentioned above, two basis values of ϕ1 ∈ {0, π} are used to test both the visibility V5,6 and the interference IN5,6. Here, the interference IN5,6 should be the same as IN3,4 if Eve has the same measurement tool as Alice's. However, Eve's measurement with the same interference tool results in either an in-phase or an out-of-phase scenario with the same probability, due to the measurement indistinguishability caused by the MZI path superposition. Figure 2a is the reference for ϕ2 = 0, while Figure 2b is for an arbitrary value of ϕ2 = π/3. Figure 2a shows a typical fringe pattern of the visibility V5,6, where the maxima occur at the phase bases ϕ1' = ϕ1 ∈ {0, π} (see the green dots in the solid curve). On the contrary, the interference IN5,6 results in the same value for both bases, resulting in measurement indistinguishability (see the green and orange dots in the dotted curve). As discussed in [21], IN5,6 should be the same as IN3,4, showing the physical origin of the measurement immunity in the MZI path corresponding to the no-cloning theorem in QKD. The phase shift of ϕ1 by the address value of ϕ2 is numerically demonstrated in Figure 2b, where the maximum visibility V5,6 = ±1 is obtained when the phase-shift condition is satisfied. This linear phase-shift relation of ϕ1 with ϕ2 reveals the infinite number of phase variables in ϕ2, resulting in the CPV characteristics of the present protocol, as shown in Figure 2c. In other words, the address phase ϕ2 is used for networking to the corresponding ψ2 at Alice's side. The corresponding interference IN5,6 always has the same value if ϕ1' = ϕ1 + ϕ2 is satisfied, as shown in Figure 2d. Thus, Figure 2 demonstrates the ϕ2-dependent MZI directionality in the NC-USCKD scheme of Figure 1 as well as the indistinguishability in eavesdropping (discussed later). The resulting addressable condition on Alice's side is ψ1' = ψ1 + ψ2. Because the relation ϕ1 = ψ1 must be satisfied for the one-way deterministic key transmission in Figure 1, ψ2 on Alice's side must be equal to ϕ2 according to Equation (2). Figure 3 shows the numerical calculations for the present NC-USCKD with addressable CPV of ϕ2 and ψ2. To satisfy the identity matrix at Bob's side for the returned light, the visibility VB = −1 for both bases (ϕ1 = ψ1 ∈ {ϕ2, π + ϕ2}) is numerically shown in Figure 3a for the correct condition of ϕ2 = ψ2 = 2π/5: VB = V9,10. However, for the wrong condition of ϕ2 ≠ ψ2 = 2π/5, the maximum visibility of VB fails. Thus, Equations (2) and (3) are proved, where the modified phase basis ϕ1' becomes continuous because ϕ2 (= ψ2) is continuous: 0 ≤ ϕ2 ≤ π. In practice, however, the possible number of CPVs is of course determined by the detector's sensitivity and MZI phase stability. Figure 3a,b represents the ψ1-independent identity relation (ψ1 = 0; 2π/5; π) in the round-trip MZI scheme of Figure 1. For the address-matching condition (ϕ2 = ψ2), as shown with the dashed curve in Figure 3a, all ψ1 values satisfy the correct VB if ψ1 = ϕ1. The visibility VA (= V5,6) is broken if ϕ1' ≠ ϕ1 + ϕ2 (see Figure 2b). Thus, only the dotted curve with ψ1 = 2π/5 (= ϕ2) in Figure 3b satisfies the directionality condition on both sides, with V5,6 = −1 and VB = −1 (see the open circle). This is because ϕ1 must be shifted by the ϕ2 value, and the shifted ϕ1 affects ψ1 to keep V5,6 = ±1.
For the key distribution process in Figures 2 and 3, how does Alice know the correct ψ1? In other words, how does Bob send his prepared key to Alice without revealing it to Eve? The answer to this question is given by authentication. If ϕ2 ≠ ψ2 for a wrong choice, the identity relation (VB = −1) must fail, as shown in Figure 3c,d (see the open circles). For the correct choice (ϕ2 = ψ2), both Bob and Alice automatically have the ϕ2-phase-shifted ϕ1 and ψ1, respectively. Thus, their visibility measurements must fulfill the identity (or inversion) relation. If there is any mismatch in the address (ϕ2 ≠ ψ2), the return light cannot satisfy the identity (or inversion) relation, as shown in Figure 3d (see the open circle): VB ≠ −1. Here, VB ≠ −1 means that detector B4 is also clicked by E4, indicating an error. Like USCKD [23], this property of NC-USCKD is also deterministic in the key distribution with random eavesdropping, owing to the MZI physics. Details of authentication are discussed in the section on network initialization. Figure 4 shows numerical calculations for the MZI channel measurements in Figure 1 for the demonstration of unconditional security in NC-USCKD. The matrix representation [MZ]ψ,ϕ applies to both E7 and E8 in the MZI paths of Figure 1. Figure 4 shows both the interference IN7,8 and the visibility V7,8 in the shared MZI channels for a smart eavesdropper. Although channel intrusion by Eve without altering the output fringe is theoretically and technically possible with the same measurement tool, Eve's chance to decode is just 50% on average because there is no way to keep the same phase difference as Bob or Alice. In other words, the same fringe pattern (visibility) can be achieved by Eve, but the absolute phase information of the light carrier is inaccessible due to the superposition between the two paths. Thus, Eve's eavesdropping chance with fringe coincidence is random, resulting in unconditional security. Moreover, a random phase-basis selection technique is added to prevent classical attacks such as memory-based attacks [23]. According to Equation (2), Alice's phase adjustment on ψ1 with ψ2 is automatic, as discussed in Figure 3. Figure 4a,b is for address matching (ϕ2 = ψ2) between Alice and Bob, while Figure 4c,d is for mismatching (ϕ2 ≠ ψ2). Regardless of whether the address set (ϕ2, ψ2) is known, Eve's channel attack must fail due to the MZI physics as well as the channel independence of coherence optics, as shown in Figure 4. This measurement randomness for Eve is rooted in Equation (5), where the four phase exponents of the matrix elements are all the same. Thus, the eavesdropping randomness and measurement indistinguishability in the shared MZI channels are sustained for ϕ2-dependent network channels, resulting in the unconditional security of NC-USCKD.
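The 50% ambiguity argued above can be illustrated with the same toy beam-splitter convention as before (again an assumption, not the paper's Equation (5)): the intensity in each shared channel is independent of the key basis, and an eavesdropper's own interferometer, whose reference phase offset is unknown, decodes the correct bit only half the time on average.

```python
import numpy as np

rng = np.random.default_rng(1)
BS = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])

def channels(phi1: float) -> np.ndarray:
    """Fields in the two shared channels after Bob encodes his key phase."""
    e = BS @ np.array([1.0, 0.0])                 # split E1 into the two channels
    return np.diag([np.exp(1j * phi1), 1.0]) @ e  # key phase on the upper channel

# 1) Channel intensities carry no key information.
for phi1 in (0.0, np.pi):
    print(np.abs(channels(phi1)) ** 2)            # [0.5, 0.5] for both key values

# 2) Eve recombines the channels in her own interferometer, but her reference
#    phase offset theta is unknown and independent of Bob's setup.
def eve_guess(phi1: float, theta: float) -> int:
    out = BS @ np.diag([np.exp(1j * theta), 1.0]) @ channels(phi1)
    return int(np.abs(out[0]) ** 2 > np.abs(out[1]) ** 2)  # her decoding rule

trials = 10_000
hits = 0
for _ in range(trials):
    key = rng.integers(2)                         # Bob's key bit: phase 0 or pi
    theta = rng.uniform(0, 2 * np.pi)             # Eve's uncontrolled offset
    hits += eve_guess(key * np.pi, theta) == key
print(hits / trials)                              # close to 0.5 on average
```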
Network Initialization: Network Addressing and Authentication
In an N-party classical network configuration sharing a pair of MZI transmission channels, the network initialization includes network authentication between the two parties assigned by the corresponding address set of ϕ2 and ψ2. For the deterministic randomness analyzed in Figures 1-4, the network initialization between two arbitrary parties in the network is a prerequisite to avoid any potential eavesdropping. Suppose that Alice and Bob represent any paired parties in the network connected by a specific address set of ψ2 and ϕ2, respectively (see Figure 1). As a preparation stage, first, Alice shuffles the MZI network by randomly shifting her phase shifter Ψ1 with a phase parameter δ (0 ≤ δ ≤ 2π). Alice is now ready to scan Ψ1 for her visibility VA. Second, Bob repeatedly sends the same test key encoded by his phase shifter Φ1 with ϕ1 ∈ {0, π} chosen randomly. Third, Alice scans her phase shifter Ψ1 until she obtains an interference fringe at the maxima. Then, Alice sets her phase basis with the δ-added one: ψ1 ∈ {δ, π + δ}. This modified phase set has a 50% chance of correctness due to the MZI randomness, as mentioned above for Eve. The network initialization results in authentication.
Eve can also do the same as Alice does, but her chance is worse than random due to δ. The chance for Eve to have the same δ as Alice is extremely low. In principle, two independent MZI systems set for Bob-Eve and Bob-Alice have a rare chance of matching each other, unless the input information from Bob is known to Eve, which is prohibited by definition. This small chance depends on the detector sensitivity and is lower than one in a million for commercially available avalanche photodetectors. This sensitivity-based resolution defines the maximum number of possible addresses in the network. Of course, the number of network addresses can be increased without limit by using address layers, e.g., by expanding the address set to (ϕ2^(j), ψ2^(j)) with hierarchy index j. Even if Eve luckily finds the δ assigned by Alice, she still has only a 50% chance of coinciding with Alice's basis choice.
The network initialization is summarized in Table 1, where the sequence numbers 1-4 correspond to the steps listed below. For this, Alice randomly resets the MZI system by modifying her phase shifter Ψ1 with a new phase variable δ, as mentioned above, as a preparation stage: Sequence #0. First, Bob randomly selects ϕ ∈ {ϕ2, ϕ2 + π} for the light pulse E4 in Figure 1 and sends it to Alice along with E3 (see Figure 2): Sequence #1. Second, Alice measures VA and randomly sets her phase controller Ψ1 with either δ or δ + π to send the reflected light to Bob: Sequence #2. Alice announces the result of VA publicly; note that Alice never announces her phase choice, either ψ1 or δ. Third, Bob measures his VB and publicly announces whether Alice's measurement is correct or not: Sequence #3. Lastly, Alice learns secretly and deterministically whether her δ is correct or wrong: Sequence #4. If it is wrong, Alice simply adds a π phase to δ; otherwise, she keeps it as her final phase basis set of ψ. Table 1 is for the case of a π-phase-shifted δ.
(Network preparation) Initially, Alice resets the MZI network by disturbing the MZI with her phase controller Ψ(δ) and scans δ until she gets VA = ±1 for the test bits provided by Bob. The δ is a phase variable added to her phase basis ψ ∈ {0, π}. Then, Alice gives a cue to Bob.
1. Bob randomly selects his phase basis ϕ ∈ {0, π}, encodes his light with ϕ, and sends it to Alice.
2. Alice measures VA, publicly announces the result, and returns the ϕ-set light to Bob after encoding it with δ + ψ.
3. Bob measures VB and publicly announces whether Alice's result is correct (O) or not (X).
4. Alice secretly and deterministically learns whether her δ is correct; if it is wrong, she adds π to δ, otherwise she keeps it as her final phase basis.
Eve may also perform the same network initialization of Table 1 with an arbitrary value of δ′ for her phase shifter Ψe(δ′). As a result, Eve obtains the same fringe pattern but with maxima unsynchronized with respect to Alice's, because δ′ ≠ δ due to the asymmetry of the independent systems. The synchronization chance (δ′ = δ) between Eve and Alice is extremely low, where the chance is determined by the detector's sensitivity as mentioned above: a commercially available detector sensitivity is very high (>10^4 V/W at GHz). Thus, addressable networking with unconditional security is achieved by the network initialization shown in Table 1. The unconditional security is effective with a 50% chance (randomness) via information theory [31]. As discussed with memory-based attacks [23], Eve has no chance of eavesdropping on the data. One might suggest that Eve's eavesdropping trials may shift the VA value, causing an error; however, the shift must be consistent owing to Eve's coherent setup. A consistent VA shift on Alice's side does not affect the initialization process at all; otherwise, it confirms Eve's intrusion. Thus, network initialization implies both network addressing and authentication between two addressees because this process completely removes the potential eavesdropping chance by Eve. Table 2 shows the key distribution procedure without sifting for the present NC-USCKD in Figure 1. This procedure accompanies network initialization at each order to avoid memory-based attacks; otherwise, sifting is performed [23]. Below is a summary of the key distribution process: Procedure. After network initialization, Bob prepares a random key using the orthogonal bases of ϕ1 and sends it to Alice via the shared MZI transmission lines. Then, Alice randomly selects from the Bob-prepared key using her phase bases ψ1 and sets it as a raw key. Here, ψ1 is modified via the network initialization in addition to the individual address ψ2. Owing to the directional determinacy of the MZI, both parties deterministically share the same raw key by simply reading out their visibilities (VA; VB). Both the identity and inversion relations in VB are used for the raw keys, resulting in a nearly 100% bit rate. If bit-by-bit network initialization is not performed, then a usual sifting process is performed for a batched order based on the identity relation in VB (Section E of the Supplementary Information). In this added sifting case, the network initialization is performed for the batched order. For error correction, both parties finally publicly announce their error bits only (red numbers in Table 2) and then remove them from the raw key chain. As a result, a final key of the same length (m) is shared between Alice and Bob. Here, the mark X represents a discarded bit resulting from the error correction. To evaluate the error rate, Bob compares the final key chain (m) with his prepared one. Privacy amplification may be added by randomly selecting some bits in the final key chain to calculate the error bit rate. The key distribution procedure for NC-USCKD is summarized in Table 2.
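A highly simplified end-to-end sketch of the procedure above is given below; the representation of the identity/inversion readout as simple bit comparisons, the channel error model, and all parameter values are assumptions for illustration and do not model the optics or the actual Table 2.

```python
import numpy as np

rng = np.random.default_rng(7)

def distribute_key(n_bits: int, error_rate: float = 0.01):
    """Toy run of the deterministic key distribution with error removal."""
    bob_prepared = rng.integers(0, 2, n_bits)      # Bob's random basis choices
    alice_selected = rng.integers(0, 2, n_bits)    # Alice's random shuffling

    # Identity relation when the choices agree, inversion when they differ;
    # both readouts are kept, so every bit contributes to the raw key.
    raw_key = bob_prepared ^ alice_selected

    # A small fraction of bits is corrupted by (assumed) channel/phase errors;
    # both parties flag and discard those positions after comparing visibilities.
    errors = rng.random(n_bits) < error_rate
    final_alice = raw_key[~errors]
    final_bob = raw_key[~errors]
    return raw_key, final_alice, final_bob

raw, a, b = distribute_key(1000)
print(len(raw), len(a), np.array_equal(a, b))      # e.g. 1000 ~990 True
```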
Discussion
Regarding the eavesdropping discussed in Figure 4, Eve can set up the same measurement tools for both outbound and inbound eavesdropping as Alice and Bob have, respectively. Then, Eve simply reads out her visibility relying on the same MZI directionality, with a best chance of 50% on average. For arbitrary addressing in the N-party NC-USCKD, the network initialization between any two arbitrary parties results in network authentication. Thus, Eve's measurement-based eavesdropping of the phase-controlled round-trip MZI system of Figure 1 is worse than random, resulting in unconditionally secured cryptography, even in the classical domain. Here, the network resolution, or maximum number of addresses in the network, is determined by the MZI phase stability [32], while extension of the transmission distance beyond the few-km range [27,28] for the shared MZI is just a technical issue [33].
Coherence-Based Memory Attack
The eavesdropping randomness in the MZI scheme of Figure 1, however, must be consistent across all coherently measured bits obtained by Eve, either in phase or out of phase with Alice or Bob. This fact is critical for post-measurement attacks such as memory-based attacks, because Eve could simply flip all eavesdropped bits for correction. To protect against such a classical attack, bit-by-bit network initialization (Table 1) or block-based sifting (Section C of the Supplementary Information) is necessary. In other words, the eavesdropping randomness in the MZI must be bit-by-bit to satisfy unconditional security in the present scheme. Then, the maximum eavesdropping rate becomes η_e = 1/2^N, where N is the key length in digits. For N = 128, η_e ~ 10^-39, and a brute-force attack would take much longer than the age of the universe (10^35 s) to succeed even with the world's most powerful supercomputer, whose bit-flip time is 10^-17 s (see Section F of the Supplementary Information). For a random bit sequence, no efficient algorithm exists other than brute-force attacks. Owing to the coherence optics compatible with conventional optical systems, the key length of the present NC-USCKD has no practical limit due to phase-locked amplification. Thus, the unconditional security of NC-USCKD using coherent light opens the door to potential one-time-pad cryptography in the classical domain, which would otherwise be impossible.
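The order-of-magnitude claim can be checked with simple arithmetic, as sketched below; the 10^-17 s bit-flip time is the figure quoted above, while the assumption of one key candidate per bit flip and the resulting total time are illustrative estimates only, not figures taken from the text.

```python
# Brute-force search over a 128-bit random key, assuming one candidate per bit flip.
N = 128
candidates = 2 ** N                 # ~3.4e38 possible keys
eta_e = 1 / candidates              # maximum eavesdropping success rate ~2.9e-39
flip_time_s = 1e-17                 # assumed time per candidate test
total_s = candidates * flip_time_s
print(f"eta_e ~ {eta_e:.1e}")
print(f"search time ~ {total_s:.1e} s (~{total_s / 3.15e7:.1e} years)")
```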
Conclusions
The NC-USCKD protocol was presented, analyzed, and discussed for addressability in an N-party attached classical network, where unconditional security is based on quantum superposition between shared transmission lines in the classical regime. The key rate of NC-USCKD depends on classical optoelectronic devices such as acousto-optic or electro-optic modulators operating at GHz rates, compatible with current fiber-optic communications network systems. The network initialization in the N-party optical network was successfully shown for two arbitrary parties assigned by their public addresses. The number of public addresses is practically dependent on the photo-detector's sensitivity. Network initialization also resulted in authentication between the two addressed parties, where Eve's eavesdropping success rate decreases quadratically as N increases linearly. The proposed NC-USCKD can be applied to conventional DWDM-based fiber-optic communications networks by allocating each address to each wavelength [34]. Because of the MZI robustness against phase fluctuations demonstrated in both optical fibers [24] and free space [28] over a few-km range, the network extension to tens of km with large N is a simple technical issue with current locking technologies [27,28,33,34]. In a multi-core fiber, the MZI path length is potentially error-free owing to the core-to-core proximity of a few microns [1]. The wavelength converter, optical MUX/DEMUX, and amplifiers such as EDFAs are coherent devices, so a phase difference between the input and output can be locked. This fixed phase shift can also be adjusted for the desired interference fringe in a network preparation stage. For wavelength sharing/dependent network configurations, STAR, ring, or FTTH fiber-optic networks are also possible.
Unconditional security in NC-USCKD using bright coherent light was presented based on addressable quantum superposition and its unitary transformation for a shared MZI system between any two arbitrary remote parties in a network. Compared with QKD protocols such as BB84 based on single photons over a single quantum channel, the unconditional security of NC-USCKD is far superior, resulting in detection-loophole-free, ultrafast, and distance-unlimited unconditionally secured cryptography for N parties in a network. Unlike the canonical (non-orthogonal) basis-based no-cloning theorem in QKD, the physics of the unconditional security of NC-USCKD lies in the quantum superposition between paired transmission lines of the MZI channels and its unitary transformation in a round-trip scheme, resulting in deterministic randomness. To avoid potential eavesdropping, real-time network initialization was performed to protect against classical attacks such as memory-based attacks. Compared with the original point-to-point transmission scheme of USCKD, the addressability in NC-USCKD is due to the linearity of orthogonal-basis expansion among N parties for N-to-N networking. Eventually, the proposed NC-USCKD can be applied to current fiber-optic communications networks with laser-locking techniques as well as to future multi-core fiber networks. As a result, NC-USCKD has potential for the long-standing goal of one-time-pad cryptography in the classical regime for artificial intelligence requiring unconditionally secured mass data communications, such as in unmanned vehicles, drones, and medical record transmission.
Data Availability Statement:
The data presented in this study are available in the article.
Conflicts of Interest:
The author declares no conflict of interest. | 2022-01-28T17:00:14.919Z | 2022-01-21T00:00:00.000 | {
"year": 2022,
"sha1": "57accf2313b35eaacdb851766045e2e42d2d6588",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2410-387X/6/1/4/pdf?version=1646041428",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a5027794dd220e4190fef925368bf21d8554a588",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
256005371 | pes2o/s2orc | v3-fos-license | Association between physical activity and sleep-disordered breathing in male Japanese workers: a cross-sectional study
Whether physical activity reduces the risk of sleep-disordered breathing (SDB) for non-obese people remains unclear. The present cross-sectional study examined the association between physical activity and SDB among non-obese male Japanese workers. All 200 workers in a company in Tokyo, Japan, who drove a motor vehicle as part of their job, were invited to be screened for SDB to prevent traffic accidents. Of these, 195 agreed to participate in this study. The number of apnea and hypopnea episodes occurring during one night was measured using a single-channel airflow monitor to obtain an individual respiratory disturbance index (RDI). SDB was defined as RDI ≥15 apneas/hypopneas/h. Non-obese males (body mass index <30 kg/m2) were included in the analysis. Unconditional logistic regression analysis was used to calculate crude and adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for SDB by physical activity level tertile, as measured by the International Physical Activity Questionnaire. The prevalence of SDB was 26.9%. The unadjusted analysis showed a significant inverse association between physical activity and SDB: crude ORs for the tertiles of physical activity were 1.00 (low), 1.58 (middle), and 0.27 (high) (95% CI 0.08–0.88; P for trend = 0.007). However, this association was attenuated after adjusting for covariates: Adjusted ORs were 1.00 (low), 1.65 (middle), and 0.41 (high) (95% CI 0.10–1.61; P for trend = 0.11). In a cross-sectional study among non-obese male workers in Japan, we found no significant association between physical activity and SDB.
Background
Physical activity has been hypothesized to have a protective effect against the development of sleep-disordered breathing (SDB). There are several possible mechanisms by which physical activity is thought to mitigate SDB, such as through the maintenance of body weight [1], possible influence of muscle tone in the neck [2] or the influence on visceral body fat [3].
A few previous epidemiologic studies in the United States have suggested a reduced risk of SDB in association with physical activity [1,[4][5][6]. A longitudinal study showed that additional adjustment for body mass index (BMI) attenuated the association [1]. This suggests that BMI may be a mediator of this effect. A cross-sectional study also showed that the protective association between physical activity and SDB was seen mainly in males [5]. However, these studies investigated the association of physical activity with SDB among predominantly obese or overweight populations (mean BMI 27-32.5 kg/m2).
Little evidence is available from epidemiologic studies regarding the association between physical activity and SDB among less obese people. Additionally, to our knowledge, this association has not yet been studied outside the United States. It has been reported that Far East Asian male patients with obstructive sleep apnea, which accounts for the majority of SDB, were less obese but had greater severity of obstructive sleep apnea when compared with white male patients with obstructive sleep apnea [7]. However, Asian facial/mandibular shape may have contributed to these results [8]. Therefore, it remains to be elucidated whether physical activity reduces the risk of SDB in non-obese people, particularly in Asian countries. Weight change, and the general strengthening and fatigue resistance of the ventilatory and upper airway dilator muscles caused by physical activity may improve SDB [3] even for non-obese people.
The present study used a cross-sectional design to examine the association between physical activity and SDB among non-obese male workers in Japan. Our investigation among this group will be very significant for Japanese society in terms of driving and road safety.
Participants
A cross-sectional sample of subjects participated in this study. In October and November 2013, all 200 workers from a nationwide petroleum-related company in Tokyo, Japan, who drove a motor vehicle as part of their job were invited to participate in SDB screening to prevent possible traffic accidents. Of these, 195 agreed to participate in the present study (participation rate: 97.5%). Written informed consent was obtained from all participants. To avoid confounding by sex, female participants (n = 2) were excluded from the analysis. Accordingly, all further analyses were restricted to male participants. The study protocol was approved in advance by the Institutional Review Board of Juntendo University Faculty of Medicine, Tokyo, Japan (receipt number 813; approval letter number 2012057; May 21, 2012).
Data collection

Questionnaire for exposure and covariate assessment
Participants completed a self-administered questionnaire collecting information on socio-demographic and anthropometric characteristics, lifestyle including exercise habits, and medical history including hypertension, type 2 diabetes, and cardiovascular disease. BMI was calculated as weight in kilograms divided by the square of height in meters. Participants were also asked to recall their body weight at approximately 20 years of age. Adult weight gain was then estimated by deducting this recalled weight from their current body weight. The revised version of the International Physical Activity Questionnaire (IPAQ)-Short Form Japanese edition was used to evaluate individual levels of physical activity in terms of intensity and frequency in a usual week (MET-h/week) [9,10]. The self-administered IPAQ short version has been validated against accelerometer-measured activity levels among Japanese adult men and women, resulting in correlation coefficients of 0.39 and 0.37 (P < 0.001) for the CSA accelerometer and the life coder accelerometer, respectively [9].
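As a minimal illustration of how IPAQ-Short Form answers are converted into a weekly activity volume, the sketch below (not part of the original study) multiplies commonly used MET values by the reported frequency and duration of each activity domain and sums them; the MET constants and the example answers are assumptions, and the study's exact scoring may differ.

```python
# Commonly used IPAQ-Short Form MET values (assumed; actual scoring may differ).
MET_VALUES = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_hours_per_week(activity: dict) -> float:
    """activity maps each domain to (days per week, minutes per day)."""
    met_minutes = sum(MET_VALUES[name] * days * minutes
                      for name, (days, minutes) in activity.items())
    return met_minutes / 60.0   # convert MET-min/week to MET-h/week

example = {"walking": (5, 30), "moderate": (2, 40), "vigorous": (1, 20)}
print(f"{ipaq_met_hours_per_week(example):.1f} MET-h/week")
```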
SDB screening
The number of apnea and hypopnea episodes occurring during one night was measured using a single-channel airflow monitor (SOMNIE; NGK Spark Plug Co. Ltd., Nagoya, Japan) [11,12] to obtain an individual respiratory disturbance index (RDI). Among healthy-weight people (BMI <25 kg/m2), the RDI measured by SOMNIE was highly correlated with the apnea-hypopnea index as assessed by polysomnography (r = 0.93), and its sensitivity and specificity for predicting an apnea-hypopnea index ≥15 events/h were 0.78 and 0.89, respectively [12].
The recording of airflow during sleep was carried out using a portable monitoring device in the participants' homes. Participants were provided with an instruction leaflet for the portable monitor and a sleep log. The leaflet instructions read as follows: "The recording should be carried out for more than 4 h. Please engage in your normal life at home, including diet, drinking, smoking, and taking medicine." Among night shift workers, 17 participants conducted the recording during sleep after a day shift, 6 after a night shift, and 29 during a day off.
Thirteen participants were excluded from the analysis because of imprecise (n = 9) or missing RDI values (n = 4) caused by poor measurement conditions. Data from the remaining 180 men with precise RDI values were used for the analysis. An SDB case in the present study was defined as having an RDI of at least 15 apneas/hypopneas/h, as in a previous study [5]. Using this criterion, the subjects were divided into two groups: 49 cases and 131 controls.
Statistical analysis
All statistical analyses were performed using SAS software version 9.2 for Windows (SAS Institute Inc., Cary, NC, USA). There was a BMI limit (<30 kg/m2) for inclusion in our analysis, following the World Health Organization's definition of obesity [13] (three cases and six controls were excluded). Characteristics were compared between cases and controls using the Wilcoxon rank-sum test with normal approximation for continuous variables and Fisher's exact probability test for categorical variables. Individual physical activity levels were categorized into tertiles among the control subjects, and corresponding dummy variables were created. Here, tertiles were defined by two cut points that divided the participants into three groups of equal size based on the distribution of physical activity among the controls. Unconditional logistic regression analysis was performed to calculate crude and multivariable-adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for SDB according to tertile of physical activity level using the SAS LOGISTIC procedure. A linear trend was tested in logistic regression models using the median value for each physical activity level tertile. Adjusted models included age as a covariate and then added BMI, education, marital status, history of hypertension, smoking and drinking habits, and night shift work. These variables were selected as potential confounders based on a comparison of characteristics between cases and controls and previous studies [1,4,5]. In particular, age and BMI are well-known risk factors for SDB [8]. We did not adjust for adult weight gain, because this may be an intermediate variable. Missing values were managed by complete case analysis, meaning that observations with missing values were not used for the multivariable analyses (listwise deletion). All P values and 95% CIs reported were two-sided, and significance was set at P < 0.05.

Results

Table 1 presents the characteristics of case and control subjects. Cases were distinguished by having an RDI of 15 or more episodes/h. The prevalence of SDB was 26.9%. Compared with the control subjects, SDB case subjects were less likely to be lean or night shift workers, and more likely to be older or cohabiting with their spouses. Most participants were engaged in specialized or technical work, managerial work, or marketing and sales. All participants were regular employees. Table 2 shows the ORs and 95% CIs for SDB by level of physical activity. The unadjusted analysis showed a significant inverse association between physical activity and SDB: crude ORs for the tertiles of physical activity were 1.00 (low), 1.58 (middle), and 0.27 (high) (95% CI 0.08-0.88; P for trend = 0.007). However, this association was attenuated after adjusting for age and other covariates: Adjusted ORs were 1.00 (low), 1.65 (middle), and 0.41 (high) (95% CI 0.10-1.61; P for trend = 0.11). Similar results were obtained when RDI was dichotomized using 10 rather than 15 as the cutoff point (data not shown). Even when the middle and high physical activity groups were combined, no significant association was found (adjusted OR = 1.12; 95% CI 0.45-2.77). In addition, when RDI and physical activity level were included as continuous variables, multiple linear regression analysis did not show any significant association between physical activity and RDI (P = 0.69).
In this multiple linear regression, the included independent variables were physical activity, age, BMI, smoking, drinking, education, marital status, night shift work, and history of hypertension. This analysis also showed a positive association between BMI and RDI (P = 0.015).
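As a rough illustration of the tertile-based analysis described in the Statistical analysis section, the sketch below (not from the original study, which used SAS PROC LOGISTIC) fits an unconditional logistic model with statsmodels and converts the coefficients into odds ratios with 95% confidence intervals. The variable names, the synthetic data, and the shortened covariate list are placeholders only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: SDB status, physical-activity tertile, and covariates.
rng = np.random.default_rng(0)
n = 180
df = pd.DataFrame({
    "sdb": rng.integers(0, 2, n),                      # 1 = RDI >= 15, 0 = control
    "pa_tertile": rng.choice(["low", "middle", "high"], n),
    "age": rng.normal(45, 10, n),
    "bmi": rng.normal(24, 3, n),
})
df["pa_tertile"] = pd.Categorical(df["pa_tertile"],
                                  categories=["low", "middle", "high"])

# Unconditional logistic regression: SDB vs. physical-activity tertile, adjusted.
model = smf.logit("sdb ~ C(pa_tertile) + age + bmi", data=df).fit(disp=0)

# Odds ratios and 95% CIs are the exponentiated coefficients and bounds.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```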
Discussion
This cross-sectional study investigated the association between self-reported physical activity and SDB, adjusting for participants' characteristics, among non-obese male workers in Japan. The results showed that higher levels of physical activity were not clearly associated with a reduced risk of SDB.
Previous studies have shown inverse associations between physical activity and SDB [1,[4][5][6]. However, the participants in these studies were predominantly obese or overweight people in the United States. To our knowledge, the present study is the first to investigate the association between physical activity and SDB among non-obese men in an Asian country.
Despite having a small sample and moderate statistical power, our results suggest that physical activity may not mitigate SDB among non-obese male adults in Japan. This non-significant association might be biologically plausible, because, for non-obese people, physical activity may not be expected to further improve or prevent SDB via weight change. Other possible mechanisms, such as increased ventilatory muscle strength and endurance and the redistribution of adipose tissue from the pharyngeal regions to other areas, may not be sufficiently developed by physical activity in the present study. To obtain such effects, a course of aerobic physical training would be needed. For example, a randomized controlled trial showed that 150 min/week of moderate intensity aerobic activity for 12 weeks resulted in a significant reduction of total body fat and apnea-hypopnea index without a significant change in body weight [3]. As a result, for non-obese SDB patients in Japan, treatment with continuous positive airway pressure is a good choice to reduce the risk of complications such as type 2 diabetes, cardiovascular diseases, and possible future traffic accidents despite suboptimal long-term adherence [14]. However, as a matter of course, increasing physical activity is still recommendable because of its preventive effects on many diseases other than SDB [15][16][17][18][19][20][21].
Major strengths of the present study were as follows. First, almost all invited subjects participated in the study, reducing the possibility of nonresponse bias. Second, many confounders were measured and statistically controlled through multivariable analysis.
Several potential limitations also warrant mention. First, although the correlations between IPAQ scores and the accelerometer-measured values (r = 0.39 and 0.37) were at least as high as those reported in previous studies [9], measurement error in the exposure assessment might have caused some degree of misclassification, leading to a null result. Second, although we adjusted for potential confounders to the extent possible, we could not exclude the possibility of residual confounding by unmeasured confounders. Third, the small sample size used in this study might not have allowed us to detect weak associations, although a significant crude OR was observed in the highest physical activity tertile group. The results presented here should be confirmed through replication by larger studies in the future. Fourth, the generalizability of our findings might be limited, because all participants were men and employed by a single company. However, these participants can be considered typical workers in Japan as far as their physical activity levels and prevalence of SDB are concerned. The prevalence of SDB in the present study (26.9%) was relatively consistent with those reported in surveys of male adults in Japan (22.3%) [22], Australia (24.9%) [23], and Brazil (24.8%) [24], when SDB is defined as apnea-hypopnea index ≥15 events/h. Fifth, although this study investigated the association between physical activity level and SDB among non-obese men, the mechanistic pathway of this association could not be explored because we employed a cross-sectional design and did not measure such mediator variables. In addition, we used a single-channel airflow monitor instead of polysomnography. The use of polysomnography in future studies may reduce the possibility of outcome misclassification.
Conclusions
In a cross-sectional study of non-obese workers in Japan, we found no significant association between physical activity and SDB. Our results do not suggest that physical | 2023-01-20T14:28:36.585Z | 2017-01-09T00:00:00.000 | {
"year": 2017,
"sha1": "023b1782b604ada19bf0ecb68abf164b4e30f4ac",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13104-016-2362-2",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "023b1782b604ada19bf0ecb68abf164b4e30f4ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
271164237 | pes2o/s2orc | v3-fos-license | Dataset explaining the comparative seasonal crop load and harvest quality of guava upon pruning strategies
The dataset explained the details on how pruning techniques significantly affected the seasonal variations on fruit availability and edible quality of guava (Psidium guajava L.) under fluctuating sub-tropical weather conditions. The present pruning data also directed a way of enhancing lean season (off-season) harvest without sacrificing the main season yield and fruit quality. In detail, the pruning strategies included branch removal of 0 cm, 15 cm, 30 cm and 45 cm from shoot-tip once a year during spring (early March), monsoon (early June) and autumn (early September) starting with spring pruning. Over two consecutive years (2019–2020 and 2020–2021), the pruning treatments were assigned in triplicates following a randomized Complete Block Design (RCBD) where the same plants received the same treatments during observation period. Data on crop load like number of fruits and fruit yield per plant and fruit biochemical traits namely total soluble solids, titratable acidity, total sugars, vitamin C and fruit specific gravity were recorded. To assess the seasonal variations, data collection was performed continuously and grouped at quarter intervals i.e., March-May, June-August, September-November and December-February of the year. Plants under pruning produced greater number of flowers and fruits for superior yield and quality compared to non-pruned plants. Irrespective of pruning techniques, June-August and September-November quarters had superior yield over others, whereas March-May harvests retained utmost fruit quality. Considering pruning time, plants reserved maximum harvestable fruits in June-August quarter under spring pruning followed by March-May quarter for autumn pruning compared to other combinations. Moreover, fruit biochemical attributes were examined the best at March-May harvests under autumn pruning. Alongside, June-August and September-November periods exhibited superiority for yield over others when plants were pruned at 30 cm level, but 45 cm pruning had best yield at March-May quarter. Whether, fruits had notable TSS, sugars, vitamin C and specific gravity obtained during March-May period from 45 cm pruning treatment. June-August was noted to produce inferior quality fruits in guava.
Value of the Data
• As observed from the dataset on pruning-induced seasonal variations in yield and fruit quality in guava, the rainy season (June-November) had better yield than the dry period (December-May), but fruits harvested during the dry season had superior edible quality compared with the wet-season harvests. In addition, spring pruning produced maximum yield in June-November, while autumn pruning gave superior yield in the December-May period while retaining considerable harvests in the other seasons. Alongside, plants receiving 30 cm pruning had distinguishable yield as well as fruit quality comparable to that of 45 cm pruning. Thus, autumn pruning with 30 cm shoot-tip removal exhibited superiority over other combinations for standardized yield during the lean period (December-May) with better-quality fruits while also ensuring the main-season harvests in guava. From the dataset, farmers can strategically implement pruning practices to optimize yield based on seasonal variations.
• In addition, by understanding the impact of pruning on fruit quality, farmers can ensure that fruits harvested during the dry season maintain superior edible quality, potentially leading to increased market demand and better prices. Implementing optimal pruning practices can lead to higher yields and better fruit quality, potentially reducing input costs and maximizing profits for farmers.
• The dataset provides a valuable foundation for future research in guava cultivation. Researchers can build upon these findings to explore additional factors influencing yield and fruit quality, such as different pruning techniques, varieties, soil conditions, and environmental factors. However, scholastic researchers may find areas for further investigation on the specific physiological mechanisms underlying the observed effects of pruning on yield and fruit quality.
• Furthermore, policy makers can use the findings to develop evidence-based agricultural policies and recommendations for guava growers. For example, they can incorporate guidelines for optimal pruning practices into extension programs and agricultural training initiatives. Thus, by promoting practices that enhance yield and fruit quality, policymakers can contribute to the sustainability and profitability of guava farming operations. This can help strengthen rural economies and improve food security in guava-producing regions.
• Overall, the dataset of the experiment offers practical benefits for stakeholders across the agricultural sector, from farmers seeking to improve their practices to researchers exploring new avenues of inquiry and policymakers striving to support sustainable agricultural development.
Background
Meeting the nutrition demand of the growing population is an important global issue, as noted in the SDGs. Fruits are a vital source of health-boosting vitamins (C, A, B1, B6, B9, and E), minerals, dietary fibers, and phytochemicals with secondary metabolites [1]. However, uniform and year-round availability of fresh fruits can hardly be observed. Due to seasonal weather fluctuations as well as species or cultivar specificity, fruits are produced in plenty during the four summer and rainy months, while they are scarce in the post-monsoon dry months in the tropics and subtropics. In Bangladesh, more than 54% of total annual fruits are available from mid-May to mid-August, while the remaining eight months have only 46% of the fruits, leading to an acute shortage of native fruits to meet the daily dietary requirement [2,3]. So, technologies are imperative to enhance lean-season fruit production to secure the nutrient demand.
However, guava (Psidium guajava L.), an important tropical and sub-tropical fruit, has the potential to produce flowers and fruits round the year. But heavy bearing during the summer months and the absence of new growth during the post-monsoon period result in sub-optimal guava yield during the winters. Timely pruning at appropriate levels can then ensure adequate new shoots to bear flowers and fruits in guava [4,5]. Therefore, a sustainable pruning technique that emphasizes off-season production without sacrificing the main-season yield and fruit quality in guava can provide invaluable insights for farmers, researchers, and policymakers to make decisions regarding human nutrition supplementation through improved fruit farming. By collecting and analyzing comprehensive data on pruning-induced comparative seasonal yield and fruit nutrition differences, the dataset intends to formulate an evidence-based recommendation for sustainable guava production to meet the food and nutrition security of the inhabitants.
Data Description
Pruning time (P), pruning level (T) and observation quarter (D) had a significant (p < 0.05) influence on yield and fruit quality traits of guava, whereas the three-way interactions among P, T and D showed a non-significant impact on total soluble solids (TSS) and titratable acidity (TA) content in fruits, with a few other exceptions (Table 1). Table 2 elucidates the interactive effect of P × D, explaining the quarter-based variations in yield and fruit quality traits in guava after pruning at different times. Number of fruits and fruit yield per plant were noted to be highest in the June-August quarter in interaction with spring pruning (52.27 fruits and 12.43 kg per plant, respectively), followed by the March-May period for autumn pruning (45.87 fruits and 10.96 kg per plant, respectively). Guava plants produced statistically maximum harvestable fruits during June-August and September-November (rainy months) when spring pruning was performed. In contrast, winter and dry-period production (December-February and March-May) was led by autumn pruning. Regarding the fruit biochemical traits, TSS, total sugars (TSG), vitamin C (VTC) and specific gravity (SPG) of guava were determined to be superior under autumn pruning during March-May, followed by the December-February quarter. The quality traits demonstrated statistical similarity during the other rainy-season quarters (Table 2). Meanwhile, Table 3 illustrates the T × D interactions, displaying how pruning levels affected the quarter-based variations in yield and fruit quality traits in guava. The 30 cm pruning level resulted in maximum fruiting and yield in guava during the September-November quarter, having statistical parity with the June-August harvest of the same treatment. The 45 cm pruning length followed as the next best treatment in terms of yield of guava under study. Regarding the fruit physicochemical attributes, both the 30 and 45 cm pruning levels exhibited statistical harmony for superior results throughout the quarters of the year (Table 3).
Sole seasonal availability of fruits and fruit nutritional properties showed that fruit yield was best during the September-November period, followed by June-August (Fig. 1). Among the post-harvest fruit quality characters, TSS, total sugars, vitamin C and fruit specific gravity registered distinguishable values in the March-May harvest, followed by the December-February fruits. Conversely, the June-August harvests had inferior results in terms of fruit biochemical parameters. Titratable acidity showed an exception to the others (Fig. 1).
Further correlation analysis demonstrates both positive and negative relationships among pruning-mediated yield and fruit quality variables in guava (Table 4). Flowering (FLP) shows a positive response only with fruit number (FNP) and fruit yield (FYP), but the fruit biochemical traits demonstrate a reverse correlation with flower number. FNP and FYP exhibit moderate to strong positive relationships with fruit quality attributes except titratable acidity. There also exist positive correlations among the quality parameters, except for titratable acidity (Table 4).
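A correlation table like Table 4 can be reproduced from the per-plant records in a few lines; the sketch below is illustrative only, with a hypothetical file name and column names taken from the abbreviations defined in the text.

```python
import pandas as pd

# Hypothetical per-plant records; column names follow the abbreviations in Table 4.
df = pd.read_csv("guava_pruning_records.csv")   # assumed file name
cols = ["FLP", "FNP", "FYP", "TSS", "TA", "TSG", "VTC", "SPG"]

# Pairwise Pearson correlation coefficients among yield and quality variables.
corr_matrix = df[cols].corr(method="pearson").round(2)
print(corr_matrix)
```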
Site description and weather status
Over two sequential crop years from 2019 to 2021, the study on pruning practices in guava was conducted at the Fruit Research Farm, Pomology Division, Horticulture Research Centre, Bangladesh Agricultural Research Institute (BARI). The orchard is located in the middle of the Madhupur Tract of Bangladesh at the geographical coordinates of 23.983 °N × 90.408 °E. At an elevation of 8.4 m above sea level, the site receives moderate to excessive rainfall, high humidity and high temperatures from April to September, whereas October to March has no rainfall, low to moderate humidity, and low to moderate temperatures (Supplementary Figure 1). The soil was predominantly Grey Silty Clay and acidic, having low organic matter concentration and variable soil mineral status (Supplementary Figure 2).
Experiment design and pruning treatment
The pruning experiment was set up in an eight-year-old guava orchard following a factorial Randomized Complete Block Design (RCBD) with three replications. The pruning operation was performed during three different seasons, namely spring (early March), monsoon (early June), and autumn (early September), by removing the 1-year-old shoot tip at four different lengths, viz., 0 cm (control), 15 cm, 30 cm, and 45 cm, and every plant was pruned once a year. Pruning treatment commenced from March 2019. A total of fifty branches were pruned from every plant under treatment. Control plants were not subjected to pruning; rather, data were collected from 50 randomly selected shoots all around the plants. However, dense and overcrowded branches were pruned regularly after the monsoon for light and aeration. Bordeaux paste (CuSO4:CaO:H2O = 1:1:4) was smeared on the cut ends. Fertilization, insect-pest regulation, and other intercultural management practices were performed regularly [6].
Data collection on yield and fruit quality
The pruning effects on fruiting, yield and fruit biochemical traits in the various seasons were evaluated by collecting data on the number of flowers and fruits per plant and yield per plant for yield-related traits, whereas fruit quality traits included total soluble solids, titratable acidity, total sugars, vitamin C and fruit specific gravity. Data collection started from March, June, and September for spring (March), monsoon (June) and autumn (September) pruning, respectively, and continued until the complete harvests from the plants receiving the corresponding treatments. The number of flowers was counted on the selected branches throughout the growing seasons with the respective dates. Fruits were harvested at the color-break stage (when the dark green skin color starts converting to light green or yellowish), counted, weighed and recorded against the date of collection. A handheld digital refractometer (Model: PAL-α, ATAGO, Japan) was used to determine total soluble solids (TSS). Titratable acidity (TA), as percentage of citric acid, was measured per the standard procedure [7] using the pure extract of 5 g fruit pulp homogenized with 20 mL of purified water. Vitamin C estimation was done after AOAC in mg 100 g−1 of fresh weight [8]. The procedure of Somogyi [9] was followed to estimate total sugars, expressed as a percentage of fresh weight. The water displacement method was used to determine fruit specific gravity [10], expressed as g per mL. Quality attributes of ten randomly selected fruits were determined and the average was used as a single value.
Data arrangement and statistical analysis
Data for the respective parameters, except number of flowers plant−1, were grouped into four categories (March-May, June-August, September-November, and December-February) according to the harvest dates of fruit. The flowering data were collected in four groups as per the objectives of the study. Three-way analysis of variance (ANOVA) for pruning time and pruning level versus harvest period was performed. Data analysis was performed in such a way that the quarter-based influence of pruning time and pruning level could be outlined as well as the seasonal variations in fruit quality obtained. Mean separation was obtained with Fisher's LSD at the 5% level of significance (p < 0.05). Data analysis was performed using 'R Studio' version 4.2.2 software.
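For readers working outside R, the factorial analysis described above can be reproduced along the following lines; this sketch (not from the dataset authors) fits a three-way ANOVA for pruning time (P), pruning level (T), and harvest quarter (D) with statsmodels, using hypothetical column names. Fisher's LSD mean separation is not shown but could follow from pairwise comparisons based on the ANOVA residual mean square.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per plant x quarter observation.
df = pd.read_csv("guava_yield_quality.csv")   # columns assumed: P, T, D, yield_kg

# Three-way factorial ANOVA with all interactions (P x T x D).
model = smf.ols("yield_kg ~ C(P) * C(T) * C(D)", data=df).fit()
print(anova_lm(model, typ=2))
```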
Limitations
The dataset presents comprehensive data on yield and fruit quality attributes of guava in various seasons of the year as influenced by pruning strategies, but it lacks information on the utilization and interception of light after pruning as well as the physiological changes in the pruned branches or plants.
Fig. 1. Quarterly occurrence of yield (A) and fruit quality attributes (B and C) of guava upon pruning techniques. Different lowercase letters above bars denote significant (p < 0.05) differences.
Data on the number of flowers and fruits per plant, fruit yield (kg per plant) and fruit physicochemical properties, namely total soluble solids, titratable acidity, total sugars, vitamin C contents and fruit specific gravity, were recorded. The collected data were categorized into four groups corresponding to the four quarters of the year. The number of flowers and fruits per plant was counted manually, fruit yield was measured using an electric balance, and standard chemical analysis procedures and formulas were followed to determine the fruit quality data at the laboratory. Each time, the mean chemical measurement of ten random fruits was considered as a single value. Data source location: The guava orchard was situated at the Fruit Research Farm of the Pomology Division, Bangladesh Agricultural Research Institute (BARI), Gazipur, under Agroecological Zone 28 in the middle of the Madhupur Tract of Bangladesh.
Table 2
Quarter-based variations in yield and fruit quality traits in guava after pruning at different times.
Table 3
Quarter-based variations in yield and fruit quality traits in guava as influenced by pruning levels.
Table 4
Correlation coefficients exhibiting the relationships among the yield and quality parameters of guava as influenced by pruning strategies. Here, FLP, FNP, FYP, TSS, TA, TSG, VTC and SPG represent number of flowers plant−1, number of fruits plant−1, fruit yield plant−1, total soluble solids, titratable acidity, total sugars, vitamin C contents and specific gravity of fruit, respectively. | 2024-07-15T15:51:20.836Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "ab3cbe14a0f832f835efb96a6e3b3a97cbe94617",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.dib.2024.110733",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f32ca3f365420ad751f480881f837d5add7a1e2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213172093 | pes2o/s2orc | v3-fos-license | Water quality analysis of Urun-Islampur City, Maharashtra, India
Safe drinking water is a human need and a right of the people. This study focuses on water quality analysis of Urun-Islampur city, which is in the Maharashtra state of India. Water quality testing is very important to check the quality of drinking water in order to avoid waterborne diseases and improve health. The Water Quality Index (WQI) is important to determine the depletion of the water quality of the study area. Urun-Islampur city is divided into fourteen wards. The values of WQI of those fourteen wards were compared, where from each ward three water samples were taken for testing. In order to assess the water quality, we calculate the WQI with physical, chemical and biological parameters. In the water quality tests, various parameters are measured, including pH, total hardness, chloride content (Cl−), electric conductivity, residual chlorine and total dissolved solids (TDS); all those parameters were compared with World Health Organization (WHO) standards of water quality. In the present research paper, the classification of water samples of the 14 wards was also investigated on the basis of TDS, anions, cations, and total hardness. This article highlights the importance of using the WQI, which is very useful for analyzing water quality. After water sample testing, it was observed that the pH of all water samples was almost neutral. The TDS, conductance and hardness increased toward the old water supply line as compared to the new water supply line. The results of the water quality assessment done in Urun-Islampur city show that all parameters were within the permissible limits as per WHO standards. The Water Quality Index (WQI) in the range of 86 to 90 was also good. But it may be affected by the water distribution lines, which are older than 30 years, so there is a need for proper maintenance of the distribution system and chlorination to avoid waterborne diseases.
Introduction
Water plays an important role in human life. About 37% of urban and 64% of rural Indians are without access to safe drinking water as per the World Health Organization (WHO) report (Akoto and Adiyiah 2007). Freshwater is important for the survival of all living beings. It is especially important for people, as they depend upon it for industrial use, food production, waste disposal and cultural requirements (APHA et al. 2012).
Groundwater is an important freshwater resource; during the last decade, it has been observed that groundwater is becoming polluted because of increased human activities, and therefore a number of cases of waterborne diseases have been seen. So there is a need to understand the water chemistry, which rests on knowledge of the multidimensional aspects of aquatic environmental chemistry involving the composition, sources, reactions and transport of water. The quality of water is important for mankind because it is directly linked to human welfare.
India is one of the most populated countries in the world, with a population of 127.423 crore as of 28th January 2017. Cities accommodate nearly 31% of India's current population. Maharashtra is a state in the western region of India with a population of 11.237 crore. Maharashtra is one of the wealthiest and most developed states in India, contributing 25% of the country's industrial output and 23.2% of its GDP as per the Census 2011.
Urun-Islampur is a Municipal Council city in the Sangli district of Maharashtra state of India, as shown in Fig. 1. From Fig. 2, it is observed that the population slightly decreased from 1901 to 1921, which may be because of health, water for crops and some other factors. Afterward, from 1931, population growth slightly increased every decade up to 1991; from 1991 onward, the population growth percentage increased compared to the previous decades. This is due to the development of the city, the industrial area, infrastructure, facilities, and improved lifestyle, which attract people toward the city; it is also because the education facilities are good, etc.
Urun-Islampur has favorable conditions for a stabilized market, work opportunities and a modern trade setup; it is able to fulfill the necessities of its citizens and to develop the whole urban ecosystem, and it is helpful in the advancement of organizational, physical, social and economic frameworks. For these reasons, the city has prime significance and scope for advancement in different regions.
The main source for piped water supply for the city of Urun-Islampur is the Krishna River. In 1985, a 65-km pipeline was laid and has been increased to 125 km as of the year 2005 due to the requirement of 4 to 5 km pipeline expansion each year.
In the city, there are eight elevated storage reservoirs (ESRs) and water distribution lines which are older than 30 years; so, to check the quality of water and prevent health issues, we decided to study and assess the quality of water of Urun-Islampur city (Kate and Kumbhar 2017).
Materials and methods
The methodology includes data collection, a household survey to understand people's perception of water quality, collection of three water samples from each ward, testing of the water samples, collection of secondary data, and validation of the secondary data (Kate and Jamale 2018). In the city there are eight ESRs; the capacity of each ESR is given in Table 1; six more ESRs are proposed; the capacity of the WTP; the daily hours of operation are 2 h (1 h in the morning and 1 h in the evening) in all wards; the minimum water tax collected per household is Rs 750; and the number of private connections is 12,000.
As per the norms of APHA, the water samples were collected in wide-mouthed 1-liter plastic bottles and preserved until the parameters were analyzed in the laboratory. The sample collection procedure and preservation followed the guidelines given in Chapter 5, Fieldwork and Sampling, edited by Jamie Bartram and Richard Ballance, published on behalf of the United Nations Environment Programme and the World Health Organization (UNEP/WHO) (Boominathan and Khan 1994; Jafari et al. 2008; Kaushik et al. 2002; Jayabhaye et al. 2008; Ravindra and Garg 2006; Khan and Chaudhary 1994; Kadam et al. 2007; Kodarkar 1992; Pandey et al. 1993; Ramakrishnaiah et al. 2009; Salve and Hiware 2008; Trivedy and Goel 1986; Ballance and Bartram 1998; Bartram et al. 1996; World Health Organization (WHO) 2006).
Water samples were tested to estimate various physicochemical parameters: water temperature and pH were recorded using a thermometer and a digital pH meter, specific conductivities were measured using a digital conductivity meter, and the TDS values were measured using a TDS meter. Other parameters such as hardness were estimated in the laboratory using standard laboratory methods. The present study involves the analysis of water quality in terms of physicochemical methods.
The Water Quality Index (WQI) was calculated for the collected water samples. The WQI was calculated based on physicochemical parameters, and the World Health Organization (WHO) standards of drinking water quality are given in Table 2.
The Water Quality Index (WQI) was calculated in three steps. In the first step, each parameter was assigned a weight (wi) according to its relative importance in the overall quality of water for drinking purposes. In the second step, the relative weight (Wi) is computed from the following Eq. 1:

Wi = wi / (w1 + w2 + ... + wn)   (1)

where Wi is the relative weight, wi is the weight of each parameter, and n is the number of parameters. Relative weight (Wi) values were calculated for each parameter.

In the third step, a quality rating scale (qi) for each parameter is assigned by dividing its concentration in each water sample by its respective standard according to the guidelines laid down by the Bureau of Indian Standards (BIS) and multiplying the result by 100 (Eq. 2) (Bureau of Indian Standards (BIS) 2012):

qi = (ci / si) × 100   (2)

where qi is the quality rating, ci is the concentration of each chemical parameter in each water sample in mg/L, and si is the Indian drinking water standard for each chemical parameter in mg/L according to the guidelines of the Bureau of Indian Standards (BIS 2012). For computing the WQI, the sub-index SI is first determined for each chemical parameter, which is then used to determine the WQI as per the following Eqs. 3 and 4:

SIi = Wi × qi   (3)

WQI = SI1 + SI2 + ... + SIn   (4)

where SIi is the subindex of the ith parameter, qi is the rating based on the concentration of the ith parameter, and n is the number of parameters. The computed WQI values are classified into five types, which are given in Table 2.
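A compact implementation of the three WQI steps described above is sketched below (added for illustration). The parameter weights and standards shown are placeholder values, not those used in the study; the function computes the relative weights Wi, quality ratings qi, sub-indices SIi, and the final WQI for one sample.

```python
# Placeholder weights (wi) and drinking-water standards (si), for illustration only.
weights = {"pH": 4, "TDS": 4, "total_hardness": 2, "chloride": 3,
           "electrical_conductivity": 4, "residual_chlorine": 3}
standards = {"pH": 8.5, "TDS": 500.0, "total_hardness": 300.0, "chloride": 250.0,
             "electrical_conductivity": 300.0, "residual_chlorine": 0.2}

def water_quality_index(sample: dict) -> float:
    """WQI = sum(Wi * qi), with Wi = wi / sum(wi) and qi = (ci / si) * 100."""
    total_weight = sum(weights[p] for p in sample)
    wqi = 0.0
    for param, ci in sample.items():
        Wi = weights[param] / total_weight        # step 1: relative weight
        qi = (ci / standards[param]) * 100.0      # step 2: quality rating
        wqi += Wi * qi                            # step 3: sub-index SIi, summed
    return wqi

sample = {"pH": 7.2, "TDS": 310.0, "total_hardness": 180.0, "chloride": 60.0,
          "electrical_conductivity": 250.0, "residual_chlorine": 0.1}
print(f"WQI = {water_quality_index(sample):.1f}")
```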
Results and discussion
Three water samples taken from each of the fourteen wards of Urun-Islampur city were tested for pH, electrical conductivity, TDS, residual chlorine, chloride and hardness. The results show that the water quality in the city is good, but the water distribution system is old, so the quality may be affected. Figures 3, 4, 5, 6, 7 and 8 show the results of the water samples, and Table 3 shows the comprehensive results. From the results, it was observed that pH, total hardness (TH), chloride content (Cl-), electric conductivity (EC), residual chlorine (RC) and total dissolved solids (TDS) were within the permissible limits as per the World Health Organization (WHO) standards of water quality. Figure 9 shows that the Water Quality Index (WQI) is in the range of 80 to 90, which is also good as per the standards given in Table 2.
Conclusion
The results of water samples collected from all fourteen wards and tested in the laboratory as per the World Health Organization (WHO) standards indicate that the quality of water is within the permissible limits. The water quality is satisfactory in all the wards for domestic purposes, and the Water Quality Index (WQI) value, in the range of 80 to 90, is also good. Still, there is a concern regarding the water distribution system, as the main source of piped water supply for the city of Urun-Islampur is the Krishna River.
In 1985, a 65-km pipeline was laid, and it has been extended to 125 km as of the year 2005 due to the requirement of 4 to 5 km of pipeline expansion each year. As the distribution system is 34 years old, prone to corrosion, and may be affected by sewage and other waste in the nearby region, there is a need for proper maintenance of the distribution system and chlorination to avoid waterborne diseases and to improve the health and quality of life of the people.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 9. Water Quality Index by ward number. | 2020-03-19T14:45:39.173Z | 2020-03-18T00:00:00.000 | {
"year": 2020,
"sha1": "afeb230da3bc14c62119e240a5a79418f120b529",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13201-020-1178-3.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "afeb230da3bc14c62119e240a5a79418f120b529",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
256437286 | pes2o/s2orc | v3-fos-license | Seismic properties of the permafrost layer using the HVSR method in Seymour-Marambio Island, Antarctica
Authors have calculated the H/V spectral ratios using seismic-noise recordings in the uppermost layers of the north of Seymour-Marambio Island, Antarctica. Sixty-seven seismic site-response measurements near and far from the Argentinean Marambio Base runway suggest that geotechnical works on the uppermost sedimentary layers, due to maintenance and the landing and taxiing of large loads and aircraft over decades, could contribute to changes in their seismic dynamic response. Two horizontal images of Vp, Vs, and Vp/Vs ratios at 1.0 m and 35.0 m depth show lateral variations in the permafrost properties. Authors interpret that the permafrost is emplaced in rocks with different porosities and contrasting fluid saturation at those depths. In shallow strata, the saturation of gases mainly affects the elastic properties. In deeper strata, where water reservoirs are detected, the primary mechanism of seismic dissipation is anelastic.
Introduction
Permafrost is frozen ground in high latitudes formed by low sea-level stands during the ice ages (Overduin et al., 2019; Martens et al., 2020). The permafrost, the active layer on its top, and the stratum underneath hold complex ecosystems and are leading players in the complex study of climate change due to the release of greenhouse gases during thaw. The data required for its research are typically time-consuming to obtain due to elaborate laboratory assays, sometimes field drilling, or complicated sample collection.
Estimating permafrost's elastic properties is a scientific challenge that involves layers with contrasting behaviors at depth, lateral variations in each layer's composition, inversions of velocities and densities, and diverse seismic inelastic behaviors. Besides, the seismic performance of permafrost is involved in damage to man-made infrastructure (see, e.g., Melvin et al., 2017; Raynolds et al., 2014; Hong et al., 2014), showing that there is a significant lack of information to properly diagnose its behavior and to promote proper hazard management. For instance, in some places, cryoturbation related to thawing produces seismic liquefaction, often accompanied by sand volcanoes (Vandenberghe et al., 2016), but sometimes with large-scale subsidence and potential changes in the drainage systems (Thienen-Visser et al., 2015).
In recent years, the permafrost's thickness and seismic properties have been mapped with seismic noise measurements. A good example is presented by Picotti et al. (2017), who corroborated estimations using radio-echo sounding, geoelectric, and active seismic methods in glacial environments such as the Adamello and Ortles-Cevedale massifs (Italy), the Bernese Oberland Alps (Switzerland), and the Whillans Ice Stream (West Antarctica). Their results suggested that the resonance frequency in the horizontal-to-vertical spectral ratio (HVSR) correlates well with the ice thickness at the site, in a wide range from a few tens of meters to more than 800 m, even allowing reliable estimations of the seismic properties.
On the other hand, in areas where large loads repeatedly impact the permafrost, some reports recognize its elastic and inelastic performance or allow the inference of the local seismic structure, oriented toward the care of infrastructure design. Good examples are the aerial runways deployed near the scientific bases on the Antarctic continent. Thus, to provide valuable information regarding the seismic structure in these places, the authors collected seismic-noise information on Seymour-Marambio Island, approximately 80 km from the Antarctic Peninsula. The easy deployment, fast processing, and trustworthy results from similar sites suggest that the HVSR analyses may contribute to the knowledge of the Antarctic permafrost's elastic and inelastic properties (Bignardi et al., 2018; Köhler & Weidle, 2019). Sixty-seven seismic noise stations were deployed on the Island's plateau, where the runway is located (Figure 1). The study area corresponds to a topographic contrast of ~200 m altitude, located about 2 km from the shoreline. High slopes are observed, with surficial activity of the shallower layers that triggers abundant landslides. In this scenario, the permafrost and the topographic contrast may offer information about their combined seismic response.

Figure 1. Location of the study area. a) Seymour-Marambio Island, approx. 80 km from the Antarctic Peninsula. The blue square represents the north part of the Island. b) Topographic image of the area where 67 seismic noise stations were recorded (blue squares). c) The Marambio Scientific Base's runway is located on a narrow plateau. The map shows the scientific station's location (red houses) with the 67 seismic-noise stations measured in the Colombian-Argentinean Antarctic summer field campaign in January 2020 (yellow pins). d) Perpendicular topographic profiles crossing the runway, as located in b).
In this work, the authors report the results of seismic site-response experiments made during the Colombian and Argentinean scientific teams' campaign in January 2020 near the Marambio Scientific Base of Argentina to evaluate the permafrost structure on the Island's plateau, the site response along the runway, and contrast these results with previous observations based on other geophysical approaches.
Geological setting
Marambio Island (also known as Seymour Island) is one of the 16 major islands of the Antarctic Peninsula. This Island is located in the westernmost Weddell Sea, and according to Fukuda et al. (1992), its ground has been frozen since the Holocene. At least three geomorphological terraces are recognized in the north of the Island, with permafrost depths estimated to be 200 m (upper terrace), 105 m (middle terrace), and 35 m (lower terrace).
According to Montes et al. (2013), Miocene to Quaternary formations related to surficial and glacier processes, such as the Weddell and Hobbs formations, crop out in the north of the Island, resting on the La Meseta Formation of Eocene-Oligocene age. This sequence reaches 600 m in thickness. Tectonic faults with eastward and westward vergence separate those sedimentary sequences from older sedimentary units dated from the Maastrichtian to the Paleocene, such as the Snow Hill Island, Lopez de Bertodano, Sombral, and Cross Valley-Wiman formations (Figure 2, modified from Montes et al., 2013). The HVSR study was done at the northern sector of the Island, on the Marambio Scientific Base's runway. Profile A-A' is a representative section of the uppermost basin in the study area, suggesting a graben structure filled with Paleogene to Neogene sediments.
The Eocene to Quaternary strata underneath the airport runway suggest a graben structure filled at different sedimentation rates. In addition, the annual climate seasons favor permanent erosive activity, mainly to the north, west, and east of the runway, where surficial runoff is observed. Along the runway, the active layer was removed or compacted; this layer, less than 1 m thick, is only observed around the runway.
Materials and methods
Nogoshi and Igarashi (1971) and later Nakamura (1989, 2000) introduced the HVSR or 'H/V' method, which is based on the processing of microtremors, i.e., the ambient seismic noise generated by natural or anthropogenic perturbations such as wind, sea waves, vehicle traffic, or industrial processes acting on sedimentary sequences. Acquisition and processing of the orthogonal components of the seismic noise are used to estimate the ratio of the power spectra of the horizontal (H) over the vertical (V) components and thus infer the elastic structure of the strata below the surface. Resonance peaks at various frequencies are then related to the thickness (h) and shear-wave velocity (Vs) of each layer under the hypothesis of lateral homogeneity. Under this assumption, the following expression is used to analyze the direct problem of a soft sedimentary cover resting on a rigid basement (Nakamura, 1989), where f0 is the fundamental resonance frequency:
f0 = Vs / (4h)    (1)
Equation 1 suggests that each set of distinct physical properties in the sedimentary sequence incorporates one harmonic into the spectra. Therefore, the HVSR curve presents as many peaks as layers, with amplitudes linked to each impedance contrast. Diverse strategies have been proposed to solve the inverse problem of inferring the elastic structure of the soft sedimentary cover resting on a rigid basement from the HVSR curve (see, e.g., Castellaro and Mulargia, 2009; Herak, 2008; Sánchez-Sesma et al., 2011; Priolo et al., 2012; Abu-Zeid et al., 2014; Lunedei & Albarello, 2015; Mantovani et al., 2015; Bignardi et al., 2016; Carcione et al., 2017). In this work, the authors have used the program OpenHVSR (Bignardi et al., 2018), based on the previous program ModelHVSR (Herak, 2008). This code computes theoretical transfer functions for layered soil models based on the fast recursive algorithm proposed by Tsai (1970), modified to take frequency-dependent attenuation and body-wave dispersion into account as suggested by Tsai and Housner (1970). In this case, the soil model consists of any number of horizontal, homogeneous, isotropic, and viscoelastic layers stacked over a half-space, each of which is defined by its thickness (h), body-wave propagation velocities (Vp and Vs), density, Poisson's ratio, and frequency-dependent seismic Q-factors (Qp and Qs), which control the anelastic properties. The theoretical transfer functions are calculated using a simple, guided Monte Carlo search in the model space, and the resulting estimations are compared with the observed HVSR curves to minimize a misfit function. Details of this procedure can be consulted in Tsai (1970), Tsai and Housner (1970), and Herak (2008). The initial model for starting the inversion (Table 1) was inferred from the detailed stratigraphy reported by Montes et al. (2013). After several tests, the best fit was reached using a one-layer model with parameter values based on the average of the seven layers of Table 1. The ranges of values used as input parameters in the OpenHVSR code were: Vp = 80-4500 m/s; Vp/Vs = 1.6-4.0; density = 1.4-2.8 g/cm3; thickness = 0.5-999 m; Qs = 5-500; and Qp/Qs = 1.5-4.5.
Table 1. The initial model of elastic and anelastic parameters used for starting the inversion (columns: layer, Vp (m/s), Vs (m/s), ρ (g/cm3), thickness (m)). The model is based on seven sedimentary layers on a rigid basement inferred from the stratigraphy reported by Montes et al. (2013). After several tests, a best fit was reached using a one-layer model with parameter values based on the average of the seven layers.
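The guided Monte Carlo search described above can be illustrated with a short script. The sketch below is not the OpenHVSR code: it replaces Tsai's recursive viscoelastic transfer function with a crude one-layer stand-in built around the f0 = Vs/(4h) relation, and the parameter ranges, peak shape, and misfit definition are assumptions chosen only to show the search logic.

# Minimal illustrative sketch of a guided Monte Carlo HVSR inversion.
# NOT the OpenHVSR implementation: the forward model is a crude stand-in
# (a single resonance peak at f0 = Vs / (4 h)); ranges and misfit are illustrative.
import numpy as np

def forward_hvsr(freqs, h, vs, amp, width=0.3):
    """Crude one-layer HVSR proxy: a peak centred at f0 = vs / (4 h)."""
    f0 = vs / (4.0 * h)
    return 1.0 + amp * np.exp(-0.5 * (np.log(freqs / f0) / width) ** 2)

def misfit(observed, predicted):
    """Root-mean-square difference between observed and predicted curves."""
    return np.sqrt(np.mean((observed - predicted) ** 2))

def monte_carlo_invert(freqs, observed, n_iter=20000, seed=0):
    """Random search over (thickness, Vs, amplitude); keep the best-fitting model."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        h = rng.uniform(0.5, 999.0)      # layer thickness (m), illustrative range
        vs = rng.uniform(80.0, 2500.0)   # shear-wave velocity (m/s), illustrative range
        amp = rng.uniform(0.5, 5.0)      # peak amplitude (impedance-contrast proxy)
        m = misfit(observed, forward_hvsr(freqs, h, vs, amp))
        if best is None or m < best[0]:
            best = (m, h, vs, amp)
    return best

if __name__ == "__main__":
    freqs = np.linspace(0.5, 10.0, 200)
    synthetic = forward_hvsr(freqs, h=35.0, vs=700.0, amp=3.0)
    print(monte_carlo_invert(freqs, synthetic))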
The study area's operational and access restrictions limited the experiment to the period between January 10 and 24, 2020. During this part of the austral summer, the intensity and direction of the winds were stable. Once the authors had assessed the terrain, the functionality of the seismic instrument, and the field logistics, 67 continuous seismic-noise records, 20-30 min long, were acquired. Recorded signals were not discriminated or excluded according to possible noise sources, which were expected to be related mainly to wind and oceanic tides. Signals were acquired using a three-component seismometer with a natural frequency fn = 0.45 Hz. The seismometer was installed directly on the ground using spikes under the base to guarantee proper soil-sensor coupling. The seismographic system is composed of a 24-bit digitizer sampling at 200 sps and was set up with an anti-aliasing filter at 50 Hz, which, in this particular case, guaranteed sampling below the noise associated with the electrical system frequency that powers the infrastructure of the scientific base on Seymour-Marambio Island. The seismograph is synchronized by GPS with 5-microsecond resolution and receives analog signals from a 4.5 Hz triaxial seismometer. Those analog signals were coupled to an operational amplifier with negative feedback and a buffer amplifier to extend the bandwidth to 0.45 Hz (e.g., Vitale et al., 2018). According to Parolai and Galiana-Merino (2006), transients in the seismic-noise record do not affect the H/V ratio; hence, the original signals did not need to be filtered or subjected to anti-triggering procedures. Geopsy was the open software used for signal processing (Wathelet et al., 2020). Figure 3 shows a flow diagram of the procedure used: 1) acquisition of a >20 min three-component seismic record; 2) splitting of the record into windows of 30 s with 50% overlap; 3) application of a weighted Hanning window and then an FFT for each 30 s segment; 4) smoothing of the spectrum of each component using the Konno-Ohmachi filter with a b-value of 40 (Konno & Ohmachi, 1998); 5) computation of the horizontal spectrum as the Euclidean norm of the north and east components (H = √(N² + E²)) and estimation of the H/V spectral ratio; 6) stacking of those spectral ratios for all 30-second segments to calculate an average and standard deviation at each station for the frequency range 0.5-10.0 Hz. The same figure presents an example of one >20 min three-component seismic record, its H/V spectral ratios for all 30-second windows, the stacked spectral ratio, and the estimations of the elastic and anelastic properties using the OpenHVSR code. In general, the authors observed that even with momentary and abrupt changes in the energy of the seismic-noise records, the best 1D model found is similar to that estimated without the abrupt changes.
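The six processing steps listed above can be condensed into a short script. The following sketch follows the same sequence (30 s windows with 50% overlap, Hanning taper, FFT, spectral smoothing, H = √(N² + E²), stacking), but substitutes a simple moving average for the Konno-Ohmachi filter and assumes the three components are already loaded as NumPy arrays sampled at 200 sps; it is an illustration rather than the Geopsy implementation.

import numpy as np

def smooth(x, width=11):
    """Stand-in for the Konno-Ohmachi filter: simple moving average."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def hv_spectral_ratio(north, east, vert, fs=200.0, win_s=30.0, overlap=0.5):
    """Illustrative H/V estimate from 30-s, Hanning-tapered windows with 50% overlap."""
    n = int(win_s * fs)
    step = int(n * (1 - overlap))
    taper = np.hanning(n)
    ratios = []
    for start in range(0, len(vert) - n + 1, step):
        sl = slice(start, start + n)
        spec = {}
        for name, trace in (("N", north), ("E", east), ("V", vert)):
            seg = (trace[sl] - trace[sl].mean()) * taper   # demean and taper
            spec[name] = np.abs(np.fft.rfft(seg))
        h = np.sqrt(spec["N"] ** 2 + spec["E"] ** 2)       # Euclidean norm of horizontals
        ratios.append(smooth(h) / smooth(spec["V"]))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)                  # restrict to 0.5-10 Hz downstream
    ratios = np.array(ratios)
    return freqs, ratios.mean(axis=0), ratios.std(axis=0)   # stacked mean and std per station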
Two conditions were assumed throughout all estimations: 1) the ambient seismic noise field is homogeneous across all azimuths, and 2) the soil model consists of several laterally homogeneous viscoelastic layers stacked over a half-space.
Strictly speaking, those restrictions are not satisfied owing to: 1. the diverse intensity of the Antarctic weather in the study zone, with a NNE wind trend throughout the year (Yu et al., 2020); 2. varied interactions between ocean currents and the Island; 3. the effect of material damping on the expected noise wavefield as a consequence of sources relatively distant from the receivers (e.g., Lunedei and Albarello, 2009); 4. complex geology within a steep topographic contrast (Figures 1 and 2); 5. inversions of viscoelastic parameters due to an active upper layer, the rigid permafrost, which overlies sedimentary layers with and without water, and the deeper bedrock. All these factors make the HVSR curves challenging to invert and interpret. Carcione et al. (2017) studied the effect of soft-layer and bedrock anelasticity on S-wave amplification to evaluate the impact of the attenuation and elasticity of the bedrock on the amplitude and frequency of the resonance peaks at two glaciers located in Northern Italy and on the Antarctic continent. They concluded that the attenuation and bedrock elasticity must be considered to obtain reliable layer-thickness estimations. Hence, the combined interpretation of the thickness and viscoelastic parameters (Vp, Vs, Qp, and Qs) is necessary to understand the permafrost structure in the study area.
Figure 3 (caption, panels d and e). (d) The average H/V ratio and its standard deviation for a frequency range of 0.5-10.0 Hz. (e) Example of the best fit of the averaged H/V spectral ratio, used to calculate changes with depth of the elastic (Vp, Vs, and Vp/Vs) and anelastic (Qp, Qs, and Qp/Qs) parameters using the program OpenHVSR (Bignardi et al., 2016, 2018).
Results
Because the H/V ratio does not discriminate the horizontal incidence direction of the perturbations, and each station may be analyzed from the point of view of a 1D model with limited seismic energy influencing its surrounding area, the authors hypothesize that the results may show trends related to the seismic response of the underlying sedimentary structure. Figure 4 shows the 67 HVSR stations classified into three main trends: (a) an almost flat HVSR response, (b) an increasing HVSR response, and (c) a decreasing HVSR response. Only two stations show a dominant peak in the frequency range analyzed (SN-2 and SN-3). Comparatively, the spectral response along the runway does not present dominant peaks, with trend responses (a) and (b) suggesting that the permanent disturbance of these layers through impacts, movements of high loads (e.g., the continual landing and taxiing of large aircraft on the runway), and maintenance works contributed to permanent changes in the dynamic response of this ground. The dataset does not allow exploring whether this activity may contribute to permanent changes in the seismic dynamic response beyond the first meters. The stations with dominant peaks or related to trend (c) are on the steepest, eastern side of the runway, near active landslides, indicating a seismic response linked to poorly consolidated materials. A related observation concerning changes in the ground's dynamic response caused by human activity is reported by Abu Zeid et al. (2017); those authors suggest that long-term trampling results in sediment stiffening, increasing the density and the velocity of seismic shear waves.
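The paper does not state how the three trend classes were assigned to each stacked H/V curve; one simple way such a classification could be automated is sketched below. The slope threshold, the use of a linear fit in log-frequency, and the function name are illustrative assumptions, not the authors' procedure.

import numpy as np

def classify_trend(freqs, hv, flat_tol=0.2):
    """Classify a stacked H/V curve as 'flat', 'increasing' or 'decreasing'.

    The threshold flat_tol and the log-frequency linear fit are arbitrary,
    illustrative choices; a peak-prominence test could be added for stations
    such as SN-2 and SN-3 that show a dominant peak."""
    slope = np.polyfit(np.log10(freqs), hv, 1)[0]
    if abs(slope) < flat_tol:
        return "flat"
    return "increasing" if slope > 0 else "decreasing"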
The authors found that Vp and Vs span a broad range of values (Figure 5 and Table S1). Several authors have suggested that Vp reaches high values in permafrost (e.g., Timur, 1968; Carcione & Seriani, 1998; Krautblatter & Draebing, 2014). According to these authors, high seismic propagation velocities result from the pore space occupied by ice and from the velocities of the rock matrix, interstitial ice, and available water over an extensive range of temperatures. Nevertheless, the present study's authors observe that the fit between observed and theoretical spectra at low frequencies is poor (as also reported in other works, e.g., Talha Qadri et al., 2015; Abu Zeid et al., 2017; Bignardi et al., 2018). This uncertainty may suggest artifacts in the solution of the velocity structure, for instance, excessively large velocity values. The authors encourage future work to contrast the range of velocities reported in this paper.
It is noteworthy that plots of Qp vs. Vp and Qs vs. Vs for the depth ranges z ≥ -35 m, -100 m ≤ z < -35 m, and z ≤ -100 m show trends of low attenuation and high velocity at shallow depths and, conversely, high attenuation and low velocities in deeper layers (Figure 5). Following Barton (2007), this trend could be interpreted as increasing fluid saturation in deeper layers. Indeed, the authors consider that the shallow, frozen, and rigid permafrost observed in the field presents low attenuation, high velocity, and, consequently, low fluid saturation, except in the thinner active layer. They also interpret that the increasing geothermal gradient promotes ice melting, generating high attenuation and low velocities in deeper layers. Prasad (2002) suggested that the relationship between the ratios (Vp/Vs)² and Qp/Qs gives an indication of increasing pore pressure through the reduction of effective stress in saturated or partially saturated rocks. The dataset (Figure 5) indicates a broad range of pore-pressure conditions that dominate in surficial layers, probably due to the permafrost, but are still observed in deeper layers. In addition, within the wide range of densities detected, high Vp/Vs ratios with low density values were found at around -100 m, indicating the potential presence of high-pressure fluids (see, e.g., Vargas & Torres, 2015; Koulakov & Vargas, 2018), probably related to the high concentration of gases in the permafrost.
Figure 4 (caption fragment). Only two stations present a dominant peak (SN-2 and SN-3). The map shows the spatial distribution of these patterns; the stations with a decreasing HVSR response and the two stations with a dominant peak are close to active landslides.
Discussion
During the fieldwork in the austral summer of 2020, the authors observed high mobility of the active layer, accompanied by runoff in the northern sector of Seymour-Marambio Island, probably linked to the Weddell, Hobbs, and La Meseta formations. These processes made it possible to detect landslides, mainly in some areas of high slope. Thanks to these events and to the accessibility of two surficial trenches, the presence of the permafrost at approx. 50-100 cm depth was verified. This upper permafrost surface may offer keys to its spatial distribution. Fukuda et al. (1992), using one reference site on Seymour-Marambio Island with a mean annual surface temperature of approx. -10°C, a thermal profile down to 2.5 m depth, and assuming a yearly temperature gradient of 0.19°C/m in steady-state conditions, estimated the 0°C permafrost base at a depth of ~34 m. They also contrasted this with geoelectrical resistivity measurements acquired by Fournier et al. (1990), which placed the permafrost thickness between 28.6 m and 127.5 m. Beneath the rigid permafrost base, the abundance of melted water and other precipitated minerals may promote viscoelastic inversions in the stratigraphic profile, at least from the runway (summit at approx. 200 m a.s.l.) down to sea level. In addition, Kato et al. (1990) found high concentrations of CaCO3 and Na2SO4 in ice wedges, indicating percolation of melted water on this Island under an extremely arid environment. All these observations may explain the diverse viscoelastic profiles and inversions of properties presented in this paper's Supplementary Material. For instance, according to the Vp profiles, the authors generally observed the first inversion at depths ranging between 5 and 110 m. There are up to three inversions in some places, suggesting a variable thickness of the permafrost and the possibility of lateral and vertical variations of porosity that allow more significant ice concentrations, mainly in the Neogene-Quaternary formations. On the other hand, the inversions of Qp and Qs do not coincide with those of Vp and Vs, suggesting that seismic attenuation mechanisms may operate within the permafrost in response to the presence of gases, or below the permafrost base, probably related to formations with water deposits. Thereby, Vp, Vs, and Vp/Vs ratios were mapped at two depths: at 1.0 m, to observe the lateral variations of the permafrost under the active layer, and at 35.0 m, at the first reported permafrost base (Fukuda et al., 1992). As the deepest resolution reached in this work is approx. 200 m, it is possible to observe the cases of permafrost extending down to sea level and the role of the (almost flat) Paleogene to Quaternary formations in the lateral distribution of this layer (Supplementary Material). Figure 6 presents the horizontal sections, showing lateral variations of the seismic velocity structure in the study area. Laboratory and field experiments have associated high values of Vp with permafrost layers (Krautblatter & Hauck, 2007; Krautblatter & Draebing, 2014; Skvortsov et al., 2014), meaning in this case that permafrost is emplaced at 35 m depth and highlighting patches of high values of the elastic properties. High P-wave velocities are widespread except in high-slope areas. Similar patterns are observed for the S-wave velocities (Vs). The N-S and E-W profiles of the same figure suggest lateral variation in the elastic properties of the permafrost layer.
Consequently, the Vp/Vs ratio is distributed over a broad range of values (1.6-4.0), indicating rocks with various porosities. Many are probably saturated with fluids in different phases and/or at critical points (e.g., Lee, 2003). The authors do not discard the possibility that these patches of high Vp/Vs ratio represent permafrost with anomalous saturated accumulations of gases such as CH4 and/or CO2. Hence, they inferred that the Paleogene to Quaternary formations reported by Montes et al. (2013) are highly heterogeneous in their petrophysical properties and, consequently, should also be associated with high lateral variation in ice concentrations.
Beyond grouping the HVSR curves into three trends (Figure 4), it is difficult to discern the topographic effect on the distribution of the mapped elastic parameters. Low values of Vp and Vs in the peripheral part of the maps (Figure 6) could be interpreted as the seismic response of non-compacted material associated with landslides. Future work could collect data in hillside areas, even with longer sampling periods and instruments with greater bandwidth, to verify whether the combined effect of topography and permafrost presence leaves imprints in the seismic response.
Discarding the topographic effect in the study area, the almost horizontal stratigraphy in the northern sector of the Island offers an interesting constraint for validating the HVSR method's hypothesis of a subsurface structure composed of laterally homogeneous viscoelastic layers. In addition, the trends of Vp vs. Vs shown in Figure 5 illustrate that gas saturation persists in the shallow strata, something not observed in the plot of Qp vs. Qs. This means that gas saturation mainly affects the elastic properties and, to a lesser degree, the anelastic properties. In contrast, in deeper strata, below the permafrost base, where the water reservoir is calculated to be located (Kneisel & Hauck, 2008), the primary mechanism of seismic dissipation is anelastic. The plot of (Vp/Vs)² vs. Qp/Qs shows these effects nicely.
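To illustrate how the ratios discussed above can be derived from the inverted layer parameters, the following minimal Python sketch computes (Vp/Vs)² and Qp/Qs for a set of layers in the style of Prasad's (2002) pore-pressure proxy. The example values are invented for illustration only and are not data from this study.

import numpy as np

def pore_pressure_proxies(vp, vs, qp, qs):
    """Return (Vp/Vs)^2 and Qp/Qs per layer, the two ratios plotted in Figure 5."""
    vp, vs, qp, qs = map(np.asarray, (vp, vs, qp, qs))
    return (vp / vs) ** 2, qp / qs

# Example with made-up layer values (not values from this study):
ratio_v, ratio_q = pore_pressure_proxies([1800.0, 900.0], [700.0, 250.0], [80.0, 15.0], [40.0, 10.0])
print(ratio_v, ratio_q)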
According to Borzotta and Trombotto (2004), the abundance of unfrozen water in frozen soils promotes variation in subsurface geophysical properties. Hence, having detected the possible interface at which the base of the permafrost is assumed to reach 0°C, the authors speculate, as an additional value of the HVSR method applied here, that the spatial distribution of the elastic and anelastic properties may reflect the thermal structure of this layer in the study area. High values of Vp could be related to permafrost layers, and the permafrost could be associated at 35 m depth with patches of high values of the elastic properties.
Conclusions
Using the estimates of H/V ratios at 67 seismic-noise stations, we calculated elastic and anelastic properties of the uppermost layers in the northern sector of Seymour-Marambio Island. The distribution of the H/V spectral ratios suggests that the permanent disturbance of the uppermost sedimentary layers by maintenance works, trampling, impacts, and high loads (e.g., the continual landing and taxiing of large aircraft on the runway over decades) may contribute to changes in the seismic dynamic response.
The mapping of elastic properties at 1.0 m and 35.0 m depth allowed identifying lateral variations of the study area's permafrost seismic properties. At those depths, we hypothesize that permafrost is emplaced in rocks with patches of porosities saturated with fluids in different phases and critical points, e.g., with abnormal saturated accumulations of gases like CH 4 and CO 2 .
In shallow strata, the saturation of gases affects mainly the elastic properties. In contrast, in deeper strata, where the location of water reservoirs is expected, the primary mechanism of seismic dissipation is anelastic.
Data and resources
This study is based on seismic microtremor data acquired during the Colombian and Argentinean scientific teams' campaign in January 2020 near the Marambio Scientific Base of Argentina. For more information and access to the dataset, contact the authors of this paper. All data processing and plotting were done using the open software Geopsy (Wathelet et al., 2020), OpenHVSR (Bignardi et al., 2016, 2018), and an academic license of MATLAB® R2020B from MathWorks.
CAV and AMG thank Instituto Antártico Argentino and Comando
Conjunto Antártico for the logistic support. CAV and JMS thank Universidad Nacional de Colombia, Departamento de Geociencias for helping support this research. The authors are grateful to staff members of the Laboratorio Antártico Multidisciplinario Marambio (LAMBI) for taking care of the permanent instrumentation. The authors also thank officers, crew, technicians, and the Fuerza Aérea Argentina science party that take care of the Marambio Station. AMG is a member of the Carrera del Investigador Científico, CONICET. | 2023-02-01T16:13:21.673Z | 2022-11-29T00:00:00.000 | {
"year": 2022,
"sha1": "68f55e663b9875d45239dd836e03033b01421119",
"oa_license": "CCBY",
"oa_url": "https://revistas.unal.edu.co/index.php/esrj/article/download/103981/85311",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ada59be88bc103ff6f0ece3d98b62330326bed63",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": []
} |
856536 | pes2o/s2orc | v3-fos-license | New insights into plant glycoside hydrolase family 32 in Agave species
In order to optimize the use of agaves for commercial applications, an understanding of fructan metabolism in these species at the molecular and genetic level is essential. Based on transcriptome data, this report describes the identification and molecular characterization of cDNAs and deduced amino acid sequences for genes encoding fructosyltransferases, invertases and fructan exohydrolases (FEH) (enzymes belonging to plant glycoside hydrolase family 32) from four different agave species (A. tequilana, A. deserti, A. victoriae-reginae, and A. striata). Conserved amino acid sequences and a hypervariable domain allowed classification of distinct isoforms for each enzyme type. Notably however neither 1-FFT nor 6-SFT encoding cDNAs were identified. In silico analysis revealed that distinct isoforms for certain enzymes found in a single species, showed different levels and tissue specific patterns of expression whereas in other cases expression patterns were conserved both within the species and between different species. Relatively high levels of in silico expression for specific isoforms of both invertases and fructosyltransferases were observed in floral tissues in comparison to vegetative tissues such as leaves and stems and this pattern was confirmed by Quantitative Real Time PCR using RNA obtained from floral and leaf tissue of A. tequilana. Thin layer chromatography confirmed the presence of fructans with degree of polymerization (DP) greater than DP three in both immature buds and fully opened flowers also obtained from A. tequilana.
Introduction
Carbohydrates produced by photosynthesis can be consumed directly or stored in plant cells in the form of sucrose, starch or fructans. Whereas the majority of plants accumulate starch, around 15% of plant species store carbohydrates in the form of fructans (Hendry, 1987). Long term storage of carbohydrates in the form of fructan polymers has been reported in root and stem tissue in dicotyledonous plants such as chicory (Cichorium intybus) (Van den Ende and Van Laere, 1996) and Jerusalem artichoke (Helianthus tuberosus) (van der Meer et al., 1998), in non-gramineous monocotyledons such as asparagus (A. officinalis) (Cairns, 1992) and agave (Agave tequilana) (Mancilla-Margalli and López, 2006), and in gramineous species such as wheat (Triticum aestivum) (Housley and Daughtry, 1987) and barley (Hordeum vulgare) (Wagner and Wiemken, 1986). It has been proposed that fructan polymers serve not only as energy reserves for new growth in the spring but also to protect cell membranes and contribute to cold tolerance at low winter temperatures (Hisano et al., 2008). Hendry (1993) also suggested that fructan metabolism is an evolutionary adaptation to withstand prolonged periods of drought. Perhaps surprisingly, fructans are found not exclusively in leaves or storage organs but also in floral tissue, where their primary role may be in mechanisms that lead to flower opening by modulating the osmotic state (Bieleski, 1993; Vergauwen et al., 2000).
Abbreviations: PGHF32, plant glycoside hydrolase family 32; FEH, fructan exohydrolase; 1-SST, sucrose:sucrose 1-fructosyltransferase; 6G-FFT, fructan:fructan 6G-fructosyltransferase; 6-SFT, sucrose:fructan 6-fructosyltransferase; 1-FFT, fructan:fructan 1-fructosyltransferase; Vinv, vacuolar invertase; Cwinv, cell wall invertase.
Most agave species, including Agave tequilana, are well adapted to grow under arid or semi-arid conditions due to a unique combination of several characteristics. Morphological traits include succulence, concave leaves with thick cuticles organized in a rosette formation and shallow adventitious roots (Gentry, 1982), whereas physiological components include CAM-mediated photosynthesis (Nobel, 1976) and fructan accumulation (López et al., 2003). The capacity to produce and store fructans has been exploited in Mexico since the pre-Columbian era to produce fermented (pulque) or distilled (tequila, mezcal) beverages (García-Mendoza, 1992). Agavins (Agave fructan polymers) are of the neoseries type (López et al., 2003).
Agavin structure implies the activity of four different fructosyl transferase enzyme activities: Sucrose:sucrose 1-fructosyl transferase (1-SST), Fructan:fructan 1-fructosyltransferase (1-FFT), sucrose:fructan 6 fructosyltransferase (6-SFT), and Fructan:Fructan 6G-Fructosyltransferase (6G-FFT). Previously different isoforms of 1-SST and 6G-FFT enzymes from A. tequilana (shown in red in Supplementary Table 1) were characterized at the genetic and functional level in different tissues and plants of different ages (Cortés-Romero et al., 2012). A 1-FFT type enzyme has also been reported for A. tequilana and A. inaequidans and shown to respond differentially in terms of expression in relation to exposure to different metabolites such as hormones or sugars (Suárez-González et al., 2013). The commercial importance of A. tequilana as a crop and the increasing interest in the exploitation of agave species for biofuel production (Borland et al., 2009;Cushman et al., 2015) underline the need for further detailed analysis of both the synthetic and degradative components of fructan metabolism in agave at the molecular level, with the aim of increasing the efficiency of fructan production and/or producing specific forms of fructan polymers.
Plant glycoside hydrolase family 32 (PGHF32) that includes fructosyltransferases, invertases, and fructan exohydrolases (FEH) is characterized by highly conserved amino acid sequences where changes in a single residue can modify the activity of specific enzymes (Le Roy et al., 2007;Van den Ende et al., 2009). This has necessitated the heterologous expression of putative PGHF32 encoding genes, protein purification and in vitro analysis of activity in order to reliably classify genes encoding each enzyme type. In the current environment of massive accumulation of sequence data from a wide number of species the need for analysis of enzyme activity could hamper detailed analysis of fructan metabolism in processes such as stress tolerance and osmotic balance especially in non-model species such as agave. Although definitive activity analysis is essential, an accurate sequence based method for initial classification of putative PGHF32 members at least for agave species would be a useful tool.
Recently, transcriptome data have been generated for four different Agave species: A. deserti and A. tequilana (members of the sub-genus Agave) (Gross et al., 2013) and A. victoriae-reginae and A. striata (members of the sub-genus Littae) (Avila de Dios and Simpson, unpublished). In this work we describe the identification of previously uncharacterized members of PGHF32 from four different Agave species based on RNA-seq data and suggest that the hypervariable loop domain (Van den Ende et al., 2009) could be useful for accurate sequence-based prediction of enzyme activity. In silico expression patterns and qRT-PCR analysis for both newly identified and previously characterized cDNAs indicated well conserved patterns of expression in the three Agave species analyzed, with high levels of expression for genes encoding degradative enzyme types in floral tissue. TLC analysis confirmed the presence of fructan polymers in immature buds and flowers of A. tequilana.
Agave Transcriptome Database Searches
In order to uncover new members of PGHF32 in agave species, searches were carried out in transcriptome databases from four Agave species: A. deserti, A. tequilana, A. victoriae-reginae, and A. striata. For A. deserti and A. tequilana, transcriptome data were generated by Illumina HiSeq and are available at NCBI (Gross et al., 2013). Transcriptomes for A. victoriae-reginae, A. striata and a second A. tequilana transcriptome were generated individually at Cinvestav Irapuato by Illumina MiSeq in paired-end runs to produce >23 million reads for each species, which were assembled using Trinity (Grabherr et al., 2011); BLAST2GO (Conesa et al., 2005) was used to obtain biological information about the assembled contigs (Avila de Dios and Simpson, unpublished). BLAST (Altschul et al., 2009) was used to search these assemblies for transcripts with homology to known PGHF32 members.
Alignment of Amino Acid Sequences and Identification of Conserved Motifs
Translated complete ORFs or selected motifs were aligned with MUSCLE (Edgar, 2004) against sequences from agave and other species that had been validated experimentally. Selection of the best substitution model and phylogenetic reconstruction by maximum likelihood were carried out using MEGA 6 (Tamura et al., 2013), and bootstrap analysis with 1000 repetitions was also performed. Analysis of conserved motifs and the corresponding figures were accomplished using Geneious R 8.1.3 (www.geneious.com).
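A simple way to quantify amino acid divergence between putative isoforms from an existing MUSCLE alignment is sketched below; the 4% threshold follows the criterion applied in the Results, while the function names and the decision to ignore gapped columns are assumptions made only for illustration.

def percent_divergence(seq_a, seq_b):
    """Percent amino acid divergence between two aligned sequences (gap columns ignored)."""
    assert len(seq_a) == len(seq_b), "sequences must come from the same alignment"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    mismatches = sum(a != b for a, b in pairs)
    return 100.0 * mismatches / len(pairs)

def are_distinct_isoforms(seq_a, seq_b, threshold=4.0):
    """Apply the >=4% divergence criterion used in this study to call distinct isoforms."""
    return percent_divergence(seq_a, seq_b) >= threshold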
In silico Expression Analysis
Expression levels in different agave tissues for transcripts encoding the different enzyme types and isoforms were determined in silico by mapping sequencing reads to the assembled contigs with Bowtie 0.12.9 (Langmead, 2010) and RSEM 1.2.0 (Li and Dewey, 2011). Expression levels are presented as "transcripts per million" (TPM). Heat maps were created from the expression data for the PGHF32 isoform transcripts identified in the three Agave species using the heatmap.2 function from the gplots library in the R statistics package version 2.17 (http://www.R-project.org/).
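For reference, the TPM normalisation reported by RSEM can be summarised in a few lines. The sketch below reproduces the standard formula from per-contig read counts and effective lengths; it is not the RSEM implementation, and the example numbers are invented for illustration.

import numpy as np

def counts_to_tpm(counts, effective_lengths):
    """Standard TPM: length-normalised count rates rescaled to sum to one million."""
    counts = np.asarray(counts, dtype=float)
    lengths = np.asarray(effective_lengths, dtype=float)
    rate = counts / lengths          # reads per base of each contig
    return rate / rate.sum() * 1e6   # rescale so all contigs sum to 1,000,000

# Hypothetical counts and effective lengths for three contigs:
print(counts_to_tpm([120, 30, 600], [1500.0, 800.0, 2000.0]))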
qRT-PCR Analysis
RNA extraction and qRT-PCR analysis was carried out as described in Abraham Juárez et al. (2015). Primers used are listed in Cortés-Romero et al. (2012).
Extraction and Thin Layer Chromatography of Fructans
Fructans were extracted from ground, lyophilized tissue from approximately 2.5 cm long unopened flower buds or fully opened flowers. Two aqueous extractions were carried out as follows: 30 ml of distilled water was added to 0.2 g of plant tissue and incubated at 75 ± 5°C for 30 min. The supernatant was recovered and the sample was re-extracted in 20 ml distilled water at 75 ± 5°C for 15 min. Both supernatants were combined and frozen before lyophilization to obtain a white powder. This protocol was adapted from Mellado-Mojica and Lopez (2012). Extracted fructans were resuspended in distilled water to a concentration of 25 mg/ml. One microliter of each fructan sample was applied to an aluminum-backed silica-gel plate (Sigma-Aldrich) and run three times using a butanol-glacial acetic acid-water (50:25:25 v/v/v) system (Thome and Kühbauch, 1985). Visualization of separated fructans was carried out using the aniline:diphenylamine:phosphoric acid reagent in acetone (Anderson et al., 2000).
Identification of Members of PGHF32 in Agave Species
Transcriptome databases for four different Agave species: A. tequilana, A. deserti, A. victoriae-reginae, and A. striata were analyzed in order to identify sequences encoding members of PGHF32. In total 255 transcripts encoding putative PGHF32 members were identified and 31 new full-length cDNA sequences determined. The predicted amino acid sequences for each of the full length cDNAs were aligned and compared with previously characterized amino acid sequences (indicated by * in Figure 1) for 1-SST and 6G-FFT fructosyltransferases, vacuolar (Vinv) and cell wall (Cwinv) invertases and FEH from A. tequilana, A. officinalis, and A. cepa (closely related members of the Asparagales family) whose activity had been confirmed previously by heterologous expression and in vitro assays. As expected from previous reports, two main groups are formed: A containing FEH (subgroup a) and cell wall invertases (subgroup b) and B containing fructosyltransferases (subgroups d-6G-FFT and e-1-SST), vacuolar invertases (subgroup f) and an undefined invertase clade (subgroup c) (Figure 1). Based on these groups each sequence could be tentatively classified as encoding a specific enzyme type and sequences were named based on this classification and the agave species from which they were obtained. Sequences in the same clade from the same species representing putatively different isoforms had at least 4% divergence at the amino acid level. As can be observed sequences putatively encoding fructan exohydrolase enzymes (FEH) were identified for the first time for A. tequilana and new isoforms for vacuolar and cell wall type invertases were also determined for this species. Enzymes in each class were identified for A. deserti but complete sequences could only be identified for invertase and FEH type enzymes from A. striata and invertase, FEH and 1-SST type enzymes for A. victoriaereginae. The numbers and types of new isoforms found for each species are summarized in Table 1. A dendrogram based on an expanded alignment including all available amino acid sequences from experimentally determined members of PGHF32 from dicotyledonous and monocotyledonous species confirmed the classification of the agave sequences (Supplementary Figure 1). Previously characterized sequences from A. tequilana (Cortés-Romero et al., 2012) are indicated in red.
Based on these data, members of PGHF32 for A. victoriae-reginae, A. striata, and A. deserti are reported for the first time. FEH isoforms and novel invertase isoforms were also determined for A. tequilana, including a putatively distinct invertase isoform found only in A. tequilana and A. deserti.
Comparison of Conserved Motifs and Differences within the Hypervariable Loop Domain
The conserved motifs close to the active sight that characterize PGHF32 are shown in Figure 2. The FRDP motif is not displayed since this motif was perfectly conserved in all sequences analyzed. In general the WMNDNPG, WSGSAT, ILYTGG, WECPD (WECVD), and GWAS motifs are well conserved within all PGHF32 members from the four Agave species analyzed. Differences observed previously for the WMNDNPG and WSGSAT motifs in fructosyltransferases of A. tequilana were confirmed and also shown to be present in other agave species. The undefined invertase group (clade c in Figure 1) shows no specific pattern of amino acid conservation for these motifs.
The hypervariable loop region has previously been shown to contain conserved arrangements of amino acids that correlate with enzyme type. When an 18 amino acid sequence spanning this region was used to compare the members of PGHF32 from Agave species, a strong correlation between conserved amino acids and enzyme activity was observed (Supplementary Figure 2). In order to determine whether the 18 amino acid region could be useful as a general tool for distinguishing and identifying the different enzyme types within PGHF32, a comparison was made of all available complete amino acid sequences from both monocotyledonous and dicotyledonous species, encoding enzymes whose activity had been experimentally confirmed (Supplementary Figure 3). Based only on the 18 amino acid hypervariable loop region a reasonably good correlation is obtained between the groups formed in the dendrogram and enzyme activity. Only 5 of the 106 sequences indicated with red boxes in Supplementary Figure 3 show no strong correlation between activity and the groups defined by sequence analysis. AtCwinv3 indicated by a stippled red box was initially classified as a cell wall invertase but later shown to be an FEH and is correctly placed in the corresponding clade. When the 18 amino acid loop sequences were aligned by putative or confirmed enzyme activity, distinct patterns of conserved amino acids could be determined for each of the enzyme types (Figures 3A,B). As indicated, three amino acids: lysine, tyrosine, and glycine at positions 1, 12, and 15 respectively within the 18 amino acid motif are conserved in all sequences. This minimal pattern distinguishes the FEH group from the other enzyme types. In contrast the closely related Cwinv group, in addition to the minimal 3, has 6 additional conserved amino acids, the vacuolar invertases 5 and the 1-SST, 6G-FFT, and 1FFT fructosyltransferases 6, 9, and 5 additional conserved amino acids respectively ( Table 2). The conserved amino acids within the 18-residue motif for the 6SFT type enzyme is based on closely related reported sequences whose identities have not yet been confirmed by activity and shows strong conservation with only 2 of the 18 residues found to vary. Although Cwinvs and FEHs could clearly be distinguished based on the hypervariable loop region, different forms of FEH type enzymes could not be accurately determined. The newly identified Vinv, Cwinv, FEH, and FFT genes from the different agave species show good correlations with the conserved amino acid patterns and putative enzyme types, supporting the initial classification based on the dendrogram in Figure 1.
The undefined group (clade c in Figure 1) boxed in e of Figure 3A, shows the same pattern of conserved amino acids as the Vinvs supporting their classification as invertases. However, an alignment of the complete amino acid sequences for clades c and f from Figure 1 uncovered a sequence structure specific to clade c where in addition to conserved amino acid sequences, defined groups of amino acids are either missing or inserted in comparison with clade f (red boxes, Supplementary Figure 4) and may indicate functional differences.
Highly conserved motifs confirm the identification of new members of PGHF32 in agave species and specific arrangements of conserved amino acids within the hypervariable loop domain could be exploited at least for preliminary classification of new sequence data pertaining to PGHF32. No agave sequences were classified as encoding either 1-FFT or 6-SFT type fructosyltransferases.
In silico Expression Analysis of PGHF32 Members in Different Tissues
General transcriptome analysis (data not shown) of PGHF32 genes in A. tequilana, A. striata, and A. victoriae-reginae revealed expression in tissues such as roots and flowers. In order to document the expression patterns of PGHF32 genes in different tissues of different Agave species, in silico expression analysis based on the transcriptome databases generated at Cinvestav was carried out for each enzyme type for A. tequilana, A. striata, and A. victoriae-reginae (Figures 4A-C). Although full-length amino acid sequences could not be identified for all enzyme types in all Agave species, mapping of partial transcripts to contigs, as described in Materials and Methods, allowed the expression patterns of genes encoding the different enzymes of PGHF32 to be determined, including the patterns of expression of different isoforms of specific enzymes within a single species.
For A. tequilana, as previously reported, very similar patterns of expression were observed for Atq1-SST-1 and Atq1-SST-2, with highest expression levels in tepals and pistils. In contrast to the 1-SST isoforms, the Atq6G-FFT-1 and Atq6G-FFT-2 isoforms show distinct patterns of expression. Whereas Atq6G-FFT-2 is expressed in all tissue types, at the highest level in pistils, and shows a similar pattern of expression to the Atq1-SST-1 and Atq1-SST-2 isoforms, Atq6G-FFT-1 shows highest expression in shoot apical meristem (SAM) tissue. The newly described AtqInv2 also shows highest expression in SAM tissue and overall shows an expression pattern similar to Atq6G-FFT-1. With the exception of AtqInv2, the A. tequilana genes encoding invertases or FEHs are expressed at higher levels than the genes encoding fructosyltransferases and also show unique patterns of expression, with each isoform highly expressed in a specific floral tissue (Figure 4A). These results are also represented as heat maps in Supplementary Figures 5A,B, where Figure S5A is normalized in relation to the isoforms and Figure S5B is normalized in relation to plant tissues. The dendrogram relating to the isoforms (left hand side of the figure) in Figure S5A also reflects the similarities in expression patterns between the fructosyltransferase isoforms and the AtqVinv-2 isoform, and the unique expression patterns of each of the other invertase and FEH isoforms. The single Ast1-SST-1 isoform shows a pattern of expression similar to the A. tequilana 1-SST transcripts, with highest expression in pistils. Interestingly, three distinct isoforms encoding 6G-FFT type enzymes (Ast6G-FFT-1, Ast6G-FFT-2, and Ast6G-FFT-3) were identified for A. striata, and although all three isoforms are most highly expressed in vegetative tissue such as leaves, stem, SAM and roots, each shows a slightly different expression pattern. These isoforms show similar patterns of expression to Atq6G-FFT-1 and AtqInv2. As observed for A. tequilana, the A. striata invertase and FEH isoforms are also expressed at higher levels than the fructosyltransferase isoforms and also show unique tissue-specific patterns (Figure 4B). These patterns are reflected in Supplementary Figures 5B,C, where the dendrogram relating to the isoforms (left hand side of Figure S5B) shows a closely related cluster containing the FT isoforms, whereas the invertase and FEH isoforms are on more distant branches.
The two isoforms of 1-SST identified for A. victoriaereginae show very different expression patterns. Avr1-SST-1 is most strongly expressed in stem tissue whereas Avr1-SST-2 shows highest expression in floral organs corresponding more closely to the expression patterns observed for Atq1-SST-1, Atq1-SST-2, and Ast1-SST-1. Avr1-SST-1 shows an expression pattern similar to Avr6G-FFT1 and 2 that are also most highly expressed in vegetative tissues and to a lower level in floral tissues as was described for Ast6G-FFT-1, Ast6G-FFT-2, and Ast6G-FFT-3 Figure 4C. AvrVinv-1, AvrCwinv-1, AvrFEH-1, and AvrFEH-3 all show expression patterns very similar to the putatively orthologous sequences in the other Agave species as described above. Supplementary Figures 5D,E show these relationships.
Avr1-SST-1 and 2, Avr6G-FFT-1 and 2, Ast1-SST-1 and Ast6G-FFT-1, 2, and 3 are all tentatively named isoforms based on partial sequence comparisons. Transcripts for other isoforms encoding a 1-SST, Cwinvs, Invs, and FEHs were also identified but showed the same expression patterns as those presented in Figure 4 and for brevity have not been included.
In most cases distinct isoforms from a single species showed different levels and tissue specific patterns of expression supporting their classification. Certain FT isoforms showed higher expression in floral tissues whereas others were most highly expressed in vegetative tissues. In general, higher levels and unique patterns of expression for invertases and FEHs were observed in comparison to FTs for all three species in floral tissues in comparison to vegetative tissues.
Confirmation of Expression of Selected Genes and Presence of Fructans in Floral Tissue
High levels of expression of PGHF32 members in floral tissue have not been documented previously for any Agave species. The in silico expression data however show that many of the isoforms identified for Agave PGHF32 enzymes are highly expressed in floral tissue in all three species. In order to confirm these observations, qRT-PCR analysis was carried out for Atq1-SST-1, Atq6G-FFT-1, Atq6G-FFT-2, and AtqCwinv-1 in immature floral buds (Figure 5A) of A. tequilana samples not used to obtain transcriptome data. As shown in Figure 5B, the fructosyltransferase encoding genes show higher expression in at least one of the floral tissue types in relation to leaf tissue supporting the results from the in silico data. AtqCwinv-1 was expressed at a low level and no significant difference was observed between floral and leaf tissue in this experiment.
High levels of expression of genes encoding enzymes responsible for fructan synthesis suggest the presence of fructan polymers in Agave flowers, and in order to confirm this hypothesis, TLC was carried out on extracts obtained from immature flower buds and fully opened A. tequilana flowers (Figures 6A,B). As shown in Figure 6C, fructans with degrees of polymerization (DP) between 3 and 10 were observed in both immature buds and mature floral tissue, although the larger DP fractions are somewhat less abundant in mature floral tissue.
Quantitative real time PCR analysis confirmed high levels of expression of genes encoding PGHF32 enzymes in floral tissue and TLC analysis also showed the presence of fructooligosaccharides (FOS) in these tissues.
Discussion
The association of the groups obtained in the dendrogram in Figure 1 with sequences from at least one enzyme whose activity had been confirmed allowed the putative allocation of the newly identified sequences into different fructosyltransferase, fructan exohydrolase, and invertase groups. Comparison of the agave amino acid sequences with a wider range of more distantly related species, including both monocotyledons and dicotyledons, also supports the classification of the agave sequences, as does the analysis of the variable loop region, where each enzyme type showed the expected conserved amino acid configurations. Additionally, the presence of closely related sequences and the conservation of expression patterns for isoforms found in different agave species within the same groups also lend weight to the classification.
tequilana. Here we report a new vacuolar invertase isoform, a novel tentatively classified invertase isoform, two new isoforms encoding a cell wall invertase and for the first time four isoforms encoding an FEH type enzyme for this species. All enzyme types are also reported for the first time for A. deserti. The identification of a novel invertase type enzyme is supported by the presence of transcripts in both A. tequilana and A. deserti and the difference in expression pattern observed in comparison to the Vinv isoforms from the other species. The significant differences in amino acid sequence observed for the new invertase group could be due to in part to differential transcript processing and this clade could be specific to agave species found in the sub-genus agave such as A. tequilana and A. deserti since no equivalent sequences were found in the A. victoriae-reginae or A. striata transcriptomes. It will be of interest to determine the precise activity of enzymes in this group by in vitro analysis. The identification of new Agave isoforms for Cwinvs, Vinvs, and FEHs is consistent with the presence of multiple isoforms for these enzymes in other species. A. thaliana and Rice have 9 and 6 Cwinv isoforms, respectively, and both have two Vinv isoforms (Sherson et al., 2003;Ji et al., 2005). Three FEH isoforms have also been described previously for Chicory (C. intybus) (Van den Ende et al., 2001) and two for wheat (T. aestivum) (Van Den Ende et al., 2003). Suárez-González et al. (2013) reported a 1-FFT type enzyme for A. tequilana, however enzyme activity has not been reported for this gene. Based on the grouping in the dendrogram and the sequence found in the hypervariable loop domain it is possible that this sequence may encode a 6G-FFT type enzyme. Although complete amino acid sequences for 1-SST and 6G-FFT could not be assembled for A. victoriae-reginae and A. striata, at the nucleotide level partial transcripts encoding these enzymes could be mapped to contigs and used in the in silico expression analysis. One interesting observation is that in all the transcriptome analysis that we have carried out to date we have never uncovered sequences that can be convincingly classified as encoding 1-FFT or 6-SFT type enzymes. This is surprising since reports of the biochemical structure of Agave fructans (López et al., 2003) in A. tequilana indicate that probably both 6-SFT and 1-FFT enzyme activity is necessary in order to produce these polymers. Ritsema et al. (2003) have previously questioned the need for a separate 1-FFT enzyme in onion (A. cepa) a relative of the Agavaceae within the order Asparagales. It is possible that 1-FFT type genes in Agave species are expressed at very low levels or in specific tissues that have not been sampled, or that at least one of the several 6G-FFT isoforms could carry out this activity as has been reported in other species such as L. perenne (Lasseur et al., 2006) Asparagus (A. officinalis) and onion (A. cepa) (Ritsema et al., 2003). The lack of candidate cDNAs encoding 6-SFT type enzymes in the four Agave species studied is also intriguing. This may also be due to low levels of expression or tissue specific expression patterns as suggested for 1-FFT. The presence of a 6-SFT enzyme in A. cepa has been reported (Fujishima et al., 2005) but not for A. officinalis, two species closely related to the Agavaceae.
The difficulty in distinguishing enzyme type within PGHF32 has been commented before (Van den Ende et al., 2009) and in some cases classification based solely on sequence data has proved erroneous as shown by the A. thaliana gene classified as AtCwinv3 and later shown to have FEH activity (De Coninck et al., 2005). Ultimately the final classification of new genes should be based on activity, however the conserved amino acid patterns within the hypervariable loop domain could provide a simple and relatively accurate initial tool to identify and annotate sequence data for PGHF32. This is ever more important as large scale sequencing projects become more numerous and the possibility to confirm activity for many of the species and sequences analyzed will be impractical. Detailed analysis of the hypervariable domain by site directed mutagenesis could also lead to insights on the activity and the determination of specificity for these enzymes.
Expression patterns for different isoforms encoding degradative enzymes (invertases and fructan exohydrolases) of PGHF32 were completely conserved across all species and strongly correlated to floral organs. This observation agrees with the hypothesis that breakdown of fructan polymers in floral tissue is needed to provide both energy and the osmotic variations thought to play a role in the opening of flowers in other species (Bieleski, 1993;Vergauwen et al., 2000). The expression patterns of genes encoding fructosyltransferase enzymes were more variable with distinct patterns of expression observed for each species. Atq1-SST-1 and 2, Ast1-SST-1 showed similar expression patterns with highest expression in pistils. For Avr1-SST-1 and Avr1-SST-2 very different patterns of expression were observed. Whereas, Avr1-SST-1 was most strongly expressed in stems, Avr1-SST-2 showed high levels of expression in all floral tissues. Atq6G-FFT-1 shows highest expression in SAM tissue and significant expression in stem tissue but low levels in leaf and root. This gene is also moderately expressed in all floral tissues. In contrast, Atq6G-FFT-2 is strongly expressed in all tissues with highest expression in pistils. 6G-FFT encoding genes from A. victoriae-reginae or A. striata were most strongly expressed in predominantly vegetative tissues. The differences in expression patterns for the different fructosyltransferase encoding genes may reflect differences in the accumulation or turnover of fructans in flowers from the different species and also the different morphology of the inflorescences defining each subgenus. A. tequilana is classified in subgenus Agave and has a large paniculate inflorescence while A. victoriae-reginae and A. striata are classified in subgenus Littae with simple spicate inflorescences. It may be possible that low DP fructans can be transported more easily to the spicate flowers directly from the inflorescence where they can be utilized immediately, whereas transport in the paniculate inflorescence may be less efficient given the large numbers of branched umbels, leading to the need to synthesize and store at least short DP fructan polymers in floral tissue until needed and hence the need for 6G-FFT activity.
Quantitative RT-PCR analysis of floral tissue confirmed the higher levels of expression observed for Atq1-SST-1 and Atq6G-FFT-1 and 2 in floral tissues in comparison to leaves. AtqCwinv-1 showed only slightly higher expression in tepals in comparison to leaves in contrast to the pattern observed in silico where highest expression was observed almost exclusively in ovaries. This may be due to the different developmental stages at which the flower buds were sampled since qRT-PCR analysis was carried out on immature buds whereas transcriptome data was obtained from fully developed flowers. Given the conservation in amino acid sequences between different isoforms encoding enzymes with the same activity it will be of great interest to obtain genomic sequences in order to study the regulatory basis of the differential expression patterns observed.
Based on the expression data, it was expected that fructan polymers would be detected in floral tissue of agave plants and this was confirmed by TLC analysis of samples from A. tequilana. Higher levels of polymerization of fructans in immature buds may reflect that during flower development fructans accumulate but are then degraded to provide an energy source and/or osmotic change, leading to the opening of the fully developed flower as has been proposed for other fructan producing species (van Doorn and Van Meeteren, 2003).
Transcriptome analysis allowed us to identify for the first time, cDNAs encoding members PGHF32 in A. victoriae-reginae, A. striata, and A. deserti and to obtain sequences to complete the set of enzymes necessary to carry out fructan metabolism in A. tequilana. The results also support the notion that as in the case of onion (A. cepa) enzymes with specific 1FFT activity may not be necessary in Agave species although this possibility needs to be confirmed by analysis of activity in vitro and/or in a heterologous system. Sequence alignments, conserved patterns of amino acids and differential expression patterns all support the classification of the different isoforms identified, although evidence from transcriptome data suggests that other isoforms still remain undetermined. The release of genomic sequences for Agave species will permit the definitive determination of numbers of isoforms for each enzyme and the analysis of gene regulatory elements. Conserved patterns of amino acids within the hypervariable loop may be a useful tool for initial identification and annotation of new sequences showing homology to members of PGHF32. Based on the observed expression patterns and the presence of fructan polymers, fructan metabolism must play an important role during flowering in these three Agave species and probably in most other species within the genus. | 2016-06-18T00:07:47.761Z | 2015-08-05T00:00:00.000 | {
"year": 2015,
"sha1": "37520339a0861665ecfe156448f2138219cecec6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2015.00594/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37520339a0861665ecfe156448f2138219cecec6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
30002902 | pes2o/s2orc | v3-fos-license | Community based intervention for tobacco cessation: a pilot study experience, north East India.
BACKGROUND
North East India has a high prevalence of tobacco consumption, but only a few individuals seek help for tobacco cessation. The impact of community-based tobacco cessation intervention in this region needs more research.
MATERIALS AND METHODS
Retrospective analysis was done on the dataset from a community-based tobacco cessation intervention pilot project conducted in Guwahati metro during 2009-10. Subjects were male and female tobacco users aged >15 years who were permanent residents of these blocks and gave consent to be included in the study.
RESULTS
The sample comprised 800 tobacco users, of whom 25% had visited a health care provider during the last 12 months and 3% had received tobacco cessation advice. An 18% quit rate was observed at the six-week follow-up, higher than the national average, with a 47% quit rate at eight months, while 52% of subjects reduced their use.
CONCLUSIONS
A higher tobacco quit rate and reduced tobacco use, no loss to follow-up and negligible relapse were observed with this community-based intervention design. Such designs should be given more emphasis for implementation in specified communities with very high tobacco consumption rates, cultural acceptance of tobacco and low motivation towards quitting.
Introduction
Tobacco is a burning issue. India is the second largest consumer and third largest producer of tobacco (Jhanjee, 2011). Tobacco-related diseases are to a large extent preventable causes of death, yet these diseases continue to kill people worldwide. Around the world, nearly 5.4 million people die every year from lung cancer, heart disease and other illnesses. If current trends continue, tobacco will account for 13% of all deaths by 2020. Nearly 900,000 people die every year in India due to diseases attributed to tobacco (Jayakrishnan et al., 2011). Tobacco is a well-known risk factor for cancer. The International Agency for Research on Cancer (IARC) Monograph states that tobacco smoking is the major cause of lung cancer (all types) and is strongly associated with oral cancer and cancers of the oropharynx and hypopharynx, oesophagus, stomach and nasopharynx (IARC, 1987). More than fifty percent of all cancer cases reported in NE-India were tobacco-related cancers (NCRP, 2013). Cardiovascular and respiratory diseases are also tobacco related.
In India, head and neck cancers (HNCA) account for 30-40% of cancers at all sites, of which 9.4% are oral cancers, the sixth most common cause of death in males and seventh in females. Oral cancer is one of the most common cancers in the world and is commonest in India, Bangladesh, Sri Lanka and Pakistan. In North-east India, the incidence of tobacco-related oral cancers is about 33% (Bhattacharjee et al., 2006). The relative proportion of tobacco-related cancers in Kamrup Urban district of Assam state shows that 6.47% of males and 2.87% of females had oral cancer (NCCP, 2005). Of the total cancer cases registered at the Guwahati Hospital Based Cancer Registry during 2011-12, 64.3% of cancers in males and 28.2% of cancers in females were tobacco related (HBCR Guwahati, 2012).
More than one third (35%) of adults in India use tobacco in some form, and 84% of users use tobacco every day (Mini et al., 2014). About two in five adults from rural areas and one in four from urban areas use tobacco. The prevalence of tobacco use is 48% in males in comparison to 20% amongst females. Compared with the national scenario, the current pattern of tobacco use is alarming across the entire North Eastern Region, where Assam (39%), Sikkim (42%), Arunachal Pradesh (48%), Manipur (54%), Meghalaya (54%), Tripura (56%) and Nagaland (56%) far exceed the national average, and Mizoram, at 67%, tops the list among all the states and union territories covered by the Global Adult Tobacco Survey (GATS, 2010). India as a whole undoubtedly has a very high rate of tobacco use, but the figures from the North East region underline the urgency of intensive, effective tobacco control in this region.
The cultural acceptance of tobacco use makes the picture even worse. Tobacco is consumed in every possible way in this part of the country; it is smoked, chewed and even taken traditionally as a drink, Tibur (Malakar et al., 2012; Sharma et al., 2013).
Current tobacco use varies with education level; in India, literacy is inversely proportional to tobacco use, with urban and rural percentages of current tobacco users of 25% and 38%, respectively (Jayakrishnan et al., 2011). Tobacco use differs from place to place and from population to population, with pockets where consumption is alarmingly high and needs immediate attention.
Probable future consumers of tobacco may add to the already existing demand for tobacco products, which may make the control of production, sale and distribution of tobacco even more cumbersome. Both current and potential users of tobacco should therefore be addressed for successful tobacco control.
To decrease demand, users of tobacco have to quit the habit. India reports that only 38.4% of current tobacco users above 15 years of age made a quit attempt in the last 12 months, and these figures are much lower in the North Eastern states. Less than half of current tobacco users in India visited a health care provider during the last 12 months, and among them only 46.3% of current smokers and 26.7% of current chewers received any advice to quit tobacco from a health care provider. In the North East these figures are even lower: only 24.7% of current smokeless tobacco users and 41.6% of current smokers received any advice from a health care provider to quit the tobacco habit (GATS, 2010).
Research question: In India, and especially in the North Eastern states, health-seeking behavior still lags far behind that of developed nations, so the likelihood of people visiting a health care provider or clinic for the sole purpose of quitting tobacco remains questionable. The researchers therefore sought to analyze whether there could be an affirmative impact if care providers went out into the community to provide cessation help. Some community-based tobacco cessation intervention studies have been conducted outside India, and a few within India. In North East India, with its high prevalence of tobacco use and low motivation towards quitting and health seeking, such an intervention design should be tried to assess its feasibility. This is the first pilot study of its kind in this region.
Objectives of this study: i) to determine the practicability of providing tobacco cessation intervention at the community level; ii) to analyze the success of tobacco cessation services extended to the community level; and iii) to assess the prospects for large-scale implementation of community-based tobacco cessation intervention and to suggest recommendations for better tobacco cessation services.
Materials and Methods
Data were obtained from a community-based tobacco cessation intervention study conducted in Guwahati city during 2009-2010. It was carried out in the South, East, West and Central urban blocks of the city. Four Medical Social Workers (MSWs) carried out the community-based intervention and follow-up, with each MSW allotted one zone. The MSWs were trained on the tobacco scenario, tobacco hazards, counseling, and basic knowledge of tobacco control and cessation services. They surveyed their respective areas door to door, starting from a prominent landmark, until they found the required number of study subjects fulfilling the inclusion and exclusion criteria. The inclusion criteria were male and female users of tobacco aged >15 years who were permanent residents of the study areas. The exclusion criteria were non-users of tobacco, tobacco users aged less than 15 years, unwillingness to receive tobacco cessation intervention, and not being a permanent resident of the study areas.
The sample for the study was 800, with each MSW registering 200 study subjects. The study duration was 12 months; the first two months were used for the door-to-door survey in the intervention areas and registration of the subjects. A predesigned proforma was filled in during registration of the study subjects, covering a detailed history of the tobacco use pattern along with addiction and motivation level. The proforma also contained follow-up status questions that were filled in during each follow-up.
During registration, IEC materials on tobacco hazards were offered to the subjects, and they were sensitized to the hazards of tobacco. A date and time for counseling were fixed according to the convenience of the subjects, keeping in mind the time frame of the study.
The mode of intervention was tobacco cessation counseling offered by the MSW in the community setting, at the homes of the study subjects.
After counseling, the subjects were followed up for eight months. Follow-up was done by direct contact at 2 weeks, 4 weeks, 6 weeks, 2 months, 3 months, 4 months, 6 months and 8 months. Re-counseling was given during these follow-up contacts when needed.
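As a purely illustrative aid, the short sketch below generates the direct-contact follow-up calendar described above from a counseling date. The start date is a hypothetical example, and the code is not a tool used in the project, only a restatement of the schedule.

```python
# Minimal sketch: generating the direct-contact follow-up calendar described above.
# The counseling date is a hypothetical example; intervals mirror the study schedule.
from datetime import date, timedelta

counseling_date = date(2009, 9, 1)  # hypothetical example date

follow_up_offsets = {
    "2 weeks": timedelta(weeks=2),
    "4 weeks": timedelta(weeks=4),
    "6 weeks": timedelta(weeks=6),
    "2 months": timedelta(days=60),
    "3 months": timedelta(days=90),
    "4 months": timedelta(days=120),
    "6 months": timedelta(days=180),
    "8 months": timedelta(days=240),
}

for label, offset in follow_up_offsets.items():
    print(f"{label:>9} follow-up: {counseling_date + offset}")
```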
Results
The survey covered 750 households, 87% of which were nuclear families. The total number of persons aged >15 years who were permanent residents of these areas was 2250, of whom 809 were current tobacco users; 9 of these 809 did not want to receive the intervention. Thus, 800 study subjects were registered and given the tobacco cessation intervention. This study population showed a current tobacco use rate of 36%. Of the study subjects, 81% were males and 19% females. In terms of age, 78% of the study subjects were in the group of 20-39 years, 18% in 40-59 years, 3% in 10-19 years (those <15 years were not included in the study and not counted) and 1% above 60 years (Figure 1). Before registration, the subjects were asked whether they had visited any health care service for any reason and whether they had actually received any advice to quit the tobacco habit. It was found that only one quarter of the subjects (current tobacco users) had visited a health care provider during the last 12 months, and 15% of current tobacco users (both smoking and smokeless types) had received any kind of advice to quit the tobacco use habit. Of the study subjects, 10% were illiterate, 18% had completed primary school education, 30% middle school, 15% high school, 22% higher secondary and 15% had completed a degree or above (Figure 2). In terms of income, 3% were in the group earning less than 2000 rupees per month, 55% were in the group earning between 2000 and 5000 rupees per month, 36% had a monthly income between 5000 and 10,000 rupees, and 6% had an income of 10,000 rupees and above per month.
After counselling, each subject was followed up. At the six-week post-intervention follow-up, 18 out of every 100 users had quit the tobacco habit. A very high quit rate of 47% was observed at the eight-month post-intervention follow-up (Table 1), achieved with tobacco cessation counseling alone and regular scheduled follow-up visits.
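For readers wanting to see how such rates reduce to simple proportions of the 800 registered subjects, the sketch below computes them. The quitter counts are hypothetical back-calculations consistent with the reported percentages, not figures taken from Table 1.

```python
# Minimal sketch: quit/reduction rates as proportions of registered subjects.
# Counts are illustrative back-calculations (18%, 47% and 52% of 800), not Table 1 data.
registered = 800

quit_6_weeks = 144      # hypothetical count consistent with the reported 18%
quit_8_months = 376     # hypothetical count consistent with the reported 47%
reduced_8_months = 416  # hypothetical count consistent with the reported 52%

def rate(count, total):
    """Return a percentage rounded to the nearest whole number."""
    return round(100.0 * count / total)

print(f"Quit rate at 6 weeks:    {rate(quit_6_weeks, registered)}%")
print(f"Quit rate at 8 months:   {rate(quit_8_months, registered)}%")
print(f"Reduced use at 8 months: {rate(reduced_8_months, registered)}%")
```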
Discussion
The current pattern of tobacco use in the study population (36%) tallies with the national (34.6%) and state-level (39.3%) patterns of current tobacco use (GATS, 2010). This cross-section of the community reflected an inclination towards tobacco use in younger age groups in comparison to the national level: the maximum number of users in the study population was in the age group of 15-39 years, amounting to 81% (648). In the national context, by contrast, the age-group distribution of tobacco users showed the maximum numbers in the age group above 65 years (48%), followed by 45-64 years (47%), 25-44 years (37%) and 15-24 years (18%) (GATS, 2010). Only 25% of the study sample had visited a health care provider during the last 12 months and only 3% had received any kind of tobacco cessation advice. This is much lower than the national health-seeking behavior, where less than half of current tobacco users visited a health professional. In the national scenario, 46.3% of current smokers and 26.7% of current chewers received advice to quit tobacco from a health care provider. The picture is even worse in the North East, where visits to health care providers are much less frequent and only 24.7% of current smokeless tobacco users and 41.6% of current smokers received any advice from a health care provider to quit the tobacco habit. The picture was even more troubling among the study subjects, where only 15% of current tobacco users (either smoking or smokeless type) received any kind of advice to quit the tobacco use habit. With such a low level of health-seeking behavior, the low probability of receiving cessation advice, and the availability of only a few tobacco cessation centers in this region, reaching out to the community to provide tobacco cessation help may be of great benefit.
Awareness of the ill effects of tobacco and approach to health facilities are higher among the well-educated strata of the community; hence, health-seeking behavior increases with level of education and socioeconomic status. In this study, 10% of subjects were completely illiterate, and the majority of the subjects were in the lower family-income group of 2001-5000 rupees per month. When the study areas were approached for community-based intervention, the whole community got access to the service regardless of educational or economic background. Community interventions can thus address the needs of the needy by bringing cessation help to their doorsteps, thereby bypassing the barriers that limit the masses from seeking tobacco cessation help.
Tobacco cessation centers (TCCs) in India have reported overall quit rates of around 16% at six weeks post intervention. The addition of pharmacological adjuncts improves quit rates: agents such as the antidepressant bupropion have been shown to increase quit rates in the treatment of nicotine dependence by approximately 1.5 to two times, irrespective of the setting. Combined with behavioral interventions, they produce quit rates of up to 35% (TCC India) (NCCP, 2005).
The six-week follow-up of the intervened subjects in this study showed a higher quit rate (18%) than the national average (16%) reported by the TCCs of India. Loss to follow-up was zero at the six-week post-intervention follow-up as well as at eight months post intervention. In clinic-based cessation interventions for tobacco control, either the registered users themselves have to visit the centre for follow-up or they are contacted over the phone. However, as per our clinic-based experience at TCC Guwahati, most people either do not revisit for follow-up or cannot be contacted over the phone, owing to unavailable or wrong phone numbers, unanswered calls, out-of-reach or switched-off mobiles, or undelivered and unanswered phone messages. By the end of one year, only a few of the cessation help seekers at the clinic were traceable, whereas all of the community-based subjects receiving cessation interventions could be followed up without a miss, as they could be visited at their homes. This was clearly reflected in their high quit rate at the end of eight months post intervention. It is important to follow up subjects who are still under the influence of tobacco to improve the quit rate, and those in the maintenance phase to help them maintain their status. Those in the contemplation and preparation stages can also move into the action phase through scheduled follow-up visits and, if needed, re-counseling during these visits. Such vigorous follow-up is possible only through personal contact with the subjects, which is not feasible in clinic-based services. Less follow-up means more relapse.
This study observed that, with unmissed and timely follow-up, the quit rate accumulated and was maintained, giving a high quit rate of 47% at the end of eight months post intervention (with behavioral counseling alone). In addition, 52% of the study subjects were at the stage of reduced use, and regular scheduled follow-up and re-counseling have every possibility of moving them towards quitting the habit.
Conclusion and recommendation
Community-based tobacco cessation intervention is a good model for tobacco cessation: with cessation counseling as the only intervention tool, we observed a very high quit rate of 47% at the eight-month post-intervention follow-up. This quit rate could be increased even further by adding pharmacological adjuncts to counseling. Loss to follow-up was zero and the relapse rate was negligible. If we aim for better follow-up and a better quit rate in a defined population, community-based tobacco cessation interventions could be the answer.
Such vigorous community-based tobacco cessation intervention designs should therefore be given more emphasis for implementation in specified communities with a very high tobacco consumption rate, cultural acceptance of tobacco and low motivation towards quitting, in order to: i) achieve better results in generating awareness about tobacco hazards and the benefits of quitting; ii) address the needs of the needy by bypassing the barriers that limit the masses from seeking tobacco cessation help; and iii) attain a high tobacco quit rate, lower loss to follow-up and close monitoring of the maintenance of tobacco cessation | 2017-08-27T20:45:24.217Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "710d90bb2ba919a57aa1dfdf1c2a4a2022529c0a",
"oa_license": "CCBY",
"oa_url": "http://koreascience.or.kr/article/JAKO201507964683166.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2dbf25cbab76c9411954cea0a0b2dc7677eb0b38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1116049 | pes2o/s2orc | v3-fos-license | Silk: a potential medium for tissue engineering.
OBJECTIVE
Human skin is a complex bilayered organ that serves as a protective barrier against the environment. The loss of integrity of skin by traumatic experiences such as burns and ulcers may result in considerable disability or ultimately death. Therefore, in skin injuries, adequate dermal substitutes are among primary care targets, aimed at replacing the structural and functional properties of native skin. To date, there are very few single application tissue-engineered dermal constructs fulfilling this criterion. Silk produced by the domestic silkworm, Bombyx mori, has a long history of use in medicine. It has recently been increasingly investigated as a promising biomaterial for dermal constructs. Silk contains 2 fibrous proteins, sericin and fibroin. Each one exhibits unique mechanical and biological properties.
METHODS
Comprehensive review of randomized-controlled trials investigating current dermal constructs and the structures and properties of silk-based constructs on wound healing.
RESULTS
This review revealed that silk-fibroin is regarded as the most promising biomaterial, providing options for the construction of tissue-engineered skin.
CONCLUSION
The research available indicates that silk fibroin is a suitable biomaterial scaffold for the provision of adequate dermal constructs.
Human skin is a complex organ made of 2 layers of dermis and epidermis. The loss of the integrity of the protective barrier served by skin through injury or illness may result in infection, dehydration, and necrosis, which may ultimately lead to severe trauma and shock, with morbid consequences. Therefore, adequate cutaneous substitutes, which ideally, could replace all the structures and functions of native skin are essential to permit maximal recovery. Silk has recently been established as a biomaterial scaffold capable of fulfilling the properties required for effective cutaneous constructs.
SKIN
Human skin anatomically and functionally consists of 2 primary layers: 1. The outer epidermis is composed of keratinized stratified squamous epithelium consisting mainly of keratinocytes. 2. The dermis is a dense irregular connective tissue, primarily consisting of fibroblasts, which underlies and interdigitates with the epidermis. Underlying the dermis is the hypodermis (or subcutaneous layer), a loose connective tissue containing varying amounts of adipocytes (Figure 1). The margin between the dermis and the hypodermis is abrupt; however, the 2 regions are structurally and functionally well integrated through nerve and anastomosing vascular networks.
Skin has a diverse range of functions. It serves as a barrier against microorganisms and other environmental insults. It provides mechanical support against injury and radiation such as ultraviolet light. Also, skin protects the body against dehydration and provides sensory detection at the body surface.
CUTANEOUS CONSTRUCTS
The development of cutaneous substitutes/constructs by tissue engineering (TE) has evolved from simple cultured autologous epidermal sheets to more complex bilayered cutaneous constructs ( Figure 2). Currently, there are no tissue-engineered cutaneous constructs that can duplicate the complexity of the human skin. Mainly, a dermal construct is created with or without a temporary synthetic epidermis (Table 1).
CULTURED CUTANEOUS CONSTRUCTS
The attempts to create an effective skin substitute have followed the 3 TE approaches: (1) the gel approach, (2) the scaffold approach, and (3) the self-assembly approach. The scaffold approach is the most commonly used in TE (Figure 3). It is used to create porous scaffolds, frequently from natural (eg, collagen) or biosynthetic biomaterials such as polyglycolic acid or polyvinyl alcohol. These scaffolds are classified as either acellular or cellular, based on the absence or presence of cellular components. Both systems have been widely used in skin TE. An in vivo study of cultured artificial dermal substitutes showed that an artificial dermis containing autologous cultured fibroblasts enhances the reepithelization of a full-thickness skin defect when compared to an acellular dermal substitute scaffold. 1 This emphasized the significance of incorporating fibroblasts in all engineered constructs for skin replacement. Therefore, cultured dermal substitutes can be prepared by producing a scaffold for the dermal component and seeding it with skin fibroblasts, which are capable of secreting many growth factors as well as extracellular matrix (ECM) components to fill the gaps created by the pores in the material. [2][3][4][5] Subsequently, keratinocytes can be seeded upon the scaffold after appropriate dermal maturation.
SILK: STRUCTURE AND PROPERTIES
Silks are well-known natural fibers produced by a variety of silkworm insects and spiders, including Bombyx mori, which is one of the most widely studied sources. 6 Silk fibers are composed of 2 types of proteinaceous polymers (Figure 4): 1. Sericin, an antigenic gum-like protein that forms the outer rubbery hydrophilic coating, and 2. Fibroin, the inner-core protein filaments consisting of hydrophobic amino acids, glycine and alanine repeat sequences, which accounts for up to 90% of the total molecular weight. 7,8 For decades, silk threads were used as surgical sutures until sensitization to sericin, demonstrated by type I allergic responses (asthma and upregulated levels of IgEs), was reported in patients undergoing repeated surgical procedures. 9-12 Therefore, sericin removal is an essential step before silks can be used clinically; thus, silk fibroin (SF) has recently been increasingly investigated as a promising biomaterial for new biomedical applications.
Structurally, SF is characterized by heavy- and light-chain polypeptides arranged into highly organized β-sheet crystal regions through hydrogen bonding, as well as semicrystalline regions, which together are responsible for its elasticity and tensile strength. [13][14][15][16][17] Furthermore, silk fibers have greater elasticity than fibers of comparable tensile integrity 18; for example, the elasticity of dragline silk is 4 to 7 times higher than that of synthetic high-tenacity fibers like Kevlar 49. In addition, silks are thermally stable up to approximately 250°C, allowing processing over a vast range of temperatures. 19
APPLICATION OF SF AS A BIOMATERIAL SCAFFOLD
Although silk has been used commercially for centuries in textile production and in many clinical applications, only recently has the use of solubilized SF been explored as a biomaterial scaffold for cell culture and TE. 20 The biomaterial scaffold/matrix plays a key role in transducing environmental cues to cells seeded within it, acting in essence as a translator between the local environment and the developing tissue (neotissue), hence aiding the development of biologically viable functional tissue. 18 The scaffold should essentially be designed to mimic the structure and function of native ECM proteins, which provide mechanical support and regulate cell activities. 21 It should support cell attachment and migration as well as guide cell differentiation and function. Furthermore, key criteria include biocompatibility and biodegradability, with nontoxic and noninflammatory degradation products during replacement in vivo by cellular ECM components. 22 Many natural and synthetic polymers have been considered for biomaterial scaffolds; however, the challenging combination of biocompatibility, biodegradability, controllable porosity, stability for an extended time period during neotissue growth, and processibility into porous matrixes often limits the utility of most polymers. 23 A number of studies have demonstrated that upon sericin removal, regenerated SF has good biocompatibility, 18,24-26 haemocompatibility, 27 oxygen and water permeability 7,28 as well as minimal inflammatory reaction. Separate studies have found that regenerated SF films prepared by dissolving silkworm cocoon fibers in 9-9.5 M lithium bromide supported the attachment and proliferation of both human and animal cell lines. 29,30 Collectively, these studies have recognized that SF offers versatility in biomaterial scaffold/matrix design for use in tissue regeneration of bone, cartilage, blood vessels, ligaments, and tendons, in which mechanical performance and biological interactions are major factors for success. Furthermore, for ease of utility, SF can be processed into films, fibers, hydrogels, and meshes, as shown by Min et al in 2004. 21 A large number of fabrication techniques have been applied to process 3-dimensional polymeric scaffolds of high porosity and surface area. These include electrospinning, solvent casting/particulate leaching, emulsion freeze-drying, thermally induced phase separation, and gas foaming.
NONWOVEN SF NANO-/MICROFIBROUS MEMBRANES
Recently, nonwoven SF membranes fabricated by electrospinning have gained attention due to the ability to produce polymer nanofibers with diameters ranging from several micrometers down to tens of nanometers. 21 Researchers have investigated the effects of nonwoven SF microfibrous nets on the culture of a wide variety of human cell lines, including osteoblasts, fibroblasts, keratinocytes, and endothelial cells. These studies have shown that the microfibrous nets support adhesion, proliferation, and cell-cell interactions. 31 In addition, nonwoven SF nanofibrous mats were also found to support attachment, spreading, and proliferation of human bone marrow stromal cells, keratinocytes, and fibroblasts in vitro. 21,22 The biocompatibility of nonwoven microfibrous membranes composed of partially dissolved native SF fibers has also been demonstrated. 32 There was no infiltration of lymphocytes in the tissue even after 6 months of subcutaneous implantation, which indicates good biocompatibility. In addition, the implanted SF membranes were integrated with the surrounding tissue within 6 months and no obvious degradation was observed. Previous in vivo studies have demonstrated SF-based membranes as promising materials for skin regeneration. 25,32
SF-BASED SCAFFOLDS AND STEM CELL-BASED TE
When considering stem cell-based TE, a reliable cell source that responds appropriately in terms of morphology, proliferation, and tissue-specific differentiation to the biomaterial scaffold is of paramount importance. Although embryonic stem cells are capable of differentiating into cell types of all different tissue lineages, a lack of understanding and control of differentiation, as well as ethical and legal boundaries, limit their use in TE. In contrast, adult stem cells with limited differentiation ability are an appealing alternative. Mesenchymal stem cells are one such example, and can be isolated from various adult tissues including adipose tissue, 33 articular cartilage, 34 bone marrow, 35,36 and synovium. 37 Mesenchymal stem cells and SF-based scaffolds have been extensively studied in ligament, cartilage, and bone. However, there are currently limited published reports on their use in skin, providing an area of interest for further research.
DISCUSSION
Human skin is considered one of the most important organs of the body, providing a multitude of structural and functional benefits and ensuring perfect homeostasis. 38 Since tissue loss at the skin level is a common occurrence due to a multitude of events such as lacerations, cutaneous disease, neoplasia, infection, burns and other trauma, adequate cutaneous constructs that could act as effective skin replacements, capable of mimicking native skin, are highly desirable. 39 There are currently 2 different types of commercially available bilayered cutaneous constructs. 40 However, they are at present unable to fulfill all the structural and functional properties of native skin; thus, the scope for engineering a novel cutaneous construct remains. 41 The development of innovative skin replacements has previously followed the 3 approaches of TE, although the scaffold approach, commonly created from either modified collagens or resorbable polymers, is most frequently used. 42 In vitro investigation has shown that several types of human cell lines, including dermal fibroblasts, epidermal keratinocytes, and endothelial cells, can be successfully cultured on SF scaffolds in various forms; therefore, cell attachment, proliferation, and differentiation can be studied accordingly. 21,22 In addition, after subcutaneous implantation in vivo, the SF implants were shown to integrate well with the surrounding tissue while no host immunologic response was reported. 25,32 Hence, the suitability of SF as a novel type of biomaterial scaffold to be used for TE has been clearly demonstrated.
This encouraging breakthrough was attributed to the innovative characteristics of SF, which include the following: (1) good interaction with human cells in vitro that supports cell-specific needs; 21,22 (2) compatibility in vivo following implantation without evoking a foreign body response, and therefore the capability of fully integrating into the surrounding host tissue; 25,32 (3) processibility into aqueous solutions for subsequent formation of films as well as other material formats; 21 and (4) degradability at a controlled rate both in vitro and in vivo, which is of particular importance with regard to biodegradable scaffolds where slow neotissue growth is most desirable. 18 SF also has novel mechanical properties that are capable of rivaling many natural or synthetic high-performance fibers. [13][14][15][16][17][18] Consequently, SF has been established as a highly promising biomaterial for its surface morphology and superior structural and mechanical properties, in association with good biocompatibility and biodegradability, thus proving to be a suitable scaffold for TE applications and the subsequent development of novel cutaneous constructs.
CONCLUSIONS
Tissue-engineered skin has advanced from the initial cultured autologous epidermal sheets to more complex bilayered cutaneous constructs capable of mimicking selective structures and functions of native skin. Silk fibroin, 1 of 2 proteins found in naturally occurring silk, has been gaining momentum as a promising material for biomedical application due to its ability to be biocompatible with the host immune system, as well as support cell attachment, proliferation, and differentiation, which are key components for TE. Consequently, the potential advantages of SF as a biomaterial scaffold are substantial, with the provision of further possibilities in cell culture and TE, implying that adequate cutaneous constructs are a realistic prospect in the not too distant future. | 2014-10-01T00:00:00.000Z | 2008-10-10T00:00:00.000 | {
"year": 2008,
"sha1": "2fe8467f0162199538f5ea013ddd41eb31cd547c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2fe8467f0162199538f5ea013ddd41eb31cd547c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257429941 | pes2o/s2orc | v3-fos-license | “Comparison of Nissen Rossetti and Floppy Nissen techniques in laparoscopic reflux surgery”
Abstract Objective The present study makes a comparative assessment of the Floppy-Nissen (FN) and Nissen-Rossetti fundoplication (NRF) procedures. Methods Included in the study were 80 patients who presented to the General Surgery Department outpatient clinic of Balcalı Hospital of the Cukurova University Faculty of Medicine with gastroesophageal reflux between March 2010 and March 2013. All patients were operated on by the same surgeon using the laparoscopic FN or NRF techniques in a randomized controlled manner. The preoperative and postoperative reflux-specific and nonspecific gastrointestinal symptoms of the patients were compared. Results The duration of symptoms had no effect on the level of satisfaction; regurgitation, bloating and heartburn were more common in those with a longer duration of symptoms. Of the patients, 92.5% were satisfied with their resulting condition, and 92.5% were inclined toward the surgery. It was further found that there was no difference between the symptoms or satisfaction levels of the patient groups who underwent the FN procedure and those who underwent the NRF procedure, other than those related to the duration of surgery. Conclusion Our study revealed no significant difference between the laparoscopic FN and NRF fundoplication treatments, aside from the duration of surgery. KEY MESSAGES The Nissen-Rossetti technique can be used safely based on the similarity of its outcomes with those of the classical Nissen technique. Despite the documented success of laparoscopic anti-reflux surgery, the absence of studies comparing surgery and medical treatments prevents these discussions from being concluded.
Introduction
Gastroesophageal reflux (GER) refers to the effortless, spontaneous reflux of gastric contents into the esophagus, and accounts for approximately 75% of all esophageal pathologies. It is physiologically common, especially in the postprandial period [1], and when this reflux exceeds the normal physiological limit, esophageal and extraesophageal symptoms occur. Patients may present with such typical symptoms as heartburn, chest pain, regurgitation and dysphagia, as well as such atypical symptoms as cough, hoarseness, sinusitis, pharyngitis, laryngitis and dental erosion. The easiest approach to the identification of the disease is based on symptoms, although the symptoms considered to be indicative of GERD, such as heartburn and acid regurgitation, are quite common in the general population.
The prevalence of the endoscopic detection of esophagitis in symptomatic patients is 20%, approximately 100 times higher than in the normal population [2]. The most concerning complication is Barrett's esophagus; its prevalence among individuals with gastroesophageal reflux varies by geographical region, ranging from 3% to 14% for histologically confirmed Barrett's esophagus, with a pooled prevalence of 7.2% (95% CI 5.4%-9.3%). Estimates of the annual cancer incidence in patients with Barrett's esophagus have ranged from 0.1 to 0.4 percent. Although the risk of developing esophageal cancer is increased at least 30-fold above that of the general population, the absolute risk of developing cancer for an individual patient with nondysplastic Barrett's esophagus is low [3,4].
In recent years, developments in both medicine and surgery have led to increased discussion of the optimum treatment approach, especially between gastroenterologists and surgeons, and today the leading treatment method is considered to be proton pump inhibitor (PPI) therapy [5,6]. Indications for anti-reflux surgery should be based on the identification of the disease from objective values determined from appropriate tests and the presence of symptoms, and an appropriate and effective medical treatment should be administered prior to surgery.
Randomized studies in the literature have examined division versus non-division of short gastric vessels during Nissen fundoplication. Kinsey-Trotman et al. found that laparoscopic Nissen fundoplication has durable efficacy for heartburn symptom control at up to 20 years of follow-up, but that division of short gastric vessels failed to confer any reduction in side effects and was associated with persistent epigastric bloat symptoms at late follow-up [7]. Similarly, Kosek et al. found that routine division of short gastric vessels during Nissen fundoplication did not provide either a functional or a clinical advantage in short- or long-term follow-up [8].
The present study makes a comparative assessment of the postoperative period (duration of surgery, complication development, hospital stay) and short-term outcomes (subjective assessment of, for example, the reduction of symptoms and improvement in quality of life, as well as endoscopic demonstration of whether or not pathological acid reflux has been resolved) of the Floppy-Nissen and Nissen-Rossetti fundoplication procedures from among the laparoscopic fundoplication alternatives performed for gastroesophageal reflux disease (GERD).
Patients/materials and methods
The 80 patients that remained, all of whom had endoscopic hiatal hernias and various degrees of esophagitis, were operated on sequentially using one of two procedures, a laparoscopic Floppy-Nissen procedure (Group 1) or a laparoscopic Nissen-Rossetti procedure (Group 2), for which the selection was made using a simple randomization method. Randomization was done sequentially, regardless of the patients' age, gender, education level, duration of preoperative symptoms and preoperative esophagitis grade: Floppy Nissen was applied to one patient and Rossetti Nissen to the next, in sequential order. Data collection and analysis were done in a blinded manner. All procedures were performed by the same surgeon (Cem Kaan Parsak MD).
At the outset of the study, the patients' age, sex, educational level, endoscopic esophagitis grade, duration of preoperative symptoms, duration of preoperative medical treatment, year of surgery, duration of surgery, duration of postoperative follow-up, presence/absence of postoperative complications, and presence of reoperation were all recorded.
Endoscopic assessment of esophagitis was performed in accordance with the Los Angeles classification [9].
In the postoperative follow-up period, the patients were administered a satisfaction questionnaire inquiring about the presence of dysphagia and, if so, what kind of food was difficult to swallow. The evaluation was made on a scale of 0-4 (0 = no swallowing difficulties, 1 = difficulty swallowing solid food, 2 = difficulty swallowing soft food, 3 = difficulty swallowing liquids, and 4 = difficulty swallowing all).
The patients were then administered a further questionnaire to determine the levels of heartburn, bloating, frequent belching, diarrhea, abdominal pain, vomiting and inability to belch, with evaluations again made on a scale of 0-4 (0 = no symptom, 1 = mild [noticeable but not bothersome every day], 2 = moderate [noticeable and bothersome every day], 3 = often [affecting daily life], and 4 = very often [limiting daily life]).
The patients were also administered a preoperative gastroesophageal reflux disease-health-related quality of life (GERD-HRQL) questionnaire (Appendix 1) and a gastroesophageal reflux symptoms checklist during a follow-up visit 2 months after the operation (Appendix 2). Endoscopic evaluations were performed blindly by a gastroenterology specialist at the 2nd and 12th months postoperatively, and the level of postoperative satisfaction was compared between the two groups, evaluated on a rating scale of 1-4 (1 = very satisfied, 2 = satisfied, 3 = neutral [neither satisfied nor dissatisfied], and 4 = dissatisfied). A 1-year follow-up was performed, and all 80 patients who participated in the study completed it. Written informed consent was obtained from all participants and all study procedures were conducted according to the Declaration of Helsinki.
Statistical analyses were performed using SPSS for Windows (Version 16.0, Chicago, SPSS Inc.), and included the Wilcoxon test, Kruskal-Wallis analysis of variance, Spearman's correlation coefficient and the Mann-Whitney U test. The critical statistical significance level was set at p = .05 for all analyses.
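As an illustration of how the between-group comparisons described above could be reproduced outside SPSS, the following sketch uses Python's SciPy to run a two-sided Mann-Whitney U test on a continuous outcome (for example, operation duration) and a Spearman correlation between symptom duration and satisfaction score. All numbers are hypothetical placeholders, and SciPy is an assumption of this sketch rather than the software the authors used.

```python
# Minimal sketch (not the authors' analysis): nonparametric comparisons with SciPy.
# All values below are hypothetical placeholders, not data from the study.
from scipy import stats

# Hypothetical operation durations (minutes) for the two groups.
floppy_nissen_minutes = [92, 105, 88, 110, 97, 101, 95, 99]
nissen_rossetti_minutes = [78, 85, 80, 90, 76, 83, 88, 81]

# Mann-Whitney U test, two-sided, with alpha = 0.05 as in the study.
u_stat, p_value = stats.mannwhitneyu(
    floppy_nissen_minutes, nissen_rossetti_minutes, alternative="two-sided"
)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Spearman correlation between symptom duration (months) and satisfaction (1-4 scale).
symptom_duration_months = [12, 24, 6, 36, 18, 48, 9, 30]
satisfaction_score = [1, 2, 1, 2, 1, 3, 1, 2]
rho, p_corr = stats.spearmanr(symptom_duration_months, satisfaction_score)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```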
Surgical technique
All patients were operated on by the same surgeon using one of two different procedures. The following surgical steps were followed for the laparoscopic Nissen fundoplication: exposure of the gastrohepatic ligament; dissection of the hiatus; exposure of the gastrosplenic ligament and dissection of the fundus; closure of the crura; and fundoplication.
The laparoscopic Nissen-Rossetti fundoplication, in turn, included fundoplication without exposure of the gastrosplenic ligament. The abdomen was accessed using 5-mm trocars, as previously described. After the crura were adequately identified, an opening was created posterior to the esophagus, allowing an easy pass for the fundus.
For the Floppy-Nissen procedure, the mobilization of the fundus was achieved via a ligature, starting at the level of an imaginary line assumed to traverse the stomach through the lower end of the spleen toward the crura. A 39-F bougie was used routinely in all patients. We recommend that the surgical procedure be kept standard to facilitate dissection, and that the bougie be advanced after the fundoplication is completed to determine whether there is any narrowing. The crural defect was repaired with two to three non-absorbable sutures. A 2 × 4-cm Prolene graft was fixed on the crus with a titanium ProTack TM (Covidien, U.) tacker. The fundus was passed through a window opened posterior to the esophagus, and the 360° fundoplication was completed. In this position, three non-absorbable sutures were placed between the two opposing stomach walls, the first of which passed through the esophagus. Patients started to take liquids on postoperative day 1 and were discharged with a recommendation to ingest only liquids and soft foods for 3-4 weeks after discharge.
Results
Involved in the study were 80 patients who were assigned to two groups in a randomized controlled manner, with 40 undergoing laparoscopic Floppy-Nissen fundoplication and 40 undergoing laparoscopic Nissen-Rossetti fundoplication. The mean age of the patients was 40.46 ± 10.396 years; 42.5% were female and 57.5% were male. Of the total, 48.8% had completed higher education, while 51.2% were high school graduates or below. There was no significant difference in the age, sex or educational levels of the two groups (Table 1).
No significant difference was noted in the duration of preoperative symptoms or the duration of preoperative medical treatment between the two groups (p = .376; p = .383). Every patient underwent an endoscopic examination in the preoperative period, revealing grade A esophagitis as the most common condition in both Group 1 and Group 2 (65% vs. 60%), with no statistically significant difference between the two (p = .703) (Table 2).
The duration of surgery in both groups was calculated as the time from the minute of anesthesia induction to the time of spontaneous breathing. A comparison of the two groups revealed a statistically significantly shorter operation duration in the Nissen-Rossetti group (p = .008). The duration of the postoperative hospital stay was the same for both surgical procedures, with a mean hospital stay of 1.39 d. Complications developed in a total of five patients, one of which required an operation. In the laparoscopic Floppy-Nissen group, three patients developed bleeding and one patient underwent surgery for an infected hematoma in the postoperative period. In the laparoscopic Nissen-Rossetti group, bleeding developed in one patient and perioperative pneumothorax in another patient, the latter of whom underwent perioperative chest tube insertion. An assessment of the complications based on the Clavien-Dindo Classification revealed two patients in Group 1 with grade I and one patient with grade IIIb, while Group 2 contained one patient with grade II and another with grade IIIa. There was no significant difference in the complications encountered between the two groups (p = .646). The postoperative follow-up period ranged from 12-24 months in both patient groups (Table 3). Furthermore, three patients in the laparoscopic Floppy-Nissen group and five patients in the Nissen-Rossetti group developed difficulty in swallowing solid food (p = .459); the rate of postoperative bloating was 35% (p = .935), the rate of frequent belching was 11.3% in total (p = .079), the rate of inability to belch was 20% in the overall group (p = .267), and the rate of diarrhea was 11.3% in total (p = .725). Only one of the 80 patients developed postoperative vomiting, although it was not bothersome every day, and this was statistically insignificant (p = .317). In an assessment of postoperative abdominal pain, 73.8% of the patients developed mild abdominal pain, but not occurring daily (p = .449). Among the 80 patients, the preoperative symptoms recurred in six patients (four in the Floppy-Nissen group and two in the Nissen-Rossetti group), and all six patients underwent medical treatment for the recurrent symptoms (p = .399) (Table 4).
The heartburn levels of the patients differed significantly between the preoperative and postoperative periods (p = .001), while there was no significant difference between the two groups in either the preoperative or the postoperative period (p = .508 for the preoperative period; p = .304 for the postoperative period) (Table 5).
An examination of the relationship between preoperative symptom duration and satisfaction with the surgery revealed no negative or positive correlation between symptom duration and satisfaction level (p = .773) (Table 8).
The overall satisfaction rate (very satisfied and satisfied) was 92.5%, and 92.5% (74/80) of the patients responded 'yes' when asked 'If you developed reflux again, would you undergo this surgery?', with no statistically significant difference between the two groups (p = .399). While 38 of the 40 patients in the laparoscopic Floppy-Nissen group expressed that they would have the operation again, 36 of the 40 patients in the laparoscopic Nissen-Rossetti group stated that they would have the operation again.
PPI therapy was recommended for all patients with grade B-C esophagitis in the postoperative period, as well as for 5 patients with grade A esophagitis who were symptomatic.
Discussion
The development and spread of laparoscopic antireflux surgery have led to a decline in morbidity, mortality and even surgical treatment recurrence rates [10,11]. Comparative studies between antireflux surgery and medical therapy have demonstrated mixed results in patients with GERD. A large meta-analysis that included seven trials showed that surgical treatment of GERD is more effective than medical therapy with respect to patient-relevant outcomes in both the short and medium term. Heartburn and regurgitation were less frequent after surgical intervention. However, a considerable proportion of patients still needed antireflux medication after surgical fundoplication [12]. The Reflux trial, which included 21 hospitals, showed that after 5 years, laparoscopic fundoplication continued to provide better relief of GERD symptoms, associated with improved health-related quality of life. Surgical complications were shown to be rare. A surgical policy was shown to be more likely to be cost-effective, although initially more costly [13].
There are two major anti-reflux procedures: 360° total (Nissen) fundoplication and 270° partial (Toupet) fundoplication. The Rossetti modification of the Nissen fundoplication built on Nissen's original approach, involving fixation of the anterior surface of the fundus to the anterior surface of the fundus after wrapping it around the esophagus, following complete mobilization of the abdominal esophagus and lesser curvature. Su et al., in a study comparing the efficacy and safety of laparoscopic Nissen, Toupet and Dor fundoplication in the treatment of hiatal hernia complicated by gastroesophageal reflux disease, found all three laparoscopic fundoplications to be safe and feasible in this setting. However, laparoscopic Nissen and Dor fundoplication were better than Toupet fundoplication in reducing the number of reflux episodes, suppressing long reflux episodes, increasing lower esophageal sphincter pressure, and reducing the incidence of postoperative dysphagia [14]. Du et al. found in their meta-analysis that laparoscopic Nissen fundoplication and 180° laparoscopic anterior fundoplication (LAF) were equally effective in controlling reflux symptoms and achieved a comparable prevalence of patient satisfaction, with advantages balanced by the risk of reoperation. They suggested that when surgeons choose surgical procedures for an individual with GERD, the risk of recurrence of symptoms should be balanced against the risk of dysphagia [15].
Randomized controlled studies have shown the Rossetti modification to be at least as successful as the classical Nissen fundoplication [16][17][18], or even more so [19,20]. Although the anterior wall technique seems to be an easier operation, choosing the right aspect of the anterior wall of the gastric fundus to be used in the fundoplication can be difficult, as an incorrect choice can lead to such typical laparoscopic complications as a 'bilobed' stomach. Other complications include a very tight valve formation, twisted fundoplication and gastric valve formation.
Clinical observations have shown that new gastrointestinal symptoms can occur and existing symptoms can continue in the postoperative period. Negre identified bothersome and unbearable gastrointestinal symptoms in 26% and 10%, respectively, of the patients in their study, while Swanstrom reported a rate of 96% [21,22]. The most likely reason for the significantly different rates reported in the literature is the assessment of groups that were not operated on by the same surgeon, as these were generally multicenter studies. Furthermore, the postoperative assessments were made by different observers, and so differences in techniques would be inevitable. The main difference in the present study is its evaluation of patients who were operated on by the same surgeon using two different techniques, and the assessment of all patients by a single observer. Bloating and dysphagia are the most common postoperative symptoms discussed in the literature [23][24][25]. Several studies to date have identified bloating and gas among the symptoms that may be present preoperatively and that continue to a great extent postoperatively, with a reported rate of 20-67% [21][22][23][24][25][26][27]. In the present study, 35% of the patients reported varying degrees of bloating, with an equal number of patients in both groups. All 14 patients with bloating in the laparoscopic Floppy-Nissen group described the frequency of symptoms as low, that is, mild enough not to affect daily life, while 13 patients in the laparoscopic Nissen-Rossetti group described the frequency as low and one patient as medium, that is, at a level that affects daily life. It may be considered that bloating and gas are not symptoms specific to gastroesophageal reflux [28,29]. Another problem is dysphagia, which can be divided into postoperative early and late forms. Postoperative dysphagia may occur for several reasons, among which are unrecognized motility disorders such as achalasia, peptic stricture, retroperitoneal hematoma, tight fundoplication and denervation of the lower esophagus as a result of the operation. Studies have shown that personal characteristics can also play a role, and postoperative dysphagia has been reported to be more common in patients with NERD [29][30][31][32][33][34].
Very different rates of dysphagia have been reported in the literature, with Beldi and Glattti reporting a dysphagia rate of 25% and Frantzides et al. a rate of 34% [35,36]. A comparison of the findings of the present study with those in the literature would suggest that dysphagia is mostly caused by solid foods. In the present study, the rate of dysphagia was 10% and, consistent with the literature, was mostly linked to solid food, with three patients in the laparoscopic Floppy-Nissen group and five patients in the laparoscopic Nissen-Rossetti group complaining of difficulty in swallowing solid food. Floppy-Nissen group and four in the laparoscopic Nissen-Rossetti group. Frequent belching is another common symptom in reflux patients. In the postoperative period, patients generally complain of being unable to belch, although frequent postoperative belching may indicate a loose fundoplication. The late appearance of frequent postoperative belching may suggest a shift either in the graft or in the fundoplication, and so it may serve as an important symptom at follow-up. In the present study, the rate of frequent postoperative belching was 11.3%.
The mechanism of an inability to belch, another common symptom after anti-reflux surgery, can be explained as follows: the reflex required to belch begins with the stimulation of tension receptors in the fundus. The dissection of short gastric vessels during the operation may lead to the dissection of the afferent nerves required for this reflex, resulting in loss of the reflex and leading potentially to an inability to belch in the postoperative period. A previous study found the rate of inability to belch to be 22%, while our study established a rate of 20%, which is consistent.
Regurgitation and heartburn are the two main symptoms of GERD. It should not be assumed that patients who express ongoing heartburn are experiencing a recurrence of reflux, as continuing symptoms may be attributable to previous esophageal irritation, and it may take several months for this to fully resolve. Many studies have reported that these symptoms largely resolve within three months [36,37]. Our study identified regurgitation to various degrees in each patient in the preoperative period, while this rate decreased to 11.3% in the postoperative period and mostly occurred only once a week with little bother to the individual (p = .001). Heartburn and regurgitation were detected more often in the postoperative period in patients with a longer duration of preoperative symptoms.
In the present study, the rate of taking medication due to newly developed symptoms in the postoperative period was 8.8%, while a rate of 14% was reported in a study comparing postoperative 5-8 year outcomes [26] in which 79% received treatment for symptoms unrelated to reflux.It was established in the present study that the patients used mainly simethicone-group drugs for the treatment of bloating, and a small proportion took PPI irregularly, but without a physician's recommendation and without pathology.
Despite all of these postoperative symptoms, 92.5% of the patients were satisfied with their current condition, and 92.5% responded 'yes' when asked 'If you developed reflux again, would you have this surgery?'Only 7.5% of the patients had recurrent symptoms and 8.8% were undergoing irregular medical treatment.
Laparoscopic fundoplication procedures have proven to be successful for the treatment of gastroesophageal reflux disease with low morbidity.As can be seen, gastrointestinal symptoms occur at various rates after laparoscopic surgery, and multiple theories have been put forward to explain the mechanisms behind their occurrence.There are different mechanisms behind the development of different symptoms, including vagal injury, tight fundoplication, the shift of the fundoplication into the thorax, dietary habits and air swallowing [21,38].A previous study found postoperative symptoms to be more common when vagotomy was added to anti-reflux surgery [39], suggesting that vagal injury during laparoscopic anti-reflux surgery may lead to the development of gastrointestinal symptoms in the postoperative period.It is believed that postoperative adhesions may also be an effective factor delaying gastric and duodenal emptying, although these dyspeptic symptoms may also be related to an underlying undiagnosed disease.In such cases, the operation may not be the direct cause of the symptoms but may play a supporting role in their emergence.Nissen recommends care during surgery not to cause vagal injury [21,40].
The high patient satisfaction rate, even in the presence of gastrointestinal symptoms, is proof that the operation is effective and well-tolerated.The main determinant here, however, is the frequency and severity of symptoms.One of the factors determining satisfaction with the operation relates to patient expectations from the operation.Obviously, if patients are told that they may experience such postoperative symptoms as bloating, inability to belch, vomiting and sometimes diarrhea, and that these symptoms may occur depending on the physiology of the surgery, patient satisfaction will be increased.
The limitations of our study were that the data were obtained based on verbal statements and that some of the data were relatively subjective.
Our study concluded that the surgical treatment option can be chosen for the treatment of GERD, and that the Nissen-Rossetti technique can be used safely based on the similarity of its outcomes with those of the classical Floppy-Nissen technique, but with a shorter operation duration.
Table 3 .
Intraoperative and postoperative clinical characteristics.
Table 5 .
Preoperative and postoperative heartburn characteristics.
Table 6 .
Comparison of endoscopy at postoperative months 2 and 12.
Table 7 .
Distribution of postoperative satisfaction.
Table 8 .
Comparison of duration of preoperative symptoms and levels of satisfaction. | 2023-03-11T06:17:42.991Z | 2023-03-10T00:00:00.000 | {
"year": 2023,
"sha1": "ebb6a16c7b80b99f15052f169f9f300e240a2930",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/07853890.2023.2187075",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "40c27dc5e079439394dc0481b1b1853e7ebb4e43",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
122864013 | pes2o/s2orc | v3-fos-license | Searches for the Higgs Boson in CMS
The CMS potential for the Higgs boson discovery is discussed in the framework of the Standard Model (SM) and its Minimal Supersymmetric extension (MSSM). Imperial College, London, UK
Introduction
The Large Hadron Collider (LHC) is designed to collide two counter rotating beams of protons or heavy ions. Proton-proton collisions are foreseen at an energy of 7 TeV per beam with a planned start-up in 2007. The Compact Muon Solenoid (CMS) is one of the two general purpose detectors that will be installed on the collider. One of its main challenges is the discovery of the Higgs boson. In this report, the CMS potential for the Higgs boson discovery is discussed in the framework of the Standard Model (SM) and its Minimal Supersymmetric extension (MSSM). More details can be found in [1].
Discovery Potential for the Standard Model Higgs Boson
The main production mechanism for the Higgs boson at 14 TeV is the gluon-gluon fusion and the WW/ZZ fusion. For low Higgs boson masses (below 130 GeV/c²), the most promising channel for discovery is the H → γγ decay, which will allow a very fast discovery. For very high Higgs boson masses (above 500 GeV/c²) the cross section for the qq → qqH production process is large and the decay channels were found to yield the highest sensitivity even though the large backgrounds and the large Higgs boson width make the discovery much more difficult compared to the lower Higgs boson masses.
The statistical significance expected for 30 fb⁻¹ of integrated luminosity can be seen in Figure 1 when all channels are combined.
Discovery Potential for the MSSM Higgs Bosons
In the MSSM there are five Higgs bosons: two CP-even Higgs boson mass eigenstates h, H, a charged Higgs boson pair H± and a CP-odd neutral pseudoscalar A. At tree-level the Higgs boson sector is determined by two parameters. A common choice is the ratio of the vacuum expectation values of the two doublets, tan β = v2/v1, and the mass of the pseudoscalar Higgs boson M_A. Radiative corrections modify the predictions of the model significantly: the mass of the lightest Higgs boson at tree level is predicted to be below M_Z, which is already excluded by LEP [2], but after corrections its mass may rise up to 135 GeV/c². Several MSSM Higgs boson scenarios have been proposed depending on different choices of the soft SUSY breaking parameters. For the results presented in this report the SUSY parameters are fixed to the values used in the LEP studies [3].
In the large M_A limit (M_A ≫ M_Z), the so-called decoupling region, the heavy Higgs bosons (H, A, H±) are almost degenerate in mass. The lighter Higgs boson h is SM-like, so its production cross sections and decay partial widths are very close to those of the SM Higgs boson. The discovery potential for the lighter scalar Higgs boson h can be seen in Figure 2 for 30 fb⁻¹ of integrated luminosity. decay channel in the production, by suppressing the backgrounds with an isolated lepton from the accompanying W decay. The discovery potential for the H± Higgs bosons can be seen in Figure 4 for 30 fb⁻¹ of integrated luminosity.
"year": 2006,
"sha1": "8894b04afe7a97890f0579842b85d0a3eee3a67d",
"oa_license": "CCBY",
"oa_url": "http://cds.cern.ch/record/951367/files/CR2006_021.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "82f57444bbfad1991b98cb5221dc97b42818b46c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
67810691 | pes2o/s2orc | v3-fos-license | Spectroscopic ( FTIR , Raman , NMR ) and DFT Quantum Chemical Studies on Phenoxyacetic Acid and Its Sodium Salt
Abstract. FT-IR, Raman, and NMR spectra of phenoxyacetic acid and its sodium salt were recorded and analyzed. Optimized geometrical structures of studied compounds were calculated by B3LYP/6-311++G∗∗ method. The atomic charges were calculated by Mulliken, NPA (natural population analysis), APT (atomic polar tensor), MK (Merz-Singh-Kollman method), and ChelpG (charges from electrostatic potentials using grid-based method) methods. Geometric as well as magnetic aromaticity indices, dipole moments, and energies were also calculated. The theoretical wavenumbers and intensities of IR spectra as well as chemical shifts in H and C NMR spectra were obtained. The calculated parameters were compared with experimental characteristics of these molecules.
Introduction
Phenoxyacetic acid has been investigated by various researches because of its biological activities.It is useful in the treatment of insulin resistance and hyperglycemia [1].Derivatives of phenoxyacetic acid are widely used in herbicide and pesticide formulations.The molecular basis of their mode of action is not fully understood.The estimation of the electronic charge distribution in metal complexes and salts allows to predict what kind of deformation of the electronic system of ligand would undergo during complexation [2].It also permits to make more precise interpretation of mechanism by which metals affect the biochemical properties of ligands.In this paper the influence of sodium cation on the electronic system of phenoxyacetic acid was studied.
Experimental
Sodium phenoxyacetate was prepared by dissolving powdered phenoxyacetic acid in an aqueous solution of sodium hydroxide in a stoichiometric ratio (1 : 1). Both reagents were obtained from Aldrich Chemical Company. The solution was left at room temperature for 24 h until the sample crystallized in the solid state. Precipitates were filtered, washed with water, and dried under reduced pressure at 110 °C. The obtained complex was anhydrous: in the IR spectra of the solid-state sample, the bands characteristic of water of crystallization were absent.
The IR spectra were recorded with the Equinox 55, Bruker FT-IR spectrometer within the range 4000-400 cm −1 .Samples in the solid state were measured in KBr pellets.The resolution of spectrometer was 1 cm −1 .Raman spectra of solid samples in capillary tubes were recorded in the range of 4000-400 cm −1 with a FT-Raman accessory of the Perkin Elmer System 2000.The resolution of spectrometer was 1 cm −1 .The NMR spectra of DMSO solution were recorded with the NMR AC 200 F, Bruker unit.TMS was used as an internal reference.
The density functional (DFT) hybrid method B3LYP/6-311++G** was used to calculate the optimized geometrical structures of the studied compounds (Figure 1). All theoretical calculations were performed using the Gaussian'09 suite of programs [3] running on a PC computer.
Vibrational Spectra
Experimental and theoretical bands together with their relative intensities and band assignments for phenoxyacetic acid and its sodium salt in the FT-IR and Raman spectra were obtained. Complete assignment of all bands requires the application of both IR and Raman methods supported by theoretical calculations and literature data [4]. The calculated wavenumbers were obtained with the B3LYP method and the 6-311++G** basis set. The correlation between calculated and experimentally obtained wavenumbers in the IR and Raman spectra of phenoxyacetic acid and sodium phenoxyacetate was studied and good agreement was found. The correlation coefficient R² for the phenoxyacetic acid spectra amounts to 0.9972 and for its sodium salt R² = 0.9990. The corresponding values for the Raman spectra amount to 0.9969 and 0.9975.
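As an aside on the method, a correlation coefficient of this kind can be reproduced with a short script. The sketch below is illustrative only: the wavenumber pairs are invented placeholders standing in for the assigned bands, so only the procedure, not the numbers, reflects the study.

```python
import numpy as np

# Hypothetical calculated vs. experimental wavenumbers (cm^-1); real values
# would be the assigned IR bands discussed above.
calc = np.array([1736.0, 1600.0, 1498.0, 1242.0, 1090.0, 754.0])
expt = np.array([1730.0, 1594.0, 1493.0, 1238.0, 1085.0, 751.0])

slope, intercept = np.polyfit(calc, expt, 1)     # linear fit expt ~ calc
pred = slope * calc + intercept
r2 = 1 - np.sum((expt - pred) ** 2) / np.sum((expt - np.mean(expt)) ** 2)
print(f"R^2 = {r2:.4f}")
```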
IR and Raman spectra for phenoxyacetic acid and its sodium salt are presented in Figure 1.Comparing results obtained for sodium phenoxyacetate to the, respectively, values of phenoxyacetic acid, certain changes of intensities and wavenumbers of the bands of aromatic system and carboxylic group can be noticed.
The changes of intensities and wavenumbers of the bands of aromatic system and carboxylate group in the case of sodium salt were discussed comparing to the free ligand.The characteristic bands occurring in the IR spectra of sodium phenoxyacetate, which do not exist in the spectra of free acid, for example: symmetric or asymmetric stretching vibrations ν(COO) and in plane β(COO) and out of plane γ(COO) deformations of carboxylic group were noticed.On the other hand, the lack of bands, which are characteristic for phenoxyacetic acid (the C=O band, 1736 and 1703 cm −1 ; β(OH) band, 1300 cm −1 , broad band of ν(OH), e.g.) were observed in the sodium salt spectra.The wavenumbers of aromatic bands numbered as 20b, 9b, 17a, and 5 in IR as well as in Raman spectra increase in comparison to free acid.For 2, 3, and 17b bands the increase of wavenumbers was noticed only in IR spectra.In the case of 8a and 6a bands the decrease in comparison to phenoxyacetic acid was observed in IR spectra, whereas for 2, 3, 17b, and 19a bands in Raman the decrease of wavenumbers was noticed.There are also some changes of alky chain part of studied molecules.The wavenumbers of ν as (CH 2 ) and ν(O-CH 2 ) bands shift to higher values in IR spectra of sodium phenoxyacetate, while ν s (CH 2 ) band shifts to lower values.
NMR Spectra
Theoretically as well as experimentally obtained 1H NMR and 13C NMR chemical shifts of phenoxyacetic acid and its sodium salt are presented in Figure 2. A linear correlation between the proton as well as carbon NMR shieldings of the studied compounds and the experimental data is observed. The correlation coefficients (R²) for the 1H NMR spectra amount to 0.9739 for phenoxyacetic acid (PAA) and 0.9868 for sodium phenoxyacetate (NaPA). For the 13C NMR spectra the corresponding values are 0.8945 and 0.9035. All protons in sodium phenoxyacetate are shifted diamagnetically in comparison to PAA. This tendency suggests that the introduction of the sodium atom causes a decrease in ring current intensity. Some changes in the 13C NMR spectra were also observed. The chemical shifts of almost all carbon atoms in the sodium phenoxyacetate molecule, except for the C1, C8, and C9 atoms, are lower than those in the free acid.
Calculated Molecular Structure
Optimized geometrical structures of the phenoxyacetic acid and sodium phenoxyacetate molecules were obtained using the B3LYP/6-311++G** method. The bond lengths and the angles between bonds in the sodium salt molecule in comparison to the free acid are presented in Table 1. An increase of almost all bond lengths in the aromatic ring, except for the C2-C3 and C5-C6 bonds, was observed in the sodium salt molecule in comparison to the acid. An increase of the O7-C8, C8-C9, C9-O10, and O10-11a bond lengths was also noticed, whereas the bond lengths of C1-O7, C9-O11, and O11-11a decreased in NaPA in comparison to the PAA molecule. The differences between the C9-O10 and C9-O11 as well as the O10-11a and O11-11a bond lengths almost disappeared. In the case of the angles, an increase was observed for almost all angles in the aromatic ring, except for the C3-C4-C5 and C6-C1-C2 angles in the NaPA molecule. For the C2-C1-O7 and O7-C8-C9 angles an increase was observed, whereas a decrease was noticed only for the C8-C9-O10, C9-O11-11a, and C9-O10-11a angles. Geometric and magnetic aromaticity indices [5][6][7], dipole moments, and energies were calculated and are also shown in Table 1. All geometric aromaticity indices calculated for sodium phenoxyacetate decreased in comparison to the acid molecule. This indicates that the aromaticity of the salt molecule is decreased in comparison to the free acid. This conclusion was confirmed by the values of the magnetic aromaticity indices NICS.
Mulliken, NPA, APT, MK, and ChelpG methods were used to calculate the atomic charges on the atoms of the phenoxyacetic acid molecule and its sodium salt. One of these, as an example, is presented in Figure 3. The largest changes, irrespective of the method used, are observed for the carboxylate group. The total charges calculated for COO significantly decrease in comparison to the free acid; for example, the values calculated by the Mulliken method amount to −0.766 for PAA but −1.194 for the NaPA molecule. In the case of the other methods the corresponding values are −0.473 and −0.852 (NPA); −0.292 and −0.763 (APT); −0.473 and −0.835 (MK); −0.341 and −0.851 (ChelpG method).
Conclusions
Replacement of hydrogen by sodium in the molecule causes significant changes in the geometrical structure of the studied molecules. The largest changes are noticed for the carboxylate group, as may be expected. Although almost all bond lengths in the aromatic ring increase only slightly, a decrease of the aromaticity of the studied molecules is observed. Displacements of bands in the FT-IR, Raman, as well as NMR spectra are also noticed. In the IR and Raman spectra, different changes of the bands are observed: some of them shift to higher and others to lower wavenumbers. In the 1H and 13C NMR spectra almost all bands shift to lower values in the sodium phenoxyacetate spectra in comparison to the free acid. This is a characteristic tendency for molecules in which a decrease of aromaticity is noticed.
Figure 3: Electronic charge distribution (NPA method) calculated for molecules of phenoxyacetic acid (a) and its sodium salt (b). | 2018-12-30T11:43:41.369Z | 2012-07-11T00:00:00.000 | {
"year": 2012,
"sha1": "f17c692b885f7f71983050174cef38a80a7ae32e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jspec/2012/480282.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f17c692b885f7f71983050174cef38a80a7ae32e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
49526882 | pes2o/s2orc | v3-fos-license | Vibration-Assisted Handling of Dry Fine Powders
Since fine powders tend strongly to adhesion and agglomeration, their processing with conventional methods is difficult or impossible. Typically, in order to enable the handling of fine powders, chemicals are added to increase the flowability and reduce adhesion. This contribution shows that instead of additives also vibrations can be used to increase the flowability, to reduce adhesion and cohesion, and thus to enable or improve processes such as precision dosing, mixing, and transport of very fine powders. The methods for manipulating powder properties are described in detail and prototypes for experimental studies are presented. It is shown that the handling of fine powders can be improved by using low-frequency, high-frequency or a combination of lowand high-frequency vibration.
Introduction
In the field of biotechnology, the pharmaceutical industry, production of plastics, and additive manufacturing, raw materials are increasingly provided as fine powders in order to optimize the process quality or duration.
Due to specific adhesive and cohesive material behavior that leads to agglomeration and permanent adhesion to surfaces, the handling-in particular the exact dosing, the transport, and the production of uniform powder mixtures and the dispersion of fine powders-is challenging and has not yet been satisfactorily solved.While commercialized solutions with conventional knocking, vibrating, stirring or shaking technologies work well with coarser powders, they have reached their technical limits with fine powders.
In order to enable dosing, transportation, dispersion and mixing of fine powders, new techniques have to be found.There are several ways to improve the handling of fine powders.While chemists prefer to use additives [1] to reduce adhesion and cohesion in order to improve the flowability and avoid agglomeration, physicists prefer to use air flow [2] or mechanical vibration [3] to achieve the same effect.The use of additives leads to a permanent altering of powder characteristics.Thus, the application of additives may have a negative effect on the subsequent process steps.The application of vibrations, on the other hand, does not cause a permanent change in the powder characteristics.The effect is reversible, and when used correctly does not affect the following process steps.Hereafter, new methods are described that contribute to improving the handling of fine powders.
The following experiments were all done with standard flour (Type 405).With particle sizes between 2 and 200 µm, it exhibits all the characteristic features of fine powders, such as poor flowability, strong agglomeration and strong adhesion.
Flowability and Powder Dosing
Poor flowability makes dosing a difficult task.When handling coarse, free-flowing powders, gravity is in most cases sufficient to remove powder from a conical container (for example a silo or funnel).Fine powders, however, tend to bridging at conical outlets.The powder sets and interrupts any gravity-driven flow.If bridging occurs unexpectedly during a process, the most common solutions to enable the flow are hammer beats or inflowing air.As shown in [4], the correct use of mechanical vibrations can reduce the shear strength of cohesive powders and thus improve the flow behavior.
A simple method for determining the flowability of powders is the measurement of the angle of repose on a powder heap [5].To investigate the effect of vertical vibrations on the flowability of cohesive powders, a heap of flour was placed on a circular vibratory surface with a diameter of 30 mm. Figure 1 shows the described setup.Figure 1a was taken at a static state and shows an angle of repose of 55 • .With increasing vibration amplitude, see Figure 1b,c, the angle of repose decreases.At a critical vibration amplitude, the whole powder heap is dispersed and shaken off.The same experiment was carried out at different frequencies and vibration amplitudes.Figure 2 shows a set of individual measurements at frequencies of 30, 50 and 100 Hz.There is a linear relationship between the angle of repose and the acceleration amplitude; frequency doesn't seem to have an impact.However, for higher frequencies, this relationship becomes increasingly non-linear.At frequencies above 100 Hz, especially in the ultrasonic range, the vibration is highly damped by interparticular friction.The vibration does not penetrate far enough into the powder heap and therefore has less influence on the flowability.However, ultrasonic vibrations can still be used to reduce flowability, but are limited to very small amounts of dry fine powders, as shown by [6][7][8] on the microfeeding of dry fine powders.
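To relate measurements taken at different frequencies, the displacement amplitude of the harmonic excitation can be converted into the acceleration amplitude a = (2πf)²·x̂, which is the quantity the angle of repose scales with here. The following minimal Python sketch performs this conversion; the displacement values used are illustrative placeholders, not measured data from the experiment.

```python
import numpy as np

def acceleration_amplitude(freq_hz, displacement_m):
    """Peak acceleration of a harmonic vibration x(t) = x_hat * sin(2*pi*f*t)."""
    return (2 * np.pi * freq_hz) ** 2 * displacement_m

# Illustrative (not measured) displacement amplitudes at the three frequencies
# used in the angle-of-repose experiment.
for f, x_hat in [(30, 200e-6), (50, 80e-6), (100, 20e-6)]:
    a = acceleration_amplitude(f, x_hat)
    print(f"f = {f:3d} Hz, x_hat = {x_hat*1e6:5.0f} um -> a = {a:5.1f} m/s^2 ({a/9.81:.2f} g)")
```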
For dosing powders, sufficient flowability is a basic requirement.By means of a coordinated increase of the flowability, the powder flow in a dosing system can be controlled.Figure 3 shows systems for fine and coarse dosing of fine powders.For fine dosing, a glass pipette with a knocking device is used, as shown in Figure 3a.Fine powders like flour build bridges, and thus block up the conical outlet of the pipette.When knocking at the pipette, the powder is fluidized and flows out.For the shown system with an outlet diameter of 2 mm, the knocking frequency was chosen within the range between 100 and 300 Hz.By varying frequency and amplitude of the excitation signal, the powder flow could be controlled between 1 and 70 mg/s.A similar system, consisting of a cylindrical pipe with a sieve at the outlet, as shown in Figure 3b, is used to reach higher flow rates.Due to bridging and agglomeration, fine powder clogs over the sieve.When the powder is excited to vibrations, it flows through the sieve.A system was built up with a pipe diameter of 20 mm and a mesh size of 1 mm.At excitation frequencies of up to 100 Hz and amplitudes below 1 mm, flow rates between 120 mg/s and 3 g/s could be set.
The two systems show that coarse dosing of fine powders can be made possible by the use of vibrations as well as fine dosing.To achieve an uninterrupted powder flow at low flow rates, the two systems can easily be combined, so that the system for fine dosing is filled at certain intervals by the system for coarse dosing.In both systems, powder flows can be controlled by adjusting the vibration amplitude using a characteristic diagram of powder flow and vibration amplitude.A measurement of the powder flow during operation with conventional methods like a load cell is difficult due to the strong vibrations.To regulate the powder flow in a feedback control, complex signal processing must be handled or special sensors must be used.
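A simple way to use such a characteristic diagram in practice is to invert it numerically. The sketch below does this for a hypothetical calibration curve of the fine-dosing pipette; the amplitude and flow values are invented placeholders, and a real curve would come from calibrating the device described above.

```python
import numpy as np

# Hypothetical calibration: excitation amplitude (arbitrary drive units)
# versus measured powder flow (mg/s).
amplitude = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
flow_mg_s = np.array([1.0, 5.0, 15.0, 30.0, 50.0, 70.0])

def amplitude_for_flow(target_mg_s):
    """Invert the characteristic curve by linear interpolation."""
    if not flow_mg_s[0] <= target_mg_s <= flow_mg_s[-1]:
        raise ValueError("target flow outside calibrated range")
    return np.interp(target_mg_s, flow_mg_s, amplitude)

print(amplitude_for_flow(20.0))  # drive amplitude for a 20 mg/s set point
```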
Manipulation of Friction Forces and Powder Transport
The fact that friction forces can be manipulated by vibrations has been proven by different scientists [9][10][11][12][13].Depending on the direction of the vibration, different mechanisms lead to apparent friction reduction.All mechanisms can be explained using a mass on an inclined plane, see Figure 4.When the surface below the mass is excited to vibrate in sliding direction, the friction force has an accelerating effect for short phases instead of only acting as a brake, which leads to a lower time-average of friction force, and thus to an apparently reduced coefficient of friction.A transverse vibration in the sliding plane causes a similar effect [9,10].When exciting the surface to orthogonal vibration there is indeed friction reduction possible if the mass temporary loses contact as well as if the contact persists permanently [13].In all cases, the vibration causes only a reduction of the time-averaged, effective frictional force.In detail, the frictional force oscillates.To simplify the modeling, the described effects can be summarized in a reduced time-averaged, effective friction coefficient µ.For simplification, it is also assumed that the coefficients of static and sliding friction are identical.So far, literature covers experiments on metallic solids only.Thus, similar experiments were carried out on dry fine powders.
A common method for the determination of friction coefficients is to measure the minimum inclination angle at which a mass starts to slide. According to Figure 4, the powder mass m_P will slide down when the part of the weight force m_P·g parallel to the contact surface is higher than the frictional force, i.e., when

m_P g sin γ > µ F_N.   (1)

The normal force F_N between plane and powder particle consists of a partial weight force and the adhesion force F_a. If the adhesion force F_a is negligible, the static friction coefficient µ can be calculated based on the inclination angle γ:

µ = tan γ.   (2)

Although neglecting the adhesion force might cause larger errors, this method is well suited to studying the influence of vibration on the effective friction. In a simple test rig, the coefficient of effective friction µ for flour was determined under the influence of ultrasonic vibrations. Figure 5 shows the experimental setup consisting of an ultrasonic bolted Langevin transducer with a magnifying sonotrode with a cuboid tip. The transducer is operated at its resonance frequency of about 20 kHz. The powder is placed on top of the sonotrode tip. With the shown setup, the influence of longitudinal vibrations (i.e., parallel to the sliding direction) on the effective friction coefficient could be studied. The transducer can be tilted along all three axes so that vibrations in the transversal (i.e., perpendicular to sliding direction, in sliding plane) and orthogonal (i.e., perpendicular to sliding plane) directions can be examined as well. In an experimental series, the effective friction coefficient µ of flour on aluminum alloy (polished surface) at vibration excitation in longitudinal, transversal, and orthogonal direction with different vibration amplitudes was determined. A thin layer of powder was spread on the vibrating surface. While the inclination angle was fixed, the vibration amplitude was slowly increased until the particles moved into a sliding state. Since the sliding does not start at the same amplitude for all particles, average displacement amplitudes were recorded.
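For completeness, Equations (1) and (2) translate directly into a few lines of Python; the adhesion term is kept as an optional argument to show when the simple µ = tan γ estimate applies. This is a small illustrative helper, not part of the measurement software used in the study.

```python
import numpy as np

def mu_from_angle(gamma_deg):
    """Effective friction coefficient from the sliding angle, Eq. (2)."""
    return np.tan(np.radians(gamma_deg))

def mu_with_adhesion(gamma_deg, mass_kg, adhesion_N, g=9.81):
    """Same estimate, but keeping the adhesion force F_a in the normal force."""
    gamma = np.radians(gamma_deg)
    return mass_kg * g * np.sin(gamma) / (mass_kg * g * np.cos(gamma) + adhesion_N)

print(mu_from_angle(30.0))                 # ~0.577
print(mu_with_adhesion(30.0, 1e-9, 0.0))   # identical when F_a = 0
```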
As shown in Figure 6, all the vibrational directions show a significant reduction in the effective friction coefficient µ with increasing vibration amplitude.As lower friction coefficients are achieved at the same displacement amplitudes, the excitations in transversal and orthogonal directions are much more efficient than in the longitudinal direction.Knowing that the friction of powders can be apparently reduced by ultrasonic vibration, many chute-like transport processes for powders could be optimized.Additionally, conventional transport mechanisms like vibratory conveyors using the principle of inertia, which have increasingly problems with finer powders, can be optimized using "friction reduction" by ultrasound.As an example, the setup of a powder transport principle that is based on a harmoniously vibrating pipe and coordinated friction manipulation is described in the following [14][15][16].
Schematically, the powder-carrying substrate vibrates harmoniously in an axial direction x_C(t) with frequency f_a and amplitude x̂_C, as shown in Figure 7 and Equation (3):

x_C(t) = x̂_C sin(2π f_a t).   (3)

Under negative relative velocity ẋ_C(t) − ẋ(t), a vibration in orthogonal direction z_C(t) is superimposed, as shown in Equation (4), so that the effective coefficient of friction between powder mass m and pipe is reduced. The powder mass is therefore highly accelerated during time periods of positive relative velocity and slightly decelerated during periods of negative relative velocity due to lower friction, and thus moves in one direction.

According to the described principle, a transport system was built, as shown in Figure 8 [14,15]. The conveying part consists of a pipe which is excited to harmonious axial vibration by a voice-coil actuator (12 Ω, 75 W, Sony, Tokyo, Japan). The friction reduction is achieved by a radial vibration of the pipe, which corresponds to an orthogonal vibration of the substrate. This radial vibration is excited by an annular piezoelectric actuator (similar to Sonox P8, Ceramtec, Plochingen, Germany), which is adhered around the pipe wall and excites the pipe at the radial resonance frequency. In order to ensure reliable transport, the excitation signals of the voice-coil actuator and the piezoelectric actuator have to be synchronized exactly. The excitation signal of the piezoelectric actuator, which generates the radial vibration, has to be activated depending on the relative velocity, which presupposes knowledge of the powder velocity ẋ. Since the online measurement of the powder velocity requires significant effort, the piezoelectric actuator is turned on only at negative velocities of the conveying pipe, ẋ_C ≤ 0.
This is much easier, as the stroke of the pipe is directly proportional to the excitation voltage of the voice-coil actuator. However, this simplification comes along with a lower maximum of the powder velocity. The excitation signals for the voice-coil actuator, as well as the piezoelectric actuator, were generated by a signal generator (Model 195, Wavetek, San Diego, CA, USA) which can output and synchronize several signals. Figure 9 displays the axial and radial displacement of the pipe for an exemplary excitation. The axial vibration, shown as a blue line with the vertical axis on the left side, is a sinusoidal curve with a frequency of 50 Hz and an amplitude of 0.5 mm. The radial vibration is shown as an orange curve with the vertical axis on the right side. The pulsed signal has a frequency of about 35 kHz, which is a radial resonance frequency of the system, and an amplitude of about 0.25 µm. Due to the high radial resonance frequency of the aluminum pipe, the transient phases of the radial vibration are extremely short. Therefore, the transportation is possible even at much higher frequencies of the axial vibration. Figure 10 shows the mean powder velocity v for different excitation amplitudes of both axial and radial vibration. This was determined as an average from the time that particles needed to travel along the pipe (20 cm). Powder velocity and mass flow can be adjusted by changing the amplitudes of either the low-frequency axial vibration or the high-frequency radial vibration of the pipe, as well as the pulse width of the radial vibration. The powder flow is reversed by shifting the phase between axial and radial excitation by 180°. The built transport system was able to transport dry fine powders at a tilt angle of more than 10° upwards. Due to the ultrasonic vibration of the pipe surface, which overcomes the strong adhesion of fine powders, only minimal residues of powder remain inside the pipe. The shown powder transportation principle can be used for very small, as well as for large, powder flows using the same pipe diameter. Experiments showed that the finest powders, with particle sizes of less than one micrometer, could be transported, as well as coarser powders.
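The gating scheme described above, with the radial excitation active only while the pipe's axial velocity is negative, can be sketched as simple signal generation in Python. The sample rate and the way the burst is gated are illustrative choices of this sketch; only the frequencies and amplitudes follow the values quoted for Figure 9.

```python
import numpy as np

fa, fr = 50.0, 35e3              # axial and radial excitation frequencies (Hz)
x_hat, z_hat = 0.5e-3, 0.25e-6   # axial / radial amplitudes (m), as in Figure 9
fs = 1e6                          # sample rate for this illustration (Hz)

t = np.arange(0, 2 / fa, 1 / fs)                 # two axial periods
x_axial = x_hat * np.sin(2 * np.pi * fa * t)      # axial displacement, Eq. (3)
v_axial = 2 * np.pi * fa * x_hat * np.cos(2 * np.pi * fa * t)

# Radial burst is switched on whenever the pipe's axial velocity is negative,
# i.e. the simplification used instead of tracking the powder velocity itself.
gate = (v_axial <= 0).astype(float)
z_radial = z_hat * gate * np.sin(2 * np.pi * fr * t)
```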
Deagglomeration, Dispersion, and Mixing
There are a variety of principles for the deagglomeration and dispersion of powders using ultrasound.The easiest might be just putting powder on a vibrating surface.This principle is shown in Figure 11a.When touching the vibrating sonotrode surface, agglomerates receive an extremely high acceleration and are deagglomerated and dispersed.To achieve similar results without contact with the vibration surface, an intensive airborne ultrasonic field is needed.This can be generated by the vibrating surface of a sonotrode and a passive reflector (see Figure 11b) or with two opposite vibrating sonotrodes.In both cases, the distance should be adjusted to achieve a resonant tuned standing wave field for maximizing sound pressure.Loose particles, as well as agglomerates, are trapped at pressure nodes, and due to the extreme pressure fluctuations over time, powders are deagglomerated and dispersed [17].The disadvantages of this principle include the fact that the powder is dispersed in all directions and might touch the reflector, where it sticks or is dispersed as described above.In order to get a reliable powder flow, an additional airflow is advisable.A third option for achieving high sound pressure is shaping the vibrating surface so that the sound is focused.At this focus, the acoustic waves veer away from the sonotrode surface.When powder is dosed into the focus, it is deagglomerated and dispersed in a preferred direction.Figure 11c shows such a setup, consisting of a transducer with specially shaped sonotrode and dosing apparatus.To show the effectiveness of ultrasonic deagglomeration and dispersion, flour was dosed using the dosing apparatus described in Figure 3a with an outlet diameter of 2 mm.Thus, the agglomerates have a size of up to 2 mm, see Figure 12a.When ultrasound is applied as shown in Figure 11c, the agglomerates are dissolved, and a fine powder dust is generated, see Figure 12b.The diagram in Figure 13 shows the relative frequency of agglomerates or particles over their diameter.Agglomerates with diameters of up to 2 mm occur when the flour is dosed without using ultrasound.When ultrasound is applied, the maximum size of the agglomerates is at about 200 µm, which corresponds to the maximum particle size of flour and thus allows for the assumption that all agglomerates are dissolved.A homogeneous mixture of fine powders can be achieved dosing two powders into the ultrasonic acoustic field inside a mixing chamber, see Figure 14.The powders are deagglomerated, dispersed, and a fine powder dust is formed.Due to turbulences in the mixing chamber, caused by the same acoustic field, the dust of the two powders is mixed.At the outlet of the mixing chamber, a homogeneously distributed powder mixture is obtained.Figure 15 shows the results of mixing flour and cocoa.Without using ultrasound, large agglomerates occur, see Figure 15a.When ultrasound is applied, a homogeneously distributed powder mixture is achieved, see Figure 15b.
Conclusions
It was shown that the use of vibrations enhances the handling of dry fine powders.Vibrations can be used to fluidize powders, prevent deposits, reduce effective friction, deagglomerate, disperse and mix powders.Depending on the application, low-frequency or high-frequency vibrations are required to achieve optimum effects in increasing flowability, reducing friction forces and separation of agglomerates.Thus, the application of vibration is a powerful technique to improve dosing, transport, deagglomeration, dispersion and mixing of fine powders.
The dosing systems shown in Figure 3a,b are able to generate powder flows in the range of a few mg/s up to a few g/s.The shown transportation principle is able to transport powders with flow rates of a few g/s but also single particles.The mixing principle achieves best results with low powder flows at low speeds.Thus, the principles shown are well suited for use in laboratory equipment or in other processes with small powder flows.
Figure 3. Dosing systems that use vibrations to enable powder flow; (a) glass pipette with knocking device, (b) cylindrical vessel with vibrating sieve at the outlet.
Figure 4. Resulting forces acting on a powder particle on an inclined plane.
Figure 5. Experimental Setup for measuring coefficients of effective friction µ at ultrasonic vibration parallel to the sliding direction.
Figure 6.Effective friction coefficient µ of flour (Type 405) on aluminum alloy for vibration of the surface in longitudinal, transversal and orthogonal direction.
Figure 7. Schematic of the powder transport by coordinated manipulation of friction forces.
Figure 8. System using coordinated manipulation of friction forces for the transportation of dry flour.
Figure 9. Axial and radial displacement of the pipe vibration in the powder transportation system; pipe material: aluminum alloy.
Figure 10.Mean powder velocity for variation of amplitudes of axial and radial excitation of the pipe; test powder: flour; frequencies f a = 50 Hz, f r ≈ 35 kHz.
Figure 11.Principles for deagglomeration and dispersion of dry fine powders: (a) flour on vibrating sonotrode, f = 20 kHz; (b) flour in a standing ultrasonic wave, f = 46 kHz; (c) flour in a focused ultrasonic wave, f = 35 kHz.
Figure 12.Comparison of the particle size at dispersing flour with the system shown in Figure 11c (outlet diameter of dosing module: 2 mm) with and without using ultrasound; (a) flour being dosed without dispersion; (b) flour after dispersion.
Figure 13.Comparison of the particle size distribution of dispersing flour with the system shown in Figure 11c with and without ultrasound.
Figure 14.Setup for homogeneous mixing of two dry fine powders using a focused ultrasonic wave.
Figure 15.Comparison of mixing flour and cocoa without (a) and with (b) using focused ultrasound. | 2018-06-29T08:40:10.282Z | 2018-04-10T00:00:00.000 | {
"year": 2018,
"sha1": "1a25de694779d8a413f5888bd889d3f8b636139d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0825/7/2/18/pdf?version=1525347918",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1a25de694779d8a413f5888bd889d3f8b636139d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
219176779 | pes2o/s2orc | v3-fos-license | Quantum Circuits for Sparse Isometries
We consider the task of breaking down a quantum computation given as an isometry into C-NOTs and single-qubit gates, while keeping the number of C-NOT gates small. Although several decompositions are known for general isometries, here we focus on a method based on Householder reflections that adapts well in the case of sparse isometries. We show how to use this method to decompose an arbitrary isometry before illustrating that the method can lead to significant improvements in the case of sparse isometries. We also discuss the classical complexity of this method and illustrate its effectiveness in the case of sparse state preparation by applying it to randomly chosen sparse states.
I. INTRODUCTION
A general quantum computation on an isolated system can be represented by a unitary matrix. In order to execute such a computation on a quantum computer, it is common to decompose the unitary into a quantum circuit, i.e., a sequence of quantum gates that can be physically implemented on a given architecture. There are different universal gate sets for quantum computation. Here we choose the universal gate set consisting of C-not and single-qubit gates [1]. We measure the cost of a circuit by the number of C-not gates as they are usually more difficult to implement than singlequbit gates and since the number of single-qubit gates is bounded by about twice the number of C-nots [2,3].
More generally, we can consider operations where the dimensions of the input are different to those of the output. An isometry from m qubits to n qubits can be represented by a 2 n × 2 m matrix V satisfying V † V = I. Unitaries and state preparation are special cases of isometries where m = n or m = 0 respectively. An isometry with m = n can be implemented by extending it to a unitary and implementing the unitary instead. The freedom in the extension can be exploited to lower the number of gates required.
We will briefly summarize previous work on decompositions of generic quantum computations using the gate set consisting of C-not and single-qubit gates. Arbitrary unitaries can be decomposed using (23/48)·4^n C-nots [4] to leading order, about twice as many as the best known lower bound. The most efficient known method for preparing arbitrary states requires about 2^n C-nots to leading order [5][6][7], which is about twice the best known lower bound [5]. The decomposition of arbitrary isometries has been considered in [7,8]. Near optimal methods for decomposing arbitrary isometries exist, and again they achieve C-not counts approximately twice as large as the best known lower bounds. The implementation of quantum channels has been considered in [9] and these have been implemented along with POVMs and instruments in [10].
In this work we focus on the special case of sparse isometries, including in particular sparse state preparation. We present a method for decomposing arbitrary isometries that achieves essentially the same near optimal C-not counts as the methods in [7,8], and adapts well to the case of sparse isometries, where the C-not count will depend on the number of non-zero entries and their structure. Some particular examples of sparse unitaries have been studied in previous work, e.g., diagonal gates [4], uniformly controlled singlequbit gates [6] and permutation gates [11]. The case of efficiently computable sparse unitaries with a polynomial number of non-zero entries per row or column has also been considered [12], and can be implemented using a polynomial number of C-nots.
The general approach taken here to implement an isometry V from m to n qubits is to apply a sequence of elementary gates G = G k · · · G 1 G 0 to the columns of the isometry in order to reduce it to the first 2 m columns of the identity matrix on n qubits denoted by I n,m , that is GV = I n,m . Then G † is an extension of V to a unitary and yields an implementation of V using elementary gates. Our decompositions are based on Householder reflections, whose significance for (dense) circuit decompositions has been considered in [13,14].
The main results presented in this work are summarised in Table II. In particular we can bound the number of C-not gates required to implement a sparse isometry in terms of a quantity that we call the matrix envelope (see Section IV B 2), or in terms of the number of non-zero entries of the isometry.
II. SPARSE STATE PREPARATION
State preparation is the main building-block of the decompositions used in this work. In this section we introduce a method for implementing sparse state preparation more efficiently than is possible in the dense case.

Table I (excerpt). C-not counts for multi-controlled gates: a k-controlled single-qubit gate C_k(U) can be implemented with 6k − 4 C-nots using k − 1 clean ancillas [15, Corollary 4]; a k-controlled not gate C_k(X) with 16k − 8 C-nots using 1 dirty ancilla [7, Corollary 29]; and a C_k(X) with 8k − 12 C-nots if k ≥ 5.
Definition 1. Let |v⟩ be a state on n qubits. We say that a unitary SP_v on n qubits implements state preparation for |v⟩ if SP_v |0⟩^⊗n = |v⟩.

We start by presenting a useful pivoting algorithm for permuting entries in a sparse state such that all non-zero entries are grouped together. The idea is to then perform a decomposition scheme for dense state preparation on the grouped entries, which correspond to the state of a subset of the n qubits.
Lemma 2.
Let |v be a state on n qubits and let nnz(v) denote the number of non-zero entries of |v in the computational basis. Let s = log 2 nnz(v) . Then there exists a permutation gate Piv v that disentangles n − s qubits in a computational basis state, i.e., Piv v |v = |i n−s ⊗ |ṽ s for some s-qubit state |ṽ s . Let N Piv (n, s) denote the number of C-nots required for pivoting any state on n qubits with at most 2 s non-zero entries and let N C n s (X) denote the number of C-nots required to implement an s-controlled not when there are a total of n qubits. Then N Piv (n, s) ≤ (n − 1 + N C n s (X) ) nnz(v). Explicit counts are given in Table II. Proof. If s = n there is nothing to do so assume s < n. The given state can be represented as a column vector in the computational basis, where the basis states can be written in terms of n binary indices |b 1 b 2 . . . b n , which we split into two |b 1 . . . b n−s |b n−s+1 . . . b n . The first of these corresponds to a block of the vector. The goal is then to move all non-zero elements of |v into a single block. We achieve this by using the following algorithm: 1. If all non-zero entries are in the target block, we stop the algorithm.
2. Pick a non-zero entry outside the target block and a zero entry inside the target block. Write the basis state of the non-zero entry as |t⟩_{n−s}|r⟩_s and that of the zero entry as |t′⟩_{n−s}|r′⟩_s.
3. Choose a qubit on which t and t′ differ.
4. With this as a control qubit, use at most n − 1 C-nots to adjust |t⟩_{n−s}|r⟩_s to |t̃⟩_{n−s}|r′⟩_s such that t̃ and t′ differ only on the control qubit.
5. Use one s-controlled not (controlling on |r′⟩_s) to exchange |t̃⟩_{n−s}|r′⟩_s and |t′⟩_{n−s}|r′⟩_s. Note that none of the other entries of the target block are affected by this process.
At the end of this algorithm, all non-zero entries of |v are in the target block. Thus we used at most n − 1 C-nots and one s-controlled not to insert one non-zero entry into the target block. Since no non-zero entry ever leaves the target block, the claimed bound follows.
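A minimal classical model of this procedure, useful for checking the bookkeeping, is sketched below in Python. It tracks each gate as a permutation of computational-basis indices, takes the most significant n − s qubits as the block label and block 0 as the target (both arbitrary choices, cf. Remark 3), and a control on the value 0 would in practice be realized by conjugating the corresponding C-not with X gates. It is an illustration, not an optimized implementation.

```python
import numpy as np

def pivot(vec, n, s, target_prefix=0):
    """Classically track the pivoting of Lemma 2: move every non-zero amplitude
    into the block whose first n-s bits equal target_prefix.  Gates are modelled
    as permutations of basis indices; qubit 0 is the most significant one."""
    vec = np.asarray(vec, dtype=complex).copy()
    gates = []

    def bit(index, q):                       # value of qubit q in a basis index
        return (index >> (n - 1 - q)) & 1

    def apply_cx(state, controls, target):
        """controls: list of (qubit, required value); flip `target` where all match."""
        out = state.copy()
        for idx in range(len(state)):
            if all(bit(idx, q) == val for q, val in controls):
                out[idx ^ (1 << (n - 1 - target))] = state[idx]
        return out

    def in_target(idx):
        return (idx >> s) == target_prefix

    while True:
        outside = [i for i in range(len(vec)) if vec[i] != 0 and not in_target(i)]
        if not outside:
            return vec, gates
        src = outside[0]
        dst = next(i for i in range(len(vec)) if in_target(i) and vec[i] == 0)
        ctrl = next(q for q in range(n - s) if bit(src, q) != bit(dst, q))
        cval = bit(src, ctrl)
        # Step 4: C-nots controlled on `ctrl` align src with dst on all other qubits.
        for q in range(n):
            if q != ctrl and bit(src, q) != bit(dst, q):
                vec = apply_cx(vec, [(ctrl, cval)], q)
                gates.append(("CX", ctrl, q))
                src ^= 1 << (n - 1 - q)
        # Step 5: s-controlled not on the last s qubits moves the entry into the block.
        controls = [(q, bit(dst, q)) for q in range(n - s, n)]
        vec = apply_cx(vec, controls, ctrl)
        gates.append(("C^s X", ctrl))

# Example: 3-qubit state with 2 non-zero amplitudes (s = 1).
v = np.zeros(8, dtype=complex)
v[[3, 6]] = 1 / np.sqrt(2)
out, gates = pivot(v, n=3, s=1)
print(np.nonzero(out)[0], len(gates))   # non-zeros now sit in indices 0..1
```

On this example the non-zero amplitudes end up in indices 0 and 1 using two C-nots and two s-controlled nots, within the bound of the Lemma.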
Remark 3.
In the preceding Lemma we split the vector into blocks, where we have chosen to separate the n − s most significant qubits from the s remaining ones. However, any splitting into n − s and s qubits would work. In addition, the target block and the order in which to proceed are not fixed and none of these choices affect any of the decompositions used in this work. Making these choices in the right way can reduce the C-not counts. See Section V B for details on implementing the algorithm.

Remark 4. Using Table I, the following bounds on the C-not count hold if a dirty ancillas are available:
N_Piv(n, s) ≤ (n + 16s² − 28s − 3) nnz(v) for n + a ≥ s + 1,
N_Piv(n, s) ≤ 27n·2^n for n + a ≥ s + 1,
N_Piv(n, s) ≤ (n + 16s − 9) nnz(v) for n + a ≥ s + 2,
N_Piv(n, s) ≤ (n + 8s − 13) nnz(v) for n + a ≥ s + s/2, s ≥ 5.

In the case n = s + 1, a = 0, there are not enough qubits available for any of the known decompositions of an s-controlled not and hence we can decompose it as an s-controlled single-qubit gate (first line) or note that the entire pivoting can be written as a permutation (second line; see Appendix C for a more precise bound). The first of these gives a better count for small n.
This pivoting algorithm allows us to find a scheme with low cost for the preparation of sparse states.
Corollary 5.
Let |v⟩ be a state on n qubits and let nnz(v) denote the number of non-zero entries of |v⟩ in the computational basis. Let s = ⌈log_2 nnz(v)⌉. The number of C-nots required for sparse state preparation of a state of n qubits with nnz(v) non-zero elements is bounded by N_SSP(n, s) ≤ N_Piv^∆(n, s) + N_SP(s), where N_Piv^∆(n, s) denotes the number of C-not gates used to implement pivoting up to a diagonal gate, i.e., to implement ∆ Piv for some diagonal gate ∆. Explicit counts are given in Table II.

Proof. It is sufficient to find a circuit that maps |v⟩ to |0⟩^⊗n, since the inverse of this circuit implements state preparation for |v⟩. Let Piv_v be constructed as in Lemma 2; then ∆ Piv_v |v⟩ has the form |i⟩_{n−s} ⊗ |ṽ⟩ for any diagonal gate ∆. Without loss of generality i = 0. Now we merely need to apply reverse state preparation for |ṽ⟩ on s qubits. Thus sparse state preparation can be implemented as SP_v = (∆ Piv_v)^† (1_{n−s} ⊗ SP_ṽ), which gives the claimed count.
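To get a feel for the resulting counts, the bound can be evaluated numerically. The sketch below is indicative only: it plugs in the 16s − 8 count for an s-controlled not with one dirty ancilla from Table I and the leading-order 2^s cost of dense state preparation mentioned in the introduction; both choices can be swapped for other decompositions.

```python
import math

def sparse_sp_cnot_bound(n, nnz, cs_x_cost=lambda s: 16 * s - 8,
                         dense_sp_cost=lambda s: 2 ** s):
    """Indicative upper bound following Corollary 5: N_SSP <= N_Piv + N_SP(s).
    cs_x_cost and dense_sp_cost are assumptions, not fixed by the Corollary."""
    if nnz <= 1:
        return 0                      # a basis state needs no C-nots
    s = math.ceil(math.log2(nnz))
    n_piv = (n - 1 + cs_x_cost(s)) * nnz if s < n else 0   # Lemma 2 bound
    return n_piv + dense_sp_cost(s)

# e.g. a 20-qubit state with 8 non-zero amplitudes (s = 3)
print(sparse_sp_cnot_bound(20, 8))
```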
Remark 6.
For use in our sparse state preparation decomposition, it is sufficient to decompose the scontrolled not gates of Lemma 2 up to a diagonal gate. For example the Toffoli gate (with two controls) requires six C-nots when implemented exactly [1], but it can be implemented using only three C-nots when implemented up to a diagonal gate [1, Section VI B]. We are not currently aware of schemes to decompose not gates with more controls up to diagonal, but, if these were found, our counts would be improved.
Remark 7.
A more straightforward way to implement sparse state preparation is to use two-level unitaries to eliminate the non-zero entries one by one, but this leads to higher counts than the method presented here. A similar method is used in [1] to implement arbitrary unitaries.
III. GENERALIZED HOUSEHOLDER REFLECTIONS
Given a unit vector |v⟩, the standard Householder reflection [16] with respect to |v⟩ is defined as H_v = 1 − 2|v⟩⟨v|. We call |v⟩ the Householder vector associated with the reflection. The generalized Householder reflection of phase φ with respect to |v⟩ is defined as H_v^φ = 1 + (e^{iφ} − 1)|v⟩⟨v|, and coincides with the standard definition if φ = π. On certain architectures generalized Householder reflections can be implemented directly [17] and in a fault-tolerant way [18]. Standard Householder reflections can be approximated well using Clifford and T gates [13]. In the circuit model a state preparation scheme can be used to perform a generalized Householder reflection.

Lemma 8. Let SP_v be a gate implementing state preparation for |v⟩. Then H_v^φ = SP_v H_0^φ SP_v^†.

Proof. This can be seen by considering the action on an orthonormal basis containing |v⟩.
Lemma 9.
The gate H_0^φ on n qubits can be implemented using an (n − 1)-controlled single-qubit gate. The special case H_0 can be implemented using the same number of C-nots and ancilla qubits as an (n − 1)-controlled not gate. Explicit counts for these gates are given in Table I.

Proof. The gate H_0^φ is a multi-controlled single-qubit gate with n − 1 controls and hence the C-not count is as in Table I. If φ = π, then using some common gates (note the Hadamard gate H has a similar notation to the Householder reflection) and −Z = XZX = X(HXH)X we obtain that H_0 can be realized as a C_{n−1}(X) conjugated by single-qubit gates. The C-not count is thus the one of C_{n−1}(X) (see again Table I).

Table II (caption). Here n is the number of output qubits, m the number of input qubits, nnz(·) the number of non-zero entries of a state or of an isometry in the computational basis and s = ⌈log_2 nnz(v)⌉. The number of eliminations elim is defined in Eq. (5) and ed is defined in Definition 21. All results follow from the given reference and the results in Table I and other entries in the present table. Slightly different results can be derived by using different decompositions for multi-controlled not gates. An ancillary qubit is called clean if it starts in a known computational basis state and is restored to that state after the computation. It is called dirty if it starts in an unknown state which must be restored after the computation.
Given two states |v⟩ and |w⟩ we can construct a gate that maps |v⟩ to e^{iθ}|w⟩ for some real θ using a standard Householder reflection, defined as H_{v,w} := H_u with |u⟩ = (|v⟩ − e^{iθ}|w⟩)/‖|v⟩ − e^{iθ}|w⟩‖ and θ = π − arg(⟨v|w⟩), or θ = 0 if ⟨v|w⟩ = 0. We also define a generalized Householder reflection H̃_{v,w}, which likewise maps |v⟩ to |w⟩ up to a phase. The motivation for these definitions and related proofs are given in Appendix A.
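As a quick numerical check of this construction (not part of the paper's circuit machinery), the reflection H_{v,w} can be built explicitly as a matrix and verified to map |v⟩ to e^{iθ}|w⟩:

```python
import numpy as np

def householder_map(v, w):
    """Standard Householder reflection H_{v,w} sending |v> to e^{i theta}|w>,
    with theta = pi - arg(<v|w>) (or theta = 0 if <v|w> = 0), realised as
    H_u = I - 2|u><u| for |u> proportional to |v> - e^{i theta}|w>."""
    v = np.asarray(v, dtype=complex)
    w = np.asarray(w, dtype=complex)
    ovl = np.vdot(v, w)                          # <v|w>
    theta = 0.0 if np.isclose(ovl, 0) else np.pi - np.angle(ovl)
    u = v - np.exp(1j * theta) * w
    nu = np.linalg.norm(u)
    if np.isclose(nu, 0):                        # |v> already equals e^{i theta}|w>
        return np.eye(len(v)), theta
    u = u / nu
    return np.eye(len(v)) - 2 * np.outer(u, u.conj()), theta

rng = np.random.default_rng(0)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
v /= np.linalg.norm(v)
w = np.eye(4)[0]                                 # |0> on two qubits
H, theta = householder_map(v, w)
assert np.allclose(H @ v, np.exp(1j * theta) * w)
```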
IV. HOUSEHOLDER DECOMPOSITION
Our goal is to find a decomposition of any isometry V from m to n qubits. Let I n,m denote the first 2 m columns of the identity gate on n qubits. If G = G k · · · G 1 G 0 is a product of elementary gates on n qubits such that GV = I n,m , then G † is an extension of V to a unitary. Thus G † yields an implementation of V using elementary gates.
Householder reflections provide a straightforward method for implementing arbitrary isometries. Let |v 0 = V |0 be the first column of V and consider H v 0 ,0 , the Householder reflection mapping |v 0 to |0 up to a phase. We can reduce the first column (and row by orthogonality) of V by applying the Householder reflection to the isometry, i.e., the only entry in the first row and column of H v 0 ,0 V is that corresponding to |0 0|. Using the same idea the isometry can be reduced column by column to a diagonal isometry. Applying a diagonal gate on m qubits then yields I n,m . See Figure 1 for a schematic representation of the decomposition.
Before we describe the decomposition in more detail we show how the reduction of a column via Householder reflection affects the other columns of the isometry.
Lemma 10. Let V be an isometry. Let v_j = V|j⟩ be the j-th column of V and i be the target row index. The Householder reflection H_{v_j,i} reduces the j-th column to the i-th row (i.e., such that its only non-zero element is in the i-th row). For s ≠ i and t ≠ j we have H_{v_j,i} V |j⟩ = e^{iθ}|i⟩, ⟨i| H_{v_j,i} V |t⟩ = 0 and

⟨s| H_{v_j,i} V |t⟩ = ⟨s|V|t⟩ + e^{−iθ} ⟨s|V|j⟩⟨i|V|t⟩ / (1 + |⟨i|V|j⟩|).

Figure 1. The basic idea of the Householder decomposition for dense isometries. Here * represents an arbitrary complex entry. Each step reduces one column without affecting the previous columns. The rows are reduced automatically due to the orthogonality of the columns. The final diagonal gate sets the phases on the diagonal equal to one.

Proof. By construction we have H_{v_j,i} V |j⟩ = e^{iθ}|i⟩ and by orthogonality of the columns ⟨i| H_{v_j,i} V |t⟩ = 0 for t ≠ j. The final case given in the statement follows by expanding the definition of H_{v_j,i} as a reflection about |u⟩ ∝ |v_j⟩ − e^{iθ}|i⟩ and using ⟨v_j|V|t⟩ = δ_{jt}.

Corollary 11. If the j-th column is reduced to the i-th row, then the entry of V at position s, t does not change if ⟨i|V|t⟩ = 0 or ⟨s|V|j⟩ = 0.
A. Dense isometries
First we consider the Householder decomposition for dense isometries. This decomposition works for any isometry, but does not take advantage of any sparseness. The idea is to reduce the columns one by one. It follows from Corollary 11 that previously reduced columns are not affected by subsequent Householder reflections. More precisely, define V_0 = V and iteratively V_{i+1} = H_{v_i, i} V_i with v_i = V_i |i⟩, for i = 0, . . . , 2^m − 1. Then V_{2^m} = I_{n,m} ∆_m, where ∆_m denotes a diagonal gate on m qubits. This proves the following Lemma.
Lemma 12.
Let V be an isometry from m to n qubits. Then V can be implemented using 2 m Householder reflections on n qubits and a diagonal gate on m qubits.
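The column-by-column reduction behind this Lemma is easy to verify numerically at the matrix level. The following sketch builds a random isometry from 2 to 3 qubits, reduces it with standard Householder reflections as in Figure 1, and checks that what remains is I_{n,m} up to diagonal phases; it illustrates the linear algebra only and says nothing about gate counts.

```python
import numpy as np

def reflection_to_basis_state(v, i):
    """H_{v,i}: standard Householder reflection sending |v> to e^{i theta}|i>."""
    w = np.zeros_like(v)
    w[i] = 1.0
    ovl = np.vdot(v, w)
    theta = 0.0 if np.isclose(ovl, 0) else np.pi - np.angle(ovl)
    u = v - np.exp(1j * theta) * w
    nu = np.linalg.norm(u)
    if np.isclose(nu, 0):
        return np.eye(len(v))
    u = u / nu
    return np.eye(len(v)) - 2 * np.outer(u, u.conj())

def random_isometry(dim_out, dim_in, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(dim_out, dim_in)) + 1j * rng.normal(size=(dim_out, dim_in))
    q, _ = np.linalg.qr(a)                 # columns are orthonormal
    return q[:, :dim_in]

n, m = 3, 2                                # isometry from 2 to 3 qubits
V = random_isometry(2 ** n, 2 ** m)
W = V.copy()
for j in range(2 ** m):                    # reduce column j to row j (Fig. 1)
    W = reflection_to_basis_state(W[:, j], j) @ W

# W is now diagonal up to phases: a diagonal gate on m qubits finishes the job.
assert np.allclose(np.abs(W[: 2 ** m]), np.eye(2 ** m), atol=1e-10)
assert np.allclose(W[2 ** m :], 0, atol=1e-10)
```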
In any sequence of Householder reflections implemented using the construction in Lemma 8, gates of the form SP † v · SP w occur. Using the dense state preparation scheme from [5] one can merge some gates as described in [7, Theorem 1] for Knill's decomposition [8]. This essentially halves the C-not count. The structural similarity between the Householder decomposition and Knill's decomposition suggests a generalized decomposition, which we present in Appendix D.
Lemma 13. Let V be an isometry from m to n qubits with n ≥ 5. Then V can be implemented via standard Householder reflections using one dirty ancilla qubit and Proof. This follows analogously to the proof of Theorem 1 of [7], so we only point out the necessary modifications. The idea is to decompose the isometry via where S i implements state preparation of the i th column of V. Our modification is to replace the 2 m generalized Householder reflections by standard Householder reflections, and then to correct for the difference by using a diagonal gate on m qubits at the end, i.e., Lemma 9 shows that using one dirty ancilla qubit the standard Householder reflection H 0 can be implemented using 16n − 24 C-nots instead of 16n 2 − 60n + 42 used for the generalized version H φ 0 . The cost of the diagonal gate is 2 m − 2 (see Table I).
Note that, to leading order, the counts match those of [7, Theorem 1] and hence, in the case of dense isometries, the advantage of using the Householder decomposition rather than Knill's is only apparent for small cases.
Corollary 14.
Let U be a unitary on n qubits with n ≥ 5. Then U can be implemented via standard Householder reflections using one dirty ancilla qubit and N U (n) C-not gates where Proof. First we claim that a controlled isometry from m − 1 to m qubits with n − m controls can be implemented using at most N(m − 1, m) + 16(n − m)2 m−1 + 2 n−1 − 2 m−1 C-nots where N(m, n) denotes the upper bound for implementing an isometry from m to n qubits given in Lemma 13 1 . This follows from Lemma 13 and the fact that when implementing the controlled isometry by controlling all the gates in (4), we do not need to control the state preparation gates or their inverses (i.e., the S i and S † i gates do not need controls). Thus the control does not affect the first three terms in the counts from Lemma 13.
To decompose the unitary, we start by reducing the first half of the columns using the inverse of a circuit implementing an isometry from n − 1 to n qubits. This yields $|0\rangle\langle 0| \otimes I + |1\rangle\langle 1| \otimes U_1$, where $U_1$ is an (n − 1)-qubit unitary. We can then control on the first qubit and apply the inverse of an isometry from n − 2 to n − 1 qubits, and so on. Each step reduces half of the remaining columns and requires the inverse of an isometry from n − k − 1 to n − k qubits with k controls, where k = 0, 1, . . . , n − 1.
The C-not count is thus the sum of these contributions, which can be combined with the counts from Lemma 13 to upper bound the number of C-nots; for large enough n the leading order term takes a clean form. This result improves on the C-not count for a similar decomposition for dense unitaries based on Householder reflections given in [14], which achieves $4^n$ to leading order.
B. Sparse isometries
Now we consider the Householder decomposition for sparse isometries. Again the decomposition works for any isometry, but now the number of C-not gates depends on the number of non-zero entries and their structure. The method yields lower C-not counts than the decomposition for dense isometries if the isometry is sufficiently sparse. We do not specify a precise number of zeros an isometry should have in order to be considered sparse, but use W to denote isometries in the context of methods designed to make use of sparseness.
Decomposition of sparse isometries
The basic idea of the sparse Householder decomposition is that if the columns of W are sparse, then so are the Householder vectors used in the decomposition. Therefore we can use our sparse state preparation method (Corollary 5) to save C-nots. The main obstacle is fill-in generated by the Householder reflections, i.e., zero entries of the isometry that become non-zero after a reflection. Corollary 11 implies that such fill-in is relatively small. It can be further reduced by decomposing the columns of the isometry in a well-chosen order.
More precisely, let ρ be a permutation of the rows of W and let σ be a permutation of its columns. We call the pair (ρ, σ) an elimination strategy. Then the sparse Householder decomposition proceeds as in the dense case, except that at step i we reduce column σ(i) to row ρ(i). If we implement the Householder reflections up to a permutation and a diagonal gate, then, after all reductions, the isometry will be a row permutation of a diagonal isometry. More precisely, define $W_0 = W$ and iteratively apply gates $G_i$ for i = 0, . . . , $2^m - 1$, where $\Pi_i$ is a permutation gate and $\bar\rho(i) = (\Pi_{i-1} \cdots \Pi_1 \Pi_0 \bullet \rho)(i)$, with $\bullet$ denoting the action of a permutation gate on a permutation. Then $G_{2^m-1} \cdots G_0 W = \Pi_n I_{n,m} \Delta_m$. The following two lemmas show how to decompose permuted diagonal gates and how to implement Householder reflections up to a permutation and a diagonal gate.
Lemma 15.
Let $\Pi_n$ be a permutation gate on n qubits and $\Delta_m$ a diagonal gate on m qubits. An isometry of the form $\Pi_n I_{n,m} \Delta_m$ can be implemented using C-nots; explicit counts are given in Table II.

Proof. Consider the vector $|u\rangle$ formed by summing the columns of $\Pi_n I_{n,m} \Delta_m$ and the pivoting algorithm (Algorithm 1) applied to it. If the gates required to pivot $|u\rangle$ are applied to $\Pi_n I_{n,m} \Delta_m$, all the $2^m$ non-zero entries of $\Pi_n I_{n,m} \Delta_m$ are moved to the top. [Note that when taking the counts from Remark 4, we replace nnz(v) by $2^m$.] The resulting isometry has the form $I_{n,m} \Pi_m \Delta_m$.
Since it leads to the same structure, it suffices to perform the pivoting up to a diagonal. It remains to implement a permutation on m qubits (up to diagonal) and a diagonal gate on m qubits.
Lemma 16.
Let $|v\rangle$ be a state on n qubits and let nnz(v) denote the number of non-zero entries of $|v\rangle$ in the computational basis. Let $s = \lceil \log_2 \mathrm{nnz}(v) \rceil$. Then the standard Householder reflection $H_v$ can be implemented up to diagonal and permutation gates using C-nots; explicit counts are given in Table II.
Proof. It follows from Lemma 8 and Eq. (1) that $H_v$ can be implemented up to diagonal and permutation gates, which gives the claimed bound.
Now we can give the C-not counts for the sparse Householder decomposition. First we define $\mathrm{elim}(W, \rho, \sigma)$ to be the total number of eliminations during the Householder decomposition of W when using the elimination strategy (ρ, σ). Recall that $|w_i\rangle$ denotes the column reduced in the $i$th step of the decomposition, which differs in general from $W|\sigma(i)\rangle$.
Lemma 17.
Let W be an isometry from m to n qubits and let (ρ, σ) be an elimination strategy. Then W can be implemented using C-nots, where $s(i) = \lceil \log_2(1 + \mathrm{nnz}(w_i)) \rceil$; explicit counts are given in Table II.

Proof. In the Householder decomposition the steps involve $H_{w_i,\rho(i)}$. This is a standard Householder reflection with respect to a vector that may have one additional non-zero element (see Eq. (2)). The bound given then follows from Lemma 16.
The count resulting from Lemma 17 depends on the chosen elimination strategy. The optimal strategy is the one that minimizes the amount of fill-in produced and therefore the number of eliminations required.
Remark 18.
It can be beneficial to use the idea of the sparse Householder decomposition without adhering to the exact form given above. For example, using a single standard Householder reflection we can implement a k-controlled single-qubit gate up to diagonal. This gate can be used to improve the C-not count of the column-by-column decomposition [7]. Indeed, without loss of generality we can assume that the least significant qubit is the target qubit of the k-controlled single-qubit gate. Then the corresponding unitary is the identity matrix except for the 2 × 2 block in the bottom right corner. We can reduce the penultimate column (up to diagonal) using a standard Householder reflection with respect to $|1\cdots10\rangle$ (on n qubits) and two single-qubit gates for the state preparation and reverse state preparation. The C-not count is then that of a k-controlled not gate by a similar argument as in Lemma 9.
Remark 19.
To obtain a more explicit bound we can plug in the counts from Table II and use the bound s(i) ≤ n; the resulting bound is valid with one dirty ancilla.
Fill-in and envelopes
In order to gain as much advantage as possible from the sparseness of an isometry, we need to minimize fill-in as much as possible, which corresponds to choosing ρ and σ so as to minimize elim(W, ρ, σ). Reducing a column of W in general affects other columns and creates new non-zero entries. Due to the orthogonality of the columns, however, Householder reflections create little fill-in when applied to isometries. In fact, it follows immediately from Corollary 11 that when reducing column j to row i, fill-in can only occur in columns that are non-zero in the $i$th entry, and fill-in is confined to the rows where $W|j\rangle$ is non-zero.
We use matrix envelopes to give a bound on the amount of fill-in the sparse Householder decomposition produces. The envelope of a sparse matrix gives for each column of the matrix the row index of the lowest non-zero element (i.e., the largest row index of a non-zero element) in that column or any previous column.
Definition 20. Let W be an isometry from m to n qubits. The envelope of W is defined to be the function $\mathrm{env}_W : \{0, 1, \ldots, 2^m - 1\} \to \{0, 1, \ldots, 2^n - 1\}$ that maps each column index j to the smallest row index $\mathrm{env}_W(j)$ such that $\langle i|W|j\rangle = 0$ for all $i > \mathrm{env}_W(j)$ and such that $\mathrm{env}_W(j + 1) \geq \mathrm{env}_W(j)$.
Definition 21. Let W be an isometry from m to n qubits. Define $\mathrm{ed}(W) = \sum_{j=0}^{2^m - 1} (\mathrm{env}_W(j) - j)$. Then ed(W) denotes the number of entries between the envelope and the diagonal of W.
The definition of ed(W) is motivated by the following result.

Lemma 22. Let W be an isometry from m to n qubits and let (ρ, σ) be an elimination strategy. Then $\mathrm{elim}(W, \rho, \sigma) \leq \mathrm{ed}(\Pi_\rho W \Pi_\sigma)$.

Proof. Let $\tilde W = \Pi_\rho W \Pi_\sigma$ for some permutations (ρ, σ). Reducing the columns of W according to (ρ, σ) is equivalent in terms of fill-in to reducing $\tilde W$ according to the trivial elimination strategy (ι, ι), where ι denotes the identity permutation, i.e., $\mathrm{elim}(W, \rho, \sigma) = \mathrm{elim}(\tilde W, \iota, \iota)$. It follows from Corollary 11 that the fill-in for $\tilde W$ is confined to entries (i, j) with $i \leq \mathrm{env}(j)$. Due to orthogonality of the columns, if we proceed in column order we never need to eliminate any elements above the diagonal, so $\mathrm{elim}(\tilde W, \iota, \iota) \leq \mathrm{ed}(\tilde W)$.
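A small helper (our own sketch, with assumed names and a numerical tolerance) that computes env_W and ed(W) directly from Definitions 20 and 21 for an isometry given as a dense array:

```python
import numpy as np

def envelope(W, tol=1e-12):
    """env_W(j): running maximum over columns 0..j of the largest row index holding a non-zero entry."""
    env, running = [], 0
    for j in range(W.shape[1]):
        nz = np.nonzero(np.abs(W[:, j]) > tol)[0]
        lowest = int(nz[-1]) if len(nz) else 0
        running = max(running, lowest)       # enforce env_W(j+1) >= env_W(j)
        env.append(running)
    return env

def ed(W, tol=1e-12):
    """Number of entries between the envelope and the diagonal (Definition 21)."""
    return sum(e - j for j, e in enumerate(envelope(W, tol)))
```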
Finding the row and column permutations that minimize the envelope is a computationally difficult task. Methods for similar problems have been considered in the context of sparse matrix decompositions [19]. Note that once a column permutation is fixed, the optimal row permutation can be found in the following straightforward way.
Let W be an isometry from m to n qubits. Consider the following algorithm (Algorithm 2) for constructing a modified isometry W′.

1. Set i = 0, j = 0 and W′ = W.

2. Set k to be the number of non-zero elements in the column with index j with row indices greater than or equal to i.

3. If k ≠ 0, permute the rows with indices greater than or equal to i such that these k non-zero elements have row indices i to i + k − 1, assign the new isometry to W′ and set i = i + k.

4. Set j = j + 1. If $j < 2^m$, return to Step 2; otherwise output W′.
Lemma 23. Let W be an isometry from m to n qubits and W′ be the output after applying Algorithm 2 to W. For all row permutations ρ we have $\mathrm{env}_{W'}(j) \leq \mathrm{env}_{\Pi_\rho W}(j)$ for all j.
Proof. Given a column index j, let t(j) be the number of rows that contain at least one non-zero element in the columns with indices smaller than or equal to j, i.e., $t(j) := 2^n - |\{i : \langle i|W|j'\rangle = 0\ \forall j' \in \{0, 1, \ldots, j\}\}|$.

It follows that for any row permutation ρ we have $\mathrm{env}_{\Pi_\rho W}(j) \geq t(j) - 1$ for all j (recall that $\mathrm{env}_{\Pi_\rho W}$ is a non-decreasing function by definition). However, by construction, $\mathrm{env}_{W'}(j) = t(j) - 1$ for all j, from which the claim follows.
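A compact sketch of Algorithm 2 (our code; names and tolerance are assumptions): rows are placed in the order in which the left-to-right column scan first meets a non-zero element in them, and rows never touched are appended at the end.

```python
import numpy as np

def optimal_row_order(W, tol=1e-12):
    """Row permutation produced by Algorithm 2 for the given column order of W."""
    n_rows, n_cols = W.shape
    placed, seen = [], np.zeros(n_rows, dtype=bool)
    for j in range(n_cols):
        for i in np.nonzero(np.abs(W[:, j]) > tol)[0]:
            if not seen[i]:
                seen[i] = True
                placed.append(int(i))
    placed.extend(i for i in range(n_rows) if not seen[i])
    return placed

# usage: Wp = W[optimal_row_order(W), :]; then env of Wp is minimal for this column order (Lemma 23)
```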
The difficult part is therefore finding the best column permutation. In this work we consider a simple greedy algorithm (Algorithm 3) for finding a good column permutation.

1. Set i = 0, j = 0 and W′ = W.
2. Set M to be the submatrix of W′ formed by only considering rows with row index greater than or equal to i and columns with column index greater than or equal to j.
3. Pick one of the columns of M with the fewest nonzero elements and set k to be the number of nonzero elements.
4. Permute the columns of W′ with column index greater than or equal to j such that, when restricting the permutation to M, the chosen column becomes the first column of M. Then permute the rows of W′ with row index greater than or equal to i such that, when restricting the permutation to M, all non-zero elements of the chosen column are moved to the top. Set i = i + k, j = j + 1.
5. If $j < 2^m$ and $i < 2^n$, return to Step 2; otherwise output W′.
This algorithm corresponds to minimizing the increment of the envelope at each step. More information on an efficient way to implement it can be found in Section V C.
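A minimal sketch of this greedy ordering on the non-zero pattern of W stored as a dense boolean array (our own code with assumed names; it makes no attempt at the efficient implementation discussed in Section V C):

```python
import numpy as np

def greedy_elimination_order(W, tol=1e-12):
    """Return (row_order, col_order): repeatedly pick the remaining column with the
    fewest non-zeros among the not-yet-placed rows, then place its non-zero rows on top."""
    pattern = np.abs(W) > tol
    n_rows, n_cols = pattern.shape
    free_rows, free_cols = list(range(n_rows)), list(range(n_cols))
    row_order, col_order = [], []
    while free_cols and free_rows:
        counts = {c: int(pattern[free_rows, c].sum()) for c in free_cols}
        c = min(free_cols, key=lambda col: counts[col])   # sparsest remaining column
        col_order.append(c)
        free_cols.remove(c)
        touched = [r for r in free_rows if pattern[r, c]]
        row_order.extend(touched)                         # its non-zero rows move to the top
        free_rows = [r for r in free_rows if r not in touched]
    row_order.extend(free_rows)
    col_order.extend(free_cols)
    return row_order, col_order
```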
C. Fixed envelope method
The asymptotic C-not count for the sparse Householder decomposition contains an undesirable factor n stemming from Lemma 2. We now show how this factor can be avoided if we give up some control on the amount of fill-in.
For this we make use of the decrement gate $\mathrm{Dec}_n$, which subtracts 1 in the computational basis (modulo $2^n$), i.e., $\mathrm{Dec}_n = \sum_{i=0}^{2^n-1} |i\rangle\langle i \oplus 1|$, where ⊕ denotes addition modulo $2^n$. This gate can be implemented using O(n) C-nots and one ancilla qubit [20]. The method is illustrated in Figure 2.

[Figure 2 caption: Illustration of the first step of the fixed envelope method. Here * represents an arbitrary complex entry, a marked entry denotes the target entry of the reduction, × stands for an entry that was eliminated, − denotes entries eliminated due to the orthogonality constraint and + means that fill-in occurs. Here σ(0) = 0.]
Lemma 24. Let W be a sparse isometry from m to n qubits and let (ρ, σ) be some row and column permutations. Then W can be implemented using C-nots; explicit counts are given in Table II.
Proof. We consider reducing W in the following way: First apply the row permutation ρ up to diagonal. Then reduce the columns in the order given by the column permutation σ. For each column, use a Householder reflection to reduce it to the topmost row. Apply the decrement gate and then move to the next column. After all columns have been reduced in this way we apply $X^{\otimes n-m} \otimes I_m$. The resulting isometry has the form $I_{n,m} \Pi_m \Delta_m$, which can be reduced to the identity by applying a permutation on m qubits up to diagonal and a diagonal gate on m qubits. By construction, before each Householder reflection, all non-zero entries in the column being reduced are in the topmost $2^{s(i)}$ positions. We can hence perform the Householder reflections using the method of Lemma 16 but omitting the pivoting steps.
Remark 25.
To obtain a more explicit bound we can plug in the counts from Table II; the resulting bound is valid with one dirty ancilla. The factor 4 stems from the fact that each Householder reflection uses state preparation twice and each state preparation acts on s(i) qubits, where $2^{s(i)}$ is at most twice the height of the envelope $1 + \mathrm{env}_{\Pi_\rho W \Pi_\sigma}(i) - i$ in column i of $\Pi_\rho W \Pi_\sigma$.
D. No fill-in method
Using a clean ancilla qubit we can avoid fill-in altogether. The method is illustrated in Figure 3.
Lemma 26. Let W be a sparse isometry from m to n qubits. Then, using one additional clean ancilla, W can be implemented using C-nots, where $s(i) = \lceil \log_2(1 + \mathrm{nnz}(W|i\rangle)) \rceil$; explicit counts are given in Table II.
Proof. We implement the isometry $\tilde W$ from m to n + 1 qubits defined by $\tilde W|i\rangle = |0\rangle \otimes W|i\rangle$, which in the computational basis is just W stacked on top of a zero matrix of the same size. Then each of the $2^m$ columns can be reduced to one of the $2^n$ zero rows without creating any fill-in. These reductions can be implemented using Householder reflections up to a diagonal and permutation gate as in Lemma 16. Then using Lemma 15 we reduce the resulting permuted diagonal isometry to the identity.
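The no-fill-in property can be checked numerically at the matrix level with the following sketch (our own code, including a small Householder helper; tolerances and names are assumptions): after stacking W on a zero block, each column is reduced to a distinct zero row and the non-zero pattern of all other columns is verified to stay unchanged.

```python
import numpy as np

def reduce_column(V, j, target):
    """Standard Householder reflection sending column j of V onto basis row `target` (up to a phase)."""
    v = V[:, j]
    theta = np.angle(v[target]) if abs(v[target]) > 1e-14 else 0.0
    u = v.copy(); u[target] -= np.exp(1j * theta)
    norm2 = np.vdot(u, u).real
    return V if norm2 < 1e-28 else V - 2.0 * np.outer(u, np.conj(u) @ V) / norm2

def no_fill_in_reduction(W, tol=1e-12):
    """Stack W on a zero block (as in the proof above) and reduce column i to the i-th zero row."""
    n_rows, n_cols = W.shape
    Wt = np.vstack([W, np.zeros_like(W)]).astype(complex)
    for i in range(n_cols):
        before = np.abs(Wt) > tol
        Wt = reduce_column(Wt, i, n_rows + i)           # target a fresh zero row
        after = np.abs(Wt) > tol
        others = [c for c in range(n_cols) if c != i]
        assert np.array_equal(before[:, others], after[:, others])   # no fill-in elsewhere
    return Wt

Q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(8, 4)))
no_fill_in_reduction(Q)
```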
V. CLASSICAL COMPLEXITY
In this section we compute the classical worst-case time complexity of some decompositions presented in this work. We compare the dense Householder decomposition to other known methods and we propose a sparse storage format which is well adapted to the sparse Householder decomposition.
A. Dense isometries
First we consider the dense case. For each column we have to compute the corresponding Householder vector (see Eq. (2)), apply the Householder reflection to the entire isometry and produce the circuit implementing the Householder reflection. Computing the vector takes $O(2^n)$ and applying the reflection takes $O(2^{m+n})$. Producing the circuit takes $O(2^{3n/2})$ according to Appendix B.4 of [10]. Thus the classical complexity is $O(2^{m+n}(2^m + 2^{n/2}))$. For comparison, the column-by-column decomposition requires $O(n 2^{2m+n})$, and Knill's decomposition and the Cosine-sine decomposition both require $O(2^{3n})$ [10]. A high performance implementation for a decomposition of dense unitaries based on Householder reflections is presented in [14].

[Figure 3 caption: Illustration of the no fill-in method. The clean ancilla leads to the empty 4 × 4 block at the start. Here * represents an arbitrary complex entry, × stands for an entry that was eliminated and a marked entry denotes the target of each reduction. We actually implement each of the Householder reflections up to permutation, so the final state is a row permutation of that shown (for simplicity we did not depict this).]
B. Sparse state preparation
The non-zero pattern of a sparse state on n qubits with at most $2^s$ non-zero entries can be compactly stored as a list of the n-bit indices of the non-zero entries. This requires $O(n 2^s)$ space.
The pivoting algorithm, Algorithm 1, can be implemented with some greedy optimizations. First, there are $\binom{n}{s}$ possible splittings for the n qubits. We will think of the vector as being reshaped into a two-dimensional array with $2^s$ rows and $2^{n-s}$ columns corresponding to the chosen qubit splitting. If the number of possible splittings is small enough, we try all splittings and choose the one with the largest number of non-zero elements in one column, and this will be the target column. Otherwise one can randomly sample a fixed number of splittings and use the best one. Second, in each insertion step we choose an element not yet in the target column for which the cost of inserting it into the target column is minimal and perform the insertion. We iterate this until all elements are in the target column.
The insertion of one element into the target column can be implemented using one s-controlled not gate and d − 1 C-nots, where d is the Hamming distance between the index of the non-zero entry and the index of the target entry. Thus we want to find a non-zero entry outside the target column and a zero entry in the target column for which the Hamming distance is minimal. We do this by finding, for each non-zero entry outside the target column, the closest zero entry in the target column. The Hamming distance can be written as $d = d_c + d_r$, where $d_c$ is the Hamming distance obtained when restricting the indices to the column indices, and $d_r$ is the part corresponding to the row indices. For each non-zero entry, we can compute $d_c$ in O(n − s), which adds up to $O((n - s)2^s)$ for all non-zero entries outside the target column. We can compute $d_r$ for all non-zero entries at the same time in $O(s 2^s)$ by using breadth first search on the s-dimensional hypercube with multiple starting vertices, given by the row indices of the zero entries in the target column. More precisely, we store a list of length $2^s$, where each entry corresponds to one row and stores the minimal distance $d_r$ to some free row of the target column. The entries corresponding to free rows are initialized with distance 0 and all other entries are initialized with distance ∞. Then we perform the usual breadth first search on the graph whose vertices are given by the entries of the list and whose edges connect any pair of entries whose indices have Hamming distance one. Performing the insertion takes $O(n 2^s)$. The entire circuit implementing sparse state preparation with the greedy optimizations mentioned above can thus be computed in time $O(\binom{n}{s} + n 2^{2s})$.
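The multi-source breadth-first search just described can be sketched as follows (our own code, with assumed names). It returns, for every s-bit row index, the minimal Hamming distance d_r to a free row of the target column.

```python
from collections import deque

def min_hamming_to_free_rows(free_rows, s):
    """Multi-source BFS on the s-dimensional hypercube.

    free_rows: iterable of s-bit integers (rows of the target column that are still zero).
    Returns dist[r] = minimal Hamming distance from row r to some free row."""
    dist = [None] * (1 << s)
    queue = deque()
    for r in free_rows:
        dist[r] = 0
        queue.append(r)
    while queue:
        r = queue.popleft()
        for bit in range(s):            # neighbours differ in exactly one bit
            nb = r ^ (1 << bit)
            if dist[nb] is None:
                dist[nb] = dist[r] + 1
                queue.append(nb)
    return dist

# example: with free rows {000, 110} on s = 3, row 111 is one bit flip away from a free row
assert min_hamming_to_free_rows({0b000, 0b110}, 3)[0b111] == 1
```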
C. Sparse isometries
To store a sparse isometry W we store two arrays R and C of size $2^n$ and $2^m$ respectively. For a given row index i, R(i) stores a reference to a balanced tree (e.g., a red-black tree) containing, for each non-zero element of the $i$th row, a triplet of the form $(i, j, \langle i|W|j\rangle)$. The elements of the tree are sorted according to the key j.

Analogously we define C(j) to store a reference to a tree containing the non-zero elements of the $j$th column. This requires $O(2^n + 2^m + \mathrm{nnz}(W))$ space. Given row and column indices i and j, the corresponding entry can be created, read, modified or deleted in time O(n).
We now show how to reduce column j to row i. From Corollary 11 we know that the modified entries are those in column j and row i and those with indices s and t such that s ∈ C(j) and t ∈ R(i). First we iterate over all choices for s ≠ i and t ≠ j and create or modify the entries according to Lemma 10. Then we set the entries in column j and row i to zero, except for the entry with indices (i, j), which is determined by Lemma 10. Thus each reduction can be done in time O(n · mod), where mod denotes the number of modified elements.
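A simplified sketch of this storage format in Python (our own; ordinary hash maps stand in for the balanced trees, so the O(n) per-access bound of the text is replaced by expected constant-time hashing):

```python
class SparseIsometry:
    """Store the non-zero entries of W both row-wise and column-wise.

    R[i] maps column index j -> value <i|W|j> for the non-zeros of row i,
    C[j] maps row index i -> the same value for the non-zeros of column j."""

    def __init__(self, n_rows, n_cols):
        self.R = [dict() for _ in range(n_rows)]
        self.C = [dict() for _ in range(n_cols)]

    def get(self, i, j):
        return self.R[i].get(j, 0.0)

    def set(self, i, j, value, tol=1e-14):
        if abs(value) > tol:
            self.R[i][j] = value
            self.C[j][i] = value
        else:                              # deleting small entries keeps the structure sparse
            self.R[i].pop(j, None)
            self.C[j].pop(i, None)

    def nnz(self):
        return sum(len(row) for row in self.R)
```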
In Algorithm 3 we presented a greedy method for constructing a permutation of an isometry leading to a small envelope. For a sparse isometry W given in the data structure described above, the corresponding row and column permutations ρ and σ can be computed iteratively as follows. Using the notation from Algorithm 3, we only store the submatrix M, which is initially set to W. In each step we find the sparsest column of M, which will be the next column in the column permutation, and the rows containing the non-zero elements of this column, which will be the next rows in the row permutation. Then we simply delete the non-zero elements in the chosen column and rows and iterate the procedure. In order to find the sparsest column in each iteration, we maintain a minheap storing for each column the number of non-zero elements. The smallest element can be removed in time O(n) which yields the sparsest column. Then we can delete the elements as described above, each in time O(n). For every deleted element we decrease the number of non-zero elements in the containing column by one, and thus we may have to reorder the minheap. But this can also be done in time O(n). The procedure stops when all non-zero elements have been deleted after total time O(n nnz(W)).
VI. NUMERICAL RESULTS FOR SPARSE STATE PREPARATION
We compare the C-not counts resulting from our sparse state preparation scheme presented in Section II to the dense case. The implementation of the sparse state preparation scheme is described in Section V B. In order to improve the classical computation time, we do not consider all possible qubit splittings, but randomly sample 100 splittings and choose the one with the largest number of non-zero elements in one column. We use the dense state preparation scheme from [5], implemented in [10], which achieves near optimal C-not counts for arbitrary dense states. The results are presented in Figure 4. These indicate the advantage we gain by taking into account the sparseness. Note however, that the dense case outperforms the sparse case for fairly dense states (where the cost of pivoting is not compensated by the smaller state preparation).
RC is supported by EPSRC's Quantum Communications Hub (grant numbers EP/M013472/1 and EP/T001011/1). RI acknowledges support from the Swiss National Science Foundation (SNSF).

Requiring $H^\phi_{u_\theta}|v\rangle \propto |w\rangle$ leads to a condition on the phases, which we can also write as $\phi = \pi + 2\arg(z) \bmod 2\pi$.
Standard Householder reflection
For a standard Householder reflection we have φ = π. Now choose $\theta = \pi - \arg(\langle v|w\rangle)$, or θ = 0 if $\langle v|w\rangle = 0$. This implies $z = 1 + |\langle v|w\rangle| \neq 0$. We then define the corresponding reflection $H_{v,w}$, which has the property that $H_{v,w}|v\rangle = e^{i\theta}|w\rangle$.
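As a numerical check of this construction, the following sketch (ours; the explicit reflection vector $|v\rangle - e^{i\theta}|w\rangle$ is our own choice, made so that its squared norm equals 2z with z as above) verifies that the resulting standard reflection maps $|v\rangle$ to $e^{i\theta}|w\rangle$:

```python
import numpy as np

def standard_householder(v, w):
    """Return (H, theta) with H @ v = exp(1j*theta) * w for normalized v, w.

    theta = pi - arg(<v|w>) (or 0 if <v|w> = 0); u = v - exp(1j*theta)*w gives <u|u> = 2z,
    z = 1 + |<v|w>|, which never vanishes."""
    ovl = np.vdot(v, w)                                   # <v|w>
    theta = np.pi - np.angle(ovl) if abs(ovl) > 1e-14 else 0.0
    u = v - np.exp(1j * theta) * w
    H = np.eye(len(v), dtype=complex) - 2.0 * np.outer(u, np.conj(u)) / np.vdot(u, u).real
    return H, theta

rng = np.random.default_rng(1)
v = rng.normal(size=8) + 1j * rng.normal(size=8); v /= np.linalg.norm(v)
w = rng.normal(size=8) + 1j * rng.normal(size=8); w /= np.linalg.norm(w)
H, theta = standard_householder(v, w)
assert np.allclose(H @ v, np.exp(1j * theta) * w)         # H|v> = e^{i theta}|w>
assert np.allclose(H.conj().T @ H, np.eye(8))             # H is unitary (and Hermitian)
```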
Generalized Householder reflection
If we want to get rid of the phase $e^{i\theta}$ we have to use a generalized Householder reflection. Setting θ = 0 implies that $z = 1 - \langle v|w\rangle$, which might lead to numerical instabilities. Then $\phi = \pi + 2\arg(z)$, and we define the generalized reflection $\tilde H_{v,w}$ accordingly.
Appendix B: Multi-controlled not gates
We denote a k-controlled not gate by C k (X). Using a single dirty ancilla qubit such gates can be decomposed with a linear number of C-not gates. We start by recalling two lemmas from [1,7].
Note that if k ≥ 5 this bound can be reduced to 8k − 12 [15]. However, we do not use this here for the convenience of having a single bound for all $k \leq n/2$. The desired decomposition of multi-controlled not gates using a single ancilla qubit follows.
Corollary 29. Let n ≥ 3 denote the total number of qubits. Then we can implement a $C^{n-2}(X)$ gate with at most 16n − 40 C-nots.
Appendix C: Permutation gates
One key feature of several of our methods is the use of permutations to adjust the form of the given isometry. Here we discuss the number of C-nots needed for these.
Proof. Permuting the rows of a state on n qubits corresponds to constructing a $2^n \times 2^n$ permutation matrix (i.e., a unitary matrix with one 1 in every row and column). Each such matrix corresponds to a permutation on $2^n$ objects. It is known that all permutations can be decomposed as a sequence of swaps. A permutation is even if it can be decomposed into an even number of swaps and otherwise it is odd. It is known that all even permutations on n ≥ 3 qubits can be performed with at most n nots, $n^2$ C-nots and $3(2^n + n + 1)(3n - 7)$ Toffoli gates [11, Theorem 33]. A Toffoli gate can be performed up to a diagonal gate using 3 C-nots [1], and diagonal gates can be commuted with not, C-not and Toffoli gates up to another diagonal. Thus, ignoring the single-qubit (not) gates, any even permutation can be decomposed into at most $(27n - 63)2^n + 28n^2 - 36n - 63$ C-nots without ancilla, up to a diagonal gate.

If we have an odd permutation on n qubits, we can apply an (n − 1)-controlled not to make it an even permutation. Without an ancilla, we can do this as an (n − 1)-controlled single-qubit unitary with an overhead of $16n^2 - 60n + 42$ C-nots (see Table I). This leads to an overall C-not count of $(27n - 63)2^n + 44n^2 - 96n - 21$ C-nots without ancilla.

A diagonal gate on n qubits can be performed using $2^n - 2$ C-nots (see Table I), leading to an overall C-not count of $(27n - 62)2^n + 44n^2 - 96n - 23$.
Slightly lower counts are possible with ancillas, but these do not change the leading order so we do not consider them here for simplicity. Note also that the above bound is always less than $27n \cdot 2^n$ for n ≥ 3, so we use the latter as a simplification.
Any unitary on n ≥ 2 qubits can be decomposed using at most $\frac{23}{48} 4^n - \frac{3}{2} 2^n + \frac{4}{3}$ C-nots [4] (without ancilla), which gives a better count for an arbitrary permutation for n ≤ 8.
[In [11], the authors note that if we add a qubit on which we do not act, an odd permutation becomes even, and hence any permutation on n ≥ 3 qubits can be done using one ancilla and an even permutation on n + 1 qubits [11, Corollary 13]. However, this is significantly worse than the above bound.]

Lemma 31. A permutation gate on n ≥ 2 qubits can be performed with one dirty ancilla and at most $(18n - 26)(2^n - 1)$ C-nots.
Proof. The idea is to use Householder reflections. We can write the permutation gate as $\sum_{i=0}^{2^n - 1} |j(i)\rangle\langle i|$, where $|j(i)\rangle$ is a computational basis state. The Householder reflection $H_{j(i),i}$ takes $|j(i)\rangle \to |i\rangle$ and $|i\rangle \to |j(i)\rangle$ without affecting any other columns (cf. Corollary 11).
Since $H_{j(i),i} = H_u$, where $|u\rangle = \frac{1}{\sqrt{2}}(|i\rangle - |j(i)\rangle)$, we can implement each Householder reflection along the lines given in Lemma 8. The state $|u\rangle$ can be reduced to $|i\rangle$ as follows. Let $|i\rangle = |i_1 i_2 \ldots i_n\rangle$ and $|j(i)\rangle = |j_1 j_2 \ldots j_n\rangle$ in the computational basis. Find an index k such that $j_k \neq i_k$. Controlling on the $k$th qubit we can apply at most n − 1 C-nots such that $|j(i)\rangle \to |i_1 \ldots i_{k-1} j_k i_{k+1} \ldots i_n\rangle$ and $|i\rangle$ is unchanged. When applied to $|u\rangle$, this results in the state $\frac{1}{\sqrt{2}}(|i\rangle - X_k|i\rangle)$, where $X_k$ is a not on the $k$th qubit. Applying a single-qubit rotation to the $k$th qubit then maps this state to $|i\rangle$. Let us denote the reverse of these steps by $SP_{i,u}$, so that $SP_{i,u}|i\rangle = |u\rangle$. Conjugating with $SP_{i,u}$, it remains to implement $H_i = I - 2|i\rangle\langle i|$, which is either a Z or −Z gate with n − 1 controls, and hence has the C-not count of $C^{n-1}(X)$, which is 16n − 24 if one dirty ancilla is available (see Table I).
It follows that we can do $H_{j(i),i}$ using 18n − 26 C-nots, and hence the whole permutation matrix using at most $(18n - 26)(2^n - 1)$ C-nots.
Appendix D: Knill-Householder decomposition
In this appendix we consider a generalized decomposition scheme that contains Knill's decomposition [8] and the Householder decomposition as special cases. Suppose U is a unitary that we want to implement and B is another unitary representing a change of basis. Assume that B and $\tilde U = B^\dagger U B$ are sparse. Let $|b_i\rangle = B|i\rangle$ and $|\tilde u_i\rangle := UB|i\rangle = U|b_i\rangle$. Consider the generalized Householder reflection $\tilde H_{\tilde u_0, b_0}$ mapping $|\tilde u_0\rangle$ to $|b_0\rangle$ (in this section we want this map to be exact and not up to a phase). In the basis formed by the columns of B this reduces the first column of $\tilde U$. Consider the effect on the second column $|\tilde u_1\rangle$. Since it is orthogonal to $|\tilde u_0\rangle$, this column will suffer fill-in if and only if it is not orthogonal to $|b_0\rangle$. In this case, fill-in will be confined to the subspace spanned by $|\tilde u_0\rangle$ and $|b_0\rangle$. This means that fill-in is determined by the structure of $\tilde U$ and works in the same way as for the Householder reflection in the standard basis. It is important to note, however, that state preparation is only efficient if $|\tilde u_0\rangle$ is sparse, i.e., if both $\tilde U$ and B are sufficiently sparse. Continuing in this fashion allows us to reduce U to the identity.
Remark 32. If we define B to be the unitary whose columns form an eigenbasis of U, then this scheme reduces to Knill's decomposition, and if B is the identity, it is essentially the Householder decomposition. Note however that in the case of the Householder decomposition we used standard Householder reflections and it was sufficient to implement them up to diagonal and permutation gates.
This method can be generalized to isometries by extending a given isometry V to a unitary U. This can be done such that U has at least $2^n - 2^m$ eigenvalues equal to 1 (see [8], [10, Lemma 5]). Let B be an n-qubit unitary whose first $2^n - 2^m$ columns are eigenvectors of U with eigenvalue 1. Then $\tilde U = B^\dagger U B$ is block-diagonal with a $(2^n - 2^m) \times (2^n - 2^m)$ trivial block and a $2^m \times 2^m$ non-trivial block. This observation can be used to reduce U as described above. | 2020-06-02T21:03:07.198Z | 2020-05-29T00:00:00.000 | {
"year": 2020,
"sha1": "52de778298782b641a3a458d020fd5012bc9cccb",
"oa_license": "CCBY",
"oa_url": "https://quantum-journal.org/papers/q-2021-03-15-412/pdf/",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "52de778298782b641a3a458d020fd5012bc9cccb",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
18277921 | pes2o/s2orc | v3-fos-license | Measurements of the radiation hardness of selected scintillating and light guide fiber materials
Radiation hardness studies of KURARAY SCSF-78M scintillating fibers and clear fibers from KURARAY and pol.hi.tech. performed under different dose rate conditions in proton and electron beams are summarized. For high dose rates in-situ measurements of the fiber light output were done. During several months after irradiation all fibers were measured concerning light emission and transparency. Fibers irradiated at high rates to about 1 Mrad are clearly damaged but recover within a few hours up to several weeks. Using smaller rates up to the same integral dose a decrease of the light output of scintillating fibers of up to 30% cannot be excluded. Clear fibers seem to be unaffected up to 400 krad. No significant influence of fiber coverage and atmosphere during irradiation was found.
Introduction
Recently [1]- [3] a fiber detector was developed as an alternative solution for the inner tracker of the HERA-B experiment [4]. With an accelerator cycle of 96 nsec and four events per cycle with a charged multiplicity of about 200 the detector modules have to work several years under an estimated integral dose per year in the inner tracker region of about 1 Mrad.
The light produced by particles crossing the scintillating fibers of the detector is transported by 3 m long light guide fibers to 64 channel multianode photomultipliers Hamamatsu R5900-M64 2 available only with bialcali photocathodes. Best light output and long term stability were obtained for KURARAY 3 scintillating double clad fibers SCSF-78M and clear double clad fibers from KURARAY and pol.hi.tech. 4 . The corresponding radiation hardness studies were performed with high dose rates in 70 MeV proton and 2 MeV electron beams of the Hahn-Meitner Institute Berlin [1], [3], [5], [6] and photons from a 60 Co source [2]. In the first case in-situ measurements of the light output even with spectral resolution were possible.
In the past there were several arguments [7]- [9] that the presence of oxygen during and after the irradiation may be important for the observed damage. In this case also the dose rate may influence the final result because diffusion processes are time dependent.
Our results from high dose rates using charged particle beams will be summarized below and compared to new data from low dose rate exposures of the same materials. The new tests were performed in air and nitrogen atmosphere with glued and non-glued scintillating fibers and compared with non-irradiated test samples.
Fiber samples
The fiber samples for proton and electron irradiation have the same global structure as shown in Fig. 1. For high rate irradiation 4×4 fibers of 0.48 mm diameter were glued together resulting in a cross section of about 2×2 mm 2 .
The samples for electron low dose rate irradiation consist of a fiber arrangement of 1×7 fibers of the same diameter forming a fiber road in the later detector. Coupling pieces are mounted at both ends of the 30 cm long samples in which the ends of the fibers are inserted, glued and polished. This allows an optical coupling to light guides or photomultipliers with light losses of less than 10 %. The fiber samples are mechanically stabilized by two brass rods of 3 mm diameter.
For low dose rate electron irradiation there were two types of samples. The first type is fully glued to shield the fibers from the gaseous environment, whereas the second type is mounted using a minimum of glue in thin strips near the connectors in order to allow the gaseous atmosphere to have contact to the fiber material.
For the in-situ measurements single fibers were coupled at one or both ends to glass fibers which transport the light to the corresponding spectrometers (see Fig. 2).
Irradiation setup
A schematic view of the irradiation setup in the proton and electron beams is given in Fig. 3. For electron irradiation the beam was extracted from the vacuum system through a window of 100 µm thick Aluminium and 40 µm Inconel. A metallic aperture of 3×12 mm$^2$ was used for beam profile definition. In the case of the proton irradiation the beam was extracted through a 7 µm thick Tantalum foil. The beam size and the emittance angle were limited by two PMMA (polymethyl methacrylate) apertures. The total range of protons in fiber material is about 39 mm, which is checked by the profile of the colour changes in the PMMA aperture during the irradiation. The spot size and position were additionally monitored by polyvinylalcohol (PVA) methylene blue plastic detector foils [10]. The dye is radiation sensitive and its degradation yield is proportional to the irradiated particle fluence. The degradation of the dye in the foil has been determined by UV-VIS-spectroscopy. A typical result of such beam homogeneity control for proton irradiation is depicted in Fig. 4. The higher the transparency, the higher was the irradiation dose in the given area. At positions 1 and 2 a high radiation level with low restriction in the field distribution is registered. The profile created by the plastic apertures is given by curves 3 and 4. A low non-structured irradiation level is characterized by curves 5 and 6.
For the in-situ registration of beam excited scintillation spectra, fiber optic PC plug-in spectrometers (Ocean Optics) were used. They were placed outside the cave in order to suppress the high radiation background, using 22 m long light-guiding glass fibers. A detailed description of the experimental setups used in both cases can be found in [5].
The proton irradiation was performed quasi point-like at two points along the sample with a dose of ≥1 Mrad at 20 cm and of 0.1 Mrad at 10 cm respectively within a few minutes. The dose rate was about 30 Mrad/h. The irradiation of short areas of the samples gave the possibility to separate the damage of scintillator and optical matrix.
The same irradiation procedure was applied for in-situ measurements of radiation damage using a corresponding electron beam.
High current irradiations were only carried out under ambient atmosphere using cooling by a powerful fan. The temperature rise during the irradiation could be neglected [5].
A new series of tests has been performed irradiating the fiber material with a relatively low dose rate of 2 MeV electrons to approximate the later experimental conditions. A dose of about 1 Mrad was applied during five periods within about nine weeks. The particle flux was monitored by a matrix of Faraday cups. The distance between scatter foil and sample plane was about 1.5 m. In this case the samples were kept either in air or in nitrogen atmosphere.
Measurement procedure
In-situ registration of scintillation spectra first described in [11] was performed for the beam excited regions 1 and 2 (see fig. 2). The spectra were measured during the whole irradiation time in the first irradiated region 1 of fibers and after that in the second region 2 under influence of high absorption in the presumably predamaged region 1 in order to determine the change of the absorption coefficient during the irradiation. Between irradiation procedure 1 and 2 a preparation time of a few minutes was necessary. The beam excited scintillation spectrum served in the second case as changeable light source for absorption measurements in a limited spectral region.
For recovery measurements in the laboratory a few hours after irradiation the optical excitation was realized by a high pressure Hg-lamp at λ = 365 nm. In addition to the in-situ measurements, which used single fiber samples, and the UV-excitation measurements in the laboratory, investigations were also done using multi-fiber bundles. The irradiated multi-fiber samples (see section 2.1) were evaluated using a $^{106}$Ru source. The fiber sample was mounted within a source collimator slit. The light signal was measured using a Philips XP 2020 photomultiplier and analyzed by an Analog-to-digital converter (ADC). The ADC was triggered by a threefold coincidence of signals coming from a 5 mm thick plastic scintillator mounted behind the fibers, using two Philips XP 1911 photomultipliers for readout, and from a second photomultiplier XP 2020 coupled to the second coupling piece of the fiber sample. The light output measurement was performed before and after irradiation. In addition the light output of the non-irradiated scintillator reference samples and the light attenuation of the light guide reference samples were regularly measured to minimize systematic errors.
Results
From in-situ observations of proton and electron excited spectra no remarkable difference could be found [6]. Consequently, we report here representative results for both charge carrier excited spectra. As described in [5], [6], all in-situ measured spectra show a two stage decay of the scintillating light intensity in dependence on the energy dissipation (or irradiation time); an example is shown in Fig. 5. A recovery of the damaged fibers could be observed already during the in-situ measurements. A considerable increase in light output was observed several times during the irradiation procedure after switching off the beam for only three minutes (see Fig. 3 in [5]).
Exciting the same fibers by UV-light in the laboratory a few hours after irradiation a long term recovery was measured. After 40 hours a SCSF-78M fiber irradiated to 8.1 Mrad showed 90 % of the light output with respect to pre-irradiation (Fig.4 in [5]). This process seems however to depend on the fiber material and the integral dose (compare Fig.2 in [6]).
The kind of excitation seems to be of particular importance for the measured fiber light output. This will influence also the observed recovery after irradiation and may explain the corresponding different time constants for in-situ measurements and UV-excitation.
In a real experiment the scintillation light in fibers will be produced by crossing charged particles. Therefore the multi-fiber test samples were exposed to electrons from a Ru-source before and after beam irradiation to measure light output and transmission. Indeed a different behaviour was found. As described in [1], [3], the strongest damage was observed only about 30 hours after irradiation with a dose of 1 Mrad, for both light emission and transparency, with a complete recovery after two days (see Fig. 4). Measurements going on for about half a year, using the same samples many times, are difficult to perform while keeping systematic errors small, due to some instability of the setup in time and mechanical damage of the fragile samples. To minimize those effects, non-irradiated samples were measured every time in addition. All results are presented as ratios of irradiated to non-irradiated fibers, $R_S$ and $R_L$ for scintillators and light guides, respectively. The maximum errors of these ratios have been estimated to be about 30 %, including effects which may arise from sample production.
How the irradiation proceeded in time is shown in Fig. 6a. The corresponding damage and recovery of four fiber samples is shown in Fig. 6b. No effect could be observed outside the 30 % error band. Neglecting the measurement errors and relying on the pre-irradiation data points, some damage may have occurred up to the maximum dose, followed by a long term recovery. The damage seems to be smaller for glued fibers, in particular in nitrogen atmosphere. Non-glued fibers seem to recover only partly.
From Fig. 7a it can be seen that clear fibers were only irradiated up to a dose of 400 krad. For all measurements they were coupled to scintillating fibers which were excited by electrons from a Ru-source. Also here a maximum error of 30 % has to be kept in mind for the ratio $R_L$ of irradiated to totally non-irradiated (clear plus scintillating) fibers shown in Fig. 7b. Neglecting the error band, no damage is seen for the clear fiber irradiation itself. However, irradiating the scintillator to more than 1 Mrad caused a decrease of the light output from a coupled clear fiber by more than one half, i.e., more than observed for the scintillating sample alone. After two weeks complete recovery was found. The behaviour is the same for KURARAY and pol.hi.tech. clear fibers in air and nitrogen.
Summary
Several radiation hardness tests were performed for KURARAY scintillating fibers SCSF-78M and clear fibers from KURARAY and pol.hi.tech. Using high current proton and electron beams the irradiation was performed both with very high and low dose rates.
In-situ observations demonstrated a strong damage of scintillating fibers for high dose rate exposures. Both light emission and transparency were decreased down to 20 % for 1 Mrad. Short and long time recovery effects followed the irradiation.
For low dose rate conditions closer to a later experiment, a 30 % decrease of scintillating fiber light output could not be excluded recovering after three weeks. No significant influence of the fiber coverage and the atmosphere during irradiation was found.
Clear fibers are apparently not damaged for doses up to 400 krad. Coupled to irradiated scintillating fibers, the effective damage of the system seems to increase.

[Figure caption: Fig. 3 gives the positions of such plastic films in the irradiation setup: 1 - behind the exit window, 2 - before aperture 1, 3 - between apertures 1 and 2, 4 - behind aperture 2, 5 - before the Faraday cup, 6 - behind the Faraday cup.]
"year": 1999,
"sha1": "bc04caba77367fd71366934b30342ae157ae5d18",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bc04caba77367fd71366934b30342ae157ae5d18",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
146181953 | pes2o/s2orc | v3-fos-license | ScholarWorks @ Georgia State University
DAVID W. STINSON is an associate professor of mathematics education in the Department of Middle and Secondary Education in the College of Education and Human Development, at Georgia State University, P.O. Box 3978, Atlanta, GA, 30303; e-mail: dstinson@gsu.edu. His research interests include exploring socio-cultural, -historical, and -political aspects of mathematics and mathematics teaching and learning from a critical postmodern theoretical (and methodological) perspective. He is a co-founder and current editor-in-chief of the Journal of Urban Mathematics Education. EDITORIAL
As a critical 1 mathematics educator, it is difficult not to be pessimistic about the Every Student Succeeds Act of 2015 (ESSA), signed into law by President Barak Obama on December 10th. The ESSA, similar to its predecessors, has an admirably worded purpose statement: "To provide all children significant opportunity to receive a fair, equitable, and high-quality education, and to close educational achievement gaps" (ESSA, 2015, Sec. 1001). But after more than a decade of suffering through federal legislation that left far too many children behind and yielded far too many losers in the race to the top, I have become increasingly doubtful that any organization, including the federal government, has "the will" (Hilliard, 1991, p. 31) 2 to facilitate "the kind of violent reform necessary to change the conditions of African American, Latin@, Indigenous, and poor students [i.e., the collective Black 3] in mathematics education" (Martin, 2015, p. 22). Nevertheless, it is being critical that makes me optimistic as well, albeit a "non-stupid optimism" (McWilliam, 2005, p. 1). It is this forever oscillating between pessimism and optimism that drives me and many other critical educators to do the work that we do.

1 Critical theory refuses to identify freedom with any institutional arrangement or fixed system of thought. It questions the hidden assumptions and purposes of competing theories and existing forms of practice. … Critical theory insists that thought must respond to the new problems and the new possibilities for liberation that arise from changing historical circumstances. Interdisciplinary and uniquely experimental in character, deeply skeptical of tradition and all absolute claims, critical theory…[is] concerned not merely with how things [are] but how they might be and should be. (pp. 1-2)

2 In his article titled "Do We Have the Will to Educate All Children?" Hilliard (1991) writes: If our destination is excellence on a massive scale, not only must we change from the slow lane into the fast lane; we literally must change highways. Perhaps we need to abandon the highways altogether to take flight, because the highest goals that we can imagine are well within reach for those who have the will to excellence. (p. 36, emphasis in original)
For the past 8 years, exemplars of this crucially needed work-completed by a particular group of (largely) critical mathematics educators-are found within the online pages of the Journal of Urban Mathematics Education (JUME). The readers, editors, reviewers, and authors of JUME (a collective group that numbers more than 1,000 strong) have brought to life over 1,700 pages of scholarly editorials, commentaries, response commentaries, public stories, research articles, and book reviews. This group of educators includes those who have spent decades working to provide all children significant opportunity to receive a fair, equitable, and highquality education (many with a specific focus on the collective Black), as well as those who are just beginning their careers as critical mathematics classroom teachers, teacher educators, and/or education researchers.
The purpose behind the creation of JUME was and continues to be to create a movement of change in mathematics education (Matthews, 2008). Over the past 8 years, JUME has offered different statements-that is, different knowledges (cf. Foucault, 1969/1972)-about "urban" mathematics education and, in turn, different statements about urban children and urban schools (Stinson, 2010). To date, web views of JUME content have exceeded 140,000, and Google Scholar citations have exceeded 400, with Google and Google Scholar web searches returning over 2,300 and 340 hits, respectively.
Four years ago, based on the power, in the Foucauldian sense (see, e.g., Foucault, 1980), of the academic edited handbook to produce and reproduce knowledge in both social science research, in general (e.g., Denzin & Lincoln, 1994, 2000, 2005), and mathematics education research, in particular (e.g., Grouws, 1992; Lester, 2007), I suggested that JUME be envisioned "as a both-and rather than an either-or research and pedagogical resource" (Stinson, 2011, p. 3). That is, JUME can function as both a peer-reviewed journal and an academic edited handbook on urban mathematics education. I then proceeded to provide the Table of Contents, if you will, of the first edition of the Handbook of Research on Urban Mathematics Teaching and Learning.
Here, I offer an expanded version of that Table of Contents. I also suggest here an expanded use for JUME beyond its use as a research and/or pedagogical resource. I suggest that JUME be used as an easily accessible resource guide to assist those mathematics education leaders and policy makers who will be busy in the coming months and years translating ESSA into policies and practices intended to ensure that every "urban student" succeeds in mathematics. This time around, however, I hope that members of the larger mathematics education community will neither allow politics to take the place of scientific inquiry (Boaler, 2008) nor erase "race" from a national conversation on mathematics teaching and learning (Martin, 2008), among other policy missteps and omissions of the past. As the single largest and most up-to-date collection of theoretical and empirical social science on urban mathematics teaching and learning, I hope those members of the mathematics education community who will be charged (both directly and indirectly) to translate ESSA will turn to JUME often as they consider Bullock's (2015) most recent direct and timely question: "Do all lives matter in mathematics education?"
"year": 2015,
"sha1": "afc8614bfe86934d99b69bf2aa9d9726168f1b04",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.21423/jume-v8i2a293",
"oa_status": "CLOSED",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "83191334ad88611d9bca6358ba8a50231caf2caa",
"s2fieldsofstudy": [
"Education",
"Mathematics"
],
"extfieldsofstudy": []
} |
16240207 | pes2o/s2orc | v3-fos-license | The solution of multi-scale partial differential equations using wavelets
Wavelets are a powerful new mathematical tool which offers the possibility to treat in a natural way quantities characterized by several length scales. In this article we will show how wavelets can be used to solve partial differential equations which exhibit widely varying length scales and which are therefore hardly accessible by other numerical methods. As a benchmark calculation we solve Poisson's equation for a 3-dimensional Uranium dimer. The length scales of the charge distribution vary by 4 orders of magnitude in this case. Using lifted interpolating wavelets the number of iterations is independent of the maximal resolution and the computational effort therefore scales strictly linearly with respect to the size of the system.
Introduction
Wavelets are a powerful new mathematical tool which offers the possibility to treat in a natural way quantities characterized by several length scales. In this article we will show how wavelets can be used to solve partial differential equations which are characterized by widely varying length scales and which are therefore hardly accessible by other numerical methods. The standard way to solve partial differential equations is to express the solution as a linear combination of so-called basis functions. These basis functions can for instance be plane waves, Gaussians or finite elements. Having discretized the differential equation in this way makes it amenable to a numerical solution. Wavelets are just another basis set which however offers considerable advantages over alternative basis sets. Its main advantages are: 1. The basis set can be improved in a systematic way: If one wants the solution of the differential equation with higher accuracy one can just add more wavelets in the expansion of the solution. This will not lead to any numerical instabilities.
2. Different resolutions can be used in different regions of space: If the solution of the differential equation is varying particularly rapidly in a particular region of space, one can increase the resolution in this region by adding more high resolution wavelets centered around this region.
3. There are few topological constraints for increased resolution regions: The regions of increased resolution can be chosen arbitrarily, the only requirement being that a region of higher resolution be contained in a region of the next lower resolution.

4. The matrix elements of the differential operators are very easy to calculate.

5. The numerical effort scales linearly with respect to system size: Three-dimensional problems of realistic size usually require a very large number of basis functions. It is therefore of utmost importance that the numerical effort scales only linearly (and not quadratically or cubically) with respect to the number of basis functions. If one uses iterative matrix techniques, this requirement is equivalent to two requirements, namely that the matrix vector multiplications which are necessary for all iterative methods can be done with linear scaling and that the number of matrix vector multiplications is independent of the problem size. The first requirement is fulfilled since the matrix representing the differential operator is sparse. The second requirement is related to the availability of a good preconditioning scheme, which can be easily found by analyzing the Fourier properties of wavelets.
A first tour of some wavelet families
Many families of wavelets have been proposed in the mathematical literature. If one wants to use wavelets for the solution of differential equations, one therefore has to choose one specific family which is most advantageous for the intended application. Within one family there are also members of different degree. We believe that the so-called bi-orthogonal interpolating wavelets [6] are the most useful ones in the context of differential equations and we will therefore mainly concentrate on this class. Each wavelet family is characterized by two functions, the mother scaling function φ and the mother wavelet ψ. For the case of a fourth order interpolating wavelet they are shown in Figure 1. Another family which will be introduced is the Haar wavelet family shown in Figure 2. It is too crude to be useful for any numerical work, but its simplicity will help us to illustrate some basic wavelet concepts. To obtain a basis set at a certain resolution level k one can use all the integer translations of the mother scaling function of some wavelet family.
Note that with this convention higher resolution corresponds to larger values of k. Exactly the same scaling and shifting operations can of course also be applied to the wavelets.
This set of wavelet basis functions can be added as a basis to the scaling functions as will be explained in the following for the case of the Haar wavelet family.
The Haar wavelet
In the case of the Haar family, any function which can exactly be represented at any level of resolution is necessarily piecewise constant. One such function is shown in Figure 3. Evidently this function can be written as a linear combination of the scaling functions (Equation 3), where $s^4_i = f(i/16)$. Another, more interesting, possibility consists of expanding a function with respect to wavelets of different resolution. This is possible because a scaling function (and wavelet) at resolution level k is always a linear combination of a scaling function and a wavelet at the next coarser level k − 1, as shown in Figure 4. Using this relation, we can write any linear combination of the two scaling functions $\phi^k_{2i}(x)$ and $\phi^k_{2i+1}(x)$ as a linear combination of $\phi^{k-1}_i(x)$ and $\psi^{k-1}_i(x)$. Denoting the expansion coefficients with respect to $\psi^k_i(x)$ as $d^k_i$, we obtain Equation (4). So to calculate the expansion coefficients with respect to the scaling functions at the next coarser level, we have to take an average over expansion coefficients at the higher resolution level. Because we have to take some weighted sum, these coefficients are denoted by s. To get the expansion coefficients with respect to the wavelet, we have to take some weighted difference, and the coefficients are accordingly denoted by d. The wavelet part contains mainly high frequency components and by doing this transformation we therefore peel off the highly oscillatory parts of the function. The remaining part, represented by the coefficients $s^{k-1}_i$, is therefore smoother. For the case of our example in Figure 3, the remaining scaling function part after one transformation step is shown in Figure 5.
For any data set whose size is a power of 2, we can now apply this transformation repeatedly. In each step the number of s coefficients will be cut into half, until at the coarsest level only a single s coefficient remains (Equation 5). Note that in both cases we need exactly 16 coefficients to represent the function. Functional representations of this type will be the focus of this article.
By doing a backward wavelet transform, we can go back to the original expansion of Equation 3. Starting at the lowest resolution level, we have to split up each scaling function and wavelet on the coarse level into scaling functions at the finer level.
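One analysis and one synthesis sweep of this Haar transform can be written out directly (our sketch with assumed names; the normalization follows the convention used here, where the s coefficients are function values):

```python
def haar_forward_sweep(s):
    """Analysis: s^{k-1}_i = (s^k_{2i} + s^k_{2i+1})/2, d^{k-1}_i = (s^k_{2i} - s^k_{2i+1})/2."""
    coarse = [(s[2 * i] + s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]
    detail = [(s[2 * i] - s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]
    return coarse, detail

def haar_backward_sweep(coarse, detail):
    """Synthesis: s^k_{2i} = s^{k-1}_i + d^{k-1}_i, s^k_{2i+1} = s^{k-1}_i - d^{k-1}_i."""
    s = []
    for c, d in zip(coarse, detail):
        s.extend([c + d, c - d])
    return s

def haar_transform(values):
    """Full forward transform of a length-2^K list; returns (coarsest s, detail levels coarse-to-fine)."""
    s, details = list(values), []
    while len(s) > 1:
        s, d = haar_forward_sweep(s)
        details.append(d)
    return s[0], details[::-1]

# round-trip check on 16 samples of a piecewise constant function
f = [float(i % 5) for i in range(16)]
s0, details = haar_transform(f)
s = [s0]
for d in details:
    s = haar_backward_sweep(s, d)
assert all(abs(a - b) < 1e-12 for a, b in zip(s, f))
```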
The concept of Multi-Resolution Analysis
In the previous sections a very intuitive introduction to wavelet theory was given. The formal theory behind wavelets is called Multi-Resolution Analysis [2] (MRA). The reader interested in the formal theory can consult Daubechies' book. We will list here only a few facts which are useful for numerical work. A bi-orthogonal wavelet family of degree m is characterized by 4 finite filters denoted by h_j, h̃_j, g_j, g̃_j. A filter is just a short vector which is used in convolutions. These filters satisfy certain orthogonality and symmetry relations. Scaling functions and wavelets at a coarse level can be written as linear combinations of scaling functions at a higher resolution level. These important relations are called refinement relations.
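As a hedged sketch of the refinement relations (referred to later as Equations 7 and 9 for φ and φ̃, with analogues for ψ and ψ̃), in an unnormalized convention consistent with the interpolating-wavelet filters quoted further below; the placement of the factor of 2 is an assumption:

\[
\varphi(x) = \sum_j h_j\,\varphi(2x - j), \qquad \psi(x) = \sum_j g_j\,\varphi(2x - j),
\]
\[
\tilde{\varphi}(x) = 2\sum_j \tilde{h}_j\,\tilde{\varphi}(2x - j), \qquad \tilde{\psi}(x) = 2\sum_j \tilde{g}_j\,\tilde{\varphi}(2x - j).
\]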
The expansion coefficients at different resolution levels are related by the wavelet transform equations: the analysis (forward) transform (Equation 11) and the wavelet synthesis (backward) transform (Equation 12). These two equations are generalizations of Equations (4) and (6), which we derived in an intuitive way and with a different normalization convention. The fundamental functions satisfy the orthogonality relations (Equations 13 to 16). The scaling function is usually normalized to
∫ φ(x) dx = 1.   (17)
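As a hedged sketch of the analysis and synthesis steps of such a bi-orthogonal filter bank (the exact index placement and normalization factors of Equations 11 and 12 are assumptions here, not a reproduction of the paper's convention):

\[
s^{k-1}_i = \sum_j \tilde{h}_{j-2i}\, s^k_j, \qquad d^{k-1}_i = \sum_j \tilde{g}_{j-2i}\, s^k_j,
\]
\[
s^k_j = \sum_i h_{j-2i}\, s^{k-1}_i + \sum_i g_{j-2i}\, d^{k-1}_i.
\]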
The fast wavelet transform
Let us first look at the forward transform given by Equation 11. The peeling off of the high frequency components in the forward transform can be illustrated as follows. Just two arrays of length n (where n is a power of 2) are necessary to do the transform: in each sweep the currently active s coefficients are read from one array and the resulting coarser s and d coefficients are written into the other, so the two arrays are used alternately until the final data is obtained. Note that this transformation from the original data to the final data corresponds exactly to the transformation done in an intuitive way to get from Equation 3 to Equation 5. Just as in the case of a Fast Fourier transform we have log2(n) sweeps to do a full transform. However, in the case of the wavelet transform the active data set (the s coefficients) is cut in half in each sweep. If our filters h and g have length 2m, the operation count is then given by 2m(n + n/2 + n/4 + ...). Replacing the finite geometric series by its infinite value, the total operation count is thus given by 4mn. The backward transform (Equation 12) proceeds analogously, sweep by sweep from coarse to fine. As can easily be seen, the operation count is again 4mn and again it can be done with 2 arrays of length n. Since each sweep in a wavelet transform is a linear operation, it can be represented by a matrix. Denoting the matrix for one sweep in a forward transform by F̃ and in a backward transform by B, we have the relation given in Equation 18, where the tilde on the matrix means that the filter coefficients necessary to fill the matrix are replaced by their dual counterparts. Obviously all these matrices are sparse and banded.
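A sketch of one analysis sweep with general bi-orthogonal filters (not the paper's code; the index convention follows the hedged formulas given earlier, and periodic wrap-around at the boundaries is an additional assumption):

```python
import numpy as np

def analysis_sweep(s, h_dual, g_dual):
    """One forward sweep with general bi-orthogonal filters:
    convolve with the dual filters and downsample by 2 (periodic boundaries)."""
    n, m2 = len(s), len(h_dual)          # m2 = filter length (2m in the text)
    shift = m2 // 2                       # assumed centering of the filter
    coarse = np.zeros(n // 2)
    detail = np.zeros(n // 2)
    for i in range(n // 2):
        for j in range(m2):
            idx = (2 * i + j - shift) % n
            coarse[i] += h_dual[j] * s[idx]
            detail[i] += g_dual[j] * s[idx]
    return coarse, detail

# Each sweep costs roughly 2 * m2 * n_active operations; summing the active
# sizes n + n/2 + n/4 + ... over the log2(n) sweeps reproduces the 4mn total
# operation count quoted in the text.
```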
Backward wavelet transforms can also be used to make plots of scaling functions and wavelets. To generate the scaling function we start with a data set where s^0_0 = 1 and d^k_i = 0 for all possible i's and k's up to a maximum resolution level k = K. In the wavelet case the initial data set is s^0_0 = 0, d^0_1 = 1, and d^k_i = 0 for all other values of i and k up to the maximal resolution K. By doing repeated backward transform sweeps, we express these two functions by skinnier and skinnier scaling functions, and the s coefficients will finally be the functional values within the resolution of the eye.
Interpolating wavelets
In addition to being advantageous as basis sets, interpolating wavelets are also conceptually the simplest wavelets and we will therefore briefly describe their construction. The construction of interpolating wavelets is closely connected to the question of how to construct a continuous function f(x) if only its values f_i on a finite number of grid points i are known. One way to do this is by recursive interpolation. In a first step we interpolate the functional values on all the midpoints by using for instance the values of two grid points to the right and of two grid points to the left of the midpoint. These four functional values actually allow us to construct a third order polynomial and we can then evaluate it at the midpoint. In the next step, we take this new data set, which is now twice as large as the original one, as the input for a new midpoint interpolation procedure. This can be done recursively ad infinitum until we have a quasi-continuous function.
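For the third order (cubic) case described here, the midpoint value follows from evaluating the interpolating polynomial through the four surrounding grid values at the midpoint; a short worked form of this standard formula (consistent with the filter coefficients quoted further below) is:

\[
f_{i+1/2} \;=\; \tfrac{1}{16}\left(-f_{i-1} + 9 f_i + 9 f_{i+1} - f_{i+2}\right).
\]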
Let us now show how this interpolation prescription leads to a set of basis functions. Denoting by the Kronecker δ_{i−j} a data set which has a nonzero entry only at the j-th position, we can write any initial data set also as a linear combination of such Kronecker data sets: f_i = Σ_j f_j δ_{i−j}. Now the whole interpolation procedure is clearly linear, i.e. the sum of the interpolated values of two functions is equal to the interpolated value of the sum of these two functions. This means that we can instead also take all the Kronecker data sets as the input for separate interpolation procedures, to obtain a set of functions φ(x − j). The final interpolated function is then identical to f(x) = Σ_j f_j φ(x − j). If the initial grid values f_i were the functional values of a polynomial of degree less than four, we obviously will have exactly reconstructed the original function from its values on the grid points. Since any smooth function can locally be well approximated by a polynomial, these functions φ(x) are good basis functions also in the case where f is not a polynomial, and we will use them as scaling functions to construct a wavelet family.
The first construction steps of an interpolating scaling function are shown below for the case of linear interpolation. The initial Kronecker data set is denoted by the big dots. The additional data points obtained after the first interpolation step are denoted by medium size dots and the additional data points obtained after the second step by small dots.
Continuing this process ad infinitum will then result in the function shown in the left panel of Figure 6. If a higher order interpolation scheme is used, the function shown in the right panel of Figure 6 is obtained. By construction it is clear that φ(x) has compact support. If an (m − 1)-th order interpolation scheme is used, the filter has a finite length determined by m, and the support interval of the scaling function is correspondingly finite. It is also not difficult to see that the functions φ(x) satisfy the refinement relation. Let us again consider the interpolation ad infinitum of a Kronecker data set which has everywhere zero entries except at the origin. We can now split up this process into the first step, where we calculate the half-integer grid point values, and a remaining series of separate ad infinitum interpolations for all half-integer Kronecker data sets, which are necessary to represent the data set obtained by the first step. Doing the ad-infinitum interpolation for a half integer Kronecker data set with a unit entry at position j, we obviously obtain the same scaling function, just compressed by a factor of 2, φ(2x − j). If we are using a (m − 1)-th order interpolation scheme (i.e. m input data for the interpolation process), we thus get a relation of exactly the form of the refinement relation Equation 7, from which we can identify the filter h. For the case of third order interpolation the numerical values of h follow from the standard interpolation formula and are given by {−1/16, 0, 9/16, 1, 9/16, 0, −1/16}.
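As an illustration only (not code from the paper), the construction just described can be carried out numerically: starting from a Kronecker delta on the integer grid, repeated cubic midpoint interpolation converges to the fourth order interpolating scaling function. The grid extent and number of refinement steps below are arbitrary choices.

```python
import numpy as np

# Midpoint weights of third order (cubic) interpolation; these are the
# odd-indexed entries of the filter h = {-1/16, 0, 9/16, 1, 9/16, 0, -1/16}.
W = np.array([-1/16, 9/16, 9/16, -1/16])

def refine(values):
    """One dyadic refinement step: keep the existing grid values and insert
    cubic-interpolated midpoints between them (zero padding at the edges)."""
    padded = np.concatenate(([0.0, 0.0], values, [0.0, 0.0]))
    out = np.zeros(2 * len(values) - 1)
    out[0::2] = values                               # old points are kept
    for i in range(len(values) - 1):
        out[2 * i + 1] = W @ padded[i + 1:i + 5]     # four neighbours of the midpoint
    return out

# Approximate the interpolating scaling function by refining a Kronecker delta.
phi = np.zeros(17)
phi[8] = 1.0                                         # delta centered on the grid [-8, 8]
for _ in range(6):                                   # six refinement steps
    phi = refine(phi)
x = np.linspace(-8.0, 8.0, len(phi))                 # abscissas for plotting phi
```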
Let us next determine the filter h̃. Let us consider a function f(x) which is bandlimited in the wavelet sense, i.e. which can exactly be represented by a superposition of scaling functions at a certain resolution level K.
It then follows from the orthogonality relation Equation 13 that the expansion coefficients s^K_j are obtained by integrating f against the dual scaling functions. Now we have seen above that with respect to interpolating scaling functions, a bandlimited function is just any polynomial of degree less than or equal to m − 1, and that in this case the expansion coefficients s^K_j are just the functional values at the grid points (Equation 19). We therefore conclude that the dual scaling function φ̃ is the delta function.
Obviously the delta function satisfies a trivial refinement relation, δ(x) = 2δ(2x), and from Equation 9 we conclude that h̃_j = δ_j. From the symmetry relations for the filters, the two remaining filters g̃_i and g_i can be determined, and we have thus completely specified our wavelet family. Using these filters we can then determine the wavelet ψ and its dual counterpart ψ̃. We see that the interpolating wavelet is a very special case in that its scaling function and wavelet have the same functional form and that the dual functions are related to the delta function. The non-dual functions are shown in Figure 1.
Lifting [5] is a very useful technique to modify an existing family of wavelets to meet specific needs. We can for instance lift the interpolating wavelets to obtain a new family whose wavelet has more vanishing moments
M_l = ∫ ψ(x) x^l dx,
which will for instance improve the frequency properties of the wavelet.
Expanding functions in a wavelet basis
As was demonstrated in the case of the Haar wavelet, there are two possible representations of a function within the framework of wavelet theory. The first one is called scaling function representation and involves only scaling functions. The second is called wavelet representation and involves wavelets as well as scaling functions. Both representations are completely equivalent and exactly the same number of coefficients are needed in the case where one has uniform resolution.
The scaling function representation is an expansion in terms of scaling functions at the maximal resolution level K_max; the coefficients s^{Kmax}_j can be calculated by integration through Equation 22. Once we have a set of coefficients s^{Kmax}_j, we can use a full forward wavelet transform to obtain the coefficients of the wavelet representation. Alternatively, one could also directly calculate the d coefficients by integration (Equation 28); Equation 28 follows from the orthogonality relations 14 to 16. So we see that if we want to expand a function either in scaling functions or wavelets, we have to perform integrations at some point to calculate the coefficients. For general wavelet families this integration can be fairly cumbersome [3] and requires, especially in 2 and 3 dimensions, a substantial number of integration points. Furthermore it is not obvious how to do the integration if the function is only given in tabulated form. The interpolating wavelets discussed above are the glorious exception. Since the dual scaling function is a delta function (23) and since the dual wavelet is a sum of delta functions (25), one or a few data points are sufficient to do the integration exactly. One will therefore get exactly the same number of coefficients as one has data points, and one has an invertible one-to-one mapping between the functional values on the grid and the expansion coefficients. This is even true in the case of nonuniform data sets, where we necessarily have to calculate the s and d coefficients directly by integration using Equation 28. As follows from Equations 23 and 25, one just needs the functional value at the data point at which the wavelet will be centered and a few data points at one lower resolution level around this center. If one wants to calculate the coefficient of an interpolating wavelet centered at the high resolution grid point indicated by the fat arrow in the figure below, one needs in the case of the 4-th order interpolating wavelets the 4 additional points indicated by thin arrows, which belong to a coarser grid and are therefore always available even if the fine grid does not extend into this region.
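As an illustration of this property (a sketch, not the paper's code, and with the normalization of the d coefficient an assumed convention): for fourth order interpolating wavelets the wavelet coefficient at a fine grid point is simply the function value there minus the cubic prediction obtained from the two coarse grid points on each side.

```python
def detail_coefficient(f_fine, f_coarse_left2, f_coarse_left1,
                       f_coarse_right1, f_coarse_right2):
    """d coefficient of a 4th order interpolating wavelet centered at a fine
    grid point: function value minus the cubic midpoint prediction from the
    four surrounding coarse grid values (normalization assumed)."""
    prediction = (-f_coarse_left2 + 9 * f_coarse_left1
                  + 9 * f_coarse_right1 - f_coarse_right2) / 16.0
    return f_fine - prediction
```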
In the case where one wants to represent functions with several length scales, which need an inhomogeneous real space grid structure, the wavelet representation allows a much more compact representation than the scaling function representation, since one can neglect all the tiny d coefficients in the regions where one has little variation. To illustrate this, let us look at a function f which exhibits 8 different length scales. If one expands one simple Gaussian exp(−x^2) with respect to 4-th order interpolating scaling functions with a resolution of 1/16, one gets a reasonably small error of 10^{-6}. For the multi-scale function f, this error increases to more than 10^{-2} with the same resolution. If one however uses a scheme with 32 wavelets on 5 additional resolution levels to improve the resolution as one approaches the origin, one can again represent the function with an error of roughly 10^{-6} (it turns out that the expected 8 additional levels are not all needed). The total number of coefficients needed to represent the function in the interval [−2; 2] is then 4 × 16 coefficients for the equal resolution (1/16) scaling function part plus 5 × 32 coefficients for the resolution enhancement with the wavelets, which makes altogether 224 coefficients. This has to be compared with the 1024 scaling function coefficients which would be needed to represent the function over the whole interval with the maximum resolution of 1/256, which we have obtained around the origin with this data compression scheme.
Wavelets in 2 and 3 dimensions
The easiest way to construct a wavelet basis in higher dimensional spaces is by forming product functions [2]. For simplicity of notation we will only consider here the 2-dimensional case, the generalization to higher dimensional spaces being obvious.
The space of all scaling functions of resolution level k is spanned by the product functions φ^k_i(x) φ^k_j(y). The wavelets consist of three types of products, φ^k_i(x) ψ^k_j(y), ψ^k_i(x) φ^k_j(y) and ψ^k_i(x) ψ^k_j(y). A wavelet transform step in the 2-dimensional setting is done by first transforming along the x and then along the y direction (or vice versa).
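A minimal sketch (illustration only) of such a 2-dimensional analysis step, using the simple Haar averaging/differencing from the one-dimensional example; transforming first along x and then along y produces the three types of detail blocks plus one coarse block.

```python
import numpy as np

def haar_step_1d(a, axis):
    """One Haar analysis sweep along the given axis of a 2D array."""
    even = np.take(a, np.arange(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, np.arange(1, a.shape[axis], 2), axis=axis)
    return 0.5 * (even + odd), 0.5 * (even - odd)

def haar_step_2d(a):
    """One 2D analysis step: x direction first, then y direction.
    Returns the coarse block ss and the three detail blocks sd, ds, dd."""
    s_x, d_x = haar_step_1d(a, axis=1)       # transform along x (columns)
    ss, sd = haar_step_1d(s_x, axis=0)       # then along y (rows)
    ds, dd = haar_step_1d(d_x, axis=0)
    return ss, sd, ds, dd

blocks = haar_step_2d(np.random.rand(8, 8))  # four 4x4 blocks
```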
The standard operator form
In a bi-orthogonal wavelet basis it is natural to solve a differential equation in the collocation sense. Let us recall that in the collocation method one has two functional spaces, the space of the basis functions which are used to represent the solution and the space of the test functions which are used to multiply the differential equation from the left to obtain a linear system of equations. In our case the expansion set are the scaling functions and wavelets, while the test set are their dual counterparts. Let us consider the case of Poisson's equation (Equation 33). Given the expansion of the charge density ρ in a wavelet basis, we are looking for the wavelet expansion coefficients of the potential V.
Plugging the expansions for ρ and V (34) and (35) into Poisson's equation 33 and multiplying from the left with the dual wavelet collocation test space, we obtain a system of equations in which v is the vector containing both the s and d coefficients of the potential, ρ is the corresponding vector for the charge density, and the matrix A_s represents the Laplacian in this wavelet basis; one says that it has standard form. This standard form is graphically shown in Figure 7. The problem with the standard form is that it is first of all rather complicated. There is coupling between all resolution levels, and one has to calculate many different types of matrix elements, corresponding to all possible products of wavelets and scaling functions at different resolution levels and positions. The second point is that there are many blocks in that matrix which have few or no zero entries. Let us look at the blocks representing the coupling between the scaling functions at the coarsest resolution level and the wavelets at the different resolutions. In general each such scaling function will extend over the whole computational volume and will therefore overlap with all the wavelets at any position. All these blocks will consequently have nonzero entries only. So this standard matrix form has more nonzero entries than we would like to have for optimal efficiency in the matrix vector multiplications which are required for all iterative linear equation solvers.
The non-standard operator form
The so-called nonstandard [8] form gives a much simpler and more efficient representation of our matrix. To derive it, let us first assume that our potential V and charge ρ are given in a scaling function basis. The Laplacian is then represented by a matrix A whose elements involve only scaling functions. Now we can of course perform one step of a forward wavelet transform on all our data, i.e. both on the vector to be multiplied with the matrix and on the vector which is the result of this matrix times vector multiplication. Correspondingly we then have to transform the matrix A using the matrices whose properties are given in Equation 18. We see that our input and output vectors v and ρ also have to be adapted to this matrix structure, leading to a redundant copy of the S data set.
We can now recursively apply this 2-step procedure on the ⟨S|S⟩ block of the resulting matrices. Doing this we obtain the so-called non-standard form, which is graphically visualized in Figure 8. As we see, we have now completely decoupled different resolution levels, since there are no blocks in this matrix between different levels. The coupling between different levels just enters through the wavelet transforms which have to be interleaved with the application of this nonstandard operator form. We also see that all the nonzero blocks of this nonstandard matrix representation are strictly banded, and the application of this matrix to a vector therefore scales linearly.
The structure of the matrix in Figure 8 is primarily valid for the case of uniform resolution where all the possible d coefficients at the highest resolution level are nonzero.
It can however easily be seen that this nonstandard form retains its advantage in a case of varying resolution where only some of the d coefficients are nonzero. If the nonredundant input data set is sparse, the redundant input data set will be sparse as well. Since all the blocks are banded, the redundant output set will be sparse as well. Finally the nonredundant output set will then be sparse as well.
Calculation of differential operators in a wavelet basis
As we have seen in the preceding chapter, we need the matrix elements for the application of an operator in the nonstandard form. Matrix elements on different resolution levels are related by simple scaling relations. So we just have to calculate these 4 types of matrix elements for one resolution level. On a certain resolution level, we can use the refinement relations to express the matrix elements involving wavelets in terms of matrix elements involving scaling functions (at a better resolution level) only. So we just have to calculate the basic integral a_i, the matrix element of the differential operator between a dual scaling function and a scaling function shifted by i grid points. Using the refinement relations Equations 7 and 9 for φ and φ̃, we obtain a homogeneous linear system for the a_i: we have to find the eigenvector a of a matrix A_{i,j}, built from the filter coefficients, associated with the eigenvalue 2^{-l} (Equation 44). As it stands this eigensystem has a solution only if the rank of the matrix A − 2^{-l}I is less than its dimension. For a well defined differential operator, i.e. if l is less than the degree of smoothness of the scaling function, this will be the case.
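A hedged sketch of how such an eigenvalue problem arises (the exact prefactors depend on the normalization of the refinement relations assumed earlier, so this is illustrative rather than a reproduction of Equations 44 and 45): with a_i = ∫ φ̃(x) (d^l/dx^l) φ(x − i) dx, inserting the refinement relations and substituting y = 2x gives

\[
a_i = 2^{l} \sum_{j,k} \tilde{h}_j\, h_k\, a_{2i+k-j},
\qquad\text{i.e.}\qquad
2^{-l} a_i = \sum_{m} A_{i,m}\, a_m
\quad\text{with}\quad
A_{i,m} = \sum_j \tilde{h}_j\, h_{m-2i+j}.
\]

For interpolating wavelets, where h̃_j = δ_j, this reduces to A_{i,m} = h_{m−2i}.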
The system of equations 44 determines the a_j's only up to a normalization factor. For the case of interpolating wavelets the normalization condition is easily found from the requirement that one obtains the correct result for the function x^l. From the normalization of the scaling function (17) and from elementary calculus, it follows that the exact result is given by Equation 46. On the other hand we know that we can expand any polynomial of low enough degree exactly with the interpolating polynomials. The expansion coefficients are just i^l by Equation 22, so we obtain Equation 47. By comparing Equations 46 and 47 we thus obtain the normalization condition. The interpolating wavelet family also offers an important advantage for the calculation of differential operators: whereas in general derivative filters extend over the interval [−2m; 2m], their effective filter length is only [−m + 2; m − 2]. Since higher-dimensional wavelets are products of one-dimensional ones, differential operators in the higher-dimensional case can easily be derived from the one-dimensional results.
The non-standard operator form can not only be used for the application of differential operators, but also for other operations. If one wants to transform for instance from one wavelet family φ to another wavelet family Φ, the basic integral changes accordingly; another use is for scalar products, with a correspondingly modified fundamental integral.
Solving Poisson's equation for the U2 dimer
Poisson's equation is a prototype differential equation and we therefore want to solve it as an illustration of wavelet theory. To demonstrate the power of the wavelet method we applied it to the most difficult system we could think of in the area of electronic structure calculations, namely the calculation of the electrostatic potential of a three dimensional U2 dimer [10]. In this example, we clearly find widely varying length scales. The valence electrons have an extension of 5 atomic units, the 1s core electrons of 2/100 atomic units, and the nucleus itself was represented by a charge distribution with an extension of 1/2000 atomic units. So altogether the length scales varied by 4 orders of magnitude, and two regions of increasing resolution (around each nucleus) were needed. In order to have quasi-perfect natural boundary conditions, we embedded the molecule in a computational volume of side length 10^4 atomic units. Altogether this necessitated 22 levels of resolution. Even though the potential itself varies by many orders of magnitude, we were able to calculate the solution with typically 7 digits of accuracy. We believe that it would not be possible with any other method to solve this kind of benchmark problem.
The solution of Poisson's equation consists of several steps. Initially we have to find the wavelet expansion for a data set on a nonuniform real space grid structure, shown in Figure 9, which represents the charge density. The resolution needed can in this example be estimated from the known extension and variation of the different atomic shells. Analogously to the one-dimensional case, this expansion can also easily be obtained for higher dimensional interpolating wavelets, since all the dual functions are related to delta functions. Let us point out that also in this case the mapping from the real space representation to the wavelet representation is invertible, and we could thus get back exactly the same real space values if we evaluated the wavelet expansion on the grid points.
Next we start an iteration loop for the potential. First we have to apply the Laplace operator to an approximate potential using the non-standard operator form. Subtracting from this result the charge density gives the residue vector, which is the basis for all iterative methods [4], such as steepest descent and conjugate gradient methods. Unfortunately the condition number of the Laplace matrix worsens when more high resolution levels are added, and the number of iterations needed to obtain convergence would dramatically increase if we used straightforward iterative methods. It is therefore absolutely necessary to use a preconditioned iterative method, which will give a condition number that is independent of the maximal resolution. In a preconditioning scheme one has to find an approximate inverse of the Laplace matrix. If the Laplace matrix is strongly diagonally dominant, then just the inverse of the diagonal part (which is again diagonal) will be a good approximate inverse. Whether the Laplace matrix is strongly diagonally dominant depends on the kind of wavelet family which is used.
In a plane wave representation the Laplace matrix is strictly diagonal. If therefore our wavelet family has good frequency localization properties, the resulting matrix will be strongly diagonally dominant. Unfortunately our favorite interpolating wavelets have a very poor frequency localization, making an iterative solution practically impossible. It is therefore necessary to do the preconditioning step within another family such as the lifted interpolating wavelets, which have much better frequency localization properties, as shown in Figure 10. Their improved frequency localization is related to the fact that several moments of the wavelet vanish. The spectrum is shown for 3 wavelets on neighboring resolution levels; one has reasonable frequency separation in the lifted but not in the unlifted case.
As discussed above the transformation into another wavelet family can also be done with the help of the non-standard operator form. The preconditioned residue vector is then used to update the potential and we go back to the beginning of the iteration. Using lifted interpolating wavelets with 2 vanishing moments we were able to reduce the norm of the residue vector by one order of magnitude with 3 iterations independent of the maximal resolution. Despite their poor frequency localization properties, unlifted interpolating wavelets have recently also been proposed for the solution of Poisson's equation [9].
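The iteration just described can be summarized in a short sketch; the following code is purely illustrative and all helper callables are hypothetical placeholders, not routines from the paper.

```python
def solve_poisson(rho, v0, apply_laplacian, to_lifted, from_lifted,
                  precondition, n_iter=50, step=1.0):
    """Preconditioned steepest-descent style loop for the wavelet Poisson
    solve described in the text.  All operator arguments are user-supplied
    callables (hypothetical placeholders): apply_laplacian applies the
    non-standard form, to_lifted/from_lifted change to and from a family with
    good frequency localization, precondition applies the approximate
    (diagonal) inverse in that family."""
    v = v0
    for _ in range(n_iter):
        residue = apply_laplacian(v) - rho        # residue vector
        residue = from_lifted(precondition(to_lifted(residue)))
        v = v - step * residue                    # update the potential
    return v
```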
Outlook and conclusions
Since we used mainly interpolating wavelets, all we did was essentially interpolation, which is one of the oldest techniques in numerical analysis. However, the framework provided by wavelet theory puts this whole interpolation procedure on the new and powerful basis of multi-resolution analysis, thus considerably expanding the scope of interpolation based techniques. In particular it assigns basis functions to certain interpolation schemes. Wavelet based techniques thus allow us to solve differential equations which have several length scales, and to do this with linear scaling. It is thus to be expected that wavelet based techniques will catalyze progress in many fields of science and engineering where such problems exist. A detailed tutorial-style book describing how to use wavelets for the solution of partial differential equations will soon be published by the authors.
Social media discourse and internet search queries on cannabis as a medicine: A systematic scoping review
The use of cannabis for medicinal purposes has increased globally over the past decade since patient access to medicinal cannabis has been legislated across jurisdictions in Europe, the United Kingdom, the United States, Canada, and Australia. Yet, evidence relating to the effect of medical cannabis on the management of symptoms for a suite of conditions is only just emerging. Although there is considerable engagement from many stakeholders to add to the evidence base through randomized controlled trials, many gaps in the literature remain. Data from real-world and patient reported sources can provide opportunities to address this evidence deficit. This real-world data can be captured from a variety of sources, such as those found in routinely collected health care and health services records, which include but are not limited to patient-generated data from medical, administrative and claims records, patient-reported data from surveys, wearable trackers, patient registries, and social media. In this systematic scoping review, we seek to understand the utility of online user-generated text in providing insight into the use of cannabis as a medicine. In this scoping review, we aimed to systematically search published literature to examine the extent, range, and nature of research that utilises user-generated content to examine cannabis as a medicine. The objective of this methodological review is to synthesise primary research that uses social media discourse and internet search engine queries to answer the following questions: (i) In what way is online user-generated text used as a data source in the investigation of cannabis as a medicine? (ii) What are the aims, data sources, methods, and research themes of studies using online user-generated text to discuss the medicinal use of cannabis? We conducted a manual search of primary research studies which used online user-generated text as a data source using the MEDLINE, Embase, Web of Science, and Scopus databases in October 2022. Editorials, letters, commentaries, surveys, protocols, and book chapters were excluded from the review. Forty-two studies were included in this review: twenty-two studies used manually labelled data, four studies used existing meta-data (Google Trends/geo-location data), two studies used data that was manually coded using crowdsourcing services, two used automated coding supplied by a social media analytics company, and fifteen used computational methods for annotating data. Our review reflects a growing interest in the use of user-generated content for public health surveillance. It also demonstrates the need for the development of a systematic approach for evaluating the quality of social media studies and highlights the utility of automatic processing and computational methods (machine learning technologies) for large social media datasets. This systematic scoping review has shown that user-generated content as a data source for studying cannabis as a medicine provides another means to understand how cannabis is perceived and used in the community. As such, it provides another potential 'tool' with which to engage in pharmacovigilance of, not only cannabis as a medicine, but also other novel therapeutics as they enter the market.
Introduction
Genetic analysis of ancient cannabis indicates the plant Cannabis sativa was first cultivated for use as a medicinal agent up to 2400 years ago [1]. From the 1800s, people in the United States (US) widely used cannabis as a medicine, either by prescription or as an over-the-counter therapeutic [2]. Yet by the mid-20th century, cannabis use was prohibited in many parts of the developed world with the passing of legislation in the US, the United Kingdom (UK) and various European countries that proscribed its use [3][4][5][6]. Since the 2000s, the use of cannabis for medicinal purposes has been decriminalized in many countries including Israel, Canada, the Netherlands, the United States, the United Kingdom, and Australia [7][8][9]. More recently, new evidence regarding the clinical effect of medical cannabis on the management of symptoms for some conditions [10] has triggered public interest in cannabis and cannabis-derived products [11,12], resulting in a global trend towards public acceptance, and subsequent legalisation, of cannabis for both medicinal and non-medicinal use.
There is emerging evidence of cannabis efficacy for childhood epilepsy, spasticity, and neuropathic pain in multiple sclerosis, acquired immunodeficiency syndrome (AIDS) wasting syndrome, and cancer chemotherapy-induced nausea and vomiting [13][14][15]. Although researchers are investigating cannabis for treating cancer, psychiatric disorders [16], sleep disorders [17], chronic pain [18] and inflammatory conditions such as rheumatoid arthritis [19], there is currently insufficient evidence to support its clinical use. Scientific studies on emerging therapeutics typically exclude vulnerable populations such as pregnant women, young people, the elderly, patients with multimorbidity and polypharmacy, and this limits the availability of evidence for cannabis effectiveness across these population groups [20].
Cannabis as medicine is associated with a rapidly expanding industry [21]. Patient demand is increasing, as is reflected in an increasing number of approvals for prescriptions over time [22], with one study showing that 61% of Australian GPs surveyed reported one or more patient enquiries regarding medical cannabis [23]. Accompanying this increasing demand is sophisticated marketing by medicinal cannabis companies that leverages evidence from a small number of studies to promote their products [24,25]. In light of this, concerns regarding patient safety are warranted, especially when marketing for some cannabinoid products is associated with inadequate labelling and/or inappropriate dosage recommendations [26]. These concerns are compounded by the downscheduling of over-the-counter cannabis products which do not require a prescription [27] and the illicit drug market [28]. Given this dynamic interplay between marketing, product innovation, regulation, and consumer demand, innovative methods are required to augment existing established approaches to the surveillance and monitoring of emerging and unapproved drugs.
Although there is considerable engagement from many stakeholders to improve the scientific evidence regarding the efficacy and safety of cannabis through randomised controlled trials, many gaps remain in the literature [29]. Yet data from real-world and patient reported data sources could provide opportunities to address this evidence deficit [30]. This real-world data can be captured from a variety of sources such as found in routinely collected health care and health services records that include but are not limited to patient generated data from medical, administrative and claims data, as well as patient reported data from surveys, wearable trackers, patient registries, and social media [31][32][33].
People readily consult the internet when looking for and sharing health information [34,35]. According to the 2017 Health Information National Trends Survey, almost 78% of US adults used online searches first to inquire about health or medical information [34]. Data resulting from these online activities is labelled 'user generated' and is increasingly becoming a component of surveillance systems in the health data domain [36]. Monitoring user-generated data on the web can be a timely and inexpensive way to generate population-level insights [37]. The collective experiences and opinions shared online are an easily accessible, wide-ranging data source for tracking emerging trends, which might be unavailable or less noticeable to other surveillance systems.
The objective of this systematic scoping review is to understand the utility of online user generated text in providing insight into the use of cannabis as a medicine. In this review, we aim to systematically search published literature to examine the extent, range, and nature of research that utilises user-generated content to examine cannabis as a medicine [38][39][40]. The objective of this review is to synthesise primary research that uses social media discourse and internet search engine queries to answer the following questions: • In what way is online user-generated text used as a data source in the investigation of cannabis as a medicine?
• What are the aims, data sources, methods, and research themes of studies using online user-generated text to discuss the medicinal use of cannabis?
Search strategy
For this review, we used an established methodological framework for scoping reviews to inform our methodology, and we reported the review in accordance with the PRISMA reporting guidelines [38][39][40][41]. Literature database queries were developed for four categories of studies. The first three categories used social media text as a data source; the fourth relied on internet search engine query data. For the first category, the database queries combined words used to describe social media forums, cannabis-related keywords and general medical-related keywords (Table 1 Category 1). The second category also included the social media and cannabis-related keywords, but used keywords specific to psychiatric disorders, for which the use of medical cannabis has been described. Our search terms for this second category were informed by a systematic review of medicinal cannabis for psychiatric disorders [16] (Table 1 Category 2). The third category included social media and cannabis-related keywords but focused on non-psychiatric medical conditions for which cannabis is sometimes used (Table 1 Category 3). The fourth category included studies using internet search engine queries as a data source; there were no medical conditions included in these searches (Table 1 Category 4). The inclusion criteria for this review were: (i) peer reviewed research studies, (ii) peer reviewed conference papers, (iii) studies which used online user-generated text as a data source, and (iv) social media research that was either directly focused on cannabis and cannabis products that have an impact on health, or were health-related studies that found medicinal use of cannabis.
Exclusion criteria comprised: (i) editorials, letters, commentaries, surveys, protocols and book chapters; (ii) studies that used social media for recruiting participants; (iii) studies where the full text of the publication was not available; (iv) conference abstracts; (v) studies primarily focused on electronic nicotine delivery systems adapted to deliver cannabinoids; (vi) studies that used bots or autonomous systems as the main data source; and (vii) studies that focused exclusively on synthetic cannabis.
All studies captured by the search queries listed in Table 1 were uploaded into Excel to enable all duplicates to be removed. Following this, all titles and abstracts were reviewed. The full text articles that were identified for inclusion following the screening process were then independently critiqued by pairs of reviewers using a checklist developed for this study. The purpose of the checklist that we developed for this systematic scoping review was to provide an overall assessment of quality rather than generate a specific score (S2 Appendix). Assessments of quality in each study were based on evidence of relative quality in the aims or objectives, main findings, data collection method, analytic methods, data source, and evaluation and interpretations of the study. CMH and SKH critiqued all articles, and YB and MC each critiqued a selection of studies to ensure each article had been independently reviewed by two researchers. Where initial disagreement existed between reviewers regarding the inclusion of a study, team members met to discuss the disputed article's status until consensus was achieved.
Study inclusion
Assessments of quality in each study were based on evaluating each study's aims and objectives, main findings, data collection and analytic methods, data sources, and evaluation and interpretations of the results. Social media studies were included if there were no major biases affecting the internal, external or construct validity of the study [42]. In doing so, the internal validity of each study was determined by the quality of the data and analytic processes used, the external validity determined by the extent to which the findings can be generalised to other contexts, and the construct validity was ascertained by the extent to which the chosen measurement tool correctly measured what the study aimed to measure (S3 Appendix).
Results
Of the 1556 titles identified in the electronic database searches, 859 duplicate articles were removed, 450 were excluded following the screening of titles and abstracts, and 195 were excluded based on publication type (i.e., survey, letter, comment, abstract). This screening process provided fifty-two potentially relevant full text primary research studies to be included in the review (Fig 1). Of these, five articles were not able to be retrieved, and two of the remaining forty-seven articles had initial disagreement. Upon consensus, five were excluded with reasons (S3 Appendix) using the quality assessment checklist described above. This provided forty-two papers for inclusion in this systematic scoping review, published between 2014 and 2022. Regarding publication type, the majority were journal articles (40/42, 95.2%) and two were conference-based publications (2/42, 4.8%). Although the first study was published eight years ago, nearly two-thirds (24/42, 57%) have been published over the last four years. Table 2 provides a summary of each paper that includes author names, publication year, data source, duration of the study, number of collected posts, number of analysed posts, and the coding or labelling approach used.
Data collection and annotation
The largest manually annotated dataset, containing 47,000 labelled tweets, was published by Thompson et al. in 2015 [43]. This paper was one of 22 studies included in this review (52.4%) that either collected a limited number of data points, or sampled their collected data, and manually coded the data to gain an in-depth understanding of the domain. Four of the 42 studies (9.5%) used existing meta-data, including Google Trends summary data [66][67][68] and geo-location data [69]. Two (4.8%) studies used data that was manually coded using crowdsourcing services [45,49], and two (4.8%) used automated coding supplied by a social media analytics company [70,71]. Fifteen of the forty-two studies (35.7%) used automated methods for labelling data, which included the use of machine learning, lexicon, and rule-based algorithms [60,72-85]. Automated coding was increasingly used as an analytic tool for social media data on this topic from 2017 onward (Table 2).
Data analysis
For the studies that were manually labelled, analysis included the calculation of proportions and trends, and the development of repeating and emergent themes [43,44,46,47,49,50,54-69]. The studies that utilised a large volume of data used advanced computational methods, which included sentiment analysis, topic modelling, and rule-based text mining [60,72,77-82,85]. The use of sentiment analysis in the [60,80,95] studies enabled the analysis of people's sentiments, opinions, and attitudes. Topic modelling in the [74,76,79,80,85] studies enabled the development of themes via automatic machine learning methods. The use of rule-based text mining, such as found in the [60,72,83,96,97] studies, enabled the classification of posts into pre-existing health-related categories.
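To make the kind of pipeline these studies describe concrete, a minimal illustrative sketch of topic modelling over a handful of posts is given below; it uses scikit-learn's CountVectorizer and LatentDirichletAllocation, is not taken from any of the reviewed studies, and the example posts are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "CBD oil helped my chronic pain and sleep",       # invented example posts
    "Using cannabis for anxiety, anyone else?",
    "Dabbing made me cough, is that normal?",
    "Medical cannabis card approved for my epilepsy",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic as a rough theme summary.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```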
Research themes
In this review, we categorized the forty-two research articles into six broad themes. Themes were based on the research questions motivating the studies, where each paper was classified as belonging to a primary theme based on alignment with the research aims (Table 3).
General cannabis-related conversations. Nine studies were included in the theme relating to cannabis-related conversations (Table 4). The main keywords used in these studies included general terms such as 'cannabis', 'marijuana', 'pot', and 'weed'. The major aim of these studies was either to identify topics of conversations regarding cannabis, or to examine the role of normative and valence information in the perception of medicinal cannabis. These studies are included because they reported on conversations around cannabis use for medical purposes, the valence associated with perceptions of health benefits of cannabis, and reports of adverse effects. For example, a study on veterans' use of cannabis found that cannabis is used to self-medicate a number of health issues, including Post-Traumatic Stress Disorder (PTSD), anxiety and sleep disorders [94]. Seven of the studies used Twitter as a data source [43,73,78,87,96,98,99], one examined the content of YouTube videos about cannabis [100], one investigated online self-help forums [88], and another used Reddit data [94].
Cannabis mode of use. Seven studies were included in the theme relating to the mode of use of cannabis as a medicine (Table 5). These studies collected data using keywords such as 'vape,' 'vaping', 'dabbing', and 'edibles'. Conversations around modes of use revealed a theme about lacking, seeking, or sharing knowledge about health consequences of the modes of use. Another theme was around the perceived health benefits of cannabis and the various modes of use of cannabinoids that included sleep improvement and relaxation resulting from dabbing oils [49] or consuming 'edibles' [50]. The findings suggest that for emerging modes of use such as dabbing, where the availability of evidence-based information is limited, people seek information from others' experiences.
Cannabis as a medicine for a specific health issue. Six studies were included in the theme relating to cannabis as a medicine for a specific health issue (Table 6). These studies investigated conversations around the use of cannabis or cannabidiol (CBD) for a specific health issue. The health conditions included glaucoma [56], PTSD [72], cancer [65,101], Attention Deficit Hyperactivity Disorder (ADHD) [91], and pregnancy [92]. These studies mostly discovered that conversations claimed benefits of cannabis as an alternative treatment for these health conditions, although mentions of harm, and both harm and therapeutic effects, were also present [91].
Cannabis as a medicine as part of discourse on illness and disease. Twelve studies were included in the theme relating to cannabis as a medicine as part of discourse on illness and disease (Table 7). In this theme, the research focus was on social media topics relating to pain [71], ophthalmic disease [44], cluster headache and migraine [86], asthma [105], cancer [93,106], autism spectrum disorder [107], and brachial plexus injury [64].
Cannabidiol (CBD). There were seven studies in the cannabidiol category [61,80,82,89,90,108,109] (Table 8). These studies concentrated on conversations related to the benefits of CBD products, product sentiment (positive, negative, or neutral), the factors that impact on a person's decision to use CBD products, and the trends in therapeutic use of CBD.
Adverse drug reactions and adverse effects. One paper had a research question that explicitly focused on the detection of adverse events [75] (Table 9). This study explored the prevalence of internet search engine queries relating to the topic of adverse reactions and cannabis use. Seven other studies contained mentions of adverse effects associated with cannabis use [47,49,51,53,55,62,77,81]; however, these papers were not included under this theme, as their research questions were not centered around the explicit investigation of adverse events.
Discussion
Currently, there exist systematic reviews of cannabis and cannabinoids for medical use based on clinical efficacy outcomes from randomised controlled trials [18] and reviews on the use of social media for illicit drug surveillance [110]. However, following searches on PROSPERO and the databases listed above, to our knowledge, this paper constitutes the first systematic scoping review examining studies that used user-generated online text to understand the use of cannabis as a medicine in the global community.
Our scoping review found that the use of social media and internet search queries to investigate cannabis as a medicine is a rapidly emerging area of research. Over half of the studies included in this review were published within the last four years (24, 57.1%); this reflects not only increased community interest in the therapeutic potential of cannabinoids, but also worldwide trends towards cannabis legalisation [4,7-9,111,112]. Regarding social media platforms, Twitter was the data source in eighteen (42.9%) of the forty-two studies, three times the number of studies using Reddit (6, 14.3%) and just over three and a half times the number of studies using data from online forums (5, 11.9%). Three (7.14%) GoFundMe studies and three (7.14%) Google Trends studies were also included in the review. Hence, much of the data in this systematic review comprised posts from the Twitter platform. Several factors may explain this finding: Twitter is real-time in nature, it has a high volume of messages, and it is publicly accessible. These factors make it a useful data source for public health surveillance [113]. Regarding the subjects of the studies, twelve (28.6%) focused on general user-generated content regarding the treatment of health conditions (glaucoma, autism, asthma, cancer, bowel disease, brachial plexus injury, cluster headaches, opioid disorder). These studies were either explicitly designed to investigate cannabis as a medicine or were studies that generated results that incidentally found cannabis mentioned as an alternative or complementary treatment (either formally prescribed or via self-medication).
Qualitative studies featured in the research, but while their contributions are valuable, especially in the context of hypothesis generation, they tend to be limited by their smaller datasets, which frequently comprise manually annotated samples. The recent emergence of powerful machine learning-based natural language processing (NLP) models suggests that it should be possible to automate the continuous processing of far larger datasets using NLP technologies, built upon the insights gained from initial qualitative studies, and even leveraging their annotated data for training purposes. Recent trends in the social science data landscape have shown a convergence between social science and computer science expertise, where the ability to use computational methods has greatly assisted the collection and validation of robust datasets that can form the basis of deeper social science research [114]. We found much heterogeneity in approaches applied to analyse user-related content, and inconsistent quality in the methodologies adopted. While we endeavored to include as many studies as possible, some of the publications initially identified as suitable for inclusion were not suitable based on a minimum quality requirements checklist (S1 Appendix). This checklist was designed to ensure that selection of data source, choice of platform, data acquisition and preparation, analysis and evaluation delivers data and conclusions that are appropriate for answering the research questions.
Table 4. Studies in the theme of general cannabis-related conversations (aims, main findings, and data selection). https://doi.org/10.1371/journal.pone.0269143.t004
• Aim: to examine the sentiment and themes of cannabis-related tweets from influential users and to describe the users' demographics. Findings: a common theme of pro tweets was that cannabis has health benefits; anti-cannabis posts spoke of the harm experienced in using cannabis; 77% of posts had positive sentiments, with 12 times higher reach than other posts. Data: cannabis-related keywords.
• Thompson, Rivara et al., 2015 [43]. Aim: to examine cannabis-related content in Twitter, especially content tweeted by adolescent users, and to examine any differences in message content before and after the legalisation of recreational cannabis in two US states. Findings: more tweets described perceived positive benefits of cannabis use, including relaxation and escaping life problems; tweets described cannabis as less harmful than other drugs or as not harmful at all, and suggested its medical role for conditions such as depression and cancer; less than 1% of tweets expressed a concern about cannabis use. Data: cannabis-related keywords.
• Greiner, Chatton et al., 2017 [53]. Aim: to investigate online content of cannabis use/addiction self-help forums. Findings: self-help forums on cannabis share a theme around cannabis users seeking help for addiction and withdrawal issues.
• Turner et al., 2017 [73]. Aim: to examine if cannabis legalisation policies impact Twitter conversations and the social networks of users contributing to cannabis conversations. Findings: medical cannabis was a major topic in the conversations. Data: cannabis-related keywords.
• Cavazos-Rehg, Krauss et al., 2018 [54]. Aim: to investigate cannabis product reviews and the relationship between exposure to product reviews and cannabis users' demographics and characteristics. Findings: product reviews promoted cannabis for helping with relaxation, pain relief, sleep and improving emotional well-being; medical cannabis users are more likely to be exposed to cannabis product reviews.
• Allem, Escobedo et al., 2020 [78]. Aim: to identify and describe cannabis-related topics of conversation on Twitter, and the public health implications of these. Findings: health and medicine were the third most prevalent topic of the 12 topics identified in the data; posts suggested that cannabis could help with cancer, sleep, pain, anxiety, depression, trauma, and posttraumatic stress disorder; health-related posts from social bots were almost double that of genuine posts. Data: cannabis-related keywords.
• Van Draanen et al., 2020 [79]. Aim: to examine differences in the sentiment and content of cannabis-related tweets in the US (by state cannabis laws) and Canada. Findings: medical cannabis use was one of the main topics of conversations in cannabis-related tweets from both countries. Data: tweets filtered on US and Canada geolocation and then further filtered on cannabis-related keywords.
• Rhidenour et al., 2021 [63]. Aim: to explore Veterans' Reddit discussions regarding their cannabis use. Findings: over a third of the Reddit posts described the use of medical cannabis as an aid for psychological and physical ailments; overall, veterans discussed how the use of medical cannabis reduced PTSD symptoms and anxiety, and helped with their sleep. Data: the veteran subreddit.
• Allem, Majmundar et al., 2022 [81]. Aim: to determine the extent to which a medical dictionary could identify cannabis-related motivations for use and health consequences of cannabis use. Findings: there were posts related to both health motivations and consequences of cannabis use; the health-related posts included issues with the respiratory system, stress to the immune system, and gastrointestinal issues, among others. Data: cannabis-related keywords.
The utilisation of user-generated content for health research is subject to several inherent limitations, which include: the lack of control that researchers have in relation to the credibility of information, the frequently unknown demographic characteristics and geographical location of individuals generating content, and the fact that social media users are not necessarily representative of the wider community [115]. Furthermore, the uniqueness, volume, and salience of social media data have implications that need to be considered when used for health information analysis [116]. Volume is usually inversely related to salience; a platform such as Twitter has a very high volume of information, much of which is not highly pertinent for the analysis of an effect, whereas the information contained in a blog will contain less volume, but will be more salient for analysis. Notwithstanding these limitations, user-generated content comprises large-scale data that provides access to the unprompted organic opinions and attitudes of cannabis users in their own words and is an effective medium through which to gauge public sentiment. To date, insights regarding cannabis as medicine have been gained primarily through surveys or focus groups, which have their own limitations regarding the format of data collection and potential bias in participant recruitment. A limitation of this scoping review was the lack of inclusion of a computational database such as IEEE Xplore in the search strategy, and the exclusion of the search terms 'infodemiology' and 'infoveillance.' Infodemiology and infoveillance studies explicitly use web-based data for research, and IEEE Xplore is a repository that contains technical papers and documents relating to computer science. However, our search was systematic and comprehensive, IEEE Xplore is Scopus-indexed, and we expect data loss to be minimal.
Table 5. Studies in the theme of cannabis mode of use (aims, main findings, and data selection). https://doi.org/10.1371/journal.pone.0269143.t005
• Aim: to explore Twitter data on concentrate ('dabs') use and examine the impact of cannabis legalisation policies on concentrate use conversations. Findings: Twitter data suggest popularity of dabs in the US states with legalised recreational/medical use of cannabis; dabbing as an emerging mode of use could carry significant health risks. Data: dab-related keywords for U.S. location.
• Krauss, Sowles et al., 2015 [47]. Aim: to explore the content of cannabis dabbing-related videos on YouTube. Findings: only 21% of videos contained warnings about dabbing, such as preventing explosions, injury, or negative side effects; 22% of videos specifically mentioned medical cannabis or getting 'medicated', either in the video itself or in the accompanying text description. Data: dabbing-related keywords.
• Cavazos-Rehg, Sowles et al., 2016 [49]. Aim: to study themes of dabbing conversations and to investigate the consequences of high-potency cannabis consumption. Findings: the fourth theme (of seven) was about cannabis helping with relaxation, sleep or solving problems; extreme effects were both physiological and psychological; the most common physiologic effects were passing out and respiratory effects, with coughing the most common respiratory effect. Data: dabbing-related keywords.
• Lamy, Daniulaityte et al., 2016 [50]. Aim: to study themes of edibles conversations and examine legalisation policies' impact on cannabis-related tweeting activity. Findings: Twitter data suggest mostly positive attitudes toward cannabis edibles; positive tweets describe the quality of the 'high' experienced and how cannabis edibles facilitate falling asleep; negative tweets discuss the unreliability of edibles' THC dosage and delayed effects that were linked to overconsumption, which could lead to potential harmful consequences. Data: cannabis edible-related keywords.
• Meacham, Paul et al., 2018 [77]. Aim: to analyse discussions of emerging and traditional forms of cannabis use. Findings: less than 2% of conversations described adverse effects; the most mentioned adverse effects were anxiety-related in the context of smoking, edibles, and butane hash oil, and 'cough' for vaping and dabbing. Data: a cannabis specific subreddit on various modes of use.
• Meacham, Roh et al., 2019 [55]. Aim: to study themes of dabbing-related questions and responses. Findings: health concerns are the fifth category of dabbing questions, including respiratory effects, anxiety, and vomiting; respondents in these conversations usually spoke from personal experience. Data: search for 'Dab' and 'question' on cannabis subreddits.
• Janmohamed et al., 2020 [85]. Aim: to map temporal trends in the web-based vaping narrative, to indicate how the narrative changed from before to during the COVID-19 pandemic. Findings: the emergence of a vape-administered CBD treatment narrative around the COVID-19 pandemic. Data: vape-related keywords.
Conclusion
Our systematic scoping review reflects a growing interest in the use of user-generated content for public health surveillance. It also demonstrates there is a need for the development of a systematic approach for evaluating the quality of social media studies and highlights the utility of automatic processing and computational methods (machine learning technologies) for large social media datasets. This systematic scoping review has shown that user-generated content as a data source for studying cannabis as a medicine provides another means to understand how cannabis is perceived and used in the community. As such, it is another potential 'tool' with which to engage in pharmacovigilance of, not only cannabis as a medicine, but also other novel therapeutics as they enter the market.
To examine the content of online forum threads on ADHD and cannabis use to identify trends about their relation, particularly regarding therapeutic and adverse effects of cannabis on ADHD.
Of all individual posts, 25% indicated that cannabis is therapeutic for ADHD, as opposed to 8% that claim it is harmful, 5% that it is both therapeutic and harmful, and 2% that it has no effect on ADHD.
Cannabis and ADHD keywords Dai & Hao, 2017 [72] To evaluate factors that could impact public attitudes to PTSD related cannabis use.
Of all PTSD tweets, 5.3% were related to cannabis use and these tweets predominantly supported cannabis use for PTSD.
Cannabis and PTSD keywords
Shi et al., 2019 [67] To characterize trends in use of cannabis for cancer and analysis of content and impact of popular news about cannabis for cancer.
Between 2011-2018, the relative google search volume of 'cannabis cancer' queries increased at a rate ten times faster than 'standard cancer therapies' queries. Popular 'false news' stories had a much higher engagement than contrary 'accurate' news stories.
Cannabis vs standard therapies for cancer
Jia, Mehran et al., 2020 [56] To analyze the content quality and risk of readily available online information regarding cannabis and glaucoma.
To examine cannabis and pregnancy-related tweets over a 12-month period.
To determine the educational quality of YouTube videos for asthma.
The most common video content was regarding alternative medicine (38%) and included cannabis as well as live fish ingestion; salt inhalers; raw food, vegan, gluten-free diets; yoga; Ayurveda; reflexology; acupressure; and acupuncture; and Buteyko breathing.
Asthma related videos
Andersson et al., 2017 [52] To understand the use of non-established or alternative pharmacological treatments used to alleviate cluster headaches and migraines.
Cannabis was discussed for its potential to alleviate symptoms or reduce the frequency of migraine attacks. Some discussed use of cannabis for other purposes, but experienced additional benefits for headache symptoms. The effects of self-treatment with cannabis appeared more contradictory and complex than treatment with other substances.
To identify public reactions to the opioid epidemic by identifying the most popular topics.
Mentions of cannabis as an effective alternate to opioids for managing pain.
Cannabis was found in two of five main themes: 'In recovery' and 'taking illicit drugs' for pain management.
Users who self-identified as addicted to, or previously addicted to, opioids Pérez-Pérez et al., 2019 [83] To characterize the bowel disease community on Twitter.
Medical cannabis was the fourth most mentioned term in the bowel disease (BD) community. Medical cannabis and its components were the most discussed drug, with mentions of its benefits in mitigating common BD symptoms.
Inflammatory bowel disease, Irritable bowel disease keywords
Mullins et al., 2020 [71] To examine pain-related tweets in Ireland over a 2-week period.
The fourth most frequently occurring keyword was cannabis. Ninety percent of cannabis-related tweets were non-personal, with highly positive sentiment and the highest number of impressions per tweet. Cannabis had the largest number of tweets aimed at generating awareness.
Pain-related keywords
Saposnik & Huber, 2020 [68] To analyse trends in web searches for the cause and treatments of autism spectrum disorder (ASD).
To assess and compare an active opioid use subreddit and an opioid recovery subreddit: 1) the proportion of posts that mention cannabis, 2) the most frequently-used words and phrases in posts that mention cannabis, and 3) motivations for cannabis use in relation to opioid use as described in cannabis-related posts.
To analyse the CBD informational pathways which bring consumers to CBD for medical purposes.
Self-directed research was the most common pathway to CBD. The proposed uses of CBD were for cancer, seizure-inducing diseases/conditions, joint/inflammatory diseases, mental health disorders, nervous system diseases, and autoimmune diseases.
Searches for CBD exceed searches for yoga and are around half as frequent as searches for dieting.
'cannabidiol' or 'CBD' vs other alternative medicine including diet, yoga, Leas, Hendrickson et al., 2020 [57] To assess if individuals are using CBD for diagnosable conditions which have evidence-based therapies.
CBD subreddit posts
Tran & Kavuluru, 2020 [84] To examine social media data to determine perceived remedial effects and usage patterns for CBD.
Anxiety disorders and pain were the two conditions dominating much of the discussion surrounding CBD, both in terms of general discussion and for CBD as a perceived therapeutic treatment. CBD is mentioned as a treatment for mental issues (anxiety, depression, stress) and physiological issues (pain, inflammation, headache, sleep disorder, seizure disorders, nausea, and cancer).
CBD subreddit posts
Soleymanpour et al., 2021 [80] To perform content analysis of marketing claims for CBD in Twitter.
Over 50% of CBD tweets appear to be marketing related chatter. Pain and anxiety are the most popular conditions mentioned in marketing messages. Edibles are the most popular product type being advertised, followed by oils.
'cbd', 'cbdoil', and 'cannabidiol' Turner, Kantardzic et al. 2021 [82] The objective of this study was to provide a framework for public health and medical researchers to use for identifying and analyzing the consumption and marketing of unregulated substances using CBD as an exemplar.
There was a significant difference in the sentiment scores between the personal and commercial CBD tweets, the mean sentiment score of the commercial CBD scores was higher than that of the personal CBD score. Pain, anxiety, and sleep had the highest positive sentiment score for both personal and commercial CBD tweets.
STUDY AIM HEALTH-RELATED EFFECTS/CLAIMS DATA IDENTIFICATION
Yom-Tov & Lev-Ran, 2017 [75] To check if search engine queries can be used to detect adverse reactions of cannabis use.
A high correlation between the side effects recorded on established reporting systems and those found in the search engine queries. These side effects included anxiety, depression-related symptoms, psychotic symptoms such as paranoia and hallucinations, cough, and other symptoms. | 2023-01-22T05:17:08.899Z | 2023-01-20T00:00:00.000 | {
"year": 2023,
"sha1": "43ef7d4ffe5091cf540911cf934d25192cf9525b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "43ef7d4ffe5091cf540911cf934d25192cf9525b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259153858 | pes2o/s2orc | v3-fos-license | Spectroscopic Properties of the Alkali–Krypton Diatomic M–Kr (M = Rb, Cs, and Fr) van der Waals Systems Including the Spin–Orbit Coupling
Using an ab initio approach based on pseudopotential technique, pair potential approach, core polarization potentials, and large Gaussian basis sets, we investigate interaction of heavy alkali–krypton diatomic M–Kr (M = Rb, Cs, and Fr) van der Waals dimers. In this context, the core–core interactions for M+–Kr (M = Rb, Cs, and Fr) are calculated at coupled-cluster single and double excitation (CCSD) level and included in the total potential energy. Therefore, the potential energy curves are performed for 14 electronic states: eight of 2Σ+ symmetry, four of 2Π symmetry, and two of 2Δ symmetry. Furthermore, for each M–Kr dimer, the spin–orbit coupling has been considered for the B2Σ+, A2Π, 32Σ+, 22Π, 52Σ+, 32Π, and 12Δ states. In addition, the transition dipole moment has been determined, including the spin–orbit effect using the rotational matrix issued from the spin–orbit potential energy calculations.
I. INTRODUCTION
Interactions of alkali metal atoms with rare gas atoms are of great importance in several areas of physics and chemistry. These interactions have been investigated experimentally 1−15 and theoretically 16−45 for many years and for many systems. In fact, they are considered as model systems for van der Waals molecules. To understand these weak interactions, it is crucial to test both theoretical methods and basis sets. The pair potentials themselves can be used by theoreticians to model systems using molecular dynamics and by spectroscopists to aid in the understanding and analysis of molecular spectra. Due to the closed shells of atoms and/or ions that form these systems, their ground state presents very weak interactions. In contrast, considerable bonding can exist in the excited states. The interactions here concern diatomic systems, one alkali or alkaline-earth atom or ion with a single rare gas atom. One might expect the behavior of these complexes to be quite uncomplicated, with bonding attributable to purely physical interactions and van der Waals forces.
In the earlier experimental studies, 1−15 the interactions between alkali and rare gas atoms have been considered as models for investigating collisional processes such as line broadening, quenching, and electronic energy transfer. In addition, alkali metal vapor lasers that are pumped by diode lasers have been the focus of several investigations. 29 Hedges et al. 15 investigated the emission profiles of the cesium resonance lines broadened by collisions with inert gases, based on theoretical models without knowledge of the cesium density.
Theoretically, there are several calculations 16−46 of the interactions between the alkali atoms with noble gas atoms. The M( 2 S) + Rg pairs (M = alkali; Rg = rare gas) were investigated by Baylis 16 using pseudopotential method and spin−orbit coupling. They calculated the potential energy of the ground state (X 2 Σ + ) and the first excited states (B 2 Σ + and A 2 Π) and extracted their spectroscopic constants. The semiempirical potential model of Baylis 16 was used with a few modifications by Pascale and Vandeplanque 19 to study some excited states for M−Rg species. However, the determination of the potential energy curves of alkali atoms interacting with noble gas atoms was studied using the model potential (MP) 16−28 where the total potential is considered as a system with a single electron.
Ehara and Nakatsuji 35 calculated the Cs−Rg system (Rg = Xe, Kr, Ar, and Ne) at the SAC-CI level. Goll et al. 36 studied the M− Rg (M = Li, Na, K, Rb, and Cs; Rg = Ar, Kr, and Xe) systems using the short-range gradient-corrected density functional method combined with long-range coupled-cluster methods (CCSD, CCSD(T)). More recently, the van der Waals systems involving alkali and alkaline-earth atoms interacting with rare gas atoms have been investigated by our group 37−44 using a oneelectron pseudopotential approach, large Gaussian basis sets, and full configuration interaction.
The precision of produced data for the ground and excited states for similar systems, such as Cs−Ar, Rb−Ar, Cs−Xe, Rb− Xe, and Rb−He complexes, [37][38][39]42,45 has been demonstrated by the prediction of the B ← X absorption spectra of Cs−Ar, which was found to be in excellent agreement with the experimental result of Readle et al. 29 Miller et al. 46 used our data for RbHe 42 to investigate the temperature-dependent rubidium−helium line shape. The prediction of the pressure effect on spectrum broadening by the Anderson−Talman unified pressure broadening theory and using our data has provided good agreement with experimental observations, which proves the high precision of our calculation and the used model.
In the present paper, we report a theoretical investigation of the M−Kr (M = Rb, Cs, and Fr) van der Waals dimers. The calculation is based on a pseudopotential technique reducing the M−Kr system to one electron, the valence electron. An all-electron calculation is difficult, especially for excited states. To reduce the M−Kr (M = Rb, Cs, and Fr) systems to a one-electron problem, we have treated M + and Kr as two closed-shell cores interacting with the alkali metal valence electron via a semilocal pseudopotential. This reduces the M−Kr systems to electron−core interactions, with the core−core interactions included using an accurate CCSD(T) potential.
In the present study, we aim to calculate accurate data for the ground and excited states of heavy alkali M( 2 S)−Kr pairs, which are crucial to evaluate the scaling possibilities for alkali metal− rare gas laser systems. In section II, we present the theoretical method of calculations. The results for Rb−Kr, Cs−Kr, and Fr− Kr are presented in section III. Finally we conclude in section IV.
II. METHOD OF CALCULATION
The electronic structures of the heavy alkali atoms Rb, Cs, and Fr interacting with the Kr atom are obtained by solving the electronic Schrödinger equation. The number of active electrons of M−Kr is reduced to only one using the l-dependent semilocal pseudopotential proposed initially by Durand and Barthelat. 47 Furthermore, the core polarization pseudopotentials V CPP are incorporated using the l-dependent formulation of Muller et al. 48 They account for the polarization of the alkali ionic cores as well as of the Kr atom considered as a whole. For each center (λ = Rb, Cs, Fr, or Kr), the core polarization effects are described by an effective potential in which λ labels either the M + (M = Rb, Cs, and Fr) core or the krypton atom, α λ is the corresponding dipole polarizability, and f λ is the electric field created at that center by the valence electron and the other cores. The polarizabilities of the alkali ions were taken as 9.245a 0 3 , 15.116a 0 3 , and 20.38a 0 3 for Rb, Cs, and Fr, respectively, 37−43 whereas that of the krypton atom was taken as 17.0a 0 3 . 49 The cutoff radii have been adjusted for the three alkali atoms (Rb, Cs, and Fr) to reproduce the experimental energy level spectra taken from ref 50. The cutoff parameters used here are 2.5213, 2.279, and 2.511 au for rubidium, 2.69, 1.58, and 2.810 au for cesium, and 3.16372, 3.045, and 3.1343 au for francium, respectively, for the lowest valence s, p, and d one-electron states.
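For orientation, the Muller-type core polarization operator referred to above is conventionally written as (this is only the standard textbook form, assumed here; the authors' precise l-dependent cutoff conventions are not reproduced in the text)

$V_{\rm CPP} = -\frac{1}{2}\sum_{\lambda}\alpha_{\lambda}\,\vec{f}_{\lambda}\cdot\vec{f}_{\lambda}\,,$

where $\vec{f}_{\lambda}$ is the electric field at center λ generated by the valence electron and the other cores, damped by a cutoff function governed by the radii quoted above.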
To study the M−Kr systems, we have used the same uncontracted basis sets as in refs 37−43. In turn, one-electron SCF atomic calculations for the alkali atoms have been performed to test the quality of the basis sets. For the lowest valence s, p, and d one-electron states, the ionization potentials are deduced from the atomic data tables 50 and presented in previous publications. 37−43 In addition, the electron−Kr effects have been treated through a local pseudopotential. We have fitted the numerical pseudopotential of Yuan and Zhang 51 with an analytical form whose parameters α, C i , and n i are extracted for each symmetry (l = 0 (s) orbital, l = 1 (p) orbital, and l = 2 (d) orbital). They are given in Table 1.
We have also used a basis set on the krypton atom. The use of a basis set on the Kr atom is essential to treat the steric distortion of the alkali valence orbitals resulting from their orthogonality to the Kr closed shells, which are represented by the pseudopotential. Since there are no active electrons on the Kr atom, the exponents were determined in order to provide correct overlap with the 3s and 2p orbitals of Kr and to extend toward the diffuse range.
In addition, to make them easy to use in a computing code based on the model potential, the core−core interaction of M + −Kr (M = Rb, Cs, and Fr) was obtained by using the numerical coupled-cluster singles, doubles, and perturbative triples (CCSD(T)) potential of Hickling et al. 52 The CCSD(T) numerical potential was interpolated with the analytical form of Tang and Toennies. 53 It is considered as the sum of three terms: the first one is an exponential short-range repulsion, $A_{\rm eff}\exp(-bR)$, the second is the polarization contribution, $D_4 = -\tfrac{1}{2}\alpha_{\rm Kr}R^{-4}$, and the third one is the long-range attractive term, $(D_6 R^{-6} - D_8 R^{-8} - D_{10} R^{-10})$. The fitted parameters for Rb + −Kr, Cs + −Kr, and Fr + −Kr are presented in Table 2. The spin−orbit coupling is introduced for all M−Kr (M = Rb, Cs, and Fr) complexes using the semiempirical scheme of Cohen and Schneider. 55 The spin−orbit coupling matrices are isomorphic to those given by Cohen and Schneider 55 for the 2p 5 configuration.
Figure 1 shows the repulsive character of the PECs of the X 2 Σ + ground states correlating to Rb(5s) + Kr, Cs(6s) + Kr, and Fr(7s) + Kr, respectively. We remark that the equilibrium position of their wells increases as the mass of the alkali atom increases. However, the depth of these wells decreases as the mass increases. Similarly to the M−Rg (M = Rb, Cs, and Fr; Rg = He, Ar, and Xe) cases, 37−43 these repulsive states exhibit weak van der Waals minima at large internuclear distances.
The X 2 Σ + states of all M−Kr van der Waals complexes are presented in Table 3 and compared with available theoretical 16 and experimental 57 values. Our values for the X 2 Σ + states for Rb−Kr and Cs−Kr can be considered in excellent agreement with the experimental values. 57 The calculated potential curves of the ground states of the Rb−Kr and Cs−Kr systems have shallow minima of 80 and 76 cm −1 situated at 9.91a 0 and 10.17a 0 respectively, whereas a well of 73 cm −1 is observed at 9.99a 0 for Rb−Kr and 74 cm −1 at 10.28a 0 for the Cs−Kr dimer in the atomic scattering experiment of Buck and Pauly. 57 Similarly, shallow wells were observed theoretically by Baylis 16 with the semiempirical pseudopotential approach and by Goll et al. 36 using several theoretical methods, but the most reliable results were obtained using the PBE/CCSD(T) combination of density functional and coupled-cluster theory.
For Fr−Kr, the theoretical spectroscopic information for the X 2 Σ + ground state is still limited. It is important to note that the spectroscopic constants are determined here for the first time. The corresponding PECs are depicted in Figure 1, and the spectroscopic constants are presented in Table 3.
b. M−Kr Excited States ( 2 Σ + , 2 Π, and 2 Δ). The available data in the literature concern the two lowest states, A 2 Π and B 2 Σ + . The A 2 Π and B 2 Σ + states are the first excited states of the M−Kr systems correlating to Rb(5p) + Kr, Cs(5p) + Kr, and Fr(7p) + Kr. Concerning the higher (3−7) 2 Σ + , (2−4) 2 Π, and (1−2) 2 Δ excited states, the literature is more scarce. These states make a relatively deep well with a short equilibrium distance. The calculated excited states for all M−Kr combinations are displayed in Figure 2 and grouped by molecular term symbol. The spectroscopic constants of states with 2 Σ + , 2 Π, and 2 Δ symmetries are listed in Tables 4−6 and compared with theoretical 15,16,35 works. Comparing the results of Rb−Kr and Cs−Kr dimers to the results of refs 15 and 16, we observe a general good agreement for the well depth D e . The considered spectroscopic constants of the A 2 Π state are D e = 428/406 cm −1 and R e = 6.97/7.36a 0 for Rb−Kr/Cs−Kr. In fact, we note that for all alkali−krypton dimers, the A 2 Π state is seen to be the more attractive state and B 2 Σ + state the more repulsive state.
In the X 2 Σ + state, the alkali electron is mainly in a spherical sσ orbital; however, in the B 2 Σ + state, the alkali electron is mainly in a pσ orbital which overlaps the krypton atom at a larger internuclear separation than the sσ orbital. In the A 2 Π state, the alkali electronic wave function has pπ character with a node along the internuclear axis, and the krypton atom can approach the alkali rather closely before the repulsive interactions dominate.
For all M−Kr (M = Rb, Cs, and Fr) complexes, the np (n = 5, 6, 7) 2 Σ + and nd (n = 4, 5, 6) 2 Σ + states are due to the excitation from the 6s nonbonding molecular orbital (MO) to the feebly antibonding npσ (n = 5, 6, 7) and ndσ (n = 4, 5, 6) MOs. Therefore, the potential energy curves are repulsive, and the systems become unstable. The repulsive interaction in the np (n = 5, 6, 7) 2 Σ + states seems to be smaller than that in the nd (n = 4, 5, 6) 2 Σ + states and starts from a smaller internuclear distance. For example, Rb−Kr shows that the B 2 Σ + and 5 2 Σ + states have a shoulder humps located around 9.0a 0 , while the 3 2 Σ + and 6 2 Σ + states have a characteristic hump situated at 9.89a 0 and 10.5a 0 , respectively. The humps of the 3 2 Σ + and 6 2 Σ + states are caused by an avoided crossing between the 6 2 Σ + and 7 2 Σ + states. The same effects are also observed for the both Cs−Kr and Fr−Kr, respectively, for np (n = 7, 8) 2 Σ + and nd (n = 6, 7) 2 Σ + states. Recently, these features were also observed in our group for the alkali and alkaline-earth atoms interacting with He, Ar, and Xe atoms. 37−44 They were also noticed by Ehara and Nakatsuji 35 in their theoretical calculation for the Cs−Rg systems. For the higher excited states, the potential energy curves of 2 Σ + , 2 Π, and 2 Δ symmetries have shallow minima. The Rydberg orbitals of these states of the alkali atoms are not directed toward the rare gas atom, but the ion core of alkali atoms polarizes the electron density of the rare gas atom, which is responsible for the attractive character.
For all M−Kr complexes, the spectroscopic constants presented in Tables 4−6 show that the well depths D e of the A 2 Π and 1 2 Δ states are found to be larger than those of the 2 2 Π state and that the equilibrium position of the former is larger. The potential energy of the 7 2 Σ + state has a similar behavior to the ionic system, which is due to an avoided crossing.
Table 3. Spectroscopic Constants of the X 2 Σ + Ground States of M−Kr (M = Rb, Cs, and Fr) Systems
For the Rb−Kr and Cs−Kr systems, including the spin−orbit effects, we show that our potential energy curves of the B 2 Σ 1/2 + , A 2 Π 1/2 , and A 2 Π 3/2 states corresponding to the n 2 P 1/2,3/2 + Kr (n = 5, 6) asymptote and those obtained experimentally by Hedges et al. 15 are in good agreement. The same accord is also observed with the theoretical results of Baylis 16 and Ehara and Nakatsuji. 35 For Cs−Kr, the dissociation energy and the equilibrium position are found to be D e = 375/400 cm −1 and R e = 7.29/7.37a 0 for the A 2 Π 1/2 and A 2 Π 3/2 states, respectively, to be compared with the experimental values 15 of D e = 300/350 cm −1 , respectively.
By introducing the spin−orbit coupling, the first A 2 Π 1/2 and A 2 Π 3/2 excited states were studied by Baylis, 16 who found two different values for D e (112 and 439 cm −1 for the A 2 Π 1/2 and A 2 Π 3/2 states, respectively) and the same R e (7.18a 0 ). Ehara and Nakatsuji 35 found shallow potential wells of D e = 163 and 283 cm −1 , respectively. Our results for the A 2 Π 3/2 state and those obtained by Baylis 16 are in good agreement, while for the A 2 Π 1/2 state, the well depth found by Baylis 16 (112 cm −1 ) seems to be underestimated compared to the present work as well as the experimental work. 15 Using the atomic spin−orbit constants ξ 6p (Rb) = 77.51 cm −1 , ξ 7p (Cs) = 181.046 cm −1 , and ξ 8p (Fr) = 545.346 cm −1 , the spin−orbit effect is also investigated for states correlating to Rb(6p), Cs(7p), and Fr(8p). Figure 4 (right) depicts the energies of the five states produced (5 2 Σ + , 3 2 Π, 5 2 Σ 1/2 + , 3 2 Π 1/2 , and 3 2 Π 3/2 ) without and with the spin−orbit perturbation. We show that the PECs including the spin−orbit coupling are quite similar to those of the B 2 Σ + and A 2 Π states in all of the M−Kr complexes from Rb to Fr. We noticed that the molecular splitting energies at the equilibrium position for the 3 2 Π 1/2 and 3 2 Π 3/2 states in M−Kr (M = Rb, Cs, and Fr) are smaller than the splittings found for the A 2 Π 1/2 and A 2 Π 3/2 states, which is due to the small spin−orbit constant for the np atomic limit, where n = 6, 7, and 8, for Rb, Cs, and Fr, respectively. For the 3 2 Σ + , 2 2 Π, and 1 2 Δ excited states correlating to the Cs(5d) + Kr limit, the spin−orbit effect is introduced by using the following value: ξ 5d (Cs) = 39.03356 cm −1 . The new Cs−Kr molecular states, dissociating into 5 2 D 1/2 + Kr (3 2 Σ 1/2 + and 2 2 Π 1/2 ), n 2 D 3/2 + Kr (2 2 Π 3/2 and 1 2 Δ 3/2 ), and n 2 D 5/2 + Kr (1 2 Δ 5/2 ), are presented in Figure 5. The corresponding spectroscopic constants are presented here for the first time in Table 5, since there is no comparison of these values with other results. In addition, we note that the spin−orbit effect for the Cs n 2 P 1/2,3/2 (n = 6, 7) states is much larger than that observed for the Cs(5 2 D 1/2,3/2,5/2 ) ones, which is due to the large spin−orbit constant for the np atomic limit.
III.3. Qualitative Prediction of the Broadening Spectrum of the Alkali (Rb, Cs, and Fr) D 2 Line. Using our potential energy curves for the lower (X 2 Σ + ) and upper (B 2 Σ + and A 2 Π) electronic states, the transition wavelengths were generated for all M−Kr systems from the difference potentials, 43 where V′(R) and V′′(R) are the potential energy curves for the upper and lower electronic states, respectively. In fact, the interaction potentials of the X 2 Σ + state and the B 2 Σ + and A 2 Π states and their differences are relevant to the broadening of the D 2 and D 1 lines 45 of alkali atoms. The latter quantity provides information about contributions from dimers or collision pairs. Figure 6 shows the derived transition wavelengths λ(R) for all M−Kr (M = Rb, Cs, and Fr) dimers calculated from the difference potentials B 2 Σ + − X 2 Σ + and A 2 Π − X 2 Σ + . As can be seen from Figure 6, two different internuclear separations could contribute to the same transition and its wavelength. From this figure we predict that the D 2 broadening spectra are confined to the regions 770−865, 849−925, and 715−825 nm for the Rb−Kr, Cs−Kr, and Fr−Kr complexes, respectively. Heaven and Stolyarov 58 determined the maximum for the blue wing of the D 2 line from Cs in Kr to be 835 nm using the QS approximation, 45 which is in generally good agreement with the observed value of 841 nm. Our maximum is expected at 849 nm, which is overestimated compared to the theoretical and experimental values. 45 The same trend is observed for the Rb−Kr pair, as we found a value of 770 nm to be compared with the calculated and observed values 58 of 753 and 759 nm. 45 Once again, our wavelength is overestimated compared to the calculated and observed ones. 45 This is due to the potential interactions used for the X 2 Σ + and B 2 Σ + states. It seems that the potential used by Heaven and Stolyarov 58 for the B 2 Σ + state including spin−orbit coupling is more repulsive than ours, which makes their wavelength shorter. It is clear that the difference between our work and that of Heaven and Stolyarov 58 is primarily related to the difference in the potential energy interaction calculation approaches. We used a simple model that reduced the M−Kr pair to only one valence electron interacting with the M + −Kr ionic system. However, Heaven and Stolyarov 58 used more sophisticated ab initio approaches such as multireference configuration interaction (MRCI) and multireference averaged quadratic coupled cluster (MRAQCC) to produce the potential interactions used in the spectrum broadening simulations. As can be seen in Figures 7 and 8, our calculation for the 3 2 Π and 5 2 Σ + higher excited states correlating asymptotically with 2 P (np), where n(Rb) = 6, n(Cs) = 7, and n(Fr) = 8, indicates more vibrational levels than for the A 2 Π and B 2 Σ + states correlating with 2 P (np), where n(Rb) = 5, n(Cs) = 6, and n(Fr) = 7, which present, respectively, 40, 56, and 58 levels for the Rb−Kr, Cs−Kr, and Fr−Kr dimers. This dissimilarity is obviously related to the difference in the well depths.
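As an aside on the difference-potential construction used above for the D 2 -line analysis, the following is a minimal numerical sketch (our own illustration, not the authors' code; the explicit expression of ref 43 is not reproduced in the text, so the simple relation λ(R) = hc/[V′(R) − V′′(R)], with the potentials tabulated in cm −1 , is assumed here):

import numpy as np

HC_NM_PER_INV_CM = 1.0e7   # 1 cm^-1 corresponds to a wavelength of 1e7 nm

def transition_wavelength_nm(V_upper_cm1, V_lower_cm1):
    """Assumed relation lambda(R) = hc / [V'(R) - V''(R)], with both potential
    energy curves given in cm^-1 on a common grid of internuclear distances R;
    returns the local transition wavelength in nm."""
    dV = np.asarray(V_upper_cm1, float) - np.asarray(V_lower_cm1, float)
    return HC_NM_PER_INV_CM / dV

Scanning such a λ(R) curve over the relevant range of R is what yields the quoted confinement regions of the D 2 broadening spectra, since several internuclear separations can map onto the same wavelength.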
III.4. Vibrational Analysis and Transition Dipole
We have also determined the vibrational levels for the B 2 Σ + and A 2 Π states and the 5 2 Σ + and 3 2 Π states by introducing the spin−orbit coupling. For all systems where the spin−orbit coupling plays a more significant role, the variation of (E v − E v−1 ) with v is split into two curves. We are now interested in Rb interacting with the 87 Kr isotope. We found 32 and 40 vibrational levels for Ω = 1/2 and Ω = 3/2, respectively. Moreover, when approaching the dissociation limit, the energy level spacing vanishes, the common behavior being a decrease as v rises. The same behavior has also been observed for all the other dimers. The vibrational levels of the heavy alkali−krypton diatomic molecules are more numerous than those of the light alkali−rare gas molecules. 59 To our knowledge, the vibrational levels for heavy alkali−krypton diatomic molecules have been calculated here for the first time.
The calculated transition dipole moment is an essential factor in determining the absorption spectrum. To investigate the nS → nP [n(Rb) = 5, n(Cs) = 6, n(Fr) = 7] transition dipole moments of the alkali atoms by the interaction with Kr atom, the spin−orbit coupling is included by using the energy matrix discussed previously issued from the diagonalization of the rotational matrix. The spin−orbit constants are estimated to be ξ 5p (Rb) = 158.396 cm −1 , ξ 6p (Cs) = 369.36 cm −1 , and ξ 7p (Fr) = 1124.392 cm −1 from the atomic spectral data. 50 The A ← X and B ← X transition dipole moments with and without spin−orbit effect are shown in Figures 9−11 as functions of the internuclear distance. These figures show that the X 2 Σ + → B 2 Σ + transition moments for Rb−Kr, Cs−Kr, and Fr−Kr have a maximum around 3.0a 0 . The X 2 Σ + → A 2 Π transition dipole is split into two curves when the spin−orbit coupling is included. They are labeled as X 2 Σ + → A 2 Π 1/2 and X 2 Σ + → A 2 Π 3/2 . The difference between the X 2 Σ + → A 2 Π 1/2 and X 2 Σ + → A 2 Π 3/2 transition dipole moments is significant at intermediate distances. The transition moments should converge to the value of the pure atomic 2 P ← 2 S transition for large internuclear separations.
Reliable and accurate transition dipole moments at long-range internuclear distances in addition to precise potential energies are fundamental for prediction and design of the scaling characteristics of lasers driven by pumping of alkali−rare gas collision pairs. 60 In fact, highly accurate potential interaction data are needed to predict the absorption spectra and broadening of alkali atomic lines. More precisely, the accurate transition dipole functions and interaction energies are used to simulate the absorption spectrum and broadening in the D 2 line for Rb−Kr, Cs−Kr, and Fr−Kr dimers.
The long-range behavior of the permitted transition dipole moment was extensively explored in the papers of Chu and Dalgarno 61 and Pazyuk et al. 62 The latter demonstrated that the asymptotic behavior of the Rb 2 and Cs 2 62 transition dipole functions at R = ∞ tends to the limiting atomic value B n /R n with n = 3, 4, where the coefficients B n can be calculated using the atomic wave functions and energies. A quantitative fitting of some transition dipole moments showed this behavior for the Cs−Kr and Rb−Kr dimers. A detailed study of the transition dipole functions at long-range internuclear distances is in progress for a better understanding of the long-range behavior. This will permit the probing of their quality. This quality is related to that of the electronic wave functions, which were also used to evaluate the molecular energies.
Figure 9. Transition dipole moments for the X 2 Σ + → A 2 Π, X 2 Σ + → B 2 Σ + , X 2 Σ + → A 2 Π 1/2 , X 2 Σ + → A 2 Π 3/2 , and X 2 Σ + → B 2 Σ 1/2 + transitions as functions of the internuclear distance for the Rb−Kr dimer.
Figure 10. Transition dipole moments for the X 2 Σ + → A 2 Π, X 2 Σ + → B 2 Σ + , X 2 Σ + → A 2 Π 1/2 , X 2 Σ + → A 2 Π 3/2 , and X 2 Σ + → B 2 Σ 1/2 + transitions as functions of the internuclear distance for the Cs−Kr dimer.
IV. CONCLUSION
We have performed an accurate theoretical study of alkali metal (Rb, Cs, and Fr) atoms interacting with a rare gas (Kr) atom. The M + (M = Rb, Cs, and Fr) cores and the electron−Kr interactions were replaced by a semilocal pseudopotential, and the M−Kr systems were reduced to one-electron SCF calculations. The potential energy curves and their corresponding spectroscopic constants have been determined. The spectroscopic parameters of the X 2 Σ + ground states show a good agreement compared to the available theoretical 15,16,35,36 and experimental 57 works. Moreover, the semiempirical scheme of Cohen and Schneider 55 was included to study the spin−orbit effect. One can notice that the largest energy splitting is observed for the first excited states B 2 Σ + and A 2 Π. Our spectroscopic constants and the earlier studies for the B 2 Σ 1/2 + , A 2 Π 1/2 , and A 2 Π 3/2 excited states are compared with experimental 57 and theoretical 15,16,35,36 results. Good agreement was observed for the equilibrium distances and dissociation energies. Because of the spin−orbit effect, a splitting was also observed in the case of the vibrational level spacing and the transition dipole moment.
In addition, a qualitative prediction of the D 2 broadening spectra was performed in this study. We found that broadening D 2 spectra are confined to the regions 770−865, 849−925, and 715−825 nm for Rb−Kr, Cs−Kr, and Fr−Kr, respectively.
Expected maxima for the blue-wing D 2 lines are predicted to appear at 770, 849, and 715 for Rb−Kr, Cs−Kr, and Fr−Kr, respectively. Heaven and Stolyarov 58 determined the maxima for the blue wing of the D 2 line from Rb and Cs in Kr to be 753 and 835 nm using the QS approximation, in a good agreement with the observed values of 759 and 841 nm. Our predictions are 770 and 849 nm, which are overestimated compared to the theoretical and experimental values. 48−58 This could be associated to the difference in the potential interactions for the X 2 Σ + and B 2 Σ 1/2 + states, especially at short and intermediate repulsive parts in both states' potentials, making the difference in potential energy smaller and therefore giving longer wavelengths. It is clear that the difference between our work and that of Heaven and Stolyarov 58 is primarily related to the difference in the potential energy interaction calculation approaches. The potential used by Heaven and Stolyarov 58 for the B 2 Σ 1/2 + state including spin−orbit coupling is more repulsive than ours. Considering the simplicity of our used model, reducing the M− Kr pair to only one valence electron interacting with the M + −Kr ionic system, and the more sophisticated ab initio approaches used by Heaven and Stolyarov, 58 our results are in general good agreement with theirs.
In addition, to study the structure and dynamics of large M + −Kr n clusters, the one-electron pseudopotential approach can be used. Furthermore, to investigate the expansion effect of the alkali atom by collision with the Kr atom and also to evaluate transport coefficients for alkali atoms moving through a bath of Kr atoms, the accurate results of the M−Kr systems will be used.
Figure 11. Transition dipole moments for the X 2 Σ + → A 2 Π, X 2 Σ + → B 2 Σ + , X 2 Σ + → A 2 Π 1/2 , X 2 Σ + → A 2 Π 3/2 , and X 2 Σ + → B 2 Σ 1/2 + transitions as functions of the internuclear distance for the Fr−Kr dimer. | 2023-06-15T06:16:38.417Z | 2023-06-14T00:00:00.000 | {
"year": 2023,
"sha1": "20dfb06e4934259a9b9218643fe7d27a1c7b9f11",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpca.3c00018",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc368daa83c5867ce2d1e007706a235736d01ae1",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252595977 | pes2o/s2orc | v3-fos-license | Three-loop corrections to the quark and gluon decomposition of the QCD trace anomaly and their applications
In the QCD energy-momentum tensor $T^{\mu\nu}$, the terms that contribute to physical matrix elements are expressed as the sum of the gauge-invariant quark part and gluon part. Each part undergoes the renormalization due to the interactions among quarks and gluons, although the total tensor $T^{\mu\nu}$ is not renormalized thanks to conservation of energy and momentum. We show that, through the renormalization, each of the quark and gluon parts of $T^{\mu\nu}$ receives a definite amount of anomalous trace contribution, such that their sum reproduces the well-known QCD trace anomaly. We provide a procedure to derive such anomalous trace contribution for each quark/gluon part to all orders in perturbation theory, and obtain the corresponding explicit formulas up to three-loop order in the $\overline{\rm MS}$ scheme in the dimensional regularization. We apply our three-loop formulas of the quark/gluon decomposition of the trace anomaly to calculate the anomaly-induced mass structure of nucleons as well as pions. Another application of our three-loop formulas is a quantitative analysis for the constraints on the twist-four gravitational form factors of the nucleon, $\bar{C}_{q,g}$.
Introduction
The QCD energy-momentum tensor T µν is known to receive the trace anomaly [1], given in (1), representing the broken scale invariance due to quantum loop effects, with the beta-function β for the QCD coupling constant g and the anomalous dimension γ m for the quark mass m. Here, η µν is the metric tensor, and F 2 (= F µν a F aµν ) and ψ̄ψ denote the renormalized composite operators dependent on a renormalization scale. The symmetric energy-momentum tensor T µν is expressed as the sum of the gauge-invariant quark part T µν q and gluon part T µν g , as in (2) (with D µ = ∂ µ + igA µ and R (µ S ν) ≡ (R µ S ν + R ν S µ )/2), using the QCD equations of motion (EOM), up to the ghost and gauge-fixing terms that are not relevant for the following discussion. Classically, we have η µν T µν q = mψ̄ψ and η µν T µν g = 0, up to the terms that vanish by the EOM, but (1) does not coincide with the quantum corrections to the mψ̄ψ operator, reflecting that renormalizing the quantum loops and taking the trace do not commute. We note that the total tensor T µν of (2) is not renormalized; it is a finite, scale-independent operator, because of the energy-momentum conservation, ∂ ν T µν = 0, while T µν q and T µν g are not conserved separately and T µν q as well as T µν g is subject to regularization and renormalization. This fact suggests that each of T µν q and T µν g should receive a definite amount of anomalous trace contribution, such that their sum reproduces (1). The corresponding trace anomaly for each quark/gluon part is derived up to two-loop order in [2]. The extension to the three-loop order is worked out in [3], demonstrating that the logic to determine the quark/gluon decomposition of the trace anomaly holds to all orders in perturbation theory. In the MS-like (MS, $\overline{\rm MS}$) schemes in dimensional regularization, we obtain the decomposition (3) for n f flavors and N c colors with C F = (N 2 c − 1)/(2N c ) and C A = N c , where the ellipses stand for the two-loop (O(α 2 s )) as well as three-loop (O(α 3 s )) corrections, whose explicit formulas are presented in [2,3]. The sum of the two formulas of (3) coincides with (1) at every order in α s (= g 2 /(4π)).
Renormalization mixing at three loop
We sketch how the formulas (3) are obtained. First of all, the renormalization of T µν q , T µν g of (2) is not straightforward. Indeed, T µν q , T µν g are composed of the twist-two (traceless part) and twist-four (trace part) operators, and the renormalization mixing between the quark part and gluon part also arises. To treat them, we define a basis of independent gauge-invariant operators O k up to twist four, and the corresponding bare operators, O B k . The renormalization constants are introduced as in (5), where, for simplicity, the mixing with the EOM operators as well as the BRST-exact operators is not shown, as their physical matrix elements vanish and they do not affect our final result [4]. Here, O g , as well as O q , is a mixture of the twist-two and -four operators, and the corresponding twist-four components receive the contributions of the twist-four operators O g (4) and O q (4) . The two formulas of (6) reflect, respectively, that the twist-four operator O g (4) mixes with itself and another twist-four operator O q (4) , and that O q(4) is renormalization group (RG)-invariant (see [2,3,5]). Subtracting the traces from both sides of the equations (5), O k and O B k are, respectively, replaced by the corresponding twist-two parts, O k (2) and O B k (2) , such that the twist-four contributions drop out. The renormalization constants Z T , Z L , Z ψ and Z Q remain in the resulting equations that represent the flavor-singlet mixing of the twist-two spin-2 operators, and thus can be determined by the second moments of the DGLAP splitting functions which are known up to the three-loop accuracy [6].
For the renormalization mixing (6) at twist four, the Feynman diagram calculation of Z F and Z C is available to the two-loop order [5]. Moreover, it is shown [3] that the constraints imposed by the RG invariance of (1) allow to determine the form of Z F as well as Z C in the MS-like schemes, completely from β(g) and γ m (g), which are known to five-and four-loop order in the literature, respectively. Therefore, six renormalization constants Z T , Z L , Z ψ , Z Q , Z F and Z C among ten constants arising in (5), (6) are available to a certain accuracy in the MS-like schemes, and they take the form, in the d = 4 − 2ǫ spacetime dimensions with X = T, L, ψ, Q, F, and C; here, a X , b X , c X , . . . , are the constants given as power series in α s , and δ X,X ′ denotes the Kronecker symbol. However, Z M , Z S , Z K and Z B still remain unknown. It is shown [3] that these four renormalization constants can be determined to the accuracy same as the renormalization constants (7), by invoking that they should also obey (7) with X = M, S , K, B, and that the RHS of the formulas of (5) are, in total, UV-finite. Thus, all the renormalization constants in (5), (6) are determined up to the three-loop accuracy, and this result allows us to derive the three-loop formulas [3] for (3), by calculating the trace part of (5).
Anomaly-induced mass structure of hadrons
The QCD trace anomaly (1) signals the generation of a nonperturbative mass scale, say, the nucleon mass m N : Taking the matrix element of (1) in terms of a hadron state |h(p)⟩ with the 4-momentum p µ satisfying p 2 = m 2 h , and using the fact that ⟨h(p)|T µν |h(p)⟩ = 2p µ p ν , we obtain (8), so that almost all of the hadron mass m h could be attributed to the quantum loop effects in QCD responsible for the trace anomaly. Based on (8) for the nucleon (h = N), it is frequently argued that the entire mass m N comes from gluons in the chiral limit. However, the partition of QCD loop effects as (3) shows that the latter statement would not be suitable: Indeed, (3) allows us to separate (8) as in (9) and, evaluating (3) with N c = 3, n f = 3 at the renormalization scale µ, one finds [3] the expressions (10) and (11) for the quark and gluon parts of η λν T λν , with the ellipses associated with the operator mψ̄ψ. The nucleon (h = N) in the chiral limit gives (12), where (n f /3)/(−11C A /6) ≃ −0.181818. Eqs. (9)-(12), combined with ⟨N(p)|F 2 |N(p)⟩ < 0, show that the gluon- and quark-loop effects make the nucleon mass heavy and light, respectively, with the magnitude of the former being five times larger than that of the latter. From (12), the µ-dependence of this result for the relative size of the gluon/quark loop effects in the chiral limit is rather weak. It is also worth noting that the total sum (9) of (10) and (11) allows us to constrain the matrix element of F 2 as ⟨N(p)|F 2 (µ = 1 GeV)|N(p)⟩ ≃ −8.61m 2 N , using α s (1 GeV) = 0.47358 . . ., as the three-loop running coupling constant in the MS scheme with α s (M Z ) = 0.1181. We note that the neglected four-loop contributions are expected to produce corrections less than ten percent because α 3 s (1 GeV) ≃ 0.1. Next, we consider the pion case, for which the PCAC relation, −(m u + m d )⟨0|ūu + d̄d|0⟩ = 2 f 2 π m 2 π , with f π the pion decay constant, indicates m 2 π ∼ m as m → 0. Eq. (8) for the pion (h = π) implies ⟨π(p)|F 2 |π(p)⟩ → 0 in the chiral limit m → 0. Eq. (8) also gives the relation among the O(m) terms: When the substitution, |π(p)⟩ → |π(p)⟩ 0 + |π(p)⟩ 1 + . . ., is made, where |π(p)⟩ 0 ≡ |π(p)⟩| m=0 and |π(p)⟩ 1 is the O(m 1 )-term, we have ⟨π(p)|mψ̄ψ|π(p)⟩ → 0 ⟨π(p)|mψ̄ψ|π(p)⟩ 0 and ⟨π(p)|F 2 |π(p)⟩ → 0 ⟨π(p)|F 2 |π(p)⟩ 1 + 1 ⟨π(p)|F 2 |π(p)⟩ 0 , up to the corrections of O(m 2 ). The pion mass can also be calculated as the mass shift due to the ordinary first-order perturbation theory in the quark mass term in the QCD Hamiltonian, as in (13) [7] and, combining this with the above results for the O(m) terms in (8), we obtain (14) to the O(m) accuracy. Therefore, up to the corrections of O(m 2 ), the terms associated with the F 2 operator and the mψ̄ψ operator in the RHS of (8) contribute to m 2 π according to the relative weights, (1 − γ m (g)) and (1 + γ m (g)), respectively; here, γ m (g) = 0.63662α s + 0.768352α 2 s + 0.801141α 3 s ≃ 0.559, at the three-loop accuracy. Substituting (13) and (14) into (9) with (3), we find the corresponding decomposition and, using α s (1 GeV) = 0.47358 . . ., we obtain (1/(2m 2 π )) ⟨π(p)| η λν T λν g (µ = 1 GeV) |π(p)⟩ = 0.521, which holds to the O(m) accuracy. Again, the µ-dependence of the result is rather weak, but (17) shows a structure completely different from the nucleon case: Both the gluon- and quark-loop effects produce roughly half of the pion mass. This may be a particular nature as a Nambu-Goldstone boson.
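As a quick sanity check of two of the numerical values quoted above (our own back-of-the-envelope arithmetic, not part of the original analysis), the three-loop series for γ m and the color-factor ratio for the nucleon can be evaluated directly:

alpha_s = 0.47358                       # quoted three-loop MSbar value of alpha_s(1 GeV)
gamma_m = 0.63662*alpha_s + 0.768352*alpha_s**2 + 0.801141*alpha_s**3
print(round(gamma_m, 3))                # -> 0.559, as quoted in the text

n_f, C_A = 3, 3                         # three flavors, N_c = 3
print((n_f/3) / (-11*C_A/6))            # -> -0.1818..., the ratio quoted for the nucleon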
To see the consequence of (3) on C̄ q,g (t), we take the trace of the forward limit, ∆ µ → 0, of (18), which gives (19), and substitute (3) into the second term and the three-loop DGLAP evolution into A q (µ) ≡ A q (t = 0, µ), a flavor-singlet spin-2 operator renormalized at the scale µ. C̄ q (µ) ≡ C̄ q (t = 0, µ) of (19) then reads [2] as in (20), with β 0 ≡ (11C A − 2n f )/3, where µ 0 is a certain starting scale, A q0 ≡ A q (µ 0 ), and ⟨p|F 2 |p⟩ has been eliminated in favor of the nucleon mass m N using (8). Here, explicitly shown are the leading order (LO) terms that are derived from the contributions at the one-loop accuracy in (19); the ellipses denote the NLO and NNLO terms derived from the two- and three-loop contributions in (19), respectively. The first few terms independent of µ in (20) represent the asymptotic value, which is completely determined by the values of N c and n f in the chiral limit. Substituting N c = 3 and n f = 3, we obtain (21), where the NLO as well as LO terms are explicitly shown, and the ellipses stand for the NNLO terms. For illustration, we plot (21) as a function of µ in the chiral limit: Fig. 1(a) shows the results up to the LO, NLO, and NNLO accuracy; the NLO as well as NNLO corrections give a few percent level effects, reflecting the small numerical coefficients arising in (21), and the NLO and NNLO corrections tend to cancel; the approach to the asymptotic value, −0.146, is quite slow. The NNLO result is separated, in Fig. 1(b), into the individual contributions of each term in (19), the first (twist-2) term and the second (anomaly) term; both the twist-2 and anomaly terms produce important effects.
To summarize, the quark/gluon decomposition of the QCD trace anomaly allows us to study hadrons from new aspects, revealing, e.g., quite different patterns for the nucleon and the pion.
"year": 2022,
"sha1": "343a05fe949b3ca661ebf1a89da3f9e59e902f9a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "343a05fe949b3ca661ebf1a89da3f9e59e902f9a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
146121350 | pes2o/s2orc | v3-fos-license | Certification of spin-based quantum simulators
Quantum simulators are engineered devices controllably designed to emulate complex and classically intractable quantum systems. A key challenge lies in certifying whether the simulator is truly mimicking the Hamiltonian of interest. However, neither classical simulations nor quantum tomography are practical to address this task because of their exponential scaling with system size. Therefore, developing novel certification techniques, suitable for large systems, is highly desirable. Here, in the context of fermionic spin-based simulators, we propose a global many-body spin to charge conversion scheme, which crucially does not require local addressability. A limited number of charge configuration measurements performed at different detuning potentials along a spin chain allow to discriminate the low-energy eigenstates of the simulator. This method, robust to charge decoherence, opens the way to certify large spin array simulators as the number of measurements is independent of system size and only scales linearly with the number of eigenstates to be certified.
Introduction.-Quantum simulators [1][2][3] are devices designed to emulate the behavior of quantum systems whose complexity generically increases exponentially with size. Their importance is multifold as they: (i) can provide new insights into complex quantum phenomena, e.g. high temperature superconductivity [4,5], non-Abelian gauge theories [6], scattering effects [7], quantum criticality [8], and long-term many-body dynamics [9][10][11]; and (ii) realize models that do not exist naturally, e.g. Kitaev Hamiltonian for Toric code [12]. One of the main challenges in quantum technology is to certify that an engineered quantum simulator, nontractable classically, truly emulates the system of interest [13][14][15][16]. A necessary first step consists in matching the simulator low energy eigenstates with their expected counterparts in the emulated model. This task is highly challenging as it usually requires full quantum state tomography, because eigenstates may differ only by their global entanglement structure. However, this requires local addressability along with a number of measurements which scales exponentially with system size [17].
Here, we propose a global spin to charge conversion readout scheme to discriminate between the low-energy entangled spin eigenstates of a spin chain. The basic principle is to measure the charge configuration of the simulator under different potential gradients (called tiltings) applied across the chain. Importantly, the number of tilts is independent of the system size and only scales linearly with the number of eigenstates to discriminate. This readout scheme can be used to both certify (when the solution is known) and measure (when an unknown process is being simulated) the system evolution in the low energy regime. Our scheme can greatly facilitate the realization of a solid state spin-based quantum simulator as: (i) charge detections are easier to perform than direct spin measurements [25][26][27][28][29][30]; (ii) a single capacitive detector is able to read out charge configurations of multiple sites [29]; (iii) global potential tilts are sufficient as opposed to local addressability; and, most importantly, (iv) the distinction of eigenstates sharing the same symmetries and the total spin (differing only in their entanglement structure) is possible without quantum tomography.
Model.-The Heisenberg spin chain is a key model in condensed matter physics [39,40], spintronics [41] and quantum technologies [42,43]. To simulate this model we consider N interacting electrons hopping among N sites (i.e. half filling) in a regular 1D lattice. The Hamiltonian is the Fermi-Hubbard model of Eq. (1), where c k,σ (c † k,σ ) is the fermionic annihilation (creation) operator for an electron at site k with spin σ, the number operator n k = Σ σ=↑,↓ c † k,σ c k,σ counts the number of electrons at site k, t is the tunnel coupling between neighboring sites, ε̃ k is the local potential at site k, V is the Coulomb interaction between adjacent sites and U is the on-site energy. In the case of a homogeneous 1D array, i.e. ε̃ k = 0, the Hamiltonian (1) is solvable [44]. Throughout this letter we consider a chain made of an even number of sites N, with on-site energy U/t = 40, Coulomb interaction V/t = 10 and a local potential of the form ε̃ k = (k − 1)ε, where ε is the potential difference between two adjacent sites. A schematic picture of the system is shown in Fig. 1(a). In a homogeneous lattice (ε̃ k = 0), whenever U ≫ t, the low energy eigenstates take the charge configuration (1, 1, · · · , 1) and the system effectively becomes a Heisenberg spin chain with exchange coupling J ∼ t 2 /U (with possible corrections due to V) [45]. These eigenstates form a low energy manifold separated by units of U from the eigenstates with double charge occupancies, for which the map to the Heisenberg model fails. For even N the ground state |S 1 is always a global singlet with total spin S tot = 0. The first two excited states |T 1 and |T 2 are triplets with total spin S tot = 1. The fourth eigenstate is again a global singlet, |S 2 . In a chain of length N = 4 these four eigenstates form the low energy manifold.
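To make the model concrete, the following is a minimal, self-contained numerical sketch (our own illustration, not the authors' code), written under the assumption that Eq. (1) has the standard extended-Hubbard form H = −t Σ (c†c + h.c.) + U Σ n↑n↓ + V Σ n_k n_{k+1} + Σ ε̃_k n_k implied by the definitions above; it builds the Hamiltonian via a Jordan-Wigner construction, restricts to the half-filled sector and returns the low-energy spectrum and ground-state charge configuration.

import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
A = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilation operator

def mode_op(op, m, n_modes):
    # Jordan-Wigner string: Z on all modes before m, `op` on mode m, identity after
    return reduce(np.kron, [Z] * m + [op] + [I2] * (n_modes - m - 1))

def hubbard_chain(N=4, t=1.0, U=40.0, V=10.0, eps=0.0):
    """Extended Fermi-Hubbard chain; spin-orbital index 2*k + s, s=0 (up), s=1 (down)."""
    n_modes = 2 * N
    c = [mode_op(A, m, n_modes) for m in range(n_modes)]
    cd = [op.conj().T for op in c]
    n = [cd[m] @ c[m] for m in range(n_modes)]
    H = np.zeros((2 ** n_modes, 2 ** n_modes))
    for k in range(N - 1):
        for s in (0, 1):                      # spin-conserving nearest-neighbour hopping
            H += -t * (cd[2*k + s] @ c[2*(k+1) + s] + cd[2*(k+1) + s] @ c[2*k + s])
    for k in range(N):
        H += U * (n[2*k] @ n[2*k + 1])        # on-site repulsion
        H += (k * eps) * (n[2*k] + n[2*k+1])  # tilt (k-1)*eps in the text, 0-indexed here
    for k in range(N - 1):                    # nearest-neighbour Coulomb interaction
        H += V * ((n[2*k] + n[2*k+1]) @ (n[2*(k+1)] + n[2*(k+1)+1]))
    return H, n

def half_filled_spectrum(N=4, eps=0.0, n_levels=6):
    H, n = hubbard_chain(N=N, eps=eps)
    occ = np.array([bin(b).count("1") for b in range(H.shape[0])])
    sec = np.where(occ == N)[0]               # project onto the N-electron (half-filled) sector
    evals, evecs = np.linalg.eigh(H[np.ix_(sec, sec)])
    gs = evecs[:, 0]
    charges = [float(gs @ (n[2*k] + n[2*k+1])[np.ix_(sec, sec)] @ gs) for k in range(N)]
    return evals[:n_levels], charges

if __name__ == "__main__":
    energies, charges = half_filled_spectrum(N=4, eps=0.0)
    print("lowest energies (units of t):", np.round(energies, 3))
    print("ground-state charge configuration:", np.round(charges, 2))

For N = 4 this works directly with dense 70 x 70 matrices in the half-filled sector, which is enough to reproduce the kind of low-energy manifold described above; larger chains would require sparse methods.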
Charge configurations.-Many-body spin eigenstate measurement is challenging. For example, |S 1 and |S 2 have the same total spin S tot = 0 and share various symmetries (e.g. SU(2) invariance), making them difficult to distinguish locally. To achieve spin eigenstate readout, we apply a potential tilt across the chain, i.e. a finite ε, to provide enough energy for electrons to overcome U, as shown in Fig. 1(b), and the charge configuration is measured. Since the eigenstates are always orthogonal, their charge configurations, which are experimentally measurable, depend on their spin state. This is the core of our certification method.
We now develop the evolution of the charge configurations versus the tilt for a chain of N=4. The charge configuration changes for both eigenstates around ε/t ∼ 13.4 and one electron moves from either site 4 (in the case of |S 1 ) or site 3 (in the case of |S 2 ) to site 1, creating two different charge configurations for |S 1 and |S 2 . At around ε/t ∼ 30 in the eigenstate |S 2 an electron moves from site 4 to site 2 resulting in the charge configuration (2, 2, 0, 0). Finally, at ε/t ∼ 50 the charge configuration of |S 2 evolves to (2, 1, 1, 0) while |S 1 rearranges to (2, 2, 0, 0). All these charge configurations are summarized in Fig. 1(c). To understand this charge dynamics we plot the energies of the first three singlet eigenstates in Fig. 2(c). Any charge movement in the eigenstates corresponds to an anti-crossing between two eigenstates with the same S tot . This is evident at ε/t ∼ 13.4, ε/t ∼ 30 and ε/t ∼ 50 where E S 1 and E S 2 , E S 2 and E S 3 and E S 1 and E S 2 again, anti-cross.
The charge configuration of both eigenstates changes around ε/t ∼ 13.4 and one electron moves from either site 4 (in the case of |T 1 ) or site 3 (in the case of |T 2 ) to site 1. In Fig. 3(c) we plot the energy eigenvalues of both |T 1 and |T 2 as functions of ε/t, which show an anti-crossing at the charge transition point ε/t ∼ 13.4. For larger systems (see the SM), the final charge configurations are (2, · · · , 2, 0, · · · , 0) for |S 1 and (2, · · · , 2, 1, 1, 0, · · · , 0) for |T 1 . This important feature will be used for certification later in the letter.
Adiabatic tilting.-In order to read out the many-body spin eigenstate, we tilt the system, initially prepared in one of the low energy eigenstates, adiabatically such that it remains in the local eigenvector of the Hamiltonian at any time τ. The eigenstates can be discriminated by measuring the charge configuration at different potentials ε. The tilt potential ε(τ) is ramped over a total time T max up to a maximum tilt potential ε max , considered here to be ε max /t = 70. For any initial state |Ψ(0) the system evolves to the state |Ψ(τ) according to the Schrödinger equation under the action of the time dependent Fermi-Hubbard Hamiltonian described in Eq. (1). The choice of T max is important as it results in different system dynamics. Adiabaticity, which notably protects the evolution against Landau-Zener transitions while sweeping through anticrossings, is achieved for slow dynamics and large T max . However, faster dynamics minimizes charge decoherence effects at these transitions. In Fig. 4(a) we plot the charge occupancies for the quantum state |Ψ(τ) , taking T max = 2 × 10 4 /t, as a function of time when the system is initially prepared in the state |S 1 . The charge configurations are very similar to those of the real eigenstates displayed in Fig. 2(a), with the fidelity of the evolution F = | Ψ(τ)|S 1 (τ) | 2 remaining above 0.98 throughout the evolution, which demonstrates that the adiabatic condition is well satisfied. In Fig. 4(b) we depict the charge occupancies when the system is initialized in the state |T 1 . Again the charge configurations are very similar to the ones for the real eigenstate shown in Fig. 3(a), with the fidelity above 0.97 throughout the evolution. In Figs. 4(c) and (d) we plot the charge occupancies of the state |Ψ(τ) when the system is initially in the state |T 2 and |S 2 , respectively. In these two cases, the evolution is very different from the charge configurations of the local eigenstates given in Fig. 3(b) and Fig. 2(b), respectively. Here T max is not large enough to keep an adiabatic evolution for these two eigenstates and their fidelity reaches levels as low as ∼0.2. In the SM, we show that T max values on the order of (10 7 − 10 8 )/t would be required to ensure an adiabatic evolution of |S 2 and |T 2 , due to smaller gaps between higher energy eigenstates. Nonetheless, as we will show below, only an adiabatic evolution of |S 1 and |T 1 is enough to distinguish all four eigenstates, enabling complete certification.
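A minimal sketch of such a tilt sweep is given below (our own illustration; the ramp profile ε(τ) is not spelled out in the text, so a linear ramp is assumed, energies are in units of t with ħ = 1, and the matrices H0, D and n_site are assumed to come from a construction like the toy Hubbard sketch after Eq. (1) above, restricted to the half-filled sector):

import numpy as np
from scipy.integrate import solve_ivp

def adiabatic_tilt(H0, D, psi0, eps_max=70.0, T_max=2.0e4, n_out=200):
    """Sweep the tilt linearly, eps(tau) = eps_max * tau / T_max (assumed ramp shape),
    and integrate i d|psi>/dtau = [H0 + eps(tau) D] |psi>, where H0 is the untilted
    Hamiltonian and D = sum_k k*(n_k_up + n_k_down) is the dimensionless tilt operator."""
    def rhs(tau, psi):
        return -1j * ((H0 + (eps_max * tau / T_max) * D) @ psi)
    taus = np.linspace(0.0, T_max, n_out)
    sol = solve_ivp(rhs, (0.0, T_max), psi0.astype(complex),   # complex y0 for complex domain
                    t_eval=taus, rtol=1e-8, atol=1e-10)
    return taus, sol.y.T          # states |psi(tau)> at the requested times

def charge_occupancies(states, n_site):
    """<n_k>(tau) for each site, given per-site number operators in the same basis."""
    return np.array([[np.real(np.vdot(psi, nk @ psi)) for nk in n_site] for psi in states])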
The key feature of our proposal lies in its scalability: only two tilts are needed to fully distinguish the four lowest eigenstates, irrespective of the system size (see SM). In fact, distinguishing n low-energy eigenstates requires only n/2 tilts. Now consider an evolution that is adiabatic only for |S_1⟩ and |T_1⟩, as depicted in Fig. 4. For |S_2⟩, the outcome of the charge measurement will be time-averaged over the charge occupancies due to rapid charge oscillations. Therefore, using the same procedure, at ε/t = 35 the states |S_2⟩ and |T_2⟩ take the configurations (1.5, 1, 1.5, 0) and (1.2, 1.6, 1, 0.2), respectively, which are clearly distinct from each other as well as from the configurations of |S_1⟩ and |T_1⟩. Note that the partial charges mean that the quantum states are in a superposition of multiple charge states. Hence, even when the evolution of |S_2⟩ and |T_2⟩ is non-adiabatic, the proposed discrimination procedure still holds.
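A sketch of the resulting two-tilt decision rule for N = 4: the measured (possibly time-averaged) charge configuration is matched to the reference patterns quoted above by nearest distance. The reference values are those stated in the text for the case where only |S_1⟩ and |T_1⟩ evolve adiabatically; the classifier itself is an illustrative construction, not part of the paper.

```python
# Sketch: two-tilt state discrimination for N = 4 by nearest-configuration match.
# Reference charge patterns are taken from the text; the classifier is illustrative.
import numpy as np

REF_AT_35 = {                       # first tilt, eps/t = 35
    "S1/T1": (2.0, 1.0, 1.0, 0.0),  # |S1> and |T1> still share this pattern
    "S2":    (1.5, 1.0, 1.5, 0.0),  # time-averaged (non-adiabatic) pattern
    "T2":    (1.2, 1.6, 1.0, 0.2),
}
REF_AT_70 = {                       # second tilt, eps/t ~ 50-70
    "S1": (2.0, 2.0, 0.0, 0.0),
    "T1": (2.0, 1.0, 1.0, 0.0),
}

def nearest(measured, table):
    """Return the label whose reference pattern is closest to the measurement."""
    return min(table, key=lambda k: np.linalg.norm(np.array(measured) - np.array(table[k])))

def identify_state(charges_at_35, charges_at_70):
    first = nearest(charges_at_35, REF_AT_35)
    if first != "S1/T1":
        return first
    return nearest(charges_at_70, REF_AT_70)   # second tilt splits |S1> from |T1>

print(identify_state((2.0, 1.1, 0.9, 0.0), (2.0, 1.9, 0.1, 0.0)))   # -> S1
print(identify_state((1.4, 1.1, 1.5, 0.0), (1.5, 1.0, 1.5, 0.0)))   # -> S2
```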
Decoherence.—Interaction with the environment results in decoherence, which we model with a Lindblad master equation of the form

dρ/dτ = −(i/ℏ)[H(τ), ρ] + γ Σ_n ( L_n ρ L_n† − ½{L_n† L_n, ρ} ),

where γ represents the decoherence strength, ρ is the density matrix of the system, and the L_n are the Lindblad operators, which depend on the decoherence source. In Fig. 5(a) we plot the charge occupancies for the evolution of |S_1⟩ in a chain of N = 4 with γ/t = 10^{-3} and T_max = 2 × 10^4 ℏ/t. Decoherence leads to partial charge transitions, so the quantum states become mixtures of charge configurations. The same evolution for the triplet state |T_1⟩ is depicted in Fig. 5(b). The evolution of |T_1⟩ is less affected than that of |S_1⟩ as it involves fewer charge transitions. In the SM we discuss the fidelities and entropy production resulting from this evolution. As decoherence affects charge transitions, it is important to address its impact on our protocol for distinguishing quantum states. Each measurement outcome is associated with a charge projection operator L_n with probability p_n = Tr(ρ L_n). Distinguishing two eigenstates, e.g. |S_1⟩ and |T_1⟩, is equivalent to distinguishing the two probability distributions {p_n : p_n = Tr(ρ_{S_1} L_n)} and {q_n : q_n = Tr(ρ_{T_1} L_n)}, where ρ_{S_1} (ρ_{T_1}) is the solution of the above Lindblad master equation with initial state |S_1⟩ (|T_1⟩). Experimentally, the true probability distribution can be obtained by averaging over M charge measurements at each tilt. The distance (relative entropy) d(S_1, T_1) = Σ_n p_n log_2(p_n/q_n) can be used to quantify the distinguishability of the two distributions. The error in discriminating the two probability distributions after M samples scales as ∼2^{−Md} for large M [48]. Therefore, by repeating the experiment at each tilt M ∼ 10^2 − 10^3 times, one can reconstruct the probability distributions and discriminate between the eigenstates whenever d > 1. In Fig. 5(c) we plot d(S_1, T_1) versus γ for a tilt set to ε/t = 70. The distance drops as γ increases, but it remains above 10 even for γ/t = 0.01, so discrimination is still achievable.
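The sample-complexity argument above can be made concrete in a few lines: given the two outcome distributions {p_n} and {q_n} (which in practice would come from solving the Lindblad equation), compute d and invert the quoted ∼2^{−Md} error scaling for a target error. The example distributions below are invented purely for illustration.

```python
# Sketch: relative entropy d between two charge-outcome distributions and a rough
# sample-count estimate from the 2^{-M d} error scaling quoted in the text.
# The example probabilities are made up for illustration only.
import numpy as np

def relative_entropy(p, q, reg=1e-12):
    """d(p||q) = sum_n p_n log2(p_n/q_n), with a small regularizer for zeros."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2((p + reg) / (q + reg))))

def samples_for_error(d, target_error=1e-3):
    """Smallest M with 2^(-M d) <= target_error (meaningful only for d > 0)."""
    return int(np.ceil(-np.log2(target_error) / d))

p_S1 = [0.90, 0.05, 0.03, 0.02]   # hypothetical outcome probabilities for rho_S1
p_T1 = [0.04, 0.88, 0.05, 0.03]   # hypothetical outcome probabilities for rho_T1
d = relative_entropy(p_S1, p_T1)
print(f"d = {d:.2f}, M needed for 1e-3 error ~ {samples_for_error(d)}")
```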
Experimental realization.—The most relevant platforms for realizing our proposal are fermionic optical lattices [49] and dopant arrays [30, 50]; we specifically consider the latter. The atomic precision of scanning-tunneling-microscopy lithography [50] provides the versatility required to fabricate 1D or 2D phosphorus donor-bound spin arrays in silicon, with charge sensors in their proximity calibrated to accurately deduce charge configurations [51]. The dopant charging energy is U ∼ 47 meV for bulk donors, and both t and V can be engineered via the physical separation between sites. For dopants placed 10 nm apart, t is about 1 meV [52] and V around 10 meV, as considered in this letter. With these values, an adiabatic evolution is achieved for T_max ≥ 13 ns. Experimental charge-dephasing values can be converted to γ ∼ 0.02 − 1 µeV [53-56]. More precisely, the relevant quantity is the ratio γ/t, which is found to be ∼10^{-5} − 10^{-3} since strong tunneling interactions are considered here. As shown in Fig. 5(c), this results in d > 20 (and fidelities above 0.8, see SM), so precise certification is achievable in dopant systems. Hyperfine interactions, which couple electron and nuclear spins, are another possible source of error in dopant systems, as they mix the singlet and triplet subspaces. For a hyperfine coupling of A ∼ 0.4 µeV this mixing rate is ∼A^2/(E_{T_1} − E_{S_1}), set by the energy difference between |S_1⟩ and |T_1⟩. As the minimum of E_{T_1} − E_{S_1} ∼ 100 µeV is found for N = 4, the role of hyperfine interactions can be neglected. However, as the energy gap scales as 1/N^2, we predict that hyperfine interactions will become relevant for N > 20, making nuclear-spin initialization essential.
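For reference, the unit conversions behind these experimental estimates, assuming t = 1 meV as quoted for dopants ~10 nm apart:

```python
# Sketch: unit conversions behind the experimental estimates above, assuming
# t = 1 meV (as quoted in the text for dopants placed ~10 nm apart).
HBAR_meV_ns = 6.582119569e-4          # hbar in meV*ns

t_meV = 1.0
T_max_units = 2e4                     # T_max in units of hbar/t
print(f"T_max = {T_max_units * HBAR_meV_ns / t_meV:.1f} ns")   # ~13 ns

for gamma_ueV in (0.02, 1.0):         # experimental charge-dephasing range
    print(f"gamma/t = {gamma_ueV * 1e-3 / t_meV:.1e}")          # ~2e-5 to 1e-3

A_ueV, gap_ueV = 0.4, 100.0           # hyperfine coupling and minimum S1-T1 gap
print(f"hyperfine mixing ~ {A_ueV**2 / gap_ueV:.1e} ueV")       # negligible vs. gap
```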
Conclusion.—We have proposed an efficient procedure for certifying the performance of spin-based quantum simulators by discriminating between the low-energy eigenstates without resorting to quantum tomography. This is a nontrivial task, as the eigenstates cannot be distinguished locally because they are (i) many-body entangled and (ii) share the same symmetries and total spin. Since our scheme can be implemented without local addressability, it opens up the possibility of scaling the simulators to large sizes. The proposed mechanism can potentially be exploited to detect low-energy phenomena such as quantum phase transitions, electronic thermometry, and emergent Kondo screening clouds. After certification of the spin Hamiltonian in the low-energy regime, the same simulator can be used to reveal classically inaccessible features such as high-energy long-time dynamics and complex two-dimensional structures. Relevant platforms for implementing our certification method include dopant arrays [30, 50] and fermionic optical lattices [49].
SUPPLEMENTARY MATERIAL
In this supplementary material we provide further details on topics that were not discussed in depth in the main text.
Charge Configurations and State Discrimination for Large Chains
The proposed mechanism can also be applied to larger chains with N > 4. The charge configurations become more diverse as the size of the system increases. Using the same parameters as in the main text, namely U/t = 40 and V/t = 10, one can plot the charge occupancy of each site as a function of the tilting potential ε. As a typical example, we plot the data for N = 8 in Fig. S1 for the first four eigenstates, namely |S_{1,2}⟩ and |T_{1,2}⟩. As the figure shows, the overall picture is similar to the case of N = 4, except that there are more charge movements. The eigenstate charge configurations as a function of ε/t for chains of length N = 6 and N = 8 are represented schematically in Figs. S2(a)-(b). It can be shown that the final configuration of the eigenstate |S_1⟩ is always (2, · · · , 2, 0, · · · , 0) and that of |T_1⟩ is (2, · · · , 2, 1, 1, 0, · · · , 0). An important feature that arises in large chains is that the final charge configuration of |T_2⟩ shows partial charge occupancies, due to a superposition of charge states.
Remarkably, independently of the system size, we can discriminate between the four eigenstates using only two potential tilts. For instance, in the case of N = 6, at ε/t = 35 we can fully discriminate the eigenstates |S_2⟩ and |T_2⟩ from the rest, but we cannot yet distinguish |S_1⟩ from |T_1⟩. Note that at this value of the tilt the charge measurement outcome for |T_2⟩ is not unique, as that eigenstate is a superposition of different charge configurations; due to orthogonality, however, it does not share any charge configuration with |T_1⟩ (which has the same charge configuration as |S_1⟩) or with |S_2⟩. If the charge measurement shows the configuration (2, 2, 1, 1, 0, 0), the quantum state is either |S_1⟩ or |T_1⟩, and to discriminate between them one has to tilt the system further to ε/t = 70, at which point the two eigenstates take different charge configurations. The same argument holds for N = 8 and N = 10 (data not shown), for which the two potential tilts should still be performed at ε/t = 35 and ε/t = 70 for full discrimination between the four eigenstates.
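A small helper makes explicit the size-independent large-tilt reference patterns used for the final discrimination step; the patterns themselves are those stated above, and the helper is only illustrative.

```python
# Sketch: large-tilt reference charge configurations for a chain of even length N,
# following the patterns stated in the text (illustrative helper, not from the paper).
def final_config_S1(N):
    """(2, ..., 2, 0, ..., 0): all N electrons paired on the first N/2 sites."""
    return [2] * (N // 2) + [0] * (N - N // 2)

def final_config_T1(N):
    """(2, ..., 2, 1, 1, 0, ..., 0): two unpaired electrons next to the filled block."""
    pairs = N // 2 - 1
    return [2] * pairs + [1, 1] + [0] * (N - pairs - 2)

for N in (4, 6, 8):
    print(N, final_config_S1(N), final_config_T1(N))
```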
Adiabatic Evolution
As mentioned in the main text, the time T_max needed to keep the evolution of |S_2⟩ and |T_2⟩ adiabatic must lie in the range (10^7 − 10^8) ℏ/t. An example is given for T_max ∼ 8 × 10^7 ℏ/t. In Fig. S3(a) we plot the charge occupancies for the initial state |S_2⟩ for N = 4. As the figure shows, the charge occupancies are almost identical to the charge configuration shown in Fig. 2(b) for the local eigenstate. Similarly, in Fig. S3(b) we plot the charge occupancies for the initial state |T_2⟩, which are also close to the charge configuration of the local eigenstate given in Fig. 3(b). To assess the adiabaticity of the evolution, we plot the fidelities for |S_2⟩ and |T_2⟩ in Fig. S3(c). The fidelity for |S_2⟩ is always above 0.99 and that for |T_2⟩ always remains above 0.9. The better fidelity for |S_2⟩ is due to the larger energy gap between the higher singlet eigenstates. It is worth emphasizing that, as discussed in the main text, a large value of T_max keeping the evolution of |S_2⟩ and |T_2⟩ adiabatic is not needed for state discrimination between the first four eigenstates.

[Figure caption: the minimum energy gaps for both singlet and triplet states during the evolution of the system as ε_max/t varies from 0 to 70.]
Decoherence
In order to understand the full effect of decoherence in the Lindblad master equation, we consider an adiabatic evolution of both |S_1⟩ and |T_1⟩ in a system of length N = 4 with total evolution time T_max = 2 × 10^4 ℏ/t. In Fig. S4(a) we plot the fidelity of the evolution for the state |S_1⟩ as a function of time τ for different values of the noise strength γ. As the figure shows, increasing γ decreases the fidelity. To understand this, note that such dynamics is not unitary, so the quantum state of the system becomes mixed during the time evolution. To see this, one can compute the von Neumann entropy of the whole system, defined as S(ρ) = −Tr[ρ log_2 ρ]. In Fig. S4(b) we plot the von Neumann entropy of the system, when the quantum state is initially |S_1⟩, as a function of time τ for different values of the noise strength γ. As the figure shows, the entropy increases monotonically, with sharp rises during the charge movements when the charge state is delocalized. In Fig. S4(c) we also plot the fidelity for the quantum state |T_1⟩, keeping all parameters the same as for the singlet |S_1⟩. Finally, in Fig. S4(d) we plot the von Neumann entropy of the evolution of the triplet state |T_1⟩ as a function of time. Figs. S4(c)-(d) show that the fidelity of the triplet is slightly higher and its von Neumann entropy smaller in comparison with the singlet. This is due to fewer charge movements for the triplet, or equivalently fewer energy anti-crossings between the eigenstates, which makes the triplet evolution less prone to decoherence.
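For completeness, a minimal sketch of the von Neumann entropy used here, evaluated from the eigenvalues of the density matrix:

```python
# Sketch: von Neumann entropy S(rho) = -Tr(rho log2 rho), computed from the
# eigenvalues of the density matrix (zero eigenvalues contribute nothing).
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# a pure state has zero entropy; an equal mixture of two states carries one bit
pure = np.diag([1.0, 0.0])
mixed = np.diag([0.5, 0.5])
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))   # 0.0, 1.0
```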
"year": 2020,
"sha1": "8fcf72d677dd74352044dcd29eeffc1064b2592a",
"oa_license": null,
"oa_url": "https://discovery.ucl.ac.uk/10101232/1/PhysRevA.101.052344.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8fcf72d677dd74352044dcd29eeffc1064b2592a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.