Black and Gifted in Rural America: Barriers and Facilitators to Accessing Gifted and Talented Education Programs

Nationwide, Black students are underrepresented in gifted and talented education and advanced learner programs. These tragic outcomes occur in all demographic communities: urban, suburban, and rural. As a result, the academic and psychosocial supports needed by gifted Black students are overlooked, disregarded, and underdeveloped. Rural communities are frequently depicted as remote, lacking in social and academic experiences and opportunities, and predominantly White and economically disadvantaged. For gifted and talented Black students, these characterizations contribute to feelings of isolation and alienation in school on a daily basis. Despite their high intellectual potential, they are constantly victimized by racially oppressive conditions in society that cause stress and anxiety. The Black rural community, including Black gifted and talented students, is almost invisible in scholarship that discusses rural education in the United States. This article explores the nature of the rural communities where these students reside; shares intellectual, academic, and cultural characteristics that make Black gifted students from rural communities unique; and delineates recommendations for research, curriculum, and specific programming to meet their intellectual, academic, cultural, and psychosocial needs with an emphasis on access, equity, and excellence.

Authors' note: This article expands and reexamines previous work presented in Davis et al. (2020).

In this article we explore the needs of Black students in rural communities, focusing on the academic, intellectual, and psychosocial needs of Black students with high intellectual abilities or who should be defined as "gifted and talented" according to typical definitions of that label. According to the National Association for Gifted Children (NAGC, 2019a), giftedness means that students with gifts and talents perform, or have the capability to perform, at higher levels compared to others of the same age, experience, and environment in one or more domains. They require modification(s) to their educational experience(s) to learn and realize their potential. Students with gifts and talents:
• Come from all racial, ethnic, and cultural populations, as well as all economic strata.
• Require sufficient access to appropriate learning opportunities to realize their potential.
• Can have learning and processing disorders that require specialized intervention and accommodation.
• Need support and guidance to develop socially and emotionally as well as in their areas of talent. (p. 1)

In this article, we delineate several of the factors that create challenging circumstances for Black gifted students as they seek to access specialized program services and coursework that match their advanced intellectual abilities. We also make recommendations to add to the limited research and identify specific best practices that may guide researchers and practitioners with an interest in the needs of Black gifted students who originate from rural communities. We conclude by considering how gifted education as a field can become more inclusive, ensure that talent from all communities becomes a focus for all our work, and produce innovative outcomes for Black gifted students, regardless of their geographic location.
The Nature of Education for Black Students in Rural America

Several states in the U.S. Southeast are noted as having sizable populations of Black students attending rural schools, such as Mississippi, Louisiana, Alabama, Georgia, and Tennessee (Snyder et al., 2019; U.S. Chamber of Commerce Foundation & U.S. Chamber of Commerce, 2015). Students in these states and others are drastically lagging behind their peers across the nation in performance (U.S. Chamber of Commerce Foundation & U.S. Chamber of Commerce, 2015). Mississippi has the highest percentage of students attending rural schools of any state in the nation (Showalter et al., 2017). Black rural students attending schools in these states face daily challenges that limit their access to equitable, high-quality educational opportunities. Among these students are those who should have access to gifted and talented education (GATE) and advanced learner opportunities. Additional challenges faced by rural area students include (a) the multifaceted definitions of rural areas, (b) the complex nature of distance and isolation in rural areas that impact access to higher education opportunities, (c) extreme poverty levels, and (d) a high number of low-performing schools in rural communities across the nation. As we explore the needs of Black students in rural communities (with some attention to other students of color), we focus on communities defined as rural. Rural communities are very complex and sometimes difficult to distinguish from suburban communities or small towns. Herein, rural is defined as the complex range of geographically isolated communities with populations between 2,500 and 20,000 (per Cromartie & Bucholtz, 2008). Nationally, one-fourth of all public school students are enrolled in rural area schools (Showalter et al., 2017). In three states, more than half of students attend rural schools: Vermont (57.5%), Maine (57.2%), and Mississippi (56.5%). In Mississippi, over 49% of the student population is Black, and Alabama, Tennessee, and Georgia also have sizable populations of Black students (Snyder et al., 2019). In rural communities, education systems are faced with a unique set of challenges that stem from circumstances within the surrounding environment and often require specialized solutions (Lavalley, 2018). Isolation and disconnectedness from metropolitan areas are two of several key factors associated with many of the problems experienced by Black students living in rural America. Being isolated and disconnected from urban area resources may limit student access to cultural and enrichment opportunities that have much potential to expand their educational experiences. Distance and funding also pose challenges for rural area families in accessing resources that may be located in metropolitan areas. A classic example is summer and weekend opportunities hosted on urban or metropolitan college campuses, which may be inaccessible to rural area students, including programs for gifted and talented learners. With such limited access, even Black gifted and talented students have the potential to fall behind and be disadvantaged when it comes to competing with their urban or suburban peers who come from communities with better resources. In addition to these problems, the tragic effects of poverty are undeniably a significant factor in the challenges and complexity schools face in equitably meeting the needs of rural students from all racial and ethnic backgrounds.
The impact of poverty on educational engagement has been documented (e.g., Alsbury et al., 2018; Jensen, 2013). Living in the South places Black rural students at a particular disadvantage. Due to the impact of race and income inequities, Black rural students are doubly disadvantaged (Ford, 2013). Twelve of the top 15 states noted as having the highest percentage of low-income students are Southern states, which also have the highest percentage of schools located in rural areas (e.g., Mississippi, Georgia, Tennessee, North Carolina, South Carolina, and Louisiana). There is a higher concentration of free and reduced-price lunch schools in rural areas than in urban districts. According to Showalter et al. (2017), a significant and disproportionate number and percentage of students in poverty attend schools in rural communities. Based on a report published by the Southern Education Foundation (2015), for the first time in the nation's history, most public school students are living in poverty. Poverty in rural schools is further complicated by the lack of qualified educators available to meet the needs of students living there. Many of the personnel found in rural schools are forced to take on multiple roles in the school and district to meet students' varying needs, albeit with significantly less funding compared to schools in more affluent and densely populated areas (Howley et al., 2009; Superville, 2020).

Lack of Access to Opportunities

Literature is very limited on the presence and educational needs of Black students who are or have the potential to be identified as gifted and talented while living in rural communities. Scholarly work on high-potential and gifted and talented students in rural schools focuses primarily on White students in rural communities (Howley et al., 2009; Stambaugh, 2010). All too often, educators hold low expectations for rural Black students and fail to create equitable opportunities for them to demonstrate their abilities and thereby be considered viable candidates for gifted programming and services (Floyd et al., 2008). Ong's (2011) and Singer's (2011) research in rural, low-income communities found a lack of appropriate resources in schools to help students compete with their counterparts in wealthier and better-resourced school districts. Equity and excellence are compromised, hindering the potential of Black and other minoritized students. While this work continues to draw attention to the needs of rural-area gifted and talented White students, little work has been directed to the intellectual, academic, cultural, and affective needs of gifted and talented Black students attending schools in rural communities. This lack of scholarly attention presents an incomplete view of life for Black students growing up in rural communities while seeking higher-level educational opportunities, and in some cases suggests that these students do not exist (Ford, 2015).

Meeting the Intersectional Needs of Black Students in GATE Programs

Black students are systematically underrepresented in GATE programs nationwide. While Black students comprise 19% of students in schools nationally, only 10% of students in GATE programs are Black (Ford, 2013; U.S. Department of Education, 2016). Estimates of national data (U.S. Department of Education, 2016) indicate that Black students are consistently underrepresented at a rate of 40-55% each year. According to Ford, Wright et al. (2018), when equity is quantified, Black students should represent a minimum of 15.2% of students in GATE programs nationwide.
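To make the cited figures concrete, the short sketch below reproduces the arithmetic they imply. The 20% "equity allowance" is an assumption consistent with the reported numbers (19% × 0.80 = 15.2%), not a restatement of Ford, Wright et al.'s (2018) exact formula.

```python
# Sketch of the arithmetic behind the representation figures cited above.
# Assumption: the 15.2% equity minimum follows a 20% allowance applied to the
# national composition (19% x 0.80 = 15.2%), consistent with the figures
# attributed to Ford, Wright et al. (2018); the exact formula is theirs.

national_share = 0.19      # Black students as a share of enrollment (per the article)
gate_share = 0.10          # Black students as a share of GATE enrollment (per the article)
equity_allowance = 0.20    # assumed allowance below proportional representation

equity_minimum = national_share * (1 - equity_allowance)
underrepresentation = 1 - gate_share / national_share

print(f"Equity minimum: {equity_minimum:.1%}")            # ~15.2%
print(f"Underrepresentation: {underrepresentation:.0%}")  # ~47%, within the 40-55% range cited
```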
These data point to an egregious problem: thousands of Black students continue to lack access to high-end, advanced-learner programs, GATE programs, and other offerings typically made accessible to White and Asian students daily (Ford, Wright et al., 2018). A disaggregation of the Office for Civil Rights data for rural districts is needed to allow school personnel, families, and advocates to better understand the full scope of underrepresentation in rural GATE programs. From an intersectional viewpoint, to better understand the needs of Black gifted and talented students, we must more clearly understand the impact of race, gender, culture, rurality, community, and income on functioning (see Figure 1). Being Black places students in a historically and contemporarily oppressed group. The Black community typically has less access to a quality education, has the highest percentage of incarcerated individuals, and has more students disproportionately suspended, pushed out, and expelled from schools (Crenshaw et al., 2016; Losen & Skiba, 2010; Smith & Harper, 2015). Such unjust practices occur nationwide, but especially in the Southern states. Concomitantly, students with a poor discipline record are less likely than others to access services offered in GATE programs. It is also noteworthy that Black students are less likely to be referred for GATE programs compared to their White peers with similar achievement levels and family backgrounds (Grissom & Redding, 2016). Some scholars have examined the nature of rural living as an additional construct to understand what differentiates rural students from their counterparts in other geographic communities and the specific academic needs of rural students aspiring to attend college (Chambers et al., 2019).

[Figure 1. Intersectional View of Black Gifted Students From Rural Communities. Source: Davis et al. (2020), used with permission of the authors.]

To rectify these conditions, educational leaders must provide specific, culturally responsive professional learning for educators, engage in focused outreach to the Black community, and hear about the lived experiences of Black gifted and talented individuals in rural communities. In rural communities, Black students are more likely to live in closer proximity to family. These individuals also may be a source of support that school leaders may draw on in developing responsive structures within the school for Black gifted and talented students (Davis, 2008, 2010, 2016). Based on this work, establishing effective school and home partnerships with the Black community to enrich the GATE experiences of Black students is highly recommended. The numerous access and equity barriers to GATE faced by rural Black students are much the same as those faced by their urban peers (Biddle, 2011), but there are also important differences (see Table 1). Rural students are at risk for low motivation, low academic efficacy, and poor school success and have decreased chances of success in postsecondary education (Byun et al., 2012; Hardré et al., 2009; Stambaugh, 2010). A disproportionate percentage of students whose parents did not attend college are Black, as noted by Falcon (2015), who delineated barriers facing these students, including being prepared for and adjusting to college.
These students have other unique challenges as they contend with negative perceptions based on their oppressed group status; living in isolated communities with fewer cultural enrichment experiences; originating from communities that continue to suffer from vestiges of systemic racism and discrimination from the Jim Crow era, which especially impacted Southern states; and the risks associated with living in poverty, including being first-generation college students (Hébert & Beardsley, 2001; Hines, 2012). One case study of a third-grade gifted and talented Black male, Jermaine, who attended a rural school, provides evidence of the challenges that rural schools pose for high-potential Black students. As a gifted and talented Black male in a rural environment, Jermaine struggled with isolation, fitting in, racial identity, and being misunderstood as a complex, racially diverse student with gifted abilities and talents. According to the researchers, too few of Jermaine's teachers recognized his gifted potential (Hébert & Beardsley, 2001). A case shared by Davis (2016) demonstrates how a rural Black student in a GATE program suffered from bullying by his gifted counterparts and peers in high school programs and athletics teams. Gifted youth from rural areas are also at risk for underachievement due to the limited experiences of family members in advanced learner settings and the likelihood of being a first-generation college student. Those living in poverty are particularly challenged as they attend poorly resourced schools daily. Rural schools, like those in other communities, have a responsibility to identify and serve all gifted and talented students and should make necessary changes that enable educators to identify more minority gifted and talented students (Howley et al., 2009).

Role of Black Educators in Promoting Aspirations of Gifted and Talented Black Students

In a study of the school experiences of rural Black students, Hines (2012) found that Black students faced low expectations from teachers and, subsequently, high rates of school failure. A larger and more recent study by Grissom and Redding (2016) had similar findings, not only reporting low expectations and under-referrals for Black students who were performing at the same level as White students, but also reporting that Black students were more likely to be referred for GATE screening if they had a Black teacher. Noteworthy is that only 7% of teachers in the United States are Black (Taie & Goldring, 2020; National Center for Education Statistics, 2019b); in rural communities, only 3.6% of teachers are Black. To say that these data are troubling is an understatement. Having more Black teachers in rural schools would dramatically increase Black students' chances of academic success and successful life outcomes. This has been found in the important work of Easton-Brooks (2019) on ethnic matching: students who share the race or ethnicity of their teachers often achieve at higher levels. Teachers in rural schools also often lack access to specialized training about the nature of gifted and talented learners within their communities (Howley et al., 2009; Stambaugh, 2010). Further, nationally, teachers have little or no training (e.g., professional development and/or coursework) on being culturally responsive and competent.
In short, as more attention is drawn to the needs of increasingly diverse populations in all schools, including rural schools, educators will face more difficulties in meeting their specific and complex needs (Davis, 2019). Bryan and Ford (2014) recommended increasing the presence of Black male teachers across all districts to impact student success. More problematic is the role of classroom teachers in the identification of gifted and talented students and as providers of related service options. Chambers et al. (2019) posited that educators categorized as "dreamkeepers" were needed in schools to empower and encourage rural students aspiring to attend college. Some educators presuppose that rural students are less intelligent and have lower aspirations than students in other demographic communities. Thus, for highly able, gifted and talented Black students in rural schools, low teacher perceptions can have a negative impact on their school success, despite their high level of potential. Dreamkeeper teachers (Ladson-Billings, 2009) are critical to Black student success. Chambers et al. (2019) noted that such teachers are "percolators of student dreams but also actively convey their hopes and dreams, catalyzing student dreams of further education. Within rural education contexts, there are not enough Dreamkeepers" (p. 7). In a study of developmental factors associated with rural area Black adolescents, Murry et al. (2016) described positive peer influences and the role of families who encourage academic achievement. The authors also discussed the impact of caring teachers who hold high expectations for the youth's abilities as important to school success. Davis (2010) also described the use of the social and cultural capital of immediate family members, extended family, and the church community as substantive means of support for Black gifted students.

Mediating Isolation in Rural Schooling

Being geographically disconnected from a concentrated, culturally and socially enriched community often leads to feelings of isolation among rural students. Feelings of isolation from a common peer group can be detrimental to students' performance (Harris, 2006). Being Black and gifted in a rural school environment exacerbates these feelings of disconnectedness. When racially and culturally different gifted and talented students enter new programs with a group of students who are markedly different from them in income, race, ethnicity, language, culture, and experiences, their self-esteem, self-concept, and racial pride may suffer. Students need to feel a strong sense of belonging and acceptance to be recruited and retained in GATE programs, even more so for Black and Hispanic students due to underrepresentation. Cohort groups combat the effects of isolation and increase the assurance of a more comfortable "fit," allowing students of color to focus more on the academic challenge and less on their need for acceptance. Educators are encouraged to develop service models that identify small groups of students and cohorts who can move through programs together with their social, cultural, and intellectual peers (Davis, 2015). Cultural mismatch may also cause Black gifted and talented students to feel disconnected and isolated from their peers.
Recruiting, training, and retaining a highly qualified teaching force composed of teachers of color is a national issue, compounded by a shortage of educators with backgrounds in gifted education and training in cultural competence (Davis, 2019; Sleeter et al., 2015). This cultural mismatch affects student performance and success outcomes. To bring more clarity to this point, Easton-Brooks (2019) emphasized the importance of highly qualified teachers of color in classrooms with students of color. His contention is supported by interviews with teachers of color who have been instrumental in leading their students of color to academic success. Theorists suggest that, in the absence of teachers of color, the use of culturally responsive curriculum and pedagogy can mediate the effects of cultural differences and improve student achievement (Ladson-Billings, 2014). School district leaders must ensure that all teachers, including those responsible for working with gifted and talented students and GATE programs, are trained in cultural competency. This training helps educators understand how their perceptions and conscious and unconscious biases affect students and how they interact with their entire educational community (Davis, 2019). Appropriate professional development is the first step to addressing the needs of Black and other students of color in our schools and ensuring their access to GATE programs (Ford, 2011). Distance learning and the appropriate use of technology can help alleviate challenges found in rural areas by bringing people together. Use of distance learning and online learning technologies in rural schools has enriched curricular opportunities for students previously relegated to studies available only in the general education curriculum (de la Varre et al., 2010). Technology helps connect students in rural schools with the world outside their isolated communities through videoconferencing, advanced classes, and research (Floyd et al., 2008), and online and distance education programming has the potential to provide enhanced curricula, academic peer grouping, and access to highly trained classroom teachers (Hébert & Beardsley, 2001). While these options are becoming more readily available to students living in rural communities, ensuring that high-potential Black students have access to emerging technology remains a challenge. This inequity was further highlighted during the COVID-19 pandemic, when students needed access to virtual learning. As noted earlier, in districts that rely solely or extensively on teacher recommendations for GATE and advanced learner programs, Black students are less likely to be referred and therefore may continue to be shut out from the enriched and higher-level curriculum available to non-Black students, including online and distance education programming for advanced students. The challenges of regional programs designed for rural area gifted and talented students, including transportation, enabling students to have a sense of connectedness to the home school, and establishing a community of learners, are all issues of concern that need examination as effective options are considered for Black rural area gifted students (Howley et al., 2009; Stambaugh, 2010).
Currently, 15 states offer statewide or regional academic-year high schools for gifted and talented students (NAGC, 2019b), including states with significant rural populations: Alabama, Kentucky, Georgia, Mississippi, North Carolina, South Carolina, Virginia, and Texas. Of these states, Mississippi, Georgia, Alabama, and Tennessee also have sizable populations of Black students attending rural schools. Ensuring equitable admissions procedures for regional programs remains a challenge, as identification procedures likely mirror local district identification models. As such, Black students may continue to be overlooked and lack access to sophisticated regional programs designed for gifted and talented students. Improving the capacity of teachers to recognize gifts, talents, and high potential in Black rural students will remove barriers to the more sophisticated teaching and learning environments provided through online learning and other types of high-end regional school programs. Regional programs and online programs have the potential to mitigate the effects of geography and small class size and provide expertise that is often not available to Black rural area students in low-funded, low-performing school districts (Hines, 2012; Redding & Walberg, 2012). The cost of such programs may be a burden, however, to very small schools on limited budgets that attempt to provide service options for a few students. In some cases, rural districts have formed sophisticated regional consortiums with local universities to provide access through technologies not available to single schools or districts. The advantage of these online distance learning models is that they are more feasible and learner centered, and thus more attractive to district leaders responsible for curriculum planning and delivery (de la Varre et al., 2010). Nonetheless, the challenge remains of ensuring that Black gifted and talented students have access. Dual-enrollment models that allow high school students to take college-level courses for high school and college credit simultaneously are available in some districts (National Center for Education Statistics, 2019b; Zinth, 2014). These models enhance the capacity of GATE programs to reach more students attending rural schools. Zinth (2014) discussed strategies used by rural schools to lessen burdens of cost, transportation, and other challenges. Efforts to alleviate logistical challenges are encouraging. A recent report from the National Center for Education Statistics (2019b) indicates that nationwide only 27% of Black students were enrolled in dual-enrollment courses compared to 38% of White students. This low representation may indicate access difficulties that Black students experience in schools nationwide.

Importance of a Culturally Responsive Education for Gifted and Talented Black Rural Students

Ford (2011) described the components of a culturally responsive education, including the learning environment, curriculum, instruction, and assessment. When curriculum and instruction are culturally responsive, that responsiveness permeates all aspects of education and endeavors to reach all students. Culturally responsive education is not colorblind; rather, it affirms the dignity and worth of students by attending to their lived experiences, interests, and needs as cultural beings (Ladson-Billings, 2014). Similarly, in a reframing of the professional learning needs of teachers of diverse gifted and talented students, Davis (2019, p. 56) suggested three key features that professional learning experiences should address: (a) understanding the gifted traits, intellectual strengths, and unique psychosocial needs of diverse gifted and talented students; (b) knowing and being able to implement culturally responsive curriculum and instruction in their gifted classes and specialized programs; and (c) understanding the cultural norms and traditions of culturally diverse families and communities. A common misperception is that Black students, because they are not immigrants or international students, do not have a specific culture (Ford, 2011). This colorblind or culture-blind view produces serious misunderstandings and clashes between Black students and their teachers. Stated another way, when teachers fail to recognize the culture of their students, in this case what it means to be a Black rural student, it will be difficult for them to see those students' gifts and talents. Colorblindness is a form of racism and can deeply impact relationships between teachers and their students (Williams, 2011). When teachers do not understand the importance of the traditions, cultural norms, and belief systems of their Black students, their relationships are very limited. With Black gifted students, who may be more sensitive and insightful, this lack of teacher understanding can be problematic and also contributes to their under-referral for GATE screening and to poor retention in programs once identified. A culturally responsive philosophy supports classroom and learning environments that are welcoming and personally engaging (Davis, 2019; Ford, Dickson et al., 2018). When classrooms are more welcoming and inclusive, gifted and talented Black students, who tend to feel alienated and isolated, feel more like they are a part of the classroom community. This sense of belonging is essential when there are few culturally different gifted and talented students in their classes, schools, and related activities (e.g., competitions), as is usually the case in small rural districts. For Black students in rural communities, the church family has also been identified as a historically strong and stable source of spiritual, psychosocial, and academic support (Davis, 2010). Inclusion of faith leaders in community engagement programs has been recommended as an effective source of collaborative support for rural area Black students, for whom economic and social capital are often limited (Davis, 2010). Understanding the distinct culture of being rural also has an impact on teacher expectations of student ability and capacity for high performance. Teachers whose educational experience is not in a rural community may have a distorted view of the ability of Black students (Broadhurst & Wright, 2004). Just as low expectations of urban students tend to be the norm, so are the expectations of some teachers regarding the potential of rural area Black students (e.g., Riel, 2019). Culturally responsive education differs from traditional mainstream educational pedagogy. It is a philosophy and a process based on the fundamental belief that all cultural groups must be accorded prominence in our schools and given equal respect and value for their traditions, values, and legacies. Just as important, regardless of gender, class, religion, or physical and mental abilities, all students should be recognized in the teaching and learning process (Gay, 2010; Ladson-Billings, 2009).
Culturally responsive education affirms the value of individual and cultural differences through the act of reducing or, better yet, eliminating prejudices, biases, microaggressions, and stereotypes based on sociocultural demographic variables. In the GATE classroom, it may be assumed that students have a higher and more advanced understanding of the worth of all human beings. Gifted and talented learners possess an accentuated sense of empathy and justice. Thus, a culturally responsive curriculum aligns well with the needs of Black gifted and talented students and with those of their peers. The truth and sanctity of cultural contributions to society cannot be overlooked or disregarded in settings such as gifted education, where students are more apt to question potentially false and/or questionable instructional content and to be insightful and sensitive to hypocrisy or contradictions in behavior. Educators of gifted and talented students who teach using culturally responsive pedagogy and philosophy encourage their students to be empathetic critical thinkers who challenge and interrogate assumptions, biases, prejudices, and stereotypes. Likewise, they examine resources and content material from a broader, more inclusive perspective that encourages gifted students to become more proactive and assertive in their approach to questioning tenets of the varied disciplines with which they interact. Black students in rural areas, in particular those in GATE classes where they are racially isolated, benefit from seeing themselves reflected and affirmed in lesson plans and instructional materials. Children's multicultural literature expert Rudine Bishop (1982) coined the phrase "mirror and window books" to literally and figuratively reflect the crucial impact multicultural curriculum and materials can have on students of color. The visible representation of cultural norms, contributions, historical content, and literature increases engagement, racial and cultural pride, and potentially student achievement (Bishop, 1982). White students also benefit from lesson plans that are multicultural; they learn about other groups and increase their regard for these groups. To reiterate, culturally responsive education improves relationships (harmony and understanding) among students from different backgrounds and their teachers (e.g., Gay, 2010; Ladson-Billings, 2014). The curriculum is incomplete if it is polemic and fails to promote empathy and inclusion, if students are not taught to think and learn beyond the scope of themselves, and if they cannot see others and the world from viewpoints other than their own.

Recommendations for Research and Improved Practice in Schools

There is an urgent need for specific, systematic research and exemplary models that reflect the needs of Black students in rural schools and the GATE programs that serve them. This research will contribute greatly to the skills and ability of school leaders to improve their programming to ensure equity, access, and excellence in educational service options for these gifted and talented Black students. School districts willing to form regional consortiums or partner with universities are in a promising position to develop models that serve students in intense, targeted summer programs that provide advanced instruction, giving Black rural students opportunities to be exposed to university life and engage with peers from other localities.
The state of Virginia offers summer residential programs on university campuses for gifted secondary students. In these environments, college faculty are often engaged as instructors and potential partners with the state-level accelerated programs. University partnerships help secure resources for professional learning and networking opportunities for educators working in rural areas while providing collaborative spaces for researchers to address the issues facing rural schools and educators (Superville, 2020). Given the dearth of information in the literature about families of Black rural students who are gifted and talented (identified and not identified), it is highly recommended that ethnographic studies of family impact on student achievement in rural communities also be conducted. Such research will extend the understanding of the historical role of the Black community and families in promoting student achievement in various contexts. Existing programs engage the Black community and families to expose their children to advanced coursework and support services. These programs vary, but most have a primary goal of preparing Black students for success in high school and college and closing the opportunity gaps that exist between Black students and their White peers in schools across the nation. Three of these programs are described below:

3. Tuskegee University (2020), in partnership with Verizon Communications, hosts Innovative Learning programs for minority males. These programs serve middle-school minority male students at several historically Black colleges and universities, including Tuskegee University in Alabama, through entrepreneurship and tech innovation courses during the summer, with ongoing support in the academic year. One of these programs' goals is to increase minority student participation in STEM-related coursework in preparation for college and careers in STEM areas.

These program models provide extended support for high-potential Black students, whose needs are often unmet in their schools and communities. Providing these services through university and community collaboration demonstrates the level of interest and concern Black universities and community leaders have for ensuring the success of Black students, who may not be adequately served in school district programs. It is highly recommended that educational leaders examine possibilities for replication of these programs in rural communities across the nation. The urgency for culturally competent teachers in all schools is greater now than ever. Teachers of Black students must engage in training that enables them to understand the daily challenges that students face and the systemic discrimination and personal prejudices that negatively affect the ability of Black students to reach their highest potential (Davis, 2019). In rural communities especially, where staffing is inherently challenged due to funding constraints and workload demands, educators must endorse culturally responsive policies and practices and display appropriate skills and dispositions to work effectively with Black gifted students. The literature does not presently provide examples of districts that are successfully integrating culturally responsive practices in gifted education programs. As these models are developed, replication of these best practices is recommended in rural communities serving Black gifted students (Floyd et al., 2008).
Effective teachers of culturally different students understand and respect cultural differences and have a high degree of tolerance and respect for the behavioral characteristics of Black gifted and talented students, which often do not fit traditional conceptions of what it means to be gifted or talented (Davis, 2019; Ford, 2011, 2013). As has been discussed, poverty adds another layer of complexity to the problems facing rural students and their families. African American children in the rural South have borne a disproportionate share of the burden of poverty in America for decades. A more thorough examination of how poverty impacts the lives and opportunities of Black gifted students is recommended. While the overall rate of rural poverty is higher than that of urban poverty, the difference between rural and urban poverty rates varies significantly across regions. Neither genes nor zip code is cause for inequitable treatment or for ignoring specific student needs (Ford, 2013).

Summary and Conclusion

Immediate attention is needed to fully understand and address the unique cultural, intellectual, psychosocial, and academic needs of Black gifted students who live in rural American communities. Given the 50-plus years of research and attention to the needs of intellectually gifted students in this nation, the fact that the needs of a sizable population of gifted and talented students, particularly students from rural areas, are almost completely absent from the literature is unacceptable. Due to this absence, very little is known about the most effective practices that would address the complex, intersectional, affective, and intellectual needs of Black gifted and talented students who live in isolated rural areas across the nation. From what has been reviewed, ironically, even with the uniqueness of their geographic communities, Black gifted and talented students in rural areas have more similarities with than differences from those in our nation's urban centers. This article shares a glimpse of the barriers, challenges, and unique facilitators of talent that exist for this special population of gifted and talented students. A targeted focus on cultural competency training for educators, increased funding for sophisticated technologies, and recruitment of highly qualified Black teachers are of critical importance. Inclusion and application of these practices will ensure that Black rural gifted students have access to the best curriculum experiences so they can be poised to compete with their academic, economic, and racial peers across regional groups. The fact that so many challenges in equitable identification and access to opportunities persist in the twenty-first century is telling of a field that has not dedicated itself to fully seeking out talent in all communities. The material presented in this article makes a strong case for a much-needed research platform, improved practices, and funding to provide services for this unique population of students: Black gifted and talented students from rural communities. Concomitantly, as programmatic responses to specific student needs are generated, we suggest that the most productive innovations in the field of gifted education will come when the intellectual and psychosocial support needs of all populations are fully considered and strategically addressed on a wide scale.
Rural communities comprise a substantive and geographically important set of students; to dismiss their importance because of their racial makeup or geographic location is unethical, to say the least. Giftedness in small, isolated rural communities that is properly discovered and nurtured may yield innovative solutions to our society's most complex problems. Providing support for research and the development of comprehensive best practices that can be replicated nationwide and that specifically target Black gifted students holds promise for better outcomes, not just for the Black community but for all who may benefit from these students' productive contributions. To say that the research on comprehensive best practices for rural Black gifted students is limited is an understatement. Black students with gifted and talented potential exist in all communities. These students, their families, communities, and the educators responsible for their futures need support and guidance to develop exemplary models that can be replicated in rurally isolated schools across the nation. Perhaps the limited number of students in sparsely populated rural communities is seen as a rationale for overlooking this population. However, the number of students should be of no consequence to the educational policy, research, and practitioner community. The loss of even one gifted and talented mind is too much for any community, our nation, and our global community.
Splenectomy improves erythrocyte functionality in spherocytosis based on septin abundance, but not maturation defects

Key Points
• Splenectomy limits RBC morphology and functionality defects according to the septin content but does not improve maturation defects.
• The increased septin content in patient RBCs might result from maturation defects and could affect RBC membrane properties.

Introduction

The cytoskeleton is one of the main red blood cell (RBC) constituents involved in deformability and functionality. 1 It is composed of a meshwork of spectrin (SPT) tetramers linked to the lipid bilayer by 2 nonredundant anchorage complexes based on 4.1R and ankyrin (ANK1) proteins. 2 The 4.1R anchorage complexes allow for SPT horizontal linkages, whereas the ANK1 anchorage complexes ensure most of the vertical linkages between the SPT meshwork and the lipid bilayer. 2 Upon RBC deformation, a transient rise of intracellular calcium activates Gardos channels, leading to cell dehydration, and favors local uncoupling between the membrane and the cytoskeleton. 3-6 During the RBC's normal lifespan of 120 days, the RBC membrane elasticity is strained ~12,000 times upon its passage from the splenic cords to the venous sinuses demarcated by a discontinuous endothelium. Defective or old RBCs have difficulty managing this passage and remain blocked in the cords, where they are phagocytosed. 7-10

Spherocytosis is the most common cause of chronic hemolytic anemia due to a red cell membrane defect. It is characterized by a broad spectrum of clinical severity, from mild (~20%, nearly asymptomatic) to moderate (~75%, possible intermittent need for transfusions) to severe (~5%, life-threatening and transfusion-dependent anemia). 8,11,12 In North America and Europe, it can affect 1 in 2,000 to 5,000 children. Spherocytosis results in ~75% of cases from autosomal dominant inheritance and in ~15% from autosomal recessive inheritance. De novo mutations are rare. Mutations predominantly affect the genes encoding ANK1, Band3, α- and β-SPT (SPTB), and 4.2R. 13 Those mutations cause the weakening of the vertical linkages between the cytoskeleton and the lipid bilayer and thereby its destabilization. In consequence, there is premature clearance of RBCs in the spleen, leading, in the worst cases, to anemia, jaundice, splenomegaly, and cholelithiasis. 14,15 Besides transfusion and erythropoietin treatment, 14 splenectomy represents a therapeutic intervention for hereditary spherocytosis. 16 It might be the only intervention granting long-term relief from symptoms, allowing for the reduction of RBC destruction and the preservation of less deformable but still functional RBCs. Thus, after splenectomy, the Hb concentration almost always increases, reticulocytes decrease, bilirubin levels return to normal, and RBCs exhibit a relatively normal life span. 16 This favorable response led to the recommendation of splenectomy even for patients with spherocytosis with moderate degrees of hemolysis, based on quality of life and spleen size.
17,18 Nevertheless, it is largely unknown whether and to what extent splenectomy is able to improve RBC morphology and functionality as well as the RBC maturation process. Enucleation represents the final step of the RBC maturation process, resulting in the formation of a reticulocyte and the release of a pyrenocyte. The reticulocyte is then released from the bone marrow and undergoes its terminal differentiation in the circulating blood. Enucleation is a complex process involving multiple steps, including (1) cell polarization thanks to microtubules, (2) formation of a contractile actomyosin ring similar to cytokinesis but in an asymmetric manner, and (3) vesicle trafficking that creates an asymmetric protein distribution. 19 Protein sorting might involve the RBC cytoskeleton proteins, as revealed by the aberrant protein sorting in mice lacking ANK or 4.1R proteins. 20 Nevertheless, cytoskeletal proteins involved in cytokinesis, cell polarity, and endo- and exocytosis, such as the small GTP-binding proteins septins, could also contribute to this. 21-24 Those septins have been shown to contribute to platelet morphology and functionality 25 but were not previously described in RBCs, to the best of our knowledge.

Thus, several years ago, we launched a study aiming at comparing RBCs from 13 splenectomized patients, 7 nonsplenectomized patients, and 10 matching healthy donors (including 1 splenectomized) for their morphological, functional, biophysical, and biomechanical properties and their potential improvement by splenectomy. This paper focuses on RBC morphological and functional properties in relation to cytoskeleton defects, evaluated as follows: (1) for RBC morphology, optical microscopy of living RBCs and scanning electron microscopy of fixed RBCs; (2) for RBC baseline characteristics, RBC distribution width and hemi-RBC area expressed in relation to RBC volume; (3) for RBC functionality, osmotic fragility through Hb release and cryohemolysis (a fragility parameter independent of surface-to-volume ratio but dependent on the molecular defect 26), extracellular vesicle (EV) release, as well as intracellular calcium and ATP contents; and (4) for RBC cytoskeleton, SPT confocal imaging and transmission electron microscopy. 27,28 In addition, we performed a comprehensive proteomic study, which revealed the presence of septins in diseased RBCs.

Blood collection and preparation

The study (B403201316580) was approved by the medical ethics committee of the UCLouvain (Brussels, Belgium), and all donors gave written informed consent. The study includes 17 different patients, of whom 3 were analyzed both before and after splenectomy, allowing for the comparison of a cohort of 13 splenectomized patients vs a cohort of 7 nonsplenectomized patients. These patients were analyzed, as far as possible, every year during their annual clinical checkups for 2 to 7 years and were compared with gender-matched healthy volunteers (6 women and 4 men aged 20-49 years old), including 1 splenectomized adult donor. Nevertheless, because patient blood samples were collected for the study during their annual hospital appointments, not all patients could be included in all types of experiments. Blood was collected by venipuncture into K+/EDTA-coated tubes. Before experiments, blood was diluted 10-fold in the adapted experimental medium (Dulbecco's Modified Eagle Medium [Invitrogen] or 1.8 mM calcium-containing homemade medium), and RBCs were washed as described previously. 27
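The assay-specific sections that follow give the individual protocols. As a bridge, the sketch below shows how one of the functionality readouts listed above, osmotic fragility through Hb release, is conventionally quantified: hemoglobin released into the supernatant at each NaCl concentration is expressed relative to complete lysis in water. The numbers and the readout wavelength are hypothetical placeholders, not the authors' protocol.

```python
# Illustrative sketch (not the authors' exact protocol): osmotic fragility is
# typically expressed as the fraction of hemoglobin released at each NaCl
# concentration, relative to complete lysis in water. Values are hypothetical.

nacl_mM = [150, 100, 80, 60, 40, 0]          # osmotic gradient; 0 mM = full lysis
a540 = [0.05, 0.10, 0.35, 0.80, 1.10, 1.20]  # supernatant absorbance at 540 nm

a_blank, a_full = a540[0], a540[-1]          # isotonic background and 100% lysis
for c, a in zip(nacl_mM, a540):
    hemolysis = 100 * (a - a_blank) / (a_full - a_blank)
    print(f"{c:>4} mM NaCl: {hemolysis:5.1f}% hemolysis")
```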
RBC functionality alteration score

A score reflecting the extent to which the patient RBCs were affected by the disease was determined and validated as explained in the supplemental Information.

Isolation and analysis of EVs

EV abundance in plasma samples was analyzed via Nanoparticle Tracking Analysis with the ZetaView (Particle Metrix) as described previously. 27 Data were expressed as percentages of control values.

Intracellular calcium

The calcium content of washed RBCs was measured, as described previously, 27,29 using fluorimetry (GloMax; Promega) at λexc of 490 nm and λem of 520 nm, and data were normalized to the global Hb content determined by spectrophotometry. Data were finally expressed as percentages of control values.

Proteomic analysis

RBC ghosts were prepared and samples further processed for mass spectrometry and relative quantification via tandem mass tag labeling, as described elsewhere. 28 This approach was performed in 3 nonsplenectomized vs 9 splenectomized patients, and the statistical analysis was therefore adapted, as explained in the supplemental Information.

Intracellular ATP

ATP was determined with a chemiluminescence assay kit (Abcam), as described previously, 27,29 normalized to the Hb content, and expressed as a percentage of control.

Western blotting

RBC ghosts were prepared as described previously, 28 mixed with 2% Tris-buffered saline (TBS) sample buffer (0.25 M Tris-HCl, pH 6.8, 10% sodium dodecyl sulfate, 20% glycerol, and 0.005% bromophenol blue) containing 5 mM dithiothreitol, and boiled for 5 minutes. Western blotting was performed as described previously, 30 except that 30 μg of proteins were loaded onto a 3% to 10% sodium dodecyl sulfate-polyacrylamide gel, and membranes were incubated overnight with a septin-2 mouse monoclonal or a septin-7 rabbit polyclonal antibody (ProteinTech) in TBS with Tween 20 and 5% milk. After visualization, antibodies were removed from membranes using the Reblot Plus Strong Antibody Stripping Solution 10× (Merck Millipore), and membranes were incubated overnight with rabbit (Merck Millipore) or mouse (Invitrogen) glyceraldehyde-3-phosphate dehydrogenase antibody in TBS with Tween 20 and 5% milk. Data were expressed as percentages of control values.

Immunofluorescence

SPT immunolabeling was performed as described previously. 27,28 For septin-2 immunolabeling, RBCs and K562 erythroleukemic cells were spread onto poly-L-lysine-coated coverslips, fixed/permeabilized in ice-cold methanol for 1 minute at −20 °C and then in 3% Triton X-100 for 5 minutes, and blocked with 4% bovine serum albumin and 0.05% Tween 20 in phosphate-buffered saline for 1 hour at room temperature. Cells were immunolabeled for 2 hours at room temperature with the septin-2 antibody, washed, and incubated with the Alexa Fluor 488-coupled secondary antibody for 1 hour in the dark. All coverslips were then mounted overnight with Dako on SuperFrost slides and visualized using Zeiss LSM980 (SPT and septin) or COSD confocal microscopes (SPT) with a plan-Apochromat 63× NA 1.4 oil immersion objective. The same illumination settings were used for all samples from the same experiment. SPT data in patients were expressed as a percentage of the corresponding controls analyzed in the same conditions.
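The calcium and ATP readouts above share the same two-step normalization: divide the raw signal by the sample's Hb content, then express the result relative to matched healthy controls. The sketch below is a minimal illustration of that arithmetic with hypothetical numbers; it is not tied to the GloMax or Abcam kit outputs.

```python
# Minimal sketch of the normalization described above: signal per unit of Hb,
# then expressed as a percentage of the mean of matched healthy controls.
# All values are hypothetical placeholders.

def hb_normalized(signal, hb):
    """Return the raw assay signal divided by the hemoglobin content."""
    return signal / hb

patient = hb_normalized(signal=4200.0, hb=2.1)                  # e.g., fluorescence units, g Hb
controls = [hb_normalized(3000.0, 2.0), hb_normalized(3200.0, 2.2)]

percent_of_control = 100 * patient / (sum(controls) / len(controls))
print(f"{percent_of_control:.0f}% of control")
```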
Vital imaging of ceramide, mitochondria, and lysosomes

RBCs were labeled with BODIPY-ceramide, as described, 27,29,31 and with MitoTracker 31 or LysoTracker (Invitrogen) for 5 minutes at 50 nM. Coverslips were then placed upside down in LabTek chambers filled with medium (still containing the LysoTracker) and directly observed with a wide-field fluorescence microscope (Observer.Z1). The proportion of RBCs with MitoTracker- or LysoTracker-positive patches in control samples was subtracted from patient samples. Further information can be found in the supplemental Material.

Overview of patients included in the study

The patients presented mutations in the SPTB or ANK1 gene, as revealed via next-generation sequencing. The splenectomized cohort included 13 patients (closed and semiclosed symbols in graphs). The nonsplenectomized group (open symbols) included (1) 2 patients (P13 and P21) who were compared with splenectomized patients from the same family (P12 and P19; semiclosed symbols), (2) 3 patients (P20, P24, and P25) without other family members for comparison, and (3) 3 patients followed before and after splenectomy (P1, P7, and P13; Table 1 describes the patients' baseline characteristics; supplemental Table 1 describes the cohort baseline characteristics). The patients were compared with healthy control donors, who were gender-matched in 90% of the cases.

Splenectomy reestablishes the RBC distribution width and the blood parameters but only partially improves RBC morphology and area-to-mean corpuscular volume ratio

As expected, RBC morphology was impaired in nonsplenectomized patients, as reflected by a twofold decrease in the proportion of discocytes in favor of spherocytes and stomatocytes as well as spiculated echinocytes and acanthocytes (Figure 1A-D; see supplemental Figure 1A for RBC classification). Nonsplenectomized patients also showed increased RBC distribution width, which can be linked not only to the decrease of the hemi-RBC area and surface-to-volume ratio (Figure 1E,G) but also, and especially, to the high proportion of reticulocytes (supplemental Figure 1E) and their bigger size compared with healthy donor RBCs. This is further supported by the direct comparison of RBCs from nonsplenectomized P7 with splenectomized P8 from the same family, showing a strong increase in RBC distribution width and reticulocyte count but a less marked effect on RBC biconcavity (Figure 1E; supplemental Figure 1B,E). All RBC morphology changes were partially restored by splenectomy, except the proportion of discocytes, which remained lower than in healthy control donors, mainly because of the remaining spherocyte population (Figure 1A-D,E-G). Blood bilirubin and Hb content as well as the reticulocyte count were also impaired in spherocytosis and significantly improved after splenectomy (supplemental Figure 1C-E). In contrast, the RBC count, the mean corpuscular volume, and the mean corpuscular Hb concentration were in the normal range for both cohorts (supplemental Figure 1F-H).
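For reference, RBC distribution width (RDW) as reported by hematology counters is conventionally the coefficient of variation of the RBC volume distribution (standard deviation divided by the mean corpuscular volume, times 100). The sketch below illustrates that definition with hypothetical volumes; the article's RDW values come from clinical counts, not from this calculation.

```python
# Sketch of the conventional RDW definition: coefficient of variation of the
# RBC volume distribution (SD / MCV x 100). Volumes below are hypothetical.

from statistics import mean, stdev

rbc_volumes_fl = [78, 85, 90, 92, 95, 99, 104, 110, 118, 126]  # femtoliters

mcv = mean(rbc_volumes_fl)
rdw = 100 * stdev(rbc_volumes_fl) / mcv
print(f"MCV = {mcv:.0f} fL, RDW = {rdw:.1f}%")
```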
Splenectomy decreases RBC osmotic fragility, EV release, and calcium accumulation but does not ameliorate cryohemolysis

Because spherocytes are generally associated with increased RBC fragility and EV release, 32,33 these parameters were then evaluated. The higher RBC osmotic fragility observed in the nonsplenectomized cohort was only partially corrected via splenectomy (Figure 1H), consistent with the remaining proportion of spherocytes. Cryohemolysis was also more pronounced in nonsplenectomized patients, but in contrast to osmotic fragility, it was not reestablished via splenectomy (Figure 1I). The abundance of EVs in the plasma of nonsplenectomized patients was higher than in splenectomized patients or control donors (Figure 1J). The intracellular calcium level was also heightened in nonsplenectomized patients and reduced after splenectomy (Figure 1K). This increase did not result from a limitation in intracellular ATP content, which was instead even higher before splenectomy (Figure 1L).

Splenectomy limits but does not prevent alterations in RBC morphology and functionality

Based on the aforementioned altered clinical and laboratory features affecting RBC functionality (Figure 1A-L; supplemental Figure 1C,E,H), we established an RBC alteration score ranging from 0 for nonaffected patients to 1 for the most-affected patients. Although splenectomy strongly and significantly improved this score, it did not allow for the recovery of normal RBC morphology and functionality. This was specifically reflected by a similar score between the most-affected splenectomized patients and the less-affected nonsplenectomized ones (Figure 1M,N). It should be noted that this score did not simply reflect the abundance of reticulocytes (supplemental Figure 1I,J).

The extent of RBC alteration negatively correlates with the ANK1 content, which, in turn, correlates negatively with SPT distribution

To address the reason for the partial effectiveness of splenectomy, we performed a quantitative proteomic analysis on RBC ghosts from the 2 cohorts and the matched controls. Comparison between the SPTB- and ANK1-mutated groups indicated that the ANK1 level was, as expected, lower in the latter group, but no difference was observed for SPTB levels (Figure 2A-B). However, SPTB and ANK1 levels were similarly reduced in the nonsplenectomized and splenectomized cohorts when compared with healthy donors (Figure 2C-D). We then evaluated the extent and distribution of SPT membrane occupation by immunolabeling and confocal microscopy, while validating the data for some patients via transmission electron microscopy and including 2-week-old RBCs as a positive control for SPT cytoskeleton densification (supplemental Figure 2A). Data revealed the occasional presence of SPT-enriched patches and vesicles, reflected in an increased heterogeneity of SPT membrane distribution, particularly visible in the ANK1-mutated group, which also exhibited a denser membrane SPT coverage (Figure 2E-G; see supplemental Figure 2B for electron microscopy images). In contrast, the SPT membrane occupation and distribution in the nonsplenectomized and splenectomized cohorts were not significantly different from those in controls (Figure 2H-I). Surprisingly, no correlation was found between SPTB content and membrane distribution (data not shown). Only the ANK1 content was inversely correlated with SPT variance and alteration of RBC functionality (Figure 2J-M), leading us to suggest that other proteins could also contribute to the phenotype of spherocytosis.
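The "SPT variance" referred to above is a dispersion measure over the membrane SPT signal. As an assumption-laden illustration (not the authors' image-analysis pipeline), the sketch below summarizes per-pixel intensity heterogeneity with a variance and a coefficient of variation for a uniform versus a patchy signal.

```python
# Illustrative sketch only: heterogeneity of SPT membrane distribution expressed
# as the variance and coefficient of variation of per-pixel membrane intensities.
# Intensities below are hypothetical.

from statistics import mean, pvariance, pstdev

control_pixels = [100, 105, 98, 102, 101, 99, 103, 97]   # fairly uniform SPT signal
patient_pixels = [60, 150, 90, 210, 75, 180, 95, 55]     # patchy SPT signal

for label, pixels in (("control", control_pixels), ("patient", patient_pixels)):
    cv = 100 * pstdev(pixels) / mean(pixels)
    print(f"{label}: mean = {mean(pixels):.0f}, variance = {pvariance(pixels):.0f}, CV = {cv:.0f}%")
```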
Septin association with the membrane is increased in spherocytosis and partially restored upon splenectomy

Mass spectrometry revealed the change of 2 cytoskeletal protein classes upon splenectomy: the tubulins (TUBA and TUBB) together with the regulator of microtubule dynamics protein 3 (RMDN3), and the small GTP-binding proteins, the septins. Although the effect of splenectomy on tubulin abundance was isoform-dependent, the abundance of septins was systematically decreased (Figure 3). We therefore analyzed the latter in detail.

We first confirmed via western blotting the increased membrane association of septin-2 and -7 in patient RBCs (Figure 4A). Quantification even revealed a greater increase in septins than observed via proteomics (Figure 4B-C) and a clear decrease of both septin-2 and -7 upon splenectomy, as revealed by the comparison of P13 before and just after splenectomy (Figure 4D-F). Moreover, RBC immunofluorescence indicated that septin-2 formed a heterogeneous pattern with submicrometric assemblies spread over the cell surface, clearly visible in the nonsplenectomized patients but also present, to a lower extent, in splenectomized patients (Figure 4G). In contrast, the signal was barely detectable in healthy mature RBCs but substantially present in K562 erythroleukemic cells used as positive controls, confirming the presence of septins in erythroid precursors. Because the septin-2 network was not restricted to some cells of the patient and no correlation could be detected between the septin content and the reticulocyte or the transferrin receptor (TfR) content (Figures 4H-I and 6K), one can reasonably exclude that septins were exclusively associated with reticulocytes.

Distinct septins correlate with different RBC morphology and functionality parameters

We then asked whether septins could explain the alterations of the RBC cytoskeleton and the alteration score. We found an inverse correlation between ANK1 and septin-2 and -7 contents (Figure 5A). This led us to determine septin/ANK1 ratios for further analyses and thereby to consider both cytoskeletal protein groups. Thus, correlations of SPT coverage, SPT heterogeneity, and RBC alteration score were stronger with the septin-2 and -7/ANK1 ratios than with the ANK1 content itself (compare Figures 5B-D and 2J-L). The RBC alteration score also slightly correlated with the septin-8/ANK1 but not with the septin-11/ANK1 ratio (Figure 5D). This may be explained by the lesser decrease in membrane association of the latter septin upon splenectomy (Figure 3J), which itself may result from the differential dynamics and actin/microtubule association of septin-11 compared with the others.34 A closer look at RBC morphology and functionality-related parameters revealed that, among the different septins, septin-11 showed the best correlation with spherocytes (Figure 5E). Higher septin-2 and -7/ANK1 ratios correlated well with a reduced RBC surface-to-volume ratio, increased osmotic fragility, and a rise in intracellular calcium, whereas the septin-2/ANK1 ratio correlated with EV release (Figure 5G-H,J-K). In contrast, no septin appeared to correlate with stomatocyte abundance or cryohemolysis (Figure 5F,I). All these data suggested the differential control of RBC parameters by different septins.
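The correlation screen above (a ratio of two protein abundances regressed against per-patient RBC parameters, with regression lines reported only when r2 > 0.5) is easy to replicate. A hedged sketch with made-up abundances for eight patients:

```python
import numpy as np
from scipy import stats

# Hypothetical ghost-membrane abundances (arbitrary units) for 8 patients.
septin2 = np.array([3.1, 2.8, 4.0, 1.9, 2.2, 3.5, 1.4, 2.9])
ank1 = np.array([0.55, 0.60, 0.40, 0.80, 0.75, 0.50, 0.95, 0.58])
alteration = np.array([0.62, 0.55, 0.78, 0.35, 0.40, 0.70, 0.22, 0.57])

ratio = septin2 / ank1  # septin-2/ANK1 ratio, as in Figure 5
res = stats.linregress(ratio, alteration)
if res.rvalue**2 > 0.5:  # report the regression only when r^2 > 0.5
    print(f"septin-2/ANK1 vs alteration score: r2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3f}")
```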
ER proteins decrease upon splenectomy, whereas lysosomal and mitochondrial proteins and remnants increase

We finally explored whether the septin increase could result from RBC maturation defects. In RBCs from nonsplenectomized patients, proteomic analyses revealed a particular increase in proteins implicated in protein synthesis and folding in the endoplasmic reticulum (ER) and in endocytosis (see Figure 6A for a selected ER protein; see supplemental Figure 3A-B,F-G for volcano plots and other examples) but no major differences in nuclei- and secretory pathway-associated proteins (supplemental Figure 3C-D). In contrast, an increase in lysosomal and mitochondrial proteins was observed in splenectomized patients (see Figure 6B-C for selected proteins; see supplemental Figure 3B,E,H-K for volcano plots and other examples).

RBC maturation defects

To determine whether the splenectomy-based differential protein enrichment could result from the differential presence of ER, lysosomal, and/or mitochondrial fragments, RBCs were labeled with a fluorescent ceramide analog (enriched in the ER-Golgi and mitochondria), a LysoTracker, and a MitoTracker, respectively. In RBCs from nonsplenectomized patients, a network-like structure reminiscent of the ER was observed, but almost no LysoTracker- and MitoTracker-positive structures could be detected. The opposite was observed in splenectomized patients, who had patches enriched in LysoTracker, MitoTracker, and ceramide (Figure 6D-F). RBCs from a healthy splenectomized donor also showed a slight increase in lysosomal and mitochondrial proteins as well as in MitoTracker-positive patches, contrasting with a slight decrease in ER-ribosomal proteins (red dotted line; Figure 6).

Moreover, decreased membrane association of protein 4.2 and Rh-associated glycoprotein, both shown to be degraded before enucleation upon severe ANK1 deficiency,35 was found in all patients (Figure 6G-H). In contrast, translocator protein and voltage-dependent anion channels, which play an active role in mitophagy throughout human erythropoiesis,36 were particularly increased in splenectomized patients. This contrasted with increased TfR and dynamin 2 in nonsplenectomized patients (Figure 6K-L). These data suggested RBC maturation defects, potentially explaining the increased septin abundance.
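The volcano-plot comparisons referenced above reduce to a per-protein fold change between cohorts plus a significance test. A schematic sketch with simulated intensities (matrix sizes, distributions, test choice, and thresholds are all illustrative here, not the study's actual proteomics pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated ghost-membrane intensities: rows = 100 proteins,
# columns = 4 nonsplenectomized vs 10 splenectomized patients.
nonspl = rng.lognormal(mean=1.0, sigma=0.3, size=(100, 4))
spl = rng.lognormal(mean=1.1, sigma=0.3, size=(100, 10))

log_fc = np.log2(spl.mean(axis=1) / nonspl.mean(axis=1))  # >0: higher after splenectomy
p_vals = np.array([stats.mannwhitneyu(spl[i], nonspl[i]).pvalue for i in range(100)])

# Candidate splenectomy-enriched proteins (e.g. lysosomal/mitochondrial markers)
hits = np.flatnonzero((log_fc > 0) & (p_vals < 0.05))
print(f"{hits.size} proteins more membrane-associated in splenectomized RBC ghosts")
```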
Main findings

Although other studies have focused on morphological and, less frequently, functional alterations of RBCs in hereditary spherocytosis, our study combined biochemical, proteomic, and imaging approaches to investigate a series of RBC functionality-related parameters in a cohort of patients, splenectomized or not. We revealed that splenectomy limited but did not prevent alterations in RBC morphology and functionality. Moreover, septins were increased in patient RBCs and correlated with RBC alteration. This increase might have resulted from RBC maturation defects. These findings are discussed below and summarized in supplemental Figure 4.

Study limitations

To determine the effect of splenectomy on RBC morphology and functionality, patients should ideally have been compared before and after splenectomy. However, as most patients were splenectomized before the study was launched, this approach could only be applied to P1, P7, and P13 for some parameters. Therefore, an entire cohort of 4 nonsplenectomized and 10 splenectomized patients was included for comparison. Because biological and RBC fragility parameters measured in P1 and P7 before and after splenectomy showed the same evolution as those in the whole patient cohort, we were confident about the data obtained. Comparison of patient baseline characteristics indicated that the nonsplenectomized group mainly includes men and patients with ANK1 mutations, whereas the splenectomized group shows a higher proportion of SPTB mutations and, comparatively, a wider age range. This does not mean that the 2 groups are not comparable. Indeed, as hereditary spherocytosis is a very heterogeneous disease that can show different outcomes even in family members with the same underlying genetic defect, having perfectly balanced groups might not be an advantage and might not even be achievable. In addition, no significant difference in phenotypes was observed between patients with variants in ANK1 vs SPTB.37 Furthermore, despite important intragroup heterogeneity regarding gender, age, and mutations, splenectomized patients were nevertheless highly comparable for all evaluated parameters.

Differential impact of splenectomy on RBC morphology and functionality-related parameters

Mutations were detected in the ANK1 or SPTB genes, reported to account for ~50% and ~20% to 30% of all spherocytosis cases, respectively.38 In agreement with the fact that mutations in these 2 genes are associated with moderate and severe forms of the disease,32 and because splenectomy appears to be more beneficial for these patients compared with patients with mutations of the SLC4A1 gene,33 most patients included in our study are splenectomized. In agreement with data from previous studies,17,32,39-42 the RBC distribution width, Hb levels, and RBC and reticulocyte counts were improved after splenectomy, whereas the decrease in spherocytes was only partial and concerned the most microcytic ones. The RBC osmotic fragility was also reduced upon splenectomy, in agreement with the improved surface-to-volume ratio. In contrast, cryohemolysis was not restored. Because cryohemolysis is independent of the surface-to-volume ratio but dependent on molecular defects,26 and because the molecular defect is not restored by splenectomy, cryohemolysis may remain unchanged as well.
We also revealed that splenectomy partially restored the increased intracellular calcium levels observed in nonsplenectomized patients. The latter observation is consistent with those in previous studies,43,44 but the underlying mechanism is still debated. Some studies propose a reduced activity of the plasma membrane Ca2+ ATPase pump,44 whereas others suggest the contribution of the higher reticulocyte count, especially in nonsplenectomized patients.45 We showed here that the increased calcium levels were accompanied by a higher than usual intracellular ATP content, which could represent a compensation mechanism to avoid extensive calcium accumulation but also to maintain sodium and potassium homeostasis, because membrane permeability for these cations might be increased in spherocytosis.46,47 Such improvements in RBC morphology and functionality because of splenectomy could have resulted from the longer RBC lifetime induced by splenectomy, which in turn prevents stress-induced erythropoiesis.48

Septins as potential new key players in spherocytosis

As expected, splenectomy was not able to restore the ANK1 and SPTB deficiencies. More surprisingly, these protein deficiencies were not accompanied by significant alterations of membrane SPT occupation or distribution. This observation might partially rely on the fact that SPT immunolabeling was performed with an antibody directed against both SPTB and SPTA, potentially hiding alterations resulting from SPTB deficiency, combined with the calculation of the mean SPT occupation for the whole cohorts, whatever the mutation.

The idea of assessing the molecular defect to determine the clinical severity in hereditary spherocytosis is not new.49 We showed here that ANK1 levels were negatively correlated with the SPT variance and the RBC alteration score. It might seem a little surprising that such a correlation could be established for the whole patient cohort. However, molecular defects in SPTB are often accompanied by ANK1 deficiencies in hereditary spherocytosis.8 An even better correlation could be demonstrated between the RBC alteration and the septin/ANK1 ratios. Except for the abundance of spherocytes, not all septins correlated with the same RBC parameters. Thus, septin-2 and -7 both correlated with cytoskeleton and functionality parameters, but only septin-2 correlated with EV release, and septin-11 with spherocyte abundance. Those differential implications might be related to the fact that, in contrast to septins-2 and -7, septins-8 and -11 are constitutively bound to GTP and do not have a polybasic domain.21,22 To the best of our knowledge, we were the first to detect septins in mature RBCs. Nevertheless, septins have been recently described in platelets, contributing to the morphology and functionality of those enucleated cells.25
Upon splenectomy, septin abundance was reduced, as revealed by the comparison of the nonsplenectomized and splenectomized patients and confirmed in 1 patient before and after splenectomy. This observation matched with reduced levels of TUBA1B, an identified partner of septin-2 and -11 in endothelial cells,21 and with the decrease of ER and ribosomal proteins. Those septins did not simply result from the presence of reticulocytes, because the septin network was detected in all RBCs of nonsplenectomized patients, whereas their reticulocyte count was <10%. Moreover, the septin content did not correlate with the proportion of reticulocytes, nor did it correlate with the TfR content, a marker of reticulocytes. Both the disease itself and splenectomy might contribute to the differential contents in ER fragments and septins. However, notice that septins-2 and -7 can also interact with mitochondria and are implicated in the endolysosomal pathway.22 Because mitochondrial and lysosomal fragments and associated proteins increased after splenectomy, the decrease in septin levels resulting from the loss of ER could be counterbalanced by the increase in mitochondrial/lysosomal fragments. Whatever its origin, septin-2 was found under the RBC surface, suggesting it might be present in spherocytosis RBCs independently of organelles. Accordingly, Kim et al recently showed that septins-2 and -9 seemingly interact with the platelet cytoskeleton to control their shape.25

RBC maturation defect in spherocytosis

Proteomic and microscopy approaches revealed that RBCs from nonsplenectomized patients exhibited increased amounts of proteins associated with protein synthesis and folding, endocytosis, and mitochondria, but also lysosomes, although for the latter to a lesser extent when compared with RBCs from splenectomized patients. After splenectomy, only the ER-ribosomal and endocytosis proteins returned to normal values, in contrast to the mitochondrial/lysosomal ones. Moreover, we found decreased membrane association of protein 4.2 and Rh-associated glycoprotein, both previously shown to be degraded before enucleation upon severe ANK1 deficiency.35 Combined with the fact that the content of those proteins negatively correlated with the septin/ANK1 content, we therefore propose that RBCs from spherocytosis patients suffered from maturation defects already at the erythroblast stage, at least in the more severely affected patients, and that splenectomy contributed to the maturation impairment. Accordingly, translocator protein and voltage-dependent anion channels, which play active roles in mitophagy throughout human erythropoiesis,36 were especially increased in splenectomized patients. In contrast, higher membrane association of endocytosis proteins was seen in nonsplenectomized patients. Although the reticulocyte maturation mechanism is still not fully elucidated, it has been proposed that autophagy and exocytosis collaborate during reticulocyte maturation through the fusion of endosomes with autophagosomes containing mitochondria, Golgi, or lysosomes, which would then be eliminated by exocytosis. The spleen could facilitate the release of these vesicles, as large vacuoles were described in reticulocytes from splenectomized individuals.50 Accordingly, we found RBC maturation defects also in a healthy splenectomized donor, although to a lesser extent. The absence of spleen macrophages, known to clear inclusion bodies at the RBC surface, might be responsible for the retention of organelle-enriched vesicles.51
This is in line with the observation that the organelle-associated proteins could be detected by proteomics performed on RBC ghosts, suggesting a close proximity with the plasma membrane. This is further supported by the absence of an increase in nucleus-associated proteins, even though Howell-Jolly bodies are systematically detected in RBCs after splenectomy.52 Thus, both the disease itself and splenectomy seemed to induce RBC maturation defects. Hereditary spherocytosis is not the first anemia associated with RBC maturation failures, because RBCs with even functional mitochondria have been described in sickle cell disease.53

Conclusions

Although splenectomy represents the standard therapeutic treatment for patients with hereditary spherocytosis nowadays, it is accompanied by increased risks of infections and vascular events. Consequently, partial splenectomy or laparoscopic ligation of splenic vessels are proposed as alternatives to total splenectomy in order to maintain partial splenic activity, but long-term studies to determine clinical outcomes are lacking.54,55 We reported here, upon total spleen removal, a large but not complete restoration of RBC morphology and functionality, in accordance with the septin content. However, at the same time, the RBC maturation process was strongly affected, suggesting that administration of autophagy modulators could be beneficial for splenectomized patients, as described for mTOR (mammalian target of rapamycin) inhibition for anemia in β-thalassemia.56 Moreover, the absence of restoration of septin-independent parameters upon splenectomy could question its recommendation, especially for patients showing high cryohemolysis vs low osmotic fragility. Finally, septins could represent a new contributor to the pathophysiology of hereditary spherocytosis.

Table 1 legend: patients are associated with a number according to the order of inclusion in the study; family relationship (same color), gender, year of birth, mutated gene, and year of splenectomy are indicated. − indicates nonsplenectomized; +, splenectomized; F, female; M, male; NA, not applicable; ND, not determined.
Figure 1. Splenectomy partially reestablishes RBC baseline characteristics as well as RBC morphology and functionality-related parameters. (A-L) RBCs from patients (P; 1 color, 1 family), either splenectomized (spl; filled circles, or semifilled circles for intrafamily comparison) or not (nonspl; open circles), and RBCs from healthy controls (CTLs; light and dark blue dotted lines for child and adult donor ranges, respectively; or subtraction from P values [ΔP-CTL]) were compared for morphology (A-D), baseline characteristics (E-G), and functionality (H-L). (M-N) Based on these different parameters, an RBC alteration score was calculated. Statistics are indicated above the patient cohorts for the comparison with CTL values and above a horizontal line for the comparison between the 2 patient cohorts, respectively. (A-D) RBC morphology determined by electron or light microscopy on RBCs in suspension. The relative abundance of discocytes (A), spherocytes (B), stomatocytes (C), and echinocytes (D) was evaluated and expressed as a percentage of the global RBC population (mean of 1-5 independent experiments per patient; Kruskal-Wallis tests followed by Dunn post hoc for the comparison of the 3 cohorts). (E) RBC distribution width (mean of 1-7 independent measurements per patient; Mann-Whitney tests to compare the 2 patient cohorts). (F-G) Hemi-RBC membrane area and area-to-mean corpuscular volume (MCV) ratio. (F) Hemi-area of RBCs spread on poly-L-lysine (PLL)-coated coverslips. (G) Ratio of the values provided in panel F to the MCV provided in supplemental Figure 1G (mean of 4-23 independent measurements per patient for panel F; Kruskal-Wallis tests followed by Dunn post hoc for the comparison of the 3 cohorts). (H) RBC osmotic fragility determined in increasingly hypotonic media. The osmolarity required to lyse 50% of RBCs (half-maximal effective concentration [EC50]) was calculated using hemolysis curves (mean of 1-5 independent experiments per patient; Kruskal-Wallis test followed by Dunn post hoc). (I) RBC cryohemolysis (mean of 1-6 independent experiments per patient; Mann-Whitney test).
(J) EV abundance in plasma samples determined by nanoparticle tracking analysis (mean of 1-3 independent experiments per patient; Kruskal-Wallis test followed by Dunn post hoc). (K) Intracellular calcium content. RBCs were labeled with the nonfluorescent Fluo4-AM, which is transformed in RBCs into the fluorescent Fluo4 after de-esterification and interaction with calcium ions. Labeled RBCs were analyzed by fluorimetry, and data were normalized to the Hb content (mean of 1-11 independent experiments per patient; Kruskal-Wallis test followed by Dunn post hoc). (L) Intracellular ATP content determined with a kit based on the activity of the firefly luciferase in the presence of ATP and the light emitted in the presence of luciferin. ATP levels were normalized to Hb (mean of 1-8 independent experiments per patient; Kruskal-Wallis test followed by Dunn post hoc). (M) RBC morphology, functionality, and biological parameters considered to establish the RBC functionality alteration score. These parameters were associated with a scale ranging from 0 to 1 when the parameter was nearly unaffected for most patients, or from 0 up to a maximum of 8 when different degrees of alteration of a parameter were observed in the patient cohorts. The scores corresponding to the different parameters were then added, and the sum was divided by the maximal score that could have been obtained, to determine the global RBC functionality alteration score for each patient. The closer the score to 1, the more affected the RBCs by the disease. (N) RBC alteration score (Kruskal-Wallis tests followed by Dunn post hoc). MCHC, mean corpuscular Hb concentration; ns, not significant; RDW, red cell distribution width.

Figure 3. Among cytoskeletal and anchorage proteins, tubulins and septins are modified in content upon splenectomy. RBCs from patients, either spl (filled circles, or semifilled circles for intrafamily comparison) or nonspl (open circles), and RBCs from CTLs (dark blue dotted line, CTL ratio) or a healthy splenectomized adult donor (red dotted line) were assessed by differential quantitative mass spectrometry for cytoskeleton and anchorage complex protein membrane association. (A-B) Volcano plots of ghost membranes for cytoskeletal and anchorage complex proteins (extension of Figure 2A-B). Volcano plots show the log2 of the fold changes (logFC) and the adjusted P values associated with the splenectomy effect. Proteins showing a negative or a positive logFC have a lower or higher expression level in splenectomized patients, respectively. Proteins above the dotted line show a significant difference (P < .05) in the splenectomized patient cohort as compared with the nonsplenectomized one. (C-F) Tubulin α1b (TUBA1B), TUBB4B, TUBA4A, and regulator of microtubule dynamics protein 3 (RMDN3) membrane association. (G-J) Septin-2, -7, -8, and -11 ghost membrane association.
Figure 5. The septin/ANK1 ratio correlates with SPT covering/distribution and the RBC alteration score, and different septins differentially correlate with RBC morphology and functionality parameters. (A) Correlation between the septin and ANK1 contents presented in Figure 3G-J and Figure 2B. (B-D) Correlation between the septin/ANK1 ratios calculated from Figure 3G-J and the cytoskeletal parameters presented in Figure 2F-G or the RBC alteration score from Figure 1N. (E-K) Relation of the septin/ANK1 ratio with RBC morphology (E-F), surface-to-volume ratio (G), osmotic fragility (H), cryohemolysis (I), intracellular calcium content (J), and EV release (K). Data are respectively from Figure 1B-C,G,H-K. Correlations with septin-2, squares and blue linear regressions; with septin-7, circles and red linear regressions; with septin-8, triangles and green linear regressions; and with septin-11, inverted triangles and gray linear regressions. Linear regressions were plotted only for r2 > 0.5; when 1 parameter correlates with 1 septin but not with the others, only the correlating one is indicated for the sake of clarity.

Figure 6. The presence of ER proteins and remnants in RBCs of nonspl patients contrasts with lysosome and mitochondria proteins and remnants in RBCs of spl patients. RBCs from patients, either spl (filled circles, or semifilled circles for intrafamily comparison) or nonspl (open circles), and RBCs from CTLs (dark blue dotted line, CTL ratio, or ΔP-CTL) or a healthy splenectomized donor (red dotted line) were compared for membrane association of proteins of the ER-ribosomes (A), lysosomes (B), or mitochondria (C) and for the presence of organelle remnants (D-F). Scale bar represents 5 μm in panel D. Statistics are indicated above the patient cohorts for the comparison with CTL values and above a horizontal line for the comparison between the 2 patient cohorts, respectively. (A-C) Membrane ghost association of ribosomal protein S25 (RPS25), lysosomal-associated membrane protein 1 (LAMP1), and ATP synthase F1 subunit alpha (ATP5F1A), respectively enriched in ribosomes, lysosomes, and mitochondria, determined by proteomics (statistical analysis and additional examples in supplemental Figure 3). (D-F) Organelle labeling. RBCs spread on PLL-coated coverslips were labeled with BODIPY-ceramide (Cer), LysoTracker, or MitoTracker and observed by fluorescence microscopy (LysoTracker was maintained during observation). (D) Representative images. Open and filled arrowheads: patches and network-like structures, respectively. (E-F) Quantification of RBCs presenting LysoTracker- or MitoTracker-positive patches, expressed as a percentage of the total RBC population and then as ΔP-CTL (mean of 1-3 and 1-7 independent experiments per patient in panels E and F, respectively; Kruskal-Wallis tests followed by Dunn post hoc). (G-L) Membrane ghost association of band 4.2 (EPB42), Rh-associated glycoprotein (RhAG), voltage-dependent anion channel 1 (VDAC1), translocator protein (TSPO), transferrin receptor (TfR), and dynamin 2 (DNM2), respectively involved in cytoskeleton anchorage complexes (G-H), mitophagy (I-J), and endocytosis (K-L), determined by proteomics (for volcano plots, see supplemental Figure 3).

Table 1. Overview of patients included in the study.
Vector bundles over iterated suspensions of stunted real projective spaces

Let $X^k_{m,n}=\Sigma^k (\mathbb{R}\mathbb{P}^m/\mathbb{R}\mathbb{P}^n)$. In this note we completely determine the values of $k,m,n$ for which the total Stiefel-Whitney class $w(\xi)=1$ for any vector bundle $\xi$ over $X^k_{m,n}$.

Introduction

Recall (see [6]) that a CW-complex $X$ is said to be W-trivial if for every vector bundle $\xi$ over $X$ the total Stiefel-Whitney class $w(\xi) = 1$. A theorem of Atiyah-Hirzebruch ([1], Theorem 2) says that for any finite CW-complex $X$, the 9-fold suspension $\Sigma^9 X$ is W-trivial. In the same paper Atiyah-Hirzebruch showed ([1], Theorem 1) that the sphere $S^k$ is W-trivial if and only if $k = 1, 2, 4, 8$ (see also [5], Theorem 1). In view of the Atiyah-Hirzebruch theorem, it is interesting to understand whether or not the iterated suspension $\Sigma^k X$ of a finite CW-complex $X$ is W-trivial with $0 \le k \le 8$. In recent times there has been some interest in understanding W-triviality of iterated suspensions of spaces (see [6], [7], [8] and the references therein). In [7], the author has completely determined the values of $k$ and $n$ for which the iterated suspension $\Sigma^k \mathbb{F}P^n$ is W-trivial. Here $\mathbb{F}P^n$ denotes the projective space of 1-dimensional subspaces of $\mathbb{F}^{n+1}$, where $\mathbb{F}$ is the field $\mathbb{R}$ of reals, the field $\mathbb{C}$ of complex numbers, or the skew-field $\mathbb{H}$ of quaternions. In [6], the author has completely described the cases under which the stunted projective space $\mathbb{R}\mathbb{P}^m/\mathbb{R}\mathbb{P}^n$ is W-trivial. In [8], the second author has almost complete results concerning W-triviality of the iterated suspension $\Sigma^k D(m,n)$ of the Dold manifold $D(m,n)$.

Let $X_{m,n}$ denote the stunted projective space $\mathbb{R}\mathbb{P}^m/\mathbb{R}\mathbb{P}^n$ and let $X^k_{m,n}$ denote the $k$-fold suspension $\Sigma^k X_{m,n}$ of $X_{m,n}$. In this note we completely determine the values of $k, m, n$ for which $X^k_{m,n}$ is W-trivial. In view of the Atiyah-Hirzebruch theorem, we assume that $0 \le k \le 8$. Also note that the cases $X^k_{m,0} = \Sigma^k \mathbb{R}\mathbb{P}^m$ and $X^k_{m,m-1} = \Sigma^k S^m = S^{m+k}$ are completely understood. In the sequel we assume that $0 < n < m$ and hence, in particular, $m \ge 2$. Since the case $X_{m,n}$ is completely understood, we state our results for $X^k_{m,n}$ with $1 \le k \le 8$. The following statements completely describe the cases when $X^k_{m,n}$ is W-trivial.

Theorem 1.1. Let $X^k_{m,n}$ be as above with $0 < n < m$. $X^2_{8t+7,n}$ is W-trivial if $t \ge 1$ and $n \ge 2$. $X^2_{7,n}$ is W-trivial if and only if $n = 6$.

The proofs of the above theorems crucially make use of the computations of the KO-groups of the real projective spaces and the stunted real projective spaces. In the next section we first state some easy-to-verify general observations and then prove our main results.

Conventions. All references to cohomology groups will mean singular cohomology with $\mathbb{Z}_2$-coefficients. Given a map $\alpha : X \to Y$, the induced homomorphism in cohomology and KO-groups will again be denoted by $\alpha$.

2 Proof of the theorems

We begin by recording some important observations which will be crucial in the proofs of the main theorems.

Proposition 2.1. (1) $X^k_{m,n}$ is W-trivial if there does not exist an integer $s$ such that $n + k < 2^s \le m + k$.

Proof. (1) follows from the well-known fact that the first non-zero Stiefel-Whitney class of a vector bundle is in degree a power of 2.
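Observation (1) is a purely arithmetic criterion once one recalls that the reduced $\mathbb{Z}_2$-cohomology of $X^k_{m,n}$ is concentrated in degrees $n+k+1, \dots, m+k$. A small Python sketch of the criterion as reconstructed above (the function names and the exact form of the inequality are our reading of the garbled statement):

```python
def power_of_two_in_interval(low, high):
    """True when some power of 2 lies in the half-open interval (low, high]."""
    p = 1
    while p <= low:
        p *= 2
    return p <= high

def w_trivial_by_degree(k, m, n):
    """Sufficient condition of Proposition 2.1(1), as reconstructed:
    X^k_{m,n} is W-trivial if no power of 2 lies in (n + k, m + k],
    since a first non-zero Stiefel-Whitney class would have to sit in
    one of the degrees n + k + 1, ..., m + k and in a degree that is a
    power of 2."""
    return not power_of_two_in_interval(n + k, m + k)

print(w_trivial_by_degree(7, 5, 2))  # cohomology degrees 10..12: no power of 2 -> True
print(w_trivial_by_degree(2, 7, 5))  # degrees 8..9 contain 8 -> criterion gives False
```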
(2) follows from the fact that the obvious map $X^k_{m,n'} \to X^k_{m,n}$ induces an isomorphism in cohomology in degrees $i$ with $n' + k < i \le m + k$.

We note that if $m$ is odd, then we have a splitting of $X_{m,n}$, while for any $m$ there is a cofibration $X_{m-1,n} \to X_{m,n} \to S^m$. Given a sequence of integers $0 \le p < n < m$, the cofiber sequence
$$X_{n,p} \longrightarrow X_{m,p} \longrightarrow X_{m,n}$$
gives rise to an exact sequence of reduced KO-groups
$$KO^{-k}(X_{m,n}) \longrightarrow KO^{-k}(X_{m,p}) \longrightarrow KO^{-k}(X_{n,p}).$$
Our proofs, in many cases, involve analyzing the above exact sequence of KO-groups corresponding to a suitable choice of a cofiber sequence as above. We shall use the above observations implicitly in the sequel. We record here what is known about the W-triviality of $\Sigma^k \mathbb{R}\mathbb{P}^m$ and the stunted projective spaces $X_{m,n}$ for easy reference.

Proof of Theorem 1.1. Assume first that $k = 3, 5$. If $m + k = 8$ then, as $\Sigma^k \mathbb{R}\mathbb{P}^m$ is W-trivial, it follows from Proposition 2.1 (2) that $X^k_{m,n}$ is also W-trivial. Next we look at $X^3_{5,n}$. The obvious map $X^3_{5,n} \to S^8$ induces an isomorphism in the top cohomology, and hence the Hopf bundle on $S^8$ pulls back to a bundle $\xi$ over $X^3_{5,n}$ with $w(\xi) \ne 1$. A similar argument works for $X^5_{3,n}$. This completes the proof of (1).

The case (2) when $m = 2, 3$ follows from arguments similar to the above case. That $X^6_{3,n}$ is not W-trivial for $n = 1, 2$ follows from the facts that $X^6_{3,1} = S^9 \vee S^8$ and $X^6_{3,2} = S^8$. Clearly, $X^6_{2,1} = S^8$ is not W-trivial. This completes the proof of (2). Finally, as $m \ge 2$, W-triviality of $\Sigma^7 \mathbb{R}\mathbb{P}^m$ implies that $X^7_{m,n}$ is W-trivial. This completes the proof of (3) and the theorem.

Proof of Theorem 1.2. There are obvious maps $X^1_{3,n} \to S^4$ and $X^1_{7,n} \to S^8$ that induce isomorphisms in cohomology in the top dimension. The Hopf bundles on $S^4$ and $S^8$ then pull back to give bundles with total Stiefel-Whitney class not equal to 1. This shows that $X^1_{m,n}$ is not W-trivial if $m = 3, 7$. Hence, when $m \not\equiv 3, 7 \pmod 8$, we have that $X^1_{m,1}$, and hence $X^1_{m,n}$, is W-trivial for all $n \ge 1$. Thus $X^1_{m,n}$ is W-trivial if and only if $m \ne 3, 7$. This completes the proof of the theorem.

Proof of Theorem 1.3. We prove each of the claims in the theorem.

Proof of (4). First note that, as $X^2_{7,5} = S^9 \vee S^8$, we have that $X^2_{7,5}$ is not W-trivial. Thus $X^2_{7,n}$ is not W-trivial for $n \le 5$. Clearly, $X^2_{7,6} = S^9$ is W-trivial. Next we consider $X^2_{8t+7,n}$ with $t \ge 1$. In the corresponding exact sequence ([4], Table 4), the last group is infinite cyclic, and we obtain that $\alpha$ is an epimorphism. By (3) above, $X^2_{8t+6,2}$ is W-trivial and hence $X^2_{8t+7,2}$ is W-trivial. Hence the proof of (4) is complete.

Proof of (5). We first concentrate on the case $t = 1$. Note that $X^2_{8,7} = S^{10}$ and hence is W-trivial. As $X^2_{8,6} = \Sigma^2 M(\mathbb{Z}_2, 7) = \Sigma^8 \mathbb{R}\mathbb{P}^2$, we have, by Theorem 2.2, that $X^2_{8,6}$ is W-trivial. Next consider the corresponding exact sequence. We note that $KO^{-2}(S^6) = \mathbb{Z}$, generated by the Hopf bundle $\nu$, and $KO^{-1}(X_{8,6}) = \mathbb{Z}_2$ ([4], Tables 3 and 4). Hence $\alpha$ is an epimorphism. Since $\beta$ induces an isomorphism in 8th cohomology, the equality $w(\beta(\xi)) = 1$ implies that $w(\xi) = 1$. It follows from the exactness of the above sequence that there is a generator $\xi$ of the torsion-free part of $KO^{-2}(X_{8,5})$ with $\beta(\xi) = 2\nu$. Since $w(2\nu) = 1$, we have that $w(\xi) = 1$. If $\eta$ is the generator of the torsion part, $w(\beta(\eta)) = 1$ as $\beta(\eta)$ is (stably) trivial. Hence $w(\eta) = 1$. This shows that for any $\theta \in KO^{-2}(X_{8,5})$, $w(\theta) = 1$, and hence $X^2_{8,5}$ is W-trivial. W-triviality of $X^2_{8,4}$ follows from that of $X^2_{8,5}$ by considering the corresponding exact sequence and noting that the last group in that sequence is zero. Similar considerations show that $X^2_{8,3}$ and $X^2_{8,2}$ are W-trivial. This completes the proof in the case $t = 1$.
Next assume that $t > 1$ and consider the corresponding exact sequence. The last group in this sequence is zero ([4], Table 4). Hence $\alpha$ is an epimorphism. We claim that $X^2_{8t,8t-3}$ is W-trivial. The inclusion map $S^{8t} \to X^2_{8t,8t-3}$ induces an isomorphism in cohomology in degree $8t$. If $\xi$ is a vector bundle over $X^2_{8t,8t-3}$ with $w(\xi) \ne 1$, then $w_{8t}(\xi) \ne 0$. This bundle then pulls back to a bundle $\eta$ over $S^{8t}$ with $w(\eta) \ne 1$. This is a contradiction, as $t > 1$. Hence $X^2_{8t,8t-3}$ is W-trivial. The surjectivity of $\alpha$ now implies that $X^2_{8t,2}$ is W-trivial. This completes the proof of (5) and the theorem.

Proof of Theorem 1.4. First note that if $m = 2, 3$, then $X^4_{m,n}$ is always W-trivial. Since the obvious map $X^4_{4,n} \to S^8$ induces an isomorphism in cohomology in the top dimension, we have that $X^4_{4,n}$ is not W-trivial. This completes the proof of (1). Now assume that $m > 4$. Consider the corresponding exact sequence. The last group in this sequence is well known to be zero (see [3]), and hence $\alpha$ is an epimorphism. By Theorem 2.2, $\Sigma^4 \mathbb{R}\mathbb{P}^m$ is not W-trivial. The surjectivity of $\alpha$ now implies that $X^4_{m,3}$ is not W-trivial if $m > 4$. In view of Proposition 2.1 (3), $X^4_{m,n}$ is not W-trivial for $m > 4$ and $n = 1, 2, 3$. To complete the proof of the theorem we now show that if $m > 4$, then $X^4_{m,4}$ is W-trivial. To see this, consider the corresponding exact sequence (see [4], Section 3). The last two groups in this exact sequence are finite cyclic [3]. By Theorem 2.2, the spaces $\Sigma^4 \mathbb{R}\mathbb{P}^m$ and $\Sigma^4 \mathbb{R}\mathbb{P}^4$ are not W-trivial. Thus, if $\theta, \eta$ are generators of the second and the third group, respectively, then we must have $w(\theta) \ne 1$ and $w(\eta) \ne 1$. Assume that $\beta(\theta) = \eta$. Now let $\xi \in KO^{-4}(X_{m,4})$ be such that $w(\xi) \ne 1$. Then $w(\alpha(\xi)) \ne 1$, as $\alpha$ induces an isomorphism in cohomology in degrees $j$, $j \ge 9$. Let $\alpha(\xi) = s\theta$. Then $s$ is odd, as the cup products in $H^*(\Sigma^4 \mathbb{R}\mathbb{P}^m)$ are all zero. The calculation $w(\beta\alpha(\xi)) = w(s\beta(\theta)) = w(s\eta) = w(\eta) \ne 1$ contradicts the exactness of the above sequence. Thus $w(\xi) = 1$ for every $\xi \in KO^{-4}(X_{m,4})$, proving that $X^4_{m,4}$ is W-trivial. This shows that $X^4_{m,n}$ is W-trivial for all $n \ge 4$ if $m > 4$. This completes the proof of the theorem.

Proof of Theorem 1.5. Consider the corresponding exact sequence. It is known (see [2]) that the image of $\alpha$ is generated by $2^{\phi(n)}\xi$, where $\xi$ is the canonical line bundle over $\mathbb{R}\mathbb{P}^m$ and $\phi(n)$ is as in Theorem 2.3. It follows that in the corresponding exact sequence the image of $\beta$ is generated by $2^{\phi(n)}\eta$, where $\eta$ corresponds to $\xi$ under the Bott periodicity isomorphism. Since $2^{\phi(n)}$ is even and the cup products in $H^*(\Sigma^8 \mathbb{R}\mathbb{P}^m)$ are zero, it follows that $w(2^{\phi(n)}\eta) = 1$. Hence, if $\theta$ is in the image of $\beta$, then $w(\theta) = 1$. As $\alpha$ induces an isomorphism in cohomology in degrees $j$, $j \ge n + 9$, it follows that $X^8_{m,n}$ is W-trivial. This completes the proof of the theorem.
Testing a sequential stochastic simulation method based on regression kriging in a catchment area in Southern Hungary

Modelling spatial variability and uncertainty is a highly challenging subject in the soil- and geosciences. Regression kriging (RK) has several advantages; nevertheless, it is not able to model the spatial uncertainty of the target variable. The main aim of this study is to present and test a sequential stochastic simulation approach based on regression kriging (SSSRK), which can be used to generate alternative and equally probable realizations in order to model the spatial variability and uncertainty of the target variable, while the advantages of the RK technique are retained. The SSSRK method was tested in a sub-catchment area of the Lajvér stream, in Southern Hungary, for the high resolution modelling (i.e. 10 metre grid spacing) of the spatial distribution of soil organic matter (SOM). In the first step, secondary information was derived according to the soil-forming factors; then the RK system was built up, which provides the basis of SSSRK. 100 realizations were generated, which reproduced the model statistics and honoured the input dataset. These realizations provide 100 simulated values for each grid node, which is an appropriate number for calculating the cumulative distributions for each grid node. Using these cumulative distributions, the following maps were derived: the map of the E-type estimation, the corresponding 95% confidence interval width map, and the map of the probability of the event {SOM < 1.5%}. The latter map is highly informative for soil protection and management planning. The resulting model and maps showed that SSSRK is a valuable technique to model and assess the spatial variability and uncertainty of the target variable. Furthermore, the comparison of RK and SSSRK showed that the SSSRK E-type estimation and the RK estimation gave almost the same results due to the fairly high R2 value of the regression model (R2 = 0.809), which decreased the smoothing effect.

INTRODUCTION

Modelling the spatial distribution, variability and uncertainty of soil related attribute(s) (e.g. soil organic matter content, rooting depth, pH, particle size distribution, bulk density) is a challenging subject in the soil- and geosciences, as well as in general environmental research. The resulting model(s) can be applied to support various soil and environmental related decisions, such as delineation of contaminated or endangered zones, estimation of remediation costs, identification of areas for fertilization or crop growth, and so forth. Geostatistics, which can be regarded as a subset of statistics specialized in spatially referenced data, provides a wide range of tools to support these decisions. Regression kriging (RK) has several advantages in contrast to other kriging methods: it can take spatially exhaustive secondary information into account in the estimation process, and it can handle the trend (or drift), where the trend term means that the local mean systematically varies from place to place. Furthermore, RK is more flexible than kriging with external drift or cokriging methods (SIMBAHAN et al., 2006; ELDEIRY & GARCIA, 2010; HENGL et al., 2003), which can also take secondary information into account in the estimation process. One of the main drawbacks of RK is that it is unable to model the spatial uncertainty or to provide, for example, the 95% confidence interval for the estimates. According to GOOVAERTS (1997), if the intrinsic hypothesis holds, the kriging variance could be used to derive the 95% confidence interval for the estimates. Unfortunately, the intrinsic hypothesis cannot reasonably be assumed to hold in many cases (there is a trend, which makes it unacceptable). The main reason why RK is widely used for spatial modelling is that it can handle the trend (SZATMÁRI & BARTA, 2013). Based on this, we can classify the geostatistical techniques into two classes: estimation and simulation methods. MALVIĆ (2008) also used this classification, but they were referred to as "deterministic" and "stochastic" approaches. The main aim of this study is to present and test a sequential stochastic simulation method based on regression kriging (SSSRK), which is able to generate alternative and equally probable realizations (in order to model the spatial uncertainty) with the constraint that they have to reproduce the model statistics, while the advantages of RK are retained. SSSRK is presented and tested in a sub-catchment area of the Lajvér stream in Southern Hungary, where former soil (water) erosion research has resulted in particular soil sampling and laboratory analysis.
The study site is a good example from the point of view of soil science because of its heterogeneous landscapes, where the various effects of the soil-forming factors and soil erosion, as well as the various land cover types, diversify the sub-catchment soil pattern. Soil organic matter (SOM) content was chosen as a sample variable, being a soil attribute with an important role due to its multipurpose functionality in the soil- and geosciences, as well as in environmental research. The goal is to build up the high resolution model (i.e. 10 metre grid spacing) of the SOM spatial distribution based on SSSRK. The resulting model and the derived "maps" are of interest for precision agriculture, water erosion and soil protection research, small scale landscape planning and evaluation, integrated catchment management, and climate change research (sources and sinks for atmospheric carbon dioxide).

STUDY SITE

The sub-catchment (area approximately 1.32 km2) which drains into the Lajvér stream is located in the southern part of Hungary, in the Szekszárd Hills, near the village of Szálka (Fig. 1). The area of interest is covered by loess-like sediments (DÖVÉNYI, 2010). The annual precipitation is 650 mm in the study site. The original soil types are Cambisols and Luvisols with a loamy soil texture, but several eroded variants of them occur because of the high relief and the long-term agricultural land use. Even Regosols can be observed in small areas, while the eroded soil material forms Fluvisols on the valley bottoms. Land use more or less conforms to the relief conditions: approximately 50% of the area (65.2 hectares) is used as arable land (see Fig. 1), but the steeper slopes are covered by meadows and forests. The latter consist mainly of acacia, but oak forests occur in the northern part of the sub-catchment. There are new vineyards in the southeastern parts (see Fig. 1).

Sampling, laboratory measurements and preliminary data analysis

Former soil (water) erosion research has resulted in a particular pattern of topsoil (0-10 cm depth) sampling and laboratory analyses, including SOM. The database contains 47 records on SOM content originating from 2 soil profiles and 45 boreholes (Fig. 1), sampled in 2008-2009.
The topsoil sampling design was planned for modelling the soil erosion process (BORCSIK et al., 2011). A stratified sampling strategy (supported by geostatistical considerations) was used for this purpose to determine the spatial variability of the eroded and accumulated soil patterns. The land cover (LC) map and the steepness (slope from the digital elevation model; see Fig. 3) of the study area were the basis for the stratification. Arable lands are the LC type most seriously affected by soil erosion. Hence, this LC type (including the eroded and accumulated soil patterns) had more weight in the sampling strategy than forests and meadows, because the latter LC types are not affected by this soil degradation process (BORCSIK et al., 2011). As a consequence, arable lands (including the eroded and accumulated soil patterns, as well as the flat valley bottom) are relatively overrepresented, whilst forests and meadows are relatively underrepresented. According to WEBSTER & OLIVER (2007), geostatistical considerations were applied in the sampling strategy to make sure that the sampling design covered the study site as uniformly as possible. The soil profiles were excavated; their total depth was 140 cm, where the parent material (loess-like sediments) was reached. The topsoil layer (0-10 cm depth) of the profiles was sampled. A gouge auger was used to excavate the boreholes, with an average total depth of 120 cm. The total depths of the boreholes were determined by the depth of the parent material.
The samples were collected from the topsoil layer (0-10 cm depth) of the boreholes. The soil profiles and the boreholes were used to characterize the main soil types, as well as the soil erosion process. The laboratory analyses included the determination of SOM, particle size distribution, pH, bulk density, as well as the carbonate content (BORCSIK et al., 2011). This study used the SOM measurement data obtained according to the Hungarian Standard (MSZ 21470-52:1983), which means that the SOM content was determined after sulfuric acid digestion in the presence of 0.33 mol/dm3 potassium dichromate, measured by spectrophotometer (type: Helios-gamma).

Figure 1: The location of the study site in Hungary and its land cover, presented with the measured soil organic matter (SOM) data at the sampling points.

Only one outlier was identified by a Box and Whisker plot, and it was removed from the raw SOM data. Summary statistics were then calculated for the filtered SOM data (Table 1). Figure 1 presents the spatial distribution of the measured SOM content values. As anticipated, the SOM content is much lower in arable lands and vineyards than in forests and meadows, due to the long-term and intensive agricultural activity, as well as the soil erosion effects, which cause a higher amount of organic matter mineralization, as well as the erosion of the SOM rich topsoil. As a consequence, the study site shows a diversified picture of the spatial distribution of the SOM content. This also implies that the intrinsic hypothesis cannot reasonably be assumed to hold, because there is an obtrusive trend (i.e. the local mean systematically varies from place to place), which makes several kriging techniques (e.g. ordinary kriging, simple kriging) inappropriate. Moreover, the large number of factors (e.g. land cover types, topography, morphometric parameters) and their abrupt changes in geographic space make cokriging and kriging with external drift techniques inadequate too, according to GOOVAERTS (1997).
Alternatively, RK is able to handle the trend, and it can take numerous secondary information sources into account, considering their abrupt changes in geographic space. Hence, the RK technique is a reasonable choice for modelling the spatial distribution of the SOM content in the area of interest.

Theory of regression kriging (RK)

In the last ten years, regression kriging (RK) has become more and more popular for estimating the value(s) of the target variable(s) at unvisited locations, taking spatially exhaustive secondary information into account (HENGL et al., 2004; SIMBAHAN et al., 2006; GOOVAERTS, 2010, 2011; KERRY et al., 2012; SZATMÁRI & BARTA, 2013; SZATMÁRI et al., 2013; PÁSZTOR et al., 2014a, b). RK assumes that the random function Z(u) can be deconstructed into a trend and a residual component:

Z(u) = m(u) + ε(u),

where the trend component m(u) is accounted for by a multiple linear regression model, whilst the model residuals ε(u) represent the spatially varying but dependent stochastic component with zero mean, normal distribution (see Fig. 2) and covariance structure, which can be modelled with a simple kriging technique. Figure 2 presents the schema of RK. Based on this, the estimation for Z at an unvisited location u0 is given by

z*(u0) = q0^T · βGLS + λ0^T · (z − q · βGLS),

where βGLS is the vector of the regression coefficients, q0 is the vector of the secondary information at the unvisited location, λ0 is the vector of the simple kriging weights (assigned to the regression residuals), z is the vector of the observations, and q is the matrix of the secondary information at the sampling locations. The regression coefficients are estimated by the generalized least squares (GLS) method because it is able to take the covariance matrix of the residuals into account in the estimation process.

Spatially exhaustive secondary information for RK

In the case of soil related attribute(s), secondary information can be compiled according to the soil-forming factors, because there is a significant relationship between these factors and the soil attribute(s) (PHILLIPS, 1998; MCBRATNEY et al., 2003; BOCKHEIM & GENNADIYEV, 2010; BOCKHEIM et al., 2014). According to BOCKHEIM et al. (2014), the soil-forming factors are the following: topography, climate, parent material, organisms (i.e. vegetation and fauna), the age of the soil, and human intervention as an anthropogenic factor. Climate and parent material can be regarded as homogeneous because of the local extent of the sub-catchment (DÖVÉNYI, 2010). Unfortunately, we do not have any information about the age of the soils. However, this factor has frequently been omitted in soil related spatial modelling, because it is difficult to characterize well (MCBRATNEY et al., 2003). Spatially exhaustive information on topography can be derived from the digital elevation model (DEM) of the study area, which was built up with 10 metre resolution. The first step in using this secondary information is the morphometric analysis of the DEM. The derived, so-called "morphometric" parameter grids are aimed at characterizing the geomorphometry of the surface (MCBRATNEY et al., 2003). The grids of the morphometric parameters have the same resolution as the DEM. Table 2 and Figure 3 summarize the derived parameters and their characteristics. Note that there is a significant relationship between the derived morphometric parameters; thus the "raw" grids of these parameters cannot be used in further multiple linear regression analysis because of multicollinearity. To avoid this, principal component (PC) analysis was performed to transform the grids of the morphometric parameters, because the resultant PC grids are orthogonal and independent. Hence, their application decreases the effect of multicollinearity; moreover, the resulting PC grids preserve the total variation of the morphometric parameters (GEIGER, 2007). The PC analysis was carried out in SAGA GIS software with its "Spatial and Geostatistics / Principal Components" module, which calculates the PC grids from the input grids. The resulting PC grids were used in the further multiple linear regression analysis. Spatially exhaustive information on organisms (i.e. vegetation) and human intervention can be derived from the LC map of the study area (Fig. 1). In the present case, the LC map was compiled by interpreting the products of the official aerial photography campaign of Hungary, taken in 2005. As opposed to the morphometric parameters, the LC type is a categorical variable. For the sake of the application of RK, each LC type was converted into an indicator variable (IV): a grid map was generated (with 10 metre resolution) for each LC type, with a value of 1 at the locations of the given LC type and 0 at all other locations. The resulting IVs were used in the further multiple linear regression analysis.
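To make the RK estimator above concrete, the following is a minimal numerical sketch of the two equations (not the authors' implementation; the inputs would come from the fitted variogram model via the covariance function C(h) = sill − γ(h)):

```python
import numpy as np

def rk_estimate(q, z, q0, C, c0):
    """Regression kriging estimate z*(u0) = q0'·bGLS + l0'·(z - q·bGLS).

    q  : (n, p) secondary information at the n sampling locations
    z  : (n,)   observations
    q0 : (p,)   secondary information at the unvisited location u0
    C  : (n, n) covariance matrix of the residuals (from the variogram model)
    c0 : (n,)   residual covariances between the data points and u0
    """
    C_inv = np.linalg.inv(C)
    # GLS coefficients: the residual covariance enters the normal equations.
    beta = np.linalg.solve(q.T @ C_inv @ q, q.T @ C_inv @ z)
    lam = C_inv @ c0                 # simple kriging weights for the residuals
    return q0 @ beta + lam @ (z - q @ beta)
```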
Table 2 (excerpt): the Topographic Wetness Index provides a quantified control of local topography on hydrological processes and is an indicator of the spatial distribution of soil moisture and surface saturation.

Theory of sequential stochastic simulation based on regression kriging (SSSRK)

During previous decades, stochastic simulations became widespread (GOOVAERTS, 1997; DEUTSCH & JOURNEL, 1998; GEIGER, 2006; MALVIĆ, 2008; NOVAK ZELENIKA & MALVIĆ, 2011; GEIGER, 2012; MALVIĆ et al., 2012; NOVAK ZELENIKA et al., 2012). These simulations are methods in which alternative and equally probable high resolution models of the spatial distribution of Z(u) are generated (DEUTSCH & JOURNEL, 1998). If the realizations (also referred to as stochastic images) honour the input data, then the simulation is called "conditional" (DEUTSCH & JOURNEL, 1998). According to GOOVAERTS (1997), let {Z(uj), j = 1, …, N} be a set of random variables defined at N locations uj within the study area. The objective is to generate several joint realizations of these N random variables conditional to the dataset. The corresponding N-point (or N-variate) conditional cumulative distribution function (ccdf) is:

F(u1, …, uN; z1, …, zN | (n)) = Prob{Z(u1) ≤ z1, …, Z(uN) ≤ zN | (n)},

where (n) denotes conditioning to the n input data. However, this N-point ccdf can be written as the product of N one-point ccdfs:

F(u1, …, uN; z1, …, zN | (n)) = F(uN; zN | (n + N − 1)) × F(uN−1; zN−1 | (n + N − 2)) × … × F(u1; z1 | (n)),

where (n + j) denotes conditioning to the n input data plus the j previously simulated values. One realization is obtained by visiting the grid nodes along a random path and sequentially drawing from these one-point ccdfs; further realizations are obtained by repeating the entire sequential process with a possibly different random path for each realization (GOOVAERTS, 1997; GEIGER, 2006). When secondary information is available, this information can be used in the sequential stochastic simulation process (GOOVAERTS, 1997; DEUTSCH & JOURNEL, 1998; MALVIĆ, 2008). As mentioned above, the secondary information is related to the DEM and the LC map of the pilot area. In this study, the RK estimation was used to identify the mean of the ccdf at any grid node, and the simple kriging variance of the residuals was used to identify the variance of the ccdf at any grid node, according to GOOVAERTS (1997) and DEUTSCH & JOURNEL (1998). Figure 4 presents the schema of SSSRK. The consequence of this practice is that the variogram and the histogram of the residuals are reproduced by the simulation model. Furthermore, the realizations honour the input dataset.

In our work, 100 realizations (according to GOOVAERTS [2001] and GEIGER [2006]) were generated by the previously detailed SSSRK method. The resulting stochastic images can be used, for example, to map the E-type estimation as well as the corresponding upper and lower bounds of the confidence interval for each grid node, to assess the spatial uncertainty using the differences between the realizations, or to solve tasks like "contouring the probability of the event of {SOM < 1.5%}" (GEIGER & MUCSI, 2005; GEIGER, 2006; MUCSI et al., 2013).

Results of RK

The generalized least squares (GLS) method was used to estimate the regression coefficients for the multiple linear regression model. The response variable was the SOM content, whilst the explanatory variables were the PC grids of the morphometric parameters and the indicator variables (IV) of the LC types. The applied significance level was 0.05, and the "stepwise" method was used to select the explanatory variables for the regression model. Table 3 summarizes the results of the multiple linear regression analysis. Seven explanatory variables were selected into the multiple linear regression model by the "stepwise" method (see Table 3): 4 explanatory variables were related to the PC grids (i.e. PC-1, PC-2, PC-4 and PC-6), whilst 3 explanatory variables were related to the IV grids (i.e. IV-Forests, IV-Vineyards and IV-Eroded Arables).
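The sequential logic of SSSRK described above is compact enough to sketch. In the fragment below, rk_mean and sk_variance stand in for a full RK/simple-kriging engine (not shown), so this is a schematic skeleton of the algorithm rather than working geostatistics:

```python
import numpy as np

def sssrk_realization(nodes, rk_mean, sk_variance, rng):
    """One SSSRK realization: visit the grid nodes along a random path and
    draw each value from a normal one-point ccdf whose mean is the local RK
    estimate and whose variance is the local simple kriging variance of the
    residuals. Each draw joins the conditioning set before moving on."""
    simulated = {}
    for idx in rng.permutation(len(nodes)):
        node = nodes[idx]
        mu = rk_mean(node, simulated)        # conditioned on data + previous draws
        var = sk_variance(node, simulated)
        simulated[node] = rng.normal(mu, np.sqrt(var))
    return simulated

# A new random path (via a new seed) per realization:
# realizations = [sssrk_realization(nodes, rk_mean, sk_var, np.random.default_rng(i))
#                 for i in range(100)]
```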
In the case of the selected PC grids, PC-1 relates to the altitude parameter (i.e. altitude had the highest coefficient for this PC in the PC analysis), PC-2 relates to slope, PC-4 represents the Topographic Wetness Index, whilst PC-6 relates to the LS factor. Consequently, the spatial distribution of SOM is mainly determined by the soil erosion related morphometric parameters and by the LC types, as was anticipated. The determination coefficient of the regression model is 0.809 (see Table 3), which means that the model explains more than 80% of the total variability of the SOM data; the remaining approximately 20% has to be modeled with the simple kriging system. The regression residuals were derived and the corresponding experimental variogram was calculated to model their spatial structure (Fig. 5). The experimental variogram was fitted with an isotropic spherical variogram model with zero nugget, a sill of 0.0515 (approximately 20% of the total variance, see Table 1) and a range of 204 metres (see Fig. 5). The spatial estimation by RK was carried out using the multiple linear regression model and the fitted variogram model. The RK estimation is presented in Fig. 6.

Results of SSSRK

In this study, 100 equally probable realizations were generated based on the regression kriging system presented earlier, using the SSSRK algorithm. Figure 7 shows three of the resulting stochastic images of SSSRK. As can be seen in Fig. 7, there are areas where the stochastic images differ little (e.g. arable lands), whereas in other regions (e.g. forests and meadows) the differences are more pronounced. The experimental variograms of the resulting realizations can be derived and compared with the variogram model applied in the SSSRK process (Fig. 8). One of our constraints was that the SSSRK algorithm has to reproduce, through the resulting realizations, the applied variogram model of the residuals. Figure 8 shows that this was achieved. The 100 realizations provide 100 simulated values for each grid node, which is quite appropriate for calculating the cumulative distribution within an infinitesimally small neighbourhood of each grid node (GEIGER, 2006; MUCSI et al., 2013). Using these cumulative distributions, the E-type estimate and the corresponding upper and lower boundaries of the 95% confidence interval can be calculated for each grid node. Moreover, the width of the 95% confidence interval can also be derived, which provides a measure of the uncertainty of the SOM estimation: where this interval width is relatively high, the SOM estimation is more uncertain (GEIGER, 2006). Figure 9 shows the E-type estimation and the corresponding confidence interval width. (Figure 9: the E-type estimation of the soil organic matter content (a) and the corresponding 95% confidence interval width (b) on the basis of 100 realizations.) As we can see in Fig. 9b, the E-type estimates are more uncertain in forests, vineyards and meadows than in arable lands. This can be attributed to the sampling strategy, which underrepresented the former LC types. A horizontal transect (see Fig. 1) was traced out in the northern part of the study site, which intersects the most frequent LC types (arable lands, meadows and forests). Along the transect, the E-type estimate, the upper and lower boundaries of the 95% confidence interval and every tenth simulated value were plotted (see Fig. 10) in order to analyze how the simulated values vary in the different LC types. Figure 10 shows clearly that the simulated values in forests and meadows span a wider range than is the case for arable lands. This is in accordance with the aforementioned statement that the E-type estimates are more uncertain in forests and meadows. Furthermore, the transect also indicates that the SOM content is much lower in arable lands than in forests and meadows, due to the long-term and intensive agricultural activity, as well as the effects of soil erosion, which cause a higher amount of organic matter mineralization as well as the erosion of the SOM-rich topsoil. At 140 metres along the transect, the range of the simulated values is fairly small, which can be attributed to a sampling point very near that grid node (see Fig. 1). We can conclude that the SSSRK algorithm honours the input dataset, which was the other constraint on the SSSRK technique. The statistics discussed here, and the probability map of the next section, can all be derived from the realization stack, as sketched below.
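A minimal sketch of post-processing a stack of SSSRK realizations follows: the E-type estimate, the 95% confidence-interval width, and the probability map for the event {SOM < 1.5%} are all derived from the local cumulative distributions. The realization stack below is a hypothetical stand-in for the 100 simulated grids.

import numpy as np

rng = np.random.default_rng(2)
realizations = rng.normal(loc=2.0, scale=0.3, size=(100, 120, 150))  # placeholder

e_type = realizations.mean(axis=0)                        # E-type estimate
lo, hi = np.percentile(realizations, [2.5, 97.5], axis=0)
ci_width = hi - lo                                        # local uncertainty measure
p_som_below = (realizations < 1.5).mean(axis=0)           # P{SOM < 1.5%}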
In addition, the calculated cumulative distribution at each grid node can be used to solve tasks like "contouring the probability of the event {SOM < 1.5%}", according to GEIGER (2006). Figure 11 presents that "probability map", which can be used directly in a soil protection and management plan of the sub-catchment to delineate areas for SOM archiving.

Comparison of RK estimates with SSSRK's E-type estimates

Numerous authors have compared the results of estimation and simulation methods in the last decade (e.g. GOOVAERTS, 2000). Following this practice, the results of RK and SSSRK were compared. If we compare the RK estimation (Fig. 6) with the SSSRK E-type estimation (Fig. 9a), we can state that they are very similar. To test this impression, a difference map was calculated (Fig. 12), which quantifies the difference between the map of the RK estimation and that of the SSSRK E-type estimation. The range of the difference map is [−0.091; 0.068] (see Fig. 12), which is fairly small. Based on this, we can conclude that, from the practical point of view, the two maps present the same result. However, we have to note that this similarity may be attributed to the fairly high determination coefficient of the regression model (R² = 0.809), which decreased the smoothing effect of the RK technique.

CONCLUSIONS

In this paper, a sequential stochastic simulation method based on regression kriging (SSSRK) was presented and tested in a sub-catchment of the Lajvér stream in Southern Hungary. For this purpose, the soil organic matter (SOM) content was chosen, because this particular soil property plays an important role due to its multipurpose applicability. As illustrated in this study, SSSRK (as opposed to RK) is able to model the spatial uncertainty of the target variable using the generated, equally probable realizations, which reproduce the model statistics and honour the input dataset. It is able to provide a measure of the uncertainty of the E-type estimates using the derived confidence interval width. Moreover, the cumulative distribution calculated within an infinitesimally small neighbourhood of each grid node can be used to support various decisions (e.g. the identification of SOM-rich or SOM-poor areas). In addition, SSSRK retained the main advantages of the RK technique, such as its flexibility: it can also handle the trend (or drift), and it can take spatially exhaustive secondary information into account in the simulation process.
In conclusion, SSSRK is a valuable technique for modeling the spatial distribution, variability and uncertainty of the target variable, and for overcoming several shortcomings of RK.
The Natural History of Biopsy-Negative Rejection after Heart Transplantation Purpose. The most recent International Society for Heart and Lung Transplantation (ISHLT) biopsy scale classifies cellular and antibody-mediated rejections. However, there are cases with an acute decline in left ventricular ejection fraction (LVEF ≤ 45%) but no evidence of rejection on biopsy. Characteristics and treatment response of this biopsy-negative rejection (BNR) have yet to be elucidated. Methods. Between 2002 and 2012, we found 12 cases of BNR in 11 heart transplant patients, as previously defined. One of the 11 patients was treated a second time for BNR. Characteristics and response to treatment were noted. Results. 12 cases (in 11 patients) were reviewed and 11 occurred during the first year after transplant. 8 cases without heart failure symptoms were treated with an oral corticosteroid bolus and taper or intravenous immunoglobulin. Four cases with heart failure symptoms were treated with thymoglobulin, intravenous immunoglobulin, and intravenous methylprednisolone followed by an oral corticosteroid bolus and taper. Overall, 7 cases resulted in a return to normal left ventricular function within a mean of 14 ± 10 days from the initial biopsy. Conclusion. BNR includes cardiac dysfunction and can be a severe form of rejection. Characteristics of these cases of rejection are described, with most cases responding to appropriate therapy. Introduction Heart transplantation continues to provide patients with end-stage heart disease with extended survival, with a half-life of 9.3 years between 2000 and June 2008 [1]. However, despite substantial advancements in immunosuppression, patients continue to be at significant risk for allograft rejection early after cardiac transplantation. The two recognized forms of allograft rejection are acute cellular rejection and antibody-mediated rejection (AMR). While acute cellular rejection has historically been the most common cause of allograft dysfunction, AMR has only recently become widely accepted [2]. At the 2004 International Society for Heart and Lung Transplantation (ISHLT) conference, cellular rejection grades were revised and AMR was formally defined [3]. In April 2010, a publication from the ISHLT Consensus Conference assessed the status of AMR in heart transplantation and a pathologic grading scale was devised [4]. Despite the advent of new technology, such as gene expression profiling and echocardiography, endomyocardial biopsy remains the standard for detecting rejection. To minimize the risk of a false negative, multiple specimens (usually 3-5) are obtained from 3 different sites. Though rare, false negatives do exist, whether through sampling error, artifact, Quilty lesions, or pathology misreads. Prior to the 2010 Consensus Conference, hemodynamic compromise in the absence of acute cellular rejection was termed biopsy-negative rejection, and 10 to 20% of cardiac allograft recipients were diagnosed as having such rejection. Although the prevalence of so-called biopsy-negative rejection has declined with the clinical diagnosis of AMR, there are instances of patients who present with cardiac dysfunction (left ventricular ejection fraction, LVEF ≤ 45%) but have no biopsy findings of cellular or antibody-mediated rejection. These rejection episodes are now termed biopsy-negative rejection (BNR).
Since the outcome of patients with BNR (which includes cardiac dysfunction) has not been well established, we sought to review the characteristics and treatment response of patients who have developed BNR after heart transplantation. Methods and Statistics We retrospectively reviewed our cohort of patients who underwent heart transplantation between 2002 and 2012 and found 12 cases of BNR in 11 heart transplant patients, as defined by patients presenting with cardiac dysfunction, characterized by LVEF ≤ 45%, who had no biopsy findings of cellular rejection or AMR. As severe infections are also known to cause left ventricular dysfunction, patients with a clear clinical picture of sepsis with fever, increased white blood count, or positive cultures were excluded. Baseline characteristics and immunosuppression were collected and summarized. Characteristics and response to treatment at 90 days were noted. Continuous variables are presented as mean ± standard deviation, while categorical variables are presented as percentages. Results We identified 11 patients who underwent heart transplantation between 2002 and 2012 and were treated for BNR. One of the 11 patients was treated a second time for BNR. The demographics of patients presenting with BNR are shown in Table 1. None of the BNR patients had a previous blood transfusion or previous transplant. 5 (45%) patients had previous VAD placement and 1 (9%) patient was African American. The baseline immunosuppression medications were as follows: 5 patients (45%) underwent induction therapy with antithymocyte globulin (ATG), 9 patients (82%) were initiated on tacrolimus and mycophenolate mofetil, 1 patient (9%) on cyclosporine and mycophenolate mofetil, and 1 patient (9%) on cyclosporine and azathioprine. The immunosuppression regimen at the time of BNR onset is shown in Table 2. 2 patients switched from tacrolimus to cyclosporine because of a reduced seizure threshold; 1 patient switched to a renal-sparing protocol of sirolimus with mycophenolate mofetil because of renal function concerns. The 12 cases of BNR presented with an average LVEF of 34% ± 10%, and 11 (92%) treated cases occurred during the first year after transplant (Table 3). The mean time to the first incidence of BNR was 7.8 ± 7.5 months, and no patient had treated rejection before the onset of BNR, except for the patient at the second onset. (Table 3, excerpt: mean LVEF at recovery, 57% ± 6%; mean days to recovery, 14 ± 10; negative (ACR 0R, AMR 0) biopsy result after treatment, 10/12 (83%); the asterisked case represents the patient who had two BNRs during follow-up.) Among the 106 biopsies that had been done before BNR, 24 (23%) demonstrated low-grade acute cellular rejection (1R, 1A, or 1B) and 1 (0.9%) was suspicious for antibody-mediated rejection (AMR 1). De novo circulating antibodies developed in 3 cases (in 3 patients), and 1 of them had donor-specific antibodies. 4 of the 12 cases had LVEF ≤ 35% and presented with heart failure symptoms. Of these 4 heart failure cases, 2 required inotropic support; both received intravenous (IV) methylprednisolone (500-1000 mg per day for 3 days), ATG (125-150 mg per day for 3-5 days), and IV immunoglobulin (IVIG, 70 g per day for 3 days), followed by oral corticosteroids (prednisone 80-100 mg per day bolus and taper).
For the remaining 2 patients who presented with heart failure symptoms but did not require inotropes, 1 was treated with ATG and IV methylprednisolone followed by a prednisone bolus and taper, and the other was given IV methylprednisolone followed by prednisone only. Of the 8 of 12 cases without heart failure symptoms, 7 were treated with only high-dose oral corticosteroids. The remaining case was empirically treated with IVIG for two days. Overall, 7 cases (58%) (in 7 patients) resulted favorably in a return to normal left ventricular function (LVEF of 57% ± 6%) within a mean of 14 ± 10 days from the initial negative biopsy. Of these 7 patients, one expired approximately 6 months after the date of the normalized LVEF and another expired approximately 4 months after the date of normalized LVEF. The remaining 5 cases (in 5 patients, including 1 patient's second case of BNR) maintained persistent left ventricular dysfunction beyond 90 days. A flow chart of treatment and effect can be found in Figure 1. In summary, the LVEF status at 90 days was as follows: 5 cases (in 5 patients, including 1 case as a second BNR in a patient) had persistent LV dysfunction, and 7 cases (in 7 patients) experienced normalized LVEF. Discussion As our understanding of the biological mechanisms underlying cardiac allograft rejection increases, the number of undiagnosed rejections decreases. The concept of BNR has evolved from its initial definition as hemodynamic compromise in the absence of acute cellular rejection to its current definition of patients presenting with cardiac dysfunction in the absence of biopsy findings of cellular rejection or AMR [2]. AMR has changed from being subsumed under BNR to having its own ISHLT biopsy grading scale. However, even though the majority of rejection episodes of unclear etiology have now been resolved to be cases of AMR, there are still cases where the biopsy findings are inconsistent with the clinical presentation, that is, patients with a decrease in LVEF with no biopsy findings of rejection, cellular or humoral. Due to the inconsistencies of endomyocardial biopsies, the existence of BNR is questioned. This argument is furthered by the fact that prior to the 2010 AMR Consensus Conference, 10-20% of cardiac allograft recipients were diagnosed with BNR, when in actuality the majority of these patients most likely experienced AMR. This raises the question of whether BNR should exist as a separate category of rejection or is merely a by-product of the problems inherent in endomyocardial biopsies. In one case study, sampling error or nonuniformity of histopathologic changes resulted in a false-negative biopsy [5]. In this case study, a heart transplant recipient with repeatedly unremarkable endomyocardial biopsies and a negative evaluation for humoral rejection was found at subsequent autopsy to have severe subepicardial myocyte necrosis with classic cellular rejection. The subendocardial layer was free from rejection. This is contrary to other studies, which have found that rejection is evenly distributed throughout the right ventricular endomyocardium [6]. Although endomyocardial biopsy was the primary means of diagnosing acute rejection in all the cases discussed in this study, there are a few noninvasive diagnostics that have been demonstrated to be helpful in diagnosing acute allograft rejection after heart transplantation. In one multicenter clinical trial, Pham et al.
reported that using gene-expression profiling to monitor rejection in patients 6 months after heart transplantation was not associated with an increased risk of serious adverse outcomes and could reduce the need for biopsy [7]. In two pilot studies, Wu et al. applied cardiac magnetic resonance (CMR) in vivo to detect immune-cell infiltration at sites of rejection by monitoring macrophages. The investigators subsequently developed a functional index from local strain analysis and showed it to be correlated with rejection grades [8,9]. Although clinical applications have not been implemented, CMR was demonstrated to be capable of providing the rejection status from a whole-heart perspective, and thus might be a potential tool for optimizing the diagnosis of BNR. Another potential tool is speckle-tracking 2-dimensional strain echocardiography (2DSE). Using rat cardiac transplantation models, Pieper et al. demonstrated that 2DSE was able to differentiate myocardial function between rejection in allografts and nonrejection in isografts. Therefore, 2DSE might potentially help in clinical practice with early rejection monitoring [10]. Perhaps BNR is another form of AMR, but due to the "newness" of AMR and the lack of a complete understanding of AMR, no definite conclusions can be made [11]. Despite the revised ISHLT biopsy rejection scale, inconsistencies in histologic light-microscopic features make recognizing AMR difficult [12]. This revised definition of BNR has been noted to result in a decrease in 3-year subsequent survival, lower subsequent freedom from cardiac allograft vasculopathy (CAV), and a decrease in freedom from nonfatal major adverse cardiac events (NF-MACE) [13]. Previous studies characterized BNR as occurring on average 43 ± 38 months after transplantation, while the data collected in our study indicate that the time to the first incidence of BNR is 7.8 ± 7.5 months. The mechanism associated with BNR requires further understanding of its characteristics. Unfortunately, few reports in the field have examined treatment protocols for BNR. In our single-center experience, patients who presented with heart failure symptoms were given more aggressive treatment, that is, ATG, IV methylprednisolone, and IVIG, while patients without heart failure symptoms were mostly given a prednisone bolus and taper only. As BNR is an infrequent phenomenon, about whose validity there is considerable debate, further work is needed on whether there is a correlation between different factors and BNR episodes. Overall, from this set of data, it is inconclusive whether there are any characteristics of BNR that differ from other types of rejection. A point of interest in this study is the patient who experienced two episodes of BNR. Due to the small sample size, it is uncertain whether having one episode of BNR increases the risk of recurrent BNR episodes. During the first BNR episode, this patient experienced a return to normalized LVEF within 90 days of onset. In contrast, it took the patient 6 months to experience a return to normalized LVEF (51%) during the second BNR episode. For the first episode of BNR, there was a favorable return to normal cardiac function following treatment with an oral corticosteroid bolus and taper. In the case of the second episode of BNR, IVIG was given instead, which led to an increase in LVEF, although it took longer (6 months) for the patient to return to the range of normalized LVEF.
Prior to experiencing normalized LVEF, the patient's LVEF remained in the 40% range, fluctuating between 43% and 48%. The repeat-BNR patient was the only African American in this study and did not experience normalized LVEF at 90 days after the second onset. African Americans have been reported to have a higher risk of rejection after transplant compared to other races due to their distinct metabolism of immunosuppressive drugs [14,15]. Thus their immunosuppression needs to be tailored differently. A possible immunosuppression regimen with respect to BNR could require more aggressive treatment, employing ATG and IVIG in addition to steroids. A potential cause for this second onset of BNR could be the combination of this patient not receiving adequate treatment for the second BNR and the ethnicity of the patient. Cases of BNR tend to respond favorably to appropriate rejection therapy: 2 (17%) of 12 BNR cases required inotropes, and 7 (58%) BNR cases without heart failure symptoms were treated with an oral steroid bolus and taper. Seven of 12 (58%) treated BNR cases resulted in a return to a normal LVEF of 57% ± 6% within a mean of 14 ± 10 days from the initial negative biopsy. Due to the small sample size, it is uncertain whether BNR should be considered a new category in the biopsy grading scale or whether it is merely difficult to diagnose based on endomyocardial biopsies alone. If it is the former, then the mechanism needs to be elucidated. If it is the latter, then perhaps other methods for detecting rejection need to be considered. Conclusion BNR is a rare phenomenon (12 cases of BNR in a 10-year period) and can be a severe form of rejection in that there is cardiac dysfunction. However, this type of rejection is not apparent on biopsies, due either to a lack of knowledge regarding further types of rejection or to other factors. Characteristics of these cases of rejection are described above, with most cases responding favorably to rejection therapy. A detailed mechanism of this type of rejection needs to be elucidated.
Multipath Assisted Positioning With Transmitter Visibility Information In multipath assisted positioning, multipath components (MPCs) are regarded as line-of-sight (LoS) signals from virtual transmitters. Instead of trying to mitigate the influence of MPCs, the spatial information contained in MPCs is exploited for localization. The locations of the physical and virtual transmitters are in general unknown but can be estimated with simultaneous localization and mapping (SLAM). Recently, a multipath assisted positioning algorithm named Channel-SLAM for terrestrial radio signals has been introduced. It simultaneously tracks the position of a receiver and maps the locations of physical and virtual radio transmitters. Maps of estimated transmitter locations can be augmented by additional information. Within this paper, we propose to extend the Channel-SLAM algorithm by mapping information about the visibility of transmitters. A physical or virtual transmitter is visible if its signal is received in a LoS condition. We derive a novel particle filter for Channel-SLAM that estimates and exploits visibility information on transmitters in addition to their locations. We show by means of simulations in an indoor scenario that our novel particle filter improves the positioning performance of Channel-SLAM considerably. I. INTRODUCTION Precise localization in indoor and other global navigation satellite system (GNSS) denied scenarios has been both a requirement and an enabler for a variety of services and applications. For indoor localization, various radio signal based approaches have been introduced, such as fingerprinting [1] or approaches based on the estimation of signal parameters of signals of opportunity (SoOs) [2], [3]. The SoOs are often network or cellular signals, such as from wireless local area network (WLAN) routers or telecommunication base stations [4], for example. Such signals are available in the majority of scenarios of interest, and no additional infrastructure needs to be deployed to utilize them. Especially in indoor scenarios, positioning methods based on signal propagation delay suffer from multipath and non-line-of-sight (NLoS) propagation [5]. When the transmit signal is reflected and scattered by objects in the environment, it arrives at the receiver as a superposition of signal components with different delays. A signal component can be the LoS component or an MPC. Due to bandwidth limitations in practical systems, multipath propagation may lead to biased propagation time estimates. Standard techniques try to remove the influence of MPCs on the LoS path [6]. If there is no LoS signal component present, the receiver may erroneously deem an MPC to be the LoS signal. An inherent solution to problems caused by multipath propagation is to exploit the spatial information contained in MPCs. In multipath assisted positioning, each MPC arriving at a receiver is regarded as a LoS signal from a virtual transmitter. Thus, with virtual transmitters, a receiver may be localized with only a single physical transmitter. If the location of the physical transmitter and the geometry of the environment in terms of reflecting and scattering objects are known, the locations of virtual transmitters can be calculated. One of the remaining challenges is the correct association of virtual transmitters to actual MPCs, a problem which is referred to as data association [7], [8].
Various papers have addressed multipath assisted positioning with different technologies and applications, such as Long Term Evolution (LTE) [9], ultra-wideband (UWB) [10] or fifth generation (5G) [11], [12] signals, radar [13], or cooperative users [14]. Theoretical bounds were provided in [15], [16]. In a more general case, the environment and the locations of physical transmitters are unknown to the user. However, the locations of physical and virtual transmitters can be estimated jointly with the receiver position with SLAM [17], [18]. In SLAM terminology, the user equipped with a receiver is localized simultaneously with mapping the transmitter locations, as in [19]-[27]. In the following, the term user may refer to the actual receiver, depending on the context. The authors of [28], [29] have introduced a multipath assisted positioning algorithm named Channel-SLAM which does not require knowledge about the environment. In a first step, a channel estimator estimates the parameters of signal components and tracks them over time. Such parameters can be the time of arrival (ToA), angle of arrival (AoA), or phase, for example. In a second step, these estimates are used to obtain the positions of the user and the locations of the transmitters with SLAM. The signal model in Channel-SLAM covers multiple reflections and/or scattering of the transmit signal at reflecting walls and point scatterers. Geometrical considerations show that the locations of virtual transmitters are static as a user moves through a scenario. In addition to the locations of transmitters, visibility information, i.e., the information from which positions the LoS signal from a physical or virtual transmitter can be received, can be mapped by a user. In [30], we have shown how visibility information can improve the robustness of data association, i.e., the association among signal components and transmitters, in Channel-SLAM. However, visibility information was not used for estimating the user location. If the user returns to a previously visited location, we expect the same transmitters to be visible as before. Thus, visibility information can be used to improve the positioning performance of Channel-SLAM. Within this paper, we extend the Channel-SLAM algorithm by mapping and exploiting visibilities of transmitters. A map of transmitter locations created by a user is augmented by the information from where these transmitters are visible. The visibility information can then be used to facilitate the estimation of the user position. We derive a novel particle filter for Channel-SLAM that incorporates visibility information of transmitters by mapping and exploiting this information in a hexagonal grid map. For each hexagon, the probability that a transmitter is visible is estimated. We show with simulations that visibility information improves the positioning performance of Channel-SLAM considerably. The remainder of this paper is organized as follows. Section II discusses the idea of multipath assisted positioning and Channel-SLAM. In Section III, we introduce visibility maps and their representation, and derive a novel particle filter for Channel-SLAM exploiting visibility information. Evaluations based on simulations are presented in Section IV. Section V concludes the paper. Throughout the paper, we use the following notation:
• As indices, i identifies a user or user-map particle, j a transmitter or a signal component, and ℓ a transmitter particle.
• h denotes the index of a hexagon.
• k is a time instant.
• N(x; µ, C) denotes the probability density function (PDF) of a normal distribution in x with mean µ and covariance C.
• c_0 denotes the speed of light.
• (·)^T denotes the transpose of a matrix or vector.
• ‖·‖ denotes the Euclidean norm of a vector.
II. MULTIPATH ASSISTED POSITIONING A. MULTIPATH ASSISTED POSITIONING AND VIRTUAL TRANSMITTERS The idea of multipath assisted positioning is depicted in Fig. 1, where one physical transmitter Tx transmits a radio frequency (RF) signal isotropically. For clarity, a possible LoS signal from the physical transmitter to the user is ignored. The transmit signal from the physical transmitter arrives at the user via three different propagation paths. On its first path, drawn red, the signal is reflected at the wall, corresponding to a specular reflection of the electromagnetic wave. The user regards the corresponding MPC as a LoS signal from the virtual transmitter vTx1 to the receiver. The location of vTx1 is the location of the physical transmitter mirrored at the wall. The virtual transmitter vTx1 is inherently time synchronized to the physical transmitter. On the second propagation path, drawn brown, the signal is scattered at a point scatterer. A point scatterer distributes the energy of an impinging electromagnetic wave uniformly to all directions. Again, the arriving MPC is interpreted as a LoS signal, now from the virtual transmitter vTx2 to the receiver. The location of this transmitter is at the point scatterer. In the case of scattering, there is an additional propagation distance between the physical transmitter and vTx2, which is the Euclidean distance between the two. Thus, this propagation distance depends on the location of the scatterer. It translates to a time offset τ_0 by dividing by the speed of light c_0. The time offset can be interpreted as a clock offset or an additional propagation distance between the physical and the virtual transmitter. On its third propagation path, drawn teal, the transmit signal is first scattered at the point scatterer and afterward reflected at the wall. In this case, the location of the resulting virtual transmitter vTx3 is obtained by applying the corresponding two cases iteratively. The location of vTx3 is thus the scatterer location mirrored at the reflecting wall. Accordingly, the time offset between the physical transmitter and vTx3 is again τ_0. The third case can be generalized to any number of consecutive reflections and/or scattering events. If the transmit signal undergoes only specular reflections at plane surfaces, the corresponding virtual transmitter is time synchronized with the physical transmitter, and accordingly τ_0 = 0 s. If the transmit signal is scattered at least once, the distance traveled by the signal between the physical transmitter and the last involved scatterer is the additional propagation distance and determines the time offset. If the signal is only reflected at straight surfaces and scattered at point scatterers, the virtual transmitters are static and independent of the user position. In the sense of virtual transmitters, the effect of diffraction of an electromagnetic wave can be regarded as scattering. This geometry is sketched in code below. B. SIGNAL MODEL Within the scope of this paper, we consider a linear and time-variant multipath channel. We assume only one physical transmitter, which is time synchronized to the receiver. The generalization to multiple physical transmitters is straightforward if the transmit signals are separated in the time, frequency or code domain.
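Returning to the geometry of Section II-A, the two basic virtual-transmitter constructions can be sketched in a few lines of Python. The helper names and coordinates are illustrative assumptions, not part of the paper.

import numpy as np

C0 = 299_792_458.0  # speed of light in m/s

def mirror_across_wall(p_tx, wall_a, wall_b):
    """Mirror the transmitter position p_tx across the line through
    wall_a and wall_b (all 2-D numpy arrays): the vTx1 case."""
    d = (wall_b - wall_a) / np.linalg.norm(wall_b - wall_a)
    v = p_tx - wall_a
    return wall_a + 2.0 * (v @ d) * d - v

def scatterer_vtx(p_tx, p_scatterer):
    """A point scatterer acts as a virtual transmitter at its own
    location, with a clock offset equal to the Tx-scatterer distance
    divided by the speed of light: the vTx2 case."""
    tau_0 = np.linalg.norm(p_scatterer - p_tx) / C0
    return p_scatterer, tau_0

# Example: reflection at a wall along the x-axis, then the scatterer case.
p_tx = np.array([2.0, 3.0])
vtx1 = mirror_across_wall(p_tx, np.array([0.0, 0.0]), np.array([10.0, 0.0]))
vtx2, tau_0 = scatterer_vtx(p_tx, np.array([6.0, 1.0]))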
The transmit signal s(t) from a physical transmitter arrives at the user via different propagation paths and can be modeled at the receiver as a superposition of signal components. At time instant k, the j-th signal component is described by a complex amplitude a_j(k) and a delay τ_j(k), and arrives at the receiver with an AoA θ_j(k). Both additive white Gaussian noise (AWGN) and dense multipath components (DMCs) [31] are contained in the colored noise sequence n_k(τ). Considering an antenna array at the receiver, the received signal y_m(τ, k) at time instant k at the m-th element of the array in the baseband is hence

y_m(τ, k) = Σ_j a_j(k) a_m(θ_j(k)) s(τ − τ_j(k)) + n_k(τ),   (1)

where a_m(θ_j(k)) is the response of the m-th antenna element. The user samples snapshots of the received signal at time instants k. During the short time interval when the user samples a snapshot, the channel is assumed to be static. While the number of signal components is infinite in theory, it is limited by the sensitivity of the receiver in practical systems. C. CHANNEL-SLAM Channel-SLAM exploits the spatial information contained in MPCs for positioning. An overview of the algorithm is depicted in Fig. 2. In the first step, samples of the received signal are used by a channel estimator, which estimates and tracks the parameters of signal components. The results from the channel estimator serve as measurement inputs for the second step, where the user state and the transmitters' states are jointly estimated with SLAM. In the second step, we may use additional sensors, such as an inertial measurement unit (IMU), for example. In addition, prior maps of transmitter locations may be incorporated [32]. In the first step in Channel-SLAM, a channel estimator estimates the parameters of signal components arriving at the receiver. These parameters include the ToA, amplitude or phase, for example. Within the scope of this paper, we use the Kalman Enhanced Super Resolution tracking (KEST) estimator [33]. KEST works in two stages. In the inner stage, the parameters of signal components are estimated and tracked with a Kalman filter. In the update stage of the Kalman filter, a maximum likelihood (ML) estimator such as the Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm [34] is used to obtain estimates of these parameters. The SAGE algorithm is a variant of the Expectation-Maximization (EM) algorithm [35] and estimates the signal parameters jointly. In the outer stage of KEST, the number of signal components is tracked over time by a number of Kalman filters running in parallel. Each Kalman filter carries a different hypothesis on the number of signal components in the received signal. Thus, the estimate of the number of signal components is an additional output of KEST. The tracking nature of the Kalman filters inherently yields associations among signal components for adjacent time instants. However, KEST cannot find correspondences among signal components that are not tracked continuously. For example, if the track of a signal component is lost and the signal component is detected again at a later time instant, KEST cannot associate this newly detected signal component with the original one. We denote the number of signal components that KEST tracks at time instant k by N_TX,k. The overall number of signal components that have been detected from time instants zero to k and tracked by KEST is denoted by N^a_TX,k. Every time a new signal component is detected, N^a_TX,k is increased by one.
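A minimal sketch of synthesizing a snapshot according to the signal model (1) follows. The uniform-linear-array response, the pulse shape and all parameter values are illustrative assumptions; they are not the array model of the paper.

import numpy as np

def array_response(m, theta, spacing_wavelengths=0.5):
    """Response of the m-th element of a hypothetical uniform linear
    array to a plane wave from angle theta (radians)."""
    return np.exp(2j * np.pi * spacing_wavelengths * m * np.sin(theta))

def received_signal(m, tau_axis, s, amps, delays, aoas, noise_std=0.01):
    """y_m(tau, k) = sum_j a_j * a_m(theta_j) * s(tau - tau_j) + n(tau).
    s is a callable giving the transmit pulse at a given lag."""
    y = np.zeros_like(tau_axis, dtype=complex)
    for a_j, tau_j, theta_j in zip(amps, delays, aoas):
        y += a_j * array_response(m, theta_j) * s(tau_axis - tau_j)
    rng = np.random.default_rng(3)
    return y + noise_std * (rng.normal(size=y.size) + 1j * rng.normal(size=y.size))

# Example: a sinc pulse of roughly 100 MHz bandwidth and three components.
pulse = lambda t: np.sinc(t * 1e8)
tau = np.linspace(0, 200e-9, 512)
y0 = received_signal(0, tau, pulse, amps=[1.0, 0.4, 0.2],
                     delays=[30e-9, 55e-9, 90e-9], aoas=[0.1, 0.6, -0.8])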
There is no differentiation between physical and virtual transmitters in Channel-SLAM, and therefore no differentiation between the LoS component and MPCs of a received signal. Accordingly, the term transmitter comprises both physical and virtual transmitters. Since each signal component corresponds to one transmitter, N_TX,k and N^a_TX,k also denote the number of visible transmitters at time instant k, and the number of transmitters that have been visible during time instants zero to k, respectively. Within the scope of this paper, we assume that the receivers are equipped with rectangular antenna arrays that are aligned with the user orientation, allowing the AoA information of signal components to be exploited. Thus, the ToA estimates τ_k and AoA estimates θ_k of the signal components estimated by KEST at time instant k are stacked in the radio measurement vector

z_R,k = [τ_k^T, θ_k^T]^T,   (2)

where τ_k = [τ_1,k, …, τ_{N_TX,k},k]^T contains the ToA estimates of the N_TX,k detected signal components, and θ_k = [θ_1,k, …, θ_{N_TX,k},k]^T the corresponding AoA estimates. The ToA estimate τ_j,k describes the propagation time of the signal traveling from the j-th transmitter to the user. The AoA estimate θ_j,k is the angle between the user heading and the incoming signal from the j-th transmitter. In the second step of Channel-SLAM, the locations and clock offsets of the transmitters and the user position and velocity are estimated jointly with SLAM. Each signal component detected by KEST corresponds to one transmitter. As described in Section II-A, the locations of the physical and virtual transmitters are static when the transmit signal is reflected at straight walls and/or scattered at point scatterers. Thus, the transmitter model in Channel-SLAM is the same for both the physical and virtual transmitters: each transmitter can be described by a location p_TX in two dimensions and an additional propagation distance corresponding to a clock offset τ_0. The state vector x^{<j>}_TX,k of the j-th transmitter at time instant k includes its two-dimensional location p^{<j>}_TX,k in Cartesian coordinates and its clock offset τ^{<j>}_0,k,

x^{<j>}_TX,k = [(p^{<j>}_TX,k)^T, τ^{<j>}_0,k]^T.   (5)

The user state vector x_u,k at time instant k consists of the position p_u,k = [x_k, y_k]^T and velocity v_u,k = [v_x,k, v_y,k]^T of the user, in two dimensions each. It is expressed as

x_u,k = [p_u,k^T, v_u,k^T]^T.   (6)

The entire state vector consists of the user state and the states of the N_TX,k transmitters, given by

x_k = [x_u,k^T, (x^{<1>}_TX,k)^T, …, (x^{<N_TX,k>}_TX,k)^T]^T.   (7)

It is clear from the dimension of the user state vector in (6), the transmitters' state vector in (5), and the radio measurement vector in (2) that Channel-SLAM considers an underdetermined system in each snapshot, as there are more unknowns than measurements. Thus, it is crucial that the channel estimator finds correspondences among signal components in consecutive snapshots of the received signal. Only when such correspondences are found does the entire system become overdetermined over time, due to the static nature of the transmitters. As a consequence, signal components that can be tracked over a long traveled distance of the user contribute much to Channel-SLAM. Channel-SLAM seeks to find the minimum mean square error (MMSE) estimate x̂_0:k of the user and transmitter states from time instants 0 to k with SLAM. It is expressed in terms of the posterior PDF p(x_0:k | z_R,1:k, u_1:k), where z_R,1:k denotes the measurements from time instants 1 to k, as

x̂_0:k = ∫ x_0:k p(x_0:k | z_R,1:k, u_1:k) dx_0:k.   (8)

The variables u_1:k denote the user control input from time instants 1 to k and will be examined later in this section.
The posterior PDF of the state vector can be factorized as

p(x_0:k | z_R,1:k, u_1:k) = p(x_TX,0:k, x_u,0:k | z_R,1:k, u_1:k) = p(x_u,0:k | z_R,1:k, u_1:k) p(x_TX,0:k | x_u,0:k, z_R,1:k),   (9)

separating the user space from the transmitter space. The second term in the last line of (9) denotes the transmitters' posterior PDF conditioned on the user state. We assume no correlation among the estimates of the parameters of signal components, and therefore among the measurements for the single transmitters. The latter assumption does not hold if the parameters of two or more signal components are close to each other in all estimated domains. For example, if the respective differences in the ToA, phase, amplitude and AoA of two signal components are very small, the estimates are correlated. However, if such a situation occurs, we expect it to last for only a very short period of time. Thus, we expect only short-term biases in the channel estimator, and unbiased estimates in the long term. It can be shown [36] that the transmitters' conditional posterior PDF can be written as

p(x_TX,0:k | x_u,0:k, z_R,1:k) = ∏_{j=1}^{N_TX,k} p(x^{<j>}_TX,0:k | x_u,0:k, z_R,1:k).   (10)

To calculate the posterior PDF, Channel-SLAM applies Bayesian recursive estimation [37], which works in two stages. In the prediction stage, the state evolves following a movement model. In the update stage, the state is updated with measurements, which are the estimates from KEST as in (2). The transmitters are static in the model of Channel-SLAM. The probabilistic model of the transmitter state evolution is expressed by

x^{<j>}_TX,k = x^{<j>}_TX,k−1 + v_TX,k,   (11)

where v_TX,k is a noise sample from a zero-mean distribution with very small variance. While the transmitters are considered static, v_TX,k is added for numerical stability. For the prediction of the user state, we incorporate heading change rate measurements from a gyroscope as the control input u_k. The model of the user state evolution is expressed by

x_u,k = F_u,k x_u,k−1 + v_u,k,   (12)

where the heading change rate measurements are incorporated in the matrix F_u,k as in [29], and v_u,k is a noise sample from a distribution with a covariance matrix as described in more detail in [38]. It depends on the assumed dynamics of the user, who is assumed to be a pedestrian in Channel-SLAM. In the update stage, we assume two types of radio measurements, ToA and AoA. As mentioned above, they are obtained from the channel estimator in the first stage of Channel-SLAM as in (2). The likelihood of the measurements from the j-th transmitter is calculated by

p(τ_j,k, θ_j,k | x_k) = N(τ_j,k; τ̂_j,k, σ²_τ,j) N(θ_j,k; θ̂_j,k, σ²_θ,j),   (13)

where the predicted measurements are given by

τ̂_j,k = ‖p^{<j>}_TX,k − p_u,k‖ / c_0 + τ^{<j>}_0,k   (14)

for the ToA, and by

θ̂_j,k = atan2(y^{<j>}_TX,k − y_k, x^{<j>}_TX,k − x_k) − ψ_k   (15)

for the AoA, respectively, where ψ_k = atan2(v_y,k, v_x,k) denotes the user heading. The four quadrant inverse tangent function atan2(y, x) returns the unique counter-clockwise angle between the positive x-axis and the line connecting the origin with the point given by the coordinates (x, y). The measurement noise samples are assumed to be drawn from zero-mean Gaussian distributed noise sequences. The variance for the ToA measurement is denoted by σ²_τ,j, and for the AoA measurement by σ²_θ,j. They are obtained from the covariance matrices of the Kalman filters in the KEST estimator as in [33]. The actual implementation of Channel-SLAM uses a Rao-Blackwellized particle filter to estimate the user and transmitter states jointly. Following (9), a user particle filter tracks the user position and velocity over time.
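The measurement model (13)-(15) can be sketched as follows. Deriving the heading from the user velocity mirrors the reconstruction of (15) above; the function names are illustrative.

import numpy as np

C0 = 299_792_458.0

def predict_toa(p_tx, tau_0, p_u):
    """Predicted ToA (14): distance over c_0 plus the clock offset."""
    return np.linalg.norm(p_tx - p_u) / C0 + tau_0

def predict_aoa(p_tx, p_u, heading):
    """Predicted AoA (15): angle between user heading and incoming signal."""
    bearing = np.arctan2(p_tx[1] - p_u[1], p_tx[0] - p_u[0])
    return np.angle(np.exp(1j * (bearing - heading)))   # wrap to (-pi, pi]

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def measurement_likelihood(toa_meas, aoa_meas, p_tx, tau_0, p_u, v_u,
                           var_toa, var_aoa):
    """Gaussian ToA/AoA likelihood (13) for one transmitter hypothesis."""
    heading = np.arctan2(v_u[1], v_u[0])      # heading from user velocity
    lik_toa = gaussian(toa_meas, predict_toa(p_tx, tau_0, p_u), var_toa)
    aoa_err = np.angle(np.exp(1j * (aoa_meas - predict_aoa(p_tx, p_u, heading))))
    return lik_toa * gaussian(aoa_err, 0.0, var_aoa)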
The posterior PDF of the user state is modeled by

p(x_u,0:k | z_R,1:k, u_1:k) ≈ Σ_{i=1}^{N_p} w^{<i>}_0:k δ(x_u,0:k − x^{<i>}_u,0:k),   (16)

where x^{<i>}_u,0:k is the i-th of the N_p user particles, w^{<i>}_0:k its associated weight, and δ(·) the Dirac delta distribution. For each user particle, the transmitter states are estimated independently from other user particles by particle filters as well. The state of the j-th transmitter for the i-th user particle is therefore expressed by the model

p(x^{<j>}_TX,0:k | x^{<i>}_u,0:k, z_R,1:k) ≈ Σ_{ℓ=1}^{N_p,Tx} w^{<i,j,ℓ>}_0:k δ(x^{<j>}_TX,0:k − x^{<i,j,ℓ>}_TX,0:k),   (17)

where x^{<i,j,ℓ>}_TX,0:k is the ℓ-th of the N_p,Tx transmitter particles and w^{<i,j,ℓ>}_0:k its associated weight. A detailed derivation of the Channel-SLAM algorithm can be found in [29]. III. CHANNEL-SLAM WITH VISIBILITY INFORMATION A. VISIBILITY MAPS In Channel-SLAM, we map the locations of physical and virtual transmitters. Such maps can be augmented by additional information. As a user moves through a scenario, different physical and/or virtual transmitters are visible from different user locations. We say that a transmitter is visible from a certain user position if the signal from that transmitter is received by the user in a LoS condition. Following the idea of multipath assisted positioning in Section II-A, an MPC is regarded as the LoS component from a different virtual transmitter. In the rationale of multipath assisted positioning, there is by definition no NLoS propagation. Considering static scenarios, we expect that if a user returns to a previously visited location, the same transmitters are visible as before. For mapping visibility information, we use a location based map. In particular, visibility information is stored in a hexagonal grid map. The two-dimensional space is discretized with a grid of adjacent hexagons. Each hexagon is assigned a unique index. The entire visibility map M can thus be described as a set of hexagon visibility states

M = {M_1, …, M_{N_H}},   (18)

where N_H is the number of hexagons. Each hexagon state M_h in turn contains information on the visibility of each of the N^a_TX,k transmitters,

M_h = {M^{<1>}_h, …, M^{<N^a_TX,k>}_h}.   (19)

The value M^{<j>}_h represents probabilistic information on whether the j-th transmitter is visible from the h-th hexagon or not. We use probabilities since, due to the discretization of the space, a transmitter might be visible at one position in the hexagon, but not at another position in the same hexagon. Thus, we have the probability V^{<j>}_h that the j-th transmitter is visible from an arbitrary position in the h-th hexagon, and the complementary probability V̄^{<j>}_h = 1 − V^{<j>}_h that it is not. Fig. 3 depicts a simple scenario with a physical transmitter Tx and dark gray lines representing walls that block the signal from the transmitter Tx. Each hexagon is filled according to the probability that the transmitter is visible from an arbitrary user position within that hexagon. In hexagons filled dark, the probability that the transmitter is visible is small, whereas in bright hexagons, the probability is high. In hexagons filled gray with a wall cutting them into two equal halves, the probabilities of the transmitter being visible and being not visible are equal. The true visibility probabilities are unknown to the user; however, the PDFs p(V^{<j>}_h,k | x_u,0:k, z_V,1:k) can be estimated, where z_V,1:k are observations of the transmitters' visibilities from time instants one to k, which will be covered in more detail later. We assume that each of these PDFs follows a Beta distribution [39]. They represent the belief that a transmitter is visible in a hexagon or not. In general, a Beta distribution with parameters p and q is defined by the PDF

B(x; p, q) = x^{p−1} (1 − x)^{q−1} / B(p, q)   (21)

for x ∈ (0, 1), and B(x; p, q) = 0 otherwise.
The Beta function B(p, q) normalizes the Beta distribution and is defined by

B(p, q) = Γ(p) Γ(q) / Γ(p + q),   (22)

where Γ(·) is the Gamma function [39]. Fig. 4 shows the PDF of the Beta distribution for different parameters. Intuitively, the Beta distribution as defined above may represent the belief in the value of the random variable x ∈ (0, 1). If the value of the parameter p is high and q is low, the value of x is likely to be high, and vice versa. In addition, if the sum of the parameters p and q is high, the degree of trust that we have in our belief of x is high as well. If the sum of the parameters is low, the degree of trust in our belief of x is low. Thus, if the parameters p and q are obtained from observations following a binomial distribution, the number of observations represents how reliable the belief in the random variable x is. The PDF at time instant k of the j-th transmitter being visible in hexagon h is hence modeled by a Beta distribution in V^{<j>}_h,k, whose parameters are learned from visibility observations as detailed below. B. VISIBILITY MEASUREMENTS In Section II, we used the ToA and AoA estimates from the inner stage of KEST as measurement inputs in the recursive Bayesian estimation scheme. The correspondences among signal components at adjacent time instants were assumed to be known implicitly. In the following, we drop this assumption, and explicitly introduce an additional measurement vector z_V for mapping the visibilities of transmitters. This vector z_V is an output of the outer stage of KEST and has N^a_TX,k entries, where the j-th entry at time instant k is denoted by z^{<j>}_V,k. If the j-th of the N^a_TX,k transmitters is visible at time instant k, the value of z^{<j>}_V,k is the index of the ToA measurement for that transmitter in z_R,k. Otherwise, z^{<j>}_V,k is zero. Thus, z_V,k has N_TX,k non-zero entries. Fig. 5 illustrates the relationship of the vectors z_R,k and z_V,k with an example, assuming for clarity that only ToA measurements are available. The vector z_R,k holds the ToA measurements for N_TX,k = 3 currently visible transmitters. The transmitters visible at time instant k have the indices 3, 4 and 7, corresponding to the indices of the entries in z_V,k that are non-zero. Each of these non-zero entries denotes the index of the ToA measurement for the respective transmitter in z_R,k. For example, z^{<3>}_V,k = 1, meaning that the ToA measurement of the third transmitter is the first entry in z_R,k. In total, N^a_TX,k = 8 transmitters have been visible, i.e., N^a_TX,k = 8 signal components have been tracked by KEST up to time instant k. On the one hand, z_V,k can be regarded as an association vector, as it associates signal components at time instant k with signal components from previous time steps. On the other hand, it can be regarded as a measurement of the visibilities of transmitters, as it indicates which transmitters are visible at which time instant. Hence, the vector z_V,k is named the visibility measurement vector in the following. The overall measurement vector z_k comprises the estimates from both the inner and outer stage of KEST. Thus, we define

z_k = [z_R,k^T, z_V,k^T]^T,   (26)

where z_R,k is defined as in (2). It contains the ToA and AoA estimates from the inner stage of KEST for the currently visible N_TX,k transmitters. The hexagonal discretization and the Beta visibility beliefs are sketched in code below. C. BAYESIAN FORMULATION AND RAO-BLACKWELLIZED PARTICLE FILTER FOR CHANNEL-SLAM We seek to estimate the states of the user, x_u,0:k, the locations of the transmitters, x_TX,0:k, and the visibility map M_0:k, given the measurements z_1:k and the user control input u_1:k, with recursive Bayesian estimation. The dynamic Bayesian network (DBN) in Fig. 6 shows the dependencies of the involved variables.
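A minimal sketch of the visibility map of Section III-A follows: a position is mapped to a hexagon of the grid via axial coordinates with cube rounding (pointy-top orientation and grid origin are illustrative choices), and for each (hexagon, transmitter) pair the counts of "seen"/"not seen" observations parameterize a Beta belief. The uniform Beta(1, 1) prior is an assumption for illustration.

import math
from collections import defaultdict

def position_to_hexagon(x, y, side=2.0):
    """Integer axial coordinates (q, r) of the hexagon containing (x, y)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / side
    r = (2 / 3 * y) / side
    s = -q - r                      # cube coordinates satisfy q + r + s = 0
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr

class VisibilityMap:
    def __init__(self):
        # (hexagon, transmitter index) -> [seen count, not-seen count]
        self.counts = defaultdict(lambda: [0, 0])

    def update(self, hexagon, tx_idx, visible):
        self.counts[(hexagon, tx_idx)][0 if visible else 1] += 1

    def p_visible(self, hexagon, tx_idx):
        """Mean of Beta(1 + seen, 1 + not_seen)."""
        seen, not_seen = self.counts[(hexagon, tx_idx)]
        return (1 + seen) / (2 + seen + not_seen)

m = VisibilityMap()
h = position_to_hexagon(5.0, 3.0)
m.update(h, 3, visible=True)
m.update(h, 3, visible=True)
m.update(h, 3, visible=False)
print(m.p_visible(h, 3))            # 0.6 = (1 + 2) / (2 + 3)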
We assume a first-order hidden Markov model. In comparison to the standard Channel-SLAM algorithm, the visibility measurements z_V and the map M are added. The corresponding posterior PDF can be written as

p(x_0:k, M_0:k | z_1:k, u_1:k) = p(x_TX,0:k, x_u,0:k, M_0:k | z_1:k, u_1:k)
= p(x_u,0:k, M_0:k | z_1:k, u_1:k) p(x_TX,0:k | x_u,0:k, M_0:k, z_1:k, u_1:k)
= p(x_u,0:k, M_0:k | z_1:k, u_1:k) p(x_TX,0:k | x_u,0:k, z_1:k),   (27)

where we assume that the transmitter states are conditionally independent of the user control input u_1:k and the visibility map M_0:k. The latter assumption is based on the observation that, without knowledge of the geometry of the environment, a virtual transmitter's location does not reveal much information about its visibility. For example, in Fig. 1, the transmitters Tx and vTx2 are visible in the vicinity of the corresponding transmitter's location, while vTx1 and vTx3 are not. In addition, we do not differentiate between physical and virtual transmitters in our model, and assume no knowledge of the propagation path of a signal component. Given the structure of the last line of (27), we use a Rao-Blackwellized particle filter [40] to jointly estimate the user state and the visibility map, and to simultaneously estimate the transmitter locations. We name the particle filter estimating the user state and the visibility map the user-map particle filter, and the particle filters estimating the transmitter states transmitter particle filters. The history of the i-th user-map particle at time instant k is of the form

x^{<i>}_0:k = {x^{<i>}_u,0:k, M^{<i>}_0:k}   (28)

and has an associated weight w^{<i>}_0:k. The posterior PDF in the user-map particle filter, i.e., the first factor in the last line of (27), is approximated by the set of weighted particles as

p(x_u,0:k, M_0:k | z_1:k, u_1:k) ≈ Σ_{i=1}^{N_p} w^{<i>}_0:k δ(x_u,0:k − x^{<i>}_u,0:k) δ(M_0:k − M^{<i>}_0:k).   (29)

D. DERIVATION OF THE WEIGHTS FOR THE USER-MAP PARTICLE FILTER In general, it is hard to draw samples from the PDF on the left-hand side of (29). Instead, we use the idea of importance sampling: samples are drawn from an importance density q(x_u,0:k, M_0:k | z_1:k, u_1:k), and the mismatch is compensated for by adapting the weights on the right-hand side of (29). If we can evaluate the left-hand side of (29) pointwise, the weights can be updated recursively as

w^{<i>}_k ∝ w^{<i>}_{k−1} p(z_k | x^{<i>}_u,0:k, M^{<i>}_0:k, z_1:k−1, u_1:k) p(x^{<i>}_u,k, M^{<i>}_k | x^{<i>}_u,k−1, M^{<i>}_k−1, u_k) / q(x^{<i>}_u,k, M^{<i>}_k | x^{<i>}_u,0:k−1, M^{<i>}_0:k−1, z_1:k, u_1:k).   (32)

The general derivation of (32) can be found in [41], and a derivation for the user-map particle filter is presented in Appendix I. With (26), the measurement likelihood in (32) can be factorized as

p(z_k | x_u,0:k, M_0:k, z_1:k−1, u_1:k) = p(z_V,k | x_u,0:k, M_0:k, z_1:k−1, u_1:k) p(z_R,k | x_u,0:k, M_0:k, z_V,1:k, z_R,1:k−1, u_1:k)
= p(z_V,k | x_u,k, M_k) p(z_R,k | x_u,0:k, z_V,1:k, z_R,1:k−1).   (33)

In the last line of (33), the measurement z_V,k depends only on the current map and the user location. In fact, the user state is needed only to indicate the hexagon in which the user is located at time instant k. The radio measurement z_R,k is independent of the map and the control input. For the transition PDF in the numerator of (32), we obtain

p(x_u,k, M_k | x_u,k−1, M_k−1, u_k) = p(x_u,k | x_u,k−1, u_k) p(M_k | M_k−1, x_u,k).   (34)

In general, the user position is conditionally dependent on the estimate of the visibility map. However, without a measurement z_V of the visibility, the information in the visibility map cannot be used, since the association of the currently visible transmitters to transmitters in the map is not known. Thus, in the first factor in the last line of (34), the current user state x_u,k is independent of the visibility map M_k−1. In the second factor in the last line of (34), the information of the user state x_u,k−1 that is relevant for M_k is already contained in M_k−1. Thus, M_k is independent of the user state at time instant k − 1.
In addition, the visibility map is independent of any control input u_k. A crucial step in a particle filter is the choice of the importance density in the denominator of (32), as new particles are sampled from it. The importance density can be written as

q(x_u,k, M_k | x_u,0:k−1, M_0:k−1, z_1:k, u_1:k) = q(x_u,k | x_u,0:k−1, M_0:k−1, z_1:k, u_1:k) q(M_k | x_u,k, x_u,0:k−1, M_0:k−1, z_1:k, u_1:k).   (35)

Following the structure of (35), we define the importance density such that

q(x_u,k, M_k | x_u,0:k−1, M_0:k−1, z_1:k, u_1:k) = p(x_u,k | x_u,k−1, u_k) p(M_k | M_k−1, x_u,k, z_V,k),   (36)

where the first term on the right-hand side of (36) is the a-priori PDF for the user. For sampling a new user-map particle, the importance density q(x^{<i>}_u,k, M^{<i>}_k | x^{<i>}_u,0:k−1, M^{<i>}_0:k−1, z_1:k, u_1:k) as in (36) is evaluated from left to right. First, a new user particle x^{<i>}_u,k is sampled with the user movement model in (12), given the old particle x^{<i>}_u,k−1. Then, a new visibility map is sampled based on the new user particle. To sample a new visibility map, we proceed as follows. As the user travels through the scenario, counts of how often a transmitter was visible or not are stored in each hexagon for each particle. At time instant k, the j-th transmitter has been visible C^{<i,j>}_h,k times in hexagon h for the i-th user-map particle, and it has been not visible C̄^{<i,j>}_h,k times. At each time instant, the counts C^{<i,j>}_h,k−1 and C̄^{<i,j>}_h,k−1 are updated for the current hexagon h, in which the user-map particle is located, for each transmitter, depending on whether the transmitter is visible or not. The Beta distribution is the conjugate prior of the binomial distribution: updating a prior Beta distribution with observations following a binomial distribution results again in a Beta distribution with new parameters. The observation whether a transmitter is visible in a hexagon at one time instant may be regarded as a realization of a random variable following a binomial distribution. Thus, the parameters of the Beta distribution representing the belief of the i-th user-map particle that the j-th transmitter is visible in the h-th hexagon are updated according to

p(V^{<j>}_h,k | x^{<i>}_u,0:k, z_V,1:k) = B(V^{<j>}_h,k; ν_0 + C^{<i,j>}_h,k, ν̄_0 + C̄^{<i,j>}_h,k),   (37)

where ν_0 and ν̄_0 are the parameters of the prior Beta distribution. Plugging the importance density (36) and the factorizations (33) and (34) into (32), the user transition PDF cancels, and the weights become

w^{<i>}_k ∝ w^{<i>}_{k−1} p(z_V,k | x^{<i>}_u,k, M^{<i>}_k) p(z_R,k | x^{<i>}_u,0:k, z_V,1:k, z_R,1:k−1) p(M^{<i>}_k | M^{<i>}_k−1, x^{<i>}_u,k) / p(M^{<i>}_k | M^{<i>}_k−1, x^{<i>}_u,k, z_V,k).   (38)

The denominator in the last line of (38) can be calculated with Bayes' theorem as

p(M_k | M_k−1, x_u,k, z_V,k) = p(z_V,k | M_k, x_u,k) p(M_k | M_k−1, x_u,k) / p(z_V,k | M_k−1, x_u,k).   (39)

Thus, the weights become

w^{<i>}_k ∝ w^{<i>}_{k−1} p(z_V,k | x^{<i>}_u,k, M^{<i>}_k−1) p(z_R,k | x^{<i>}_u,0:k, z_V,1:k, z_R,1:k−1).   (40)

The visibility measurement z_V,k is only dependent on the hexagon the user-map particle is in at time instant k. We identify this hexagon by the index h. In addition, the visibilities of the transmitters are assumed to be independent of each other. The first likelihood factor in the last line of (40) is therefore

p(z_V,k | x^{<i>}_u,k, M^{<i>}_k−1) = ∏_{j=1}^{N^a_TX,k} p(z^{<j>}_V,k | M^{<i,j>}_h,k−1),   (41)

where M^{<i,j>}_h,k−1 refers to the visibility of the j-th transmitter for the i-th user-map particle in the hexagon with index h at time instant k − 1. The likelihoods with respect to the visibility of the j-th transmitter, i.e., the factors in the product in (41), can be calculated as the expectation value of the belief that the transmitter is visible or not,

p(z^{<j>}_V,k | M^{<i,j>}_h,k−1) = E[V^{<j>}_h] = (ν_0 + C^{<i,j>}_h,k−1) / (ν_0 + ν̄_0 + C^{<i,j>}_h,k−1 + C̄^{<i,j>}_h,k−1) if z^{<j>}_V,k ≠ 0, and E[V̄^{<j>}_h] = 1 − E[V^{<j>}_h] otherwise.   (42)

The likelihood with respect to the radio measurements is

p(z_R,k | x^{<i>}_u,0:k, z_V,1:k, z_R,1:k−1) ≈ ∏_{j=1}^{N^a_TX,k} Σ_{ℓ=1}^{N_p,Tx} w^{<i,j,ℓ>}_k−1 p(z_R,k | z_V,k, x^{<i>}_u,k, x^{<i,j,ℓ>}_TX,k),   (43)

where we set the likelihood p(z_R,k | z_V,k, x^{<i>}_u,k, x^{<i,j,ℓ>}_TX,k) for the j-th transmitter to one if the j-th entry in z_V,k is zero. If that entry in z_V,k is non-zero, the likelihood can be calculated following (13), where the indices for the transmitters need to be adapted. A derivation of (43) following [29] can be found in Appendix II. Finally, plugging (41) and (43) into (40), the weights can be calculated as

w^{<i>}_k ∝ w^{<i>}_{k−1} ∏_{j=1}^{N^a_TX,k} p(z^{<j>}_V,k | M^{<i,j>}_h,k−1) Σ_{ℓ=1}^{N_p,Tx} w^{<i,j,ℓ>}_k−1 p(z_R,k | z_V,k, x^{<i>}_u,k, x^{<i,j,ℓ>}_TX,k).   (44)

From (44) follows the weight update for the transmitters: the weight of the ℓ-th particle in the transmitter particle filter for the j-th transmitter and the i-th user-map particle is

w^{<i,j,ℓ>}_k ∝ w^{<i,j,ℓ>}_k−1 p(z_R,k | z_V,k, x^{<i>}_u,k, x^{<i,j,ℓ>}_TX,k).   (45)

A sketch of the combined weight update is given below.
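A minimal sketch of the user-map particle weight update (44) follows: the visibility likelihood multiplies the Beta-mean factors of (42) over all mapped transmitters, and the radio likelihood marginalizes over the transmitter particles as in (43). VisibilityMap from the earlier sketch and the per-particle likelihood lik_fn are assumed helpers; this is illustrative, not the authors' implementation.

def visibility_likelihood(vmap, hexagon, z_v):
    """z_v[j] non-zero means transmitter j is currently visible (Fig. 5)."""
    lik = 1.0
    for j, entry in enumerate(z_v):
        p = vmap.p_visible(hexagon, j)
        lik *= p if entry != 0 else (1.0 - p)
    return lik

def radio_likelihood(z_r, z_v, tx_particles, tx_weights, lik_fn):
    """Product over visible transmitters of the weighted sum of the
    likelihoods of their transmitter particles; invisible transmitters
    contribute a factor of one."""
    lik = 1.0
    for j, entry in enumerate(z_v):
        if entry == 0:
            continue
        toa, aoa = z_r[entry - 1]          # entry indexes into z_R,k
        lik *= sum(w * lik_fn(toa, aoa, x)
                   for w, x in zip(tx_weights[j], tx_particles[j]))
    return lik

def update_user_map_weight(w_prev, vmap, hexagon, z_v, z_r,
                           tx_particles, tx_weights, lik_fn):
    return (w_prev
            * visibility_likelihood(vmap, hexagon, z_v)
            * radio_likelihood(z_r, z_v, tx_particles, tx_weights, lik_fn))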
From (44), the weight update for the transmitters follows: the weight of each particle in the transmitter particle filter for the j-th transmitter and the i-th user-map particle is updated accordingly. The increase in computational complexity of the particle filter derived above, compared to the standard Channel-SLAM particle filter, is limited to evaluating the terms in (42). For each user-map particle and each hexagon visited by that particle, the two parameters of a Beta distribution need to be stored for every transmitter. However, information on the visibility of transmitters may be used not only in the particle filter, but also for map matching when maps are exchanged, and for data association of transmitters as in [30].

A. SIMULATION SCENARIO

A top view of a mall serving as the indoor simulation scenario is depicted in Fig. 7. In the scenario, there is one physical transmitter, depicted by the red triangle and labeled Tx. It continuously transmits a signal which is known to the receiver and has a bandwidth of 100 MHz. The center frequency of the signal is 1.9 GHz, and the signal energy is spread uniformly across the signal bandwidth around the center frequency. The thick black lines in the scenario represent walls, which reflect the transmit signal. The thick black dots model point scatterers, which spread the energy of an impinging signal uniformly in all directions. Based on the locations of the physical transmitter and of the walls and scatterers, we can calculate a channel impulse response (CIR) for every user location with ray-tracing. For the simulations, we incorporate virtual transmitters of orders one and two, corresponding to single and double reflections and/or scattering of the transmit signal. For each user location, the CIR is convolved with the transmit signal and AWGN is added to create the received signal. The update rate of the receiver is 10 Hz; a sampled snapshot of the received signal is thus fed to the KEST estimator every 100 ms. We also use a gyroscope at the receiver, which is incorporated as the control input in the user movement model in (12). For the evaluations, we assume the starting location and velocity of the user to be known within their local coordinate system. The locations of the walls, the scatterers, and all transmitters are unknown to the user.

A large hexagon size in the visibility map leads to a large discretization error. If the hexagon size is very small, it becomes unlikely that a user revisits hexagons, and the memory requirement for the map is large. We have found in our simulations that a hexagon side length of 2 m is a good tradeoff and set the size accordingly. The number of user-map particles in the simulations is 800. The number of transmitter particles differs for every user-map particle, every time instant, and every transmitter. When a new transmitter is initialized, the number of transmitter particles is based on the radio measurement. This number is subsequently adapted to the uncertainty about the transmitter's state with a grid-based particle reduction method from [42] as the user traverses the scenario: the lower the uncertainty about a transmitter state, the fewer particles are used. For the first user track, depicted in blue in Fig. 7, the maximum number of transmitter particles we observed for one transmitter and one user-map particle was 864. The mean number of transmitter particles for that transmitter while it was visible was 492. The physical transmitter in the first user track is initialized with 89 particles for each user-map particle.
Averaging over the user-map particles, this number drops to 11 after a traveled distance of approximately 77 m, and stays at approximately 10 after a traveled distance of approximately 139 m. During the first user track, 47 different transmitters were detected by KEST. For the other four tracks, the overall numbers of transmitters detected by KEST were 34, 23, 33, and 30, respectively.

B. SIMULATION EVALUATIONS

The simulation results are depicted in Fig. 8. For each of the five tracks, the positioning error, namely the mean absolute error (MAE) of the user position, is plotted by the solid lines versus the traveled distance for two cases. The curves in blue, denoted by 'Without Visibility Mapping', are the MAEs of the standard Channel-SLAM algorithm if no visibility mapping is performed. For the curves in red, denoted by 'With Visibility Mapping', the new particle filter derived in Section III is used. It incorporates the visibility information from a prior transmitter map that is created based on the scenario. We use a prior map to exploit the full potential of visibility mapping. This prior visibility map could stem from a different user and is structured in hexagons as described in Section III-A. If the j-th transmitter in the map is visible from the center of the h-th hexagon, the parameters of the corresponding Beta distributions in the prior map are set to reflect that visibility. The prior map comprises visibility information for all transmitters up to an order of two. For associating signal components detected by the KEST estimator in Channel-SLAM with transmitters from the prior map, the data association method from [30] is used. In addition to the MAEs, the 95th and 5th percentiles are plotted for each track by the dotted and dashed lines, respectively. Hence, the error was below the dotted lines in 95% of the simulation runs, and below the dashed lines in 5% of the runs.

Since the starting location and velocity are assumed to be known, all MAE curves in Fig. 8 start at an error of zero at the first time instant and increase from there. The error with visibility mapping is then, in almost all tracks, considerably below the error without visibility mapping. The increasing error near the end of Track 1, Track 3, and Track 5 can be explained by a high geometrical dilution of precision (GDoP): at these positions, all detected signal components arrive at the user from a similar direction. All error curves in Fig. 8 are averaged over 250 simulation runs (a short sketch of this aggregation is given below). The MAEs at the end of the first track are on the order of four to ten meters. For comparison, we also performed simulations where a user does not perform Channel-SLAM, but estimates their state only based on the gyroscope and the movement model, i.e., no radio signals are used in this case. Even if the velocity is known perfectly at the beginning of the track, the corresponding MAE grows quadratically and finally is on the order of 320 m for the first track. The positioning performance is thus dominated by the movement model only for a short distance at the beginning of a run, when the uncertainty for all transmitters is high. Once the transmitter state estimates have converged, the influence of the movement model is small. The evaluations above consider an approximation of a converged prior map of transmitter visibilities that is known to the user, to illustrate solely the effect of mapping and using information about transmitter visibilities.
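As referenced above, the curves in Fig. 8 are aggregates over independent simulation runs. The following Python sketch shows this aggregation on synthetic data standing in for the actual Channel-SLAM output; array shapes and names are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the per-run positioning errors:
# errors[r, t] = absolute position error of run r at travelled-distance bin t.
n_runs, n_bins = 250, 200
errors = np.abs(rng.normal(loc=0.0,
                           scale=np.linspace(0.1, 5.0, n_bins),
                           size=(n_runs, n_bins)))

# Mean absolute error and the 5th/95th percentiles across runs, per distance bin,
# corresponding to the solid, dashed and dotted lines in Fig. 8.
mae = errors.mean(axis=0)
p05 = np.percentile(errors, 5, axis=0)
p95 = np.percentile(errors, 95, axis=0)

print(f"MAE at end of track: {mae[-1]:.2f} m "
      f"(5th/95th percentiles: {p05[-1]:.2f} m / {p95[-1]:.2f} m)")
```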
In reality, such a prior map is created by users through crowdsourcing. A first user maps the transmitter states and visibilities with Channel-SLAM and hands the map over to a second user. Since the second user does not, in general, know their starting location, they need to estimate the translation and rotation relating the coordinate systems of the two users in order to be able to use the map. This estimation is referred to as map matching and can be performed based on the transmitter states estimated by the second user and those in the map, as in [43] (a generic sketch of such an alignment is given below). Once a map match has been found, the second user exploits the information in the map and updates it with their own observations. The map is then handed over to a third user, who again performs map matching, uses and updates the information in the map, and hands the map on to a fourth user, and so on. We refer to this approach as collaborative Channel-SLAM.

We have evaluated collaborative Channel-SLAM for ten different users walking on the tracks plotted in Fig. 9. The start and end positions of the u-th track are labeled Su and Eu, respectively. The MAEs for five different users, averaged over 250 simulation runs, are plotted as examples in Fig. 10 for two cases with the solid lines. The u-th track is labeled Track cu. In the first case, represented by the blue lines, no visibility mapping is performed. In the second case, represented by the red lines, the visibilities of transmitters are mapped and exploited as in Section III. The dotted and dashed lines represent the 95th and 5th percentiles, respectively. The positioning errors for the cases where visibility information is used are, in almost all cases, below the errors for the case without visibility information. In the collaborative approach, visibility information on the one hand improves the positioning performance directly, as in Fig. 8. On the other hand, a better positioning performance leads to more accurate and robust map matching, which in turn improves the positioning performance as well. The 5th percentiles are very similar for most of the tracks in Fig. 10, while the 95th percentiles are lower most of the time if visibility information is used. We see in the simulations that user particles with a large bias in the position estimate are resampled away faster in the particle filter. This leads to the conclusion that visibility information mainly improves the robustness of Channel-SLAM by preventing large biases in the user position estimates.
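As noted above, map matching amounts to estimating the rotation and translation between the coordinate systems of two users from corresponding transmitter estimates. The method actually used is the one from [43]; the sketch below is instead a generic least-squares rigid alignment in 2-D (Kabsch/Procrustes style), shown only to illustrate the kind of computation involved, with hypothetical variable names.

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rotation R and translation t such that R @ src_i + t ~= dst_i.
    src, dst: (N, 2) arrays of corresponding transmitter positions in the two frames."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation (Kabsch algorithm).
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    rot = vt.T @ np.diag([1.0, d]) @ u.T
    trans = dst.mean(axis=0) - rot @ src.mean(axis=0)
    return rot, trans

# Example: the second user's frame is rotated by 30 degrees and shifted by (5, -2).
theta = np.deg2rad(30.0)
true_rot = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
tx_user1 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [3.0, 6.0]])
tx_user2 = tx_user1 @ true_rot.T + np.array([5.0, -2.0])

rot_est, trans_est = rigid_align_2d(tx_user1, tx_user2)
print(np.allclose(rot_est, true_rot), np.round(trans_est, 3))
```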
V. CONCLUSION

Channel-SLAM is a multipath-assisted positioning SLAM algorithm. The position of the user is estimated jointly with building a map of physical and virtual transmitter locations based on the estimated parameters of signal components. In this paper, we have augmented such a transmitter location map with information about the locations from which a transmitter is visible. Information on transmitter visibilities is mapped in a hexagonal grid map. For each hexagon, the probability that a transmitter is visible is represented by a Beta distribution. We have derived a new particle filter for Channel-SLAM that incorporates both the signal components' estimated parameters and estimates of the transmitter visibilities. If a user revisits a hexagon, or if a prior map with transmitter visibilities is available, the visibility information is used to update the weights in the particle filter. For the transmitter visibilities, no new measurements as such are required; instead, the visibilities are estimated based on information that is already available from the channel estimator.

Our simulations show that if a converged map of visibility regions is available as prior information, it improves the positioning performance of Channel-SLAM for a single user considerably. In the case of collaborative Channel-SLAM, where a map of visibility regions is estimated by different users successively, visibility information additionally improves the positioning performance through higher map-matching robustness.

APPENDIX I. WEIGHT DERIVATION

In the following, we derive (32) from (30). The numerator in (30) is expanded in (46) and (47), where we make use of the fact that the current user-map particle does not depend on previous measurements and control inputs if its state at time instant k - 1 is known; in addition, we do not incorporate information from future time instants. Inserting (47) into (46) yields (48), exploiting that the denominator in the second line of (46) does not depend on the user or on the visibility map. The weights in (32) result from inserting (48) and (31) into (30).

APPENDIX II. RADIO MEASUREMENT LIKELIHOOD

In this section, the likelihood in (43) is derived. First, we marginalize over the transmitter state at time instant k. With the assumption of independent transmitter measurements, the radio likelihood is

p(z_{R,k} | x^{<i>}_{u,0:k}, z_{V,1:k}, z_{R,1:k-1})
= prod_{j=1}^{N_{aTX,k}} integral p(z_{R,k} | z_{V,1:k}, z_{R,1:k-1}, x^{<i>}_{u,0:k}, x^{<i,j>}_{TX,k}) p(x^{<i,j>}_{TX,k} | x^{<i>}_{u,0:k}, z_{V,1:k}, z_{R,1:k-1}) dx^{<i,j>}_{TX,k}.    (49)

The first factor in the integral of (49) is

p(z_{R,k} | z_{V,1:k}, z_{R,1:k-1}, x^{<i>}_{u,0:k}, x^{<i,j>}_{TX,k}) = p(z_{R,k} | z_{V,k}, x^{<i>}_{u,k}, x^{<i,j>}_{TX,k}).    (50)

The radio measurement z_{R,k} is conditionally dependent on the visibility measurement z_{V,k} for a correct association of the radio measurements to the transmitters; it is independent of any previous states or measurements. The second factor in the integral of (49) can be factorized further; the last term in the last line of (52) is obtained from the movement model of the transmitter given by (11). Thus, inserting (50) and (52) into (49) completes the derivation of (43).
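In a particle implementation, the integral over the transmitter state in (49) is naturally approximated by a weighted sum over the particles of the corresponding transmitter particle filter. The Python fragment below sketches this approximation with a simple Gaussian range likelihood standing in for (13); it is an illustration under these assumptions, not the paper's exact likelihood.

```python
import numpy as np

def radio_likelihood_mc(user_pos, tx_particles, tx_weights, measured_range, sigma=1.0):
    """Monte Carlo approximation of one factor of (49): the integral over the
    transmitter state is replaced by a weighted sum over transmitter particles.
    A Gaussian range likelihood is used here as a placeholder for (13)."""
    predicted = np.linalg.norm(tx_particles - user_pos, axis=1)
    lik = (np.exp(-0.5 * ((measured_range - predicted) / sigma) ** 2)
           / (sigma * np.sqrt(2.0 * np.pi)))
    return float(np.sum(tx_weights * lik))

# Example with a small transmitter particle cloud centred near (12, 5).
rng = np.random.default_rng(2)
tx_particles = rng.normal(loc=[12.0, 5.0], scale=0.5, size=(200, 2))
tx_weights = np.full(200, 1.0 / 200)
user_pos = np.array([0.0, 0.0])

print(radio_likelihood_mc(user_pos, tx_particles, tx_weights, measured_range=13.0))
```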
Turkish Adaptation and Psychometric Evaluation of the Relationship Mindfulness Measure in an Emerging Adult Sample While latest research has accepted the importance of mindfulness in mental health, its role in interpersonal well-being receives less attention, including the necessary measurement tools. This study aimed to translate the Relationship Mindfulness Measure (RMM) into Turkish and explore its psychometric properties with unmarried Turkish emerging adults. A total of 191 university students (age range 18–29, M = 22.90, SD = 2.78) in committed romantic relationships participated in this study. The convergent validity analysis revealed a positive relation of RMM with trait mindfulness (r = .47, p < .001) and a negative relation with negative affect (r = −.21, p = .05). Internal and test-retest reliability of RMM was acceptable (α = .78, r = .67). The unidimensional factor structure of 5-item RMM was supported, and no common method variance was observed. Overall, findings indicated that Turkish RMM is a valid and reliable measure to assess emerging adults’ relationship mindfulness. Mindfulness has roots in ancient Eastern spiritual practices and has received significant attention in the field of psychology in the last decade.Simply put, mindfulness means paying attention to the present moment on purpose and nonjudgmentally, leading to an increased awareness of thoughts, feelings, and bodily sensations (Kabat-Zinn, 1994).It helps individuals with clinical and non-clinical problems by promoting acceptance without judgmentally reacting to such experiences (Grossman et al., 2004).By some, mindfulness is considered an alternative treatment to pharmacology, and an increasing number of studies have emerged in the field of mindfulness, also studying the underlying reasons for its therapeutic effect by focusing on diverse psychological, biological, and social aspects (Shonin & Van Gordon, 2016).In 2022, more than 1400 journal articles were published in academic journals (American Mindfulness Research Association [AMRA], 2023), which is an important indicator of its increasing popularity. 
However, interpersonal aspects of mindfulness, particularly in the context of romantic relationships, have received less attention (Karremans et al., 2017).Considering that romantic relationships are crucial to individuals' well-being (Gómez-López et al., 2019), mindfulness has been recognized as a significant aspect of relationship research.Trait mindfulness is important for increased relationship satisfaction (McGill et al., 2016) and higher marital quality (Lenger et al., 2017).Nevertheless, simply tending to be intrapersonally mindful may not be sufficient in the context of romantic relationships (Kimmes et al., 2018).While trait mindfulness may influence an individual's behavior in a romantic relationship, it might not necessarily mean that they will be mindful in the specific context of the relationship.Therefore, various context-specific mindfulness measures have been developed to assess an individual's tendency to be mindful in specific contexts, such as Interpersonal Mindfulness in Parenting Scale (Duncan, 2007), Sexual Five-Facet Mindfulness Questionnaire (Adam et al., 2015), Interpersonal Mindfulness Scale, (Pratscher et al., 2019), The Mindfulness in Couple Relationships Scale (McGill et al., 2022), Mindfulness in Marriage Scale (Erus & Tekel, 2020), and similarly, Relationship Mindfulness Measure (RMM; Kimmes et al., 2018) to evaluate each person's disposition for mindfulness in the setting of romantic relationships. Despite limited research on context-specific relationship mindfulness, available findings are encouraging.For example, even after controlling for trait mindfulness, it has been demonstrated that relationship mindfulness, as measured by RMM, is related to one's psychological functioning and the partner's general psychological health (Kimmes et al., 2020).Moreover, it outperformed trait mindfulness when describing shifts in the quality of romantic relationships, both positively and negatively (Kimmes et al., 2018;Stanton et al., 2021).In addition, in their dyadic daily experience study, Gazder and Stanton (2020) discovered that practicing relationship mindfulness daily leads to more positive relationship behaviors.They found that one's partner's daily relationship mindfulness buffered the negative effects of one's insecure attachment, especially for attachment avoidance.Similarly, Kimmes et al. (2018) and Jaurequi et al. (2022) found negative associations of relationship mindfulness to insecure attachments, which hold a long-standing link to countless adverse relationship outcomes (Hazan & Shaver, 1987). Other studies have found that relationship mindfulness is positively linked to higher sexual and relationship satisfaction (Fincham, 2022;Jaurequi et al., 2022) and negatively correlated to negative emotional symptoms (Fincham, 2022).Additionally, relationship mindfulness has been identified as a mediator between satisfying romantic relationships and decreased negative emotional symptoms, which, in turn, are associated with decreased sleep problems (Jaurequi et al., 2022).It has also been identified as a mediator, linking childhood maltreatment to positive and negative relationship quality (Fitzgerald, 2022). 
As demonstrated by the increasing volume of studies in this field, future studies employing relationship mindfulness may provide essential insights into relationship research.Therefore, the need to adapt RMM to Turkish culture is becoming significant for Turkish relationship literature.Türkiye, with a population of more than 85 million considerably young people (T Üİ K, 2022a), has a complex structure in the individualisticcollectivist culture continuum; it is not possible to position Türkiye in a precise place in this continuum (Göregenli, 1995).However, it is apparent that relationships with family members and other people, especially with romantic partners, are very significant in Turkish culture.67.6% of Turkish people stated that their families were the reason they were happiest (T Üİ K, 2022b).In Turkish culture, starting a family and maintaining family unity are both individually and socially significant.Romantic relationships before marriage also occupy an essential place in the Turkish relationship literature as they prepare individuals for starting a family.As the age of first marriage has increased for both genders in Turkey over the years (T Üİ K, 2022c), the number of pre-marital relationships has been increasing, especially among emerging adults. Emerging adulthood, encompassing the transition period from adolescence to adulthood, shows distinct characteristics with regard to the importance of romantic relationships (Arnett, 2000).People at this developmental stage, between the ages of 18 and 29, explore their identities through romantic love (Arnett, 2000) and frequently contemplate substantial questions about finding the right person to spend their life with and maintain a healthy relationship (Fincham & Cui, 2010).However, despite the well-established importance of romantic bonds, emerging adults' experiences and conception of this stage may differ according to cultural variations (Uçar & Demir, 2023).In the context of Türkiye, Çok and Atak (2015) revealed that emerging adulthood seems most applicable to those in urban groups who continue their education.Also, Turkish emerging adults' non-marital romantic relationships were examined from different aspects, such as factors predicting non-marital romantic relationship satisfaction (Barutçu Yıldırım et al., 2021;Saraç et al., 2015), romantic relationship beliefs (Küçükarslan & Gizir, 2014), and romantic relationship patterns (Uçar & Demir, 2023).However, studies examining relational mindfulness in the context of non-marital romantic relationships in a Turkish emerging adult sample are almost non-existent.Similarly, this variable has been addressed in a limited number of studies in the context of marriage in Turkish literature (e.g., Deniz et al., 2020).This apparent gap in the literature might stem from the scarcity of reliable and valid instruments that measure relational mindfulness.Previously, a mindfulness assessment tool known as the Mindfulness in Marriage Scale (Erus & Tekel, 2020) was designed for Turkish married couples in the context of romantic relationships.Nevertheless, to the best of our knowledge, RMM will be the initial assessment tool that focuses on the relationship mindfulness of unmarried emerging adults in Türkiye. RMM was developed by Kimmes et al. 
(2018) to address the significance and differential impact of interpersonal mindfulness in romantic relationship contexts.They applied Item Response Theory analysis to Mindful Attention Awareness Scale (MAAS) and modified those items for the context of romantic relationships, resulting in a valid and reliable unidimensional scale measuring relationship mindfulness, which is related to but a separate construct from trait mindfulness.A comprehensive set of analyses they conducted revealed that RMM works consistently over time to measure the same underlying construct.RMM includes five items (see Table 2 for the items) rated on a six-point scale with anchor points ranging from 1 (almost always) to 6 (almost never).All items are reverse-coded.Higher scores indicate higher relationship mindfulness. Considering the need to measure relationship mindfulness, this study aims to adapt RMM into Turkish and to examine its psychometric features, especially for unmarried Turkish emerging adult university students.These tests included several indicators of validity and reliability.We hypothesized that the Turkish RMM would result in a one-factor solution as the original scale and would exhibit no common method variance.We also expected a high test-retest reliability coefficient. Another hypothesis was a positive correlation between RMM and MAAS.The choice of MAAS for this study was guided by its established reputation for measuring mindfulness since its development and because MAAS and RMM are theoretically related.As mentioned earlier, in the development of RMM, Kimmes et al. (2018) applied Item Response Theory analysis to MAAS.They modified those items for the context of romantic relationships, resulting in a 5item RMM.Therefore, MAAS acts as a foundation for measuring mindfulness, strengthens the coherence in different contexts, and serves as a valid instrument for measuring the convergent validity of RMM.Hereupon, we expected a positive significant relationship between MAAS and RMM. Furthermore, a negative association between RMM and Negative Affect (NA) was hypothesized.Compared to those with higher mindfulness abilities, individuals with high negative affect feel psychological discomfort despite the absence of an external stressor (Watson & Clark, 1984), which conflicts with the idea of mindfulness.Also, as the term mindfulness has been linked to the regulation of dense negative emotions rather than a boost of positive emotions (Brown & Ryan, 2003), this study exclusively employed the NA subscale while addressing convergent validity and omitted Positive Affect (PA) subscale in the convergent validity analysis. A longitudinal study has recently supported this theory, demonstrating that mindfulness leads to a significant decrease in NA over three months but no increase in PA (Jose & Geiserman, 2023).Hereupon, while we linked relationship mindfulness to NA, we separated it from PA. PA was exclusively used to address common method variance (CMV), for which we needed a construct that is theoretically unrelated to relationship mindfulness. 
Participants In this study, a sample of non-married university students who were currently in a romantic relationship were employed.Participants were selected using a convenience sampling method from a large state university in the central part of Türkiye.The sample included 191 students (69.6% female, 29.8% male, and .5% non-binary) aged 18 to 29 (M = 22.90, SD = 2.78).Of the participants, 145 were undergraduates, 11 were master's students, and 12 were doctoral students.Their romantic relationship length differed from one to 98 months (M = 23.12,SD = 19.67). Instruments Relationship Mindfulness Measure (RMM).RMM assesses the individual's tendency to be mindful in the context of romantic relationships (Kimmes et al., 2018).It is a targeted instrument to capture mindfulness in a relationship context, which was more effective than trait mindfulness (Kimmes et al., 2018).In the development of this instrument, first, a measure of trait mindfulness, MAAS, was analyzed using Item Response Theory.The five items that emerged were adapted to romantic relationships, resulting in the creation of RMM. The scale has a one-factor structure and five items rated on a 6-point scale.The questionnaire includes statements such as "When I am with my partner, I find myself saying or doing things without paying attention" or "I get so focused on what I want my relationship with my partner to be like that I lose touch with what I am doing right now to get there."Higher total mean scores on the questionnaire correspond to higher levels of mindfulness in romantic relationships.In the original study by Kimmes et al. (2018), the coefficient alpha was found to be .86at Time 1 and .93 at Time 2 and interpreted as indicating strong internal consistency.The correlation between Time 1 and Time 2 was found to be .60(p < .01),which was interpreted as having acceptable test-retest reliability by Kimmes et al. (2018) due to its significance value. Mindful Attention Awareness Scale (MAAS).MAAS measures dispositional mindfulness, which is defined as open or receptive awareness of and attention to what is taking place in the present (Brown & Ryan, 2003). The scale has a one-factor structure and 15 items rated on a 6-point scale.Higher total mean scores indicate higher levels of mindfulness.In the original study, the coefficient alpha was found to be .82.The Turkish adaptation of MAAS was conducted by Özyes ¸il et al. (2011).The coefficient alpha was found to be .80.The test-retest reliability of the scale was measured at three-week intervals and was calculated to be .86. Positive Affect and Negative Affect Subscale (PANAS).Having a bifactorial structure, the Positive Affect and Negative Affect Scale (PANAS) measures two affective state dimensions: Positive Affect (PA) and Negative Affect (NA) (Watson et al., 1988).A high NA score, measured with ten items and rated on a 5-point scale, reflects negative states such as subjective distress and unpleasant experiences.This study exclusively employed the NA subscale in the validity analysis. PA, reflecting positive states and rated on a 5-point scale, was employed only in analyzing common method variance as a marker variable.While PA subscale originally comprised ten items, this study selectively employed the four items with higher standardized factor loadings (>.5), representing the construct most effectively.This choice aligns with the small number of items in RMM. 
The original study found the coefficient alpha to be .87for NA and .88 for PA.The test-retest reliability was found to be .71for NA and .68 for PA.The psychometric properties of the Turkish form of PANAS were investigated by Gençöz (2000).The factor structure was consistent with the original scale.The coefficient alpha was .86 for NA and .83for PA.The test-retest reliability was .54 for NA and .40 for PA. Procedure Translation of RMM.For the Turkish form of RMM, first, the original scale was translated into Turkish by using a committee approach (see Douglas & Craig, 2007 for a review); therefore, including a team of experts consisting of two linguists and three academicians with doctorate degrees in psychological counseling, who were also highly proficient in English.After gathering five translations, the researchers discussed and agreed on the best translation for each item.Then, they sent the translation they chose as the best translation, along with all the other incoming translations, to three more experts.Experts were expected to evaluate whether the translation wholly and accurately captures the original text's meaning.They rated each translation on a 5point scale (1 = Strongly disagree, 5 = Strongly agree) and wrote additional comments when their ratings were not five.One expert suggested minor word changes for items 1 and 2. Two experts suggested changes in item 5.According to their suggestions, the item translations were developed. Data Collection.Before data collection, necessary permissions were taken from the authors of the instruments, and the Human Subjects Ethics Committee of the Middle East Technical University, identified by a protocol number 339-ODTU-2020 in November 2020.Later, data were collected via an online survey platform, METU Survey.Participants were invited to the study with the help of faculty members.Faculty members, who were informed about the study and shared ethics committee approval, e-mailed the study announcement and the survey link to their classes.Each voluntary participant filled out a written informed consent form before completing the survey. Data Analysis Before the primary analysis, a series of initial analyses were carried out.By employing SPSS version 28 (IBM Corp., 2021), the data underwent screening using frequency, minimum, and maximum values.Subsequently, the assumptions of factor analysis were checked, and the data was cleaned by considering any missing values and univariate and multivariate outliers. Later, descriptive analyses were conducted.To check the validity of RMM, the study variables were subjected to Pearson correlation analyses and Confirmatory Factor Analyses (CFA) with maximum likelihood estimation.The Cronbach's alpha value of the scale and correlation coefficient values between total scores of RMM within 3-week internals were examined to assess the reliability. To address common method variance (CMV), we implemented a post hoc statistical detection technique-the CFAbased marker variable technique-following the steps provided by Williams et al. 
(2010).We constructed three different structural equation models (CFA model, baseline model, and Method-C model using maximum likelihood estimation), each incorporating RMM and a marker variable (i.e., positive affect) along with their respective indicator variables.RMM had five indicator -observed-variables, and PA had four (items 3, 5, 12, and 17).Although the original PA subscale comprised ten items, only the four items with standardized factor loadings exceeding .5, which best represented the construct, were utilized.This decision aligns with Lindell and Whitney's recommendations ( 2001), as a good marker variable should closely resemble the criterion regarding semantic content, format, and a small number of items. In the CFA model, RMM and PA were correlated.Subsequently, the Baseline Model was constructed, mirroring the CFA model but with the restriction that PA did not correlate with RMM.Additionally, PA item factor loadings and error terms were fixed using the unstandardized factor loadings and unstandardized error variances from the CFA model.Then, Model-C was created by adding additional factor loadings from PA to items of RMM.These new loadings are set equal to each other.Later, the Baseline Model and the Model-C were compared using the chi-square difference test to determine the presence of CMV associated with the marker variable.R (R Core Team, 2022) and RStudio (Posit team, 2023) were employed while conducting all CFA-based analyses. Statistical Analysis Criteria. A commonly used threshold for assessing univariate normality is that data is considered normal if it exhibits skewness values within the range of ± 3 and kurtosis values within the range of ±10 (Kline, 2016).Following the guidelines of Tabachnick and Fidell (2013) for univariate outliers, standardized Z-scores of the mean values exceeding ± 3.29 were considered outliers.Multivariate outliers were identified by examining Mahalanobis distances using a critical chi-square value with a significance level of p < .001(Tabachnick & Fidell, 2013).The multicollinearity examination utilized the correlation coefficient values between the study constructs.The highly correlated constructs with correlation values exceeding .90 were accepted as an indication of potential multicollinearity (Tabachnick & Fidell, 2013). Considering that the minimum required sample size for measurement models shows great variability depending on factors such as the number of indicators, latent variables, missing data, or complexity of the model (Wolf et al., 2013), we determined the sufficiency of our sample size in two ways.Firstly, Bacchetti (2010) recommended using a size similar to what has proven effective in comparable studies.In our case, we considered the initial development study of RMM that employed 185 participants.Secondly, we considered a simulation study that investigated sample size requirements for structural equation models.Wolf et al. (2013) demonstrated that the minimum sample size for models with factor structures similar to ours (unidimensional scale consisting of five indicators) required 50 to 190 participants.Overall, we considered an approximate sample size of 190 sufficient to test our model. 
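The univariate and multivariate screening criteria listed above can be checked with a short script. The following Python sketch illustrates those checks on hypothetical item-level data; it is not the SPSS/R workflow actually used for the analyses, and the simulated data are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
items = rng.integers(1, 7, size=(191, 5)).astype(float)   # hypothetical 5-item responses

# Univariate normality: skewness within +/-3 and kurtosis within +/-10 (Kline, 2016).
print("skewness:", np.round(stats.skew(items, axis=0), 2))
print("kurtosis:", np.round(stats.kurtosis(items, axis=0), 2))

# Univariate outliers: standardized scores of the scale means beyond +/-3.29.
means = items.mean(axis=1)
z = (means - means.mean()) / means.std(ddof=1)
print("univariate outliers:", int(np.sum(np.abs(z) > 3.29)))

# Multivariate outliers: squared Mahalanobis distances tested against chi-square, p < .001.
centered = items - items.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(items, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
cutoff = stats.chi2.ppf(1 - 0.001, df=items.shape[1])
print("multivariate outliers:", int(np.sum(d2 > cutoff)))
```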
When interpreting the effect sizes, the guidelines outlined by Cohen (1988) were employed: .10 ≤ r < .30 for a small, .30 ≤ r < .50 for a medium, and r ≥ .50 for a large effect size. Reliability levels were considered acceptable if Cronbach's alpha values were .60 or higher and good if .70 or higher (Hair et al., 2010). The model fit was evaluated using several fit indices, including the χ2 test, the comparative fit index (CFI), the Tucker-Lewis Index (TLI), the standardized root mean square residual (SRMR), and the root mean square error of approximation (RMSEA). Based on the thresholds provided by Hu and Bentler (1999), with specific attention to the comprehensive insights by Kline (2016), CFI and TLI values of .90 indicated an acceptable fit and values of .95 or above a good fit, while SRMR and RMSEA values of .10 indicated an acceptable fit and values of .05 or below a good fit. However, considering that empirical studies do not recommend universal cut-off points for RMSEA, and that it varies greatly based on sample size, number of variables, and df values (Breivik & Olsson, 2001; Chen et al., 2008), less emphasis was put on the RMSEA criterion.

While addressing CMV using the CFA-based marker technique, the criteria outlined by Williams et al. (2010) were used when deciding upon the marker variable. A construct that is theoretically unrelated to RMM but elicits similar response tendencies (i.e., shares the same source of bias) was determined to be PA. This decision is based both on theory and on the low correlations between RMM and PA in this study. As explained earlier, mindfulness is theorized to be associated with regulating negative emotions rather than boosting positive ones (Brown & Ryan, 2003), which is supported by empirical evidence (Jose & Geiserman, 2023). Although, to our knowledge, no other empirical evidence is available (except for the low correlations in this study) concerning the relation between positive affect and the novel concept of relationship mindfulness, we assume this theoretical irrelevance also holds for mindfulness in romantic relationships.

The presence of CMV was determined by comparing the Baseline Model and the Method-C Model. If the Method-C Model significantly outperformed the Baseline Model in the chi-square difference test, this indicated method variance associated with the marker variable (Williams et al., 2010). χ2, df, and CFI values are reported for each model, along with the chi-square difference test results.

Construct Validity of RMM

Before testing the unidimensional factor model of the Turkish RMM with CFA, the assumptions of factor analysis were first checked. For the normality assumption, skewness and kurtosis values were examined. Skewness values ranged between −1.42 and −.54, while kurtosis values were between −.60 and 1.39. Univariate outliers were examined using standardized Z-scores of the mean values; no scores exceeded ±3.29, so there were no outliers in the data (p < .001). For multivariate outliers, the Mahalanobis distance was examined; results showed that there were no multivariate outliers in the data (p < .001). To check multicollinearity, correlations between the items were examined. Pearson's correlation coefficients ranged between .26 and .54 (see Table 1). No correlation coefficient exceeded .90, so no multicollinearity was observed in the scale. All the assumptions of factor analysis were met.
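The reliability criteria above can likewise be evaluated directly from the item responses. Below is a minimal Python sketch of Cronbach's alpha and a test-retest correlation on hypothetical data; the data-generating step is invented purely for illustration and does not reproduce the study's responses.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical responses of 63 participants at two timepoints, three weeks apart.
rng = np.random.default_rng(4)
latent = rng.normal(4.0, 1.0, size=(63, 1))                      # stand-in trait level
time1 = np.clip(np.round(latent + rng.normal(0, 1.0, (63, 5))), 1, 6)
time2 = np.clip(np.round(latent + rng.normal(0, 1.0, (63, 5))), 1, 6)

print("alpha:", round(cronbach_alpha(time1), 2))
r, p = stats.pearsonr(time1.mean(axis=1), time2.mean(axis=1))
print("test-retest r:", round(r, 2), "p:", round(p, 4))
```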
Convergent Validity of RMM

After data screening, two participants were removed since they had missing values for more than 5% of their answers. With the remaining data from 86 participants, Pearson correlations between the mean values of RMM, MAAS, and the NA subscale of PANAS were checked (see Table 1). As expected, RMM was found to be positively related to MAAS with a medium effect size (r = .47, p < .001) and negatively associated with the NA subscale with a small effect size (r = −.21, p = .05).

Internal Reliability and Test-Retest Reliability of RMM

Reliability analysis was administered to measure the internal consistency coefficient. Cronbach's alpha was found to be .78, which indicates acceptable reliability. Of these participants, 63 filled out RMM twice at a 3-week interval. The test-retest reliability coefficient of the Turkish scale was .67 (p < .001).

Common Method Variance

The CFA-based marker variable technique was implemented to investigate the presence of CMV. The first model, the CFA model including RMM and PA as a marker variable, showed the following goodness-of-fit results: a χ2 value of 29.58, a df value of 26, and a CFI value of .98. The Baseline Model, with fixed item loadings and error variances for the orthogonal PA factor (uncorrelated with RMM), revealed a χ2 value of 31, a df value of 30, and a CFI value of .99. The Method-C Model, having the same characteristics as the Baseline Model with the addition of fixed, equal loadings from PA to the RMM items, showed a χ2 value of 28.92, a df value of 29, and a CFI value of 1.00. The comparison of the Baseline Model with the Method-C Model using the chi-square difference test indicated that the models did not differ significantly (Δχ2 = 2.08, Δdf = 1, p = .15). Thus, it is concluded that CMV is not present.

Discussion

This study aimed to adapt RMM into Turkish and examine its psychometric features. The findings supported the effectiveness of using RMM to measure mindfulness in romantic relationships in Turkish culture. The measure's internal consistency was satisfactory, and the test-retest reliability coefficient of .67 was acceptable and consistent with the original study, in which a value of .60 was reported (Kimmes et al., 2018). The convergent validity results revealed moderately satisfactory outcomes. As expected, a moderate positive correlation between MAAS and RMM was observed, meaning that individuals with higher trait mindfulness are more likely to be mindful in their romantic relationships as well. This result is consistent with Zümbül and Okur's work (2021), in which a strong positive correlation existed between the same scales.

However, the correlation between RMM and the NA subscale was weaker. This might be due to conceptual limitations, such as the presence of potential moderators between relationship mindfulness and negative affect. For example, emotion regulation was previously shown to be linked to the relationship between mindfulness and negative affect (Chambers et al., 2008). Since studies on relationship mindfulness are scarce, these moderators might be explored further by using relationship mindfulness measures instead of trait mindfulness.
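Returning to the common method variance check reported above: the comparison of the Baseline and Method-C models reduces to a chi-square difference test, which can be reproduced with a few lines of Python. This is a generic computation, not the R model-fitting code used in the study.

```python
from scipy import stats

def chisq_difference_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Chi-square difference test between two nested models
    (e.g. the Baseline Model vs. the Method-C Model)."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = stats.chi2.sf(delta_chisq, df=delta_df)
    return delta_chisq, delta_df, p_value

# Values as reported above for the Baseline Model (chi2 = 31, df = 30)
# and the Method-C Model (chi2 = 28.92, df = 29):
print(chisq_difference_test(31.0, 30, 28.92, 29))
# -> (2.08, 1, ~0.149): the models do not differ, so no CMV is indicated.
```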
It was also discovered that the unidimensional factor structure of the Turkish RMM was parallel to the original version. The structural model indicated a good fit except for the RMSEA value, which also exceeded the ideal threshold in the original study (Kimmes et al., 2018). As mentioned earlier, empirical evidence is scarce to support universal RMSEA cut-off values such as .05 or .10 for determining satisfactory model fit (Chen et al., 2008), because small models with a limited number of variables and low df values are disadvantaged when it comes to RMSEA, frequently giving a misleading impression of a poorly fitting model (Breivik & Olsson, 2001; Kenny et al., 2015). Similarly, Hu and Bentler (1999) stated that RMSEA is not preferred if the sample size is smaller than 250. Accordingly, although the RMSEA value of our model was above the traditional cut-off points, less emphasis was placed on it in the interpretation of model fit. This decision considers the model's characteristics, such as including only five items, a low df value of five, and a sample size of 191 (N < 250).

In addition, the standardized regression weights of all the items were significant and ranged from .45 to .74. Researchers have previously indicated various cut-off points for item loadings in factor analysis. One accepted threshold is that standardized loading estimates should be .5 or higher (Hair et al., 2010). However, other researchers have opted for thresholds consistent with our results, such as a minimum loading of .4 (Mehmetoglu & Mittner, 2021), or have proposed that a factor is reliable in the presence of four or more loadings exceeding .6, regardless of the sample size used (Guadagnoli & Velicer, 1988).

When the RMM items were examined more closely, it was evident that item 3 had the lowest and item 5 the highest regression weight, as in the original scale (Kimmes et al., 2018). When the content of item 3 was examined to reflect upon its relatively lower loading, it was observed that, unlike the other items, it centers on one's future expectations regarding the relationship. In contrast, the remaining items address a lack of attention to the relationship in the present moment without providing alternative content for one's mind to wander toward. Overall, we conclude that these results support the 5-item RMM model.

As the body of literature on context-specific forms of mindfulness has been growing, relationship mindfulness, which was previously shown to be better at predicting relationship outcomes (Kimmes et al., 2018), might be an efficient alternative to trait mindfulness in relationship research. The current results showed that the Turkish RMM is a reliable and valid measure. Therefore, this study offers a new tool to study mindfulness in romantic relationships, especially for unmarried Turkish emerging adults.

Limitations

When interpreting the study findings, it is crucial to consider various limitations. To begin with, the use of convenience sampling might limit the generalizability of the results, given that it is a non-random sampling approach. Additionally, the majority of the participants in this study were women. It is also important to note that the validation of RMM relied on self-report scales, which might affect the internal validity of the results. As Kimmes et al.
(2018) mentioned, this might be especially prevalent for the mindfulness concept because people lack awareness of how often they lose touch with the present moment, making it unlikely that they rate themselves accurately on their mindfulness predisposition. Future studies might employ more objective measures to validate relationship-specific mindfulness.

In addition, the study involved collecting data from individuals who provided information about their romantic relationships. Dyadic data collection was not intended; however, we did not control whether both partners participated in the survey. This might pose a limitation to the assumption that the data are based on independent observations. Future studies might employ intentional dyadic data collection to deepen the understanding of interdependence regarding relationship mindfulness.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Table 2. Means, Standard Deviations, and Factor Loadings for RMM Items.

1. When my partner and I are together, it seems I am "running on automatic," without much awareness of what I am doing. M = 4.32, SD = 1.29, loading = .68*
2. I have conversations with my partner without being really attentive. M = 4.48, SD = 1.29, loading = .72*
3. I get so focused on what I want my relationship with my partner to be like that I lose touch with what I'm doing right now to get there.
4. When my partner and I discuss an issue or work on a problem together, I behave automatically, without being aware of what I am saying or doing. M = 4.90, SD = 1.27, loading = .66*
5. When I am with my partner, I find myself saying or doing things without paying attention. M = 4.66, SD = 1.24, loading = .74*

Note. *p < .01.
A chimeric haemagglutinin-based universal influenza virus vaccine boosts human cellular immune responses directed towards the conserved haemagglutinin stalk domain and the viral nucleoprotein Summary Background The development of a universal influenza virus vaccine, to protect against both seasonal and pandemic influenza A viruses, is a long-standing public health goal. The conserved stalk domain of haemagglutinin (HA) is a promising vaccine target. However, the stalk is immunosubdominant. As such, innovative approaches are required to elicit robust immunity against this domain. In a previously reported observer-blind, randomised placebo-controlled phase I trial (NCT03300050), immunisation regimens using chimeric HA (cHA)-based immunogens formulated as inactivated influenza vaccines (IIV) −/+ AS03 adjuvant, or live attenuated influenza vaccines (LAIV), elicited durable HA stalk-specific antibodies with broad reactivity. In this study, we sought to determine if these vaccines could also boost T cell responses against HA stalk, and nucleoprotein (NP). Methods We measured interferon-γ (IFN-γ) responses by Enzyme-Linked ImmunoSpot (ELISpot) assay at baseline, seven days post-prime, pre-boost and seven days post-boost following heterologous prime:boost regimens of LAIV and/or adjuvanted/unadjuvanted IIV-cHA vaccines. Findings Our findings demonstrate that immunisation with adjuvanted cHA-based IIVs boost HA stalk-specific and NP-specific T cell responses in humans. To date, it has been unclear if HA stalk-specific T cells can be boosted in humans by HA-stalk focused universal vaccines. Therefore, our study will provide valuable insights for the design of future studies to determine the precise role of HA stalk-specific T cells in broad protection. Interpretation Considering that cHA-based vaccines also elicit stalk-specific antibodies, these data support the further clinical advancement of cHA-based universal influenza vaccine candidates. Funding This study was funded in part by the 10.13039/100000865Bill and Melinda Gates Foundation (BMGF). Introduction Influenza A viruses (IAV) are responsible for annual epidemics resulting in ∼290,000-650,000 global deaths per year. 1 In addition to seasonal epidemics, IAVs can cause sporadic pandemics.The potential for emergence of novel reassortant viruses from the animal reservoir which have pandemic potential, represents an ongoing concern, particularly the pan-zootic clade 2.3.4.4bH5N1 avian influenza viruses. 2,35][6] Conventional inactivated influenza vaccines (IIVs) elicit largely strain-specific immune responses which are directed towards the globular head domain of the main surface glycoprotein, the haemagglutinin (HA), with limited induction of responses towards the HA stalk domain.The HA head is immunodominant, and when vaccine strains are wellmatched to circulating influenza viruses, antibodies (Abs) recognising the HA head can confer protection from disease.Unfortunately, the HA head is antigenically variable and tolerates the accumulation of mutations which can result in virus escape from protective Abs elicited by licensed seasonal influenza vaccines. 7onventional unadjuvanted seasonal IIVs are limited in the induction of cellular immune responses, and largely induce humoral immunity. 8Although live attenuated influenza vaccines (LAIV) do elicit T cell responses, these are preferentially recommended for use in children due to improved efficacy in this age group as compared with adults. 
9,10Seasonal vaccines also lack suitability for pandemic preparedness for several reasons.First, IAVs are zoonotic viruses which circulate in a broad range of animal species, and a diverse number of distinct HA subtypes exist (H1-H16 and H19, 11 in addition to bat HAs H17 and H18). 12IAV HAs are sub-divided phylogenetically into group 1 (G1) and group 2 (G2).The strain-specificity of conventional influenza virus vaccines means that they would confer little or no protection against emerging viruses with HA subtypes which do not normally circulate in humans (e.g., avian H5).Therefore, it is clear that efforts to develop and evaluate novel vaccine platforms and strategies capable of eliciting increased breadth of reactivity against multiple IAVs are urgently needed. Research in context Evidence before this study A systematic search of PubMed/Medline was performed to evaluate studies focused on the induction of haemagglutinin (HA) stalk-specific T cells in humans following immunisation.Search terms included "influenza haemagglutinin AND stalk" or "influenza haemagglutinin AND vaccine AND cellular", from years 2008-2024, using filters for "Clinical Trial" and "Randomized Controlled Trial".This search yielded up to one hundred and twenty publications, including three directly related to our clinical trial (NCT03300050).All other manuscripts related to the study of HA stalk-specific antibodies, or the clinical evaluation of the therapeutic efficacy of antibodies targeting the HA stalk domain.A limited number of studies have measured T cell responses to the whole HA antigen in humans, but these did not specifically look at boosting of T cells recognising the HA stalk domain. Added value of this study In this exploratory study using cryopreserved PBMCs from a subset of participants in a clinical trial to test HA stalk focused universal vaccines based on inactivated influenza vaccine (IIV), or live attenuated influenza vaccine (LAIV) platforms, with or without use of adjuvant, we specifically evaluated boosting of HA stalk-specific T cells in humans.We showed that prime:boost immunisation regimens which included a chimeric HA (cHA), stalk-focused immunogen formulated as IIV with AS03 adjuvant was capable of boosting stalk-specific T cells in participants. Implications of all the available evidence A large body of published literature now supports a major role for antibodies recognising the HA stalk being capable of breadth of reactivity (i.e., against seasonal, pandemic and emerging avian influenza viruses), and possessing a wide range of functional and protective capacities in animal models (neutralisation, effector function activity).However, even in mice, very little is known about HA stalk-specific T cell responses, their role in protection, and which immunisation platforms or regimens may be capable of boosting these cellular effectors.Considering (i) the intense research efforts being conducted globally to develop optimised universal influenza virus vaccines, (ii) the ongoing threat that emerging pandemic viruses pose, and (iii) the fact that improving our understanding of correlates of protection for influenza vaccines is a strategic priority for funding agencies (e.g., NIAID), our findings will be useful in informing future clinical trial design. 
Universal influenza virus vaccines are in various stages of pre-clinical and clinical development.4][15][16][17][18][19][20][21] One such target is the conserved stalk (or stem) domain of HA, which plays an important role in mediating viral fusion and entry. 127][28][29][30][31][32] Importantly, HA stalk Abs have recently been identified as a potential correlate of protection in human cohort studies of natural influenza virus infection. 33,34Collectively, the high degree of conservation within the stalks of G1, and G2 IAVs, as well as documented evidence for stalk-specific immunity conferring protection in vivo, make the stalk an attractive target for next-generation influenza virus vaccines. 0][41] The latter approach involves grafting the head domain of a HA subtype which is exotic to humans (e.g., H8), onto the stalk domain of a HA subtype which circulates in humans (e.g., H1) to produce a chimeric antigen (e.g., cH8/1).Sequential immunisation with distinct cHA-based immunogens in which the stalk domain remains the same upon each immunisation, but the HA head domain is swapped out (e.g., cH8/1 followed by cH5/1), leads to re-focusing of humoral immune responses away from the immunodominant head and towards the immunosubdominant HA stalk.Importantly, unlike headless HAs, the cHA design is compatible with conventional influenza vaccine production, including LAIV and IIV platforms. 13,42equential cHA immunisation regimens using cHA platforms have been successful in animal models, demonstrating that heterosubtypic protection can be achieved against a diverse range of influenza viruses. 40herefore, a vaccine capable of eliciting breadth of reactivity against diverse G1 or G2 HAs, such as H1, H2, H5, or H3, H7 and H10, 43 would represent an advance over conventional seasonal influenza virus vaccines and would be ideally suited to pandemic preparedness and stockpiling. 44,45e previously reported the safety and humoral immunogenicity of G1 cHA-based LAIV and IIV vaccine candidates in humans (NCT03300050). 13,15,44,46We confirmed that the cHA immunisation approach successfully boosted cross-reactive stalk Abs in humans, which were sustained for up to 18 months postimmunisation (end-point for analysis). 13These Abs exhibited breadth of reactivity against diverse G1 HAs, 44 and targeted the central stalk epitope (i.e., CR9114), 47,48 as well as a membrane-proximal, broadly neutralising anchor epitope. 48,49Furthermore, stalk Abs elicited by this vaccination regimen displayed a range of functions, including virus neutralisation, as well Fc-mediated antibody-dependent cellular cytotoxicity (ADCC) and antibody-dependent cellular phagocytosis (ADCP), mechanisms which have been identified as contributing to heterosubtypic protection in vivo in animal models. 7,31he purpose of the current study was to conduct tertiary exploratory analyses to evaluate T cell responses in humans following immunisation with the cHA LAIV and/or adjuvanted/non-adjuvanted cHA IIV platforms in the previously reported clinical trial (NCT03300050).1][52][53] Similar to the HA stalk domain, NP and M1 are highly conserved, and cross-reactivity of NP-specific T cells against heterosubtypic viruses has been reported in humans. 54Importantly, T cells recognizing internal viral antigens or influenza-specific T cells have also been identified as a potential correlate of protection in longitudinal cohort studies, 55,56 and in human challenge experiments. 
57,58However, in contrast, little work has been done to investigate the role of HA stalk-specific T cell responses, despite the importance of the HA stalk in universal influenza virus vaccine development. In this study, we report that H1 stalk-specific T cells in humans are successfully boosted following intramuscular (i.m.) immunisation with adjuvanted cHAbased universal influenza virus vaccine candidates.These data complement prior clinical analyses, which clearly demonstrated that adjuvanted cHA-IIV formulations increase cross-subtype immunity, and elicit HA stalk-specific Ab responses which display a range of functional activities associated with protection. 13,14,44,46 vaccine which simultaneously elicits broadly crossreactive humoral and cellular immune responses, represents an ideal universal influenza virus vaccine candidate. Objectives The original clinical study was designed to evaluate the safety and immunogenicity of a prime:boost regimen comparing intranasal (i.n) LAIV, or intramuscular (i.m), AS03-adjuvanted split inactivated influenza vaccine (IIV) prime, followed by an i.m boost with IIV, administered with or without AS03 adjuvant.The primary outcomes of the trial (registered with ClinicalTrials.gov,NCT03300050) have been reported previously and the clinical study protocol is available at https://clinicaltrials.gov/study/ NCT03300050. 13,14The selection of AS03 in G4 (IIV8/ AS03-IIV5/AS03) was to act as a bridging group to a parallel GlaxoSmithKline (GSK) first-in-human clinical trial (NCT03275389) initiated prior to this study. 150][61][62][63] This manuscript reports the findings of exploratory immunological analyses to measure T cell responses to HA stalk and NP in peripheral blood mononuclear cells (PBMCs). Ethics The Cincinnati Children's Hospital Medical Center (CCHMC) Institutional Review Board (IRB) served as the central IRB for review, approval and overview of this trial, as previously described (Protocol #2017-4461). 13,14ritten, informed consent was obtained from all participants.Descriptions of planned T cell analysis by ELISpot are detailed in section 7.3.2,7.3.2.3, Table 12 and Table 13 in the published study protocol, available at https://clinicaltrials.gov/study/NCT03300050. Vaccines The trial evaluated prime:boost regimens of an LAIV and two IIVs, as previously described. 13,14Briefly, the LAIV was manufactured in embryonated chicken eggs by Meridian Life Sciences in Memphis, Tennessee and formulated in sterile saline.5][66] The LAIV was administered i.n at a 10 7.5 50% egg infectious dose.The first IIV carried an identical cH8/1 HA to the LAIV, and a second IIV carried a chimeric H5/1 HA (head domain from A/Vietnam/ 1203/04 [H5N1], stalk domain from A/California/04/09 [H1N1]).Both were rescued with the same A/PR/8/34 (H1N1) backbone and manufactured in embryonated chicken eggs by GSK (Wavre, Belgium), as described previously. 13,14Split-virion IIVs were administered i.m in a volume of 0.5 mL, with either phosphate buffered saline (PBS) or AS03 adjuvant.The antigen content with IIVs was 15 μg of HA (cH5/1 or cH8/1).Corresponding control groups received saline i.n or PBS i.m. 
A total of n = 10 subjects from the three vaccine groups (G1, 2 and 4) were allocated for T cell analysis, and an n = 2 and n = 8 respectively from each placebo group (G3 and G5, denoted G3+5: PLACEBO) were combined for analysis.In some cases, on the day of assay performance, insufficient cell quantities were recovered after cryopreservation, or cell viability was low.As a result, at some specific timepoints, selected peptide stimulations or participant samples were omitted from the assay or from analyses for selected wells.Information and justification for sample exclusion from analysis is provided in Table 1.Operators were blinded to the treatment groups until laboratory and data analysis were completed. Sample size and PBMC pick list ELISpot assay operators were blinded to clinical groupings until locking of the T cell analytical database.To enable an equal subset of volunteers from across all groups to be evaluated for cellular immune responses, a pick list was generated by staff at The Emmes Company, LLC, providing 10 volunteers per group across G1, G2 and G4, and 10 volunteers split across PLACEBO volunteers in G3 (n = 2) and 5 (n = 8).The pick list was selected randomly from participants that completed both prime and boost immunisation interventions. 13,14his sampling represented 62.5-100% of the participants in the original trial: 62.5% for G1 (n = 10 out of 16), 76.9% for G2 (n = 10 out of 13), 66.7% for G4 (n = 10 out of 15), 100% for PLACEBO G3 (n = 2 out of 2) and 80% for G5 (n = 8 out of 10).Selection was also dependent on the availability of sufficient numbers of cryopreserved PBMC vials at D1, D8, D85 and D92 for the same volunteer, to allow tracking of T cell responses at baseline (D1), 7 days post-prime (D8), pre-boost (D85) and at 7 days post-boost (D92).Importantly, all participants in this subset analysis received their prime immunisation in December 2017, thereby eliminating confounding factors related to time trends.Furthermore, none of the participants in the T cell analysis tested positive for IAV infection during our analysis window (D1-D92).As the T cell assays were tertiary exploratory assays performed once other priority assays had been completed, our sample size (n = 10/group) was largely defined by the availability of cryopreserved PBMCs.Demographics for the participants selected for T cell analysis are shown in Table 2. PBMC isolation Blood samples for PBMC analysis were taken prevaccination (denoted D1), 7 days post-prime vaccination (denoted D8), 84 days post-prime but preboost vaccination (denoted D85), and 7 days post boost vaccination (denoted D92).PBMCs were isolated at the study sites, CCHMC and Duke, and cryopreserved samples provided for T cell analyses. Peptide preparation Protein sequences for A/Michigan/45/2015 H1 HA stalk domain and NP were split in silico into 15mer, 19mer or 20mer peptide sequences overlapping by 10 amino acids (Supplementary Tables S1 and S2, respectively).Additional peptides were synthesised to act as positive controls.For the latter purpose we used an adaptation of the gold-standard "CEF" peptide pool, 67 consisting of known CD8 + T cell epitopes from cytomegalovirus (CMV = "C") and Epstein-Barr virus (EBV = "E"), but without influenza (flu = "F") virus peptides (Supplementary Table S3). 
67,68AbClonal (MA, USA) synthesised the peptides to 70% purity with free amino and carboxyl acid groups.Lyophilised peptides were stored at −20 For reconstitution, peptides were warmed to room temperature for at least 1 h, then centrifuged at 500 g for 1 min, before adding dimethyl sulfoxide (DMSO) to result in a specified final peptide concentration of 25-100 mg/mL.Once reconstituted, peptides were vortexed then pooled according to antigen in culture media, consisting of Roswell Park Memorial Institute (RPMI) media supplemented with 10% (v/v) foetal bovine serum (FBS), 100 U/mL penicillin, 100 μg/mL streptomycin and 2 mM L-glutamine (complete RPMI is denoted R10).The HA stalk peptides were split across 4 separate pools (P1-4) and NP across 5 separate pools (P1-5).Each pool stock contained 20 μg/mL of each peptide, and had 6-10 peptides per pool (Supplementary Tables S1 and S2).Peptide pools were aliquoted according to the Enzyme-Linked ImmunoSpot (ELISpot) plate layout into 96 well plates.Negative control wells consisted of R10 plus an equivalent volume of DMSO as was added to the peptide wells.Peptide plates were sealed and stored at −80 PBMC thawing and counting Cryovials containing frozen PBMCs were removed from liquid nitrogen storage and partially thawed in a 37 • C water bath, and then added to R10 with 25 U/mL benzonase (MilliporeSigma, MO, USA; #70664-3).Samples were counted manually using a glass haemocytometer. Live and dead cell counts were measured, recording 2 counts per technical replicate of each sample.To maintain accuracy, samples were re-suspended in a smaller volume and re-diluted to obtain a more accurate count if the mean count was less than 30 cells.Counted PBMC samples were re-suspended at 4 × 10 6 PBMC/mL and stored in a humidified incubator at 37 Plates were placed in a humidified incubator at 37 Statistics For all analyses, control G3 and G5 were pooled and designated as PLACEBO.All data were evaluated for skewness and normal distribution, and the appropriate tests applied.For comparisons of backgroundsubtracted ELISpot responses in the same volunteer at different timepoints (intra-group comparisons), the non-parametric Friedman test was used with Dunn's correction for multiple comparisons for D1 versus D8, D1 versus D92 and D85 versus D92.Volunteers with missing data points were omitted from the Friedman analysis (see Table 1 for exclusions), but all data points are shown on the graph for HA stalk and NP stalk responses.For simplicity the median ELISpot response representing all data is shown for each group (Figs.1e and 3e).Area under the curve (AUC) analyses were used to capture total responses over time.When timepoints were missing due to insufficient cells, or ELISpot plate failures, the mean of the group at that timepoint was used to impute the value, as previously described 17 (see Table 1 for exclusions).Fold-change was calculated through dividing the ELISpot response (in SFU/10 6 PBMCs) at a given timepoint by that of a previous timepoint, on a volunteer-by-volunteer basis.For inter-group comparisons of AUC or fold-change, the non-parametric Kruskal-Wallis test with Dunn's correction for multiple comparisons was used to determine differences between the median AUC or median fold-change across the four groups.For comparisons between groups which displayed different population distributions based on positive and negative skewness, data were transformed by y = y 3 prior to analysis (i.e., Fig. 
3j only).AUC and fold-change graphs display the median plus 95% confidence interval.All relevant p-values, 95% CI and test performed are reported in the Results section.Only comparisons which were found to be statistically significant are indicated on the graphs, however all outlined comparisons in a multiple comparison were performed.The absence of significant p-values does not necessarily indicate that a biological effect or association does not exist, simply that there was not sufficiently strong quantitative evidence to statistically reject the null hypothesis.No adjustments for confounding factors/effect modifiers were made due to the exploratory nature of the outcomes, and the randomly selected small sample size (n = 10/group) aimed at a preliminary evaluation of cellular immune responses in humans. Role of funders This research was funded in part by the Bill and Melinda Gates Foundation (BMGF).The funders did not play any role in study design, data collection, data analyses, interpretation, or writing of the manuscript. Results The data presented in this manuscript represent tertiary exploratory T cell analyses performed on cryopreserved PBMC samples from a subset of volunteers (n = 10 per group) enrolled in a completed Phase I clinical trial (ClinicalTrials.govNCT03300050).A summary of the vaccine groups, intervals and analysis timepoints evaluated in this current manuscript is outlined in Table 1, and information on the demographics of participants selected for T cell assays is listed in Table 2. Safety data for these universal influenza virus vaccine candidates has previously been reported, where the vaccines were found to be safe and well-tolerated in humans. 13,14munisation with cHA-based vaccines boosts HA stalk-specific T cells In this study, we used the IFN-γ ELISpot assay to measure H1 stalk-specific T cells in cryopreserved PBMCs from individuals immunised with cHA-based universal influenza virus vaccines, or PLACEBO (Table 1).Data presented in Fig. 1a-e represent background subtracted IFN-γ ELISpot responses to pools of overlapping peptides corresponding to the H1 stalk domain of A/Michigan/45/2015 (see Supplementary Table S1).The HA stalk of A/Michigan/45/2015 has 98% amino acid sequence identity when compared with the H1 stalk of A/California/04/2009, present in the cHA vaccines used in this study.A timepoint of 7 days post-prime (D8), and 7 days post-boost (D92) was selected for T cell analysis, based on previous studies. 17n G1 participants (LAIV8-IIV5/AS03), we did not detect increases in HA stalk specific T cells post-prime (D8), as measured by spot-forming units (SFU) per 10 6 PMBCs (Fig. 1a).However, a ∼3.7-fold-greater median HA stalk-specific T cell response was detected seven days following the boost immunisation with IIV5/ ASO3, when compared with pre-boost (D92 versus D85 95% CI of the median: 218.30-970.00versus 66.67-321.70SFU/10 6 , Friedman test: p = 0.019).This corresponded to a ∼5.1-fold greater response compared to baseline (D92 versus D1 95% CI of the median: 218.30-970.00SFU/10 6 versus 58.33-245.00,Friedman test: p = 0.0022).Similar to G1 (LAIV8-IIV5/AS03), we did not detect increases in stalk-specific T cells in G2 (LAIV8-IIV5) participants following the LAIV8 prime (Fig. 
1b).However, a small increase in the response was detected in this group following the IIV5 boost without adjuvant, with median HA stalk-specific T cells boosted ∼2.6-fold on D92 relative to D85 (95% CI of the median: 50.00-278.30versus 9.17-185.00SFU/10 6 , Friedman test: p = 0.0016).Participants in G4 received the IIV8/ AS03-IIV5/AS03 vaccination regimen.These subjects had a ∼3.6-fold greater response in HA stalk-specific T cells post-prime (D8 versus D1 95% CI of the median: 145.00-690.00versus 58.33-235.00SFU/10 6 , Friedman test: p = 0.0055) when compared with baseline (Fig. 1c).A ∼1.8 fold-change was observed postboost (D92) when compared with D85, although this was not statistically significant (95% CI of the median: 83.33-653.30versus 35.00-283.30SFU/10 6 , Friedman test: p = 0.17).However, when comparing D92 with baseline, ∼3.3-fold greater median HA stalk-specific T cell response was observed (D92 versus D1 95% CI of the median: 83.33-653.30versus 58.33-235.00SFU/10 6 , Friedman test: p = 0.036).Subjects assigned to receive the PLACEBO (G3 and G5) did not show statistically significant changes in HA stalk-specific T cell responses following prime or boost vaccinations (Fig. 1d).The median response (SFU/10 6 PBMCs) as presented in Fig. 1a-d is summarised for each group in Fig. 1e. Area under the curve (AUC) analysis has been previously applied in immunological studies to assess the magnitude of an immune response within a defined time period. 17,69Therefore, we also compared the overall AUC for the HA stalk specific IFN-γ + T cell response for each vaccine group from D1-D92 (Fig. 1f).When comparing between all vaccination groups, the median of the overall AUC (i.e., D1-D92) was ∼3.0-fold higher in the G4 (IIV8/AS03-IIV5/AS03) vaccinated group when compared with G2 (LAIV8-IIV5) (95% CI of median area: 9310-41,452 versus 2421-20,312, Kruskal-Wallis test: p = 0.033). A factor which may contribute to bias in the response to immunisation could include prior exposure to influenza virus and the magnitude of pre-existing T cell responses at baseline.At baseline (pre-immunisation), H1 stalk-specific T cells were present at detectable levels in most volunteers, with median responses of ∼86 SFU/10 6 PBMCs (Fig. 1j).Importantly however, no statistically significant differences were observed in baseline HA stalk-specific T cell responses between subjects between groups (Kruskal-Wallis test: p = 0.46). Immunisation with cHA-based vaccines expands the breadth of cellular reactivity against the conserved HA stalk domain A gap in our knowledge has been in the identification of specific T cell epitopes in the HA stalk, and in understanding their precise functional roles in influenza virus infection and/or vaccination.Selected studies have undertaken epitope mapping studies to identify T cell "hot zones" or "dead zones" in the HA stalk. 72By breaking down the HA stalk into distinct peptide pools for stimulation, we were able to identify specific sub-regions of the HA stalk for which volunteers had higher T cell reactivity at baseline, and following immunisation with cHA-based IIV/AS03 vaccines (Fig. 2a-d). Once again, we compared the overall AUC (D1-D92) for the NP-specific IFN-γ + T cell response for each vaccine group (Fig. 
3f).When comparing between all (e-h) Radar charts show the proportion of the response to each HA stalk pool (P1-P4) expressed as a percentage (%) of the total summed HA-stalk response, where the total response in each pool is 100%.These figures display changes in the relative response to distinct peptide pools, not the overall magnitude of the response. A factor which may contribute to bias in the response to immunisation could include prior-exposure to influenza virus and the magnitude of pre-existing T cell responses at baseline.To consider this, we determined that prior to vaccination (i.e., baseline), NP-specific T cells were present at detectable levels in all volunteers, with median responses of ∼145 SFU/10 6 PBMCs (Fig. 3j).Importantly, no differences were observed in baseline NP-specific T cell responses between groups (Kruskal-Wallis test of transformed data: p = 0.69). Discussion The ongoing threat of an influenza virus pandemic, as exemplified by global spread of zoonotic H5N1 infections, and human infections with H5N1 (e.g., Cambodia, South America and USA), [77][78][79][80][81] highlights the need for vaccines which elicit broad protection across different strains and subtypes.Vaccines targeting conserved influenza virus antigens, which are capable of eliciting both humoral and cellular immune responses simultaneously, would be desirable as a universal influenza virus vaccine.In this study, we expand analysis of an adjuvanted cHA immunisation regimen shown to boost cross-reactive HA stalk Abs in humans, 13,14 to evaluate T cell responses.Our analysis reveals that immunisation with adjuvanted cHA-based IIV can also successfully boost HA stalk-specific T cell responses in humans. T cells have long been identified as key modulators of disease.In both human influenza virus challenge studies, and prospective cohort studies of natural community-acquired infection, T cells have been associated with reduced viral shedding and the reduction of symptom severity. 55,57,58,70Across these studies, different CD4 + and CD8 + T cell subsets targeting NP, M1 or PB1 have been identified as correlating with protection. 55,57,58,75T cells can have diverse protective functions, from cytolytic activity, to aiding the recruitment of innate effector cells to the lung, as well as providing T cell help in the generation of Abs. 12,82,83ndeed, T helper cell responses and Ab responses have previously been reported to correlate in clinical studies of influenza virus vaccination or infection. 84,85][88][89] The paucity of published data on T cell epitopes in the HA stalk, specifically CD8 + T cell epitopes, highlights an area of research need in the field of HA-stalk based universal vaccines.Some prior studies have demonstrated that T cell epitopes in the immunosubdominant HA stalk can be targeted by vaccination in humans.A Phase I and II evaluation of a universal influenza virus vaccine candidate, Multimeric-001, identified an MHC class II epitope in the stalk of H3. [90][91][92] This epitope was also independently identified as a CD4 + T cell epitope following natural influenza virus infection. 93Furthermore, a H5-stalk based CD4 + T cell epitope was also identified from a natural infection study. 54Interestingly, the latter study failed to detect any human CD8 + T cell epitopes in the H5 head or stalk. 
54In support of this, another study describing UK and Vietnamese cohorts exhibiting memory T cell responses to NP were found to map to both CD4 + and CD8 + T cells, whereas HA-and NA-specific memory T cell responses identified were solely restricted to CD4 + T cells. 54he immunodominance of the HA head domain ensures that sequential immunisation with conventional seasonal IIVs would result in largely strain-specific immune responses directed towards the HA head.In contrast, the selection of cHA-based vaccines for sequential immunisation is an approach which has been shown to refocus immunity towards the HA stalk.However, an alternative to the use of cHA-based IIVs for HA stalk re-focusing could include immunisation regimens with antigenically distinct pandemic IIVs, such as a prototype H5N1 IIV.A prior study evaluated a twodose regimen with an AS03-adjuvanted H5N1 split virion vaccine, and determined that H5-specific CD4 + T cell responses were present at baseline, and these could be boosted upon immunisation (NCT00309634). 62though the authors did not formally measure HA stalk-specific cellular immunity, and used a different assay (flow cytometry), it is possible that cross-reactive stalk T cells were also boosted using this immunisation regimen. A strength of this report is that it describes and maps HA stalk-specific T cell responses down to the subdomain/pool level in humans following immunisation with a universal influenza vaccine candidate.However, although we detected boosting of HA-stalk reactive T cells in this study, a caveat of our findings is that we specifically measured responses by ex vivo IFN-γ ELI-Spot assay, and cannot distinguish between CD4 + and CD8 + T cells.Nonetheless, a discussion of responses to defined HA stalk peptide pools used in our study with those reported in the literature could be informative to the field.For HA-specific CD4 + T cell responses measured against H5 by Lee and colleagues in the natural infection UK/Vietnamese study, T cell epitopes were identified in three regions of the HA stalk. 54Of those HA stalk epitopes, one epitope maps to HA stalk pool 2 (P2) in our study.Several other studies also identified CD4 + T cell epitopes mapping to this "P2" HA stalk region. 73,74,94,95The other two epitopes from the UK/ Vietnamese study map to HA stalk pool 3 (P3) in our study.Interestingly, at baseline, T cell responses directed towards epitopes in HA stalk P3 represented ∼60-80% of the response across all groups, and this pool was also where the majority of the T cell expansion was measured following the boost immunisation of G1 (LAIV8-IIV5/AS03) and G4 (IIV8/AS03-IIV5/AS03) with IIV5/AS03.P3 contains the long alpha helix (LAH), where CD4 + T cell responses have been mapped to in both mice 95,96 and humans after natural influenza virus infection or vaccination. 73,74It is suggested that this region is a hotspot for T cell epitopes due to the stability of the tertiary structure and the accessibility to proteasomal processes required for generation of peptide epitopes. 
72n addition to responses targeting P3, immunisation of G1 and G4 with IIV/AS03 led to broadening of the T cell response against HA stalk peptide pool 4 (P4).This pool corresponds to the membrane proximal, carboxyl terminal region of the HA stalk domain that contains the transmembrane domain (TM) and cytoplasmic tail.Using a tetramer-guided epitope mapping approach, two human CD4 + T cell epitopes with HLA-DR restriction have been identified in the final membraneproximal residues of the HA stalk, extending into the transmembrane domain, and a further CD4 + epitoperich area in amino acid sequences spanning HA stalk P3 and P4 in our study. 74Given the high sequence similarity of the TM domain within H1 and H3 viruses respectively, inducing responses against this region may be beneficial for broad cellular reactivity. 97n epitope mapping study of donor PBMCs previously identified CD4 + T cell responses mapping across the full NP antigen, with relatively equal responses across all epitopes identified, including at NP 19-42 which would correspond to NP P1, and NP 97-120 which corresponds to a region spanning NP P1 and P2. 50CD8 + T cell epitopes within NP have also been experimentally confirmed throughout the antigen, 98 with one study mapping CD8 + NP-specific responses in 5 PBMC donors across NP with 6 hotspots identified. 99A similar study conducted using non-HLA-A2 PBMC donors revealed comparable results. 51Across these two prior studies, the majority of the immunodominant epitopes were clustered at the carboxyl terminal 2/3 of the NP protein (NP 140-412), which does not include peptides in our NP P1 pool.Therefore, we speculate that the responses we measured against NP P1 are more likely to be CD4 + T cell epitopes, although we have not verified this experimentally.Unlike subunit vaccines, split virion vaccines, such as conventional IIV and the universal cHA-based vaccines described in this study may contain residual NP, although this is likely manufacturer and batch-dependent. 100,101As such, some boosting of immunity to NP can be observed, albeit at a lower level than observed following natural infection. 
102e consider and acknowledge both the limitations and strengths of our exploratory study.The methodological constraints encompass a limited sample size per group, potential for uncontrolled or unmeasured confounding factors in between-group comparisons, and susceptibility to regression towards the mean within groups.In addition, despite this being an exploratory study, other factors related to confounding or bias, such as sex, age or ethnicity may have affected our findings as a result of the small sample size.To acknowledge this, we have outlined the demographic characteristics of the participants in our randomly sampled subset (Table 2).Although statistically significant changes in immune responses at various timepoints were observed, substantial confidence intervals (95%) are evident for many comparisons which may impact the clinical interpretation and importance, a limitation of performing this study with a small sample size.The main laboratory limitation lies in the T cell responses not being distinguished into their respective CD4 + or CD8 + subpopulations.Downstream mucosal homing markers, effector identities and memory phenotypes were also not characterised, which would permit more extensive interpretation of the data.PBMC sampling only represents cells present in the peripheral blood, so germinal centre T follicular helper cells and T cells resident in the respiratory mucosa were not sampled.Furthermore, we have measured just a single cytokine, IFN-γ, so cannot comment on the polyfunctionality of the T cell responses described.In addition, due to the small sample size per group, as a result of limited sample availability, we do not possess sufficient power to correlate HA stalkspecific T cell responses with previously reported stalk-specific Ab responses on a per volunteer basis.An extrapolation of the role of cellular immunogenicity to protection is challenging, as the field lacks formal correlates of protection for influenza virus for this arm of the immune response.Additionally, the types of assays employed, and parameters measured for T cell assays in clinical studies are not standardised, making direct comparisons with other studies difficult. 
8,103 major strength of our study is that it is to date the most substantial HA stalk-targeting immunisation cohort combining multiple routes of administration, ± adjuvant, with T cell responses described at multiple timepoints pre-/post-vaccination.As more studies of HA-stalk targeting universal influenza virus vaccine candidates progress clinically with larger sample sizes per group, the level of detail on specific epitopes, phenotypes and functions associated with the induced T cell response will grow, building upon the foundation data presented in this study.Later phase clinical studies may elucidate the protective role of these T cell populations, where currently there is a paucity of data.Preliminary studies such as this, will enable parallels to be drawn between clinical and pre-clinical data and support the ongoing, more detailed cellular immunophenotyping in future clinical vaccine trials.In summary, in this study we have demonstrated that adjuvanted cHA-based IIVs are capable of stimulating/boosting HA stalk-specific T cell responses in humans.It has previously been shown that these adjuvanted cHA-based IIV candidates induce durable, HA stalk-specific Abs in humans which are broad in terms of their breath of reactivity against group 1 HAs from H1 clade (H1, H2, H5 and H6), the H9 clade (H8, H9 and H12) and the bat HAs (H17 and H18), 44 and elicit diverse functional activities (neutralisation, Fc effector function activation and in vivo protection).13 Further investigation of the role of defined T cell populations, or specific T cell epitopes in the HA stalk domain in protection following immunisation with universal influenza virus vaccine candidates, is warranted. Contributors Conceptualization, LC, CMB and RN; methodology, LC, CMB and JTG; validation, LC and CMB; formal analysis, LC, and CMB; investigation, LC, CMB and JTG; resources, FK, PP and LC; writing-original draft, CMB and LC; writing-review and editing, all authors contributed to review and editing of the final draft, and all authors read and approved the final version of the manuscript; visualization, CMB and LC; supervision, LC and FK; funding acquisition, LC, FK, PP, AG-S.LC and CB directly accessed and verified the underlying data reported in the manuscript. Declaration of interests The Icahn School of Medicine at Mount Sinai (ISMMS) has filed patent applications regarding universal influenza virus vaccines naming AGC, PP, RN and FK as inventors.AGC, PP and FK have also received royalties and research support for their laboratories from GSK in the past and are currently receiving research support from Dynavax for development of influenza virus vaccines.FK has consulting agreements with Pfizer, GSK, Third Rock Ventures and Avimex. The AG-S. 
laboratory has also received research support from Pfizer, Senhwa Biosciences, Kenall Manufacturing, Blade Therapeutics, Avimex, Johnson & Johnson, 7Hills Pharma, Pharmamar, ImmunityBio, Accurius, Nanocomposix, Hexamer, N-fold LLC, Model Medicines, Atea Pharma, Applied Biological Laboratories and Merck, outside of the reported work.A.G.-S.has consulting agreements for the following companies: Esperovax, Farmak, Applied Biological Laboratories, Pharmamar, 7Hills Pharma, Avimex, Paratus, Synairgen, Accurius, Pfizer and Nanocomposix outside of the reported work.A.G.-S has consulting agreements for the following companies involving cash and/or stock: Castlevax, Amovir, Vaxalto, Pagoda, Contrafect, CureLab Oncology, CureLab Veterinary and Vivaldi Biosciences outside of the reported work.A.G.-S.has been an invited speaker in meeting events organised by Seqirus, Janssen, Abbott and AstraZeneca.JTG is an employee of Argenx US.RN is an employee and shareholder of Moderna.DIB serves on the Data Safety Monitoring Board and Advisory Board for Moderna.AN is an employee of GSK and has vested stocks in GSK.EBW has received funding support from Pfizer, Moderna, Sequiris, Najit Technologies Inc, Leidos Biomedical and Clinetic for the conduct of clinical trials and clinical research.EBW has served as an advisor to Vaxcyte, a consultant to ILiAD biotechnologies and a data safety monitoring board member for Shionogi. Fig. 1 : Fig. 1: HA stalk-specific T cell responses following immunisation with cHA-based universal influenza virus vaccine candidates.Cryopreserved human PBMCs obtained prior to immunisation (D1), seven days post-prime (D8), pre-boost (D85) and seven days post-boost (D92) were stimulated with pools of overlapping peptides corresponding to the H1 stalk of A/Michigan/45/2015, and IFN-γ secretion measured by ELISpot.(a-d) Time course of individual HA stalk-specific T cell responses presented as SFU/10 6 PBMCs for each volunteer (n = 10/group, single biological replicate from duplicate/triplicate wells).The median is shown as a heavy line.Statistical analyses on intra-group paired data (i.e., different timepoints for each individual volunteer) were performed using the non-parametric Friedman test with Dunn's correction for multiple comparisons.Volunteers with missing timepoints were excluded from the Friedman analysis, but due to the limited sample size, all volunteer responses are represented graphically.Exclusions due to isolated missing timepoints are outlined in Table 1.Statistical significance icons are shown on the graph as *p < 0.05, **p < 0.01.Icons denote statistically significant differences between (a) D92 and D1 (**p = 0.0022) and D92 versus D85 (*p = 0.019) for G1 (LAIV8-IIV5/AS03), (b) D92 versus D85 (**p = 0.0016) for G2 (LAIV8-IIV5) and (c) D8 versus D1 (**p = 0.0055) and D92 versus D1 (*p = 0.036) for G4 (IIV8/AS03-IIV5/AS03).(e) Median HA stalk-specific IFN-γ ELISpot response for each group.Dashed vertical grey line indicates prime (D1) and boost (D85) immunisation timepoints, and horizontal dashed grey line indicates the positive threshold (PT) for summed HA-stalk responses, which represents the median+2X median absolute deviation (MAD) for DMSO control wells corrected for summed pools (PT = 73 SFU/ 10 6 PBMCs).(f) Total area under the curve (AUC) for HA stalk-specific IFN-γ ELISpot response from D1-D92.Inter-group comparisons were analysed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons (*p = 0.033).(g) Fold-change in individual IFN-γ responses 
postprime, D8 versus D1, for each vaccine group.Inter-group comparisons were analysed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons, with G4 response fold change increased as compared with G1 (*p = 0.034) and G2 (*p = 0.018).(h) Fold-change in individual IFN-γ responses post-boost (D92 versus D85) for each vaccine group.Inter-group comparisons were analysed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons (**p = 0.0081).(i) Fold-change in individual IFN-γ responses at D92 versus D1 for each vaccine group.(j) Baseline responses to HA stalk peptides in each vaccine group.Solid line denotes median with 95% confidence intervals (CI) for f-j.Dashed grey line indicates ≥2-fold elevated T cell responses after vaccination for g-i. Fig. 3 : Fig. 3: NP-specific T cell responses following immunisation with cHA-based universal influenza virus vaccine candidates.Cryopreserved human PBMCs obtained prior to immunisation (D1), seven days post-prime (D8), pre-boost (D85) and seven days post-boost (D92) were stimulated with pools of overlapping peptides corresponding to NP from A/Michigan/45/2015, and IFN-γ secretion measured by ELISpot.(a-d) Time course of individual NP-specific T cell responses presented as SFU/10 6 PBMCs for each volunteer (n = 10/group, single biological replicate from duplicate/triplicate wells).The median is shown as a heavy line.Statistical analyses on intra-group paired data (i.e., different timepoints for each individual volunteer) were performed using the non-parametric Friedman test with Dunn's correction for multiple comparisons.Volunteers with missing timepoints were excluded from the Friedman analysis, but due to the limited sample size, all volunteer responses are represented graphically.Exclusions due to isolated missing timepoints are outlined in Table 1.Statistical significance icons are shown on the graph as *p < 0.05, **p < 0.01, ***p < 0.001.Icons denote statistically significant differences between (a) D92 and D1 (*p = 0.047) for G1 (LAIV8-IIV5/ AS03), and (c) D92 against D1 (*p = 0.028) for G4 (IIV8/AS03-IIV5/AS03).(e) Median NP-specific IFN-γ ELISpot response for each group.Dashed vertical grey line indicates prime (D1) and boost (D85) immunisation timepoints, and horizontal dashed grey line indicates the positive threshold (PT) for summed NP responses, which represents the median+2X median absolute deviation (MAD) for DMSO baseline control wells corrected for summed pools (PT = 92 SFU/10 6 PBMCs).(f) Total area under the curve (AUC) for NP-specific IFN-γ ELISpot response from D1-D92.(g) Fold-change in individual IFN-γ responses post-prime, D8 versus D1, for each vaccine group.(h) Fold-change in individual IFN-γ responses post-boost (D92 versus D85) for each vaccine group.Inter-group comparisons were analysed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons (*p = 0.017).(i) Fold-change in individual IFN-γ responses at D92 versus D1 for each vaccine group.Inter-group comparisons were analysed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons * p = 0.027.(j) Baseline responses to NP peptides in each vaccine group.Solid line denotes median with 95% confidence intervals (CI) for f-j.Dashed grey line indicates ≥2-fold elevated T cell responses after vaccination for g-i. Fig. 4 : Fig. 
4: Breadth of the NP-specific T cell response against defined peptide pools following immunisation with cHA-based universal influenza virus vaccine candidates.Cryopreserved human PBMCs obtained prior to immunisation (D1), seven days post-prime (D8), pre-boost (D85) and seven days post-boost (D92) were stimulated with pools of peptides corresponding to NP from A/Michigan/45/2015, and IFN-γ secretion detected by ELISpot (n = 10/group, single biological replicate from duplicate/triplicate wells).(a-e) Median NP-specific T cell responses to each peptide pool (P1-P5) expressed as SFU/10 6 PBMCs for each volunteer.The dashed vertical grey line indicates prime (D1) and boost (D85) immunisation timepoints, and horizontal dashed grey line indicates the positive threshold (PT), which represents the median+2X median absolute deviation (MAD) for DMSO baseline control wells (18 SFU/10 6 PBMCs).Statistical analysis on intra-group paired data (i.e., different timepoints for each individual volunteer) was performed using the non-parametric Friedman test with Dunn's correction for multiple comparisons.Volunteers which had missing timepoints were excluded from the Friedman analysis, but due to the limited sample size, all volunteer responses are represented graphically.Exclusions due to isolated missing timepoints are outlined in Table 1.Statistical significance icons are shown on the graph as *p < 0.05.(a) Data shown for P1 denotes a statistically significant difference between D92 and D85 (*p = 0.027) for G1 (LAIV8-IIV5/AS03).(b) Responses to P2 show a statistically significant difference between D92 and D85 (*p = 0.036) and D92 versus D1 (*p = 0.036) for G1 (LAIV8-IIV5/AS03), or D92 and D1 (*p = 0.028) for G4 (IIV8/AS03-IIV5/AS03).(f-i) Radar charts show the proportion of the response to each NP pool (P1-5) expressed as a percentage (%) of the total summed NP response, where the total response in each pool is 100%.These figures display changes in the relative response to distinct peptide pools, not the overall magnitude of the response. Table 1 : Clinical trial vaccine groups. • C until reconstitution and pooling. Table 2 : T cell analysis participant characteristics by vaccine group. PBMCs were detected.This value was defined as the median plus 2MAD (median absolute difference) of individual negative control wells across the entire QC'd dataset, with values above this considered to reach the positive threshold (PT).For whole antigen analyses, responses to individual pools were summed for the HA stalk and NP antigens, and similarly, PTs for summed pools were determined by multiplying up the PT value for DMSO/ R10, resulting in cut-offs for positivity of 73 SFU/10 6 PBMCs for HA stalk (x4 pools), and 92 SFU/10 6 PBMCs for NP (x5 pools).PT cut-offs are indicated on each figure as a dashed line. 
• C with 5% CO 2 .After 18-20 h of incubation, plates were washed 6 times using PBS 1% (v/v) Tween-20.Secondary antibody (7-B6-1-Biotin, Mabtech) was diluted to 1 μg/mL, and 50 μL added per well.Plates were incubated for 2-4 h at room temperature.Secondary antibody was removed, plates washed 6 times using PBS 1% (v/v) Tween-20.Streptavidin alkaline phosphatase was diluted 1:1000, and added 50 μL/well for 1-2 h at room temperature.An aliquot of BCIP/NBT Plus developer was warmed to room temperature.Plates were washed 6 times using PBS 1% (v/v) Tween-20, then 50 μL developer added per well for 3 min.Development was halted by washing the plate with tap water.Plates were shielded from light and left to dry overnight, before wrapping in foil until automated plate counting.Automated plate countingPlates were counted using an ImmunoSpot S5 Analyzer (ImmunoSpot, Shaker Heights, OH) in the Human Immune Monitoring Core at ISMMS using SmartCount settings, with identical read and count settings for all plates.Immunospot quality control (QC) settings were used to check individual wells and adjust counts to remove artifacts (e.g., fibre removal).An annotation key was inserted into any adjusted wells, raw data exported to Excel, and an image of the plate printed and crosschecked against raw data. 6(or blackout wells), and the DMSO/R10 negative control wells having less than 125 SFU/10 6 PBMC.Over 98.7% of samples passed full negative/positive ELISpot plate QC, with only one volunteer ELISpot (V12) excluded due to high DMSO background, and one volunteer ELISpot (V12) excluded due to poor cell viability (and subsequently, plate failure due to no response in PHA-L positive control wells).All remaining samples passed
2024-05-29T15:25:48.843Z
2024-05-27T00:00:00.000
{ "year": 2024, "sha1": "814f2ed20d8b5c938aea34e1e4b5d0b95c25c59f", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "16d1e0d8ed7b06649503e8f0b5b08f52ed1bb113", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219921445
pes2o/s2orc
v3-fos-license
A Novel Serum Exosomes-Based Biomarker hsa_circ_0002130 Facilitates Osimertinib-Resistance in Non-Small Cell Lung Cancer by Sponging miR-498 Purpose Exosomes are the effective delivery system for biological compounds, including circular RNAs. In this research, we aimed to explore the role of circular RNA hsa_circRNA_0002130 in osimertinib-resistant non-small cell lung cancer (NSCLC). Materials and Methods In our study, the relative protein expression of glucose transporter 1 (GLUT1), hexokinase-2 (HK2) and lactate dehydrogenase A (LDHA) was detected by Western blot, while the expression of hsa_circ_0002130 and microRNA-498 (miR-498) was detected by quantitative real-time PCR (qRT-PCR). The biological functions of hsa_circ_0002130 in osimertinib-resistant NSCLC were analyzed by cell viability assay, flow cytometry analysis, luciferase reporter assay, RNA pull-down assay, and tumor xenograft model in vivo. Moreover, glucose uptake, lactate production and extracellular acidification (ECAR) levels were measured by glucose uptake colorimetric assay kit, lactate assay kit II, and Seahorse Extracellular Flux Analyzer XF96 assay, respectively. hsa_circ_0002130 identification and localization were confirmed by RNase R digestion and subcellular localization assay, respectively. Exosomes were isolated from the sera collected from NSCLC patients and identified using a transmission electron microscopy and nanoparticle tracking analysis. Results Osimertinib-resistance was closely related to glycolysis. hsa_circ_0002130 was highly expressed in osimertinib-resistant NSCLC cells and hsa_circ_0002130 deletion inhibited osimertinib-resistance both in vitro and in vivo. Moreover, hsa_circ_0002130 targeted miR-498 to regulate GLUT1, HK2 and LDHA. The inhibitory effects of hsa_circ_0002130 deletion on osimertinib-resistant were reversed by downregulating miR-498. Importantly, hsa_circ_0002130 was upregulated in serum exosomes from osimertinib-resistant NSCLC patients. Conclusion Our findings confirmed that hsa_circ_0002130 served as a promotion role in osimertinib-resistant NSCLC. Introduction Lung cancer is one of the most common human cancer with a high mortality rate in worldwide, and the incidence and death cases are 1.8 million new cases and 1.6 million death cases, respectively. 1,2 Non-small cell lung cancer (NSCLC) is a type of lung cancer accounting for about 85%. 3 Despite the therapeutic methods had achieved continuous improvements, the relapse and mortality were still the severe problems for NSCLC patients. Moreover, due to the delay in NSCLC diagnosis, the 5-year survival rate is extremely low that remains about 10%-15%. 4 Currently, chemotherapy, radiotherapy, surgical excision, biological immunotherapy, and targeted molecular therapy were the main therapies for NSCLC patients, and EGFR-targeted therapy among these therapies for NSCLC gained wide attention worldwide. 5,6 Osimertinib, a third-generation EGFR-tyrosine kinase inhibitor, has an effective prolongation for the survival of NSCLC patients at the advanced stage. 7 However, the resistance to osimertinib in advanced NSCLC patients is inevitable. Thus, the exploration of the more molecular targets for osimertinib-resistant NSCLC patients is meaningful. Circular RNAs (CircRNAs), a novel group of endogenous, abundant, and conserved non-coding RNAs, are characterized by a circular structure without 3ʹ poly (A) tails and 5ʹ caps. 
8,9 The covalently closed loop structure of circular RNA is generated through "back-splicing" with a downstream splice donor connected to an upstream splice acceptor, which made circRNAs more stable and resistant to the degradation of exonucleases than their linear counterparts. 10 Growing evidence indicated that circRNAs exerted crucial effects in multiple biological processes, including cell survival, proliferation, differentiation, metastasis and apoptosis. 11,12 CircRNAs were confirmed to combine with microRNAs (miRNAs) and served as sponges for miRNAs to regulate gene expression via a circRNA/miRNA/messenger RNA (mRNA) axis in human cancers, including NSCLC. 13,14 For example, Han and his colleagues demonstrated that the oncogene circ-RAD23B could facilitate NSCLC progression, including cell growth and metastasis, through sponging miR-653-5p and miR-593-3p to regulate TIMA1 and CCND2, respectively. 15 A novel circular RNA hsa_circ_0002130 upregulation was observed in osimertinib-resistant NSCLC cells. 16 However, the potential mechanisms and functional effects of hsa_circ_0002130 in NSCLC remain unclear. The aerobic glycolysis (Warburg effect), which could enhance tumor cell growth, metastasis and progression with facilitated lactate production and glucose uptake, is a feature of human cancer cells. 17 Glucose transporter 1 (GLUT1) is a regulator in the increasing glucose uptake in glycolysis, which was revealed as a treatment target and diagnostic marker for various human cancers. 18 Hexokinase 2 (HK2) is another critical driver of glycolysis, which is an enzyme that has the ability to convert glucose to glucose-6-phosphate participating in the rate-limiting step in glycolysis. 19,20 Lactate dehydrogenase A (LDHA), which can catalyze pyruvate into lactate, was involved in the final step in glycolysis and confirmed to be associated with the prognosis of NSCLC patients. 21 The detail mechanisms of the regulators, including GLUT1, HK2 and LDHA, are largely unknown, and the role of glycolysis in osimertinibresistant NSCLC needs further investigated. Exosomes are small intraluminal vesicles that generated and secreted by various types of cells with a diameter of about 30-150 nm, which can deliver bioactive cargoes, such as proteins, lipids, nucleic acids, circRNAs, long noncoding RNAs, miRNAs, mRNAs. 22,23 Accumulating evidence suggested that exosomes served as vital mediators in cell-to-cell communication and exhibited critical effects on tumor growth and progression. 24 Multiple studies demonstrated that exosomes could be secreted from cancer cells, and circRNAs were observed to be enriched in exosomes that might be diagnosis markers for cancers. 25 However, little is known about the functional roles of secreted circRNAs in osimertinib-resistant NSCLC. In the present paper, we analyzed the expression of hsa_circ_0002130 in osimertinib-resistant NSCLC cells and exosomal hsa_circ_0002130 expression in osimertinib-resistant NSCLC patients. Moreover, we explored the effects of hsa_circ_0002130 on tumor growth and metastasis. Mechanistically, we further investigated the potential mechanisms of hsa_circ_0002130 in osimertinib-resistant NSCLC progression. Tissue Samples Tissue samples were isolated from 28 osimertinib-resistant NSCLC patients (non-response) and 32 osimertinibsensitive NSCLC patients (response), who were underwent surgery at Huaihe Hospital of Henan University. Written informed consents were obtained from all participates. 
This research was approved by the Human Research Ethics Committee of Huaihe Hospital of Henan University. Cell Lines and Cell Culture The two lung adenocarcinoma cell lines HCC827 and H1975 purchased from American Type Culture Collection (Manassas, VA, USA) were cultured in Roswell Park Memorial Institute 1640 (RPMI-1640) medium (Gibco, Carlsbad, CA, USA) supplementing with 10% fetal bovine serum (FBS, Invitrogen, Carlsbad, CA, USA). 100 μg/mL streptomycin and penicillin (Invitrogen) were prior to adding into the RPMI-1640 medium to culture the cells. All cells were cultured in an incubator with the standard culture condition of 5% CO 2 at 37°C. Osimertinib (AstraZeneca, Milan, Italy) was dissolved in dimethyl sulfoxide, followed by adding into the cell culture medium. Osimertinibresistant HCC827 and H1975 cells (HCC827/OCR and H1975/OCR cell lines) were established as previously described using a dose-escalation method. 16 Calculation of Half-Maximal Inhibitory Concentration (IC50) HCC827, H1975, HCC827/OTR and H1975/OTR cells with a density of 1 × 10 4 cells per well were seeded into the 96-well plates. Then, the different concentrations of osimertinib including 0.0001 μM, 0.001 μM, 0.01 μM, 0.1 μM, 1 μM, 10 μM were added, respectively. After the treatment of osimertinib for 48 h, the absorbance of the cells on each well was detected at 450 nm by cell counting kit-8 (CCK-8) assay. IC50 represents the concentration of a drug that is required for 50% inhibition of cell growth, and IC50 was calculated using SPSS 18.0. Determination of Glucose Uptake and Lactate Production The level of glucose uptake was determined by glucose uptake colorimetric assay kit (BioVision, Milpitas, CA, USA). Briefly, cells with a density of 1 × 10 4 cells were seeded, and subsequently cultured with 100 mL Krebs-Ringer-Phosphate-HEPES (KRPH) buffer containing 2% albumin from bovine serum, followed by adding 10 mM 2-deoxyglucose (2-DG). The samples were collected for the analysis of the glucose uptake level. For the determination of lactate production, cells were seeded into the 96-well plate and starved using non-serum medium for 24 h. Then, the cells were suspended in the buffer from lactate assay kit II (BioVision). The glucose uptake colorimetric assay kit and lactate assay kit II (BioVision) were also used for the determination of glucose uptake and lactate production in tumor tissues from the nude mice according to the manufacturer's instructions. Measurement of Extracellular Acidification (ECAR) The level of ECAR was determined by the Seahorse Extracellular Flux Analyzer XF96 (Seahorse Bioscience, North Billerica, MA, USA) in NSCLC resistant and sensitive cells. 2 × 10 4 NSCLC cells were cultured in the XF96well plate overnight. Then, the cultured NSCLC cells were sequentially cultured with glucose, oligomycin (oxidative phosphorylation inhibitor), and 2-DG (glycolytic inhibitor). Seahorse XF-96 Wave software was used to analyze the data and ECAR detection was represented as mpH/min. Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) Total RNA was extracted from NSCLC cells or exosomes using Trizol reagent (Invitrogen). Briefly, total RNA was reverse-transcribed into complementary DNA using SuperScript III RT (Invitrogen), and then qRT-PCR was performed to determine the amplified transcript levels of the relative specific genes. U6 and GAPDH was used as the internal loads to normalize the expression of miR-498 and hsa_circ_0002130, respectively. 
The primer sequences of the relative genes were as follows, hsa_circ_0002130: CCACGTGGGAGATTCTGG (sense) and ACGTTCCAC AGCCAGCTC (antisense), miR-498: TTTCAAGCCAGG GGGCGTTTTTC (sense) and GCTTCAAGCTCTGG AGGTGCTTTTC (antisense), U6: CTCGCTTCGGCAGC ACA (sense) and AACGCTTCACGAATTTGCGT (antisense), GAPDH: AAGGCTGAGAATGGGAAAC (sense) and TTCAGGGACTTGTCATACTTC (antisense). The relative expression of miR-498 and hsa_circ_0002130 was measured by the 2 −ΔΔCt method. RNase R Digestion Total RNA isolated from HCC827/OTR and H1975/OTR cells were incubated with 6 units of RNase R (Epicenter Biotechnologies, Shanghai, China). After total RNA and RNase R incubation for 15 min at 37°C, the expression of hsa_circ_0002130 was examined using qRT-PCR. Subcellular Localization Cytoplasmic & Nuclear RNA Purification Kit (Norgen Biotek Corp., Belmont, MA, USA) was used to measure the localization of hsa_circ_0002130. Firstly, the Lysis Buffer J was used to lyse the cells, and then cell lysates were centrifugated. Subsequently, the nuclear RNA and cytoplasmic RNA were added into anhydrous ethanol and Buffer SK, respectively. The cytoplasmic RNA and nuclear RNA were eluted by the spin column. Finally, the proportion of hsa_circ_0002130 in cytoplasmic and nucleus fractions was determined using qRT-PCR. Cell Viability The proliferation abilities of HCC827, H1975, HCC827/ OTR and H1975/OTR cells were determined by CCK-8 assay. Briefly, the cells were plated into the 96-well plates and incubated for 24 h. Then, 10 μL CCK-8 solution (Dojindo, Tokyo, Japan) was added and the absorbance was measured at 450 nm. Tumor Model The 6-8 weeks female BALB/c nude mice were bought from Beijing Vital River Laboratory Animal Technology (Beijing, China). HCC827/OTR cells were stably transfected with sh-circ #1 or sh-NC. Approximately 2 × 10 6 sh-circ #1/sh-NC-infected HCC827/OTR cells were subcutaneously injected into the right flank of the nude mice. After 6 days, the mice were treated with osimertinib (5.0 mg/kg/d) by oral administration. All the mice were classified into three groups (5 per group): sh-NC group, sh-NC + osimertinib group, and sh-circ #1 + osimertinib group. The tumor volume was detected from 6 day every 3 days using a caliper and the tumor weight was detected after the mice euthanasia at 27 day. All the animal studies were performed in line with the Guide for the Care and Use of Laboratory Animals and approved by Huaihe Hospital of Henan University Experimental Animal Ethics Committee. RNA Pull-Down Assay The biotin-coupled miR-498 probe and control oligo probe were synthesized by RiboBio (Guangzhou, China). Briefly, HCC827/OTR and H1975/OTR cells were lysed using lysis buffer. Cell lysates were incubated with the biotincoupled miR-498 probe or biotin-coupled NC probe for 1 h at room temperature, and then the remaining cell lysates were cultured with streptavidin magnetic beads (Life Technologies, Mountain View, CA, USA). Subsequently, the lysis buffer was used to wash the beads. Finally, RNAs bound to the beads were extracted using TRIzol Reagent (Invitrogen), and then analyzed using qRT-PCR. Exosome Isolation Serum samples were collected from NSCLC patients who underwent surgical procedures at Huaihe Hospital of Henan University. All patients participating in the present research signed the written informed consent. The research was approved by the Human Research Ethics Committee of Huaihe Hospital of Henan University. Exosome isolation was essentially carried out as previously described. 
26 Briefly, the serum samples were irradiated for 48 h, and the medium was collected and centrifuged. The medium was centrifuged at 2000 × g for 10 min, and then 10,000 × g for 30 min for depleting cell debris. Then, exosomes were extracted using ultracentrifugation at 120,000 × g for 60 min. The pellet containing exosomes was washed using phosphate-buffered saline (PBS). Finally, the exosomes were suspended in PBS and stored at −80°C until quantitation. The Optima L-100XP ultracentrifuge (Beckman Coulter, Brea, CA, USA) was used for the ultracentrifugation. And the Nanoparticle tracking analysis (NTA) was carried out using a Nanosight NS300 instrument (Malvern Instruments, Malvern, UK). Statistical Analysis Data were reported as the mean ± the standard deviations (SD) and analyzed by SPSS 18.0 software. The significance of difference between groups was evaluated using Student's t-test or One-way ANOVA. Pearson correlation analysis was used to analyze the relationship between miR-498 and GLUT1, HK2 and LDHA. A value of P < 0.05 was regarded as a statistically significant difference. Glycolysis Was Enhanced in Osimertinib-Resistant NSCLC Cells The osimertinib-resistant HCC827 cell line (HCC827/ OTR) was established from the parental HCC827 cell line by gradually increasing the concentrations of osimertinib from 20.92 nM to 10 uM for six months. Meanwhile, H1975/OTR cell line was established from the parental H1975 cell line by gradually increasing the concentrations of osimertinib from 10.87 nM to 10 uM for six months. IC50 values of osimertinib for HCC827 and HCC827/OTR cells were 0.02092 uM and 1.278 uM, respectively. IC50 values of osimertinib for H1975 and H1975/OTR cells were 0.01087 uM and 0.5321 uM, respectively ( Figure 1A). Subsequently, the glucose uptake and lactate production were detected in NSCLC sensitive and resistant cells. As shown in Figure 1B, the level of glucose uptake was significantly increased in HCC827/ OTR and H1975/OTR cells compared with HCC827 and H1975 cells. Consistently, the level of lactate production was dramatically upregulated in HCC827/OTR and H1975/OTR cells relative to that in HCC827 and H1975 cells ( Figure 1C). We also determined the ECAR level in NSCLC sensitive and resistant cells. We found an enhanced ECAR level in HCC827/OTR and H1975/OTR cells in comparison to HCC827 and H1975 cells ( Figure 1D and E). Moreover, GLUT1, HK2 and LDHA were higher in HCC827/OTR and H1975/OTR cells than that in HCC827 and H1975 cells ( Figure 1F and G). All these results indicated that the glycolysis was facilitated in osimertinib-resistant NSCLC cells. hsa_circ_0002130 Was Upregulated in Osimertinib-Resistant NSCLC Cells We discovered that hsa_circ_0002130 was increased in HCC827/OTR and H1975/OTR cells (Figure 2A). Moreover, we found that hsa_circ_0002130 was derived from the host gene C3 and consisted of 2 exons (exon [18][19], which was cyclized with the head-to-tail splicing of exon 18 and exon 19 according to circBase. The exist of back-splice junction was confirmed by our sanger sequencing ( Figure 2B). Moreover, we performed RNase R digestion assay to verify the circular nature of hsa_circ_0002130. The results confirmed that hsa_circ_0002130 was indeed circRNA, which was resistant to RNase R digestion ( Figure 2C). Subsequently, we measured the subcellular localization of hsa_circ_0002130 by nuclear and cytoplasmic separation experiments. 
The result suggested that hsa_circ_0002130 was mostly located in the cytoplasm of HCC827/OTR and H1975/OTR cells ( Figure 2D). Besides, the knockdown efficiency of siRNAs against hsa_circ_0002130 was measured by qRT-PCR. The data showed that sh-circ #1, sh-circ #2 and sh-circ #3 could significantly downregulate the expression of hsa_circ_0002130 in both HCC827/OTR and H1975/OTR cells ( Figure 2E). Furthermore, sh-circ #1 possessing the best knockdown efficiency was selected for the following experiments. hsa_circ_0002130 Knockdown Inhibited Cell Proliferation, Glycolysis, and Enhanced Cell Apoptosis in Osimertinib-Resistant NSCLC Next, we explored the effects of hsa_circ_0002130 on osimertinib-resistant NSCLC. MTT assay indicated that knockdown of hsa_circ_0002130 significantly inhibited cell viability in both HCC827/OTR and H1975/OTR cells ( Figure 3A). hsa_circ_0002130 deletion promoted cell apoptosis in HCC827/OTR and H1975/OTR cells ( Figure 3B and C). Moreover, the levels of glucose uptake and lactate production were measured. As shown in Figure 3D and E, downregulation of hsa_circ_0002130 markedly decreased the levels of glucose uptake and lactate production in both HCC827/OTR and H1975/OTR cells. Analogously, the level of ECAR was also confirmed to be downregulated in HCC827/OTR and H1975/OTR cells transfected with sh-circ #1 ( Figure 3F and G). Western blot analysis suggested that downregulation of hsa_circ_0002130 significantly attenuated the expression of GLUTI, HK2 and LDHA in both osimertinib-resistant HCC827 and H1975 cells ( Figure 3H and I). Furthermore, the data in Supplement Figure 1A showed a successful overexpression transfection efficiency of oe-hsa _circ_0002130 in both HCC827/OTR and H1975/OTR cells. Overexpression of hsa_circ_0002130enhanced cell viability (Supplement Figure 1B), but had no effect on cell apoptosis in HCC827/OTR and H1975/OTR cells (Supplement Figure 1C). Besides, hsa_circ_0002130 upregulation increased the levels of glucose uptake, lactate production and ECAR (Supplement Figure 1D-G). Overexpression of hsa_circ_0002130 also elevated the protein expression of GLUT1, HK2 and LDHA (Supplement Figure 1H-I). Taken together, our results indicated that hsa_circ_0002130 knockdown could inhibit osimertinib-resistance in HCC827/ OTR and H1975/OTR cells. hsa_circ_0002130 Knockdown Suppressed Tumor Growth in vivo To further investigate the promotion effects of hsa_circ_0002130 in osimertinib-resistant NSCLC, the nude mice injected with sh-NC/sh-circ #1 HCC827/OTR cells was treated with osimertinib. As described in Figure 4A, tumor volume was significantly inhibited by downregulating hsa_circ_0002130 in osimertinib-resistant NSCLC mice. Consistently, we also demonstrated that knockdown of hsa_circ_0002130 could significantly inhibit tumor weight ( Figure 4B). Moreover, the glycolysis-related lactate production was determined. Our data showed that downregulation of hsa_circ_0002130 dramatically decreased the level of lactate production ( Figure 4C). The expression of hsa_circ_0002130 was significantly repressed by hsa_circ_0002130 knockdown ( Figure 4D). Then, we detected the expression of glycolysisassociated proteins including GLUT1, HK2 and LDHA. We observed that the expression of GLUT1, HK2 and LDHA was significantly suppressed by downregulating hsa_circ_0002 130 ( Figure 4E). All these results confirmed that hsa_circ_0002130 served as an oncogene in osimertinibresistant NSCLC. 
hsa_circ_0002130 Sponged miR-498 to Regulate GLUT1, HK2 and LDHA Expression

A Venn diagram showed that miR-498 harbored binding sites for hsa_circ_0002130, GLUT1, HK2 and LDHA (Figure 5A). CircInteractome predicted that hsa_circ_0002130 contains binding sites for miR-498, and starBase predicted that miR-498 has binding sites in GLUT1, HK2 and LDHA (Figure 5A). The expression of miR-498 was reduced in HCC827/OTR and H1975/OTR cells compared with the HCC827 and H1975 cells (Figure 5B). A dual-luciferase reporter assay indicated that relative luciferase activity was attenuated in HEK293T cells co-transfected with miR-498 mimic and WT-hsa_circ_0002130, WT-GLUT1, WT-HK2 or WT-LDHA compared with the corresponding control groups, whereas the luciferase activity of the MUT-hsa_circ_0002130, MUT-GLUT1, MUT-HK2 or MUT-LDHA vectors was unchanged in HEK293T cells transfected with miR-498 mimic (Figure 5C-F). Meanwhile, the results demonstrated that hsa_circ_0002130, GLUT1, HK2 and LDHA were effectively enriched by miR-498 in HCC827/OTR and H1975/OTR cells (Figure 5G). The data indicated that the expression of miR-498 was significantly increased by downregulating hsa_circ_0002130 in both HCC827/OTR and H1975/OTR cells (Figure 5H). Besides, we confirmed efficient overexpression and knockdown of miR-498 by the miR-498 mimic and inhibitor, respectively, in both HCC827/OTR and H1975/OTR cells (Figure 5I). Furthermore, the expression of GLUT1, HK2 and LDHA was dramatically inhibited by upregulating miR-498.

miR-498 Deletion Reversed the Effects of hsa_circ_0002130 Knockdown on Osimertinib-Resistance in Osimertinib-Resistant NSCLC Cells

To further explore the underlying mechanism of hsa_circ_0002130 in osimertinib-resistant NSCLC, we performed rescue experiments by co-transfecting sh-circ #1 and miR-498 inhibitor into HCC827/OTR and H1975/OTR cells. Cell viability of HCC827/OTR and H1975/OTR cells was significantly inhibited by miR-498 mimic, and downregulation of miR-498 reversed the inhibitory effect of hsa_circ_0002130 deletion on cell viability (Figure 6A). Moreover, miR-498 upregulation significantly increased the number of apoptotic cells, while miR-498 deletion reversed the promotion effect of hsa_circ_0002130 downregulation on cell apoptosis (Figure 6B). Furthermore, we found that overexpression of miR-498 dramatically inhibited glucose uptake and lactate production, while the inhibitory effects of hsa_circ_0002130 downregulation on glucose uptake and lactate production were blocked by knockdown of miR-498 in both HCC827/OTR and H1975/OTR cells (Figure 6C and D). Consistently, the ECAR was significantly suppressed by overexpressing miR-498, while miR-498 deletion reversed the decrease in ECAR induced by downregulating hsa_circ_0002130 (Figure 6E and F). We also demonstrated that miR-498 overexpression could inhibit the expression of GLUT1, HK2 and LDHA, while miR-498 deletion significantly rescued the decrease in GLUT1, HK2 and LDHA expression induced by hsa_circ_0002130 downregulation (Figure 6G and H). Our results indicated that hsa_circ_0002130 contributes to osimertinib-resistance in osimertinib-resistant NSCLC cells by binding miR-498 to regulate the expression of GLUT1, HK2 and LDHA.

hsa_circ_0002130 Was Secreted by Exosomes from the Serum of NSCLC Patients

Finally, in the present study, sera were collected from 28 non-response NSCLC patients and 32 response NSCLC patients.
The isolated exosomes using sequential centrifugation were verified using electron microscopy ( Figure 8A). To confirm the purification of exosomes, the vesicles were determined by Nanosight NTA. The results of NTA analysis indicated that the diameter of the vesicles ranged from 30 to 140 nm that is in line with the size of exosomes ( Figure 8B). Western blot confirmed the presence of two well-known exosomal markers, CD81 and TSG101 ( Figure 8C). QRT-PCR analysis showed that hsa_circ_0002130 was enriched in serum exosomes derived from osimertinib-resistant NSCLC patients compared with that in osimertinibsensitive NSCLC patients ( Figure 8D). Moreover, we demonstrated that hsa_circ_0002130 had the diagnostic value of AUC=0.792 (P<0.001) ( Figure 8E). As shown in Figure 8F, the proportion of non-response NSCLC patients in NSCLC patients with high hsa_circ_0002130 expression was higher than that in NSCLC patients with low hsa_circ_0002130 expression. All data demonstrated that serum exosomal hsa_circ_0002130 was upregulated in osimertinib-resistant NSCLC patients. Discussion Cancer drug-resistance is still a difficult problem in human cancer clinical treatment. Osimertinib is an effective EGFRtyrosine kinase inhibitor for the advanced NSCLC patients, yet it is also inevitable to acquire resistance to osimertinib. 27 Previous studies suggested that cancer cells exerted facilitated effect on glycolysis for ATP generation, which promoted cancer development and progression. 28 Several studies indicated that glycolysis potently inhibited the apoptotic rate of multidrug-resistant cells, suggesting that a novel strategy of targeting glycolysis might be useful to overcome multidrug resistance. 29 In our study, we found that the levels of glucose uptake, lactate production and ECAR were significantly increased in osimertinib-resistant NSCLC cells. Moreover, the glycolysis-related proteins, including GLUT1, HK2 and LDHA, were dramatically upregulated in osimertinib-resistant NSCLC cells. Our data indicated that glycolysis exhibited a promotion role in osimertinib-resistant NSCLC. In the recent few years, noncoding RNAs, such as circRNAs and miRNAs, have been confirmed to be involved and played vital roles in verities of human pathophysiologic processes including tumorigenesis. 30 Despite the growing evidence is usable to verify the underlying mechanisms of circRNAs in the progression of malignant tumors, the knowledge is still limited. CircRNAs, such as circ_100876, circ_CCDC66 and circ_ITCH, were confirmed to be associated with development, lymph node metastasis, pathological grade, as well as drug resistance in lung cancer. [31][32][33] In this paper, we discovered that hsa_circ_0002130 was highly expressed in osimertinib-resistant NSCLC cells and knockdown of hsa_circ_0002130 could significantly exert inhibitory effects on cell growth and glycolysis, but exhibit an enhanced effect on cell apoptosis of osimertinib-resistant NSCLC cells. Moreover, hsa_circ_0002130 deletion also exerted inhibition effects on tumor growth in vivo. hsa_-circ_0002130 might be a novel treatment target for osimertinib-resistant NSCLC. Consistently, it has been discovered that hsa_circ_0002130 was also upregulated in ovarian endometriosis and pancreatic ductal adenocarcinoma. 34,35 Accumulating evidence demonstrated that circRNAs, a novel group of noncoding RNAs, served as miRNA sponges to regulate the downstream gene expression. 
36 For example, circ-ZFR could sponge miR-30a/miR-107 to modulate PTEN expression in gastric cancer, while circLARP4 acted as an inhibitor of gastric cancer progression by targeting miR-424-5p and regulating the expression of LATS1. 37,38 Liu and colleagues indicated that hsa_circRNA_103809 facilitated the development of lung cancer through sequestering miR-4302 to enhance the expression of MYC, suggesting that hsa_circRNA_103809 might be a prognostic biomarker for lung cancer patients. 39 Furthermore, circ-PRMT5 boosted the growth of NSCLC cells by increasing the expression of EZH2 through sponging miR-377/382/498. 40 More importantly, accumulating evidence has revealed that circRNAs can regulate gene expression by targeting miRNAs in drug-resistant cancers. 41 Consistently, we found that hsa_circ_0002130 could target miR-498 to regulate the expression of the glycolysis-related proteins GLUT1, HK2 and LDHA in osimertinib-resistant NSCLC cells. In our research, miR-498 overexpression significantly suppressed cell growth and glycolysis and enhanced the apoptotic ability of osimertinib-resistant NSCLC cells. Moreover, our data revealed that the suppressive effects of hsa_circ_0002130 deletion were reversed by downregulating miR-498 in osimertinib-resistant NSCLC cells.

Exosomes are membrane vesicles released by various cells into the extracellular microenvironment. 42 Accumulating evidence suggests that exosomes may be crucial mediators in many types of human cancers, participating in intercellular communication through the transfer of circRNAs, miRNAs, mRNAs and proteins. 43 A previous study indicated that exosomal circRNAs act as molecular biomarkers for the monitoring, prognosis and diagnosis of diseases, including human cancer. 44 For instance, exosomal circ-PDE8A facilitated tumor growth and invasion in pancreatic cancer. 45 Notably, we discovered that enhanced expression of hsa_circ_0002130 could be detected in the serum exosomes of osimertinib-resistant NSCLC patients.

Figure 7. miR-498 was decreased, while hsa_circ_0002130, GLUT1, HK2 and LDHA were increased in osimertinib-resistant NSCLC patients. (A-F) The expression of hsa_circ_0002130, miR-498, GLUT1, HK2 and LDHA was detected by qRT-PCR. (F) The correlation between hsa_circ_0002130 and miR-498 mRNA in NSCLC tissues was analyzed (r = -0.74, P < 0.05). (G) A positive relationship between hsa_circ_0002130 and GLUT1 was observed in NSCLC tissues (r = 0.57, P < 0.05). (H) A positive relationship between hsa_circ_0002130 and HK2 was observed in NSCLC tissues (r = 0.49, P < 0.05). (I) A positive relationship between hsa_circ_0002130 and LDHA was observed in NSCLC tissues (r = 0.68, P < 0.05). *P < 0.05.

In summary, hsa_circ_0002130 expression was significantly enhanced in osimertinib-resistant NSCLC cells and in serum exosomes from osimertinib-resistant NSCLC patients, and it exerted promoting effects on osimertinib-resistance. Furthermore, downregulating hsa_circ_0002130 dramatically suppressed the progression of osimertinib-resistant NSCLC both in vitro and in vivo. hsa_circ_0002130 affected cell growth, apoptosis and glycolysis by sponging miR-498 to regulate GLUT1, HK2 and LDHA in osimertinib-resistant NSCLC cells.

Funding

This work was supported by the Project of the Kaifeng Science and Technology Bureau (Grant No. 1103420) and Major Science and Technology Projects of the Kaifeng Science and Technology Bureau (19ZD011 and 18ZD008).
How Cu(II) binding affects structure and dynamics of α -synuclein revealed by molecular dynamics simulations We report accelerated molecular dynamics simulations of α -Synuclein and its complex with two Cu(II) ions bound to experimentally determined binding sites. Adding two Cu(II) ions, one bound to the N-terminal region and one to the C-terminus, decreases size and flexibility of the peptide while introducing significant new contacts within and between N-terminus and non-A β component (NAC). Cu(II) ions also alter the pattern of secondary structure within the peptide, inducing more and longer-lasting elements of secondary structure such as β -strands and hairpins. Free energy surfaces, obtained from reweighting the accelerated molecular dynamics boost potential, further demonstrate the restriction on size and flexibility that results from binding of copper ions. Introduction α-Synuclein (αS) is a 140-residue protein that has been associated with Parkinson's disease (PD), where it has been found to accumulate in Lewy bodies and other pathological aggregates [1].Aggregation of αS has also been reported in Alzheimer's disease (AD), through the central hydrophobic non-amyloid-β component (NAC), residues 61-95, Fig. 1 [2].The spatiotemporal heterogeneity of intrinsically disordered proteins (IDPs) such as αS has been reported to be influenced by environmental factors, with function-related conformations with varied retention times, depending on the peptide and interactions with binding partners [3,4].Promotion of fibrillation of αS has been proposed as a result of various environmental factors, such as low pH and increased temperature, owing to a decrease in the diffusivity of the aggregates [5,6]. Transition metal ions [7][8][9][10][11], which have inspired this study, have been observed to affect fibril formation.αS has been suggested to aggregate intracellularly as a response to divalent metal ions (Fe(II), Mn (II), Co(II), Ni(II) and in particular Cu(II)), bound to the two termini, with the removal of either coordinated metal resulting in disruption of aggregation [12,13].Some reports have been published on the role of metal ions in the promotion of free-radical mediated oxidative processes [14][15][16], with most focusing either on the specific regions interacting with the ions [8,10,11,[17][18][19][20][21][22][23][24], or the possible self-oligomerisation and aggregation mechanisms involving the metal ions [9,12,25,26].Studies employing small-angle X-ray scattering (SAXS), nuclear magnetic resonance (NMR), circular dichroism (CD) and electron paramagnetic resonance (EPR) spectroscopy, on metal ion-αS interactions, implicate regions M 1 DVFMKGLS 9 , V 48 AHGV 52 and D 119 PDNEA 124 as the metal ion-binding sites [8,10,20,27].Further research into the specific binding modes of these metal ions and the aforementioned sites, proposed one of those to consist of macro-chelation between residues M1, D2 and H50, and a second site encompassing D119, D121, N122 and E123 [8,11].We note that Cu(II)-coordination at the N-terminus has also been proposed to occur through either coordination with M1, D2 and H 2 O, or V49, H50 and H 2 O [28,29].The former has also been thought to exist in the membrane-bound form of αS, while the latter has been linked to the acetylated form of αS [30,31].In its monomeric WT form, however, these binding modes have been put into dispute from electron spin-echo envelope modulation (ESEEM) spectroscopy studies [32][33][34], thus here we focus only on the more established 
macro-chelated coordination mode [20,[32][33][34][35][36].This coordination mode has also been proposed to happen in an interpeptide fashion, although that aspect has not been explored herein [37].The coordination of the first site occurs in a 3N1O fashion, while the C-terminal binding involves a 4O coordination mode [8,11,18,38].These binding modes are presented in Fig. 2. One of the anchoring residues for metal ions, especially Cu(II), binding on the peptide chain is H50.In this context, it is notable that a H50A mutant results in quite different aggregation profile on coordination with Cu(II) compared to the wild type [12,20,39]. Computational studies on the binding of Cu(II) to αS are less prominent in literature compared to experiment, with most of the published ones focusing on modelling the free peptide [17,18,36,40].One of the challenges to be addressed is the parameterisation of the metal ion.A well-established method to approach this issue is the use of density functional theory (DFT), which properly accounts for the electronic effects of the metal ion.However, the flexibility of IDPs and requirement for sampling the many conformations accessible to αS under biological conditions, mean that the computational overhead of DFT, or even hybrid QM/MM, becomes prohibitive.A more tractable approach is to extract molecular mechanics (MM) force field parameters from DFT calculations [41], and then to use these to perform molecular dynamics (MD) simulations.The intrinsically disordered nature of the peptide has to be taken into consideration when choosing the force field [40,42,43]. Here, we employ ff03ws [42] with the Onufriev, Bashford, Case (OBC) implicit solvent model, having performed an evaluation of the combination in a previous study involving metal-free α-Synuclein [44], showing that it reproduces experimental values of size, secondary structure and backbone chemical shift.The choice of implicit solvent also addressed the generally low radius of gyration (R g ) values reported both in our prior computational study, but also throughout the literature, where explicit solvents are used, reporting around 10 Å lower expansion [45] compared to experimental findings [5,46].A recent study [47], analysed the MD data amounting to 73 μs produced by the D. E. 
Shaw Research (DESRES) group [48], extracting the major conformational components they compared these to experimental results from single-molecule force spectroscopy [49,50], single-molecule Förster resonance energy transfer [51] and other experimental and computational results, concluding the experimental classification of random coil interactions was the most populated, while the corresponding clusters from the MD simulations, appeared to underestimate the random coil and overestimate the strong-interacting experimental populations.The MD simulations by DESRES were performed in explicit solvent, maintaining the overestimation of the more restricted conformation of αS, when simulated using explicit models [52,53].Two QM/MM studies examined the coordination of Cu(II) in the Nterminus of αS, one of them looking at the M1-D2-H 2 O binding site [54], believed to result in the more stable Cu(II)-complex, owing to the formation of a (5,6)-joined chelate ring from (NH2, N − , β-COO − ) [55]; while the other focusing on the V48-H50 region [39], involved in the Cu (II)-coordination of the N-terminally acetylated αS [56].One of the computational studies employing MD simulations to study the copperbound peptide involved a fragment of αS, simulating the first 12 residues, coordinating the copper ion on the first two amino acids and a water molecule.The temperature replica-exchange molecular dynamics (T-REMD) simulations performed in that study, employed the ff03, CHARMM27, OPLS-AA and GROMOS43A1 force fields in explicit solvent [24].Other computational studies on the full peptide have used coarse-grained molecular dynamics (CG-MD), through scaling seen in the ff03ws force field applied in SIRAH [17,23], and ab initio [18,22] and MD simulations using the CHARMM27 in explicit solvent [18].These studies have highlighted the high affinity of Cu(II) coordination to Asp121 and His50, a feature we have also seen through the ab initio calculation of force constants (vide infra).Results on the secondary structure and R g from those simulations, present a decrease in the helical characteristics upon copper binding with a corresponding increase in random coil, while also noting small changes in R g distributions between unbound and copper-bound peptide. 
Computational methods Molecular dynamics simulations were performed using the AMBER16 package [57].The two metal sites, where Cu(II) interacts with the peptide, were parameterised using the metal centre parameter builder (MCPB.py)program [41], using angle, bond and charge parameters obtained through Gaussian09 [58] using B3LYP/6-31G(d) [59].The Seminario [60] method and restrained electrostatic potential (RESP) fitting scheme [61][62][63], were utilised to obtain harmonic force constants and atomic charges from DFT calculations.The LEaP [64] function was then used to combine these with the force field parameters.The systems were solvated with the Onufriev, Bashford, Case (OBC) modification to the generalised Born (GB) model [65][66][67].The selection for the implicit solvent was made after considering the underestimation of the effective radii, which may come as a result of using macromolecules, owing to the treatment of vacuum-filled crevices as being filled with water, especially where 'buried' atoms, in the central region of the peptide, are concerned [67].The ff03ws force field was used here, after our group previously assessed its performance on the unbound α-Synuclein.The use of implicit solvent with this force field was also evaluated in the same study, where explicit solvent simulations appeared to not adequately match experimental findings, especially for radius of gyration, secondary structure and electrostatic interactions [17,18,36,40,44,68].Key observations made therein on the folding characteristics are also discussed here with regards to experimental evidence on the metal-free system.The systems studied here were modelled in their extended conformation in MOE, where they were also minimised through the ligandfield molecular mechanics (LFMM) force field embedded within Dom-miMOE [69].Having minimised the two systems, three individual conventional molecular dynamics (cMD) simulations were performed each for 300 million steps at a 2 fs timestep.The MD simulations were performed in the NVT ensemble 310 K, using the Langevin thermostat [70].The SHAKE algorithm [71] was used to impose holonomic restraints on bonds to hydrogen, restricting them to their equilibrium length.The cMD simulations are not reported here, since we previously established the superiority of the simulations performed using accelerated MD (aMD) in this system [72].The boost potential was calculated from the mean total potential energy, imposing a bias in the simulations, pushing the peptides out of local minima in which they may get stuck in cMD simulations.Three individual aMD simulations were performed for 600 ns each, starting from the final conformation and velocity from each of the cMD trajectories.The parameters of the simulations were otherwise kept identical to the cMD simulations.Free energy landscape plots were constructed through reweighting [72], and the carma package [73] was used to obtain clusters from principal component analysis (PCA) of the cartesian coordinates of Cα, and the FindGeo [74] tool was used to assess possible geometries around the metal centres.The rest of the analysis performed using the cpptraj [75] tool, acquiring data on secondary structure, root mean square fluctuation (RMSF), salt bridges, hydrogen bonding, RMSD and radius of gyration (R g ), without reweighting of the trajectories. 
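The per-frame measures listed above (RMSD, radius of gyration, RMSF and secondary structure) were extracted with cpptraj. As a rough illustration of how such quantities can be obtained, the following Python sketch uses MDTraj instead; the file names prod.nc and system.prmtop are hypothetical placeholders, and this is not the analysis script actually used in the work.

```python
import mdtraj as md
import numpy as np

# Hypothetical file names; the paper's trajectories were processed with cpptraj instead
traj = md.load("prod.nc", top="system.prmtop")
ca = traj.topology.select("name CA")

# Superpose all frames on the Calpha atoms of frame 0 (used for the RMSF below)
traj.superpose(traj, frame=0, atom_indices=ca)

# RMSD to the first frame over Calpha atoms (nm)
rmsd_nm = md.rmsd(traj, traj, frame=0, atom_indices=ca)

# Radius of gyration per frame (nm)
rg_nm = md.compute_rg(traj)

# Per-residue RMSF of Calpha atoms around the mean structure (nm)
xyz_ca = traj.xyz[:, ca, :]
mean_xyz = xyz_ca.mean(axis=0)
rmsf_nm = np.sqrt(((xyz_ca - mean_xyz) ** 2).sum(axis=2).mean(axis=0))

# Simplified secondary structure (H/E/C) per residue and frame
dssp = md.compute_dssp(traj, simplified=True)
helix_fraction = (dssp == "H").mean(axis=0)   # fraction of frames each residue is helical

print(f"Mean Rg: {10 * rg_nm.mean():.1f} A, mean Calpha RMSD: {10 * rmsd_nm.mean():.1f} A")
print(f"Most helical residue is helical in {helix_fraction.max():.0%} of frames")
```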
Parameterisation of metal sites The metal sites of the modelled peptide were parameterised using the MCPB.pytool [41], after establishing the metal ion binding sites from literature survey of experimental in vitro and in silico studies on the coordination of Cu(II) [8,10,11,22,76].DFT calculations were then performed to assign harmonic bonds between the metal ions and the atoms involved in their coordination. Looking at the values from the QM calculations, Table 1 and Table S1, a relatively consistent force is imposed on the ligating atoms in the N-terminus.This is not the case, however, in the C-terminal metal site, where smaller force constants are found for N122 and E123.This allowed greater flexibility of the ligating atoms in these residues, yielding closer distances to the metal centre during the MD simulations, albeit with more fluctuations in the bond distance, compared to the other coordinating atoms, Table S3.The stability of the distance of atoms in the metal-coordination sites, can be seen from Fig. S2, where the distances are maintained within bonding length; even for the O from Asn122, where the lowest force constant is seen, the bond distance is seen fluctuating at a high degree at the beginning of one out of the three trajectories. Accelerated MD on free and copper-bound αS The MD simulations presented here were performed using the ff03ws force field in combination with the OBC implicit solvent.The evaluation of the folding characteristics in the simulations of the metal-free system against experimental evidence, Table 2, suggested the suitability of this combination for the simulation of αS over alternative force fields and explicit solvents, as seen in our previous assessment of such systems [44].From the survey presented in that table, it can be seen that there is no collective agreement in the literature regarding the folding characteristics of this peptide, even within each of the studies, in certain cases reporting a great potential deviation from their reported secondary structure percentages.We do, however, see good agreement between our calculated values and those from the ATR-FTIR experiment [77], where a 3% β-sheet character is reported and with the helicity at 35%.A more in-depth discussion of our results on the secondary structures of the simulated systems is given further below.Additionally to the secondary structure propensities reported in the experiments cited below, we further include a comparison of Cα chemical shifts, Fig. 3, where we find a mean deviation of 1.42% from experiment, hinting towards a great similarity in the local covalent interactions of the experimental and simulated systems.Considering these remarks, we are confident in the capacity of our chosen force field and solvent in providing accurate predictions of the two systems we examine. Equilibration of the MD simulations performed here, was assessed from the RMSD plots, given in Fig. 4, with the cumulative average settling to a plateau, despite the fluctuations observed in the RMSD values over the length of the simulations.This, along with the length of the cMD trajectories, where the aMD simulations were extended from, are enough to ascertain the stability of the systems. Analysis of the Cu(II) coordination sites (Fig. 
S2, Table S2 and Table S3) confirm that Cu-L distances and L-Cu-L angles are stable over the course of the entire trajectory. For the assessment of the most prominent geometry expressed by the peptide, clusters were created using the cartesian coordinates of the Cα atoms; the different clusters are given in Table S5. The coordination of ligating atoms in each of the metal sites, from the average cluster structure obtained from cartesian PCA analysis, is shown below. The conformational assembly of the metal sites, seen in Fig. 5 and Table S4, correlates with experimental observations, detecting a distorted square planar arrangement of atoms, with the high-affinity N-terminal coordination site maintaining a geometry that is not strongly distorted from idealised square planar [8,20,22,85]. Interaction with atoms neighbouring the ligands distorts the geometry in each of the sites, exerting repulsion on the equatorial positions. These atoms are a sidechain oxygen of Glu35 and the second oxygen in the Glu123 sidechain. The coordination of Asp2 in a bidentate fashion adds to the strain on the geometry of the first metal site, further distorting the square planar geometry. R g values over the aMD trajectories are shown in Fig. 6, along with the cumulative average. The rolling standard deviation (with a 25 ns window) of the simulations was also plotted, Fig. S3, showing the change of SD over time and providing further evidence of equilibration over the course of each run. From these plots, it is evident that the system fluctuates between a range of R g values, with a SD of 4.58 Å for the free peptide and 4.31 Å for the copper-bound one. The distribution plots of the R g for each of the runs are given in Fig. S4, where all runs exhibit peak values close to the average, thus presenting three correlating trajectories for each system. Upon comparison of the R g data, an overall increase in the compactness of the peptide is seen upon metal ion binding, with the Cu(II)-αS being 4.5 Å smaller when compared to the free αS. The standard deviation in R g, as well as the maximum and minimum values, are also smaller in the Cu-bound peptide, indicating a more compact and less flexible trajectory in the presence of two metal ions. This comes as a result of the closer contacts developed within the peptide, especially from the macro-chelation formed by the coordination of the first two residues and His50 in the first metal site. This increase in the intramolecular interactions is seen not only in the Cα contact maps (vide infra Fig. 15), but also in the slightly increased sphericity of the copper-coordinated system, Table S6. The sphericity of the system was assessed after calculating the globularity, by dividing the smallest by the largest diagonalized eigenvalue of the R g tensor, whereby Globularity = λ x /λ z . This is also seen in the free energy landscape of R g against globularity, Fig. 7, where the free peptide, although visiting conformations up to 0.5, on average displayed lower globularity values. The copper-bound peptide appears to have a more constrained sampling of the conformational space, owing to the increased ordering of residues bound to metal centres, decreasing the flexibility of the overall system.
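The globularity measure introduced above, λ x /λ z from the diagonalised gyration tensor, can be evaluated per frame directly from the Cα coordinates. The sketch below assumes unweighted (geometric) coordinates, since the text does not state whether mass weighting was applied, and uses random coordinates only to stand in for one trajectory frame.

```python
import numpy as np

def globularity(coords):
    """Return lambda_x / lambda_z for one frame.

    coords: (n_atoms, 3) array of Calpha positions.
    The gyration tensor is built from coordinates relative to the geometric
    centre (no mass weighting assumed); the ratio of its smallest to largest
    eigenvalue is the globularity used in the text.
    """
    centred = coords - coords.mean(axis=0)
    gyration_tensor = centred.T @ centred / len(centred)
    eigvals = np.sort(np.linalg.eigvalsh(gyration_tensor))  # ascending order
    return eigvals[0] / eigvals[-1]

def radius_of_gyration(coords):
    """Rg of one frame, equivalent to the square root of the tensor trace."""
    centred = coords - coords.mean(axis=0)
    return np.sqrt((centred ** 2).sum(axis=1).mean())

# Example with random coordinates standing in for one frame (140 residues, Angstrom)
frame = np.random.default_rng(0).normal(size=(140, 3)) * 10.0
print(f"Rg = {radius_of_gyration(frame):.1f} A, globularity = {globularity(frame):.2f}")
```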
A significant contribution to the secondary structural characteristics of αS is attributed in or around motifs of repeating residue regions KTK (E/Q)GV, found between residues 32-37, 43-48 and 58-63.These repeats start in the N-terminus and extend into the NAC region of αS, and have been implicated before to be involved in the ordered arrangement of the peptide [86][87][88].Within these repeats, β-hairpin structures have been found in aMD trajectories for both the unbound and metal-bound peptide, corroborating the results found by Yu et al. [89], on the region where these are observed (residues 38-53) through the formation of anti-parallel β-sheets; Fig. 8 shows the presence of these folding elements from the clustered structures.An experimental study looking at the nucleation capacities of different regions within α-Synuclein, has reported the region encompassing residues 37-61 to act as a nucleationpromoter, possibly as a result of the β-hairpin assemblies [90].More recently, Y39 has been the focus of an experimental study, that concluded in the importance of the aromaticity in the folding mechanics of that region of the peptide [91].Here, we find that these structures appear more frequently in the Cu(II)-bound α-Synuclein trajectory compared to the unbound one, Fig. S5 and Fig. S6.Another region where β-hairpin structures have been found here, is between residues 63-72, expressed almost in twice as many frames as in the hairpin found between residues 38-53.The maximum time these have been found to last in each of the cases, are given in Table 4, with the copper-bound system exhibiting the greatest persistency in both of these regions.Potential intramolecular interactions, such as salt bridges between lysine and glutamine residues within these repeats, are discussed in more detail below. The secondary structure characteristics of the peptide are shown in Fig. 
9, Table 5 and Table 6.The amounts of the different structures in the systems indicate the ratio of β-characteristics in the different regions of the peptide remains consistent with experimental observations, with the highest percentage of sheets present in the NAC region [92].Despite the lack of experimental data on the secondary characteristics of the copperbound peptide, an in-depth evaluation of the free peptide, simulated here, has been given both in our past works [44], as well as the introduction of this section, with experimental findings from CD [78][79][80][81], Raman [82,83] and ATR-FTIR [77] spectroscopic techniques, and NMR data on the chemical shifts of the Cα within this system [84].Our findings showed agreement with values reported from the ATR-FTIR study (α-helices: 35%; β-sheets: 3%) [77], and α-helical percentages reported from CD experiments (α-helices: 19 ± 1% [81] 1.5% [80]).The level of agreement between the free-peptide secondary structure found here and the experimental values, permits the assessment of the differences between the two systems, with relative confidence in their reliability.The NAC region of the peptide has long been thought to be involved in the pathogenesis of PD, owing to the formation a hydrophobic β-sheet intermediate in that particular region [93].The secondary structure in NAC is almost unaffected by binding of Cu(II), with a decrease in β-sheet, possibly owing to the pull exerted on the residues comprising the NAC region, from coordination of His50 to Cu (II), but also from the increased preference for long-range interactions, upon binding of the metal ion, seen both in the increased compactness of the system, Fig. 15 and Table 3, but also in the lack of 3 10 -helices, as opposed to the metal-free peptide, where such structures make up most of the α-character in the system. Conversely, a decrease in the α-helical character is observed in the residues involved in the copper interactions, in both N-and C-termini.This decreased helicity, may in turn influence the membrane binding affinity of the peptide, especially considering the higher affinity lipid membrane binding region is in the N-terminus [94].Experimental evidence have also reported oxidation of Met residues in that region of the L. Savva and J.A. 
Platts peptide results in a decreased membrane affinity [95,96], as well as the possible modulation of αS, as a result of protein interactions in that region [97].This drop in the percentage of α-helices, particularly in the Nterminus of the Cu(II)-αS system, suggests the possible hampering of the binding affinity with lipid membranes upon coordination of the metalion, as this region has generally been linked with the ability of αS to form such interactions [98], aiding in the physiological activity of αS and as a way of balancing between the normal and aberrant forms [99].The dampening of these interactions in the Cu(II)-coordinated system may therefore act as a mechanism for the formation of toxic oligomers.The community is divided on the possible effects of membrane coordination, with arguments on both sides: regulation of misfolding and oligomerization upon membrane binding [100], and promotion of aggregation [101].Since we do not study the membrane interactions of these systems here, as well as the documented effect of membrane curvature [102], these can only act as speculations on the possible effects when these systems do in fact bind.We would therefore direct readers to a recently published experimental study on the possible mechanisms that take place upon interaction of αS with lipid membranes in the presence of Cu(II), where two possible hypothesis are presented: (1) an increased affinity of Cu(II) interactions to the N-terminal of monomeric αS, thus increasing oligomerization in-solution and decreasing upon membrane-binding; (2) free-αS membrane binding results in extended helical formation increasing the affinity of Cu(II) association with the C-terminal binding site [94]. A closer evaluation of helical characteristics implicates residues Glu57-Gln62 and Ser87-Gly101 as the regions with most helical population, in both the unbound and Cu(II)-bound peptides.The latter region was found to exhibit the greatest mean α-character occupancy both in the free, at 38.31%, and Cu(II)-αS, at 38.61%, while the former region presented mean occupancies of 34.55% and 33.18%, respectively.Representations of the structure in each of these regions is given in Fig. 10.A thing to note here is that the region between residues Ser87-Gly101 is split between residues Ala91-Phe94, by coiled structures, resulting in two short helices between residues Ser87-Ala90 and Val95-Leu100.This happens for the majority of the conformations, despite instances of a long continuous helix, steric hindrance restricts the conservation of such structures.The observations here are in line with The two regions reported above, represent the two most populated helical regions, occurring around at least 1/3rd of the total trajectory, and at least consist of 6 residues.A further four regions where notable helices occur between 4 residues are found in: Ser9-Lys121 (free-αS: 35.50%; Cu(II)-αS: 24.13%), Glu20-Thr23 (free-αS: 35.00%;Cu(II)-αS: 22.83%), Gly111-Glu114 (free-αS: 29.36%; Cu(II)-αS: 28.66%), Pro120-Glu123 (free-αS: 35.07%;Cu(II)-αS: 0.48%).Thus, our data suggests an increased presence of helices in the N-terminal, but an overall decrease of these structures, as a result of the Cu(II) coordination. Below, Fig. 11, displays the reweighted free energy landscape of R g vs. two elements of secondary structure characteristics in the two systems.The potential of mean force (PMF) energies in Fig. 
11, display the lowest values at 0% β-character, with ca. 16% and 10% α-character for the free and copper-bound systems, respectively. The free peptide may also be seen sampling a greater conformational space, as explained before, possibly due to the increased flexibility of the system. The local minima with the lowest energy values appear to be between R g values of 35-45 Å in both cases, something to be expected considering the experimental and average R g values shown above. The flexibility of each residue is shown by its root mean square fluctuation (RMSF), Fig. 12. From that, it is evident that adding the metal ions to the peptide restricted the motion of the residues in the N-terminal region, while the mobility of His50 and neighbouring residues is also reduced. This is due to the macro-chelation of Cu(II) by N-terminal residues and His50, in which the metal ion restricts the motion of neighbouring residues. In contrast, metal ion binding does not strongly affect the motion of NAC or C-terminal residues: indeed, the RMSF of the Cu(II)-bound system is actually higher in the metal-binding residues when compared to the free peptide. However, the very large RMSF seen at the C-terminus is reduced from almost 40 Å to below 30 Å when Cu(II) is bound. The free energy landscape associated with a combination of R g and end-to-end distance, Fig. 13, illustrates the smaller size found in the Cu(II)-αS when compared to the unbound peptide, with the former exploring conformations where the end-to-end distance was maintained below 165 Å, with an average of 97.5 ± 20.6 Å, as opposed to the free peptide, which went as high as 210 Å, averaging 117.1 ± 27.2 Å. Regarding the intramolecular interactions of the two systems, binding of two Cu(II) ions does not strongly alter the pattern of salt bridges within the peptide, Fig. 14. The sole exception is Asp2, which is bound directly to copper and so is not available for interaction with Lys6. This is true even for the repeat sequences where hairpins were observed, highlighting the transience of these elements of secondary structure. Hydrogen bonds are more strongly affected by metal ions (Fig. 16), especially in the N-terminal region, where a significant number of "off-diagonal" H-bonds are found in the Cu(II)-bound form. This again appears to be related to the macro-chelation of Cu(II), which brings His50 and neighbouring residues into close contact with N-terminal ones, closely interacting with residues Glu35 and Glu13. As a result of these contacts, residues neighbouring His50 are also seen forming H-bonds with more distant ones, such as Val49-Glu35 and Met1-Glu35. The most significant hydrogen bond formed in the copper-bound system is seen intra-residue in Glu123 (in ca. 83% of frames). This is also evident in maps of close contacts between residues (Fig. 15): the free peptide only shows contacts close to the diagonal, but the Cu(II)-bound peptide has close contacts throughout the N-terminal region, extending as far as residue 70, i.e. into the NAC. It should be noted, however, that the intrinsically disordered nature of α-Synuclein means that occupancy of all salt-bridge and hydrogen bond contacts is low, typically under 10% of the overall trajectory.
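The landscapes in Figs. 7, 11 and 13 are potentials of mean force derived from the populations of two collective variables after reweighting the aMD boost potential [72]. The block below is a minimal sketch of that final step, F = -k_B T ln P over a two-dimensional histogram; the per-frame weights produced by the actual reweighting protocol are not reproduced here, so the synthetic data and uniform-weight default are assumptions.

```python
import numpy as np

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def pmf_2d(x, y, temperature=310.0, bins=50, weights=None):
    """2D potential of mean force from two per-frame collective variables.

    x, y    : per-frame values (e.g. Rg and end-to-end distance)
    weights : per-frame weights, e.g. from aMD reweighting (uniform if None)
    Returns the bin edges and the PMF in kcal/mol, shifted so its minimum is zero.
    """
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins, weights=weights, density=True)
    prob = np.where(hist > 0, hist, np.nan)          # avoid log(0) in empty bins
    pmf = -KB_KCAL * temperature * np.log(prob)
    return xedges, yedges, pmf - np.nanmin(pmf)

# Example with synthetic, correlated collective variables standing in for a trajectory
rng = np.random.default_rng(1)
rg = rng.normal(40.0, 4.5, size=20000)               # Angstrom
end_to_end = 2.5 * rg + rng.normal(0.0, 15.0, size=rg.size)
xe, ye, surface = pmf_2d(rg, end_to_end)
print(f"PMF grid {surface.shape}, highest barrier sampled: {np.nanmax(surface):.1f} kcal/mol")
```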
Research on the behaviour of α-Synuclein in its monomeric form has shown that it adopts no lasting secondary characteristics, instead they are rather transient, owing to its natively disordered nature, favoring unfolded and extended conformations [82,103].The role of copper ions in the aggregation propensity of α-Synuclein has been studied extensively in the scientific community, with evidence suggesting that it induces aggregation of the peptide [12].This could be attributed, in part, to restriction of the peptide's structure, allowing it to maintain the folded conformation when interacting with the metal ions, possibly owing to the restriction in flexibility introduced by metal coordination [104].Evidence for this may be seen in the increased contacts between the residues, Fig. 15, presenting a more populated region where the Cu(II)interactions occur.The R g data from the accelerated MD simulations, provide further evidence of this upon binding of two Cu(II) ions, showing a decrease in the R g of 4.5 Å.The secondary structure characteristics exhibit a decrease in defined characteristics going from the unbound peptide to Cu(II)-αS, show that the compactness gain in the bound peptide is a direct result of the binding to the metal ions, increasing the intramolecular interactions, as evident by the increased presence of hydrogen bonds, Fig. 16. Conclusions The effect of copper ions on the structure of the peptide has not been extensively studied computationally.Analysis of aMD trajectories, encompassing radius of gyration, RMSF and secondary structure, provide evidence of an average increased compactness and rigidity of the peptide upon the binding of two copper ions at the sites established by experimental studies.The data collected here corroborates previous reports that suggest an increase in stabilization of folding characteristics upon the binding of Cu(II) to αS, seen here from the RMSF and R g of the copper-coordinated system, exhibiting a decreased flexibility, despite the lower amount of secondary structure present [9].The more prominent α-helical character in the C-terminus of the free peptide, has been reported before [105] and also seen in the present work, although this is reduced in the copper-bound system, possibly owing to the coordination of the metal ion with residues in this site. The contribution of the KTKEGV repeats, extending from the N-terminal region into the NAC, in the secondary structure and thus folding of the peptide has also been reported here and validates past observations [86].Two β-hairpin regions have also been found here, between residues 38-53 and 63-72, with the former being sustained for longer, especially where the 38-53 residues are concerned, in the Cu(II)-bound peptide, where they are maintained for up to ca. 72 consecutive ns, as opposed to the ca.42 ns in the unbound one.Copper binding leads to loss of some β-characteristics in the NAC region, when compared to the unbound, possibly owing to the macro-chelation of a copper ion in the N-terminal region, restricting the adoption of a folded conformation in the neighbouring region.The RMSF of the residues involved in the copper ion interactions are significantly reduced, decreasing from ca. 20-30 Å to 10 Å in the N-terminal and from ca. 
40 Å to 30 Å in the C-terminal. The effect of binding two Cu(II) ions on the RMSF appears to carry changes in the NAC region, restricting the motion of the constituting residues by 5 Å. The higher flexibility of the free peptide is also evident from the free energy landscape plots of globularity and end-to-end distances, showing a greater conformational space explored by the unbound peptide, as well as reaching more spherical conformations compared to the copper-bound case.

Declaration of Competing Interest

The authors confirm that no conflict of interest exists in this work.

Figure and table captions

Fig. 1. Primary structure of αS. The sites involved in metal ion binding (M1, D2, H50, D119, D121, N122, E123) are highlighted in red.
Fig. 3. Predicted Cα chemical shift values per residue, from 1200 frames taken from the accelerated MD simulations on αS (black); experimental data (red) obtained from source [84].
Fig. 9. Secondary structure distribution per residue after 1.80 μs aMD of the (A) free and (B) Cu(II)-αS. β-sheets are denoted in red (parallel) and black (antiparallel), and helices in grey (3 10), blue (α) and purple (π).
Fig. 10. Representation of the most populated α-helical regions (red), between residues 57-62 in the (A) unbound and (B) Cu(II)-bound αS and respectively (C-D) between residues 87-101.
Fig. 11. Free energy landscape plots of R g against α- and β-characteristics present in the (A-B) unbound and (C-D) Cu(II)-bound peptide.
Fig. 12. Root mean square fluctuation of the individual residues in each of the systems.
Fig. 15. Contact maps of the Cα atoms from the dynamics of the (A) free and (B) Cu(II)-bound peptide.
Table 1. Force constants and equilibrium distances of coordinating atoms to metal centres, as calculated from B3LYP/6-31G(d) optimisation of the metal sites.
Table 2. Literature survey of reported secondary structural characteristics from different experimental methods.
Table 3. R g data for free and copper-bound peptides (Å).
Table 4. Maximum time β-hairpins are expressed in the two residue ranges of 38-53 and 63-72.
Table 5. Detailed secondary structure percentages of the three main regions of αS.
Table 6. Secondary structure percentages for the free and copper-bound peptide systems.
Protocol

Patterns of Patients' Interactions With a Health Care Organization and Their Impacts on Health Quality Measurements: Protocol for a Retrospective Cohort Study

Background: Data collected by health care organizations consist of medical information and documentation of interactions with patients through different communication channels. This enables the health care organization to measure various features of its performance such as activity, efficiency, adherence to a treatment, and different quality indicators. This information can be linked to sociodemographic, clinical, and communication data with the health care providers and administrative teams. Analyzing all these measurements together may provide insights into the different types of patient behaviors or, more accurately, the different types of interactions patients have with the health care organizations.

Objective: The primary aim of this study is to characterize usage profiles of the available communication channels with the health care organization. The main objective is to suggest new ways to encourage the usage of the most appropriate communication channel based on the patient's profile. The first hypothesis is that the patient's follow-up and clinical outcomes are influenced by the patient's preferred communication channels with the health care organization. The second hypothesis is that the adoption of newly introduced communication channels between the patient and the health care organization is influenced by the patient's sociodemographic or clinical profile. The third hypothesis is that the introduction of a new communication channel influences the usage of existing communication channels.

Methods: All relevant data will be extracted from the Clalit Health Services data warehouse, the largest health care management organization in Israel. The data analysis process will use a data mining approach as a process of discovering new knowledge, processing the extracted data with statistical methods, machine learning algorithms, and information visualization tools. More specifically, we will mainly use the k-means clustering algorithm for discretization purposes and patients' profile building, a hierarchical clustering algorithm, and heat maps for generating a visualization of the different communication profiles. In addition, patients' interviews will be conducted to complement the information drawn from the data analysis phase with the aim of suggesting ways to optimize existing communication flows.

Results: The project was funded in 2016. Data analysis is currently under way and the results are expected to be submitted for publication in 2019. Identification of patient profiles will allow the health care organization to improve its accessibility to patients and their engagement, which in turn will achieve better treatment adherence, quality of care, and patient experience.
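The clustering and visualization steps named in the Methods above (k-means for discretization and profile building, hierarchical clustering, and heat maps) can be outlined as follows. All feature names and values are hypothetical placeholders rather than Clalit data, and the choice of scikit-learn, SciPy and seaborn is an assumption, since the protocol does not prescribe a specific toolkit.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage

# Hypothetical per-patient channel-usage counts (rows = patients)
rng = np.random.default_rng(42)
usage = pd.DataFrame({
    "clinic_visits": rng.poisson(6, 500),
    "call_center": rng.poisson(3, 500),
    "website": rng.poisson(2, 500),
    "mobile_app": rng.poisson(1, 500),
    "sms_replies": rng.poisson(4, 500),
})

scaled = StandardScaler().fit_transform(usage)

# k-means communication profiles (k chosen arbitrarily here; in practice via elbow/silhouette)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
usage["profile"] = kmeans.labels_

# Hierarchical clustering of the profile centroids, visualized as a clustered heat map
centroids = usage.groupby("profile").mean()
tree = linkage(centroids, method="ward")
sns.clustermap(centroids, row_linkage=tree, col_cluster=False, cmap="viridis")
plt.show()
```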
Background

Health care organizations and patients communicate with each other using various communication channels [1,2]. Some of these communication channels are traditional: face-to-face meetings with a physician or a nurse, face-to-face interactions with the administrative staff, and phone calls. However, in the past decade, many health care organizations introduced novel methods of digital communication with patients such as text messages, emails, video calls, websites, and mobile apps. The communication channels between the health care organization and its patients have been examined and analyzed in previous studies [3-10].

Data mining and machine learning methodologies have been used to define or redefine clusters of patients according to their state of health and other sociodemographic data [11,12]. Recently, process mining has been used to try to improve communication between consumers and health care providers [13]. However, no studies attempting to cluster patients by combining medical, sociodemographic, or communication characteristics have been conducted, and certainly not in a population as large as the one proposed in this study. We expect that such research will improve communication between patients, service providers, and medical organizations and will improve the quality of treatment, treatment effectiveness, and responsiveness.

Aims and Objectives

Finding the circumstances and the extent to which different population segments use different communication channels, and specifically the extent to which usage of newly introduced channels replaces the usage of more traditional channels, will help us learn about the effectiveness of these new channels. Tying these population segments' communication behavior to their sociodemographic profiles and health outcomes will help us establish the association between the 3, and it may help drive the hypotheses as to the causation. In addition, identifying communication-based population segments may help health care providers to use the most appropriate channels with each population segment, leading to more efficient and targeted communications. For example, identifying and quantifying the early adopters group will help the health care organization to estimate the usage level of a newly developed communication channel, its effectiveness in driving the intended message, and to some extent, its effect on health outcomes. Accordingly, this will also make it possible to improve the quality of treatment, treatment effectiveness, and responsiveness.
The aims of this retrospective data study are to assist health care policy makers to improve and personalize the communication between patients and health care professionals (eg, physicians and nurses).Communication improvement includes enhancing the accessibility of health care professionals by expanding the capabilities of current communication channels and introducing new ones.These communications will help to improve patient engagement with the treatment process, increase patient responsiveness to follow-up requirements and treatment, and improve patient experience with health care services.More specifically, the primary aim of this study is to characterize usage profiles in the available communication channels in the Clalit Health Services (Clalit), each one of them without considering the others and then all of them together.The second aim is to establish relationships between communication profiles, sociodemographic, and medical patients' profiles.The main objective is to suggest new ways to encourage the usage of the most appropriate communication channel based on the patient's profile.A secondary objective is to suggest ways for improving communication between the patient and the health care organization mainly through technological means. Hypotheses The first hypothesis is that the patient's follow-up and clinical outcomes are influenced by the patient's preferred channel(s) of communication with the health care organization.If this hypothesis is validated, the research will quantify the phenomenon. The second hypothesis is that the adoption of newly introduced communication channels between the patient and the health care organization is influenced by the patient's sociodemographic and/or clinical profile.If this hypothesis is validated, the research will identify sociodemographic and/or clinical attributes that affect the adoption of newly introduced communication channels. The third hypothesis is that the introduction of a new communication channel influences the usage of existing communication channels.If this hypothesis is validated, the research will characterize the changes in usage of existing communication channels once a new communication channel is introduced. Materials This is a data-based study that analyzes information stored in Clalit electronic medical records (EMRs) and in logs documenting access to various communication channels between patients and Clalit, such as the internet personal health records, and telephone logs.Researchers have full access to Clalit EMRs and logs on the entire insured population of 4.53 million patients in 2015, which constitute 54% of the Israeli population of 8.38 million as of 2015.Data collected include demographic, clinical, and pharmacological information.In addition, we plan to conduct interviews with a representative sample of the patients to learn directly about the patients' perceptions, their relationship with the various means of communication, patterns of use, and suggestions for improvement.We hope that this survey will provide supplementary information to the one we will receive from analyzing the data. 
Clinical data from community and hospital settings and pharmacological data are routinely collected in the data warehouses (DWHs) of the health maintenance organization (HMO) and classified into the appropriate data world (eg, appointment scheduling, consultation with a physician, appointment with a specialist, diagnosis during hospitalization, medical services, and prescriptions).The information recorded includes sociodemographic data (gender, marital status, number of children at home, age, origin, socioeconomic status (SES), and place of residence), medical information (dates of specialist appointment, physician license number and the corresponding specialization, diagnoses, date of each diagnosis, prescriptions, acquisition of prescriptions, laboratory results, and imaging), and communication data (appointment date, date the appointment occurred, time elapsed between the scheduled appointment and the actual appointment, and the way the appointment was scheduled-through a medical secretary, call center, website, or mobile app).All relevant pieces of information include a patient identifier, which allows compiling all data relevant to a specific patient into a single record. The information to be analyzed is extracted from the EHR DWH of Clalit and includes data collected between 2008 and 2016 for all relevant patients.The long duration of the study will allow us to identify changes in the ways patients interact with the HMO as a function of time and as a function of new communication channels the HMO introduced (eg, website, mobile apps, and the use of the short message service [SMS] text messaging).Accordingly, the patient can start or stop using 1 or more channels to interact with the HMO.The patients included in this study are aged 21 years and over and are members of Clalit for at least 1 year before 2008 and are still alive in 2016.We will focus our study on patients with chronic disease because we want to examine long-term adherence and efficacy.In addition, patients who suffer from 1 chronic disease or more have a high rate of resource consumption.In the United States, for example, 86% of health care spending is devoted to patients with chronic diseases [14].In particular, we will examine diabetic patients, who in 2001 accounted for about 20% of the patient population [15].We hope that the study will help optimize the processes in which these patients participate.The incidence of chronic diseases in general and of diabetes in particular is increasing over the years due to several factors, most notably the aging of the Israeli population.According to Clalit data, as of the end of 2014, more than 40% of the insured population had at least 1 diagnosis that is defined as chronic (eg, diabetes, asthma, heart disease, mental illness, and cancer).Patients with diabetes constitute more than 300,000 individuals with our inclusion criteria [16,17].The profiles that will be found will help define the recommendations and policies that will improve communication with specific subpopulation groups and will increase the effectiveness of treatment and patient adherence.Chronic diseases are not spread uniformly by age; however, given the high cost of treating patients with chronic diseases, we believe it is more useful to concentrate on these patients despite this bias. 
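The inclusion criteria described above translate into a single cohort-selection step over the membership and chronic-disease registry data. The sketch below uses pandas with hypothetical column names (birth_date, membership_start, death_date, has_diabetes); the actual Clalit data warehouse schema is not public, so every field name here is an assumption.

```python
import pandas as pd

def select_cohort(members: pd.DataFrame) -> pd.DataFrame:
    """Apply the study inclusion criteria to a hypothetical membership table.

    Expected columns (all names assumed): patient_id, birth_date,
    membership_start, death_date (NaT if alive), has_diabetes (bool).
    """
    index_date = pd.Timestamp("2008-01-01")
    end_of_followup = pd.Timestamp("2016-12-31")

    age_at_index = (index_date - members["birth_date"]).dt.days / 365.25
    member_one_year = members["membership_start"] <= index_date - pd.DateOffset(years=1)
    alive_in_2016 = members["death_date"].isna() | (members["death_date"] > end_of_followup)

    mask = (age_at_index >= 21) & member_one_year & alive_in_2016 & members["has_diabetes"]
    return members.loc[mask]

# Example usage with a toy table: only patient 1 satisfies all criteria
toy = pd.DataFrame({
    "patient_id": [1, 2],
    "birth_date": pd.to_datetime(["1970-05-01", "1995-03-15"]),
    "membership_start": pd.to_datetime(["2001-01-01", "2007-06-01"]),
    "death_date": pd.to_datetime([pd.NaT, pd.NaT]),
    "has_diabetes": [True, True],
})
print(select_cohort(toy)["patient_id"].tolist())   # -> [1]
```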
Methodologies

The communication between health care providers (ie, physicians, nurses, hospitals, and, more globally, HMOs) and patients is generally studied by focusing on only 1 or 2 of the channels [1][2][3][4][5][6][7][8][9][10][11][12]. To fulfill our research aims and objectives, our analysis will consist of characterizing the usage profiles of existing nontechnological and technological communication channels over a period of 9 years, taking into account that Clalit has added and changed over time the methods by which patients contact health care professionals (eg, the introduction of Web and mobile apps). Then, the sociodemographic and clinical profiles associated with each of the different communication channel usage profiles will be defined. This will allow us to qualitatively evaluate the influence of the communication profile on the patient's engagement and follow-up quality.

As part of the analysis, we will evaluate the impact of new communication channels introduced over the research period. This will allow us to suggest future improvements to the communication between the patient and the physician or nurse, with the aim of improving the work processes of the health organization.

This research is based on knowledge discovery in databases (KDD) methodologies [18,19]. KDD is an interdisciplinary field that deals with methodologies for the extraction and identification of valid, new, nontrivial patterns in data that have the potential to be useful and understandable [18][19][20]. The continued increase in the amount of data available, a product of the unprecedented development of computer and communications technologies over the past two decades, has created a unique opportunity to implement KDD methodologies. Data science experts from different disciplines are therefore challenged to find new and effective ways to extract and generate new knowledge from existing data.

In the analysis phase, we will use one-dimensional and multidimensional statistical methods as well as different data mining algorithms. The data mining stage is part of the KDD process and focuses mainly on the discovery of unknown patterns. For this purpose, we will use and tune, if necessary, data mining [21] and machine learning [22] algorithms suited to the multidimensional dataset (ie, sociodemographic, bio-clinical, and communication-related data over time) explored in this study. The patterns found in this stage are then evaluated and interpreted to form the knowledge extracted from the KDD process.

The KDD process that will be developed and implemented in this research includes data collection and integration, early processing and cleaning of data, development and implementation of data mining algorithms to discover new knowledge, and qualitative research [18][19][20].

Data Acquisition

The Clalit DWH is the main source of information for this research, and a replica dedicated to research purposes is updated on a weekly basis. The data extracted from the Clalit DWH for each patient comprise the following information:

1. Sociodemographic data (see the detailed listing at the end of this protocol)
2. Clinical data:
• Adjusted clinical groups (ACG) [24]
• Comorbidities according to the Clalit chronic diseases registry [15]
• Proportion of days covered by treatment of diabetes, when relevant, based on purchases of drugs used in diabetes and, more particularly, of blood glucose lowering drugs excluding insulin (Anatomical Therapeutic Chemical Classification System codes starting with A10B) [25]
3. Communication or contacts with the HMO
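One of the clinical items listed above, the proportion of days covered (PDC), can be made concrete with a short sketch; the field names and the days-of-supply logic are assumptions and do not reproduce the CRI's in-house algorithm.

```python
# Generic PDC sketch: fraction of days in a window covered by dispensed supply of
# ATC A10B drugs. Column names ('atc_code', 'date', 'days_supply') are assumed.
import pandas as pd

def proportion_of_days_covered(purchases: pd.DataFrame, start: str, end: str) -> float:
    window = pd.date_range(start, end, freq="D")
    covered = pd.Series(False, index=window)
    a10b = purchases[purchases["atc_code"].str.startswith("A10B")]
    for _, row in a10b.iterrows():
        fill_days = pd.date_range(row["date"], periods=int(row["days_supply"]), freq="D")
        covered.loc[covered.index.intersection(fill_days)] = True
    return float(covered.mean())
```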
Data Cleansing

After integrating the data collected and extracted from the Clalit DWH, we will prepare it for analysis. This stage includes cleansing of the data collected in the Clalit DWH when necessary. The main objective of this phase is to reduce noise by detecting and removing or correcting outliers [26] in the dataset and by evaluating the quality of the data [21]. An outlier is a data measurement that is inconsistent with other historical measurement data of the same individual (eg, an outlying height value, or an exceptionally high number of consultations with a physician, say a few hundred per year). When a measurement-specific (eg, BMI) algorithm has been developed in-house by the Clalit Research Institute (CRI) for epidemiological studies, outlier detection and data correction will be processed using it. For example, one such algorithm screens data on BMI, weight, and height to detect and handle outliers in the recording of 1 of these 3 measurements (eg, due to mistyping). When the CRI algorithms are not relevant, outliers will be detected with statistical approaches such as the median absolute deviation (nonparametric, chosen due to lack of knowledge regarding the data distribution [27]) and/or machine learning algorithms such as k-means [28].

Data related to communication between the patient and Clalit have not been fully processed and cleansed before, and accordingly, we may need to develop special cleaning and correction algorithms for these data. If data correction algorithms and/or algorithms that deal with cases of missing information do not exist for any given data in our database [29,30], we will use appropriate machine learning algorithms and/or statistical approaches [31,32] to correct and/or deal with missing data where needed. Examples of potential problems that we might encounter are irrelevant entries (eg, entries related to quality assurance traffic and testing, and entries that are not the result of human activity) and lack of full documentation. In addition, the interface exposed to the user is a living interface that changes over time depending on the services that the HMO chooses to provide through its Web-based and app services. A new version of the website, for example, is released every 6 months. Data processing and analysis should reflect these changes.

Data Transformation

Many methods of machine learning and data mining require, as part of the preprocessing phase, a data reformulation such as a new categorization or a new grouping of numerical, categorical, or textual data to reduce the number of values each attribute takes [28]. This step involves the use of techniques for reducing the number of dimensions or transduction methods to reduce the number of variables for analysis or to find invariant representations of the data [26,[33][34][35].

For example, attributes with continuous values, such as laboratory tests or clinical measurements that have defined scales in the literature, will be reformulated into categorical values as part of the dataset dimension reduction. For instance, HbA1c values may be divided into 5 categories: excellent control (<6.5%), good control (6.5% to 7.5%), moderate control (7.5% to 8.9%), poor control (≥9%), and not available [36,16].
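The two preprocessing ideas just described, nonparametric outlier flagging with the median absolute deviation and reformulation of HbA1c into the five control categories, are sketched below; the modified z-score threshold of 3.5 and the helper names are our own choices rather than part of the protocol.

```python
# Sketch only: MAD-based outlier flagging and HbA1c binning into the categories
# listed above. The 3.5 cutoff on the modified z-score is an assumed default.
import numpy as np
import pandas as pd

def mad_outliers(values: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Boolean mask of values whose modified z-score exceeds the threshold."""
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        return np.zeros(values.shape, dtype=bool)
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

def categorize_hba1c(hba1c: pd.Series) -> pd.Series:
    """Bin HbA1c (%) into the five control categories, with missing values kept."""
    bins = [-np.inf, 6.5, 7.5, 9.0, np.inf]
    labels = ["excellent control", "good control", "moderate control", "poor control"]
    out = pd.cut(hba1c, bins=bins, labels=labels, right=False)
    return out.cat.add_categories("not available").fillna("not available")
```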
However, for attributes that do not have predefined scales in the literature or that are specific to Clalit, such as the number of appointments made through the HMO website or the number of visits to a physician per year, we will use the k-means clustering algorithm to discretize values into 6 groups of resource consumption: "No" (meaning the related resource is not consumed at all; these patients are excluded from the k-means run and assigned directly to this group), "Small," "Small-Moderate," "Moderate," "Moderate-Large," and "Large." The cluster bounds are validated, if necessary, by a domain expert (ie, a public health practitioner having some experience with the Clalit data).

Data Mining

For identifying population clusters, different machine learning methods and algorithms must be used. The main aim is to characterize usage profiles in the available communication channels. Considering the fact that we do not have prior knowledge of the data, we will use unsupervised machine learning algorithms [37][38][39][40][41][42][43] and will more particularly focus on k-means [38] and hierarchical clustering [37]. We choose these specific algorithms because their results are relatively simple to communicate to people with less technical knowledge, such as the decision and policy makers of the HMO, who will receive the final analysis report and will need to implement its recommendations.

The first data mining goal is to find the number k of hidden clusters in the "Communication/contacts with the HMO data" or, in other words, the number of different types of patient communication profiles. This will be performed on the available data of the year 2016 because by that time, data cleansing will have been fully performed. As communication channels constantly evolve, we chose the most recent year as the reference point to which previous years, with fewer communication channels, are compared. The "Communication/contacts with the HMO data" of 2016 will be clustered as follows (a code sketch of this procedure is given at the end of this subsection):

1. For each k between 2 and 100, 100 randomly selected samples of 20% of the cohort will be generated
2. For each sample, k-means will be run
3. For each run, the Ray-Turi criterion [44] will be computed
4. The results of the overall Ray-Turi criterion computation will be plotted on a graph
5. The elbow will be manually identified on the previously built plot to find the relevant k

Each cluster relates to a type of patient communication. This step reduces the number of patient communication profiles from the number of patients included in the cohort (more than 300,000 if we consider patients with diabetes) to a small number (at most a few dozen).

The second data mining goal is to generate a hierarchical clustering of the previously discovered clusters to allow understanding the similarities and dissimilarities between the communication patterns. Descriptive statistics of sociodemographic, bio-medical, and communication data will be generated for each cluster.

On the basis of the previously built k clusters of the "Communication/contacts with the HMO data" of 2016 and the related hierarchical clustering, we will generate descriptive statistics for each patient communication profile (ie, cluster or set of patients) over the years 2008 to 2015.
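A compressed sketch of the five steps above follows; note that the Ray-Turi criterion is implemented here in its usual form and that the k range and the number of samples are reduced for illustration.

```python
# Compressed sketch of steps 1-5 (ranges reduced for illustration). The Ray-Turi
# index used here is the common formulation: mean within-cluster squared distance
# divided by the minimum squared distance between cluster centers (lower is better).
import numpy as np
from sklearn.cluster import KMeans

def ray_turi(X: np.ndarray, labels: np.ndarray, centers: np.ndarray) -> float:
    intra = np.mean(np.sum((X - centers[labels]) ** 2, axis=1))
    inter = min(np.sum((centers[i] - centers[j]) ** 2)
                for i in range(len(centers)) for j in range(i + 1, len(centers)))
    return intra / inter

def scan_k(X: np.ndarray, k_values=range(2, 11), n_samples=10, frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    scores = {}
    for k in k_values:
        vals = []
        for _ in range(n_samples):
            idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
            km = KMeans(n_clusters=k, n_init=10).fit(X[idx])
            vals.append(ray_turi(X[idx], km.labels_, km.cluster_centers_))
        scores[k] = float(np.mean(vals))
    return scores  # plot scores against k and read the elbow off the curve manually
```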
Information Visualization

To provide user-friendly tools to decision and policy makers [45], allowing them to understand the different patient communication profiles and the strengths and weaknesses of each one, we will build heat maps for each year between 2008 and 2016 based on the previously generated hierarchical clustering of the 2016 data (a minimal plotting sketch appears further below).

Process Mining

Furthermore, we plan to implement algorithms and approaches from the field of process mining [46] to identify the changes in communication profiles over time, which may be the cause of changes in treatment adherence. For example, process mining will allow us to model how patients with a similar communication profile (ie, patients within the same cluster) have changed their communication patterns with the HMO through the following channels:

1. Consulting with physicians and/or nurses
2. Scheduling appointments by using 1 or more of the following channels: through a medical secretary (data available since 2009), call center (data available since 2009), website (since 2011), or mobile app (since 2012)
3. Overall interaction with the HMO (using the overall services)

Qualitative Research

Qualitative research based on focus groups is the most effective means to fully understand the factors that encourage or delay the use of communication interfaces with the health care organization. Focus groups enable the collection of information from a multicultural population [47] and the discussion of new ideas that do not arise during personal interviews [48]. We designed the qualitative part of the proposed study based on the guidelines presented by King et al [49]. The qualitative part of the research will include between 1 and 8 focus groups, composed according to the participants' usage level of the communication channels with Clalit. Each one of the focus groups will include up to 8 patients from the same area. Participants in the focus groups will be asked to complete a short sociodemographic questionnaire and sign an informed consent form. During the focus group meeting, the group facilitator will record the discussion and take notes on the participants' nonverbal communication.

A guideline questionnaire for the focus groups will be constructed with the assistance of experts in the field and the relevant literature. This questionnaire will evaluate factors that encourage or delay the use of communication channels with Clalit. The guiding questionnaire will include up to 10 open questions that will facilitate responses providing critical information, for example, "What factors contribute or will contribute to your use of communication channel X?"; "What factors delay or will delay your usage of communication channel X?"; or "How do you think that communication channel X can be improved?". The guiding questionnaire will be used to explore aspects that are relevant for better understanding the topic and will facilitate expanding the discussion to areas that the participants consider to be most significant.

The discussions in the focus groups will be recorded and transcribed. The transcripts of the focus group discussions will be analyzed with a phenomenological approach that emphasizes the patient's unique and subjective perception, through qualitative content analysis [50]. The coding process will begin with open coding (ie, identification of major categories), followed by axial coding, from which 1 core phenomenon results.
Next, the data will be categorized according to this core phenomenon [51] and will be reviewed by external domain experts to ensure objectivity [49]. Sandelowski [52] notes that through qualitative content analysis, researchers can add new information to the existing body of knowledge and gain new insights. The encoding and analysis will be performed by the principal investigators and the associate investigators, with the same encoding rules guaranteeing homogeneous and consistent encoding [49]. In cases of disagreement regarding the encoding, an expanded forum will be held in which the majority decision prevails.

Results

This project was funded in 2016, and the research project is scheduled to be completed in 2019. A preliminary analysis has been performed on the data of the year 2015, related to 309,460 patients with diabetes in 2015, aged 32 years and above, whose disease has been treated by Clalit for more than 7 years. Overall, 7 main communication patterns have been discovered. Patients in the last 2 clusters tend to be older than the rest of the patient population (aged more than 70 years) and have relatively high morbidity (ACG=5). Patients in the sixth cluster tend to be consumers of medical services that involve access to a human being, whereas patients in the seventh cluster tend to be heavy users of all medical services. They also tend to have one of the best follow-up rates in terms of HbA1c measurement. A possible explanation for this difference may be related to the tendency of the patients in the second group to resort mainly to human contact (face-to-face or by phone).

Overview

This research protocol deals with the identification of patient communication profiles. This knowledge will help the health care organization to increase the accessibility of patients to the services the health care organization provides and to improve patients' engagement with the treatment process. This, in turn, may motivate the patient to achieve better treatment adherence, improve quality of care, and generate a better patient experience.

Expected Results and Future Directions

Analysis of communication patterns over time may reveal long-term behavior patterns as well as identify patterns at a higher abstraction level (eg, early adopters of technology and early adopters of services). It should be noted that the research is planned to be performed on data from a period that witnessed a significant yet gradual change in the communication channels Clalit provides its patients. Analyzing the response of the patient population to these changes will hopefully help improve the available communication channels as well as assist in formulating realistic expectations for the introduction of new communication channels, taking into consideration the patients' sociodemographic characteristics and clinical constraints as well as their previous communication patterns with the HMO.

By tuning its communication tools to patients' preferences (eg, by translating the user interfaces of the electronic communication tools, website or apps, from Hebrew to other languages such as Arabic, English, Russian, Amharic, French, and Spanish), the health organization would (1) improve and increase accessibility to health care services, achieve better patient engagement and responsiveness to treatment, and improve quality of treatment and treatment experience within existing budgetary constraints and (2) increase patients' engagement with the treatment process by transforming the communication scheme with each patient into a more proactive scheme, so as to better fit their profile.
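As a companion to the Information Visualization step described earlier, the following sketch shows one way the per-profile heat maps could be produced; cluster assignments are assumed to come from the 2016 k-means step, and all column and label names are illustrative only.

```python
# Illustrative only: mean channel usage per communication profile for one year,
# grouped by the 2016 cluster assignment and ordered by hierarchical clustering.
import matplotlib.pyplot as plt
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list

def profile_heatmap(usage: pd.DataFrame, cluster_labels: pd.Series, year: int):
    """usage: one row per patient, one column per communication channel."""
    means = usage.groupby(cluster_labels).mean()
    order = leaves_list(linkage(means.values, method="ward"))  # similar profiles adjacent
    fig, ax = plt.subplots()
    im = ax.imshow(means.values[order], aspect="auto", cmap="viridis")
    ax.set_xticks(range(len(means.columns)))
    ax.set_xticklabels(means.columns, rotation=45, ha="right")
    ax.set_yticks(range(len(order)))
    ax.set_yticklabels([f"profile {c}" for c in means.index[order]])
    ax.set_title(f"Mean channel usage per communication profile, {year}")
    fig.colorbar(im, ax=ax, label="mean contacts per patient")
    return fig
```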
Strengths and Limitations

Clalit insured and provided medical services to approximately 4.53 million patients in 2015 and is the largest health care provider in Israel. The data available span all treatment providers, including hospitals and emergency units. Nevertheless, the overall ethnic distribution of the Clalit population does not fully reflect the overall Israeli demographic composition. The Clalit members comprise, in comparison with the Israeli general population, (1) a higher proportion of Arabs and a lower proportion of ultra-orthodox members and (2) a higher proportion of members having a low SES.

Another potential limitation is the decision to analyze only patients with diabetes. These patients may exhibit behaviors that are unique to this specific chronic disease and may not be shared by other chronic patients. Nevertheless, diabetes is 1 of the most common chronic diseases, with a prevalence of approximately 7% within Clalit's insured population.

Finally, this research is conducted on data of Israeli patients. The structure of the Israeli health care system as well as Israeli culture and norms may affect patients' behavior and may not apply to patients in other geographical locations.

Detailed listing of the data extracted from the Clalit DWH for each patient (item 2, clinical data, is given under Data Acquisition above):

1. Sociodemographic data
• Date and country of birth, and date of immigration when relevant
• Date of death (allowing exclusion)
• Start and end date of membership (allowing exclusion)
• Gender
• Ethnic sector (general Jewish, Arab, and ultra-orthodox Jewish): the ethnic sector is determined according to the clinic at which the member receives primary care medicine. It is computed by the Clalit computer services unit by integrating geostatistical data from the Israeli Central Bureau of Statistics
• Clinic-level SES (3 categories: low, mid, and high): the SES is determined according to the clinic at which the member receives primary care medicine. It is computed by the Clalit computer services unit by integrating geostatistical data from the Israeli Central Bureau of Statistics

3. Communication or contacts with the HMO
• Appointment scheduling (through a medical secretary: data available since 2009; call center: data available since 2009; website: since 2011; or mobile app: since 2012)
• Consultations with a physician or a nurse
• Hospitalizations
• Consultations at an emergency department
• Nonqueue requests (eg, requests for periodic checks, prescription renewal, and sick leave certificates) done without visiting, by sending a request to a physician through a call to a medical secretary or a nurse or by completing a paper or an electronic form
• Any purchases in a pharmacy of the HMO or purchases related to a prescription in other pharmacies having an agreement with the HMO
• Prescription renewals by SMS (since 2015)
2018-11-15T17:45:15.178Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "3ca0834aba2d6cb42efbf6f03f86130bc8182c7e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2196/10734", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3ca0834aba2d6cb42efbf6f03f86130bc8182c7e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
248525821
pes2o/s2orc
v3-fos-license
Treatment satisfaction and response in patients with severe alopecia areata under treatment with diphenylcyclopropenone

Abstract

Background and Aims Alopecia areata (AA) is an autoimmune disease of hair follicles. Treatments currently include topical and intralesional corticosteroids and contact immunotherapy; however, the overall prognosis is usually unfavorable. In severe AA, topical immunotherapy with diphenylcyclopropenone (DPCP) is preferred. Since its effectiveness is heterogeneous and there are several side effects, we decided to measure the patients' satisfaction using Version II of the Treatment Satisfaction Questionnaire for Medication, which investigates satisfaction with effectiveness, side effects, convenience, and global satisfaction. Methods We examined 100 patients under treatment with DPCP for treatment response, asked them to respond to the questionnaire, and calculated their overall scores out of 400. We then investigated the association of the patients' characteristics with their treatment response and satisfaction. Results The overall satisfaction of patients was 257/400. We observed a significant association between patients' satisfaction scores on effectiveness and global satisfaction and their response to treatment (p < 0.001). The patients' satisfaction with the treatment's convenience had a significantly positive association with the age at which the diagnosis was received (p = 0.028). The overall treatment satisfaction was significantly associated with treatment response (276 vs. 213, p = 0.000). Conclusion Although there are currently no gold standard treatments for severe AA, DPCP demonstrated a 71% response to treatment, and patients with a response were significantly more satisfied with their treatment.

| INTRODUCTION

Alopecia areata (AA) is a chronic, autoimmune, and relapsing disease of the hair follicles, mediated by CD8+ T cells in genetically susceptible patients. 1,2 The disease manifests as sudden hair loss in a circular, well-defined, nonscarring alopecic area on the scalp or anywhere else on the body. [2][3][4][5] Treatment options include topical and intralesional corticosteroids as first-line options for limited disease and contact immunotherapy for extensive and severe cases; however, there are currently no gold standards for treating AA. 2,6 Despite the numerous treatment modalities available, the overall prognosis is not favorable, especially in AA patients with severe hair loss such as alopecia totalis (AT) and alopecia universalis (AU). Nevertheless, due to the considerable psychological impact of AA on patients' quality of life (QOL), measures are taken to treat these patients. Topical immunotherapy with diphenylcyclopropenone (DPCP) is the preferred method of immunotherapy in patients with severe AA. Studies have shown a diverse range of responses to immunotherapy, although 40%-60% of patients have reported experiencing an acceptable response. 2,[7][8][9][10][11][12] Due to the high cost of DPCP, common adverse effects like erythema, eczema, pruritus, and lymphadenopathy, as well as the considerably diverse and heterogeneous hair regrowth rates among studies, 7 monitoring and evaluating the degree of treatment satisfaction is an important and valuable indicator for assessing the expected quality of the services that patients receive.
There are various studies on patients' satisfaction with treatment in dermatologic diseases like psoriasis, [13][14][15][16][17][18][19][20] but not much similar research exists on the satisfaction of patients with AA under DPCP treatment. Therefore, we decided to evaluate the treatment response and satisfaction of a group of Iranian patients with severe AA treated with DPCP, using the Treatment Satisfaction Questionnaire for Medication (TSQM) Version II as a validated measure, to determine the degree of satisfaction with treatment in these patients and its association with disease-related characteristics, hoping to discover the factors affecting satisfaction so that they can be addressed in the future and patients' satisfaction and adherence improved.

| MATERIALS AND METHODS

We recorded the patients' information, including their age, gender, age at the onset of the disease, the extent of hair loss at the treatment's onset with the Severity of Alopecia Tool (SALT) scoring system, duration of therapy, nail involvement, history of atopy, family history of alopecia, and the type of alopecia (including totalis, universalis, ophiasis, and patchy). We also reviewed the patients' documents at each visit and recorded their SALT score and treatment response as vellus and terminal hair regrowth.

To evaluate the percentage of alopecia, two dermatologists familiar with the SALT scoring system evaluated each patient, and the final score was taken as the mean of both measurements. We documented the patients' satisfaction using TSQM Version II 21 (under an academic copyright license), which is a valid and reliable instrument in Persian, 22 measuring patients' satisfaction with 11 questions on four subscales: efficacy, convenience, adverse events, and overall satisfaction. Each subscale's score ranges from 0 to 100; the subscales are then added to form a total score, up to a maximum of 400 points. We estimated the sample size by considering a 10% margin of error, a 95% confidence level, and a 50% response rate (to reach the maximum number), reached the number of 96 patients, and then rounded it up to 100 (a worked check of this calculation is sketched below). Data were analyzed with SPSS version 26, using χ2 tests (Pearson or Fisher's exact) for qualitative variables, and t tests or analysis of variance (or nonparametric equivalents where the data were not normally distributed) for quantitative variables. Further testing was done using regression methods. A p value of less than 0.05 was considered statistically significant.

| RESULTS

A total of 100 patients, including 33 AT patients, 45 AU, 7 ophiasis, and 15 patients with patchy-type disease, were enrolled in the study. In the first subscale of the TSQM, effectiveness, we observed that patients were relatively satisfied with the effectiveness of DPCP. The satisfaction with the effectiveness of DPCP was 52 of 100. We also observed a significant association with treatment response. The rate of response to the treatment in our study was 71%. Other studies have shown different ranges of treatment response, from as low as 29% in one study to 81.5% in another. 2,[7][8][9][10][11][12]23 We found that the patients' response, while associated with higher satisfaction, was not by itself associated with any of our variables. Our patients were on average 31 years old, the average age at diagnosis was 22.6 years, and they had been treated for an average of 17.5 months. The associations between their satisfaction and age or duration of treatment were not significant.
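As a quick check (not the authors' code), the standard formula for estimating a proportion reproduces the sample-size figure of roughly 96 patients quoted in the methods above; the 50% figure is treated here as the anticipated proportion, which maximizes the required sample size.

```python
# Worked check of the reported sample size (illustrative, not the study's code).
from scipy.stats import norm

def required_sample_size(margin_of_error=0.10, confidence=0.95, proportion=0.50) -> float:
    z = norm.ppf(1 - (1 - confidence) / 2)  # ~1.96 for a 95% confidence level
    return (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2

print(round(required_sample_size()))  # ~96, rounded up to 100 by the authors
```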
Patients whose disease had started earlier had lower satisfaction scores; however, this association was not significant (p = 0.08). Also, in our study, patients' age, age at diagnosis, and duration of treatment was not associated with treatment response either. In the study by R. C. Lamb et al., the duration of disease was found to have a significant association with treatment response, but the age at onset of the disease was not associated with treatment response. 23 Age of onset or start of treatment was not associated with treatment response in the study by Chiang et al. as well (p = 0.817, p = 0.802). 8 In our study, we concluded that neither subtypes of AA, nor the extent of involvement, were associated with treatment satisfaction or treatment response. We concluded that patients with alopecia ophiasis were more satisfied, and patients with patchy alopecia had the poorest treatment response among the four subtypes; however, these associations were not significant. Similarly, in the study by Dr. Aghaei on treatment response, type of alopecia was not significantly associated with response. 11 ACKNOWLEDGMENTS Funding was made available by the authors. CONFLICTS OF INTEREST The authors declare no conflicts of interest. DATA AVAILABILITY STATEMENT Dr Maryam Nasimi had full access to all of the data in this study and takes complete responsibility for the integrity of the data and the accuracy of the data analysis. TRANSPARENCY STATEMENT Dr Maryam Nasimi affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
2022-04-29T15:18:55.229Z
2022-04-26T00:00:00.000
{ "year": 2022, "sha1": "1fb804883269b7625ca22bc7715f22a0faefa6ef", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "dd0a32bcb9e0fcd4f421ffb3382852ee401bbdb6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119309336
pes2o/s2orc
v3-fos-license
A semi-model structure for Grothendieck weak 3-groupoids

In this paper we apply some tools developed in our previous work on Grothendieck $\infty$-groupoids to the finite-dimensional case of weak 3-groupoids. We obtain a semi-model structure on the category of Grothendieck 3-groupoids of suitable type, thanks to the construction of an endofunctor $\mathbb{P}$ that has enough structure to behave like a path object. This makes use of a recognition principle we prove here that characterizes globular theories whose models can be viewed as Grothendieck $n$-groupoids (for $0\leq n \leq \infty$). Finally, we prove that the obstruction in arbitrary dimension (possibly infinite) only resides in the construction of (slightly less than) a path object on a suitable category of Grothendieck (weak) $n$-categories with weak inverses. This also gives a sufficient condition for endowing an $n$-groupoid à la Batanin with the structure of a Grothendieck $n$-groupoid.

Introduction

Alexander Grothendieck proposed an algebraic definition of weak ∞-groupoids in 1983, in a letter to Quillen; see [Gr]. His idea was to have a completely algebraic model of these sophisticated objects, in contrast with the existing non-algebraic ones (i.e. Kan complexes). Moreover, as was proven to be true for other models, he conjectured that these algebraic structures model all homotopy types. This goes under the name of the "homotopy hypothesis". In his recent paper ([Hen]), Henry proved that if a technical condition on the category of Grothendieck ∞-groupoids is satisfied, namely the "pushout lemma" (which states that pushouts of certain maps induce isomorphisms on homotopy groups), then the homotopy hypothesis holds true. Moreover, this is the only non-trivial step in constructing a semi-model structure on the category of Grothendieck ∞-groupoids. In our previous work ([EL]) we developed some tools to attack this problem. We constructed a globular set that ought to model a path object for the category ∞-Gpd, provided one can endow it with the required algebraic structure. We also initiated this construction by interpreting all the categorical operations in a non-functorial way, and we showed how to fix this in low dimensions. Here we prove in Theorem 4.4 that essentially this is the only obstruction to the construction of the semi-model structure, and we get the desired extension in the simpler case of a truncated 3-dimensional version of these highly structured algebraic objects. In fact, we provide this path object with enough structure to make it into an object of 3-Gpd, the category of Grothendieck 3-groupoids (of suitable type), and we use this path object to endow the above-mentioned category with a semi-model structure. According to the (generalized) homotopy hypothesis, weak 3-groupoids should model all homotopy 3-types, and this will be the object of study of subsequent work, which will make use of the existence of this semi-model structure. Unless otherwise stated, all the structures are weak, thus we use the terms n-groupoid and n-category to mean weak ones.

In Section 2 we recall the basic definitions available in the existing literature, with the necessary modifications needed to adapt them, when appropriate, to the category n-Gpd of n-groupoids. This section also contains the definition and some basic properties of a weak factorization system on the category n-Gpd for 0 ≤ n ≤ ∞ that will be used throughout the entire paper.
The core of this work is in Sections 3 and 4: in the former we prove a characterization of those globular theories for which the category of models bears a cofibrantly generated semi-model structure whose objects look like Grothendieck n-groupoids for 0 ≤ n ≤ ∞ (though they may possibly be strict n-groupoids): this is Theorem 3.1, which enables us to prove the main result of Section 4, i.e. Theorem 4.4. This is a very general result which states that it is enough to construct a path object on the category of weak n-categories with weak inverses, i.e. C W -models (see Definition 4), in order to show that these are Grothendieck n-groupoids and obtain a semi-model structure on said category. Section 5 is a recollection of all the main constructions performed in [EL], such as cylinders on globular sums, modifications, and, most importantly, the elementary interpretation ̺ : Cyl(D n ) → Cyl(A) of a given homogeneous (categorical) operation ̺ : D n → A, all adapted to work in the case of C W -models. We also include the explicit description of a specific instance of the stack of cylinders that we defined abstractly in the previous paper, since this turns out to be useful in some calculations. In the final section, given a C W 3 -model X, where C 3 denotes a coherator for 3-categories, we endow the globular set PX with the structure of a C W 3 -model: this makes use of results from Section 6 of [EL] together with new content. Essentially, we inductively correct the boundary of ̺ so as to make it into a functor Cyl : C W 3 → Mod(C W 3 ). Therefore, we get an endofunctor P on Mod(C W 3 ) (see Corollary 6.11) as stated in Proposition 4.9, which concludes this work according to Theorem 4.4.

Background

The basic shapes that constitute the arities for globular theories are the so-called globular sums, i.e. a suitable notion of pasting of globes that will be introduced in the following section. We then recall the definition of models of a globular theory and their universal property, together with a class of (trivial) cofibrations on the category of such models.

Globular theories and models

Definition 2.1. Let G be the category obtained as the quotient of the free category on the graph 0 ⇉ 1 ⇉ 2 ⇉ · · · (the two parallel arrows in each degree being denoted σ and τ). We denote the class of m-bijective morphisms by bij m , and that of m-fully faithful ones by ff m . The following result holds true, and its proof is left as a simple exercise. Globes are not enough to capture a meaningful theory of n-groupoids, for which we need more complex shapes, called globular sums, which are a special kind of pasting of globes.

Definition 2.4. A table of dimensions is a sequence of integers of the form (i 1 , i ′ 1 , i 2 , i ′ 2 , . . . , i ′ m−1 , i m ) satisfying the following inequalities: i ′ k < i k and i ′ k < i k+1 for every 1 ≤ k ≤ m − 1. Given a category C and a functor F : G → C, a table of dimensions as above induces a diagram of the form F (i 1 ) ← F (i ′ 1 ) → F (i 2 ) ← · · · ← F (i ′ m−1 ) → F (i m ). A globular sum of type F (or simply globular sum) is the colimit in C (if it exists) of such a diagram. We also define the dimension of this globular sum to be dim(A) = max{i k } k∈{1,..., m} . Given a globular sum A, we denote with ι A k the colimit inclusion F (i k ) → A, dropping subscripts when there is no risk of confusion. We denote by Θ 0 the full subcategory of globular sets spanned by the globular sums of type y : G → [G op , Set], where y is the Yoneda embedding. Moreover, we denote y(i) by D i and the globular sum corresponding to the table of dimensions (1, 0, 1, 0, . . . , 0, 1), in which the integer 1 appears exactly k times, by D ⊗k 1 .
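As a small worked illustration of Definition 2.4 (ours, not part of the original text), the simplest nontrivial table of dimensions can be spelled out as follows; the labelling of the two legs by the cotarget and cosource maps follows the usual gluing convention for globular sums.

```latex
% Worked example (not from the paper): for m = 2, the table of dimensions
% (i_1, i'_1, i_2) = (1, 0, 1) satisfies i'_1 < i_1 and i'_1 < i_2.
% For F = y its colimit is the globular sum D_1 \amalg_{D_0} D_1 = D_1^{\otimes 2},
% i.e. two 1-globes glued along a 0-globe (a pair of composable arrows).
\[
  F(i_1) \longleftarrow F(i'_1) \longrightarrow F(i_2)
  \;=\;
  D_1 \xleftarrow{\;\tau\;} D_0 \xrightarrow{\;\sigma\;} D_1,
  \qquad
  \operatorname{colim} \;\cong\; D_1 \amalg_{D_0} D_1 \;=\; D_1^{\otimes 2}.
\]
```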
In dealing with Grothendieck n-groupoids, we will need a truncated version of the category G, which we now introduce. Definition 2.5. We denote with G n the full subcategory of G generated by the set of objects {k ∈ G : k ≤ n}. Analogously to the infinite dimensional case, we consider the presheaf category [G op n , Set], called the category of n-truncated globular sets, or simply n-globular sets. We will always assume n > 0, to avoid the trivial case of 0-groupoids, i.e. sets. Proposition 2.3 can be extended to the case of n-globular sets, when (using the notation of the proposition) m ≤ n. If we consider the subcategory Θ ≤n 0 ⊂ Θ 0 spanned by globular sums of dimension less or equal than n, we see that there is a fully faithful embedding functor Θ ≤n 0 → [G op n , Set]. The category Θ ≤n 0 plays a similar role for n-groupoids as Θ 0 does for ∞-groupoids. Definition 2.6. An n-truncated globular theory is a pair (E, F), where E is a category and F : Θ ≤n 0 → E is a bijective on objects functor that preserves globular sums of dimension less than or equal to n. We denote by GlTh n the category of n-globular theories and n-globular sums preserving functors. More precisely, a morphism H : (E, F) → (C, G) is a functor H : E → C such that G = H • F. If there is no risk of confusion we will omit the structural map F : Θ ≤n 0 → E and simply denote the globular theory (E, F) by E. Contractibility ensures the existence of all the operations that ought to be part of the structure of an n-groupoid. However, it does not guarantee weakness of the models, and indeed there exists a contractible globular theory (which we denoted byΘ ≤n ) whose models are strict n-groupoids. To remedy this, we need the concept of cellularity, or freeness, to restrict the class of globular theories we consider. This notion is based on a slight variation of a construction explained in paragraph 4.1.3 of [Ar1], which we record in the following proposition. Proposition 2.10. Given an n-globular theory E and set X of admissible pairs in it, there exists another n-globular theory E[X] equipped with a morphism ϕ : E → E [X] in GlTh n with the following universal property: given an n-globular theory C, a morphism H : E[X] → C is determined up to a unique isomorphism by its precomposition F with ϕ, a choice of an extension as in Definition 2.9 for the image under F of each admissible pair f, g : D k → A in X with k < n and the requirement that H(f ) = H(g) if k = n. In words, E[X] is obtained from E by universally adding a lift for each pair in X of nonmaximal dimension and by equalizing parallel n-dimensional operations in X. Definition 2.11. An n-globular theory E is said to be cellular if there exists a functor E • : ω → GlTh n , where ω is the first infinite ordinal, such that: 1. E 0 ∼ = Θ ≤n 0 ; 2. for every m ≥ 0, there exists a family X of admissible pairs of arrows in E m (as in Definition 2.9) such that E m+1 ∼ = E m [X]; 3. colim m∈ω E m ∼ = E. As anticipated earlier, we now define the class of n-globular theories which are appropriate to develop a theory of n-groupoids. Definition 2.12. An n-truncated (groupoidal) coherator, or, briefly, an n-coherator, is a cellular and contractible n-globular theory. Given an n-coherator G, the category of n-groupoids of type G is the category Mod(G) of models of G. In what follows, G will always denote a coherator for n-groupoids, with 0 ≤ n ≤ ∞, and sometimes we will denote the category of its models by n-Gpd, with no reference to G. 
The restriction of an n-groupoid X : G op → Set to Θ ≤n 0 op gives an object of Mod(Θ ≤n 0 ) ≃ [G op n , Set], which we call the underlying n-globular set of X. The set X i represents the set of i-cells of X for each i ≤ n. Let us now consider the algebraic structure acting on these sets of cells. Section 3 of [Ar2] shows how to endow the underlying globular set of an ∞-groupoid with all the sensible operations it ought to have to deserve to be called such. A completely analogous argument applies, mutatis mutandis, to the case of n-groupoids. For example, we can build operations that represent binary composition of a pair of 1-cells, codimension-1 inverses for 2-cells and an associativity constraint for composition of 1-cells by solving, respectively, the following extension problems: In a similar fashion one can build every sensible operation an n-groupoid ought to be endowed with. Whenever a choice of such operations is understood, at the level of models (i.e. n-groupoids) we denote with the familiar juxtaposition of cells the (unbiased) composition of them, and with the exponential notation A −1 we denote the codimension-1 inverse of an m-cell A. We will need to choose some operations once and for all, so we record here their definition. Choose an operation ∇ 1 0 : D 1 → D 1 ∐ D 0 D 1 as above, and define w = ∇ 1 0 . Next, pick operations D 2 → D 2 ∐ D 0 D 1 and D 2 → D 1 ∐ D 0 D 2 whose source and target are given, respectively by Proceeding in this way we get specified whiskering maps for every k ≤ n of the form: We will often avoid writing down all the subscripts, when they are clear from the context. Definition 2.13. Given a globular sum A, whose table of dimensions is noting that the target is isomorphic to In a completely analogous manner we define a map w A : Consider the forgetful functor induced by the structural map Θ ≤n 0 → G. Given a map of n-groupoids f : X → Y and a natural number m ≤ n, we can factor the map U n (f ) as U n (f ) = g • h, where h is m-bijective and g is m-fully faithful thanks to Proposition 2.3. It is not hard to see that the target of h can be endowed with the structure of an n-groupoid so that g and h are maps of such. This fact, thanks to Proposition 2 of [BG], provides the following result that will be used in this paper. Proposition 2.14. Given m ≤ n, the orthogonal factorization system (bij m , ff m ) on n-globular sets lifts to one on Mod(G) via the forgetful functor U n : This means, in particular, that every map in Mod(G) admits a unique factorization f = g •h where U n (h) is m-bijective and U n (g) is m-fully faithful, and that m-bijective maps are closed under colimits in Mod(G). Example 2.15. The maps σ k , τ k : D k → D k+1 are (k − 2)-bijective. Indeed, since the forgetful functor U n preserves the right class of the factorization system (bij k , ff k ) on Mod(G) for every k ≤ n, its left adjoint F n : [G op n , Set] → Mod(G) preserves the left class. Now it is enough to observe that F n sends source and target maps of globular sets to source and target maps of n-groupoids, and for the former it is easy to check the statement on (k − 2)-bijectivity. Given a globular sum A such that dim(A) = m > 0, whose table of dimensions is we define its boundary to be the globular sum whose table of dimensions is The maps σ, τ : Thanks to what we observed in Example 2.15, we have the following result. Proposition 2.16. Given a globular sum Let us now see how to adapt the main definitions to the case of n-categories, following [Ar1]. 
The definition is essentially the same as that of n-groupoids, except we have to restrict the class of admissible maps. Definition 2.17. Given an n-globular theory (C, F ), we say that a map f in C is globular if it is in the image of Θ ≤n 0 under F . On the other hand, f is called homogeneous if for every factorization f = g • f ′ where g is a globular map, g must be the identity. C is said to be homogeneous if it comes endowed with a globular sum preserving functor H : C → Θ ≤n that detects homogeneous maps, in the sense that a map f in C is homogeneous if and only if H(f ) is such, where Θ is the globular theory for strict ∞-categories, as defined in [Ar1], and Θ ≤n is its subcategory spanned by all globular sums of dimension less or equal to n. If this is the case, then given an homogeneous map ̺ : D m → A we have m ≥ dim(A), and every map f admits a unique factorization as a homogeneous map followed by a globular one. Remark 2.18. A map f : A → B in a homogeneous globular theory C is homogeneous if and only if, for every D i k appearing in the globular decomposition of A, the homogeneous-globular Definition 2.19. Let (C, F ) be an n-globular theory. A pair of maps (f, g) with f, g : D k → A is said to be admissible for a theory of n-categories (or just admissible, in case there is no risk of confusion with the groupoidal case) if either k = 0, or both of them are homogeneous maps or else if there exists homogeneous maps f ′ , g ′ : D k → ∂A such that the following diagrams commute The definition of a coherator for n-categories is totally analogous to that for n-groupoids, i.e. it is a contractible and cellular globular theory, except the pair of maps that we consider in both cases have to be the admissible ones in the sense of the previous definition. More precisely, the pairs appearing in Definition 2.9 and in point 2 of Definition 2.11 must be pairs of admissible maps. Definition 2.20. A (Grothendieck) n-category is a model of a coherator for n-categories. Unless specified otherwise, n-category and n-groupoid will always mean weak ones, i.e. Grothendieck n-categories and Grothendieck n-groupoids. Direct categories and cofibrations Definition 2.21. (see also [Ho], Chapter 5) A direct category is a pair (C , d), where C is a small category and d : Ob(C ) → λ is a function into an ordinal λ , such that if there is a non-identity morphism f : Given a cocomplete category D and a functor X : C → D, we define the latching object of X at an object c ∈ C to be the object of D given by This defines a functor L c from the functor category [C , D] to the category D, together with a natural transformation ε c : L c ⇒ ev c , with codomain the functor given by evaluation at c. We also define the latching map of a natural transformation α : X → Y in D C at an object c ∈ C to be the map of DL The following results on direct categories are well known, therefore we omit their proofs. The notion of weak orthogonality is denoted with ⋔ Lemma 2.22. Let D be a direct category and C a category equipped with two classes of arrows (L , R) such that L ⋔ R. If we define Lemma 2.23. Let A, B be two cocomplete categories equipped, respectively, with two classes of arrows Example 2.24. The category G n has a natural structure of direct category, with degree function defined by deg : Every time we have an n-coglobular object D • : G n → C in a finitely cocomplete category, we can consider the latching map of ! : ∅ → D • at m, i.e. 
the map and the other latching maps are obtained inductively from the following cocartesian square is the canonical coglobular n-groupoid, we will also denote L m (D • ) by S m−1 , borrowing this notation from topology. , together with the map collapsing a pair of parallel n-cells to a single n-cells (resp. source maps {σ k : D k → D k+1 } 0≤k≤n−1 ), and I n its saturation, i.e. the set ⋔ (I ⋔ n ) (resp. J n = ⋔ (J ⋔ n )). We say that a map of n-groupoids f : X → Y is a cofibration (resp. trivial cofibration) of n-groupoids if it belongs to I n (resp. J n ). The maps in the class J ⋔ n (resp. I ⋔ n ) are called fibrations (resp. trivial fibrations). The small object argument provides a factorization system on n-groupoids given by cofibrations and trivial fibrations. Lemma 2.22 will be applied to this factorization system and to the the direct category structure on G n as defined in Example 2.24, to provide a way of inductively extending certain maps in Mod(G) G . Let * denote the terminal object in the category of n-groupoids. Since every map in J admits a retraction, the following result is straightforward. Proposition 2.26. Every n-groupoid is fibrant, i.e. the unique map X → * is a fibration for every X ∈ Mod(G). Definition 2.27. An n-groupoid X is said to be contractible if the unique map X → * is a trivial fibration. The proof of the following fact is analogous to the one given for ∞-groupoids in Proposition 3.8 of [EL]. Proposition 2.28. Globular sums, seen as objects in the image of the Yoneda embedding functor y : G → Mod(G), are contractible n-groupoids. Recognition principle for semi-model structures on categories of models of globular theories In this section we are going to characterize those globular theories C for which the category of models Mod(C) bears a cofibrantly generated semi-model structure that satisfies some natural conditions for objects of Mod(C) to look like ∞-groupoids. The definition of a (cofibrantly generated) semi-model structure can be found in [FR], Section 12.1. It is clear that everything can be adapted, with the appropriate changes, to the case of n-globular theories for n < ∞. To begin with, we define a class of maps W in Mod(C) that consists of the maps f : X → Y such that every solid commutative square of the form: admits an extension for the upper triangle which is a "lift up to homotopy" for the lower triangle. More precisely, there is a map Γ such that Γ • j k = (α, β) and there exists a (k • each map σ k : D k → D k+1 admits a retraction; • D 0 is contractible (i.e. the unique map D 0 → * is a trivial fibration); • C admits a system of composition and identities, as defined in Definition 4.1; • for every cofibrant object X in Mod(C) there exists a fibration ev : PX → X × X such that ev i = π i • ev is a trivial fibration for i = 0, 1, where π i : X × X → X denote the product projections. If such a semi-model structure exists on Mod(C), then clearly the four conditions are satisfied. Let us check that the converse also holds true. The proof is a matter of checking that the recognition principle for cofibrantly generated model structures applies (mutatis mutandis, since we want to produce a semi-model structure) to this situation. The first condition clearly implies that all objects are fibrant. Moreover, Mod(C) is complete and cocomplete, and both the domains of I and J permit the small object argument. Lemma 3.2. W is closed under retracts. Proof. Assume f is a retract of g ∈ W, so that we have a commutative diagram: 1. 
If f and g belong to W then so does g • f ; 2. If g and g • f belong to W, then so does f ; 3. If g • f = 1 X and f • g belongs to W, then both f and g belong to W. Proof. Firstly, assume f and g belong to W, and assume given a (k − 1)-sphere (a, b) in X, together with a k-cell γ : g • f (a) → g • f (b) in Z. By assumption we get a k-cell β : f (a) → f (b) in Y and a (k + 1)-cell H : g(β) → γ. Again by assumption we get a k-cell α : a → b in X, together with a (k + 1)-cell H ′ : f (α) → β. The composite H • g(H ′ ) : g • f (a) → γ (obtained using the system of composition on C) is the data we need to conclude the proof of the first statement. Turning to the second statement, assume g and g • f belong to W and consider a (k − 1)- . We now have a k-sphere in Y given by (f (H), α), and an extension to a (k + 1)-cell in Z between its image under g. By assumption, we get a lift to a (k + 1)-cell H : f (H) → α, which concludes the proof of the second statement. Finally, if g • f = 1 X then g is a retract of f • g, and is thus a weak equivalence thanks to Lemma 3.2. Therefore, f ∈ W thanks to the second point of this lemma, since identities are weak equivalences thanks to Lemma 3.6. Since relative J-cell complexes relative to D 0 include all globular sums, if we prove that such maps are weak equivalences we then obtain for free the contractibility of globular sums, since D 0 is contractible by hypothesis. We actually prove a little bit more, namely the following result. Proof. Let f : X → Y be as in the statement. Pick a section i Y of the trivial fibration ev 0 : PY → Y , which exists since Y is cofibrant, and denote by α the endomorphism ev 1 • i Y , which is a weak equivalence thanks to Lemma 3.3 and Lemma 3.6. Consider the following commutative square: here r denotes the choice of a retraction of f , which exists since X is fibrant, and the lift Γ exists by assumption, since f ∈ cof (J), X is cofibrant and ev is a fibration. We have ev 0 • Γ = 1 Y which implies that Γ is a weak equivalence, thanks to Lemma 3.3. Therefore, thanks to the same lemma and Lemma 3.6, we see that α • f • r = ev 1 • Γ is also a weak equivalence. A final application of the previous lemma yields that f • r belongs to W, which in turn implies that f is a weak equivalence thanks to Lemma 3.3 again, since r • f = 1 X . Lemma 3.5. cof (J) ⊂ cof (I). Proof. Of course it is enough to check that J ⊂ cof (I). We thus have to prove that, for every k ≥ 0, we have that σ k : D k → D k+1 belongs to cof (I). We know by assumption that S k−1 → D k is a cofibration, so that the colimit injection i 0 : D k is also such, being a pushout of it. We can now compose that with the boundary inclusion S k → D k+1 to conclude the proof. Proof. We start by proving inj(I) ⊂ inj(J) ∩ W. Thanks to Lemma 3.5 we only have to prove that inj(I) ⊂ W, which is obvious, since a cell f → f exists for every cell in Y thanks to the system of identities in C. Conversely, assume f is both a fibration and a weak equivalence, and consider a (k − 1)-sphere (a, b) in X together with a k-cell H : f (a) → f (b) in Y . Since f belongs to W, we find a k-cell H : a → b in X, together with a (k + 1)-cell Γ : f (H) → H in Y . Because f is a fibration, we can lift Γ to a cell γ : H → β, so that f (β) = H and β : a → b, since it is parallel to H, and this concludes the proof. 
Since we have proven that globular sums are contractible in Mod(C), we can endow models of C with the structure of ∞-groupoids, and use the results in Section 4 of [Ar2] to obtain the missing piece: namely, the 2-out-of-3 property for W. Indeed, the maps in W can be characterized as in Theorem 4.18 (ibid.) and we can use the invariance of basepoints (i.e. Corollary 4.14) to conclude that if f and g • f are weak equivalences then g is also such. More precisely, let y be a 0-cell of Y , we want to prove that π n (g) : π n (Y, y) → π n (Z, g(y)) is an isomorphism. Choose a 1-cell f (x) → y, whose existence is ensured by the fact that π 0 (f ) is bijective, so that we have an isomorphism π n (Y, y) ∼ = π n (Y, f (x)) as well as π n (Z, g(y)) ∼ = π n (Y, g(f (x))). Consider the following commutative diagram: By assumption, the upper horizontal arrow of the square is bijective, which implies that the bottom one is also such and this concludes the proof of Theorem 3.1. The semi-model structure The part of Theorem 3.1 that is hard to check in practice is that of the path object (trivial) fibration, i.e. the functorial construction of a fibration ev : PX → X × X such that the composition with both projections is a trivial fibration. As we prove in Theorem 4.4, it is enough to construct such map for a globular theory obtained from a coherator for n-categories (with 0 ≤ n ≤ ∞) by freely adjoining a left and a right inverse for each map. This appears to be quite easier than building one for a coherator for n-groupoids, since we can use the homogeneity property, and in the last section we will define such path object for the case n = 3. The following definition is slightly different from the one given in [EL], namely we consider left and right inverses instead of two-sided inverses, and this is done in order to produce the correct homotopy type of globular sums in the corresponding category of models. More precisely, we are going to prove that the maps α k defined in 6 are trivial cofibrations of C W -models, which is false if one considers a theory with a two-sided inverse operation instead. i 2 ) denotes the colimit inclusion onto the first (resp. second) factor. A system of identities (with respect to a chosen system of compositions) consists of a family of maps {id k : A system of (left and right) inverses (with respect to chosen systems of compositions and identities) consists of a family of maps {i l k , i r k : If C admits a choice of such three systems, given a globular functor F : C → Mod(G) we say that for every G-model X, the C-model G(F, X) can be endowed with such systems. Remark 4.2. In the presence of both left and right inverses for every cell, any of the two can be promoted to a two-sided one. For instance, assume f is an m-cell with both a left inverse k and a right inverse g, and let us show that k is also a right inverse for f , the remaining case being similar. It is enough to provide a cell from k to g, as follows (omitting bracketings for simplicity): Given an n-coherator for categories C, we define a new globular theory C W by means of the following pushout of globular theories: Here, we denote with Θ 0 [comp, id] the free globular theory on a system of composition and identities, and with Θ 0 [comp, id, inv] the free globular theory on a system of composition, identities and inverses. 
There is a canonical map as depicted in the upper left of the square, and the map denoted by i is defined as follows: first, we choose a binary composition operation on 1-cells (say, the w defined in (2.1)), and we set i(c_k) = Σ^{k−1}(w). The action on identity operations is defined similarly.

We are going to prove that, given a coherator for n-categories C (with 0 ≤ n ≤ ∞), the theory C_W is a coherator for n-groupoids, and that there is a semi-model structure on the category of C_W-models Mod(C_W) (with (trivial) cofibrations and weak equivalences as in Theorem 3.1), provided there is a functor P : Mod(C_W) → Mod(C_W) together with a natural transformation ev : P ⇒ Id × Id which is a pointwise fibration with the property that ev_i := π_i • ev is a pointwise trivial fibration. In the case n = 3, we are going to construct a globular functor Cyl : C_W → Mod(C_W) in Section 6, and by setting PX = Mod(C_W)(Cyl(•), X) we will obtain the endofunctor in the hypotheses of Theorem 4.4. This, in turn, will produce Theorem 4.10 as a corollary. We start with the general result.

Theorem 4.4. Let C be a coherator for n-categories (with 0 ≤ n ≤ ∞), and suppose there is a functor P : Mod(C_W) → Mod(C_W) endowed with a natural transformation ev : P ⇒ Id × Id which is a pointwise fibration, with the property that ev_i := π_i • ev is a pointwise trivial fibration. Then C_W satisfies the hypotheses of Theorem 3.1, and therefore is a coherator for n-groupoids. Moreover, Mod(C_W) admits a semi-model structure as described in Theorem 3.1.

Proof. We denote by D_0^{C_W} the representable C_W-model on D_0, and we adopt a similar convention for D_0^C. All the hypotheses of the theorem are trivially satisfied, except for the contractibility of D_0^{C_W}. We know from Lemma 5.4 that D_0^C is contractible, so it can be endowed with the structure of a C_W-model, which we still denote by D_0^C. The claim would then follow if we can prove that the counit of the adjunction F ⊣ U : Mod(C) ⇄ Mod(C_W) is a weak equivalence at D_0^C. This is a consequence of a more general result, proven in Proposition 4.5.

Proposition 4.5. Let X be a C_W-model such that FUX is cofibrant. Then the counit ε_X : FUX → X of the adjunction F ⊣ U is a weak equivalence.

Proof. It is enough to show that U(ε_X) is a weak equivalence of C-models. Let us consider the following commutative square in Mod(C): here, i denotes a choice of a section of the map ev_0 : PUFUX → UFUX, which is equal to U(ev_0 : PFUX → FUX) and is therefore a trivial fibration; hence, the existence of i is ensured by the cofibrancy assumption on FUX. Suppose we manage to find a diagonal filler Γ : UFUX → PUFUX for such a square: we would then have that Γ is a weak equivalence by the 2-out-of-3 property, since ev_1 and U(ev_1 • i) both are, and by construction ev_1 • Γ = η_{UX} • U(ε_X). This, in turn, implies that η_{UX} • U(ε_X) is also a weak equivalence; moreover, thanks to the triangle identities, we have U(ε_X) • η_{UX} = 1_{UX}. This implies that U(ε_X) is a weak equivalence and concludes the proof. Therefore, all that is left to do is to find the filler Γ, and this is accomplished separately in the next two lemmas.

We define a set of maps α_k : D_k → I_k, where the codomain is obtained by freely adding a pair of k-cells going in the opposite direction, as well as a pair of (k+1)-cells connecting the two possible composites with identities (with respect to the system of compositions chosen to define (4)). For example, if k = 1 then I_1 is the free C-model on a 1-cell f : a → b, a pair of 1-cells g, h : b → a going in the opposite direction, and a pair of 2-cells connecting the composites gf and fh with the identities on a and b, respectively; the map α_1 picks out f.
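Explicitly, choosing one of the two symmetric orientations for the invertibility 2-cells and writing composition by juxtaposition as elsewhere in the paper, I_1 can be sketched as the free C-model on the data

\[
f : a \to b, \qquad g, h : b \to a, \qquad \nu : gf \Rightarrow \mathrm{id}(a), \qquad \mu : fh \Rightarrow \mathrm{id}(b),
\]

with α_1 : D_1 → I_1 the map picking out f.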
Lemma 4.6. The map η_{UX} is obtained as a transfinite composite of pushouts of maps of the form α_k : D_k → I_k for k ≥ 1.

Proof. The claim follows from the same argument given in Proposition 2.2 of [Nik], which proves that the unit is an {α_k}_{k≥0}-cell complex provided the maps α_k are monomorphisms. These maps are cofibrations, so it suffices to show that cofibrations are monomorphisms. In the language of [JB2], we can view Mod(C_W) as the cofiltered limit of a tower of iterated injectivizations, starting from globular sets. Since maps in I_0 are monomorphisms, we see that F_0(I_0)-cell complexes in Inj(I_1), where F_0 is the left adjoint to the forgetful functor into globular sets, are again monomorphisms since, by Proposition 2.18 (ibid.), these are I_0-cell complexes. Therefore we can iterate this construction and obtain I ⊂ Mod(C_W) as a filtered colimit of the sets F_i(I_0), with F_i being the left adjoint to the forgetful functor down to globular sets, where each set F_i(I_0) consists of monomorphisms by induction. It follows that I consists of monomorphisms.

Lemma 4.7. Let p : E → B be a fibration in Mod(C_W); then p has the right lifting property with respect to the set of maps {α_k : D_k → I_k}_{k≥1}.

Lemma 4.8. The commutative square (5) admits a diagonal filler Γ : UFUX → UPFUX.

Proof. Thanks to the previous result and to the fact that F(α_k) = α_k (where, with a minor abuse of language, we have denoted with the same expression the interpretation of α_k in Mod(C) on the left and in Mod(C_W) on the right), with F being the left adjoint to the forgetful functor, the map η_{UX} has the left lifting property with respect to fibrations, which yields the filler Γ.

It remains to prove Lemma 4.7. Suppose given a fibration p : E → B and a diagram of k-cells and (k+1)-cells in B of the shape of I_k. From these data we obtain a cell with target p(f)g which is homotopic to γ. Therefore, by lifting this homotopy and taking its target, we get a cell γ̄ : fg → id(x), which is the lift we were looking for. The case of β is similar to the one we have just considered, thanks to the previous lemma.

As anticipated earlier, when n = 3 we can use the results of Section 6, in conjunction with those of Section 6 in [EL], to obtain an endofunctor P on Mod(C_W) with the desired properties.

Proposition 4.9. Given a coherator C for 3-Cat, there exists a functor P : Mod(C_W) → Mod(C_W) equipped with a natural transformation ev : P ⇒ Id × Id which is a pointwise fibration. Moreover, the composites with the product projections ev_i = π_i • ev are pointwise trivial fibrations.

It follows that, in the situation of the previous proposition, C_W is a coherator for 3-groupoids, and we can now present the central result of this work.

Theorem 4.10. There exists a cofibrantly generated semi-model structure on the category 3-Gpd ≅ Mod(C_W) of Grothendieck 3-groupoids of type C_W, whose set of generating cofibrations (resp. generating trivial cofibrations) consists of the boundary inclusions S^{k−1} → D_k (resp. of the maps σ_k : D_k → D_{k+1}). The weak equivalences coincide with the class W defined in (3), and all the objects are fibrant.

Proof. Thanks to the previous corollary, we have that C_W satisfies all the hypotheses of Theorem 3.1, and this concludes the proof.

Remark 4.11. In the following section we will define a functor Cyl(D_•) : G → Mod(C_W), where C is a coherator for ∞-categories. It follows from the results of this and the previous section that, if one proves that the division lemma holds for C_W-models, then it is enough to extend the functor above to one of the form Cyl : C_W → Mod(C_W) (as we do in this paper in the 3-dimensional case) to prove that globular sums are contractible in C_W, i.e. that the latter is a coherator for ∞-groupoids, thus getting a semi-model structure on Grothendieck ∞-groupoids using the same strategy outlined in this section.
This would also solve the open problem of making an ∞-groupoid à la Batanin, i.e. a C_W-model (see [Bat]), into a Grothendieck one. Moreover, this would also prove the homotopy hypothesis, thanks to the main results in [Hen].

Main constructions (revisited)

In this section we are going to adapt all the main constructions on the category of ∞-groupoids made in our previous work ([EL]) to the context of C-models and C_W-models. In what follows, C_n will denote a fixed coherator for n-categories, sometimes denoted by just C when there is no risk of ambiguity.

Relative lifting properties of Mod(C)

To obtain the desired results, we need some preliminary lemmas on relative lifting properties of C-models with respect to Θ-models, i.e. strict ∞-categories. These are needed because we used contractibility of globular sums in various steps of those constructions, and in this context globular sums are not going to be contractible in general. Recall that the structural functor F : C → Θ of the homogeneous coherator C gives rise to a cocontinuous functor F : Mod(C) → Mod(Θ) ≃ ω-Cat, obtained as the Kan extension F := Lan_y(y • F) : Mod(C) → Mod(Θ), where the y's denote two (different) instances of the Yoneda embedding.

Lemma 5.1. An extension problem in C of a pair (f, g) : S^{n−1} → A along the boundary inclusion S^{n−1} → D_n admits a solution if and only if its image under F : C → Θ does so, and moreover such an extension can be chosen so as to live over the one in Θ.

Proof. Let us prove the non-trivial implication. Suppose we have a map H : D_n → A in Θ, with boundary (F(f), F(g)). By factoring H into a homogeneous map p : D_n → A′ followed by a globular map i : A′ → A, we see by inspection that the pair (f′, g′) := p • j_n : S^{n−1} → A′ is admissible: indeed, such are the boundaries of homogeneous maps in Θ. By uniqueness of homogeneous-globular factorizations in C, we see that f and g have to factor through A′ via an admissible pair (f̄, ḡ) : S^{n−1} → A′ that lives over (f′, g′). It follows that there exists an extension of (f̄, ḡ) to a map p̄ : D_n → A′, and therefore the composite i • p̄ is the extension we are looking for, which lives over H by construction.

Lemma 5.2. Let i : X → Y be an I-cellular map in C (i.e. a transfinite composite of pushouts of maps in I), and consider an extension problem of a map f : X → A along i, where A is a globular sum. Then such an extension exists if and only if F(f) admits an extension along F(i). Moreover, if we fix an extension in ω-Cat, then the one in C can be chosen to live over it under F.

Proof. There is only one non-trivial implication, which follows from Lemma 5.1 and cocontinuity of F by constructing the extension cell by cell.

Lemma 5.3. Let i : X → Y be a map in Mod(C)^G such that its latching maps L̂_n(i) are I-cellular maps for every n ≥ 0. Then an extension problem along i of a map into an object that is pointwise a globular sum admits a solution if and only if its image under F does so.

Proof. The non-trivial implication follows from the observation that F(L̂_n(i)) ≅ L̂_n(F(i)) by cocontinuity of F, so that one can construct an extension using the usual inductive argument for Reedy categories and the previous lemmas.

Let us conclude this section with a very useful lemma on fillers of spheres in globular sums.

Lemma 5.4. Let A be a globular sum in C with n = dim(A). Then every k-sphere in A with k ≥ n admits a filler. In particular, D_0 is contractible, i.e. the unique map D_0 → * has the right lifting property with respect to all boundary inclusions S^{k−1} → D_k.

Proof. Thanks to Lemma 5.1, it is enough to prove the statement in ω-Cat.
If k > n, then the only k-sphere S^k → A is given by a pair of identities on the same cell, and therefore it surely admits a filler. If k = n and the restriction along one of the inclusions D_k → S^k is an identity cell, then the other must be one as well, since globular sums in Θ admit no non-trivial endomorphisms of cells; in this case too, a filler exists. Finally, if we have an n-sphere in A consisting of a pair of parallel n-cells neither of which is an identity, then the claim follows from the fact that an n-cell in an n-dimensional globular sum in Θ is uniquely determined by its boundary, as can easily be proven using the combinatorial description of Θ in terms of trees given in Section 3.3 of [Ar1].

Suspension-loop space adjunction

We recall the construction of the suspension-loop space adjunction performed in Section 4 of [EL]. In this case, given X ∈ Mod(C^n_W) and two 0-cells a, b ∈ X_0, we produce the C^{n−1}_W-model of morphisms from a to b, denoted by Ω(X, a, b). For the sake of simplicity we only consider the case n = ∞ and omit the subscript, leaving the task of modifying this to fit the finite-dimensional case to the interested reader. This functor will then be extended to an adjunction of the form

Σ : Mod(C_W) ⇄ S^0 ↓ Mod(C_W) : Ω,

where the category on the right is the slice category under S^0. Using the language of trees, it is straightforward to construct a functor Σ : Θ → S^0 ↓ ω-Cat, where ω-Cat denotes the category Mod(Θ) of strict ∞-categories. As previously done, we will construct Σ : Mod(C) → S^0 ↓ Mod(C) as the cocontinuous globular extension of a functor Σ : C → S^0 ↓ Mod(C), by induction on the defining tower of C, assuming that at each step the construction is compatible with the structural functor down to Θ. Here, A is obtained by taking a bijective-on-objects/fully faithful factorization of the map Σ : Θ_0 → S^0 ↓ Mod(C), which is defined as before, and so it clearly comes endowed with a functor down to Θ. Implicitly, we are assuming that Σ factors through C. The case Θ_0 = C_0 has already been discussed, and the limit ordinal case is trivial. Let us then suppose we have the construction on C_α, and that C_{α+1} is obtained by adding an operation ̺ : D_n → A with boundary an admissible pair (f, g). It is easy to see that Σ : Θ → Θ preserves admissible pairs, so that we can define Σ(̺) as the choice of an extension of the pair (Σ(f), Σ(g)) in A ⊂ S^0 ↓ Mod(C) (which exists in C and is automatically under S^0), and this again satisfies the inductive hypothesis.

If we want to adapt this construction to the case of Mod(C_W), we simply consider the pushout (4) that defines this globular theory. The definition of A is the same as above, and constructing a functor Σ : C_W → A now amounts to defining an action on C and on inverses in a compatible way. More precisely, we define a map Σ : C → S^0 ↓ Mod(C_W) by composing the functor Σ : C → S^0 ↓ Mod(C) defined above with the natural map S^0 ↓ Mod(C) → S^0 ↓ Mod(C_W). In addition, we define a map Σ : Θ_0[comp, id, inv] → S^0 ↓ Mod(C_W) by setting Σ(i^ε_k) = Σ^k(i^ε_1) for ε = l, r, and similarly for the maps k^ε_m. The universal property of pushouts yields the desired map Σ : C_W → A ⊂ S^0 ↓ Mod(C_W), and consequently a functor Σ : Mod(C_W) → S^0 ↓ Mod(C_W) by left Kan extension. By adjunction, the underlying globular set of Ω(X, a, b) is given by

Ω(X, a, b)_n = (S^0 ↓ Mod(C_W))(Σ(D_n), (X, a, b)).

We will often denote Ω(X, a, b) simply by X(a, b).

Remark 5.5. If we compose Σ with the forgetful functor U : S^0 ↓ Mod(C_W) → Mod(C_W), we get a functor which is no longer cocontinuous. Nevertheless, it is well known that U creates connected colimits, and therefore U • Σ preserves all such.
Because Σ(I_{n−1}) ⊂ I_n, where I_k is the set of maps defined in Definition 2.25, we therefore have that U • Σ preserves cofibrations (i.e. it sends maps in cof(I_{n−1}) to maps in cof(I_n), these classes being the respective saturations of I_{n−1} and I_n). A similar situation is treated in Lemma 1.3.52 of [Cis].

The following lemma will be used quite frequently in the forthcoming sections. Its proof is straightforward, and it is thus left to the reader.

Lemma 5.6. For every globular sum A of dimension at least 1 there exist unique globular sums α_1, . . . , α_q such that

A ≅ Σα_1 ∐_{D_0} ⋯ ∐_{D_0} Σα_q,

the colimit being taken over the maps ⊤ : D_0 → Σα_i and ⊥ : D_0 → Σα_{i+1}, where we denote by (ΣB, ⊥, ⊤) the image under the functor Σ : (n−1)-Gpd → S^0 ↓ n-Gpd of any globular sum B.

Cylinders

We now recall the important features of the cylinders defined in [EL], this time in the context of C^n_W-models. As before, we only provide details for the case n = ∞, and we drop subscripts. Cylinders should be thought of as homotopies between cells that are not parallel, so that one needs to provide first homotopies between the 0-dimensional boundaries, then between the 1-dimensional boundaries adjusted using those homotopies, and so on. This is the right notion of (pseudo-)natural transformation in this context.

Example 5.7. Let n ≥ 2. By definition, Cyl(D_0) is the free model on a 1-cell. Therefore, giving a 0-cylinder in a C_W-model X is equivalent to specifying one of its 1-cells. If we go one dimension up, we have that a 1-cylinder C : Cyl(D_1) → X amounts to 1-cells α : a → b and β : c → d, connecting 1-cells f : a → c and g : b → d, and a 2-cell filling the resulting square. Following the notation of the next paragraphs, we have that f = C • Cyl(σ) and g = C • Cyl(τ); moreover, α = C • ι_0 and β = C • ι_1. This cylinder represents the fact that to give a "homotopy" from α to β we first have to give one between a and c, and one from b to d. Only then can we compose these with the cells we want to compare, and consider the 2-cells, such as C, that fill the resulting square, thus giving us the homotopy we are looking for. Here and in what follows, juxtaposition of cells stands for the result of composing them using the maps w defined in Definition 2.13.

Given a k-cylinder C in X, with k > 0, we get by induction source and target cylinders, denoted respectively by s(C) and t(C). In fact, if k = 1 we set s(C) = C_s and t(C) = C_t, and the inductive step is straightforward. It is possible to construct a coglobular object Cyl(D_•) : G_n → Mod(C_W) that corepresents these cylinders and their coglobular structure, in the sense that the set of k-cylinders in a C_W-model X is given by Mod(C_W)(Cyl(D_k), X), and the source and target cylinders are induced by precomposition with the structural maps of this n-coglobular object. Moreover, this comes endowed with a map (ι_0, ι_1) : D_• ∐ D_• → Cyl(D_•), where D_• is the canonical n-coglobular n-groupoid of globes, which is a direct cofibration in the sense of 2.25 (i.e. it belongs to the class of direct cofibrations defined there).

Finally, the source of the latching map L̂_k(ι_0, ι_1) will be denoted by ∂Cyl(D_k); it can be constructed as a pushout, and we therefore get a cofibration of C_W-models ∂Cyl(D_k) → Cyl(D_k). A map ∂Cyl(D_k) → X can be represented by explicit data in X or, in a way that better justifies its name, as a cylinder without its interior, where the front face is the square (i.e. 1-cylinder) given by t(C), and the back one is s(C).
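In the lowest non-trivial dimension, and in the notation of Example 5.7, a map ∂Cyl(D_1) → X therefore amounts to the following square without its filling 2-cell (a sketch):

\[
\begin{tikzcd}
a \arrow[r, "\alpha"] \arrow[d, "f"'] & b \arrow[d, "g"] \\
c \arrow[r, "\beta"'] & d
\end{tikzcd}
\]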
Definition 5.10. We call ∂Cyl(D_k) the boundary of the k-cylinder. Given a k-cylinder C : Cyl(D_k) → X in X, we call the boundary of C, denoted by ∂C, the composite of C with the cofibration ∂Cyl(D_k) → Cyl(D_k). Thanks to (8), we know that specifying the boundary of a k-cylinder in an n-groupoid X is equivalent to providing suitably compatible lower-dimensional data in X.

We can define a map of n-coglobular n-groupoids C_• : Cyl(D_•) → D_• that fits into the factorization

D_• ∐ D_• --(ι_0, ι_1)--> Cyl(D_•) --C_•--> D_•

of the codiagonal map, by solving a lifting problem in Mod(C_W)^G. This is done by using Lemma 5.3 and the fact that such an extension exists in ω-Cat to construct a solution in Mod(C), and then applying the left adjoint to the forgetful functor U : Mod(C_W) → Mod(C) to get the extension in Mod(C_W).

A degenerate k-cylinder F : Cyl^p_q(D_k) → X is a k-cylinder in X whose p-fold iterated source and q-fold iterated target are collapsed. For instance, a 1-cylinder with degenerate source, i.e. a map Cyl_0(D_1) → X, corresponds to 1-cells g, α, β and a 2-cell C : gα → β in X. This will also be denoted by F : α ⇝_0 β. See Definition 9.1 in [EL] for a detailed description of the general case.

In [EL] we defined the vertical composition of a compatible stack of an m-tuple of (possibly degenerate) k-cylinders in an ∞-groupoid. This operation takes as input a sequence of k-cylinders C_i : Cyl^{p_i}_{q_i}(D_k) → X, for 1 ≤ i ≤ m, in an ∞-groupoid X, and produces a k-cylinder denoted by C_m ⊗ ⋯ ⊗ C_1, with p = min{p_i}_{1≤i≤m} and q = min{q_j}_{1≤j≤m}. Moreover, it has the property that ε(C_m ⊗ ⋯ ⊗ C_1) = ε(C_m) ⊗ ⋯ ⊗ ε(C_1) for ε = s, t.

This definition can easily be adapted to the case of C-models, and to get one for C_W-models we simply apply the free functor to the map of C-models we are about to construct. The construction makes use of coherence cylinders, which are defined using the contractibility of globular sums in ∞-Gpd, and so we need to examine this construction more carefully in the context of a categorical coherator. This is really the only issue that needs to be addressed to get such an operation. In detail, we need to solve extension problems of the kind defined in Definition 8.3 of [EL]. Let us explain in detail how ψ_{m,k} is defined and how to get the extension Ψ_{m,k}; the other cases can be treated similarly and are thus left to the interested reader.

In what follows, we assume we have chosen composition operations γ of the appropriate arities for k, m, n > 0, compatible with the source and target maps. There is no risk of confusion in referring to all such maps as γ, because the codomain uniquely determines each of them. Let us restrict, for the sake of conciseness, to the case m, k = 0. The two components of the map in question are defined, in dimension n, by composites of the structural cells involved: this means that, given an ∞-groupoid X and a map out of the relevant globular sum, we get a pair of (n+1)-cells in X whose boundaries are composites of the given cells, where juxtaposition is the result of composition using the appropriate γ.

The extension problem at hand satisfies the hypotheses of Lemma 5.3, and thus such an extension exists if and only if the corresponding extension problem in ω-Cat admits a solution. The latching map L̂_n(Σ(ι)) ≅ Σ(L̂_n(ι)) : Σ∂Cyl(D_n) → ΣCyl(D_n) is a pushout of the sphere inclusion Σ(S^n → D_{n+1}) ≅ S^{n+1} → D_{n+2}, and therefore the extension problem admits a solution thanks to Lemma 5.4, since the globular sums involved satisfy the dimension bound required there. After having solved the other extension problems in a similar fashion, we have all the tools we need to define the operation of vertical composition of cylinders inside Mod(C).
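To illustrate the operation in the lowest dimension: given 1-cylinders C : α ⇝ β and C′ : β ⇝ γ, with connecting 1-cells (f, g) and (f′, g′) and filling 2-cells Φ : gα → βf and Φ′ : g′β → γf′, the composite C′ ⊗ C has connecting cells (f′f, g′g), and its filling 2-cell can be sketched as the composite

\[
(g'g)\alpha \;\simeq\; g'(g\alpha) \;\overset{g'\Phi}{\Longrightarrow}\; g'(\beta f) \;\simeq\; (g'\beta)f \;\overset{\Phi' f}{\Longrightarrow}\; (\gamma f')f \;\simeq\; \gamma(f'f),
\]

where the unlabelled isomorphisms are instances of the coherence cylinders mentioned above.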
Modifications

In what follows, we define modifications in Mod(C); these induce modifications in Mod(C_W) upon applying the free functor F : Mod(C) → Mod(C_W) (i.e. the left adjoint to the obvious forgetful functor) to the coglobular object representing them. Given k-cells A, B and 2-cells b, c in a given C-model X with t_2(b) = s_k(A) and t_k(B) = s_2(c), we make use of certain coherence (k−1)-cylinders, denoted by Υ and Γ, in the appropriate hom-models of X. These were obtained in [EL] by using contractibility of globular sums and the fact that (ι_0, ι_1) : D_• ∐ D_• → Cyl_n(D_•) is a direct cofibration, so this needs some extra justification in the present context. We only construct Γ, the other case being entirely analogous. We have to find an extension in a diagram of the kind considered above (see Definition 2.13). At this point it is enough to observe that dim(D_2 ∐_{D_0} ΣD_n) = n + 1 and that the latching map of the relevant vertical arrow at stage n is a pushout of the boundary inclusion S^{n+1} → D_{n+2}, to conclude that an extension exists thanks to Lemma 5.4.

Definition 5.11. Given a C-model X, a modification Θ : C ⇒ D in X between k-cylinders amounts to a pair of 2-cells Θ_s : s_k(C) → s_k(D) and Θ_t : t_k(D) → t_k(C), together with a modification of (k−1)-cylinders in X(x, y) between the composite obtained by correcting C with Θ_s and Θ_t, and D.

If k = 1, then we can depict C and D as squares with filling 2-cells Γ : gα → βf and ∆ : g′α → βf′, respectively. Therefore, a modification Θ : C ⇒ D corresponds to the data of a pair of 2-cells S = Θ_s : f → f′ and T = Θ_t : g′ → g in X, together with a modification Θ̄ : Υ(ι_0 C, Θ_t) ⊗ C̄ ⊗ Γ(Θ_s, ι_1 C) ⇒ D̄, which is easily seen to correspond to a 3-cell Θ̄ : (βΘ_s)Γ(Θ_t α) → ∆ in X, where we denote by juxtaposition the result of the appropriate operations w involved in the definition. Notice that if f = f′ and g = g′, then a modification Θ : C ⇒ D such that Θ_s and Θ_t are identities can be equivalently thought of as a 3-cell between the 2-cells Γ and ∆.

Similarly to the case of cylinders, it turns out that modifications are representable by a coglobular object, which we denote by M_• : G_n → Mod(C). This can be endowed with a direct cofibration of the form Ξ : Cyl_n(D_•) * Cyl_n(D_•) → M_•, where Cyl_n(D_•) * Cyl_n(D_•) denotes the colimit glueing the two copies along common boundary data. Note that, by extending M_• to Θ_0, we can make sense of modifications of the form Θ : C ⇒ D for C, D : Cyl(A) → X in a C-model X. As an immediate consequence of (13), we get the analogous extension result for modifications.

The elementary interpretation ̺̂

We now want to refine a result proven in [EL], namely the construction that takes a homogeneous map ̺ : D_k → A in a homogeneous coherator for ∞-categories C (i.e. one equipped with a globular map F : C → Θ that detects homogeneous maps) and gives back its "elementary interpretation" ̺̂ : Cyl(D_k) → Cyl(A) in ∞-Gpd. Recall that this map satisfies two important properties, recorded in [EL], expressing its compatibility with the boundary maps. We will show that it is actually possible to construct ̺̂ : Cyl(D_k) → Cyl(A) in Mod(C). Moreover, there is a (non-canonical) globular map C → G, where G is the coherator for ∞-groupoids we are using to model ∞-Gpd, which then induces a cocontinuous functor J : Mod(C) → Mod(G) by left Kan extension: the previous construction of ̺̂ is then nothing but the image of this refined version of ̺̂ under the functor J, as will be clear from what follows. The version for C_W is obtained in a similar way. Observe that having this definition for homogeneous maps implies its extension to the whole category C, thanks to the homogeneous-globular factorization system on it. The construction was performed inductively, i.e.
assuming that we already have a construction for ̺_ε : D_{k−1} → A′, where we denote by ̺_ε the homogeneous part of the factorization of ̺ • ε into a homogeneous map followed by a globular one, for ε = σ, τ. The map ̺̂ : Cyl(D_k) → Cyl(A) was obtained by vertically composing a stack of (possibly degenerate) cylinders, each of which was the transpose of a map of the form Σ(Cyl^{r_i}_{q_i}(D_{k−1})) → B, where B is a globular sum endowed with a map i_B : B → Cyl(A), as we are going to describe later in this section. This map was obtained by solving an extension problem against the inclusion of the boundary, exploiting the contractibility of the globular sum B, where the horizontal map is defined by induction. Thanks to Lemma 5.2, it is enough to verify the existence of an extension in ω-Cat. The vertical map is a pushout of the boundary inclusion S^k → D_{k+1}, so that such an extension always exists if dim(B) ≤ n, thanks to Lemma 5.4. By construction, the dimension of any B ∈ L(A) is at most n + 1, where n = dim(A) ≤ k (as ̺ is homogeneous). Therefore the only case left out is when n = k and dim(B) = n + 1, i.e. the new vertex has been added at maximal height. In this case we have r_B = q_B = k − 2. From the explicit description of the stack of cylinders given in Section 9.3 of [EL], it is clear that A = ∂B and f = ∂_σ • p, g = ∂_τ • p, for p the unique homogeneous map D_k → A in Θ. Therefore, the existence of the extension is granted by the fact that Θ admits extensions of admissible pairs.

As anticipated earlier, we will need some more details on how the cylinders are obtained, besides the mere existence of this map. To begin with, we associate an ordered set L(A) to the globular sum A, defined by considering all the possible globular sums obtained from A by adjoining a new vertex to the tree associated with A (see Chapter 2 of [Ar1] for the correspondence between globular sums and trees). The order is obtained by letting the new edge traverse the tree counterclockwise, starting from the bottom right corner; the reader can easily work out the case of D_1 and its associated tree. Then, we construct a zig-zag diagram involving these globular sums B ∈ L(A), whose colimit is precisely Cyl(A). This endows each globular sum B in the list with a structural map i_B : B → Cyl(A), given by a colimit inclusion.

We will now describe the stack of cylinders we get in the case k = 2, and we assume (without loss of generality) that ̺ is a homogeneous operation, which forces dim(A) ≤ 2. Also, we describe this stack representably, i.e. we assume given a map C : Cyl(A) → X, with C : U ⇝ V, and we describe the stack of 1-cylinders in X(x, y) that we get out of it. To each of the globular sums in L(A) we associate a (possibly degenerate) 1-cylinder in X(x, y), where x = s_2(X(̺)(U)) and y = t_2(X(̺)(V)). These 1-cylinders will be vertically composable in the order induced by that of L(A), and the composite will produce the 2-cylinder ̺̂ upon transposing along the adjunction Σ ⊣ Ω (here, we make use of the fact that an n-cylinder is defined to be an (n−1)-cylinder in the hom-C-model between two objects, with the appropriate top and bottom (n−1)-cells; see Definition 5.8). The first square, i.e. the one associated with the globular sum B ∈ L(A) where the new vertex *_B has been added at height 1 as the maximal element over the root (i.e. the rightmost one), is described as follows. Here, U_{<p} and U_p respectively denote the restrictions of U to Σα_1 ∐ ⋯ ∐ Σα_{p−1} and to Σα_p, as in Lemma 5.6.
Furthermore, juxtaposition is the result of composing using the maps introduced in Definition 2.13 and, given a map W : A → X, which we think of as an A-shaped pasting diagram in X, we denote X(̺)(W) by ̺(W). Finally, we denote C • Cyl(∂²_τ) by C_t. Both the sides and the interior of the square are obtained by solving extension problems in the globular sum B, using the tools developed in this section, and the same holds for all the other cases to follow.

Dually, the last square in the stack is associated with the globular sum B′ obtained from A by adjoining a new vertex at height 1 as the minimal element over the root (i.e. the leftmost one).

Suppose now that the new vertex in B ∈ L(A) is adjoined at height 1 over the q-th 0-cell of A, with q ≠ 0, p. In the associated square, we use a to denote the 1-cell in X corresponding to the restriction of the composite map C • i_B : B → Cyl(A) → X to the 1-cell in B associated with the newly added vertex.

Suppose now that the vertex has been added to A at height 2, yielding a globular sum B ∈ L(A). We need to consider all the vertices lying over the one to which the new edge has been adjoined, and again we distinguish according to the position of the newly added vertex. Firstly, let us consider the case in which it has been added over a copy of D_1 (i.e. it is the only vertex above the one to which the new edge is attached). This determines two sub-globular sums of A, A_< and A_>, which can be informally described as being obtained by removing the 1-cell over which we have attached the new vertex (A_< being the one on the left). Precomposing U (resp. V) with the inclusion of A_< (resp. A_>), we get an A_<-shaped (resp. A_>-shaped) pasting diagram in X which we call U_< (resp. V_>). In the resulting square, ∂_ε • ̺_ε is the homogeneous-globular factorization of ̺ • ε for ε = σ, τ; F is the 2-cell that fills the 1-cylinder corresponding to the image under the functor Cyl(·) of the inclusion D_1 → A of the copy of D_1 over which we have added the new vertex; and ∂_ε W denotes, given a map W : A → X, the precomposition of W with ∂_ε : ∂A → A. Finally, ̺*_ε is obtained as an extension over ∂A_+, where ∂A_+ is obtained from ∂A by adjoining a new vertex in the same position as the one that was added to A in order to get B.

If the new vertex *_B is not the only one over the vertex z to which the new edge has been attached, then we have to distinguish according to the order of the set of vertices over z. If *_B is the maximal element, and it has been added to Σα_q, then the 1-cylinder we get has degenerate source; here, we denote by α the 2-cell of the 1-cylinder C_{|Σα_q} • Cyl(∂_τ), and by a its target 0-cylinder (viewed as a 1-cell). Dually, if it is the minimal element, the 1-cylinder has degenerate target; in this case, we denote by α the 2-cell of the 1-cylinder C_{|Σα_q} • Cyl(∂_σ), and by a its source 0-cylinder (viewed as a 1-cell).

Finally, if the new vertex has been added as the r-th element over the vertex to which the new edge is attached, then we get sub-globular sums of Σα_q ≅ ΣD_1^{⊗m} of the form ΣD_1^{⊗r} and ΣD_1^{⊗(m−r)}. Corresponding to this subdivision, we have a ΣD_1^{⊗r}-shaped diagram in X induced by U, which we denote by U_q^{≤r}, and, similarly, a ΣD_1^{⊗(m−r)}-shaped diagram induced by V, which we denote by V_q^{≥r}.
The corresponding 1-cylinder is essentially a 2-cell in X(x, y), since its source and target are degenerate; here, we denote by α the 2-cell of the 1-cylinder given by the target of the r-th 2-cylinder in the image of C_{|Σα_q}.

The last case is that of a globular sum B ∈ L(A) in which the new vertex *_B has been added to A at height 3. Say the 2-cell to which the new edge has been attached is the r-th one in Σα_q ≅ ΣD_1^{⊗m}; then the associated 1-cylinder has degenerate source and target. Here, F denotes the 3-cell of the 2-cylinder in X, whose 0-dimensional source we denoted by a, picked out by precomposing C with Cyl(D_2 → A), where the copy of D_2 in question is the one corresponding to the vertex of height 2 in A over which *_B has been added.

So far, we have described a stack of |L(A)| vertically composable (possibly degenerate) 1-cylinders in X(x, y). Its (vertical) composite is a 1-cylinder C_t ̺(U) ⇝ ̺(V) C_s in X(x, y), which transposes under the adjunction Σ ⊣ Ω to give the desired 2-cylinder C • ̺̂ : ̺(U) ⇝ ̺(V).

The Division Lemma

The proof of the crucial fact that ev_i = π_i • ev is a trivial fibration for i = 0, 1 (where π_i denotes the product projection onto the i-th factor) relies on the division lemma, i.e. the fact that, given a pair of parallel n-cells A, B and a 1-cell f in an ∞-groupoid X, any (n+1)-cell H : fA → fB (where juxtaposition denotes the choice of a whiskering operation) is homotopic to one of the form fH̄ for some H̄ : A → B (this is essentially the content of Lemma 4.12 in [Ar2]). The proof of this lemma requires contractibility, and we were not able to generalize it to C_W (as defined in (4)) in the case where C is a coherator for ∞-categories. The three-dimensional case can still be proven by hand, as follows. Note that, in the presence of both a left and a right inverse for every cell, either of them can be promoted to a two-sided inverse; therefore we will use the notation f^{−1} with no reference to left or right.

If n = 1 and we have a 2-cell H : fA → fB in X, then we can define H̄ as the composite

A ≃ (f^{−1}f)A ≃ f^{−1}(fA) --f^{−1}H--> f^{−1}(fB) ≃ (f^{−1}f)B ≃ B,

where "≃" denotes coherence constraints that exist in C_W. It is a routine exercise to check that fH̄ is homotopic to H.

Turning to n = 2, we assume we have a 3-cell H : fA → fB. We define H̄ : A → B as a composite of 3-cells of the same shape as above: the first and last 3-cells are composites of coherence constraints, whereas the one in the middle is a whiskering of H with the other cells involved. Again, it is a tedious but straightforward exercise to check that fH̄ is homotopic (i.e. equal, for dimensionality reasons) to H. Finally, if n = 3 we have to prove that fA = fB implies A = B, which is entirely analogous to the arguments given so far.

A path object in Mod((C_3)_W)

Given a coherator for 3-categories C_3 and a (C_3)_W-model X, we are going to endow the globular set PX, whose n-cells are the n-cylinders in X, with the structure of a C_3-model, which we can then extend to a (C_3)_W-model thanks to the results of Section 6 in [EL], thus providing a proof of Proposition 4.9. From now on we will drop the subscript and simply denote this coherator by C. It follows from 2.14 that, in the cellularity condition for a coherator for n-categories C, we can assume that all the "basic" operations of dimension k are added at the k-th step of the tower that defines C.
More precisely, we can assume that the operations of output dimension k are all added when passing from the (k−1)-st stage to the k-th stage. In particular, if n = 3, we can assume without loss of generality that C ≅ C_4 fits into a tower of globular theories C_0 → C_1 → C_2 → C_3 → C_4, with structural maps i_k : C_k → C_{k+1}. We can adapt the argument used in Proposition 11.4 of [EL] to find a lifting of the functor P along this tower. It turns out that extending along i_3 is automatic, thanks to the following uniqueness result: in the situation at hand, an n-cylinder is uniquely determined by its boundary data.

Proof. If n = 0 the result is clear. Assume n > 0; then we get (n−1)-cylinders F̄, Ḡ in X(s(f), t(g)), where f = F_s = G_s and g = F_t = G_t. By definition, we have F̄, Ḡ : gA ⇝ Bf and ε(F̄) = ε(Ḡ) for ε = s, t. Therefore, by the inductive assumption we get that F̄ = Ḡ, which concludes the proof.

We are now left with the problem of finding a lift extending the construction to C_3, and this, in turn, amounts to defining a map Cyl(̺) : Cyl(D_3) → Cyl(A) for every ̺ : D_3 → A added as a filler of a pair (h_1, h_2) ∈ X_2, in such a way that Cyl(̺) • Cyl(σ) = Cyl(h_1) and Cyl(̺) • Cyl(τ) = Cyl(h_2). Note that these last two equations make sense, since h_1, h_2 ∈ C_2. The strategy for constructing such maps will be the same as the one used to get the extension to C_2, namely to prove that we can endow every interpretation of a 2-dimensional operation Cyl(ϕ) : Cyl(D_2) → Cyl(A) with a modification

Θ_ϕ : ϕ̂ ⇒ Cyl(ϕ)    (25)

in a way that is compatible with sources and targets (as will be explained in more detail later on), so that we can then use the following lemma to produce the map we are after. The lemma in question is the evident analogue, for C_W-models, of Lemma 10.5 in [EL]; we will refer to it as Lemma 6.2.

Proof. The proof is a word-by-word copy of that of Lemma 10.5 in [EL], promoting left or right inverses to two-sided ones when necessary.

In fact, we can apply this lemma to the situation at hand, thus getting the desired extension.

We are going to prove several lemmas to obtain the modification in (25). To simplify some arguments, we will sometimes assume (without loss of generality) that our computations happen in dimension n = ∞, thus replacing identities with appropriate cells; the result we are looking for can then be obtained by simply quotienting out these higher cells. Also, all the results that hold true in Mod(C) will be proven in that context, using the techniques illustrated in Section 5. The case of C_W-models then follows, as usual, by applying the free functor F : Mod(C) → Mod(C_W), which is easily seen to preserve cylinders, vertical composites of cylinders, and modifications.

In the previous section we recalled the salient features of the construction of the map ̺̂ : Cyl(D_k) → Cyl(A) associated with a homogeneous map ̺ : D_k → A in C. We refer the reader to Definition 9.15 of [EL] for a fully detailed version. We start with a lemma that allows us to "plug" modifications of globular sums of cylinders into the elementary interpretation of a 2-dimensional operation.

Lemma 6.3. Let ̺ : D_2 → A be a homogeneous operation, and let Θ : C ⇒ D be a modification between maps C, D : Cyl(A) → X whose components satisfy the compatibility assumptions used below. Then there is an induced modification Θ • ̺̂ : C • ̺̂ ⇒ D • ̺̂.

Proof. The proof is structured in the following manner: since both cylinders C • ̺̂ and D • ̺̂ are built as vertical composites of stacks of cylinders, we will construct compatible modifications from each of the cylinders composing the stack associated with C • ̺̂ towards the corresponding ones in the stack associated with D • ̺̂. We will then conclude by using the bicategorical structure described in part B of the appendix to compose these modifications, thus getting the desired map Θ • ̺̂. Let (i_1, . . . , i_m) be the table of dimensions of A. Since ̺ is homogeneous, we have dim(A) ≤ 2, and therefore i_k ∈ {1, 2} for every 1 ≤ k ≤ m.
By precomposing with the appropriate colimit inclusions we thus get cylinders C_k, D_k in X for 1 ≤ k ≤ m, where C_k and D_k are i_k-cylinders. The cylinders associated with cases (15) to (17) in both stacks coincide thanks to the assumptions, so we can use identity modifications in these cases. We now consider case (18), i.e. globular sums B ∈ L(A) in which we have added a new vertex *_B to A at height ht(*_B) = 2, in such a way that this new vertex is the unique element of the fiber over the vertex of height 1 right below it (this is the case treated in the explicit description of cylinders outlined in the previous section). Fix a globular sum B in this family, such that the vertex *_B has been added to A over D_{i_r} = D_1, and consider the vertical stacks of 1-cylinders whose composites are the transposes of C • ̺̂ and D • ̺̂, respectively. The 2-cell in B corresponding to the vertex *_B picks out the 2-cell associated with the 1-cylinder C_r (resp. D_r) via the appropriate composites.

We can use the components of Θ to construct the boundary of a 1-modification between the 1-cylinders Γ_B and ∆_B associated with the globular sum B in the two stacks (using the notation established in the previous section). Here, we commit a minor abuse of language in denoting by U^σ what we normally denote by ∂_σ U, and in denoting by U^σ_{<r} Θ_r V^σ_{>r} the result of composing that pasting diagram with a chosen operation whose boundary is given by (∂_σ, ∂_τ) • ̺*_σ, and similarly for the analogues with τ. We can now use the fact that a filler certainly exists in ω-Cat to extend this boundary to a modification of 1-cylinders, and this concludes the construction in the first case.

Let us now address the case of the globular sums B ∈ L(A) of cases (19) to (22), which appear consecutively in L(A). We will build a modification involving the sub-stack associated with this subset of L(A) all at once, rather than cylinder by cylinder. Let Σα_1 ∐_{D_0} ⋯ ∐_{D_0} Σα_p be the decomposition of A. We can consider the maximal sub-globular sum of A of the form ΣD_1^{⊗k} ≅ Σα_q, for some q, that contains the copy of D_2 to which the new edges have been adjoined. The globular inclusion ΣD_1^{⊗k} → A picks out k composable 2-cylinders Γ_1, . . . , Γ_k via C, and ∆_1, . . . , ∆_k via D. Notice that there exists an integer r such that Γ_i = C_{r+i}, and the same holds if we replace C and Γ with D and ∆, with the same r. Consider the vertical stacks of 1-cylinders whose composites are the transposes of C • ̺̂ and D • ̺̂, respectively. The sub-stack associated with the globular sums in the ordered set L(A) comprised between the one in which the new edge has been added at the far right of the corolla represented by ΣD_1^{⊗k} and the one in which it has been added at the far left is mapped under C to a pasting diagram in X(x, y), in which we set d = t_2(Γ_1) and e = s_2(Γ_1). Obviously, we get a similar one by replacing every occurrence of C with D and of Γ with ∆. Here, we use F̄ to denote the underlying 3-cell of a 2-cylinder F, and t(Γ_0) is defined to be s(Γ_1). The 2-cells in X(x, y) labelled with α's represent 1-cylinders whose source and target are degenerate. In particular, each α_{2m+1} is a whiskering of the 3-cell in Γ_{k−m}, and each α_{2m} is an associativity constraint, for every 0 ≤ m ≤ k − 1, as explained in detail at the end of the previous section. We can use the components of Θ to find a modification between the vertical composites of these (degenerate) cylinders using the following lemma, which concludes the proof.
The following is a result that is needed in the proof of the previous lemma, but we only concern ourselves with a small simplification of it, leaving the (straightforward) proof of the generalization to the interested reader. The simplification consists in restricting to the case k = 2, following the notation established above. Nevertheless, the proof of the general case is entirely similar and has no more genuine content than the one we present.

Lemma 6.4. Assume given 2-cylinders C, D : A_0 ⇝ B_0 and G, H : A_1 ⇝ B_1 whose underlying cells are composable; this implies, in particular, that t(A_0) = s(A_1) and t(B_0) = s(B_1). Also, assume given modifications Θ : C ⇒ D and Θ′ : G ⇒ H with t(Θ) = s(Θ′), whose sources and targets, denoted respectively by S : s(C) ⇒ s(D), S′ : s(G) ⇒ s(H) and T : t(D) ⇒ t(C), T′ : t(G) ⇒ t(H), are essentially represented by 3-cells (i.e. they have trivial 0-dimensional boundary). Then we get an induced modification εΘ′Θϕ between the vertical composite described below (where we use F̄ to denote the underlying 3-cell of a 2-cylinder F, α_2 is simply an associativity constraint, and the 2-cells labelled with "≃" are also given by coherence constraints) and the one obtained by replacing each occurrence of C with D and of G with H, with the corresponding β's in place of the α's and g's in place of the h's. The notation is as follows: juxtaposition is either the result of composing using ̺ or using the composition operations that appear in the definition of cylinders, as should be clear from the context.

Proof. To begin with, we observe that the hypotheses imply C_s = D_s and C_t = D_t, and we denote these 1-cells by a and b, respectively. We consider a pasting diagram in Ω_2(X, x, z), with x, z the appropriate 1-cells of X, built out of cells of the form εt(H)ϕ, εt(G)ϕ, ε((bA_1)(bA_0))ϕ, ε((B_1 a)(B_0 a))ϕ and εs(C)ϕ, in which all the cells labelled by "≃" are obtained by verifying their existence in ω-Cat, since their boundaries factor through appropriate globular sums, and Θ̄, Θ̄′ are the underlying 4-cells of the modifications. The composite of this pasting diagram is the modification εΘ′Θϕ of the statement.

Given an operation ̺ : D_2 → A in C, we can consider the map ̺̄ : Cyl(D_2) → Cyl(A) obtained by applying Lemma 6.2 to the appropriate diagram. By construction, there is a modification χ_̺ : ̺̂ ⇒ ̺̄. Also, note that Cyl(̺) and ̺̄, although potentially different, are parallel 2-cylinders.

Lemma 6.5. In the situation of Lemma 6.3, and in the context of C_W-models, we can replace ̺̂ with ̺̄.

In what follows, we consider a homogeneous map ϕ : A → B, and we use the notation ϕ̄ to denote the map Cyl(A) → Cyl(B) obtained by glueing the various maps ϕ̄_j : Cyl(D_{i_j}) → Cyl(B_j) for 1 ≤ j ≤ m, where ϕ_j is the homogeneous part of the composite D_{i_j} → A → B for every globe D_{i_j} in the globular decomposition of A, and colim_k B_k ≅ B.

Lemma 6.6. Assume given homogeneous operations ̺ : D_2 → A and ϕ : A → B. There is a modification Λ in C_W-models between ϕ̄ • ̺̄ and the map associated with the composite operation ϕ • ̺, which is essentially given by a 4-cell (this is possible, as both cylinders are parallel).

The idea of the proof is to consider a diagram of composable modifications, all of which have already been constructed except for one, denoted by η; we then have to construct η, and prove that the resulting composite can be adjusted so as to consist of a 4-cell.
This is accomplished by making use of the following lemma, once we have proven that its assumptions are satisfied in this case.

Lemma 6.7. Assume given a pair of parallel n-cylinders C, D : A ⇝ B in a C_W-model X, together with a modification Θ : C ⇒ D between them. Assume further that s(Θ) and t(Θ) are essentially given by (n+1)-cells between the (n−1)-cylinders involved, and that there are (n+2)-cells η_s : \overline{s(Θ)} → 1 and η_t : \overline{t(Θ)} → 1, where Ē denotes the n-cell filling an (n−1)-cylinder E. Then there exists a modification Θ′ : C ⇒ D which essentially consists of an (n+2)-cell between n-cylinders.

Proof. We prove the statement by induction on n > 0. Let n = 1, and set Θ_ε = ε(Θ) for ε = s, t. Consider the pasting diagram in X(s(C_s), t(C_t)) built from these data, in which the unlabelled 2-cell comes from unitality of composition in C_W: the composite of this pasting diagram is the modification Θ′ we are looking for. Now let n > 1: we have a modification of (n−1)-cylinders Θ̄ : C̄ ⇒ D̄ in X(s(C_s), t(C_t)). For ε = s, t we have ε(C̄) = ε(D̄), and ε(Θ̄) is an n-cell between (n−2)-cylinders. Also, we can view η_s, η_t as (n+1)-cells in X(s(C_s), t(C_t)), so that we can apply the inductive hypothesis to get a modification Θ̄′ : C̄ ⇒ D̄ which consists of an (n+1)-cell between (n−1)-cylinders in X(s(C_s), t(C_t)). Its transpose Θ′ : C ⇒ D is the modification we are looking for, and this concludes the proof.

We now recall Lemma 11.10 of [EL], since the modification we want to construct has to be compatible with the one obtained in that lemma, in a sense that will be made precise in what follows.

Lemma 6.8. Given compatible operations ̺ and ϕ_j for 1 ≤ j ≤ k, defined similarly to the previous lemma, there is an induced modification of C-models between the corresponding composite interpretations.

The proof of the following lemma is quite technical, but crucial to get the missing piece for this section.

Lemma 6.9. Assume given homogeneous operations ̺ : D_2 → A and ϕ : A → B. There is a modification of C-models ∆ from ϕ̄ • ̺̂ to the elementary interpretation of the composite ϕ • ̺, with source and target given by the modifications Cyl(j) • δ_{(ϕ•i)_ε, ̺_ε} for ε = σ, τ, where i • ̺_ε is the homogeneous-globular factorization of ̺ • ε and, similarly, j • (ϕ • i)_ε is the homogeneous-globular factorization of ϕ • i. Here, we use the arrow ↠ to denote homogeneous maps and ↪ for globular ones.

Proof. The proof proceeds very similarly to that of Lemma 6.3: we construct the modification ∆ as the composite of modifications from substacks of the stack defining ϕ̄ • ̺̂ towards substacks of the one defining the elementary interpretation of ϕ • ̺, parametrized by the globular sums in L(A) in an exhaustive fashion. We prove this representably, and we let (i_1, . . . , i_m) and (j_1, . . . , j_q) be the tables of dimensions of A and B, respectively. This means that we are given a map C : Cyl(B) → X, with C : U ⇝ V and X a 3-groupoid. From this, we get cylinders C_1, . . . , C_q in X, where C_k is a j_k-cylinder. Notice that, by the assumption on the homogeneity of ̺ and ϕ, we have i_k, j_r ≤ 2 for every 1 ≤ k ≤ m and 1 ≤ r ≤ q. The two cylinders involved are both obtained as vertical composites of (different) stacks of 1-cylinders in X(x, y), for x = s_{i_1}(C_1)_0 and y = s_{i_m}(C_m)_1. Therefore, we need to provide a filler for this pair of composite 1-cells in the bicategory hom(D_1, X(x, y)), and we do so by decomposing it into sub-composites, explaining how to find fillers for each such piece.
As in Lemma 6.3, we first consider the cases (15) to (17), where modifications can be constructed by using the fact that the corresponding cylinders factor through appropriate globular sums, and in ω-Cat the boundary data of these modifications admit fillers in a way that is compatible with the modifications we already have for the boundary.

We now address case (18), i.e. globular sums D ∈ L(A) obtained by adding a new vertex *_D to A at height ht(*_D) = 2, in such a way that this new vertex is the unique element of the fiber over D_{i_k} = D_1. Given such a globular sum D, the 2-cell in D represented by *_D picks out the 2-cell associated with the 1-cylinder F_k = ϕ_k(C_{n_k}, . . . , C_{n_{k+1}−1}), where ϕ = (ϕ_i)_{1≤i≤m} according to the globular decomposition of A, and each ϕ_k has as codomain the sub-globular sum G_k ⊂ B spanned by D_{j_{n_k}}, . . . , D_{j_{n_{k+1}−1}}. Corresponding to such D, we have a cylinder in the stack associated with ϕ̄ • ̺̂, in whose description we use ϕ_{>k} U_{|G_{>k}} to denote the result of composing, using (ϕ_i)_{i>k}, the restriction of U to the union of the sub-globular sums G_i ⊂ B with i > k, and similarly for the analogous notation involving indices smaller than k. We want to produce a modification having this cylinder as source, and having as target a sub-composite of the vertical stack of 1-cylinders whose composite is the transpose, under the adjunction Σ ⊣ Ω, of the elementary interpretation of ϕ • ̺; this sub-stack is the one parametrized by the appropriate family of globular sums. Notice that the respective boundaries of these cylinders are of the same form as the ones appearing in the proof of Lemma 11.10 in [EL], and therefore we can use the modifications produced there to compare the boundaries. These constitute the boundary of the modification we want to construct, whose existence follows, finally, from the fact that this boundary factors through a globular sum, and a filler for it certainly exists in ω-Cat.

We now turn to the case of globular sums C ∈ L(A) corresponding to cases (19) to (22). We can thus consider the maximal sub-globular sum of A of the form ΣD_1^{⊗k} that contains the copy of D_2 to which the new edges have been adjoined. The globular inclusion ΣD_1^{⊗k} → A picks out k composable 2-cylinders Γ_1, . . . , Γ_k in X, where Γ_i = ϕ_{r+i}(C_{n_{r+i}}, . . . , C_{n_{r+i+1}−1}) for a unique integer r. We have to construct a modification whose source is given by a stack of (collapsed) cylinders, in whose description we set d = t_2(Γ_1) and e = s_2(Γ_1), we use F̄ to denote the underlying 3-cell of a 2-cylinder F, and t(Γ_0) is defined to be s(Γ_1). The 2-cells in X(x, y) labelled with α's represent 1-cylinders whose source and target are degenerate. In particular, α_{2m+1} is a whiskering of the 3-cell in Γ_{k−m}, and α_{2m} is an associativity constraint. The target of the modification we want to construct is the composite of a sub-stack of the one associated with the elementary interpretation of ϕ • ̺, parametrized by the appropriate family of globular sums. To finish this construction, we introduce an intermediate step in this modification by taking into consideration Lemma A.2 in the Appendix, and we focus on the square (28) that originates from it. By applying this to each of the (possibly degenerate) 1-cylinders in the sub-stack we are considering, we obtain a new stack of the same shape in which all the new 1-cylinders are whiskerings of the previous ones, in the appropriate sense.
The respective boundaries of the stacks we are comparing are of the same form as the ones appearing in the proof of Lemma 11.10 in [EL], and therefore we can use the modifications produced there to compare the boundaries. In the same way, the boundary of this new "whiskered" stack and that of the source of the modification we want to build can also be compared using the modification of Lemma 11.10 in [EL] (which was indeed the composite of two such). After having adjusted the boundaries, filling in the rest of the modification follows from a straightforward application of the classical coherence result for pseudofunctors and bicategories.

By construction, it is clear that the boundary of the composite modification in (26) satisfies the assumptions of Lemma 6.7. Hence, when we quotient out the 4-cells, we find that ϕ̄ • ̺̄ coincides with the map associated with the composite operation ϕ • ̺. Therefore, we see that an extension to C_2 is equivalently obtained by setting Cyl(̺) = ̺̄ for all homogeneous operations ̺ : D_2 → A. Finally, we recall that, by definition, C_3 ≅ C_2[X_2], and so we can obtain the desired extension depicted in (24) by defining Cyl(Φ), for every Φ : D_3 → A added as a filler of (ϕ_0, ϕ_1) ∈ X_2, using the preceding results. This concludes the construction of the path 3-category of a 3-groupoid, as we record here below.

Theorem 6.10. Let C be a 3-coherator for categories, and denote by 3-Cat the category of weak 3-categories modeled by it. Then the construction above lifts to a functor landing in 3-Cat.

Thanks to Theorem 6.5 of [EL], it is possible to extend the codomain a bit further, thus landing in a category whose objects possess a richer algebraic structure. We recall here the definition that is needed to formulate this extension result. In what follows, an operation ϕ : D_{n+1} → A in an n-globular theory means an equality of the form ϕ • σ = ϕ • τ.

Corollary 6.11. Let C be a 3-coherator for categories; then the lift above extends further, to a functor P : Mod(C_W) → Mod(C_W).

Proof. Such an extension amounts to endowing the C-model PX obtained in the previous theorem with a system of inverses with respect to the chosen systems of compositions and identities. This was done in Theorem 6.5 of [EL] in the case of two-sided inverses; therefore, we can interpret both the left-inverse operation and the right-inverse one by means of that, or simply adapt the proof given there to produce a left and a right inverse. This proves Proposition 4.9 and Theorem 4.10, and concludes this section.

A Pseudofunctoriality of whiskerings

In this section of the appendix we want to record some results and constructions that involve the mapping 2-groupoid of morphisms X(x, y), where X is a 3-groupoid and x, y are 0-cells in X. Bearing in mind that 2-groupoids essentially correspond to unbiased bicategories with weak inverses, we will treat them as such. Let us consider the following situation: we are given 1-dimensional globular pasting diagrams in X attached on either side of the pair (x, y), playing the role of whiskering data, together with a homogeneous operation ̺ with which to compose them. We then have the following result.

Lemma A.1. The data above extend to a pseudofunctor of bicategories between the corresponding mapping 2-groupoids.

Proof. Choose operations ̺_2 and ̺_3 acting, respectively, on 2-cells and 3-cells, compatibly with ̺. Next, define the underlying map of globular sets by whiskering with the given pasting diagrams via ̺, ̺_2 and ̺_3. The fact that this extends to a pseudofunctor is a simple exercise using contractibility of globular sums, and is therefore left to the interested reader.
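Recall that, beyond the action on cells, a pseudofunctor as in Lemma A.1 carries invertible comparison cells; schematically, writing F for the pseudofunctor just constructed and b for a generic 1-cell in its domain (both placeholders for the purposes of this remark), the structure provided by the operations ̺_2 and ̺_3 amounts to constraints of the shape

\[
F(\beta')\,F(\beta) \;\overset{\simeq}{\Longrightarrow}\; F(\beta'\beta), \qquad 1_{F(b)} \;\overset{\simeq}{\Longrightarrow}\; F(1_b),
\]

subject to the usual coherence conditions, which hold here thanks to the contractibility of the relevant globular sums.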
If we go one dimension up, we can consider the following situation: we are given globular sums A, B with max{dim(A), dim(B)} = 2, together with maps out of them playing the role of the whiskering data above, and a homogeneous operation ̺ of the appropriate shape. We then have the following result, whose proof is analogous to that of the previous one.

Lemma A.2. The previous data determine a pseudo-natural transformation between the pseudofunctors associated with the two boundaries, where ̺_ε denotes the homogeneous part of the composite ̺ • ε for ε = σ, τ.

Finally, we observe that, given bicategories K and L, every suitably commutative square in K admits a filler, obtained as an evident composite of the given cells. It is clear that an analogous statement holds if we replace squares with commutative triangles, or even with just a 2-cell, thus covering the case of all possible degenerate 1-cylinders.

B A bicategory of cylinders and modifications

Given an ∞-groupoid X and an integer n ≥ 0, we want to organize the collection of n-cells, n-cylinders and modifications between n-cylinders into an algebraic structure that allows us to perform calculations with them and encodes the low-dimensional structure of a yet-to-be-defined internal hom between ∞-groupoids. This would be a truncation of an ∞-groupoid resulting from the existence of a Gray tensor product ⊗ : ∞-Gpd × ∞-Gpd → ∞-Gpd. To simplify things, the bicategories we define will be of a special kind, as defined below.

Remark B.1. Everything that follows can be proven to hold true also in Mod(C), for any given coherator for ∞-categories C. Indeed, all the fillers obtained using contractibility can be obtained using the methods we described in Section 5, once we observe that the latching map of Ξ : Cyl(•) * Cyl(•) → M_• is a pushout of the boundary inclusion S^{n+1} → D_{n+2}, as proven in Lemma 10.3 of [EL].

Suppose given a 2-truncated globular set X : G^{op}_{≤2} → Set. We want to get a locally posetal bicategory χ(X) from it by setting X_0 as its set of objects and letting the underlying graph of the hom-category χ(X)(a, b) have the elements of X_1(a, b) as objects. In words, we are saying that there is a 2-cell α : f → g if and only if the set X_2(f, g) is non-empty. What extra structure do we need to define, and what conditions should it satisfy, in order to get a locally posetal bicategory? The properties not encoded by the structure, i.e. the axioms for a bicategory, all concern equalities between 2-cells, and are therefore trivially satisfied. Thus we only need to define the following operations:

1. composition of 1-cells;
2. vertical composition of 2-cells;
3. whiskerings χ(X)_2 ×_{χ(X)_0} χ(X)_1 → χ(X)_2 and χ(X)_1 ×_{χ(X)_0} χ(X)_2 → χ(X)_2;
4. identity 1-cells 1_a ∈ χ(X)(a, a) for every a ∈ Ob(χ(X));
5. identity 2-cells 1_f : f ⇒ f for every f ∈ X_1;
6. unit constraints, which amount to checking that the relevant sets of 2-cells comparing f • 1 and 1 • f with f are non-empty;
7. associators, which amount to checking that the relevant sets of 2-cells comparing the two bracketings of a triple composite are non-empty.

Given an ∞-groupoid X, we define a 2-truncated globular set out of it, for each n ≥ 0, as follows: its 0-cells are the n-cells of X, its 1-cells are the n-cylinders, i.e. the maps Cyl(D_n) → X, and its 2-cells are the modifications, i.e. the maps M_n → X, the globular structure being induced by precomposition with the structural maps ι = (ι_0, ι_1) : D_n ∐ D_n → Cyl(D_n) and Ξ = (Ξ_0, Ξ_1) : Cyl(D_n) ∐ Cyl(D_n) → M_n. We denote the resulting locally posetal bicategory by hom(D_n, X). All the proofs and constructions that follow can be adapted to the more general case of (possibly) degenerate cylinders as 1-cells and (possibly) degenerate modifications as 2-cells; the latter can be defined in a straightforward way by mimicking the changes made in going from ordinary cylinders to degenerate ones.
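In symbols, the hom-categories of χ(X) can be packaged as follows (a compact restatement of the definition above):

\[
\mathrm{Ob}\,\chi(X)(a,b) = X_1(a,b), \qquad
\chi(X)(a,b)(f,g) =
\begin{cases}
\{\ast\} & \text{if } X_2(f,g) \neq \emptyset,\\
\emptyset & \text{otherwise,}
\end{cases}
\]

so that each hom-category is a preorder, whence the terminology "locally posetal".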
We already have some of the operations required to get a locally posetal bicategory out of it: composition of 1-cells is given by vertical composition of cylinders, and the identity 1-cell on an n-cell A ∈ X n is the trivial cilinder C A defined as the composite Cyl(D n ) D n The existence of the rest of the structure in the case n = 0 is straightforward, and follows directly from the contractibility of the coherator C. In what follows, we fix an integer n > 0 and we assume as inductive hypothesis that hom(D k , X) is a locally posetal bicategory for each k < n. Let us now address point (2), i.e. vertical composition of modifications. From here onwards, until the end of this section, whenever a 1-cell is labelled with Θ, Ψ or Φ, that refers to the coherence cylinders considered in Definition 12. using the same operation representing vertical composition of 2-cells in X that has been chosen for hom(D 0 , X) (e.g. the one used in the definition of cylinders). Consider the following 2-dimensional pasting diagram in the bicategory hom(D n−1 , X(x, y)), where x = s 2 (Θ s ) and y = t 2 (Θ t ): Here, the existence of α (resp. β) follows by an application of Lemma 5.12 to the contractible ∞groupoid D n D 0 D 2 D 1 D 2 (resp. D 2 D 1 D 2 D 0 D n ). The composite of this pasting diagram defines the modification claimed in the statement, thus concluding the proof. Let us now address the problem of constructing identity 2-cells in hom(D n , X). Lemma B.4. Given an n-cylinder F : A B in X, there exists a modification of n-cylinders in X of the form 1 F : F ⇒ F . Proof. Define a pair of 2-cells (1 F ) ε = 1 ε n (F ) for ε = s, t, where 1 f denotes the choice of an identity 2-cell on f , when f is a 1-cell of X. Consider the following 2-dimensional pasting diagram in hom (D n−1 , X(x, y)), with x = s n (A) and y = t n (B): Here, α (resp. β) is obtained by applying Lemma 5.12 to the contractible ∞-groupoid D n D 0 D 1 (resp. D 1 D 0 D n ), and γ is a pasting of unit constraints in the bicategory hom(D n−1 , X(x, y)). The composite of this pasting diagram provides the modification we are looking for, and thus we conclude the proof. We prove the next two lemmas by a simultaneous induction on n. Proof. We prove the statement by induction, the case n = 0 being straightforward. To begin with, we have to define a pair of 2-cells Θ g,f,h s : (s n (G)h)(s n (F )h) → (s n (G)s n (F ))h and Θ g,f,h t : (t n (G)t n (F ))h → (t n (G)h)(t n (F )h) in Ω m (X, ϕ 1 h, ϕ 2 h). These are easily obtained from the contractibility of the globular sum D 1 ∐ D 0 D m+1 ∐ Dm D m+1 . Indeed, one has the following string of equalities: s n+m (A) = s m (s n (A)) = s m (t n (A)) = s m (s(t n (F ))) = s m (t(t n (F ))) = s m (s(t n (G))) which implies that there is a map (h, t n (F ), t n (G)) : For sake of simplicity, we denote by f ε the 1-cell ε n (F ) in Ω m (X, ϕ 1 , ϕ 2 ) for ε = σ, τ , and similarly for G. We have the following diagram in the bicategory hom(D n−1 , Ω m+1 (X, s n (Ah), t n (Ch))) The 2-cells filling this diagram either come from the inductive hypothesis oh this lemma and of the following one (when specified), from contractibility of appropriate globular sums (the unlabeled 2-cells) or are of the form (1) and (2). The construction of (2)is similar to that of(1), which is the content of Lemma B.7. The composite of this pasting diagram provides the 2-cell we are looking for, the left-hand side (resp. right-hand side) composite being (isomorphic to) Lemma B.6. 
Given a pair of n-cylinders F, G : A B in Ω m (X, ϕ), a modification Λ : F ⇒ G and a 1-cell c : b = t n+m (B) → b ′ , we get an induced modification cΛ : cF ⇒ cG between the n-cylinders cF, cG : cA cB in Ω m (X, cϕ). Consider the bicategory hom(D n−1 , Ω m+1 (X, s n (cA), t n (cB)), inside which we define the following 2-dimensional pasting diagram: The 2-cells that fill the diagram either come from the inductive hypothesis of this lemma or the previous one, or by contractibility of suitable globular sums when unlabeled. The composite of this pasting diagram is the 2-cell we are looking for, and so this concludes the proof. Lemma B.7. Given an n-cylinder F : A B in Ω m (X, ϕ), a 1-cell g in Ω m (X, ϕ) and a 1cell h : a → s n+m (A) in X, such that s 2 (g) = t n+1 (A) = t n+1 (B), there is a modification χ as displayed below, where the cylinders denoted by λ 1 , λ 2 are obtained by contractibility of the appropriate globular sum. Proof. Firstly, notice that the existence of such modification does not depend on the choice of λ 1 , λ 2 . By definition, given ε = s, t, we have that ε n (λ 2 • (gh)(F h) • λ 1 ) is given by a composite where the first and the third map arise from contractibility of suitable globular sums. On the other hand, ε n ((gF )h) is given by (gε n (F ))h : (gε n (A))h → (gε n (B))h. From these observations it is clear that we can find a pair of two cells χ s , χ t as required in the definition of a modification. The rest of the proof follows analogously to that of the previous results, so it will be omitted. The next lemma address the problem of contructing the whiskering operations. The other half that is required follows from a duality-kind argument. Lemma B.8. Assume given n-cylinders F : A B, G : B C together with a modification Θ : F ⇒ F ′ in X. Then there is an induced modification The cases n = 0, 1 are pretty straightforward. To prove the inductive step, consider the following 2-dimensional pasting diagram in the bicategory hom(D n−1 , X(s n (A), t n (C))): The unlabeled cells come from the contractibility of the appropriate globular sums, while (1) is provided by Lemmas B.5 and B.6. Finally, the 2-cell labeled with (2) is constructed in the following lemma. we get an induced modification Here, Λ 1 and Λ 2 are obtained by contractibility of appropriate globular sums, and the existence of ∆ does not depend on the choice of these. Lemma B.10. Assume given an n-cylinder C : A B in Ω m (X, ϕ) and a choice of an identity 1-cell 1 a : a → a in X, where a = s n+m (A). We then get a modification of the following form: Again, Λ 1 and Λ 2 are obtained by contractibility of the appropriate globular sums and the existence of β does not depend on a choice of such. The next lemma provides the unit constraint for the bicategory structure on hom(D n , X). We only prove one side of the unit constraint, the other one being analogous. Lemma B.11. Given an n-cylinder C : A B there exists a modification υ : C • C A ⇒ C. Proof. The existence of the pair of 2-cells υ s , υ t is straightforward. Consider the following pasting diagram in the bicategory hom(D n−1 , X(s n (A), t n (B))), where a = s n (A), b = t n (B): The unlabeled 2-cells come from contractibility of appropriate globular sums, as well as λ 1 and λ 2 , and the 2-cell labeled with (1) is provided by the previous lemma. We now turn to the final construction, that of the associator for the bicategory hom(D n , X). We start with a preliminary lemma Lemma B.12. 
Given an n-cylinder F : A B in Ω m (X, ϕ), and a pair of composable 1-cells. Here, λ 1 , λ 2 come from the contractibility of D n ∐ D 0 D 1 ∐ D 0 D 1 , and the existence of ζ does not depend on the choice of such cylinders. Finally, here is the construction of the modification representing the associativity constraint in the bicategory hom(D n , X). Here, the unlabelled 2-cells and the 1-cells η i , µ i and ν i for i = 1, 2 all come from contractibility of suitable globular sums. The 2-cells labelled with (0) have been constructed in Lemmas B.5 and B.6. Finally, (1) is constructed in Lemma B.12, and (2) and (3) are built up in an analogous way. We conclude this section of the Appendix with the following result, which requires the existence of inverses and therefore does not hold true in Mod(C). Lemma B.14. Given a pair of n-cylinders F, G : Cyl(D n ) → X in Mod(C W ) (see Definition (4)) and a modification Θ : F → G, there exists a modification Θ ′ : G → F . Proof. We denote by f −1 the result of promoting either a left or a right inverse for f to a two-sided inverse. If n = 0, then Θ ′ is obtained by inverting the 2-cell Θ. For n > 0, we define Θ ′ s = (Θ s ) −1 and Θ ′ t = (Θ t ) −1 . By definition, Θ induces a modification of (n−1)-cylinders of the form Θ : Υ(C 0 , Θ t ) ⊗ C ⊗ Γ(Θ s , C 1 ) ⇒ D (where ⊗ denotes the vertical composition operation). By the inductive hypothesis this can be inverted, giving us Θ ′ : D ⇒ Υ(C 0 , Θ t ) ⊗ C ⊗ Γ(Θ s , C 1 ).
2018-09-21T02:55:16.000Z
2018-09-21T00:00:00.000
{ "year": 2018, "sha1": "b94ed80404b6919cb9f2b9619d936acead3c29e0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b94ed80404b6919cb9f2b9619d936acead3c29e0", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
91997914
pes2o/s2orc
v3-fos-license
First record of Dikerogammarus bispinosus Martynov, 1925 in Kazakhstan: invasive or overlooked native in the Caspian Sea basin? The Ponto-Caspian amphipod Dikerogammarus bispinosus is regarded as a native species throughout the lower stretches of rivers that drain into the Black Sea. Its occurrence in the Caspian Sea basin was uncertain due to conflicting reports. Here, we provide the first conclusive evidence for its presence in this basin. Individuals of both sexes, including ovigerous females, were collected in May 2000 from the Ural River in Kazakhstan, suggesting full establishment. If it was a recent invasion, the most probable dispersal pathway into the Caspian basin would have been via the Volga-Don canal as D. bispinosus was reported in the early 2000s from the lower Don River and the Saratov reservoir on the Volga River. However, given that until relatively recently D. bispinosus was considered a subspecies of D. villosus , we cannot rule out that it has been overlooked in earlier reports from the Caspian Sea basin by being mentioned as D. villosus or even D. haemobaphes . We also provide new data on the distribution of Gammarus lacustris, Obesogammarus platycheir , Pontogammarus abbreviatus , P. robustoides , Turcogammarus aralensis and Wolgagammarus dzjubani in western Kazakhstan and southwestern Russia Here, we report D. bispinosus for the first time from Kazakhstan and thus confirm its presence in the Caspian basin. This is the second locality where this species is mentioned in this basin and the easternmost point of its entire distribution range. One male (14.3 mm) and four ovigerous female (10.7-13.7 mm, 46-114 eggs) D. bispinosus were collected from the Ural River in the vicinity of the settlement Zelenoe, Kazakhstan. The identified specimens exhibited the usual morphological characteristics of this species. Diagnostic features are presented in Figure 2. The compilation of own and literature data indicated that D. bispinosus has a broad geographical distribution (> 3000 km), being encountered from the Rhine estuary, throughout the lower Rhine (North Sea basin), Danube, Dniester, lower Dnieper and lower Don rivers (Black Sea basin) and reaches the middle Volga and lower Ural rivers (Caspian Sea basin) (Figure 3, Table 2). The record from this study represents the easternmost point of the species range (Figure 3). Discussion Dikerogammarus bispinosus was described by Martynov (1925) from the lower Dnieper and appears to be native to the Black Sea basin (Cărăuşu et al. 1955;Jażdżewski and Konopacka 1988). In Western Europe it has spread throughout the southern invasion corridor reaching the Rhine estuary via the Rhine-Main-Danube canal (Bij de Vaate et al. 2002). This wide-ranging dispersal is contrasted by its considerable decline during recent decades in its native region in the lower Danube and also in Lake Table 2 for references. (Grigorovich et al. 2002 and references therein). However, undocumented or unintentional introductions cannot be completely excluded either (Grigorovich et al. 2002). This further suggests that the species could have dispersed naturally via the Volga-Don canal or was passively introduced through shipping activity. On the other hand, it is also possible that D. bispinosus reached the Caspian basin earlier than the 1990s given that the Volga-Don canal was opened in 1952. Moreover, D. bispinosus was considered for a long time as a subspecies of D. 
villosus and only relatively recently was elevated to specific status based on mitochondrial and nuclear genetic markers (Müller and Schramm 2001; Müller et al. 2002). In addition, Pjatakova and Tarasov (1996) considered D. villosus (and consequently D. bispinosus) a synonym of D. haemobaphes, so they may have overlooked D. bispinosus in the Caspian basin (Tarasov 1995). Similarly, it is likely that other authors did not distinguish D. bispinosus from D. villosus owing to its subspecific status until 2002. Nevertheless, it appears that D. villosus is likewise not native to the Caspian basin (Mordukhai-Boltovskoi 1979), where it has been reported at least since 1964 (Mordukhai-Boltovskoi 1964), suggesting a dispersal route similar to that of D. bispinosus. It is important to keep in mind that Dikerogammarus species are some of the most successful Ponto-Caspian invaders, being highly capable of dispersal in anthropogenic landscapes (Rewicz et al. 2014, 2015; Šidagytė et al. 2017). The only Dikerogammarus species that is most likely native to both basins is D. haemobaphes, since it was described from the Black Sea but has been reported from the Caspian Sea since 1880 (Sars 1894), well before the construction of the Volga-Don canal. In contrast, D. caspius, a native Caspian species, has spread into the Black Sea basin in recent times (Sayapin 2003). Thus, according to the available data, we tentatively conclude that even if D. bispinosus has been overlooked, it is not a native species in the Caspian basin and reached it between 1952 and the late 1990s. Of course, at present, we also cannot completely rule out the possibility that it might be a native Caspian species. Phylogeography could prove invaluable in illuminating its origin and dispersal pathways. So far, D. bispinosus, D. villosus and Shablogammarus shablensis appear to be the only Black Sea native amphipod species that have spread into the Caspian basin (Grigorovich et al. 2002). Further upstream dispersal of D. bispinosus along the Volga and Ural rivers may be expected given its rheophilous affinity (Borza et al. 2017).
2019-04-03T13:07:21.936Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "0583be011b7941c29cdafbfd9a137cc1bf442010", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3391/bir.2018.7.3.09", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "26eeb2b8164674851a04b75905708eee7a0f948f", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Geography" ] }
261979323
pes2o/s2orc
v3-fos-license
Synergy between Electric Vehicle Manufacturers and Battery Recyclers through Technology and Innovation: A Game Theory Approach : Power battery recycling (PBR) has triggered profound changes in the industrial chain of electric vehicles (EVs). The PBR innovation network provides information channels and resource conditions for enterprises, but the mechanism of its impact on the synergistic innovation benefits and sustainable development ability of EV and PBR enterprises still needs further exploration. In this paper, we collect patent data for PBR from 2012 to 2020, identify the structural characteristics of innovation networks, and construct a synergy game model for PBR technology, aiming to analyze the synergistic effect of network embedding and knowledge spillover in PBR enterprises on technological innovation. First, we find that the PBR innovation network exhibits the small-world effect, which has a double-edged sword effect on technological cooperation innovation. Second, structural holes benefits of the main body of PBR technological innovation have a significant impact on cooperation innovation behavior. Third, the enhancement of the relevance and deep complementarity of knowledge cooperation is sufficient to make up for the input cost of PBR technological cooperation innovation, with additional benefits created by the increase in the output of structural holes. However, companies tend to be more inclined toward non-cooperative innovation as the knowledge spillover effect of the innovation network increases. Introduction As the world's largest green industry, the continuous development of the new energy automobile industry has led to profound changes in the industrial chain of vehicle assembly, battery development, and waste recycling [1].The recycling market created by recovering metals such as cobalt, nickel, manganese, lithium, iron, and aluminum from waste power lithium batteries was expected to exceed RMB 5.3 billion in 2018 and exceed RMB 10 billion by 2020.However, as of January 2019, the planned production capacity of China power battery dismantling and regeneration projects had reached 1.2 million tons/year, whereas the actual processing capacity was less than 20,000 tons (statistics from the Institute of Process Engineering, Chinese Academy of Sciences).A considerable number of batteries will soon reach their end of life, which will complicate matters further.All materials used to make electric vehicle batteries are extremely hazardous to both the environment and human health and are able to permeate into soil and therefore water supplies when they are directly placed into landfills.For this reason, how to properly deal with so many used power batteries has become an urgent challenge [2].Critically, a new recycling process must be commercialized that is capable of recovering valuable materials at a high efficiency. A new technology has been developed by the researchers at Worcester Polytechnic Institute that is capable of recovering LiNi x Mn y Co z O 2 cathode material from a hydrometallurgical process, making the recycling system, as a whole, more economically viable [3]. 
Driven by both policy and the market, the PBR market occupies considerable space, with a considerable value of the industrial chain.The PBR technological cooperation innovation network accelerates enterprise technology cooperation and sharing.Batteryoperated electrical vehicles are gradually replacing combustion-engine-based vehicles.The materials used for electrodes play a vital role in deciding the battery performance, cost, and life [4,5].However, as resources are expanding and disorderly and industrial agglomeration is decreasing, the construction of a standard and normative system is not perfect.Although the PBR market has considerable potential, optimism is lacking in terms of its "profitability".Some illegal small businesses blindly invest to seek short-term benefits, use the limited knowledge spillover in the industry, and quickly deploy and recover production capacity.Companies with small workshops take advantage of the close distance, lower transportation and labor costs, and the lack of a need for supervision by an environmental protection department, as well as business qualifications.They can obtain greater benefits with low fixed costs and high profits, while compliance companies concerned with environmental protection make large investments in equipment, workshops, personnel, etc.Furthermore, the supply of decommissioned power batteries is scattered, and transportation and freight complexities are relatively high.Compliant companies have high fixed costs to a certain extent, including costs related to sorting, testing, coding, regrouping, and recycling, which sometimes do not justify the economic gain.Gresham's law proves irregularities in the industry and unfair competition.The sustainable development of the economy, society, and environment of the PBR industry is particularly important to respond to the national "carbon neutrality and carbon up" policy requirements, reduce fossil fuel consumption, and improve power battery technology cooperation and innovation capabilities, as well as resource utilization efficiency.Therefore, limited research has been conducted on how knowledge spillovers between innovative entities affect the synergy benefits and how the embedding characteristics of PBR innovation networks affect the game equilibrium of PBR technological innovation. 
In short, realizing the sustainable and innovative development of the PBR industry supports the explosive growth of electric vehicles.Supply-demand docking, technological cohesion, and value connection between industry and the market require diversified technical cooperation and technological layouts.The PBR technological cooperation innovation network is a systematic topology abstraction connected by many node enterprises and has typical complex network characteristics.Network effects directly affect cross-domain knowledge spillovers, technical cooperation, and technological innovation among the main players in the industrial chain.In the process of technological cooperation, enterprises further adjust their technological investment and cooperative innovation strategies through knowledge learning.Analysis of the evolution characteristics of the PBR industrial network becomes the top priority; the key is to identify the crucial influencing factors of the synergy benefit equilibrium mechanism of the complex network between the main network nodes at the enterprise level.Considering the complex network characteristics of the PBR industry diffusion network, in this paper, we build an evolutionary game model based on individual and group characteristics of an innovation network.The aim of this network is to analyze the innovation and cooperation behavior in the actual network to the greatest extent possible.By analyzing network embedding and knowledge spillover effects, the synergistic effect of industrial chain technological innovation can be enhanced. The main contributions of this paper can be summarized as follows.First, we extend research on synergy innovation by responding to the call for adequate attention to the innovation network in the synergy literature.This article provides new insights into whether network structure affects the synergy benefits of innovation entities by identifying the positive effects of small-world networks and structural hole factors on synergy benefits of battery recyclers.Second, theoretical linkages are developed between technical cooperation and innovation network structure in the PBR technology innovation process.We identify the contextual variables that affect the efficacy of synergy innovation from the perspective of network embeddedness and knowledge spillover that have remained unexplained to date.Our findings of a complex moderating effect of the network structure indicate the important boundary conditions of synergy innovation, which may further resolve the more intriguing issue of when innovation network embedding is beneficial to synergy innovation.Finally, we complement the understanding of synergy innovation by focusing on the interacting role of innovation network structure and knowledge spillover of organizations in game activities. Innovation Network Innovation networks can promote mutual exchanges between companies, as well as access to external technologies and resources [6].EV manufacturers are under considerable pressure to reduce vehicle cost without compromising safety, performance, and driving range.EV companies are not able to reduce the vehicle price owing to the expense of batteries.Therefore, battery technology choices of EV manufacturers can help to optimize vehicle cost [7].Zhang et al. 
[8] compared and analyzed the technology patents of PBR institutions at home and abroad.They reported that Chinese battery recyclers have more vitality in terms of technology research and development and that the development of PBR technology is in high demand.However, Chinese PBR technology is still in the stage of technology introduction, digestion, and reinnovation.The number and quality of patents of patent holders are not clear, advantageous technical fields are not concentrated, and the core technology layout is lacking [9].Based on patent analysis of key components of new energy vehicles (power batteries and motors), Li [10] reported a long-term movement towards the development of pure electric vehicles in China.At present, the patent layout of new energy vehicles in China is decentralized, and the degree of industrial clustering is low.Zhang [11] studied the relevance of national policies and the technological innovation of power battery companies.Mu et al. [12] found that the higher the degree of cooperation between enterprises, the better the independent innovation capabilities and technological introduction effects of battery recyclers.The cooperation network of new-energy-vehiclerelated patents, including its features and performance during different stages, has evolved smoothly, with a growing network density, stable structure, and cohesive subgroups [13]. The development mechanism of emerging industry networks has been confirmed by many innovation practices and related theoretical studies to be conducive to aggregating innovation resources, promoting the transformation of scientific and technological achievements, and improving innovation efficiency [14].PBR technological cooperation and innovation continue to extend upstream and downstream.Chevrolet established an energy storage station using used EV batteries at a General Motors plant in Michigan.In Europe, Tesla has begun recycling in cooperation with Umicore.Coordinating an optimal pricing strategy between manufacturers and remanufacturers, as well as relationships between return yield, sorting rate, and recycling rate, may optimize total profit in different periods [14].Besides the provision of renewable energy for vehicle charging, a circular use system of EV batteries that functions well and is efficient could also be a sustainable and circular economy solution for electromobility [15].Kannan et al. [16] proposed a closedloop supply chain network to recover valuable materials from used and decommissioned batteries to reduce the total cost of PBR.Li et al. [17] also proposed a network similar to remanufacturing, integrating batteries into the remanufacturing supply chain, which can increase profits.Liu and Gong [18] studied the matching behavior of vehicles and batteries under the retailer recycling mode and analyzed the factors that affect the recycling and the degree of influence.Li et al. [19] studied the impact of the deposit-refund system on the recovery rate of power batteries. Existing research has analyzed the evolutionary characteristics of PBR recycling innovation networks, as well as the influencing factors of cooperation benefits and recycling efficiency.However, little research has been conducted on the knowledge spillover correlation of innovative entities' synergistic cooperation and the equilibrium mechanism of synergistic innovation network games. 
Complex Network Evolutionary Game The role of innovation networks in the EV industry, from the perspective of evolution, is to integrate the overall network and the entities' microscopic features and design relative variables.The overall strength of the relationship of the network modulates the inverted U-shaped relationship between the central location and the technological niche [20].Tang et al. [21] investigated the social, economic, and environmental impacts of recycling retired electric vehicle batteries using reward-penalty mechanisms by developing a Stackelberg game theoretical model.They then proposed that compared with the subsidy mechanism, the reward-penalty mechanism exerts greater effects on the recycling rate and social welfare.Zhao et al. [22] explored the promotion impact of government subsidies on EV diffusion and established a three-stage evolutionary game model. Based on the actual application, some scholars incorporated influencing factors into the modeling system to conduct game analysis of the relationship among stakeholders in a diffusion problem.For example, Zhu and Dou [23] established an evolutionary game model between the government and core enterprises and proposed that GSCM (green supply chain management) diffusion among core enterprises is affected by the costs and benefits of implementing GSCM.Another study addressed the problem of spent EV battery collection through multiple channels, i.e., an automobile manufacturer (AM), a 4S shop, and a third-party recycler (TPR).The results showed that the recycling rates of a 4S shop and a TPR are both higher than that of an AM.However, the profits of an AM and a 4S shop are higher than those of a TPR [24].A game between the government and battery recyclers can be modeled with the aim of promoting innovation with respect to recycling technology to achieve sustainable development in terms of energy and the environment.The benefits of adopting green innovative technologies for power battery producers outweigh additional input costs.The evolutionary game system eventually converges, leading the government to impose strict regulations and the manufacturer to adopt the novel technology.At that time, battery recyclers have enhanced the recovery efficiency through the adoption of green innovation technology and have achieved competitive advantages in terms of recycling [25]. Previous studies have shown that the advantages of knowledge correlation and spillover directly affect the dynamic evolution patterns and laws of innovation networks.However, how the structural characteristics of innovation networks, represented by structural holes and small worlds, affect the synergy benefits of technological innovation entities and how the spillover effects of network knowledge affect the synergy revenue effects of PBR technology cooperation remain unknown. 
Network Analysis of Technological Cooperation Innovation Complex network analysis is an effective method to help reveal the inner relationship between different knowledge and technological cooperation innovation networks. Such an investigation involves the analysis of the evolution trend and evolution characteristics of the network through the overall network and individual network of the PBR industry. The patent cooperation network can reflect the characteristics of innovation cooperation and the agglomeration of the PBR industry and measure the evolution of the cooperation innovation network. According to the frequency of patent applicants' cooperation on different IPCs, we can also speculate on interorganizational technology associations, future cooperation directions, and the potential for cooperation in other related fields. Small-World Effect The formal and informal relationships established by the main body of the PBR technological innovation cooperation network promote the sharing and mutual benefit of resources. The positive topological properties of small-world networks have long been a hot spot in research on complex networks. The combination of characteristic path length and clustering coefficient is the key indicator of the small-world effect. When the clustering coefficient is large and the characteristic path length is short, the network is called a small-world network. In this kind of network, the average path length between any two nodes is much smaller than the number of nodes. The clustering coefficient is an index of the local network structure, which can be measured by the local clustering coefficient or the transitivity coefficient, as well as (in most cases) the average local density. The local clustering coefficient (C i) of node v i is the ratio of the number of connections between adjacent nodes to the number of possible connections. Following Watts and Strogatz [26], Equation (1) can be used to measure small-world networks. Let G = (V, E) denote the graph containing the innovation nodes V and the edges E connecting them; the local clustering coefficient of node v i in the undirected graph is then defined as C i = 2 |{e jk : v j , v k ∈ N i , e jk ∈ E}| / (k i (k i − 1)), (1) where N i = {v : e ij ∈ E or e ji ∈ E} is the set of nodes adjacent to v i , and k i is the number of nodes adjacent to v i .
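As an illustration of how the small-world indicators above could be computed in practice, the following sketch uses Python with the networkx library on a toy co-application network; the edge list and organization names are hypothetical and are not drawn from the study's patent data.

```python
import networkx as nx

# Hypothetical co-application edges: pairs of applicants that share at least one patent.
edges = [
    ("StateGrid", "GeelyHolding"), ("StateGrid", "XuJiPower"),
    ("GeelyHolding", "XuJiPower"), ("StateGrid", "GEM"),
    ("GEM", "YinlongNE"), ("GEM", "UniversityLab"),
]
G = nx.Graph(edges)

# Local clustering coefficient C_i of every node, as in Equation (1),
# and the two network-level small-world indicators.
local_c = nx.clustering(G)                   # dict: node -> C_i
avg_clustering = nx.average_clustering(G)    # mean clustering coefficient
# The characteristic path length is defined on a connected graph, so restrict
# to the largest connected component as a simple workaround.
giant = G.subgraph(max(nx.connected_components(G), key=len))
char_path_length = nx.average_shortest_path_length(giant)

print(local_c)
print(avg_clustering, char_path_length)
# Informally, a high average clustering coefficient combined with a short
# characteristic path length (relative to a comparable random graph) is read
# as evidence of a small-world network.
```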
Network Structural Holes Companies can gain resource advantages in the network not only from their own innovation capabilities but also from their network locations.Social network scholars pay attention to the nature and structure of the network, among which the strength of the relationship and structural holes cannot be ignored.The bonds established between groups and organizations can be strong or weak.Through the transmission of information, weak ties connect multiple groups together, and groups with different hierarchical structures present cohesive characteristics.In theory, only about five individuals can establish a connection between any two individuals, establishing the six degrees of segmentation theory [27].If no direct relationship exists between two individuals or groups in the social network and there is no indirect redundant relationship between them, then the gap between the two is called a structural hole.The structural hole index of scholars such as Burt [28] combines the effective scale, restrictions, and indicators.The effective size of an actor is equal to the size of the actor's individual network minus the redundancy of the network, which is the non-redundant factor in the network.Node q is the common adjacent point of v i and v j , p ij represents the weight ratio of v j among all adjacent points of v i , and the effective scale of actor v i and limitation by other nodes are respectively expressed as: Innovation Network Data PBR is an emerging technological industry and presents a high level of ambiguity and uncertainty.The main body of PBR shows a trend of diversification and large-scale development due to the potential value of the PBR market.The large-scale development of the Chinese PBR industry is relatively late.In 2012, the technical cooperation network of the PBR industry began to take shape.Therefore, in this paper, we consider patent data from 2012 to 2020 divided into three time windows (2012∼2014, 2015∼2017, 2018∼2020).The duration of each time window is 3 years.A search for "power battery recycling", "battery gradient utilization", "power battery secondary utilization", "new energy vehicle battery recycling", "waste power battery", "power battery", "recycling", and "utilization of power battery" in the Chinese patent full-text database resulted in 7398 related patents.Because the focus of this research is the cooperative innovation behavior of enterprises, through data cleaning and sorting, applicants with two or more patents were included, and individual patents were excluded.As shown in Figure 1, before 2011, there were few patents for PBR technology.Starting in 2012, the number of PBR patents began to increase.With 150 as the demarcation point, the data from 2012 to 2020 were selected as the time window for the PBR cooperative innovation network.The total number of patents shows a rapid growth trend, whereas cooperation shows a gentle growth trend and the growth rate of cooperative patents shows large irregular fluctuations. 
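Before turning to the overall network analysis, here is a minimal sketch of how Burt's effective size and constraint, the structural-hole indicators introduced above, could be computed for each applicant with networkx; the graph is a hypothetical stand-in for the co-application network built from the patent data, and the node names are placeholders.

```python
import networkx as nx

# Hypothetical co-application network (placeholder organization names).
G = nx.Graph([
    ("StateGrid", "GeelyHolding"), ("StateGrid", "XuJiPower"),
    ("StateGrid", "GEM"), ("GeelyHolding", "XuJiPower"),
    ("GEM", "YinlongNE"),
])

# Burt's effective size: ego-network size minus the redundancy among an actor's contacts.
effective_size = nx.effective_size(G)
# Burt's constraint: the extent to which an actor's contacts are connected to one another;
# lower constraint indicates richer structural holes around the actor.
constraint = nx.constraint(G)

for node in sorted(G.nodes):
    print(node, round(effective_size[node], 3), round(constraint[node], 3))
```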
Overall Network Analysis The innovation network can effectively reflect the continuity of innovation activities.Samples from 2012 to 2020 were considered, and three patent cooperation relationship matrices were formed.Calculations of the network structure indices were conducted.A network topology structural diagram was generated to obtain the network evolution map.The PBR network evolution maps of technological innovation networks are shown in Figures 2-4.Originally, the PBR industry innovation network was small in scale; then, the network scale showed a growth trend that was slow at first, then increased in speed, with a growth rate of 266.67%.As the size of the network increases, the scale effect becomes significant, and the opportunities for cooperation and communication among nodes within the network increase.Therefore, opportunities for the cooperative innovation of nodes in the network increase.The density of the PBR cooperation innovation network has changed from high to low.This shows that with an increase in the number of nodes in the network, the cooperation between subjects has not increased correspondingly, resulting in a decrease in network density, and the relationship between subjects tends to be decentralized in the network.According to the average distance and clustering coefficient shown in Figure 5, the PBR innovation network has a small-world effect as a whole.The average distance of the PBR network is less than 2.5, and the clustering coefficient is greater than 0.7.From 2012 to 2020, the small-world effect of the PBR network gradually increased.As the number of patent applications increases, the small-world network effect becomes more noticeable. Network density is generally not the decisive factor, as the relationship model is more important.Although in the past three years (2017-2019), the relationship density of cooperation innovation networks has been decreasing (see Figure 6), the accessibility of relationships is different.The network has had the largest scale in the past three years, whereas the density and cohesion have been the smallest, and the average distance has been the largest, showing that in the past three years, PBR industry knowledge and rights with respect to technology innovation development have been relatively concentrated, the status of the main players in the industry has been uneven, rights and information centers have been present, most innovation subjects have been easily affected by individual subjects, and the network has exhibited a factional structure.The centrality degree shows that although the scale of the network is gradually expanding, the ability of nodes to cooperate with other nodes is reduced, with the network becoming decentralized.From the perspective of centrality degrees, all are above 0.1, indicating that the network maintains a certain concentration degree.The centrality of the network dropped from 0.0729 to 0.0249, indicating that the network's tendency to concentrate on a certain node is decreasing.In addition, the cohesion index shows a downward trend, indicating that the PBR network information, knowledge, and rights are more concentrated, and the network is vulnerable to and controlled by individual nodes.The cohesion index of the overall network dropped from 0.063 to 0.019. 
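The overall-network indicators discussed above could be tracked across the three time windows with a short script such as the one below; the graphs here are randomly generated stand-ins for the real co-application networks, and the degree centralization follows the standard Freeman normalization, which may differ in detail from the index used in the study.

```python
import networkx as nx

def degree_centralization(G: nx.Graph) -> float:
    """Freeman's degree centralization: how strongly ties concentrate on one node."""
    n = G.number_of_nodes()
    if n <= 2:
        return 0.0
    degrees = [d for _, d in G.degree()]
    d_max = max(degrees)
    return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

# Stand-in graphs for the three windows; in practice these would be built from patent data.
windows = {
    "2012-2014": nx.gnm_random_graph(30, 45, seed=1),
    "2015-2017": nx.gnm_random_graph(60, 80, seed=2),
    "2018-2020": nx.gnm_random_graph(110, 130, seed=3),
}
for label, G in windows.items():
    print(label, G.number_of_nodes(), round(nx.density(G), 4),
          round(degree_centralization(G), 4))
```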
Analysis of Structural Holes In order to further explore the individual characteristics of the nodes of the PBR innovation network, the individual network was applied to analyze the node status.Three time windows were separately intercepted to analyze the activeness, network influence, and resource control of the innovation subject.Judging from the technological cooperation innovation network from 2012 to 2014, State Grid Corporation ("State Grid" for short) and Zhejiang Geely Holding Group Co., Ltd., Hangzhou, China ("Zhejiang Geely") are in the core position of the PBR technological cooperation innovation network in terms of centrality degree and structural hole indicators.In particular, the centrality degree and the effective scale of the network of the State Grid Corporation are higher than those of other institutions (see Table 1).This indicates that the two companies play a core bridging role in the overall network and that the information and resource control ability of the network is relatively high.However, some nodes in the network have a high degree of centrality, although the intermediate centrality is 0. Most such nodes support companies or subsidiaries with limited resources and an information advantage, becoming non-core nodes of the network.However, during the 2015-2017 window (see Table A1 in Appendix A), although State Grid and Zhejiang Geely still occupied the core of the network, they exhibited high activity levels, network influence, and resource control.The centrality of Xu Ji Power Co., Ltd., Xuchang, China decreased, and the network restriction increased, indicating that the company's ability to control resources and information in the network was reduced, gradually approaching the center right, in association with an increase in its ability to cooperate with more influential enterprises in the network.In the PBR technological cooperation innovation network, State Grid has a significant bridge position and influence.The distribution of technological cooperation innovation among enterprises is uneven, and most enterprises are indirectly affected, whereas less other enterprises directly affected. From the perspective of the last three years (2018-2020), the individual characteristics of the PBR technological cooperation innovation network are determined by standardized processing according to the two types of individual structural hole attribute measurement indicators (see Figure 7).The ranking of State Grid and its subsidiaries is still reliable.Furthermore, the active ability and influence of research institutions and universities became apparent.As a knowledge-intensive industry, PBR technology research institutions and related universities have become active and have managed more resources and information in the innovation network.However, their own average control of resources needs to be strengthened, and such institutions need to become even more active in the industry.Enterprises and core enterprises have engaged in in-depth cooperation.However, a group of power battery production and recycling companies such as GEM Co., Ltd., Shenzhen, China and Yinlong New Energy Co., Ltd., Zhuhai, China have increased their influence on the network.This is an indication that PBR technology is developing rapidly according to the theory of technological innovation, and the corresponding technological achievements are beginning to be applied in enterprises. 
In the last three years (see Figure 8), more and more companies have begun to strengthen PBR technical cooperation and technological innovation in their respective fields.The effective scale increase is limited, but they have high structural holes in limited fields.For example, although the effective scale of the State Grid Co., Ltd., Beijing, China is relatively high, the level of network constraint is very low.The fact that state-owned enterprises or state-controlled groups or companies are currently occupying the central position in the Chinese PBR technological cooperation innovation network is an indication of the state's participation in and support for the construction of the PBR system.At the same time, the corresponding subsidiaries and their affiliated companies also obtain more information and resources through the knowledge and technology spillovers of the parent company, and their activity and influence in the industrial cooperation network continue to increase. Evolutionary Game Analysis on Technological Cooperation Innovation The technological cooperation innovation behavior of battery recyclers refers to the cooperation and innovation behavior of companies, universities, research institutions, etc., involved in battery dismantling, precious metal recycling, residual assessment, and cascading recycling technology research and development.From the perspective of technological innovation, technological cooperation innovation behavior refers to the realization of complementary advantages, knowledge and resource sharing, and risk sharing to maximize synergistic benefits.Battery and vehicle design strongly affect the technical feasibility of disassembly and optimal utilization at the component level [5].Existing research shows that the value orientation of the enterprise, the penalty mechanism for breach of contract, cooperation benefits, cooperation cost, the degree of trust, and the level of communication all have a significant impact on the cooperative innovation behavior of an enterprise.These factors affect the stable state of the game of the innovation cooperation subjects but fail to show differences in the behavior of innovative subjects.Previous research has also ignored the influence of knowledge spillovers and small-world network effects on the initial state and evolution game.Research on the game strategy and evolutionary stable state of bounded rational entities in technological innovation cooperation is lacking. 
Evolutionary Game Model and Its Assumptions The innovative network structure and distribution affect the communication and exchange of the main body.The average distance between the subjects of the small-world network is short, and the clustering coefficient is considerable.Therefore, the efficiency of information transfer between subjects is high, and the efficiency of knowledge transfer and spillover increases.An increase in the degree of agglomeration also further enhances the willingness of the main body to cooperate, promotes technical exchanges and cooperation, and thus reduces the cost of cooperation.According to statistics on PBR technology patents, the main players influencing enterprise innovation cooperation are state-owned enterprises and battery manufacturers.In this paper, we consider two types of innovation cooperation entities for PBR technology cooperation, namely battery recyclers (D) and EV manufacturers (Z).Therefore, the following assumptions are made: (1) Both enterprises (Z and D) are bounded rationality.Enterprises begin to gradually expand the scope of innovation cooperation and optimize the cooperation innovation network over time, and most enterprises begin to play important intermediary and bridge roles in the innovation cooperation network.When both enterprises (Z and D) choose not to carry out technological cooperation innovation, they can obtain an initial profit of R i (i = Z, D) > 0. (2) In addition to the technologies adopted to dispose of used batteries, the design of recycling networks also has a significant impact on costs and profits.Generally, a recycling technological innovation cooperation network contains a collection center, disassembly center, material recycling center, and waste disposal center.Because transportation between these centers involves costs and carbon emissions, profits can be increased by optimizing the design of the recycling supply chain network.Therefore, the assumption is that the input cost of the EV manufacturer (Z) and the battery recyclers (D) is C i (i = Z, D) > 0, including technical input in materials, formulae, specifications, and structures.Due to the small-world network effect, the average distance between enterprises decreases, the degree of agglomeration increases, and the cost of cooperation decreases.Therefore, the small-world effect has a negative impact on the cooperation costs of innovative companies, assuming that both parties can save indirect costs (bC i (i = Z, D)) through cooperative innovation, where b is the small-world coefficient. (3) When EV manufacturers (Z) and battery recyclers (D) choose technological innovation, both companies can obtain benefits, such as battery echelon utilization and precious metal extraction and recovery.The relevance and deep complementarity of knowledge cooperation are enhanced, and the output of structural holes increases.The enterprises can also obtain increased product sales and policy subsidies as a result of technological upgrades.Assuming that the two companies' cooperative innovation can create additional benefits of ∆R, the revenue that can be allocated to the EV manufacturer is a∆R, and the revenue that can be allocated to battery recyclers is (1 − a)∆R. 
(4) If EV manufacturers and battery recyclers withdraw halfway, the non-cooperative party pays the liquidated damages (F); however, due to the knowledge spillover effect, the company that defaults halfway can still obtain additional knowledge spillover income (E). Because the small-world effect affects the efficiency and quality of this knowledge spillover, it is assumed that the party who defaults halfway harvests the knowledge spillover as bE. (5) The probability that the EV manufacturer chooses cooperative innovation of PBR technology is x (0 ≤ x ≤ 1), and the probability of not choosing cooperative innovation is (1 − x); the probability of the battery recycler choosing cooperative innovation is y (0 ≤ y ≤ 1), and the probability of not choosing cooperative innovation is (1 − y). Based on the above assumptions and definitions of variables, as well as references [29,30], the payoff matrix of the cooperative technology innovation game between EV manufacturers and battery recyclers can be constructed (Table 2). The mathematical formulations for the income variables can be found in Appendix C. The replication dynamic equations of EV manufacturers and battery recyclers follow from this payoff matrix; their form is sketched below. Evolutionary Game Stable State After solving for the stationary points of the cooperative innovation decision [31], the equilibrium points of the game are (0, 0), (0, 1), (1, 0), (1, 1), and (x * , y * ), where x * = (C D − F)/((1 − a)∆R + bC D − bE) and y * = (C Z − F)/(a∆R + bC Z − bE). The local stability of these equilibrium points is analyzed by means of the Jacobian matrix of the system. The determinant and trace of the Jacobian at each equilibrium point are shown in Table 3. Table 3. Determinant and trace at each equilibrium point of the technological cooperative innovation game between EV manufacturers and battery recyclers. According to the replicator dynamic equations, when x = 0 or x = 1, or when y = y * = (C Z − F)/(a∆R + bC Z − bE), the proportion of EV manufacturers choosing cooperative innovation is in a steady state. When y = 0 or y = 1, or when x = x * = (C D − F)/((1 − a)∆R + bC D − bE), the proportion of battery recyclers choosing cooperative innovation is in a steady state. The local stability of the equilibrium points in the unit square {(x, y) : 0 ≤ x, y ≤ 1} is summarized in Table 4. According to the local stability analysis results listed in that table, there are two stable equilibrium solutions (ESS): O(0, 0), in which the EV manufacturer and the battery recycler both choose not to cooperate in innovation, and Q(1, 1), in which both choose cooperative innovation. Neither M(0, 1) nor N(1, 0) is a stable point, and P(x * , y * ) is a saddle point. The phase diagram of the evolutionary game presented in Figure 9 shows that after repeated play, three outcomes are possible: (1) both the power battery enterprise and the EV manufacturer choose non-cooperative innovation, and O(0, 0) is the equilibrium point; (2) both sides choose cooperative innovation, and Q(1, 1) is the equilibrium point; or (3) the system remains at the saddle point P(x * , y * ). In the region OMPN, the equilibrium outcome gradually approaches O(0, 0), that is, EV manufacturers and battery recyclers choose non-cooperative innovation. In the region MPNQ, the equilibrium outcome gradually approaches Q(1, 1), that is, both sides choose cooperative innovation.
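The replication dynamic equations referred to above are not reproduced in this text, but their form can be reconstructed from assumptions (1)–(5) and from the interior equilibrium (x*, y*) quoted in the stability analysis; what follows is such a hedged reconstruction, and the published expressions may differ by an equivalent rearrangement.
\[
\dot{x} = x(1-x)\bigl[\,y\,(a\Delta R + bC_Z - bE) - (C_Z - F)\,\bigr], \qquad
\dot{y} = y(1-y)\bigl[\,x\,\bigl((1-a)\Delta R + bC_D - bE\bigr) - (C_D - F)\,\bigr],
\]
so that the boundary fixed points are (0, 0), (0, 1), (1, 0) and (1, 1), and the interior fixed point is
\[
x^{*} = \frac{C_D - F}{(1-a)\Delta R + bC_D - bE}, \qquad
y^{*} = \frac{C_Z - F}{a\Delta R + bC_Z - bE},
\]
in agreement with the expressions quoted above. A quick numerical check with entirely hypothetical parameter values also reproduces the two basins of attraction described by the phase diagram:

```python
# Hypothetical parameter values, chosen only so that 0 < x*, y* < 1 as assumed in the text.
a, dR, b = 0.5, 10.0, 0.3
C_Z, C_D, E, F = 4.0, 4.0, 2.0, 1.0

def step(x, y, dt=0.01):
    """One Euler step of the reconstructed replicator dynamics."""
    dx = x * (1 - x) * (y * (a * dR + b * C_Z - b * E) - (C_Z - F))
    dy = y * (1 - y) * (x * ((1 - a) * dR + b * C_D - b * E) - (C_D - F))
    return x + dt * dx, y + dt * dy

for x0, y0 in [(0.2, 0.2), (0.8, 0.8)]:
    x, y = x0, y0
    for _ in range(200_000):
        x, y = step(x, y)
    # Trajectories starting below the saddle drift towards O(0, 0),
    # those starting above it drift towards Q(1, 1).
    print((x0, y0), "->", (round(x, 3), round(y, 3)))
```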
Evolutionary Game Results (1) Under the premise of (1 − a)∆R − bE + bF > 0 and a∆R − bE + bF > 0, the greater the cost of cooperative innovation of the EV manufacturer, the greater the possibility that both companies choose non-cooperative innovation.Considering the high cost of cooperation, the two companies tend toward non-cooperation in innovation, which is also confirmed by the small-world network effect.The small-world effect (b > 0) reduces the investment cost (C i ) of technological innovation between the Z and D; so, the probability of the enterprise choosing innovation cooperation increases.As the cost of cooperative innovation for battery recyclers increases as S OMPN grows, the equilibrium tends toward O(0, 0), with an increased probability that EV manufacturers and battery recyclers choose not to cooperate in innovation (see supplementary proof in Appendix B). (2) With the increase in additional benefits (∆R) from cooperative innovation, companies on both sides are increasingly inclined to choose cooperative innovation for PBR technology, showing that EV manufacturers and battery recyclers increasingly likely to choose cooperative innovation.Cooperative innovation can obtain higher profits.For example, upgrades in PBR technology increase the efficiency of power battery echelon utilization and the recovery of precious metals.Therefore, both companies tend to choose cooperative innovation. (3) The characteristics of a small-world network have a double-edged sword effect on technological cooperation innovation .If C i − E < 0, as b increases, both enterprises tend to cooperatively innovate, and the small-world network effect is enhanced.In other words, the average path of cooperation subjects becomes shorter, but the agglomeration coefficient increases.From the perspective of the individual, the increased relevance and deep complementarity of knowledge cooperation are sufficient to make up for the input cost of technological cooperation innovation, and the additional benefits brought about by the output of structural holes increase.If C i − E > 0, that is, if the small-world network effect is enhanced, the input cost of cooperative innovation is greater than the knowledge spillover income, and EV manufacturers and battery recyclers tend to not cooperate in innovation. 
(4) From the perspective of the evolution of the overall innovation network, with an increase in the small-world coefficient (b), as the area of S OMPN increases, the system tends toward O(0, 0), and the probability of EV manufacturers and battery recyclers choosing non-cooperative innovation increases.Because the cost of cooperative innovation is higher than that of the knowledge spillover effect, one party enterprise can obtain indirect profits through knowledge spillover effects without choosing cooperative innovation.Furthermore, the high costs of cooperative innovation do not need to be incurred, so they can obtain higher "information interests" and "control interests" in the PBR network, enabling parties to be more competitive than other members in the network and leading to an increased degree of structural holes.However, the stronger the small-world effect, the more profit the non-cooperative side generates.In this way, the system evolves to O(0, 0), and firms become more inclined to engage in non-cooperative innovation.When C i − E < 0, with the growth of small-world effect (b), the area of S OMPN decreases.The probability of EV manufacturers and battery recyclers choosing cooperative innovation increases.Because the cost input of cooperative innovation is less than that of the spillover effect, both enterprises tend to adopt cooperative innovation. (5) With an increase in the knowledge spillover effect in the innovation network, enterprises are increasingly inclined to choose non-cooperative innovation PBR technology.The probability of EV manufacturers and battery recyclers choosing non-cooperative innovation increases.The enhancement of the knowledge spillover effect generates more indirect profit for the non-cooperative innovation side.Free rider problems and opportunism can boost the profits of the non-cooperative side, so the game evolution of the two enterprises tends to favor non-cooperative innovation. Conclusions and Implications In this paper, we extend the application of innovation network structure in evolutionary economics theory, emphasizing the moderating role of network embedding in organizational collaborative innovation and game activities.A novel evolutionary game model of technological cooperation innovation between EV manufacturers and battery recyclers is proposed.According to the technology cooperation innovation patent data of battery recyclers, the characteristics of network evolution were studied in stages from two perspectives: the small-world effect of the overall network and the individual structural attributes.The evolutionary game model of technological cooperation innovation was adopted to analyze the individual income characteristics and the technological cooperation mechanism of enterprises based on innovation input costs, opportunity benefits, and knowledge spillover effects.The analysis also considered the influence of these factors on the technological cooperation innovation of enterprises based on the network evolution characteristics of the PBR industry. In this paper, we constructed a PBR technology cooperation innovation network to analyze its characteristics and used evolutionary game theory to construct an "EV manufacturers-battery recyclers" technology cooperation innovation game model.The objective was to promote the cooperation and innovation of recycling technology by battery recyclers.Our analysis yields the following four specific implications. 
First, an increase in the special funds for joint research on major projects of PBR technology is recommended.Government departments can provide financial subsidies to encourage EV manufacturers and battery recyclers to cooperate and innovate on PBR technologies and ensure the smooth progress of the cooperative innovation process.In the evolutionary game model, increased R&D costs reduced the willingness of EV manufacturers and battery recyclers to cooperate and innovate.Considering the high complexity of PBR technology, human resources and a large amount of funds need to be invested in the early stage of R&D.R&D for power PBR technology is also associated with certain safety risks.The high R&D costs and safety risks imposed by technology are a burden on enterprises, making innovation difficult. Second, a cooperation platform should be established between EV manufacturers and battery recyclers to promote the clustered development of battery recyclers, giving full play to the advantages of a waste battery recycling management center to increase the intensity of its regulation and improve the regulatory system.The small-world effect of the innovation network promotes the cooperation willingness of innovation entities (b > 0, and C i decreases).Due to the long average distance between enterprises and the low degree of aggregation, problems such as information asymmetry and low information circulation efficiency between EV manufacturers and battery recyclers are encountered.Resource sharing and complementary advantages can establish a cooperation platform between EV manufacturers and battery enterprises, promote the clustering of battery recyclers to shorten the average distance, enhance their aggregation, compensate for investment costs, optimize vehicle costs of enterprises through small-world networks and knowledge spillover effects, and promote synergistic benefits and cooperation willingness.Such advantages can be achieved through cooperation platforms of industrial clusters to improve the synergistic innovation and R&D capabilities of enterprises.The dynamic evolution of network embedding and the negative effects of knowledge spillover should also be considered in order to avoid potential evolutionary equilibrium strategies (O(0, 0)). Third, the patent technology protection system should be improved so that the R&D results can be effectively protected, with vigorous promotion to support popular awareness of intellectual property protection, in addition to encouraging battery recyclers to actively participate in market-downstream technological innovation.PBR is a knowledge-intensive industry with high technical barriers in which patent protection plays an important role.The results of the evolutionary game show that some enterprises can easily own the fruits of an innovative enterprise's labor due to the negative impact of the knowledge spillover effect.Such behavior reduces the willingness of innovative entities to cooperate and innovate.Therefore, the patent protection system must be improved in order to reduce the negative impact of the knowledge spillover effect and protect the patent achievements of innovation entities.R&D entities would then be able to obtain appropriate returns to encourage enterprises to carry out the technological innovation of PBR.Such changes would also improve the structural hole level of some dominant nodes to prevent inbreeding and faction prosperity. 
In addition to the issues already addressed in this paper, the following issues require further research. First, we extracted only the small-world effect and structural hole attributes of the PBR innovation network as the embedding features of the network structure and analyzed their moderating effects on the technological innovation game between vehicle manufacturers and battery recyclers; future work will analyze the effects of other network features on the benefits of synergistic innovation among enterprises. Second, the PBR innovation network data used in this study were patent data from 2012 to 2020, which lag the actual technological innovation effects, so no dynamic change or real-time impact on the benefits of cooperation among enterprises was observed. Third, we used only theoretical analysis to study the game of enterprise synergistic innovation and did not conduct a simulation; in the future, simulation verification will be conducted on the benefits of enterprise synergistic innovation to strengthen the analysis.

Figure 1. Trends in the number of patent applications for PBR technology.
Figure 2. Evolutionary map of the innovation network for 2012-2014.
Figure 3. Evolutionary map of the innovation network for 2015-2017.
Figure 4. Evolutionary map of the innovation network for 2018-2020.
Figure 5. Overall attributes of the PBR network.
Figure 6. Individual attributes of the PBR network.
Figure 7. Individual characteristics ranking of the PBR technological cooperation innovation network for 2018-2020.
Figure 8. Individual characteristics ranking of the PBR technological cooperation innovation network in 2018-2020.
Figure 9. Evolutionary game phase diagram of technological cooperation innovation between EV manufacturers and battery recyclers.
Table 1. Ranking of individual characteristics of the PBR technological cooperation innovation network in 2012-2014.
Table 4. Local stability analysis of the technological cooperative innovation system between EV manufacturers and battery recyclers.
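As noted above, simulation verification of the cooperation game is left to future work. For readers who wish to experiment, the following minimal Python sketch of a two-population replicator dynamic is one hypothetical starting point; the payoff structure and all parameter names (r_coop for cooperative revenue, c1/c2 for innovation costs, e_spill for the spillover captured by the non-cooperating side, b_sw for a small-world moderation of costs, pi1/pi2 for opportunity benefits) are assumptions introduced for illustration and are not taken from the paper's model.

```python
# Hypothetical replicator-dynamics sketch of the "EV manufacturer vs. battery recycler"
# cooperation game. All parameters are illustrative placeholders.

def replicator_step(x, y, params, dt=0.01):
    """One Euler step of the replicator dynamics.

    x: share of EV manufacturers choosing cooperative innovation
    y: share of battery recyclers choosing cooperative innovation
    """
    r_coop, c1, c2, e_spill, b_sw, pi1, pi2 = params
    # A stronger small-world effect is assumed to lower effective cooperation costs.
    c1_eff = c1 * (1.0 - b_sw)
    c2_eff = c2 * (1.0 - b_sw)
    # Expected payoff advantage of cooperating over defecting for each population.
    # Defectors are assumed to capture a spillover gain e_spill when the other side
    # cooperates, and cooperators forgo the opportunity benefit pi.
    d1 = y * (r_coop - c1_eff) - (y * e_spill + pi1)
    d2 = x * (r_coop - c2_eff) - (x * e_spill + pi2)
    x_new = x + dt * x * (1.0 - x) * d1
    y_new = y + dt * y * (1.0 - y) * d2
    return min(max(x_new, 0.0), 1.0), min(max(y_new, 0.0), 1.0)

def simulate(params, x0=0.5, y0=0.5, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        x, y = replicator_step(x, y, params)
    return x, y

if __name__ == "__main__":
    # A larger spillover pushes the system toward O(0, 0), i.e. mutual non-cooperation.
    for e_spill in (0.1, 0.6):
        params = (1.0, 0.5, 0.5, e_spill, 0.3, 0.1, 0.1)
        print(e_spill, simulate(params))
```

With these sample parameters, raising e_spill from 0.1 to 0.6 flips the long-run state from mutual cooperation (x, y ≈ 1) to mutual non-cooperation (x, y ≈ 0), consistent with the qualitative findings in points (4) and (5).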
Effect of the Length of Oat Hay on Growth Performance, Health Status, Behavior Parameters and Rumen Fermentation of Holstein Female Calves

The aim of this study was to evaluate the effect of the length of oat hay on the performance, health, behavior, and rumen fermentation of dairy calves. For this purpose, two hundred and ten healthy two-day-old Holstein dairy calves were randomly allocated into three groups: a basic diet (calf starter) without hay (CON), or a basic diet with oat hay cut to either a long (OL: 10-12 cm) or short (OS: 3-5 cm) length. The basic diet was fed from day 4, while the hay was offered from day 14. All calves were weaned at day 56 and remained in their individual hutches until the end of the trial (day 70). Calf starter intake and fecal scores were recorded daily. Body weight, body size measurements, and rumen fluid samples were collected biweekly before weaning and weekly after weaning. Overall, providing oat hay (OS and OL) in the diet increased body weight, starter intake, and average daily gain compared to the CON group. Similarly, feeding oat hay improved rumen fermentation; more specifically, hay raised the rumen pH and changed the rumen fermentation type. Hay-fed calves spent more time on rumination but less time performing abnormal behaviors compared to controls. In conclusion, feeding oat hay to calves enhances growth performance, rumen fermentation, and normal calf behaviors, implying improved animal welfare irrespective of hay length.

Introduction

Improved morphological and metabolic functions of the rumen are important at an early stage of calves' life. The normal development of the rumen epithelium depends on ingesting solid feed, and especially concentrates that produce butyric and propionic acid [1]. In the first few weeks of life, the calf cannot consume sufficient solid feed and depends mainly on milk for its maintenance, growth, and development needs. However, milk bypasses the rumen via the esophageal groove into the abomasum and small intestine, where it is digested and absorbed [2], and milk feeding may therefore restrain rumen development in calves [3]. Although solid feed intake increases with age, it is not until complete weaning from milk that the calf can consume adequate solid feed to support its nutritional needs, providing the substrates required for growth and development of the rumen mass and papillae [4]. Feeding diets with a high percentage of concentrates can predispose calves to low rumen pH due to the rapid fermentation of easily fermentable carbohydrates, leading to the accumulation of rumen fermentation products [5]. On the contrary, the large particle size and rough surfaces of forage can stimulate rumination and increase the flow of saliva into the rumen [6], which can alleviate the low rumen pH. Moreover, due to the small particle size of concentrates, plaque formation on the rumen epithelium increases, eventually leading to ruminal papillae hyperkeratosis and rumen mucosa thickening [7-9] and ultimately a decrease in VFA absorption [10]. The physical form of the forage is a key factor that influences nutrient digestion and growth performance in calves. Forage can be cut to different lengths before being fed to calves, resulting in different forms and textures. Compared to calves fed ground grass hay, calves fed chopped grass hay had improved total DMI and nutrient digestibility and displayed reduced rates of non-nutritive behavior [11].
Another research project reported that calves had higher ADG when a diet containing 25% long alfalfa hay and calf starter was provided [12]. Norouzian et al. [9] reported that long alfalfa hay was more advantageous than short alfalfa hay (fine: 2 mm; long: 3 to 4 cm, as geometric mean) in promoting rumen development, as the longer hay stays in the rumen longer. Mirzaei et al. [8] reported that feed intake and weaning body weight were improved as the physical size of alfalfa hay increased when hay was included at up to 8% of the basic diet; however, no differences were observed in calves fed alfalfa at 28% of the basic diet in the same trial. Indeed, some studies have completely discouraged feeding calves long hay [13,14], as they associate hay feeding with low starter intake (partly due to rumen fill and limitations in forage digestion) and thus decreased energy intake [15,16]. On the other hand, although some farms feed oat hay to calves, there is little literature on the physical form of oat hay (most studies concern alfalfa hay). Castells et al. [17] compared the effects of different forage sources (oat hay, barley straw, triticale silage, and alfalfa hay) on calves and found higher DMI and ADG in the oat hay group compared with the alfalfa hay group. Many factors affect calf behavior, including environment, weaning, and feeding management [18]. Moreover, the forage source [17], physical form [11], supplementation level, and particle size [8] could also influence calf behavior. Understanding calf behavior can help farmers better manage their young stock [19]. Phillips [20] found that providing hay to calves could reduce bedding intake and licking of the bucket and pen. Montoro et al. [11] also found that calves receiving coarsely chopped (3-4 cm) grass hay spent less time on non-nutritive oral behaviors than calves receiving finely ground (2 mm) grass hay. Despite extensive research on dairy calves, controversy remains on whether calves should be fed hay in the pre-weaning period. Moreover, dairy farmers have no clear recommendations on the optimal length of hay fed to dairy calves, or may lack the capacity to cut hay finely. We hypothesized that the cut length of oat hay, rather than the feeding of hay itself, may affect calf performance. Thus, the objective of this study was to determine the effects of feeding long and short oat hay on calf growth performance, rumen fermentation, health status, and calf behavior.

BW, ADG and Starter Intake

BW, ADG, and starter intake data are shown in Table 1 and Figures 1-3. BW was affected by the addition of hay to the diet, since higher BW was observed in calves fed hay (p < 0.01) during the whole trial period (Figure 1). No differences in BW were observed from day 1 (week 1) to day 28 (week 4). Nevertheless, calves fed hay had a higher BW than the CON group (p < 0.01) from the period ending on day 42 (weeks 5-6) until the end of the study on day 70 (week 10). At the same time, calves fed short oat hay had higher ADG than the CON (p < 0.01) and long oat hay (p < 0.01) treatments during pre-weaning and the entire trial (Table 1); however, no significant differences between the short and long oat hay treatments were observed in post-weaning calves. Consistently, calves fed hay had higher ADG between weeks 2-4 and weeks 8-9 (p < 0.05), except for weeks 6-8 (Figure 2). Starter intake was greater in the hay-fed groups compared to CON during the pre-weaning (p < 0.05) and entire trial (p < 0.05) periods.
In addition, short oat hay calves consumed more calf starter than the long oat hay calves in weeks 2 to 3 (p < 0.05), week 6 (p < 0.05), and weeks 9 to 10 (p < 0.05).

Footnotes to Table 1: 1 Pre-weaning: from birth to week 8 of age; post-weaning: from week 8 to 10 of age; entire trial: from birth to week 10. 2 CON = control (basic diet without hay); OS = inclusion of short oat hay (3-5 cm); OL = inclusion of long oat hay (10-12 cm). 3 The calf starter was provided by Yuanxing Co. Ltd. (Hohhot, Inner Mongolia Autonomous Region, China) and contained corn, soybean meal, cotton meal, barley, stone powder, sodium chloride, vitamins and retinoids. Daily individual feed intake (g) = amount of fresh feed given (g) − amount of feed refusals (g). 4 Data were analyzed for the entire trial (pre-weaning, post-weaning) period. * The interaction between treatment and time (T × t) or treatment and period (T × p).

Figure 1. BW of Holstein female calves fed a basic diet without hay (CON: empty circles; red line), with inclusion of short oat hay (OS: 3-5 cm; empty squares; green line), or with inclusion of long oat hay (OL: 10-12 cm; empty triangles; blue line). Differences between CON and the hay-treated groups are represented by an asterisk (p < 0.05, denoted by **).

Figure 2. Mean ADG of Holstein female calves fed a basic diet without hay (CON: empty circles; red line), with inclusion of short oat hay (OS: 3-5 cm; empty squares; green line), or with inclusion of long oat hay (OL: 10-12 cm; empty triangles; blue line). Differences between CON and the hay groups are represented by an asterisk (p < 0.05, denoted by **).

Figure 3. Calf starter intake of Holstein female calves fed a basic diet without hay (CON: empty circles; red line), with inclusion of short oat hay (OS: 3-5 cm; empty squares; green line), or with inclusion of long oat hay (OL: 10-12 cm; empty triangles; blue line). Differences between CON and the hay groups are represented by an asterisk (p < 0.05, denoted by **). Differences between OS and OL are represented by a cross (p < 0.05, denoted by ☨).

Body Growth Parameters

Data on the body structural measurements (body height, body length, heart girth, abdominal girth, and circumference of the cannon bone) are presented in Table 2. Hay treatments did not influence body height or circumference of the cannon bone during the three time periods (pre-weaning, days 1-56; post-weaning, days 57-70; and entire trial, days 1-70).
No differences were observed for body length and heart girth during the pre-weaning period. However, body length (p < 0.01) and heart girth (p < 0.05) in calves fed hay were greater during post-weaning and the entire trial compared to CON. The hay groups had greater abdominal girth (p < 0.01) than the CON group during all three time periods. For none of these measurements was there a difference between the short and long oat hay groups.

Rumen Volatile Fatty Acids

Acetate concentration was not different among groups during the pre-weaning and entire trial periods (Table 4). A higher proportion of acetate was observed in the OS (p < 0.05) and OL (p < 0.05) groups during the post-weaning period compared to CON. The CON group had a significantly higher propionate proportion than the hay treatments (OL and OS) (p < 0.05) during post-weaning, while the OS group showed a lower propionate proportion than CON and OL during the entire trial. Calves receiving long oat hay had a lower butyrate proportion than those receiving short oat hay (p < 0.01) during the pre-weaning and post-weaning periods. The CON group had higher TVFA than OL (p < 0.05) during the pre-weaning and entire trial periods. Calves fed hay had a higher C2/C3 ratio than CON in the post-weaning period (p < 0.05).
However, no differences were observed between the short and long oat hay groups. Footnotes to Table 4: 3 Data were analyzed for the entire trial (pre-weaning, post-weaning) period. * The interaction between treatment and time (T × t) or treatment and period (T × p).

Calf Health

The effect of oat hay length on calf health (Table 5) showed that the treatments had no significant influence on diarrhea frequency. Diarrhea frequency changed with time and was higher in the pre-weaning than in the post-weaning period (p < 0.01). No differences were observed in diarrhea duration among the treatments in the different periods. No significant difference in the occurrence of pneumonia among the groups was found during the pre-weaning period. Footnotes to Table 5: 5 Data were analyzed for the entire trial (pre-weaning, post-weaning) period. * The interaction between treatment and period (T × p).

Calf Behavior

Time spent on each behavior on days 57, 63, and 70 is presented in Table 6. The OS and OL calves spent less time standing (p < 0.05), and OS calves spent more time lying (p < 0.05), compared with CON during these three days. The CON calves spent more time eating starter than the OS and OL groups (p < 0.05). However, the length of hay did not affect the time calves spent eating hay. No difference was found among treatments in drinking behavior. Calves supplemented with hay in the diet spent more time ruminating than CON (p < 0.01). Furthermore, our findings showed that calves receiving hay showed a reduction in the time spent on abnormal behavior and with the head out of the pen (p < 0.01). Footnotes to Table 6: 2 Abnormal behavior refers mainly to non-nutritive oral behavior. 3 Self-grooming: calf licked itself with its tongue. * The interaction between treatment and time (T × t).

Calf Growth Performance

Providing hay, especially hay with a long particle size, increases salivary secretion and buffers rumen fluid in dairy calves [5,6,21]. Furthermore, it improves the rumen environment, increases dry matter intake, and contributes to greater BW [11,22,23]. Feeding hay may have resulted in heavier calves because it increases the muscularis mucosa and the weight and volume of the rumen, which subsequently enhances feed intake and BW [8,10,24-26]. To some extent, early in life the rumen is not fully developed and is less able to digest hay efficiently. Thus, longer hay particles may stay in the rumen longer than shorter ones due to difficulties in digestion and a low passage rate in the gut [16], which may cause an increase in BW due to greater gut fill [11].
Our results showed no difference in BW between OL- and OS-fed calves, although both were heavier than CON from day 42 to the end of the study at day 70, suggesting that calf BW can be positively influenced by hay feeding irrespective of its cut length. Although there was no difference in BW between OS and OL, ADG was higher in the OS group. In our study, we fed OS and OL cut to 3-5 cm and 10-12 cm, respectively. In a previous study, coarsely chopped (CRS) and finely ground (FN) grass hay were 3 to 4 cm and 2 mm long, respectively [11], and calves fed CRS hay had greater performance (higher feed intake and nutrient digestibility). Therefore, hay for calves should be of moderate length, neither too long nor too short; 3-5 cm may be the best choice. Castells et al. [10] reported positive outcomes in ADG when calves were fed chopped oat hay in the pre-weaning period. The importance of hay in young ruminants was demonstrated more than six decades ago: as early as 1962, Tamate et al. [25] demonstrated that hay supplementation was important in stimulating the muscularis and promoting rapid growth of the rumen. A well-developed rumen enhances the production and absorption of VFAs and increases the output of microbial protein, which can be utilized for growth [27]. Thus, the low ADG in CON calves was most likely linked to a decreased availability of nutrients to support growth. On the other hand, compared with OS, the OL might have filled more space in the rumen and thus reduced overall solid feed intake [16]. Consequently, the ADG of OL calves was lower than that of OS calves over the entire trial in our study. Calves fed short-cut hay consumed more calf starter. Similarly, Montoro et al. [11] reported that providing chopped grass hay (3-4 cm) to young calves could improve feed intake by improving gut fill. A greater gut capacity might accommodate and digest more solid feed [28]. In addition, the increased availability of feed substrates in the rumen can enhance the production of VFA, which in turn stimulates rumen epithelial development [7]. The structural measurements (body height, body length, heart girth, abdominal girth, and circumference of the cannon bone) are good indicators of the calf's growth, feeding, and management conditions. The effect of hay feeding on structural measurements has shown inconsistencies across studies. Khan et al. [24] reported that structural measurements were not improved in calves fed grass hay (1.2 ± 0.4 cm) compared to a texturized starter. Similarly, Hosseini et al. [29] reported no differences in hip height, body length, body barrel, heart girth, and wither height between starter feed and starter feed plus 15% chopped alfalfa hay (3 mm). However, Gasiorek et al. [30] reported higher hip height in calves fed OH (starter feed containing 10% chopped oat hay on a DM basis) compared with starter feed only. In the present study, calves fed hay had greater body length, heart girth, and abdominal girth during the entire trial period, mostly due to greater growth in the post-weaning period.

Rumen Fermentation

In agreement with previous studies, our study showed that feeding hay increases ruminal pH in calves [7,8,31] compared to the CON calves. This observation was independent of the hay cut length. Calves might spend more time ruminating due to more frequent regurgitation of the long particles, which occupy a greater volume in the rumen. Consequently, more saliva, which buffers the acidic rumen pH, is produced [6]. Suarez-Mena et al.
[32] found no effect of straw of different particle sizes (0.82, 3.04, 7.10, and 12.7 mm as geometric mean), mixed at a rate of 5% into the calf starter, on rumen fermentation and pH in pre-weaning dairy calves. Similarly, Mirzaei et al. [8] compared two different particle sizes (medium, 2.92 mm; or long, 5.04 mm as geometric means) of alfalfa hay supplemented at different levels (low, 8%; or high, 16% on a DM basis) in calf starter and found no effect on VFA production and rumen pH in calves. Terre et al. [33] added chopped oat hay (49.2% of particles between 8 and 20 mm) to a pelleted starter and found that it improved ruminating behavior and resulted in a higher pH compared with starter only. Forage length, which partly determines the time spent on rumination, is an important physical characteristic for maintaining optimal rumen pH [34]. Mirzaei et al. [8] reported that, in calves provided with forage of different sizes (alfalfa hay: short = 1.96 mm or long = 3.93 mm; and wheat straw: short = 2.03 mm or long = 4.10 mm as geometric mean), rumination time increased in those that received forage with long particle sizes. In our experiment, no differences were observed in rumen pH between the OS and OL treatments, probably because the level of forage supplementation was not controlled. Calf performance might depend on the interactions between forage source, level, and particle size [35]. Thus, further studies need to focus not only on feeding but also on the forage source and supplementation level. Rumen ammonia partly comes from the degradation of dietary crude protein available for microbial protein synthesis [36]. Improved microbial development in the rumen enhances the utilization of NH3-N. Hence, to some extent, the concentration of NH3-N reflects the development of the rumen microbiota [37] and the utilization rate of NH3-N [38]. A relatively stable rumen microbiota is gradually achieved as the calves begin to ingest a significant amount of solid feed [39]. Nevertheless, in the early stages of life (three months), the dominant bacteria in the calf rumen are continuously altered [40], especially as an effect of diet changes and age. Feeding high-fiber diets increases the abundance of fiber-degrading microbiota [41] and raises rumen pH, resulting in an optimal rumen environment that can stimulate the rapid development of important rumen bacteria [42]. Therefore, providing hay to calves could decrease the concentration of NH3-N and mitigate the negative effects of its rapid accumulation in the rumen. Karimizadeh et al. [43] reported a higher NH3-N concentration with a feed block diet than with pellet or mash diets. The block diet may have led to a higher protein intake, as with the starter used in this trial, and thus to the increased ammonia concentration. As calves grow, the concentration of rumen NH3-N decreases gradually. Apart from its utilization by the rumen microbiota, NH3-N is also absorbed across the rumen wall [44]. Mirzaei et al. [8] reported that rumen corneum thickness decreased in calves fed alfalfa hay with a long particle size (5.04 mm); at the same time, the NH3-N concentration decreased in the rumen fluid, suggesting improved absorption of NH3-N across the rumen wall. Our results showed that NH3-N was not affected by the length of hay during the different periods. Our results imply that OL (10-12 cm) might slightly limit starter intake compared to OS (3-5 cm), thereby reducing energy supply [45].
Compared to hay, the high portion of carbohydrates in calf starter contributes to greater concentrations of VFA during rumen microbial fermentation. Generally, the CON calves had greater concentrations of TVFA compared to hay calves during the whole trial period. This corresponded to low rumen pH, a common feature in calves fed high levels of concentrates. Similar to our TVFA results, Castells et al. [10] reported that calves fed oat hay could lower the retention time of feed (28.4 h for starter vs. 18.8 h for oat hay) in the gastrointestinal tract compared to those fed starter only, thus reduce fermentation time and VFA concentration. Propionate is a key source of energy [45], while butyrate is important in promoting rumen epithelium development [46]. Our study reported no differences in acetate concentrations among treatments during the pre-weaning period and entire trial, while propionate decreased in the calves fed hay during entire trial. Suárez et al. [31] showed that different forage to concentrate ratio affects acetate concentration in the rumen. Hay contains abundant fibers that attract and encourage the growth of cellulolytic bacteria which could produce large amounts of acetate [47] and increase pH. The concentration of acetate increased significantly as well as C2/C3 during post-weaning in the calves fed hay, indicating that the rumen fermentation favored acetate fermentation in these calves. Lower concentrations of butyrate were observed in OL compared to OS calves. The low fiber content in the form of lignin and cellulose [26] or the increased butyrate metabolism in the rumen epithelium [23] in calves fed OS might have contributed to the low butyrate concentrations. Calf Health Diarrhea and pneumonia are the most common and important calf diseases on dairy farms throughout the year. Several factors, such as poor passive immunity, milk volume, and environmental conditions, can increase the incidence of diarrhea and pneumonia in dairy calves. Ultimately, sick calves experience a decrease in growth and survival rate. Porter et al. [48] documented that fecal score decreased with increasing dietary fiber (low fiber pellet and low fiber coarse mash vs. high fiber pellet and high fiber coarse mash). In the present study, no differences were found between treatments in diarrhea frequency, which implied that hay did not affect diarrhea. However, higher diarrhea frequency was observed in the pre-weaning period (week 1-8). The results might be partially explained by the lack of active immunity that is yet to be fully established during the peri-weaning period [49]. Although there was no difference in the incidence of pneumonia between groups during pre-weaning, the pneumonia occurrence was higher (CON: 47.92%; OS: 43.36%; OL: 45.13%) compared with other study [50] (pneumonia occurrence: 20.71% before weaning). This may be caused by lower environmental temperature during the trial period (average temperature in October, November, and December and January was 14.5 • C, 7 • C, 0 • C and −1 • C, respectively). Environmental temperature changes are important factors leading to the higher occurrence of calf pneumonia, previous researchers [51,52] also documented higher pneumonia occurrence in the autumn (October-December) compared to spring (April-June). Calf Behavior Behavioral responses are the normal animal feedback to the nervous system stimuli, which is important for survival at a particular time or in a certain environment [53]. 
Our study focused on standing, lying, eating, walking chewing and ruminating, abnormal behaviors, self-grooming, and heading out of the pen. Resting is an important calf behavior that has been associated with improved calf welfare. Previous studies [54] show that increased walking, standing, or starter intake time may enhance the maintenance energy expenditure and heat increment thus reducing feed efficiency. Consequently, calves tend to lie down instead of standing in order to reduce energy consumption. Calves fed with hay spent less time on standing, and more time on lying. Since hay digestion is difficult for young calves [7,11], increased lying time enables the calf to spend more time ruminating [17] as rumination is also an energy-consuming process [55]. Indeed, our study and others have shown that calves fed hay devoted more time to rumination. Interestingly, Terre et al. [33] reported lower-lying time in calves fed hay. The differences in lying and ruminating behavior between the two studies could be linked to how we defined lying time. We considered time spent ruminating while the calf was lying and treated the two activities (chewing and lying) separately [33]. Compared with the CON group, hay fed calves spent less time eating calf starter but their intake was higher. This increase in starter intake could be associated with greater rumen capacity [29]. Consequently, OS and OL calves might require more time than CON to digest the large amounts of calf starter consumed in just a few minutes. However, longer hay supplementation also results in higher rumen fill than shorter hay, which might result in reduced feed intake [56]. It has been shown that calf satiety can reduce the time calves spend performing abnormal behavior (i.e., if the calf licked any surface such as fences, floors, windshields) [57]. Castells et al. [17] also found that calves fed hay devoted less time on abnormal behavior. More self-grooming behavior and less head out of pen were found in both groups of calves fed hay. Self-grooming occurs mainly when the calf is in a frustration mood, while head out of pen reflects the curiosity and distress caused by its separation from other calves [58]. The present study found that providing hay could reduce the time spent on head out of pen which suggests that providing hay to calves can reduce distress in calves, especially at weaning time. However, the concrete connection between these behaviors requires further study. In our study, starter and oat hay were fed separately in two different buckets placed side by side, allowing the calves to consume either feed freely. The feeding protocol was intended to reduce stress (calves can eat according to physiological needs, such as meeting nutrient needs and preventing rumen acidosis) [59] and allow the researchers to observe the feeding behavior among calves easily. However, most calves spilled their hay portion on the ground, making it difficult to accurately calculate the forage to concentrate ratio. Ethical Statement Animal care and use were approved by the Ethical Committee of China Agricultural University (Yuanmingyuan Road, Haidian District, Beijing, China; Case number: Aw10601202-1-2; Date of approval: 1 June 2021). Samples Selection and Treatments This research was conducted at Zhongyuan Animal Husbandry Co. Ltd. 
(Shijiazhuang, China) from October 2018 to January 2019 (average temperature in October, November, December, and January was 14.5 • C, 7 • C, 0 • C and −1 • C, respectively) and this work is part of a greater project [60]. Two hundred and ten healthy Holstein female calves (initial BW = 35.8 ± 2.6 kg; serum total protein ≥ 5.5 g/dL) were randomly allocated into three groups: calves fed basic diet (calf starter) without hay (CON) or basic diet with oat hay, as either long (OL: 10-12 cm) or short (OS: 3-5 cm) hay cut. The experiment lasted for 70 days (from birth to the end of week 10). Calves were fed 6 L of colostrum in two portions (4 L colostrum within 1 h after birth, 2 L colostrum 8 h later). Pasteurized milk (60 • C, 30 min) was provided twice daily at 07:00 and 14:00 in equal amounts as follows: 6 L/day from days 2 to days 7, 8 L/day from days 8 to 42, 6 L/day from days 43 to 49, 4 L/day from days 50 to 56. Weaning was imposed on day 57 and calves remained in their individual hutches (Hutches were designed with a fenced area, the inside dimensions of the hutch were 215 cm long, 220 cm wide and 136 cm high and the outside fence were 160 cm long, 110 cm wide and 120 cm high; the space between two individual hutches was 80 cm). Clean water, calf starter and oat hay were provided ad libitum. Calf starter was fed from day four while the hay was fed from the second week of life. Same batches of calf starter and oat hay were offered throughout the experimental period. Fresh starter and refusals were weighed daily at 8:00 a.m. after morning milk feeding. Due to the limitations of the hay bucket (Capacity: 5 L; fixed on hutches manually) and the lightness of the hay, most of the calves spilled their portions. Hence, we could not accurately determine the daily intake of hay and these data were excluded in the final analysis. The calf hutches were kept clean and dry throughout the trial. The ingredient and chemical composition of calf starter and oat hay are shown in Table 7. Feed Analysis and Body Measurements Representative starter and oat hay samples were collected once a week for further determination of dry matter (DM), crude protein (CP), ether extract (EE), crude Ash (Ash), neutral detergent fiber (NDF), and acid detergent fiber (ADF) following the methods of AOAC [61]. The concentration of calcium, phosphorus, sodium chloride, lysine was recorded from the label on the package. The body weight, body height, body length, heart girth, abdominal girth, and circumference of the cannon bone of the leg of calves were measured before morning feeding on days 1, 14, 28, 42, 56, 63, and 70. Collection and Determination of the Rumen Fluid Samples Fourteen (14) healthy Holstein female calves were selected randomly from each treatment for rumen fluid sampling on days 14, 28, 56, 63, and 70. Rumen fluid was collected with esophageal tube (2 mm wall thickness, 6 mm internal diameter; Anscitech Co., Ltd., Wuhan, China) 3 h after morning feeding. The first 20 mL of rumen fluid was discarded to reduce the chances of saliva contamination. The rumen fluid was filtered through four layers of cheese cloth, then divided and placed into two 15 mL centrifuge tubes. The pH value of rumen fluid was measured immediately with a pH meter (HORIBA Advanced Techno Co., Ltd., Osaka, Japan). Rumen fluid was then stored at −20 • C for determination of the concentration of VFA and NH 3 -N with gas chromatography [62] and Phenol-sodium hypochlorite colorimetry [63], respectively. 
Evaluation of Calf Health Status

Fecal scores were determined before morning and afternoon milk feeding based on a standardized diarrhea scoring system [64]. The reference criteria for the fecal score used a 4-point scale (1-4 points; calves were considered diarrheic when the fecal score was >2). Feces were scored as 1 when calves had firm but not hard feces; 2 when feces did not hold form and piled but spread slightly; 3 when feces spread readily to a depth of about 6 mm; and 4 when feces had a liquid consistency and splattered easily. The rate and frequency of diarrhea were calculated to reflect the degree of diarrhea in the calves. Pneumonia scores were evaluated based on the scoring system presented by Love et al. [65]. Ocular discharge (any discharge, 2 points), nasal discharge (any discharge, 3 points), coughing (induced or spontaneous, 2 points), ear and head carriage (ear droop or head tilt, 5 points), and respiratory quality (abnormal respiration, 2 points) were considered during the pneumonia scoring. Calves were confirmed positive when their total score was ≥4.

Calf Behavior

The same fourteen calves from which rumen fluid was collected were used to observe calf behavior on days 57, 63, and 70 of the experimental period using a video camera (Hikvision Digital Technology Co., Ltd., Hangzhou, Zhejiang Province, China). The duration of a given behavior for each calf was recorded using the time sampling method [66,67], which involves recording a portion of the total time the calf performs a given behavior. We recorded the duration of each behavior within the first 20 min of each hour; a total of 24 twenty-minute durations were recorded throughout the day (24 h/day). These durations were then averaged and multiplied by three. The behaviors studied included standing, lying, eating starter and hay, drinking, walking, rumination, abnormal behavior, self-grooming, and heading out of the pen (Table 8).

Table 8. Definitions of the examined behaviors.
Standing 1: four hooves on the ground, whether moving or not.
Lying 1: lying on the sternum with the head held in a raised position or down.
Eating starter 1: head in the starter feed bucket accompanied by chewing movements.
Eating hay 1: head in the hay feed bucket accompanied by chewing movements.
Drinking 1: mouth around the drinker.
Walking 1: stepping and moving.
Chewing and ruminating 1: chewing irregularly and repeatedly without food in the mouth.
Abnormal behavior 2: calf licked any surface such as fences, floors, or windshields.
Self-grooming 1: calf licked itself with its tongue.
Head out of pen 3: calf's head out of the pen to look around, without engaging in any feeding activity.
1 Adapted from [68]. 2 Adapted from [17]. 3 Adapted from [69].

Statistical Analysis

All raw data were processed using Excel. Data on BW, body structure, rumen pH, NH3-N, and VFA concentration at days 1, 14, 28, 42, 56, 63, and 70 were analyzed separately in two periods. ADG and starter intake data were pooled and analyzed bi-weekly and weekly, respectively, and then analyzed separately by period (pre-weaning, days 1-56; post-weaning, days 57-70). Calf behavior data were recorded for each calf by day (days 57, 63, and 70). The above data were analyzed by a mixed model (PROC MIXED, version 9.2; SAS Institute, Inc., Cary, NC, USA) with time as a repeated measure.
The model included the fixed effects of treatment, time (day or week) and their interactions (treatment × time), and calf as a random effect. The data of BW, structural measurements, ruminal pH, NH 3 -N, VFA proportion, ADG and starter intake for the entire trial (day 1-70 or week 1-10) used the mixed model with fixed effect of treatment, period (pre-weaning, post-weaning), and their interactions (treatment × period) and calf as a random effect. Fecal scores for each calf were recorded daily and used to calculate diarrhea frequency and diarrhea duration. Data on diarrhea frequency and diarrhea duration were analyzed using a GLIMMIX procedure in SAS (version 9.2, SAS Institute Inc., Cary, NC, USA) with fixed effect of treatment, time (day 1-70) and their interactions (treatment × time), and the random effect of calf within treatment [70]. Data on pneumonia occurrence were analyzed using a chi-square test model (PROC FREQ, version 9.2; SAS Institute, Inc., Cary, NC, USA). p < 0.05 showed significant differences, p < 0.01 showed extremely significant differences, while trends were indicated as 0.05 < p ≤ 0.10. Conclusions Our results showed that feeding oat hay to pre-weaning dairy calves can improve growth performance, rumen fermentation, and reduce abnormal behaviors compared with the starter only. Furthermore, the ADG were significantly improved in calves fed oat hay cut at 3-5 cm during pre-weaning and entire trial. We suggested that feeding short oat hay (3-5 cm) might be the best hay size to feed calves on dairy farms that supply forage to their calves. Further studies are recommended to determine other factors and their interactions that may contribute to better performance in calves.
2021-12-22T16:15:24.804Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "5654719ecaa40d63cdbc38494d2d496647400b80", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/11/12/890/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2c8099bd5d643c20767c37e79c600d989278f90", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
2283638
pes2o/s2orc
v3-fos-license
Friend or foe: Endoplasmic reticulum protein 29 (ERp29) in epithelial cancer Highlights • ERp29 regulates epithelial cell plasticity and the mesenchymal–epithelial transition.• ERp29 shows a tumor suppressive function in primary tumor development.• ERp29 is potentially associated with distant metastasis in cancer.• ERp29 modulates cell survival against genotoxic stress.• Thus, ERp29 displays dual functions as a “friend or foe” in epithelial cancer. Introduction The endoplasmic reticulum (ER) is found in all eukaryotic cells and is complex membrane system constituting of an extensively interlinked network of membranous tubules, sacs and cisternae. It is the main subcellular organelle that transports different molecules to their subcellular destinations or to the cell surface [10,85]. The ER contains a number of molecular chaperones involved in protein synthesis and maturation. Of the ER chaperones, protein disulfide isomerase (PDI)-like proteins are characterized by the presence of a thioredoxin domain and function as oxido-reductases, isomerases and chaperones [33]. ERp29 lacks the active-site double-cysteine (CxxC) motif and does not belong to the redoxactive PDIs [5,47]. ERp29 is recognized as a characterized resident of the cellular ER, and it is expressed ubiquitously and abundantly in mammalian tissues [50]. Protein structural analysis showed that ERp29 consists of N-terminal and C-terminal domains [5]: N-terminal domain involves dimerization whereas the C-terminal domain is essential for substrate binding and secretion [78]. The biological function of ERp29 in protein secretion has been well established in cells [8,63,67]. ERp29 is proposed to be involved in the unfolded protein response (UPR) as a factor facilitating transport of synthesized secretory proteins from the ER to Golgi [83]. The expression of ERp29 was demonstrated to be increased in cells exposed to radiation [108], sperm cells undergoing maturation [42,107], and in certain cell types both under the pharmacologically induced UPR and under the physiological conditions (e.g., lactation, differentiation of thyroid cells) [66,82]. Under ER stress, ERp29 translocates the precursor protein p90ATF6 from the ER to Golgi where it is cleaved to be a mature and active form p50ATF by protease (S1P and S2P) [48]. In most cases, ERp29 interacts with BiP/GRP78 to exert its function under ER stress [65]. ERp29 is considered to be a key player in both viral unfolding and secretion [63,67,77,78] Recent studies have also demonstrated that ERp29 is involved in intercellular communication by stabilizing the monomeric gap junction protein connexin43 [27] and trafficking of cystic fibrosis transmembrane conductance regulator to the plasma membrane in cystic fibrosis and non-cystic fibrosis epithelial cells [90]. It was recently reported that ERp29 directs epithelial Na(+) channel (ENaC) toward the Golgi, where it undergoes cleavage during its biogenesis and trafficking to the apical membrane [40]. ERp29 expression protects axotomized neurons from apoptosis and promotes neuronal regeneration [111]. These studies indicate a broad biological function of ERp29 in cells. Recent studies demonstrated a tumor suppressive function of ERp29 in cancer. It was found that ERp29 expression inhibited tumor formation in mice [4,87] and the level of ERp29 in primary tumors is inversely associated with tumor development in breast, lung and gallbladder cancer [4,29]. 
However, its expression is also responsible for cancer cell survival against genotoxic stress induced by doxorubicin and radiation [34,76,109]. The most recent studies demonstrate other important roles of ERp29 in cancer cells such as the induction of mesenchymal-epithelial transition (MET) and epithelial morphogenesis [3,4]. MET is considered as an important process of transdifferentiation and restoration of epithelial phenotype during distant metastasis [23,52]. These findings implicate ERp29 in promoting the survival of cancer cells and also metastasis. Hence, the current review focuses on the novel functions of ERp29 and discusses its pathological importance as a ''friend or foe'' in epithelial cancer. Epithelial-mesenchymal transition (EMT) and MET The EMT is an essential process during embryogenesis [6] and tumor development [43,96]. The pathological conditions such as inflammation, organ fibrosis and cancer progression facilitate EMT [16]. The epithelial cells after undergoing EMT show typical features characterized as: (1) loss of adherens junctions (AJs) and tight junctions (TJs) and apical-basal polarity; (2) cytoskeletal reorganization and distribution; and (3) gain of aggressive phenotype of migration and invasion [98]. Therefore, EMT has been considered to be an important process in cancer progression and its pathological activation during tumor development induces primary tumor cells to metastasize [95]. However, recent studies showed that the EMT status was not unanimously correlated with poorer survival in cancer patients examined [92]. In addition to EMT in epithelial cells, mesenchymal-like cells have capability to regain a fully differentiated epithelial phenotype via the MET [6,35]. The key feature of MET is defined as a process of transdifferentiation of mesenchymal-like cells to polarized epithelial-like cells [23,52] and mediates the establishment of distant metastatic tumors at secondary sites [22]. Recent studies demonstrated that distant metastases in breast cancer expressed an equal or stronger E-cadherin signal than the respective primary tumors and the re-expression of E-cadherin was independent of the E-cadherin status of the primary tumors [58]. Similarly, it was found that E-cadherin is re-expressed in bone metastasis or distant metastatic tumors arising from E-cadherin-negative poorly differentiated primary breast carcinoma [81], or from E-cadherin-low primary tumors [25]. In prostate and bladder cancer cells, the nonmetastatic mesenchymal-like cells were interacted with metastatic epithelial-like cells to accelerate their metastatic colonization [20]. It is, therefore, suggested that the EMT/MET work co-operatively in driving metastasis. Molecular regulation of EMT/MET E-cadherin is considered to be a key molecule that provides the physical structure for both cell-cell attachment and recruitment of signaling complexes [75]. Loss of E-cadherin is a hallmark of EMT [53]. Therefore, characterizing transcriptional regulators of E-cadherin expression during EMT/MET has provided important insights into the molecular mechanisms underlying the loss of cell-cell adhesion and the acquisition of migratory properties during carcinoma progression [73]. Several known signaling pathways, such as those involving transforming growth factor-b (TGF-b), Notch, fibroblast growth factor and Wnt signaling pathways, have been shown to trigger epithelial dedifferentiation and EMT [28,97,110]. 
These signals repress transcription of epithelial genes, such as those encoding E-cadherin and cytokeratins, or activate transcription programs that facilitate fibroblast-like motility and invasion [73,97]. The involvement of microRNAs (miRNAs) in controlling EMT has also been emphasized [11,12,18]. MiRNAs are small non-coding RNAs (23 nt) that silence gene expression by pairing to the 3′ UTR of target mRNAs to cause their posttranscriptional repression [7]. MiRNAs can be characterized as "mesenchymal miRNAs" and "epithelial miRNAs" [68]. The "mesenchymal miRNAs" play an oncogenic role by promoting EMT in cancer cells; for instance, the well-known miR-21 and miR-103/107 are EMT inducers that act by repressing Dicer and PTEN [44]. The miR-200 family has been shown to comprise major "epithelial miRNAs" that regulate MET by silencing the EMT transcriptional inducers ZEB1 and ZEB2 [13,17]. MiRNAs from this family are considered to be predisposing factors for cancer cell metastasis; for instance, elevated levels of the epithelial miR-200 family in primary breast tumors are associated with poorer outcomes and metastasis [57]. These findings support a potential role of "epithelial miRNAs" in MET to promote metastatic colonization [15].

ERp29 promotes MET in breast cancer

The role of ERp29 in regulating MET has been established in basal-like MDA-MB-231 breast cancer cells. It is known that myosin light chain (MLC) phosphorylation initiates myosin-driven contraction, leading to reorganization of the actin cytoskeleton and formation of stress fibers [55,56]. ERp29 expression in this cell type markedly reduced the level of phosphorylated MLC [3]. These results indicate that ERp29 regulates cortical actin formation through a mechanism involving MLC phosphorylation (Fig. 1). In addition to the phenotypic change, ERp29 expression leads to: expression and membranous localization of the epithelial cell marker E-cadherin; expression of the epithelial differentiation marker cytokeratin 19; and loss of the mesenchymal cell markers vimentin and fibronectin [3] (Fig. 1). In contrast, knockdown of ERp29 in epithelial MCF-7 cells promotes acquisition of EMT traits, including a fibroblast-like phenotype, enhanced cell spreading, decreased expression of E-cadherin, and increased expression of vimentin [3,4]. These findings further substantiate a role of ERp29 in modulating MET in breast cancer cells.

ERp29 targets E-cadherin transcription repressors

Transcription repressors such as Snai1, Slug, ZEB1/2 and Twist are considered the main regulators of E-cadherin expression [19,26,32]. Mechanistic studies revealed that ERp29 expression significantly down-regulated transcription of these repressors, leading to their reduced nuclear expression in MDA-MB-231 cells [3,4] (Fig. 2). Consistent with this, the extracellular signal-regulated kinase (ERK) pathway, an important upstream regulator of Slug and Ets1, was strongly inhibited [4]. Apparently, ERp29 down-regulates the expression of these E-cadherin transcription repressors by repressing the ERK pathway. Interestingly, ERp29 over-expression in basal-like BT549 cells resulted in incomplete MET and did not significantly affect the mRNA or protein expression of Snai1, ZEB2 and Twist, but increased the protein expression of Slug [3]. The differential regulation of these transcriptional repressors of E-cadherin by ERp29 in these two cell types may occur in a cell-context-dependent manner.
ERp29 antagonizes Wnt/ b-catenin signaling Wnt proteins are a family of highly conserved secreted cysteine-rich glycoproteins. The Wnt pathway is activated via a binding of a family member to a frizzled receptor (Fzd) and the LDL-Receptor-related protein co-receptor (LRP5/6). There are three different cascades that are activated by Wnt proteins: namely canonical/b-catenin-dependent pathway and two non-canonical/ b-catenin-independent pathways that include Wnt/Ca 2+ and planar cell polarity [84]. Of note, the Wnt/b-catenin pathway has been extensively studied, due to its important role in cancer initiation and progression [79]. The presence of Wnt promotes formation of a Wnt-Fzd-LRP complex, recruitment of the cytoplasmic protein Disheveled (Dvl) to Fzd and the LRP phosphorylation-dependent recruitment of Axin to the membrane, thereby leading to release of b-catenin from membrane and accumulation in cytoplasm and nuclei. Nuclear b-catenin replaces TLE/Groucho co-repressors and recruits co-activators to activate expression of Wnt target genes. The most important genes regulated are those related to proliferation, such as Cyclin D 1 and c-Myc [46,94], which are overexpressed in most b-catenin-dependent tumors. When b-catenin is absent in nucleus, the transcription factors T-cell factor/lymphoid enhancer factors (TCF/LEF) recruits co-repressors of the TLE/Groucho family and function as transcriptional repressors. b-catenin is highly expressed in the nucleus of mesenchymal MDA-MB-231 cells. ERp29 over-expression in this type of cells led to translocation of nuclear b-catenin to membrane where it forms complex with E-cadherin [3] (Fig. 3). This causes a disruption of b-catenin/TCF/LEF complex and abolishes its transcription activity. Indeed, ERp29 significantly decreased the expression of cyclin D 1 /D 2 [36], one of the downstream targets of activated Wnt/b-catenin signaling [94], indicating an inhibitory effect of ERp29 on this pathway. Meanwhile, expression of ERp29 in this cell type increased the nuclear expression of TCF3, a transcription factor regulating cancer cell differentiation while inhibiting self-renewal of cancer stem cells [102,106]. Hence, ERp29 may play dual functions in mesenchymal MDA-MB-231 breast cancer cells by: (1) suppressing activated Wnt/b-catenin signaling via b-catenin translocation; and (2) promoting cell differentiation via activating TCF3 (Fig. 3). Because b-catenin serves as a signaling hub for the Wnt pathway, it is particularly important to focus on b-catenin as the target of choice in Wnt-driven cancers. Though the mechanism by which ERp29 expression promotes the disassociation of b-catenin/TCF/LEF complex in MDA-MB-231 cells remains elusive, activating ERp29 expression may exert an inhibitory effect on the poorly differentiated, Wnt-driven tumors. Cell adherens and tight junctions Adherens junctions (AJs) and tight junctions (TJs) are composed of transmembrane proteins that adhere to similar proteins in the adjacent cell [69]. The transmembrane region of the TJs is composed mainly of claudins, tetraspan proteins with two extracellular loops [1]. AJs are mediated by Ca 2+ -dependent homophilic interactions of cadherins [71] which interact with cytoplasmic catenins that link the cadherin/catenin complex to the actin cytoskeleton [74]. The cytoplasmic domain of claudins in TJs interacts with occludin and several zona occludens proteins (ZO1-3) to form the plaque that associates with the cytoskeleton [99]. 
The AJs form and maintain intercellular adhesion, whereas the TJs serve as a diffusion barrier for solutes and define the boundary between apical and basolateral membrane domains [21]. The AJs and TJs are required for the integrity of the epithelial phenotype, as well as for epithelial cells to function as a tissue [75]. The TJs are closely linked to the proper polarization of cells for the establishment of epithelial architecture [86]. During cancer development, epithelial cells lose the capability to form TJs and correct apico-basal polarity [59]. This subsequently causes the loss of contact inhibition of cell growth [91]. In addition, reductions of ZO-1 and occludin were found to be correlated with poorly defined differentiation, higher metastatic frequency, and lower survival rates [49,64]. Hence, TJ proteins have a tumor-suppressive function in cancer formation and progression.

Apical-basal cell polarity

The apical-basal polarity of epithelial cells in an epithelium is characterized by the presence of two specialized plasma membrane domains: namely, the apical surface and the basolateral surface [30]. In general, epithelial cell polarity is determined by three core complexes: (1) the partitioning-defective (PAR) complex; (2) the Crumbs (CRB) complex; and (3) the Scribble complex [2,30,45,51]. The PAR complex is composed of two scaffold proteins (PAR6 and PAR3) and an atypical protein kinase C (aPKC) and is localized to the apical junction domain for the assembly of TJs [31,39]. The Crumbs complex is formed by the transmembrane protein Crumbs and cytoplasmic scaffolding proteins such as the homologue of Drosophila Stardust (Pals1) and the Pals-associated tight junction protein (Patj) and localizes to the apical domain [38]. The Scribble complex comprises three proteins, Scribble, Disc large (Dlg) and Lethal giant larvae (Lgl), and is localized in the basolateral domain of epithelial cells [100].

ERp29 restores the establishment of AJs and TJs

ERp29 is involved in the establishment of the apical-junctional complex, which is formed by AJs and TJs [3]. These complexes are located in the upper portion of a polarized epithelial cell and are composed of transmembrane proteins that interact with molecules in adjacent cells [69]. In MDA-MB-231 cells, β-catenin is expressed and localized in the nucleus. ERp29 over-expression resulted in increased expression and membrane localization of E-cadherin and translocation of β-catenin from the nucleus to the cell membrane [3] (Fig. 3). The ERp29-mediated membrane localization of β-catenin facilitates the assembly of the E-cadherin/β-catenin complex and the formation of AJs [45]. ERp29 over-expression led to increased levels of TJ components such as ZO-1 and occludin at the membrane and cell-cell junctions in breast cancer cells [3] (Fig. 4). The increased expression of ZO-1 and occludin is regulated at the translational level, as ERp29 over-expression did not affect their mRNA levels [3]. The role of ERp29 in ZO-1 protein expression and trafficking was further demonstrated in ERp29-knockdown MCF-7 cells. Translational up-regulation of ZO-1 and occludin by ERp29 in these cell models may provide a mechanism by which ERp29 induces tumor suppression in breast cancer [4]. In addition, the formation of cortical actin filaments is critical for the establishment of AJs and TJs and the regulation of epithelial cell apical-basal polarity [75].
Reorganization of the actin cytoskeleton induces recruitment of ZO-1 to the cell periphery before the assembly of junctional complexes between adjacent cells [37]. The ERp29-induced restoration of ZO-1 expression may therefore be associated with actin reorganization. Hence, ERp29 plays a critical role in restoration of an epithelial-like phenotype by establishing cell-cell contact.

ERp29 restores cell polarity

In line with the role of ERp29 in regulating MET and re-establishment of the epithelial-like phenotype, ERp29 over-expression restores epithelial polarity [3] (Fig. 4). In mesenchymal MDA-MB-231 and basal-like BT549 cells, ERp29 expression did not affect mRNA levels of Par3 and Scribble, but increased their protein translation and membrane distribution. It was reported that Cdc42, a small GTPase, is one of the key regulators modulating the expression of Par6 and aPKC [54,60] and has a critical role in establishing cell polarity in epithelial cells [72]. However, ERp29 over-expression affected neither the expression nor the localization of Cdc42, Par6, and aPKC, indicating that these PAR complex members are not involved in ERp29-regulated apical polarity. Thus, ERp29 is likely to specifically up-regulate Par3 protein expression during epithelial morphogenesis. ERp29 over-expression also did not markedly alter the expression and distribution of Crumb1 [3], a member of the Crumbs complex [93]. Similar to that observed for Par3, ERp29 over-expression resulted in a significant increase of protein expression of Scribble in both MDA-MB-231 and BT549 cells [3]. Suppression of ERp29 by shRNA in epithelial MCF-7 cells resulted in reduction of these core polarity proteins, leading to the disruption of cell-cell contact and increased cell spreading [3]. Previous studies demonstrated that polarity proteins are synthesized in the endoplasmic reticulum, transported to the Golgi complex, and sorted at the trans-Golgi network into distinct apical and basolateral vesicular routes [88]. Given that ERp29 mediates the folding and secretion of newly synthesized proteins in the ER system [8], it is plausible that, in addition to increasing protein expression of TJs and the core polarity complex, ERp29 may also have a critical role in protein trafficking and the maintenance of protein stability to modulate epithelial cell integrity. In agreement with this, the ERp29-induced tumor suppression in breast cancer cells is linked to the integrity of apical-basal polarity, which is crucial for the prevention of tumor development [80].

ERp29 and primary tumor progression

The association of ERp29 with primary tumor development has been studied in only a few types of cancer. In lung tumors, ERp29 expression was observed to be variable within and between tumor stages, and inversely correlated with tumor progression [87]. A tissue array study in 98 breast tumors showed that ERp29 expression was reduced with the progression of tumor stage and grade [4]. In gallbladder adenocarcinoma, the ERp29-positive rate is significantly lower in poorly differentiated tumors (vs. well-differentiated tumors) and in tumors at T4 stage (vs. T1 stage) [29]. Taken together, these results indicate a negative association of ERp29 expression with primary tumor progression in these cancers. However, to further substantiate ERp29's role in primary tumor development, extensive studies are needed in a large cohort of clinical specimens.

ERp29 and metastasis

The association of MET and distant metastasis has been well studied.
For instance, analysis of MDA-MB-468 xenografts revealed that some tumor cells exhibit a metastable phenotype, characterized by the expression of both vimentin and E-cadherin [9]. The cells at the invasive front were positive for vimentin and negative for E-cadherin, consistent with an EMT. On the other hand, the lymphovascular-invaded tumor cells showed a gradual transition from the mesenchymal to the metastable and then to the epithelial phenotype, indicating that a MET process occurs as an early event in the metastatic process. Given the function of ERp29 in promoting MET in breast cancer cells [4], the role of ERp29 in cancer cell metastasis has been examined. Recent studies showed that ERp29 was significantly increased in the highly metastatic variant of parental MDA-MB-231 cells compared to the parental cells [104]. Similarly, ERp29 was found to be one of the proteins that were highly expressed in metastatic tissues compared to primary uveal melanoma tissues [61]. In colon cancer, ERp29, together with CLIC4 and Smac/DIABLO, was integrated into a novel panel associated with metastasis and used to stratify the prognostic risk of colorectal cancer [29]. These results may implicate an important role of ERp29 in cancer cell metastasis and disease recurrence. Indeed, our recent studies revealed that high expression of ERp29 in breast tumors was strongly associated with a reduced time to disease relapse and shorter patient survival (unpublished data). The role of MET in facilitating distant metastasis has been clinically recognized by the observation that MET is able to reversibly convert disseminated mesenchymal cancer cells to an epithelial cell state [23]. Hence, ERp29 may have a critical role in promoting distant metastasis during cancer progression, although this needs to be investigated further. Consequently, understanding the association of ERp29 with disease recurrence and distant metastasis is of significance in assessing its prognostic value in clinical applications. The tumor microenvironment is an important factor in regulating cancer metastasis via MET [41,89]. The interplay among tumor cells, host cells, and the extracellular matrix in the tumor ecosystem endows cancer cells with malignant properties, leading to metastatic dissemination. It has been reported that the expression of ERp29 is significantly affected by culture conditions: ERp29 expression was significantly increased in xenografts compared with the same cell types cultured under monolayer or spheroid conditions [87]. This indicates that ERp29 could be physiologically regulated in the tumor ecosystem. When MDA-MB-231 cells were co-cultured with hepatocytes, E-cadherin was re-expressed, resulting in increased chemo-resistance [24]. In vivo studies demonstrated that MDA-MB-231 cells formed E-cadherin-negative primary tumors, but showed re-activated E-cadherin expression at the lung metastatic site via MET, suggesting an effect of the microenvironment on cells at the metastatic site [25]. However, it is uncertain whether ERp29 was increased in parallel with metastasis in this in vivo experiment. Although tumor microenvironment-induced MET and metastasis is a complex process, investigating the involvement of ERp29 in MET and metastasis may enhance our understanding of its pathological functions in cancer progression.
ERp29 confers resistance to genotoxic stress in cancer cells

To survive in a stressful environment, cells have developed a variety of responsive mechanisms to cope with stress-induced cell death, such as cell cycle arrest and activation of DNA repair. Recent studies have demonstrated that ERp29 is a novel molecule protecting cells from the genotoxic stress induced by doxorubicin and radiation [76,108,109]. Doxorubicin is one of the conventional chemotherapeutic drugs for cancer treatment, acting via intercalation of DNA and subsequent activation of the tumor suppressor p53 [62]. While most cancer cells are sensitive to doxorubicin and eventually killed by this drug, some cells develop an adaptive response to doxorubicin-induced genotoxic stress and survive. Clinically, chemo-resistance of cancer cells is a predominant cause of cancer recurrence after long-term treatment. It has been found that doxorubicin induces ERp29 expression and that ERp29 expression is causally linked to resistance to this drug by a mechanism that requires PERK [34]. PERK activation promotes the phosphorylation of the general translation factor eIF2α and attenuates global protein translation, including that of cyclin D1 [14], thereby resulting in inhibition of the cell cycle. Apparently, the doxorubicin-induced ERp29 facilitates the cell's response to genotoxic stress, which ultimately results in resistance against chemotherapy by doxorubicin. Indeed, when ERp29 was over-expressed in MDA-MB-231 cells, these cells showed a significant G0/G1 growth arrest and resistance to doxorubicin treatment, whereas knockdown of ERp29 in MCF-7 cells led to an enhanced sensitivity of these cells to doxorubicin [109]. Mechanistic studies revealed a critical role of up-regulated Hsp27 in the ERp29-induced doxorubicin resistance in these cell models. In addition, the ERp29-induced activation of the ER stress-related XBP-1/p58IPK cell survival pathway also plays a pivotal role in this aspect [36]. In support of this, silencing of p58IPK in MCF-7 cells and ERp29-overexpressing MDA-MB-231 clones re-sensitizes them to doxorubicin by activating ATF4/CHOP/caspase-3 pro-apoptotic signaling [36]. In an early study, when cells were exposed to ionizing radiation, ERp29 expression was elevated in several types of cultured cells [108]. Concomitantly, splicing of XBP-1 mRNA under radiation was increased, suggesting that the involvement of an ER stress sensor might account for the induction of ERp29 gene expression [108]. In nasopharyngeal carcinoma (NPC) cells, ERp29 knockdown attenuated the radio-resistance of NPC CNE-1 cells, whereas ERp29 over-expression enhanced the radio-resistance of NPC CNE-2 cells. Hence, ERp29 could potentiate resistance to radiation in NPC cells [76]. Furthermore, ERp29 was significantly expressed in radio-resistant NPC tissues compared to radio-sensitive NPC tissues, indicating a potential role of ERp29 in radio-resistance in NPC tumors [103]. Our recent studies in MDA-MB-231 and MCF-7 breast cancer cells indicated that ERp29 expression increased the post-irradiation survival rate, whereas ERp29 repression by siRNA reduced the post-irradiation survival rate and increased γ-H2AX expression and DNA damage induced by irradiation (unpublished data). These findings further indicate a protective role of ERp29 in DNA integrity and stability. Mechanistic studies revealed that ERp29 over-expression in MDA-MB-231 cells significantly up-regulated the expression of the DNA repair gene O6-methylguanine-DNA methyltransferase (MGMT).
MGMT repairs mutagenic and cytotoxic interstrand DNA cross-links by rapidly reversing alkylation, including methylation, at the O6 position of guanine, transferring the alkyl group to the active site of the enzyme [101]. In addition to its DNA repair function, MGMT plays a role in integrating DNA damage/repair-related signals with replication, cell cycle progression, and genomic stability [70,105]. Hence, MGMT is also an important factor in ERp29-induced anti-genotoxic stress responses and cell survival. The ERp29-upregulated DNA repair pathway might cause resistance to chemo- and radiotherapy, and thus targeting this pathway might offer an alternative strategy for efficient treatment of chemo- and/or radio-resistant cancer cells.

Conclusion

The current data from breast cancer cells support the idea that ERp29 can function as a tumor-suppressive protein, in terms of suppression of cell growth and primary tumor formation and inhibition of signaling pathways that facilitate EMT. Nevertheless, the significant role of ERp29 in cell survival against drugs, induction of cell differentiation, and potential promotion of MET-related metastasis may lead us to re-assess its function in cancer progression, particularly in distant metastasis. Hence, it is important to explore in detail ERp29's role in cancer as a "friend or foe" and to elucidate its clinical significance in breast cancer and other epithelial cancers. Targeting ERp29 and/or its downstream molecules might be an alternative molecular therapeutic approach for the treatment of chemo/radio-resistant metastatic cancer.
EVALUATION AND COMPARISON OF ANTI-SICKLING ACTIVITIES OF MACERATED SEEDS OF CAJANUS CAJAN AND PHENYLALANINE

Sickle cell patients' interest in traditional pharmacopoeia comes from the healing promise made by traditional practitioners and the high cost of medicines. The objective of this study was to evaluate the anti-sickling activity of macerated seeds of Cajanus cajan and compare its activity to that of phenylalanine. An induction of sickling with 2% sodium metabisulfite was performed on 30 blood samples from SSFA2 homozygous sickle cell patients. Macerated seeds of Cajanus cajan at concentrations of 10 to 500 mg/ml and 10 mg/ml phenylalanine were then added. The reading was made after a contact time of 30 minutes. All extracts from the seeds of Cajanus cajan led to a reduction in the percentage of sickle cells, which fell from 64.70% to 19.03%. The phenylalanine solution also caused a reduction in the percentage of sickling, which fell from 64.70% to 22.76%. The activity of the maceration at the concentration of 400 mg/ml was higher than that of phenylalanine. These results advocate for the use of Cajanus cajan seeds in the diet of sickle cell patients to reduce the occurrence of painful crises.

Introduction:-
Sickle cell disease, or sickle cell anemia, is a serious inherited genetic disorder with recessive autosomal transmission. It is caused by an abnormal hemoglobin (Hb S), which polymerizes in the deoxygenated state, resulting in a deformation of red cells, which take the shape of a crescent moon. Sickle cell disease is a chronic disease whose management is lifelong.
The only current curative treatment is hematopoietic stem cell transplantation, which unfortunately cannot be performed in Ivory Coast. Faced with socio-economic problems and problems of accessibility to pharmaceutical products in Africa, 90% of the population resorts to the many remedies held by traditional practitioners. Previous studies showed that Cajanus cajan and Fagara xanthoxyloides are among the species used for the management of sickle cell disease (Sofowora and Isaacs-Sodeye, 1971). Previous studies (N'Draman-Donou et al., 2013) showed that the aqueous extract of Cajanus cajan seeds reduces the formation of sickle cells by 50%. In a study conducted by Ekéké et al. (1990), amino acid (AA) analysis showed that the solvent extracts of Cajanus cajan seeds contain 26.3% phenylalanine (Phe) in the form of free amino acids, and according to them, phenylalanine, an essential amino acid, would be responsible for the anti-sickling activity of the plant. It has therefore been proposed as a general objective: to assess the anti-sickling activity of the maceration of the seeds of Cajanus cajan and of phenylalanine and to compare them.

Equipment:
This was an experimental study initiated by the Department of Hematology, Immunology and General Biology of the Faculty of Pharmaceutical and Biological Sciences of the University of Felix Houphouët Boigny, Abidjan, Ivory Coast. It was carried out in collaboration with the department of pharmacognosy of the same Faculty and the hematology unit of the central laboratory of the University Hospital (CHU) of Yopougon over a period from January to April 2018. Extraction was carried out on the seeds contained in the mature Cajanus cajan pod (Fabaceae), and phenylalanine was obtained commercially. The evaluation of the anti-sickling activity was made in vitro using venous blood drawn from the elbow crease into purple-top tubes containing ethylenediaminetetraacetate (EDTA).

Methods:-
Search for anti-sickling activity: Preparation of working solutions:
Maceration: After harvest, the drug was dried for a week in the shade at the pharmacognosy laboratory at room temperature and then ground with a mortar and a pestle to obtain a coarse powder. The maceration was obtained by soaking 1 g of the drug powder in 100 ml of distilled water for 24 hours in a 500 ml Erlenmeyer flask. The maceration obtained was filtered first through an ordinary filter and then through a Whatman-type paper filter and constituted solution 1 (S1). This solution was put in a glass flask with a hermetic seal and stored in a refrigerator. The process of obtaining solutions S2 to S6 is summarized in Table 1.
Phenylalanine: A solution of phenylalanine (Phe) was prepared at a concentration of 10 mg/ml of distilled water, that is, 10 mg of phenylalanine powder for 1 ml of distilled water, according to the method of Seck et al. (2015). This solution was used to perform the different tests.
2% sodium metabisulfite: The 2% solution was obtained by dissolving 2 g of sodium metabisulfite in 100 ml of distilled water.
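As a quick cross-check of the working solutions just described, the amounts involved can be reproduced with a few lines of arithmetic. The sketch below is only illustrative: the exact composition of S2 to S6 comes from Table 1, which is not reproduced here, so the concentration series is assumed from the doses reported in the Results (10 to 500 mg/ml).

```python
# Illustrative arithmetic for the working solutions (assumed concentration series).

def grams_needed(conc_mg_per_ml: float, volume_ml: float) -> float:
    """Mass of dry powder (g) needed to reach a target concentration in a given volume."""
    return conc_mg_per_ml * volume_ml / 1000.0

# Maceration series prepared in 100 ml of distilled water (S1 = 10 mg/ml, i.e. 1 g/100 ml)
for conc in (10, 100, 200, 300, 400, 500):
    print(f"{conc} mg/ml -> {grams_needed(conc, 100):.1f} g of powder per 100 ml")

# 2% (w/v) sodium metabisulfite corresponds to 20 mg/ml, i.e. 2 g per 100 ml
print("2% metabisulfite:", grams_needed(20, 100), "g per 100 ml")

# 1/10th blood dilution: 20 ul of blood + 180 ul of normal saline
blood_ul, saline_ul = 20, 180
print("dilution factor:", (blood_ul + saline_ul) / blood_ul)  # 10.0
```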
Evaluation of anti-sickling activity:
Induction of sickling and microscopic reading: Sickling was induced by first diluting the Hb SSFA2 blood to 1/10th with normal saline, in the proportions of 20 μl of blood for 180 μl of saline. Then, equal volumes (20 μl) of the 1/10th-diluted blood and 2% sodium metabisulfite solution were mixed in a hemolysis tube. After 15 minutes of contact, a drop of this mixture deposited on a slide covered with a coverslip was observed under a microscope at 40x magnification to determine the percentage of sickle cells, by counting the number of red blood cells (RBCs) and sickle cells in a field.
Saline solution test (control test): The purpose of this test was to search for a possible return of sickle cells to their normal shape after a lapse of time. Therefore, a mixture of 20 μl of blood diluted to 1/10th, 20 μl of 2% sodium metabisulfite, and 20 μl of saline was prepared in a capped hemolysis tube. After 30 minutes of contact, a drop of the mixture was observed between slide and coverslip under the microscope with a 40x objective to determine the percentage of sickle cells over time.
Test with the macerated seeds of Cajanus cajan: After labeling six capped hemolysis tubes (S1, S2, S3, S4, S5, S6), equal volumes of blood diluted to 1/10th, 2% sodium metabisulfite solution, and the different macerations were mixed. One drop of each mixture was then observed between slide and coverslip with an optical microscope at 40x magnification after 30 minutes, and the percentage of sickle cells (RBCs) was determined.
Phenylalanine test: The phenylalanine test was carried out on a mixture, in a capped hemolysis tube, containing 20 μl of diluted blood, 20 μl of 2% sodium metabisulfite, and 20 μl of phenylalanine solution. Microscopic observation with a 40x objective of a drop of the mixture between slide and coverslip was made after 30 minutes to determine the percentage of sickle red cells.
Data Analysis: The test used to compare our proportions was Student's t-test at the significance level α = 5%.

Results:-
Evaluation of anti-sickling activity of the different solutions:
Control test: After 30 minutes of contact between the diluted blood, 2% sodium metabisulfite, and the saline solution used as a control, the presence of 64.7% sickle cells was noted. When this incubation reached 60 minutes, the variation in the number of sickle cells over time was not significant, with p = 0.17.
Test with the different solutions to evaluate: To perform the tests, each solution was mixed with the blood of a sickle cell disease patient and 2% sodium metabisulfite. A decrease in the percentage of sickle cells was observed and was accentuated in the first 30 minutes. The results are reported in Table 2. Following the same procedure as for the maceration, the addition of 10 mg/ml phenylalanine also caused a reduction in sickling (Table 2).
Comparison of anti-sickling activity of the solutions: The activity of the maceration at concentrations of 10 mg/ml to 300 mg/ml and 500 mg/ml is substantially equal to that of phenylalanine, because there is no significant difference (p > 0.05). However, at a concentration of 400 mg/ml, the activity of the maceration was maximal and the effect obtained was higher than that of phenylalanine (Table 2).
Test with the maceration: After a contact time of 30 minutes with the maceration of the seeds of Cajanus cajan, there was a decrease of about 30% in sickle cells (Table 2). At the dose of 10 mg/ml, sickling, which concerned 64.7% of red blood cells, decreased to 24% in this period of time. Similar results were found for concentrations of 100, 200, 300 and 500 mg/ml of maceration. This inhibition was statistically significant. The effect of the maceration of Cajanus cajan seeds was maximal at a dose of 400 mg/ml.
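For orientation, the relative inhibition implied by these mean percentages can be recomputed directly. The sketch below uses the reported group means; the per-sample arrays in the paired comparison at the end are hypothetical placeholders, not the study's raw counts, and serve only to show how the Student's t-test comparison could be set up.

```python
# Relative inhibition of sickling computed from the reported mean percentages.
from scipy import stats

def percent_inhibition(control_pct: float, treated_pct: float) -> float:
    """Relative reduction of the sickle-cell percentage versus the control."""
    return (control_pct - treated_pct) / control_pct * 100.0

control = 64.70                              # sickle cells with the saline control (%)
print(percent_inhibition(control, 19.03))    # lowest reported percentage with the macerate, ~70.6%
print(percent_inhibition(control, 24.0))     # 10 mg/ml maceration, ~62.9%
print(percent_inhibition(control, 22.76))    # 10 mg/ml phenylalanine, ~64.8%

# Hypothetical per-sample sickle-cell percentages (not the study's raw data),
# only to illustrate how a paired comparison could be run:
control_vals  = [62.0, 65.5, 66.1, 63.8, 64.9]
macerate_vals = [20.1, 18.7, 19.5, 18.9, 19.8]
t, p = stats.ttest_rel(control_vals, macerate_vals)
print(f"t = {t:.2f}, p = {p:.4f}")
```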
Test with the phenylalanine: When replacing the maceration with Phe at a dose of 10 mg/ml, we observed an inhibition of sickling. The percentage of sickle cells decreased from 64.70% to 22.76%. Phe increases the capacity of erythrocytes to absorb water without lysing and therefore stabilizes their membrane (Elekwa et al., 2005). This anti-sickling activity of Phe could be explained by studies which have shown that AA, and in particular aromatic AA, can inhibit the gelling of deoxyhemoglobin S and partially prevent the formation of sickle cells (Noguchi, 1977; Noguchi and Ackeman, 1983). Phe esters and Phe-containing peptides have the same property (Acquaye et al., 1982; Votano et al., 1984).
Comparison between the activity of the maceration of Cajanus cajan and phenylalanine: According to a study conducted in 1990, Phe is the main AA contained in the soluble aqueous fraction of a seed extract of the plant (Ekéké and Shode, 1990). In addition, analysis of AA has shown that methanolic extracts of Cajanus cajan seeds contain, in the form of free AA, up to 26.3% Phe (Ekéké and Shode, 1990). It was observed during our study that the activity of the plant, regardless of the concentration at which the test was carried out (10 to 300 mg/ml and 500 mg/ml), is substantially equal to that of Phe at 10 mg/ml after a contact time of 30 minutes. These results are similar to those found in the studies of Ekéké et al. (1990), who also demonstrated that Phe is responsible in vitro for the anti-sickling activity of the plant. It would appear that the presence of this AA alone could account for about 70% of the anti-sickling activity of the maceration. At 400 mg/ml, the activity of the plant was maximal and superior to that of Phe. This may be due to the presence of compounds other than Phe. Indeed, earlier work by Akojie and Fung (1992) revealed the presence of phenylalanine and hydroxybenzoic acid in a methanolic extract of Cajanus cajan. Vanderjagt et al. (1977) observed that there is a significant decrease in all AA, especially essential AA such as Phe, in sickle cell disease patients as a result of increased urinary excretion, which partly helps explain the growth retardation of these patients. The seeds can therefore be recommended to sickle cell anemia patients, on the one hand, to compensate for urinary losses and, on the other hand, to reduce pain attacks.

Conclusion:-
Biological tests revealed that the maceration of Cajanus cajan seeds (from 10 to 300 mg/ml) has an activity comparable to that of phenylalanine (at 10 mg/ml). After a contact time of 30 minutes, the activity was maximal at a dose of 400 mg/ml with the macerate. Alkaloids in general, and phenylalanine in particular, could be responsible for the anti-sickling activity of these seeds. This plant shows quite promising results in the care of sickle cell anemia. As a matter of fact, since it is an edible plant, it could be recommended as a food supplement for patients with sickle cell anemia or even be the subject of galenic preparations so as to contribute to the care of patients. Also, more in-depth studies will make it possible to assess the in vivo effect of these seeds after administration to patients or even to determine their mechanism of action in the reversibility of sickling.
The impact of bariatric and metabolic surgery on cancer development Obesity (BMI ≥ 30 kg/m2) with related comorbidities such as type 2 diabetes mellitus, cardiovascular disease, sleep apnea syndrome, and fatty liver disease is one of the most common preventable risk factors for cancer development worldwide. They are responsible for at least 40% of all newly diagnosed cancers, including colon, ovarian, uterine, breast, pancreatic, and esophageal cancer. Although various efforts are being made to reduce the incidence of obesity, its prevalence continues to spread in the Western world. Weight loss therapies such as lifestyle change, diets, drug therapies (GLP-1-receptor agonists) as well as bariatric and metabolic surgery are associated with an overall risk reduction of cancer. Therefore, these strategies should always be essential in therapeutical concepts in obese patients. This review discusses pre- and post-interventional aspects of bariatric and metabolic surgery and its potential benefit on cancer development in obese patients. Introduction Obesity (BMI ≥ 30 kg/m 2 ) has become an increasing pandemic disease in Western countries, often associated with comorbidities such as type 2 diabetes mellitus, cardiovascular disease, sleep apnea syndrome, or fatty liver disease (1). Today, almost 20% of all adults in the European Union (EU) are affected by obesity, and the prevalence is increasing yearly (2). Besides increased age, geographic and socialeconomic factors are considered risk factors for developing cancer such as colon, ovarian, endometrial, breast, pancreatic, and esophageal cancer (3)(4)(5)(6)(7). Weight loss of as little as 10% by either an increase in physical activity, diets, drug therapies (GLP-1receptor agonists), or bariatric and metabolic surgery is reported to be associated with an overall risk reduction of cancer development (6,8,9). Among all weight loss therapies, bariatric and metabolic surgery is the most effective and sustainable form of obesity treatment with a long-term weight reduction of 25% (9-13) expecting a more significant effect on risk reduction of cancer development compared to non-invasive treatment methods (8,10,14). This review discusses pre-and postoperative aspects of bariatric and metabolic surgery and its potential benefits on risk reduction of cancer development. Pathogenesis of cancer development in the obese Besides many pathways known to affect the pathogenesis of cancer development in the obese, the molecular interactions and potential targets regarding carcinogenesis are diverse and, therefore, not yet fully understood (15). The oncologic risk and its association to the obese have been poorly studied in terms of molecular interactions, although this association has been described for decades (15,16). Lifestyle, including low physical activity, is known to be one major independent risk factor for both obesity and cancer development, especially in colon, endometrial, renal, and esophagus cancer (15). In colon cancer (16), strong evidence exists for the link between reduced dietary fiber intake and carcinogenesis (17)(18)(19). Furthermore, more and more information is available on the relationship between food processing and its role in cancer development (18,19). A red meat diet is known to be correlated with both an increased risk for obesity (18) and the prevalence of colon cancer (19). Other carcinogenesis factors include hormones such as estrogens and their metabolic precursors, mainly produced by adipose tissue (20-26). 
Chronic exposure to these compounds is reported to be associated with an increased risk for breast cancer in obese women (27,28). This is, in general, more frequently seen in patients with type 2 pre-diabetes mellitus (20, 24). But not only women are affected by increased levels of estrogens and their metabolic precursors (24,25,29,30). There is growing evidence that men with type 2 diabetes mellitus are also at increased risk for prostate cancer (20). Inflammation is another essential mechanism involved in regenerative processes and seems to be in particularly involved in carcinogenesis (26, [31][32][33][34][35][36]. Obesity, often associated with impaired glucose hemostasis and insulin resistance, is reported to be connected to a higher activity of inflammatory cytokines such as tumor necrosis factor (TNF)α and interleukin (IL)-6 (27). Elevated levels of unbound insulin-like growth factor (IGF)-1 protein, for example, promote tumor growth by insulin and IGF-1 receptors (20, 21, 27), whereas leptin and adiponectin play another essential role in these molecular mechanisms (37). Glucagon-like peptide (GLP)-1 and its receptor have also been implicated in the insulin metabolism that plays a dominant role in cell and tumor growth (27,35). Therefore, it is considered that GLP-1 receptor agonists, acting as a powerful weight loss drug therapy, might also play an important role in cancer development of different tumors (15,16). This supports the concept that high levels of GLP-1 and adipokines by an expansion of adipose tissue might amplify these interactions (27). Some types of cancer, especially colon (22,23) and breast cancer (24), are known to be associated with high levels of adipokines (21,25,26). Weight loss therapies Increased physical activity and changes in diet are useful tools for weight loss therapy, but for some patients, these methods are not effective or only temporary (8,10,14). In the long-term, less than 1% of patients undergoing these conservative weight-loss therapies can reduce their BMI from 40 to 45 kg/m 2 to below 30 kg/m 2 (8). Medication with GLP-1 receptor agonists and/or bariatric and metabolic surgery might be an option (10,14). On average, patients undergoing treatment with GLP-1 receptor agonists lose 15% of their weight, whereas patients undergoing bariatric surgery lose 25% within the first six to twelve months after treatment with only a small proportion of relapses (9)(10)(11)(12)(13). Bariatric and metabolic surgery is currently the most effective and sustainable form of weight loss therapy (8), with a significantly better effect on the remission rate of type 2 diabetes mellitus and other cardiovascular and metabolic diseases (9)(10)(11)(12)(13). This also impacts the risk of cancer development after weight loss therapy leading to an increased life expectancy of the treated patients (9). Tumor screening prior to bariatric and metabolic surgery According to the American Association of Clinical Endocrinologists (AACE)/American College of Endocrinology (ACE), The Obesity Society, the American Society for Metabolic and Bariatric Surgery (ASMBS), the Obesity Medicine Association (OMA), and the American Society of Anesthesiologist (ASA), all these clinical guidelines propose an age-and family history-related, risk-based tumor screening before bariatric and metabolic surgery in obese patients (1,38). On the other hand, a recent meta-analysis reviews how preoperative gastroscopies change the way of surgical treatment in obese patients. 
Besides the high costs incurred by the high frequency of bariatric and metabolic procedures worldwide, surgery was delayed or canceled in only 0.4% of these patients (38,50). Colonoscopy, computed tomography (CT) scans, ultrasound, or other screening examinations for cancer are not routinely recommended and are rarely used because of the tremendous effort required for the patient and the associated additional health costs (1,38). Nevertheless, these screening examinations are often non-specific, and tumors tend to be found by chance (3,4,6). In non-invasive weight loss therapies like diets or drug therapies, such as treatment with GLP-1-receptor agonists, these screening examinations are not even considered for tumor screening (4,6). However, these obese patients tend to have the same risk of developing cancer as those undergoing bariatric and metabolic surgery (8,(44)(45)(46)(47)(48)(49)(50). Newer bariatric and metabolic procedures like the single anastomosis stomach-ileal bypass with sleeve gastrectomy (SASI-S) or the single anastomosis duodenal-ileal bypass with sleeve gastrectomy (SADI-S) are considered to combine the benefits of a sleeve gastrectomy and a mini-gastric bypass (omega-loop gastric bypass) while reducing the risk of malnutritional problems in the long term (8,(10)(11)(12). Vertical banded gastroplasty and jejunocolic or jejuno-ileal bypasses are considered outdated procedures and have been widely abandoned due to severe malabsorptive and malnutritional long-term complications (9,12). Adjustable gastric banding and biliopancreatic diversion are still performed in certain obese subgroups, accounting for less than 2% of all bariatric and metabolic procedures (11,13,14).

Risk modification of cancer development after bariatric and metabolic surgery

Compared to the immediate effects of bariatric and metabolic surgery on metabolic diseases, such as type 2 diabetes mellitus, sleep apnoea syndrome, and fatty liver disease, with a remission rate of 50-75% within the first weeks and months, longer and more extensive studies are needed to prove the risk reduction for cancer development (8)(9)(10)(11)(12)(13)(14)(31)(32)(33)(34). After bariatric and metabolic surgery, the oncological risk seems to be significantly reduced for breast, endometrial, and other women-specific cancers (51). But the tumor's biology, behavior, and aggressiveness depend not only on gender but also on the patient's age and metabolic and biological conditions (52,53). Thus, it is unsurprising that the risk reduction for cancer development should not be examined in general terms but within these respective subgroups (15,16,54).

Type 2 diabetes mellitus, bariatric and metabolic surgery, and the risk of cancer development

One of the most notable and immediate effects of bariatric and metabolic surgery in type 2 diabetes mellitus patients with hyperinsulinemia and peripheral insulin resistance is the restoration of glucose homeostasis (55)(56)(57)(58)(59)(60)(61)(62). Through direct mechanisms, bariatric and metabolic surgery has a tremendous impact on glycemic metabolism and immediately and substantially reduces circadian blood glucose fluctuations (55,56,(59)(60)(61). Thus, insulin therapy can be reduced or even discontinued in patients with insulin-dependent type 2 diabetes mellitus within a few days after surgery (55,57-59,61,62). These effects are the same for sleeve gastrectomy and Y-Roux gastric bypass (59,60). Data from studies with a follow-up of at least ten years confirm that bariatric and metabolic surgery also significantly impacts the risk of cancer development in type 2 diabetes mellitus patients (36)(37)(38)(54).
Women with type 2 diabetes mellitus and an HbA1c ≥ 8% appear to have a higher risk of developing a tumor than men (35,36). This might be explained by the fact that endocrine interactions between insulin signaling and the gonadal axis leading to increased estrogen levels are more pronounced in premenopausal women than in men (37,38). Indirect mechanisms that lead to a risk reduction for cancer development include increased postprandial secretion of satiety hormones such as GLP-1, peptide YY (PYY), and oxyntomodulin (OXM) (52,53). The postoperatively increased GLP-1 levels and their effect on insulin are discussed as a possible mediator of angiogenesis and cell growth (20-28,30,(33)(34)(35)(36)(37)53). According to these mechanisms, this effect is increased in insulin-dependent type 2 diabetes mellitus patients (31,32,34,(52)(53)(54)(55)(56)(57)62).

The risk of colorectal cancer after bariatric and metabolic surgery

Colorectal cancer is one of the most common tumor diseases, accounting for almost 10% of all cancers worldwide, and its prevalence is increasing every year (54,(63)(64)(65)(66). The risk of developing colorectal cancer depends on the degree of overweight and increases with higher BMI over time, which is more pronounced for colon cancer than for rectal cancer (66). It is therefore surprising that studies conducted to prove this relationship showed an increased risk of colorectal cancer after bariatric and metabolic surgery. Two studies from Sweden both confirmed this unexpected finding (63,64). In both studies, the risk of colorectal cancer was higher than in overweight non-operated patients and increased steadily with time after surgery (63,64). An increased risk of colorectal cancer, especially after Y-Roux gastric bypass, was also demonstrated in a recent study from England, in which almost 9,000 patients after surgery were compared with non-operated overweight patients (66). This could be explained by the fact that hyperproliferation of rectal mucosal cells can be observed after Y-Roux gastric bypass, which could be related to an increased risk of colorectal cancer after bariatric and metabolic surgery (66). Nevertheless, recent cohort studies found no increased risk of colorectal cancer after bariatric and metabolic surgery compared to the general population (63,66). In addition, a recent meta-analysis suggests that bariatric and metabolic surgery may also reduce the risk of colorectal cancer compared to non-operated obese individuals (66). These contradictory results are certainly related to the limited data and number of studies available and highlight the need for larger prospective studies with a longer follow-up.

Barrett's metaplasia and esophageal cancer in the context of bariatric and metabolic surgery

For Barrett's metaplasia, a larger retrospective study showed that, in 43% of patients with this preoperative condition, Y-Roux gastric bypass led to remission of acid reflux and was associated with histological regression of Barrett's metaplasia (42)(43)(44)(45). In contrast, one third of patients who underwent primary sleeve gastrectomy developed de novo Barrett's metaplasia (45).
A secondary switch in these patients to Y-Roux gastric bypass resulted in a significant reduction in acid exposure and histological remission of reflux esophagitis in over 80% of these patients (42,43). Since the introduction of sleeve gastrectomy about twenty years ago, Barrett's metaplasia and adenocarcinoma of the distal esophagus have become the focus of clinical interest, as a de novo incidence of acid reflux of more than 30% has been reported following primary sleeve gastrectomy (45,46). In this context, an estimated 0.05% to 0.5% of new esophageal cancer cases worldwide are expected to occur after bariatric and metabolic surgery, particularly after sleeve gastrectomy (45,46). In our opinion, a comprehensive preoperative assessment of the acid exposure of the distal esophagus and esophagogastric junction using routine gastroscopy and 24-hour pH-manometry, as well as an appropriate follow-up after bariatric and metabolic surgery, especially sleeve gastrectomy, is mandatory.

Conclusion

It has long been known that the risk for certain cancers is strongly related to obesity and that weight loss can substantially reduce this risk in the long term. Bariatric and metabolic surgery offers the most significant potential for weight loss, with an average weight loss of 25%, providing a significant postinterventional overall risk reduction for cancer development. This link acts through direct and indirect mechanisms whose molecular basis is very complex and not yet fully understood. Preoperative tumor screening before bariatric and metabolic surgery is mainly based on recommendations of different societies rather than solid evidence. However, a thorough clinical evaluation, including a detailed family history, is highly recommended and helps to find the proper bariatric and metabolic procedure which fits the patient's prerequisites and needs. Since no tumor screening is usually carried out during conservative or medical weight loss therapy, more attention should be paid to this point in managing overweight patients in the future. Regarding a preoperative gastroscopy or 24-hour pH-manometry, we believe this screening method is a low-risk intervention and should be performed when planning bariatric and metabolic surgery, especially since these patients require close follow-up anyway. However, significantly more data are needed to make a more detailed and precise recommendation which is suitable for practice. Looking at the two most often performed bariatric and metabolic procedures worldwide in more detail, Y-Roux gastric bypass, compared to sleeve gastrectomy, seems to be more effective in patients with Barrett's metaplasia, with an almost complete remission of acid reflux. This also means that patients who have undergone a primary sleeve gastrectomy to reduce weight could also benefit from a Y-Roux gastric bypass as a secondary procedure to treat or to avoid de novo acid reflux and Barrett's metaplasia. Although weight loss therapies, including bariatric and metabolic surgery, cannot be routinely recommended as a cancer prevention strategy, considerations in this context should always be made in treating obese patients.

Author contributions

FL, GP, PA, and PN contributed to conception and design of the review. FL wrote the first draft of the manuscript. GP, PA and PN wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Multiple Fractures in Patient with Graves' Disease Accompanied by Isolated Hypogonadotropic Hypogonadism

Isolated hypogonadotropic hypogonadism (IHH) is known to decrease bone mineral density due to deficiency of sex steroid hormones. Graves' disease is also an important cause of secondary osteoporosis. However, IHH does not preclude the development of primary hyperthyroidism caused by Graves' disease, which can rapidly lead to more severe osteoporosis. Here, we describe the first case of a 35-year-old Asian female patient with IHH accompanied by Graves' disease and osteoporosis-induced multiple fractures. Endocrine laboratory findings revealed preserved anterior pituitary functions except for secretion of gonadotropins and showed primary hyperthyroidism with positive autoantibodies. Sella magnetic resonance imaging showed a slightly small-sized pituitary gland without a mass lesion. Dual-energy X-ray absorptiometry revealed severe osteoporosis in the lumbar spine and femoral neck of the patient. Plain film radiography of the pelvis and shoulder revealed a displaced and a nondisplaced fracture, respectively. After surgical fixation with screws for the femoral fracture, the patient was treated with antithyroid medication, calcium, and vitamin D, and has been recovering fairly well to date. We report a patient with IHH, Graves' disease, and multiple fractures, which is the first such case in Korea.

INTRODUCTION
Isolated hypogonadotropic hypogonadism (IHH) is characterized by impairment of gonadal function secondary to deficient gonadotropin secretion. [1] It can result from a variety of congenital, acquired, and functional defects related to gonadotropin-releasing hormone (GnRH) deficiency. In general, IHH is caused by genetic mutation or acquired anatomical abnormalities, including infiltrative disorders or space-occupying tumors involving the hypothalamic-pituitary axis, subsequently promoting deficiency of sex hormones. [2,3] Sex steroid hormones are important factors in bone mineral dynamics and play an essential role in the pathogenesis of osteoporosis. There have been many investigations of the links between sex hormone status and bone mineral density (BMD) for various clinical conditions. In renal transplant recipients, serum levels of estradiol predict BMD in women. [4] Estrogens also play a pivotal role in the regulation of bone loss and metabolism in elderly men. [5] Maintenance of proper BMD requires not only sex steroid hormones but also thyroid hormones and vitamin D. Moreover, abnormal thyroid hormone status or lower levels of vitamin D can promote pathologic or non-trauma-induced fractures. [6][7][8][9] Although abnormal thyroid hormonal status is rare in patients with IHH, IHH accompanied by primary or secondary hypothyroidism, including bradycardia and heart failure, was recently reported. [10] However, to the best of our knowledge, there has been no report of IHH associated with Graves' disease. Therefore, we herein report a rare case of IHH accompanied by multiple fractures due to thyrotoxicosis and sex steroid hormone deficiency.

CASE
A 35-year-old Asian woman, born to non-consanguineous parents, was referred to the department of endocrinology for evaluation of multiple fragility fractures and severe osteoporosis accompanied by diffuse goiter. The patient was an ex-smoker and non-drinker. She was 1.71 m tall and weighed 51.2 kg, with a body mass index (BMI) of 17.5 kg/m2.
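As a small arithmetic check of the reported anthropometry, the body mass index can be recomputed from the height and weight given above; this is only a worked illustration of the standard formula.

```python
# BMI = weight (kg) / height (m)^2, using the values reported for this patient.
height_m = 1.71
weight_kg = 51.2
bmi = weight_kg / height_m ** 2
print(round(bmi, 1))  # 17.5 kg/m2, matching the reported value
```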
The diagnosis of IHH was established by suggestive clinical findings of primary amenorrhea and absence of growth and development of secondary sexual characteristics, together with laboratory findings, at 16 years of age. The patient had no facial anomaly or olfactory complaints. No familial history of anosmia, delayed puberty, or hypogonadism was reported by the patient. The karyotype was 46,XX, and genetic screening for mutations in hypogonadotropic hypogonadism genes was not performed. At that time, anterior pituitary function was preserved except for gonadotropin secretion. The patient had been treated with estrogen replacement since she was 16 years old, but she discontinued estrogen on her own several years ago. A sella magnetic resonance imaging scan revealed a small-sized pituitary gland without a mass-like lesion and thinning of the lower half of the pituitary stalk (Fig. 1A). In the combined pituitary stimulation test at 28 years of age, the peak luteinizing hormone (LH) was 1.79 IU/L and the peak follicle-stimulating hormone (FSH) was 1.50 IU/L, suggesting hypogonadotropic hypogonadism (Table 1). At the recent visit, the patient's blood pressure was 130/82 mmHg and her heart rate was 98 beats/min. On laboratory examination, hemoglobin was 12.3 g/dL, and serum estradiol was 10 pg/mL (reference range: follicular 21-251; mid-cycle 38-649; luteal 21-312 pg/mL). Symptoms and signs of thyrotoxicosis, including tachycardia, smooth skin, and goiter, had also developed in the patient. Technetium-99m (Tc-99m) pertechnetate scintigraphy revealed diffuse enlargement of both lobes of the thyroid gland with markedly increased uptake (Fig. 1B). A thyroid function test showed newly developed primary hyperthyroidism in the patient (Table 2). Moreover, the level of thyrotropin-binding inhibiting immunoglobulin was also increased (Table 2). Neck ultrasonography showed an enlarged, heterogeneously echogenic thyroid gland with increased vascularity determined by the color Doppler method (Fig. 1C). The serum 25-hydroxy-vitamin D level was also decreased in the patient. An antero-posterior pelvic X-ray showed a left proximal femoral fracture (Fig. 1D), and a shoulder X-ray revealed a non-displaced proximal humeral fracture (Fig. 1E). BMD was measured at the lumbar spine and femoral neck of the patient using dual-energy X-ray absorptiometry. The patient had significantly lower BMD at both the lumbar spine and femoral neck (Table 3). She was treated with conservative management for the humeral fracture and received surgical fixation with screws for the left femoral fracture. The patient was also treated with methimazole, estrogen replacement, calcium, and vitamin D for two years, leading to increases of 2.34% and 6.97% in BMD of the lumbar spine and femoral neck, respectively (Table 3). She was maintained in a euthyroid state with 2.5 to 5.0 mg of methimazole per day and has been recovering fairly well with estrogen replacement and treatment with calcium and vitamin D 2,000 IU per day for six months (Table 2).

DISCUSSION
In general, IHH presents as decreased ovarian function leading to menstrual defects, diminished vaginal secretion, infertility, and impaired breast development in premenopausal women. In this case report, we presented IHH accompanied by Graves' disease and multiple fractures. To the best of our knowledge, this case report is the first paper describing severe osteoporosis-induced bone fracture in a patient with IHH accompanied by Graves' disease.
Pituitary hormone deficiencies causing hypogonadism, hypothyroidism, or hypoadrenalism may induce lower BMD. [11] People with IHH are also prone to develop osteoporosis or fragile bones, leading to a higher risk of fractures induced by otherwise minor injuries. [9] Although the mechanisms underlying the relationship between central hypogonadism and BMD have not yet been determined, unreplaced sex steroid deficiency is associated with lower BMD in adults with growth hormone deficiency. [9,12] Therefore, cyclical replacement of estrogen and progesterone is recommended to prevent premature osteoporosis and to promote sexual characteristics in premenopausal women. In addition, testosterone treatment was also effective for increasing lumbar spine BMD in hypogonadal middle-aged men. [13,14] IHH is rarely accompanied by central hypothyroidism due to structural abnormalities of the hypothalamic-pituitary axis. [10] Thyroid-stimulating hormone (TSH) is critical for regulating expression of the sodium-iodide symporter, which is important for the production of thyroid hormone in the thyroid gland. In a pre-specified subgroup of premenopausal-aged women, TSH deficiency was independently related to lower BMD in the lumbar spine and femoral neck. [9] However, Graves' disease or excessive replacement of thyroid hormone is also known as a risk factor for osteoporosis. [7] Graves' disease promotes bone loss through increased bone turnover, leading to decreased BMD and osteoporosis. [7] On the other hand, vitamin D deficiency is also an important risk factor for osteoporosis and an increased risk of pathologic fractures in adults. [6,8,15] Therefore, we thought that low levels of vitamin D, as well as inappropriate estrogen replacement together with Graves' disease, might have contributed to aggravation of the osteoporosis and development of fractures in this patient. Although the co-occurrence of hypopituitarism and Graves' disease is rare, several reports have been described in the literature (Table 4). A patient with hyperthyroidism in the presence of panhypopituitarism developed a radioiodine-induced thyroid storm. [16] Graves' disease developed eight years after the diagnosis of hypopituitarism in this case. In another case, described in 1999, a 24-year-old male patient presented with hypopituitarism accompanied by hyperthyroidism and diabetes insipidus. [17] A third report described cases of concomitant Graves' disease and Sheehan's syndrome. [18] A more recent report described a subject with known panhypopituitarism who developed thyrotoxicosis that contributed to acute glucocorticoid deficiency. [19] Another report showed that it was possible for hyperthyroidism secondary to a toxic thyroid nodule to occur with hypopituitarism. [20] However, our case is IHH rather than panhypopituitarism, and the patient presented with multiple osteoporosis-induced fractures associated with Graves' disease. In conclusion, herein we report a case of IHH with Graves' disease and multiple fractures. Sex hormone, calcium, and vitamin D replacement are essential for the prevention of osteoporosis in patients with IHH. Secondary osteoporosis-inducing factors, including hyperthyroidism, should also be considered in patients with fragility fractures accompanied by IHH.
The Relationship between Depression Symptoms and Severity of Coronary Artery Disease in Patients Undergoing Angiography Objective: Cardiovascular diseases are the main cause of mortality worldwide. Depression is one of the effective factors in the incidence of cardiovascular diseases like coronary artery stenosis. This study aimed to investigate the relationship between depression symptoms and severity of coronary artery disease (CAD) in patients scheduled for angiography. Method : This prospective, cross sectional research was conducted on as many as 401 patients scheduled to undergo angiography at Dr. Heshmat heart hospital as the referral center in the north of Iran in 2016. Before cardiac catheterization, patients' demographic information (age, gender, level of education, and place of residence) and patients’ medical history (history of diabetes mellitus, hypertension, and family history of cardiac disease) were obtained. Also, Beck Depression Questionnaire 2 (BDI II) was completed by a psychologist before angiography. After collecting the data, SPSS v.21 and statistical tests such as Spearman correlation, and Mann-Whitney U regression were used to analyze the data. Results: After controlling for age, sex, and having history of diabetes mellitus, no relation was found between having depression symptoms and more frequency of vessel involvement (OR = 1.35, 95% CI: 0.92 to 1.98, P =0.130) or higher severity of CAD (OR = 1.47, 95% CI: 0.95 to 2.28, P = 0.087). The results were similar for the relation between severity of depression symptoms and CAD extent or CAD severity. Conclusion: The results of this study showed that in patients undergoing angiography, depression symptoms were not related to CAD severity and number of involved vessels. Depression was associated with angina, independently of CAD severity. Our study found no significant correlation between CAD severity and severity of depression. The reason may be that measuring depression at a single time point cannot accurately reveal the impact of this problem on the trend of atherosclerosis over time. Serotonin has a role in depression pathophysiology (8). On the other hand, in the vessels at risk of obstruction, serotonin causes aggregation of platelets through binding to 5-hydroxytryptamine (5-HT) receptors; therefore, a huge amount of serotonin in blood can lead to cardiac ischemia (6,9). In addition, another noteworthy point is that depression causes endothelial dysfunction. This will contract the arteries in blocked sites and is likely to cause coronary obstruction and myocardial ischemia (10,11). In many studies, depression has been regarded as a risk factor in patients with coronary artery disease (12) and is seen as an independent risk factor in these patients (13). The incidence of depression increases the risk of mortality and recurrent infarction (13)(14)(15). Depression at the time of undergoing angiography in patients with coronary artery disease doubles the risk of suffering a cardiac event a year after angiography (16,17). Identifying depression symptoms in patients with CAD is highly crucial as these patients have poor prognosis (18,19). Depression exerts much more extensive and important influence on the daily performance of the patients and symptoms of the disease than angiographic findings (20). Therefore, these patients should be screened and receive treatment for depression (21). 
Given the role of depression in the onset, course, and prognosis of cardiovascular diseases (22), it seems plausible that depression symptoms are related to the severity of coronary artery disease in patients undergoing angiography. Although many studies have investigated the relation between depression and cardiac disease, little research has been conducted on the relation between depression and the severity of coronary artery disease, and its findings have been inconsistent. Thus, this study aimed to examine the relation between depression and severity of coronary artery disease.

Materials and Methods. This prospective, cross-sectional study was conducted on patients undergoing elective angiography and hospitalized in the angiography ward of Dr. Heshmat Heart Hospital, a referral center in the north of Iran. A convenience (accessibility) sampling method was employed, and patients referred consecutively from October 2016 to November 2016 for coronary angiography to investigate stable angina pectoris or severe chest pain possibly caused by CAD were enrolled. A total of 401 patients participated in the present study. Our institutional ethics committee approved the study, and informed consent was obtained from all patients. The inclusion criteria were age over 18, willingness to participate in the study, not having verbal or hearing problems, not having severe urgent conditions, complete consciousness at the time of research, and not being mentally handicapped. The exclusion criteria were cardiomyopathy, history of congenital heart disease, chronic renal failure, previous myocardial infarction or coronary artery bypass grafting, medical treatment for chronic psychosis or recent medical treatment for depression, and an incomplete study form.

Instruments. A two-part instrument was used for data collection. The first part collected data on demographic characteristics, including age, gender, level of education, and place of residence, and on patients' medical history, e.g., history of diabetes mellitus, hypertension, and family history of cardiac disease. The second part was the Beck Depression Inventory 2 (BDI-II). All forms were completed by a psychologist prior to angiography at the CAD lab. To measure the severity of depression, the BDI-II, a 21-item questionnaire, was used; its internal consistency has been calculated to be between 0.73 and 0.92 by Beck et al (23). Each item is scored from 0 to 3, with 0 indicating absence of the symptom and 3 its maximum severity, so the total score ranges from 0 to 63. The patients were asked to choose the option that best expressed their feelings over the past 2 weeks. Depression was evaluated using the following cut-off points: 0-13, normal; 14-19, minor depression; 20-28, average (moderate) depression; and 29 and above, severe depression (24). The inventory has been standardized for the Iranian context, and the reliability and validity of the instrument have been confirmed (25). The psychometric characteristics of the Persian version of the BDI-II in patients with CHD were examined by Ahmadi et al in 2019; its internal consistency, measured using Cronbach's alpha, was estimated at 0.90, and the correlations obtained between the BDI-II and the PHQ-9 and GAD-7 were 0.74 and 0.65, respectively (p < 0.001) (26).
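For concreteness, the score banding just described can be written as a small helper; the function name and error handling below are ours, not part of the study protocol.

def bdi_ii_category(total_score: int) -> str:
    """Map a BDI-II total score (0-63) to the depression band quoted above."""
    if not 0 <= total_score <= 63:
        raise ValueError("BDI-II total score must lie in 0-63")
    if total_score <= 13:
        return "normal"
    if total_score <= 19:
        return "minor depression"
    if total_score <= 28:
        return "average (moderate) depression"
    return "severe depression"

print(bdi_ii_category(17))  # -> "minor depression"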
Procedure. To determine the severity of CAD (the degree of stenosis of the coronary arteries and the number of arteries involved), angiography, a diagnostic procedure, was used. The findings of angiography were interpreted blindly by 2 specialists. In the first step, one of the 2 specialists examined the films visually in the CAD lab; in the second step, a second specialist examined the films to improve the accuracy of the examinations. The findings of angiography were categorized based on the severity of the disease and the number of arteries involved. Angiograms with no visible atherosclerotic changes in the coronary arteries were considered normal. Stenosis was graded by the reduction in lumen diameter: mild (< 50% stenosis), moderate (50% to 70% stenosis), and severe (> 70% stenosis). The presence of stenosis in 1, 2, or 3 of the major coronary arteries (left main, right coronary artery, left anterior descending, and circumflex) was respectively considered evidence of single-, 2-, or 3-vessel coronary artery disease (27). To observe ethical considerations, written informed consent was obtained from the participants. The data were collected by a trained research expert supervised by a psychologist. The trained research expert introduced herself and the goal of the study and assured the patients that their information would remain confidential and that the results of the study would be used in a research paper.

The collected data were analyzed using SPSS v.21. Mean (SD) or frequency (percentage) was reported to describe the patients' characteristics. The normal distribution assumption was met for both age and total cholesterol using the Kolmogorov-Smirnov test. In the univariable analyses, the Mann-Whitney U test and Spearman correlation coefficients were used to assess the relation between patients' characteristics and depression level, CAD extent, and CAD severity, for nominal and ordinal variables, respectively. In the multivariable analyses, to control for the possible confounding effect of patients' sex, age, and history of diabetes mellitus, an ordinal regression model was used to determine the relation between depression level and CAD extent or severity. The assumption of parallel lines was tested and met. There was no multicollinearity among the independent variables. To have comparable sample sizes across groups in the regression analyses, CAD severity was grouped into 3 categories: normal, mild/moderate, and severe. All statistical tests were 2-sided, and P < 0.05 was considered statistically significant.
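The multivariable step described above (ordinal regression of CAD severity on depression level, adjusted for sex, age, and diabetes history) can be sketched in Python with statsmodels; the column names and the synthetic data below are assumptions for illustration, not the study's dataset.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Toy data standing in for the study variables (hypothetical names).
rng = np.random.default_rng(1)
n = 401
df = pd.DataFrame({
    "cad_severity": pd.Categorical(
        rng.choice(["normal", "mild/moderate", "severe"], size=n),
        categories=["normal", "mild/moderate", "severe"], ordered=True),
    "depression_level": rng.integers(0, 4, size=n),  # 0=none ... 3=severe
    "male": rng.integers(0, 2, size=n),
    "age": rng.normal(58, 10, size=n),
    "diabetes": rng.integers(0, 2, size=n),
})

# Proportional-odds (ordinal logistic) model of CAD severity.
model = OrderedModel(
    df["cad_severity"],
    df[["depression_level", "male", "age", "diabetes"]],
    distr="logit")
res = model.fit(method="bfgs", disp=False)

# Exponentiated slope coefficients are adjusted odds ratios, as in Table 2.
print(np.exp(res.params[:4]))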
Results. Patients' characteristics are presented in Table 1. Of the total of 401 patients, depression symptoms were observed in 165 (41%); specifically, 92 (23%), 50 (12%), and 23 (6%) of patients had mild, moderate, and severe depression, respectively. Female sex (P < 0.001), lower education (P = 0.002), living in a rural area (P = 0.007), and having a history of hypertension (P = 0.030) or heart disease (P = 0.030) were related to a higher level of depression. The frequency of patients with normal findings or 1-, 2-, or 3-or-more-vessel involvement was 133 (32%), 78 (20%), 84 (21%), and 106 (27%), respectively. Patients with more vessel involvement were older (mean age 56, 58, 60, and 61 years for the normal, 1-, 2-, and 3-or-more-vessel groups, respectively; r = 0.23, P < 0.001), were more likely to be male (male ratio of 40%, 56%, 67%, and 62%, respectively; P < 0.001), and were more likely to have a history of diabetes mellitus (DM ratio of 28%, 39%, 43%, and 51%, respectively; P < 0.001).

Data showed that severity of CAD and the number of involved vessels were highly correlated (r = 0.841, P < 0.001), and similar patterns were observed for severity of CAD: patients with severe CAD were older (mean age 56, 56, 61, and 60 years for normal, mild, moderate, and severe CAD, respectively; r = 0.20, P < 0.001), were more likely to be male (male ratio of 40%, 50%, 54%, and 64%, respectively; P < 0.001), and were more likely to have a history of diabetes mellitus (ratio of 28%, 29%, 39%, and 47%, respectively; P < 0.001). The results of the univariable analysis revealed no relation between depression level and the number of involved vessels (r = 0.009, P = 0.864) or severity of CAD (r = -0.011, P = 0.829). Also, in the multivariable analysis, after controlling for sex, age, and history of diabetes mellitus, no statistically significant relation was found between depression level and the number of involved vessels or severity of CAD (P > 0.1 for all odds ratios). The odds ratios and 95% confidence intervals are shown in Table 2. The increase in the odds ratios was not linear with respect to higher levels of depression symptoms. Compared with patients with no depression symptoms, patients with severe depression symptoms did have higher severity of CAD (OR = 1.88, 95% CI: 1.10 to 3.21, P = 0.020); however, it should be noted that the sample size was small for patients with severe depression. The results of assessing the depression symptom variable at 2 levels (patients with and without depression symptoms) were similar: controlling for sex, age, and history of diabetes mellitus, the odds ratios for more vessel involvement and for higher severity of CAD in patients with depression symptoms versus those without were 1.35 (95% CI: 0.92 to 1.98, P = 0.130) and 1.47 (95% CI: 0.95 to 2.28, P = 0.087), respectively. Based on the results in Table 2, controlling for the other independent variables in the model, the odds of males having more vessel involvement and higher severity of CAD were 2.87 (95% CI: 1.92 to 4.30) and 3.66 (95% CI: 2.30 to 5.81) times those of females (P < 0.001), respectively. The odds of patients with a history of diabetes mellitus having more vessel involvement and higher severity of CAD were 2.54 (95% CI: 1.72 to 3.74) and 2.98 (95% CI: 1.89 to 4.69) times those of patients without diabetes mellitus (P < 0.001), respectively. Also, a 1-year increase in age was related to an increase in the odds of more vessel involvement and higher severity of CAD, with odds ratios of 1.04 (95% CI: 1.02 to 1.07, P < 0.001) and 1.05 (95% CI: 1.02 to 1.07, P < 0.001), respectively (Table 2).

Discussion. The results of this study showed that in patients undergoing angiography, depression symptoms were not related to CAD severity or the number of involved vessels. Female sex, a low education level, living in a rural area, and a history of hypertension or cardiac disease were found to be related to higher levels of depression. The relation between psychological variables like depression and CAD in patients undergoing angiography has been investigated in different studies, and conflicting results have been reported. Only a few studies, however, have examined the relationship between depression and severity of CAD, and their results have differed (22, 28-34).
For example, a cohort study of 164 patients undergoing angiography in Greece showed no significant difference in depression scores between patients with nonsevere CAD and severe CAD, and only the anxiety score in males was related to severe CAD. In that cohort, 82% of participants were male, and the average age was 65 ± 11 years; females in this study had less severe CAD and more depression (31). Carney et al also found no significant relationship between CAD severity in depressed and nondepressed patients (32). Hayek et al studied 5158 patients undergoing cardiac catheterization and found that depression was related to angina independently of CAD severity; depression, rather than disease severity, was the most important predictor of chest pain (33). In contrast to the findings of this study, a study by Vural et al in 2009 initially showed no association between CAD and depression, but after controlling for sex differences and other confounding variables, they found that every 1-point rise in the depression score was associated with an average 5% to 6% increase in the odds of abnormal coronary angiographic findings or definite coronary artery disease, respectively (P = 0.01 and P = 0.002) (31). Like the present study, they used the BDI to score depression, but they assessed the results of angiography using the Judkins technique. Also, a study by Ekici et al of 225 patients undergoing elective angiography showed a significant relationship between CAD and the scores of depression and anxiety. Although the size of the reported correlation indicated that the relationship was weak, the difference from our study may be explained by the fact that Ekici's research was a case-control study, that CAD severity was determined by the Gensini score, and that depression was estimated by the Hospital Anxiety and Depression Scale (HADS) (22). In our study, by contrast, CAD severity was estimated visually and depression was scored with the BDI-II. In a cross-sectional study in 2007, a statistically significant difference was reported between the average depression scores of a CAD and a non-CAD group, and hyperactivity of the noradrenergic system was one of the most important mechanisms proposed to explain this relationship (34). In addition, in the present study, female gender was related to higher levels of depression, and many other studies have reported similar findings (31, 34-37). On the other hand, in the studies by Kokkou et al and Hayek et al, no significant relationship was reported between depression and sex (29, 33). However, in our study, based on the angiography findings, the frequency and severity of CAD were higher in males than in females, which is consistent with many previous studies (22, 34). Moreover, in our study, reviewing the participants' cardiac history showed a significant relationship between depression symptoms and a history of hypertension and cardiac disease, which corresponds to the results of the study by Hayek in 2017 (33). In addition, Abbasi et al, in a study at the Tehran Cardiac Center, showed that depression had a relationship with hypertension (35). In this study, depression did not have a significant relationship with DM or hypocholesterolemia, which is in contrast with the studies of Vural et al in 2007 and 2009 (31, 34).
One of the limitations of the present study was its small sample sizes at some levels of CAD severity. Moreover, the emphasis in this study was on depression symptoms as self-reported by the patients, without a clinical interview; patients might have reported their feelings incorrectly in self-reports, and this could adversely affect the findings of the study. The presence of a psychologist and heart specialists on the research team, the accuracy in diagnosing CAD severity and arterial involvement based on angiography, and the detailed data collected from patients' cardiac medical history were the merits of this study. Furthermore, this study was done at a referral hospital in the north of Iran, which covers a large number of patients in this area.

Limitations. A limitation of the present research is its reliance on patients' self-reports, which may influence the accuracy of the collected data and hence the findings.

Conclusion. In general, our study showed no relationship between depression and severity of coronary artery disease in patients undergoing angiography. The reason may be that measuring depression at a single time point (at the time of research, without attention to the history of the disease) cannot accurately reveal the impact of this problem on the trend of atherosclerosis over time. Considering the cross-sectional nature of our study, which limits what can be inferred about the relationship between depression and the severity of coronary artery disease, conducting longitudinal studies with larger sample sizes is highly recommended in the future. Additionally, further studies with better designs are warranted to explore the impact of psychological intervention on CAD severity and its long-term outcome.
Action of an endo-β-1,3(4)-glucanase on cellobiosyl unit structure in barley β-1,3:1,4-glucan

β-1,3:1,4-Glucan is a major cell wall component accumulating in the endosperm and young tissues of grasses. This mixed-linkage glucan is a linear polysaccharide mainly consisting of cellotriosyl and cellotetraosyl units linked through single β-1,3-glucosidic linkages, but it also contains minor structures such as cellobiosyl units. In this study, we examined the action of an endo-β-1,3(4)-glucanase from Trichoderma sp. on a minor structure in barley β-1,3:1,4-glucan. To find the minor structure on which the endo-β-1,3(4)-glucanase acts, we prepared oligosaccharides from barley β-1,3:1,4-glucan by endo-β-1,4-glucanase digestion followed by purification by gel permeation and paper chromatography. The endo-β-1,3(4)-glucanase appeared to hydrolyze an oligosaccharide with degree of polymerization 5, designated C5-b. Based on matrix-assisted laser desorption/ionization (MALDI) time-of-flight (ToF)/ToF-mass spectrometry (MS)/MS analysis, C5-b was identified as β-Glc-1,3-β-Glc-1,4-β-Glc-1,3-β-Glc-1,4-Glc, which includes a cellobiosyl unit. The results indicate that a type of endo-β-1,3(4)-glucanase acts on the cellobiosyl units of barley β-1,3:1,4-glucan in an endo-manner.

Measurement of enzyme activity by reducing sugar assay. The activities of enzymes were measured using reaction mixtures (0.1 mL) consisting of the enzyme, 0.1% (w/v) polysaccharide, and 200 mM acetate buffer, pH 5.0. After incubation at 37°C for the appropriate reaction time, the liberated sugars were determined reductometrically by the method of Nelson 23) and Somogyi. 24) One unit of enzyme activity was defined as the amount liberating 1 μmol of reducing sugar per min. The concentration of protein was determined by the method of Bradford 25) using bovine serum albumin as the standard.

Preparation of C5 oligosaccharides. One gram of barley β-1,3:1,4-glucan, E70-S, was digested with the endo-β-1,4-glucanase from A. niger in 10 mM sodium acetate buffer (pH 4.5) at 37°C for 24 h. The hydrolysate was freeze-dried and dissolved in 4 mL of water. Oligosaccharides released from the β-glucan were separated by gel permeation chromatography on a Bio-Gel P-2 column (26 mm × 925 mm, Bio-Rad). The V0 and Vi of the column were determined with dextran (Sigma) and Glc. Oligosaccharides were fractionated into C3, C4, and C5 fractions in order of increasing degree of polymerization (DP), which was determined by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-ToF-MS) with a Bruker Autoflex III (Bruker Daltonics, Bremen, Germany). The C5 fraction was further fractionated into C5-a, -b, and -c by paper chromatography on Whatman 3MM filter paper with a 6:4:… solvent system (Fig. 2). The sugar content of the fractions was determined by the phenol-sulfuric acid method using Glc as the standard. 26)

Methylation analysis. For the analysis of sugar linkages, the oligosaccharide (approximately 100 μg) was subjected to methylation analysis. Methylation was performed by the Hakomori method, 27) and the products were analyzed by gas-liquid chromatography (GLC). GLC of neutral sugars as their alditol acetate derivatives was done with a Shimadzu gas chromatograph GC-6A equipped with a column (0.28 mm × 50 m) of Silar-10C, according to the method of Albersheim et al. 28)
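As a back-of-the-envelope aid to the DP assignment by MALDI-ToF-MS mentioned above, the expected monoisotopic m/z values of underivatized gluco-oligosaccharides can be computed; the [M+Na]+ adduct and the constants are standard assumptions, not values quoted in the text.

# Monoisotopic m/z of linear gluco-oligosaccharides detected as [M+Na]+,
# one common adduct in MALDI of neutral sugars (an assumption here).
HEXOSE_RESIDUE = 162.0528   # C6H10O5, anhydro-glucose unit (Da)
WATER = 18.0106             # terminal H2O (Da)
SODIUM = 22.9892            # Na+ (Da)

def mz_sodiated_glucan(dp: int) -> float:
    """Monoisotopic m/z of a linear glucan of given DP as [M+Na]+."""
    return dp * HEXOSE_RESIDUE + WATER + SODIUM

for dp in (3, 4, 5):
    print(f"DP {dp}: m/z ~ {mz_sodiated_glucan(dp):.2f}")
# DP 3: ~527.16, DP 4: ~689.21, DP 5: ~851.26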
Action of enzymes on oligosaccharides. The action of the Trichoderma enzyme, rGI, and rGII on C4 and C5-b was analyzed using a reaction mixture (total volume, 20 μL) containing the enzyme, 0.1 mM oligosaccharide, and 50 mM sodium acetate buffer (pH 5.0). After incubation at 37°C for 24 h, the enzyme was inactivated by heating. The reducing sugars liberated were coupled at their reducing terminals with p-aminobenzoic acid ethyl ester (ABEE) by the method of Matsuura and Imaoka. 29) The ABEE-derivatized sugars were analyzed on an HPLC system equipped with a TSKgel Amide-80 column (4.6 mm × 250 mm; Tosoh). The column was eluted with a linear gradient of CH3CN:water from 74:26 to 58:42 (v/v) over 40 min at a flow rate of 1 mL/min and 40°C. ABEE sugars were monitored with a fluorescence detector (model RF-10A XL) at 305 nm (excitation) and 360 nm (emission).

Structural analysis of oligosaccharides. For MALDI-ToF/ToF-MS/MS, per-methylation of glycans was performed using the NaOH slurry method described by Ciucanu and Kerek 30) with 1 mL of methyl iodide (Fluka, Buchs, Switzerland). Dry samples were resuspended in 100 μL of methanol and kept at room temperature for MALDI-ToF/ToF-MS/MS analysis. Per-methylated samples dissolved in methanol (5 μL) were mixed with 5 μL of 2,5-dihydroxybenzoic acid matrix (10 mg/mL in 50% (v/v) methanol), and 1 μL of the mixture was spotted on a MALDI target plate and analyzed by MALDI-ToF/ToF-MS/MS (4700 Proteomics Analyzer, Applied Biosystems, Foster City, CA, USA) as previously described. 31) High-energy MALDI collision-induced dissociation (CID) spectra were acquired with an average of 10,000 laser shots per spectrum, using a high collision energy (1 kV). The oligosaccharide ions were allowed to collide in the CID cell with argon at a pressure of 2 × 10⁻⁶ Torr.

Polysaccharide analysis using carbohydrate gel electrophoresis. Products released from the C5-b oligosaccharide by the Trichoderma enzyme were analyzed by polysaccharide analysis using carbohydrate gel electrophoresis (PACE). The derivatization of carbohydrates was performed according to previously developed protocols. 32) Carbohydrate electrophoresis and PACE gel scanning were performed as described by Goubet et al. 32)

Linkage analysis of oligosaccharides. To determine its structure, the C5-b oligosaccharide was subjected to methylation analysis of its glucosidic linkages together with the C3 and C4 oligosaccharides. In this analysis, the sugars at the reducing end were converted to their respective alditols before methylation of the free OH groups. The C4 oligosaccharide appeared to have a nearly equal molar ratio of terminal Glc (t-Glc), 3-linked Glc (3-Glc), 4-linked Glc (4-Glc), and 4-linked reducing-end Glc (4-Glcol), which coincides with the ratio obtained from G3G4G4G (Table 2). Similarly, C3 was identified as G3G4G. Compared with C3 and C4, C5-b had roughly two units of 3-Glc, indicating that C5-b is derived from a cellobiosyl unit or from continuous β-1,3-glucosyl residues in β-1,3:1,4-glucan.

Structure of the C5-b oligosaccharide. For further structural analysis, the C5-b oligosaccharide was also per-methylated and analyzed by high-energy MALDI-CID (Fig. 4, Supplemental Fig. 4). 33) However, because direct annotation of the fragmentation spectrum was very ambiguous, we followed a comparative approach in which the CID spectra of per-methylated cellopentaose, laminaripentaose, and the C5-b oligosaccharide were analyzed simultaneously.
[Figure notes: A-D, β-1,3:1,4-glucan E70-S and the β-glucans with high, medium, and low viscosity were digested with the Trichoderma endo-β-1,3(4)-glucanase, respectively; E and F, E70-S was treated with barley rGI and rGII, respectively. As the products were monitored by refractive index, smaller products eluted with salts at Vi could not be detected.]

In particular, we compared the relative proportions of various molecular ions in the corresponding spectra in order to decipher the structure of the C5-b oligosaccharide. Comparing the CID spectra of cellopentaose and laminaripentaose, it becomes apparent that the intensity of the D1 "elimination ion" (m/z 227.3) 34) is higher than the neighboring E1 or G1 "elimination ions" (m/z 211.3) 35,36) when the nonreducing-end Glc is 1,3-linked to the second Glc residue (Fig. 4, panel I). In fact, the relative intensity of the D1 ion in the C5-b CID spectrum is higher than the G1 or E1 ion intensity, suggesting that the nonreducing-end Glc is linked via a 1,3-linkage to the second Glc residue. This is further supported by the absence of the 3,5A2 cross-ring fragment ion (m/z 329.4) 33) in the C5-b CID spectrum (Fig. 4, panel II) and the absence of the V4 "elimination ion" (m/z 809.4) (Fig. 4, panel VII). A strong 3,5A3 cross-ring fragment ion (m/z 533.4) over a 0,2X2 cross-ring fragment ion (m/z 519.4) 33) indicates that the second Glc is 1,4-linked to the middle Glc residue (Fig. 4, panel IV). This is further supported by the presence of a strong V3 "elimination ion" (m/z 605.4) (Fig. 4, panel V). From the comparison of the cellopentaose CID spectrum with the corresponding laminaripentaose spectrum, it becomes apparent that a weak V2 "elimination ion" (m/z 401.4) is indicative of a 1,3-linkage between the middle and the penultimate (from the reducing end) Glc residues (Fig. 4, panel III). Since the C5-b per-methylated spectrum has a weak V2 ion, we conclude that in this oligosaccharide the middle Glc residue is 1,3-linked to the penultimate Glc residue. This is further supported by the absence of the 3,5A4 cross-ring fragment ion (m/z 737.4) (Fig. 4, panel VI) and the absence of the D4 "elimination ion" (m/z 839.4) (Fig. 4, panel VII). In Fig. 4, panel VIII, it is shown that a strong 3,5A5 cross-ring fragment ion (m/z 941.4) is indicative of a 1,4-linkage between the penultimate and the reducing-end Glc residues. Taken together, these data allow the C5-b oligosaccharide to be identified as β-Glcp-1,3-β-Glcp-1,4-β-Glcp-1,3-β-Glcp-1,4-Glcp (G3G4G3G4G).

Products released from C5-b by the Trichoderma enzyme. As shown in Fig. 3, the Trichoderma enzyme clearly hydrolyzed C5-b into smaller saccharides; however, the products could not be identified on HPLC. Therefore, the products released from C5-b were also subjected to PACE. As shown in Fig. 5, the smaller saccharides among the products were identified as laminaribiose and Glc. The result suggests that the enzyme acted on both β-1,4-linkages in G3G4G3G4G.

Discussion. Poaceae β-1,3:1,4-glucan consists mainly of cellotriosyl and cellotetraosyl units linked through single β-1,3-glucosidic linkages, but it has also been shown to possess cellobiosyl units as a minor structure. 4) Through the structural analysis of unexpected oligosaccharides released by endo-β-1,3:1,4-glucanase, the cellobiosyl units appeared to be located at the nonreducing side of cellotriosyl units in barley, lichen, and horsetail. 5)
Based on the proportion of the released oligosaccharides, the frequency of the cellobiosyl unit was estimated at less than 2% in barley β-1,3:1,4-glucan. 5) In this study, based on sugar content, the C5 fraction obtained by digestion with the A. niger endo-β-1,4-glucanase belonging to GH12 was less than 1.5% of the total sugar released from barley β-1,3:1,4-glucan, confirming that cellobiosyl units exist as a minor structure in barley β-1,3:1,4-glucan.

[Fig. 3 notes: A, the action of the Trichoderma enzyme on C4; B, the action of the Trichoderma enzyme on C5-b; C, the action of rGI on C5-b; and D, the action of rGII on C5-b. Reducing sugars before and after the enzyme reaction were derivatized with ABEE and analyzed on HPLC.]

Together with cellobiosyl units and long stretches of β-1,4-glucosidic linkages, continuous β-1,3-glucosidic linkages have also been presumed in maize β-1,3:1,4-glucan. 4) However, we could not detect any hydrolysis of barley β-1,3:1,4-glucan by the barley endo-β-1,3-glucanases belonging to GH17, rGI and rGII. The fact that laminaritriose is the smallest substrate for GII 22,37) suggests that barley β-glucan does not have three continuous β-1,3-glucosyl residues. Hence, the hydrolysis of β-1,3-glucosidic linkages in β-1,3:1,4-glucan by these endo-β-1,3-glucanases does not likely occur in barley. On the other hand, we cannot exclude the possibility that barley β-1,3:1,4-glucan has two continuous β-1,3-glucosyl residues that can be hydrolyzed by distinct endo-β-1,3-glucanases secreted by fungi and bacteria.

[Fig. 4 notes: The signals used for characterization are boxed and numbered from I to VIII. Glucosidic and cross-ring fragments are identified according to the nomenclature of Domon and Costello. 33)]

In the analysis of the 3-D structure of PcLam16A, 20,21) two Trp residues have been shown to be involved in the specific recognition of the β-1,3-glucosidic linkage between subsites -1 and -2. In this enzyme, the substrate-binding cleft has a narrow and straight canyon structure along which a linear oligosaccharide such as G4G3G can lie. The Trichoderma enzyme hydrolyzed the C5-b oligosaccharide, G3G4G3G4G, into laminaribiose and Glc as the final products. This result suggests that the Trichoderma enzyme either first hydrolyzed G3G4G3G4G into G3G4G3G and Glc, and then into two laminaribioses and Glc, or first hydrolyzed G3G4G3G4G into G3G4G and laminaribiose, and then into two laminaribioses and Glc. Because the Trichoderma enzyme did not act on the C4 oligosaccharide (G3G4G4G), the former case is more probable. These facts also suggest that the smallest substrate for the Trichoderma enzyme is G3G4G3G.
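The product pattern argued for above can be mimicked with a toy function on the G3G4G... linkage shorthand: cleaving every β-1,4 linkage of G3G4G3G4G leaves two laminaribioses and glucose. The function is purely illustrative and ignores the enzyme's actual subsite requirements (e.g., it would also "split" C4, which the enzyme does not hydrolyze).

def cleave_beta_1_4(linkage_string: str) -> list[str]:
    """Split an oligosaccharide written as G<linkage>G... at every '4',
    i.e. at each beta-1,4 glucosidic linkage."""
    return linkage_string.split("4")

print(cleave_beta_1_4("G3G4G3G4G"))  # ['G3G', 'G3G', 'G'] -> 2x laminaribiose + Glc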
Automated Analysis of Crackles in Patients with Interstitial Pulmonary Fibrosis

Background. The crackles in patients with interstitial pulmonary fibrosis (IPF) can be difficult to distinguish from those heard in patients with congestive heart failure (CHF) and pneumonia (PN). Misinterpretation of these crackles can lead to inappropriate therapy. The purpose of this study was to determine whether the crackles in patients with IPF differ from those in patients with CHF and PN. Methods. We studied 39 patients with IPF, 95 with CHF, and 123 with PN using a 16-channel lung sound analyzer. Crackle features were analyzed using machine learning methods, including neural networks and support vector machines. Results. The IPF crackles had distinctive features that allowed them to be separated from those in patients with PN with a sensitivity of 0.82, a specificity of 0.88, and an accuracy of 0.86. They were separated from those of CHF patients with a sensitivity of 0.77, a specificity of 0.85, and an accuracy of 0.82. Conclusion. Distinctive features are present in the crackles of IPF that help separate them from the crackles of CHF and PN. Computer analysis of crackles at the bedside has the potential of aiding clinicians in diagnosing IPF more easily and thus helping to avoid medication errors.

Introduction. Crackles are a common finding in patients with interstitial pulmonary fibrosis (IPF). Their presence in a patient is often the first clue that the disease is present. Unfortunately, they can be misinterpreted as being due to congestive heart failure (CHF) or pneumonia (PN), and as a consequence patients may receive inappropriate therapy. On occasion, this can lead to serious, unwanted side effects such as dehydration due to the inappropriate administration of diuretics, or an adverse reaction to an antibiotic that was not indicated in the first place. In an attempt to reduce these complications, we studied the sound patterns of patients with these diseases using a multichannel lung sound analyzer (STG16) to determine if such analysis could help differentiate IPF from CHF and PN. Using advanced statistical techniques, we compared features of IPF crackles to those in patients with CHF and PN. Our goal was to determine if there are features of the lung sounds in IPF patients that would help to distinguish them from the lung sounds of patients with CHF and PN.

Materials and Methods. Patients were selected for this study from a pool of patients who had undergone lung sound analysis as part of a broader study of the correlation of disease processes with lung sound patterns. To recruit patients into this study, we identified hospitalized patients and outpatients of a community teaching hospital who were diagnosed as having a specific cardiopulmonary disease or were considered to be normal by their caregivers. The studies were not performed on consecutive patients; this was a convenience sample, and we currently have over 1,000 patients for whom we have both the diagnosis and the lung sound analysis. The diagnostic category of each patient was that of the clinicians caring for these patients. The CHF and PN patients were inpatients in a teaching hospital, and diagnoses were confirmed by board-certified specialists. The IPF patients were outpatients and were all seen by pulmonary specialists. There were 39 patients with IPF, 95 with CHF, and 123 with PN. All patients were examined using a multichannel lung sound analyzer (STG16).
[Figure 1: The waveform of a typical crackle. (a) The crackle analysis starts by identifying the crackle's highest deflection (the highest peak). The half-period to the left of the highest peak is marked T1, and the half-period to the right is marked T2. Crackle pitch is calculated from 4 consecutive half-periods, with T1 as the 1st half-period. The amplitude is determined separately for each half-period and marked A1, A2, and A3. (b) Crackle polarity is defined as positive if the highest peak is upward. (c) Crackle polarity is defined as negative if the highest peak is downward.]

[Table 1: Individual crackle features. Crackle timing (timing): 1 for early inspiration, 2 for mid-inspiration, 3 for late inspiration, 4 for early expiration, 5 for mid-expiration, 6 for late expiration. Crackle transmission coefficient (CTC): the degree of crackling sound transmission through the ipsilateral chest, as calculated from crackle family observation by multiple microphones; the CTC has a value of 0% in the absence of any transmission and 100% when there is equal transmission to all ipsilateral channels (see [7] for detailed description and discussion). Amplitude: amplitude of the highest peak (arbitrary units). Half-period amplitude variability (%): standard deviation of the half-period amplitudes A1-A3, expressed as a percentage. Crackle polarity (polarity): direction of the highest peak, Figures 1(b) and 1(c) (see [8] for detailed description and discussion).]

The details of this device have been described [1]. In brief, patients are asked to lie on a soft foam pad, which has stethoscope chest pieces embedded in it. Each of these chest pieces contains a microphone. The sounds detected by these microphones are amplified, filtered, and input into a computer for analysis. In our usual practice, patients are asked to perform several breathing maneuvers: normal breathing, deeper-than-normal breathing, coughing, and a vital capacity maneuver. In this study, we chose the data obtained during the deeper-than-normal breathing maneuver. Crackles were defined in accordance with accepted criteria [2,3]. The STG software automatically identified crackles in all full breaths. The validation of the use of the device as a crackle counter has been reported [4]. A single recording lasted 20 seconds and typically contained a minimum of 3 breaths. To develop algorithms for testing, the crackle features shown in Table 1 were assessed. Crackle features were calculated separately for inspiratory crackles and for expiratory crackles. Figure 1 demonstrates the process of calculating the features of a crackle. In addition to these features, we combined the individual crackle features in the form of a median (median T1, median pitch, etc.). In addition to features based on individual crackle properties, we captured information reflecting the distribution of the patient's crackles.

[Table 2: Aggregate crackle features. Crackle counts per quadrant (4 features): the number of crackles observed in each quadrant of the chest; together they add up to the total number of crackles per breath. Percentage differences between crackle quadrants (6 features): calculated from the 4 quadrant counts; each percentage is a pairwise comparison of one of the 6 possible combinations of quadrants. Maximum distances (x, y, z): distances between crackles in 3-dimensional space, with separate features for the x, y, and z planes, and one feature recording the maximum distance across all 3 dimensions. Channel distances: features similar to the distances above, except that they are defined based upon which channel microphone picked up the crackle; distances are defined accordingly.]
Diseases differ in the pattern of crackle distribution over the chest. Distribution information required aggregation of data on a per-breath level and led to the development of the aggregate crackle features shown in Table 2.

To perform classification and prediction, we utilized supervised learning nonparametric classifiers: neural networks and support vector machines [5,6]. Supervised learning can teach the system to nonlinearly map the input features to the associated disease label. We divided the data into a training set, used for feature extraction and model building, and a validation set, used for evaluation of the results. Validation set performance indicates how well the features generalize to unseen data. We used fivefold cross-validation to increase the pool of validated data. We used individual crackle features to distinguish crackles of IPF from CHF crackles and PN crackles. Once individual crackles were classified as IPF, CHF, or PN, majority voting was used to classify the patient into one of the three disorders. To incorporate features of crackle distribution, we performed majority voting among individual breaths during the single recording; for example, if a patient had 6 breaths, and 3 of them were classified as IPF, 2 as CHF, and 1 as PN, then the patient would be classified as having IPF. The final classification of IPF versus CHF and IPF versus PN was performed using this breath majority voting. The study was approved by the Institutional Review Board of the Brigham and Women's/Faulkner Hospitals.

Results. The display of a single crackling event reveals that the crackling sound is transmitted differently in the three diseases. The right panels in Figure 2 show time-expanded sound waveforms that were recorded by the 14 microphones positioned over the posterior chest. The waveforms are superimposed on a body plot; each waveform is positioned on the part of the body where the sound was recorded. In the patient with CHF, panel (d), a prominent crackle is seen on the tracing from channel 6 (indicated by a large triangle). At the same time, the crackling sound was also detected in all ipsilateral microphones 1, 2, 3, 4, 5, and 7 (marked by triangles). The set of crackles generated by a single event and recorded by multiple microphones is referred to as a crackle family [7,8]. The crackle waveforms corresponding to the crackle family are shown in stack mode in the insert in the upper-right corner. Notice that the crackle recorded by microphone 6 (the most prominent crackle, or mother crackle) occurs earlier than the other crackles. The crackle transmission coefficient was calculated for each crackle family. In the crackle family shown in the CHF patient (Figure 2(d)), the CTC was 50%. (The CTC has a value of 0% in the absence of any transmission and 100% when there is equal transmission to all ipsilateral channels.) In the crackle family shown in the PN patient, Figure 2(f), the CTC was 16%. In contrast, the crackle in the IPF patient was detected at only a single microphone (Figure 2(b)). The CTC of this crackle family was 1%. A low CTC is typical of IPF patients.
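The text specifies only the endpoints of the CTC (0% for no transmission, 100% for equal transmission to all ipsilateral channels). One simple normalized measure consistent with those endpoints, offered here as an assumption rather than the authors' published formula [7], is the mean amplitude at the other ipsilateral channels relative to the mother crackle.

import numpy as np

def ctc_percent(mother_amplitude, other_amplitudes):
    """Crude crackle transmission coefficient, 0-100%: mean amplitude of the
    other ipsilateral channels relative to the mother crackle amplitude."""
    if mother_amplitude <= 0:
        raise ValueError("mother crackle amplitude must be positive")
    return 100.0 * float(np.mean(other_amplitudes)) / mother_amplitude

# Example loosely mirroring Figure 2: strong transmission (CHF-like) versus a
# crackle detected at essentially one microphone (IPF-like).
print(ctc_percent(1.0, [0.6, 0.5, 0.5, 0.4, 0.4, 0.6]))  # ~50 (CHF-like)
print(ctc_percent(1.0, [0.01] * 6))                      # ~1  (IPF-like)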
In addition to the CTC, note the difference in the pitch of the crackles shown in Figure 2, right panels: 588 Hz in IPF versus 218 Hz in CHF and 364 Hz in PN. Also note the difference in the number of zero crossings: 15 in IPF versus 5 in CHF and 5 in PN. The observations in single patients shown in Figure 2 are supported by statistical analysis of all available data. Table 3 shows the crackle rate and individual crackle features in IPF, CHF, and PN. Note that multiple individual crackle features are significantly different between IPF and the other two diseases. In order to classify patients into one of the three diseases, we utilized two statistical methods: neural networks and support vector machines. Table 4 presents the results of binary comparisons of individual crackles in IPF versus CHF and IPF versus PN. As seen in Table 4, the sensitivity, specificity, and overall accuracy are over 70%, consistent with the conclusion that individual IPF crackles have features that differ from those of patients with PN and CHF. The accuracy increased to 83% (Table 5) on the application of majority voting to the classification of individual breaths based on crackle features. The addition of aggregate crackle features improved the accuracy to 86% (Table 6). Finally, we used majority voting to classify patients based on the classification of their breaths (Table 7). The performance of per-breath and per-patient classification is quite similar, suggesting that most breaths of the same patient are classified in a similar manner.

Discussion. This study shows that the crackles of IPF have features that help distinguish them from the crackles of patients with CHF and PN. As noted, we believe that the crackles of IPF are not infrequently misinterpreted. They are most commonly considered to be due to CHF, and diuretics are administered inappropriately. There is not much literature to support this observation, but it reflects our personal experience, and an informal survey of clinicians confirmed this opinion. In addition to providing evidence that helps in accurately identifying IPF crackles, computerized lung sound analysis also quantifies them. It has long been noted that the crackles of IPF become more widespread as the disease progresses. Thus crackle quantification can be important in assessing the severity of IPF, and this could be useful in providing evidence of response to therapy. We focused on the difference between crackles of IPF and those of CHF and PN. Baughman et al. took a different approach: they showed that the presence of crackles could help clinicians in distinguishing sarcoidosis from IPF [9]. Crackles were much less numerous in patients with sarcoidosis than in those with roentgenologically equivalent severity of changes due to IPF. Among the features that are significantly different between IPF and CHF/PN is the crackle pitch (P < 0.0000001). This is consistent with the commonly held belief that the crackles of IPF are generated in smaller airways than those of CHF and PN. The distinctive features of the crackles of IPF have long been recognized. For example, the crackles of pulmonary fibrosis caused by asbestos, described as early as 1930 by Wood and Gloyne as a prominent feature of this industrial disease, were characterized by Smither as "characteristic in their sound and distribution" [10,11]. He also pointed out that they are present first at the bases in the midaxillary line and then tend to spread to the posterior bases. As the disease progresses, crackles become audible higher on the chest.
In one study, a technician was able to screen workers for asbestosis by detecting crackles. The technician correctly identified all workers in whom the diagnosis was most certain, that is, those with all the clinical, physiological, and roentgenologic criteria used in the study [12]. Using time-expanded waveform analysis, Kawamura et al. studied 18 patients with IPF and 23 patients with crackles who did not have this disease. Two crackle parameters (initial deflection width and two-cycle duration) were shorter in the IPF patients; this finding correlated with HRCT findings in these patients [13]. British investigators have reported that detecting crackles on time-expanded waveform analysis was equivalent to CT scans in detecting asbestosis [14]. Finnish investigators also showed a significant positive correlation between the frequencies of lung sounds and pulmonary fibrosis detected on HRCT [15]. Of course, in industrial settings, in contrast to ERs and ICUs, neither CHF nor PN crackles are likely to be confounding variables. To perform classification and prediction, we utilized well-established supervised learning nonparametric classifiers: neural networks and support vector machines [5,6]. Neural networks (NNs) are nonlinear statistical data-modeling tools. They are used to model complex relationships between inputs and outputs and are an attempt to build an architecture similar to that of the human brain. NNs consist of an interconnected group of artificial neurons that learns and updates its internal structure using a connectionist approach to computation. NNs utilize a data-driven approach in which changes in the internal structure are based on external or internal information that flows through the network during the learning phase. In this study, we used a back-propagation neural network. Support vector machines (SVMs) are one of the newest methods in the supervised learning field. Generally speaking, a support vector machine seeks to create a hyperplane in a high-dimensional space that separates the two data classes. The hyperplane not only separates the data but is also oriented to create the maximum "margin" on both sides of it, ensuring the largest possible separation between the two classes. The algorithm proved to be fast and very efficient. We note here that both NN and SVM classification achieved similar results. The technology for this study came about in part because there has been a resurgence of interest in lung sounds, stimulated by the development of computerized techniques. A number of investigations demonstrating the usefulness of computerized lung sound analysis have been reported [16-21]. While crackle pitch can be assessed by a clinician using an acoustic stethoscope, other crackle features that are significantly different between IPF and CHF/PN can only be obtained with the use of a computerized stethoscope. Some crackle features, such as the crackle transmission coefficient, can only be calculated with a multichannel lung sound analyzer. Computerized lung sound analysis can now be done at the bedside. The examinations are easy to do and can be performed in a few minutes. They have been shown to help in the detection of pneumonia [22]. Unfortunately, devices capable of doing this are not currently widely available. However, it is likely that this will change as the advantages of this technology become more widely known.
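A minimal sketch of the two-stage scheme described above, an SVM over per-crackle features followed by majority voting per patient, might look as follows; the feature count, labels, and synthetic data are assumptions for illustration only, not the authors' pipeline.

import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for per-crackle features (pitch, T1, T2, CTC, ...).
rng = np.random.default_rng(0)
n_crackles, n_features = 600, 8
X = rng.normal(size=(n_crackles, n_features))
y = rng.choice(["IPF", "CHF", "PN"], size=n_crackles)   # per-crackle labels
patient_id = rng.integers(0, 40, size=n_crackles)       # owning patient

X_tr, X_te, y_tr, y_te, pid_tr, pid_te = train_test_split(
    X, y, patient_id, test_size=0.3, random_state=0)

# Stage 1: classify individual crackles with an SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
crackle_pred = clf.predict(X_te)

# Stage 2: majority vote over each patient's crackles (breath-level voting
# would nest one more level in the same way).
patient_pred = {
    pid: Counter(crackle_pred[pid_te == pid]).most_common(1)[0][0]
    for pid in np.unique(pid_te)
}
print(patient_pred)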
Used in the context of a complete medical evaluation, we believe that this information could help avoid misinterpretation of IPF crackles and thus potentially decrease the occurrence of inappropriate treatment.
Destigmatizing Migraine

A migraine is one of the most disabling diseases in adults globally. There has been some progress in assessing various quantitative components of quality of life, while we struggle to discuss the less addressed, albeit important, qualitative aspects, such as the stigma associated with this disease. People with a migraine can be viewed negatively by society. Victims of an invisible disease, they can feel dismissed by spouses, society, and physicians who may convey the sense that their disease is insignificant. There is an emergent need to promote destigmatization and offer an enriched understanding at the level of both patients and healthcare providers.

Stigma in persons with migraine. Stigma is a recognized social concept that describes a characteristic, trait, or diagnosis that is used to discredit an individual. Stigma leads to a significant amount of prejudice, discrimination, and loss of status [1]. The stigmatizing effects of some diseases, like depression and epilepsy, are known to cause disrupted social relationships and decreased QoL. Beyond the 17th century, people with migraines started to be represented as privileged and self-absorbed individuals [2]. Interestingly, this view took another turn by the 19th century, when a migraine was perceived as a weakness of women of lower socio-economic status [2]. This gender difference in perception seems to line up with the fact that the prevalence of migraines is about three times higher in women than in men. Additionally, women have more frequent, longer-lasting, and more severe headaches than men [3]. Therefore, it is hard not to wonder if epidemiological research has contributed in some way to the changing views of women in society. Further, physicians caring for persons with a migraine were ridiculed as enabling and incompetent practitioners who encouraged their patients' neurotic inclinations [2]. This paved the way for a considerably negative and gender-stereotyped view of the person with a migraine, which has persisted in varying intensities through the decades. Unfortunately, stigma still follows migraines. An exploratory study using a focus group found that participants verbalized an unwillingness to tell others when they were experiencing a migraine, reporting that people were unsympathetic. "I think people look at you like, 'Yeah, right, everybody has headaches. They are not that bad; just get a grip and keep going.'" Participants' comments suggest that others' reactions to migraines are mostly associated with anger and frustration. The consensus was that most people dismissed migraines as insignificant and assumed that patients with a migraine were exaggerating their symptoms [4]. The Chronic Migraine Epidemiology and Outcomes study shed light on the fact that more than 75% of spouses of patients with a chronic migraine did not believe them about their headaches [2]. Sadly, this emotionally laden issue extends to treating physicians. People with a migraine reported feeling "dismissed" by physicians who did not appear to take complaints of their pain seriously. Specifically, some participants reported that they had endured years of frequent migraines since being told to "live with it" by a physician [4]. Additionally, in selected groups, physicians viewed patients as drug seekers and saw patients with a migraine as not having a severe disease [2].
However, the possibility exists that patient recollections of physician interactions may not truly reflect the nature of their communications and may have been taken out of context [4]. Despite the high prevalence of migraines, there is a significant disparity in research on this neurological disorder. A study conducted to measure QoL among patients with migraines showed that persons with a chronic migraine experience more stigma and disability, and are less likely to work, when compared to persons with epilepsy [1]. It is, therefore, both intriguing and surprising to note that of the various chronic neurological disorders with episodic manifestations (CDEM), including migraines, epilepsy, and multiple sclerosis, the estimated National Institutes of Health (NIH) categorical funding for the year 2018 is still the lowest for migraines. Specifically, the projected NIH funding for migraines is about 21 million US dollars, a mere 1/7th of that for epilepsy (Figure 1) [5].

[Figure 1: NIH funding for migraine compared to other chronic neurological disorders. The graph portrays the annual support level for selected neurological conditions based on grants, contracts, and other funding mechanisms used across the National Institutes of Health (NIH), with data published by the National Center for Health Statistics (NCHS) at the Centers for Disease Control & Prevention (CDC).]

In light of past and recent evidence, there exists an emergent need to address the stigma surrounding migraine in these domains (family, doctors, and research), as this might facilitate an improved understanding of the perspectives of persons with migraines regarding their disease and treatment.

Moving forward. Destigmatization needs to take place at the level of both patients and healthcare providers. At the level of patients, patient participatory advocacy activities (particularly ones where patients congregate, such as walks, runs, and education camps) have proven to be instrumental in destigmatizing diseases. Moreover, the participation of people with migraines, along with their families and friends, is critical for migraine advocacy [2]. At the interpersonal level, counseling, therapy, and empowerment endeavors have been shown to benefit people with migraines [1]. At the level of healthcare providers, public education and employing causality-based (pathophysiological) descriptions of diseases make a difference. Patients want their physicians to demonstrate that they are listening and taking their concerns seriously before offering treatment. Reflecting the concerns that the patient has expressed, and asking the patient what he or she expects from treatment, could initiate a mutually beneficial collaborative relationship: the patient feels understood and heard, while the physician gains an accurate understanding of what the patient desires in treatment. Ultimately, the result may be greater success with therapy [4]. Hence, there is also a need for clinicians to be trained in the use of non-stigmatizing language to describe the disease to their patients [2]. Additionally, patients and healthcare providers are instrumental in reshaping policies and funding distributions by working in conjunction on disease advocacy (Figure 2) [1].

[Figure 2: Roles proposed for patients and healthcare providers in destigmatizing migraine.]

As pharmacotherapy for migraines advances, we are in need of a concurrent movement to allocate our efforts to patient-centered outcomes. It is time we regard a migraine as the neurological disorder it is.
Searching for Axion Dark Matter with Birefringent Cavities

Axion-like particles are a broad class of dark matter candidates which are expected to behave as a coherent, classical field with a weak coupling to photons. Research into the detectability of these particles with laser interferometers has recently revealed a number of promising experimental designs. Inspired by these ideas, we propose the Axion Detection with Birefringent Cavities (ADBC) experiment, a new axion interferometry concept using a cavity that exhibits birefringence between its two, linearly-polarized laser eigenmodes. This experimental concept overcomes several limitations of the designs currently in the literature, and can be practically realized in the form of a simple bowtie cavity with tunable mirror angles. Our design thereby increases the sensitivity to the axion-photon coupling over a wide range of axion masses.

Recently, laser interferometry has been shown to be an effective way of searching for ALP dark matter [21,22]. The interaction term in Eq. (1) causes a difference in phase velocity between left- and right-handed circularly polarized light, and an appropriately designed high-finesse Fabry-Perot cavity can be used to accumulate the resulting phase difference. These studies have shown how to exploit the exquisite sensitivity of interferometry to small phase differences to obtain new limits on low-mass axions. Despite their ingenuity, these designs face two key limitations. First, they are limited by the non-ideal behavior of optical elements. The introduction of quarter-wave plates inside a cavity, as proposed by Ref. [21], leads to losses and imperfect phase shifts between polarization modes that accumulate with each pass of laser light in the cavity. Ref. [22] attempts to overcome this difficulty by using a bowtie cavity; however, circularly polarized light is not in general a bowtie eigenmode, as reflection off any surface at a nonzero angle of incidence does not preserve circular polarization. These difficulties would have to be addressed for an actual realization of these proposals. Second, and more importantly, these proposed experiments rely on the coherent build-up of the phase difference over the entire light storage time in the cavity. The sensitivity of these experiments starts to deteriorate once the axion oscillation period becomes comparable to the storage time, i.e. when m_a ∼ 1/(Fℓ), where ℓ is the length of the cavity, F is the finesse, and m_a is the mass of the axion.
Increasing F therefore restricts the experimental sensitivity to lower axion masses, even though a large value of F is desirable to maximize a possible axion signal. In this Letter, we propose a new axion interferometry experimental design that simultaneously overcomes both of these limitations. The presence of ALP dark matter results in a rotation of horizontally polarized laser light propagating with frequency ω_0 in a cavity, causing a small, vertical polarization to develop in the frequency sidebands ω_0 ± m_a. We exploit the fact that oblique reflection generally results in a phase difference between different linear polarizations to design a cavity that is resonant at ω_0 in the horizontal (carrier) polarization, and ω_0 ± m_a in the vertical (signal) polarization. The signal sidebands can then be detected using conventional interferometry techniques. Our design is sensitive to axion masses m_a ≲ 1/ℓ independent of the finesse of the cavity, significantly improving the reach in m_a without compromising on the strength of the axion signal. All of this can be achieved by a simple, practical cavity design requiring only that light reflects off multiple mirrors at oblique angles.

Axions and Light Polarization. Consider two orthogonal, circular polarizations of a laser beam (denoted by L and R) propagating with frequency ω_0 and wavenumber k_0 in the presence of an axion field a(t) = a_0 cos(m_a t − k_a z), starting at some time t_0. The axion momentum is k_a = m_a v, where v ∼ 10^−3 is the typical dark matter velocity at the Earth. We will only be interested in m_a ℓ ≲ 1, so that k_a ℓ ≪ 1, allowing us to neglect spatial gradients in the axion field. The interaction term in Eq. (1) leads to the dispersion relation for the two polarizations given in Eq. (2). After some time t_{L,R}, each polarization travels a distance ℓ, given by Eq. (3), where G ≡ g_aγγ √(2ρ_DM)/2, and ρ_DM = m_a² a_0²/2 is the local density of dark matter. Equating the result from each polarization on the right-hand side of Eq. (3), and working out the phase difference between the two polarizations ∆α ≡ ω_0 (t_L − t_R) to first order in G/m_a, we obtain Eq. (4). Eq. (4) makes it clear that the axion field takes a carrier wave with frequency ω_0 and generates signal sidebands with frequencies ω_0 ± m_a. This phase difference between circular polarizations is equivalent to a rotation of linearly polarized light. Writing the complex electric field in each circular polarization as a vector (E_L, E_R) and keeping track of the relative phase difference only, the translation matrix over a distance ℓ can be expressed as diag(e^{i∆α/2}, e^{−i∆α/2}). The circular polarizations are related to the linear polarizations via E_{L,R} = E_→ ∓ iE_↑, so that in the linear polarization basis (E_↑, E_→), the matrix for translation is the rotation through angle ∆α/2 given in Eq. (5).

Axion Interferometry. The basic principle of axion interferometry is summarized in Fig. 1. A carrier wave with electric field E_0^→ in the horizontal polarization is injected into a cavity that is tuned to be resonant in the horizontal polarization at the laser carrier frequency ω_0. As the field propagates in the presence of axions, signal sidebands in the vertical polarization are generated, with frequencies ω_0 ± m_a. The amplitude of the sidebands can be enhanced using an appropriately tuned high-finesse Fabry-Perot cavity. At each end of the cavity, a reflection occurs at a mirror with some real reflectivity coefficient, and a phase difference ∆ϕ_{1,2} between horizontally and vertically polarized light.
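The polarization bookkeeping above is easy to verify numerically. A minimal sketch (ours, not from the Letter; the relation E_{L,R} = E_→ ∓ iE_↑ follows the text, while the overall phase conventions are our assumption):

```python
import numpy as np

# Check that a relative phase 'alpha' between the circular polarizations acts
# as a rotation by alpha/2 on the linear ones.
# Assumed convention: E_L = E_right - i*E_up, E_R = E_right + i*E_up.
alpha = 0.37                                   # arbitrary test phase
M = np.array([[-1j, 1.0],                      # (E_L, E_R) = M @ (E_up, E_right)
              [ 1j, 1.0]])
D = np.diag([np.exp(1j * alpha / 2),           # translation, circular basis
             np.exp(-1j * alpha / 2)])
T_lin = np.linalg.inv(M) @ D @ M               # the same map in the linear basis
R = np.array([[np.cos(alpha / 2), -np.sin(alpha / 2)],
              [np.sin(alpha / 2),  np.cos(alpha / 2)]])
assert np.allclose(T_lin, R)                   # pure rotation by alpha/2
```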
In order to distinguish between the two sidebands, we split the vertical signal polarization into its two frequency components by writing the electric field in the cavity as the complex column vector of Eq. (6). The subscripts indicate that the components have different frequencies (ω_0 − m_a, ω_0, ω_0 + m_a), respectively. The transfer matrix for translation in our 3-component notation follows from Eq. (5). For reflection at either end, we can write the transfer matrix as R_{1,2} = r_{1,2} diag(e^{i∆ϕ_{1,2}}, 1, e^{i∆ϕ_{1,2}}). The signal field in the cavity is then given by the solution to Eq. (7) [23], where E_0 = (0, E_{0,in}^→, 0) is the electric field of the laser fed into the cavity, and t_X = √(1 − r_X²) is the field amplitude transmission coefficient.

Axion interferometry shares many parallels with conventional microwave cavity experiments like ADMX [20]. In both, the axion converts a frequency mode pumped to a large energy density (a DC magnetic field in microwave cavities, ω_0 in our set-up) into another mode related to the original by m_a (a standing electromagnetic mode of frequency m_a in microwave cavities, and the signal sidebands ω_0 ± m_a in our set-up). This up- or down-conversion between electromagnetic modes is a generic property of ALPs coupled to photons through Eq. (1), as studied more generally in Ref. [24]. The parallel extends to the power stored in both cavities. In the laser cavity, the power stored in the signal sidebands within the cavity is P_± ∝ |E_±^↑|² w², where w is the laser beam width. Solving Eq. (7) gives P_± ∼ g_aγγ² (ρ_DM/m_a) E_0^{→2} V Q_±, where V ∼ w²ℓ is the volume encompassed by the beam, and Q_± is a quality factor associated with the cavity. This reproduces the scaling of the signal power produced in ADMX, with the laser carrier field E_0^→ playing the role of the magnetic field B_0.

Birefringent Cavities. We now turn our attention to the importance of ∆ϕ_{1,2} in axion interferometry.

[Fig. 2 caption: The red optical path is that of the input and cavity, while the blue optical path is the read-out. The beam enters at A and is read out after C. Two sets of mirrors (A, C and B, D) can be rotated to change the angle of incidence θ while roughly maintaining cavity alignment and length. To produce an electrical signal, the leakage fields from mirror C pass through a half-waveplate (λ/2) before reflecting off a polarizing beam splitter (PBS) and arriving at a photodetector (PD).]

Previous work has always ensured that ∆ϕ_{1,2} = 0, either by using quarter-wave plates in front of mirrors with near-zero transmission [21], or by performing two reflections at each end of the cavity, separated by an optical path length that is much shorter than the cavity length [22]. The signal generated by the axion builds constructively as long as the axion field value does not change significantly during the storage time, i.e. m_a Fℓ ≲ 1. Once m_a ∼ 1/(Fℓ), the cavity loses sensitivity to the axion signal. An equivalent way of understanding this criterion is to observe that setting ∆ϕ_{1,2} = 0 means that light in both polarizations is resonant at the laser frequency ω_0. The full-width half-maximum of the cavity transmission band is δλ ∼ 1/(Fℓ), and so we must have m_a ≲ δλ ∼ 1/(Fℓ) in order for the signal sidebands (produced by axion-driven polarization modulation) to lie within the transmission band. Now consider the case where ∆ϕ_1 = ∆ϕ_2 = ∆ϕ ≠ 0 (we take r_2 = 1 in the following discussion for simplicity). When the resonance condition in the signal polarization m_a ℓ = |∆ϕ| is met, the signal polarization builds constructively in the cavity.
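A toy model makes the resonance condition concrete. The sketch below (our simplification, not the Letter's full 3-component calculation) lumps all reflections into a single round-trip amplitude r and a single birefringent phase ∆ϕ, and drives the sideband with unit amplitude per round trip:

```python
import numpy as np

# Toy steady-state buildup of the signal sideband: one round trip contributes
# a propagation phase ma_l = m_a * l and a birefringent phase dphi on the
# vertical polarization.
def buildup(ma_l, dphi, r):
    """Circulating amplitude for unit drive per round trip."""
    return 1.0 / (1.0 - r * np.exp(1j * (ma_l - dphi)))

r = 0.99986                     # round-trip amplitude; F ~ pi/(1 - r^2) ~ 1e4
dphi = 0.3                      # tunable birefringent phase (assumed value)
ma_l = np.linspace(0.0, 1.0, 100001)
gain = np.abs(buildup(ma_l, dphi, r)) ** 2
print("resonant at m_a*l =", ma_l[np.argmax(gain)])      # ~ dphi = 0.3
```

The circulating power peaks at m_a ℓ = ∆ϕ with a width that shrinks as r → 1, i.e. as the finesse grows, which is why the resonance must be stepped across the mass range.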
With a phase shift of ∆ϕ = ±π/2, the cavity is resonant at m_a = π/(2ℓ), the maximum mass reach, for the sidebands ω_0 ± m_a. Axion masses up to this maximum value can be scanned by increasing ∆ϕ in steps from 0 to π/2. Since a larger finesse F is desirable for producing a large signal field, this represents a significant improvement in axion mass reach without affecting the sensitivity in coupling. Moreover, a possible axion detection can be confirmed by looking for a signal for both phase differences ±∆ϕ. Although higher frequency resonances exist for each choice of ∆ϕ, the axion field value at these higher frequencies oscillates more than once over the cavity length ℓ, suppressing the sensitivity by sinc(m_a ℓ) [23].

Experimental Set-up. Fig. 2 gives a schematic of the proposed Axion Detection with Birefringent Cavities (ADBC) experiment, featuring a practical cavity design with the necessary birefringence. The Fresnel equations [25] show that orthogonal, linear polarizations reflecting off a dielectric surface at an oblique angle of incidence θ in general develop a relative phase shift ∆ϕ. By rotating the mirrors to adjust the angles of incidence, we can thus tune the cavity birefringence to make the axion-induced, vertically polarized sidebands resonant in our cavity at m_a ℓ = |∆ϕ|. The proposed design consists of two sets of two mirrors spaced 2 m apart, with each set acting as a retroreflector that can pivot independently. The angle between the mirrors in a set should be fixed at slightly less than 90°, so that the angles of incidence are roughly θ and 90° − θ. This allows us to vary the angle of incidence while roughly maintaining optical path-length and cavity alignment. The short dimension of the cavity (e.g., DB) is of order 10 cm. One set, A and C, will be taken as our input and output ports respectively, so that the optical path goes in the order ADBC. The Fresnel equations show that the reflectivity of the horizontal polarization will be lower than the vertical. Placing the carrier in the horizontal polarization (lower finesse) therefore reduces the accumulation of experimental noise in the cavity, while simultaneously placing the signal in the vertical polarization (higher finesse) leads to a larger signal-to-noise ratio (SNR). To prevent appreciable leakage of the carrier from the cavity, the cavity should be optimally coupled, meaning the transmissivity of A in the carrier polarization, t_A^→, must be matched to the total losses in the cavity. This would almost entirely eliminate any reflection off A. To allow a significant signal field to be read out, we also need t_C^↑ to be larger than the total losses from the other mirrors. However, the Fresnel equations force t_→ > t_↑, and as a result, cavity loss for the carrier will be dominated by t_C^→, leaving us with t_A^→ ≈ t_C^→. To maintain high finesse in the signal and carrier, all other transmissivities should be smaller than the cavity optical loss. To maximize the axion mass reach, the mirrors should cover as much of the range 0 ≤ ∆ϕ ≤ π/2 as possible. ∆ϕ increases with more oblique angles of incidence, but large optical surfaces are required near grazing incidence.

Experimental Sensitivity. The sensitivity of ADBC to g_aγγ is ultimately dependent on the finesse of the cavity, F_↑ and F_→, in each polarization, and on t_C^{→,↑}, for which we will use benchmark values of F_↑ = 2.25 × 10^5, F_→ = 2700, t_C^↑ = 0.0037, and t_C^→ = 0.030 (recall that t_X is the amplitude transmission coefficient).
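For a closed-form feel of how a reflection phase difference between linear polarizations arises and tunes with angle, consider total internal reflection, the textbook Fresnel-rhomb case (illustrative only: the actual cavity mirrors are multilayer coatings at external incidence, so these numbers do not apply to them directly):

```python
import numpy as np

def tir_phase_diff_deg(theta_deg, n_rel):
    """Relative s/p phase (degrees) on total internal reflection.
    n_rel = n2/n1 < 1; valid for theta above the critical angle."""
    th = np.radians(theta_deg)
    root = np.sqrt(np.sin(th) ** 2 - n_rel ** 2)    # real above critical angle
    return np.degrees(2 * np.arctan(np.cos(th) * root / np.sin(th) ** 2))

n_rel = 1 / 1.497                                   # glass-to-air interface
print("critical angle:", round(np.degrees(np.arcsin(n_rel)), 1), "deg")
for theta in (45.0, 50.0, 54.6, 60.0, 70.0):
    print(theta, "deg ->", round(tir_phase_diff_deg(theta, n_rel), 1), "deg")
```

The phase difference varies smoothly and non-monotonically with the angle of incidence (peaking near 45° per reflection for this glass), which is the same qualitative handle the rotatable ADBC mirrors provide.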
The reach in axion mass is determined by ∆ϕ, which in turn depends on the mirror properties. We find that a range of 0 < ∆ϕ ≲ π/5 can typically be probed over a 6° range in angle of incidence θ, with θ ≲ 65°. The signal field inside the cavity can be found by solving this cavity's equivalent of Eq. (7) for E_cav. For simplicity, we neglect the translation matrix for the short legs (i.e., DB and CA), and take the same matrix R for both sets of mirrors. The reflection matrix has the form R = diag(r_↑ e^{i∆ϕ}, r_→, r_↑ e^{i∆ϕ}), with r_→² and r_↑² being the product of the reflectivities of all 4 cavity mirrors. These quantities are related to the finesse by F_{↑,→} ≈ π/(1 − r_{↑,→}²). The signal sidebands emerging from the cavity are read out using a heterodyne detection scheme. The carrier and signal are rotated by a small angle ε, after which a polarizing beam splitter (PBS) is used to isolate the vertical polarization for readout by a photodetector. This mixes a small amount of the DC (carrier) component into the AC (sideband) component modulated at the frequency corresponding to the axion mass. If the phase difference is tuned so that ∆ϕ = m_a ℓ, the cavity is resonant in the vertical (signal) polarization at a frequency sideband ω_0 − m_a, giving an output AC power P_AC at the heterodyne readout (Eq. (8)), where we have assumed that all reflectivity coefficients are approximately 1. The sensitivity is estimated by finding the value of g_aγγ that sets the SNR to 1, with [26] SNR = (P_AC/S_shot^{1/2}) (Tτ)^{1/4}, where S_shot = 2P_DC ω_0 is the laser shot noise power spectral density with the DC power given by P_DC = (2ε t_C^→)² P_cav, T is the integration time for this step in ∆ϕ, τ ≡ 2π/(m_a v²) is the coherence time of the axion field, and we assume T ≳ τ. Another source of noise in our set-up is laser technical noise, which leads to finite laser frequency width and decreases the sensitivity of ADBC as m_a → 0. In order to probe axion masses down to m_a ∼ 10^−13 eV, technical noise must be subdominant to shot noise down to ν_a ∼ 10-100 Hz. Since only a single beam is used in a cavity which is held on resonance via feedback to ω_0, radiation pressure and other displacement noises are less relevant. Thermal noise in the mounted optics, for instance, will dominate over other non-technical noises (e.g., quantum radiation pressure noise), with an estimated magnitude based on Ref. [27]. This places requirements on the experimental design for small values of m_a, where G/m_a ∼ S_∆ϕ^{1/2} (g_aγγ ∼ 10^−14 GeV^−1 at m_a ∼ 10^−13 eV). The expected sensitivity for a 2 m and a 40 m version of ADBC is given in Fig. 3. In order to cover the range 0 < m_a ≲ π/(5ℓ), the experiment must be run a number of times given by F_↑/5 ∼ 5 × 10^4, each with a different value of θ. F_↑/5 is chosen so that the peak of each resonance in m_a falls on the half-maximum for the previous resonance, starting from m_a = 10^−13 eV. Given a total integration time of T_tot = 30 days, we integrate each step for T = max(Nτ, 1 sec), where N_{2 m} = 35 and N_{40 m} = 4. This choice is equivalent to allocating the integration time logarithmically among bins of m_a, as recommended by Ref. [19], and in agreement with Refs. [13,15]. The envelope of the sensitivity to g_aγγ can be obtained analytically from Eq. (9); the resulting limit scales with the experimental parameters (including the laser wavelength λ_0) with an overall coefficient of 6.13 × 10^−11 GeV^−1. For a given m_a, adding up the SNR in quadrature from every step may improve the reach by up to a factor of 2.
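The scan bookkeeping is simple to reproduce in round numbers (our own arithmetic from the quantities quoted above; the flat time split at the end is only a sanity check, since the text allocates integration time logarithmically via T = max(Nτ, 1 sec)):

```python
import numpy as np

# Coherence time and per-step budget, using 1 eV <-> 2.418e14 Hz.
v = 1e-3                                   # virial dark matter velocity
eV_to_Hz = 2.418e14

def coherence_time_s(ma_eV):
    """tau = 2*pi / (m_a * v^2), expressed in seconds."""
    omega_a = 2 * np.pi * ma_eV * eV_to_Hz          # rad/s
    return 2 * np.pi / (omega_a * v ** 2)

tau = coherence_time_s(1e-13)              # bottom of the quoted scan range
n_steps = 2.25e5 / 5                       # F_up / 5 resonance positions
t_tot = 30 * 86400                         # 30 days, in seconds
print(f"tau ~ {tau:.1e} s at m_a = 1e-13 eV")            # ~4e4 s
print(f"{n_steps:.0f} steps; a flat split gives {t_tot / n_steps:.0f} s each")
```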
A 40 m cavity with a circulating laser power of 1 MW in the cavity improves upon CAST limits [17] by almost four orders of magnitude for m_a ∼ 10^−13 eV.

Conclusion. We proposed a new axion interferometry experimental design that exploits the birefringence of a bowtie cavity in order to generate axion-modulated, vertically polarized sidebands from a horizontally polarized laser beam carrier. This design is practical to implement and can improve on the reach of previous interferometry designs from m_a ∼ 1/(Fℓ) up to m_a ∼ 1/ℓ, with the sensitivity improving with finesse. The sensitivity and mass range of our experiment can both be improved by a careful design of the mirrors used in the cavity, so that the cavity is optimally coupled with minimal loss, and the phase shift at each end extends to ∆ϕ = π/2. We look forward to implementing this design and beginning the search for axions with the ADBC experiment.
Exploring the relationship between microdosing, personality and emotional insight: A prospective study

Having entered the recent public and research zeitgeist, microdosing involves consuming sub-perceptual doses of psychedelic drugs, allegedly to enhance performance, creativity, and wellbeing. The results of research to date have been mixed. Whereas most studies have reported positive impacts of microdosing, some microdosers have also reported adverse effects. In addition, research to date has revealed inconsistent patterns of change in personality traits. This prospective study explored the relationship between microdosing, personality change, and emotional awareness. Measures of personality and alexithymia were collected at two time points. 76 microdosers participated at baseline. Invitations to a follow-up survey were sent out after 31 days, and 24 participants were retained. Conscientiousness increased, while neuroticism decreased across these time points (n = 24). At baseline (N = 76), neuroticism was associated with alexithymia. In addition, neuroticism correlated negatively with duration of prior microdosing experience, and extraversion correlated positively with both duration of prior microdosing experience and lifetime number of microdoses. These results suggest that microdosing might have an impact on otherwise stable personality traits.

INTRODUCTION

There has been recent growing public awareness about the phenomenon of "microdosing", with stories featured in Vogue (Mechling, 2017), Forbes (Williams, 2017), The New York Times (Glatter, 2015) and the Australian Financial Review (Valentish, 2018). Microdosing involves taking small quantities of psychedelic substances at regular or semi-regular intervals with the intention to improve psychological wellbeing, work performance and/or creativity (Fadiman, 2011). Classic psychedelic substances, such as lysergic acid diethylamide (LSD), psilocybin and mescaline, act (at least partially) by binding to the 2A subtype of 5-hydroxytryptamine receptors in the brain, where they act as agonists (Nichols, 2016). Consumption of recreational doses of psychedelic substances is usually associated with an alteration of consciousness. Such consumption often leads to changes in visual and auditory perception. Microdoses are considered subperceptual doses, usually ranging from 1/20 to 1/10 of a recreational, psychoactive dose (Kuypers et al., 2019). Ideally, such subperceptual doses should not result in visual effects (Fadiman, 2011) or a marked change in consciousness (Johnstad, 2018). LSD and psilocybin are the most common psychedelics reportedly microdosed (Anderson et al., 2019), so a typical microdose might be considered between 4 and 20 µg of LSD or between 0.1 and 0.6 g of dried psilocybin mushrooms (Lea, Amada, & Jungaberle, 2019). According to microdosers' self-reports, there is individual variance in the optimal dose required to obtain the intended effects while remaining subperceptual (Hupli, Berning, Zhuparris, & Fadiman, 2019). Most microdosers reportedly adjust their initial dose through trial and error (Lea et al., 2019). While various microdosing regimens have been identified, most people reported microdosing between once and a few times a week, usually in the morning and as a cyclic activity (e.g., every third day). According to Johnstad (2018), most experienced microdosers restricted their use to phases lasting from a few weeks to a few months, as some users believe that microdoses may not provide benefits when used for long durations.
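As a trivial worked example of the 1/20 to 1/10 rule (our own arithmetic; the 100 µg reference recreational dose is an assumption for illustration, not dosing guidance):

```python
# Illustrative arithmetic only: the 1/20-1/10 rule applied to an assumed
# typical recreational LSD dose.
recreational_lsd_ug = 100
lo, hi = recreational_lsd_ug / 20, recreational_lsd_ug / 10
print(f"1/20-1/10 rule -> {lo:.0f}-{hi:.0f} ug")   # 5-10 ug, within 4-20 ug
```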
Although acute effects of microdosing have been found to be subtle in clinical trials (Bershad, Schepers, Bremmer, Lee, & Wit, 2019; Ramaekers et al., 2020; Yanakieva et al., 2018), proponents claim that regular microdosing can lead to a range of long-term benefits, such as improved mood, wellbeing, sociability, creativity and performance (Webb, Copes, & Hendricks, 2019). Andersson and Kjellgren's (2019) thematic analysis of 32 microdosing videos on the Youtube platform aimed to understand microdosers' experiences, expectations, approaches, and viewpoints. They found claims that microdosing led to a facilitation of self-reflection and personal insights. Meanwhile, people who engage in microdosing primarily to address an existing mental health issue have reported they do it to self-treat depression, (social) anxiety, suicidality, addiction, trauma, intrusive thoughts, and chronic pain (Petranker, Kim, & Anderson, 2020). Although psychedelic microdosing has been growing in popularity (Anderson et al., 2019; Hupli et al., 2019), Fadiman (2011) acknowledged that the practice likely has roots going back to indigenous cultures and healers who likely "systematically and fully explored every dose level" (p. 210). He further recognized that modern psychedelic research had often overlooked these subperceptual doses. Following a 30-year embargo on psychedelic research (Strauss, Bright, & Williams, 2016), there has been a recent resurgence of scientific interest in the therapeutic and medicinal effects of psychedelic substances, often referred to as the psychedelic renaissance (Bright & Williams, 2018). Contemporary clinical research on higher doses of psychedelics has shown that LSD administered in a supportive setting increases optimism in healthy subjects (Carhart-Harris et al., 2016). High doses of psilocybin decreased depressive and anxious symptoms in patients with life-threatening cancer diagnoses (Griffiths et al., 2016) and promoted abstinence in patients with alcohol (Bogenschutz et al., 2015) and tobacco addiction (Garcia-Romeu, Johnson, & Griffiths, 2014). However, contemporary microdosing research remains limited. To date, few placebo-controlled, randomized, double-blind studies have investigated the effects of microdosing LSD on mood, perception and cognition. Yanakieva et al. (2018) observed changes in time perception following LSD microdosing. Bershad et al. (2019) observed a dose-related subjective sensitivity to drug effects, and an increase in ratings of vigor at 26 µg of LSD. Ramaekers et al. (2020) observed a significant increase in pain tolerance and decreased pain perception, which was significant at doses of 20 µg of LSD. Family et al. (2020) found no adverse effects from 5 to 20 µg LSD but no marked cognitive changes. In contrast, Hutten et al. (2020) found that low doses of LSD (5-20 µg) increased positive mood, friendliness, arousal, and decreased attentional lapses. However, increases in anxiety and confusion were also observed. Finally, Holze et al. (2020) found that the threshold dose for subjective drug effects was 10 µg of LSD. A non-blinded experimental study by Prochazkova et al. (2018) examined the cognitive-enhancing potential of microdosing psilocybin-containing truffles in a naturalistic setting. Microdosers were found to have significantly increased performance in convergent and divergent thinking after a non-blinded microdose.
Polito and Stevenson (2019) were the first researchers to systematically investigate the long-term effects of microdosing psychedelics by tracking the experiences of microdosers (N = 63) over a six-week period. Participants filled out daily reports and completed a comprehensive questionnaire battery at baseline and the end of the study. The results of this prospective study suggested that microdosing led to improved mental health (decreased depression and stress) and to improved attentional capacities (decreased mind wandering and increased levels of absorption). In addition, a small but significant increase of the personality trait neuroticism was observed. Neuroticism is a person's tendency to experience negative emotions more easily and has been described as emotional instability. Personality traits such as neuroticism are considered relatively stable constructs and typically not expected to alter over a short period of time. Differences in personality between microdosers and controls have also been found in the trait openness; however, to date this has only been reported in cross-sectional designs, so it is unclear if microdosing is the cause of these differences (Anderson et al., 2019; Bright, Gringart, Blatchford, & Bettinson, 2021).

Microdosing and personality

The increase in the personality trait neuroticism observed by Polito and Stevenson (2019) appears to contradict the finding that people who microdose tend to have improved mental health, observed by Polito and Stevenson and others (Anderson et al., 2019; Fadiman, 2011; Webb et al., 2019). However, Bright et al. (2021) found that microdosers had higher levels of depression and anxiety compared to a yoga control group. In a recent review of 14 microdosing studies, Kuypers (2020) observed that "while low LSD doses were experienced as pleasant, it was also shown that drug disliking and anxiety increased, and that a cycling pattern of depressive and euphoric mood changes can occur" (Kuypers, 2020, pp. 9-10). Andersson and Kjellgren (2019) observed increased negative emotion among microdosers, proposing increased emotional awareness may initially trigger negative emotions, but also provide "more insights and possibilities to work through personal issues" (Unwanted effects and lack of results, para. 2). Similarly, Polito and Stevenson (2019) suggested that reduced mind wandering and increased absorption in immediate experiences might assist participants to identify and process both negative and positive emotions, leading to more emotionally intense experiences. Our study aimed to further understand the relationship between microdosing, changes in personality over time and emotional awareness. Personality can be assessed by using the five-factor model of personality, where each of the factors (extraversion, agreeableness, conscientiousness, neuroticism and openness) represents a distinct personality trait (Costa & McCrae, 1992). Extraversion describes the sociability of an individual. An extroverted person might be more talkative in social settings or get excited more easily. Agreeableness reflects the degree to which a person may be altruistic, cooperative and trust others. Conscientiousness represents the determination and self-control of an individual. Individuals who score high on this scale are typically considered reliable, responsible and organized. Openness reflects someone's interest in new experiences and impressions.
Although not a classic personality trait, another individual difference characteristic that may shape the experience of microdosers is alexithymia. Alexithymia represents a person's emotional literacy and is an indicator of emotional insight. In particular, alexithymia reflects a difficulty in identifying, describing and expressing one's feelings (Bagby, Parker, & Taylor, 1994).

Research questions

1. Does microdosing lead to changes in self-reported personality traits?
2. Is an individual's level of prior microdosing experience related to self-reported personality traits?
3. Are emotional insight and neuroticism associated in people who microdose?
4. Does emotional insight predict further increases in neuroticism among people who microdose?

METHODS

Data were collected at two time points using a prospective within-subject design. There was a minimum interval of 31 days between Time 1 (T1) and Time 2 (T2) to ensure that self-reported microdosers would be able to have multiple dosing sessions between the time points.

Participants

Participants were recruited through announcements on webpages, newsletters and the social media of non-profit psychedelic organizations (e.g., PRISM Australia, the MIND Foundation, the OPEN Foundation, the Third Wave, microdosing.nl). Additional recruiting strategies included posts in psychedelic Facebook groups and online discussion forums (e.g., the microdosing subreddit on reddit.com). Recruitment occurred between October 2019 and April 2020. Participants were asked to only participate if they were 18 years or older, fluent in English, and had past and current experience with microdosing. Participants with a current mental health or neurological diagnosis, a current substance use disorder or a history of psychosis were asked to not participate. Ninety participants submitted complete responses at T1. After a minimum waiting period of 31 days, 32 participants responded to an invitation to complete the follow-up survey at T2, of which 28 participants submitted complete responses. Participants were excluded from the analysis if they: were not current microdosers (n = 4); microdosed a substance not considered a psychedelic (n = 2); reported dosing amounts typically considered a recreational dose (e.g., 75-100 µg of LSD [n = 5]); reported very high (non-psychedelic) drug use (n = 5). Finally, one participant reported using medication for depression during the course of the study and was excluded. Exclusion of these participants led to 76 responses for T1 and 24 responses for T1 and T2.

Procedure

Potential participants were directed to the study webpage, hosted on the Qualtrics platform. Participants who chose to begin the survey were asked to create an anonymous e-mail address. Instructions explaining how to do this were provided. To secure participants' anonymity, they were asked to use an unidentifiable name. Participants then completed the online survey at T1, which included: the completion of the substance use disorder screening tool 'Modified ASSIST'; the M5-50 Personality Questionnaire; and the 20-Item Toronto Alexithymia Scale (TAS-20). Participants also provided basic demographic information and answered questions about their microdosing behavior (e.g., type of substance consumed; primary motivation for microdosing). 31 days after survey completion, an invitation link to a follow-up survey was sent to the anonymous e-mail addresses of participants. Participants who did not respond to the follow-up survey at T2 were sent a second reminder after another 14 days.
The M5-50 Personality Questionnaire and TAS-20 were re-administered at T2. The surveys took about 15 minutes to complete and there was no incentive for participation. Ethical approval was provided by the Ethics Committee of the Georg-Elias-Müller-Institute of Psychology (Application number 2019/232).

Materials

Modified version of the Alcohol, Smoking and Substance Involvement Screening Test (World Health Organization, 2010). Developed by the World Health Organization, the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) was designed to detect risky and harmful substance use behavior, in addition to dependence. It is usually administered in primary healthcare settings. The ASSIST has demonstrated good reliability and feasibility, and good concurrent, construct, predictive and discriminant validity (World Health Organization, 2010). We used a modified version of the ASSIST consisting of 7 items, which collect information about the types of drugs ever used and their frequency of consumption. Follow-up items are asked to assess for hazardous use, harmful use, and dependence (e.g., "During the past three months, how often have you failed to do what was normally expected of you because of your use of [first drug, second drug, etc.]?"). The modified version asked about the frequency of substance use within the last year and social difficulties associated with substance use within the past three months. An 8th item, concerning injected substance use, was omitted. Ratings of frequency (e.g., "Never", "Daily or almost daily") were scored from 0 to 6 and an overall score was calculated. Any participant who had used a drug within the last three months and exceeded a total score of 27 for that drug was excluded from the analysis due to very high (non-psychedelic) substance use.

M5-50 Personality Questionnaire (Socha et al., 2010). The questionnaire consists of 50 items with 10 items for each subscale, taken from Goldberg's International Personality Item Pool (Socha et al., 2010). The M5-50 Personality Questionnaire was designed as a short form of Costa and McCrae's (1992) NEO-PI-R (Socha et al., 2010). A five-point Likert-type scale is used to rate level of agreement with each item, with scores ranging from 1 (Inaccurate) to 5 (Accurate). The measure has been shown to have good reliability and a reasonably good model fit (Socha et al., 2010).

20-Item Toronto Alexithymia Scale (TAS-20; Bagby et al., 1994). The TAS-20 is a 20-item measure of alexithymia. Questions focus on difficulties identifying feelings, difficulties describing feelings, and externally oriented thinking. Participants rate their level of agreement with each statement on a five-point Likert scale. The overall scale has demonstrated good internal consistency and good test-retest reliability (Bagby et al., 1994). During the analysis, it was discovered that due to an administrative error, Item 2 of the TAS-20 had been phrased incorrectly, and so this item was omitted from all calculations.

RESULTS

Participants consumed between 3 and 30 microdoses during the study period (M = 11.3 microdoses; SD = 7.53). As can be seen in Table 1, the majority of participants reported microdosing psilocybin or LSD. Specifically, 33 participants reported microdosing with psilocybin (Dose M = 0.367 g, SD = 0.503), and 23 participants reported microdosing with LSD (Dose M = 15.3 µg, SD = 6.27). There were an additional 4 psilocybin microdosers and an additional 5 LSD microdosers who did not provide clear information on their typical dose.
Finally, 11 participants reported microdosing a range of other substances including 1P-LSD/1cP-LSD, Ibogaine, DMT, 4-HO-MET, 4-AcO-MET, ALD-52, 25-I, and BOD. As can be seen in Table 2, the primary motives were Personal Growth and Self-Medication. A smaller number of participants reported microdosing primarily to Increase Productivity, Curiosity or to Increase Creativity. Specified text responses of participants coded as "Other" revealed that their motives were typically a combination of the motives outlined above. As can be seen in Tables 1 and 2, participants' reported substance and motive were relatively consistent at T1 and T2. Alpha was set at 0.05 for all analyses, parametric inferential statistics were used for all analyses, and assumptions were met unless stated otherwise. To explore whether any of the personality traits changed over time, five two-tailed, simple paired t-tests were conducted comparing mean scores of T1 against mean scores of T2. As can be seen in Table 3, conscientiousness significantly increased (n = 24, t = −2.26, P = 0.034, d = −0.460), while neuroticism significantly decreased (n = 24, t = 3.26, P = 0.003, d = 0.666). There was no change in extraversion, agreeableness, or openness. We also investigated whether prior microdosing experience (indexed by total lifetime doses, and number of months since first microdose) correlated with any personality traits at T1. Both indicators of prior experience were not normally distributed in our sample, so the Spearman correlation was used. Based on the responses of all 76 participants at T1, a negative correlation was found between prior experience in months and neuroticism (r = −0.237, P = 0.039), though not with participants' lifetime number of microdoses. Extraversion correlated positively with prior experience in months (r = 0.228, P = 0.047) and also with participants' lifetime number of microdoses (r = 0.262, P = 0.022). The relationship between emotional insightfulness and the personality trait neuroticism was explored by examining Pearson's correlation between alexithymia at T1 and neuroticism at T1. There was a significant positive association between alexithymia and neuroticism (N = 76, r = 0.526, P < 0.001). Finally, we investigated whether there was a relationship between alexithymia at baseline and changes in neuroticism over the course of the study using linear regression. Alexithymia at T1 was entered as the predictor, while the difference in neuroticism scores (Time 2 minus Time 1) was entered as the dependent variable. Alexithymia was not a significant predictor of the change in neuroticism (n = 24, P = 0.077, R² = 0.135). Data for this study are available at https://osf.io/f9vr5/.
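The analyses above follow a standard SciPy pipeline. A minimal sketch on synthetic stand-in data (not the study data; the simulated effect sizes are invented purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                       # synthetic stand-in data
n = 24
neuro_t1 = rng.normal(30, 6, n)
neuro_t2 = neuro_t1 - rng.normal(2, 3, n)            # simulated T1 -> T2 drop
alexithymia = 0.5 * neuro_t1 + rng.normal(0, 4, n)
months_experience = rng.integers(1, 60, n)

t, p = stats.ttest_rel(neuro_t1, neuro_t2)           # two-tailed paired t-test
diff = neuro_t1 - neuro_t2
d = diff.mean() / diff.std(ddof=1)                   # Cohen's d for paired data

rho, p_rho = stats.spearmanr(months_experience, neuro_t1)  # non-normal predictor
r, p_r = stats.pearsonr(alexithymia, neuro_t1)
reg = stats.linregress(alexithymia, neuro_t2 - neuro_t1)   # change-score model
print(f"t={t:.2f}, p={p:.3f}, d={d:.2f}; rho={rho:.2f}; r={r:.2f}; "
      f"R^2={reg.rvalue ** 2:.3f}")
```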
DISCUSSION

In the present study, we aimed to explore the relationship between microdosing, changes in personality over time, and alexithymia. After at least one month of microdosing, we observed significant increases in conscientiousness and decreases in neuroticism. Agreeableness, extraversion and openness were unaffected. Although alexithymia and neuroticism were positively correlated at T1, alexithymia did not predict neuroticism changes between T1 and T2. Extraversion was positively correlated with participants' number of lifetime doses and the duration of their prior microdosing experience. Neuroticism was negatively correlated with the duration of prior microdosing experience.

Microdosing and personality

Contrary to previous findings by Polito and Stevenson (2019), we observed a significant decrease in neuroticism. The present finding appears more consistent with other contemporary microdosing research (Anderson et al., 2019; Webb et al., 2019) and many anecdotal reports highlighting the positive effects of microdosing on mental health, mood and (psychosocial) wellbeing (Fadiman, 2011; Waldman, 2018). Our finding is also more consistent with clinical research on higher psychedelic doses, in which the administration of oral psilocybin (10 mg and 25 mg, one week apart) has been linked to decreased neuroticism at a three-month follow-up (Erritzoe et al., 2018). One key difference between the present study and Polito and Stevenson's study is that the samples appear to differ in terms of their prior microdosing experience. The majority of Polito and Stevenson's participants (66.7%) had microdosed 10 times or less, including a substantial portion (31.7%) who had never microdosed before taking part in the study. By contrast, only 37.5% of participants in the current study had microdosed 10 times or less, and all our participants reported at least some prior microdosing experience. As previously discussed, research by Andersson and Kjellgren (2019) suggests that microdosing may initially trigger negative emotions, which in turn provides opportunities to address personal issues. Polito and Stevenson's sample contained mainly participants who were microdosing-naïve when entering the study or still in an early phase of their microdosing experience. The initial phase of microdosing may have increased awareness of unaddressed (negative) emotions, leading to increased scores in neuroticism. Participants in the current sample, due to relatively greater prior exposure to microdosing, may have learned to process and integrate their emotions better. This interpretation is supported by the negative correlation between prior microdosing experience and neuroticism at T1. Longer term microdosing might actually reduce rather than increase neuroticism. However, it is also possible that microdosers who demonstrate larger decreases in neuroticism are more likely to engage in long-term microdosing. Conscientiousness significantly increased in our sample, suggesting that participants perceived themselves as more organized, responsible and determined after microdosing. Microdosers might be able to complete their daily tasks with more focus, resulting in more reliable and organized behavior. The increase in conscientiousness is consistent with Andersson and Kjellgren's (2019) findings, where microdosers reported "less procrastination and a spontaneous impulse to clean the house, tidy drawers, pay bills, or address other postponed or neglected tasks" (Insights and transformation section, para. 3). The present finding is also consistent with research on higher psilocybin doses by Erritzoe et al. (2018), who observed a trend toward increased conscientiousness at a 3-month follow-up. We observed no significant change in extraversion, agreeableness, or openness from T1 to T2 (although there was a trend toward increased extraversion, P = 0.053). Extraversion was also positively correlated with prior microdosing experience. This suggests that extroverted individuals may be more likely to engage in long-term microdosing. Findings from high-dose psychedelic research have shown that ingestion of psilocybin has led to persistent increases in openness (Erritzoe et al., 2018; Maclean, Johnson, & Griffiths, 2011).
Similarly, in research with cross-sectional designs (Anderson et al., 2019; Bright et al., 2021), microdosers have been found to score higher in openness compared to non-microdosing controls. The null finding related to openness in the current study suggests that rather than microdosing increasing openness, it may be that people already high in openness are more likely to try microdosing.

Alexithymia

Alexithymia and neuroticism were positively correlated at T1, demonstrating that emotional insightfulness may be associated with lower neuroticism among microdosers. These findings are consistent with previous research that has found relationships between neuroticism and alexithymia in healthy Italian graduate students (Messina, Fogliani, & Paradiso, 2010), parents of daughters with eating disorders (Espina, 2003), and in subjects with medically unexplained physical symptoms (Gucht, Fischler, & Heiser, 2004). These diverse findings indicate that emotional insightfulness may generally be associated with lower neuroticism. In the current study, alexithymia at T1 did not predict subsequent neuroticism change. However, the negative correlation between prior microdosing experience and neuroticism suggests that more experienced participants may have learned to integrate their emotional insights.

Limitations and future directions

There are several limitations to the study design. Due to practical and legal restrictions, our study was not dose or placebo controlled. Participants microdosed a range of substances and differed in dosing amounts and their frequency of consumption. Although this allowed an examination of microdosing in a naturalistic setting, this study was not as precise as a controlled experiment with predetermined dosing amounts, fixed schedules, and a placebo-control condition. Participants were recruited worldwide and through a wide range of psychedelic organizations and forums. However, the survey was carried out in English, making it inaccessible for non-English-speaking microdosers. Due to ethical restrictions of this study, we were not able to recruit participants with a current mental health diagnosis or substance use disorder. This exclusion criterion limits the generalizability of our sample. Only 23.7% of participants selected Self-Medication as their primary motive; the number of people engaging in microdosing with this motive is likely to be higher in the general population. In addition, there could have been sampling bias, leading to an over-representation of participants who had mainly positive experiences with psychedelics. Finally, most participants reportedly engaged at least once in recreational drug use while microdosing (excluding alcoholic beverages and tobacco products), making it difficult to be certain that effects found are entirely due to microdosing. Other observational studies have also found that microdosers often report past experience with, or recent use of, recreational drugs (Anderson et al., 2019; Johnstad, 2018; Lea, Amada, Jungaberle, Schecke, Scherbaum & Klein, 2020; Rosenbaum et al., 2020; Webb et al., 2019), presenting a common limitation in studies of microdosing in naturalistic settings. For this reason, the current findings should be interpreted with caution. Future research could build upon our exploratory findings by testing specified hypotheses regarding personality change in an experimental setting with a placebo control condition.
Further, it will be important to untangle the effects of recreational psychedelic use by including psychedelic-naïve microdosers.

CONCLUSION

Our results indicate that microdosing may impact personality traits. In this study a short course of microdosing led to increased conscientiousness. Contrary to earlier findings with mostly naïve participants, we also found that neuroticism decreased in this sample of more experienced microdosers. In addition, prior microdosing experience correlated negatively with neuroticism and positively with extraversion. Finally, we found a negative association between emotional insight and neuroticism, although this was not predictive of future personality change. Future research could build upon these findings by investigating whether personality variables develop differently between microdosing-naïve and experienced participants. The role of alexithymia could also be explored in microdosers with a current mental health diagnosis.
Dangers of Herpesvirus Infection in SLE Patients Under Anifrolumab Treatment: Case Reports and Clinical Implications

Case series
Patients: Female, 32-year-old • Female, 56-year-old
Final Diagnosis: Herpesvirus infection
Symptoms: Abdominal pain • cognitive impairment • fever • headache • skin rash
Clinical Procedure: —
Specialty: Infectious Diseases • Rheumatology
Objective: Unusual clinical course

Background: Anifrolumab, a monoclonal antibody targeting the type 1 interferon (IFN-I) signaling pathway, holds promise as a therapeutic intervention for systemic lupus erythematosus (SLE). However, its use is associated with an increased risk of infections, particularly viral infections like herpes zoster (HZ). Results from the clinical trials on anifrolumab show yearly rates of upper respiratory tract infections of 34% and HZ of 6.1%. An increased frequency of other specific viral infections, including herpes simplex virus (HSV), was not reported.

Case Reports: Here, we present 2 cases of patients with SLE treated with anifrolumab, both experiencing severe adverse reactions in the form of disseminated herpesvirus infections, specifically disseminated HSV-2 and varicella zoster virus (VZV, HZ encephalitis). To the best of our knowledge, no previous reports of severe disseminated HSV-2 or HZ have been published in anifrolumab-treated patients. The patient in case 1 experienced a primary HSV-2 infection following anifrolumab treatment, potentially explaining the severity of the infection. The patient in case 2 had a history of previous HZ skin infections, which may have increased her risk of disseminated infection. Both patients recovered from the infections with minor sequelae, but they still require prophylactic antiviral treatment. These cases highlight the critical role of IFN-I immunity in protecting against herpesvirus infections.

Conclusions: Thorough risk assessment before anifrolumab initiation, considering the patient's viral infection history, vaccination status, and potential exposure risks, is essential. Administration of recombinant zoster vaccine before anifrolumab therapy may benefit susceptible individuals.

Introduction

The type I interferons (IFN-I) are a family of potent antiviral cytokines comprising IFN-α (12 subtypes), IFN-β, IFN-ω, IFN-ε, and IFN-κ, and play a central role in the control of viral infections. IFN-I also represents a compelling therapeutic target in systemic lupus erythematosus (SLE), which has garnered significant research attention over the past decade [1,2]. Studies have been conducted to explore the selective inhibition of IFN-α using sifalimumab and rontalizumab, as well as the comprehensive suppression of the IFN-I signaling pathway achieved with anifrolumab, targeting the IFN-α/β receptor subunit 1 (IFNAR1) (Figure 1) [2-7].
In 2022, anifrolumab was approved by the European Medicines Agency (EMA) for patients with SLE as an add-on therapy to the standard of care in moderate-to-severe disease as its exclusive indication [8]. Despite showing efficacy in the 3 existing randomized clinical trials on anifrolumab and SLE [9-11], concerns over serious adverse effects arose due to the broad inhibition of the IFN-I response, which is a critical part of the human antiviral response [12]. Results from the clinical trials and reports from the EMA on anifrolumab show undesirable effects, with yearly rates of upper respiratory tract infections of 34% and herpes zoster (HZ) infections of 6.1% [9-11,13]. The long-term follow-up (extension of the feeder trials TULIP-1 and TULIP-2), encompassing 4 years of treatment, reported similar rates [10,11,13]. HZ infections were predominantly mild and resolved without discontinuing anifrolumab therapy, even though cases of disseminated HZ infection with multi-dermal and central nervous system involvement have been reported to the EMA [13]. An increased frequency of other specific viral infections, including herpes simplex virus (HSV), was not reported.

Here, we present 2 cases of severe disseminated herpesvirus infections observed in patients with SLE undergoing treatment with anifrolumab.

Case Reports

Case 1

The patient was a 32-year-old woman with SLE since the age of 20 years. The diagnosis was established based on a constellation of clinical and immunological manifestations. The clinical features were fever, oral ulcers, serositis, arthritis, alopecia, skin rash, and antiphospholipid syndrome (APS). Her blood samples showed lymphopenia and normocytic anemia of chronic disease, with immunological and inflammatory manifestations such as low complement C3, elevated C-reactive protein, and the following positive autoantibodies: antinuclear antibody with a homogenous nuclear staining pattern, anti-double-stranded deoxyribonucleic acid (dsDNA) antibody, anti-Sjögren syndrome-related antibody, and Coombs test. At the age of 25 years, the diagnosis of APS was established based on persistent positivity in the lupus anticoagulant test (single antiphospholipid antibody positivity) and the occurrence of a deep vein thrombosis and pulmonary embolism within 6 months from the positive test results. Apart from that, no organ damage had occurred. Other comorbidities included attention-deficit/hyperactivity disorder (AD/HD). Regarding anticoagulation therapy, she was initially treated with warfarin upon the diagnosis of APS, but managing the therapeutic dose proved challenging over the years. Consequently, after 7 years without any recurrence of thrombosis, she was switched to rivaroxaban. Beside her anticoagulant treatment and AD/HD treatment with methylphenidate, her immunosuppressive treatment prior to the present disease course included hydroxychloroquine (HCQ); she had previously been treated with azathioprine and

In 2023, she presented with a moderate SLE flare with extreme fatigue, recurrent arthralgia, severe skin rash, persistent mild cognitive impairment (memory and concentration difficulties that were primarily attributed to the AD/HD diagnosis), and reduced quality of life. No sign of serological activity was seen, with normal C3 levels and persistently undetectable levels of anti-dsDNA antibodies since 2014. She was prescribed anifrolumab treatment (intravenous 300 mg every 28 days) as an add-on to HCQ. Six weeks after initiation of anifrolumab treatment (after 2 infusions with no clinical response to
treatment yet), she was admitted to the hospital with fever, lower abdominal pain, and severe headache. The anti-HSV-2 immunoglobulin G (IgG) antibodies were positive 3 weeks after admission, but previous HSV-2 antibody measurements were not available. Other infectious, autoimmune (including SLE disease flare), and malignant differential diagnoses were considered but ultimately excluded. Anti-dsDNA antibody levels remained undetectable. Whole-genome sequencing found no evidence of an inborn error of immunity, based on an analysis of 572 associated genes, according to the most recent guidelines [14]. The only other positive microbiological finding was Ureaplasma parvum DNA from a vaginal swab.

The patient was treated with antibiotics, including meropenem and doxycycline, during hospitalization. After the suspicion of HSV-2 infection (later confirmed by PCR from blood and vaginal material), the patient was treated with intravenous acyclovir (30 mg/kg/day) for 3 weeks, followed by oral valacyclovir 2 g/day for 2 weeks. Following the antiviral intervention, clinical and laboratory improvements were observed.

No immunosuppressives aside from HCQ were administered during the hospitalization. Following discharge, a month after discontinuation of antiviral therapy, the patient presented with a recurrent HSV-2-positive skin lesion, prompting treatment with valacyclovir followed by prophylaxis (acyclovir 800 mg/day). As of today, almost 6 months after the HSV-2 disease, the patient has recovered but continues to require prophylactic acyclovir treatment and treatment for post-herpetic neuralgia. Notably, there has been no sign of SLE disease activity. The patient is currently maintained on HCQ, with no additional immunosuppressive medications administered since the last dose of anifrolumab.

Case 2

The patient was a 56-year-old woman with SLE and mixed connective tissue disease features since the age of 25 years. Clinical characteristics of the disease included malar rash, photosensitivity, arthritis, oral ulcer, alopecia, puffy fingers, and Raynaud's disease. Immunological characteristics included low complement C3 and C4, lymphopenia, and the following positive autoantibodies: antinuclear antibody with nucleus speckled staining pattern, anti-dsDNA antibody, anti-Smith antibody, and anti-U1 ribonucleoprotein antibody.

The patient had no comorbidities, signs of organ damage, or history of central nervous system or renal involvement. Since the diagnosis of SLE, the patient has been continuously on HCQ. Treatments with azathioprine, methotrexate, and mycophenolate had proven inadequate to treat her severe skin manifestations and persistent arthritis. The patient commenced belimumab (anti-B-lymphocyte stimulator monoclonal antibody) in 2020, resulting in near remission of her disease, albeit with ongoing skin manifestations.
In 2022, a significant flare involving the skin, joints, and alopecia, coupled with the necessity for continuous prednisolone doses exceeding 10 mg/day, prompted a transition to anifrolumab treatment (intravenous 300 mg every 28 days) as an adjunct to HCQ and prednisolone 5 mg/day. The patient reported no initial adverse effects from the anifrolumab treatment. Having been treated for single-dermatome skin HZ twice by her general practitioner years prior to the current presentation, she opted not to receive vaccination for HZ before commencing anifrolumab therapy. The patient experienced substantial improvement in clinical symptoms, particularly the resolution of skin manifestations and hand puffiness, which had not previously achieved remission with other treatments. Persistently, anti-dsDNA antibody levels were in the range of 20-50 × 10³ IE/L and C3 levels below 0.9 g/L.

One and a half years after anifrolumab initiation, she was admitted to the hospital due to fever, severe headache, mild cognitive impairment, and a rash and neuropathic pain on the left side of the thorax overlapping several dermatomes. Analysis of the rash vesicles detected VZV DNA by PCR. A cerebrospinal fluid analysis found 489 × 10⁶ leucocytes/L (reference interval: <4 × 10⁶/L) and positivity for VZV DNA. The patient received a diagnosis of disseminated VZV, with suspected central nervous system involvement. At the time of admission, her immunosuppressive treatment included HCQ 400 mg/day, prednisolone 10 mg/day, and anifrolumab 300 mg every 28 days (latest administration 4 weeks before admission). HCQ and prednisolone were continued during hospitalization, and intravenous antiviral treatment with acyclovir (30 mg/kg/day) was given. The patient subsequently experienced slow clinical improvement. No signs of SLE activity were observed. No significant increase in anti-dsDNA antibody levels was observed during hospitalization. After 2 weeks of intravenous antiviral treatment, the patient was discharged and switched to oral treatment with valacyclovir 6 g/day for another week.

Presently, 1 month after discharge, the patient is in recovery, although she continues to experience intermittent headaches, insomnia, and mild cognitive impairment. Her SLE disease remains in remission on HCQ and prednisolone 10 mg/day. Anifrolumab has not been administered since the hospitalization. The patient has not previously been vaccinated against HZ, but a recombinant zoster vaccine (RZV) is scheduled before any potential further immunosuppression.

Discussion

In the past decades, significant progress has been made in understanding SLE disease and developing novel treatment strategies to control disease activity and prevent organ damage. Despite these advances, infections remain a prominent cause of death in SLE [15]. Bacterial infections constitute a substantial proportion, accounting for up to 75% of all reported infections in SLE [16]. Various opportunistic viral and mycobacterial infections also contribute significantly to morbidity, and the co-occurrence of infections with disease flares further complicates the diagnosis and management of concurrent infections in SLE [17]. Infections in SLE are influenced by various risk factors, including conventional elements such as advanced age and concurrent comorbidities, alongside more specific characteristics, such as recent onset of SLE (<5 years), early disease onset, high disease activity, presence of organ damage, and use of glucocorticoids and other immunosuppressive therapies [17].
Upper respiratory tract infections are the most prevalent type of infection in SLE [18]. Among viral infections in general, HZ stands out as one of the most prevalent [19]. In contrast to other types of infections in SLE, two-thirds of HZ cases occur after 5 years of disease duration, and 80% exhibit no or mild disease activity [19]. HZ in SLE generally carries a favorable prognosis; however, independent risk factors for an unfavorable outcome include lymphopenia and immunosuppressive drugs [19,20].

In this paper, we highlight 2 cases of patients with SLE who were subjected to anifrolumab therapy and experienced severe adverse reactions manifested as disseminated herpesvirus infections, specifically disseminated HSV-2 and VZV. Regarding viral infections following anifrolumab treatment, HZ infection is well known from previous clinical trials [9-11], and reports of severe COVID-19 infections in 2 anifrolumab-treated SLE patients have previously been published [21]. To the best of our knowledge, no severe disseminated HZ infections or severe HSV-2 infections have previously been reported in anifrolumab-treated patients. Rigorous clinical investigations, including a comprehensive genetic immunodeficiency workup involving whole-genome sequencing in case 1, failed to reveal any alternative explanations for the observed infections, strongly implicating anifrolumab as a potential contributing factor in both cases.

Typically, HSV-2 infections manifest with mild symptoms localized to the skin or mucosa, although disseminated infections are well documented in immunocompromised individuals, particularly in cases presumed to be primary infections [22-24]. The status of a primary HSV-2 infection in case 1 remains uncertain. Notably, she lacked a prior history of HSV symptoms and had a possible exposure to HSV-2 infection 3 weeks before admission.

A comprehensive review in 2020 indicates a pooled mean seroprevalence of HSV-2 antibodies among European women of 14% [25]. Considering the relatively low seroprevalence of anti-HSV-2 antibodies, the practicality of incorporating a screening protocol for HSV-2 antibodies prior to commencing anifrolumab treatment appears to be of limited value. Nonetheless, this stresses the importance of assessing the risks of potential exposure to HSV infections before treatment initiation. Furthermore, exploring the screening of antibodies against other potentially detrimental viruses in scenarios involving primary infections could yield significant benefits. For example, screening for HSV-1 IgG antibodies may be worthwhile, given its reported European mean seroprevalence of 67% [26]. Similarly, assessing IgG antibodies against cytomegalovirus, with a seroprevalence of 70% in European women of reproductive age, can offer valuable insights into the risk evaluation before initiating anifrolumab therapy [27].

It is unknown whether the APS diagnosis in case 1 might have influenced the anifrolumab treatment or susceptibility to infection. In previous trials on anifrolumab and SLE [9-11], APS was not an exclusion criterion if anticoagulant therapy had been stable for 3 months and there was no history of severe or catastrophic APS. However, the number of patients with APS included in these trials, and the results of stratified analyses based on the APS diagnosis, have not been published.
Primary defects in the IFN-I system have been associated with severe viral disease, including COVID-19, herpesvirus disease, and influenza [28,29]. Disseminated HSV infection, as well as disseminated VZV infection after live viral VZV vaccination, has previously been reported in children with complete deficiency of either of the type I IFN receptor subunits IFNAR1 or IFNAR2, and in indigenous people of Polynesian and Arctic ancestry, in whom loss-of-function alleles of these genes are relatively common [30,31]. Recently, a case of disseminated primary HSV-2 infection was described in a patient harboring high levels of anti-IFN-α autoantibodies, and many years ago, disseminated varicella zoster disease was described in an elderly and otherwise healthy woman with neutralizing anti-IFN antibodies [32,33]. Notably, anti-IFN-α autoantibodies have been observed in up to 11% of individuals with SLE, correlating with viral infections and tuberculosis [34,35]. More recently, anti-IFN-α antibodies have been linked with reduced disease activity in SLE [36]. Nevertheless, the clinical implications of these autoantibodies remain uncertain. In fact, their presence was also reported in approximately 10% of cases with severe COVID-19 and 5% of cases with severe influenza pneumonia [37,38].

Regarding VZV, the patient in case 2 was not vaccinated against HZ. She had a history of previous HZ but no other HZ risk factors except SLE disease and age > 50 years [39]. The RZV (Shingrix) has recently been granted expanded indications for immunocompromised adults aged >18 years by the Food and Drug Administration [40]. So far, there are no clinical efficacy data on RZV in patients with SLE, but it has been shown to be safe and effective in the general population [41]. The efficacy of the RZV in preventing HZ likely correlates with a comparable reduction in central nervous system manifestations (HZ encephalitis), although this has not been studied so far [42].

The American College of Rheumatology 2022 guideline for vaccination in patients with rheumatic diseases recommends RZV for all patients >18 years taking immunosuppressive medications [43]. The European League Against Rheumatism 2023 guideline for the management of SLE also recommends RZV for all patients [44].

Conclusions
These case presentations highlight the occurrence of severe adverse effects involving disseminated HSV-2 and VZV infections after anifrolumab treatment in 2 patients with SLE. Together with previous publications, they attest to the significant importance of IFN-I immunity in protection against herpesvirus infections in humans. A meticulous and prompt diagnostic workup for viral infection should be performed in an affected anifrolumab-treated patient to avoid or reduce potentially irreversible damage. This emphasizes the need for continuous surveillance and comprehensive reporting of adverse events associated with newly approved drugs. Furthermore, it necessitates a meticulous risk assessment before initiating such treatments, explicitly focusing on the patient's viral infection history, vaccination status, and potential exposure risks. A reasonable strategy to mitigate the escalated risk of severe viral infections, especially in susceptible individuals, is therapeutic education and patient awareness of viral risks, together with the proactive administration of RZV before initiating anifrolumab. Additionally, screening the antibody status of various viruses can provide valuable insights in select cases.
Figure 1. Mechanisms of action of type 1 interferon-linked monoclonal antibodies. The type 1 interferon (IFN-I) receptor (IFNAR) comprises the 2 subunits IFNAR1 and IFNAR2. Upon activation, intracellular receptor-linked kinases initiate phosphorylation and coupling of specific proteins, which subsequently translocate to the nucleus. Here, they activate the expression of IFN-stimulated genes, which mediate the antiviral effector functions of IFN-I. Anifrolumab (underlined in red) targets IFNAR1, thereby preventing IFN-I (here illustrated with IFN-α and IFN-β) from activating IFNAR. Sifalimumab and rontalizumab selectively bind to IFN-α, allowing the other cytokine members of the IFN-I family (such as IFN-β) to still activate IFNAR. Created with BioRender.com.

Laboratory investigations revealed an elevated C-reactive protein level of 412 mg/L (reference interval (RI): <8 mg/L), alanine transaminase of 591 U/L (RI: 10-45 U/L), lactate dehydrogenase of 865 U/L (RI: 105-205 U/L), and low-level pleocytosis in the cerebrospinal fluid (leucocyte count of 7 × 10⁶/L [RI: <4 × 10⁶/L] with no concomitant red blood cells). A computed tomography scan of the thorax, abdomen, and pelvis revealed signs of widespread inflammation, with (1) bilateral pleural and pericardial effusion, (2) peripancreatic fluid and adipose tissue reaction, (3) ascites and periportal edema, (4) numerous hypodense changes in the liver (suggestive of herpes hepatitis) (Figure 2), (5) splenomegaly, and (6) reaction around the vagina and left ovary. Several positive HSV-2 DNA polymerase chain reaction (PCR) tests from the plasma and ascites confirmed the diagnosis of disseminated HSV-2 infection. Exposures preceding admission included unprotected vaginal intercourse 3 weeks before. A vaginal swab also yielded a positive PCR test for HSV-2 DNA. Exacerbation of cognitive impairment, headache, photophobia, and low-level pleocytosis strongly suggested HSV-2 central nervous system involvement as well. However, HSV-2 analysis of the cerebrospinal fluid and cerebral magnetic resonance imaging did not confirm this, since the PCR tests from the cerebrospinal fluid were negative and the scan was normal.
Fermion Loops, conserved currents and single-W

The relevance of fermion loop corrections to four-fermion processes at $e^+e^-$ colliders is reviewed with regard to the recent extension to the case of massive external particles and its application to single-W processes.

Introduction
The problem of preserving gauge invariance in calculations involving unstable particles has been well known for several years [1,2,3,4]. It is connected with the fact that the use of a width (e.g., $i\Gamma M$) in the denominator of an s-channel unstable boson propagator is necessary to prevent divergences at $p^2 = M^2$, but it is in fact an effective way of including only a part of the higher order corrections, and it therefore violates gauge invariance.

There are various examples in which this violation becomes numerically relevant. For instance, this happens at tree level with four-fermion final states for WW production at high energies, where gauge cancellations among the three double resonant (CC03) diagrams become relevant, and for contributions at low $e^-$ angle. This last case is relevant for final states like $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$, $e^-e^+ \to e^-\bar{\nu}_e\mu^+\nu_\mu$ or $e^-e^+ \to e^-e^+\nu_e\bar{\nu}_e$ with the $e^-$ undetected, which are commonly referred to as single-W processes. For them, gauge invariance determines the behaviour of the amplitude as a function of t. In the above-mentioned examples even small violations of gauge invariance may easily give results which are completely unreliable, and differ from the correct ones not by a few percent but by some large factor.

Various gauge-restoring methods have been described in the literature and have been used to avoid such inconsistencies. Some of the most used are:

Fixed width (FW): In all massive-boson propagators one performs everywhere the substitution $M^2 \to M^2 - i\Gamma M$. This gives an unphysical width in the t-channel, but retains U(1) gauge invariance.

Complex Mass (CM): The substitution $M^2 \to M^2 - i\Gamma M$ is applied not only in propagators but also in relations involving couplings. This gives unphysical complex couplings in addition to the width in the t-channel. It has, however, the advantage of preserving both U(1) and SU(2) Ward identities.

Fermion Loop (FL): At least the imaginary part of all (propagator and vertex) fermion loop corrections is used together with resummed boson propagators. The real part of the FL corrections can be considered as well. It is not necessary for gauge restoration, but it constitutes an important gauge-invariant subset of the radiative corrections, which automatically determines the correct evolution of the coupling constants.

It has to be noticed that the first schemes are somewhat "ad hoc" prescriptions to restore gauge invariance by introducing some incorrect feature, whose consequences are expected to be of little numerical impact, but this should in principle be verified case by case. FL, on the contrary, is the only fully consistent and justified scheme in field theory. In the following we review the FL scheme and its application to single-W processes, with particular regard to the latest results which take into account current non-conservation (massive external fermions).

Fermion Loop
The application of FL corrections to 4f processes has been fully described in the papers of refs. [1,4]. In the first paper only the imaginary part of the FL corrections (IFL) has been considered, and as a case study the process $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ has been used. The approximation of considering massless external fermions has been taken, which implies that all external currents are conserved (CC).
In this approximation U(1) Ward identities are satisfied by adding a fixed width $i\Gamma M$ to all denominators of W propagators (also in the t-channel), even if there is no physical justification for this procedure. IFL corresponds to resumming the imaginary part of the fermionic loop corrections in the propagators and adding the imaginary part of the vertex loop corrections. This has to be considered as the minimum set of corrections that has to be added in order to restore gauge invariance. With massless fermions in the internal loops and in the decay width, a running width $i\Gamma p^2/M$ in the propagators satisfies gauge invariance. Some numerical applications to $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ have been considered in [1].

The complete FL corrections have instead been computed in ref. [4], again in the massless external fermion approximation, but with massive fermions in the loops. The renormalization of gauge boson masses at their (gauge-invariant) complex poles [5] has been used. It turns out that in this scheme all corrections can be reabsorbed in running couplings and renormalized propagator functions and triple vertices, so that an effective Born prescription can be used in which only tree-level-type diagrams appear. Numerical applications to $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ at low $e^-$ angle and $e^-e^+ \to \mu^-\bar{\nu}_\mu u\bar{d}$ at high energies show differences from other schemes. At high energies, differences with IFL are also found.

Single W
Processes with at least one electron and one electron neutrino, like $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$, $e^-e^+ \to e^-\bar{\nu}_e\mu^+\nu_\mu$, and $e^-e^+ \to e^-e^+\nu_e\bar{\nu}_e$, besides being relevant to WW or ZZ physics, are particularly interesting in the kinematical configuration in which the electron is lost in the beam pipe. In such a configuration (single-W), they become important as a background to searches and for anomalous coupling studies. The cross sections are significant because of t-channel contributions, and they are directly measured at LEP2.

Single-W processes are divergent in the massless external fermion approximation. Therefore not only the external electron masses have to be exactly accounted for, but also those of the other fermions (u, d, µ, ...). Several fully massive four-fermion Monte Carlo codes are now available: COMPHEP [6], GRC4F [7], WPHACT [8], KORALW [9], NEXTCALIBUR [10], SWAP [11] and WTO [12], which accounts for masses where they become important. Good technical agreement among all these codes has been achieved for single-W processes in tuned comparisons. In fig. 1 one can find the results from the first three codes up to Linear Collider energies, while we refer to Ref. [13] for further comparisons with the others at LEP2 energies. It has to be remarked that in fig. 1 the three codes used, respectively, the overall, $L_{\mu\nu}$ transform [14] and fixed width gauge-restoring schemes.

IFL and Non-Conserved Currents
The FL calculations of refs. [1,4] are not appropriate for single-W processes because of the assumption of conserved currents (massless external fermions), which is in conflict with the necessity to account for external fermion masses. Nevertheless, numerical studies have been performed using these results together with fully massive matrix elements in [15], where it was noticed that the corresponding U(1) gauge violation is proportional to $m_e$. It can however be enhanced by large factors at high energy, as will be shown in Table 1. The Imaginary Fermion Loop scheme with fully massive ME and exact non-conserved current contributions has been studied recently [16] and implemented in WPHACT. The unitary gauge has been used.
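As a purely illustrative aid (not code from any of the programs cited above, and with indicative parameter values), the following minimal sketch compares the fixed-width and running-width forms of the W propagator denominator discussed in this and the following section:

```python
import numpy as np

M_W, GAMMA_W = 80.4, 2.1  # illustrative W mass and width in GeV

def prop_fixed_width(p2):
    """Fixed-width (FW) scheme: M^2 -> M^2 - i*Gamma*M everywhere."""
    return 1.0 / (p2 - M_W**2 + 1j * GAMMA_W * M_W)

def prop_running_width(p2):
    """Running width i*Gamma*p^2/M, as generated by the imaginary part
    of the massless fermion-loop self-energy (s-channel, p2 > 0)."""
    return 1.0 / (p2 - M_W**2 + 1j * GAMMA_W * p2 / M_W)

for sqrt_p2 in (60.0, 80.4, 200.0):  # below, on, and above resonance
    p2 = sqrt_p2**2
    fw, rw = prop_fixed_width(p2), prop_running_width(p2)
    print(f"sqrt(p2) = {sqrt_p2:6.1f} GeV  "
          f"|FW|^2 = {abs(fw)**2:.3e}  |running|^2 = {abs(rw)**2:.3e}")
```

The two prescriptions coincide on resonance and differ away from it; note that for a t-channel exchange ($p^2 < 0$) the running-width term would flip sign, which illustrates why the naive prescriptions require care outside the s-channel.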
While for massless internal fermions the Ward Identities (WI) are satisfied by the fixed-width propagator, with the "running width" $\Pi = p^2\Gamma_W/M_W$ the WI are properly satisfied only if exact IFL triple vertex corrections are computed. As can be deduced from Tables 1 and 2, the use of IFL does not give significant numerical differences with respect to the "ad hoc" schemes for cross sections with typical single-W cuts. The last line of Table 1 shows instead that the approximation of conserved currents together with massive ME leads to inconsistent numerical results at high energies. Some differences between IFL and other schemes are evident (fig. 2) when one considers mass distributions. This is connected to the fact that with IFL one properly makes use of the running width.

FL and Non-Conserved Currents
Big theoretical uncertainties for single-W processes, in the absence of complete $O(\alpha)$ corrections, are connected to the scales of the couplings and to the scale for ISR with dominating t-channel contributions. Complete FL calculations with massive external fermions have recently been performed [17]. These are necessary to solve the first of the uncertainties just mentioned. Complex mass renormalization has been used as in [4], and it leads again to effective Born calculations. Besides running couplings and renormalized propagators, now also running boson masses are needed; they are defined through the renormalization at the (gauge-invariant) complex poles. It has to be remarked that, using the Feynman gauge, the complete one-loop resummation gives a W propagator which is equivalent to an effective "unitary gauge" form.

The complete FL scheme for non-conserved currents has been implemented in WTO for $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ and $e^-e^+ \to e^-\bar{\nu}_e\mu^+\nu_\mu$, where the fermion masses are accounted for "when needed": in t-channel γ exchange diagrams for the $\log(m^2/s)$ and constant contributions. The numerical results [17] show differences with respect to the tree-level fixed-width $G_F$ scheme of up to ∼7%. This result is reported in fig. 3. One may expect that the bulk of such a difference comes from α running effects, as it is rather obvious that the value of $\alpha_{em}$ in the $G_F$ renormalization scheme is not the correct one to describe t-channel γ propagator couplings. For this reason, $\alpha(t)$ with IFL (IFLα) has been implemented in WPHACT for the t-channel contributions only. For the cuts used at LEP2 energies for single-W, this seems to describe $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ to a very good approximation, as can be deduced from a comparison with FL both for cross sections (Table 3) and for angular distributions (Table 4). Moreover, it has been checked in fig. 4 that the agreement does not depend on the $M(u\bar{d})$ invariant mass cut and remains below 1% down to very low cuts. For the process $e^-e^+ \to e^-\bar{\nu}_e\mu^+\nu_\mu$, instead, the difference between FL and IFLα turns out to be of the order of 2-3%, both in cross sections (Table 5) and in angular distributions (Table 6). The reason for such a different behaviour between the two processes is probably due to the cuts and the relative importance of multiperipheral contributions in them. The results of Tables 5 and 6 in any case indicate that the running of α is not in general sufficient for a very accurate description of the effects accounted for by complete FL calculations.

Table 5: Total single-W cross-section in fb for $e^+e^- \to e^-\bar{\nu}_e\mu^+\nu_\mu$, for $|\cos\theta_e| > 0.997$, $E_\mu > 15$ GeV, and $|\cos\theta_\mu| < 0.$

In fig. 5 one can see the differences between the IFL and IFLα angular distributions for various processes, including $e^+e^- \to e^+e^-\nu\bar{\nu}$, where FL corrections are not available yet.
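To make the α-running argument concrete, here is a small self-contained sketch (our own illustration, not code from WPHACT or WTO) of the one-loop QED running coupling driven by fermion loops; the effective light-quark masses are illustrative assumptions, since the paper does not quote the values used, and the leading-log expression is only valid for $|t| \gg m_f^2$:

```python
import math

ALPHA0 = 1 / 137.035999  # fine-structure constant at zero momentum transfer

# (charge, colour factor, mass in GeV); the light-quark "masses" are
# effective illustrative choices, not physical current-quark masses.
FERMIONS = [(-1, 1, 0.000511), (-1, 1, 0.10566), (-1, 1, 1.777),  # e, mu, tau
            (2/3, 3, 0.066), (-1/3, 3, 0.066),                    # u, d
            (-1/3, 3, 0.15), (2/3, 3, 1.2), (-1/3, 3, 4.6)]       # s, c, b

def alpha_running(t):
    """One-loop running of alpha_em at space-like momentum transfer t < 0:
    alpha(t) = alpha0 / (1 - delta_alpha), with the standard leading-log
    fermion-loop vacuum polarization contribution per flavour."""
    q2 = abs(t)
    delta = sum(ALPHA0 / (3 * math.pi) * nc * q * q
                * (math.log(q2 / m**2) - 5.0 / 3.0)
                for q, nc, m in FERMIONS if q2 > m**2)
    return ALPHA0 / (1 - delta)

print(1 / alpha_running(-10.0**2))  # effective 1/alpha at |t| = (10 GeV)^2
```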
For the process $e^+e^- \to e^+e^-\nu\bar{\nu}$, the estimate by WPHACT of the theoretical uncertainty of the IFLα calculations is of the order of about 3% [13]. This refers only to the uncertainty connected to the absence of complete FL calculations, and not to the one due to the treatment of ISR/FSR in the presence of dominant t-channel contributions [13].

Conclusions
FL corrections have been extended to massive external fermions in 4f physics. With them, calculations can be safely performed down to $\theta_e = 0$. Single-W processes represent one of the most important applications of this extension. IFL corrections show that "ad hoc" gauge-restoring schemes are reliable for total cross sections but may produce some differences in distributions. Complete FL corrections are a gauge-invariant subset of the radiative corrections, essential for single-W processes. Their results differ from the $G_F$ scheme by several percent. IFL + α running for the t-channel reproduces the FL results at the less-than-1% level for $e^-e^+ \to e^-\bar{\nu}_e u\bar{d}$ and at the 2-3% level for $e^-e^+ \to e^-\bar{\nu}_e\mu^+\nu_\mu$. Further theoretical uncertainties for single-W, connected with ISR in t-channel dominated processes, have not been considered in this short account. Further analyses and improvements are probably still needed for single-W processes, especially at the Linear Collider.
DJ-1 attenuates the glycation of mitochondrial complex I and complex III in the post-ischemic heart

DJ-1 is a ubiquitously expressed protein that protects cells from stress through its conversion into an active protease. Recent work found that the active form of DJ-1 was induced in the ischemic heart as an endogenous mechanism to attenuate glycative stress (the non-enzymatic glycosylation of proteins). However, the specific proteins protected from glycative stress by DJ-1 are not known. Given that mitochondrial electron transport proteins have a propensity for being targets of glycative stress, we investigated whether DJ-1 regulates the glycation of Complex I and Complex III after myocardial ischemia-reperfusion (I/R) injury. Initial studies found that DJ-1 localized to the mitochondria and increased its interaction with Complex I and Complex III 3 days after the onset of myocardial I/R injury. Next, we investigated the role DJ-1 plays in modulating glycative stress in the mitochondria. Analysis revealed that, compared to wild-type control mice, mitochondria from DJ-1 deficient (DJ-1 KO) hearts showed increased levels of glycative stress following I/R. Additionally, Complex I and Complex III glycation were found to be at higher levels in DJ-1 KO hearts. This corresponded with reduced complex activities, as well as reduced mitochondrial oxygen consumption and ATP synthesis in the presence of pyruvate and malate. To further determine if DJ-1 influenced the glycation of the complexes, an adeno-associated viral approach was used to over-express the active form of DJ-1 (AAV9-DJ1Δc). Under I/R conditions, the glycation of Complex I and Complex III was attenuated in hearts treated with AAV9-DJ1Δc. This was accompanied by improvements in complex activities, oxygen consumption, and ATP production. Together, these data suggest that cardiac DJ-1 maintains Complex I and Complex III efficiency and mitochondrial function during the recovery from I/R injury. In elucidating a specific mechanism for DJ-1's role in the post-ischemic heart, these data break new ground for potential therapeutic strategies using DJ-1 as a target.

Ischemic heart disease continues to have a high burden of disease globally 1 . Myocardial ischemia-reperfusion injury (I/R) is a major determining factor of the severity of ischemic heart disease 2 . Despite decades of research on therapeutic interventions for myocardial I/R, few have made it to the clinic 3 . The sheer complexity of the cellular processes that occur during the onset and progression of myocardial I/R is a major barrier to successful translational therapies. There is a deficit in the understanding of the biological mechanisms that occur in the hours and days following the onset of myocardial I/R injury 4 . Thus, it is critical to investigate the cell signaling systems that are still not fully elucidated.

DJ-1 is a cytoprotective protein that has been shown to play an important role in multiple cellular processes 5,6 . Early examination discovered that mutations and oxidative damage to the protein were associated with early-onset Parkinson's disease. Thus, DJ-1 is most well characterized in the brain, where it has been revealed to have anti-oxidant and anti-apoptotic properties 6-8 . DJ-1 shares sequence homology with a family of bacterial proteases and with an E. coli chaperone that possesses protease activity 8 . Proteases regulate cellular processes by catalyzing the cleavage of peptide bonds in proteins.
Proteases generally reside in cells as latent precursors called zymogens, so as to avoid the potentially hazardous consequences of unregulated protease activity. In response to stress, DJ-1 is likewise converted into its active protease form.

DJ-1 localizes to the mitochondria and interacts with complex I and complex III after myocardial I/R.
Previous studies indicate that in response to different stimuli, DJ-1 localizes to the inner mitochondrial membrane and mitochondrial matrix 24,25 . Additionally, we previously found that DJ-1 localizes to the mitochondria 4 h after the onset of reperfusion following myocardial ischemia 10 . Here, we found that the expression of the full-length form of DJ-1 (DJ1FL) and the cleaved form (DJ1Δc) were both elevated in mitochondrial fractions at 3 days of reperfusion (Fig. 2). As noted, the physiological consequence of this localization is not fully understood. There is evidence that DJ-1 interacts with different components of the mitochondrial electron transport chain 24 . Therefore, we sought to determine if myocardial I/R induced the interaction of DJ-1 with Complex I and Complex III. For these experiments, we performed separate immunoprecipitation experiments using antibody capture kits for Complex I and Complex III. In both experiments, we found that the interaction of DJ-1 with Complex I and Complex III was increased 3 days post myocardial I/R injury (Fig. 3).

The glycation of the complexes was assessed next (Fig. 4). Analysis revealed that myocardial I/R injury increased the levels of CML bound to Complex I in the Wild-Type heart (Fig. 4A,B). The levels were further increased in the hearts of DJ-1 KO mice. To further assess the impact of glycation on Complex I, we next evaluated its activity (Fig. 4C). Myocardial I/R injury led to a marked decrease in Complex I activity in the Wild-Type heart. A further decline was observed in the hearts of DJ-1 KO mice. Similar results were observed for Complex III (Fig. 4D-F). Together, this data suggests that the absence of DJ-1 augments the onset of glycative stress in the mitochondria under I/R conditions, leading to the glycation and inhibition of Complex I and Complex III.

Mitochondrial function is suppressed in DJ-1 knockout hearts after I/R.
Next, we sought to characterize the mitochondrial functional phenotype of hearts from Wild-Type and DJ-1 KO mice following myocardial I/R injury. Specifically, respiration (oxygen consumption) was assessed using isolated mitochondria. For these experiments, pyruvate and malate were used as substrates. Basal respiration was assessed, followed by maximal ADP-stimulated state 3 respiration. Finally, the proportion of uncoupled respiration was assessed by measuring oxygen consumption following the addition of the ATP synthase inhibitor oligomycin, allowing for the calculation of the respiratory control ratio (RCR; state 3 respiration/post-oligomycin respiration) 26 . Myocardial I/R did not alter the basal respiration rates of mitochondria isolated from Wild-Type hearts (Fig. 5A). However, a marked decline in maximal ADP-stimulated state 3 respiration rates (Fig. 5B) and a slight increase in oligomycin-stimulated rates (Fig. 5C) were observed. Together this led to a reduction in RCR (Fig. 5D). Basal respiration rates of mitochondria isolated from DJ-1 KO hearts were lower following myocardial I/R compared to Sham levels. Likewise, the deficiency of DJ-1 led to a further I/R-induced reduction in maximal ADP-stimulated state 3 respiration rates, an increase in oligomycin-stimulated rates, and a reduction in RCR when compared to mitochondria from Wild-Type hearts.
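As a concrete illustration of the bookkeeping behind these endpoints, the short sketch below computes the RCR exactly as defined above, together with the ATP/O efficiency ratio reported in the next paragraph; all values are hypothetical placeholders, not data from this study:

```python
# Hypothetical respiration rates (e.g., nmol O2/min/mg mitochondrial protein).
state3_o2 = 210.0          # maximal ADP-stimulated (state 3) respiration
post_oligomycin_o2 = 35.0  # respiration after ATP-synthase inhibition
state3_atp = 480.0         # state 3 ATP synthesis rate (nmol ATP/min/mg)

rcr = state3_o2 / post_oligomycin_o2  # RCR = state 3 / post-oligomycin rate
atp_per_o = state3_atp / state3_o2    # ATP/O = ATP synthesis / O2 consumption

print(f"RCR = {rcr:.1f}")          # a lower RCR indicates uncoupling
print(f"ATP/O = {atp_per_o:.2f}")  # a lower ATP/O indicates lower efficiency
```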
Next, maximal rates of ATP synthesis from ADP were assessed during state 3 respiration (Fig. 5E). In agreement with the findings related to respiratory rates, myocardial I/R injury led to a decrease in ATP production rates, with lower rates observed in mitochondria isolated from DJ-1 KO hearts. Finally, the ratio of state 3 ATP synthesis rates to state 3 oxygen respiratory rates (ATP/O) confirmed that mitochondria isolated from DJ-1 KO hearts following myocardial I/R injury were less efficient than mitochondria isolated from Wild-Type hearts (Fig. 5F). Together, this data suggests that the absence of DJ-1 leads to reduced mitochondrial coupling and reduced maximal ATP synthetic capacity in the setting of myocardial I/R injury.

AAV9-DJ1Δc attenuates the glycation of Complex I and Complex III and improves mitochondrial function.
Previously, we found that overexpression of DJ1Δc attenuated myocardial I/R-induced heart failure via the reduction in glycative stress 11 . Here, we asked if the overexpression of DJ1Δc altered the glycation of Complex I and Complex III. For these experiments, WT and DJ-1 KO mice treated with AAV9-DJ1Δc were followed for 2 weeks and then subjected to myocardial I/R. Analysis at 3 days of reperfusion revealed that AAV9-DJ1Δc significantly reduced the glycation of Complex I and Complex III in both strains (Fig. 4). Consistent with a reduction in glycation, the activities of Complex I and Complex III were increased (Fig. 4). With evidence that the overexpression of DJ1Δc reduces the glycation of Complex I and Complex III, we next sought to determine the impact of AAV9-DJ1Δc treatment on mitochondrial function. Using a similar approach as outlined above, we found that AAV9-DJ1Δc significantly improved maximal state 3 respiration, reduced oligomycin-induced respiration, improved rates of ATP synthesis, and improved ATP/O ratios in both strains (Fig. 5). Together, this data indicates that the overexpression of DJ1Δc leads to improvements in mitochondrial coupling and maximal ATP synthetic capacity in the setting of myocardial I/R injury.

Discussion
Mitochondrial dysfunction has been shown to be a major player in the pathogenesis of myocardial I/R injury 27 . The most recognizable role of cardiac mitochondria is the production of energy via ATP synthesis; however, the mitochondria also serve a critical function in mediating cellular homeostasis, as they regulate intracellular signaling, calcium storage, fuel utilization, and cell death 28-30 . When the normal functions of the mitochondria are disrupted, the viability of the entire cell is threatened.

Glycative stress contributes to I/R-induced cardiac injury 17,19,20 . However, very little is known about how glycative stress causes cell injury beyond AGE-RAGE signaling-induced pro-apoptotic and pro-inflammatory pathways 18-20,31,32 . In other settings, glycative stress has been shown to have an impact on mitochondrial function 33 . More specifically, treatment of isolated mitochondria or cells with methylglyoxal, glyoxal, or AGEs reduced the activity of respiratory chain complexes, decreased mitochondrial membrane potential, and reduced ATP synthesis 34-38 . Mechanistically, there is some evidence that glycation of Complex III contributes to the impairment of mitochondrial function 23,39 . Here, we found an accumulation of MG, AGE, and CML in mitochondrial fractions 3 days post myocardial I/R injury, suggesting that mitochondria experience glycative stress following the onset of myocardial I/R injury.
Moreover, we found that Complex I and Complex III were glycated 3 days post myocardial I/R injury and that this glycation was associated with diminished complex activities and diminished mitochondrial function. Complex I is the largest enzyme complex of the respiratory chain 40 , and some of the most common human oxidative phosphorylation disorders are attributed to Complex I dysfunction 41,42 . Importantly, in response to myocardial I/R injury there is evidence that the activity of Complex I is diminished 43-46 . Complex III is an important component of the electron transport chain, as it independently receives electrons from both Complex I and Complex II 47 . During and after the onset of myocardial ischemia, Complex III activity is depressed, contributing to a reduction in mitochondrial respiration 48 . As such, our findings indicate that glycative stress, in part, contributes to the decrease in Complex I and Complex III activities 3 days post myocardial I/R injury.

The evidence for a cardioprotective role for DJ-1 is well established. DJ-1 deficiency has been shown to enhance myocardial infarction and exacerbate left ventricular dysfunction in multiple models, including myocardial I/R, permanent myocardial ischemia, and pressure overload-induced heart failure 9,10,49,50 . Further, delayed treatment with the cleaved form of DJ-1 was shown to improve function in mice with myocardial I/R-induced heart failure 11 . However, the exact biological mechanisms behind DJ-1's role are not well understood. There are many beneficial properties of DJ-1, and it is likely that the advantageous functions of DJ-1 are multi-faceted. DJ-1's ability to reduce glycative stress 11 is of particular interest given the evidence for enhanced glycative stress in key injuries to the heart 17,19,20 . The cleavage of DJ-1 into an active protease has been shown to contribute to its anti-glycative actions 11,14,51 . We have previously shown that the cleaved form of DJ-1 is present in the heart as early as 2 h following the onset of reperfusion after ischemia and persists for at least 7 days 10,11 . In the current study, we found that the cleaved form of DJ-1 localizes to the mitochondria 3 days post I/R injury, a critical time at which signaling events are important determinants of cardiac remodeling 11,52 . Here, we have provided a potential mechanism through which DJ-1's anti-glycative actions protect mitochondrial function in the days following the onset of myocardial I/R injury. Specifically, we expanded on our previous findings and suggest that DJ-1 opposes glycative stress at the mitochondria during the recovery from myocardial I/R injury. Our study found that in the absence of DJ-1, mitochondria experienced an enhanced accumulation of reactive dicarbonyls that was associated with depressed mitochondrial function. Mechanistically, we found that DJ-1 interacted with Complex I and Complex III following the onset of I/R injury. This interaction was found to be important in protecting them from glycation, as evidenced by the findings that the glycation of both complexes was exacerbated in the hearts of DJ-1 deficient mice. This is further supported by the findings that overexpression of the active form of DJ-1 attenuates the glycation and inactivation of the complexes. Based on this evidence, we hypothesize that during the recovery from myocardial I/R injury DJ-1 maintains the activity of Complex I and Complex III by shielding them from glycation.
In turn, this preserves mitochondrial function and reduces cardiac injury 11 . There is disagreement in the field over the exact mechanism by which DJ-1 protects against glycation. There is evidence to suggest that DJ-1 acts as a glutathione-independent glyoxalase 14 . As a glyoxalase, the mechanism by which DJ-1 acts to diminish glycative stress would include metabolizing the reactive dicarbonyls, thereby halting the formation of AGEs in the mitochondria. Alternatively, a few studies have argued that DJ-1 is a deglycase rather than a glyoxalase 51,53 . As a deglycase, the proposed mechanism by which DJ-1 acts to diminish glycative stress would include binding to specific proteins and removing the glycation moiety. The findings of the current study tend to support the latter mechanism, given that DJ-1 directly interacted with and altered the glycation of Complex I and Complex III. However, we cannot rule out the possibility that DJ-1 also acts in some capacity as a glyoxalase to reduce the levels of reactive dicarbonyls.

While our current study focused on the ability of DJ-1 to alter the glycation of Complex I and Complex III, there are other mechanisms of action whereby DJ-1 can offer protection during the recovery from myocardial I/R injury. First, given the evidence that DJ-1 interacts with Complex V 24 , it is possible that DJ-1 can also shield it from glycation. Second, by reducing glycative stress, DJ-1 could indirectly improve mitochondrial function. Finally, DJ-1 has been reported to influence transcription factors, bind to RNA, alter mitochondrial morphology, impact mitophagy, and reduce apoptosis 54 . All of these factors could contribute to the protective effects of DJ-1 and indirectly affect mitochondrial function. These alternative hypotheses reveal just how involved and overlapping many of these pathways are. Teasing out the nuances of DJ-1's protective mechanisms necessitates future study. Finally, while our current study found that DJ-1 interacted with and altered the glycation of Complex I and Complex III in the setting of myocardial I/R injury, we do not know the specific subunits of each complex that are affected. Future studies are therefore warranted to address these interactions.

In summary, this study provides novel evidence that DJ-1 protects Complex I and Complex III from glycation during the recovery from myocardial I/R injury. In discerning a specific mechanism for DJ-1's role in attenuating I/R injury, these data bring us closer to uncovering a therapy that targets the glycative stress pathway in the heart.

Methods
Animals. C57BL/6J mice and DJ-1 deficient (DJ-1 KO) mice 10 (male; 8-12 weeks of age) were used in all experiments. Gender influences the development of cardiovascular disease 55 . As such, we only used male mice in our studies. This allowed for the evaluation of DJ-1 in a well-controlled experimental system. All experimental protocols were approved by the Institute for Animal Care and Use Committee at Emory University and conformed to the Guide for the Care and Use of Laboratory Animals, published by the National Institutes of Health (NIH Publication No. 86-23, revised 1996), and to federal and state regulations. Approximately 175 mice were included in the present study after accounting for animal deaths. All mice were randomly assigned to the treatment groups. No animals were excluded from the study. All animal experiments were conducted in accordance with the ARRIVE guidelines.

Myocardial ischemia-reperfusion injury.
Myocardial ischemia-reperfusion injury was induced by subjecting mice to 60 min of LCA occlusion followed by reperfusion for up to 4 weeks. Surgical ligation of the LCA was performed under anesthesia (ketamine, 100 mg/kg; sodium pentobarbital, 20 mg/kg) as previously described 56 . All animals received prophylactic antibiotic therapy with cefazolin (20 mg/kg) and buprenorphine (0.05 mg/kg) for pain.

Production of adeno-associated viruses. A plasmid containing a truncated form of human DJ-1 lacking the C-terminal 15 amino acids (DJ1Δc) has been previously described 8 . The DJ1Δc cDNA was used to generate the recombinant adeno-associated viral expression vector for expression of cleaved human DJ-1 (AAV9-DJ1Δc) under the control of the cytomegalovirus (CMV) promoter. pAAV2/9 containing AAV2 rep and AAV9 capsid genes was kindly provided by the Penn Vector Core (University of Pennsylvania School of Medicine). Recombinant AAV-DJ1Δc viruses were produced by the Emory Viral Vector Core using the triple transfection method with HEK 293T cells, as previously described 57 . The extracted recombinant AAV9 viruses were purified by an iodixanol gradient and were dialyzed using an Amicon 15 100,000 MWCO concentration unit. The titer was determined by quantitative polymerase chain reaction. AAV9-GFP was also packaged and used as a control.

The gels were electrophoresed and activated using a ChemiDoc MP Visualization System (Bio-Rad Laboratories, Hercules, CA, USA). The protein was then transferred to a PVDF membrane. The membranes were then imaged using a ChemiDoc MP Visualization System to obtain an assessment of proper transfer and to obtain total protein loads. The membranes were then blocked and probed with primary antibodies overnight at 4 °C. Immunoblots were next processed with secondary antibodies (Cell Signaling) for 1 h at room temperature. Immunoblots were then probed with a Super Signal West Dura kit (Thermo Fisher Scientific) to visualize the signal, followed by visualization using a ChemiDoc MP Visualization System (Bio-Rad Laboratories, Hercules, CA, USA).

Mitochondrial isolation, mitochondrial respiration and ATP synthesis. Cardiac mitochondria were isolated using the Mitochondria Isolation Kit (MITOISO1) according to the manufacturer's instructions (MilliporeSigma, St. Louis, MO). Oxygen consumption of cardiac mitochondria was measured in a sealed chamber, magnetically stirred at 37 °C, using a calibrated Clark-type oxygen electrode (Hansatech Instruments, Amesbury, MA) in the presence of glutamate and malate, as previously described 58 . Maximal (ADP-stimulated) respiration was measured after the addition of a saturating concentration of ADP (1 mmol/L) 59 . Additionally, respiration in the absence of ADP phosphorylation was determined in the presence of 1 mg/ml oligomycin. Respiratory control ratios were determined as the ratio of state 3 to oligomycin-insensitive respiration. To evaluate ATP synthesis, aliquots were taken from the respiration chamber over a 1-min period after the addition of ADP. ATP was then quantified with a bioluminescence assay using an ATP determination kit (A-22066; Molecular Probes, Eugene, OR). The ATP/O₂ ratio was calculated with the state 3 respiratory rate for each sample.

Complex I and III activity. The activity of mitochondrial complexes was evaluated in isolated mitochondria using the Complex I Enzyme Activity Microplate Assay Kit (ab109721, Abcam, Cambridge, MA) and the Mitochondrial Complex III Activity Assay Kit (MAK360, MilliporeSigma, St. Louis, MO) according to the manufacturers' instructions.
Statistics. All data are expressed as mean ± SEM. The data were first evaluated for normal distribution using the D'Agostino and Pearson omnibus normality test. Subsequently, statistical significance was evaluated as follows: (1) an unpaired Student t test for comparisons between 2 means; (2) a 2-way ANOVA with a Tukey test as the post-hoc analysis for comparisons among the means from groups of WT and DJ-1 KO mice. A value of p < 0.05 denoted statistical significance, and p values were two-sided. All statistical analyses were performed using Prism 7 (GraphPad Software Inc).

Data availability
Data will be available from the authors on reasonable request.
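For readers who want to mirror this pipeline outside Prism, the hedged sketch below uses SciPy, whose normaltest function implements the D'Agostino-Pearson omnibus test; the arrays are hypothetical examples, not study data, and the 2-way ANOVA step would require an additional package such as statsmodels:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from two groups (n = 8 each; normaltest
# needs at least 8 observations to run).
wt = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7])
ko = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 1.8, 2.5])

# D'Agostino-Pearson omnibus normality test on each group.
for name, grp in (("WT", wt), ("KO", ko)):
    stat, p = stats.normaltest(grp)
    print(f"{name}: normality p = {p:.3f}")

# Two-sided unpaired Student t test for a comparison between 2 means.
t, p = stats.ttest_ind(wt, ko)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 denotes significance
```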
Views on artificial intelligence (AI) assisted clinical trials

It is of interest to document the views of medical professionals on the application of artificial intelligence (using known data for the prediction of unknown events) in clinical trials, using a web survey with a structured questionnaire completed by 377 subjects. The questionnaire contained 17 statements, which were categorised into awareness (statements 1-2), perception (statements 3-10) and opinion (statements 11-17). The data obtained were compared between the subjects using a two-tailed Fisher's exact test, with p < 0.05 taken as significant. The data show that the majority of professionals have positive views on the application of artificial intelligence in clinical trials. This will accelerate the drug evaluation process. However, the use of emerging tools such as AI will not replace human subjects in this context.

Background
New drugs require more than 10 years to reach the market [1]. Hence, investment in drug design, research, development and formulation comes with high risk for pharma companies [2,3]. Therefore, the use of artificial intelligence helps clinicians and researchers to identify specific targets in this context [4].

Materials and Methods:
This study was conducted by BioHymns Innovations Pte. Ltd, Singapore, from December 2019 to May 2020 in Tamil Nadu, India. The sample size was calculated using the Raosoft online calculator (Raosoft) and comprised 377 subjects. Medical doctors who are or have been involved in clinical trials as investigator or co-investigator, and who were interested in participating in this online survey, were included in the study. Medical doctors who are not and have not been involved in clinical trials, non-medical doctors, and medical doctors not interested in participating in this online survey were excluded from the study.

Study procedure:
This was a questionnaire-based study. Regarding questionnaire validity and reliability, a structured questionnaire was developed after a thorough literature review, which was conducted initially by the chief investigator; research papers were shortlisted for further discussion among the research team. All views, thoughts and concerns on the proposed study were taken into consideration during the design phase. An initial draft of the questionnaire was designed after the research team had reviewed all the selected papers comprehensively. Individual survey items were reviewed by a group of medical professionals, and consensus was reached regarding the clarity and importance of each item. The validation process was further expanded by piloting the questionnaire with four experienced doctors who met the eligibility criteria and were not aware of this study. Participation by the physicians was voluntary. The questionnaire was framed in English and comprised 17 statements divided into 3 sections: awareness (statements 1-2), perception (statements 3-10) and opinion (statements 11-17). After obtaining ethics committee approval (VISTAS-SPS/IEC/VIII/2019/04), the questionnaire was shared with professionals involved in clinical trials using Google Forms.

Statistical analysis:
An anonymous questionnaire was shared through Google Forms with all participants. Basic statistics for the responses were computed and represented as total numbers and percentages. The data obtained were compared between the specialities using a two-tailed Fisher's exact test. A p-value <0.05 was taken as significant.
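The study cites the Raosoft calculator for the sample size of 377. As a hedged approximation (Raosoft's exact implementation is not described in the paper), the standard sample-size formula with a finite-population correction reproduces that figure for an assumed eligible population of roughly 20,000 doctors, which is an illustrative value of our own, not a number from the study:

```python
import math

def survey_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size with finite-population correction (95% CI, 5% margin).

    `population` is the assumed size of the eligible group; z is the
    normal quantile for the confidence level, p the response distribution.
    """
    x = z**2 * p * (1 - p)
    return math.ceil(population * x / ((population - 1) * margin**2 + x))

print(survey_sample_size(20000))  # -> 377
```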
Results:
This study included 377 participants, comprising resident doctors (N = 143), doctors working as clinical research associates (N = 12), paediatricians (N = 7), general physicians (N = 47), pharmacologists (N = 161), and clinical trial physicians (N = 7). The questionnaire's 17 statements were categorized into 3 types: awareness, perception and opinion. The statements in the questionnaire are listed in Table 1. The response rate to the questionnaire statements was 100%.

Responses to the questionnaire statements
The consolidated responses are tabulated in Table 1. To describe the responses in general, the majority of the participants (83.5% and 65.5%) were aware of AI-based health care delivery and clinical trials, respectively. Most of the participants identified the potential of AI in clinical trial processes, time saving or accelerating drug development, cost-effectiveness, and handling vast amounts of data. AI-based clinical trials were supported by a large number of participants, but some suggested that AI cannot substitute for human intelligence and might raise ethical and legal concerns.

Awareness
Statements 1 and 2 were categorised for the analysis of awareness in this study. The responses were analysed per speciality category. 220 positive responses out of 286 towards awareness were obtained from resident doctors; 16/24, 9/14, 66/94, 242/322, and 9/14 positive responses were obtained from clinical research associates, paediatricians, general physicians, pharmacologists, and clinical trial physicians, respectively (Figure 1).
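To illustrate the between-speciality comparison named in the methods, the snippet below runs a two-tailed Fisher's exact test on a 2×2 table built from two of the awareness counts reported above (resident doctors vs pharmacologists); pairing the groups this way is our illustrative choice, since the paper does not detail how the comparisons were organized:

```python
from scipy.stats import fisher_exact

# Positive/negative awareness responses: resident doctors 220/286,
# pharmacologists 242/322 (counts taken from the results above).
table = [[220, 286 - 220],
         [242, 322 - 242]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant
```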
Frankfort horizontal plane is an appropriate three-dimensional reference in the evaluation of clinical and skeletal cant

Objectives
In three-dimensional computed tomography (3D-CT), the cant is evaluated by measuring the distance between the reference plane (or line) and the tooth. The purpose of this study was to determine the horizontal skeletal reference plane that showed the greatest correlation with clinical evaluation.

Materials and Methods
The subjects were 15 patients who closed their eyes during the CT image taking process. The menton points of all patients deviated by more than 3 mm. In the first evaluation, clinical cant was measured. The distance from the inner canthus to the ipsilateral canine tip and the distance from the eyelid to the ipsilateral first molar were obtained. The difference between the left and right sides was also calculated. In the second evaluation, skeletal cant was measured. Six reference planes and one line were used for the evaluation of occlusal cant: 1) FH plane R: Or.R - Or.L - Po.R; 2) FH plane L: Or.R - Or.L - Po.L; 3) F. Ovale plane R: Rt.F.Ovale - Lt.F.Ovale - Or.R; 4) F. Ovale plane L: Rt.F.Ovale - Lt.F.Ovale - Or.L; 5) FZS plane R: Rt.FZS - Lt.FZS - Po.R; 6) FZS plane L: Rt.FZS - Lt.FZS - Po.L; and 7) FZS line: Rt.FZS - Lt.FZS.

Results
The clinical and skeletal cants were compared using linear regression analysis. The FH plane R, FH plane L, and FZS line showed the highest correlation (P<0.05).

Conclusion
The FH plane R and FH plane L are the most appropriate horizontal reference planes in the evaluation of occlusal cant on 3D-CT.

I. Introduction
Facial asymmetry is important to a patient due to its great impact in the aspects of esthetics, socio-psychology, and function. Most patients with dentofacial deformities have facial asymmetry 1 . It can occur as a congenital or a developmental disorder. Skeletal asymmetries <3% in degree are not clinically discernible 2,3 . Even if they are not severe, patients may regard them as very serious.

The features of skeletal asymmetry are not always detected because soft tissue may compensate for underlying skeletal imbalances 3-5 . Furthermore, patients may mask facial asymmetry with their pose. A canted occlusal plane can be corrected by slight head tilting 4 . Facial asymmetry is evident when a patient smiles during physical examination. On the other hand, the presence of an elevated labial commissure or alar base on one side is often an indication of vertical skeletal asymmetry. These features should be detected during routine evaluation in an interview 6 .

For correct diagnosis, the setting of the reference plane is a critical factor in facial asymmetry analysis. Two-dimensional (2D) image analyses such as facial photographs and cephalograms have been performed. In the medio-lateral direction, postero-anterior (PA) cephalometric radiographs were used in the evaluation of the skeletal components of facial asymmetry 7 . A mid-sagittal line and a horizontal reference line were defined using valid landmarks on the PA cephalometries. Distances between the midline and laterally positioned landmarks were used for the evaluation of facial asymmetry. Differences between the measurements on the two sides were evaluated for facial asymmetry.

The major features of facial asymmetry are deviation of the menton (Me) point, dental midline, occlusal cant, lip cant, and orbital dystopia. To date, clinical evaluation of occlusal cant has been performed using a tongue depressor on the occlusal surface. In a study involving 2D PA cephalograms, Reyneke 8 suggested an evaluation method for occlusal cant using triangular angulation. (Fig. 1) In this study, occlusal cant was determined by calculating the differences in vertical distance from eye to teeth on each side. Standard protocols for the evaluation of skeletal cant have yet to be established. An ideal analysis on three-dimensional computed tomography (3D-CT) should not only reflect the clinical evaluation but should also be reproducible. This study assessed the magnitude of skeletal occlusal cant reflecting the clinical occlusal cant before and after orthognathic surgery.
The following distances were measured: 1) Distance from the bottom point of the right eyelid (REL) with the eyes closed to the mesiobuccal cusp tip of the right maxillary 1st molar; 2) Distance from the bottom point of the left eyelid (LEL) with the eyes closed to the mesiobuccal cusp tip of left maxillary 1st molar; 3) Distance from the right inner canthus (RIC) with the eyes closed to the Rt. maxillary canine tip, and; 4) Distance from the left inner canthus (LIC) with the eyes closed to the Lt. maxillary canine tip. (Fig. 2) Validation of the correlation coefficient between CT measurement and clinical evaluation was performed. This study was approved by the institutional review board. Correlation between clinical cant and skeletal cant Among the patients who visited the aforesaid department from 2009 to 2011 for orthognathic surgery, 16 patients who received CT imaging with eye closure were selected and analyzed. The Me points of all patients deviated by more than 3 mm. Exposure conditions were set at 120 kV, 300 mA, matrix of 512×512, and pixel size of 0.3 mm. The area from the superior part of the orbit to the lower jaw was included in the image, and the axial slice thickness was 1 mm. Upon completion of CT scanning, the images were obtained as Digital Imaging and Communication in Medicine (DICOM) files, with each slice 1 mm thick. CT images were also reconstructed to produce 3D images using Simplant Pro 2011 (Materialize Dental NV, Leuven, Belgium). asymmetry. Differences between the measurements on the two sides were evaluated for facial asymmetry. The major features of facial asymmetry are deviation of the menton (Me) point, dental midline, occlusal cant, lip cant, and orbital dystopia. To date, clinical evaluation of occlusal cant has been performed using a tongue depressor on the occlusal surface. In a study involving 2D PA cephalograms, Reyneke 8 suggested an evaluation method for occlusal cant using triangular angulation. (Fig. 1) In this study, occlusal cant was determined by calculating the differences in vertical distance from eye to teeth on each side. Standard protocols for the evaluation of skeletal cant have yet to be established. An ideal analysis on three-dimensional computed tomography (3D-CT) should not only reflect the clinical evaluation but should also be reproducible. This study assessed the magnitude of skeletal occlusal cant reflecting the clinical occlusal cant before and after orthognathic surgery. Validation on 3D-CT measurement and clinical measurement Fifteen patients (seven males, eight females) were included in this study. The mean age of the patients was 26.2±3.2 years. The distance from the mesiobuccal cusp tip of both maxillary 1st molar to the bottom point of the ipsilateral eyelid and the distance from both maxillary canine tip to the ipsilateral inner canthus were measured. The measurements were made twice on the chair side and on 3D-CT. Statistical verification was performed to check the concordance between the two groups. The distance between eye and tooth as measured on 3D-CT and on the chair side did not differ significantly with a high intra-class correlation coefficient (P<0.05, ICC>0.900). (Table 1) Correlation between clinical and skeletal cants The correlation between clinical and skeletal cants as measured with the seven reference planes was statistically All of the evaluations were performed on 3D-CT images. The first evaluation was the measurement of the clinical cant. 
The distances from the inner canthus to the ipsilateral canine tip and from the eyelid to the ipsilateral 1st molar (mesiobuccal cusp tip) were obtained. The differences between the left and right sides were also calculated. The second evaluation was a skeletal cant measurement.

IV. Discussion
2D PA cephalometry has long been a valuable tool in the diagnosis of facial asymmetry. It has been the most popular conventional imaging technique used for the analysis of craniofacial anomalies, even though it sometimes fails to provide accurate information. Frontal and lateral cephalometries have been used for the quantitative evaluation of facial asymmetry. Note, however, that lateral cephalometric radiographs have some limitations due to difficulties in distinguishing between right and left anatomical landmarks 9-11 . The combined use of frontal, lateral, and submento-vertex views has been advocated by some clinicians for the 3D evaluation of the maxillofacial complex 12 . On the other hand, 2D radiographs have disadvantages such as enlargement and distortion of the image, which may lead to misdiagnoses 13,14 . Cephalometric measurements can cause the distortion of an image due to the projection technique. Therefore, 2D analysis should be used only for comparison and not for quantitative evaluation. 2D analysis has crucial limitations in the evaluation of facial asymmetry because the latter requires quantitative evaluation.

The use of conventional cephalometric radiographs to evaluate quantities reliably has some limitations. First, there are problems in the head position. When taking conventional cephalometries, head positioning is based on the external auditory meatus. Note, however, that a patient with facial asymmetry has mal-positioned anatomical structures, including the external auditory meati; hence the possible difficulty of reaching any conclusion regarding the actual measurement of asymmetric factors using frontal cephalometric radiography. Second, frontal cephalometric radiography does not have clearly defined anatomical landmarks such as the sella and basion points. 2D radiography cannot overcome the overlaying or overlapping of landmarks. Thus, the 3D mid-sagittal reference plane, based on cranial base landmarks, cannot be used in 2D analysis. Some authors have advocated the use of panoramic radiographs for the evaluation of asymmetry 15 . Comparison of the left and right sides on panoramic views may be a practical method, although length and angle cannot be calculated accurately. Some authors measured the condyle and ramus heights in panoramic views and dry skulls, reporting the tendency of many false positives and negatives 16 .
This study investigated which of the 7 reference planes set on 3D would be the most appropriate horizontal reference plane for facial asymmetry analysis by performing clinical evaluation and comparative analysis of related planes. Since all the measurements were made on CT in this study, a validation study was performed to check if the distance from the eyes to the teeth on CT was identical to the distance from the eyes to the teeth on the actual chair side. In the validation study, clinical linear measurement was highly correlated with linear measurement on 3D-CT. (Table 1) Based on this, 3D-CT linear measurement was reflected on clinical linear measurement. Given the very high intermethod correlation of the two methods, this study judged that the distance from the eyes to the teeth on CT could be expressed as clinical cant. Suseok Oh et al: Frankfort horizontal plane is an appropriate three-dimensinal reference in the evaluation of clinical and skeletal cant. J Korean Assoc Oral Maxillofac Surg 2013 The measured skeletal cant with FH plane showed high correlation with the clinical cant, i.e., both FH plane R (molar cant: R 2 =0.845, unstandardized coefficients=1.030, canine cant: R 2 =0.792, unstandardized coefficients=0.699) and FH plane L (molar cant: R 2 =0.845, unstandardized coefficients=1.035, canine cant: R 2 =0.775, unstandardized coefficients=0.702). The orbitale and porion points are not far from the inner canthus and eyelid, and the FH plane is almost parallel to an occlusal plane. In this respect, the cant measured with FH plane may be highly correlated with the clinical cant. The orbitale point is a defined point on 3D-CT, and the porion point is advantageous since it does not affect the angle of the horizontal reference plane. Moreover, the FH plane has been used as horizontal reference plane on 2D analysis, so it would be easy to find correlation with 2D research. A foramen ovale plane has some advantages in superimposition because the foramen ovale point does not change with growth. However, it has low correlation with the clinical cant. Since the lateral point of the foramen ovale has vertical depth, there is high possibility of errors being improve the accuracy of 3D measurement 17 . The authors reported that the error in linear measurement with the software was within 1.5 mm. According to Cavalcanti et al. 18 , spiral CT imaging allows for precise and accurate 3D-CTbased measurements for neoplastic lesion in the mandible. CT scans are widely used to acquire 3D information on craniofacial complexes 19 . For easy access to maxillofacial 3D images, CT and computer technology were developed. Nonetheless, the high cost and high radiation dose are disadvantages of conventional CT despite its usefulness when performing a lengthy procedure in a confined space. On the other hand, 3D-CT images have advantages in the identification of anatomical structures, leading to problemfree superimposition. The accuracy and reproducibility of 3D-CT have been proven. Matteson et al. 20 and Hildebolt et al. 21 measured the skull using conventional non-spiral/helical whole body CT scanners and reported favorable results. The reproduction of landmark marking for 3D analysis itself should be excellent, including the reproduction among interobserver and in the same observer to increase the precision of the analysis. Hassan et al. 22 researched the method of enhancing the precision of tracing in the analysis using cone beam CT. 
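The regression between skeletal cant (measured against the FH plane) and clinical cant reported above can be reproduced with a simple least-squares fit. The snippet below is a minimal sketch using invented paired measurements; the arrays and the resulting statistics are illustrative only and are not the study's values.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (degrees): clinical cant vs skeletal cant
# measured against the FH plane. Values are illustrative, not the study data.
clinical = np.array([0.8, 1.5, 2.2, 2.9, 3.6, 4.1, 4.9, 5.6])
skeletal = np.array([0.9, 1.4, 2.4, 2.7, 3.9, 4.0, 5.1, 5.4])

fit = stats.linregress(skeletal, clinical)
r_squared = fit.rvalue ** 2      # analogous to the reported R^2 values
slope = fit.slope                # analogous to the "unstandardized coefficient"

print(f"R^2 = {r_squared:.3f}, unstandardized coefficient = {slope:.3f}, p = {fit.pvalue:.3g}")
```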
He stated that tracing twice on multiplanar reconstruction (MPR) image and on 3D reconstructed image would increase precision compared to tracing on 3D only. Agreeing with the aforesaid article, this study performed MPR tracing additionally when marking on 3D only was deemed unable to guarantee precision and when there was no confidence in repetitive reproduction. In particular, on Ba, Po R, Po L, Dent, Op, and Na, which should have a point in anatomical structure with wide and round shape on 3D, both 3D and MPR image tracing were performed. A mid-sagittal reference plane was set with three reference points 23 . Hwang et al. 24 defined the mid-sagittal reference plane as the plane connecting the three landmarks: opisthion (Op), crista galli (Cg), and anterior nasal spine (ANS). In some cases, however, mid-sagittal reference planes would be set based on horizontal reference planes. Consequently, the setting of the horizontal reference plane is the most important factor and should be performed primarily for the evaluation of facial asymmetry. To measure the occlusal cant in a clinical evaluation, a wooden tongue depressor can be placed across the right and left posterior teeth, and the parallelism or the angle of the tongue depressor to the inter-pupillary plane can be documented. Alternatively, the vertical distance between the maxillary canines and the medial canthi of the eyes can be measured 25 . An analysis of the frontal cephalometry committed by inter-observers or intra-observers. An FZS plane has an advantage, i.e., a medial point of FZS itself is the clear reference point with high reproducibility. Note, however, that pointing is difficult in 2D ceph. Moreover, as indicated by the present results, it has low correlation with the clinical cant. An FZS line has a good reference point like the FZS plane. Moreover, since it consists of only two points in the frontal part of the skull, it is not affected by the reference point at the back in the evaluation of the cant. The reason the FZS line was highly correlated with the clinical cant has been considered above. Rachmiel et al. 11 used the horizontal plane at the level of fronto-zygomatic suture, defining a line connecting the bilateral latero-orbitals and a vertical line perpendicular to the horizontal line through Cg, which were employed as horizontal and vertical reference lines. V. Conclusion 3D-CT and 2D cephalometric analyses are both useful in evaluating the occlusal cant; note, however, that 3D is being highlighted because of the limitations of 2D analysis. In 3D analysis, there could be several references that can serve as criteria for evaluating the occlusal cant, and those references should reflect the clinical occlusal cant properly. Among the references used in this study, those with the highest correlation with the clinical cant were FH plane R, FH plane L, and FZS line. Among them, the FZS line has limitations in 3D analysis since it is a 2D structure. Furthermore, since the orbitale points consisting of the FH plane are easy to point and are close to the eye, using it as the reference plane may be appropriate.
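Since the mid-sagittal reference plane is defined by three landmarks (Op, Cg, ANS), it can be constructed as the plane through three points, and the menton deviation can then be read off as a signed point-to-plane distance. The sketch below assumes hypothetical landmark coordinates; only the construction (cross-product normal, signed distance) reflects the procedure described above.

```python
import numpy as np

# Hypothetical 3D-CT coordinates (mm) of the landmarks used by Hwang et al. for
# the mid-sagittal plane: opisthion (Op), crista galli (Cg), anterior nasal spine (ANS).
Op  = np.array([ 0.5, -85.0, 15.0])
Cg  = np.array([ 0.0,  20.0, 65.0])
ANS = np.array([-0.3,  75.0,  5.0])

# Plane through the three landmarks: unit normal from the cross product.
n = np.cross(Cg - Op, ANS - Op)
n /= np.linalg.norm(n)
d = -n @ Op                      # plane equation: n . x + d = 0

def signed_distance(point):
    """Signed distance (mm) of a landmark from the mid-sagittal plane."""
    return float(n @ point + d)

# Menton deviation as a simple asymmetry measure (the sign indicates the side).
menton = np.array([4.1, 95.0, -60.0])
print(f"Me deviation from mid-sagittal plane: {signed_distance(menton):.1f} mm")
```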
Mitochondria-Targeted COUPY Photocages: Synthesis and Visible-Light Photoactivation in Living Cells Releasing bioactive molecules in specific subcellular locations from the corresponding caged precursors offers great potential in photopharmacology, especially when using biologically compatible visible light. By taking advantage of the intrinsic preference of COUPY coumarins for mitochondria and their long wavelength absorption in the visible region, we have synthesized and fully characterized a series of COUPY-caged model compounds to investigate how the structure of the coumarin caging group affects the rate and efficiency of the photolysis process. Uncaging studies using yellow (560 nm) and red light (620 nm) in phosphate-buffered saline medium have demonstrated that the incorporation of a methyl group in a position adjacent to the photocleavable bond is particularly important to fine-tune the photochemical properties of the caging group. Additionally, the use of a COUPY-caged version of the protonophore 2,4-dinitrophenol allowed us to confirm by confocal microscopy that photoactivation can occur within mitochondria of living HeLa cells upon irradiation with low doses of yellow light. The new photolabile protecting groups presented here complement the photochemical toolbox in therapeutic applications since they will facilitate the delivery of photocages of biologically active compounds into mitochondria. ■ INTRODUCTION The development of novel photolabile protecting groups (PPGs) or caging groups that can be photoactivated with biologically compatible visible light has raised in recent years a growing interest in photopharmacology owing to the extraordinary properties of light. 1 This noninvasive external stimulus can be delivered to living organisms with a high spatiotemporal resolution, allowing the manipulation of cellular processes by phototriggering the release of bioactive molecules from photocaged inactive precursors without using potentially toxic chemical reagents. 2 Moreover, light of long wavelengths (e.g., far-red and near-infrared (NIR)) is nonphototoxic and offers higher tissue penetration than UV or blue light (300−400 nm), which facilitates in vivo applications and clinical translation. 3 Among visible-light-sensitive PPGs based on organic chromophores, o-nitrobenzyl, 4 quinone, 5 coumarin, 6 naphthalene, 7 BODIPY, 8 xanthenium, 9 cyanine, 10 and porphyrin 11 derivatives have been widely used in chemical, biological, and materials science applications. Transition metal complexes with absorption in the visible region of the electromagnetic spectrum, such as ruthenium(II) polypyridyl complexes, have also been explored as caging groups. 12 Although many efforts have been dedicated to the design of caging groups with optimal photophysical and photochemical properties (e.g., operability at long wavelengths and high photolytic efficiency), 13 the molecular size and structural complexity of the PPG and its ease of synthesis are also important parameters sometimes underestimated when developing new caging groups for therapeutic applications. Aqueous solubility and dark stability to spontaneous hydrolysis are also important factors to be considered for newly synthesized caging groups. Among subcellular organelles, mitochondria are one of the most relevant targets in drug design and development for combating human pathologies since they are involved in many key cellular processes. 
14 Mitochondrial dysfunction has been associated with cancer disease, aging, and neurodegenerative, cardiovascular, and metabolic diseases. 15 In addition, mitochondria are the major sources of endogenous reactive oxygen species. 16 A common strategy for developing mitochondriatargeted diagnostic and therapeutic tools consists of attaching lipophilic positively charged chemical motifs (e.g., triphenylphosphonium) to the compound of interest to induce mitochondria accumulation by exploiting the negative potential across the external and internal membrane of this organelle. 17 However, this strategy implies several limitations since bulky hydrophobic groups can modify the physicochemical and pharmacological properties of the molecule of interest and, in addition, they do not provide cell or tissue specificity. The latter is especially important in anticancer therapies since toxic side effects of conventional chemotherapeutic agents are usually associated with their poor ability to discriminate between normal and cancer cells. In such a context, organellespecific photocages offer a powerful method for delivering and releasing bioactive compounds in specific subcellular compartments by using light of suitable wavelengths, as recently described by different research groups in the case of mitochondria. 18 Our group has developed a new class of coumarin-based fluorophores (COUPY) through the replacement of the carbonyl group of the lactone in the conventional coumarin scaffold (e.g., compound 1 in Scheme 1) by cyano(N-alkyl-4pyridinium/pyrimidinium)-methylene moieties (e.g., compounds 2a and 2b), which exhibit promising photophysical and photochemical properties for bioimaging and therapeutic applications owing to the π-extended system. 19 Recently, we have initiated the transformation of such coumarin derivatives into a novel class of visible-light-sensitive PPGs. As a proof of concept, COUPY photocage 3, in which benzoic acid was caged through the formation of an ester bond through position 4 of the coumarin skeleton, was synthesized and fully characterized. 20 Compound 3 was efficiently photoactivated with biologically compatible yellow (560 nm) and red light (620 nm) under physiological-like conditions but remained stable to spontaneous hydrolysis when incubated in the dark. Importantly, COUPY photocage 3 was found to accumulate selectively in the mitochondria of living HeLa cells according to confocal microscopy studies owing to the presence of the Nmethylpyridinium moiety, which would facilitate the delivery of caged analogues of bioactive molecules to this organelle. Here, we synthesized three new COUPY-caged model compounds (4−6) to assess how the structure of the coumarin caging group influences the uncaging process, particularly how the incorporation of a methyl group in a position adjacent to the photocleavable bond in the coumarin skeleton influences the photodeprotection rate (Scheme 1). This is an important factor since the rate of the overall photolysis process in coumarin-based caging groups, including that of nonconventional dicyanocoumarin derivatives, 22 depends on the rate constant of the initial heterolytic cleavage of the C−O bond. 13b,21 Benzoic acid and acetic acid were selected as model compounds to be caged with COUPY coumarins through the formation of an ester bond to investigate the effect of the basicity of the leaving group, and a pyridine heterocycle was replaced by pyrimidine to further red-shift the absorption maximum of the compound. 
23 In addition, by taking advantage of the intrinsic preference of COUPY scaffolds for mitochondria, we have synthesized two COUPY-caged versions of the protonophore 2,4-dinitrophenol (DNP) (7 and 8) to investigate photoactivation in living cells by confocal microscopy. ■ RESULTS AND DISCUSSION Synthesis and Characterization of COUPY-Caged Model Compounds. COUPY photocages 4−6 were synthesized in two steps from thiocoumarins 9−11 (Scheme 2), which were prepared from coumarin 1 following previously published procedures developed in our group. 19,22 First, condensation of 9−11 with 4-pyridylacetonitrile or 2-(pyrimidin-4-yl)acetonitrile, 23 mediated by the deprotonation of the acidic methylene protons with a strong base, followed by silver nitrate treatment afforded neutral COUPY scaffolds 12− 14 with high yields (80−87%) after purification by silica column chromatography. After N-methylation of the pyridine or pyrimidine heterocycles, COUPY-caged model compounds 4−6 were isolated as pink/purple solids with excellent yields (94−97%). The compounds were fully characterized by HR ESI-MS and NMR ( 1 H, 13 C, and 19 F), and their purity was assessed by reversed-phase HPLC-MS ( Figure S1). As shown in Figures S2−S4, the 1 H NMR spectra of coumarins 12−14 showed two sets of proton signals in ∼90/80:10/20 ratios, which reproduces the behavior previously found in COUPY derivatives 19a,19,20,23 and demonstrates the existence of two exchangeable rotamers around the exocyclic carbon−carbon double bond. Full NMR characterization by using 1 H, 1 H 2D-NOESY experiments confirmed that the E rotamer (as usually drawn in this manuscript) was the major species in solution in the case of compounds 12 and 13. By contrast, the Z rotamer was preferred in the pyrimidine-containing derivative (14), which parallels the behavior of some pyrimidine-containing COUPY fluorophores. 23 As previously found with Nmethylated COUPY dyes 19a,b and COUPY photocage 3, 20 the 1D and 2D NMR spectra revealed that only the E rotamer was found in solution for compounds 4−6 ( Figures S7−S9). As shown in Scheme 2, compounds 7 and 8 were synthesized by nucleophilic aromatic substitution from Nalkylated alcohol precursors 15 and 18, respectively, using 1fluoro-2,4-dinitrobenzene in the presence of a strong base (NaH) and fully characterized by HR ESI-MS and 1D and 2D NMR (Figures S10 and S11). Absorption and Emission Properties of COUPY Derivatives. The photophysical properties of COUPY-caged model compounds (4−8) are shown in Table 1 and compared with those of the parent fluorophore (2a) 19a and COUPY photocage (3). 20 As shown in Figure 1, the visible spectrum of all the compounds exhibited an intense absorption band, with absorption maxima ranging from 555 nm (5) to 570 nm (6). Esterification with both carboxylic acids caused a slight redshift in compounds 3−5 (about 9−11 nm) relative to coumarin 2a. Very interestingly, the replacement of pyridine with the more electron-deficient pyrimidine heterocycle caused a 13 nm red-shift in the absorption maximum of 6 with respect to 3 (24 nm when compared with 2a) and an increase in the value of the molar absorption coefficient (ε = 59 mΜ −1 cm −1 for 6 vs 35−38 mΜ −1 cm −1 for 3−5). Such bathochromic effects were even more pronounced for the emission wavelength of all the model caged compounds (λ em = 619− 634 nm) when compared with 2a (λ em = 603 nm). 
However, the incorporation of the methyl group on the coumarin structure caused a remarkable blue-shift in the emission maximum (10 nm) with respect the non-methylated analogues (e.g., compare 3 and 4), which was partially compensated for in the pyrimidine-containing coumarin (e.g., compare 4 and 6). As a result, the Stokes shifts were slightly larger in the nonmethylated COUPY-caged compounds than in the methylated analogues (e.g., 73 nm for 4 vs 62 nm for 3) but always larger than the value of the original fluorophore (57 nm in 2a). On the contrary, fluorescent quantum yields were reduced by more than 50% in the caged compounds (e.g., Φ F = 0.08−0.10 in 3− 6 vs Φ F = 0.22 in 2a). Compared to COUPY photocage 3, the incorporation of the 2,4-nitrophenol moiety in 7 and 8 caused a slight red-shift in the absorption maxima (3 and 6 nm, respectively). Interestingly, the emission properties in the case of 8 were not modified with respect to the parent compound 3, and the same emission maximum (619 nm) and fluorescence quantum yield (Φ F = 0.10) were obtained. By contrast, as indicated in Table 1, the emission maximum was slightly blue-shifted in the case of 7, and Φ F slightly reduced. Overall, these results indicate that N-alkylation of COUPY derivatives with a long alkyl chain (e.g., hexyl in 8 vs methyl in 7) seems to be positive for the photophysical properties of the compound. Photolysis Studies of COUPY-Caged Compounds. Photoactivation of COUPY-caged model compounds 4−6 was evaluated first in a 1:1 (v/v) mixture of PBS buffer and ACN at 37°C after irradiation with visible LED light ( Figure S12) and compared with that of the parent COUPY photocage 3. 20 The progress of the photolysis process was followed by HPLC-MS analysis by monitoring the disappearance of the compounds with time (Figures S13−S15). As shown in Figure 2, the concentration of all the compounds decreased gradually with irradiation time with visible light. The initial quantum yields of photolysis are collected in Table 2. In the case of compounds 4 and 5, two main photolytic coumarins were released and identified by MS: the expected coumarin alcohol 19 and its oxidized byproduct 20 in a 3:1 relative ratio (Scheme 3). Conversely, photoactivation of compound 6 gave the coumarin alcohol 21 as the main photolytic product, as well as a minor vinyl coumarin derivative (22), which reproduced the results previously found for 3 where compounds 15 and 23 were also identified. 20 In the case of compounds 3 and 6, vinyl coumarin photoproducts are expected to be formed via a β-elimination reaction from the secondary carbocation intermediate generated upon heterolytic cleavage of the C−O bond (Scheme 3). Although the same trend was obtained when a 560 nm bandpass filter (yellow light, 40 mW cm −2 ; Figure S16) was incorporated in the LED source, the overall process was slower due to the reduced irradiance of the lamp employed in the photolysis studies. The photolytic process of compounds 3−6 was also monitored by UV−vis and fluorescence spectroscopy. As shown in Figure S17, a decrease of the absorbance of the band attributed to the coumarin core was observed in all cases, which parallels the progress of the photolysis monitored by HPLC-MS and confirmed that the phototrigger underwent photocleavage upon visible-light irradiation. 
The emission intensity of COUPY photocages 4−6 also decreased upon irradiation, whereas that of coumarin 3 increased, which could be attributed to a higher fluorescence quantum yield of coumarin alcohol 15 compared with 19 and 21. The stability of the compounds to spontaneous hydrolysis was also studied in a 1:1 (v/v) mixture of PBS buffer and ACN at 37°C (Figures S18− S21). Importantly, compounds 3−5 remained stable after incubation in the dark for 5 h at 37°C, whereas a slight stability reduction was observed for COUPY photocage 6. Overall, the results from the photolysis experiments with COUPY-caged model compounds 4−6 revealed that the structure of the coumarin caging group as well as the nature of the leaving group (i.e., the carboxylic acid in our models) had a strong influence on the photoactivation process. As expected, the photolysis of compound 3 was much faster than that of 4: the release of benzoic acid from 3 was almost complete (ca. 90%) after 15 min of irradiation with visible light, whereas it was required more than 60 min to completely uncage 4 (k u = 0.172 min −1 for 3 vs k u = 0.052 min −1 for 4; see Table 2). Similar results were obtained with yellow light irradiation: compound 3 was completely uncaged after 90 min of irradiation, whereas only half of 4 was photoactivated at this time ( Figure S16). As previously found in other coumarinbased caging groups, 20,22 the higher stability of the secondary carbocation intermediate generated upon photolysis of 3 might account for this result. Hence, considering that the rate of the overall photolysis depends on the rate constant of the initial heterolytic C−O bond cleavage, the incorporation of a methyl group in a position adjacent to the photocleavable bond in the coumarin skeleton seems to be a key parameter for modulating the photoactivation process. As expected, the photocleavage process was slightly faster with 4 than with 5, owing to the presence of a better-leaving group in the former compound (benzoate vs acetate; see Table 2). To our surprise, the replacement of pyridine by pyrimidine (compare 3 with 6) had a negative effect on the photosensitivity of the COUPY caging group since about 70% of the starting caged compound 6 was still present in the reaction mixture after irradiation with visible light for 30 min, while about 98% of the pyridine analogue (3) was uncaged at this time. Hence, the introduction of the pyrimidine heterocycle in COUPY coumarins has its pros and cons since it improves the photophysical properties of the chromophore (i.e., red-shifts absorption and emission maxima and increases the molar extinction coefficient) but slows down the uncaging process. This drawback can be likely attributed to the higher electronwithdrawing character of pyrimidine compared with pyridine, which might destabilize the carbocation component of the carbocation−carboxylate ion pair (Scheme 3) and, consequently, would lead to a decrease of the rate constant of the first bond cleavage. Since the photoheterolysis mechanism for coumarins requires the presence of a nucleophilic solvent to avoid 21 we decided to investigate the photoactivation of COUPY-caged model compounds 3 and 4 in a 4:1 (v/v) mixture of PBS buffer and ACN to assess the effect of increasing the amount of water in the photolysis rate. As expected, reduction of the non-nucleophilic ACN co-solvent from 50 to 20% led to a 3-fold increase of the photolysis rate for both compounds when irradiated with yellow light ( Figure S22 and Table 2). 
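The uncaging rate constants k_u quoted above are consistent with treating the HPLC time course as a single-exponential decay. The following sketch fits such a first-order model to an invented time course; the data points, the assumption of strictly first-order kinetics, and the initial guesses are illustrative, not the published measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical HPLC time course of a caged compound during irradiation
# (peak area relative to the internal standard); values are illustrative only.
t_min = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
conc  = np.array([1.00, 0.71, 0.42, 0.18, 0.075, 0.032, 0.006])

def first_order(t, c0, k):
    """Single-exponential decay assumed for the uncaging kinetics."""
    return c0 * np.exp(-k * t)

(c0, k_u), _ = curve_fit(first_order, t_min, conc, p0=(1.0, 0.1))
print(f"k_u = {k_u:.3f} min^-1, half-life = {np.log(2) / k_u:.1f} min")
```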
Next, we evaluated the photoactivation of COUPY-caged DNP derivatives 7 and 8 using visible LED light ( Figures S23 and S24). To our delight, DNP was efficiently photoreleased from both compounds and a main photolytic coumarin alcohol product (15 or 18) was formed in both cases (Scheme 4), which demonstrates that COUPY caging groups can also be used for the protection of aromatic alcohols in addition to carboxylic acids. It is worth noting that some other minor coumarin photoproducts were also generated according to MS characterization data, including vinyl coumarins 23 and 24 (see Tables S1 and S2), which reproduced the behavior of COUPY photocages 3 and 6. As shown in Figure 2, photolysis of the Nhexylpyridinium COUPY photocage (8) was slightly faster than that of the N-methylated analogue (7): the release of DNP from 8 was almost complete (ca. 95%) after 7 min of irradiation with visible light, whereas it required more than 20 min to completely uncage 7 (k u = 0.118 min −1 for 7 vs k u = 0.355 min −1 for 8; see Table 2). Similar results were obtained by UV−vis and fluorescence spectroscopy ( Figure S25). It is worth noting that both DNP-caged derivatives underwent photochemical cleavage with almost quantitative chemical yield upon visible-light irradiation when completed photolysis was achieved (94% for 7 after 25 min and 97% for 8 after 9 min), which agrees with the full consumption of the starting material according to HPLC analysis (see Figures S23, S24, and S26). Encouraged by these results and considering that our previously reported COUPY photocage 3 could be photoactivated with red light, we investigated the photo- Figure S12). As shown in Figure 2 and Figures S27 and S28, the concentration of both compounds decreased gradually with irradiation time, uncaging of the N-hexyl derivative being slightly faster than that of the N-methyl counterpart (k u = 0.019 min −1 for 7 vs k u = 0.036 min −1 for 8; see Table 2), which parallels the results obtained with visible light. Moreover, as previously found with the benzoic acid-caged derivative 3, 20 DNP-caged derivatives took longer to be uncaged on irradiation with red light as compared to visible light, which is a consequence of the lower rate of light absorption. The photolytic efficiency of the uncaging process using visible light (560 ± 40 nm; 40 mW cm −2 ) was determined as the product of the absorption coefficient at the irradiation wavelength and the photolysis quantum yield (ϕ Phot ) calculated from the disappearance of COUPY photocages 3− 8 upon irradiation (Table 2). 20 In good agreement with the results from the photoactivation experiments, the ϕ Phot for compound 3 was higher than that of the analogue lacking the methyl group adjacent to the photolabile bond (ϕ Phot = 5.4 × 10 −5 for 3 vs ϕ Phot = 1.8 × 10 −5 for 4) under yellow light, which led to higher product εxϕ Phot (2.1 for 3 vs 0.63 for 4) since both compounds have similar molar absorption coefficients. A similar photolysis quantum yield was obtained for COUPY photocage 7 with red light (ϕ Phot = 5.1 × 10 −5 ). Interestingly, increasing the water percentage of the uncaging medium from 50 to 80% resulted in a remarkable enhancement in the uncaging efficiencies of COUPY photocages 3 and 4 (6.4 in PBS/ACN 4:1 vs 2.1 in PBS/ACN 1:1 for compound 3 and 6.5 vs 0.63, respectively, for compound 4). Photoactivation Studies in Living HeLa Cells. 
Once demonstrated that both COUPY-caged DNP derivatives can be efficiently photoactivated with visible light, we focused on investigating their photoactivation in living cells. First, the stability of COUPY photocages 7 and 8 in complete cell culture medium (Dulbecco's modified Eagle's medium (DMEM) containing high glucose and supplemented with 10% fetal bovine serum (FBS) and 50 U/mL penicillin− streptomycin) was studied. As shown in Figures S29 and S30, both compounds exhibited high dark stability upon incubation in the cell culture medium for 1 h at 37°C. Next, the cellular uptake of compounds 7 and 8 was studied in HeLa cells (2 μM, 30 min incubation) by confocal microscopy and compared with that of the corresponding coumarin alcohol photoproducts (compounds 15 and 18, respectively; see Scheme 4). As shown in Figure 3, the fluorescence emission signal was observed inside the cell for all the compounds after excitation at 561 nm, which confirmed an excellent cellular uptake. In the case of compound 8, the staining pattern was similar to that previously found for the parent N-alkylated COUPY fluorophores (e.g., 2a and 2b) 19a,e and COUPY photocage 3, 20 which suggested accumulation mainly in mitochondria. Hence, the incorporation of the DNP cargo does not alter the subcellular localization of the resulting COUPY photocage. Similarly, the photoreleased alcohol derivative (18) accumulated mainly in mitochondria. Subsequent co-localization experiments with MitoTracker Green FM (MTG) confirmed the localization of both To our surprise, the pattern of staining for COUPY photocage 7 was different from that of 8 and the reference compound 3 since the fluorescence signal was dispersed in different cellular compartments (nucleoli, intracellular vesicles, and cell membranes) rather than located mainly in the mitochondria (Figure 3). By contrast, coumarin alcohol 15 was located mainly in mitochondria and, to a lesser extent, in nucleoli and intracellular vesicles. Hence, the replacement of the benzoic acid cargo in our previously reported N-methyl COUPY photocage (3) by DNP (7) seems to alter the subcellular localization of the compound. Thus, N-alkylation of the pyridine heterocycle in the COUPY caging group with a long alkyl chain (e.g., hexyl) seems to be an important factor to retain mitochondria specificity in COUPY photocages, as found with compound 8. To investigate the photoactivation of COUPY photocage 8 within mitochondria of living HeLa cells, we followed an indirect approach described recently by Weinstain and collaborators with BODIPY photocages incorporating triphenylphosphonium as a mitochondria-targeting moiety, 18b which is based on the use of rhodamine 123 (Rho123), a lipophilic cationic dye that accumulates selectively in mitochondria. 24 Since this probe is highly sensitive to changes in the mitochondrial membrane potential (Δψ m ), the light-mediated release of DNP from 8 should induce the exit of Rho123 from mitochondria and its redistribution to the cytoplasm. This phenomenon is a consequence of the well-known ability of DNP to decrease Δψ m by disrupting the proton gradient across the mitochondrial membrane. 25 As expected, a strong mitochondria-localized fluorescence signal was observed after excitation of HeLa cells incubated with Rho123 (26 μM, 15 min) with a green light laser (λ ex = 488 nm). 
However, as shown in Figure S33, addition of DNP caused a decrease of the overall mitochondrial fluorescence signal, which was redistributed along the cytoplasm and nucleus, thereby indicating that Rho123 was released from mitochondria due to DNPinduced modification of Δψ m . It is worth noting that mitochondria localization of Rho123 was not modified upon irradiation of the cells (BP 545/25 filter, 1.4 mW/cm 2 , 15 s) in the absence of DNP. Once demonstrated the sensitivity of Rho123 to the external addition of DNP in our cell experiment, we focused on investigating if DNP was photoreleased from COUPY photocage 8 in live cells. For this purpose, HeLa cells were incubated with Rho123 (26 μM) and COUPY photocage 8 (2 μM) for 30 min in the dark. As shown in Figure 4, both compounds localized in mitochondria, leading to a perfect correlation between Rho123 and COUPY photocage 8 signals ( Figure S32), as inferred by the high Pearson coefficient (r = 0.88), which confirms that COUPY photocage does not disrupt the mitochondrial membrane potential by itself. This was supported by the Manders' coefficients since the degree of co-localization of 8 over Rho123 (M1 coefficient) was 0.80, whereas that of Rho123 over 8 (M2 coefficient) was 0.87. To our delight, the Rho123 mitochondrial fluorescence intensity was clearly reduced (ca. 40%) upon irradiation of the cells with yellow light (BP 545/25 filter, 1.4 mW/cm 2 ) for 15 s (Figure 4 and Figure S34), which confirmed the photorelease of DNP from COUPY photocage 8. By contrast, compound 8 fluorescence intensity remained unaltered. It is worth noting that the photoreleased coumarin alcohol 18 was not sensitive to changes in the Δψ m since no significant changes in the mitochondrial fluorescence intensity were observed upon incubation of HeLa cells with 18 alone and after the addition of DNP ( Figure S35). ■ CONCLUSIONS In summary, we have synthesized and fully characterized five new COUPY photocages for the protection of carboxylic acids (4−6) and 2,4-dinitrophenol (7 and 8) to investigate how the structure of the caging group affects the rate and efficiency of the photoactivation process compared with our previously described COUPY photocage 3, 20 as well as if uncaging can be triggered in living cells. All COUPY-caged model compounds exhibit attractive photophysical and physicochemical properties for use in biological applications, such as absorption in the visible region (λ max ranging from 555 to 570 nm), large molar extinction coefficients (27.6−59.4 M −1 cm −1 ), and moderate aqueous solubility. The newly synthesized COUPY photocages were found stable to spontaneous hydrolysis when incubated in cell culture medium in the dark, and they could be efficiently photoactivated by yellow and red light in phosphate-buffered saline medium. Photolysis studies have demonstrated that the incorporation of a methyl group in the position adjacent to the photocleavable bond in the coumarin structure is particularly important to fine-tune the photochemical properties of the resulting caging group. Additionally, the use of a COUPYcaged version of the protonophore 2,4-dinitrophenol allowed us to confirm that photoactivation can occur within the mitochondria of living HeLa cells upon irradiation with low doses of yellow light. The new PPGs presented here complement the photochemical toolbox since they will facilitate the delivery and release of photocages of bioactive molecules into mitochondria for therapeutic applications. 
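The Pearson and Manders coefficients cited above can be computed directly from two co-registered channel images once thresholds or masks are defined. The sketch below uses synthetic images and arbitrary thresholds; it illustrates the definitions (M1 as the fraction of compound signal in Rho123-positive pixels, and conversely for M2) rather than reproducing the JaCoP analysis.

```python
import numpy as np

def pearson_manders(ch_a, ch_b, thr_a, thr_b):
    """Pearson r and Manders M1/M2 for two co-registered intensity images.

    M1: fraction of channel-A intensity in pixels above the channel-B threshold.
    M2: fraction of channel-B intensity in pixels above the channel-A threshold.
    """
    a, b = ch_a.astype(float).ravel(), ch_b.astype(float).ravel()
    r = np.corrcoef(a, b)[0, 1]
    m1 = a[b > thr_b].sum() / a.sum()
    m2 = b[a > thr_a].sum() / b.sum()
    return r, m1, m2

# Illustrative synthetic channels (e.g., compound 8 vs Rho123 after masking).
rng = np.random.default_rng(0)
mito = rng.random((64, 64)) > 0.7                       # shared "mitochondrial" structure
ch8 = mito * rng.uniform(50, 255, (64, 64)) + rng.uniform(0, 10, (64, 64))
rho = mito * rng.uniform(50, 255, (64, 64)) + rng.uniform(0, 10, (64, 64))

r, m1, m2 = pearson_manders(ch8, rho, thr_a=25, thr_b=25)
print(f"Pearson r = {r:.2f}, M1 = {m1:.2f}, M2 = {m2:.2f}")
```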
Work is in progress in our laboratory to further improve the photophysical and photochemical properties of COUPY-based caging groups through modification of the coumarin scaffold. ■ EXPERIMENTAL SECTION Materials and Methods. Unless otherwise stated, common chemicals and solvents (HPLC grade or reagent grade quality) were purchased from commercial sources and used without further purification. A hot plate magnetic stirrer, together with an aluminum reaction block of the appropriate size, was used as the heating source in all reactions requiring heat. Aluminum plates coated with a 0.2 mm thick layer of silica gel 60 F 254 were used for thin-layer chromatography (TLC) analyses, whereas column chromatography purification was carried out using silica gel 60 (230−400 mesh). Reversed-phase high-performance liquid chromatography (HPLC) analyses were carried out on Jupiter Proteo C 12 columns (column 1, 250 × 4.6 mm, 90 Å 4 μm; column 2, 250 × 4.6 mm, 90 Å 4 μm; flow rate, 1 mL/min) using linear gradients of 0.1% formic acid in H 2 O (A) and 0.1% formic acid in ACN (B). The NMR spectra were recorded at 25 or 75°C in a 400 MHz spectrometer using the The Journal of Organic Chemistry pubs.acs.org/joc Article deuterated solvent as an internal deuterium lock. The residual protic signal of chloroform or DMSO was used as a reference in the 1 H and 13 C NMR spectra recorded in CDCl 3 or DMSO-d 6 , respectively. Chemical shifts are reported in part per million (ppm) in the δ scale, coupling constants in Hz, and multiplicity as follows: s (singlet), d (doublet), t (triplet), q (quartet), qt (quintuplet), m (multiplet), dd (doublet of doublets), dq (doublet of quartets), br (broad signal), etc. The proton signals of the E and Z rotamers were identified by simple inspection of the 1 H spectrum, and the rotamer ratio was calculated by peak integration. The 2D-NOESY spectra were acquired in CDCl 3 with mixing times of 500 ms. The electrospray ionization mass spectra (ESI-MS) were recorded on an instrument equipped with a single quadrupole detector coupled to an HPLC and high-resolution (HR) ESI-MS on an LC/MS-TOF instrument. Synthesis of COUPY Scaffolds (12−18). Compound 12. 4-Pyridylacetonitrile hydrochloride (400 mg, 2.60 mmol) and NaH (60% dispersion in mineral oil, 210 mg, 5.2 mmol) were dissolved in anhydrous ACN (30 mL) under an argon atmosphere. After stirring for 15 min at room temperature, a solution of thiocoumarin derivative 9 22 (0.5 g, 1.36 mmol) in a 1:1 mixture of anhydrous ACN and DCM (30 mL) was added dropwise under Ar, and the reaction mixture was stirred at room temperature for 2 h and protected from light. Then, silver nitrate (0.57 mg, 3.41 mmol) was added and the mixture was stirred at room temperature for 2 h. The crude was evaporated under 162.8, 154.6, 150.9, 150.0, 140.6, 140.4, 124.6, 121.1, 119.3, 110.9, 109.4, 107.1, 97.3, 84.0, 61.7 (127 μL, 1.12 mmol) Compound 7. To a solution of coumarin 15 (33 mg, 0.063 mmol) in anhydrous ACN (5 mL), sodium hydride (60% dispersion in mineral oil, 7.6 mg, 0.19 mmol) was added and the resulting mixture was stirred for 15 min at room temperature under an Ar atmosphere. After the addition of 1-fluoro-2,4-dinitrobenzene (40 μL, 0.31 mmol), the reaction mixture was stirred overnight at 30°C. Then, more NaH was added (5.0 mg, 0.13 mmol) since some starting material was still present according to HPLC-MS analysis, and the reaction mixture was stirred again overnight at 30°C. 
After removal of the solvent under reduced pressure, the product was purified by column chromatography (silica gel, 50−100% DCM in hexanes, and then 2−25% MeOH in DCM) to give 7 mg (16% yield) of a purple solid. TLC: R f (10% MeOH in DCM) 0. 28 Compound 8. To a solution of coumarin 18 (27 mg, 0.051 mmol) in anhydrous ACN (5 mL), sodium hydride (60% dispersion in mineral oil, 6.12 mg, 0.15 mmol) was added under an Ar atmosphere. After stirring for 15 min at room temperature, 1-fluoro-2,4dinitrobenzene (32 μL, 0.26 mmol) was added and the reaction mixture was stirred overnight at 30°C. Then, more NaH was added (2.04 mg, 0.051 mmol) since some starting material was still present according to HPLC-MS analysis, and the reaction mixture was stirred for 3 h at 30°C. After removal of the solvent under reduced pressure, the product was purified by column chromatography (silica gel, 0.25− 10% MeOH in DCM) to give 11 mg (31% yield) of a purple solid. Photophysical Characterization of COUPY-Caged Compounds (3−8). The absorption spectra were recorded in a Jasco V-730 spectrophotometer at room temperature. Molar absorption coefficients (ε) were determined by direct application of the Beer− Lambert law using solutions of the compounds in a 1:1 (v/v) mixture of PBS buffer and ACN with concentrations about 10 −6 M. The emission spectra were registered in a Photon Technology Interna-tional (PTI) fluorimeter. Fluorescence quantum yields (Φ F ) were measured by the comparative method using cresyl violet in ethanol (CV; Φ F;Ref = 0.54 ± 0.03) as a reference. 26 Then, optically matched solutions of the compounds and CV were excited and the fluorescence spectra were recorded. The absorbance of sample and reference solutions was set below 0.1 at the excitation wavelength (540 nm), and Φ F values were calculated using eq 1: Irradiation Experiments. Photolysis studies were performed at 37°C in a custom-built irradiation setup from Microbeam, which includes a high-performance quartz glass cuvette, a thermostated cuvette holder, and mounted high-power light-emitting diodes (LEDs) from BWTEK Inc. of red (620 ± 15 nm; 130 mW cm −2 ) and wide range (470−750 nm range, centered at 530 nm; 150 mW cm −2 ) light ( Figure S12). The incorporation of a bandpass filter in the visible LED provided yellow light with a maximum emission wavelength around 560 ± 40 nm (40 mW cm −2 ) ( Figure S12). In a typical experiment, the cuvette containing 1.5 mL of a solution of the caged compound (20 μM) and 4-N,N′-dimethylaminopyridine (internal standard, 20 μM) in a 1:1 (v/v) mixture of PBS buffer and ACN was placed in front of the light source (distance <0.1 mm) and irradiated for the indicated times while constantly stirred. Light irradiance at the cuvette was measured by using a light meter and used to calculate the photon irradiance spectra using the emission spectra of the LEDs. Then, the rate of photon absorption by the sample was calculated by multiplying the photon irradiance spectra by the absorption factor of the sample at each wavelength (1−10 −A(λ) , where A(λ) is the sample absorbance) and integrating over the entire spectrum. At each time point, samples were taken and analyzed by reversed-phase HPLC-ESI-MS with a Jupiter Proteo C 18 column (250 × 4.6 mm, 90 Å, 4 μm, flow rate: 1 mL min −1 ) by using linear gradients of 0.1% formic acid in H 2 O (A) and 0.1% formic acid in ACN (B). Photolysis quantum yields were calculated as the initial slope of the plot of the amount of coumarin deprotected vs the number of photons absorbed. 
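The quantum-yield procedure described in the Irradiation Experiments paragraph, i.e., integrating the photon irradiance weighted by the absorption factor 1 − 10^−A(λ) and then taking the initial slope of released compound versus photons absorbed, can be sketched as below. The spectra, beam area, sampling times, and consumption values are invented placeholders; only the sequence of operations follows the description above.

```python
import numpy as np

h, c, N_A = 6.626e-34, 2.998e8, 6.022e23

# Hypothetical measured spectra (illustrative values only).
wl_nm = np.linspace(500, 620, 121)                      # wavelength grid (nm)
irr   = 0.9e-3 * np.exp(-((wl_nm - 560) / 25.0) ** 2)   # W cm^-2 nm^-1 (~40 mW cm^-2 LED)
absb  = 0.15 * np.exp(-((wl_nm - 566) / 30.0) ** 2)     # sample absorbance A(lambda)

# Rate of photon absorption (photons s^-1 cm^-2): photon irradiance x (1 - 10^-A).
photon_irr = irr * (wl_nm * 1e-9) / (h * c)             # photons s^-1 cm^-2 nm^-1
q_abs = np.trapz(photon_irr * (1.0 - 10.0 ** (-absb)), wl_nm)

# Einstein absorbed at each sampling time for an assumed beam cross-section.
area_cm2, times_s = 1.0, np.array([0.0, 60.0, 120.0, 240.0, 480.0])
einstein_abs = q_abs * area_cm2 * times_s / N_A

# Hypothetical amount of photocage consumed (mol) at the same time points.
mol_deprotected = np.array([0.0, 1.0, 2.0, 4.0, 7.5]) * 1e-9

# Photolysis quantum yield = initial slope (first points only, to limit inner-filter effects).
phi_phot = np.polyfit(einstein_abs[:3], mol_deprotected[:3], 1)[0]
print(f"estimated photolysis quantum yield: {phi_phot:.2e}")
```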
20 Only the initial points were included in the calculation to avoid inner-filter effects due to the photoproducts, which absorb in the same range and thus slow down the process as the reaction progresses. Area Confocal Microscopy Studies. Cell Culture and Treatments. HeLa cells were maintained in DMEM containing high glucose (4.5 g/L) and supplemented with 10% FBS and 50 U/mL penicillin− streptomycin. For cellular uptake experiments and posterior observation under the microscope, cells were seeded on glass-bottom dishes (P35G-1.5-14-C, MatTek). Twenty-four hours after cell seeding, cells were incubated for 30 min at 37°C with the compounds (7, 8, 15, or 18, 2 μM; Rho123 200 μM) in supplemented DMEM. Then, cells were washed two times with DPBS (Dulbecco's phosphate-buffered saline, pH 7.0−7.3) to remove the excess of the fluorophores and kept in low-glucose DMEM without phenol red for fluorescence imaging. For co-localization experiments with MitoTracker Green FM, HeLa cells were treated with compounds 8 or 18 (2 μM) and MitoTracker Green FM (0.1 μM) for 30 min at 37°C in nonsupplemented DMEM. After removal of the medium and washing two times with DPBS, cells were kept in low-glucose DMEM without phenol red for fluorescence imaging. Fluorescence Imaging. All microscopy observations were performed using a Zeiss LSM 880 confocal microscope equipped with a 405 nm laser diode, an argon-ion laser, a 561 nm laser, and a 633 nm laser. The microscope was also equipped with a Heating Insert P S (Pecon) and a 5% CO 2 -providing system. Cells were observed at 37°C using a 63× 1.4 oil immersion objective. Compounds 7, 8, 15, and 18 were excited using the 561 nm laser and detected from 570 to 670 nm. Rho123 and MTG were observed using the 488 nm laser line of the argon-ion laser, whereas the 405 nm laser diode was used for observing Hoechst 33342. Irradiation experiments were also performed in the confocal microscope by using its fluorescence filter set 43 with an excitation BP 545/25 filter and its HXP 120 V fluorescence lamp at 1.4 mW/cm 2 for 15 s. Image processing and analysis were performed using Fiji. 27 Intensity Measurement. The compound and Rho123 images were processed by background subtraction (rolling ball radius = 50) and median filtering (radius = 2). Mean intensity was measured after setting the Huang threshold. 28 Co-Localization Coefficients. The MitoTracker and compound channels were processed by median filtering (radius = 1), Gaussian filtering (sigma = 1), and background subtraction (rolling ball radius = 30). Then, images were segmented by applying the Li threshold, 29 and the resulting binary images were used to mask the original images. Co-localization coefficients were measured using the JaCoP plugin17 on the different stacks of images (n = 5) with each stack containing 25 cells on average. ■ ASSOCIATED CONTENT Data Availability Statement The data underlying this study are available in the published article and its Supporting Information. UV−vis absorption and fluorescence emission spectra of the compounds, additional figures and material from stability studies, irradiation experiments and fluorescence imaging, and copies of 1 H and 13 C{ 1 H} NMR and HRMS spectra of the synthesized compounds (PDF)
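For the image analysis steps listed in the Intensity Measurement and Co-Localization Coefficients paragraphs, a Python approximation of the Fiji pipeline (median and Gaussian filtering, rolling-ball background subtraction, Li thresholding, masking) might look as follows. The default filter radii mirror the values quoted above, but the function name, the synthetic test image, and the use of scipy/scikit-image in place of Fiji are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, restoration

def preprocess_channel(img, median_radius=1, gaussian_sigma=1, ball_radius=30):
    """Approximate the Fiji steps described above: median filter, Gaussian filter,
    rolling-ball background subtraction, then Li thresholding to build a mask."""
    img = img.astype(float)
    img = ndimage.median_filter(img, size=2 * median_radius + 1)
    img = ndimage.gaussian_filter(img, sigma=gaussian_sigma)
    background = restoration.rolling_ball(img, radius=ball_radius)
    img = np.clip(img - background, 0, None)
    mask = img > filters.threshold_li(img)
    return img * mask, mask

# Illustrative usage on a synthetic image.
rng = np.random.default_rng(1)
raw = rng.poisson(5, (128, 128)).astype(float)
raw[40:60, 40:60] += 80.0                      # bright "organelle" region
masked, mask = preprocess_channel(raw)
print(f"segmented pixels: {int(mask.sum())}")
```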
Mathematical model of dry coal deshaling by using FGX vibrating air table

Dry coal separation is a relatively new method of coal cleaning in Poland. The first air separators used on a larger scale appeared in the United States in the 1930s, before the Second World War. Today the method is widely used in China, the United States, Russia, and wherever access to water is limited. In recent years, separation on FGX vibrating air tables has become particularly popular; in this process an air-solid suspension separates the heavier particles from the lighter coal grains. The separation depends on a number of parameters, such as feed preparation and feed properties, grain size fraction, and air supply. This paper describes dry coal separation and modern dry separation techniques and, in its main part, focuses on building a mathematical model that characterizes the quality of this process for coal deshaling. The model is based on the relationship between the calorific value and the ash content of the tested samples.

Introduction
Coal preparation relies on many different methods, the most common being gravity separation (using jigs, dense medium separators, dense medium cyclones, and shaking tables) and physicochemical methods such as flotation (commonly used for coking coals). One of the older gravity methods is dry coal separation using air jigs or air tables. At the beginning of the 20th century this method was used in the United States (an air dense medium suspension of air and sand in the Frazer-Yancey dry separator). In the 1930s several dry coal preparation plants were built in the United States; the largest dry separation plant was in Lundale, West Virginia, and its largest separator had a total capacity of 200 t/h. At the same time, dry separation was used in Europe in countries such as England (1925), Belgium, Germany, and Poland (1928). Dry enrichment is usually applied where water for wet processes is scarce and in harsh climates, where separation products from a water medium could freeze. The raw materials that can be enriched with this method are mainly hard coals with a large proportion of waste fractions, as well as the harder types of brown coal. In Poland, the common dry separation equipment is the FGX air vibrating table [1,2,3,4,5,7,8,9,11].

Dry deshaling principles
Dry beneficiation on an air table relies on a continuously rising stream of air. The working plate can also vibrate to increase the accuracy of separation, and the way products are collected depends on the construction of the table. The FGX vibrating air table is an example of this type of separator. The second type of air separator is the air jig, in which air is fed in pulses under the material layer and the material is separated in the same way as during wet enrichment in a water jig: the raised bed loosens, and grains of similar density begin to form layers. The FGX vibrating air separator consists of a feed hopper, a dosing feeder, a perforated working plate, a vibrator, air chambers, a dust removal module, and a mechanism for changing the inclination angle of the working plate and the vibration frequency. The feed is delivered by a vibratory feeder onto a working plate that is inclined at adjustable lateral and longitudinal angles and set in vibrating motion by a vibratory drive [13]. Air chambers under the working plate, fed by a centrifugal fan, supply the air.
The fine coal material forms a fluidized bed (an air-solids suspension) on contact with the air, so individual grains can move relative to each other according to their size and density. Under the combined action of the air current and the vibrations, the coal bed is lifted and then stratifies by density: the lighter material is suspended at the surface of the fluidized bed, while grains of higher density sink deeper. An additional phenomenon is the liquefaction effect produced by the interaction between the fine grains, which form the suspension, and the coarse grains; this effect improves the efficiency of separation of the coarse fractions. Fine material at the surface of the layer tends to slide over it and falls continuously under gravity through the partition at the edge of the plate (discharge of the enriched coal). The heavy material sinks to the bottom of the layer and is moved toward the waste (gangue) collection point. Figure 1 shows the material distribution on the FGX working plate [4,5,6,7,8,11,13]. (Figure 1. Material distribution on the FGX working plate [12].) The previously mentioned research team has also carried out studies showing that the FGX vibrating air table can be used to remove sulfur [16,17] and mercury [18,19,20,21,22,23], and is working on applying this method as a supplement to jig enrichment technology (initial averaging of the feed using dry deshaling) [24,25].

Parameters influencing separation on the air table separator
Air separators have low separation accuracy, so the process must be conducted under strictly controlled enrichment conditions. The separation process depends on many parameters [14,15]:
• Initial preparation of the feed - because the coefficient of equal-falling grains is low, narrow size classes (for example 50-25 mm or 25-6 mm) should be sent to the separation process. Industrial experience shows that it is also possible to send grains of size 80(75)-0 mm or 50-0 mm for beneficiation. During separation it is essential to keep the share of fine grains in the feed between 15 and 30%; these grains are needed to maintain the static pressure (before it is converted into dynamic pressure).
• Grain size and weight composition of the feed - in processing plants, grains up to a maximum size of 100 mm are sometimes separated, while grains smaller than 0.8 (0.5) mm are not separated. A feed with a high content of middlings is also unfavorable for the separation process; where many middlings are present, the classification scale should be changed.
• Amount of air for the separation process - this parameter is determined by tests. The larger the grains in the feed, the higher its moisture, and the thicker the feed layer sent to separation, the greater the amount of air required.
Experience reported by foreign users shows that the separation process also depends on the factors listed below [14,15]:
• Total moisture,
• Dimensions of the separated material (maximum grain size),
• Grain size fraction,
• Share of the 0-6 mm grain size fraction in the feed,
• Ratio of rock grains to coal grains in the feed,
• Total ash content of the material,
• Total amount of middlings in the raw feed.
Other specific parameters, and the dependence of grain behavior on the surface of the working plate during separation, are described in [26].
Characteristics of the research material
The research material was steam coal from three different coal mines located in Poland (12 samples in total). The minimum total weight of each test sample was 25 Mg, and the samples differed from one another in calorific value Q_u, ash content A^r, and in how difficult the coal was to separate. In practice such samples should not be compared directly, but for the purposes of this paper they are compared through the correlation between calorific value Q_u and ash content A^r. From this comparison a theoretical mathematical equation is derived that defines the dependence between calorific value Q_u and ash content A^r during the dry separation process. This paper is an introduction to further research on the efficiency of dry coal separation.

Results
From the many published papers on dry coal separation in Poland and on the new vibrating air table separator, the most relevant data showing the effects of deshaling were selected. Based on the data published by the team of Prof. Blaschke and Prof. Baic in Polish and foreign journals, and on the descriptions of the working principles of the FGX vibrating air table and of the parameters that affect the efficiency of the beneficiation process, one characteristic was chosen as a good basis for a theoretical mathematical model of dry separation [1,3,4,7,10,11,14,15,16,17,18,19,20,21,22,23,24,25,26]. The main parameters taken into consideration were the calorific value and the ash content of the raw feed and of the beneficiation products. The parameters used in this section are:
• A_F - ash content in the feed,
• Q_uF - calorific value of the feed,
• A_C - ash content in the concentrate,
• Q_uC - calorific value of the concentrate,
• A_M - ash content in the middlings,
• Q_uM - calorific value of the middlings,
• A_T - ash content in the tailings,
• Q_uT - calorific value of the tailings.
Mathematical models were created for the calorific value. Two experiments were taken into consideration. The first consisted of 12 samples, for which four models were created, separately for the feed, the concentrate, the middlings, and the tailings. The second experiment consisted of 48 samples, and a single model was created for the whole sample set, without partition into products. The correlation matrices for both experiments are presented in Tables 1 and 2, and the resulting regression models are given in Tables 3-7. The correlation indexes between calorific values and ash contents in experiment 1 are high for all products; the weakest case occurred for the concentrate, but the value is still statistically significant. The same observations were made for experiment 2. This was the basis for building the regression models presented in Tables 3-7, where: b - value of the parameter; b* - value of the parameter for the normalized distribution; t - value of the t-Student test; p - significance level; R^2 - coefficient of determination; F - value of the F-Snedecor test. The fitted models reached coefficients of determination of R^2 = 86.33%, 69.43%, and 84.24% (the regression equations themselves are given in Tables 3-7).
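As an illustration of the regression analysis summarized above, the sketch below fits a simple linear model of calorific value against ash content and reports R², the t statistic of the slope, and the F statistic. The ash and calorific-value arrays are invented for illustration and are not the measured samples; only the form of the analysis follows the paper.

```python
import numpy as np
from scipy import stats

# Illustrative data only: ash content A^r (%) and calorific value Q_u (MJ/kg)
# for feed/products of a dry separation test (not the measured samples).
ash = np.array([ 8.0, 12.5, 18.0, 24.0, 31.0, 42.0, 55.0, 68.0, 76.0])
q_u = np.array([28.5, 26.9, 24.8, 22.6, 20.1, 16.2, 11.4,  7.0,  4.3])

fit = stats.linregress(ash, q_u)
n = len(ash)
r2 = fit.rvalue ** 2
f_stat = r2 * (n - 2) / (1.0 - r2)      # F-Snedecor statistic for simple regression
t_stat = fit.slope / fit.stderr         # t-Student statistic of the slope

print(f"Q_u = {fit.intercept:.2f} + ({fit.slope:.3f})*A,  R^2 = {100 * r2:.2f}%")
print(f"t = {t_stat:.2f}, F = {f_stat:.2f}, p = {fit.pvalue:.2e}")
```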
Conclusions
Based on the calculations carried out for both experiments considered in this paper, high correlations were found between calorific values and ash contents. This means that the calorific value of a coal sample can be forecast from its ash content. The dry coal separation method used in these experiments proved to be an efficient tool for dividing the feed into sub-products (concentrate, middlings, and tailings), and thanks to the models an appropriate tool or technique can be selected more easily and more accurately. The number of samples considered in this paper is not sufficient to claim that the results are representative of all circumstances; the samples were prepared specifically for the purpose of evaluating the FGX dry separation method. With more samples, more variable process conditions, and more measured parameters (such as sulfur content, phosphorus content, mercury content, and different coal washability characteristics), the results would describe the process more adequately. However, this method is not yet popular in Poland and requires further study.
Structure of the Λ(1670) resonance

We examine the internal structure of the Λ(1670) through an analysis of lattice QCD simulations and experimental data within Hamiltonian effective field theory. Two scenarios are presented. The first describes the Λ(1670) as a bare three-quark basis state, which mixes with the πΣ, K̄N, ηΛ and KΞ meson-baryon channels. In the second scenario, the Λ(1670) is dynamically generated from these isospin-0 coupled channels. The K⁻p scattering data and the pole structures of the Λ(1405) and the Λ(1670) can be simultaneously described well in both scenarios. However, a comparison of the finite-volume spectra to lattice QCD calculations reveals significant differences between these scenarios, with a clear preference for the first case. Thus the lattice QCD results play a crucial role in allowing us to distinguish between these two scenarios for the internal structure of the Λ(1670).

I. INTRODUCTION

Given that the strange quark is considerably heavier than the light quarks, it is a remarkable feature of the baryon spectrum that the lightest odd-parity baryon is not an excited state of the nucleon but lies within the Λ family, with nonzero strangeness. This resonance, the Λ(1405), with I(J^P) = 0(1/2^-), has been the subject of considerable theoretical work and speculation. Before the discovery of quarks, Dalitz and Tuan suggested that it might be a K̄N molecule, since its mass is slightly below the K̄N threshold [1]. However, many other options have been explored since then, including conventional baryon states, dynamically generated states, and three-quark states mixing with multiquark components, as well as other explanations.

There has been considerable interest in the analytic structure of the S-matrix for this system. In particular, the two-pole structure of the Λ(1405), once the SU(3) chiral symmetry of QCD was treated seriously, was unveiled for the first time in Ref. [4]. The two resonance poles in the second Riemann sheet are related to the thresholds of the K̄N and πΣ channels [37,45-49]. The evolution of the poles of the Λ(1380), Λ(1405) and Λ(1670) away from the SU(3) limit was first studied in Ref. [3].

There have recently been many other examples of systems which are suspected of being molecular in nature. For example, this has been proposed [50,51] as an explanation of the Ξ(1620) recently observed by the Belle collaboration [52]. When it comes to heavy quarks, many multiquark states have been announced by different experiments, including the P_c(4312), P_c(4440), P_c(4457) [53,54], T_cc(3875) [55,56], P_cs(4459) [57], and the P_cs(4338) [58]. The occurrence of these exotic states near thresholds, together with their exotic quark content, has made the molecular picture a popular interpretation of these states.

As the Λ(1405) is now commonly interpreted as a K̄N bound state, it is natural to ask where one might find the lightest P-wave uds baryon expected in the conventional quark model. The analysis of Veit et al. suggested that the J^P = 1/2^- Λ(1670) might be identified with this triquark core [34]. In such a scenario the structure of the Λ(1405) and the Λ(1670) would be very different. It is important to investigate this interpretation very carefully, given the suggestion that once one removes molecular-like states from the baryon spectrum it appears as though the quark-model idea of oscillator-like major shells might be correct [59].
With this in mind, here we present a detailed study of the S = -1, J^P = 1/2^- system, covering both the Λ(1670) and the Λ(1405) resonance regions within HEFT. First we extend our earlier analysis of the cross-section data for K⁻p scattering to include K⁻ laboratory momenta up to 800 MeV/c, including the near-threshold cross sections measured in 2022 [60]. The pole structures of the Λ(1670) and the Λ(1405) resonances are examined.

We analyze the lattice QCD data for the negative-parity Λ baryons [16,31,38,61], as well as the corresponding eigenvectors describing the structure of those eigenstates. Given that just a few months ago the BaSc collaboration presented their coupled-channel simulations with both single-baryon and meson-baryon interpolating operators at m_π ≈ 200 MeV, which is close to the physical pion mass [62,63], we pay particular attention to their results [62,63].

On the experimental side, there has been considerable progress in updating the K⁻p scattering information associated with the negative-parity Λ baryons, which has been included in our analysis. For example, J-PARC provided the πΣ invariant-mass spectra in K⁻p-induced reactions on the deuteron in 2022 [64]. The ALICE collaboration extracted the K⁻p scattering length with a measurement of momentum correlations in 2021 [65], while measurements of the energy shift of the 1s state in kaonic hydrogen by the SIDDHARTA collaboration have yielded precise values for the K⁻p scattering lengths [66,67]. Simultaneous K⁻p → Σ⁰π⁰, Λπ⁰ near-threshold cross sections were measured at DAΦNE in 2022 [60].

This paper is organized as follows. In Sec. II, we outline the HEFT framework as applied to the negative-parity Λ hyperons, while in Sec. IV the corresponding numerical results and discussion are presented. A summary and concluding remarks are provided in Sec. V.

II. HAMILTONIAN EFFECTIVE FIELD THEORY

In this section, we introduce the framework within which we describe the K⁻p scattering processes. In order to obtain the mass spectra of Λ baryons with J^P = 1/2^-, which can be compared with those observed in lattice QCD, we also present the finite-volume Hamiltonian.

A. The Hamiltonian

In the rest frame, the Hamiltonian includes noninteracting and interacting parts, H = H_0 + H_I, where the kinetic-energy piece of the Hamiltonian is written as

H_0 = |B_0⟩ m_B^0 ⟨B_0| + Σ_α ∫ d³k |α(k)⟩ [ω_{α_M}(k) + ω_{α_B}(k)] ⟨α(k)| .

Here, B_0 denotes a "bare baryon" with mass m_B^0. This state may be interpreted as representing a quark-model baryon, which is then dressed by its coupling to meson-baryon channels [68]. Here α labels the meson-baryon channel and α_M (α_B) is the meson (baryon) in channel α. The meson (baryon) energy is simply ω_X(k) = √(m_X² + k²) for X = α_M (α_B).

For the interacting part of the Hamiltonian we include a vertex interaction coupling the bare baryon to the meson-baryon channel α, as well as the direct two-to-two particle interactions,

H_I = g + v ,
g = Σ_α ∫ d³k { |α(k)⟩ G_α†(k) ⟨B_0| + |B_0⟩ G_α(k) ⟨α(k)| } ,
v = Σ_{α,β} ∫ d³k d³k′ |α(k)⟩ V_{α,β}(k, k′) ⟨β(k′)| .

The form factors associated with the coupling of the bare baryon to meson-baryon channel α take the form

G_α(k) ∝ g^I_{B_0,α} √(ω_{α_M}(k)) u(k) ,

and the potentials motivated by the Weinberg-Tomozawa interaction [34,69,70] take the form

V_{α,β}(k, k′) ∝ g^I_{α,β} [ω_{α_M}(k) + ω_{β_M}(k′)] u(k) u(k′) .

Here, we use the dipole form factor u(k) = (1 + k²/Λ²)^(-2) with regulator parameter Λ = 1 GeV. The g^I_{B_0,α} and g^I_{α,β} are the couplings of the corresponding interaction terms in the isospin-I channel. As discussed in Sec. III.C of Ref.
[71], when working in a nonperturbative effective field theory, couplings encounter significant renormalization such that the perturbative couplings are best promoted to fit parameters. While this is a form of modeling, the Lüscher formalism within HEFT protects the model-independent relation between the scattering observables and the finite-volume spectrum. As discussed in Sec. III, our task is to ensure there are sufficient parameters to accurately describe the scattering data.

B. Infinite-volume scattering amplitude

Here we introduce the formalism used to describe the cross sections of K⁻p scattering in infinite volume. The two-particle scattering T matrices can be obtained by the three-dimensional reduction of the coupled-channel Bethe-Salpeter equation,

T_{α,β}(k, k′; E) = Ṽ_{α,β}(k, k′; E) + Σ_γ ∫ d³q Ṽ_{α,γ}(k, q; E) [1/(E − ω_γ(q) + iε)] T_{γ,β}(q, k′; E) ,

where ω_γ(q) = ω_{γ_M}(q) + ω_{γ_B}(q) and the scattering potential can be expressed from the interaction Hamiltonian above as

Ṽ_{α,β}(k, k′; E) = G_α(k) G_β(k′) / (E − m_B^0) + V_{α,β}(k, k′) .

Note that for K⁻p scattering the T matrices, t_{α,β}(k, k′; E), appear as a linear combination of the I = 0 and I = 1 channels, i.e., t_{α,β}(k, k′; E) = a T^{I=0}_{α,β}(k, k′; E) + b T^{I=1}_{α,β}(k, k′; E), where a and b involve the corresponding Clebsch-Gordan coefficients. The poles associated with the Λ(1405) and Λ(1670) can be obtained by searching for poles of the T^{I=0}_{α,β}(k, k′; E_pole) matrix on the unphysical Riemann sheet.

In addition, the cross section for the process β → α is related to the T matrices by

σ_{α,β} ∝ (k_{α,cm}/k_{β,cm}) |T_{α,β}(k_{α,cm}, k_{β,cm}; E_cm)|² ,

where the subscript "cm" refers to the center-of-mass momentum frame.

C. Finite-volume Hamiltonian model

The scattering processes require interactions in both the I = 0 and I = 1 channels. Since the mass of the Λ(1670) is only 130 MeV below the KΞ threshold, in this work we include the KΞ as well as the πΣ, K̄N and ηΛ channels for I = 0, and the πΣ, K̄N, πΛ, ηΣ, and KΞ channels for I = 1. In the finite volume, the Hamiltonian H consists of free and interacting terms, H = H_0 + H_I. Such a Hamiltonian can be expressed as a matrix, using the corresponding discrete momentum basis.

In the cubic finite volume of lattice QCD with length L, the momentum of a particle is quantized, with allowed magnitudes k_n = (2π/L)√n, where n = 0, 1, 2, .... The noninteracting isospin-0 Hamiltonian is

H_0 = diag( m_B^0, ω_{α_1}(k_0), ω_{α_1}(k_1), ..., ω_{α_2}(k_0), ... ) ,

and the interacting Hamiltonian is obtained from g and v by replacing the momentum integrals with sums over the discrete momenta, with finite-volume matrix elements

Ḡ_α(k_n) = √(C_3(n)/(4π)) (2π/L)^(3/2) G_α(k_n) ,
V̄_{α,β}(k_n, k_m) = [√(C_3(n) C_3(m))/(4π)] (2π/L)³ V_{α,β}(k_n, k_m) .

Here, C_3(n) denotes the number of ways one can sum the squares of three integers to equal n.

III. MODEL (IN)DEPENDENCE IN HEFT

Understanding the model-dependent and model-independent aspects of HEFT is important. HEFT incorporates the Lüscher formalism [39,42], and therefore there are aspects of the calculation that share the same level of model independence as the Lüscher formalism itself.

A. Model independence

The Lüscher formalism provides a rigorous relationship between the finite-volume energy spectrum and the scattering amplitudes of infinite-volume experiment. In HEFT, this relationship is mediated by a Hamiltonian. In the traditional approach, the parameters of the Hamiltonian are tuned to describe lattice QCD results. When the fit provides a high-quality description of lattice QCD results, the associated scattering-amplitude predictions are of high quality. The key is to have a sufficient number of tunable parameters within the Hamiltonian to accurately describe the lattice QCD results.
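To make the finite-volume construction above concrete, the sketch below builds and diagonalizes a toy finite-volume Hamiltonian for one bare state coupled to a single S-wave channel, with momentum shells weighted by C_3(n). The masses, coupling and box size are illustrative placeholders, not the fitted parameters reported later in the paper.

```python
# Toy finite-volume HEFT Hamiltonian: one bare baryon coupled to one
# meson-baryon channel. Units: GeV. All parameter values are illustrative.
import numpy as np
from itertools import product

def c3(n, nmax_comp=10):
    """Number of ways to write n as a sum of three squared integers."""
    r = range(-nmax_comp, nmax_comp + 1)
    return sum(1 for x, y, z in product(r, r, r) if x*x + y*y + z*z == n)

L = 3.0 / 0.1973              # box length ~3 fm, converted to GeV^-1
m0, mM, mB = 1.7, 0.14, 1.2   # bare mass, meson mass, baryon mass (toy)
g, Lam = 0.2, 1.0             # bare-state coupling and dipole regulator

def u(k):                     # dipole form factor u(k) = (1 + k^2/Lam^2)^-2
    return (1.0 + k**2 / Lam**2) ** -2

ns = [n for n in range(0, 20) if c3(n) > 0]        # allowed momentum shells
ks = np.array([(2 * np.pi / L) * np.sqrt(n) for n in ns])
omega = np.sqrt(mM**2 + ks**2) + np.sqrt(mB**2 + ks**2)

# Finite-volume bare-state coupling, weighted by the shell degeneracy C3(n).
Gbar = np.array([np.sqrt(c3(n) / (4 * np.pi)) * (2 * np.pi / L) ** 1.5
                 * g * u(k) for n, k in zip(ns, ks)])

# Assemble H = H0 + HI: bare state in the first slot, channel states after.
dim = 1 + len(ns)
H = np.zeros((dim, dim))
H[0, 0] = m0
H[1:, 1:] = np.diag(omega)
H[0, 1:] = H[1:, 0] = Gbar    # vertex couplings (two-to-two term v omitted)

evals = np.linalg.eigvalsh(H)
print("Lowest finite-volume eigenvalues (GeV):", np.round(evals[:5], 3))
```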
However, in the baryon sector, high-quality lattice QCD results are scarce and HEFT is usually fit to experimental data first. The HEFT formalism then describes the finite-volume dependence of the baryon spectrum, indicating where high-precision lattice QCD results will reside. This is the approach adopted herein. We will show high-quality fits to the experimental scattering observables such that HEFT provides rigorous predictions of the finite-volume lattice QCD spectrum with model independence at the level of the Lüscher formalism.

Of course, this model independence is restricted to the case of matched quark masses in finite and infinite volume. The Lüscher formalism provides no avenue for changing the quark mass. In other words, direct contact with lattice QCD results is only possible when the quark masses used in the lattice QCD simulations are physical.

On the other hand, χPT is renowned for describing the quark-mass dependence of hadron properties in a model-independent manner, provided one employs the truncated expansion in the power-counting regime, where higher-order terms not considered in the expansion are small by definition. Given that finite-volume HEFT reproduces the leading behavior of finite-volume χPT in the perturbative limit by construction [42,71], it is reasonable to explore the extent to which this model independence persists in the full nonperturbative calculation of HEFT.

This has been explored in Ref. [71]. In the one-channel case, where a single-particle basis state (e.g. a quark-model-like ∆) couples to one two-particle channel (e.g. πN), the independence of the results on the form of regularisation is reminiscent of that realised in χPT. Any change in the regulator is absorbed by the low-energy coefficients such that the renormalised coefficients are physical, independent of the renormalisation scheme.

However, in the more complicated two-channel case with a π∆ channel added, the same was not observed. The form of the Hamiltonian becomes constrained, describing experimental data accurately for only a limited range of parameters with specific regulator shapes. The Hamiltonian becomes a model in this case, with regulator-function scales and shapes governed by the experimental data. The principles of chiral perturbation theory no longer apply in this nonperturbative calculation. However, for fit parameters that describe the data well, the model independence of the Lüscher formalism remains intact. The Hamiltonian is only a mediary.

B. Quark mass variation

The consideration of variation of the quark masses away from the physical point provides further constraints on the Hamiltonian. In particular, lattice QCD results away from the physical point provide new constraints on the form of the Hamiltonian. In the two-channel case, the Hamiltonian becomes tightly constrained when considering experimental scattering data and lattice QCD results together [71].

With the Hamiltonian determined by one set of lattice results, one can then make predictions of the finite-volume spectrum considered by other lattice groups at different volumes and different quark masses. This is a central aim of the current investigation, where we confront very recent lattice QCD predictions for the odd-parity Λ spectrum at an unphysical pion mass of 204 MeV [62,63]. The Hamiltonian will be constrained by lattice QCD results from Refs. [31,38] for the lowest-lying odd-parity excitation, such that the confrontation with contemporary lattice QCD results is predictive.
For the cases previously considered in the baryon spectrum, the predictions of HEFT are in agreement with lattice QCD spectrum predictions. For example, in the ∆ channel, HEFT successfully predicts the finite-volume spectrum of the CLS consortium [71,72]. In the N1/2⁺ channel, HEFT reproduces the lattice QCD results from Lang et al. [59,73]. In the N1/2⁻ channel, HEFT successfully predicts spectra from the CLS consortium [41,74], the HSC [41,75,76] and Lang & Verducci [41,77]. Thus one concludes that the systematic errors of the HEFT approach to quark-mass variation are small on the scale of contemporary lattice QCD uncertainties. As the Hamiltonian is constrained by model-independent scattering data and lattice QCD results, we expect this success to be realised in the current investigation.

Variation in the quark mass is conducted in the same spirit as for χPT. The couplings are held constant and the hadron masses participating in the theory take the values determined in lattice QCD. The single-particle bare basis state acquires a quark-mass dependence, and this is done in the usual fashion by drawing on terms analytic in the quark mass. In most cases, lattice QCD results are only able to constrain a term linear in m_π², as is the case here.

The model independence associated with the movement of quark masses away from the physical point is largely governed by the distance one chooses to move from the physical quark-mass point. The HEFT approach is systematically improvable, reliant on high-quality lattice QCD results to constrain the higher-order terms that one can introduce. For example, one could include an additional analytic m_π⁴ term or higher-order interaction terms from the chiral Lagrangian. However, this increased level of precision is not yet demanded by current experimental measurements nor contemporary lattice QCD results.

C. Model dependence

Now that the Hamiltonian has become a tightly constrained model, the eigenvectors describing the manner in which the noninteracting basis states come together to compose the eigenstates of the spectrum are model dependent. At the same time, there is little freedom in the model parameters of the Hamiltonian, such that the predictions of the Hamiltonian are well defined.

The information contained in the Hamiltonian eigenvectors describing the basis-state composition of finite-volume energy eigenstates is analogous to the information contained within the eigenvectors of lattice QCD correlation matrices describing the linear combination of interpolating fields isolating energy eigenstates on the lattice. These too are model dependent, governed by the nature of the interpolating fields used to construct the correlation matrix.

What is remarkable is that with a suitable renormalisation scheme on the lattice (e.g. interpolators normalised to set diagonal correlators equal to 1 at one time slice after the source), the composition of the states drawn from the lattice correlation matrix is very similar to the description provided by HEFT [41,59]. While both eigenvector sets are model dependent, their similarity does indeed provide some relevant insight into hadron structure. And because regularisation in the Hamiltonian is tightly constrained, one can begin to separate out the contributions of bare versus two-particle channels.
D. Error analysis

It may be of interest to compare the systematic uncertainties of the HEFT formalism with the statistical uncertainties of contemporary lattice QCD determinations of the finite-volume hadron spectrum. To do so requires an exploration of alternative Hamiltonians that continue to describe both experimental data and lattice QCD results.

Variation of the regularisation parameters provides an opportunity to move in the Hamiltonian parameter space. However, the constraints of experiment and lattice QCD are quite effective in constraining the Hamiltonian, allowing only a small range of variation in the regularisation parameters. Moving the parameters outside of the range allowed by the data spoils the fit to the data and thus the associated finite-volume energy spectrum.

Recalling that the embedded Lüscher formalism governs the relation between the scattering data and the finite-volume spectrum, and noting that the Hamiltonian plays only a mediary role, one concludes that the systematic error is governed by the quality of the experimental data and its ability to uniquely constrain the multichannel Hamiltonian. This issue is quantified in the following section.

E. Summary

In summary, there is a direct model-independent link between the scattering observables of experiment and the finite-volume spectrum calculated in HEFT at physical quark masses. This model independence is founded on the Lüscher formalism embedded within HEFT. Similarly, variation of the quark masses away from the physical quark mass has systematic uncertainties that are small relative to contemporary lattice QCD spectral uncertainties. Finally, the Hamiltonian eigenvectors describing the basis-state composition of finite-volume energy eigenstates are model dependent. They are analogous to the interpolator-dependent eigenvectors of lattice QCD correlation matrices describing the linear combination of interpolating fields isolating energy eigenstates on the lattice. The similarity displayed by these two different sets of eigenvectors suggests that they do indeed provide insight into hadron structure.

IV. NUMERICAL RESULTS

In this section, we first study the K⁻p cross section, adjusting the parameters of the interaction Hamiltonian to reproduce the experimental data. Then we use these fitted parameters to discuss the finite-volume results.

A. Cross section

As the internal structure of the Λ(1670) is still not very clear, we examine two possible scenarios. In the first, we postulate that the Λ(1670) is a resonance which is dynamically generated through rescattering in the isospin-0 πΣ, K̄N, ηΛ and KΞ coupled channels. In the second, the Λ(1670) is treated as a quark-model-like baryon, which mixes with the meson-baryon channels. We fit the experimental data for the K⁻p → K⁻p, K̄⁰n, π⁰Λ⁰, π⁻Σ⁺, π⁰Σ⁰, π⁺Σ⁻ and K⁻p → ηΛ cross sections over the laboratory momentum range 0-800 MeV/c, which covers both resonances. We will explore which scenario gives a better description of the experimental cross-section data. We present a comparison of the experimental cross sections with our fitted results in Fig. 1. The fitted parameters are given in Table I. The peak of the Λ(1670) can be clearly seen in the K⁻p → ηΛ channel shown in the last subfigure of Fig. 1. It is clear that we can describe this resonance well, both with and without the bare quark-model-like baryon. From Fig.
1, we see that the calculated cross section and the experimental cross-section data show clear discrepancies at laboratory momenta p_lab = 350-450 MeV/c in the K⁻p → K̄⁰n, π⁻Σ⁺, and π⁰Σ⁰ processes. Here we emphasize that we only introduce the S-wave interactions relevant to spin-1/2 odd-parity Λ baryons. It is well known (e.g., Refs. [91,92]) that such discrepancies can be well understood by introducing the effects from P and D waves, while the S-wave interactions play essentially no role in forming those peaks. Specifically, Refs. [92-94] presented detailed analyses showing that the peaks at p_lab = 350-450 MeV/c in K⁻p → π⁰Σ⁰ and π±Σ∓ arise mainly from contributions involving the D-wave Λ(1520) state. The contribution from the Λ(1520) is also responsible for the peak in the range p_lab = 350-450 MeV/c in the K⁻p → K̄⁰n process [93]. Because they have different quantum numbers, these contributions cannot influence the Λ(1670), and we do not consider them further.

In the channels K⁻p → K⁻p, K⁻p → K̄⁰n, K⁻p → π⁰Λ⁰, and K⁻p → π⁰Σ⁰, the cross sections for p_lab > 500 MeV/c are much smaller than those at low momenta. The Λ(1670) resonance is not obvious in these channels either. Since we do not consider further channels, such as K̄*N and πΣ*, which are close to this energy region, our results deviate from the experimental data for p_lab > 500 MeV/c in some channels.

Except for these minor discrepancies, our calculations in both scenarios give a very good description of the experimental data. The recently measured threshold cross sections in Ref. [60] for K⁻p → π⁰Λ⁰ and K⁻p → π⁰Σ⁰ provide more accurate constraints and can be described well, as can be seen in the third row of Fig. 1. The line shapes of the fits with and without the inclusion of a bare-baryon contribution are very similar.

Using these fits, we can obtain the Λ(1670) pole in both scenarios. As shown in Table I, in the first scenario the pole position is located at 1676 − 17i MeV. This is not far from that in the second scenario, namely 1674 − 11i MeV. Our results are consistent with those of other groups [5,22-24,26,95-97]. The well-known two-pole structure of the Λ(1405) is also reproduced.

The close agreement between the two scenarios for the fitted cross sections, as well as the pole positions, indicates that the present experimental data are not able to distinguish between these two very different physical pictures for the structure of the Λ(1670). Therefore, we bring these results to the finite volume of lattice QCD and confront their predictions with lattice QCD simulation results.

B. Finite-volume spectrum and structure

As discussed in the previous subsection, the scenarios with or without a bare basis state give very similar fits to contemporary experimental cross-section data. That is, the present experimental data are not able to distinguish the internal structure.

TABLE I: The fit parameters obtained from K⁻p cross sections within the following two scenarios. One describes the Λ(1670) as a bare quark-model-like single-particle state mixed with meson-baryon interactions from the πΣ, K̄N, ηΛ, and KΞ channels. The other describes the Λ(1670) as a pure dynamically generated resonance from isoscalar coupled channels. Error estimates for the bare-baryon case are obtained through the consideration of the allowed variation in the regularisation parameter, Λ, as described in Sec. IV C. (Columns: Coupling; Without bare baryon; With bare baryon.)
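As a rough illustration of the pole-search procedure just described, the sketch below scans the lower-half complex-energy plane of a one-channel toy amplitude, using a rotated integration contour to reach the unphysical sheet. The separable potential, masses and coupling are invented for the example and bear no relation to the fitted parameters of Table I.

```python
# Toy pole search: one-channel T matrix with a separable potential,
# analytically continued via a rotated momentum contour so that the
# second Riemann sheet is exposed. All values are illustrative. Units: GeV.
import numpy as np

mM, mB = 0.14, 1.2        # toy meson and baryon masses
g0, Lam = 2.5, 1.0        # toy coupling and dipole regulator

def u(k):
    return (1.0 + k**2 / Lam**2) ** -2

def inv_t(E, theta=0.35, npts=200, kmax=8.0):
    """1/tau(E) for the separable model t(k,k';E) = u(k) tau(E) u(k'),
    with tau = g0^2 / (1 - g0^2 G(E)); a zero of 1/tau signals a pole."""
    x, w = np.polynomial.legendre.leggauss(npts)
    k = 0.5 * (x + 1.0) * kmax * np.exp(-1j * theta)   # rotated contour
    w = 0.5 * w * kmax * np.exp(-1j * theta)
    omega = np.sqrt(mM**2 + k**2) + np.sqrt(mB**2 + k**2)
    G = np.sum(w * k**2 * u(k) ** 2 / (E - omega))     # loop integral G(E)
    return 1.0 / g0**2 - G

# Scan the lower-half complex-energy plane for the minimum of |1/tau|.
re = np.linspace(1.35, 1.90, 80)
im = np.linspace(-0.15, -0.002, 60)
vals = np.array([[abs(inv_t(er + 1j * ei)) for ei in im] for er in re])
i, j = np.unravel_index(np.argmin(vals), vals.shape)
print(f"|1/tau| minimized at E = {re[i]:.3f} {im[j]:+.3f} i GeV")
```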
We shall see that the lattice QCD simulation results provide more information about this question. By studying the finite-volume Hamiltonian with the fitted parameters given in Table I, we can obtain the corresponding lattice energy eigenvalues and eigenvectors. The lattice QCD results are provided at different pion masses, and thus we need the hadron mass dependence on the pion mass as input. For the hadron masses as functions of m_π, we use a smooth interpolation of the corresponding lattice QCD results. The mass of the η meson is obtained as described in Ref. [98].

We plot the pion-mass dependence of the eigenstates of the finite-volume Hamiltonian in Fig. 2(a) for the case where the Λ(1670) is a state without a bare-baryon component. The lattice results in the ∼3 fm box, shown as red dots with error bars, are taken from the CSSM group [31,38] in 2+1 flavor QCD [99]. From Fig. 2(a), we find that the results are consistent with the lattice QCD data at small pion masses, but display significant differences for the two heaviest quark masses considered.

We show the corresponding eigenvectors in Fig. 3(a) for this first scenario. Near the physical pion mass, the first and second eigenstates are mainly πΣ and K̄N states, respectively. The second and third eigenstates are predominantly mixtures of these two channels, with K̄N continuing to dominate the third state. The fifth and sixth states are dominated by the KΞ and ηΛ mixture. The fifth and sixth eigenenergies in the ∼3 fm box are close to the position of the Λ(1670) at the physical pion mass, as we see in Fig. 2(a). With the fourth, fifth and sixth states residing in the region of the Λ(1670) resonance, all four of the two-particle channels considered can play an important role in governing the structure of this resonance.

However, at large pion masses, these results without a bare baryon are inconsistent with the lattice QCD data. This was also reported in our earlier work, which focused on the Λ(1405) [37]. From Fig. 2(a), the lattice simulation at the largest pion mass is considerably lower than the first Hamiltonian eigenstate. This greatly reduces the probability that the odd-parity Λ spectrum can be described by a model without the bare baryon.

To study the case with a bare quark-model-like baryon, we need to know the variation of the bare mass, m_B^0, as the pion mass increases. Within the quark model its mass is expected to increase linearly with the light quark mass as m_π² increases, and hence we take

m_B^0(m_π²) = m_B^0(m_π,phys²) + α_B^0 (m_π² − m_π,phys²) .

For the N*(1535), α_N^0 = 0.944 GeV⁻¹ was obtained in Ref. [41]. For the Λ, where the strange-quark mass is held fixed, it is natural to take 2/3 of this, such that α_B^0 = 0.629 GeV⁻¹.

In Fig. 2(b) we present the Λ spectrum with a bare-baryon basis state. Our results clearly reproduce the lattice QCD simulations well at all pion masses. The content of the corresponding eigenstates is shown in Fig. 3(b). Some of this information has been brought to Fig.
2(b), where colour and texture have been added to the solid lines indicating the eigenstate energies. This additional information illustrates the energy eigenstates where the bare-baryon state makes a substantial contribution to the composition of the state in HEFT. The largest bare basis-state contribution is illustrated in solid red, the second largest in dot-dash blue, and the third largest in dotted green.

In the first eigenstate, the main component is πΣ at small pion masses, while the contributions of the πΣ and K̄N channels become comparable and then the bare baryon dominates as the pion mass becomes larger. The second eigenstate is mainly composed of K̄N, while the third and fourth are dominated by the K̄N and πΣ channels at small pion masses, with K̄N continuing to dominate for both states. At the physical pion mass, the fifth state is dominated by ηΛ with a significant bare-baryon component. The sixth state is dominated by the quark-model-like basis state. Remarkably, all four of the two-particle channels provide the balance of basis-state contributions at the physical point.

With the bare basis state contributing in the Λ(1670) region, it is now clear the CSSM collaboration was able to excite the K̄N state with a local three-quark operator due to its localised structure. While the strange magnetic form factor shows the contribution of a vacuum quark-antiquark pair to create a 5-quark K̄N state [38], the electric form factors describe a localised state. Figure 3 of Ref. [100] shows the strange-quark distribution is largely unchanged between the ground-state positive-parity Λ and its first excitation in the Λ(1405) region. Similarly, the light-quark distribution grows only slightly from the ground state to the K̄N state in the Λ(1405) resonance regime.

In summary, the lattice QCD results favor the scenario in which the Λ(1670) contains a bare-baryon component. As the fifth and sixth states in this description contain very significant quark-model-like basis-state contributions, and because the fourth, fifth and sixth states sit in the Λ(1670) resonance regime, as illustrated in Fig. 2(b), one can conclude that the Λ(1670) has a quark-model-like core dressed by all four of the meson-baryon channels considered.

C. Uncertainty analysis

To obtain an error estimate on the link between experiment and the finite-volume spectra driven by the embedded Lüscher relation, we draw on the regularisation parameter, Λ, to move in the Hamiltonian parameter space and explore alternative mediations between experiment and theory.
As noted in the previous section, the constraints of experiment and lattice QCD are effective in constraining the Hamiltonian parameters, allowing only a small range of variation in Λ. If one forces Λ outside of the range allowed by the experimental data, the fit to the data is spoiled and thus the associated finite-volume energy spectrum becomes incorrect. In a similar manner, the correct description of lattice QCD results places constraints on the variation of parameters.

We commence by changing Λ by 50 MeV from our initial value of 1.0 GeV and refitting the parameters to describe experiment. This small variation is repeated, monitoring the χ² per degree of freedom to ensure the experimental data continue to be described in an accurate manner. The finite-volume spectrum is then calculated. We compare the results with the CSSM lattice QCD results to ensure a valid description of the lattice QCD constraint.

The variation of the χ²/dof for the cross-section fits is subtle over the range 0.90 ≤ Λ ≤ 1.10 GeV, but jumps significantly for the values Λ = 0.85 and 1.15 GeV. On this basis alone, the fits for Λ < 0.85 GeV and Λ > 1.15 GeV are excluded. However, considering the lattice QCD constraint further excludes Λ > 1.15 GeV. Over the range 0.90 ≤ Λ ≤ 1.10 GeV the three pole positions do not change by more than 10 MeV.

The best description of the lattice QCD results is provided by Λ = 1.00 GeV and we refer to this for our central values. To produce uncertainties in the finite-volume results, we refer to the predictions for Λ = 0.90 and 1.10 GeV and use these results to shade error bars in Figs. 4 and 5. Uncertainties in the fit parameters of Table I also follow from this range of allowed Λ variation.

D. Comparison with the latest lattice QCD simulation

One can clearly see from Fig. 2 that some eigenstates predicted by HEFT are absent in the lattice QCD simulations of the CSSM group. More than ten years have passed since that odd-parity Λ spectrum was obtained [31], and lattice QCD techniques have improved. With the parameters of the Hamiltonian constrained by experimental data and the results from one lattice QCD collaboration, we can now proceed to make predictions for the finite-volume spectra observed in other lattice QCD calculations, both at different volumes and at different quark masses. Very recently, the BaSc collaboration presented their coupled-channel simulations with both single-baryon and meson-baryon interpolating operators in a larger box at m_π ≈ 200 MeV [62,63]. We now compare our HEFT predictions with this latest lattice QCD simulation.

We use the corresponding hadron masses at m_π ∼ 200 MeV as reported in the lattice QCD simulations [62,63] and give our HEFT results, including the bare baryon, in Fig. 5. One can see that the HEFT results describe the BaSc simulations very well. The lowest data point has a very small error bar but sits exactly on our lowest-lying odd-parity state. All of the HEFT energy eigenstates are far from the noninteracting meson-baryon thresholds but still coincide with the lattice results. We stress that no parameters have been adjusted in making the finite-volume predictions for the BaSc lattice results.
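Returning to the uncertainty analysis of Sec. IV C, the regulator scan amounts to a simple loop: shift Λ in 50 MeV steps, refit the couplings, and keep only fits whose χ²/dof remains acceptable. The sketch below mimics this bookkeeping on an invented one-parameter fit; the data, model and acceptance cut are illustrative stand-ins for the actual cross-section fit.

```python
# Schematic regulator scan: refit a toy coupling g at each regulator value
# and keep only fits whose chi^2/dof stays near the best value. The "data"
# and model are invented stand-ins for the cross-section fit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
k = np.linspace(0.1, 0.8, 25)                      # toy momenta (GeV)
data = 2.0 * (1 + k**2) ** -2 + 0.02 * rng.normal(size=k.size)
err = np.full_like(k, 0.02)

def model(k, g, Lam):
    return g * (1 + k**2 / Lam**2) ** -2           # dipole-regulated amplitude

accepted = {}
for Lam in np.arange(0.85, 1.175, 0.05):           # 50 MeV steps
    popt, _ = curve_fit(lambda k, g: model(k, g, Lam), k, data, p0=[1.0])
    resid = (data - model(k, popt[0], Lam)) / err
    accepted[round(Lam, 2)] = np.sum(resid**2) / (k.size - 1)

best = min(accepted.values())
keep = [L for L, c in accepted.items() if c < best + 1.0]  # illustrative cut
print("acceptable regulator range:", min(keep), "-", max(keep), "GeV")
```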
In our approach, at m_π = 204 MeV, the first and second states observed in this ∼4 fm box are mainly πΣ and K̄N states, respectively. The third and fourth are πΣ-K̄N mixtures. The fifth eigenstate contains K̄N and πΣ with some bare baryon. The sixth eigenstate is dominated by ηΛ mixed with a small component of the bare baryon. Noting that the fifth and sixth energy eigenstates are in the Λ(1670) resonance regime, one can once again conclude that the Λ(1670) is composed of a single-particle quark-model-like core dressed by the isoscalar meson-baryon channels considered.

V. SUMMARY

In this work we have studied two different scenarios for the internal structure of the Λ(1670). One scenario assumes that the Λ(1670) is dynamically generated through rescattering between the K̄N, πΣ, ηΛ and KΞ channels with I = 0. The other assumes that the Λ(1670) is a bare quark-model-like basis state mixing with these I = 0 interacting channels. We fit the experimental cross-section data for the K⁻p → K⁻p, K⁻p → K̄⁰n, K⁻p → π⁰Λ⁰, K⁻p → π⁻Σ⁺, K⁻p → π⁰Σ⁰, K⁻p → π⁺Σ⁻, and K⁻p → ηΛ⁰ reactions, with the laboratory momentum of the antikaon in the range 0-800 MeV/c, including the recently measured threshold cross sections, which have small error bars [60]. Our fits are consistent with the cross-section data if we neglect the effect of the D-wave Λ(1520) resonance. In addition, we have checked the two-pole structure of the Λ(1405) and obtained the pole position of the Λ(1670). All of these results are consistent with those of other groups.

It is clear from the quality of the fits to the cross-section data under both scenarios that one cannot distinguish between them using scattering data alone. This serves as motivation to use HEFT to further explore the structure of the Λ(1670) in the finite volume of lattice QCD. The scenario without a bare baryon is inconsistent with the lattice QCD data at large pion masses. Without adjusting any other parameter, the scenario including a bare-baryon basis state yields an excellent description of the lattice QCD results over the full range of light quark mass. Our HEFT results also agree very well with the latest BaSc lattice QCD simulation results at m_π = 204 MeV. Not only are the predicted energy levels very close to those reported by BaSc, but all five of the lowest eigenstates predicted in HEFT were observed in the lattice calculations.

Based on the present HEFT analysis, the lattice QCD calculations provide invaluable information about the structure of the Λ(1670). It definitely contains a considerable single-particle quark-model-like basis-state component, which mixes with the meson-baryon channels. While our calculations could be extended by considering the Λ(1800) resonance as well as the K̄*N and πΣ* channels, the main conclusion of this work is not expected to be sensitive to extensions well beyond the Λ(1670) resonance regime.
FIG. 2: The pion-mass dependence of the eigenstates obtained using the finite-volume Hamiltonian. In the upper plot, the broken lines denote noninteracting meson-baryon energies, while the solid lines denote the eigenenergies obtained from the finite-volume Hamiltonian matrix. In the lower plot, energy eigenstates based on the inclusion of a bare quark-model-like basis state are illustrated. The thick (red), dot-dashed (blue), and dotted (green) lines label the states composed with a significant contribution from the bare quark-model-like basis state, with red illustrating the largest bare-state component. The negligible component of the bare basis state in the first state of the spectrum at light quark masses explains its absence in the lattice QCD spectrum excited with local three-quark operators. The lattice results are taken from the CSSM group [31,38] in 2+1 flavor QCD [99].

FIG. 3: The pion-mass dependence of the Hamiltonian matrix eigenvector components for the first six states, under the assumption of no bare baryon (left two columns) and including a bare-baryon basis (right two columns). The five circles in each diagram represent the five quark masses considered by the CSSM group [31,38] in 2+1 flavor QCD [99].

FIG. 4: Error estimation for the spectrum of odd-parity strange spin-1/2 baryons in HEFT for the CSSM lattice volume. The black solid lines represent the values obtained with the optimal Hamiltonian parameters, and the blue shaded regions illustrate the uncertainty in the HEFT results obtained through the allowed variation of Hamiltonian parameters as described in Sec. IV C.

FIG. 5: The energy eigenvalues calculated in HEFT in the scenario including a quark-model-like single-particle basis state (solid black lines) are compared with lattice QCD calculations from the BaSc collaboration [62,63] in the G_1u(0) irreducible representation (data points) on an L = 4.05 fm lattice. Dashed lines indicate the meson-baryon thresholds. The blue shaded regions illustrate the uncertainty in the HEFT predictions obtained through the allowed variation of Hamiltonian parameters as described in Sec. IV C.
A minimal resting time of 25 min is needed before measuring stabilized blood pressure in subjects addressed for vascular investigations

Blood pressure (BP) measurement is a central element in clinical practice. According to international recommendations, 3 to 5 minutes of rest are needed before blood pressure measurement. Surprisingly, no study has modelled the time course of the BP decrease and the minimum resting time before BP measurement. A cross-sectional, bicentric, observational study was performed including outpatients referred for vascular examination. Using two automatic BP monitors we recorded the blood pressure every minute for 11 consecutive minutes. The data were analyzed by non-linear mixed-effect regression. Systolic (SBP) and diastolic BPs were studied, and we tested the effect of covariates on their evolution through log-likelihood ratio tests. We included 199 patients (66 ± 13 years old). SBP was found to decrease exponentially. Simulations based on the final model show that only half the population reaches a stabilized SBP (defined as SBP within 5 mmHg of its resting value) after 5 min of resting time, while it takes 25 min to ensure that 90% of the population has a stabilized SBP. In conclusion, our results and simulations suggest that 5 minutes are not enough to achieve a stabilized SBP in most patients and at least 25 minutes are required. This questions whether the diagnosis of hypertension can be reliably made during routine visits in general practitioners' offices.

Hypertension is a highly prevalent disease worldwide, currently affecting more than 1 billion people 1 . Various studies have shown that elevated blood pressure increases cardiovascular diseases such as stroke, myocardial infarction, and peripheral artery disease 2-5 . In 2010, high blood pressure was recognized as the major risk factor for global disease burden, ahead of tobacco smoking and alcohol use 6 . Yet lifestyle modifications (including exercise and healthy dietary habits) and treatments can reduce and control blood pressure, thus reducing morbidity and mortality in specific populations 4,7-11 . The American Heart Association (AHA), the European Society of Cardiology (ESC), the European Society of Hypertension (ESH) and others have proposed recommendations for blood pressure measurement 2,11,12 . These guidelines suggest a resting time varying from 3 min to at least 5 min before blood pressure measurement 2,5,12 . Surprisingly, no study has modelled the time course of the blood pressure decrease and the minimum resting time before blood pressure measurement in a population that should be screened for hypertension. Our hypothesis is that the resting time before blood pressure measurement should be longer than 5 minutes to reach stabilization of the blood pressure 13,14 . Confirming this hypothesis could have important implications for hypertension screening. Indeed, following current guidelines could mean that a significant number of individuals are wrongly diagnosed as hypertensive. This potential overdiagnosis could lead to unnecessary spending and inappropriate medication, potentially causing adverse events such as falls. In this observational study, we evaluated the time course of the blood pressure decrease during the resting time before blood pressure measurement in patients referred to our vascular offices, and the parameters explaining the blood pressure decrease during the resting time.
Results

Between September 2014 and April 2015, 199 patients were recruited. Table 1 shows the baseline characteristics of the study population, which included 101 subjects in the reclining position (51%) and 98 subjects in the sitting position (49%). BP was followed on the left arm in n = 106 subjects (53%) and on the right arm in n = 93 subjects (47%). Patients who came for a carotid ultrasound exam, suspected peripheral artery disease, suspected venous disease, or a cardiovascular prevention visit numbered 30 (20%), 32 (16%), 31 (15%), and 95 (48%), respectively. Two data points were missing.

Figure 1 shows a decreasing trend with time for SBP, which was confirmed by a repeated-measures ANOVA (F(1,198) = 394.4, p < 0.0001). There was also a small but steady decrease in DBP, while heart rate remained unchanged throughout the observation period. The trend for SBP was found in the absence or presence of medical treatment (Fig. 2), which shows the decrease in SBP stratified according to treatment, separating those with no treatment, those receiving drugs other than antihypertensive drugs, and those treated for hypertension (defined as receiving at least one diuretic, ACE inhibitor or beta-blocker). All groups exhibited the same decrease profile, but subjects who were completely free of medication tended to have lower SBP both at baseline and after resting.

An exponential decrease was found to fit the data well (Appendix, Figure S3). Parameter estimates for the final model are reported in Table 2, with a systolic Prest estimated at 133 mmHg in this population. The half-life was found to be 1.7 min, with large variations in the population. We did not find an impact of the position (lying or sitting) or of the heart rate during blood pressure measurement on the BP decrease. Baseline SBP was estimated to be 25% higher on average than systolic Prest. Age was found to be correlated both with systolic Prest and with the change from baseline. There was a large interindividual variability in the speed of decrease, as reflected by the large standard deviation on k (Table 2).

Figure 1: Evolution of measured systolic (red) and diastolic (blue) arterial blood pressure with time, expressed as mean ± 1.96 standard deviations. Also shown, in black, is the evolution of heart rate over the measurement period (slightly displaced so as not to overlap the diastolic pressure); the scale for the cardiac frequency is given on the right-hand axis. Figure 2: Decrease in SBP stratified by treatment; left: patients with no treatment; middle: patients receiving a treatment other than antihypertensive drugs; right: patients receiving an antihypertensive treatment (defined as receiving at least one drug from the following therapeutic classes: diuretics, beta-blockers or angiotensin-converting-enzyme inhibitors).

An additional analysis was undertaken to reduce the variability in k by defining a covariate reflecting the shape of the decrease. Individual non-linear regression showed that a non-linear model fit the data best in 128 subjects (64%), while 44 (22%) had a linear decrease and 24 (14%) did not exhibit significant variation of BP. This covariate was included in the model and was found to impact both the systolic Prest and the time to stabilisation, yielding 3 groups of subjects with different values of k: fast, regular and slow stabilisers (further details in the Appendix). Taking into account the individual decrease shape allowed the variability in k to be reduced by 40%.
Time to reach a stable blood pressure. Simulations using the final model showed that only 50% of the population was stabilised to within 5 mmHg of systolic Prest after a 5 min resting time, while up to 25 min may be needed to ensure stable BP in 90% of the population (full results in the Appendix). This resting time falls to 15.0 min if a variability of 10 mmHg is accepted.

Proportion of the population considered as hypertensive. Figure 4 shows the proportion of subjects diagnosed as hypertensive depending on the number and timing of the SBP measurements, with its associated prediction interval. It illustrates that measurements of SBP at 3 or 5 min, single or averaged, tend to overpredict the proportion of hypertensive subjects, which stabilises only after 25 min. For example, the predicted proportion of subjects diagnosed as hypertensive drops from 50% [44-57] when averaging measurements at times 3 and 5 min, to 44% [38-50] two minutes later, and down to 33% [27-39] when averaging times 25 and 27 min. The width of the prediction intervals reflects the small sample size (n = 199), as we simulated replicates of the original dataset to reflect the distribution of covariates in a typical population screened for hypertension.

Discussion

Our study showed that among outpatients referred for a vascular examination, the minimal resting time before blood pressure measurement needed to obtain a SBP stable to within 5 mmHg in 90% of the population is 25 min. Refining the analysis by considering individual variations in the evolution of SBP, we found that there were three subgroups of patients. The regular and fast groups had the same estimated systolic Prest, suggesting the "fast" group is in fact stable from the start, while slow stabilisers take much longer to reach a stable SBP than the majority of subjects.

The resting time found in this study is far longer than the resting time proposed by all previous scientific statements 5,12,15 . The AHA scientific statement stated that "ideally, 5 minutes should elapse before the first reading is taken" 12 . The ESC has shortened this resting time and suggests 3 to 5 minutes 15 , whereas "The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure" increased this resting time, writing that "at least five minutes" were required 5 . Therefore, according to our results, the current leading guidelines on office measurement of blood pressure might contribute to an increase in the apparent prevalence of hypertension in our societies, and consequently to an increase in the number of patients receiving antihypertensive drugs.

The other interesting point is that we found 3 different subgroups of patients with three different patterns of pressure evolution during the resting time. The "fast" group hardly needed to rest for pressure to stabilize. To obtain stable pressure in 90% of the patients in the "regular decrease" group, 9 min were needed, whereas up to ~53 min were needed in the "slow decrease" group, although this last figure (Table S3, Appendix) is extrapolated well beyond the duration of measurement in the data and would need to be evaluated with additional data. The lower systolic Prest we found for the slow group could also be due to the same reason. To our knowledge, no study has shown that three different groups of patients exist for blood pressure stabilization. Unfortunately, it was not possible in this study to identify specific characteristics of each group.
It would be interesting to investigate this issue in a larger study, including measurements of SBP at later times as well as repeated visits, to ascertain whether this finding is reproducible.

Table 2. Parameter estimates for the final model. Prest is the asymptotic resting blood pressure, representing the BP reached after a long rest; dP is the relative difference from the baseline pressure (before the subject sits or lies down); and k is a rate constant measuring the speed at which BP stabilises. RSE stands for relative estimation error; IIV denotes the interindividual variability, quantified by the standard deviation (SD) of the random effect associated with the population parameter. The β terms denote the influence of covariates on parameters, and are indexed according to the parameter and covariate names. For instance, β_Prest,Age denotes the effect of Age on Prest. Covariate effects were entered multiplicatively (see equations in the Appendix). Continuous covariates were centered on their median value in the population (median age: 67 years; median heart rate: 69 bpm).

The other point raised by the study concerns the organization of the public health system, to determine where and how we can diagnose hypertension. It is widely accepted that office blood pressure measurement remains the gold standard for screening 12,15 . Screening and diagnosis of hypertension are mainly performed by general practitioners (GPs) 2 . However, applying our results in clinical practice at the GP's office is not realistic due to time constraints. Indeed, the mean duration of a GP consultation was evaluated at 10.7 ± 6.7 minutes in six European countries 16 and sixteen minutes in France 17 . Thus, in our opinion, hypertension diagnosis should be performed in a dedicated place during a specific consultation. At the very least, a suspicion of hypertension in a patient should warrant a second measurement after a longer rest period than the currently recommended 5 min. Another option could be to use out-of-office blood pressure measurement, which is currently reserved for specific clinical conditions such as suspected white-coat hypertension or masked hypertension 2,18 . Even in these cases, and according to our results, it seems that patients should be counseled to rest at least 25 minutes and to always wait the same time before blood pressure measurements. Our findings corroborate a recent result showing that BP measured after a 30 min rest is significantly lower than BP measured according to the current recommendations 19 .

Finally, when we extrapolate our results to France, where ~15 million patients are treated for hypertension, we can estimate that 696,000 patients could be overdiagnosed with hypertension. This could represent an annual public health cost of 292 million euros for treatment alone. Furthermore, giving antihypertensive drugs to patients who do not need them might also provoke adverse drug events 4 .

Limits. Our study has several limitations. First, our population is a population of outpatients referred for a vascular examination and not patients from GPs' offices. Because they came for a regular visit, the observation period was limited to the duration of a normal consultation, so we could not obtain measurements after 11 minutes, and some of our findings need to be verified using data collected in a longer study.
Interestingly, our results are in accordance with a recent study by Bos and Buis 19 , who showed a significantly lower BP measured after a 30-minute rest compared with BP measured during a routine consultation. On the other hand, the general characteristics of our population are similar to those included in large randomized controlled trials on hypertension 4,9,10 . Furthermore, our outpatients came for a suspected vascular problem and should have benefited from a blood pressure measurement in order to control this cardiovascular risk factor 5,12,15 . Second, in this study, blood pressure measurements were not compared to home or ambulatory blood pressure measurements because this was not the aim of the study, and home or ambulatory blood pressure measurements are not performed in the majority of patients screened for hypertension. Third, the association of blood pressure values at specific times from baseline with cardiovascular events or target-organ damage was not evaluated. As our data showed different patterns of blood pressure decrease over time, it would be interesting to perform an additional study to determine which value is best associated with cardiovascular complications. Fourth, the study included all patients referred for a vascular appointment, and was thus not designed to test for changes in BP evolution or resting BP in populations with specific features such as comorbidities or treatment. However, we did find some differences in the analysis, with older patients in particular having both higher BP and larger differences between baseline and resting BP. The model and estimated parameters from the present study could be used to design future studies investigating specific target populations 20 . Finally, repeated blood pressure measurements, which induce repeated short arterial occlusions, might play a role in the blood pressure decrease, since transient ischemia during cuff inflation and reactive hyperemia after cuff deflation can induce dilation of the upper-arm artery, likely mediated by the baroreceptor reflex and endothelium-dependent vasodilation 21,22 . However, these physiological responses are seen after occlusions that last 5 minutes 21,22 . When measuring blood pressure with the automatic device, the occlusion lasts only 10 s, meaning that for 11 measurements the total duration of occlusion is less than 2 minutes. Repeated occlusions could also mimic ischemic preconditioning 23,24 . However, it has been shown that: i) there is no evidence of ischemic preconditioning during repeated vessel occlusions lasting 2 minutes 25 , and ii) a minimal interval of 2 minutes is needed to observe a significant effect 26 . Therefore, such a phenomenon is unlikely to occur here. Furthermore, Nikolic et al. measured the ankle SBP in 250 treated hypertensive patients at 5 and 10 min (taking the average of two measurements 1 min apart) and found a decrease of 4 mmHg on average between the two measurements (SD 14 mmHg), which is the same as the decrease we observed in our patients, and suggests that the additional measurements taken in our study do not impact the evolution of SBP too strongly 27 . As a final note, all international recommendations suggest performing several measurements of blood pressure and taking their mean 5,12,15 .

Conclusion

Our study suggests that the current recommended practice of measuring SBP after 5 minutes of rest may not allow adequate stabilization of SBP, which we find could take at least 25 minutes.
Public health policies should take this result into account to organize the best way to diagnose hypertension in our societies and avoid overdiagnosis.

Study design. A cross-sectional, observational, bi-center (University Hospital of Rennes and a private practice in Angers) study was performed in France. The ethics review board of our institution ("Comité d'éthique du CHU de Rennes") approved the "Opti-PA study" (n°14.44; August 2014). All included subjects signed an informed consent. This study was conducted according to the principles expressed in the Declaration of Helsinki.

Study population. Outpatients (18 years of age or older) arriving on foot for a vascular examination, either for a carotid ultrasound exam, suspected peripheral artery disease, suspected venous disease, or a cardiovascular prevention visit, were eligible. Only patients in whom arm pressure could be measured in both arms were included. Patients with a fistula or lymphedema after breast cancer were not included. Several variables were collected, such as age, gender, body mass index, comorbidities and medical treatments, during the medical interview or retrieved from the medical record.

Study measurements. Demographic and clinical data were recorded at inclusion. Each patient was invited to participate in this study when coming into the office room; the patient was then invited to lie down on a bed or to sit on a chair in a room with controlled temperature for the blood pressure measurements. Two cuffs adapted to the arm circumference were placed at heart level on each arm of the patient. Heart rate as well as systolic and diastolic blood pressures were automatically and simultaneously measured at each arm every minute for 11 minutes, using two Dinamap CARESCAPE V100 monitors (GE Healthcare®).

Outcome measure. The primary study outcome was the blood pressure measured every minute for 11 minutes.

Statistical analyses. The statistical analysis was performed first on systolic blood pressure (SBP) and second on diastolic blood pressure (DBP). For each subject we followed the SBP or DBP of the arm with the highest value at the first measurement (1 min). The main objective was to model the decrease of blood pressure. The secondary objective was to identify the explanatory variables of this decrease, including the general characteristics of patients described in Table 1, measurement position (lying or sitting), recent coffee intake (within the previous 2 h), and mean heart rate.

Data analysis. Baseline pressure was the first pressure measured after 1 minute of rest. A repeated-measures ANOVA was first performed to determine whether a significant trend in time could be detected. The data were then analyzed through non-linear mixed-effect regression using the Monolix software (see the Appendix for details on the statistical models and methods). We assumed an exponential decrease of blood pressure P(t) with time t from an initial value to an asymptotic resting pressure (Prest), according to the following equation:

P(t) = Prest × (1 + dP × e^(−k·t)) ,

where dP is the percentage difference in blood pressure between baseline supine pressure and systolic Prest, and k represents the rate at which the blood pressure stabilises. We assumed a log-normal distribution for the three parameters in the model and tested different residual error models to represent the measurement error. We studied the relationships between parameters and covariates.
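For illustration, the sketch below fits the exponential stabilisation model P(t) = Prest·(1 + dP·e^(−kt)) to a single subject's readings by least squares. The readings are simulated, not taken from the study data, and a simple curve fit stands in for the full non-linear mixed-effect analysis performed in Monolix.

```python
# Fit the exponential stabilisation model to one subject's SBP readings.
# The readings are simulated for illustration; the study used non-linear
# mixed-effect regression (Monolix) across all subjects instead.
import numpy as np
from scipy.optimize import curve_fit

def sbp_model(t, prest, dp, k):
    """P(t) = Prest * (1 + dP * exp(-k t)); t in minutes, P in mmHg."""
    return prest * (1.0 + dp * np.exp(-k * t))

t = np.arange(1, 12)                          # one reading per minute, 11 min
rng = np.random.default_rng(1)
true = sbp_model(t, 133.0, 0.25, np.log(2) / 1.7)   # half-life ~1.7 min
sbp = true + rng.normal(0.0, 3.0, t.size)     # measurement noise ~3 mmHg

popt, _ = curve_fit(sbp_model, t, sbp, p0=[130.0, 0.2, 0.4])
prest, dp, k = popt
print(f"Prest = {prest:.1f} mmHg, dP = {dp:.2f}, half-life = {np.log(2)/k:.1f} min")
```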
Technical details of the model building and its evaluation are provided in an Appendix, along with a similar analysis performed for DBP.

Prediction of the time to stabilisation. The model and parameter estimates were used to predict the time at which most subjects are expected to reach a stable SBP_5mmHg, defined as an SBP within 5 mmHg of the systolic Prest. We also considered a stable SBP_10mmHg (SBP within 10 mmHg). The variability of the automatic blood pressure monitor is expected to be 5 mmHg according to the manufacturer.

Prediction of the number of patients considered hypertensive at various resting times. A patient is considered hypertensive when the measured SBP is equal to or above 140 mmHg. To investigate the impact of resting time on the proportion of subjects predicted to be hypertensive, we simulated 1000 replicates of the original dataset by sampling parameters from the final model estimates, and computed the corresponding proportion of subjects for various measurement schemes (a single measurement, or the mean of two measurements taken 1 or 2 minutes apart) [12,15]. We also report the associated prediction intervals.

Availability of data and materials. The datasets generated and analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
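As an illustration of the two prediction procedures described above, here is a small Monte Carlo sketch. All population parameters are invented placeholders rather than the estimates reported in the paper; the point is only to show how log-normally distributed subject parameters translate into a time to reach SBP_5mmHg and into a resting-time-dependent proportion of subjects labelled hypertensive.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated subjects

# Assumed log-normal population distributions (placeholder medians/spreads).
p_rest = rng.lognormal(np.log(130.0), 0.12, n)  # resting SBP (mmHg)
dp     = rng.lognormal(np.log(0.10),  0.50, n)  # fractional baseline excess
k      = rng.lognormal(np.log(0.15),  0.40, n)  # stabilisation rate (1/min)

def sbp(t):
    """Modelled SBP of every simulated subject after t minutes of rest."""
    return p_rest * (1.0 + dp * np.exp(-k * t))

# Time until within 5 mmHg of Prest: p_rest*dp*exp(-k*t) = 5, i.e.
# t = ln(p_rest*dp/5)/k (0 if the subject already starts within 5 mmHg).
t_stable5 = np.log(np.maximum(p_rest * dp / 5.0, 1.0)) / k
print(f"90th percentile of time to SBP_5mmHg: {np.percentile(t_stable5, 90):.1f} min")

# Proportion classified as hypertensive (SBP >= 140 mmHg) at each resting time.
for t in (1, 5, 10, 25):
    print(f"t = {t:2d} min: {np.mean(sbp(t) >= 140.0):.1%} with SBP >= 140 mmHg")
```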
The Effect of Transformational Leadership on Company Innovation Culture: Perspectives from the Service Sector of an Emerging Economy

Focusing on the importance of innovation culture to the growth of firms, the study examined the effect of transformational leadership on the innovation culture of firms in the service industry in Ghana. The study further assessed the moderating role of organizational learning capability and market dynamism. A sample of 210 employees in the telecommunication, banking and insurance, tourism and hospitality sectors was surveyed. A quantitative research approach was employed to test the various relationships. The findings indicated that transformational leadership had a significant and positive relationship with innovation culture. The findings further revealed that the relationship between transformational leadership and innovation culture was positively enhanced by market dynamism. Organizational learning capability, which had two dimensions (interaction with the external environment and dialogue), had a partial moderating effect on innovation culture. The study provides executives with critical insights on the need to allow employees to make some decisions where necessary, to trust those decisions, and not always to control the work tightly from the top, as such micromanaging could have an adverse impact on the firm's innovation drive and culture. The study contributes to the extant literature by identifying the moderating role of market dynamism and learning capability in the effect of transformational leadership on innovation culture, specifically in the service industry of an emerging economy.

Introduction

In recent times, organizations have been forced to transform their old processes and products to meet the changing needs and preferences of customers. Firms are achieving competitive advantage through new products, services and even ideas. Regardless of the type of industry a firm operates in, innovation in products and services has become an important driver of business success (De Brentani, 2001). Firms which cannot keep up with globalisation and its new trends are not able to survive in a changing business environment (Betancourt & Gautschi, 2001). Innovation, whether in the form of products, processes, marketing or systems, has become an important instrument of global competitive advantage.

In Ghana, and indeed worldwide, the service industry has become very important because of its contribution to national economies. The service industry, one of the booming sectors of the Ghanaian economy, ranges from telecommunications to banking, insurance, hospitality, media and tourism, among others. The industry's rapid growth brings attendant challenges, including the war for talent acquisition and retention, effective employee engagement, the satisfaction of customer demands and the need for sustainable competitive advantage, all of which make it imperative for innovation to be part of the core DNA of the individual companies in the industry. Research has indicated that 70% of the GDP of developed economies comes from the service industry, making it a key arena for the development of creativity and innovation (Ostrom et al., 2010). The situation is no different in emerging economies like Ghana. The success of firms in the service industry has been attributed to their levels of innovation (Ostrom et al., 2010).
We could infer that innovation plays an important role in the competitiveness and performance of firms (Chapman, Soosay & Kandampully, 2003). Innovation researchers have identified that corporate leaders have the ability to influence employees' performance in delivering innovative and creative results (Shalley & Gilson, 2004). This has been one of the attributes of successful leaders (Zhang & Bartol, 2010; Wang, Rode, Shi, Luo & Chen, 2013). For example, Amabile, Schatzel, Moneta and Kramer (2004) asserted that the behaviour of corporate leaders is a determinant of employee creativity and innovation in a work environment. For an organisation to develop innovative capabilities, there is the need to consider the role of the leader (Jung, Chow & Wu, 2003). Notably, this assertion focuses on the transformational leadership style, because extensive research (e.g. Gumusluoglu & Ilsev, 2009; Wang & Rode, 2010; Wang et al., 2013) has found that transformational leadership is a strong determinant of innovation and creativity. In the study of Jung et al. (2003), transformational leadership was found to strongly encourage an innovation culture and to support employees in exhibiting their creative mindset and ideas for the growth of the organisation.

Again, the service industry in Ghana today deploys all manner of technology as a means of differentiation, but firms make less effort to understand that making innovation a differentiator goes beyond an event, a product or a technology; rather, it must be driven by top management as a culture that shakes and moves the very fabric of the organization to get all hands on deck. Past researchers have focused on individual organizational sectors within the service industry, but have not taken the service industry in Ghana as one holistic research ground for examining the relationship between leadership styles and the promotion of an innovation culture. This calls for research; more importantly, the study is premised on the fact that the factors affecting innovation have been the concern of numerous empirical studies (e.g. Anderson, Potočnik & Zhou, 2014; Molina-Castillo & Manuera-Aleman, 2009), and some studies have established relationships between transformational leadership and innovation culture (Sattayaraksa & Boon-itt, 2018; Xeniko, 2017). Research has shown that under certain market conditions, firms are able to innovate and also learn new ways of doing things. In support of this assertion, some researchers (Yayavaram & Chen, 2015; Hedge & Shapira, 2007; Tanriverdi, 2005) have established that for firms to be innovative, it is crucial that their resources are properly managed, and this depends on the ability of the firm to learn from the external environment. Learning capability has been shown to improve organizational creativity and innovation (Li, Wei, Zhao, Zhang & Liu, 2013; King, 2009; Tsoukas & Mylonopoulos, 2004), but despite the numerous empirical studies establishing a direct effect of transformational leadership on innovation culture, studies of the moderating effects of learning capability and market dynamism have been non-existent. Firms innovate in order to adjust to the needs and preferences of customers and to learn new knowledge for prudent management of their resources.
This is because an innovation culture develops in response to change in the internal and external environment (Gomes & Wojahn, 2017). The relationship between leadership and innovation could be more complex than previously imagined. Accordingly, we explored the following research questions:

RQ1: What is the effect of transformational leadership on company innovation culture?
RQ2: What is the moderating role of market dynamism and learning capability on the effect of transformational leadership on innovation culture in an organisation?

Leadership and Innovation Culture

Leadership sets the tone for innovation. Lack of leadership vision and drive will place the concept of innovation culture in the basket of mere rhetoric. Whether new interventions in the organization will increase efficiency and deliver the best offerings to the customer will depend largely on the measurable elements that are evaluated through the leadership. Leadership, as defined by Robbins, Judge and Breward (2003), is how a group of people who have been given authority are able to influence another group of people to deliver results. In support of this, Chemers (1997) states that the group at the higher echelon always has more power, which it is expected to exercise to harness the skills of others to accomplish common tasks. Leadership is tasked with providing appropriate direction, motivating people and soliciting their commitment to buy into the leader's vision (Gallagher, Goodyear, Brewer & Rueda, 2013). The ability to make followers believe that the leader possesses superior knowledge of the situation, greater wisdom to cope with the unknown, or greater moral force is also key to leadership.

On the other hand, the concept of innovation involves an effort to put forward purposeful creativity that can result in a positive change in an organization's economic, social and reputational potential in the eyes of its stakeholders, especially consumers. Graham (2008) indicates that innovation is about the implementation of new ideas which have the potential to increase the organization's profitability and/or market share. According to Krause (2014), innovation occurs when the whole process from identifying and seizing opportune situations to inventing and developing ideas becomes a reality. In the view of Yeoh and Mahmood (2013), an idea becomes an innovation when it transforms from being an idea to a solution that a customer can attest has added value to his/her perspective. This was earlier buttressed by Burkus (2011): the ability to come up with new ideas is just the basic ingredient of innovation, so until an idea is nurtured, developed and applied to add value, it does not yet qualify as an innovation. An innovation culture is the kind of work environment where leaders create opportunities for employees to grow and be nurtured to think outside the box, and to apply this thinking to situations with a view to achieving better results than the status quo would typically have produced. According to De Jong and Den Hartog (2007), it is the kind of environment where the way of doing things provides support for the efforts of those who think creatively and provide solutions that add economic and social value to knowledge, processes, products and service delivery. It is also a culture of collaboration among teams.
Some researchers (Crossan & Apaydin, 2010; Wang & Ahmed, 2004; Ahmed, 1998; Martins & Terblanche, 2003) have identified innovation culture as a multidimensional construct, but there is no agreement concerning its dimensions. The dimensions of innovation culture stated in the literature include the infrastructure to support innovation, the environment to implement innovation, the intention to be innovative, the operational level of the behaviours needed to influence the market, and value orientation (Dobni, 2008). The leadership style of top managers forms a critical component of the environment needed to implement innovation. A review of the literature (see Jung, Chow & Wu, 2003) suggests that transformational and participatory leadership styles create an environment that supports innovation. This is done through the promotion of active followership, where organizations become breeding grounds for fresh ideas from all units and departments.

Transformational Leadership

Bass (1999) defined transformational leadership as a process in which leaders and followers raise one another to higher levels of morality and motivation. According to Kang, Solomon and Choi (2015), transformational leadership is one of the most commonly and extensively researched areas of leadership. Interestingly enough, some researchers do not attribute any relationship at all between transformational leadership and innovation culture. What has become increasingly clear from these studies is that, as mentioned by Avolio and Bass (1988), leadership researchers mostly interrogate the context within which transformational leadership can be most effective. According to Akbar, Sadegh and Chehrazi (2015), there are four key tenets of transformational leadership, popularly referred to as the 4Is: individualized consideration, intellectual stimulation, inspirational motivation and idealized influence.

Individualized consideration is where the leader considers the uniqueness of each follower and gives them individual attention through separate communication, specific workload allocations and relevant customized support. A transformational leader shows respect to his/her followers, giving every indication to them and others that they are valued members of the team. According to Horwitz et al., a transformational leader applies the individualized consideration concept to support followers in developing their individual potential and to recognize them for current performance. The strengths and developmental gaps of individual followers are determined under this concept, and the transformational leader uses the result to assign roles and responsibilities to each follower with the aim of supporting their personal growth (Hoy & Miskel, 2008). Intellectual stimulation requires the transformational leader to draw out from followers the need to rethink stereotypes and discover new solutions to organizational challenges. Fernet, Trépanier, Austin, Gagné and Forest (2015) and Tonkenejad (2006) mention that the leader does this by asking followers specific questions which push them to rethink working processes and efficient ways to reengineer them.
The leader presents followers with existing challenges, but re-framed so that they can re-organize their thoughts to find different solutions to old problems. There is no public ridicule or criticism of followers if errors are detected. Followers are given the opportunity to showcase their creativity by finding innovative solutions to a problem, and thereby come to see themselves as part of the problem-solving process. The transformational leader employs various emotional symbols to encourage team members to go the extra mile, over and above their personal interests. Team and individual spirits are heightened by the leader's inspirational motivation. This creates the avenue for the team's optimism and enthusiasm to increase, pushing them to look forward to future situations in which to prove themselves again. Northouse (2015) states that inspirational motivation improves followers' understanding of the organizational mission and vision, which is fundamental to an organizational innovation culture. Lastly, idealized influence describes leaders who become role models for their followers. According to Bigharaz et al. (2010), such leaders often set high moral and spiritual standards, and their followers strive to reach the height that the leaders have reached. Such leaders attract the admiration, respect and trust of their followers, and the followers imitate them. The central core of this technique is that it provides purpose and instils a sense of high standards in the followers.

The Leader-Member Exchange (LMX) and Innovation Culture

According to Lunenburg (2010), the fundamental principle of this theory is that organizational leaders categorize their employees into two main groups: those who form the in-group, or inner circle, and the out-group, who are outliers. In the view of Wayne, Shore and Liden (1997), LMX is premised on the foundation that these relationships between leaders and their team members result from physical or mental effort, information flow from each party, open communication channels, and a psychological or emotional bond between the two parties. LMX theory views the relationship between the leader and team members on an individual basis, and each of these relationships can have a different dimension and focus. It is the aggregation of these individual relationships that culminates in an in-group or out-group. LMX influences organizational outcomes, employee engagement and deliverables, which have a direct bearing on the creation and promotion of an innovation culture. It is critical to note that the type of relationship between the leader and team members influences the quality of job outcomes. The relationship between an employee and his/her leader is akin to the lens through which the entire workplace experience is viewed. Ilies, Nahrgang and Morgeson (2007) and Chen, Lam and Zhong (2007) point out that in-group members will have higher productivity, job satisfaction, motivation, empowerment and engagement, and will be better behaved and more ready to act innovatively. Leaders deliberately and consciously invest more resources in these group members than in the others. Basu (1991) has supported the claim that there is a positive correlation between "leader-member exchange and innovation behaviour".
So when the positive relationship exists, there is a strong tendency for the leader to win the hearts and minds of the employees: they start thinking outside the box and breaking new ground. On the other hand, when employees experience or perceive themselves as part of the out-group, they work to rule and become superficial, disengaged, less effective, frustrated and dissatisfied. Such employees care less about the fortunes of the organization and consider themselves as merely hanging in there, usually while they wait for better opportunities outside the organization. The result can be devastating, and no deep-seated innovation culture can thrive in that kind of environment. This presupposes that leadership style, behaviour and actions are impactful in driving an innovation culture. Employees would like to perceive their leaders as treating them fairly, flexibly and firmly, but with the freedom to deliver within their space. Clear leadership support for innovation will thus generate innovative behaviour, because the climate of the organization is, more often than not, measured by leadership behaviour and attitude. The degree of cordiality between employees and their leaders represents a key environmental influence in the work situation. Where the degree of cordiality is high, employees believe their innovative behaviour will result in organizational performance gains, with the attendant recognition of their effort. It is important to state that some leadership styles encourage knowledge sharing, which leads to innovation, but that is beyond the scope of this paper.

Transformational leadership and innovation culture

It is important to note that some studies suggest that certain variables may contribute to the extent to which transformational leadership can thrive in the world of business. Such variables range from start-up business environments to large and mature ones, with different degrees of uncertainty, competitiveness, availability of resources, profitability, value systems, agility of decision-making processes and structures, turnaround time, the level of diversity and inclusiveness among the workforce and, very critically, the closeness of the leader to the led. Some researchers (such as Bass & Riggio, 2006; Dvir, Avolio & Shamir, 2002; van Knippenberg & Sitkin, 2013) have established that critical situations, including crises, produce effective transformational leaders. Transformational leaders encourage employees to appreciate the importance of sacrificing their personal comfort and interest for the attainment of organizational goals. A number of researchers theorize that transformational leadership is linked to organizational performance (Para-González, Jiménez-Jiménez & Martínez-Lorente, 2018; van Knippenberg & Sitkin, 2013; Bass & Riggio, 2006; Wang et al., 2005; Bass, 2003; Dvir, Avolio & Shamir, 2002). Conceptually, it is argued that the visionary and inspirational skills of transformational leaders motivate followers to deliver superior performance (Nicholls, 1988; Quick, 1992), and others (e.g. Bain, Mann & Pirola-Merlo, 2001; Scott & Bruce, 1994) have found some relationship between transformational leadership and innovation culture. Jung (2001) also views managers as playing key roles in developing, transforming, and institutionalizing organizational culture.
Jung, Chow and Wu (2003) found that transformational leadership by the top manager can enhance organizational innovation directly, and also indirectly by creating an organizational culture in which employees are encouraged to freely discuss and try out innovative ideas and approaches. In the same vein, Schein (1992) argues that as organizational founders and leaders communicate what they believe to be right and wrong, these personal beliefs become part of the organization's climate and culture. In summary, there is enough evidence to suggest that transformational leadership style impacts innovation culture, but the results so far have been inconclusive. Based on this, the researchers hypothesize:

Hypothesis 1: Transformational leadership has a positive effect on innovation culture.

The moderating role of Market Dynamism

Market dynamism refers to the dynamics that occur within a given market environment. These changes can occur as a result of changes in technology, defined or undefined market structure, instability of market demands and intense fluctuations in resource supply (Jansen, George, Van den Bosch & Volberda, 2006; Simon et al.). According to Miller and Friesen (1983), market changes occur when the environment is volatile and unpredictable. When those situations occur, it becomes very challenging to clearly define market boundaries, or to build sustainable market models or structures around the stakeholders in the market (for example, customers, competitors and suppliers) and hold them constant (Eisenhardt & Martin, 2000). During such times, organizations become quite vulnerable to external uncertainties. Such situations typically defy the existing knowledge, strategies, policies, practices and activities the organization may have applied to achieve results in more stable times. In spite of the difficulties organizations may encounter during such times, customer demands must still be met. It takes organizations with an innovation culture to continuously find creative ways of improving and modifying their products and services to satisfy customers. If an organization experiencing such market changes decides to continue applying the same strategies, policies, procedures and knowledge as it would in "normal" times, it will be heading towards extinction. Where there are no such regular changes, the level of predictability ensures that organizations can follow a relatively clear linear path and not be heavily concerned about modifications to products and services (Eisenhardt & Martin, 2000; Schilke, 2014).

Increasing technological change, globalization and stiff competition require organizations to pay critical attention to market structural dynamics and orientation. An organization that cannot comprehend the new orientation of its market cannot survive; as Kohli and Jaworski (1990, p. 3) define it, market orientation is "the organization-wide generation of market intelligence, dissemination of intelligence across departments, and organization-wide responsiveness to this intelligence". This requires that the organization understand the market dynamics and responsibly apply innovation to meet demand and create organizational sustainability. The market structure is related to the concept of competitiveness.
Where competition is intense, organizations with innovation in their DNA can survive. When competitive intensity is low, innovation may not take centre stage, unlike when the intensity is high. Thus the speed of innovation is either heightened or slackened depending on the competitiveness of the landscape in the market structure. Persistent dominance of one organization in a market has the potential to affect the market structure: where there is a clear tilt towards the heavy dominance of one organization over others, similar organizations within the sector may consider mergers and acquisitions, capital injection or changes in shareholding structure. Market dynamism and competition also give rise to talent flight. Organizational leaders have a responsibility to ensure that innovative practices are introduced into talent attraction, development and retention strategies, as well as reward and recognition schemes and a generally welcoming work environment for employees. Such plausible innovative approaches will have to be explored to situate the organization appropriately in a market where a change in a competitor's business model could impact the entire market structure. Competition in the market may be conceptualized as the rivalry between organizations in terms of pricing, product characteristics, distribution strategy and customer service; however, a more recent and nuanced conceptualization is encapsulated in the ability to innovate within the customer space.

The relationship between market dynamism and innovation culture has not been extensively researched, and the few studies that have considered the moderating role of market dynamism report inconclusive results. For example, in a study by Kamasak, Yavuz and Altunaz (2016), the relationship between knowledge management and the innovation capabilities of firms was found to be stronger when market dynamism was high than when it was low. Also, Park and Ryu (2007) found that when there are frequent market changes, technology commercialisation has a positive impact on business outcomes. The researchers hypothesize:

Hypothesis 2: Market dynamism will negatively moderate the relationship between transformational leadership and innovation culture.

The moderating role of Organisational Learning Capability

Learning capability, according to Gomes and Wojahn (2017, p. 165), is "the ability of an organization to process knowledge, i.e., the ability to create, acquire, transfer and integrate knowledge and, also, to modify the behaviour to reflect the new cognitive situation, with the aim of improving organizational performance". Similarly, Jerez-Gomez et al. (2005, p. 38) define organizational learning capability as "a firm's capability to learn from internal and external sources and to adjust or modify its behaviour to reflect the new cognitive situation, with a view to improving its performance". A firm's ability to understand and learn from its tangible and intangible assets is imperative to its competitive edge. Organizational learning helps firms apply their knowledge to achieve competitive advantage by being innovative.
Learning capability has been conceptualized by previous research as a multi-dimensional construct (Alegre & Chiva, 2008; Jerez-Gomez et al., 2005), which includes participative decision making, dialogue and teamwork, interaction with the external environment, risk taking, and openness and experimentation (Mbengue & Sane, 2013; Camps & Luna-Arocas, 2012; Jyothibabu, Farooq & Pradhan, 2010). The openness and experimentation dimension is defined as "the extent to which new ideas and suggestions are attended to and treated sympathetically". This dimension relates to how the organisation adopts and treats new ideas and suggestions; experimentation covers how the organisation seeks new, innovative ways of doing things. Risk taking is defined as "the tolerance of ambiguity, uncertainty, and errors" (Alegre & Chiva, 2008, p. 317). This dimension relates to the ability of the firm to tolerate errors, uncertainty and ambiguity. Organizations should be able to accept mistakes and, by doing so, promote learning; the ability of the organisation to take risks affords it the opportunity to be innovative. Interaction with the external environment is "the relationship that a firm maintains in its immediate environment". Chiva et al. (2007) are of the view that the relationship between the organisation and its environment involves receiving and sharing information, interactions of employees with the external environment, and reporting of information from the external environment. Dialogue was defined by Kamasak et al. (2016, p. 236) as "a sustained collective inquiry or a basic process that enhances communication and allows people to see the hidden meanings of words". Dialogue helps in building understanding between parties; it facilitates communication, the presence of multifunctional work teams, and free and open communication within work teams. For innovation to take place, firms need to break barriers to effective communication such as authoritarianism, centralization of power and hierarchical systems. The last dimension of learning capability, participative decision-making, was defined by Alegre and Chiva (2008, p. 37) as "the level of influence that employees have in the decision-making process". The researchers conceptualize learning capability with these five dimensions.

The researchers believe that the learning capabilities of the firm play a key role in the innovation process, because of the importance of the individual constructs of learning capability. For example, there has been extensive research on how learning capability positively affects innovation performance (e.g. Alegre & Chiva, 2008; Jimenez-Jimenez & Sans-Valle, 2011; Alegre & Chiva, 2013), but none of these studies has considered the effect of transformational leadership on innovation culture taking into account the role of learning capability. The leadership of an organisation plays a key role in promoting organizational learning, and it is important to find out the conditions under which organizational learning capability affects this relationship.
Based on the above review, the researchers hypothesize:

Hypothesis 3: Organizational learning capability will positively moderate the relationship between transformational leadership and innovation culture.

The researchers conceptualize the hypotheses in a framework linking transformational leadership to innovation culture, with market dynamism and learning capability as moderators. [Figure: conceptual framework]

Research Design

Our research was conducted as a quantitative study to establish the relationship between transformational leadership and innovation culture and, more importantly, to find out the moderating role of market dynamism and organizational learning capability. Borg and Gall (1989) indicate that quantitative research deals with fairly large sample sizes and is devoted to the study of relationships between objects as they exist. In this case, the researcher assumes a passive stance and does not immerse himself/herself in the research. Information about the object of study is deduced by the use of statistical techniques of data gathering. These data are then analyzed and the results presented in numerical form.

Measures

The study used the survey methodology, which is in line with the positivist paradigm. Consistent with other previous studies on transformational leadership and innovation culture, a quantitative research approach using survey questionnaires was adopted, and a multi-step multiple regression analysis was performed (Blankson et al., 2007). The scales used in the questionnaire were based on measurement scales adopted from previous research on transformational leadership, organizational learning capability, market dynamism and innovation culture (Jaiswal & Dhar, 2015; Wang et al., 2015; Alegre & Chiva, 2008; Dobni, 2008). The questionnaire was designed in two parts. The first section used Likert-scale items anchored from 1 = strongly disagree to 5 = strongly agree. The second section was designed to obtain information on the respondents. The Likert-scale questions on transformational leadership were adopted from the study of Jaiswal and Dhar (2015) and measured with eight items. Organizational learning capability was adopted from the study of Alegre and Chiva (2008), with fourteen items covering experimentation, risk taking, interaction with the external environment, dialogue and participative decision making. The questions relating to market dynamism were adopted from Wang et al. (2015), with six items, while the questions on innovation culture were adopted from Dobni (2008), with nine items.

Data Collection and Analysis

Similar to other previous studies on transformational leadership styles and innovation culture (Keller, 2002; Jaiswal & Dhar, 2015), the researchers adopted a threefold data collection process. A total of 30 firms in the telecommunication, banking and insurance, tourism and hospitality sectors located in the Greater Accra region of Ghana were contacted. To secure their acceptance to participate in the study, a letter explaining the objective of the research and emphasising its practical implications was sent to the top management of each firm. Next, after acceptance, questionnaires in sealed envelopes were sent to each firm to be given to employees through a convenience sampling approach. The questionnaires were given to the respective Human Resource Executives for distribution to the employees. After three weeks, a total of 400 questionnaires had been distributed to all the targeted firms.
250 were received, but only 210 were included in the analysis because the remainder had incomplete information and were thus discarded. In line with the specification of Bihani and Patil (2014), descriptive statistics, exploratory factor analysis and multiple regression analysis techniques were used to analyze the collected and cleaned data.

Profile of respondents

The descriptive analysis showed that of the 210 respondents, 64.3% were male and 35.7% were female. In terms of age structure, 1.9% were between 18 and 25 years, 31.0% between 26 and 35 years, 51.9% between 36 and 45 years, and 15.2% above 46 years. 23.8% of the respondents were junior-level employees, 51% were middle-level staff, and senior-level staff comprised 24.8%. In terms of tenure, 10.5% had been in their current position for less than a year, 40.5% for 1-3 years, 33.3% for 4-7 years, 7.6% for 8-10 years, and 8.1% for more than 10 years. Further, 1.9% of the firms had fewer than 20 employees, 33.3% of the surveyed firms had 21-50 full-time employees, 18.1% had 51-99, and the majority (46.7%) had more than 100 full-time employees. Lastly, in terms of years of existence, 6.2% of the firms had existed for less than 5 years, 4.3% for 6-10 years, 8.1% for 11-15 years, 9.5% for 16-20 years, and 71.9% for more than 21 years. The results are illustrated in Table 1.

Exploratory Factor Analysis (EFA)

To identify the factors relevant in predicting innovation culture, EFA was employed as a data reduction strategy. Prior to the extraction of the relevant factors, the Bartlett test of sphericity (approx. χ² = 1828.343, df = 210, sig. 0.000) and the Kaiser-Meyer-Olkin measure of sampling adequacy, with a value of 0.836 (Chan & Idris, 2017), confirmed that there were significant correlations among the variables to warrant the use of factor analysis. The researchers accepted variables with factor loadings of 0.5 or greater and factors with eigenvalues equal to or greater than 1 for further analysis (Malhotra & Birks, 2006; Hair et al., 2010). The extraction method used was Principal Component Analysis (Hair et al., 2010). After a number of cross-loadings, the initial 38 items were reduced to 21 items, which explained a satisfactory 62.67% of the variance. Based on the recommendations of Hair et al. (2010), items can be dropped when they cross-load or have factor loadings of less than 0.5; in line with this, 18 items were dropped. The researchers used varimax rotation in the EFA and treated 0.7 and above as the ideal level for the reliability of the scales. The 20 retained items loaded cleanly onto 5 extracted factors. "OLC", which denotes organizational learning capability, had two sub-factors, and both were used in the analysis. The first factor was Transformational Leadership, with 4 items. The second factor was Market Dynamism, with 3 items. The third factor was Organizational Learning Capability (interaction with the external environment, with 4 items, and dialogue, with 4 items). Finally, the last factor was Innovation Culture, with 6 items. The internal consistency of the 5 factors was analysed using Cronbach's alpha (Chan & Idris, 2017). The results indicated that all five factors were reliable, with alphas between 0.726 and 0.876. On this basis, all five factors were accepted for further analysis.
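As a sketch of how such an EFA workflow might be reproduced, the snippet below uses Python's factor_analyzer package (an assumption; the authors do not state their software). The response matrix here is randomly generated, so the statistics will not match the reported values; the sketch only illustrates the Bartlett and KMO checks, a five-factor varimax extraction (factor_analyzer's default minres method stands in for the principal component extraction used in the paper), the 0.5 loading cut-off, and a Cronbach's alpha computation.

```python
import numpy as np
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_bartlett_sphericity,
                             calculate_kmo)

# Hypothetical 210 x 38 matrix of 5-point Likert responses.
items = pd.DataFrame(np.random.default_rng(1).integers(1, 6, size=(210, 38)))

# Sampling-adequacy checks reported in the paper.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.3f}), KMO = {kmo:.3f}")

# Five-factor extraction with varimax rotation; retain loadings >= 0.5.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_)
retained = loadings[loadings.abs().max(axis=1) >= 0.5]

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (columns = the scale's items)."""
    n_items = scale.shape[1]
    item_var = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1.0 - item_var / total_var)
```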
Table 2 illustrates the factor loadings and the reliabilities of the scales. The surviving excerpt lists the innovation culture items with their two reported values (loading, communality):

- "We have an innovation vision that is aligned with projects, platforms, or initiatives" (.727, .644)
- "This organization's management team is diverse in their thinking in that they have different views as to how things should be done" (.727, .641)
- "There is a coherent set of innovation goals and objectives that have been articulated" (.824, .732)
- "Innovation is a core value in this organization" (.817, .728)

Test of Validity

In order to establish the convergent and discriminant validity of the constructs used, an appropriate AVE (Average Variance Extracted) analysis is needed (Fornell & Larcker, 1981). There is convergent validity when items measuring the same construct correlate highly with each other (Campbell & Fiske, 1959). We assessed convergent validity by ensuring adequate composite reliability, average variance extracted (AVE), and adequately high factor loadings, as recommended (Komiak & Benbasat, 2006; Hair et al., 2014). The criterion for establishing reliability is that the AVE measures should exceed .50, ensuring that, on average, the measures share at least half of their variation with the latent variable (Fornell & Larcker, 1981; Hjorth, 1994). As shown in Table 4, the AVE criterion was met. For discriminant validity, a test was performed to check whether the square root of every AVE value belonging to each latent construct is much larger than any correlation among any pair of latent constructs. AVE measures the explained variance of the construct; when comparing AVE with the correlation coefficients, we want to see whether the items of a construct explain more variance than do the items of the other constructs. Table 4 shows the results of the AVE analysis. It can easily be seen that the AVE values are above 0.5 and, moreover, the square roots of the AVE on the diagonal of Table 3 are above the correlation coefficients for each construct, ensuring discriminant validity. For all the constructs, the items have high loadings, the majority above 0.50, demonstrating convergent validity. Table 3 shows the correlation matrix and descriptive statistics of the measures used in the study. An examination of the skewness and kurtosis showed that the measures met the normality assumption (Flora & Curran, 2004).
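The convergent- and discriminant-validity computations above lend themselves to a short sketch. The loadings below are the four innovation-culture loadings quoted from Table 2; the AVE values and the correlation used in the Fornell-Larcker check are invented placeholders.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """Composite reliability, with item error variance taken as 1 - lambda^2."""
    lam = np.asarray(loadings)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2)))

ic_loadings = [0.727, 0.727, 0.824, 0.817]   # innovation-culture loadings above
print(f"AVE = {ave(ic_loadings):.3f}")                    # should exceed .50
print(f"CR  = {composite_reliability(ic_loadings):.3f}")

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# correlation with every other construct (placeholder numbers).
ave_tl, ave_ic, corr_tl_ic = 0.58, 0.60, 0.44
print(np.sqrt(ave_tl) > corr_tl_ic and np.sqrt(ave_ic) > corr_tl_ic)  # True
```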
Hypothesis Testing

The researchers used a multilevel hierarchical regression analysis to test the study hypotheses (Wang et al., 2011). According to Hox (2002), this method of analysis is appropriate for cross-sectional studies. The results are shown in Table 4. Table 4 (Model A) shows that the control variables explain 4.2% of the variance in innovation culture. The addition of the independent variable in Model B increased the variance explained in innovation culture to 19% (∆R² = .14, ∆F = 37.640, p < .001). In Model C, the moderation variables were introduced together with the independent variable, which increased R² to 26.5% (∆F = 32.860, p < .000). The introduction of the interaction terms in Model D further increased R² (∆R² = .011, p < .001). Model A shows that the control variable (number of employees) had a negative but significant relationship with innovation culture (b = −.087, p < .010). Model B in Table 4 shows that transformational leadership is positively and significantly related to innovation culture (b = .387, p < .001). We can therefore establish that the first hypothesis, which stated that there is a positive relationship between transformational leadership and innovation culture, is supported. Similarly, when the moderators were added to the independent variable, transformational leadership had a positive relationship with innovation culture (b = .155, p < .010), while market dynamism also had a positive and significant relationship with innovation culture (b = .245, p < .000). Likewise, interaction with the external environment (b = .373, p < .000) and dialogue (b = .116, p < .000) had positive and significant relationships with innovation culture.

Further, the moderating effects of market dynamism and organizational learning capability (interaction with the external environment and dialogue) were assessed. The products of transformational leadership and the moderator variables showed that transformational leadership × market dynamism (b = .256, p < .000) was significantly and positively related to innovation culture. From these results we can establish that hypothesis two, which stated that market dynamism will negatively moderate the relationship between transformational leadership and innovation culture, is only moderately supported. Transformational leadership × interaction with the external environment had a negative but significant relationship with innovation culture (b = −.103, p < .005), while transformational leadership × dialogue (b = .114, p < .005) was positively and significantly related to innovation culture. Since interaction with the external environment and dialogue are two dimensions of organizational learning capability, we can infer that there is partial support for the hypothesis that organizational learning capability moderates the relationship between transformational leadership and innovation culture.
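Before turning to the discussion, a hierarchical moderated regression of this kind can be sketched with statsmodels. The composite scores below are simulated, and the variable names (`tl`, `md`, `ic`) are hypothetical shorthands; the sketch only shows the step-wise entry of controls, main effects and a mean-centred interaction term, mirroring the progression from Model A to Model D.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "n_employees": rng.integers(10, 500, 210),   # control variable
    "tl": rng.normal(3.8, 0.6, 210),             # transformational leadership
    "md": rng.normal(3.5, 0.7, 210),             # market dynamism
    "ic": rng.normal(3.6, 0.6, 210),             # innovation culture (outcome)
})

# Mean-centre the predictors before forming the interaction term.
df["tl_c"] = df["tl"] - df["tl"].mean()
df["md_c"] = df["md"] - df["md"].mean()
df["tl_x_md"] = df["tl_c"] * df["md_c"]

# Step A: controls only.  Step D: main effects plus the interaction.
model_a = sm.OLS(df["ic"], sm.add_constant(df[["n_employees"]])).fit()
model_d = sm.OLS(df["ic"], sm.add_constant(
    df[["n_employees", "tl_c", "md_c", "tl_x_md"]])).fit()

print(f"Delta R^2 (A -> D) = {model_d.rsquared - model_a.rsquared:.3f}")
print(f"Moderation coefficient b = {model_d.params['tl_x_md']:.3f}")
```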
Discussion

This research sought to find out the effect of transformational leadership on innovation culture. The results showed that the coefficient of this variable was positive and significantly related to innovation culture at the 1% significance level, once again conforming to the a priori expectation of the study. The magnitude of the coefficient implies a 38.7% increase in innovation culture when transformational leadership increases by one unit. A transformational leader articulates a compelling vision of the future, intellectually stimulates followers, recognizes individual differences and helps develop their strengths (Bass, 1985). Transformational leaders encourage employees to appreciate the importance of sacrificing their personal comfort and interest for the attainment of organizational goals. As a result, to build an innovation culture, organizations should adopt a transformational leadership style, as it allows leaders to use inspirational motivation and intellectual stimulation, which are key for innovation.

Transformational leaders promote creative ideas within their organizations, and their behaviours are suggested to act as "creativity-enhancing forces": individualized consideration "serves as a reward" for the followers, intellectual stimulation "enhances exploratory thinking", and inspirational motivation "provides encouragement" in the idea generation process. Transformational leaders have a charisma that is not only capable of creating imagination, long-term vision and meaning in a project, but also inspires value, respect and confidence in the team, which in the end promotes an innovation culture among individuals. Hence, to promote an innovation culture, leaders and followers must raise one another to higher levels of morality and motivation, as suggested by Burns (1978). These findings generally conform to the studies of Jung (2001), Jung, Chow and Wu (2003) and Schein (1992), which reveal that transformational leadership can enhance innovation culture. Transformational leadership has been found to be the most impactful style with respect to innovation culture, because a transformational leader provides inspirational motivation and clearly establishes himself/herself as a role model who exudes trust and confidence (Cole et al., 2009; Bass, 1998).

From the results obtained, we can deduce that in creating an innovation culture, the transformational leader must exhibit the ability to make followers understand the need to support their organizations to be efficient. The leader does this by ensuring that followers receive mentoring and coaching to develop their skills and capabilities. Through idealised influence, the transformational leader becomes a role model for his/her team, and thereby the learning capabilities of the organisation improve and positively support the innovation culture. Employees learn from what they see of their leaders, and it takes transformational leaders to make this happen. This learning opportunity given to employees can become a policy under a transformational leader and influence the promotion of an innovation culture. The market environment is an evolving one, and the transformational leader does not lose sight of its dynamism, which influences the innovation culture in an organisation. Transformational leadership therefore relates positively to innovation culture when moderated by market dynamism. Competition, the national economy, infrastructural development, improved communication channels, and customer exposure and tastes are all factors that keep the marketplace dynamic, and it takes a transformational leader to keep his/her eye on the environment and adopt measures that positively influence the promotion of innovation in his/her organisation.

Implications

The study recommends that leaders, especially transformational leaders, should be selfless, promote employee engagement, communicate with employees confidently, and be futuristic and inspirational. This is important because the brightest and best ideas may not always come from the person sitting at the highest echelon of the organization. It is important to state that the way and manner in which the organization is structured and evolves is significant, because when people are confined to rigid "boxes" in the name of organizational structures they are not able to get creative; rather, working in project teams, for instance, must be encouraged.
Also, the study reveals that leaders and managers in the Ghanaian service industry must actively engage their subordinates in the leadership process, especially for decisions focusing on customer innovation. This is necessary for several reasons. First, subordinates have a better understanding of how the customer space is changing with respect to customer demands and perceptions of services. This, coupled with their frequent interaction with customers at the service touchpoints, means that they have a rich repertoire of information which could be used to make meaningful inputs during decision-making on how to improve customer experience through innovation. Second, when leaders in the Ghanaian service industry consider the viewpoints of their subordinates, resistance to innovation is less likely to ensue. This will greatly foster a conducive climate for the growth of an innovation culture. Typically, managers in Ghanaian organizations (as in other Sub-Saharan African nations) tend to be authoritative in their leadership style (Beugre & Offodile, 2001). However, the findings from the study point to a growing need for them to rethink their respective leadership approaches to include the viewpoints of their followers. This cannot be achieved through a passive approach; Ghanaian service industry leaders need to actively pursue styles that engage and empower their followers to be part of the leadership process. In essence, leaders within Ghana's service industry will need to adapt their leadership approach to styles that offer platforms for employees to become active followers. Active followership will breed knowledge sharing, which then serves as a strong foundation for building innovative teams.

Limitations and Directions for Future Research

As with all works of original research, replication of this study would serve as a check on the reliability and generalizability of the present findings. Researchers may also wish to extend this study by undertaking a more detailed analysis of the predictors found to be important in affecting innovation culture. One major implication of this study for industry practitioners and academia, as intimated by Aragon-Correa et al. (2007), is that innovation, though a critical part of an organization's growth, is not just there for the taking but is available to those organizations with the appropriate internal characteristics. This assertion, coupled with insights into other internal and external factors, will support the promotion of an innovation culture in the service industry in Ghana.

Conclusion

In conclusion, this study examined the effect of transformational leadership style on innovation culture in the Ghanaian service industry and, further, the moderating role of market dynamism and organizational learning capability. The findings provide insight into the effect of changes in the market and the ability of firms to interact favourably with the external environment.
The moduli space of stable coherent sheaves via non-archimedean geometry

We provide a construction of the moduli space of stable coherent sheaves in the world of non-archimedean geometry, where we use the notion of Berkovich non-archimedean analytic spaces. The motivation for our construction is Tony Yue Yu's non-archimedean enumerative geometry in Gromov-Witten theory. The construction of the moduli space of stable sheaves using Berkovich analytic spaces will give rise to a non-archimedean version of Donaldson-Thomas invariants. In this paper we give the moduli construction over a non-archimedean field $\kk$. We use the machinery of formal schemes: we define and construct the formal moduli stack of (semi)-stable coherent sheaves over a discrete valuation ring $R$, and taking the generic fiber we obtain the non-archimedean analytic moduli space of semistable coherent sheaves over the non-archimedean fraction field $\kk$. For a moduli space of stable sheaves on an algebraic variety $X$ over an algebraically closed field $\kappa$, the analytification of the moduli space gives an example of such a non-archimedean moduli space. We generalize Joyce's $d$-critical scheme structure in \cite{Joyce}, or Kiem-Li's virtual critical manifolds in \cite{KL}, to the world of formal schemes and Berkovich non-archimedean analytic spaces. As an application, we provide a proof of the motivic localization formula for a $d$-critical non-archimedean $\kk$-analytic space using the global motive of vanishing cycles and motivic integration on oriented formal $d$-critical schemes. This generalizes Maulik's motivic localization formula for motivic Donaldson-Thomas invariants.

1.1. Structure of the paper. This paper contains two parts. The first part is a construction of the moduli space (stack) of (semi)-stable coherent sheaves in the world of non-archimedean geometry. The motivation for our research is Tony Yue Yu's study of Gromov compactness for moduli spaces of stable maps in non-archimedean analytic geometry in [77]. The central part of his theory, motivated by the work of Kontsevich-Soibelman, is to define an open version of Gromov-Witten invariants using non-archimedean analytic geometry. T. Yu has made substantial progress in this direction; see [78], [79]. The advantage of T. Yu's theory is that the invariants he defines satisfy the Kontsevich-Soibelman wall-crossing formula as in [45]. So this theory should give the right invariants, and these invariants can be used to construct mirrors for some log Calabi-Yau geometries; see [79]. This is parallel to the Gross-Siebert program in the series of works [29], [30], [31], [27], whose goal is also to construct mirrors using open or punctured versions of Gromov-Witten invariants. All of these achievements provide deep evidence for mirror symmetry. One of the important conjectures in enumerative geometry is the Gromov-Witten/Donaldson-Thomas (GW/DT) correspondence of MNOP [56], [57], conjectured for any smooth projective threefold; we restrict here to Calabi-Yau threefolds. Roughly speaking, for a projective Calabi-Yau threefold Y, the GW/DT correspondence states that the Gromov-Witten partition function of the curve counting invariants of Y via stable maps is equivalent, after a change of variables, to the Donaldson-Thomas partition function (in the sense of [74]) of the curve counting invariants via ideal sheaves. T. Yu's theory [77] gives Gromov-Witten-like invariants on non-archimedean analytic spaces.
So there should exist a theory of sheaf counting in non-archimedean analytic geometry. In this paper we provide the first step for sheaf counting in the non-archimedean sense, i.e., we construct the moduli space (stack) of (semi)-stable coherent sheaves for non-archimedean analytic spaces. We work in the category of formal schemes and Berkovich non-archimedean analytic spaces for this construction. Of course, it is ambitious at the moment to ask whether the sheaf counting theory provides better insights into the construction of mirrors than Gromov-Witten-like invariants. We define the formal moduli functor of semistable sheaves over a stft formal scheme, and prove that the functor is represented by an algebraic stack using Artin's criteria for the representability of algebraic stacks in [2].

Berkovich non-archimedean analytic spaces are natural for the study of degenerations of algebraic varieties. Over a field of characteristic zero, every Berkovich analytic space X has a simple normal crossing (SNC) formal model 𝔛 over R, i.e., a stft formal scheme over a discrete valuation ring R such that its special fiber 𝔛_s is a κ-scheme having only simple normal crossing divisors, and its generic fiber satisfies 𝔛_η ≅ X. The non-archimedean analytic space X is independent of the formal model we choose: if 𝔛′ is a different formal model, then its generic fiber 𝔛′_η is also isomorphic to X. For instance, an elliptic curve can degenerate into a circle of projective lines, called the skeleton of the degeneration. One can blow up the special fiber so that there are some branches on the circle, but the skeleton does not change. Using this idea we construct the universal stack M of SNC formal models for a fixed pair (X, 𝔛), and the formal moduli stack of stable ideal sheaves of M.

Berkovich non-archimedean analytic spaces can also be used to study degenerations of stable coherent sheaves on a variety Y with simple normal crossing divisors. In [50], [52], the moduli space of relative stable coherent sheaves for a pair (Y, D) is defined via so-called "expanded degenerations", and the central idea is to ensure the normality property of a stable coherent sheaf with respect to the divisor D. Geometrically this means that the underlying curve associated with the stable sheaf intersects the divisor D transversally. This normality property can also be studied by the techniques of logarithmic structures as in [1], [28]. Using the SNC formal model 𝔛 of the non-archimedean analytic space X, the normality property of stable coherent sheaves on the κ-scheme 𝔛_s with respect to an SNC divisor D is automatically satisfied, since the non-archimedean analytic space X does not depend on the SNC formal model, and one can perform admissible formal blow-ups along the special fiber to preserve the transversality property. We prove that the formal moduli stack of stable ideal sheaves of M is the formal completion of the moduli stack of ideal sheaves of the stack of expanded degenerations. It would be very interesting to see whether non-archimedean analytic spaces can be used to study the degeneration formula for both Gromov-Witten and Donaldson-Thomas invariants as in [51], [52].

The second part concerns motivic Donaldson-Thomas theory. We outline the basic materials and questions here; more details can be found in §5 and in the sections of Part II. Let Y be a smooth Calabi-Yau threefold.
The moduli space $X$ of stable coherent sheaves over $Y$ with fixed topological invariants admits a symmetric obstruction theory [4], and the Donaldson-Thomas invariant of $Y$ is the weighted Euler characteristic of $X$ weighted by the Behrend function $\nu_X$ [4, Theorem 4.18], which coincides with the Donaldson-Thomas invariant of Thomas [74] defined using virtual fundamental classes. This proves that Donaldson-Thomas invariants are motivic invariants. The general construction of motivic Donaldson-Thomas invariants is given by Kontsevich-Soibelman [45] for any Calabi-Yau category, and in degree zero by Behrend-Bryan-Szendroi [5]. The key step in motivic Donaldson-Thomas theory is to construct a global motive, or a global vanishing cycle sheaf, for $X$ such that taking the Euler characteristic of the cohomology of such a sheaf we recover the weighted Euler characteristic of $X$. In a series of papers [16], [17], [18], Joyce et al. achieved this goal by using symplectic derived schemes or stacks, since the moduli space $X$ extends naturally to a $(-1)$-shifted symplectic derived scheme $\mathbf{X}$ in [67]. The underlying scheme $X$ of a $(-1)$-shifted symplectic derived scheme $\mathbf{X}$ is a $d$-critical locus in the sense of [42], or a virtual critical manifold in the sense of Kiem-Li [44]. In this paper we generalize Joyce's definition of $d$-critical schemes to the setting of formal schemes and Berkovich non-archimedean analytic spaces. This at least gives a formal and analytic version of Joyce's $d$-critical scheme structures, and will have applications in motivic Donaldson-Thomas theory. It is hoped that the formal and analytic $d$-critical schemes and non-archimedean analytic spaces will have further applications. In analogy with the case of $d$-critical schemes and $(-1)$-shifted symplectic derived schemes, we hope that $d$-critical formal schemes and $d$-critical non-archimedean $K$-analytic spaces are the underlying schemes (spaces) of corresponding $(-1)$-shifted symplectic derived formal schemes and derived non-archimedean analytic spaces; see the corresponding results in this direction in [55, Chapter 8], [68], [69]. We also generalize Joyce's orientation of $d$-critical schemes to $d$-critical formal schemes and $d$-critical non-archimedean analytic spaces. This provides a global motive $MF^{\phi}_{X,s}$ of vanishing cycles for $d$-critical non-archimedean analytic spaces and $d$-critical formal schemes. This global motive $MF^{\phi}_{X,s}$ lies in the localized Grothendieck ring $\mathcal{M}_{\kappa}$ of varieties over $\kappa$. If the $d$-critical non-archimedean analytic space $(X, s)$ admits a good $\mathbb{G}_m$-action which is circle-compact, we generalize Maulik's motivic localization formula for the global motive $MF^{\phi}_{X,s}$ of $(X, s)$ by using motivic integration for formal schemes in [64], [38]; see Theorem 1.8.

1.2. Main results of the construction of moduli spaces. We list in this section our main results on the moduli construction.

1.2.1. Main results. Let $Y$ be a projective scheme over $\kappa$ with a polarization $\mathcal{O}_Y(1)$, and let us fix a Hilbert polynomial $P$. We first recall the construction theorem of the moduli space of (semi)stable sheaves in [34, Theorem 4.3.4]; we prove a similar theorem in non-archimedean analytic geometry. With a view toward Donaldson-Thomas theory, we restrict to three-dimensional smooth non-archimedean $K$-analytic spaces, although the main result below is true for any smooth non-archimedean $K$-analytic space. Let $I^P_K(X)$ denote the moduli space of analytic ideal sheaves of curves on $X$ with Hilbert polynomial $P$.
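Here the Hilbert polynomial is taken in the usual sense (for Berkovich spaces, Remark 4.4 below uses étale sheaf cohomology): with respect to the chosen polarization $L$,
\[
P_F(m) := \chi(X, F \otimes L^{\otimes m}) = \sum_i (-1)^i \dim H^i(X, F \otimes L^{\otimes m}),
\]
which for $m \gg 0$ is a polynomial in $m$ whose degree equals the dimension of the support of $F$.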
The main result is:

Theorem 1.2. (Corollary 4.20) Let $X$ be a smooth non-archimedean $K$-analytic space of dimension three. Suppose that $\widehat{L}$ is a Kähler structure on $X$ with respect to an SNC formal model $\mathfrak{X}$ of $X$. Let $I^P_K(X)$ denote the moduli stack of analytic ideal sheaves on $X$ with Hilbert polynomial $P$ such that the degree of $P$ with respect to $\widehat{L}$ is bounded. Then $I^P_K(X)$ is a compact $K$-analytic space. Assume that $\kappa$ has characteristic zero. If the $K$-analytic space $X$ is proper, then $I^P_K(X)$ is a proper $K$-analytic stack.

Remark 1.3. Here, for a smooth non-archimedean $K$-analytic space $X$, "proper" means "compact" and "without boundary".

Our strategy to prove Theorem 1.2 is through the formal model $\mathfrak{X}$ of $X$, so it is routine to construct a formal version of the above result. Let $\mathcal{M}(P)$ denote the general moduli functor of semistable coherent sheaves with Hilbert polynomial $P$. Let $S$ be a locally noetherian base scheme, and $\mathcal{X}/S$ a scheme locally of finite presentation over $S$. Our first result is Theorem 1.4 (Theorem 3.3): the functor $\mathcal{M}_S(P)(\mathcal{X})$ is an algebraic stack locally of finite presentation over $S$. Our result for the formal moduli space is:

Theorem 1.5. (Theorem 4.11) (Construction of the moduli stack of formal semistable sheaves over $R$) Let $\mathfrak{X}$ be a stft formal $R$-scheme. Let $T$ be a strictly $K$-affinoid space and let $(F \to T, f)$ be a family of $K$-analytic semistable coherent sheaves on $\mathfrak{X}_\eta$ over $T$. Then, up to passing to a quasi-étale covering of $T$, there exists a formal model $\mathfrak{T}$ of $T$ and a family of formal semistable sheaves $(\mathfrak{F} \to \mathfrak{T}, \mathfrak{f})$ of $\mathfrak{X}$ over $\mathfrak{T}$ such that applying the generic fiber functor gives back the family $(F \to T, f)$.

Here is a result about the moduli space of stable coherent sheaves on a degeneration family. Assume that the characteristic of $\kappa$ is zero. Let $X$ be a quasi-compact non-archimedean $K$-analytic space. A simple normal crossing (SNC) formal model of $X$ is a stft formal scheme $\mathfrak{X}$ such that its special fiber $\mathfrak{X}_s$ has only simple normal crossing divisors and its generic fiber satisfies $\mathfrak{X}_\eta \cong X$. Let us fix such a pair $(X, \mathfrak{X})$. Let $\{D_i : i \in I_{\mathfrak{X}}\}$ be the irreducible components of $\mathfrak{X}_s$, and let $D_I := \bigcap_{j \in I} D_j$ for any $I \subset I_{\mathfrak{X}}$. The non-archimedean $K$-analytic space $X$ is independent of the SNC formal model $\mathfrak{X}$ we choose: if $\mathfrak{X}' \to \mathfrak{X}$ is an admissible formal blow-up along some $D_I$, then $\mathfrak{X}'_\eta \cong \mathfrak{X}_\eta \cong X$. Using admissible formal blow-ups, we construct a universal stack $\mathfrak{M}$ of SNC formal models of $(X, \mathfrak{X})$. The stack $\mathfrak{M}$ is a stack over $\operatorname{Spf}(R)$ with generic fiber $X$. We also define the moduli stack $I^P_R(\mathfrak{M})$ of stable ideal sheaves with Hilbert polynomial $P$ on the universal stack $\mathfrak{M}$; see §4.3.4. Let us consider a simple case. Let $X$ be a $\kappa$-scheme, and let $\pi: W \to \mathbb{A}^1_\kappa$ be a degeneration family of $X$ as in [52]. Let $I^{P_i}_\kappa(D_i, D_{12})$ be the moduli stack of relative stable ideal sheaves in [52] on the stack of relative expanded pairs. For a decomposition $\gamma = (P_1, P_2, P_{12})$ of the Hilbert polynomial $P$, we have a gluing result; see §4.3.5 for more details.

Theorem 1.6. (Compare [52, Theorem 5.28]) Let $X$ be a proper $\kappa$-scheme and $\mathfrak{X}$ its $t$-adic formal completion, such that $\mathfrak{X}_s = D_1 \cup_{D_{12}} D_2$. Then the moduli stack $I^P_\kappa(\mathfrak{M}_s)$, after applying the special fiber functor, admits a canonical gluing isomorphism of Deligne-Mumford stacks.

In characteristic zero, every smooth non-archimedean $K$-analytic space $X$ has an SNC formal model $\mathfrak{X}$. If $X$ is proper, then $\mathfrak{X}_s$ is proper, and we prove that the moduli stack $I^P_\kappa(\mathfrak{X}_s)$ is proper, see Proposition 4.19. Then the moduli space $I^P_K(X)$ is proper since the moduli stack $I^P_\kappa(\mathfrak{X}_s)$ is proper.

1.2.2. Outline of the proof of the main results. In §2 we review the basic material on formal schemes and Berkovich analytic spaces.
The moduli stack of semistable sheaves over a locally noetherian scheme is constructed in §3; this proves Theorem 1.4. We construct the non-archimedean analytic moduli stack of semistable sheaves over $K$ in §4 and prove the main result, Theorem 1.5, there.

1.3. Main results on motivic Donaldson-Thomas invariants. In this section we apply the construction of formal and non-archimedean moduli spaces of stable sheaves to the motivic localization formula for motivic Donaldson-Thomas invariants. Let $(X, s)$ be an oriented $d$-critical non-archimedean $K$-analytic space, and let $(\mathfrak{X}, s)$ be an SNC formal model of $X$; then $(\mathfrak{X}, s)$ is an oriented $d$-critical formal $R$-scheme. We define the absolute motive
\[
\int_X MF^{\phi}_{X,s} := \int_{\mathfrak{X}_s} MF^{\phi}_{\mathfrak{X},s},
\]
where $\int_{\mathfrak{X}_s}$ means pushforward to a point. The motive $\int_X MF^{\phi}_{X,s}$ is independent of the choice of SNC formal model. If $X$ admits a good, circle-compact action of $\mathbb{G}_m$, we define the virtual index
\[
(1.3.1) \qquad \operatorname{ind}^{virt}(X^{\mathbb{G}_m}_i, X) := \dim T_x(X)^{+} - \dim T_x(X)^{-},
\]
where $T_x(X)^{+}$ and $T_x(X)^{-}$ are the positive- and negative-weight parts of the $\mathbb{G}_m$-action on the tangent space $T_x(X)$ for a generic point $x$ in the stratum $X^{\mathbb{G}_m}_i$. Then we have the following motivic localization formula.

Theorem 1.8. (Theorem 7.17) Let $(X, s)$ be a $d$-critical non-archimedean $K$-analytic space and let $\mu$ be a good, circle-compact action of $\mathbb{G}_m$ on $X$ which preserves the orientation $K^{1/2}_{X,s}$. Then
\[
\int_X MF^{\phi}_{X,s} = \sum_i \mathbb{L}^{\operatorname{ind}^{virt}(X^{\mathbb{G}_m}_i,\, X)/2} \int_{X^{\mathbb{G}_m}_i} MF^{\phi}_{X,s}\Big|_{X^{\mathbb{G}_m}_i}.
\]

We use the techniques of motivic integration for formal schemes developed in [64] to prove Theorem 1.8. If $(\mathfrak{X}, s)$ is the oriented $d$-critical formal scheme corresponding to the moduli space $X$ of stable coherent sheaves over a Calabi-Yau threefold $Y$, denote by $MF^{\phi}_X$ the global motive on $X$, and suppose that there is a good, circle-compact $\mathbb{G}_m$-action on the scheme $X$. In this case we recover the motivic localization formula of Maulik in [59]:

Corollary 1.9. (Maulik [59])
\[
\int_X MF^{\phi}_{X,s} = \sum_i \mathbb{L}^{\operatorname{ind}^{virt}(X^{\mathbb{G}_m}_i,\, X)/2} \int_{X^{\mathbb{G}_m}_i} MF^{\phi}_{X,s}\Big|_{X^{\mathbb{G}_m}_i},
\]
where $X^{\mathbb{G}_m} = \bigsqcup_i X^{\mathbb{G}_m}_i$ is the fixed locus of $X$ under the $\mathbb{G}_m$-action, the notation $\int_X MF^{\phi}_{X,s}$ means pushforward to a point (i.e., the absolute motive), and $\operatorname{ind}^{virt}(X^{\mathbb{G}_m}_i, X)$ is the virtual index on the tangent space, as in (1.3.1).

Our method to prove Corollary 1.9 is to use formal schemes. We take the $t$-adic formal completion $\mathfrak{X} = \widehat{X}$ of $X$; then $\mathfrak{X}$ carries a formal $d$-critical scheme structure $(\mathfrak{X}, s)$, and its generic fiber carries a $d$-critical non-archimedean analytic space structure $(\mathfrak{X}_\eta, s)$. The canonical line bundle $K_{\mathfrak{X},s}$ is isomorphic to the formal completion of the canonical line bundle $K_{X,s}$, where $(X, s)$ is the $d$-critical scheme of [42]. Moreover, if there exists an orientation $K^{1/2}_{X,s}$, then $K^{1/2}_{\mathfrak{X},s}$ exists, and there exists a unique $MF^{\phi}_{\mathfrak{X},s} \in \mathcal{M}^{\hat\mu}_{\mathfrak{X}_s}$ such that, if $X$ admits a good circle-compact $\mathbb{G}_m$-action preserving the orientation $K^{1/2}_{X,s}$, Theorem 1.8 applies; thus we get the localization formula in Corollary 1.9. The motivic localization formula in Corollary 1.9 is very useful in calculations of refined Donaldson-Thomas invariants; for instance, in [19] the authors use this formula to calculate the refined Donaldson-Thomas invariants of local $\mathbb{P}^2$. We expect more interesting applications of this formula.

1.4. Related and future works. As mentioned earlier, this work is motivated by Tony Yue Yu's study of non-archimedean enumerative geometry in [77], [78]. Motivated by the MNOP conjecture equating Gromov-Witten and Donaldson-Thomas invariants, it is interesting to construct a sheaf-counting counterpart of Tony Yu's non-archimedean curve counting theory, using Behrend's weighted Euler characteristic of the moduli space of stable sheaves over smooth Calabi-Yau threefolds. The moduli stack $\overline{M}_{g,n}(Y)$ of stable maps to a Calabi-Yau threefold $Y$ admits a perfect obstruction theory in the sense of Li-Tian [49] and Behrend-Fantechi [6].
Hence there is a zero-dimensional virtual fundamental cycle $[\overline{M}_{g,n}(Y)]^{virt}$. In the non-archimedean counting of stable maps as in [77], [78] (the paper [78] works with log Calabi-Yau surfaces), the author does not use a perfect obstruction theory to define his invariants $N_{L,\beta}$ directly on the corresponding moduli space $M_{L,\beta}$, for a tropical spine $L$ in the Berkovich retraction $Y^{an} \to B$ and curve degree $\beta$; instead, he uses a restriction of the usual virtual fundamental class of a higher-dimensional space of stable maps to the moduli space $M_{L,\beta}$. In the forthcoming work [69], Porta and Yu will construct a perfect obstruction theory on the non-archimedean moduli analytic space and define the virtual fundamental cycle directly. In [39], we will address the analogous problem of symmetric obstruction theories on the non-archimedean moduli space of stable coherent sheaves; the notion of $d$-critical formal schemes and $d$-critical non-archimedean analytic spaces will play an important role there. We will study the Kashiwara-Schapira index theorem for non-archimedean analytic spaces, and a non-archimedean version of Behrend's theorem equating the Donaldson-Thomas invariants with the weighted Euler characteristic of a canonical constructible sheaf of vanishing cycles.

1.5. Conventions. Let us fix some notation. Throughout this paper, $R$ is a complete discrete valuation ring $R = \kappa[[t]]$, with fraction field $K := \kappa((t))$ and perfect residue field $\kappa$. We fix a uniformizing element $t$ in $R$, i.e., a generator of the maximal ideal. The field $K$ is a non-archimedean field with valuation $v$ such that $v(t) = 1$, and absolute value $|\cdot| = e^{-v(\cdot)}$. All formal schemes over $R$ are stft (separated and topologically of finite type) in the sense of [62], and the non-archimedean analytic spaces over $K$ are quasi-compact Berkovich analytic spaces [9]. For the applications in §4.3.3 and §7.5.2, we consider schemes and stacks over $\kappa = \mathbb{C}$, the field of complex numbers. For any $\kappa$-variety $X$, we denote by $\operatorname{Div}(X)$ the abelian group of divisors of $X$, and by $N^1(X) = \operatorname{Div}(X)/\operatorname{Div}^0(X)$ the divisor class group, where $\operatorname{Div}^0(X)$ is the group of principal divisors. An element $D \in N^1(X)$ is said to be nef if the intersection $D \cdot C \geq 0$ for any curve $C$ in $X$, and ample if $D$ is an ample divisor. For the complete discrete valuation ring $R$, we denote by $R\{x_1,\dots,x_n\}$ the Tate algebra, the ring of convergent formal power series over $R$. For a strictly affinoid algebra $A$ over the non-archimedean field $K$, we use $\operatorname{Sp}(A)$ for the affinoid rigid variety, and $\operatorname{Sp}_B(A)$ for the Berkovich spectrum, i.e., the space of seminorms on $A$ endowed with the Berkovich (real) topology. Denote by $Fsch_R$ the category of stft formal schemes over $R$, by $An_K$ the category of strictly $K$-analytic spaces equipped with the quasi-étale topology, and by $Rig_K$ the category of $K$-rigid varieties equipped with the Grothendieck topology. We use Fraktur symbols $\mathfrak{X}$ for $R$-formal schemes, and ordinary symbols $X$ both for $\kappa$-varieties or schemes and for Berkovich non-archimedean $K$-analytic spaces. In §3, calligraphic symbols $\mathcal{X}$ denote schemes over a locally noetherian scheme $S$. For a Berkovich analytic space $X$, we use $\chi(X)$ for the Euler characteristic of the étale cohomology of $X$. We use $\mathbb{L}$ for the Lefschetz motive $[\mathbb{A}^1_\kappa]$. In Part I, we use $\mathcal{M}(P)$ for the moduli functor of (semi)stable coherent sheaves on algebraic schemes, formal schemes, and non-archimedean analytic spaces.
In Part II, $\mathcal{M}$ and $\mathcal{M}^{\hat\mu}$ denote the localized Grothendieck ring and the equivariant localized Grothendieck ring.

ACKNOWLEDGMENTS. Y. J. thanks Song Sun and Tony Yue Yu for valuable discussions on Berkovich analytic spaces, especially Johannes Nicaise for answering questions about the motivic integration of formal schemes in [64], Yifeng Liu for hospitality during a visit to Northwestern University, and Tony Yue Yu for correspondence on his non-archimedean enumerative geometry in Gromov-Witten theory and for valuable suggestions on the generalized version of the motivic localization formula for non-archimedean analytic spaces. Y. J. thanks Junwu Tu for discussions of deformation quantization and its relation to the perverse sheaf of vanishing cycles and the twisted de Rham complex, and Qile Chen for discussions of log stable maps to generalized Deligne-Faltings pairs. Many thanks to Professors Tom Coates, Alessio Corti, and R. Thomas for their support at Imperial College London, where the author started to think about research in this direction, inspired by joint work [40] with R. Thomas on the Behrend function and Lagrangian intersections. Y. J. also thanks Professor Jun Li and Professor Dominic Joyce for discussions of $d$-critical schemes and virtual critical manifolds, and Professor Sheldon Katz for discussions of the motivic localization formula for motivic Donaldson-Thomas invariants during a visit to UIUC in January 2017. Y. J. thanks Professors Jim Bryan, Andrei Okounkov, and Balazs Szendroi for email correspondence on the index formula of plane partitions, and especially Balazs Szendroi for pointing out an error in the last example of an earlier version of the paper. This work is partially supported by NSF DMS-1600997.

PART I

2. PRELIMINARIES ON FORMAL SCHEMES AND BERKOVICH ANALYTIC SPACES

2.1. Formal schemes, rigid varieties and Berkovich analytic spaces. An adic $R$-algebra $A$ is said to be topologically finitely generated over $R$ if $A$ is topologically $R$-isomorphic to a quotient of the algebra of restricted power series $R\{x_1,\dots,x_n\}$. The algebra $R\{x_1,\dots,x_n\}$ is the Tate algebra, the subalgebra of $R[[x_1,\dots,x_n]]$ consisting of those elements $\sum_i c_i x^i$ such that $c_i \to 0$ (with respect to the $t$-adic topology) as $|i| = i_1 + \cdots + i_n$ tends to $\infty$. Let $J = tR$ be the ideal of definition of $R$; then from [24, Ch. 0, 7.5.3], the quotient algebra $A$ has ideal of definition $JA$. A stft affine formal scheme is the formal spectrum $\operatorname{Spf}(A)$ for a topologically finitely generated $R$-algebra $A$. In general, a stft formal $R$-scheme $\mathfrak{X}$ is a separated formal scheme, topologically of finite type over $R$, covered by a cover $\{\mathfrak{X}_i\}$ of stft affine formal subschemes $\mathfrak{X}_i = \operatorname{Spf}(A_i)$ with each $A_i$ topologically finitely generated over $R$. We denote its special fiber by $\mathfrak{X}_s$ and its generic fiber by $\mathfrak{X}_\eta$. The special fiber $\mathfrak{X}_s$ is a $\kappa$-scheme; in the affine case $\mathfrak{X} = \operatorname{Spf}(A)$, $\mathfrak{X}_s = \operatorname{Spec}(A/JA)$, and in general $\mathfrak{X}_s$ is covered by the affine $\kappa$-schemes $\operatorname{Spec}(A_i/JA_i)$. The generic fiber $\mathfrak{X}_\eta$ is a quasi-compact Berkovich non-archimedean $K$-analytic space in the sense of [9]; in the category of rigid varieties as in [14], $\mathfrak{X}_\eta$ is a separated quasi-compact rigid $K$-variety. In the case $\mathfrak{X} = \operatorname{Spf}(A)$, we have $\mathfrak{X}_\eta = \operatorname{Sp}_B(\mathcal{A})$ for $\mathcal{A} = A \otimes_R K$ in the sense of Berkovich, and $\mathfrak{X}_\eta = \operatorname{Sp}(\mathcal{A})$ in the category of rigid varieties. As with the difference between varieties and schemes, the Berkovich spectrum adds generic points and carries a real topology, while rigid varieties carry Grothendieck topologies.
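In symbols, the ground ring and the restricted power series algebra just described are (a summary of the above):
\[
R = \kappa[[t]], \qquad K = \kappa((t)), \qquad R\{x_1,\dots,x_n\} = \Big\{ \sum_{i \in \mathbb{Z}^n_{\geq 0}} c_i\, x^i \in R[[x_1,\dots,x_n]] \;:\; c_i \to 0 \ t\text{-adically as } |i| \to \infty \Big\},
\]
and $K\{x_1,\dots,x_n\} := R\{x_1,\dots,x_n\} \otimes_R K$ is the corresponding Tate algebra over $K$, whose Berkovich spectrum is the closed unit polydisc appearing just below.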
If there is no confusion, we will use these two notions interchangeably. For instance, if $\mathfrak{X} = \operatorname{Spf}(R\{x_1,\dots,x_n\})$, then the generic fiber is the closed unit polydisc $D^n(0,1)$ in affine space, where $D^n(0,r) = \operatorname{Sp}_B(K\{r_1^{-1}x_1,\dots,r_n^{-1}x_n\})$ for $r = (r_1,\dots,r_n)$. By Berkovich's classification theorem, there are four types of Berkovich points in the affine line, and each point is the limit of a sequence of norms $\|\cdot\|_{D_n}$ corresponding to a nested sequence $D_1 \supset D_2 \supset \cdots$ of discs of positive radius. We fix a locally finite covering $\{\mathfrak{X}_i\}_{i\in I}$ of $\mathfrak{X}$, where the $\mathfrak{X}_i$ are affine formal subschemes of the form $\operatorname{Spf}(A_i)$ with $A_i$ a topologically finitely generated $R$-algebra. Then for any $i, j \in I$ the intersection $\mathfrak{X}_{ij} = \mathfrak{X}_i \cap \mathfrak{X}_j$ is also an affine subscheme of the same form. The generic fiber $\mathfrak{X}_{ij,\eta}$ is a closed analytic domain in $\mathfrak{X}_{i,\eta}$, and the canonical morphism $\mathfrak{X}_{ij,\eta} \to \mathfrak{X}_{i,\eta} \times \mathfrak{X}_{j,\eta}$ is a closed immersion. By [9], we can glue the $\mathfrak{X}_{i,\eta}$ to get an analytic space $\mathfrak{X}_\eta$.

2.2. The specialization map. The special fiber $\mathfrak{X}_s$ is an $R/J = \kappa$-scheme. The specialization map of [62, §2.2], $sp: \mathfrak{X}_\eta \to \mathfrak{X}_s$, sends points of the generic fibre $\mathfrak{X}_\eta$ to the special fibre $\mathfrak{X}_s$. In the affine case, let $x \in \mathfrak{X}_\eta$ be a point, corresponding to a semi-norm on $\mathcal{A}$, and let $\mathcal{H}(x) = \mathcal{A}/\wp_x$, where $\wp_x$ is the kernel of $x$. Then the point $x$ gives a character map $\widetilde{\chi}_x$ on $A$ with values in the residue field of $\mathcal{H}(x)$, and the kernel of the map $\widetilde{\chi}_x$ is defined to be the image of $x$ under $sp$. Let $Y \subset \mathfrak{X}_s$ be a closed subset given by an ideal; then $sp^{-1}(Y)$ is an open analytic domain in $\mathfrak{X}_\eta$. This correspondence means that under the reduction map the preimage of a closed subset is open, and similarly the preimage of an open subset is closed; this is one of the special properties of Berkovich analytic spaces. Let $f \in \kappa[T_1,\dots,T_m]$, regarded as a morphism $f: \mathbb{A}^m_\kappa \to \mathbb{A}^1_\kappa = \operatorname{Spec}(\kappa[t])$, and let $\mathfrak{X} = \operatorname{Spf}(A)$, with $A = R\{T\}/(f - t)$, be the $t$-adic completion of the morphism $f$; here $R\{T\} := R\{T_1,\dots,T_m\}$ is the algebra of convergent power series. The formal scheme $\mathfrak{X} \to \operatorname{Spf}(R)$ is a stft formal scheme, see [62]. The special fiber $\mathfrak{X}_s$ is the $\kappa$-scheme $\operatorname{Spec}(A/(t))$, canonically isomorphic to the fiber of $f$ over $0$. The generic fiber $\mathfrak{X}_\eta = \operatorname{Sp}_B(A \otimes_R K)$ is a Berkovich analytic space over the field $K$. Let $\operatorname{Crit}(f)$ be the critical subscheme of $f$ inside $\mathbb{A}^m_\kappa$; we assume that $\operatorname{Crit}(f) \subset \mathfrak{X}_s$. Let $\widehat{\mathfrak{X}}$ be the formal completion of $\mathfrak{X}$ along $\operatorname{Crit}(f)$. Then the special fiber $\widehat{\mathfrak{X}}_s$ is the subscheme $\operatorname{Crit}(f)$, and the generic fiber $\widehat{\mathfrak{X}}_\eta = sp^{-1}(\operatorname{Crit}(f))$ is a subanalytic space of $\mathfrak{X}_\eta$. For any $x \in \mathfrak{X}_s$, $\mathcal{F}_x := \mathcal{F}_x(f) = sp^{-1}(x)$ is called the analytic Milnor fiber of $f$ at $x$.

Sheaf of vanishing cycles. We recall the vanishing cycle functor for schemes in [32], as reviewed in [10, §5]. Let $S = \operatorname{Spec}(R)$ be the spectrum of $R$; the scheme $S$ consists of the closed point $s = \operatorname{Spec}(\kappa)$ and the generic point $\eta = \operatorname{Spec}(K)$. The field $K$ is quasi-complete ([9, §2.4]) and the valuation on $K$ extends uniquely to the separable closure $K^s$, so the integral closure of $R$ in $K^s$ coincides with $R^s$, the ring of integers of $K^s$. Set $\bar{S} = \operatorname{Spec}(R^s) = \{\bar{s}, \bar{\eta}\}$. Let $X$ be a scheme over $S$, and let $X_s$ and $X_\eta$ (resp. $\bar{X}_{\bar{s}}$ and $\bar{X}_{\bar{\eta}}$) be the closed and generic fibers of $X$ (resp. of $\bar{X} = X \times_S \bar{S}$). We have a canonical diagram of inclusions, and the nearby cycles functor is given by $\Psi_\eta(F) = \bar{i}^{\ast}(\bar{j}_{\ast}\bar{F})$, where $\bar{F}$ is the pullback to $\bar{X}_{\bar{\eta}}$ of a sheaf $F$ on $X_\eta$. The functor $\Psi_\eta$ takes values in the category of étale sheaves on $\bar{X}_{\bar{s}}$ endowed with a continuous action of $G_\eta = G(K^s/K)$ compatible with the action of $G(\kappa^s/\kappa)$ on $\bar{X}_{\bar{s}}$. Let $Fsch_R$ be the category of stft formal $R$-schemes, and let $\mathfrak{X} \in Fsch_R$ be a formal $R$-scheme. For $n \geq 1$, denote the scheme $(\mathfrak{X}, \mathcal{O}_{\mathfrak{X}}/t^n\mathcal{O}_{\mathfrak{X}})$ by $\mathfrak{X}_n$.
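Before turning to étale morphisms, we record the analytic Milnor fiber construction above in one display (a recap in our notation):
\[
\mathfrak{X} = \operatorname{Spf}\big(R\{T_1,\dots,T_m\}/(f - t)\big), \qquad \mathcal{F}_x = sp^{-1}(x) \subset \mathfrak{X}_\eta \quad \text{for } x \in \mathfrak{X}_s,
\]
so $\mathcal{F}_x$ consists of the $K$-analytic points of the generic fiber that specialize to $x$; its étale cohomology computes the nearby cycles of $f$ at $x$, by the comparison recalled below.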
A morphism of formal schemes over $R$, $\varphi: \mathfrak{Y} \to \mathfrak{X}$, is said to be étale if for all $n \geq 1$ the induced morphisms of schemes $\varphi_n: \mathfrak{Y}_n \to \mathfrak{X}_n$ are étale. A morphism of formal schemes $\varphi: \mathfrak{Y} \to \mathfrak{X}$ induces morphisms between the generic and special fibres, $\varphi_\eta: \mathfrak{Y}_\eta \to \mathfrak{X}_\eta$ and $\varphi_s: \mathfrak{Y}_s \to \mathfrak{X}_s$, where $\varphi_\eta$ is a morphism of Berkovich analytic spaces and $\varphi_s$ is a morphism of schemes. Two known results from [10] are needed to construct vanishing cycles. Étale morphisms between $K$-analytic spaces are defined similarly, see [10, §2]. Denote by $\mathfrak{X}_{\eta,\text{ét}}$ the étale site of $\mathfrak{X}_\eta$, the site induced from the Grothendieck topology of all étale morphisms of $K$-analytic spaces, and let $\mathfrak{X}^{\sim}_{\eta,\text{ét}}$ be the category of sheaves of sets on the étale site $\mathfrak{X}_{\eta,\text{ét}}$. For two Berkovich $K$-analytic spaces $X_\eta$ and $Y_\eta$, a morphism $\psi: Y_\eta \to X_\eta$ is called quasi-étale if for every point $y \in Y_\eta$ there exist affinoid domains $V_{\eta,1},\dots,V_{\eta,n} \subset Y_\eta$ such that the union $V_{\eta,1} \cup \cdots \cup V_{\eta,n}$ is a neighbourhood of $y$ and each $V_{\eta,i}$ is identified with an affinoid domain in a $K$-analytic space étale over $X_\eta$. A basic fact from [10, Proposition 2.3] is that an étale morphism $\varphi: \mathfrak{Y} \to \mathfrak{X}$ of formal schemes induces a quasi-étale morphism $\varphi_\eta: \mathfrak{Y}_\eta \to \mathfrak{X}_\eta$ on the generic fibres. Denote by $\mathfrak{X}_{\eta,q\text{ét}}$ the quasi-étale site of $\mathfrak{X}_\eta$, the site induced from the Grothendieck topology of all quasi-étale morphisms of $K$-analytic spaces, and let $\mathfrak{X}^{\sim}_{\eta,q\text{ét}}$ be the category of sheaves of sets on the quasi-étale site $\mathfrak{X}_{\eta,q\text{ét}}$. There is a natural morphism of sites $\mathfrak{X}_{\eta,q\text{ét}} \to \mathfrak{X}_{\eta,\text{ét}}$, understood as the pullback. Let $\mathfrak{Y}_s \mapsto \mathfrak{Y}$ be the functor obtained by inverting the functor in Lemma 2.3. Then from Lemma 2.4 and the fact that étale morphisms of formal schemes induce quasi-étale morphisms on generic fibres, the composition of the functors $\mathfrak{Y}_s \mapsto \mathfrak{Y}$ and $\mathfrak{Y} \mapsto \mathfrak{Y}_\eta$ gives a morphism of sites
\[
(2.4.2) \qquad \nu: \mathfrak{X}_{\eta,q\text{ét}} \to \mathfrak{X}_{s,\text{ét}},
\]
with pushforward functor $\nu_{\ast}: \mathfrak{X}^{\sim}_{\eta,q\text{ét}} \to \mathfrak{X}^{\sim}_{s,\text{ét}}$ obtained from this composition. Let $F$ be an étale abelian torsion sheaf over $\mathfrak{X}_\eta$, let $\bar{\mathfrak{X}}_\eta = \mathfrak{X}_\eta \widehat{\otimes} K^s$, and let $\bar{F}$ be the pullback of $F$ to $\bar{\mathfrak{X}}_\eta$. Then the nearby cycle functor $\Psi_\eta$ is defined as follows.

Definition 2.5. The nearby cycle functor is defined as $\Psi_\eta(F) = \nu_{\ast}(\bar{F})$. Let $x \in \mathfrak{X}_s$ be a point and let $\mathbb{Q}_\ell$ be the constant étale abelian sheaf. Then the stalk of $\Psi_\eta(\mathbb{Q}_\ell)$ at $x$ is computed by the étale cohomology of the analytic Milnor fiber $\mathcal{F}_x$, and this isomorphism is compatible with the monodromy action.

Formal models of non-archimedean analytic spaces.

Definition 2.8. Let $X$ be a quasi-compact non-archimedean $K$-analytic space. A formal model $\mathfrak{X}$ of $X$ consists of a stft formal scheme $\mathfrak{X}$ over $R$ such that the generic fiber $\mathfrak{X}_\eta$ is isomorphic to $X$.

We recall simple normal crossing (SNC for short) formal schemes: (1) the special fiber is a simple normal crossing divisor $\sum_i m_i D_i$ with multiplicities $m_i \in \mathbb{Z}_{>0}$, and we assume that no $m_i$ equals the characteristic of $\kappa$; (2) all the intersections of the irreducible components of the special fiber $\mathfrak{X}_s$ are either empty or geometrically irreducible. Let $X$ be a quasi-compact non-archimedean $K$-analytic space and let $\mathfrak{X}$ be an SNC formal model of $X$. If the characteristic $\operatorname{Ch}(\kappa) = 0$, then Temkin [72] shows that an SNC formal model $\mathfrak{X}$ always exists, by resolution of singularities. Let $\{D_i \mid i \in I_{\mathfrak{X}}\}$ denote the set of irreducible components of $\mathfrak{X}^{red}_s$ with the reduced scheme structure. For any $I \subset I_{\mathfrak{X}}$, let $D_I = \bigcap_{i \in I} D_i$, and denote by $m_i$ the multiplicity of $D_i$ in $\mathfrak{X}_s$. We recall the Clemens polytope $S_{\mathfrak{X}}$ of the formal scheme $\mathfrak{X}$.

Definition 2.10. The Clemens polytope $S_{\mathfrak{X}}$ of an SNC formal scheme $\mathfrak{X}$ is the simplicial subcomplex of the simplex $\Delta^{I_{\mathfrak{X}}}$ such that, for every non-empty subset $I \subset I_{\mathfrak{X}}$, the simplex $\Delta^I$ is a face of $S_{\mathfrak{X}}$ if and only if $D_I$ is non-empty.
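A minimal example: if $\mathfrak{X}_s = D_1 \cup D_2$ has two irreducible components meeting in a non-empty irreducible $D_{12} = D_1 \cap D_2$, then $S_{\mathfrak{X}}$ is a single edge joining the two vertices:
\[
S_{\mathfrak{X}} = \Delta^{\{1\}} \cup \Delta^{\{2\}} \cup \Delta^{\{1,2\}}.
\]
For a formal model of an elliptic curve whose special fiber is a circle of $n$ projective lines, as in §1.1, the Clemens polytope is an $n$-cycle, matching the skeleton described there.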
From [12], there is a deformation retraction map
\[
(2.6.1) \qquad \tau: \mathfrak{X}_\eta \to S_{\mathfrak{X}}
\]
onto the Clemens polytope. According to [78], this retraction map corresponds to the Gross fibration and is the non-archimedean version of the SYZ fibration.

Definition 2.11. A simple function $\varphi$ on the Clemens polytope $S_{\mathfrak{X}}$ is a real-valued function that is affine on every simplicial face of $S_{\mathfrak{X}}$. For $i \in I_{\mathfrak{X}}$, let $\varphi(i)$ be the value of $\varphi$ at the vertex $i$.

Let $\operatorname{Div}_0(\mathfrak{X})_{\mathbb{R}}$ be the vector space of $\mathbb{R}$-divisors on $\mathfrak{X}$, which are, by definition, Cartier divisors on $\mathfrak{X}$ supported on $\mathfrak{X}_s$; it has dimension $|I_{\mathfrak{X}}|$. Effective divisors $D$ on $\mathfrak{X}$ are defined similarly, and such a divisor is locally given by a function $u$. The valuation $\operatorname{val}(u(x))$ defines a continuous function on $\mathfrak{X}_\eta$, which we denote by $\varphi^0_D$. Then $D \mapsto \varphi^0_D$ extends by linearity to a map from $\operatorname{Div}_0(\mathfrak{X})_{\mathbb{R}}$ to the space of continuous functions $C^0(\mathfrak{X}_\eta)$ on $\mathfrak{X}_\eta$. The retraction $\tau$ of (2.6.1) has the following properties: (1) the image of $\tau$ coincides with $S_{\mathfrak{X}}$; (2) for any $D = \sum_i a_i D_i \in \operatorname{Div}_0(\mathfrak{X})_{\mathbb{R}}$, there exists a unique simple function $\varphi_D$ on $S_{\mathfrak{X}}$ such that $\varphi^0_D = \varphi_D \circ \tau$.

Remark 2.13. The Clemens polytope $S_{\mathfrak{X}}$ is the tropicalization of the $K$-analytic space $X$, and it satisfies a functoriality property as in [77, Proposition 2.9].

3. CONSTRUCTION OF MODULI OF (SEMI)-STABLE SHEAVES OVER LOCALLY NOETHERIAN SCHEMES

Let $S$ be a locally noetherian base scheme. We define the moduli space of semistable sheaves $\mathcal{M}_{\mathcal{X}}(P)$ on a scheme $\mathcal{X}$ locally of finite presentation over $S$, and construct the moduli stack of semistable sheaves over such a scheme. This will be used to construct $K$-analytic semistable sheaves over a $K$-analytic space $X$. We fix $S$ to be a locally noetherian scheme, and let $\mathcal{S}ch_S$ be the category of schemes over $S$. The moduli functor sends $T \in \mathcal{S}ch_S$ to the groupoid of families of semistable sheaves, where the equivalence relation $\sim$ is given by $F \sim F' \iff F \cong F' \otimes p^{\ast}L$ for a line bundle $L$ on $T$, where $p: F \to T$ is the family of semistable sheaves.

Remark 3.2. We recall the semistability of coherent sheaves here; the precise (Gieseker) condition is recorded in the display below. A morphism between two families is given by a commutative diagram; then $\mathcal{M}_S(P)(\mathcal{X})$ is a category of semistable sheaves over $\mathcal{X}/S$ fibered in groupoids over $\mathcal{S}ch_S$.

Theorem 3.3. Let $\mathcal{X}$ be a scheme locally of finite presentation over a locally noetherian scheme $S$. Then the functor $\mathcal{M}_S(P)(\mathcal{X})$ is an algebraic stack locally of finite presentation over $S$.

The proof of Theorem 3.3 is based on checking conditions (1), (2), (3), (4) of Theorem 5.3 in [2]. We list them as lemmas. First, from [75, Tag 07WP]:

Lemma 3.4. Let $T, T', T_1, T_2$ be spectra of local Artinian rings of finite type over $S$. Assume that $T \to T'$ is a closed immersion and that $T_2 = T_1 \sqcup_T T'$ is a pushout diagram in $\mathcal{S}ch_S$. Then the induced functor of fiber categories is an equivalence of groupoids.

Proof. The pushout property here follows from the corresponding property for quasi-coherent sheaves, see [75, Tags 08LQ and 08IW].

Condition (2) of [2, Theorem 5.3] is the limit-preserving property: let $\widehat{A}$ be a complete local algebra over $S$ with maximal ideal $\mathfrak{m}$ whose residue field is of finite type over $S$; then the canonical map
\[
\mathcal{M}_S(P)(\mathcal{X})(\operatorname{Spec}\widehat{A}) \longrightarrow \varprojlim_l \mathcal{M}_S(P)(\mathcal{X})(\operatorname{Spec}\widehat{A}/\mathfrak{m}^l)
\]
is an equivalence of groupoids.

Proof. Let $\{F^{(l)} \xrightarrow{f^{(l)}} \operatorname{Spec}\widehat{A}/\mathfrak{m}^l\}$ be a formal object on the right-hand side; then the Grothendieck existence theorem for formal schemes tells us that there exists an object $\{F \xrightarrow{f} \operatorname{Spec}\widehat{A}\}$ on the left-hand side. Hence we only need to show that $\{F \xrightarrow{f} \operatorname{Spec}\widehat{A}\}$ is semistable over $\widehat{A}$. This follows from the fact that the semistability condition is a closed condition on the base scheme $T$ and from the full faithfulness of the functor given by Grothendieck's existence theorem.
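For the reader's convenience, the Gieseker (semi)stability recalled in Remark 3.2 is the standard notion of [34]: writing the Hilbert polynomial of a coherent sheaf $F$ of dimension $d$ as $P_F(m) = \sum_{i=0}^{d} \alpha_i(F)\, m^i/i!$ and setting the reduced Hilbert polynomial $p_F := P_F/\alpha_d(F)$, the sheaf $F$ is semistable if and only if
\[
F \ \text{is pure of dimension } d, \quad \text{and} \quad p_{F'}(m) \leq p_F(m) \ \text{for } m \gg 0 \ \text{for every proper nonzero subsheaf } F' \subset F,
\]
with stability defined by the strict inequality for all such $F'$.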
All the other conditions in [2, Theorem 5.3] concern deformations and obstructions. For a family $F$: (1) the module of infinitesimal automorphisms is given by the standard deformation-theoretic group $\operatorname{Hom}(F, F)$, with deformations and obstructions governed by $\operatorname{Ext}^1(F, F)$ and $\operatorname{Ext}^2(F, F)$.

Proof. These are the standard results in the deformation-obstruction theory of coherent sheaves, as in [74].

This lemma verifies condition (3) and the last part of condition (1) of Theorem 5.3 of [2]. We are left to check condition (4), the "local quasi-separation" property. Let $x := \{F \xrightarrow{f} T\}$ be an element of $\mathcal{M}_S(P)(T)$ and let $\varphi$ be an automorphism of $x$. Suppose that $\varphi$ induces the identity on $\mathcal{M}_S(P)(T)$ for a dense set of points $t \in T$ of finite type; then $\varphi$ is the identity on a dense set of points of finite type of $F$. Hence $\varphi$ must be the identity on the whole space, since $F \to T$ is flat and separated over $T$. So from [2, Theorem 5.3], the category $\mathcal{M}_S(P)$ is an algebraic stack locally of finite presentation over $S$. This finishes the proof of Theorem 3.3.

4. THE MODULI STACK OF NON-ARCHIMEDEAN (SEMI)-STABLE SHEAVES

4.1. The construction. We construct the moduli stack of formal semistable coherent sheaves and the moduli stack of non-archimedean analytic semistable sheaves. We first recall the definitions of formal stacks locally of finite type over $R$ and of strictly $K$-analytic stacks.

Definition 4.1. A formal stack $\mathfrak{X}$ locally of finite type over $R$ is a stack fibered in groupoids over the site $Fsch_R$ such that the diagonal morphism $\mathfrak{X} \to \mathfrak{X} \times_R \mathfrak{X}$ is representable and there exists a formal scheme $\mathfrak{U}$ locally of finite type over $R$ and a smooth effective epimorphism $\mathfrak{U} \to \mathfrak{X}$.

Definition 4.2. A strictly $K$-analytic stack $X$ is a stack fibered in groupoids over the site $An_K$ such that the diagonal morphism $X \to X \times_K X$ is representable and there exists a strictly $K$-analytic space $U$ and a quasi-smooth effective epimorphism $U \to X$. We say that a strictly $K$-analytic stack $X$ is compact if there is a covering $\{U_i\}$ by compact strictly $K$-analytic spaces.

Remark 4.4. For a fixed Hilbert polynomial $P$ and a non-archimedean analytic space $X$, we use the étale sheaf cohomology $H^i(X, F)$ of [9] for the Berkovich space $X$ to define $P(F)$. We take into account the analytification functor $(\cdot)^{an}$, the special fiber functor $(\cdot)_s$, and the generic fiber functor $(\cdot)_\eta$ for a formal scheme in the definitions above. We have:

Proof. A geometric point of $T_\eta$ is given by a morphism from the spectrum of some algebraically closed non-archimedean field $K'$. Let $R'$ be the ring of integers of $K'$ and let $T' = \operatorname{Spf}(R')$. Let $T' \to T$ be the morphism given by $R'$, and pull back the family over $T$ to $T'$. By flatness, it suffices to check semistability after applying the special fiber functor $(\cdot)_s$, which gives a semistable sheaf over $T'_s$. Considering the reduction maps, relative GAGA shows that the family on $\mathfrak{X}_s$ is semistable, and hence so is the family on $\mathfrak{X}_\eta$. We then prove some global results on the moduli spaces, parallel to the definitions and lemmas above. Let $\mathfrak{X}$ be a stft formal $R$-scheme and let $\mathcal{M}_R(P)(\mathfrak{X})$ be the moduli stack of semistable formal sheaves over $\mathfrak{X}$ with Hilbert polynomial $P$; we denote by $\mathcal{M}_K(P)(X)$ the moduli stack of $K$-analytic semistable coherent sheaves with Hilbert polynomial $P$. Let $\varpi$ be a uniformizer of the field $K$, let $S_m = \operatorname{Spec}(R/(\varpi^{m+1}))$ and $\mathfrak{X}_m = \mathfrak{X} \times_R S_m$, and form the limit $\varprojlim_m \mathcal{M}_{S_m}(P)(\mathfrak{X}_m)$, where both stacks are over the site $Fsch_R$. Hence there is a natural isomorphism between the special fiber of $\mathcal{M}_R(P)(\mathfrak{X})$ and the moduli stack over $\kappa$, where $(\cdot)_s$ denotes the special fiber functor.

Proof. Since $S_m$ is a locally noetherian scheme, from Theorem 3.3, $\mathcal{M}_{S_m}(P)(\mathfrak{X}_m/S_m)$ is an algebraic stack locally of finite presentation over $S_m$. If $T \in Fsch_R$ is a formal scheme, let $T_m = T \times_R S_m$.
From the definition of the limit, a morphism $T \to \varprojlim_m \mathcal{M}_{S_m}(P)(\mathfrak{X}_m)$ is given by a compatible sequence of morphisms $t_m: T_m \to \mathcal{M}_{S_m}(P)(\mathfrak{X}_m)$ such that $t_m = t_{m+1} \times_{S_{m+1}} S_m$. So we have a family of formal semistable coherent sheaves $\mathfrak{F} \to T$, and hence a morphism $T \to \mathcal{M}_R(P)(\mathfrak{X})$. The result follows.

The following result is an analogue of Theorem 8.7 in [77]. If a $K$-analytic space is the analytification of a proper algebraic variety over $K$, then the representability of the moduli stack is given by non-archimedean analytic GAGA.

Theorem 4.9. Let $X$ be a proper algebraic variety over $K$. There exists a natural isomorphism of stacks $(\mathcal{M}_K(P)(X))^{an} \cong \mathcal{M}_K(P)(X^{an})$.

Proof. Let $T = \operatorname{Sp}_B(A)$ be the Berkovich spectrum of a strictly $K$-affinoid algebra $A$. A morphism $T \to (\mathcal{M}_K(P)(X))^{an}$ gives rise to a family of semistable sheaves over $\operatorname{Spec}(A)$. By Lemma 4.5, the analytification of (4.1.1) gives a family of $K$-analytic semistable coherent sheaves $F \to \operatorname{Sp}_B(A)$, so we get a morphism $T \to \mathcal{M}_K(P)(X^{an})$. The construction is functorial, hence we have a natural morphism $(\mathcal{M}_K(P)(X))^{an} \to \mathcal{M}_K(P)(X^{an})$. We show that this functor is an equivalence of categories fibered in groupoids over $An_K$. It is faithful (immediately from the construction). To prove surjectivity, let $(F \xrightarrow{f} X^{an})$ be a family of semistable $K$-analytic coherent sheaves on $X^{an}$ over $T$, and let $X^{an}_T = X^{an} \times_K T$. Then by $K$-analytic GAGA [21], [22], $F \to T$ is the analytification of an algebraic family of semistable coherent sheaves over $\operatorname{Spec}(A)$. So the functor is surjective. The fullness of the functor also follows from the GAGA theorem.

The following result implies that one can globally take the generic fiber of the moduli stack of formal coherent sheaves.

Theorem 4.10. Let $\mathfrak{X}$ be a stft formal $R$-scheme. There is a natural morphism of stacks over the category $An_K$ of non-archimedean $K$-analytic spaces:
\[
\Phi: (\mathcal{M}_R(P)(\mathfrak{X}))_\eta \longrightarrow \mathcal{M}_K(P)(\mathfrak{X}_\eta).
\]

In order to prove Theorem 4.10, the existence of formal models of $K$-analytic semistable sheaves (Theorem 4.11) is essential.

Proof (of Theorem 4.11). From [15], there exists a formal model of $(F \xrightarrow{f} T)$ which is a flat family of formal coherent sheaves. Parallel to the stable curve case in [77], which uses De Jong's method of alterations, we need to modify the formal model $(\mathfrak{F} \xrightarrow{\mathfrak{f}} \mathfrak{T})$ so that it becomes semistable. First let $A$ be a topological algebra, finitely presented over $R$, such that $\mathfrak{T} = \operatorname{Spf}(A)$, and let $\mathfrak{T}^{alg} = \operatorname{Spec}(A)$. Since $(\mathfrak{F} \xrightarrow{\mathfrak{f}} \mathfrak{T})$ is a flat family of coherent sheaves over $\mathfrak{T}$, by the formal GAGA of Grothendieck and Conrad [21], $(\mathfrak{F} \xrightarrow{\mathfrak{f}} \mathfrak{T})$ is isomorphic to the completion of a family of algebraic coherent sheaves along the special fiber of $\mathfrak{T}$. Since the semistability of sheaves is an open condition [46], there exists an open locus, equivalently an étale covering $\mathfrak{T}'^{alg}$ of $\mathfrak{T}^{alg}$, over which the family is semistable. Taking the completion along the special fiber over $\kappa$, we get the desired semistable family of $K$-analytic coherent sheaves.

Proof of Theorem 4.10. First, if we have an affine stft formal scheme $T$ over $R$, then a morphism $T \to \mathcal{M}_R(P)(\mathfrak{X})$ gives rise to a family of formal semistable coherent sheaves of $\mathfrak{X}$ over $T$. By Lemma 4.7, applying the generic fiber functor we get a family of $K$-analytic semistable coherent sheaves; hence we have the morphism $\Phi$. We prove that the functor is an equivalence of groupoids for any strictly $K$-affinoid space $T$. By construction, the functor is faithful. We prove that the functor $\Phi$ is full. Suppose that we have two families of formal semistable sheaves of $\mathfrak{X}$ over $\mathfrak{T}_1$ and $\mathfrak{T}_2$, respectively.
Assume that we have an isomorphism of $K$-analytic semistable sheaves after passing to the generic fiber, giving a commutative diagram. From [64, Proposition 2.19], up to replacing $\mathfrak{T}_1$ and $\mathfrak{T}_2$ by admissible blow-ups, we may assume that $\mathfrak{T}_1 \cong \mathfrak{T}_2$, which we denote by $\mathfrak{T}_{12}$. As in the proof of Theorem 4.11, up to passing to a Zariski open covering of $\mathfrak{T}_{12}$, formal GAGA implies that the isomorphism extends over $\mathfrak{T}_{12}$, and this proves the fullness of the functor $\Phi$. The surjectivity of $\Phi$ is given by the following argument. Let $T \to \mathcal{M}_K(P)(\mathfrak{X}_\eta)$ be a morphism, which gives a family of $K$-analytic semistable coherent sheaves of $\mathfrak{X}_\eta$ over $T$. By Theorem 4.11, up to a quasi-étale covering $T' \to T$, there exists a formal scheme $\mathfrak{T}'$ and a family of formal semistable coherent sheaves over it; applying the generic fiber functor, we get the family $(F \to T)$ back. So we obtain the required morphisms, and using the proof of fullness, the morphism $\Phi$ is an equivalence of groupoids. The theorem follows.

4.3.1. Kähler structures on non-archimedean analytic spaces. In the case of curve counting via stable maps in Gromov-Witten theory, or of sheaf counting on a Calabi-Yau threefold $Y$ in Donaldson-Thomas theory, the degree $\beta$ of the curve is a second homology class in $H_2(Y, \mathbb{Z})$. To make this work in non-archimedean geometry, we make use of the Kähler structures of Kontsevich-Tschinkel on the $K$-analytic space $X$, as reviewed in [77, §3]; we review the most useful part of [77]. Let $X$ be a $K$-analytic space with SNC formal model $\mathfrak{X}$. Let $Sim_{\mathfrak{X}}$ be the sheaf on $S_{\mathfrak{X}}$ such that, for any open subset $U$ of $S_{\mathfrak{X}}$, $Sim_{\mathfrak{X}}(U)$ is the set of simple functions of $S_{\mathfrak{X}}$ restricted to $U$. As in [77], let $Lin_{\mathfrak{X}}$ (resp. $Conv_{\mathfrak{X}}$, $SConv_{\mathfrak{X}}$) be the subsheaf of $Sim_{\mathfrak{X}}$ whose germs are germs of linear (resp. convex, strictly convex) functions. The sheaf $Lin_{\mathfrak{X}}$ acts on the sheaf $Sim_{\mathfrak{X}}$ (resp. $Conv_{\mathfrak{X}}$, $SConv_{\mathfrak{X}}$) by addition, $\psi \cdot \varphi = \psi + \varphi$, where $\psi$ is a local section of $Lin_{\mathfrak{X}}$ and $\varphi$ is a local section of $Sim_{\mathfrak{X}}$ (resp. $Conv_{\mathfrak{X}}$, $SConv_{\mathfrak{X}}$).

Definition 4.14. A virtual line bundle $L$ on a non-archimedean $K$-analytic space $X$ with respect to the formal model $\mathfrak{X}$ is a torsor over the sheaf $Lin_{\mathfrak{X}}$. A simple (resp. convex, strictly convex) metrization $\widehat{L}$ of a virtual line bundle $L$ is a global section of the sheaf $Sim_{\mathfrak{X}} \otimes L$ (resp. $Conv_{\mathfrak{X}} \otimes L$, $SConv_{\mathfrak{X}} \otimes L$), where the tensor product is taken over the sheaf $Lin_{\mathfrak{X}}$.

Definition 4.15. A Kähler structure $\widehat{L}$ on $X$ with respect to the formal model $\mathfrak{X}$ is a virtual line bundle $L$ over $X$ together with a strictly convex metrization $\widehat{L}$.

For each $i \in I_{\mathfrak{X}}$, a simple metrization $\widehat{L}$ gives a germ of a simple function $\varphi_i$ at the vertex $i$, up to addition of linear functions, and we get a collection of numerical (curvature) classes. Tony Yu also proves a functoriality property of the curvature. Next we define the degree of a virtual line bundle on curves. Let $X$ be a smooth connected proper $K$-analytic curve and $\mathfrak{X}$ its formal model; then the Clemens polytope $S_{\mathfrak{X}}$ is a finite connected simple graph, see [8, §4]. Although the $K$-analytic curve $X$ is smooth, the singularities of the formal model (only double point singularities) correspond to Type II Berkovich points in $X$. In [3], Baker, Payne, and Rabinoff prove that the semistable vertex sets of $X$ are in natural bijective correspondence with the semistable models of $X$, where a semistable vertex set of $X$ is a finite set of Type II Berkovich points in $X$. Let us fix an order on the set $I_{\mathfrak{X}}$. For $i, j \in I_{\mathfrak{X}}$, we say $i < j$ if $i, j$ are connected by an edge and $i$ precedes $j$ with respect to the fixed order.
For $i \in I_{\mathfrak{X}}$, let $U_i$ be the open neighborhood given by the union of the vertex $i$ and all the open edges whose closure contains $i$. If $e_{ij}$ is an edge, then $U_i \cap U_j$ is the interior of $e_{ij}$. As in [77, §5], using Čech cohomology of the open cover $\{U_i\}$, Tony Yu constructs a degree map for virtual line bundles on curves.

Moduli of analytic curve counting via ideal sheaves. For the sheaf counting theory, we fix our $K$-analytic space $X$ to be three-dimensional over $K$. For instance, if $X$ is a smooth Calabi-Yau threefold over $\kappa$, then the analytification $X^{an}$ of $X$ is a smooth three-dimensional $K$-analytic space. Let $\mathfrak{X}$ be an SNC formal model of $X$, let $P$ be the Hilbert polynomial determined by the Chern character of the sheaves under consideration, and fix a Kähler structure $\widehat{L}$ on $X$ with respect to the SNC formal model $\mathfrak{X}$. In this section we mainly consider the ideal sheaves $I_C$ of connected proper smooth $K$-analytic curves $C$, which are automatically stable. We fix a Hilbert polynomial $P$. Let $I^P_K(X)$ be the moduli stack of $K$-analytic ideal sheaves over $X$ with Hilbert polynomial $P$; similarly, let $I^P_R(\mathfrak{X})$ and $I^P_\kappa(\mathfrak{X}_s)$ be the moduli stacks of stable formal ideal sheaves over $\mathfrak{X}$ and of algebraic ideal sheaves over $\mathfrak{X}_s$, respectively, with Hilbert polynomial $P$. In this section we prove that an ideal sheaf (or, more generally, a stable coherent system as in [52]) on a proper $K$-analytic space $X$, after a choice of SNC formal model $\mathfrak{X}$ of $X$, restricts on the special fiber $\mathfrak{X}_s$ to a stable sheaf on the SNC divisors of $\mathfrak{X}_s$; hence we deduce a decomposition result for the moduli space. From Theorem 4.10 and Proposition 4.8 we obtain:

Proposition 4.19. If the formal scheme $\mathfrak{X}$ is proper and $\mathfrak{X}_s$ has only simple normal crossing divisors, then the moduli stack $I^P_\kappa(\mathfrak{X}_s)$ is proper.

Proof. As before, let $\{D_i \mid i \in I_{\mathfrak{X}}\}$ be the smooth irreducible components of $\mathfrak{X}_s$. Every ideal sheaf $I_Z$ of a curve $Z \subset \mathfrak{X}_s$ with Hilbert polynomial $P$ admits a splitting of $P$, that is, a set $\gamma = \{P_I \mid I \subset I_{\mathfrak{X}}\}$ of Hilbert polynomials of the restrictions to the $D_I$. Thus there is a closed embedding of moduli stacks sending an ideal $I_Z$ to its restrictions to the $D_I$. Since every $D_I$ is smooth and proper, each moduli stack $I^{P_I}_\kappa(D_I)$ is proper; hence $I^P_\kappa(\mathfrak{X}_s)$ is proper.

We define a universal stack $\mathfrak{M} := \mathfrak{M}_{(X,\mathfrak{X})}$ of SNC formal models of $(X, \mathfrak{X})$. Recall that $I_{\mathfrak{X}}$ is the index set of irreducible components of $\mathfrak{X}_s$. For a subset $I \subset I_{\mathfrak{X}}$, an effective family over a base $S$ is given by admissible formal blow-ups of $\mathfrak{X} \times_R S$ along some $D_I$ over $S$, together with the tautological projection $\mathfrak{W} \to \mathfrak{X} \times_R S$; we denote by $\xi$ such an effective family. Let $\xi_1: S_1 \to \operatorname{Spf}(R)$ and $\xi_2: S_2 \to \operatorname{Spf}(R)$ be two effective families. An arrow $r: \xi_1 \to \xi_2$ is given by an isomorphism of the corresponding families, which we denote by $r$. We say two effective formal families $\mathfrak{W}_1$ and $\mathfrak{W}_2$ over $S$ are isomorphic if $\mathfrak{W}_1$ is $S$-isomorphic to $\mathfrak{W}_2$ via $r$, compatibly with the projections to $\mathfrak{X} \times_R S$. The universal stack $\mathfrak{M}$ of SNC formal models of $(X, \mathfrak{X})$ is then defined as the stack of such effective families.

4.3.5. Relation to the stack of expanded degenerations. Recall from [50]: let $X$ be a $\kappa$-scheme and $W \to \mathbb{A}^1_\kappa$ a degeneration family such that $W_t \cong X$ and $W_0 \cong D_1 \cup_{D_{12}} D_2$. Let $\mathfrak{X} \to \operatorname{Spf}(R)$ be the formal completion of $W$ along $W_0$; then $\mathfrak{X} \to \operatorname{Spf}(R)$ is a stft formal scheme with $\mathfrak{X}_s = W_0$. We have the universal stack of SNC formal models $\mathfrak{M}_{(X,\mathfrak{X})}$ for $\mathfrak{X}$, and since $\mathfrak{X}_s$ has only two irreducible components, it admits a concrete description. We remark: (1) Suppose that the support of $F$ is a curve $C$ on $\mathfrak{X}_s$; the normality of $F$ means that $C$ intersects $D_{12}$ transversally. (2) Transversality of the curve $C$ with the divisor $D_{12}$ of $\mathfrak{X}_s$ is essential to the study of relative Gromov-Witten and Donaldson-Thomas theory, as in [50], [51], [52].
Gross-Siebert [27] use log stable maps to log schemes to define the log version of Gromov-Witten invariants, where the normality issue is handled by logarithmic techniques; see [1] for a program covering the case that $Y$ is normal crossing. (3) Our result suggests that it is natural to use non-archimedean geometry to study degenerations of the moduli stack of stable coherent sheaves; it would also be interesting to reproduce the degeneration formula of Jun Li [51] using non-archimedean geometry. We consider the splittings of the Hilbert polynomial $P$: the sets $\gamma = \{P_1, P_2, P_{12}\}$ such that $P = P_1 + P_2 - P_{12}$. Let $I^{P_i}_\kappa(D_i, D_{12})$, for $i = 1, 2$, be the moduli stack of relative stable ideal sheaves of $D_i$ relative to $D_{12}$ in the sense of [52], using the stack of relative expanded pairs. We have the gluing theorem stated as Theorem 1.6, as in [52, Theorem 5.28].

Proof. This follows from the decomposition of the stable sheaf and the normality property.

PART II

5.1. Donaldson-Thomas invariants. Let $Y$ be a smooth Calabi-Yau threefold or a smooth Calabi-Yau threefold Deligne-Mumford stack. The Donaldson-Thomas invariants of $Y$ count stable coherent sheaves on $Y$. In [74], R. Thomas constructed a perfect obstruction theory $E^{\bullet}$, in the sense of Li-Tian [49] and Behrend-Fantechi [6], on the moduli space $X$ of stable sheaves over $Y$, hence a virtual fundamental class $[X]^{virt}$ on $X$. If $X$ is proper, then the virtual dimension of $X$ is zero, and the integral
\[
DT(Y) := \int_{[X]^{virt}} 1
\]
is the Donaldson-Thomas invariant of $Y$. Donaldson-Thomas invariants have been shown to have deep connections to Gromov-Witten theory and to provide a deeper understanding of curve counting invariants, see [56], [57], [66], etc. In the Calabi-Yau threefold case, Behrend proves in [4] that the moduli scheme $X$ of stable sheaves on $Y$ admits a symmetric obstruction theory, a notion defined in the same paper [4]. Behrend also constructs a canonical integer-valued constructible function $\nu_X: X \to \mathbb{Z}$ by using the local Euler obstruction of an intrinsic integral cycle $c_X \in Z_{\ast}(X)$ on $X$; we call $\nu_X$ the Behrend function of $X$. If $X$ is proper, then in [4, Theorem 4.18] Behrend proves that
\[
\int_{[X]^{virt}} 1 = \chi(X, \nu_X),
\]
where $\chi(X, \nu_X)$ is the Euler characteristic weighted by the Behrend function, computed via the Chern-Schwartz-MacPherson class $c_{CSM}(\nu_X)$ of the Behrend function $\nu_X$. The above result is an instance of MacPherson's index theorem, a generalization of the Gauss-Bonnet theorem to the singular scheme $X$. The same result for a proper Deligne-Mumford stack $X$ with a symmetric perfect obstruction theory was conjectured by Behrend in [4] and proved in [37]. This makes Donaldson-Thomas invariants motivic.

5.2. Categorification of Donaldson-Thomas invariants. Around 2006, Kai Behrend proposed a natural question, called the "categorification" of Donaldson-Thomas invariants: to find a cohomology theory of the Donaldson-Thomas moduli space $X$ of stable sheaves on a Calabi-Yau threefold whose Euler characteristic is the Euler characteristic weighted by the Behrend function. If locally the moduli space $X$ is the critical locus of a holomorphic function $f: M \to \kappa$ on a higher-dimensional smooth scheme $M$, then the categorification of Donaldson-Thomas invariants is given by the sheaf of vanishing cycles of $f$; see the display after this paragraph. So the question is whether, locally, the Donaldson-Thomas moduli space $X$ is the critical locus of a holomorphic function. The local (germ) deformation theory of $X$ is controlled by a differential graded Lie algebra $L$, so one can study this local question by studying the controlling differential graded Lie algebra $L$.
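The link between the sheaf of vanishing cycles and the Behrend function, which drives the categorification question, can be recorded as follows (standard; see [4], [42], [44]): for $X = \operatorname{Crit}(f)$ with $f: U \to \kappa$ on a smooth $U$,
\[
\nu_X(x) = \chi\big(\phi_f(\mathbb{Q}_U[\dim U])\big)_x, \qquad \chi(X, \nu_X) = \sum_{n \in \mathbb{Z}} n\,\chi\big(\nu_X^{-1}(n)\big),
\]
i.e., the pointwise Euler characteristic of the perverse sheaf of vanishing cycles recovers the Behrend function, so the hypercohomology of a glued global sheaf of vanishing cycles categorifies the weighted Euler characteristic $\chi(X, \nu_X)$.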
Behrend and Getzler [7] studied the local behavior of the moduli space using cyclic differential graded Lie algebras and cyclic $L_\infty$-algebras, and announced that the local function $f$ comes from the cyclic $L_\infty$-algebra structure of the Donaldson-Thomas moduli space $X$ and is holomorphic; but the paper is still not available. The local structure of the moduli space $X$ was settled by Ben-Bassat, Brav, Bussi and Joyce [16], [17], using the techniques of derived schemes from [67], [55]. In [67], Pantev, Toën, Vaquié and Vezzosi introduced the notion of $(-n)$-shifted symplectic structures on derived schemes. The moduli space $X$ of stable sheaves over the Calabi-Yau threefold $Y$ can be lifted to a $(-1)$-shifted symplectic derived scheme $\mathbf{X}$ whose underlying scheme is $X$. There is a natural inclusion $i: X \to \mathbf{X}$ into the derived scheme, and there is a symmetric obstruction theory of Behrend [4] on $X$, given by the pullback of the cotangent complex $\mathbb{L}_{\mathbf{X}}$ of $\mathbf{X}$ to $X$. Brav, Bussi and Joyce prove in [16] that if $X$ is the underlying scheme of a $(-1)$-shifted symplectic derived scheme $\mathbf{X}$, then locally $X$ is given by the critical locus of a regular function: for any point $x \in X$, there is an open neighborhood $R \ni x$ and a regular function $f: U \to \kappa$ on a smooth scheme $U$ such that $R = \operatorname{Crit}(f)$. Joyce calls $(R, U, f, i)$ a critical chart of $X$, where $i: R \hookrightarrow U$ is the inclusion. We remind the reader that here $R$ denotes the critical scheme $\operatorname{Crit}(f)$ of the function $f$, not the discrete valuation ring. All the critical charts of $X$ glue together to give a structure on $X$ which Joyce [42] calls a $d$-critical scheme. Hence, locally on $R$, we have a sheaf $MF^{\phi}_{U,f}$ of vanishing cycles of $f$. In [18], Bussi, Joyce, and Meinhardt prove that these local data of vanishing cycles glue to give a global sheaf $MF^{\phi}_X$ if there is an orientation on $X$, i.e., a square root $K^{1/2}_X$ of the canonical line bundle $K_X$; thus the categorification of $X$ is obtained. The vanishing cycle sheaf is a perverse sheaf, and Kiem and Li [44] also use the gluing of perverse sheaves to give a global sheaf $MF^{\phi}_X$ and categorify the moduli space $X$. The perverse sheaf of vanishing cycles $MF^{\phi}_X$ was recently used by D. Maulik and Y. Toda in [58] to define the Gopakumar-Vafa invariants for Calabi-Yau threefolds and relate them to Gromov-Witten invariants and Pandharipande-Thomas stable pair invariants; note that Maulik and Toda require the orientation data $K^{1/2}_{X,s}$ to be trivial, which is called CY orientation data.

5.3. Motivic Donaldson-Thomas invariants and derived schemes. The vanishing cycle sheaf can be made motivic via the notion of motivic vanishing cycles, which are elements of $\mathcal{M}^{\hat\mu}_X$, the equivariant Grothendieck ring of varieties. Kontsevich and Soibelman [45] introduced motivic Donaldson-Thomas theory for any oriented Calabi-Yau category $\mathcal{C}$. They defined the motivic weight of any object $E \in \mathcal{C}$ by using the motivic vanishing cycle of $E$ and the technique of a cyclic $A_\infty$-algebra $L_E = \operatorname{Ext}(E, E)$ associated with the object $E$. They then prove that there is a homomorphism from the motivic Hall algebra $H(\mathcal{C})$ of $\mathcal{C}$ to the motivic quantum torus of $\mathcal{C}$, and hence deduce a wall-crossing formula for motivic Donaldson-Thomas invariants. In the case that the moduli space $X$ is the global critical locus of a regular function, the motivic Donaldson-Thomas invariants are defined and studied in [5]. In [18], Bussi, Joyce, and Meinhardt also study the motivic Donaldson-Thomas invariants of an oriented $d$-critical scheme $X$.
On each $d$-critical chart $(R, U, f, i)$, one needs to consider the motive $\Upsilon(P)$ of a principal $\mathbb{Z}_2$-bundle $P \to R$. For another principal $\mathbb{Z}_2$-bundle $Q \to R$, it is hard to prove the multiplicativity $\Upsilon(P \otimes_{\mathbb{Z}_2} Q) = \Upsilon(P) \odot \Upsilon(Q)$.

5.4. Contribution of this work. In the second part of the paper we generalize Joyce's $d$-critical schemes to formal $d$-critical schemes and $d$-critical non-archimedean $K$-analytic spaces. We mainly use Joyce's definition of $d$-critical schemes; the version with Kiem-Li's virtual critical manifold structure in [44] can also be generalized to formal schemes and Berkovich non-archimedean analytic spaces. The $d$-critical scheme of Joyce is the classical model for the $(-1)$-shifted symplectic derived scheme of [67]. The notion of derived formal schemes was developed in Chapter 8 of Lurie [55], and the notion of derived non-archimedean $K$-analytic spaces was given in [68] using Lurie's terminology. The $(-n)$-shifted symplectic structures on such derived spaces can be defined similarly and, to the author's knowledge, have not been explored. Once there is a $(-1)$-shifted symplectic structure on a derived formal scheme or a derived non-archimedean $K$-analytic space, one hopes that taking the generic fiber functor of the derived formal scheme yields the latter. We mainly focus on the classical part of such spaces, and generalize Joyce's (or Kiem-Li's) arguments to formal schemes and non-archimedean $K$-analytic spaces. As mentioned earlier, we hope that $d$-critical non-archimedean $K$-analytic spaces will be the underlying non-archimedean spaces of derived non-archimedean spaces with a $(-1)$-shifted symplectic structure. Hence there should exist a symmetric obstruction theory of Behrend [4] on $d$-critical non-archimedean $K$-analytic spaces; such spaces will be the foundation for the non-archimedean counting invariants coming from a symmetric obstruction theory. Assume that $\mathfrak{X}$ is a $d$-critical formal $R$-scheme whose generic fiber $\mathfrak{X}_\eta$ is a $d$-critical non-archimedean $K$-analytic space. We construct a canonical line bundle $K_{\mathfrak{X}}$ on $\mathfrak{X}$ and define the notion of an orientation of $\mathfrak{X}$. With the orientation $K^{1/2}_{\mathfrak{X}}$, we prove that there exists a global motive of vanishing cycles $MF^{\phi}_{\mathfrak{X}}$, an element of $\mathcal{M}^{\hat\mu}_{\mathfrak{X}_s}$, where $\mathfrak{X}_s$ is the special fiber of $\mathfrak{X}$ and is a $\kappa$-scheme. This global motive is obtained by gluing the local motivic vanishing cycles $MF^{\phi}_{U,f}$ over the formal $d$-critical charts $(R, U, f, i)$ of $\mathfrak{X}$, using the motives of principal $\mathbb{Z}_2$-bundles on $R$ and the orientation. If $\mathfrak{X}$ is the formal completion of a $d$-critical scheme $X$ of Joyce [42] or Kiem-Li [44], then by the relative GAGA of [21], the global sheaf $MF^{\phi}_{\mathfrak{X}}$ is the formal completion of $MF^{\phi}_X$. Let $X$ be a $d$-critical non-archimedean $K$-analytic space; $X$ is said to be oriented if the square root $K^{1/2}_X$ of the canonical line bundle $K_X$ exists. Let $\mathfrak{X}$ be a formal model of $X$ such that $\mathfrak{X}$ is an oriented $d$-critical formal $R$-scheme and $\mathfrak{X}_\eta \cong X$. We define the absolute motive
\[
\int_X MF^{\phi}_X := \int_{\mathfrak{X}_s} MF^{\phi}_{\mathfrak{X}},
\]
where $\int_{\mathfrak{X}_s}$ means pushforward to a point. The absolute global motive $\int_X MF^{\phi}_X$ depends only on $X$, i.e., is independent of the choice of formal model. We also introduce $\mathbb{G}_m$-equivariant $d$-critical formal schemes and $\mathbb{G}_m$-equivariant $d$-critical non-archimedean $K$-analytic spaces. As an application, we generalize the motivic localization formula of D. Maulik [59] for motivic Donaldson-Thomas invariants to $d$-critical non-archimedean $K$-analytic spaces and $d$-critical formal schemes with a $\mathbb{G}_m$-action, using motivic integration for formal schemes [64].
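For reference, the rings in which these motives live are, in standard notation (details are recalled in §7.1):
\[
\mathcal{M}_\kappa := K_0(\operatorname{Var}_\kappa)\big[\mathbb{L}^{-1}\big], \qquad \mathbb{L} := [\mathbb{A}^1_\kappa],
\]
and $\mathcal{M}^{\hat\mu}_\kappa$ (resp. $\mathcal{M}^{\hat\mu}_X$) is the analogous ring built from varieties carrying a good action of the profinite group $\hat\mu = \varprojlim_n \mu_n$ of roots of unity (resp. the relative version over $X$).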
5.5. Outline. We outline the structure of Part II. In §6 we introduce the notion of $d$-critical formal $R$-schemes and orientations on them; $d$-critical $K$-analytic spaces are also constructed there. We basically follow Joyce's methods in [42] and provide proofs for the necessary steps: §6.1 contains the main construction and results; §6.2 provides the proof of the main construction of the structure sheaf of $d$-critical formal schemes; §6.3 introduces $d$-critical non-archimedean $K$-analytic spaces; and §6.4 discusses $\mathbb{G}_m$-equivariant $d$-critical formal $R$-schemes and $d$-critical non-archimedean $K$-analytic spaces. In §7 we study the motivic localization formula for motivic Donaldson-Thomas invariants: in §7.1 we recall the equivariant Grothendieck group of varieties; in §7.2 we briefly review motivic integration for formal schemes following [64]; in §7.3 we define the global motive of oriented $d$-critical formal schemes; in §7.4 we define the global motive of oriented $d$-critical non-archimedean $K$-analytic spaces; and finally in §7.5 we prove the motivic localization formula for oriented $d$-critical non-archimedean $K$-analytic spaces and relate it to Maulik's motivic localization formula for motivic Donaldson-Thomas invariants.

6. d-CRITICAL FORMAL SCHEMES

We introduce the formal version of the $d$-critical locus in the sense of [42], and of the virtual critical manifold in the sense of Kiem-Li [44], which we call a $d$-critical formal scheme.

6.1. Definitions and results. 6.1.1. Main construction. Basic facts on formal schemes can be found in [13], [25]; for instance, the sheaf of differentials $\Omega_{\mathfrak{X}}$ of a stft $R$-formal scheme $\mathfrak{X}$ is defined in [25, §5.2]. We first have the following result for formal schemes (Theorem 6.1), the formal analogue of the construction in [42]: for a stft formal $R$-scheme $\mathfrak{X}$ there exists a sheaf $\mathcal{P}_{\mathfrak{X}}$ of $R$-commutative algebras, unique up to isomorphism, fitting for each formal critical chart into natural exact sequences of sheaves of $R$-commutative algebras, where $\mathcal{O}_U$, $\mathcal{O}_{\mathfrak{X}}$ are the structure sheaves. If $\Phi$ is a morphism of charts with $\Phi \circ i = j|_R$, then it maps $I_{S,V}|_R \to I_{R,U}$ and $I^2_{S,V}|_R \to I^2_{R,U}$; thus the map (6.1.2) induces the morphism in the second column of (6.1.1), and similarly $d\Phi: \Phi^{-1}(\Omega_V) \to \Omega_U$ induces the third column of (6.1.1). In [42], the sheaf $\mathcal{P}_X$ of a $d$-critical scheme $X$ has a natural decomposition $\mathcal{P}_X = \mathcal{P}^0_X \oplus \kappa_X$, where $\kappa_X$ is the constant sheaf on $X$ and $\mathcal{P}^0_X \subset \mathcal{P}_X$ is the kernel of a natural composition map. Let $\mathfrak{X}$ be a stft formal $R$-scheme with ideal of definition $I$; then the underlying scheme is $(\mathfrak{X}, \mathcal{O}_{\mathfrak{X}}/I)$. From [25, Proposition 1.1.26], any locally noetherian formal scheme $\mathfrak{X}$ has a unique largest ideal of definition $I$ of finite type such that the underlying scheme $(\mathfrak{X}, \mathcal{O}_{\mathfrak{X}}/I)$ is reduced; so we do not pursue a decomposition of the sheaf $\mathcal{P}_{\mathfrak{X}}$ here.

Definition 6.2. A formal $d$-critical $R$-scheme is a pair $(\mathfrak{X}, s)$, where $\mathfrak{X}$ is a stft formal $R$-scheme and $s \in H^0(\mathcal{P}_{\mathfrak{X}})$ is a section, such that for any $x \in \mathfrak{X}$ there exists an adic morphism of formal schemes exhibiting a neighborhood of $x$ as a formal critical chart.

We already have the embedding $i: R \hookrightarrow U$ for such a chart. Let us define the coherent sheaf $\mathcal{P}_R$: let $I_{R,U}$ be the formal ideal sheaf of $R$ inside $U$, which is generated by $df$. Then we have:

Proposition 6.3. Let $(\mathfrak{X}, s)$ be a formal $d$-critical $R$-scheme, and let $\Phi: (R, U, f, i) \to (S, V, g, j)$ be an embedding of formal critical charts on $(\mathfrak{X}, s)$. Then for any $x \in R$ there exist open neighborhoods $U'$, $V'$ of $i(x)$, $j(x)$ in $U$, $V$, such that $\Phi(U') \subseteq V'$, together with functions for which the induced map is an isomorphism, $\Phi|_{U'} = \operatorname{id}|_{U'}$, and $\beta \circ \Phi|_{U'} = 0$.

Proof. Let us briefly prove this result. Recall that an embedding of formal critical charts is given locally by power series with coefficients $Q_{bc} \in R\{\widetilde{x}_1,\dots,\widetilde{x}_m, \widetilde{y}_1,\dots,\widetilde{y}_n\}$.
The point is that $Q_{bc}$ is invertible, so that we can pass to new algebras $A = R\{x_1,\dots,x_m\}$ and $B = R\{x_1,\dots,x_m,y_1,\dots,y_n\}$ such that $g = f + y_1^2 + \cdots + y_n^2$. Our main result in this section is that we can also define a canonical line bundle on $(\mathfrak{X}, s)$.

Theorem 6.4. Let $(\mathfrak{X}, s)$ be a $d$-critical formal scheme. Then there exists a line bundle $K_{\mathfrak{X},s}$, the canonical line bundle of $(\mathfrak{X}, s)$, which is natural up to isomorphism and has the following properties: (1) if $(R, U, f, i)$ is a formal critical chart of $(\mathfrak{X}, s)$, then there is a natural isomorphism $\iota_{R,U,f,i}$ identifying $K_{\mathfrak{X},s}|_R$ with $i^{\ast}(K_U^{\otimes 2})|_R$; (2) if $\Phi: (R, U, f, i) \to (S, V, g, j)$ is an embedding of formal critical charts on $(\mathfrak{X}, s)$, the two trivializations are compared by an isomorphism $J_\Phi$; (3) for each $x \in \mathfrak{X}$, the fiber $K_{\mathfrak{X},s}|_x$ is canonically identified with $(\Lambda^{\mathrm{top}} T^{\ast}_x\mathfrak{X})^{\otimes 2}$; (4) for a critical chart $(R, U, f, i)$ and $x \in R$, we have an exact sequence ending in $\Omega_{x,R} \to 0$ and a commutative diagram relating it to the map $\alpha_{x,R,U,f,i}$ given by (6.1.3).

Let $\Phi: (R, U, f, i) \to (S, V, g, j)$ be an embedding of formal critical charts, and let $N_{U/V}$ be the normal bundle of $\Phi(U)$ in $V$. Then there exists a unique quadratic form $q_{UV} \in H^0(S^2(N^{\vee}_{U/V}))$ on $i^{\ast}(N_{U/V})$. These data satisfy the properties of [42, Proposition 2.25], since we may choose our embeddings and critical charts with $U = \operatorname{Spf}(R\{x_1,\dots,x_m\})$ and $V = \operatorname{Spf}(R\{x_1,\dots,x_m,y_1,\dots,y_n\})$. If $\Psi: (S, V, g, j) \to (T, W, h, k)$ is another embedding of formal critical charts, then $\Psi \circ \Phi$ is also an embedding of formal critical charts, and we obtain a diagram of normal bundles in which $\gamma_{UVW}$ and $\delta_{UVW}$ are defined accordingly. Pulling back by $i^{\ast}$ and taking top exterior powers in the dual exact sequence, we obtain the comparison of determinant lines. Since $q_{UV}$ is a nondegenerate quadratic form, its determinant $\det(q_{UV})$ is a nonvanishing section of $i^{\ast}(\Lambda^n N_{U/V})^{\otimes 2}$; hence we can define the isomorphism $J_\Phi$. One can check that the isomorphism $J_\Phi$ is independent of the choices made, and that the cocycle formula of [42, Proposition 2.27] holds when $(T, W, h, k)$ is another embedding of formal critical charts; we check that it holds for formal critical charts. We then construct the line bundle $K_{\mathfrak{X},s}$ locally from $K_U^{\otimes 2}$ and glue by the isomorphisms $J_\Phi$.

Proof. We use an analogue of Theorem 2.20 of [42]: for any formal $d$-critical charts $(R, U, f, i)$ and $(S, V, g, j)$ and any $x \in R \cap S$, the intersection $R \cap S$ is also a formal scheme of the form $\operatorname{Spf}(A_{RS})$, where $A_{RS}$ is a stft $R$-algebra. Then we have a morphism $\operatorname{Spf}(A_{RS}) \to S = \operatorname{Spf}(R\{x_1,\dots,x_m,y_1,\dots,y_n\}/(I_{(dg)}))$ of formal schemes, and similarly a morphism to $U'$, where $U'$ is another smooth formal scheme $\operatorname{Spf}(R\{x_1,\dots,x_r\})$ with $r < m$. Then, similarly to the arguments in Theorem 2.20 of Joyce [42], we can construct a formal critical chart $(T, W, h, k)$ such that $W = V \times \operatorname{Spf}(R\{z_1,\dots,z_n,t_1,\dots,t_n\})$, $T = S$, $k = j \times \{0\}$, and $T = S \hookrightarrow V \times \operatorname{Spf}(R\{z_1,\dots,z_n,t_1,\dots,t_n\}) = W$, with
\[
\Phi(u) = (\Theta(u), r_1(u),\dots,r_n(u), s_1(u),\dots,s_n(u)), \qquad \Psi(v) = (v, 0),
\]
\[
f' = g \circ \Theta + r_1 s_1 + \cdots + r_n s_n, \qquad h\big(v, (z_1,\dots,z_n,t_1,\dots,t_n)\big) = g(v) + z_1 t_1 + \cdots + z_n t_n.
\]
Using this result we can prove that $J^{-1}_\Psi \circ J_\Phi|_{R \cap S}$ is independent of the data chosen in the theorem, as in Lemma 6.1 of [42]. So these isomorphisms give the conditions in the theorem, and it is a routine check that $K_{\mathfrak{X},s}$ satisfies the conditions in the theorem; we omit the details. Thus the proof of Theorem 6.4 is complete.

An orientation of $(\mathfrak{X}, s)$ is a choice of square root $K^{1/2}_{\mathfrak{X},s}$ of $K_{\mathfrak{X},s}$, that is, a line bundle $L$ over $\mathfrak{X}$ such that $L^{\otimes 2} \cong K_{\mathfrak{X},s}$. We call $(\mathfrak{X}, s)$ an oriented $d$-critical formal scheme if $K^{1/2}_{\mathfrak{X},s}$ exists. As in [42], an orientation on $(\mathfrak{X}, s)$ can be described via principal $\mathbb{Z}_2$-bundles.

Definition 6.7. Let $(\mathfrak{X}, s)$ be a formal $d$-critical scheme.
For each embedding of formal critical charts: let π Φ : P Φ Ñ R be the bundle of square roots of the isomorphism J Φ in (6.1.6). A local section s α : R Ñ P Φ corresponds to a local isomorphism k) is another embedding of formal critical charts, then there exists an isomorphism correspond to local sections s α : R Ñ P Φ , s β : R Ñ P Ψ | R , and s γ : R Ñ P Ψ˝Φ | R . Then Π Ψ,Φ (s γ ) = s β b Z 2 s α if and only if γ = β˝α. Remark 6.8. Actually using the theory of gerbes an orientation on (X, s) is a BZ 2 -gerbe over (X, s). Let K 1 2 X,s be an orientation on (X, s). Let (R, U, f , i) be a formal critical chart, define a principal Z 2 -bundle π R,U, f ,i : Q Ñ R to be the bundle of square root of ι R,U, f ,i : Then there exists a natural isomorphism by local isomorphisms: is another embedding of formal critical charts then we have the following commutative diagram: So the following proposition is straightforward. X,s on (X, s), and isomorphism classes of the data: (1) each formal critical chart (R, U, f , i), and a choice of principal Z 2 -bundle π : a choice of Λ Φ in (6.1.7) such that the diagram (6.1.8) holds. Proof of main theorem. We prove Theorem 6.1. We mainly check that the proof in [42, §3.1, §3.2] work for formal schemes, since a stft R-formal scheme X is covered by affine stft R-formal schemes of the type Spf(A). First let (R, U, f , i) be a formal critical chart as in the theorem. Then we have: where K R,U,i is the kernel. Then K R,U,i is a sheaf of R-commutative algebras. Similar arguments as in [42, §3.1] works for any morphism as a morphism of sheaves of R-commutative algebras. If Ψ : (S, V, j) Ñ (T, W, k) is another morphism of formal critical charts, then Also Lemma 3.1 of [42] is true for formal critical charts: if Φ, r Φ : (R, U, i) Ñ (S, V, j) are morphisms of triples, then The main point in the proof is that for any local section α P K S,V,j | R around x P R, We can write down (V = Spf(Rtx 1 ,¨¨¨, x m )) where A ab are functions. Then one can prove that each factor (¨) in (6.2.1) lies in Similar as before, let (R, U, i), (S, V, j) be formal critical charts. Then for any is an embedding of formal critical charts. We can show that is an isomorphism, and for any other (T, W, k). Hence we can define Then these data glue together to give a sheaf P X of R-commutative algebras over X. 6.3. d-critical non-archimedean K-analytic spaces. In this section we briefly recall that there exists a notion of d-critical rigid varieties and non-archimedean Kanalytic spaces by taking the generic fiber (¨) rig , (¨) η to a d-critical formal scheme. First, let X be a stft formal R-scheme. Then from [64], [9], recalled in Remark ??, there is a functor (¨) η : F sch R Ñ An K ; (¨) rig : F sch R Ñ Rig K from the category of stft formal R-schemes to the category of paracompact Kanalytic spaces by taking the generic fiber of formal schemes. If X = Spf(A) for a stft t-adic R-algebra A, then is the Berkovich spectrum of semi-norms of A b R K. If we take take A b R K as the Tate's K-affinoid algebra then we get the rigid variety X η . A general stft formal R-scheme is covered by a finite covering tX i u of open affine formal R-schemes of the form Spf(A i ). The intersections X ij = X i X X j are separate formal R-schemes, and X ij,η are closed analytic domains of X i,η . Gluing X i,η along X ij,η we get a paracompact K-analytic space X η . Proposition 6.10. Let X be a quasi-compact K-analytic space. 
Then there exists a coherent sheaf P X , unique up to isomorphism, and is characterized by the properties in Theorem 6.1. Proof. For a quasi-compact K-analytic space X, from [15], [70], there is a formal model X for X, i.e., there exists a stft formal R-scheme X such that X η -X. Hence from [64, Proposition 2.6], there exists a unique exact funcotr from the category of coherent sheaves over X to the category of coherent sheaves over X η . Then we get the coherent sheaf P X = (P X ) η . (X, s), where X is a quasi-compact K-analytic space, and s P H 0 (P X ) is a section, such that for any x P X, there exists a local embedding Definition 6.11. A d-critical K-analytic space (or virtual critical K-analytic space of Kiem-Li) is a pair such that R ã Ñ U is a closed embedding and R,U . Similar to Theorem 6.4, we have Proposition 6.12. Let (X, s) be a d-critical K-analytic space. Then there exists a line bundle K X,s , the canonical line bundle on (X, s), which is natural up to isomorphism and satisfies similar properties in Theorem 6.4 by replacing the formal critical charts by the analytic critical charts. Proof. Since for a d-critical K-analytic space (X, s), there is a formal model (X, s) which is a d-critical formal scheme. Then the properties in Theorem 6.4 are from the above generic fiber functor (¨) η . Definition 6.13. An oriented d-critical K-analytic space (X, s) is a d-critical K-analytic space such that there is a square root for the canonical line bundle K X,s . The following proposition is from the above construction of functors. Proposition 6.14. Let (X, s) be an oriented d-critical formal scheme. Then the generic fiber (X η , s η ) is an oriented d-critical non-archimedean K-analytic space. Similarly, Proposition 6.15. Let (X, s) be a d-critical formal scheme. Then the special fiber (X s , s) is a d-critical scheme in [42]. Proof. From Lemma 2.3, there is a special fiber functor (¨) s : F sch R Ñ Sch κ and an exact functor (¨) s : Coh X Ñ Coh X s such that (P X ) s = P X , the unique sheaf in [42]. The canonical section s P H 0 (P X ) will induce the canonical section s P H 0 (P X s ). Then the result follows. Example 2. From [9, §1.5] a quasi-compact K-analytic space X is smooth if for any connected strictly affinoid domain V, the sheaf of differentials Ω V is locally free of rank dim(V) and there is no boundary. Then P X = 0 and the pair (X, 0) is a trivial d-critical K-analytic space. Example 3. If (X, s) is a d-critical K-analytic space such that it is the generic fiber of a d-critical formal R-scheme (X, s) of the form where s = f t + I 2 X,U and U = Sp B (Ktx 1 ,¨¨¨, x m u) is the unit closed disc. Remark 6.16. One also can define a canonical line bundle K X,s on any d-critical Kanalytic space (X, s). Micmicing the construction in §6.1.2 or by taking the generic fiber (¨) η we get a canonical line bundle over X η = X. 6.4. G m -equivariant d-critical formal schemes. We talk about the torus action on formal schemes and K-analytic spaces. Proof. Let (X, s) be a formal d-critical R-scheme with a good G m -action. That the action is good means that there is a G m -invariant affine R-formal scheme R 1 ã Ñ X, such that R 1 = Spf(A/I). Performing the same argument as in the proof of Theorem 2.43 of [42], we can take a G m -equivalent formal critical chart (R, U, f , i) such that dim(U) = dim(T x X). This is (1). (2) is clear. Proposition 6.21. Let (X, s) be a formal d-critical R-scheme which is G m -invariant under the G m -action. Let X G m be the fixed formal subscheme. 
Then the fixed subscheme X G m inherits a formal d-critical scheme structure (X G m , s G m ), where s G m = i ‹ (s) and i : Proof. This result is from a G m -equivariant version of the following result. Let (R, U, f , i), (S, V, g, j) be G m -equivariant formal critical charts on (X, s). Then for x P R X S, there exist G m -equivariant charts (R 1 , U 1 , f 1 , i 1 ), (S 1 , V 1 , g 1 , j 1 ) and a G mequivariant formal chart (T, W, h, k), and G m -equivariant embeddings W, h, k). A similar notion of G m -invariant d-critical K-analytic space and G m -equivariant critical chart (R, U, f , i) can be similarly defined and we omit the details. If S is a point Spec(κ), we write K 0 (Var κ ) for the Grothendieck ring of κvarieties. One can take the map Var κ ÝÑ K 0 (Var κ ) to be the universal Euler characteristic. After inverting the class L = [A 1 κ ], we get the ring M κ . We introduce the equivariant Grothendieck group defined in [23]. Let µ n be the cyclic group of order n, which can be taken as the algebraic variety Spec(κ[x]/(x n´1 )). Let µ md ÝÑ µ n be the map x Þ Ñ x d . Then all the groups µ n form a projective system. Let lim Ð Ý n µ n be the direct limit. Suppose that X is a S-variety. The action µ nˆX ÝÑ X is called a good action if each orbit is contained in an affine subvariety of X. A goodμ-action on X is an action ofμ which factors through a good µ n -action for some n. The Let S be a scheme. Following [18], we need to define a new product d on Mμ S . The following definition is due to [18,Definition 2.3]. τ on X, Y factor through µ n -actions σ n , τ n . Define J n to be the Fermat curve J n = t(t, u) P (A 1 zt0u) 2 : t n + u n = 1u. Letv be the induced goodμ-action on J n (X, Y), and set in Kμ 0 (Var S ) or Mμ S . This defines a commutative, associative product on Kμ 0 (Var S ) or Mμ S . Motivic integration on rigid varieties. Let X be a generically smooth stft formal R-scheme. We follow the construction of Nicaise-Sebag, Nicaise in [62], [64] for the definition of the motivic integration of a gauge form ω on X η , which takes values in M X s . We briefly recall the method to define the motivic integration ş X |ω|. First we have m+1 . In Greenberg [26], the functor from the category of κ-schemes to the category of sets is presented by a κ-scheme of finite type such that for any κ-algebra A. The projective limit lim Ý Ñm X m is denoted by Gr(X). The functor Gr respects open and closed immersions and fiber products, and sends affine topologically of finite type formal R-schemes to affine κ-schemes. The motivic integration of a gauge form ω is defined by using the stable cylindrical subsets of Gr(X), introduced by Loeser-Sebag in [53], and Nicaise-Sebag in [63]. Let C 0,X be the set of stable cylindrical subsets of Gr(X) of some level. If A Ă C 0,X is a cylinder, and we have a function where r µ : C 0,X Ñ M X s is the unique additive morphism defined in [47,Proposition 5 X s ]´( m+1)d for A a stable cylinder of level m, d is the relative dimension of X, and π m : Gr(X) Ñ Gr(X m ) is the canonical projection. Let ω be a gauge form on X η , in [53], the authors constructed an integer-valued function ord π,X (ω) on Gr(X) that takes the role of α before. The motivic integration ş X |ω| is defined to be From [53], [62], the forgetful map ż only depends on X η , not on X. Remark 7.2. In [64] Nicaise generalizes the motivic integration construction to generically smooth special formal R-schemes. 
A special formal R-scheme X is a separated Noetherian adic formal scheme endowed with a structural morphism X Ñ Spf(R), such that X is a finite union of open formal subschemes which are formal spectra of special Ralgebras. From Berkovich [11], a topological R-algebra A is special, iff A is topologically R-isomorphic to a quotient of the special R-algebra The Noetherian adic formal scheme X has the largest ideal of definition J. The closed subscheme of X defined by J is denoted by X s , which is a reduced Noetherian κ-scheme. We briefly review the motivic integration of Nicaise in [64] for special formal schemes. Since every stft formal R-scheme X is a special formal scheme, the result below definitely works for stft formal R-schemes. Definition 7.3. Let X be a special formal R-scheme. By a Néron smoothening we mean a morphism of special formal R-schemes Y Ñ X, such that Y is adic smooth over R and Y η Ñ X η is an open embedding satisfying Y η ( r K) = X η ( r K) for any finite unramified extension r K of K. In [64, §2], Nicaise proves that a Néron smoothening of X exists and is given by the dilatation of X. Then Y is a stft formal R-scheme. X η (m) := X ηˆK K(m). If ω is a gauge form on X η , we denote by ω(m) the pullback of ω via the natural morphism X η (m) Ñ X η . Definition 7.5. Let X be a generically smooth special formal R-scheme. Let ω be a gauge form on X η . Then the volume Poincaré series of (X, ω) is defined to be Definition 7.6. Let X be a generically smooth flat R-formal scheme. A resolution of singularities of X is a proper morphism h : Y Ñ X of flat special formal R-schemes such that h induces an isomorphism on generic fibers, and such that Y is regular (meaning the local ring at points is regular), with a special fiber a strict normal crossing divisor Y s . We say that the resolution h is tame if Y s is a tame normal crossing divisor. By Temkin's resolution of singularities for quasi-excellent schemes of characteristic zero in [72], any affine generically smooth flat special formal Rscheme X = Spf(A) admits a resolution of singularities by means of admissible blow-ups. In general for any generically smooth R-formal scheme X, suppose that there is a resolution of singularities The limit is called the motivic volume of X η . Definition 7.9. ( [23], [62]) For the formal scheme f : X Ñ Spf(R), the motivic cycle is called the motivic nearby cycle of f . Let (X, f ) be a generically smooth formal R-scheme. From Proposition 7.8, the motivic nearby cycle M F f belongs to Mμ X s . For any point x P X s , let where h : X 1 Ñ X is the resolution of singularities. We call M F f ,x the motivic Milnor fiber of x P X s . In summary, if we let K(GBSRig K ) be the Grothendieck ring of the category of gauge bounded smooth rigid K-varieties. Here for an object X η in GBSRig K we understand that the rigid variety X η comes from the generic fiber of a generically smooth special formal R-scheme f : X Ñ Spf(R) with gauge bounded form ω. The Grothendieck ring Let K(BSRig K ) be the Grothendieck ring of the category BSRig K of bounded smooth rigid K-varieties, which is obtained from K(GBSRig K ) by forgetting the gauge form. Then we can represent the above results in §(7.2) as follows: for a generically smooth special formal R-scheme X. Moreover, if X has relative dimension d, then So MV is a morphism from the group K(BSRig K ) to the group Mμ κ . 
Moreover, if x P X s and letf be the formal completion of X along x, then the generic fiber Spf( p O X,x ) η of the formal completion is the analytic Milnor fiber F x (f ) off at x in Definition 2.2 and (2.5), and we have 7.3. Global motive of oriented formal d-critical schemes. We first define the motive of principal Z 2 bundles. Let Z 2 (X) be the abelian group of isomorphism classes [P] of principal Z 2bundles P Ñ X, with multiplication [P]¨[Q] = [P b Z 2 Q] and the identity the trivial bundle [XˆZ 2 ]. We know that P b Z 2 P -XˆZ 2 , so every element in Z 2 (X) has order 1 or 2. In [18], the authors define the motive of a principal Z 2 -bundle P Ñ X by: whereρ is theμ-action on P induced by the µ 2 -action on P. In [18], for any scheme Y, the authors define an ideal Iμ Y in Mμ Y which is generated by for all morphisms φ : X Ñ Y and principal Z 2 -bundles P, Q over X. Then define Then (Mμ Y , d) is a commutative ring with d and there is a natural projection Let (X, s) be an oriented formal d-critical R-scheme and K 1 2 X,s exists as a line bundle over X. Let (R, U, f , i) be a formal critical chart of (X, s). Then we have: is a formal scheme, such that the underlying scheme is given by the critical locus of the function f . Then we have the sheaf of vanishing cycles Proof. We need to show that for formal critical charts (R, U, i, j), (S, V, g, j), Recall the orientation of (X, s). For any x P R X S, we choose (R 1 , U 1 , f 1 , i 1 ) and (S 1 , V 1 , g 1 , j 1 ) and (T, W, h, k) such that we have morphisms W, h, k). The quadratic form Q T,W,h,k satisfies the property: , and the data are defined by the following local isomorphisms: . Then we can calculate as in (5.10) of [18], [42], and let (X, s) be the formal t-adic completion of X. Then by the relative GAGA there is a unique coherent sheaf P X which is the formal completion of P X , and a section s P P X , such that (X, s) is a d-critical formal scheme over R. Moreover, the unique global motive M F φ X,s is the same as M F X,s as an element in Mμ X . Proof. By the relative GAGA in [21], the first statement is obvious. For the second one, note that locally the motivic vanishing cycles M F φ U, f is defined by the motivic nearby cycles, which are the same for formal d-critical charts (R, U, f , i) and d-critical charts (R, U, f , i). So by gluing they must give the same global motive in Mμ X since X s = X. Remark 7.13. The motivic vanishing cycle M F U, f is close related to the perverse sheaf of vanishing cycles PV U, f as in [10] and [71]. In [71], Sabbah proves that the perverse sheaf of vanishing cycles PV U, f is isomorphic to the cohomology of a formal twisted de Rham complex, which is the Kontsevich conjecture and is inspired by the deformation quantization in physics. It seems that working over the non-archimedean field κ((t)) is the right way for the quantization. 7.4. Global motive for oriented d-critical non-archimedean analytic spaces. Let (X, s) be a d-critical K-analytic space. Choose a d-critical formal model (X, s) for (X, s) such that the generic fiber X η -X. Definition 7.14. We define the global motive M F φ X,s to be where ş X s means pushforward to a point. Maulik's motivic localization formula under the G m -action. We prove a G mlocalization formula for the global motive M F φ X,s for an oriented d-critical Kanalytic space (X, s). In the scheme level, this motivic localization formula is originally due to D. 
Maulik [59], who, using the torus action on local vanishing cycle sheaves, proved the motivic localization formula as recalled in [18,Theorem 5.16]. We generalize Maulik's motivic localization formula to formal schemes and non-archimedean K-analytic spaces and prove it by using motivic integration for formal schemes as in [64], [47] and [35]. 7.5.1. G m -localization formula. Let (X, s) be a d-critical formal R-scheme with a good G m -action. Let be the decomposition of the fixed locus X G m into connected components, such that (X G m i , s G m i ) are oriented formal d-critical schemes. On the tangent space T x X i of X i at a point x P X i , where we take T x X i as a R-module ( when reduced to the residue field κ, T x X i becomes the tangent space T x (X i ) s of the scheme (X i ) s ). The action G m has a decomposition here the direct sums are the parts of zero, positive and negative weights with respect to the G m -action. Maulik [59] defined the virtual index so that it is constant on the strata X G m i . All the above arguments work for d-critical K-analytic space (X, s) with a good G m -action. Let be the decomposition of the fixed locus X G m into connected components, such that (X G m i , s G m i ) are oriented d-critical non-archimedean spaces. The action G m has a decomposition here the direct sums are the parts of zero, positive and negative weights with respect to the G m -action. The virtual index is similarly defined and is constant on the strata X G m i . Definition 7.16. We call the action µ : G mˆX Ñ X; (or µ : G mˆX Ñ X) circle-compact if the limit lim λÑ0 µ(λ)x exists for any x P X(or x P X). If X(or X) is proper, then any G m -action on X(or X) is circle-compact. We present the generalization of Maulik's motivic localization formula in [59] to d-critical non-archimedean K-analytic spaces. Proof. For the oriented d-critical non-archimedean K-analytic space (X, s), we choose a formal model (X, s), which is an oriented formal d-critical scheme. From Proposition 7.15, the global motive only depends on (X, s), and is independent to the choice of the formal models. So it is sufficient to prove the result for a formal model (X, s) of (X, s). Then the G m -action on X can be extended to a good and circle-compact action on the formal scheme X. For each G m -fixed strata (X i , s G m i ), there is a corresponding G mfixed strata (X i , , s G m i ), which is an oriented d-critical formal R-scheme. We need to prove the formula: We divide the proof into three steps. Step 1: We first prove the result on a formal d-critical chart (R, U, f , i) on (X, s). We have the formal scheme U = Spf(Rtx 1 ,¨¨¨, x m u); f P Rtx 1 ,¨¨¨, x m u andf : R = Spf(Rtx 1 ,¨¨¨, x m u/(I d f )) Ñ Spf(R). From our definition of motivic vanishing cycles, , respectively, and for r ď m. Here we assume that U + = Spf(Rtx r+1 ,¨¨¨, x s u); U´= Spf(Rtx s+1 ,¨¨¨, x m u). for the formal d-critical charts, then we have (X G m i , s G m i ) is a formal d-critical scheme over R for any i. We show that there is an induced orientation on (X G m i , s G m i ). For the formal d- where all the morphisms are inclusions. Since the G m -action preserves the orientation K 1 2 X,s , we make the following diagram: In practice, if X is the completion of a moduli scheme X of stable sheaves over a smooth Calabi-Yau threefold, then there is a d-critical scheme structure (X, s) on X, and the canonical line bundle K X,s = { det(E ‚ X ) is the completion of the determinant line bundle of the symmetric obstruction theory complex E ‚ X . 
If the G m -action preserves the orientation K 1 2 X,s , then there exists an equivariant symmetric obstruction on (X, E ‚ X ), and on the fixed locus X G m i , there exists an induced symmetric obstruction theory and an oriented d-critical scheme structure (X G m i , s G m i ). Taking completion we get the oriented formal d-critical scheme structure (X G m i , s G m i ). Step2: We prove the following: From Theorem 7.10, also [48], we have ż Here R η is the generic fiber of the formal scheme R Ñ Spf(R) and / / - Here val(x 0 ) := min 1ďiďr tval(x i )u, and val(x + ) := min r+1ďiďs tval(x i )u, val(x´) := min s+1ďiďm tval(x i )u. We explain here why in R η , val(x´) ě 0. This is because the G m -action on R is circle-compact, which means that lim λÑ0 µ(λ)x exists on R. The formal scheme R Ñ Spf(R) is the formal completion of the formal scheme U Ñ Spf(R) along R = Crit( f ). So the condition that the G m -action on the cell U + has positive weights is a closed condition on R s , and the corresponding preimage under the specialization map sp : R η Ñ R s must be open which is |x + | ă 1 and equivalent to val(x + ) ą 0; while on the affine formal scheme U´, the G m -action has negative weights and this is an open condition on R s , so the corresponding preimage under the specialization map sp is closed, which is |x´| ď 1 and equivalent to val(x´) ě 0. Now let where R 0 = t(x 0 , x + , x´) P R η |x + = 0 or x´= 0u R 1 = t(x 0 , x + , x´) P R η |x + ‰ 0, x´‰ 0u = R η zR 0 . Since the function f is G m -invariant, if one of x + , x´is zero, then the function f will not have x + , x´terms, i.e., f (x 0 , x + , x´) = f (x 0 , 0, 0). The key point is that using the Cluckers-Loeser motivic constructible functions the motivic volume of an annulus is zero. For the completeness, we provide a proof here. Let Def κ be the category of all definable T-subassignments and S P Def κ be an element. Then we have the equivariant Grothendieck group Kμ 0 (RDef S ), where RDef S is the subcategory of Def S whose objects are subassignments of Sˆh A n κ for variable n, morphisms to S are the ones induced by the projection onto the S-factor. with ω a gauge form on R 1 . By choosing a formal model R 1 of R 1 and a Néron smoothening R 1 , according to [64, §4], ż The key part is that the motivic volume of an annulus is zero. Then let s n : R 1,n Ñ N ą0 be the map where we use the same calculation as above such that U 0 = t(x 0 , x + , x´) P U η |x + = 0 or x´= 0u U 1 = t(x 0 , x + , x´) P U η |x + ‰ 0, x´‰ 0u and MV([U 1 ]) = 0 as in [38,Theorem 3.9]. So from (7.5.5), be the formal t-adic completion of X. Then X is a stft formal R-scheme over R. Let P X = y P X be the formal completion of the coherent sheaf P X on X, then (X, s P P X ) is a formal d-critical scheme. Proposition 7.19. Let Y be a smooth Calabi-Yau threefold over κ of character zero, and X = M n (Y, β) the moduli scheme of stable coherent sheaves in Coh(Y) with topological data (1, 0, β, n). Then the t-adic formal completion X = p X of X and its generic fiber X η have a formal d-critical scheme structure (X, s) and a d-critical non-archimedean analytic space structure (X η , s). The canonical line bundle K X,s is isomorphic to the formal completion of the canonical line bundle K X,s where (X, s) is the d-critical scheme in [42]. Moreover, if there exists an orientation K 1 2 X,s , then there exists a unique M F φ X,s P Mμ X such that if X admits a good circle-compact G m -action which preserves the orientation is the fixed locus of X under the G m -action. 
Proof. We take X as a scheme over Spec(κ[t]). The t-adic formal completion X of X is a stft formal scheme and it has a d-critical formal scheme structure (X, s) from above arguments. Hence from Proposition 6.14, its generic fiber X η also has a d-critical non-archimedean analytic space structure (X η , s). The good and circle-compact G m -action on X will induce a good and circlecompact G m -action on X, and the formal completion X G m i of the fixed locus X G m i has a d-critical formal scheme structure (X G m i , s G m i ). Hence the result just follows from Theorem 7.17. Remark 7.20. For a d-critical scheme (X, s), the canonical line bundle K X,s is isomorphic to det(E ‚ X ), where E ‚ X Ñ L ‚ X is the symmetric obstruction theory of X determined by the d-critical scheme (X, s) in the sense of [4]. Recall that the d-critical scheme (X, s) is the underlying classical scheme of a (´1)-shifted symplectic derived scheme X, and roughly speaking the (´1)-shifted symplectic derived scheme X is the scheme (X, E ‚ X ) together with its cotangent complex E ‚ X . We have the following interesting corollary. Corollary 7.21. Let (X, s) be an oriented formal d-critical scheme over R such that it is the formal completion of an oriented d-critical scheme of [42]. Suppose that X admits a good and circle-compact G m -action such that the G m -action preserves the orientation K [19]. Example 4. Hilbert scheme of points on In the last section we talk about an interesting example, such that we can not find a G m -action satisfying the condition in Corollary 7.21. Let X := Hilb n (A 3 κ ) be the Hilbert scheme of n-points on Spec(κ[x, y, z]) = A 3 κ . When n ě 4, X is a singular variety. Let M(nˆn) be the space of all nˆn matrices over κ. Let V be an n-dimensional complex vector space, and matrices B 1 , B 2 , B 3 P End(V). Let v P V and suppose that B 1 , B 2 , B 3 , v generate the vector space V. We say that the 5-tuple (V, B 1 , B 2 , B 3 , v) satisfies the stability condition if there is no proper subspace V 0 Ă V such that V 0 is stable under B 1 , B 2 , B 3 . Define an action of GL n on the set of 5-tuples by (7.5.8) P¨ (V, B 1 , B 2 , B 3 , v) = (V, PB 1 P´1, PB 2 P´1, PB 3 P´1, Pv). Then we have a statement about the Hilbert scheme Hilb n (C 3 ) of n-points on A 3 κ . As pointed out in [61, §8.2.5], the index ind virt (P, X) = dim(T P X) +´d im(T P X)´) is a complicated function of 3D partitions. In some special cases, this may be calculated by certain sum of boxes in the 3D partition P. Bryan and Szendroi also have some calculations on the pattens of the boxes in the 3D partitions and also in some special cases, they can calculate the index. Of course it is really interesting if we can get the global formula (7.5.13) from motivic localization formula (7.24). At the moment, we can not achieve this goal. But at least the formula (7.24) gives some information of single value of the Behrend function on the isolated fixed points as: We can not get the Behrend function information directly from the formula (7.5.13) of Behrend-Bryan-Szendroi [5]. Remark 7.25. As mentioned in [61], D. Maulik in [59] used the motivic localization formula to prove the formulas in [61,Theorem] is actually the refined Donaldson-Thomas invariants.
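To keep the local model that §6 and §7 repeatedly invoke concrete, the following small example spells out, in explicit coordinates, a formal critical chart and its stabilization by a quadratic form. It is our own illustration under the conventions above, not a statement quoted from [42] or [59], and the chart names are chosen only for this example.

Example (illustrative). Take U = \operatorname{Spf} R\langle x\rangle and f = x^{3} \in R\langle x\rangle. Then
\[
  \mathfrak{X} = \operatorname{Crit}(f) = \operatorname{Spf}\bigl(R\langle x\rangle/(3x^{2})\bigr),
  \qquad
  s = f + I_{\mathfrak{X},U}^{2} \in H^{0}(\mathcal{P}_{\mathfrak{X}}),
\]
where I_{\mathfrak{X},U} is the ideal generated by df, so (R, U, f, i) with R = \mathfrak{X} and i the inclusion is a formal critical chart. Embedding U into V = \operatorname{Spf} R\langle x, y\rangle by \Phi(x) = (x, 0) and setting
\[
  g = f + y^{2} = x^{3} + y^{2},
\]
one checks that \operatorname{Crit}(g) = \operatorname{Crit}(f) \times \{0\} and g|_{\operatorname{Crit}(g)} = f|_{\operatorname{Crit}(f)}, so (R, V, g, j) is a second formal critical chart inducing the same section s. This is the local normal form g = f + y_{1}^{2} + \cdots + y_{n}^{2} used above; here the quadratic form q_{UV} = y^{2} on the normal direction is visibly nondegenerate, which is what makes \det(q_{UV}) nonvanishing and allows the isomorphism J_{\Phi} of (6.1.6) to be defined on this overlap.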
Selective Deletion of PTEN in Dopamine Neurons Leads to Trophic Effects and Adaptation of Striatal Medium Spiny Projecting Neurons The widespread distribution of the tumor suppressor PTEN in the nervous system suggests a role in a broad range of brain functions. PTEN negatively regulates the signaling pathways initiated by protein kinase B (Akt) thereby regulating signals for growth, proliferation and cell survival. Pten deletion in the mouse brain has revealed its role in controlling cell size and number. In this study, we used Cre-loxP technology to specifically inactivate Pten in dopamine (DA) neurons (Pten KO mice). The resulting mutant mice showed neuronal hypertrophy, and an increased number of dopaminergic neurons and fibers in the ventral mesencephalon. Interestingly, quantitative microdialysis studies in Pten KO mice revealed no alterations in basal DA extracellular levels or evoked DA release in the dorsal striatum, despite a significant increase in total DA tissue levels. Striatal dopamine receptor D1 (DRD1) and prodynorphin (PDyn) mRNA levels were significantly elevated in KO animals, suggesting an enhancement in neuronal activity associated with the striatonigral projection pathway, while dopamine receptor D2 (DRD2) and preproenkephalin (PPE) mRNA levels remained unchanged. In addition, PTEN inactivation protected DA neurons and significantly enhanced DA-dependent behavioral functions in KO mice after a progressive 6OHDA lesion. These results provide further evidence about the role of PTEN in the brain and suggest that manipulation of the PTEN/Akt signaling pathway during development may alter the basal state of dopaminergic neurotransmission and could provide a therapeutic strategy for the treatment of Parkinson's disease, and other neurodegenerative disorders. Introduction Pten (for 'phosphatase and tensin homologue deleted on chromosome ten') is a tumor suppressor gene mutated in many human cancers, including glioblastomas, a highly malignant glial tumor in the CNS [1]. Individuals with germline Pten mutations are prone to tumors and may display brain disorders, including macrocephaly, seizures and mental retardation [2]. The tumor suppressive property of PTEN is dependent on its lipid phosphatase activity, which restrains the activation of the Akt (also called protein kinase B) signaling pathway. Upon activation, Akt phosphorylates a diverse spectrum of substrates known to regulate cellular functions related to cell cycle progression, cell growth and proliferation, cell death/survival and cell differentiation [3]. PTEN is widely distributed in the brain and is preferentially expressed in neurons, where it localizes to both the nucleus and cytoplasm [4][5][6][7]. The role of PTEN in the brain has been largely focused on the pathogenesis of glioblastoma; however, progress has recently been made in understanding the broader role of PTEN in neural circuits. A number of in vitro studies indicates a role for PTEN signaling in neuronal size, dendritic and axonal branching and neuronal polarization [8,9]. To analyze the role of PTEN in vivo and in specific neuronal populations mutant mice have been generated to induce the loss of the phosphatase in a tissue-specific manner by using the Cre-LoxP technology. Conditional loss of Pten in the brain can have differing consequences depending on the cell type or state of differentiation. 
For example, the use of a Nestin promoter to induce Pten deletion in neural stem cells and glial progenitors resulted in neonatal death, as mice showed enlarged and histoarchitecturally abnormal brains [10]. Pten deletion in discrete mature neuronal populations in the cerebral cortex and hippocampus resulted in macrocephaly due to neuronal hypertrophy. In addition abnormalities in axonal growth and synapse number resulted in abnormal social behavior and inappropriate responses to sensory stimuli [11]. Recent studies have emphasized the association of PTEN with different parameters of the dopaminergic system in the ventral midbrain. For example, the continuous activation of the downstream PTEN pathway, Akt, in mesostriatal adult dopamine neurons by unilateral adenoviral injections in the substantia nigra compacta confers almost complete protection against apoptotic cell death in a dopamine toxin specific model [12]. In addition, a recent study has shown Akt signaling in the mesolimbic dopamine system may also regulate functions intrinsic to dopamine neurons, such as cellular and behavioral responses to stressful stimuli [13]. Besides the inhibitory control on the activation of the Akt pathway, PTEN interacts with molecular substrates directly involved in neurotransmission such as glutamate and serotonin receptors [13,14]. Normally the activity of mesolimbic dopamine cells is under the tonic inhibitory control of the phosphorylated serotonin receptor 2c . PTEN deletion in DA cells favors the phospholylated state of the 5-HT2c receptor, inhibits dopaminergic transmission and abolishes the rewarding effects of nicotine and tetrahydrocannabinol [13]. These studies suggest PTEN function in dopamine neurons is not limited to the classical deregulation of the Akt pathway, and PTEN may also have an effect on distinct functions related to dopamine-mediated cognition. To further examine the correlation between PTEN and the dopaminergic system, we developed a mouse transgenic model to inactivate PTEN signaling specifically in DA neurons. We now report that PTEN ablation during development results in profound morphological, molecular and neurochemical changes in DA neuron maturation that translate into permanent adaptations of the dopaminergic system, including significant changes in postsynaptic brain areas. Additionally, and in agreement with previous studies [12], we show that PTEN deletion in dopamine cells provides significant protection against neurotoxic insults. Such adaptations will be documented in detail in this report. Future studies will be aimed at correlating the molecular adaptations mediated by PTEN deletion with behavioral aspects of dopamine neurotransmission. Mice Pten loxP mice were obtained from Jackson laboratories [14]. Mice were obtained in a background comprised of a mix of c57bl/6, 129S4/SvJae. All animals used in this study were analyzed between 3-4 months of age. Animal protocols used in this study were approved by the Animal Care Committee at the National Institutes of Health. For the conditional inactivation of Pten in dopaminergic neurons of the ventral midbrain, we developed a dopamine transporter (DAT) promoter-driven Cre transgenic mouse line Slc6a3 Cre [15]. Cre recombinase binds to loxP target sequences and either deletes or inverts the intervening DNA depending on the individual orientation of the loxP sites [16]. 
In our model, Cre recombinase expression is driven by the DAT promoter and mediates deletion of exon 5 of the pten gene approximately at embryonic day 15 [15]. DAT is a molecular substrate specific to dopamine neurons and is highly expressed in dopamine neurons of the ventral midbrain, while lesser expression levels are present in the olfactory bulb and hypothalamic areas. To minimize interference with DAT function by preservation of both DAT alleles, Cre recombinase expression was driven from the 39untranslated region (39UTR) of the endogenous DAT gene by means of an internal ribosomal entry sequence (IRES). The Pten loxP and Slc6a3 Cre lines were mixed to obtain regional knockout (Pten loxP/loxP/Cre/wt ) and control mice (Pten wt/wt/Cre/wt ). Animals were genotyped using Pten and Slac6a3-Cre primers described elsewhere [14,15]. To confirm if Pten deletion was specific to DAT expressing regions, primers specific to the recombination (delta5) were developed. Genomic DNAs were prepared from the olfactory bulb, motor cortex, striatum and SN/VTA. For ventral midbrain dissections, coronal cuts were made at the anterior and posterior boundaries of the mammillary nucleus. Dissection of the ventral midbrain region containing the SN/VTA was facilitated by removing the mammillary nucleus on the ventral surface and overlying cortex on the dorsal surface. A block containing the SN/ VTA was then dissected. To further ensure specificity of Cre mediated recombination lung, heart, kidney and liver tissue was dissected and DNA extracted for PCR analysis. Immunohistochemistry Adult mice were deeply anesthetized with an intraperitoneal (i.p.) injection of chloral hydrate (30 mg/kg) and then perfused transcardially with saline solution followed by 4% paraformaldehyde (PFA). Brains were quickly removed from skull and post-fixed in 4% PFA for 4 hrs. After fixation, brains were rinsed in 0.1 M phosphate buffer (PB) and cryoprotected in 18% sucrose solution, overnight at 4uC. Striatum and midbrain regions were cut in coronal sections at 40 um in four series using a Leica CM3050S cryostat (Leica Microsystems, Bannockburn, IL). Sections were processed for Tyrosine Hydroxylase (TH) immunohistochemistry or for Pten or phosho-AKT immunofluoresce. For TH immunohistochemical studies, sections were rinsed with PB 3610 min and then permeabilized and blocked with 0.25% Triton-X-100 and 4% bovine serum albumin (BSA) in 0.1 M PB. Sections were incubated overnight at 4uC with a rabbit polyclonal antibody against TH (1:1000, Chemicon, Temecula, CA). Sections were then rinsed 3610 min in PB, and incubated for 1 hr with a biotinylated anti-rabbit antibody (1:200, Vector Labs, Burlingame, CA). Sections were rinsed with PB 3610 min and incubated with avidin-biotinylated horseradish peroxidase for 2 h. Sections were rinsed and the peroxidase reaction was developed with 0.05% 3,3diaminobenzidine-4 HCl (DAB) and 0.003% hydrogen peroxide (H 2 O 2 ). Sections were mounted on coated slides, coversliped and dried before analysis. For immunofluorescence histochemistry, sections were rinsed in PB 3610 min, and then incubated with 0.25% Triton-X-100 and 4% BSA in 0.1 M PB. Sections were incubated with a mouse monoclonal anti PTEN (Cell Signaling, Danvers, MA) or with anti p-AKT rabbit monoclonal antibody (1:100, Cell Signaling, Danvers, MA) overnight at 4uC. 
Sections were rinsed with PB 3610 min and incubated for 2 hrs at room temperature with one of the following secondary fluorescent antibodies: anti mouse or anti rabbit (Invitrogen, Carlsbad, CA), rinsed with PB 3610 min, mounted on coated slides and coversliped. Sections were analyzed in a Leica DMLA microscope (Leica Microsystems, Bannockburn, IL) with fluorescent light. Stereologic Analyses Unbiased stereological counts of TH-positive (TH+) neurons within the substantia nigra pars compacta (SNc) and ventral tegmental area (VTA) as well as fiber length in the substantia nigra pars reticulata (SNr) were performed using stereological principles [17] and analyzed with StereoInvestigator software (Microbrightfield, Williston, VT). The optical fractionator probe [18] was used to generate an estimate of neuronal TH+ number and area, nucleator probe [19] to estimate the size of TH+ neurons in SNc and VTA and space balls probe [20] to obtain an estimate of the total length of fibers in the SNr. SNc, VTA and SNr were outlined under a low magnification objective (5x) following landmarks from the Franklin and Paxinos mouse atlas [21] and all the stereologic analysis were performed under the 40x objective of a Leica DM5000B microscope (Leica Microsystems, Bannockburn, IL). The total number of TH+ neurons, neuronal size and fiber density, was estimated for adult naïve control and Pten KO mice. For each tissue section analyzed, section thickness was assessed in each sampling site and guard zones of 2.5 mm were used at the top and bottom of each section. Systematic random sampling design was performed and generated with the following stereologic parameters: grid size: 131 mm, counting frame: 123 mm and dissector height of 25 mm. For space balls a 20 mm radius was used to intersect fibers that crossed the circle. The criterion for a TH+ fiber was a stained and in focus fiber at the crossing point in the circle. Our criterion for counting and measuring the area of an individual TH+ neuron was the presence of its nucleus either within the counting frame, or touching the right or top frame lines (green), but not touching the left or bottom lines (red). The area of TH+ neurons in the SNc and VTA was estimated using the nucleator probe. Every time a neuron was counted with the optical fractionator probe, a set of four rays are extended from the middle of the cell and radiate with a random orientation in four orthogonal directions towards the edge of the neuron. The point at which each ray intersected the boundary of the neuron are used to define the area. Coefficients of error were calculated and values ,0.10 were accepted. Stereologic estimations were also performed with the same parameters in the SNc of control and Pten KO mice thirty days after receiving a unilateral 6OHDA lesion in the striatum. Results were presented as a percentage of surviving neurons on the contralateral side. Optical Density To determine fiber density in the striatum, the mean optical density (O.D.) was measured in TH positive stained sections from naïve and 6OHDA-lesioned mice (thirty days after 6-OHDA unilateral lesion) in control and Pten KO mice. O.D. is a sensitive and reliable tool to measure levels of fiber innervation and to detect changes by experimental manipulations [22]. Images were captured with a Hitachi CCD HV-C20A camera and transformed to grey scale images of 8 bits. The O.D. quantification was perform using ImageJ software [23]. The O.D. 
measures were determined in each mouse at three coronal levels corresponding to the frontal striatum +1.5 mm, medial striatum +0.14 mm. and caudal striatum 20.50 mm, relative to bregma [21]. Nonspecific background was determined by readings made from the anterior commissure or corpus callosum. For naïve animals, the O.D value corresponds to the mean of both striatae minus the respective background level. For 6OHDA-treated animals, O.D. values were measured on the denervated and non denervated striatum in control and KO mice and the O.D. minus the background was expressed as a percentage relative to the contralateral side. In addition, striatal total area was determined with the same software and results expressed as square pixels. Behavioral Test: Locomotor activity induced by exposure to a novel environment All mice used were adult and kept on a 12:12 light -dark cycle (lights on 06:00 am) with food and water ad libitum. Behavioral tests were performed between 10:00-14:00 hrs. Locomotor activity assessed in the open field, is considered a well organized behavior in rodents determined by several factors, including anxiety, arousal, risk assessment, and exploration, and is an established tool for behavioral phenotyping in mice [24]. Control and Pten KO mice were brought to the testing room 30 minutes before the procedure began for acclimation. After this period, each mouse was gently placed inside a clean Plexiglas chamber (42642630 cm; Accuscan Instruments, Columbus, OH) for 30 minutes to assess locomotor activity in a new environment. Locomotor activity was measured in each cage by 16 horizontal and 16 vertical sensors (infrared beams) spaced 2.5 cm apart. Vertical sensors were located 7 cm above the floor chamber. Data was stored every 5 min and presented as cumulative values after 30 minutes. Real time PCR Striatal tissue was dissected and immediately processed for RNA isolation and DNAse I treatment using the RNAqueous-Micro Ambion kit (Applied Biosystems-Ambion, Austin, TX), following the manufacturers instructions. For cDNA synthesis total RNA was mixed following the manufacturers instructions for the Superscript III reverse transcriptase kit (Invitrogen, Carlsbad, CA). Real-time polymerase chain reaction was performed as duplicate determinations with specific Taqman probes from the mouse Probe Library (Exiqon A/S, Vedbaek, Denmark) designed for primers to mouse DRD1 (F: 59-gag cgt agt ctc cca gat cg-39 and B: 59-tgg tca atc tca gtc act ttt ca-39), DRD2 (F: 59-ctc ttt gga ctc aac aac aca ga-39 and B: 59 -aag ggc acg tag aac gag ac-39), preproenkephalin (F: 59 -aag ggc acg tag aac gag ac-39 and B: 59aag ggc acg tag aac gag ac-39), prodynorphin (F: 59-aag ggc acg tag aac gag ac-39 and B: 59-cgc cat tct gac tca ctt gtt-39), and BDNF (F: 59 -gca tct gtt ggg gag aca ag-39 and B: 59-tgg tca tca ctc ttc tca cct g-39). Hydroxymethylbilane synthase (Hmbs, F: 59-tcc ctg aag gat gtg cct ac-39 and B: 59-aca agg gtt ttc ccg ttt g-39) and aminolevulinate synthase (ALAS, F:59-cca tca att acc caa cag tgc-39 and B: 59-gtg acc agc agc ttc tcc a-39) were used as housekeeping genes to normalize quantification data. The reproducibility of results was determined by inspection of duplicate samples. After an initial incubation step for 10 min at 95uC, qRT-PCR was carried out using 40 cycles (95uC for 15 seconds, 60uC for 60 seconds). The standard curve method was used to compare mRNA expression levels between KO and control animals. Normalization to both endogenous control genes led to similar results. 
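To make the quantification step explicit, the sketch below illustrates in Python how relative expression can be computed from a standard curve and normalized to the housekeeping genes named above. It is a minimal illustration only: the Ct values and standard-curve parameters are hypothetical, the function names are ours, and normalizing to the geometric mean of Hmbs and ALAS is one common variant of the per-gene normalization actually used in the study.

import numpy as np

def quantity_from_ct(ct, slope, intercept):
    # Standard curve: Ct = slope * log10(quantity) + intercept,
    # so quantity = 10 ** ((Ct - intercept) / slope).
    return 10 ** ((ct - intercept) / slope)

def normalized_expression(ct_target, ct_hmbs, ct_alas, curves):
    # Normalize the target quantity to the geometric mean of the two
    # housekeeping genes (Hmbs and ALAS); hypothetical helper, for illustration.
    q_target = quantity_from_ct(ct_target, *curves["target"])
    q_hmbs = quantity_from_ct(ct_hmbs, *curves["Hmbs"])
    q_alas = quantity_from_ct(ct_alas, *curves["ALAS"])
    return q_target / np.sqrt(q_hmbs * q_alas)

# Hypothetical (slope, intercept) pairs for each standard curve, and duplicate Ct values.
curves = {"target": (-3.4, 35.0), "Hmbs": (-3.3, 34.0), "ALAS": (-3.5, 34.5)}
ko_samples = [normalized_expression(24.1, 26.0, 27.1, curves),
              normalized_expression(24.3, 26.1, 27.0, curves)]
control_samples = [normalized_expression(25.0, 26.0, 27.2, curves),
                   normalized_expression(25.2, 26.2, 27.1, curves)]
print("KO / control fold change:", np.mean(ko_samples) / np.mean(control_samples))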
In Vivo Microdialysis Pten deficient mice and controls were anesthetized and implanted unilaterally with a microdialysis guide cannula (CMA/11, CMA microdialysis) in the dorsal striatum (AP: +0.4, L: -2.1, V: -2.2 mm from bregma) using standard stereotaxic techniques, and were allowed to recover for 5 days before the microdialysis experiments [25]. Fifteen hours before the start of experiments, a microdialysis probe (CMA/11, 2 mm membrane length, CMA microdialysis, North Chelmsford, MA) was connected to the dialysis system and flushed with artificial cerebrospinal fluid (aCSF: 145 mM NaCl, 2.8 mM KCl, 1.2 mM CaCl 2 , 1.2 mM MgCl 2 , 0.25 mM ascorbic acid, and 5.4 mM D-glucose, pH 7.2 adjusted with NaOH 0.5 M). The mouse was gently restrained and the probe was slowly inserted into the guide cannula. The dialysis system consisted of FEP tubing (CMA microdialysis) that connected the probe to a 1 ml gastight syringe (Hamilton Co., Reno, NV) mounted on a microdialysis pump (CMA/102) through a quartz-lined, low-resistance swivel (375/D/ 22QM, Instech, Plymouth Meeting, PA). After probe insertion, the mouse was placed in the dialysis chamber with food and water freely available, and the probe perfused overnight with aCSF at a flow rate of 0.3 ml/min. The next morning, the perfusion syringes were loaded with fresh aCSF and probes were allowed to equilibrate for an additional 1 h at a flow rate of 0.6 ml/min before the start of experiments. For no net flux experiments, five different concentrations of DA (0, 5, 10, 20, and 40 nM) in aCSF were perfused in random order through the dialysis probe. Each DA concentration was perfused for 30 min, and then 2|10 min samples were collected. Following completion of the no net flux experiments, normal aCSF was again perfused through the probe for 30 min, allowing for a period of equilibration. Consecutive 15 min samples were then collected. Three baseline samples were then collected followed by changing the perfusion buffer to aCSF containing 60 mM KCl (NaCl concentration was reduced accordingly to maintain osmolarity) and four samples were obtained. The perfusion buffer was switched back to normal aCSF and three additional samples were collected. After the experiments, mice were killed by pentobarbital overdose and their brains were removed. The midbrain and left dorsal striatum were dissected out, frozen on dry ice and stored at 280 for tissue DA content determination. The right forebrain was frozen on dry ice, and 20 mm sections were obtained on a cryostat for the histological verification of probe location. No net flux data were analyzed as described [25]. The net flux of DA through the probe (DAin-DAout) was calculated and plotted against the concentration of DA perfused (DAin). The following parameters were calculated from the resulting linear function. The y axis intercept, corresponding to zero DA perfused through the probe is the dialysate DA concentration in a conventional microdialysis experiment. The x axis intercept corresponds to the point of equilibrium where there is no net flux of DA through the probe and provides an unbiased estimate of extracellular DA concentration. The slope of the regression line corresponds to the extraction fraction (Ed) which has been shown to be an indirect measure of DA uptake [25,26]. The resulting regression lines were compared by Fisher's test using GraphPad Prism software. 
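As a concrete illustration of the no net flux analysis just described, the short Python sketch below fits the regression of net flux (DAin - DAout) on DAin and reads off the three parameters used in this study: the y-intercept (whose magnitude corresponds to the conventional dialysate DA level), the x-intercept (the unbiased estimate of extracellular DA), and the slope (the extraction fraction, Ed). The numbers are hypothetical and the variable names are ours.

import numpy as np

# Hypothetical dialysate data (nM): DA perfused through the probe (DA_in)
# and DA recovered in the dialysate (DA_out).
da_in = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
da_out = np.array([4.0, 6.1, 8.3, 12.5, 21.0])

# Regress net flux (DA_in - DA_out) on DA_in, as described above.
net_flux = da_in - da_out
slope, intercept = np.polyfit(da_in, net_flux, 1)

extraction_fraction = slope               # Ed: indirect index of DA uptake
basal_dialysate = -intercept              # magnitude of the y-intercept: conventional dialysate DA
extracellular_da = -intercept / slope     # x-intercept: point of no net flux through the probe

print(f"Ed = {extraction_fraction:.2f}")
print(f"Basal dialysate DA ~ {basal_dialysate:.1f} nM")
print(f"Estimated extracellular DA ~ {extracellular_da:.1f} nM")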
In the conventional microdialysis experiments investigating the effects of 60 mM KCl, the average of the three baseline samples was calculated, and all the DA concentrations were expressed as % of that baseline. Differences between control and Pten KO animals in the appropriate variables were assessed by comparing both groups using a Student's t-test. DA was determined by HPLC coupled to electrochemical detection. The chromatographic system consisted of a CMA/200 refrigerated microinjector (CMA microdialysis, North Chelmsford, MA), a BAS PM-80 pump (BAS, West Lafayette, IN) and a BAS LC-4C amperometric detector. The mobile phase (0.15 M sodium phosphate, 2.24 mM sodium octanesulfonic acid, 0.94 mM EDTA, adjusted to pH 5.0 and containing 11% methanol (vol/vol) was filtered through a 0.22-mm nylon filter, degassed by a BAS online degasser, and pumped through the system at a flow rate of 0.6 ml/min. DA was separated in a BAS C18 column (100 mm62.0 mm63 mm) and detected on a glassy carbon working electrode at an oxidation potential of +700 mV vs. Ag/AgCl. Dialysate DA levels were quantified by external standard curve calibration, using peak heights for quantification. All the reagents used for the mobile phase were analytical grade. Under these conditions, retention time for DA was 3 min, and the limit of detection was 0.25 nM. Tissue Levels of DA and metabolites The midbrain and the left dorsal striatum were dissected after the microdialysis experiments and stored at 280C for the determination of monoamine content. Tissue samples were homogenized by ultrasonication in 20 volumes of ice cold 0.05N HClO 4 containing 0.5 mM of DHBA as the internal standard. Homogenates were centrifuged at 22000 g for 10 min at 4C. Aliquots from the supernatant were analyzed for monoamine content by HPLC coupled to electrochemical detection. The chromatographic system was similar to the one described for microdialysis experiments except for the column (10061 mm C18,5 um particle size, BAS) and the mobile phase (35.2 mM sodium phosphate, 26.4 mM citric acid, 1.66 mM sodium octanesulfonic acid and 0.1 mM EDTA adjusted to pH 4.2; containing 6% methanol and 0.4% tetrahydrofurane, vol/vol). The concentrations of monoamines were calculated against an external standard curve, normalized by the internal standard and corrected by the protein concentration in the homogenate. 6-Hydroxydopamine (6OHDA) Lesions Mice were anesthetized with Avertin, and placed in a stereotaxic frame. 6OHDA (5.0 mg/ml in 0.9% NaCl/0.02% ascorbate) was injected using a microliter syringe at a rate of 0.5 ml/min for a total dose of 15.0 mg/3 ml. Injection was performed into the striatum at coordinates AP: +0.09 cm; ML: 60.22 cm; DV: 20.25 cm relative to bregma. After 2 min, the needle was withdrawn slowly. For amphetamine-induced rotation, mice were injected with (+) methamphetamine HCl (2.5 mg/kg i.p. injection) and placed in a rotometer (Accuscan, Columbus, OH) for 120 min. Baseline amphetamine rotation tests were performed 7 days before 6OHDA lesions and the results used to assign the side of the subsequent 6OHDA lesion. Mice exhibiting net clockwise turns were lesioned in the left striatum, while mice exhibiting net counter-clockwise turns were lesioned in the right striatum. Following the 6OHDA lesion, amphetamine-induced rotational behavior was tested 14 and 28 days after lesion. 
The number of clockwise and counter-clockwise turns was counted and expressed as the number of net rotations per 120 minutes to the lesioned (ipsilateral) hemisphere. Statistical analyses Unless otherwise stated, statistical analyses between control and Pten KO animals were performed by using a Student's t test with values of *p,0.05 considered significant. Pten deletion is specific for cells expressing the dopamine transporter (DAT) To establish a mouse model carrying Pten deletion in cells expressing DAT, we crossed Pten loxP mice with a Slc6a3 Cre transgenic line. Of the offspring generated, we used Pten loxP/loxP/Cre/wt as knockouts (KO) and Pten wt/wt/Cre/wt as experimental controls. Pten KO mice were born at the expected Mendelian ratio and were visibly indistinguishable from control mice. To confirm Cre-mediated recombination in Pten loxP/loxP/Cre/wt mice we used PCR and immunological strategies. As shown in figure 1A, Pten exon5 deletion was specific to the SNc, VTA and olfactory bulb in adult animals (delta5 band), with no detectable leakage in other brain areas [15]. In addition, the extent of Pten inactivation in KO mice was examined by immunocytochemistry using antibodies against TH, PTEN and phospho-AKT. In KO animals the loss of PTEN immunoreactivity in the ventral mesencephalon was associated with increased TH and phospho-AKT staining ( Figure 1B). As PTEN is a tumor suppressor gene and deregulation of this pathway has been associated with the formation of multiple tumor types, the animals used for morphological evaluations in this study were examined for the existence of brain tumors. Sections through the entire brain did not show evidence of neoplastic transformation in any of the brains analyzed (data not shown). Pten deletion leads to neuronal hypertrophy and increased number of TH positive neurons and dendrites in the ventral midbrain DA midbrain neurons are first generated near the midbrainhindbrain junction and migrate radially to their final position in the ventral midbrain [27,28]. Tyrosine hydroxylase (TH), the ratelimiting enzyme in the biosynthetic pathway of cathecholamines, can be first detected in the mouse at embryonic day (E) 11.5, suggesting initiation of DA differentiation [27]. In contrast, DAT gene transcripts are not detected in the mouse ventral mesencephalon until DA axons reach the target at about E15, suggesting that during ontogeny, DA synthesis, and high-affinity uptake develop asynchronously and in a non-correlated fashion [27]. In DAT-Cre conditional transgenic mice, Cre recombinase activity is induced shortly after DAT induction. In a previous report we first detected DAT-Cre mediated beta galactosidase expression in a conditional Rosa26LacZ reporter mice at embryonic day 17 [15], suggesting excision of Pten exon5 at around this developmental point. Stereological cell counts of TH positive profiles in the substantia nigra and VTA were significantly elevated already at postnatal day two (data not presented), indicating an early expression of anatomical changes mediated by Pten ablation. A common phenotype associated with the loss of PTEN in neuronal populations is an enlargement of neuronal soma size, increased cell proliferation, inhibition of apoptosis, and the expression of thickened axonal and dendritic processes. We performed detailed morphological analyses of brain structures affected by the mutation in adult animals. PTEN ablation in DA neurons (n = 4) induced a significant increase in SNc and VTA total volume ( Fig. 2A and B; Fig. 
3A) as compared to controls (n = 4). At a cellular level, this increased regional volume was mainly attributed to a significant increase in the number of neurons (Fig. 3B), as well as an increase in neuronal soma size ( Fig. 2A and 2B insets and 3C). Fiber density measurements also revealed a significant increase in the number of dendritic processes in the substantia nigra pars reticulata (SNr, Fig. 3D). In many cases, dendritic processes were so close together and thickened in the SNr that at low magnification may appear as cell bodies. Interestingly, changes in neuronal size and number were not accompanied by significant differences in striatal TH staining intensity ( Fig. 4A and 4C). Area measurements suggest a modest but significant enlargement of the caudal striatum in KO animals as compared to controls (Fig. 4B). Pten deletion alters the expression of postsynaptic markers in the striatum Over 90% of neurons of the striatum are medium spiny projecting neurons. Distinct subpopulations of medium spiny neurons can be recognized based on their projections and expression of various markers. Direct and indirect striatal projection neurons selectively express the DRD1 and DRD2 dopamine receptor subtypes respectively [29]. Messenger RNAs encoding DRD1 are selectively localized in striatal neurons projecting to the substantia nigra (direct pathway) and co-localize with dynorphin. Conversely, mRNAs encoding DRD2 are selectively localized in neurons projecting to the lateral globus pallidus (indirect pathway) and colocalize with enkephalin. Real time quantitative PCR analyses were performed to define if Pten ablation altered the markers of the direct and indirect striatal pathways. Striatal mRNA levels for PPE, PDyn, and DRD1 and DRD2 receptors revealed a A and B). The total area covered by TH positive staining was significantly increased in Pten KO animals when compared to controls. Increase in the size of the mesolimbic and nigrostriatal projecting structures are due not only to an increase in the number of DA neurons, but also to an increase in neuronal size and in dendritic outgrowth (see inserts in A and B). Sections correspond to Bregma -3.16 mm with reference to Franklin and Paxinos (1997). Scale bar corresponding to high magnification, 250 mm, and low magnification, 25 mm. doi:10.1371/journal.pone.0007027.g002 significant change in the expression of markers present in medium spiny neurons projecting to the substantia nigra (direct pathway). DRD1 and PDyn mRNA levels were significantly upregulated in KO animals as compared to controls ( Fig. 5A and B). In contrast striatal mRNA expression levels for DRD2 and PPE did not differ between control and mutant animals ( Fig. 5A and B). As striatal dynorphin positive neurons form connections with nigral DA neurons, we investigated the effects of the mutation on nigral BDNF mRNA levels. BDNF messenger levels were significantly increased in Pten KO animals (Fig. 5C). Extracellular DA Levels in Dorsal Striatum and correlation to locomotor activity As previous studies have suggested levels of enkephalin and dynorphin are oppositely modulated by DA transmission, we next investigated if DA levels in the striatum were altered by Pten ablation in DA cells. We used the technique of in vivo microdialysis to investigate the consequences of genetic deletion of Pten in DA cells on basal extracellular DA dynamics as well as on potassiumevoked DA release. 
Quantitative no net flux microdialysis revealed no significant differences in basal dialysate levels (y-intercept) or in the estimated extracellular concentration (x-intercept) of DA in the dorsal striatum ( Figure 6A). Moreover, the DA extraction fraction, calculated as the slope of the no net flux regression line was unchanged in KO mice, suggesting that deletion of Pten in DA neurons did not alter the clearance of extracellular DA by the DA transporter [26]. Conventional microdialysis revealed a marked increase in basal dialysate DA levels in response to K + evoked depolarization (Fig. 6C). No significant difference between control and KO mice in KCl-evoked DA levels was observed in the dorsal striatum ( fig. 6C). Mice lacking PTEN expression in DA cells showed an increase in catecholamine content in both the midbrain containing the DA cell bodies ( fig. 6D) and in the terminal region in the dorsal striatum ( fig. 6B). This increase was specific for DA and its metabolites since 5HT and 5HIAA levels were not altered (data not presented). Consistent with the dialysis data, no changes were observed in 3MT, a DA metabolite of primarily extracellular origin. In accordance with striatal DA release levels, no differences in locomotor activity were observed between control (n = 11) and Pten KO (n = 15) animals during exposure to a novel environment (Fig. 7). Pten influences neuroprotection after a neurotoxic insult A unilateral injection of 6OHDA into the striatum results in progressive loss of DA neurons in the ventral mesencephalon, and an imbalance in the levels of DA and its receptors between the two striatae [30]. We found that Pten deletion protects DA neurons during this partial 6OHDA lesion and prevents fiber loss in the lesioned striatum when compared to control animals. Four weeks after the lesion, control 6OHDA-treated mice showed an extensive loss of TH positive SNc neurons in the ipsilateral or lesioned nigral region ( Fig. 8A and B). Stereologic cell counts in control animals showed a 63% survival rate of TH positive neurons when compared to the contralateral side. In contrast, Pten KO mice showed an 89% survival rate on the lesioned side as compared to the contralateral nigra (Fig. 8C). Pten deletion also provided significant protection against axonal loss in the lesioned striatum. Four weeks after the lesion, quantification of TH+ fiber density in the lesioned striatum showed a significant decrease in optical density levels in control animals when compared to Pten KO animals. Significant differences in optical density were observed in rostral, medial and caudal sections (Fig. 8D). The morphological preservation of the dopaminergic system observed in Pten KO animals was paralleled by a significant decrease in behavioral impairment developed after this partial 6OHDA striatal lesion. Unilateral 6OHDA striatal lesions result in an imbalance in the levels of DA between the two striatae. As DA stores are depleted due to the loss of DA in axon terminals, the injection of drugs that act to release DA, such as amphetamine, will induce rotational behavior towards the denervated striatum. We examined the effects of Pten deletion on amphetamine-induced rotational asymmetry at different time points after 6OHDA lesioning. Amphetamine-induced rotational activity was recorded for 60 min following an injection of 2.5 mg/kg (+) methamphetamine HCl. 
The number of clockwise and counterclockwise turns was counted and expressed as the number of net rotations to the lesioned (ipsilateral) hemisphere (net rotations to ipsilateral hemisphere = ipsilateral rotations - contralateral rotations). KO animals exhibited a significant reduction in ipsilateral rotational behavior when compared to control animals at 2 and 4 weeks after the lesion (Fig. 9).

Discussion

Mice with targeted ablation of Pten in DA neurons provide a unique opportunity to study the correlation between deregulated PTEN signaling and specific changes in DA neurotransmission. While the most described function of PTEN is mediated by the inhibition of cell-survival promoting PI3K/Akt-dependent signals, recent studies have shown that PTEN regulates diverse functions in DA neurons, and its relationship to Parkinson's disease and drug addiction has been highlighted. In this study, we have further characterized PTEN function specific to postmitotic mouse mesencephalic DA neurons. We generated a knock-in mouse expressing Cre recombinase under the transcriptional control of the endogenous DAT promoter, to mediate restricted DNA recombination events [15]. Pten was selectively deleted in DA neurons in parallel with activation of the dopamine transporter at embryonic day 15. In addition, to allow bicistronic mRNA expression encoding both DAT and Cre, we targeted the 3' untranslated region (3'UTR) of the DAT gene, where the Cre gene is preceded by an internal ribosomal entry site (IRES). Pten KO mice showed an enlarged ventral mesencephalon, neuronal hypertrophy and increased numbers of DA neurons. Despite the robust trophic effects accompanied by a significant increase in overall striatal DA tissue content, extracellular DA levels in the striatum remained unchanged between control and Pten KO mice, and potassium-evoked DA release was not elevated in Pten KO animals. However, DRD1 and PDyn mRNA levels were significantly elevated in striatonigral projecting neurons. PTEN inactivation protected DA neurons during exposure to a neurotoxin. This initial study of PTEN deregulation selective to DA neurons is aimed at describing the state of dopaminergic transmission and molecular adaptations of the mesostriatal and mesolimbic dopaminergic systems. The effects of PTEN deletion in DA neurons, as well as postsynaptic regions, will be discussed in detail below.

Disruption of PTEN activity in DA neurons during postmitotic developmental growth

Cell death by apoptosis is a normal developmental event occurring in most neuronal populations, and it is a determinant of the eventual size of a neuronal population. Developmental cell death is often regulated by the trophic synaptic support that neurons receive from their postsynaptic targets [31,32], as well as from afferent projections [33]. Pruning of DA neurons in the mouse ventral midbrain is essentially a postnatal event, with the number of apoptotic neurons peaking within the first days of life [31]. A second peak of apoptotic death among dopaminergic neurons occurs at postnatal day 14, a developmental period corresponding to competition for synapse formation [34]. In addition, and consistent with the model of target-derived support, excitotoxic striatal lesions during development result in a reduced number of nigro-striatal DA neurons. This decrease occurred in spite of the axon-sparing nature of the lesion [35,36].
Thus, several lines of evidence suggest that trophic factors, synaptic interactions and apoptotic events may play a key role in shaping the anatomy of the ventral mesencephalon. In this study, we have shown that Pten deletion in differentiated DA neurons causes a significant increase in the number and size of surviving neurons in both the mesolimbic and nigrostriatal projecting pathways. A known function of PTEN is antagonizing cell survival signals mediated by Akt-dependent pathways. Akt, or protein kinase B, exerts its role in promoting cell survival and growth by targeting anti-apoptotic downstream signals [37,38]. Because at the time of Pten deletion DA neurons have already completed mitosis and phenotypic determination, it is unlikely that the reported increase in DA neurons is due to an increase in newly formed neurons. It is thus likely that PTEN ablation preserves DA neurons that would normally undergo apoptosis due to the lack of target support, by repressing the initiation of apoptotic pathways.

While Pten deletion does not alter DA release, it induces permanent changes in striatal postsynaptic markers

Ablation of PTEN in DA neurons not only caused robust anatomical changes in the ventral mesencephalon, but also affected functional connections between DA neurons and target areas. Interestingly, PTEN ablation did not generate changes in TH optical density in the striatum or nucleus accumbens, suggesting an equal distribution of dopaminergic terminals between Pten KO and control animals. In vivo microdialysis studies showed no differences in basal extracellular DA levels or in the dynamics of neurotransmitter clearance in the dorsal striatum. In addition, potassium-evoked depolarizations did not show an increase in DA release in Pten KO animals when compared to controls. While no changes in DA release or uptake could be detected by microdialysis, KO animals showed a significant increase in total DA tissue content in the dorsal striatum, an effect accompanied by a significant increase in striatal prodynorphin and DRD1 mRNA expression levels. As previous studies have shown that enhancement of DA neurotransmission often results in elevated dynorphin and DRD1 mRNA levels in direct striatal projecting neurons, the imbalance between the direct and indirect striatal projection pathways observed in Pten KO animals could be explained by subtle but long-term changes in DA dynamics. In addition, we cannot rule out the possibility that the molecular changes specifically observed in dynorphin positive medium spiny neurons may be due to the direct synaptic interaction with DA neurons in the ventral mesencephalon. Medium spiny neurons may depend on trophic support and the availability of synaptic connections with their target, the SN, to define their molecular profile or even to survive during development. As Pten deletion creates a very robust change in the substantia nigra, it is possible that the nigra may be in a better position to support afferent populations, such as the medium spiny neurons. The increased BDNF mRNA levels in the ventral mesencephalon of Pten KO animals suggest that striatal afferents may have increased access to trophic support in KO animals. As locomotor activity and exposure to a novel environment are closely associated with mesocortical and mesolimbic activity [39,40], we compared the locomotor responses of control and Pten KO animals when exposed to a novel environment. Interestingly, and despite higher DA content in the striatum, Pten KO animals did not behave differently than controls when exposed to a novel environment. Increased DRD1 and dynorphin levels in medium spiny neurons may enhance the inhibition of DA neurons in the SNc, thereby normalizing overall DA release in the striatum.

Figure 9 (caption). The morphological preservation of the Pten-deficient nigrostriatal system after 6OHDA treatment correlates with functional recovery after the lesion. To determine the functionality of the nigrostriatal system after the lesion, we examined methamphetamine-induced rotations. Methamphetamine treatment increases the extracellular availability of endogenous DA in the striatum, and it causes animals with a partial DA depletion to rotate towards the lesioned side (ipsilateral), due to the imbalance in DA release between the striata. Strong ipsilateral rotational behavior was observed in control animals (n = 8) after treatment with (+) methamphetamine HCl (2.5 mg/kg) at 14 and 28 days after the lesion. Ipsilateral rotational behavior was significantly reduced in Pten KO animals (n = 10). All data are mean ± SEM. *p < 0.05, Student's t-test. doi:10.1371/journal.pone.0007027.g009

Suppressing PTEN expression protects DA neurons against exposure to a neurotoxin

The relationship between PTEN and Parkinson's disease has been emphasized in several studies, often in reference to mitochondrial function. PTEN downregulation in cultured hippocampal neurons inhibits the release of cytochrome c during toxic insults [41]. PTEN directly regulates the activity of PTEN-induced putative kinase 1 (PINK1), a protein kinase that localizes to mitochondria, where it inhibits cytochrome c release, and that is known to be mutated in certain forms of familial Parkinson's disease [42]. In addition, as an upstream modulator of Akt, the phosphatase PTEN has been shown to play a role in the regulation of survival signaling during neuronal injury. A previous study overexpressed a constitutively active form of the oncoprotein Akt/PKB in neurons of the ventral midbrain of adult mice using adeno-associated virus transduction [12]. The authors reported a significant increase in cell size and number of TH positive profiles in the SNc on the injected side. In addition, TH immunostaining showed higher density in the striatum ipsilateral to the adenoviral injection. There was also near complete protection of DA neuron cell bodies after intrastriatal 6OHDA lesion [12]. Our study provides further evidence that ablating PTEN activity leads to decreased dopaminergic neuronal death after a neurotoxic lesion. In agreement with the above-mentioned study, we showed that Pten deletion specific to DA neurons confers significant protection against 6OHDA-mediated toxicity [43,44]. PTEN-less DA neurons displayed a significant increase in survival rate as compared to control DA neurons. In addition, striatal TH density levels were significantly elevated in Pten KO animals as compared to controls. These data correlate with behavioral parameters, as methamphetamine-induced rotations were significantly decreased in KO animals, suggesting increased maintenance of DA release in the lesioned striatum. The demonstration that PTEN/Akt mRNA expression is modulated in response to different apoptotic inducers [7] strongly suggests the association of this pathway with neuronal survival and death.
However, in recent years investigations have shown cellular functions attributed to PTEN in various neuronal circuits go beyond interactions with the Akt pathway, and have strengthened the current view of this molecule as a potential therapeutic target. For example, it has been recently shown that PTEN can physically interact with serotonin receptors in dopamine neurons to modulate transmission [45,46]. Such interactions were able to affect the response of DA neurons to drugs of abuse, and therefore manipulation of the PTEN pathway could potentially affect the manner in which humans respond to reward and motivation. In addition, PTEN activity has been related to mitochondrial function in different neuronal systems, suggesting PTEN deregulation may provide a dual neuroprotective function in dopamine cells by the activation of survival pathways and maintenance of mitochondrial function. This molecule may thus have activity not only on cell death/survival, but also on other brain activities. Further studies defining the function of PTEN in dopamine cells may provide the molecular basis for the development of novel therapeutic strategies to treat CNS diseases.
In Vitro Activity of Cefotetan against ESBL-Producing Escherichia coli and Klebsiella pneumoniae Bloodstream Isolates from the MERINO Trial

ABSTRACT Extended-spectrum-beta-lactamase (ESBL)-producing Enterobacterales continue to pose a major threat to human health worldwide. Given the limited therapeutic options available to treat infections caused by these pathogens, identifying additional effective antimicrobials or revisiting existing drugs is important. Ceftriaxone-resistant Escherichia coli and Klebsiella pneumoniae containing CTX-M-type ESBLs or AmpC, in addition to narrow-spectrum OXA and SHV enzymes, were selected from blood culture isolates obtained from the MERINO trial. Isolates had previously undergone whole-genome sequencing (WGS) to identify antimicrobial resistance genes. Cefotetan MICs were determined by broth microdilution (BMD) testing with a concentration range of 0.125 to 64 mg/liter; CLSI breakpoints were used for susceptibility interpretation. BMD was performed using an automated digital antibiotic dispensing platform (Tecan D300e). One hundred ten E. coli and 40 K. pneumoniae isolates were used. CTX-M-15 and CTX-M-27 were the most common beta-lactamases present; only 7 isolates had coexistent ampC genes. Overall, 98.7% of isolates were susceptible, with MIC50s and MIC90s of 0.25 mg/liter and 2 mg/liter (range, ≤0.125 to 64 mg/liter), respectively. MICs appeared higher among isolates with ampC genes present, with an MIC50 of 16 mg/liter, than among those containing CTX-M-15, which had an MIC50 of only 0.5 mg/liter. Isolates with an ampC gene exhibited an overall susceptibility of 85%. Presence of a narrow-spectrum OXA beta-lactamase did not appear to alter the cefotetan MIC distribution. Cefotetan demonstrated favorable in vitro efficacy against ESBL-producing E. coli and K. pneumoniae bloodstream isolates.

IMPORTANCE Carbapenem antibiotics remain the treatment of choice for severe infection due to ESBL- and AmpC-producing Enterobacterales. The use of carbapenems is a major driver of the emergence of carbapenem-resistant Gram-negative bacilli, which are often resistant to most available antimicrobials. Cefotetan is a cephamycin antibiotic developed in the 1980s that demonstrates enhanced resistance to beta-lactamases and has a broad spectrum of activity against Gram-negative bacteria. Cefotetan holds potential to be a carbapenem-sparing treatment option. Data on the in vitro activity of cefotetan against ESBL-producing Enterobacterales remain scarce. Our study assessed the in vitro activity of cefotetan against ceftriaxone-nonsusceptible blood culture isolates obtained from patients enrolled in the MERINO trial.
ESBL-producing Enterobacterales are a contemporary threat to the health and well-being of individuals globally (1,2). Approximately 200,000 infections and 9,000 deaths due to ESBL-producing Enterobacterales infection in U.S. hospitals occur annually (3). Treatment options for ESBL-producing Gram-negative pathogens are limited compared to those for non-ESBL producers. Indeed, coexisting non-beta-lactamase resistance genes are often identified in these isolates (e.g., gyrA and parC mutations mediating quinolone resistance in Escherichia coli ST131) (4). Carbapenems have been regarded as the treatment of choice for infection due to ESBL-producing Enterobacterales (5). The MERINO trial failed to demonstrate noninferiority, with respect to 30-day all-cause mortality, of piperacillin-tazobactam compared to meropenem for treatment of bloodstream infection due to ceftriaxone-resistant E. coli and Klebsiella pneumoniae (6). Rising use of carbapenems, paired with a rising incidence of carbapenem-resistant organisms globally, has prompted a search for suitable therapeutic alternatives to treat these infections (7). Cefotetan is a cephamycin antibiotic developed in the 1980s (8). Its unique structure confers enhanced resistance to beta-lactamases and a broad spectrum of activity against Gram-negative bacteria. It is administered via the intravenous and intramuscular routes and has been approved for use in urinary tract, lower respiratory tract, skin and soft tissue, gynecologic, intra-abdominal, and bone and joint infections. Early in vitro studies indicated that cefotetan achieved an MIC90 of 4 mg/liter against enterobacteria (9). Moreover, a randomized clinical trial of cefotetan versus cefoxitin or moxalactam for treatment of intra-abdominal infection demonstrated superior infection clearance and bacteriologic response with cefotetan (10). Cephamycins, including cefotetan, are not efficiently hydrolyzed by ESBLs and may prove to be a therapeutic alternative to carbapenems. Data on the in vitro activity of cefotetan against ESBL-producing Enterobacterales remain scarce (11). We aimed to assess the in vitro activity of cefotetan against ceftriaxone-nonsusceptible blood culture isolates obtained from patients enrolled in the MERINO trial (6).

RESULTS

One hundred fifty isolates (110 E. coli and 40 K. pneumoniae) from the MERINO trial were collected, and their cefotetan MICs were determined by broth microdilution (BMD). Overall, 98.7% were susceptible according to the CLSI cefotetan susceptible breakpoint, with MIC50s and MIC90s of 0.25 mg/liter and 2 mg/liter (range, ≤0.125 to 64 mg/liter), respectively. Table 1 presents the cefotetan MIC distribution and percent susceptible according to species and beta-lactamase type. The MIC distributions of E. coli and K. pneumoniae isolates appeared similar, each registering one nonsusceptible isolate (64 mg/liter and 32 mg/liter, respectively). The resistant E. coli isolate had blaCTX-M-27 identified, and the intermediate K. pneumoniae isolate had blaSHV-106 and blaDHA-1 present. Overall, MICs appeared higher among isolates with ampC genes present, with an MIC50 of 16 mg/liter, than among those containing CTX-M-15, which had an MIC50 of only 0.5 mg/liter. Indeed, isolates with an ampC gene exhibited an overall susceptibility of 85%. Presence of an OXA beta-lactamase did not appear to alter the cefotetan MIC distribution (Fig. 1). The MICs for all the trays testing ATCC strains fell within acceptable ranges.
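As a side note on how summary statistics of this kind are derived, the short sketch below computes an MIC50, an MIC90 and a percent-susceptible figure from a list of MICs. The MIC values are invented for illustration, and the cefotetan breakpoints used (susceptible ≤16, intermediate 32, resistant ≥64 mg/liter) are stated here as an assumption rather than quoted from this study.

```python
import numpy as np

# Illustrative cefotetan MICs (mg/liter) for a set of isolates; values are
# made up for demonstration, not taken from the MERINO isolates.
mics = np.array([0.125, 0.125, 0.25, 0.25, 0.25, 0.5, 0.5, 1, 2, 2, 4, 16, 32, 64])

def mic_percentile(values, pct):
    # MIC50/MIC90 are conventionally the lowest MICs inhibiting at least
    # 50% / 90% of isolates: sort and take the value at the rounded-up rank.
    ordered = np.sort(values)
    rank = int(np.ceil(pct / 100 * len(ordered))) - 1
    return ordered[rank]

# Assumed CLSI cefotetan breakpoint for Enterobacterales: susceptible <= 16.
S_BREAKPOINT = 16

mic50 = mic_percentile(mics, 50)
mic90 = mic_percentile(mics, 90)
pct_susceptible = 100 * np.mean(mics <= S_BREAKPOINT)

print(f"MIC50 = {mic50} mg/liter, MIC90 = {mic90} mg/liter")
print(f"{pct_susceptible:.1f}% susceptible (breakpoint <= {S_BREAKPOINT} mg/liter)")
```

Run against the invented list, this returns an MIC50 of 0.5 mg/liter, an MIC90 of 32 mg/liter and roughly 86% susceptibility, illustrating how a small tail of AmpC-type isolates can inflate the MIC90 without moving the MIC50.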
Purity and colony count checks demonstrated pure growth and colony counts ranging from 1 to 9 colonies.

DISCUSSION

We demonstrated that almost all ESBL-producing E. coli and K. pneumoniae isolates from our study were susceptible to cefotetan in vitro. Unsurprisingly, ampC-carrying isolates showed higher MICs overall; in vitro resistance to cefoxitin is used as a phenotypic marker to infer the presence of ampC, and there exists a structural similarity between cefotetan and cefoxitin. Among AmpC producers, cefoxitin MICs are generally higher than those of cefotetan (12). Isolates harboring the DHA-1 enzyme appeared to have higher cefotetan MICs than those harboring CMY enzymes, although isolate numbers were small. It is unclear whether there is a biological or clinically significant difference in relation to cephalosporinase activity between the two enzyme types. Isolates producing common CTX-M and narrow-spectrum OXA-type beta-lactamases appeared highly susceptible to cefotetan. Although not a new antimicrobial class, cephamycins have demonstrated promising in vitro potency and clinical efficacy against invasive isolates that are resistant to third-generation cephalosporins (13)(14)(15). Previous concerns have been put forward over the use of cephamycins for infections with ESBL-producing organisms and the development of outer membrane protein (OMP) mutations and/or plasmid-encoded AmpC enzymes during exposure (11). The true significance of this finding from case reports remains uncertain. Cefotetan may be a suitable carbapenem-sparing treatment option for multidrug-resistant Enterobacterales, especially those not harboring an ampC enzyme. This agent could also be formulated with an inhibitor to mitigate the effect of AmpC. Cefotetan achieves high plasma levels after intravenous and intramuscular injection and is typically administered twice daily as a 30-min infusion. It achieved a mean plasma concentration of 158 mg/liter at 30 min after a 1-g dose given intravenously to healthy adults. Cefotetan has shown very little in vitro activity against Pseudomonas and Acinetobacter species (MIC90s, >32 and >32 to 256 mg/liter, respectively) and wide variation in susceptibility against Enterobacter and Serratia species (MIC90s, 2 to 256 mg/liter and 0.5 to 64 mg/liter, respectively) (8). The lack of activity seen against non-lactose-fermenting Gram-negative organisms may explain why it has not been widely adopted for treatment of urinary tract infection. In the era of emerging multidrug-resistant bacteria, use of pathogen-directed therapies (as opposed to a "cure-all" approach with a single agent) based on species or resistance type may be a useful strategy. There are a few limitations to this study. Selection of bacterial isolates was restricted to include a subset of nonrandomly selected representative isolates obtained from the MERINO trial. These isolates may not be truly representative of all the resistance mechanisms seen in third-generation-cephalosporin-resistant E. coli and K. pneumoniae globally. Antimicrobial susceptibility testing was performed using an automated digital antibiotic dispensing platform (Tecan D300e; Tecan Trading AG, Switzerland). In precision studies assessing the performance of this platform with Enterobacteriaceae, essential and categorical agreement levels were 96.8% and 98.3%, respectively (16). This finding supports the accuracy of this approach for use in BMD testing.
The clinical efficacy of cefotetan for infection due to ESBL producers remains uncertain but warrants further study.

Conclusion. Cefotetan demonstrated favorable in vitro efficacy against ESBL-producing E. coli and K. pneumoniae bloodstream isolates, with MIC50s and MIC90s of 0.25 mg/liter and 2 mg/liter (range, ≤0.125 to 64 mg/liter), respectively. Higher MICs were seen in isolates coharboring an ampC beta-lactamase. Cefotetan may have a place for therapeutic use as a carbapenem-sparing therapy for infection due to these organisms.

MATERIALS AND METHODS

Bacterial isolates. The MERINO trial recruited patients with bloodstream infections due to third-generation-cephalosporin-nonsusceptible E. coli and K. pneumoniae in nine countries from February 2014 to July 2017 (6). All blood culture isolates from enrolled patients were stored and had previously undergone whole-genome sequencing (WGS) to detect antimicrobial resistance genes. A subset of isolates that had at least one ESBL gene identified were chosen to be included in this study. Ultimately, isolates containing different combinations of CTX-M ESBLs, narrow-spectrum OXA and SHV enzymes, and AmpC beta-lactamases were used. Each isolate was subjected to broth microdilution (BMD) testing for cefotetan MIC determination.

Antibiotic preparation. Cefotetan powder (Glentham Life Sciences, GA5476) was dissolved in DMSO (Thermo Fisher, D/4121/PB08) at a concentration of 10,000 mg/liter. This stock solution was loaded directly into the Tecan D300e (Tecan Trading AG, Switzerland) T8 print cartridge.

Quality control. Two ATCC strains were used to check the performance of each batch of trays: Escherichia coli ATCC 25922 (target MIC, 0.125 mg/liter) and Staphylococcus aureus ATCC 29213 (target MIC, 8 mg/liter) (see Table 5A-1 in reference 17). A separate tray was prepared to check E. coli ATCC 25922 at lower concentrations, ranging from 0.004 to 2 mg/liter.

Isolate preparation. Test and reference isolates were stored in brain heart infusion (BHI) broth (BD, Bacto 237500) containing 30% glycerol (Chem-Supply, GA010) at -80°C. A scraping from the frozen vials was streaked onto 5% Columbia horse blood agar (HBA) (Edwards, MM1085) and incubated at 37°C in ambient atmosphere for 18 to 24 h. A single colony of each was subcultured to fresh HBA and incubated under the same conditions. Two or three colonies of each isolate were collected using a sterile rayon swab and resuspended in sterile normal saline (0.9% NaCl; Chem-Supply, US008779). Turbidity was adjusted to a 0.5 McFarland standard as read using a DensiCHEK Plus (bioMérieux, France). Five microliters of inoculated saline was added to 1 ml of cation-adjusted Mueller-Hinton broth (CAMHB) (BD, BBL 211322) and vortexed, to achieve an approximate concentration of 5 × 10^5 CFU/ml. Fifty microliters of inoculated broth was dispensed into each well of a single row on the BMD tray using an electronic repeat-dispense pipette. Purity and colony count checks were performed by collecting a 1-µl loop of broth from the positive-control well for each isolate and streaking onto half of an HBA plate. A second 1-µl sample from the same well was diluted in 100 µl of sterile saline, and 1 µl was streaked on the other half of the plate. Plates showing pure growth on the undiluted streak and 1 to 10 colonies on the diluted streak passed purity and colony count checks.
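A quick back-of-the-envelope check shows that the inoculum preparation and the colony-count acceptance window above are mutually consistent. The sketch assumes that a 0.5 McFarland suspension corresponds to roughly 1.5 × 10^8 CFU/ml, a conventional approximation rather than a value reported in this study; the microliter-scale volumes follow the procedure described.

```python
# Back-of-the-envelope check of the inoculum preparation described above.
# Assumed value: a 0.5 McFarland suspension of Enterobacterales corresponds to
# roughly 1e8-2e8 CFU/ml; 1.5e8 CFU/ml is used here purely for illustration.

mcfarland_05 = 1.5e8            # CFU/ml (assumption)
dilution_into_broth = 5 / 1000  # 5 microliters into 1 ml of CAMHB

broth_density = mcfarland_05 * dilution_into_broth
print(f"inoculated broth: ~{broth_density:.1e} CFU/ml (target ~5e5 CFU/ml)")

# Colony-count check: 1 microliter of broth diluted in 100 microliters of
# saline, then 1 microliter streaked -> expected colonies on the plate.
cfu_per_ul = broth_density / 1000          # CFU per microliter of broth
diluted = cfu_per_ul / 100                 # after the ~1-in-100 dilution
expected_colonies = diluted * 1            # 1 microliter streaked
print(f"expected colonies on the diluted streak: ~{expected_colonies:.0f}")
```

With these assumptions the inoculated broth sits near 7.5 × 10^5 CFU/ml and the diluted streak should yield on the order of 8 colonies, which is why plates showing 1 to 10 colonies pass the check.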
UV, optical and near-IR diagnostics of massive stars

We present an overview of a few spectroscopic diagnostics of massive stars. We explore the following wavelength ranges: UV (1000 to 2000 Å), optical (4000-7000 Å) and near-infrared (mainly H and K bands). The diagnostics we highlight are available in O and Wolf-Rayet stars as well as in B supergiants. We focus on the following parameters: effective temperature, gravity, surface abundances, luminosity, mass loss rate, terminal velocity, wind clumping, rotation/macroturbulence and surface magnetic field.

Introduction

The development of sophisticated atmosphere codes combined with the regular access to multiwavelength observational data (from the X-rays to the radio range) allows improved determination of stellar and wind parameters of massive stars. This in turn affects our understanding of these objects. Since massive stars play key roles in different fields of astrophysics (being the progenitors of long-soft GRBs, the producers of most metals heavier than oxygen, important contributors to the release of mechanical energy in the interstellar medium...) it is crucial to be able to accurately constrain their properties. Here, we present a non-exhaustive overview of the main spectroscopic diagnostics used to determine the fundamental parameters of massive stars. We restrict ourselves to the UV, optical and near-infrared ranges. The diagnostics we present in the following apply to O and Wolf-Rayet stars as well as B supergiants.

Stellar parameters

In this section we present the main spectroscopic methods used to determine the stellar parameters: effective temperature, surface gravity, luminosity, surface abundances.

Effective temperature

The effective temperature of massive stars is usually derived using the ionization balance method. The principle relies on the computation of synthetic spectra from atmosphere models at different temperatures. Depending on the temperature, the ionization of the elements present in the atmosphere is different: the wind is more ionized for higher Teff. Consequently, the lines of ions of the same element but of different ionization states are also sensitive to the effective temperature. Comparing the strength of synthetic lines to observed lines thus yields the star's temperature (e.g. Herrero et al. 1992, Puls et al. 1996, Martins et al. 2002). In practice, lines from successive ions of the same elements must be observed. The most reliable indicators for O and Wolf-Rayet stars are the HeI and HeII lines. The classical diagnostics are HeI 4471 and HeII 4542. They are the strongest photospheric lines in most stars (HeII 4686 can be stronger than HeII 4542 but it is more sensitive to wind contamination). An illustration of their behaviour with Teff is given in Fig. 1. We see that increasing Teff reduces the HeI 4471 line strength and increases the HeII 4542 absorption. A number of complementary lines can be used to confirm and refine the estimate based on the previously mentioned lines: HeI 4026, HeI 4388, HeI 4712, HeI 4920, HeII 4200, HeII 5412. Note that the HeI singlet lines can be sensitive to subtle details of the modelling related to line-blanketing effects (e.g. Najarro et al. 2006). When the temperature drops below roughly 27000 K, helium is almost neutral in the atmosphere so that no HeII lines are detected. This is the range of mid- and late-B stars.
For those objects, one usually switches to the Si ionization balance traced by the following lines: SiII 4124-31, SiIII 4552-67-74, SiIII 5738, SiIV 4089, SiIV 4116 (e.g. Trundle et al. 2004). Depending on the temperature, either SiII and SiIII, or SiIII and SiIV lines are used. The temperatures derived from the optical have typical uncertainties of 500 to 2000 K depending on the quality of the observational data and on the temperature itself (uncertainties are larger when lines from one ionization state are weak). The ionization balance method can be applied to near-IR spectra of O stars. In the K-band, HeI and HeII lines are present above ∼30000 K. The strongest HeI line, at 2.058 µm, must be used very carefully because of its extreme dependence on line-blanketing effects (Najarro et al. 1997). The use of HeI 2.112 µm is preferred although the line is weaker and can be blended with CIII/NIII emission. The only HeII line in the K-band is HeII 2.189 µm. Repolust et al. (2005) have analyzed the same sample of stars independently with optical and near-IR spectra and found that the derived temperatures were consistent within the uncertainties. When only UV spectra are available, the determination of Teff is more difficult. One usually relies on the iron ionization balance. Line forests from FeIV (resp. FeV, FeVI) are indeed observed in the wavelength range 1600-1630 Å (resp. 1360-1380 Å, 1260-1290 Å). An illustration is given in Fig. 10 of Heap et al. (2006). The relative strength of these line forests provides the best Teff indicator, although the uncertainties are usually larger than those of optical determinations.

Surface gravity

The surface gravity is classically derived from optical spectroscopy. The wings of the Balmer lines are broadened by collisional processes (linear Stark effect) and are thus stronger in denser atmospheres, i.e. for higher log g (which causes larger pressure and thus more collisions). In practice Hβ, Hγ and Hδ are the main indicators, provided they are in absorption and/or their wings are not contaminated by wind emission. They are usually strong and well resolved. In the near-IR, the Brackett lines can play the same role. Again, only the wings have to be considered since they are sensitive to collisional broadening. Repolust et al. (2005) showed that the behaviour of the Balmer and Brackett lines with gravity was similar only in the far wings, the line cores having different variations (see Repolust et al. for a thorough discussion). In practice Brγ is the best gravity indicator in the K-band. Br10 and Br11 (H-band) can be used as secondary indicators.

Luminosity

Until recently, bolometric luminosities were derived from optical (or near-IR) photometry and bolometric corrections. For instance, one could obtain Lbol from

log(Lbol/Lsun) = -0.4 [ M_V + BC(Teff) - M_bol,sun ]

where M_V is the absolute magnitude, BC(Teff) the bolometric correction at temperature Teff and M_bol,sun the Sun's bolometric magnitude. This method requires the use of calibrations of bolometric corrections. Another related method consists in comparing directly absolute magnitudes (usually in the V band) to theoretical fluxes in the appropriate band convolved with the filter's response. Nowadays, SED fitting is becoming the standard way of deriving luminosities. In this process, spectrophotometry ranging from the (far-)UV to the infrared is used to adjust the global flux level of atmosphere models. Since the full SED is used, there is no need for bolometric corrections. In addition, the reddening can be derived simultaneously.
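To make the bolometric-correction route concrete, the short sketch below turns an absolute visual magnitude and a bolometric correction into log(Lbol/Lsun). The numerical inputs (M_V = -4.9, BC = -3.2) are assumptions picked to resemble a late-O dwarf, not values taken from any star discussed here.

```python
# Worked example of the classical bolometric-correction route, with assumed
# input values for a late-O dwarf (not taken from any specific star).
M_V = -4.9          # absolute visual magnitude (assumption)
BC = -3.2           # bolometric correction at the adopted Teff (assumption)
M_bol_sun = 4.74    # solar bolometric magnitude

M_bol = M_V + BC
log_L = 0.4 * (M_bol_sun - M_bol)   # log10(Lbol / Lsun)

print(f"M_bol = {M_bol:.2f}")
print(f"log(Lbol/Lsun) = {log_L:.2f}")
```

With these inputs one gets M_bol of about -8.1 and log(Lbol/Lsun) of about 5.1; the same star analysed through SED fitting should land at a consistent luminosity once the distance and the reddening are fixed.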
Any excess emission (due to dust for instance) can be identified and fitted with additional components. An example of such a fit is shown in Fig. 2. For both methods briefly presented above, the distance to the star must be known independently. Surface abundances Once the effective temperature, gravity and luminosity have been constrained, it is possible to derive the surface abundance of several elements using photospheric lines. The classical spectroscopic method consists in comparing synthetic spectra with different abundances to key diagnostic lines. Several NIII lines are also observed between 4630 and 4640Å in O and early B stars. They are rather strong but their modelling is still difficult and they should be treated with care. Low ionization lines are present in B stars while high ionization lines are observed in the earliest O stars. The same lines can be used to constrain the abundances of Wolf-Rayet stars. However, they are usually emitted in the wind and are observed in emission. The knowledge of the wind properties, especially the mass loss rate, is thus necessary to correctly derive stellar abundances. In O and B stars, the determination of surface abundances requires the knowledge of the microturbulence velocity. It is usually constrained from a few metallic lines, either by direct comparison of synthetic spectra of by measurement of equivalent width of synthetic profiles with different microturbulent velocity. The determination is usually done simultaneously with the abundance determination. The value of v turb is chosen to minimize the spread in abundance derived from several lines of the same ion (e.g. Dufton et al. 2005). In the UV range, lines from CNO and Si are usually formed in the wind and are used to constrain the mass loss rate and terminal velocity. A determination of the abundances from the optical lines is necessary to correctly derive the wind properties. There are several iron line forests (see also Sect. 2.1) that can be used to constrain the Fe content. If the relative strength of these line forests constrain T eff , their absolute strength is an indication of the iron composition. In the near-IR, the number of metallic lines is limited, especially in OB stars where the lines are weak. For stars with stronger wind (extreme O supergiants and Wolf-Rayet stars) a few features are available. In the K-band, the NIII doublet at 2.247-2.251 µm is a valid indicator of the nitrogen content ). The MgII 2.138-2.144 µm doublet is detected in the coolest stars. In the hottest stars, CIV lines are observed at 2.070, 2.079 and 2.084 µm. In the H-band, FeII 1.688 µm, SiII 1.691 µm and SiII 1.698 µm are used to constrain the iron and silicon abundance (Najarro et al. 2009). We now turn to the wind parameters of massive stars. We first present the determination of terminal velocities, then the mass loss rates and finally review the spectroscopic diagnostics of clumping. Terminal velocity The terminal velocity is the maximum velocity reached by a stellar wind at the top of the atmosphere. If the wind density is high enough, P-Cygni profiles are observed in several lines. The strongest ones are UV resonance lines. The origin of the blueshifted part of the P-Cygni profile is the Doppler shift associated with the wind outflow in front of the photospheric disk. Consequently, the measure of the blueward extent of this absorption gives a direct access to the terminal velocity. 
The terminal velocity can be defined as the velocity leading to the absorption up to the point where the line profile reaches the continuum (the edge velocity) or as the velocity producing the bluest complete (i.e. zero flux) absorption (the black velocity). The former is usually affected by additional small-scale (microturbulence) or large-scale (discrete absorption components) motions, so that the latter is usually adopted (e.g. Prinja et al. 1990). This definition is only valid for strong wind stars though: for thinner winds, the P-Cygni profiles are not saturated. The main UV diagnostics are the following: NV 1240, SiIV 1393-1403, CIV 1548-50, NIV 1718. Additional indicators are found in the FUSE range: OVI 1032-1038, CIII 1176. Other P-Cygni profiles can be found below 1000 Å but they are usually blended with interstellar molecular and atomic hydrogen absorption. When stars have strong winds (Ṁ ≥ 10^-5 M⊙ yr^-1) but their UV spectra are not available, other diagnostics can be used. In the optical, the Balmer lines (Hα, Hβ, Hγ, Hδ) and sometimes some HeI lines (e.g. HeI 4471) can have pure emission or P-Cygni profiles. In the latter case, the same method as in the UV is applied. For pure emission lines, the line width is usually related to the wind terminal velocity. Fitting such profiles with synthetic spectra computed from atmosphere models with different terminal velocities will provide an indirect measure of the terminal velocity. Similarly, in the near-IR, HeI 2.058 µm and HeI 2.112 µm have P-Cygni profiles in late-type WR stars or LBVs, and emission profiles in other strong-wind massive stars. We can proceed as for the optical Balmer lines to estimate terminal velocities. In case no spectroscopic diagnostic is available, the terminal velocity of a massive star can be estimated from the relation v∞ ≈ 2.25 [α/(1-α)] v_esc, where v_esc is the escape velocity and α the line force multiplier parameter of the CAK theory. In practice, v∞ ∼ 3 × v_esc for stars hotter than about 25000 K, and v∞ ∼ 1.5 × v_esc for stars cooler than this limit. This "bistability jump" is well known (Lamers et al. 1995), although recent studies tend to show that it is more a gradual decrease than a real jump (Crowther et al. 2006b, Markova & Puls 2008).

Mass loss rate

There are two main classes of spectroscopic diagnostics of mass loss rate: the P-Cygni resonance lines observed mainly in the UV range, and optical emission lines, mainly Hα. UV P-Cygni profiles are sensitive to the wind density times the ionization fraction of the ion responsible for the observed line. Since the density is directly related to the mass loss rate (ρ ∝ Ṁ/(R² v∞)), fitting such features provides constraints on Ṁ. The strength of these P-Cygni features allows determinations down to very low values of Ṁ (typically down to 10^-10 M⊙ yr^-1). This is especially important for the so-called 'weak wind stars' (Martins et al. 2004, Marcolino et al. 2009). The main drawback is that it requires a good knowledge of the ionization structure. All physical processes affecting this structure have to be included in model atmospheres to ensure accurate determinations. The most common features used are: NV 1240, SiIV 1393-1403, CIV 1548-50, HeII 1640, NIV 1718. PV 1118-28 can also be used provided X-rays and clumping are taken into account (see Sect. 3.3). An example of the fit of the CIV 1548-50 line is shown in Fig. 4.
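Since both classes of mass-loss diagnostics ultimately respond to the wind density, a minimal sketch of the smooth-wind density structure may help fix ideas. The stellar parameters, the beta exponent and the use of a simple beta velocity law below are assumptions chosen to resemble a typical O supergiant; none of the numbers come from the stars or fits discussed in this overview.

```python
import numpy as np

# Minimal sketch of the smooth-wind density structure underlying both the
# P-Cygni (rho) and the recombination-line (rho^2) diagnostics. All numbers
# are assumptions chosen to mimic a typical O supergiant.
MSUN_PER_YR = 6.3e25      # g/s corresponding to 1 solar mass per year
RSUN = 6.96e10            # cm

Mdot = 1e-6 * MSUN_PER_YR # mass loss rate of 1e-6 Msun/yr (assumption)
Rstar = 20 * RSUN         # stellar radius (assumption)
v_inf = 2.0e8             # terminal velocity, cm/s (2000 km/s, assumption)
beta = 1.0                # exponent of the beta velocity law (assumption)

r = np.linspace(1.01, 10, 200) * Rstar
v = v_inf * (1 - Rstar / r) ** beta        # beta velocity law
rho = Mdot / (4 * np.pi * r**2 * v)        # mass continuity

# Scattering lines roughly trace rho, recombination lines rho^2, which is
# why Halpha loses sensitivity much faster as Mdot decreases.
print(f"rho at 1.5 R*: {rho[np.argmin(abs(r - 1.5 * Rstar))]:.2e} g/cm^3")
print(f"rho at 5   R*: {rho[np.argmin(abs(r - 5.0 * Rstar))]:.2e} g/cm^3")
```

Because scattering lines scale roughly with ρ and recombination lines with ρ², halving Ṁ at fixed R and v∞ halves the former but divides the latter by four, which is the basic reason Hα loses sensitivity so quickly in the weak-wind regime.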
Other lines in the FUV range are available, but they are often contaminated by interstellar atomic and molecular hydrogen absorption (see also Sect. 3.1). The other main diagnostic of the mass loss rate is the Hα line in the optical (e.g. Puls et al. 1996). If the density is high enough, hydrogen recombination leads to Hα wind emission which adds to the underlying photospheric absorption. For strong winds, the emission completely dominates the line profile. Fig. 3 shows an example of a fit for an SMC B supergiant (Trundle et al. 2004). Since it is a recombination line, it depends on the square of the density (as opposed to the density for P-Cygni profiles). Consequently, its emission decreases quickly with density (and thus Ṁ). Hα then turns rapidly into a pure photospheric absorption profile from which no Ṁ determination is possible. This happens below ∼10^-8 M⊙ yr^-1. Hα is however less sensitive to ionization issues since hydrogen is almost completely ionized in massive star atmospheres. It is thus sometimes considered a better diagnostic (but again, only for strong wind stars). A secondary optical indicator is HeII 4686. In the case of Wolf-Rayet stars, the other Balmer lines (Hβ, Hγ, Hδ) are also in emission and are complementary indicators. In the near-IR range, the Brackett lines, especially Brγ, play the same role as the Balmer lines in the optical (Repolust et al. 2005). A rather strong line is Brα at 4.051 µm. Preliminary examples of the use of this line are shown in Puls et al. 2008.

Clumping

Several pieces of evidence indicate that the winds of massive stars are not homogeneous. Spectroscopically, the first indirect proof came from Hillier (1991) who realized that the red electron scattering wing of strong emission lines of Wolf-Rayet stars was overpredicted in homogeneous models. The inclusion of inhomogeneous winds by means of a volume filling factor approach led to a better agreement with observations. The electron scattering wings of emission lines are still used nowadays to constrain the degree of inhomogeneities in strong wind stars. The classical diagnostics are: HeII 4686, HeII 5412, Hβ (see Hillier 1991, Martins et al. 2009). The presence of clumping in massive star winds has two main effects: first, for a given atmospheric structure, it changes the shape of wind lines; second, due to the increased density in clumps, recombinations are stronger and thus the ionization structure is modified. The first effect can be explained as follows. For a recombination line, the line intensity is proportional to ρ² × V, where ρ is the density and V the total volume of the wind. In the case of a volume filling factor f, the density in the clumped wind is ρ_c = ρ_0/f, where the indices 'c' and '0' refer to the clumped and unclumped models respectively. Similarly, the volume effectively containing the material is f × V_0. Hence, the line intensity is proportional to ρ_0²/f. Consequently, including clumping increases the line strength by a factor 1/f. Said differently, since ρ_0 ∝ Ṁ, the line intensity will be the same for similar Ṁ/√f ratios. Hα in the optical (e.g. Puls et al. 2006) and Br10/Br11 in the near-IR (Najarro et al. 2009) are the main ρ² diagnostics of clumping. For scattering lines such as the UV P-Cygni profiles, the intensity depends linearly on the density, so that in principle there is no 'first effect' of clumping on these profiles. But the second effect, the change of ionization structure, is present. This is illustrated in Figs. 4 and 5, from Bouret et al. (2005).
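Before turning to those figures, a small numerical illustration of this Ṁ/√f degeneracy: the sketch below rescales a mass loss rate derived under the smooth-wind assumption to the equivalent clumped-wind values for a few filling factors. The smooth-wind Ṁ and the filling factors are arbitrary, assumed numbers used purely for demonstration.

```python
import math

# Sketch of the Mdot / sqrt(f) degeneracy for rho^2 diagnostics such as Halpha.
# If a smooth-wind (f = 1) fit returns Mdot_smooth, any clumped model with
# volume filling factor f reproduces the same line strength provided that
# Mdot_clumped / sqrt(f) = Mdot_smooth. Values below are illustrative only.
Mdot_smooth = 5.0e-6   # Msun/yr, derived assuming a homogeneous wind (assumption)

for f in (1.0, 0.25, 0.1, 0.05):
    Mdot_clumped = Mdot_smooth * math.sqrt(f)
    print(f"f = {f:4.2f} -> equivalent Mdot = {Mdot_clumped:.2e} Msun/yr "
          f"(reduction factor {1 / math.sqrt(f):.1f})")
```

For f = 0.1, for instance, the recombination-line Ṁ drops by a factor of about 3, which is why clumping corrections matter so much when comparing empirical mass loss rates with theoretical predictions.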
In the former figure, UV P-Cygni profiles of an O4V((f)) star are shown for homogeneous (grey dashed line) and clumped (grey solid line) models. The clumped models provide a much better fit to OV 1371 and NIV 1718. In Fig. 5, we see that adding clumping strongly reduces the fraction of OV in the atmosphere, leading to a weaker OV 1371 line. Another UV diagnostic of clumping is the PV doublet at 1118-1128 Å (e.g. Fullerton et al. 2006). More direct evidence for clumping comes from time series analysis of selected emission lines of O supergiants and Wolf-Rayet stars. The first study of Eversberg et al. (1998) showed the presence of emission sub-peaks on top of the main emission of HeII 4686. These structures showed motions from the line center to the line wings. This is interpreted as the presence of clumps moving outward in the stellar atmosphere. Similar conclusions were subsequently reached for different types of emission line stars, using CIII 5696 and CIV 5802-12 in addition to the HeII lines mentioned above (e.g. Lépine et al. 2000).

Rotation and magnetic field

We finally focus on two properties of massive stars: their rotation rates and the relation to macroturbulence, and their magnetic fields.

Projected rotational velocities and macroturbulence

The determination of projected rotational velocities (V sini) has become a difficult task since it was realized that line profiles of O stars were also broadened by another mechanism dubbed macroturbulence. Its origin is not well constrained, although a recent study by Aerts et al. (2009a) points to a probable role of stellar pulsations (see also Simón Díaz et al. 2010 for first observational evidence).

Figure 6: Observed profile (solid line) together with a synthetic spectrum including only rotational broadening (V sini = 57 km s^-1, dotted line) and rotational broadening + Gaussian macroturbulence (V sini = 5 km s^-1 + v_mac = 32 km s^-1, dashed line). The inclusion of macroturbulence leads to a much better fit of the observed profile. From Aerts et al. (2009b).

In the absence of macroturbulence, two methods have been widely used to constrain V sini:

• FWHM-V sini: this method, first developed by Slettebak et al. (1975), relies on the computation of synthetic line profiles at different rotational velocities from which the full width at half maximum (FWHM) is measured and subsequently compared to observational data. It was used by Herrero et al. (1992) and Abt et al. (2002) (among others) to derive V sini for O and B stars. It relies mainly on optical metallic lines.

• Cross-correlation: here, a low V sini template spectrum is convolved at different rotational velocities and is subsequently cross-correlated with observed spectra. The method has been particularly used in the UV range (e.g. Penny et al. 1996, Howarth et al. 2007) taking advantage of the large IUE database.

The direct comparison of synthetic line profiles to observational data revealed that the wings of photospheric lines did not show the classical "curved" shape of rotational profiles, but were wider and more "triangular". This is illustrated in Fig. 6, where we see that a pure rotational profile (dotted line) is a poor fit of the observed spectrum (see also Ryans et al. 2002). The addition of a macroturbulent profile, usually implemented by convolution with a Gaussian profile and thus mimicking isotropic turbulence, leads to a significant improvement. The consequence is a reduction of the derived V sini compared to studies ignoring macroturbulence (see Fig. 6).
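The degeneracy described above is easy to reproduce numerically: the sketch below convolves a narrow intrinsic line with a standard rotation kernel and a Gaussian macroturbulence kernel and then measures only the FWHM of the result. All inputs (V sini = 60 km/s, v_mac = 40 km/s, the limb-darkening coefficient, the intrinsic line width) are assumptions for illustration; the point is that the FWHM alone does not separate the two broadening agents.

```python
import numpy as np

# Toy illustration of the V sin i / macroturbulence degeneracy: a narrow
# intrinsic line is convolved with a rotation kernel and with a Gaussian
# "macroturbulent" kernel. All numerical inputs are illustrative assumptions.
vsini, vmac, eps = 60.0, 40.0, 0.6   # km/s, km/s, limb-darkening coefficient

v = np.linspace(-300.0, 300.0, 1201)                      # velocity grid, km/s
intrinsic_depth = 0.5 * np.exp(-0.5 * (v / 8.0) ** 2)     # narrow intrinsic line

def rotation_kernel(v, vsini, eps):
    # Standard rotational broadening function with linear limb darkening.
    x = v / vsini
    k = np.where(np.abs(x) < 1.0,
                 2.0 * (1.0 - eps) * np.sqrt(np.clip(1.0 - x**2, 0.0, None))
                 + 0.5 * np.pi * eps * (1.0 - x**2),
                 0.0)
    return k / k.sum()

def gaussian_kernel(v, sigma):
    k = np.exp(-0.5 * (v / sigma) ** 2)
    return k / k.sum()

depth = np.convolve(intrinsic_depth, rotation_kernel(v, vsini, eps), mode="same")
depth = np.convolve(depth, gaussian_kernel(v, vmac), mode="same")
profile = 1.0 - depth

# FWHM of the broadened profile: by itself it only constrains a combination
# of vsini and vmac, not each of them separately.
half_level = 1.0 - 0.5 * depth.max()
in_line = v[profile <= half_level]
print(f"core depth = {depth.max():.3f}, FWHM = {in_line.max() - in_line.min():.1f} km/s")
```

Rerunning with, say, a smaller V sini and a larger v_mac gives a very similar FWHM, which is the practical reason additional information, such as the Fourier transform of the profile, is needed to break the degeneracy.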
In practice, optical lines are well suited to constrain V sini and the amount of macroturbulence. Among the key lines are CIV 5812, OIII 5592, NIV 4057 and HeI 4712 (see Howarth et al. 2007, Martins et al. 2010). The main drawback of this method is that several combinations of V sini/macroturbulence can give fits of similar quality, rendering the determination of projected rotational velocities uncertain. A powerful method to break this degeneracy is the use of the Fourier transform (FT) of observed profiles. Provided the macroturbulence is well represented by a symmetric kernel (such as a Gaussian profile), the first zero of the FT is directly related to the projected rotational velocity by the relation

(λ/c) × V sini × σ_1 = 0.66

where λ is the wavelength of the line center and σ_1 the position of the first zero. An illustration is given in Fig. 7, where one can see that for a given V sini, the position of the first zero is always the same, regardless of the amount of Gaussian macroturbulence included. Here again, optical metallic lines are well suited for this method: OII 4414, OII 4661, OIII 5592, SiIII 4553, SiIV 4089, NIV 4057, CIV 5812 (e.g. Simón Díaz et al. 2006). We stress that the conclusion about the relevance of the FT method to derive V sini relies on the assumption that macroturbulence is represented by a symmetric function. If this is not the case (as for pulsations, where macroturbulence results from the collective effects of hundreds of oscillations; see Aerts et al. 2009a), then the position of the first zero is affected. The recent study of Simón Díaz et al. (2010) favours a Gaussian radial-tangential macroturbulence profile over an isotropic Gaussian shape. More analyses are needed to characterize the origin and properties of macroturbulence in massive stars.

Surface magnetic field

The development of spectropolarimeters working in the optical range has led to the detection of surface magnetic fields in several O and B stars (e.g. Donati et al. 2002, 2006; Bouret et al. 2008; Grunhut et al. 2009). The principle of the detection relies mainly on the 'least squares deconvolution' method (Donati et al. 1997). In practice, the idea is to detect Zeeman splitting in photospheric lines. Given the faintness of the polarized signal, a line mask made of several well understood lines is built and an average line profile is created from it (leading to the Stokes I parameter, see Fig. 8). The detection of a magnetic field is made from the Stokes V profile, which is the difference between the right and left circular polarization signal created from the line mask. An example of an unambiguous detection is displayed in Fig. 8. The photospheric lines used to build the line mask are usually the following: HeI 4026, HeI 4388, HeI 4471, HeI 4712, HeII 4200, HeII 4542, NIII 4510, OIII 5592, CIV 5812. Currently, there are no spectropolarimeters working in the infrared range or in the UV.

Summary

We have reviewed some of the main spectroscopic diagnostics of massive stars in the UV (1000-2000 Å), optical (4000-7000 Å) and near-infrared (H and K bands) wavelength ranges. The description was not exhaustive and was meant to give an overview of the most commonly used spectral lines and methods to derive the stellar and wind parameters of OB and Wolf-Rayet stars. A summary of the main diagnostics is given in Table 1.
Proton−Proton Total Cross−Section Based On New Data of Colliders and Cosmic Rays High energy colliders (accelerators) are fundamental tools in many branches of science. Similarly, cosmic rays observatories are one of the windows to study the universe and high energy particle processes. The last advances in these fields are respectively the LHC (Large Hadron Collider) and the Pierre Auger Observatory. Among the main subjects studied in hadronic physics is the proton-proton (pp) elastic scattering. The Total Cross-Section (σpp), has been recently measured at 7 and 8 TeV in the LHC, and at 57 TeV in the Pierre Auger Observatory. Importance of the σpp lies in studies of elastic and diffractive scattering of protons, and to model the development of showers induced by the interaction of ultra high energy cosmic rays in the atmosphere. The gap in data between accelerators and cosmic ray experiment energies does not allow for the exact knowledge of σpp with energy. Furthermore, since cosmic rays results are of indirect nature, there is consequently a high dispersion in predictions of different authors at this regard. Using the new data, we show here that within the frame of the first-order Glauber multiple diffraction theory the overall data fits very successfully. Our results shows that σpp grows more slowly (compared with previous predictions), within narrow error bands that avoid any fast slope change. We predict that the future experimental value at 13 TeV from the LHC will fall nicely within our fitting curve. Our phenomenological approach allows for the calculation of σpp for any other energy value either at the colliders or cosmic ray energies. A deep knowledge, control and handle of hadron-hadron interactions at very high energies will have useful implications in many branches of physics. Introduction An important process in the hadron physics is the pp elastic scattering. In spite of the amount of currently available data and descriptive models of these data, actually there is not a satisfactory description based on pure Quantum Chromodynamics (QCD), that would be widely accepted in considering this dynamic process. The QCD perturbative theory cannot be extended to the weak interactions region and the QCD non-perturbative theory is not able to predict dispersion states. There are approaches based on QCD that try to explain the phenomenon which have been successful in describing processes where there is much transfer momentum, where quarks, which are the particles that compose hadrons behave as if they were free particles. In this case, the perturbative approach can be applied. On the other hand, in the region with low transfer momentum (the namely region of soft collisions), the effective coupling constant of strong interactions is large and therefore, perturbative approach cannot be applied. Historically, the study of the total cross section, which measure the total interaction probability has played a fundamental role in nuclear and particle physics. For energies of only a few GeV, the total cross section in hadrons scattering usually has a complicated structure composed with peaks or resonances, which reveals the formation of excited hadronic states. On the other side, for higher energies, the total cross section have a softer behaviour. 
The pp elastic scattering process has also been extensively investigated, mainly in the region of small momentum transfer, where a great number of experimental data are available, although at some specific energies data have also been obtained in the large-momentum-transfer region. An important feature that has emerged from the analysis of experimental data is the discovery that the effective range of the interaction in hadron collisions increases with growing energy. Likewise, it has been found that the probability of absorption also increases; that is, the colliding particles appear to expand and become blacker at high energies. In this work we develop a prediction, based on a purely diffractive model, that reproduces reasonably well the existing experimental data from particle accelerators and cosmic ray observatories for the pp and p̄p total cross sections in the center-of-mass energy range 10−10⁵ GeV, including the values at 7 and 8 TeV obtained at the LHC at CERN in 2011 and 2013 [1][2][3][4]. We also consider here the new value at 57 TeV from the Pierre Auger Observatory, obtained in July 2012 [5].

Elastic Hadronic Scattering Amplitude

To quantify how much the hadrons scatter during an elastic collision, a hadronic scattering amplitude must be constructed. One way to construct the hadronic amplitude from the most elementary ingredients is by means of Glauber's multiple diffraction theory. The approach is based on the impact parameter and eikonal formalisms as follows. Assuming azimuthal symmetry in the collision of two hadrons (neglecting spin), in our case two protons, the elastic hadronic scattering amplitude reads

F(q, s) = i ∫₀^∞ b db J₀(qb) [1 − e^{−Ω(b,s)}],   (1)

where q² = −t is the four-momentum transfer squared, √s is the center-of-mass energy, b the impact parameter, J₀ the zero-order Bessel function, and Ω a real-valued function used to describe the opacity of the hadrons. The opacity is given by the two-dimensional (Bessel) transform

Ω(b, s) = C(s) ∫₀^∞ q dq J₀(qb) G²(q, s) Im f(q, s),   (2)

where G(q, s) is the hadronic form factor and Im f(q, s) is the imaginary part of the elementary parton-parton amplitude; both are parameterized in terms of the real-valued, energy-dependent parameters α², β², a², and C. The proportionality factor C is known as the "absorption factor".

The ρ Parameter

The ρ parameter is an experimental quantity obtained at particle accelerators, equal to the ratio of the real part to the imaginary part of the hadronic scattering amplitude F at q = 0:

ρ(s) = Re F(q = 0, s) / Im F(q = 0, s).   (5)

At high energies this amplitude is mainly imaginary, but knowledge of the real part allows predictions of the total cross section (even at high energies) by means of dispersion relations. To fit ρ we used the experimental data obtained at particle accelerators in the center-of-mass (CM) energy range 13.8−1800 GeV, that is, from the Alternating Gradient Synchrotron (AGS) to the Tevatron. With these experimental data we calculated the fit over the whole energy range 10−10⁵ GeV, and thereby reproduced reasonably well the value obtained at 7 TeV at the LHC. The fit, Eq. (6), is a function of ln(s/s₀), where s₀ = 400 GeV² fixes the point at which ρ crosses zero and the coefficients A₁, A₂, and A₃ control the maximum and the asymptotic behaviour. The results are shown in Table 1 and Figure 1. As can be seen, the ρ parameter takes negative values at CM energies below approximately 21 GeV, and it approaches its asymptotic behaviour at energies above 4 TeV. A small numerical sketch of the amplitude construction and the resulting ρ is given below.
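To make the amplitude construction concrete, the following minimal numerical sketch (ours, not the authors' code) evaluates Eq. (1) for an illustrative Gaussian opacity profile and reads off ρ from the real and imaginary parts at q = 0. The opacity shape, its strength ν, and the real-part parameter λ are placeholder assumptions, not the fitted parametrization of the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def omega(b, nu=2.0, beta2=1.0):
    """Placeholder opacity profile Omega(b): Gaussian in impact parameter."""
    return nu * np.exp(-b**2 / (2.0 * beta2))

def amplitude(q, lam=0.1, bmax=20.0):
    """F(q,s) = i * Int_0^inf b db J0(qb) [1 - exp(-(1 - i*lam) Omega(b))].

    With a complex eikonal, the parameter lam feeds a real part into F:
      Im F = Int b J0(qb) [1 - exp(-Omega) cos(lam*Omega)] db
      Re F = Int b J0(qb)  exp(-Omega) sin(lam*Omega)  db
    """
    im_f, _ = quad(lambda b: b * j0(q * b)
                   * (1.0 - np.exp(-omega(b)) * np.cos(lam * omega(b))),
                   0.0, bmax)
    re_f, _ = quad(lambda b: b * j0(q * b)
                   * np.exp(-omega(b)) * np.sin(lam * omega(b)),
                   0.0, bmax)
    return re_f, im_f

re0, im0 = amplitude(q=0.0)
print("rho = Re F / Im F at q = 0: %.4f" % (re0 / im0))
```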
Figure 1 shows the fit of Eq. (6) together with the experimental data from particle accelerators (see Table 1); the fit gives a reasonable approximation to the value obtained by the LHC at 7 TeV.

Obtaining the Fits for the Energy Dependent Parameters

With the values obtained for the energy-dependent parameters (Table 2), we calculated fits (parametrizations) for each of them, which allow us to extrapolate our calculations to energies greater than 1800 GeV. From the behaviour shown by both the C and α⁻² parameters, their data sets are statistically consistent with quadratic polynomials in ln(s/s₀), so using linear regression we obtained a fit of this form for each of them, taking s₀ = 1 GeV². Figures 2 and 3 show these fits together with the corresponding values of Table 2; both grow with increasing energy. The dimensionless product Cα² provides information on the blackening and expansion in elastic hadron scattering; a plot of this product can be seen in Figure 4. Since the parameter λ has a behaviour very similar to that of the ρ parameter, we used a fit of the same form, where s₀ = 400 GeV² fixes the point at which λ crosses zero and the coefficients A₁, A₂, and A₃ control the maximum and the asymptotic behaviour. Figure 5 shows the fit obtained for λ.

Derivation of the pp Total Cross Section

In the absence of a QCD description of this phenomenon, a number of models and phenomenological approximations have been developed to describe the available data. Though these formalisms do not give a final answer concerning the underlying processes, they are useful tools that allow geometric and dynamical assumptions which reproduce the experimental data. Geometrical models based on the multiple diffraction theory of Glauber [19,20] have proved to be good phenomenological approaches. An essential feature of the multiple diffraction formalism is the connection of the elastic scattering cross sections of composite particles (originally nuclei, later nucleons) with the scattering amplitudes of their individual constituents. Following this theory, we present here a prediction for pp scattering based on an eikonal (a symmetrical two-dimensional Fourier transform) that depends on parameters describing the hadronic form factor and the elementary parton-parton amplitude. By means of this eikonal we calculate the real and imaginary parts of the hadronic scattering amplitude, and with this amplitude and the fits for the parameters associated with the eikonal we obtain a prediction curve for the pp total cross section. For the calculation of the total cross section σpp we first fitted the differential cross section dσ/dt to each of the experimental data sets from the accelerators operating before the LHC, corresponding to the energy range 13.8−1800 GeV. The fit for dσ/dt was obtained from the real and imaginary parts of the hadronic scattering amplitude F of Eq. (1), which depends on energy and momentum transfer:

dσ/dt = π { [Re F(q, s)]² + [Im F(q, s)]² }.   (12)

Figure 6 shows an example at the energy of 52.8 GeV, with reasonable agreement with the experimental data. Figures for the remaining energies can be consulted in [21].
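The parameter fits described above are ordinary least-squares polynomial regressions in ln(s/s₀). The sketch below shows the procedure for one parameter; the tabulated values are invented for illustration and are not the paper's Table 2 data.

```python
import numpy as np

# Hypothetical extracted values of an energy-dependent parameter (e.g. C)
# at several CM energies; placeholders, not the paper's data.
sqrt_s = np.array([13.8, 19.4, 23.5, 30.7, 52.8, 62.5, 546.0, 1800.0])  # GeV
param  = np.array([5.0, 5.3, 5.5, 5.8, 6.4, 6.6, 9.8, 12.5])

s0 = 1.0  # GeV^2, as in the text
x = np.log(sqrt_s**2 / s0)

# Quadratic polynomial in ln(s/s0): p(x) = c2*x^2 + c1*x + c0
coeffs = np.polyfit(x, param, deg=2)
print("fit coefficients (c2, c1, c0):", coeffs)

# Extrapolation beyond 1800 GeV, e.g. to 13 TeV
x13 = np.log((13000.0**2) / s0)
print("extrapolated value at 13 TeV:", np.polyval(coeffs, x13))
```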
Knowledge of dσ/dt is also very important for the study of pp elastic scattering. With the fits for the energy-dependent parameters C, α⁻², and λ, we have completely determined all the parameters associated with the eikonal, and we can now calculate the pp total cross section by means of

σpp(s) = 4π ∫₀^∞ b db [1 − e^{−Ω(b,s)} cos(λ(s) Ω(b,s))],   (13)

where the integrand represents the imaginary part of the elastic hadronic scattering amplitude at q = 0 and λ is, as mentioned before, an energy-dependent parameter describing the proportionality between the real and imaginary parts. The procedure used to derive this equation is somewhat involved and was extensively described in [21][22][23][24]. Figure 7 shows the result obtained for the pp total cross section, calculated by means of Eq. (13) and the fits (8)−(11) for the energy-dependent parameters, over the energy range 10 to 10⁵ GeV. We calculated 95% prediction bands for each of the parameters involved. The significance δ is related to the prediction percentage through 100(1 − δ)% = 95%, so δ = 0.05 and the required Student's t value is t_{δ/2} = t_{0.025} for n − 2 = 8 degrees of freedom, equal to 2.306. Figure 7 also shows the experimental data obtained at particle accelerators and cosmic ray observatories (see Table 3); for better resolution, Figures 8 and 9 show the same results over two narrower energy intervals. As can be seen from these figures and Table 3, the fit we calculated for the pp total cross section agrees reasonably with the particle accelerator data, from Fermilab to the LHC. With respect to the results of the cosmic ray observatories, our prediction is quite consistent with the Akeno data, the Fly's Eye value, and the Pierre Auger Observatory result. Table 4 lists the calculated upper and lower prediction bands at the energies where experimental data exist; the fifth and sixth columns give the absolute difference between each prediction band and the central prediction. For energies at or below 1800 GeV the absolute differences are practically equal, i.e., the uncertainty is essentially a symmetric ±Δσ, while for energies above 1800 GeV the differences between the upper and lower prediction bands begin to increase slightly. This is mainly because the dispersion of the measurements is not uniform across the energy-dependent parameters. The error bands also begin to widen above 1800 GeV, because this is where the extrapolation of the energy-dependent fits begins, so the uncertainty of the prediction tends to grow at higher energies. It should be mentioned that at 546, 630, and 1800 GeV we used p̄p scattering data, because there are no pp experimental data between the ISR energies and the LHC energies, namely for 62.5 GeV < √s < 7 TeV. However, from the analysis of the existing experimental data for both reactions it is known that at energies above approximately 35 GeV the p̄p and pp total cross sections tend to be equal [25], which justifies our calculations.
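Under the reconstruction of Eq. (13) above, the total cross section is a one-dimensional integral that is easy to evaluate numerically. The sketch below reuses the same placeholder Gaussian opacity as before, together with the standard conversion 1 GeV⁻² ≈ 0.3894 mb; the number it prints is illustrative, not one of the paper's predictions.

```python
import numpy as np
from scipy.integrate import quad

GEV2_TO_MB = 0.3894  # standard conversion: 1 GeV^-2 in millibarn

def omega(b, nu=2.0, beta2=1.0):
    """Placeholder Gaussian opacity profile; b in GeV^-1."""
    return nu * np.exp(-b**2 / (2.0 * beta2))

def sigma_total(lam=0.1, bmax=20.0):
    # sigma_pp = 4*pi * Int_0^inf b db [1 - exp(-Omega(b)) cos(lam*Omega(b))]
    integrand = lambda b: b * (1.0 - np.exp(-omega(b)) * np.cos(lam * omega(b)))
    val, _ = quad(integrand, 0.0, bmax)
    return 4.0 * np.pi * val * GEV2_TO_MB

print("sigma_pp (placeholder parameters): %.2f mb" % sigma_total())
```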
Comparison with a Previous Fit

In a previous work [24], a fit was obtained using the same methodology employed in the present work, except that the fits for the energy-dependent parameters were different, since they were made using a distinct treatment of the differential cross-section data. Figure 10 shows both fits together with the experimental data; the black line corresponds to the prediction of this work and the red line to that published in [24]. Table 5 gives the values of both fits at the energies where experimental data exist, with the absolute difference between the two predictions shown in the fourth column. This difference begins to increase at energies above 18 TeV. At 57 TeV, where the Pierre Auger Observatory has presented a value of 133 mb, the present work obtains 135.93 mb while [24] obtained 139.31 mb, so the fit of the present work is closer to the Pierre Auger result. At 7 and 8 TeV, where the LHC presents its results, both fits agree reasonably. Likewise, a good match is obtained over the energy region corresponding to the Fermilab and ISR experimental data.

Conclusions

Our results adequately describe the experimental data obtained at particle accelerators and cosmic ray observatories over the energy range 10−10⁵ GeV, including the most recent values at 7 and 8 TeV published by the LHC (CERN) and the value presented by the Pierre Auger Observatory at 57 TeV. Our results also improve on previous approaches published a few years before the current LHC and Pierre Auger data became available, especially those of [24], whose fits showed a good prediction for the data in the energy range 1−100 TeV. Because our results are at present the most consistent with the experimental data, from both accelerators and cosmic ray observatories, we expect the same fitting trend to continue when new high energy data become available. At the moment we predict the experimental value at 13 TeV (at CERN) to be σpp = 108.44 mb, with small asymmetric prediction-band uncertainties, which falls nicely on our fitting curve. Our approach now allows σpp to be calculated at any other energy.
2019-04-17T15:38:14.437Z
2015-04-22T00:00:00.000
{ "year": 2015, "sha1": "f20feb057f7103739d86fb1d181f1a52988defa4", "oa_license": null, "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijhep.20150202.12.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8da6b8afe3e9b849e040691c1c057646a354d6fc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1640514
pes2o/s2orc
v3-fos-license
A family of cation ATPase-like molecules from Plasmodium falciparum

We report the nucleotide and derived amino acid sequence of the ATPase 1 gene from Plasmodium falciparum. The amino acid sequence shares homology with the family of "P"-type cation translocating ATPases in conserved regions important for nucleotide binding, conformational change, or phosphorylation. The gene, which is present on chromosome 5, has a product longer than any other reported for a P-type ATPase. Interstrain analysis of 12 parasite isolates by the polymerase chain reaction reveals that a 330-bp nucleotide sequence encoding three cytoplasmic regions conserved in cation ATPases (regions a-c) is of constant length. By contrast, another 360-bp sequence, one of four regions we refer to as "inserts", contains arrays of tandem repeats which show length variation between different parasite isolates. Polymorphism results from differences in the number and types of repeat motif contained in this insert. Inserts are divergent in sequence from other P-type ATPases and share features in common with many malarial antigens. Studies using RNA from the erythrocytic stages of the malarial life cycle suggest that ATPase 1 (including the sequence which encodes the tandem repeats) is expressed at the large ring stage of development. Immunolocalization has identified ATPase 1 in the region of the parasite plasma membrane and pigment body. These findings suggest a possible model for the genesis of malarial antigens.

Infection with Plasmodium falciparum, the most virulent human malaria parasite, kills an estimated 0.5-2 million children in Africa alone every year (20, 68). The organism multiplies within erythrocytes during the clinically evident phase of infection (77). To achieve this, it subverts both the homeostatic mechanisms of erythrocytes and host defense systems. As it develops, novel permeation pathways are induced which result in increased uptake by the parasite of synthetic precursors and energy substrates, the best studied of which are nucleosides (15), amino acids (12), and glucose (70). Some of these processes depend upon the establishment of electrochemical gradients at the host-parasite interface (secondarily active transporters). These gradients must be maintained in spite of changes in the ionic composition of the infected erythrocyte as the parasite feeds and matures (31). It has been shown that calcium uptake by infected erythrocytes increases with parasite development, although the distribution of calcium within the infected erythrocyte has not been adequately defined (for reviews see references 30, 33). In Plasmodium chabaudi infection the uptake of Ca²⁺ may depend upon an H⁺-ion gradient generated by a parasite plasma membrane ATPase (71). Intraerythrocytic parasites also maintain a membrane potential susceptible to protonophores, probably through the same mechanism (42). This H⁺-ion pump may regulate parasite pH, possibly in conjunction with a K⁺/H⁺ exchanger (31). Erythrocytes infected with P. falciparum gain Na⁺ and lose K⁺ ions because of inhibition of erythrocyte Na⁺/K⁺-ATPase activity (17). These alterations in cation status do not interfere with parasite development, which is also insusceptible to artificial elevations of Na⁺ and depletion of K⁺ ions within erythrocytes (72). These observations imply that the parasite is capable of internal regulation of the concentrations of these cations.
Recently, a malarial vacuolar membrane ATPase ("V" type) has been partially characterized (6), but the mechanisms by which parasites regulate ionic homeostasis in infected cells are poorly understood. Cation ATPases from Leishmania spp. have been more extensively studied, both physiologically and by sequence analysis (16, 41, 78). No plasma membrane cation-motive (P-type) ATPases have been isolated from P. falciparum. The P-type ATPases are a ubiquitously distributed class of multi-pass membrane proteins which contribute to electrochemical gradients by pumping ions using energy derived from the hydrolysis of ATP. Although they share many structural features, similar hydropathy profiles, and conserved structural motifs such as a nucleotide binding domain and phosphorylation site (18), they are phylogenetically more diverse than many "housekeeping" gene families such as the tubulins or calmodulin (for review see reference 32). This diversity suggests that interventions to inhibit the potentially critical ATPases in P. falciparum may not affect homologues in the host, particularly as selective inhibition is achievable for this class of enzyme, and isoform-specific ATPase inhibitors are already in clinical use (1, 58). Murakami et al. (43) have recently sequenced a P-type ATPase from P. yoelii similar to a sarcoplasmic reticulum Ca²⁺-ATPase (>60% identity in conserved cytoplasmic domains). By contrast, parasite-encoded plasma membrane cation ATPases may be expected to differ from host pumps because they face an environment of unusual (intracellular) composition for most of the erythrocytic phase of the life cycle. To allow more detailed studies of the structure, expression, and possible functions of these molecules, we have isolated a family of cation ATPase-like molecules (called ATPases 1-3) from P. falciparum. We report here the complete nucleotide sequence of one member of this family (ATPase 1). ATPase 1, the largest putative cation-motive ATPase (1,956 amino acids) isolated thus far, has unusual structural features including arrays of tandemly repeated amino acid sequence. Interstrain analysis demonstrates polymorphism in a region of the ATPase 1 sequence which is unique for this class of enzyme. This report describes the isolation of this parasite cation ATPase, analyzes the nature of the polymorphism in ATPase 1, and compares the structure of the malarial sequence with published ATPase sequences. It extends significantly the diversity within the family of cation-motive ATPases and has important implications for understanding the cell biology of host-parasite interactions. Furthermore, it provides a potentially novel chemotherapeutic target.

Parasites

The parasite isolates used in this paper (47, 56) were cultured as described before (56). For experiments on stage specificity, synchronization of parasite cultures to yield a population restricted to a developmental time span of <4 h was carried out by standard methods (11, 34, 46).

Oligonucleotide Synthesis and Hybridization

Oligonucleotides were synthesized on a DNA synthesizer (Applied Biosystems, Inc., Foster City, CA) and eluted at room temperature with NH₄OH (Aldrich Chem. Co., Milwaukee, WI; HPLC grade) before deprotection overnight at 55°C. Oligonucleotides (10 pmol) were radiolabeled by incubation with 50 pmol of [γ-³²P]ATP and T4 polynucleotide kinase in 6.6 mM Tris-HCl, pH 8, 1 mM magnesium acetate, and 1 mM DTT for 30 min at 37°C, followed by removal of unincorporated label by centrifugation (800 g) through a G25/50 column equilibrated with 3× SSC.
Hybridization of labeled oligonucleotides to Southern blots and filter lifts from libraries was carried out under identical conditions: 37°C in hybridization solution (6 mM EDTA, 5× Denhardt's solution, 100 mg/ml sheared salmon sperm DNA, 0.5% NP-40, 0.9 M NaCl, 90 mM Tris-HCl, pH 8). To remove unbound probe, filters were washed twice in 6× SSC/0.1% SDS at room temperature for 5 min each time, followed by a final 5-min wash at 54°C in the same solution.

Southern and Northern Blot Analyses

For Southern blot transfer, purified genomic (10 µg) or plasmid DNA (0.1 µg) was digested with restriction enzymes (2-3 U/µg), resolved by agarose gel electrophoresis, and visualized by staining with ethidium bromide (59). The DNA in agarose gels was denatured in 0.5 M NaOH, 1.5 M NaCl and neutralized with 0.5 M Tris-HCl, pH 7, 1.5 M NaCl before transfer to nitrocellulose filters (Schleicher & Schuell, Inc., Keene, NH) in 10× SSC. DNA was fixed to the filters by baking for 2 h at 80°C. Hybridized probe was removed from blots by rinsing in ddH₂O at 100°C before reuse. RNA was extracted from parasites by a single-step method (7) or, in some experiments, used after purification through a gradient of caesium chloride (55). Total RNA (10 µg) was resolved on standard formaldehyde agarose denaturing gels (5) and transferred to nylon membrane (Hybond-N, Amersham International, Amersham, UK; or Schleicher & Schuell, Inc.) by capillary blotting in 20× SSC. Hybridization conditions for Northern blots were identical to those used for Southern blots. Prehybridization and hybridization temperatures were the same (65°C) when carried out in Church buffer with albumin (8), or were as described previously (54). All probes (activity of 2-5 × 10⁶ cpm/ml) were incubated overnight with filters, and unbound probe was removed by washing filters at a final stringency indicated in the results or figure legends.

Libraries and Screening

For the initial isolation of P. falciparum ATPase-like sequences, an EcoRI-digested genomic library in λgt11 (54) was screened by hybridization of duplicate nitrocellulose filter lifts with a degenerate antisense oligonucleotide (Mal2 ATPase). Mal2 ATPase was designed to detect the phosphorylation site of cation ATPases and consists of the following sequence: NGT TAA NGT TCC NGT TTT ATC (N indicates any nucleotide); a sketch of the design logic behind such a probe follows below. A clone (λ7.6.2; Fig. 1 a) containing a 500-bp insert was isolated from the EcoRI library. To obtain additional ATPase gene sequence, two partial RsaI libraries were constructed. A time course for RsaI digestion of P. falciparum DNA was analyzed by agarose gel electrophoresis. A DNA sample from the time point giving an average fragment size of 3-5 kb was attached to HindIII linkers and ligated into λCharon 28 (53), or attached to EcoRI linkers and ligated into λgt11. The 500-bp EcoRI fragment, isolated from λ7.6.2, was used as a radiolabeled probe to screen the Charon 28 RsaI library. The λgt11 (26) RsaI library was screened with ATPase 1 sequence obtained from the Charon 28 RsaI library. The sequence encoding the terminal transmembrane hairpins of ATPase 1 was not available from the λ libraries, even after repeated screening. This may have been a result of instability of this region in conventional cloning vectors and hosts. Consequently, a recently devised strategy to isolate new DNA sequences using polymerase chain reaction (PCR) technology was applied.
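As a consistency check of the probe logic (our reconstruction, not the authors' procedure), the following sketch reverse-translates the phosphorylation-site peptide DKTGTLT using A+T-biased codons, leaves the wobble base of the three Thr codons fully degenerate (N), and forms the reverse complement; it reproduces the Mal2 ATPase sequence and its 4³ = 64-fold degeneracy.

```python
# Reverse-translate DKTGTLT into an antisense (reverse-complement) probe.
# Codon choices reflect the A+T bias mentioned in the text; the three Thr
# codons carry a fully degenerate wobble base (N).
sense_codons = ["GAT", "AAA", "ACN", "GGA", "ACN", "TTA", "ACN"]  # D K T G T L T
sense = "".join(sense_codons)

complement = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
antisense = "".join(complement[base] for base in reversed(sense))

degeneracy = 4 ** sense.count("N")  # 4^3 = 64
print("antisense probe 5'->3':",
      " ".join(antisense[i:i + 3] for i in range(0, len(antisense), 3)))
print("degeneracy:", degeneracy)
```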
A "vectorette" library (52) was made from HincII-digested genomic "19/96 DNA by ligating the DNA to a partially mismatched doublestranded oligunucleotide (vectorette). The enzyme HincII was chosen to construct this library because Southern blot analysis ( Fig. 1 b) had revealed a HinclI restriction site '~500 bp downstream of the available P. fulciparum ATPase sequence. Directional amplification of further ATPase 1 sequence was achieved using one primer (Seq 61, Table I) to initiate the reaction in a 5' to 3' direction from known ATPase 1 sequence, and a second primer (the universal vectorette primer; see Table I) which hybridizes specifically to the mismatched region of the vectorette. The resulting PCR product was cloned in pUC and sequenced. Subsequent screening or blot hybridizations utilized P. falciparum ATPase sequences from plasmids (20--40 ng) which had been labeled by random hexanucleotide priming with a kit (Amersham International) in the presence of [32P]adCTP. D NA Sequencing Analysis and PC R Sequencing was by the dideoxy chain termination method using a modified T7 polymerase (Sequenase~M; United States Biochemical Corp., Cleveland, OH) with both single-stranded (M13) and double-stranded templates (pUC or pBS-Bluescript TM) (59). Sequencing of inserts derived from the Xgtll and Charon 28 libraries was carried out on subclones prepared after digestion of DNA with one or more of the following enzymes: DraI, RsaI, HincII, ClaI, and Hint. Complete sequence was obtained from both DNA strands. Sequence compilation was aided by the DBUTIL program of Stadcn (66). Nuclcotide and derived amino acid sequence analyses were carried out with the following programs: ANALYSEQ, PSQ, PIP, NAQ, and DBSEARCH (67), Additional database searches (PIR 26) were carried out using a Distributed Array Processor with contouring of the protein sequence (in view of its length), at low stringency (150 PAM, noise reduction 1.5, indels -17) or higher stringency (100 PAM) (9). PCR conditions for all experiments are given in Table I, with the primer positions indicated. The reagents were as supplied in the manufacturer's kits (Amplitaq kiff", Cetus Corp., Berkeley, CA). Cloning of PCR products into plasmid and M13 vectors was us described previously (56). For expres-sion studies, cDNA from P. falciparum was obtained from the large ring/early trophozoite stage of erythrocytic development. Immunofluorescence Studies To raise polyclonal antibodies, groups of 3 BALB/c mice were injected intraperitoneally with the peptides (Pep 1 and Pep 2) shown below. The sequences of the peptides are compared with equivalent sequences in the other ATPases (ATPase 2, ATP'2 and ATpese 3, ATP3) that we have identified when available. Purity and sequence of pcptides was confirmed by microsequencing and HPLC, and identified an inadvertent substitution in Pep 1 at residue 495 (Ser for Phe). Boosting was with an equivalent amount of peptide and Freund's incomplete adjuvant, after 2 wk. A final boost was given 10 d later with no adjuvant. Responses were assayed by an ELISA (data not shown) and the polyclonal antibodies applied to slides of P. falciparum to identify and localize ATPase 1. Slides used in immunofluorescence studies were prepared by washing infected or uninfected erythrocytes (1% haematocrit) in PBS, making smears and fixing at -20~ in acetone. 
Antiserum was applied to the slides for 30 min at room temperature in a humid atmosphere and, after removing excess serum and three washes in PBS/0.1% BSA, the detecting antibody (FITC goat anti-mouse; Becton-Dickinson Immunocytometry Sys., Mountain View, CA) was incubated at a concentration recommended by the manufacturers. After washing as before, coverslips were mounted with glycerol and viewed under a model III RS microscope (Carl Zeiss, Inc., Thornwood, NY).

Isolation and Mapping of Clones Containing Sequence for ATPase 1

An antisense oligonucleotide (Mal2 ATPase), with 64-fold degeneracy that incorporated the malarial bias for A+T residues, was designed to detect the conserved phosphorylation site (DKTGTLT) which is the signature of the P-type class of cation ATPases. Hybridization to a Southern blot of restriction-digested DNA from P. falciparum clone T9/96 revealed four bands in the track digested with the enzyme EcoRI, after washing at high stringency (6× SSC/0.1% SDS, 54°C; data not shown). Screening of 40 genome equivalents of a λgt11 library containing EcoRI-digested P. falciparum DNA (clone T9/96) under conditions identical to the Southern blot analysis yielded 129 positive clones. These were classified by cross-hybridization experiments, using the EcoRI fragments as probes, into three subclasses (ATPase 1, 35% of clones, 500-bp inserts, subclone designated p7.6.2, Fig. 1, a and b; ATPase 2, 42% of clones, 1.53-kb inserts; and ATPase 3, 23% of clones, 1.48-kb inserts, data not shown). The size of the fourth EcoRI band (11 kb) detected by the Mal2 ATPase oligonucleotide probe precluded cloning in λgt11 (maximum insert size 7 kb). Sequence analysis identified the conserved phosphorylation site in representatives from all three classes of clones (Fig. 3 a). The degeneracy in the sequence of Mal2 ATPase allowed almost perfect matches with the genomic sequences obtained from this region (one nucleotide mismatch in ATPase 1). Genes for all three ATPases were seen to be present in the genome as single copies (Fig. 1 b and not shown). An open reading frame (ORF) extended from a putative initiating methionine residue in clone p3.3 to the end of the insert in the distal overlapping clone p7.8 (Fig. 1 a). An M13 subclone containing ~400 bp of sequence from the 3' end of p7.8 was radiolabeled (25), and the second (partial RsaI-digested) genomic library from clone T9/96 (average insert size 3 kb), ligated into the vector λgt11 via EcoRI linkers, was screened. One positive clone from this library (λ7CA, Fig. 1 a) which contained a 4-kb insert was subcloned into plasmid (p7CA) and M13 vectors and its sequence determined. An ORF contiguous with the ORFs from p3.3, p7.6.2, and p7.8 extended to the end of the p7CA insert, but did not contain the terminal transmembrane segments essential for ion transport in other ATPases. The HincII vectorette library enabled amplification of the remaining portion of ATPase 1 by PCR initiated by the oligonucleotide Seq 61 (for conditions see Table I). Three independent clones were analyzed. A coarse and fine restriction map of ATPase 1 is shown in Fig. 1 a.

ATPase 1 Is a Member of the P-Type Family of Cation Pumps

The nucleotide sequence and predicted amino acid sequence of ATPase 1 are shown in Fig. 2. The initiating methionine was identified as the first in-frame ATG codon (nucleotides 927-929, Fig. 2) of the ORF. As with other malarial sequences, there is poor agreement with the Kozak eukaryotic consensus for the context of initiation (54).
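Identifying the initiating methionine as the first in-frame ATG opening a long ORF is straightforward to script; the sketch below is a generic illustration (the authors used the Staden programs, not this code).

```python
def first_orf_from_atg(seq, min_codons=100):
    """Return (start, end, codon count) of the first ORF opened by an ATG.

    Scans each reading frame for its first ATG, then extends codon-by-codon
    to the first in-frame stop (TAA/TAG/TGA). Generic illustration only.
    """
    stops = {"TAA", "TAG", "TGA"}
    best = None
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i
                while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                    j += 3
                n_codons = (j - i) // 3
                if n_codons >= min_codons and (best is None or i < best[0]):
                    best = (i, j + 3, n_codons)
                break  # only the first ATG per frame
            i += 3
    return best

# Toy example: ATG followed by two codons and an in-frame stop.
print(first_orf_from_atg("CCATGAAAGATTGA", min_codons=1))  # -> (2, 14, 3)
```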
The gene has no introns and ends at a termination codon (UGA; 6,795 bp; Fig. 2) which forms part of a strong tetranucleotide consensus for eukaryotic termination signals (UGA[A/G]) (4), and is soon followed by another termination signal (UAAG). After the second termination signal there is a polypyrimidine tract (25 "T" residues), a feature also present in other malarial genes (73). The calculated molecular mass of the unmodified protein is 230 kD; it contains 1,956 amino acid residues and has 27 sequons (potential N-linked glycosylation sites) (50). The predicted topography of the derived amino acid sequence of ATPase 1 conforms to that of other P-type ATPases, with 8-10 membrane-spanning domains organized into three regions and conserved cytoplasmic domains including the phosphorylation site DKTGTLT (amino acids 496-502, region f, Figs. 2 and 3 a). The longest of the membrane-spanning regions corresponds to the terminal transmembrane hairpins which contribute to ion transport specificity. The membrane-spanning regions M3/M4 and M5-M9 are separated by an unusually long cytoplasmic region (containing conserved regions f-j) of 1,353 amino acids. A comparison of the conserved cytoplasmic domains of ATPase 1 with representatives from other ion-transporting ATPases (with the ATPase 1 sequence as template) is shown in Fig. 3 a. There is preservation of functionally critical amino acid motifs (18) in all these domains, including the transduction domain (region c, containing the motif TGES, amino acids 149-152), the ATP phosphorylation site (region f, containing the phosphorylated aspartic acid residue, Asp 496), the FITC binding site (region g, containing an invariant lysine residue, Lys 949), and the ATP binding domain (region i).

Table I (footnote): S, sense; A, antisense; UVP, universal vectorette primer. For all these reactions 1.5 mM MgCl₂ was used. The primers BIVS5', BIVS3', and BIVSe were used in PCRs across one of the introns in the β-tubulin gene to establish that cDNA was free of contamination by genomic DNA. Genomic DNA subjected to PCR across this intron yielded a product sized at ~340 bp. Complementary DNA subjected to PCR with the same primers (BIVS5' and BIVS3') did not give any product. When the PCR was applied to cDNA with primers (BIVS5' and BIVSe) which included 150 bp of coding sequence (exon), a product of 150 bp was visualized. Genomic DNA subjected to PCR with these primers (BIVS5' and BIVSe) yielded a product 500 bp in size, as predicted (data not shown).

The linear organization of the conserved domains is also interesting. Some are spaced in a similar fashion in ATPase 1 as in other ATPases (e.g., regions a-c) while others are much further apart (e.g., regions f and g: in the putative proton pump from Leishmania donovani the intervening length is 85 amino acid residues, in the Ca²⁺ pump from P. yoelii it is 246 residues, and in ATPase 1 it is 442 residues). Presumably, folding patterns still bring together functionally related regions which may be separated by some distance in the primary sequence, with the membrane-spanning domains acting as "anchors" (Fig. 3 a). Partial sequences from the other two ATPases (508 amino acids for ATPase 2 and 492 amino acids for ATPase 3) from P. falciparum also contain the highly conserved phosphorylation site (DKTGTLT, region f in Fig. 3 a, and Materials and Methods). In ATPase 1 there is a Phe residue preceding the phosphorylated Asp residue, unlike the eukaryotic consensus (which is ICSD).
The only other published sequence of an ATP phosphorylation site containing a Phe residue in this position comes from the cadmium pump of Staphylococcus aureus (44).

ATPase 1 Resembles the α Subunit of Na⁺/K⁺ ATPases

ATPase 1 shares greatest similarity with a mammalian Na⁺/K⁺ ATPase, with 32% amino acid identity (39.7% similarity) in a comparison of conserved regions a-j containing a total of 282 amino acids (Table II). Comparison with ATPases of different ion transport specificities reveals similar though somewhat lower values (identity range 18-32.3%). The two malarial cation ATPase sequences available (P. falciparum ATPase 1 and the Ca²⁺ ATPase from P. yoelii) share 30% identity in a similar comparison with each other, and calmodulin and phospholamban binding sites (features of Ca²⁺ ATPases) are absent from ATPase 1. Ouabain binding to mammalian Na⁺/K⁺ ATPases depends on two regions which were first identified in the sheep α1 subunit, an ouabain-sensitive enzyme. The first lies between M1 and M2 (the first and second membrane-spanning segments) and requires a Gln residue (position 111) and an Asn residue (position 122); mutagenesis at these positions (Gln→Arg or Asn→Asp) increases similarity to the rodent form of the enzyme and confers the property of ouabain resistance (23). ATPase 1 possesses both an Asn (amino acid 60, preceded by a conserved Asp residue) and a Gln residue (amino acid 49) separated by 10 amino acids in the appropriate region between M1 and M2 (Fig. 2 a), a feature shared with H⁺/K⁺ ATPases (69). The second region contains the motif EYTWLE, which binds the steroid ring of ouabain and is found in the extracellular loop between M3 and M4 in the mammalian sequence. The malarial sequence has the motif EYTNHI (amino acids 434-439) in the corresponding position, a motif not found in ATPases with other ion transport specificities. Ouabain does not inhibit the multiplication of cultured P. falciparum at concentrations which inhibit the host enzyme (72). A sequence in ATPase 1 also shows similarity to sequences in ion channel proteins (Fig. 3 b) (45, 74); organellar mammalian Ca²⁺ ATPases failed to show this type of similarity when a search under identical conditions was carried out. In addition to the conserved elements found in all cation ATPases, ATPase 1 has four novel regions of amino acid sequence we have called inserts (I1-I4, shown in Fig. 2). Inserts are rich in amino acids common to malarial proteins, such as asparagine and lysine, and several also contain degenerate versions of tandemly repeated amino acid motifs. Insert 1 (I1) spans amino acids 171-359; I4 (amino acids 1,114-1,729) contains the repeat ([S/G]DNI[C/Y/N])n. These inserts are located in regions which are poorly conserved between eukaryotic membrane ATPases. The Ca²⁺ pump from P. yoelii is also extended by two sequences similar to inserts (43); e.g., one region (containing a total of 59 residues) follows the conserved cytoplasmic domain f, is rich in Asn, Asp, and Lys residues, and corresponds in position to I2 from ATPase 1.

Chromosomal Assignment and Northern Blot Analysis

ATPase 1 was localized to chromosome 5 by pulsed field gel electrophoresis (data not shown) and this assignment was independently confirmed (Dr. T. Wellems, National Institutes of Health, Bethesda, MD, personal communication). Analysis of total RNA from late ring/early trophozoite stages of development by hybridization of Northern blots with a fragment from ATPase 1 (p7CA insert, see Fig. 1 a) revealed a single transcript sized at 5.65-6.3 kb (Fig. 4).
This mRNA transcript size is consistent with the size of the ORF (5.865 kb) and confirms that ATPase 1 is expressed during the erythrocytic phase of the life cycle of the parasite. The unexpected presence of tandem repeats in ATPase 1, which lack homologues in ATPases from other organisms, led us to investigate them further. Amplification by PCR across a region rich in Asn residues (extending to nucleotide position 3,965 and including I3, Fig. 2) was undertaken from cDNA (from erythrocytic stage parasites) cloned into vector CDM8, using the primers Var 1 and Var 2 (Table I). The observed amplification product was of the size predicted from sequence analysis (data not shown), confirming that this highly A+T-rich region (>80%) is expressed. To establish that the cDNA used in this and subsequent experiments was free of contamination with genomic parasite DNA, PCR amplification was undertaken across a region of the P. falciparum β-tubulin gene which contains an intron of 344 bp (Table I, primers BIVS5', BIVS3', and BIVSe, reference 10). Genomic and cDNAs subjected to PCR using the same sets of primers yielded product sizes which differed by the size of the intron (Table I).

Fig. 3 (caption): The ATPase 1 sequence is taken as the consensus for these alignments. Identical amino acids and strict semiconservative substitutions (see Table II) are boxed. Alignments to sodium channels were made after searches were carried out on a DAP (see Materials and Methods). sh (Shaker), shaB, and shaw are Drosophila genes; rck1 and drk1 are sequences obtained from rat brain cDNA (14); the sodium channel alignments have been constructed from sequences in rat sodium channel III (21, 28) and the remaining alignments are from reference 27.

To examine the nature of the tandem repeats within the ATPase 1 gene in different parasite strains, PCR was performed on DNA templates from 12 P. falciparum isolates using primers Var 3 and Var 4 to amplify part of I4 (Fig. 2 and Table I; nucleotide sequence 4,134-4,500 bp, corresponding to residues 1,070-1,191). There was clear variation in the length of the products obtained in this experiment (Fig. 5 a). The reasons for this length variation were examined by cloning of the PCR products and sequence analysis. This showed that the regions flanking the degenerate tandem repeats in ATPase 1 were constant in sequence composition. Length increase in this region arose from duplication of one or more repeated elements within the conserved flanking units. There were no examples of variation in the length of the sequence that comprises a single repeat unit (Fig. 6). This presence of repeat units, and the variation in their numbers and type, was a surprising feature of an ion-motive ATPase. To determine whether functional constraints may have restricted variation in phylogenetically conserved regions of ATPase 1, a PCR experiment was carried out across regions a-c using primers Var 5 and Var 6 (nucleotide positions 1,089-1,417, Fig. 2 and Table I) from 12 isolates (Fig. 5 b). In contrast to the previous experiment, no length variation was seen in this region.

Fig. 5 (caption): (a) PCR (primers Var 3 and Var 4, Table I) was performed with genomic or cDNA templates from the indicated parasite strains under conditions described in Table I. (b) PCR with primers Var 5 and Var 6 (Table I), which flank ATPase 1 regions a-c, was applied to templates from the same parasites as in a. Molecular weight markers and gel running conditions were the same as in Figure 5 a.
Fig. 6 (caption): Repeat units of I4 of ATPase 1 (Fig. 2). Each box represents one repeat unit, and differences in repeat unit sequence are indicated by differential shading (see key).

The degenerate tandem repeats contained in region I4 of ATPase 1 were shown to be expressed by PCR amplification of template from cDNA. The PCR product was identical in size to that obtained by amplification of the genomic DNA template (Fig. 5 a).

Immunochemical Studies of ATPase 1

Polyclonal antibodies were raised in mice against two synthetic peptides derived from the amino acid sequence of ATPase 1 (Materials and Methods). The regions chosen were designed to minimize the probability of cross-reactivity with other malarial antigens, and were also significantly different between the three malarial ATPases we have isolated (Table II). The two polyclonal reagents were applied to samples on slides prepared for immunofluorescence as described (Materials and Methods). Negative controls did not reveal staining of parasites or erythrocytes (Fig. 7), and a positive control (monoclonal anti-p195) stained parasites as expected (data not shown). Both antipeptide antibodies stain a region around the periphery of the parasite as well as a region next to the pigment body. No staining of the red cell is seen. These findings suggest that ATPase 1 may be associated with the parasite plasma membrane and/or the parasitophorous vacuole.

Similarity to Other P-Type ATPases

In eukaryotic cells, the Na⁺/K⁺ ATPase pump is the primary regulator of cell volume (23). The generation of electrochemical gradients (which includes the membrane potential of P. falciparum) and the uptake of nutrients also depend upon this or similar enzymes (31). Using oligonucleotide probes we have cloned genes encoding a family of putative P-type ATPases from the most virulent human malarial parasite, P. falciparum. Sequencing and comparative analysis of one member of this family (ATPase 1) has shown many interesting features. It is the largest reported cation-ATPase sequence and resembles most closely the α-subunit of mammalian Na⁺/K⁺ ATPases. Extramembranous loops (between M1, M2 and M3, M4) contain conserved residues common to Na⁺/K⁺ ATPases, further supporting our suggestion for its ion transport specificity (Fig. 2). There is preservation of features shared by cation pumps, such as similar hydropathy profiles and critical amino acid residues in conserved cytoplasmic regions (Fig. 3 a). Most Na⁺/K⁺ ATPase sequences are highly conserved (>90% amino acid identity, ranging from Xenopus to mammals), although an invertebrate pump (from Artemia, the brine shrimp) shares 70% identity with the other pumps (3). The malarial sequence would be the most divergent α-subunit Na⁺/K⁺ ATPase-like sequence identified to date, raising the possibility that it may no longer function as a P-type ATPase, although we believe that, for the reasons discussed before, it retains cation transport capability.

Table II (caption): The number of identities and strict semiconservative substitutions (according to Dayhoff rules: I/V, L/M, E/D, K/R, F/Y, and additionally I/L) were calculated for each conserved region using the malarial sequence (ATPase 1) as a template (Fig. 3 a). The names of the ATPases listed are the same as for Fig. 3 a.

The primary structure of ATPase 1 may reflect the general tendency of many proteins already sequenced from P. falciparum to be larger and less well conserved phylogenetically than those from other lower eukaryotes, possibly as a consequence of immune selection pressure (discussed below) or intracellular location.
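The hydropathy profiles referred to above are conventionally computed with a Kyte-Doolittle sliding window; the sketch below is a generic implementation with the standard scale and a 19-residue window typical for membrane-span prediction (our illustration, not the authors' analysis software).

```python
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_profile(protein, window=19):
    """Mean hydropathy in a sliding window; sustained peaks above ~1.6
    suggest membrane-spanning segments."""
    scores = [KD[aa] for aa in protein]
    half = window // 2
    return [sum(scores[i - half:i + half + 1]) / window
            for i in range(half, len(scores) - half)]

# Toy stretch: a hydrophobic core flanked by polar residues.
toy = "NNDDKK" + "ILVFLLIVAILVFLLIVAI" + "KKDDNN"
profile = hydropathy_profile(toy)
print("max window score: %.2f" % max(profile))
```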
The lipid environment of the pump may also influence the structure and activity of this enzyme (49, 69), as parasite membranes have a markedly different lipid composition compared to uninfected erythrocytes (24). It has been suggested that P-type cation ATPases arose as an "evolutionary mosaic" when a primitive ATP-hydrolyzing domain combined with different types of ion channels (19). The similarities in sequence between the third membrane-spanning segment (region S3) of some ion channel proteins, a voltage-gated G protein receptor, and a sequence in the terminal transmembrane hairpins of ATPase 1 (M6, Fig. 3 b) provide the first link between the cation ATPases and the superfamily of ion channels (27). Erythrocytes infected with P. falciparum lose K⁺ ions and gain Na⁺ ions, resulting in an erythrocyte cytosolic ratio (K⁺/Na⁺) of 0.8 (17), in spite of which the parasite continues to maintain a high (~6.8) K⁺/Na⁺ ratio (35). ATPase 1 could therefore play a role in regulating the K⁺/Na⁺ ratio of the parasite cytosol in the face of changes in the ionic composition of the host cell cytoplasm. Alternative ion transport specificities for ATPase 1, particularly K⁺/H⁺ countertransport or H⁺ transport, cannot be excluded. The K⁺/H⁺ ATPase is very closely related to Na⁺/K⁺ ATPases (69) and could contribute in a similar way to the regulation of cation metabolism in infected erythrocytes by parasites (31). The recent cell-free culture of the erythrocytic stage of P. falciparum confirms that the parasite regulates its internal ionic composition independently of the red cell, and may allow direct testing of potential inhibitors of this enzyme (75). Most transporter systems (such as those involved with nucleoside, amino acid, and glucose uptake) which have been studied in infected erythrocytes first become maximally active at the late ring/early trophozoite stage of development of the parasite. This is the stage at which most RNA from ATPase 1 is detected (data not shown).

Polymorphism in ATPase 1

Between conserved regions of ATPase 1 there appears to have been interposition of amino acids, some of which are characteristic of malarial antigens (such as Asn and Lys) and of tandemly repeated motifs (60). One of the regions containing tandem repeats has been shown to vary in repeat unit type and number between different P. falciparum isolates. A simple crossover event at meiosis (perhaps with conversion) could account for the observed variation in the numbers of tandem repeats. Differences between repeat units arise from one of two base transitions (G→A and A→G) or one transversion (T→A), each of which changes the repeat structure by one amino acid (Fig. 6). Variation in the number and type of tandem repeat units (between 11 and 13 in the isolates sequenced) is the means whereby polymorphisms are generated in this region of ATPase 1. The region containing polymorphic tandem repeats and another region rich in Asn and Lys residues are both expressed in mRNA. Partial sequence analysis of ATPases 2 and 3 shows that the unusually divergent nature of ATPase 1 is also a feature of the other two ATPases. The variations in ATPase 1 are not likely to be due to PCR artefacts for the following reasons: (a) more than one independent clone has been sequenced in all studies on ATPase 1 (strain T9/96); and (b) Lockyer et al. (38) have carried out a comparison of polymorphisms in the CSP gene and found PCR across tandem repeats in the malarial genome to be reproduced faithfully in a comparison of at least 10 independent clones. Repeat-unit counting of the kind described here is also easy to reproduce computationally, as sketched below.
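A minimal sketch of such repeat counting, using the degenerate ([S/G]DNI[C/Y/N]) unit reported for insert I4; the allele sequences are invented for illustration and are not isolate data.

```python
import re

# Degenerate repeat unit reported for insert I4: ([S/G]DNI[C/Y/N])n
UNIT = r"[SG]DNI[CYN]"

def count_tandem_repeats(peptide, unit=UNIT):
    """Return the longest run of consecutive repeat units and its span."""
    runs = re.finditer(r"(?:%s)+" % unit, peptide)
    best = max(runs, key=lambda m: len(m.group(0)), default=None)
    if best is None:
        return 0, None
    return len(best.group(0)) // 5, best.span()  # unit length is 5 residues

# Hypothetical alleles differing only in repeat number and type (cf. Fig. 6):
allele_a = "KKNE" + "SDNIC" * 11 + "NEKK"
allele_b = "KKNE" + "SDNIC" * 8 + "GDNIY" * 5 + "NEKK"
for name, seq in [("allele_a", allele_a), ("allele_b", allele_b)]:
    n, span = count_tandem_repeats(seq)
    print(name, "->", n, "repeat units at", span)
```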
Studies on allelic variation in cDNA from the β-tubulin gene include information on some of the parasite isolates that we have also examined (76). In coding sequence, the variation results mainly from single nucleotide changes (10 transversions and three transitions in three strains of P. falciparum), with one 3-nucleotide insertion/deletion event. In the β-tubulin sequences from the three strains that have been examined, the degree of variation appears to be constrained by selection pressures which may result from limits imposed by function (see below). We suggest that the polymorphism in tandem repeats seen among different isolates in the ATPase 1 gene (region I4) is maintained as a consequence of selection pressure, possibly due to immune responses from the host. As this region makes no attributable contribution to ATP hydrolysis or ion transport, the putative transporter domains within ATPase 1 may not be affected by the observed variation. ATP hydrolysis and cation transport remain to be demonstrated in ATPase 1. However, functionally constrained regions present in other ATPases should be conserved in ATPase 1 from P. falciparum if it is capable of cation transport. To test this hypothesis, length variation in the sequence encoding the conserved cytoplasmic regions a-c was examined between 12 strains. As expected, this region was length-invariant between isolates, supporting the notion of functionally imposed constraints on variation.

Comparison with Other Malarial Genes and Implications for Antigens

Many malarial antigens with no attributable functions contain tandemly repeated elements, and some are also rich in asparagine and lysine residues (2, 60). Cross-reactivity of antibodies directed both to asparagine-rich regions and to tandem repeats has been amply demonstrated. Antigens are also polymorphic in the nature and types of repeats which are encoded. Although the tandem repeats in ATPase 1 have not yet been identified in the antigens isolated, their polymorphic and repetitive nature is consistent with the behavior observed in some antigens sequenced from different isolates. Malarial gene products vary from 20 to 90% identity in amino acid sequence as compared with mammalian homologues (32). The sequence we have isolated (ATPase 1) is of low similarity to its mammalian homologues. Other housekeeping genes share some of the properties of ATPase 1; for example, the RNA POL II gene product from P. falciparum is the largest sequenced so far (36), and the recently published RNA POL III (37) sequence is also enlarged by tandemly repeated elements rich in Asn and Lys residues. Likewise, the DHFR (65), pfmdr (13), and P. yoelii Ca²⁺ ATPase (43) gene products contain regions rich in Asn residues. However, the phenomenon of polymorphism of tandem repeats alongside length invariance of other regions has been shown here for the first time in a housekeeping gene like ATPase 1. Immunochemical studies suggest that ATPase 1 is to be found in the region of the parasite plasma membrane and perhaps the food vacuole. The "smokescreen" hypothesis for antigens argues that repeated exposure of the host immune system to antigens serves to delay affinity maturation of antibodies and hence effective immune responses (29). Affinity maturation may be delayed because of the presence of T cell-independent epitopes found in repetitive domains of proteins, through a "cis-acting" strategy (for review see reference 60).
How can this be reconciled with the location of antigens which are not found on the surface of the infected erythrocyte or the merozoite and are not therefore exposed to immunological scrutiny during the normal life cycle of the parasite? We suggest that in most malarial infections, exposure of the immune system to (intraparasitic) antigens takes place during the clearance of parasites which do not complete the life cycle (39). Processing of parasite proteins may therefore provide a continuing antigenic stimulus early in infection which subsequently hampers antibody maturation responses, and delays or prevents effective immunological attack at critical stages of the life cycle of the parasite. The maintenance of diversity in many parasite proteins which are not directly exposed to the host's immune system may therefore allow a few proteins with essential functions (such as invasion of red cells or cytoadherence) to perform in a less impeded fashion. This "trans-acting" strategy (60) may explain why so many parasite proteins not directly accessible to antibodies are polymorphic (including, perhaps, ATPase 1). Those proteins which are highly conserved phylogenetically, such as parasite calmodulin and tubulin, are probably subject to strict functional constraints and therefore cannot tolerate the insertion of novel sequences (including tandemly repeated amino acids). The recent observations of a connection between the intraerythrocytic parasite and the plasma offer an alternative explanation for polymorphisms in antigens located in the parasitophorous vacuolar membrane or the parasite plasma membrane, but they cannot explain polymorphisms in proteins which are predominantly intraparasitic (48). These observations also offer the intriguing possibility of inhibiting the function of some parasite gene products, such as ATPase 1, without the inhibitor first having to penetrate the erythrocyte membrane and cytosol. Many of the observations in this paper are amenable to further genetic and functional studies on this family of sequences.
2014-10-01T00:00:00.000Z
1993-01-02T00:00:00.000
{ "year": 1993, "sha1": "bca8588003c6ae3def228ed9d359628e40e7a838", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/120/2/385/1255990/385.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bca8588003c6ae3def228ed9d359628e40e7a838", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235669730
pes2o/s2orc
v3-fos-license
$P$-wave charmed baryons of the $SU(3)$ flavor $\mathbf{6}_F$

We use QCD sum rules to study mass spectra of $P$-wave charmed baryons of the $SU(3)$ flavor $\mathbf{6}_F$. We also use light-cone sum rules to study their $S$- and $D$-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. We work within the framework of heavy quark effective theory, and we also consider the mixing effect. Our results can explain many excited charmed baryons as a whole, including the $\Sigma_c(2800)^0$, $\Xi_c(2923)^0$, $\Xi_c(2939)^0$, $\Xi_{c}(2965)^{0}$, $\Omega_c(3000)^0$, $\Omega_c(3050)^0$, $\Omega_c(3066)^0$, $\Omega_c(3090)^0$, and $\Omega_c(3119)^0$. Their masses, mass splittings within the same multiplets, and decay properties are extracted for future experimental searches.

In this paper we shall systematically investigate $P$-wave charmed baryons of the $SU(3)$ flavor $\mathbf{6}_F$. In Refs. [85,86] we studied mass spectra of $P$-wave bottom baryons using the method of QCD sum rules [87,88]; in the present study we replace the bottom quark by the charm quark and reanalyse those results. In Ref. [89] we studied decay properties of $P$-wave bottom baryons using the method of light-cone sum rules [90][91][92][93][94], and in the present study we apply the same method to $P$-wave charmed baryons of the $SU(3)$ flavor $\mathbf{6}_F$. We shall study their $S$- and $D$-wave decays into ground-state charmed baryons together with pseudoscalar mesons $\pi/K$ and vector mesons $\rho/K^*$. We shall work within the framework of the heavy quark effective theory (HQET) [95][96][97], and we shall also consider the mixing effect between two different HQET multiplets.

This paper is organized as follows. In Sec. II we briefly introduce our notation and use the method of QCD sum rules to study mass spectra of $P$-wave charmed baryons of the $SU(3)$ flavor $\mathbf{6}_F$. The obtained results are further used in Sec. III to study their $S$- and $D$-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. The mixing effect between different HQET multiplets is investigated in Sec. IV, and the obtained results are summarized in Sec. V, where we conclude this paper.

II. MASS SPECTRA THROUGH QCD SUM RULES

In this section we follow Ref. [27] and classify $P$-wave charmed baryons. A singly charmed baryon consists of one charm quark and two light up/down/strange quarks, and its internal symmetries are:

• The color structure of the two light quarks is antisymmetric ($\bar{\mathbf{3}}_C$).
• The flavor structure of the two light quarks is either symmetric ($\mathbf{6}_F$) or antisymmetric ($\bar{\mathbf{3}}_F$).
• The spin structure of the two light quarks is either symmetric ($s_l \equiv s_{qq} = 1$) or antisymmetric ($s_l = 0$).
• The orbital structure of the two light quarks is either symmetric or antisymmetric. We call the former λ-type, with $l_\rho = 0$ and $l_\lambda = 1$, and the latter ρ-type, with $l_\rho = 1$ and $l_\lambda = 0$. Here $l_\rho$ denotes the orbital angular momentum between the two light quarks, and $l_\lambda$ denotes the orbital angular momentum between the charm quark and the two-light-quark system.

According to the Pauli principle, the total wave function of the two light quarks is antisymmetric, so that $P$-wave charmed baryons can be categorized into eight multiplets. Four of them belong to the $SU(3)$ flavor $\mathbf{6}_F$ representation, as shown in Fig. 1. We denote them as $[F(\mathrm{flavor}), j_l, s_l, \rho/\lambda]$, where $j_l = l_\lambda \otimes l_\rho \otimes s_l$ is the total angular momentum of the light components; a small enumeration sketch of this classification is given below.
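The sketch below (our illustration, not from the paper) encodes the Pauli-principle constraint just described — for flavor $\mathbf{6}_F$ the spin × orbital part of the light diquark must be symmetric — and enumerates the resulting four $P$-wave multiplets with their total angular momenta $j = |j_l \pm 1/2|$.

```python
from fractions import Fraction

def couple(j1, j2):
    """Total angular momenta allowed when coupling j1 and j2."""
    j, hi, out = abs(j1 - j2), j1 + j2, []
    while j <= hi:
        out.append(j)
        j += 1
    return out

HALF = Fraction(1, 2)  # spin of the charm quark

# Orbital modes: rho (l_rho = 1, orbitally antisymmetric) and lambda
# (l_rho = 0, orbitally symmetric); total orbital excitation L = 1.
for name, l_rho, l_lam in [("rho", 1, 0), ("lambda", 0, 1)]:
    for s_l in (0, 1):
        # Pauli principle for flavor 6_F: spin x orbital must be symmetric,
        # so a symmetric orbital (l_rho = 0) pairs with s_l = 1 and vice versa.
        if (l_rho == 0) != (s_l == 1):
            continue
        for j_l in couple(l_rho + l_lam, s_l):
            js = ", ".join(str(j) for j in couple(j_l, HALF))
            print("[6F, j_l=%d, s_l=%d, %s]: j = %s" % (j_l, s_l, name, js))
```

Running this prints the four multiplets named in the text: [6F, 1, 0, rho] with j = 1/2, 3/2; [6F, 0, 1, lambda] with j = 1/2; [6F, 1, 1, lambda] with j = 1/2, 3/2; and [6F, 2, 1, lambda] with j = 3/2, 5/2.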
There are one or two charmed baryons contained in each multiplet, with the total angular momenta j = j_l ⊗ s_c = |j_l ± 1/2|. We have systematically studied mass spectra of P-wave bottom baryons in Refs. [85,86]. In the present study we just need to replace the bottom quark by the charm quark and reanalyse those results. The newly obtained results for charmed baryons are summarized in Table I. In the calculation we have used the following QCD parameters at the renormalization scale 1 GeV [4,62,[98][99][100][101][102][103]: $\langle g_s \bar{s}\sigma G s\rangle = M_0^2 \times \langle \bar{s}s\rangle$, $M_0^2 = 0.8~\mathrm{GeV}^2$, and $\langle g_s^2 GG\rangle = (0.48 \pm 0.14)~\mathrm{GeV}^4$. Besides, we have used the PDG value $m_c = 1.275 \pm 0.025$ GeV [4] for the charm quark mass in the $\overline{\mathrm{MS}}$ scheme. To better understand P-wave charmed baryons, we shall further investigate their decay properties in the next section. The parameters given in Table I will be used as inputs. To better describe P-wave charmed baryons, we select the following mass values when calculating their decay widths, which are calculated through:

III. DECAY PROPERTIES THROUGH LIGHT-CONE SUM RULES

We have systematically studied decay properties of P-wave bottom baryons of the SU(3) flavor 6_F in Ref. [89] using the method of light-cone sum rules within HQET. In this paper we apply the same method to study P-wave charmed baryons of the SU(3) flavor 6_F. We shall study their S- and D-wave decays into ground-state charmed baryons together with pseudoscalar mesons π/K and vector mesons ρ/K*, including: In the above expressions, the superscripts S and D denote S- and D-wave decays, respectively; X denotes the P-wave charmed baryons and V^µ the light vector mesons, while the remaining symbols denote the ground-state charmed baryons and the light pseudoscalar mesons. We shall use the Ω_c^0(3/2^-) belonging to [6_F, 2, 1, λ] as an example, and study its D-wave decay into Ξ_c^+(1/2^+) and K^-(0^-) in Sec. III A. Then we shall apply the same method to systematically investigate the four charmed baryon multiplets of 6_F separately in the following subsections. Finally, we use the amplitude to evaluate its partial decay width to be:

In the following subsections we shall similarly investigate the four charmed baryon multiplets [6_F, 1, 0, ρ], [6_F, 0, 1, λ], [6_F, 1, 1, λ], and [6_F, 2, 1, λ]. We study their S- and D-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. We derive the following nonzero coupling constants: Some of these coupling constants are shown in Fig. 2 as functions of the Borel mass T. We further use these coupling constants to derive the following decay channels that are kinematically allowed: We summarize the above results in Table II.

The [6_F, 0, 1, λ] doublet contains altogether three charmed baryons: We study their S- and D-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. We derive the following nonzero coupling constants: Some of these coupling constants are shown in Fig. 3 as functions of the Borel mass T. We further use these coupling constants to derive the following decay channels that are kinematically allowed: We summarize the above results in Table III.

The [6_F, 1, 1, λ] doublet contains altogether six charmed baryons. We study their S- and D-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. We derive the following nonzero coupling constants: Some of these coupling constants are shown in Fig. 4 as functions of the Borel mass T.
We further use these coupling constants to derive the following decay channels that are kinematically allowed: We summarize the above results in Table IV.

For the [6_F, 2, 1, λ] doublet, we study their S- and D-wave decays into ground-state charmed baryons together with light pseudoscalar and vector mesons. We derive the following nonzero coupling constants: Some of these coupling constants are shown in Fig. 5 as functions of the Borel mass T. We further use these coupling constants to derive the following decay channels that are kinematically allowed: We summarize the above results in Table V.

To explain these discrepancies, we recall that the HQET is an effective theory, which works quite well for bottom baryons [89] but not so well for charmed baryons [105]. Therefore, the three J^P = 1/2^- charmed baryons can mix, and the three J^P = 3/2^- charmed baryons can also mix. Accordingly, we just need a tiny mixing angle θ_1 ≈ 0° to make it possible to observe all the P-wave Ξ_c baryons in the Λ_c K decay channel.

V. SUMMARY AND DISCUSSIONS

In this paper we perform a rather complete study of P-wave charmed baryons of the SU(3) flavor 6_F. We use the method of QCD sum rules to study their mass spectra. We also use the method of light-cone sum rules to study their decay properties, including their S- and D-wave decays into ground-state charmed baryons together with pseudoscalar mesons π/K and vector mesons ρ/K*. We work within the framework of heavy quark effective theory, and we also consider the mixing effect between different HQET multiplets. According to the heavy quark effective theory, we categorize P-wave charmed baryons of the SU(3) flavor 6_F into the four multiplets [6_F, 1, 0, ρ], [6_F, 0, 1, λ], [6_F, 1, 1, λ], and [6_F, 2, 1, λ]. We find it possible to interpret the Ω_c(3000)^0 as the P-wave Ω_c baryon of either J^P = 1/2^- or 3/2^-, belonging to this doublet. However, total widths of Σ_c(…), with the mixing angle fine-tuned to be θ_2 = 37 ± 5°. Our results suggest: a) the Ξ_c(2923)^0 and Ω_c(3050)^0 can be interpreted as the P-wave Ξ_c and Ω_c baryons of J^P = 1/2^-, belonging to [6_F, 1, 1, λ]; b) the Ω_c(3119)^0 can be interpreted as the P-wave Ω_c baryon of J^P = 5/2^-, belonging to [6_F, 2, 1, λ]; c) the Ξ_c(2939)^0, Ξ_c(2965)^0, Ω_c(3066)^0, and Ω_c(3090)^0 can be interpreted as the P-wave Ξ_c and Ω_c baryons of J^P = 3/2^-, belonging to the mixing of the [6_F, 1, 1, λ] and [6_F, 2, 1, λ] doublets. To arrive at our interpretations, we need to pay attention to the following: there exist considerable uncertainties in our results for the absolute values of charmed baryon masses due to their dependence on the charm quark mass [85,86]; however, mass splittings within the same doublets do not depend much on this and are calculated with much smaller uncertainties; moreover, we can extract more useful information from the decay properties of charmed baryons. Summarizing the above results, the present sum-rule interpretations are listed in Table VI. We suggest that the Belle-II, CMS, and LHCb Collaborations further study them to verify our interpretations. In particular, we propose to further study the Σ_c(2800)^0 to examine whether it can be separated into several excited charmed baryons. For convenience, we show their total widths and branching ratios in Fig. 6 using pie charts. The sum rule equation for the Ω_c^0 [5
2021-06-30T01:16:35.609Z
2021-06-29T00:00:00.000
{ "year": 2021, "sha1": "74110c718bbb3db831b148df7c8fe9825b8c504e", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.104.034037", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "8e43bdb626ee1618d61ed17501e7566daeed536a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
145022402
pes2o/s2orc
v3-fos-license
Race/Ethnicity, Enrichment/Fortification, and Dietary Supplementation in the U.S. Population, NHANES 2009–2012 In the United States (U.S.), food fortification and/or enrichment and dietary supplement (DS) use impacts nutrient intakes. Our aim was to examine race/ethnicity and income (Poverty Income Ratio, PIR) differences in meeting the Dietary Reference Intakes based on estimated dietary intakes among the U.S. population age ≥2 years (n = 16,975). Two 24-hour recalls from the National Health and Nutrition Examination Survey (NHANES) cycles 2009–2012 were used to estimate the intake of 15 nutrients as naturally occurring, enriched/fortified, and plus DSs. Across racial/ethnic groups and within PIR categories, significant differences were observed in the %< Estimated Average Requirement (EAR) for vitamin A following enrichment/fortification (E/F) and for vitamin B12 and riboflavin following both E/F and DS use when comparing non-Hispanic blacks, Hispanics, and the other race/ethnicity group to non-Hispanic whites. The %<EAR for iron and calcium also differed depending on race/ethnicity within PIR category (p < 0.05). The %<EAR was significantly lower for vitamin D after E/F for Hispanics, and after E/F combined with DS use for vitamins C and B6 for Hispanics and the other race/ethnicity group than non-Hispanic whites. Non-Hispanic blacks were inadequate in all nutrients examined except vitamin C based on the %<EAR than individuals of other races/ethnicities. Differences in the tolerable upper intake level (UL) of nutrients, especially folate and zinc, also varied by race/ethnicity and PIR category. Introduction The United States (U.S.) Dietary Guidelines for Americans (DGA) 2015 recommended healthy eating patterns across the lifespan.However, many Americans (42.2%) do not adhere to these guidelines, according to the 2010 Healthy Eating Index (HEI) scores [1].Furthermore, the 2015 Dietary Guidelines Advisory Committee (DGAC) identified a number of shortfall nutrients (folate, calcium, magnesium, fiber, potassium, and vitamins A, D, E, and C) among which consumption had not met the Institute of Medicine's Dietary Reference Intakes (DRIs) of the Estimated Average Requirement (EAR) or the Adequate Intake (AI) [2].While vegetables, fruits, whole grains, and dairy are important sources of shortfall nutrients, intake is low for many Americans [2].Enrichment and/or fortification (E/F) of foods in the U.S. food supply has helped with meeting the recommended nutrient intake levels as well as reducing some deficiencies through replacement of specific nutrients lost during processing (enrichment) and increasing nutrient levels (fortification) [3,4].The use of dietary supplements (DSs) has also played a role in meeting nutrient intake recommendations and reducing the percentage of the population that is below the EAR [5]. Racial/ethnic differences in shortfall nutrients were noted in the 2015 DGAC report (Part D), with a few notable trends.Intake of most nutrients was lowest among non-Hispanic blacks (NHB), vitamin C intake was highest for Hispanics, and intake of several other nutrients were highest among non-Hispanic whites (NHW) such as magnesium, folate, iron, potassium, calcium, and vitamins A, E, and D [2].This disparity among the population is of concern given the associations between health outcomes and the under-consumed shortfall nutrients including calcium, potassium, vitamin D, and iron (among adolescent and premenopausal adult females) [2]. 
Disparities in adherence to the dietary recommendations by race/ethnicity and income among U.S. adults in general or with regard to a particular food group or food have also been examined [6].In 2011, Wang and Chen observed lower HEI scores among NHB adults compared with NHW adults after adjustment for socioeconomic status (SES) and certain nutrition and health related psychosocial factors [6].Previous research has assessed recommended intakes by socioeconomic level, such as Supplemental Nutrition Assistance Program (SNAP) eligibility.However, these studies have often been limited to a nutrient, food, or food group.Some studies have assessed usual intakes of many nutrients by the poverty index ratio (PIR) [7][8][9] or racial/ethnic differences [9,10].However, only a few studies to date have included E/F and these studies have not focused on racial/ethnic differences [5,11,12]. Fulgoni et al. evaluated the total usual nutrient intakes for 19 micronutrients from naturally occurring, enriched/fortified, and DS sources (aged ≥2 years), using the North American branch of the International Life Sciences Institute (ILSI North America) Fortification database and the National Health and Nutrition Examination Survey (NHANES) 2003-2006 [5].Results from this study showed that many Americans did not meet the DRI recommended micronutrient intake levels prior to E/F and DS [5].Following E/F, intakes of vitamins A, C, and D, thiamin, iron, and folate dramatically improved, and further improved with the use of DSs [5].Additionally, Berner et al. examined the impact of E/F on nutrient intakes among children and adolescents using the NHANES 2003-2006 [11], as well as intakes of certain nutrients with and without fortification by gender and age groups among children and adults aged ≥1 year using the Continuing Survey of Food Intakes by Individuals (CSFII) 1989-1991 dietary data [12]. Analyzing NHANES 2009-2012, Blumberg et al. (2017) recently found less apparent nutrient inadequacies among NHW adults than those in other racial/ethnic groups examined (NHB, Hispanic, and non-Hispanic Asian [NHA]) who reported using DSs [10].Blumberg et al. (2017) also observed nutrient inadequacies that varied by PIR category [7].Adults in the highest PIR category who reported using DSs experienced lower rates of inadequacy in twice as many nutrients compared to counterparts in the middle and lowest PIR categories [7]. Usual nutrient intakes of the eight shortfall nutrients among adults 19 years of age and older by PIR and sex using one NHANES cycle, 2011-2012, and two 24-hour dietary recalls were evaluated by Bailey et al. [8].The authors found that, regardless of PIR category, men had significantly greater mean intakes of folate and vitamins C and D, whereas intake of magnesium was higher for women [8].Compared with the lowest and middle PIR categories, the %<EAR for all micronutrients was significantly lower for those in the highest PIR category (PIR ≥ 350%), and mean total usual nutrient intakes were significantly higher for 7 of the 8 micronutrients in men (except calcium) and women (except vitamin C) [8]. Use of DSs including multivitamins and multi-minerals by the U.S. population ≥1 years of age has also been evaluated by SES and health-related characteristics with the NHANES 2003-2006 [13].Self-reported DS use was highest for NHWs (59%) followed by 36% for NHBs and 34% for Mexican-Americans [13]. The aim of the current study is to determine if nutrient intakes differ among racial/ethnic groups in the U.S. 
population in individuals ≥2 years of age, when accounting for PIR, using NHANES dietary data for two cycles, 2009-2012, and the ILSI North America Fortification database.To achieve our aim, estimations of usual dietary intake based on the National Cancer Institute (NCI) usual intake estimation methodology [14,15] were used to compare intakes of 15 nutrients as they occur intrinsically in food, after E/F, and after the addition of DSs to intakes from enriched/fortified foods with the appropriate DRIs. Study Population and Dietary Data The NHANES is a nationally representative, cross-sectional survey conducted by the National Center for Health Statistics (NCHS) that uses a complex, stratified, multistage probability cluster sampling design to sample noninstitutionalized, civilian U.S. residents [16].Approval for the NHANES protocol was received from the NCHS Research Review Board, and participants or proxies provided informed consent.Two 24-hour dietary recalls were collected as part of the NHANES using the computer-assisted dietary interview software program, USDA's automated multiple-pass method (AMPM), and are available in the Centers for Disease Control and Prevention (CDC) public dietary data.These 24-hour dietary recalls include an in-person interview followed by a telephone interview administered three to 10 days later.In addition, DSs data are available through NHANES from the first day dietary interview during which participants were encouraged to share current DS consumption with the interviewers to include product label information, dose, strength, and frequency. Across two consecutive NHANES cycles (2009)(2010)(2011)(2012), we combined dietary and individual DS intakes reported over two 24-hour periods [17] with the food patterns equivalent database from the U.S. Department of Agriculture (USDA) as well as the ILSI North America Fortification database (see: http://ilsina.org/our-work/nutrition/fortification/).Since this study was a secondary data analysis of publicly available federal data, Human Subject Institutional Review Board approval was not required by the Medical University of South Carolina. Twenty-four hour dietary intake data were complete and available for 20,293 participants.The sample for analysis included a total of 16,975 participants two years of age and older (see Table S1).Individuals with incomplete data (n = 1947) and those under two years of age (n = 1371) were excluded. Food and beverage consumption reported in the 2009-2010 and 2011-2012 NHANES dietary interview component (What We Eat in America, WWEIA) and nutrient estimates were obtained using the USDA's Food and Nutrient Database for Dietary Studies (FNDDS) [18].Estimates for nutrients as naturally occurring (i.e., intrinsic or all foods and beverages excluding E/F), enriched, and fortified were available through the ILSI North America Fortification database.The ILSI North America Fortification database was combined with the two 24-hour individual food intakes from NHANES, demographics data, and total Supplement Files. 
The number of grams consumed per participant from the NHANES food file was multiplied by the nutrient proportion in the ILSI North America Fortification database, and divided by 100 to determine the amount of consumed grams of study nutrients for each level (naturally occurring, E/F added, DSs added) in the specified food/beverage.The total nutrient intake (per 100 g) for each nutrient as naturally occurring, enriched/fortified, and DSs was then calculated by summing each of the above levels for every participant from the various foods and beverages consumed. Estimation of Usual Intake, %<EAR, and % ≥UL The NCI usual intake estimation methodology was utilized to estimate the prevalence of usual intake based on data from two 24-hour dietary recalls [14,15].Estimation of usual intake at the individual level is possible through the NCI's MIXTRAN and DISTRIB computer macro.The usual intake macro was run to estimate total nutrient intake following the addition of DSs' nutrient intake.Analyses controlled for age, interview day (1st vs. 2nd), PIR (<131%, 131-185%, >185%), weekend day (yes/no), and race/ethnicity (NHW, NHB, Hispanic, and other races/ethnicities). Three cumulative sources were used to generate estimated usual intake data for the 15 nutrients of interest, which were those included in the ILSI North America Fortification database: a) foods and beverages as naturally occurring, b) foods and beverages including E/F, and c) foods and beverages including E/F and DSs.These nutrients included vitamins A, D, E, C, B 6 , and B 12 , folate, thiamin, riboflavin, niacin, iron, zinc, calcium, magnesium, and potassium.For folic acid and folate, dietary folate equivalents (DFE) was used [19].Percent below the EAR (%>AI for potassium), and % ≥UL were estimated as appropriate.We estimated the proportion of the population with nutrient intakes from sources as naturally occurring, enriched/fortified, and supplements greater than or equal to the ULs with the exception of thiamin, riboflavin, potassium, and vitamins A, E, and B 12 , for which ULs have not been estimated [19][20][21].The ULs for niacin and magnesium were derived from non-food sources (enriched/fortified or DSs) [19,22]. Statistical Analysis We accounted for the NHANES clustered sampling design and oversampling in all analyses and adjusted for differential non-coverage and non-response across the two continuous NHANES cycles [23][24][25].Frequencies were reported for sample size from day 1 and day 2 interviews.Means and standard errors (SE) were calculated for average usual intake, average %<EAR (%>AI for potassium), and average %≥UL.SEs were estimated using Balanced Repeated Replication and NHANES weights were applied.The age group analyzed was ≥2 years for both sexes combined.PIR was defined by the following categories: <131%, 131% to 185%, and >185% of the poverty index, which is similar to PIR cut-points used by the DGAC 2015-2020, SNAP eligibility of ≤130% of the poverty line, and past studies [7,26].Linear mixed models were used to evaluate race/ethnic differences by PIR category and within each level of nutrient intake.NHW was used as the race/ethnic reference category. For ease of interpretation, bar graphs of the percent below EAR are presented for all nutrients stratified by race/ethnicity and PIR category.All analyses were conducted using SAS, version 9.4, and its complex survey-specific procedures (SAS Institute; Cary, NC, USA) and a p-value < 0.05 was considered statistically significant. 
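To make the bookkeeping above concrete, the sketch below walks through the same steps for a single nutrient. It is illustrative only: the file and column names are hypothetical stand-ins for the actual NHANES/ILSI variables, vitamin D with its nominal adult EAR is used as the example, and simple survey-weighted proportions stand in for the NCI usual-intake modelling (MIXTRAN/DISTRIB) and the BRR variance estimation described above.

```python
"""
Illustrative sketch of the intake bookkeeping described above.
Column and file names are hypothetical; real NHANES/ILSI variables differ.
This is NOT the NCI usual-intake method -- just simple weighted tallies.
"""
import pandas as pd

# Hypothetical inputs -------------------------------------------------------
foods  = pd.read_csv("nhanes_food_recalls.csv")   # one row per reported food:
                                                  # participant_id, food_code, grams_consumed, day
fortif = pd.read_csv("ilsi_fortification.csv")    # per 100 g of each food_code:
                                                  # vit_d_natural_per100g, vit_d_enrich_fortif_per100g
supps  = pd.read_csv("nhanes_supplements.csv")    # participant_id, vit_d_supplement_ug (daily)
demo   = pd.read_csv("nhanes_demographics.csv")   # participant_id, survey_weight, race_eth, pir_category

# Nutrient contributed by each reported food at each level:
# grams consumed x (nutrient per 100 g) / 100, as described in the Methods.
df = foods.merge(fortif, on="food_code", how="left")
for level in ["natural", "enrich_fortif"]:
    df[f"vit_d_{level}"] = df["grams_consumed"] * df[f"vit_d_{level}_per100g"] / 100.0

# Sum over all foods/beverages, then average the two recall days per participant.
per_person = (df.groupby(["participant_id", "day"])[["vit_d_natural", "vit_d_enrich_fortif"]]
                .sum()
                .groupby("participant_id")
                .mean()
                .reset_index())

# Cumulative intake levels: natural -> plus enrichment/fortification -> plus supplements.
per_person = per_person.merge(supps, on="participant_id", how="left").fillna({"vit_d_supplement_ug": 0})
per_person["vit_d_with_ef"] = per_person["vit_d_natural"] + per_person["vit_d_enrich_fortif"]
per_person["vit_d_with_ds"] = per_person["vit_d_with_ef"] + per_person["vit_d_supplement_ug"]

# Survey-weighted percent below the EAR. 10 ug/day is the IOM vitamin D EAR for ages 1-70,
# used here as a single nominal cutoff; the paper applies age- and sex-specific DRIs.
per_person = per_person.merge(demo, on="participant_id")
EAR_VIT_D = 10.0

def pct_below_ear(group, col):
    below = group[col] < EAR_VIT_D
    return 100.0 * group.loc[below, "survey_weight"].sum() / group["survey_weight"].sum()

summary = (per_person.groupby(["race_eth", "pir_category"])
           .apply(lambda g: pd.Series({lvl: pct_below_ear(g, lvl)
                                       for lvl in ["vit_d_natural", "vit_d_with_ef", "vit_d_with_ds"]})))
print(summary)
```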
Estimated Mean Usual Nutrient Intakes from Foods and Beverages as Naturally Occurring, Plus Enriched/Fortified, and Plus Dietary Supplement Sources Estimated mean usual nutrient intakes and %<EAR are displayed in Table 1 for nutrients by three levels (as naturally occurring, with E/F, and with DSs) and by racial/ethnic group.Usual nutrient intake appeared to differ by race/ethnicity over the three levels of nutrient intake.Across all races/ethnicities and levels, %<EAR was highest overall for vitamins A, D, E, and C as well as calcium and magnesium.NHBs appeared to have the highest %<EAR for 13 of the 14 nutrients, and the lowest %>AI for potassium compared with the three other racial/ethnic groups.For vitamin C, the %<EAR was similar (27%) for NHWs and NHBs.A decrease was observed across racial/ethnic groups for the %<EAR for vitamin D from 100% as naturally occurring for all groups to 91%-97% after E/F.However, after the addition of DSs, the %<EAR for vitamin D decreased further for all groups (93% to 65% for NHWs, 95% to 80% for NHBs, 91% to 79% for Hispanics, and 94% to 74% for the other race/ethnicity group).An appreciable decline in the %<EAR was also observed for folate for all groups following E/F that further decreased after considering DSs.Specifically, the %<EAR for folate from naturally occurring, E/F, and DSs sources was 82%, 7%, and 5% for NHWs; 89%, 14%, and 12% for NHBs; 78%, 6%; and 5% for Hispanics; and 78%, 5%, and 4% for the other race/ethnicity group, respectively.Within racial/ethnic groups vitamin A also exhibited large declines in the %<EAR when E/F sources were included, declining from 65% to 32% for NHWs; 76% to 50% for NHBs; 66% to 39% for Hispanics; and 63% to 37% for the other race/ethnicity group.Iron intake for NHWs was higher and %<EAR was lower than the other three races/ethnicities. Total Usual Mean Nutrient Intakes by Race/Ethnicity and PIR Category Total usual mean nutrient intakes by intake level are presented by racial/ethnic group within PIR category (see Table 2).Focusing on the usual intake of nutrients as naturally occurring, comparing intake of folate within PIR categories across race/ethnic groups, usual intake of folate was lowest for NHBs in each PIR category.For example, in the lowest PIR category, mean (SE) usual intake of folate (ug/DFE) was 159 (0.2) in NHBs compared to 193 (0.2) in NHWs, 192 (0.2) in Hispanics, and 192 (0.2) in the other race/ethnic group.Moreover, intake of folate was highest for NHWs in the lowest PIR category and highest for Hispanics in the middle and highest PIR categories.Similarly, calcium intake was lowest for NHBs compared to NHWs, Hispanics, and the other race/ethnic group, across all PIR categories, but highest for NHWs compared to NHBs, Hispanics, and the other race/ethnic group across all PIR categories.Intake of folate with E/F was lowest for NHBs comparing intake of folate within PIR categories across all race/ethnic groups for each PIR category.For example, in the lowest PIR category, mean (SE) usual intake of folate with E/F (ug/DFE) was 458 (0.4) in NHBs compared to 533 (0.4) in NHWs, 536 (0.4) in Hispanics, and 550 (0.8) in the other race/ethnic group.Intake of folate with E/F was highest for the other race/ethnic group, comparing intake of folate within PIR categories across all race/ethnic groups for each PIR category.This pattern was also seen for the usual nutrient intake of calcium with E/F intake lowest for NHBs across all PIR categories and highest for NHWs across all PIR categories. 
Percent Less than the EAR for Nutrients by Race/Ethnicity and PIR Category The %<EAR for the three nutrient levels is shown based on race/ethnicity and PIR category (see Table 3 and Figure 1a-c).The %<EAR for folate as naturally occurring was highest for NHBs compared to NHWs, Hispanics, and the other race/ethnic group in all PIR categories, and lowest for Hispanics compared to NHWs, NHBs, and the other race/ethnic group in all PIR categories.For example, in the lowest PIR category, the average %<EAR (SE) for folate as naturally occurring was 89.8% (0.6) in NHBs compared to 86.5% (1.0) in NHWs, 79.7% (1.6) in Hispanics, and 83.5% (1.8) in the other race/ethnic group.Compared with NHWs, the %<EAR for folate as naturally occurring was significantly higher for NHBs and significantly lower for Hispanics regardless of PIR category (p < 0.0001 for all).The %<EAR for calcium as naturally occurring was highest for NHBs in all PIR categories, and lowest for NHWs in all PIR categories.The %<EAR for calcium as naturally occurring for NHBs was significantly higher than NHWs in all PIR categories (p < 0.0001 for all). When E/F of foods is added, the %<EAR for folate with E/F was highest for NHBs in all PIR categories.The %<EAR for folate with E/F was lowest for Hispanics in the lowest PIR category, and for the other race/ethnic group in the middle and highest PIR categories.For example, in the lowest PIR category, the average %<EAR (SE) for folate with E/F was 13.5% (1.4) in NHBs compared to 8.3% (1.0) in NHWs, 6.3% (0.8) in Hispanics, and 6.4% (1.4) in the other race/ethnic group.The %<EAR for folate as naturally occurring was significantly higher for NHBs than NHW counterparts, and significantly lower for Hispanics in the lowest PIR category, and the other race/ethnic group in the middle and highest PIR categories (p < 0.0001 for all).The %<EAR for calcium with E/F was highest for NHBs in all PIR categories, and lowest for NHWs in the lowest PIR category and for Hispanics in the middle and highest PIR categories.The %<EAR for calcium with E/F was significantly higher for NHBs in all PIR categories than NHWs (p < 0.0001). With the addition of estimated nutrient intake from DS use, the %<EAR for folate with DSs was highest for NHBs in all PIR categories and lowest for the other race/ethnicity group in all PIR categories.Compared with NHWs in all PIR categories, the %<EAR for folate with DSs was significantly higher for NHBs and significantly lower for the other race/ethnicity group (p < 0.0001 for all).The %<EAR for calcium with DSs was highest for NHBs in all PIR categories and lowest for NHWs in all PIR categories.The %<EAR for calcium plus DSs for NHBs was significantly higher than NHWs in all PIR categories (p < 0.0001 for all). Percent Greater than or Equal to the UL for Nutrients by Race/Ethnicity and PIR Category Table 4 shows the % ≥UL for the four racial/ethnic groups by PIR category and nutrient level for nutrients for which the UL is available.Focusing on usual intake of nutrients as naturally occurring, the % ≥UL was ≤2% for vitamin A, folate, and calcium.For usual intake of nutrients with E/F, the % ≥UL was ≤10.1% for vitamin A, niacin, zinc, and calcium.Following the addition of DSs, the % ≥UL was ≤10% for vitamins D, C, B 6 , calcium, and magnesium.When considering usual intake with DSs, the % ≥UL for folate ranged from 24.4% to 42.6% and the % ≥UL for zinc ranged from 5.4% to 12% across race/ethnic groups and PIR categories. 1 Average percent (standard error). 
2 Statistical tests evaluating differences within race/ethnicity categories were run (non-Hispanic white reference). Three regression models tested for race/ethnicity differences within PIR categories for each food/beverage level. Bold font signifies p-value < 0.0001. * signifies p-value < 0.05. 3 Data source: What We Eat in America, NHANES 2009-2012 [17]. 4 Usual intake distribution estimated using the National Cancer Institute Method for individuals two years of age and older, including pregnant and lactating women. Accessible via https://epi.grants.cancer.gov/diet/usualintakes/method.html. 5 The other race/ethnicity group included non-Hispanic persons reporting multiple races. 6 Dietary Reference Intakes for vitamin A, vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium, and zinc (2001) [21]. 7 A supplemental file is not currently available for vitamins A and E in NHANES for 2009-2012, and it will be released at a later date. 8 Dietary Reference Intakes for calcium and vitamin D (2011) [27]. 9 Dietary Reference Intakes for calcium, phosphorus, magnesium, vitamin D, and fluoride (1997) [22]. 10 Dietary Reference Intakes for vitamin C, vitamin E, selenium, and carotenoids (2000) [28]. 11 Dietary Reference Intakes for thiamin, riboflavin, niacin, vitamin B6, folate, vitamin B12, pantothenic acid, biotin, and choline (1998) [19]. 12 Folate EAR is presented as dietary folate equivalents (DFE): 1 DFE = 1 µg food folate = 0.6 µg of folic acid from fortified food or a supplement consumed with food = 0.5 µg of a supplement taken on an empty stomach. 13 Dietary Reference Intakes for water, potassium, sodium, chloride, and sulfate (2005) [20]. 14 The AI approach was used for potassium. U.S., United States. NHANES, National Health and Nutrition Examination Survey. PIR, poverty index ratio. RAE, retinol activity equivalents. AT, α-tocopherol. DFE, dietary folate equivalents. The shading differentiates between the three PIR categories.

Footnotes to Table 4: 1 Average percent (standard error). 2 Statistical tests evaluating differences within race/ethnicity categories were run (non-Hispanic white reference). Three regression models tested for race/ethnicity differences within PIR categories for each food/beverage level. Bold font signifies p-value < 0.0001. * signifies p-value < 0.05. 3 Data source: What We Eat in America, NHANES 2009-2012 [17]. 4 Usual intake distribution estimated using the National Cancer Institute Method for individuals two years of age and older, including pregnant and lactating women. Accessible via https://epi.grants.cancer.gov/diet/usualintakes/method.html. 5 The other race/ethnicity group included non-Hispanic persons reporting multiple races. 6 A UL was not available for the following nutrients: vitamin A (retinol activity equivalents [RAE]) and vitamin E (α-tocopherol [AT]) for supplements; vitamin B12, thiamin, riboflavin, and potassium for all levels. 7 Dietary Reference Intakes for vitamin A, vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium, and zinc (2001) [21]. 8 Values were zero until dietary supplements were added. 9 Dietary Reference Intakes for calcium and vitamin D (2011) [27]. 10 Dietary Reference Intakes for calcium, phosphorus, magnesium, vitamin D, and fluoride (1997) [22]. 11 Dietary Reference Intakes for vitamin C, vitamin E, selenium, and carotenoids (2000) [28].
12 Dietary Reference Intakes for thiamin, riboflavin, niacin, vitamin B6, folate, vitamin B12, pantothenic acid, biotin, and choline (1998) [19]. 13 Folate EAR is presented as dietary folate equivalents (DFE): 1 DFE = 1 µg food folate = 0.6 µg of folic acid from fortified food or a supplement consumed with food = 0.5 µg of a supplement taken on an empty stomach. 14 UL for magnesium and niacin established only for supplemental sources. U.S., United States. NHANES, National Health and Nutrition Examination Survey. PIR, poverty index ratio. DFE, dietary folate equivalents. The shading differentiates between the three PIR categories.

Discussion

In summary, usual intake levels of vitamins A, C, B6, folate, thiamin, riboflavin, iron, zinc, calcium, magnesium, and potassium were lowest in NHBs when compared to NHWs, Hispanics, and the other race/ethnicity group, even after accounting for PIR category. Usual intake levels of vitamins A, E, B12, riboflavin, niacin, calcium, and potassium were consistently highest in NHWs when compared to NHBs, Hispanics, and the other race/ethnicity group across all PIR categories. The usual nutrient intake levels below the EAR also differed across racial/ethnic groups and within PIR categories. Compared with NHWs, Hispanics, and the other race/ethnicity group, the %<EAR was highest in NHBs for vitamins A, B12, folate, thiamin, riboflavin, zinc, calcium, and magnesium, whereas the %>AI was lowest in NHBs for potassium, across all PIR categories. These results are of particular concern because other studies have demonstrated the continued high prevalence of neural tube defects among infants of young women from inner-city, low-income NHB populations and Hispanic populations in southern U.S. states [29,30]. The data indicate a lack of knowledge about the increased need for folic acid during pregnancy. Based on our data and these reports, targeting young, lower-income women of childbearing age from these populations with educational information to encourage them to consume adequate synthetic folic acid daily (from fortified foods or supplements), in addition to food forms of folate from a varied diet, is needed.
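Because the folate results above are expressed in dietary folate equivalents, the conversion implied by footnote 13 can be written out explicitly. The block below restates the standard IOM accounting as background rather than reproducing an equation from the paper, so the 1.7 factor should be read as the usual convention, not as the authors' formula.

```latex
% Standard IOM dietary folate equivalent (DFE) accounting implied by footnote 13 above.
% Restated as background; not an equation reproduced from this paper.
\begin{align*}
\text{DFE}~(\mu\text{g}) &= \mu\text{g food folate} \;+\; 1.7 \times \mu\text{g synthetic folic acid consumed with food},\\
\text{since}\quad 1~\mu\text{g DFE} &= 1~\mu\text{g food folate} = 0.6~\mu\text{g folic acid with food}
\;\;\Rightarrow\;\; 1/0.6 \approx 1.7 .
\end{align*}
```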
reported that DRI-recommended nutrient intake levels were not met by many Americans prior to E/F and DS use [5].Specifically, intakes of vitamins A, C, and D, thiamin, iron, and folate greatly improved with E/F.We examined nutrient intake levels by race/ethnicity within PIR category and observed increased intakes of the same nutrients with E/F.Fulgoni et al. did not report on racial/ethnic differences in their earlier study. DS use and nutrient intakes and inadequacies were also evaluated by race/ethnicity among adults 19 years of age and older by Blumberg et al. using the NHANES 2009-2012.However, dietary information for NHA was limited to the NHANES 2011-2012 cycle [10].Compared with NHBs, Hispanics, and NHAs, NHWs were found to have half as many nutrient inadequacies [10].Within each racial/ethnic group (NHW, NHB, Hispanic [Mexican Americans and other Hispanics], and NHA), intakes of most of the 19 nutrients were higher for DS users compared to non-users [10].A second study that used NHANES 2009-2012 data examined vegetable consumption in women of childbearing age [9].While comprehensive, these studies did not examine the impact of E/F.While dietary guidelines have been in place for years, adherence to many food groups has been low and racial/ethnic differences have been observed.Among African-Americans, Latinos born in the U.S., and Latinos born in Mexico or South America who participated in the Multiethnic Cohort Study in Hawaii and Los Angeles, the group with the lowest adherence to the Food Guide Pyramid recommendations were African-Americans followed by U.S.-born Latinos [31].Among the three racial/ethnic groups, adherence to the Food Guide Pyramid recommendations for dairy was lowest among all the food groups.Recently, food groups consumed in the diets of Mexicans, Mexican Americans born in the U.S. or Mexico, and NHWs were assessed for food acculturation using one 24-hour dietary recall from the Mexican National Nutrition Survey 1999 and NHANES 1999-2006 [32].Included in the study were female adolescents and adults 12-49 years of age, as well as children 2-11 years of age.Specifically, desserts, salty snacks, pizza, French fries, low-fat meat and fish, high-fiber bread, low-fat milk, saturated fat, and sugar intakes were higher for the three U.S. groups than for Mexicans [32].Intakes of corn tortillas, low-fiber bread, high-fat milk, and Mexican fast food were lower for the three U.S. groups.When examining food group and nutrient intakes among the three U.S. groups and by age, similar patterns were observed [32].In terms of the timing of acculturation, the findings indicated that "within one generation in the U.S." the positive aspects of the purely Mexican diet were replaced by selections of higher fat and energy U.S. diet components [32].These acculturation data, coupled with what was reported here, suggest that nutrition education targeted to new immigrant groups in the U.S., particularly children, might have strong benefits for reducing a potential lifelong impact of U.S.-acquired poor dietary patterns. Overconsumption of energy from added sugars and solid fats was observed for individuals two years of age and older across all racial/ethnic (NHW, NHB, Mexican American) and income (lowest, middle, highest) groups using the NHANES 2001-2004 in a 2012 study by Kirkpatrick et al. 
[26].The authors reported increased adherence to most food groups according to dietary guideline recommendations for those with the highest incomes (>185% of poverty threshold), of whom adults were twice as likely to consume the minimum recommendations for milk, oils, and total vegetables than those with the lowest incomes (≤130%) [26].NHBs were the least likely to meet dietary guideline recommendations, and when limited to children, the same was true for the recommended minimum milk consumption (15% compared with 42% and 35% for NHWs and Mexican-American children, respectively) [26].Among adults, only 2% of both NHWs and NHBs consumed the recommended minimum of dry beans and peas compared with 20% of Mexican-Americans [26]. Similar to past findings [33], compared to the lowest and middle income groups, adherence to dietary guideline recommendations for several food groups (e.g., whole grains, total vegetables, and milk) was observed to be higher for adults with the highest income [26].Barriers and facilitators or promoters for consumption of certain food groups including fruits and vegetables have been previously described for diverse populations.Among African-Americans, barriers included cost and finances (for all foods) and preferences (for fruits and fast food) [34], as well as access to fresh produce [35], whereas facilitators/promoters included taste and health concerns [34], with differences reported by age and gender.In focus groups involving Hispanics, African-Americans, and Caucasians, lack of time especially among those less than 50-years-old, cost, rates of food spoiling, media's fast-food promotion, and convenience of pre-packaged foods were reported as barriers to fruit and vegetable intake, while knowledge of health benefits and children's health were facilitators [35].A barrier for Hispanics was the impact of the U.S. culture on food availability [35]. Limitations of our study include use of dietary information from two 24-hour NHANES recalls to estimate nutrient intake, which may be subject to bias (recall and interviewer), although usual intake was assessed given the use of two 24-hour recalls.Past studies involving NHANES 24-hour recall data have been criticized as having underestimation of the overall intake and nutrient levels by U.S. adults due to food and drink item portion size underreporting [36].The current study was able to better estimate the sources of nutrients consumed by the U.S. population through the incorporation of the ILSI North America Fortification database that contains naturally occurring ( intrinsic), enriched, and fortified nutrient estimates for foods and beverages coded using the USDA's FNDDS based on reported consumption in the 2009-2010 and 2011-2012 NHANES.There are many strengths of the current study including use of nationally-representative dietary databases (NHANES and FNDDS) that contain the most comprehensive dietary intake information available for the U.S. population, as well as use of the USDA's AMPM approach for collection of dietary intake data by NHANES to limit misreporting of food and beverage items [18,37].The selected PIR categories were similar to the DGAC 2015-2020 PIR cut points, eligibility for SNAP of ≤130% the poverty line, and that of past studies [7,26]. 
Conclusions Fortification and enrichment of food, as well as the use of DSs play an important role in meeting nutrient intake recommendations and reducing inadequacies.However, nutrient intakes have been shown to vary by demographic characteristics including race/ethnicity and income.Even after foods were enriched/fortified and DSs were added to the diet, over half of the population failed to meet the EAR for vitamins D (59.7% to 84.4%) and E (76.1% to 86.6%) with ~50% of NHBs not meeting the EAR for vitamin A, calcium, and magnesium.Disparities in access to foods containing nutrients found in enriched/fortified and DS sources impact nutrient intakes and intakes below the EAR can result in negative health outcomes, particularly for the under-consumed shortfall nutrients reported in the 2015 DGAC report.Future studies can further investigate racial/ethnic differences in DRIs by age group and specific nutrient sources.Our study further underscored intake insufficiencies of the same shortfall nutrients identified by the last DGAC.Going forward, educational resources and programs geared toward increased intake of these nutrients by population groups at highest risk for deficiency such as children, women of child-bearing age, and the elderly should continue to be emphasized to improve dietary patterns, while addressing barriers including the convenience of fast-food and promotion by the media. Table 1 . Usual nutrient intake and percent less than the estimated average requirement for the U.S. population 2 years of age and older by race/ethnicity, NHANES 2009-20121-3 . Table 2 . Usual nutrient intake for the U.S. population 2 years of age and older by race/ethnicity and poverty index ratio category, NHANES 2009-20121-3 . Table 3 . Percent less than the estimated average requirement for the U.S. population 2 years of age and older by race/ethnicity and the poverty index ratio category, NHANES 2009-20121-4 . Table 4 . Percent greater than or equal to the tolerable upper intake level for the U.S. population two years of age and older by race/ethnicity and the poverty index ratio category, NHANES 2009-0121-4 .
2019-05-05T13:03:10.387Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "dc6e76f4357cea08b3ea0131acc154e95895df20", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/11/5/1005/pdf?version=1556782691", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "aa92c02fdc923ef3fc207483c0064d7724996dbb", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
53625559
pes2o/s2orc
v3-fos-license
Advances in High-Power Laser Diode Packaging Rapid evolution of semiconductor laser technology and its declining cost during the last decades have made the adoption of high-power laser diodes more readily affordable. The continuous pursuit of higher lasing power calls for better thermal management capability in the packaging design to facilitate controlled operation. As these laser diodes generate large heat fluxes that can adversely affect their performance and reliability, a thermally effective packaging solution is required to remove the excessive heat generated in the laser diode to its surroundings as quickly and uniformly as possible.

Introduction

Rapid evolution of semiconductor laser technology and its declining cost during the last decades have made the adoption of high-power laser diodes more readily affordable. The continuous pursuit of higher lasing power calls for better thermal management capability in the packaging design to facilitate controlled operation. As these laser diodes generate large heat fluxes that can adversely affect their performance and reliability, a thermally effective packaging solution is required to remove the excessive heat generated in the laser diode to its surroundings as quickly and uniformly as possible.

For high-power applications, one needs to consider not only the thermal challenges, but also the mechanical integrity of the joint as well as the electrical coupling to the remaining components in the module. These pose significant packaging challenges, as these factors complicate the effort to create an ideal packaging design. Firstly, laser diodes generate large heat fluxes, up to the range of MW cm⁻², which cause excessive temperature rise in the active region at high injection currents. The joint which secures the laser diode onto the assembly must be able to withstand the heat generated from the laser diode and be capable of maintaining its structural integrity during the long service life of the device. Secondly, the parametric performance and the reliability of these laser diodes are sensitive to both temperature and stress. During the bonding process, excessive induced bonding stress causes the parametric performance of the laser diode to change. Furthermore, laser diode packaging requires stringent alignment tolerances in order to achieve high optical fiber coupling. On top of the aforementioned factors, a cost-effective packaging solution is also important because packaging usually dominates the cost of the laser diode module. Hence, the development of laser diode packaging is not only a technological challenge for achieving better performance, but also a critical step for possible commercialization of the product.
Thermal management of high-power laser diodes In a laser diode package, the heat generated in the laser diode is transferred to the ambient environment by attaching a heat sink or heat spreader onto the laser diode.The laser diode must be attached to the package optimally to ensure an efficient heat transfer through the thermal interface.A thin void-free bonding interface is desired to create an effective heat dissipation channel through the die attachment process.To understand the effectiveness of its heat dissipation capability, thermal resistance calculations are usually employed to evaluate the thermal design and performance.In interface engineering, the usual measure of the heat flow in a laser diode package can be expressed as: Consequently, to improve the thermal design of the laser diode package, the thermal resistance should be minimized by: • Bringing the heat source to the heat sink as close as possible, • Making the interface as thin as possible, • Increasing the thermal conductivity of the material, Providing intimate thermal contact between the laser diode and the heat sink. For laser diode die attach, there are two bonding configurations; epi-side up and epi-side down (see Fig. 2).Eutectic die bonding processes for epi-side up bonding approaches have well been established by the semiconductor packaging industry (Qu, 2004;Larsson, 1990).The heat generated in the active region of the laser diode has to flow through the entire (GaAs/InP) substrate before reaching to the heat sink.The heat generated in the active region spreads laterally over the entire width of the laser as it flows to the heat sink, leading to a two-dimensional heat flow in the laser diode.This two-dimensional heat flow accounts for the logarithmical dependence of ridge width (Boudreau et al., 1993).Due to the low thermal conductivity of ternary alloys and multiple heterostructures (Capinski et al., 1999) of the laser diode, the heat generated in the active region cannot be dissipated onto the heat sink efficiently.Especially for single-mode (≤4 µm) ridge-waveguide laser diodes, the bulk of the heat generation is confined within the active region.This raises concerns as significant heat accumulation in the active region influences the spectral and spatial characteristics, with longitudinal modes broadening (Spagnolo et al., 2001).The optical output power will also be compromised since their performance characteristics are sensitive to the operating temperature of the laser diode. As depicted in Fig. 
2(b), epi-side down bonding is recommended for effective heat transfer since the proximity of the active region is only a few microns from the surface (Hayashi, 1992;Lee & Basavanhally, 1994;Katsura, 1990).The proximity of the active region to the top of the heat sink strongly influences the heat flow.Owing to the good thermal conductivity of the heat sink material, the heat produced in the active region is rapidly distributed to the heat sink.Substantial improvement can be achieved as the R laser diode is inversely proportional to the ridge width for epi-side down bonded approach (Martin et al., 1992).The R laser diode for a epi-side down bonded laser diodes was reported to be ~30% smaller than that of a epi-side up bonded laser diodes.Epi-side down bonding also eliminates the trade-off problem between high-frequency modulation (Delpiano et al., 1994) and temperature control.In a laser diode package, three forms of electrical parasitics were present; intrinsic diode, external chip and package parasitics.For high-frequency applications, the external parasitics of the laser diode chip and the package prevent the module from achieving higher frequency modulation.In order to drive the laser diode into higher frequency modulation, epi-side down bonding approach can help to reduce the external electrical parasitics significantly.However, there are physical constraints for epi-side down bonding.The stress associated with epi-side down bonding may cause physical distortion to the device due to the mismatch in the coefficient of thermal expansion (CTE) between the laser diode and the heat sink material.As shown in Fig. 3, the laser diode will experience stresses after the die-attachment process.When the package is cooled to room temperature, the laser diode will experience compressive stresses due to the large CTE mismatch between the laser diode and the heat sink material.The close proximity between the laser diode and the heat sink further promote the strain accumulation in the active region.Relieving this stress will, hence, improve the operating life of the device (Hayakawa et al., 1983). Fig. 3. Depending on the CTE properties of the two materials, compressive or tension stress can be observed after bonding. Advanced Die-attachment techniques After identifying that epi-side down bonding configuration as the preferred approach, the bonding process must also be able to realise the full potential of the improved thermal management capability.A major consideration in the bonding process is the formation of voids at the interface material.The primary concern with voids involves the loss of thermal conductivity.The void volumes add to the thickness to the joint volume, thus increasing the joint thickness, resulting in higher thermal resistance.Voids can cause hotspots by creating areas of poor heat dissipation paths.Voids in the thermal interface material not only limit the thermal dissipation capability but also deteriorate the electrical and mechanical properties of the joint.Continued product miniaturization and increased power density makes the importance of minimizing the void-fraction in the joint even more significant. 
Hence, various die-attachment techniques have been introduced to tackle the plurality of complicating issues in epi-side down bonding approach (see Table 1).A flip-chip interconnection technique using small solder bumps (Hayashi, 1992) was introduced to resolve the alignment issues.When the solder bumps melt in the reflow process, the surface tension of the molten solder allows self-alignment and accomplishes precise chip positioning.However, thermal dissipation is compromised since heat can only be transferred through the solder bumps.Bridged die bonding (Boudreau et al., 1993) was introduced to exploit the advantage of epi-side down bonding while avoiding the stress issue by employing a solder pattern with a gap, preventing solder from having contact with the sensitive ridge.This bridged die bond geometry enhances the heat dissipation capability compared to epi-side up bonding configuration.However, this approach does not directly address the thermal issue as the heat flux generated in the active region has to be re-directed to the side of the laser diode before travelling towards the heat sink.To improve the thermal dissipation capability further, a vacuum-release process (Bascom & Bitner, 1976;Mizui & Tokuda, 1988) is recommended to produce a fluxless and virtually voidless solder bond.Epitaxial lift-off technique (Dohle et al., 1996) was also introduced to provide good bond quality between the semiconductor chip and the substrate.Ultrasonic bonding or bonding with scrubbing effect (Pittroff, 2001(Pittroff, , 2002) ) was proposed to resolve any surface irregularities during bonding.However, special care must be taken as the solder material may bridge onto the facet easily, obscuring the emitted laser beam for edge-emitting laser diodes.The stress induced by the scrubbing process should be limited to avert any damage incurred on the laser diodes. In general, R interface material can be reduced by applying a pressure to ensure good thermal contact between the laser diode and the solder material during the bonding (solder reflow) process.The pressure induces compressive stress onto the laser diode which may cause structural distortion to the device.Molten-state bonding (Tew et al., 2004) was proposed to alleviate the bonding stress induced.The solder material was pre-heated into molten state before a pressure is applied.Due to its molten state, the pressure applied onto the laser diode was minimal.At the same time, the bonding temperature can be lowered and the bonding time is reduced to mere seconds.This is potentially advantageous for high-volume production where a rapid bonding cycle is favoured.However, just like ultrasonic bonding or scrubbing approach mentioned earlier, the bonding parameters and conditions must be optimized (Teo et al., 2008). The bonding parameters depend largely on the bonding technique and interface material used. Thermal interface materials As shown in Table 2, solders are utilized in every part of a laser diode assembly due to their electrical interconnect, mechanical support and heat dissipation capabilities.These solders can be commonly categorized into two types; hard solder and soft solder.The decision to use soft or hard solder is based on the optimization of a number of properties, including solder strength, solder migration, creep, fatigue, whisker formation, stress, thermal expansion, liquidus temperature, and thermal conductivity of each solder type.It is also dependent on the application as well as the hierarchy of the package. 
In general, the solder material must satisfy the following requirements: • Have the desired processing temperature to support high temperature operation • Provide sufficient wetting characteristics to form metallurgical bond between the laser diode and heat sink • Provide an efficient heat dissipating channel to the heat sink • Reduce thermally induced stresses arise from the mismatch of thermal expansion between the laser diode and heat sink • Exhibit no/low deformation during its long-term operation • Exhibit low electrical resistivity to reduce Joules heating at high injection current Table 2 shows a list of some common solder materials used in laser diode packaging.Soft solder, commonly containing large percentage of lead, tin and indium, has very low yield strength and incurs plastic deformation under stresses.Their capability to deform plastically helps to relieve the stress developed in the bonded structure.However, this makes soft solder subject to thermal fatigue and creep rupture, causing long-term reliability problems (Solomon, 1986;Lau & Rice, 1985).They are also attributed to solder instabilities like whisker growth, void formation at the bonding part, and diffusion growth (Mizuishi et al., 1983(Mizuishi et al., , 1984;;Sabbag &McQueen, 1975 and obstruct its optical beam.As mentioned earlier, the laser diode package will experience elevated temperature during operation.When the laser diode package is subjected to temperature above 65 °C, the homologous temperature of soft solder material such as Indium is more than 0.8.The solder material will experience high creep deformation, which implies a reliability concern for these solder joints.Hence, bonding of laser diodes using soft solder will face reliability problems (Shi et al., 2000(Shi et al., , 2002)).The reliability of the joint is a critical issue for the practical design and fabrication of a mechanically stable and reliable assembly. Hard solder, on the other hand, has very high yield strength and thus incurs elastic rather than plastic deformation under stresses.Eutectic Au80Sn20 alloy are usually adopted for high-power laser diode applications to overcome the reliability issues (Fujiwara, 1982).Accordingly, it has good thermal conductivity and is free from thermal fatigue and creep movement phenomena (Matijasevic et al., 1993).Unfortunately, hard solder does not help to release the stresses developed during the bonding cycle because of low plastic deformation in the solder material. 
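The homologous-temperature figure quoted above for indium can be checked with a one-line calculation. The melting point used below (about 156.6 °C) is the standard handbook value for indium and the arithmetic is ours; it is included only to make the ~0.8 figure concrete.

```latex
% Quick check of the homologous temperature T_h = T / T_m (absolute temperatures)
% for an indium joint at the 65 degC package temperature mentioned above.
% Indium T_m ~ 156.6 degC is the standard handbook value; the arithmetic is ours.
\begin{align*}
T_h = \frac{T}{T_m}
    = \frac{(65 + 273)\ \mathrm{K}}{(156.6 + 273)\ \mathrm{K}}
    = \frac{338\ \mathrm{K}}{429.6\ \mathrm{K}}
    \approx 0.79,
\end{align*}
```

which rises past 0.8 once the package runs only slightly hotter than 65 °C, i.e., the creep-prone regime for soft solders noted above.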
Next generation heat sink materials

The standard heat sink material in nearly all commercially available laser diode packages is copper, owing to its excellent thermal conductivity, its good mechanical machining properties, and its comparatively low price. However, with the global demand for increasing output power, heat sink materials with even higher thermal conductivity are desired. In response to these needs, an increasing number of ultra-high thermal conductivity materials have been and are being developed that offer significant improvements and may be suitable for high-power laser diode applications (Zweben, 2005). Not only do these materials possess very high thermal conductivity compared to traditional packaging materials, they also offer low CTE properties to reduce the thermal stresses that can affect the performance and reliability of the package. While some of these materials are still in their infancy, they offer an alternative perspective on how they can contribute to future thermal management problems, especially in the area of high-power applications.

Cooling approaches for high-power applications

In most thermal management problems, removing the heat flux generated by the heat source (in this case, the laser diode) to the heat sink by means of conduction alone is insufficient. To maintain high efficiency and long lifetime, the operating temperature of the laser diode package is kept as low as possible, typically below 60 °C. Depending on the thermal power density, these heat sink materials are further cooled by means of passive- or active-cooling approaches. The design and analysis of heat sinks is one of the most extensive research areas in electronics cooling (Rodgers, 2005). Heat sinks function by extending the surface area of heat-dissipating surfaces through the use of fins.

Air-cooling is traditionally associated with the use of heat sinks. The use of fan technology faces scrutiny, as the acoustic noise generated and fan-bearing wear may in one way or another affect the functionality and reliability of the application. Furthermore, the limits of air-cooling capability force the migration from air to liquid or thermoelectric cooling. Liquid cooling solutions have proven able to manage large transient heat loads, especially where design or space constraints limit the use of a forced air-cooling approach. Microchannel heat sinks are the state-of-the-art solution for maximum cooling performance (Leers, 2007, 2008). However, the integration of liquid-cooling technology raises reliability, cost and weight issues. Alternatively, a thermoelectric cooler (TEC) can also be used to cool and regulate the operating temperature of the assembly. In fact, thermoelectric coolers have been widely used in pump laser packages to cool the laser diode and achieve wavelength and power stability. The TEC provides an effective negative thermal resistance to regulate the temperature in the laser diode package. Currently, TECs have very low efficiency (more commonly expressed as the coefficient of performance), and efforts to develop new TE materials with a superior figure of merit (ZT) are ongoing. While each cooling approach has its advantages and limitations, the choice of cooling approach depends largely on the thermal power density and the construction of the system architecture.
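As a rough feel for why air-cooling runs out of headroom, the sketch below compares the convective thermal resistance of one and the same finned surface under natural air, forced air and liquid cooling. The fin area and heat-transfer coefficients are generic, order-of-magnitude textbook assumptions, not data for any specific cooler described here.

# Convective thermal resistance R = 1 / (h * A) for one assumed finned surface area.
# Heat-transfer coefficients h are order-of-magnitude textbook values (assumed).
A_fins = 0.02  # m^2 of effective finned surface area (assumed)

cooling_modes = {
    "natural convection (air)": 10.0,    # W/(m^2 K)
    "forced convection (fan)": 100.0,
    "liquid cooling (water)": 3000.0,
}

p_heat = 5.0  # W to be removed (assumed)
for mode, h in cooling_modes.items():
    r_conv = 1.0 / (h * A_fins)          # K/W
    dT = p_heat * r_conv                 # temperature rise across the convective path
    print(f"{mode:28s} R = {r_conv:7.2f} K/W   dT = {dT:6.1f} K for {p_heat:.0f} W")

This is why microchannel liquid cooling and TECs enter the picture as the dissipated power climbs; a TEC additionally allows the diode to be held below ambient, at the cost of its low coefficient of performance.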
Microstructure evolution of solder joint

During the bonding (solder reflow) process, a metallurgical bond is formed between the laser diode and the heat sink through the formation of intermetallic compounds (IMCs) at their respective interfaces. The initial formation of IMCs ensures a good thermal contact (R contact) at the interfaces. However, these IMCs continue to grow, though much more slowly, during storage and service. The growth of the interfacial IMCs depends on a number of factors, such as temperature and time, the volume of solder, the properties of the solder alloy, and the metallizations on the laser diode and heat sink. The IMC growth in terms of temperature and time is usually represented by the empirical relationship

X(t) = X_0 + A·√t·exp(−Q/RT)

where X(t) is the layer thickness at aging time t; X_0 is the initial thickness; A is a numerical constant; Q is the apparent activation energy; T is the aging temperature; and R is the gas constant.

Continuous interfacial reaction may compromise the integrity and reliability of the solder joint, as the IMCs make the solder joint less ductile and less capable of releasing stresses through plastic strain. Increased interfacial IMC content in the solder material also increases the thermal and electrical resistances of the bonded structure during storage and aging (Kressel, 1976; Fujiwara et al., 1979). This undesirable diffusion growth is of particular technological concern since it is always ongoing and may cause cracks and delaminations, especially in the presence of residual stress. In electronics packaging, the thickness of solder bumps is typically several hundred microns, while the thickness of the interfacial IMCs is a few microns. For laser diode die-attach, however, the solder joint has a thickness of only several microns. The IMC volume in the solder material therefore has a significant impact on the mechanical strength and reliability of the solder joint (Wong et al., 2005). In this section, three different solder materials commonly found in laser diode packaging are reviewed.

Fig. 4(a)-(c) shows the interfacial reactions of the as-bonded laser diode packages using (a) 63Pb37Sn, (b) 3.5Ag96.5Sn and (c) 80Au20Sn solders. During reflow, PtSn and PtSn4 IMCs were observed at the laser diode/solder interface. At the solder/heat sink interface, diffusion of Ni from the heat sink into the solder joint could be detected within 2-3 µm of the solder/heat sink interface. For both the 63Pb37Sn and 3.5Ag96.5Sn solder systems, a layer of Ni3Sn4 IMC was formed, followed by (Au,Ni)Sn4 IMCs. Due to the thin solder joint, the AuSn4 and (Au,Ni)Sn4 IMCs could be found in the solder as well as at the interfaces. The AuSn4 and (Au,Ni)Sn4 IMC precipitates were randomly dispersed in the solder joint. However, for the 80Au20Sn solder system, only a thin layer of (Au,Ni)Sn IMCs was observed. The solder joint consists of three Au-Sn phases: δ (AuSn), ζ′ (Au5Sn) and β (Au10Sn) (Teo et al., 2008). As shown in Fig. 4(c), the δ phase was observed to coalesce at the interfaces while the Au-rich ζ′ and β phases remained at the centre of the solder.

During operation, the heat generated in the laser diode will cause the package to experience thermal loading. Metallurgical interaction in the solder joint will continue to occur by means of solid-state processes. Consequently, the composition, microstructure and physical properties of the solder joint change over the device life.
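The empirical growth law above is straightforward to evaluate numerically. The sketch below does so for a storage and an accelerated-aging temperature; the pre-factor and activation energy are placeholder values chosen only to illustrate the Arrhenius behaviour (and to land near the ~8 µm scale quoted below for 49 days at 150 °C), not fitted constants from this study.

import math

R_GAS = 8.314  # J/(mol K)

def imc_thickness(t_hours, x0_um, a_const, q_j_per_mol, temp_c):
    """Empirical IMC growth law X(t) = X0 + A*sqrt(t)*exp(-Q/RT), thickness in um."""
    temp_k = temp_c + 273.15
    return x0_um + a_const * math.sqrt(t_hours) * math.exp(-q_j_per_mol / (R_GAS * temp_k))

# Placeholder constants for illustration only (not values reported in the chapter).
x0, a, q = 0.5, 1.0e8, 70e3   # um, um/sqrt(h), J/mol

for temp in (85.0, 150.0):                 # storage vs. accelerated-aging temperature
    for t in (24, 24 * 7, 24 * 49):        # 1 day, 1 week, 49 days
        x = imc_thickness(t, x0, a, q, temp)
        print(f"T = {temp:5.1f} degC, t = {t/24:5.1f} days -> X = {x:5.2f} um")

The square-root time dependence captures diffusion-controlled thickening, while the exponential factor shows why accelerated aging at 150 °C produces in weeks the IMC growth that would take far longer at storage temperatures.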
Fig. 5 shows the microstructure evolution of the 3.5Ag96.5Sn solder joint during aging at 150 °C. Although the laser diode package may not experience such a high temperature during operation, accelerated temperature aging is often adopted to screen whether the solder material and the joint are capable of meeting the desired operating lifespan of the application. During aging, the interfacial IMC thicknesses were found to increase with aging time. For the 63Pb37Sn and 3.5Ag96.5Sn solders, the AuSn4 and (Au,Ni)Sn4 IMCs first settled at the interfaces and then grew with time. As depicted in Fig. 5, the (Au,Ni)Sn4 grains separated from the Ni(P) layer at the roots of the grains, in the process of breaking off, and a Ni3Sn4 IMC layer was formed between the (Au,Ni)Sn4 IMC and the Ni(P) layer. These interfacial IMC layers, which formed a large portion of the solder joint, could grow to a thickness of up to 8 µm. A thick IMC layer at the interface poses a reliability concern, as stress is usually concentrated around it (Lee & Duh, 1999). To understand the growth kinetics, the total IMC thickness at the interfaces for the three solder systems is shown in Fig. 6. The IMC thickness for the 63Pb37Sn and 3.5Ag96.5Sn solder systems was significantly large, while the 80Au20Sn solder joint exhibited limited Ni solubility into the solder. The IMC growth rate for the 63Pb37Sn solder was initially much faster than that of the 3.5Ag96.5Sn solder, followed by the 80Au20Sn solder. When the bonded laser diodes were subjected to thermal aging at 150 °C, the homologous temperatures for the 63Pb37Sn, 3.5Ag96.5Sn and 80Au20Sn solders were 0.928, 0.856 and 0.765, respectively. The diffusion activation energy for the 63Pb37Sn solder was lower than those of the 3.5Ag96.5Sn and 80Au20Sn solders. Hence, the IMC thickness for the 63Pb37Sn solder joint was initially observed to be the largest. As the aging duration increased, the IMC growth rate for the 63Pb37Sn solder joint reduced while the IMC thickness for the 3.5Ag96.5Sn solder joint continued to increase. During aging, the participation of Sn in IMC formation at the interfaces reduced the overall Sn content in the solder joint. The reduced Sn composition lowered the Sn activity at the interfaces and, hence, Kirkendall voids were introduced in the 63Pb37Sn solder joint. On the other hand, the main constituent of the 3.5Ag96.5Sn solder is Sn. Even after 49 days of aging, the joint still had significant Sn content for further Ni-Sn and Au-Sn interdiffusion. Hence, with increased aging duration, the IMC thickness for the 3.5Ag96.5Sn solder continued to grow and surpassed the IMC thickness for the 63Pb37Sn solder joint. It is important to reiterate that interfacial IMCs such as AuSn4 and (Au,Ni)Sn4 form a large part of the R contact highlighted in Eq. 1.

Fig. 6. Interfacial IMC growth of the three solder systems with Ni(P) metallization in solid-state reaction at 150 °C.
In the 80Au20Sn system, the solder microstructure did not change significantly from the as-reflowed state (Fig. 5(c)), even after 49 days of thermal aging. Only a thin layer of (Ni,Au)3Sn2 IMC was introduced between the AuSn(Ni) IMC and the Ni(P) layer. The slow interfacial IMC growth was due to the low Sn content in the solder joint. Furthermore, diffusion of Sn to the interfaces was limited, as the Au-rich ζ′ and β phases at the centre of the solder joint essentially behaved as a diffusion barrier, preventing more Sn from diffusing into the IMC layers. Hence, a (Ni,Au)3Sn2 IMC was introduced rather than the (Au,Ni)Sn layer growing further. The microstructure details of the three solder systems are summarized in Table 3.

Table 3. Microstructure summary of the three solder systems.

Structural integrity of solder joint

Microstructure and failure mode are closely related in solder joints, as the composition, microstructure and physical properties of the joint change during aging. To assess the mechanical integrity of the solder joint, the laser diode package was subjected to shear testing, a mechanical overloading condition, to determine the weakest interface or material. Good bonding integrity, with brittle fracture occurring within the laser diode, was observed for all three solder systems after bonding. The bonded laser diode exhibited a complete fracture after a shear displacement of only several microns. As shown in Fig. 7, the brittle fracture consisted of Wallner lines in the GaAs material and interfacial delamination at the GaAs/SiN passivation interface of the laser diode. During aging, the fracture mode for both the 63Pb37Sn and 3.5Ag96.5Sn systems changed to ductile solder fracture, as shown in Fig. 9(a)-(b). The peak shear load reduced after aging, and the bonded laser diodes were completely removed only after shearing a distance of more than half the length of the laser diode (see Fig. 8). During aging, the AuSn4 and (Au,Ni)Sn4 IMCs settled at the interfaces and grew. These IMCs grew into thick planar morphologies, and gross defects were formed in the solder joint. These planar IMC layers reduced the mechanical strength of the solder joint.

Thermal behavior of laser diodes

In the previous section, we highlighted the importance of devising an efficient thermal management capability in the packaging design to meet the global demand for high-power applications. The heat dissipation capability depends not only on the selection of material and the means of external cooling but also on the bonding configuration of the laser diode. Since the parametric performance of laser diodes is strongly influenced by the heat dissipation capability, in this section the thermal behaviour of laser diodes under different bonding configurations is evaluated and compared.

Heating response of laser diodes

Uncoated 980 nm single-mode ridge-waveguide laser diodes with a cavity length of 600 µm were used for this comparison study. The laser diodes were bonded onto a copper heat sink using the molten-state bonding technique (Teo et al., 2004, 2008).
As shown in Fig. 11, the laser diode performance improved, with higher optical power achieved after bonding. The typical power achieved for epi-side up bonded laser diodes was ~1.3 times higher, whereas with the epi-side down bonding approach the optical power further improved to ~1.5 times before catastrophic damage. This shows that the lasing optical output of the laser diodes is strongly influenced by the heat dissipation capability through the die-attachment interface. Hence, understanding the heating response of the laser diodes is important for the thermal design and optimization. To understand the heat flow in the diode, it is important to study the transient behaviour of the LDs (Teo et al., 2009). Fig. 12 shows the transient heating response of the LDs at different pulse durations and duty cycles. The emission wavelength did not increase within the first 1 µs of operation. As the pulse width increased, transient heating (a shift of the emission wavelength) could be observed. The temperature increased at a rate of 2.84 °C/ms and saturated within several milliseconds. Likewise, the temperature in the active region was also observed to vary with duty cycle. When the duty cycle of the pulse repetition increased above 10%, the temperature distribution across the LD was non-uniform, and the temperature in the active region increased exponentially towards the CW value. At a high pulse repetition rate, the temperature rise in the active region might lead to performance deterioration. From the analogy of heat conduction, the time for the excess heat energy to be transported to the GaAs substrate and reach thermal equilibrium depends on the device and its surrounding medium. Using Paoli's method (Paoli, 1975), the temperature rise in the active region of the laser diode was estimated. Fig. 13 shows the effects of different bonding configurations on the temperature rise in the active region under pulsed and continuous-wave operating conditions. In pulsed operation, the temperature in the active region did not increase significantly even at a high injection current of 300 mA. This suggested that, in pulsed operation, the heat sink did not have a significant influence on the thermal behaviour of the laser diodes and that the heat generated was localized within the laser diode. However, in continuous-wave operation, Joule heating was evident (see Fig. 13).
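The Paoli-style estimate mentioned above infers the active-region temperature from the red-shift of the emission spectrum, using a calibration of wavelength against heat-sink temperature. The sketch below applies that idea with hypothetical numbers; the wavelength-temperature coefficient of about 0.065 nm/K is only a typical order of magnitude for ~980 nm InGaAs lasers, not a value reported in this chapter.

# Wavelength-shift thermometry in the spirit of Paoli (1975):
# calibrate d(lambda)/dT by varying the heat-sink temperature under short pulses,
# then convert the CW red-shift into an active-region temperature rise.

DLAMBDA_DT = 0.065   # nm/K, assumed typical coefficient for a ~980 nm InGaAs laser

def temperature_rise(lambda_pulsed_nm, lambda_cw_nm, dlambda_dt=DLAMBDA_DT):
    """Active-region temperature rise from the pulsed -> CW spectral red-shift."""
    return (lambda_cw_nm - lambda_pulsed_nm) / dlambda_dt

def thermal_resistance(delta_t_k, p_dissipated_w):
    """Effective thermal resistance once the dissipated power is known."""
    return delta_t_k / p_dissipated_w

# Hypothetical measurement: 2.6 nm red-shift between low-duty-cycle pulsed and CW spectra.
dT = temperature_rise(980.0, 982.6)
print(f"Estimated active-region temperature rise: {dT:.0f} K")
print(f"Effective thermal resistance at 0.8 W dissipated: "
      f"{thermal_resistance(dT, 0.8):.0f} K/W")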
For the unbonded samples, measurements were conducted up to 220 mA, before catastrophic damage occurred at the emitting facets. A large temperature rise of more than 100 °C could be observed in the laser diode. For epi-side up and epi-side down bonding, the heat removal path from the laser diode to the heat sink reduced the temperature in the active region at 220 mA to an average of ~70 °C and ~40 °C, respectively. Hence, electrical-optical measurements at higher currents were possible for the epi-side up and epi-side down bonding approaches. Two other characteristics were observed. First, at low injection current, the temperature rise for epi-side up bonded and unbonded laser diodes was higher than for epi-side down bonded laser diodes. The heat generated in the active region could not be removed effectively in the unbonded and epi-side up bonded samples and, hence, the temperature in the active region was larger than in the epi-side down bonded laser diodes. Second, as the injection current increased, an exponential increase in device heating could be observed. At high injection current, an additional heating source due to the series resistance of the laser diode becomes apparent. This behaviour suggested that Joule heating was the dominant heating mechanism under high continuous-wave operating conditions.

Fig. 13. Heating response of unbonded and bonded laser diodes under high pulse and continuous-wave operation. Joule heating could be observed at high continuous-wave injection current.

Thermal resistance calculation for laser diodes

Similar to interface engineering, thermal resistance calculation is a common practice to evaluate the thermal behaviour of semiconductor lasers. Typically, the thermal resistance is defined as the ratio of the temperature rise in the device to the input power,

R laser diode = ΔT / (I·V)                (3)

where ΔT is the average temperature rise in the active region for a given injected power. However, the calculation of R laser diode is not as straightforward as this. Equation 3 is valid only for heat generated well below the threshold current, since there most of the electrical input is converted into heat energy. As the current increases towards the threshold current, photon emission becomes more apparent (see Fig. 14). The incremental electrical input is now converted into both heat energy and coherent radiation, which can be extracted from the laser. The exponential temperature dependence of the threshold current and the relative importance of Ohmic heating at high injection current further complicate the thermal analysis, as the temperature rise in the active region may not be proportional to the input power. Hence, when measurements are conducted near the lasing threshold, the emitted optical power must be corrected for in order to obtain the correct heat generation rate. Consideration of the heat generation in the active region alone is insufficient to deduce R laser diode at high injection current. Other heating sources may also surface: radiative absorption by free carriers and the series resistance of the diode. Firstly, the rate of photon absorption differs at different current densities. At high injection current, thermal rollover exists due to increased photon absorption. The heat generated in the active region is significantly large, and the effective heat generation rate is therefore

ΔP = (1 − η)·I·V

where η is the differential quantum efficiency of the diode, extracted from the L-I curve.
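A small numerical sketch of the correction just described: below threshold the whole electrical input is treated as heat, while above threshold only the fraction (1 − η) of the input power heats the junction. The current, voltage, efficiency and temperature-rise values below are hypothetical illustration numbers, not measurements from this study.

# Thermal resistance of a laser diode with and without the optical-power correction.
# All numerical inputs are hypothetical, for illustration only.

def r_thermal_uncorrected(delta_t_k, current_a, voltage_v):
    """Eq. (3)-style estimate: all electrical input treated as heat (valid well below threshold)."""
    return delta_t_k / (current_a * voltage_v)

def r_thermal_corrected(delta_t_k, current_a, voltage_v, eta):
    """Above threshold: only the fraction (1 - eta) of the input power heats the junction."""
    heat_w = (1.0 - eta) * current_a * voltage_v
    return delta_t_k / heat_w

I, V = 0.220, 1.6          # 220 mA drive at an assumed forward voltage of 1.6 V
eta = 0.45                 # assumed differential quantum efficiency from the L-I curve
dT = 14.0                  # assumed measured temperature rise in kelvin

print(f"Uncorrected R = {r_thermal_uncorrected(dT, I, V):.0f} K/W")
print(f"Corrected   R = {r_thermal_corrected(dT, I, V, eta):.0f} K/W")

Ignoring the emitted optical power overstates the heat load and therefore understates the thermal resistance, which is why the correction matters near and above threshold.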
Following Eqs. (2) and (3), to account for the optical absorption, the thermal resistance becomes

R laser diode = ΔT / ((1 − η)·I·V)

In addition to the heat generated at the junction, Joule heating due to the series resistance R_S may also be present. At low injection current, R_S can be neglected since IV >> I²R_S. However, as the injection current increases and I²R_S is no longer negligible, Joule heating becomes apparent, as it increases with the square of the current. Hence, the thermal resistance of the laser diode is expressed as

R laser diode = ΔT / ((1 − η)·I·V + I²·R_S)

As shown in Fig. 15, the thermal resistance of the laser diode differs below and above the lasing threshold. A large R laser diode of as much as 1000 °C/W could be observed below the threshold current, and it dropped abruptly to 200 °C/W as the current approached its lasing threshold value. Below the lasing threshold, the thermal resistance decreased approximately linearly at 130-150 °C/W per mA for all heat sink temperatures, and it remained relatively constant thereafter. The stabilized thermal resistance is termed the 'effective' thermal resistance. The change of thermal resistance was induced by the transfer of non-radiative energy (non-stimulated emission) into radiative emission by free carriers, as discussed earlier. This shows that, at low injection current, the temperature rise is dominated by non-radiative recombination. As shown in Fig. 16, the effective thermal resistance of the laser diode package is reduced after bonding, with the lowest value achieved using the epi-side down bonding approach. From an initial 'effective' thermal resistance of ~200 °C/W for the unbonded laser diodes, the 'effective' thermal resistance dropped to ~100 °C/W and ~40 °C/W after epi-side up and epi-side down bonding, respectively. The reduction in thermal resistance led to the improved lasing performance shown in Fig. 11.

Conclusion

In this chapter, the challenges in high-power laser diode packaging were identified. Material-oriented problems concerning electrical, mechanical and thermal issues must be resolved, while design-oriented issues strive for ease of manufacture, rework and inspection. The attributes of various die-attachment techniques using different kinds of interface materials to overcome the thermal management issues in the packaging design were discussed. A well-controlled, void-free bonding interface is required to enable an effective heat dissipation path through the die-attachment. The heat dissipation capability has a strong relationship to the parametric performance of the laser diode. The epi-side down bonding approach has lower thermal resistance, resulting in a lower temperature rise in the active region and hence permitting higher optical output power compared to the epi-side up bonding approach. A preview of some of the state-of-the-art heat sink materials and various cooling methods was also given to expand the design flexibility available for future high-power applications.
As applications continue to demand higher optical output power and longer lifetime, thermo-mechanical stresses at the die-attachment interface pose a challenge in the laser diode package. To quantify the reliability of the laser diode package, one needs to consider not only the parametric performance of the laser diode device but also the integrity of the joint. Knowledge of the physical changes at the interface is crucial to understanding device performance and reliability. Three different solder systems - 63Pb37Sn, 3.5Ag96.5Sn and 80Au20Sn - were compared. A metallurgical bond is formed between the laser diode and the heat sink through the formation of IMCs at the interfaces during bonding. For the three solder systems, the total chemical driving force arises from the dissolution of Ni from the heat sink into the molten solder and from the interfacial reaction forming IMCs. The 63Pb37Sn and 3.5Ag96.5Sn solders exhibit large IMC growth in the solder joint, and the integrity of the solder joint degrades during aging. The mechanical strength of the solder joint weakens significantly, and a large amount of plastic deformation was observed in the solder joint during shear testing. Only the 80Au20Sn solder exhibited a stable microstructure with minimal interdiffusion at the interfaces, and the structural integrity of the joint was excellent. Hence, for a reliable assembly, 80Au20Sn solder is the preferred interface material to support high-power laser diode applications.

Fig. 1. Schematic diagram of the typical laser diode package and its associated thermal resistance.

Fig. 2. Comparison of different bonding configurations of ridge-waveguide laser diodes. (a) For an epi-side up bonded laser diode, the heat generated in the active region is ineffectively transferred through the substrate; (b) for an epi-side down bonded laser diode, the heat flux effectively reaches the heat sink within several microns.

Fig. 4. Interfacial reaction of the as-bonded laser diode package using (a) 63Pb37Sn, (b) 3.5Ag96.5Sn and (c) 80Au20Sn solder systems.

Fig. 5. Typical SEM micrographs and EDX mapping showing the development of intermetallic compound layers in a 3.5Ag96.5Sn soldered laser diode package as a result of solid-state aging at 150 °C at the (a) laser diode/solder and (c) solder/heat sink interface.

Fig. 7. Typical fracture surface examination of the as-bonded LD package for the three solder systems.
Fig. 8. Typical shear strength profile of the bonded laser diode. The mechanical strength of the 63Pb37Sn and 3.5Ag96.5Sn soldered laser diode packages reduced with aging.

Fig. 10. Cross-sectional examination of the fracture surface at the heat sink surface after 49 days of thermal aging. The thick IMC layers have the tendency to generate structural defects.

Fig. 11. Influence of bonding on the electrical-optical characteristics of the laser diode.

Fig. 12. Emission spectra of LDs as a function of pulse width and duty cycle. Transient heating response was observed from the spectrally resolved emission measurements.

Fig. 14. Temperature rise and quantum efficiency of the laser diode at different operating temperatures.

Fig. 15. Thermal resistance of the laser diode as a function of current. The effective thermal resistance of the diode varies with current.

Table 2. Comparison of various solder materials used in a butterfly laser diode package and their physical properties [32].
“Messages of Love from Maoriland”: A. D. Willis’s New Zealand Christmas Cards and Booklets 1883-1893

I have previously explored the beginnings of the New Zealand Christmas card prior to 1883, and the ways that the designers of these cards negotiated the colonial experience of a summer Christmas.1 This paper examines the development, over the decade following 1883, of the chromolithographic work of A. D. Willis, whose production not only continued the work of creating a niche for New Zealand Christmas cards, but also tried to compete with the large overseas ‘art publishers’ who were flooding the New Zealand market with northern hemisphere iconography. Willis’s Christmas cards are frequently used to illustrate books looking at the 1880s, but there has been no detailed study done of them. The paper therefore documents the cards, their production and reception, explores how they record Willis’s understanding of the art publishing business and the market he was working into, and situates them in relation to broader print culture. Understanding this overlooked chapter in ‘commercial art’ provides useful evidence of the murky interplay between the local, national and transnational identities that marked New Zealand cultural production when artists and designers sought to capture the public’s Yuletide sentiments. Willis’s work also displays two very distinct conceptions of how to represent what was increasingly known as ‘Maoriland’ to an overseas market – one focused on the land, and the other on Māori. As such, these cards act as a weathervane for what the New Zealand public accepted as New Zealand, artistic and appropriate as a Christmas gift.

On a cold July night in 1885, members of the Auckland Society of Arts gathered for their annual general meeting.
According to the New Zealand Herald reporter, the members were gratified by the society's 'progress,' and the first example given of this progress was the success of the Society's inaugural Christmas card competition. 2 It had been won by "an unassuming young man, holding a subordinate position in an Auckland warehouse." 3 This win marked the first New Zealand highlight in Frank Wright's distinguished career as a landscape artist. However, while his 1884 success excited comment in press at the time, it was quietly culled from the timeline in Wright's 1954 retrospective at the Auckland City Art Gallery. 4 This omission says much about the fate of the Christmas card genre itself. Through the course of the twentieth century, the Victorian recognition of greeting card designing as a valid artistic pursuit was eroded, and the card found itself instead consigned to a quiet, chocolate-boxy cul-de-sac near the intersection between art and design -where it has stayed. Given such current estimation, it is not surprising that a book like Roger Blackley's Galleries of Maoriland, which delves extensively into the art world of the period, should not engage with either the Christmas card, its artists, designers or publishers. 5 While sending Christmas cards may appear now to be a fixed and inevitable practice, in the 1880s the format appropriate for Christmas exchange was still contested, and cards were relatively expensive gifts in their own right. After the Dickens-led revival of Christmas in the 1840's, many Christmas customs had evolved quickly, but card sending was not one of them. Books were initially a more common bearer of Christmas cheer. 6 Dunedin bookseller Joseph Braithwaite's 1880 publication of Vincent Pyke and Frances Ellen Talbot's White Hood and Blue Cap: A Christmas Bough with Two Branches is one local example of the continuing tradition of exchanging books "designed for holiday reading" at Christmas. 7 Braithwaite's 'shilling shocker' cost the same as the first photographic Christmas cards that began to be published in Dunedin that year by the Burton Brothers, in association with J. Wilkie & Co. 8 To justify what might appear a considerable expense in buying a card versus a novel, consumers needed a reason to value them. In this case, the mechanisms for creating this value were a demand for the local and for novelty. Christmas cards had only hit craze proportions in Britain in the late 1870's, but the buzz around them quickly translated into the volume of imports of primarily chromolithographic British cards. 9 New Zealand photographers like the Burtons and Wilkie were the first to respond to this new market opportunity, with a number of producers, particularly around Dunedin, offering cards. And despite most photographers who created cards catering for defined regional markets, Nathaniel Leves' 1882 "Comet Card" demonstrated that a locally-produced Christmas card could command national admiration. 10 In Whanganui, Archibald Duddingston Willis clearly decided it was possible to replicate Leves' photographic success in the more demanding category of chromolithography. 11 A bookseller, printer, entrepreneur and soon-to-be politician, Willis had arrived in New Zealand as a 15 year old, having earlier been apprenticed to the prominent British printers, Eyre and Spottiswoode. 12 After working all around New Zealand as a printer on a variety of newspapers, he became a partner, with John Ballance, on the Wanganui Herald before setting up a stationery shop and printing works in 1872. 
13 He had only dipped his toes into the Christmas publishing trade before 1883. 14 However, beginning that year, he used Christmas cards to promote his chromolithographic business, publishing cards for nine of the next ten years and gaining a nation-wide reputation for the quality of his work. Two sample books of his cards in the Turnbull collection collectively cover his production until 1888, and Te Papa's collection contains many of his post 1891 cards. Until now, however, the dating of these cards has been largely speculative. Rosslyn Johnson, in her monumental thesis on New Zealand's colour printing, did a case study on Willis and documented some dated adverts for his cards, 15 but she did not look at Christmas cards in great detail, and did not have access to Papers Past. Here I have used advertisements and editorial comment from Papers Past to date all cards conclusively (see appendix). 16 This means, in turn, that it is possible to trace the development of Willis's design approaches through the period. During 1882, Willis had brought out a set of floral Christmas cards hand-painted by "a Wanganui Lady." 17 In early 1883, he followed this by producing another line of cards, hand-painted by a number of local artists onto 'gelatine' (celluloid) and inset into a printed mount. 18 He claimed to have over 150 designs, comprising a mix of images of birds, flowers and local scenes. 19 These could be mounted as both Birthday and Gift cards (figure 1). One of the Turnbull sample books, which was likely assembled in mid 1885, 20 contains sixty-three different native bird images from this series, displayed as birthday cards. Willis initially seems to have encouraged postal ordering, 21 so these designs could be created according to demand. Given their prominence in the sample book, they must have continued to be popular for several years. As the Wanganui Herald noted, "besides creating a large local demand," through the production of these and the 1882 Christmas cards, "the nucleus of a trade was formed with the leading towns of New Zealand and Australia." 22 In 1883, chromolithography seems to have been in the New Zealand air. As a technology it was promoted as bringing art to the masses, and there was an evident demand for colour. 23 In Auckland, Upton & Co. published six chromolithographic cards of New Zealand plants, designed by Miss Eames. 24 These were almost certainly printed overseas -as the vast majority of New Zealand coloured work continued to be. Willis, however, recognised the need for a local alternative. Being a bookseller, he sold chromolithographic items from the large firms in Britain, the US and Germany, 25 and he would have been aware of Australian chromolithographic cards. 26 He therefore understood the market, and knew that it would accept images of the natural world as appropriate for a Christmas card. 27 According to an advert that he inserted into Freethought Review, for his new cards, he contracted a British artist to do a set of views and native plants. 28 The advert focused strongly on the "views of special interest….that are so well worth preservation," but also talks about the "charm of foliage and flower." His motivation for publishing the cards, he said, was to "supply the special want long felt by residents of the Colony who may wish to send their friends on the other side of the world pictorial illustrations which …. convey an adequate idea and a tasteful realisation of the land we live in." 
29 The views he chose, however, were not focused on the inhabited colony. Rather, they showed highlights of natural beauty: three volcanoes, two lakes and Mitre Peak (figure 4). This focus on uninhabited iconic views is typical of the strategies of cultural colonisation that Peter Gibbons says allowed settlers to imaginatively possess the land. 30 And the choice was probably also influenced by the current fascination with the picturesque, a mode that, in applying familiar conventions to unfamiliar subject matter, allowed the new and the strange to be encountered safely. 31 Flowers were also a default Christmas iconography, 22 and Willis must have believed that there would be a greater demand for the plants than the views, since he published 14 of the former (figure 2) and 6 of the latter (figure 3). As Patricia Zakreski points out (using a northern hemisphere analogy) Christmas cards were perennial money-spinners precisely because, like flowers, they worked on an annual cycle, dying off and needing replenishing, again providing novelty within a comfortingly familiar context. 33 Certainty of demand would have been imperative for Willis as he planned this ambitious venture. Chromolithography was a demanding form of printing, involving building up the image on 12 separate lithographic stones. Whereas photography was comparatively easy to embark on, chromolithography demanded a much greater investment in equipment, and needed skilled labour. 34 The Wanganui Herald pointed out the considerable difficulties and expense involved. Putting in the plant had meant enlarging the premises, ordering new machines, installing electric lighting ("the first on the coast") and increasing the staff. 35 Nevertheless, it noted that Willis had the "advantage of having the services of a careful artist in Mr Potts, who closely studies each subject to which he sets his pen. The floral representations bear ample evidence of this." 36 William Potts had recently arrived from England. 37 He was highly skilled, and his art pleased not only the Herald but its competitor the Wanganui Chronicle, which put aside any differences in a show of local pride. It praised Potts and noted that the quality of the work was so high that "these products of a Wanganui house …. should hold their own against any of the coloured cards, prepared by Home or Australian firms." 38 The Herald a few days later compared Willis' cards to another set depicting New Zealand plants that had just been produced by the famous British firm Marcus Ward and Co., saying that "the comparison is most decidedly in favour of the Wanganui article, in every respect -in fidelity to nature as well as artistic effect." 39 Outside the bubble of Whanganui, the reception was more mixed. Like overseas counterparts, Willis used the tactic of sending samples to newspapers. 40 In Christchurch, the Star relegated discussion of local cards to the end of an article on what was available, and concluded, after quoting Willis' claims about the cards being the equal of overseas companies, that despite "some surprisingly good results… the claim has yet to be made good." 41 The Grey River Argus felt that although "scarcely up to Home work for depth and brightness of colouring… they are really very good for all that," seeing them as "a very promising beginning for the colony in this line of art."
42 This view was shared by the Otago Daily Times which, having earlier said that Willis's work was "creditable, though plain as regards finish," 43 wrote an article on Willis's chromolithographic venture, noting the difficulties involved, the advantages of producing work locally rather than having the cost of imports, and praising the work as bearing comparison with imported cards. 44 The most positive response was from the ODT's Dunedin competitor, the Evening Star, which felt that the samples sent to the paper did indeed fulfil Willis's claims. 45

[Figure caption] E-936-f-024-3. The image has echoes of Charles Heaphy's iconic picture, however Potts has here shown a less tamed vista, and has exaggerated the mountain less -though the clouds disguise the second vent and make the mountain seem more symmetrical. Note that the greeting is visualised as a 3D object, complete with reflections in the water -a technique adapted from an 1882 Australian card by John Sands (reproduced in Hancox, 2008 p.15).

At a shilling for the larger cards (the earlier handpainted birthday cards were, by comparison, 1s 3d and 1s 6d), the cards did not sell out immediately, though the following year Willis claimed that 3000 copies had been sold in Whanganui alone. 46 Despite the marketing, Willis was still advertising "a large stock….on hand" in mid-December. 47 Willis's stated reasons for publishing these cards, and the responses to them, seem fairly straightforward (i.e. it was generally understood that they served to help settlers share the sights of their new land with friends and relatives at 'home'). Nevertheless, greetings cards are more theoretically complex than they might initially appear. Functionally, they are produced for a market of people who buy them as gifts and mementos to send to others, who in turn may discard them, or store them in an album that can be privately reminisced over, or publicly displayed for others. To add to that, they are normally a mix of image and text, and relate to a Christian festival. These factors collectively create a host of thorny issues before ever one begins to talk about the subject matter and its artistic treatment, both of which introduce another range of formal issues, social contexts and power relations into the mix. For example, Willis's designs showed either images of plants or views of landscapes -genres that can be seen as fitting into an interpretative framework which Judith Adler sees as the scientific tourist's "impartial survey of all creation," 48 Tony Hughes-D'Aeth terms the "world-as-gallery," 49 and Susan Stewart calls the collecting of "souvenirs of external sights." 50 However, while the cards might perhaps have ended up being used by some as picturesque tourist collectibles to feed the nostalgic narratives of the collector, 51 greetings cards typically operate more as gift than souvenir. They thus function differently to other mass-produced items which were created to be used and disposed of. 52 Instead, greetings cards are better understood as social and celebratory items that were probably placed in an album with the future-focused intent of displaying them to friends. 53 This sense of feeling with others via printed ephemera is, according to Susan Zieger, an inherent quality of mass print culture. 54 Seen in this light, while the act of sending images of their new environment cannot escape the broader charge of cultural colonisation, 55 it also bespeaks the emotional investment of the senders.
Ironically, this sharing and celebrating of a collective investment in the local and national, serves to reinforce the transnational nature of card practice when the cards were sent to the intended recipients overseas. In 1883, consumers choosing cards to send across the world had been given a choice as to whether to continue the Christmas tradition of sending images of flowers, or whether to send miniature Romantic landscapes. In 1884, Willis produced twenty-three new view cards and only one of plants. The consumer had clearly spoken. Willis had also evidently realised that having two thirds of his cards showing North Island views was a demographic and marketing mistake. If the earlier cards were constrained by scenes that the artist had managed to visit, the new cards appear to be the beginnings of a project to construct a record of New Zealand as a scenic wonderland. Over the preceding decade, tourism had been expanding, 56 and tourist views generally concentrated on sublime, untamed nature. 57 Willis's cards reflected this, and eighteen of the new cards showed the South Island: the tourist hotspots of Queenstown and Fiordland garnering eight, two showing Dunedin, and the rest spread around the main centres and iconic attractions like Mount Cook. In the North Island, he concentrated on Wellington, Auckland, Thames and the alluring beauties of Taupo and the Bay of Islands. This coverage was praised by the Evening Star which took credit for having recommended such subjects, predicted good sales and said they were "by far and away the best samples of chromolithography yet turned out in the colony." 58 Willis must have taken note of some of the other critiques of the cards, because he advertised in Freethought Review that he had made many changes while in the process of reprinting copies of the previous year's cards. The earlier of the two Turnbull sample books contains repeats of two images (Lake Rotorua and Mitre Peak) presumably included so that the salesman could show the improvements. 59 These primarily relate to the rocks and the definition of the mountain ( Figure 4), but Willis evidently did not intend to simply repeat the previous year's approach with his new images. A border was introduced around the image of all, allowing newer images to be distinguished from the discounted earlier stock, and to create unity as a series. It is also likely that he started using photographs as the basis for the images. With 288 separations to prepare for the 24 cards (not to mention doing the food labels and ball programmes that were also being produced at the works), 60 Mr Potts the lithographer would have been too busy to travel the length of the country in search of new scenes. And using photographs, worked up by Potts, would explain why Robert Coupland Harding, in gifting his Willis sample book to the Turnbull in 1911, noted that his cousin (Lydia Harding -later Mrs Swain) had been responsible for the designs of the floral borders, but does not mention her doing the scenes themselves. 61 For his third series, in 1885, Willis reduced the number of new cards to twelve (eight South Island and just four North Island views) but began to experiment with more visually challenging approaches that reference both the scrapbook (the most likely display venue for cards) and the 1880s photograph album -the latter typically with its square or oval vignettes surrounded by printed floral designs. 
The circular tondo in the image of Stewart Island (figure 6) could also reference either the telescope or a porthole. 64 However, scrapbook allusions are more prevalent, as with a card of Lyttelton Harbour, where a trompe l'oeil effect is utilised to make it appear that the image has come away from the page in one corner, revealing a greeting message behind it. Collecting coloured cards in a scrapbook can be framed as creating a compressed sense of abundance. 65 Similarly, images in the 1880s periodical press were also trying to become more abundant, 66 drawing together multiple scenes into a single montage. The covers of the Illustrated London News between the years 1880-1885 demonstrate this progression. At the start of the period barely any included vignettes. By the end, these are frequent. Tony Hughes-d'Aeth sees this as typical of both the British and American press at the time, 67 and Willis's artistic team was certainly picking up on such effects.

Lydia Anne Harding only turned twenty in 1885, but her influence can be seen in that the borders around the cards became progressively more complex and interesting from then onwards.

The bulk of the 1884 cards continued the 1883 approach of being a miniature painting with a textual greeting written gratuitously over the background. Integrating text and image was always a challenge for greeting card designers, who could not use the illustrated press, which largely kept text and image separate, for inspiration. Earlier designers tended to imagine the text as being on a piece of card, with plants surrounding it (see Figure 9). For them, the greeting was the primary object. By the 1880's, however, the image was starting to overpower the text, and in Willis's cards the image is dominant. Yet even during 1884 there were some signs of experimentation with this relationship. The image of Timaru Breakwater had the greetings message carried in the beak of a flying bird. In the Tararu Creek image (figure 5) the greeting was written on a painting on an artist's easel, and this idea of having a painting within the card was extended when, in the Breaksea Sound card, the view was included as if it were on a large painting or placard, which also carries the greeting. Here the message is hammered home that the scene is, literally, 'picturesque.' The image-within-an-image is held by a Māori figure who blends into a staged landscape of birds, flax and cabbage trees, creating a lavish and realistic border which, typically for much picturesque colonial imagery, treats the country's space, landscape and indigenous people as part of a single discourse. 62

Having, in 1884, introduced the idea of an image within an image, they explored it further in 1885. In one card, a page, apparently from a photograph album, displays a mockingly full-colour image of Lake Wakatipu, which leans against a Nikau palm (see figure 6). In another, a small girl holds a sizeable painting of Docharty's Bay, Dusky Sound with the message "A Merry Christmas" in a landscape of trees and flax plants (figure 6 below). This format helps make the view appear more obviously artistic and gives it a greater sense of scale than it otherwise would have. And the girl holding the painting plays to the Victorian fascination with childhood, framing a colonial child within an arcadian landscape. This parallels how the Breaksea Sound image (figure 5) showed Māori in a similar role, probably with the expectation that this reference to the exotic could help make a connection with an overseas audience.
68 And a Child, dressed for summer, strange plants and a strange people all helped evoke a strange and wonderful Christmas for people shivering in the northern hemisphere. 69 Not all the 1885 cards were as formally inventive, with a mix of approaches occurring. Whilst all had a border around the edge of the card (rather than around the image), some had a much thicker, ornate (and less successful) treatment. The same applies to the 1886 cards. One of the North Shore was almost a conventional view, but still used the illusion of a corner of the scene being tucked over, thus hinting that the receiver should place it in a scrapbook. The most album-like card is the scene of Mount Arthur, Nelson (figure 7), but whereas a photograph inserted in an album would be separated from the surround, here the image is treated like a window or arch through which the imagery spills forward onto the page. This effect links the image to the viewer's reality, creating a consensual space between viewer and viewed, and one that emphasises the depth of the framed image. 70 Other cards, like that of the Grahamstown Goldfield treat the central image like a discrete picture, with foliage sitting behind it -though this coherence is subverted by a portion of the image leaking gold-laden earth, implying that it is too bounteous to be contained by the frame. Here two spatial conventions co-exist playfully, but the effect is primarily scrapbook-like, with the image treated as a precious exhibit. The Manawatu Gorge card makes this connotation even more obvious. 71 If the early images had been miniature paintings, and the 1886 images had experimented, 1887 was the point at which this more lavish style came to fruition. The idea of a card within a card was expanded upon, but instead of using a wide variety of formats, the 1887 cards chose to expand on the Grahamstown and Manawatu Gorge approaches. The latter, in particular, led into some of Willis's most spectacular card designs (figure 8). Here the design unifies the card with the illusion of two separate images (one picture-like, one giving telescopic detail) floating within a floral nest. It is an arrangement that fragments in order to create a more abundant and unified sense of the subject's reality. 72 Using trompe l'oeil to show objects arranged on a card, was labelled a Quodlibet by George Buday. 73 Roger Blackley shows that this term, deriving from 17th century Dutch 'hotchpotch' pictures is better applied to more purely trompe l'oeil effects. 74 However one describes it, the multi-view format was increasingly used through the 1880s. A selection of overseas cards (figure 10) demonstrate how framing changed over the preceding decade, and how cards moved towards demonstrating greater abundance and complexity in the imagery while diminishing the importance of the greeting text. What is also clear, looking at the later international cards is that, despite Willis's work improving, it was difficult to compete with the high quality of chromolithographic printing coming from some of the British and German houses. This would prove the hardest issue for Willis to conquer, and by 1887 his adverts took on a slightly plaintive quality when he felt it necessary to remind the public to "support local industry." 75 The problem was not his marketing. He continued to send cards to newspapers -though with less frequent responses -but primarily he seems to have travelled himself to promote sales. 
76 He also had a clear sales pitch: "What is the use of sending English cards back to England. Send N.Z. Christmas Cards home." 77 And the work, although reviews had tailed off, had been generally well received. In 1888, the New Zealand Times described Willis's offerings that year as "gems in their way," 78 while the Wanganui Chronicle congratulated him on "their success as works of art." 79 Robert Coupland Harding described that year's cards as "exceedingly good." 80 Nevertheless, Harding's later description of Willis's sample book as showing the "first crude attempts" at New Zealand Christmas cards, has some validity in terms of the printing, if not the designs. 81 A mix of dynamic design and underwhelming finish is evident in the cards that elicited such positive responses, Willis's final two coloured views (Figure 10). They used designs by Margaret Olrog Stoddart, 82 then a high-profile graduating student at the Canterbury College School of Art. 83 The effect of the surround is lavish, but Potts' transfer lithographic printing, compared to top European work, is slightly heavy-handed. These two cards are something of an oddity, relative to the rest. They don't claim to show specific places, but rather are generic scenes. And this was the first year since 1883 that Willis did not publish twelve or more cards. The reasons for this are, I think, two-fold. Firstly, if one plots his images across the country, by 1887 he had ticked off most areas bar the predominantly Māori regions of the King Country, Northland and the East Coast. There were eight scenes around Queenstown, six each around Rotorua and Auckland, four each for Milford and Dusky Sounds, as well as Taupo and Thames, three around Christchurch, two each for Wellington, Dunedin and New Plymouth and a single image for most other centres. He therefore now had a catalogue that could be reprinted, and by the start of 1888 he perhaps felt no real need to add to it. However, Margaret Stoddart's designs (which she may well have submitted to Willis speculatively in order to have a real printed outcome in her graduating show), 85 offered something Willis clearly desired: recognition for the artistic quality of the work. And it would be this that saw him concentrate his 1889 production on his magnum opus New Zealand Illustrated rather than doing Christmas cards. 1889 was therefore the first year since 1882 when Willis produced no Christmas cards. This hiatus must have allowed him to do some thinking about the competition for the Christmas market. Overseas firms like Raphael Tuck could market 3000 different designs a year. 86 Novelties such as cards in strange shapes, folding cards in gold leaf, and handpainted cards on porcelain were coming from all the big overseas firms, and an article in the Press, after discussing these and many more, then went on to talk about "ordinary chromolithographed cards." 87 Although the Christmas card craze was still at its peak, the public was evidently being seduced by imported choice, while coloured lithographs of the sort Willis had been doing were losing their cachet. Furthermore, there was another area of competition emerging. If, at the start of the 1880's, cards had cut into the Christmas book market, by the late 1880s there was a serious publishing response. In 1887, John Watt, a bookseller in Willis Street, Wellington, announced "The Book of the Season," noting that this "little work makes a good substitute for either Birthday, Christmas or New Year cards."
88 The work was a 32-page booklet called As Time Glides On. Written by George Thompson Hutchinson, costing 1s 3d, and published by Hodders, it consisted of "the months in Picture and Poem," and, according to Watt's advert, had 60,000 copies pre-ordered. This was, in fact, the tip of an iceberg that had been some years coming. Although the lavish £1 Christmas gift books of poems and pictures had effectively died out by the late 1870's, 89 (roughly in tandem with the rise of the Christmas cards), the genre seems to have risen from the ashes in a cheaper guise. Booklets of 16 or 32 pages, targeted to the Christmas market, had been available in the US from around 1882, and appeared in Britain by 1885. 90 And publishing houses made a point of advertising that booklets could be sent instead of cards. However, the game-changer came in 1888 when the large British Christmas Card publishers Raphael Tuck and Hildesheimer & Faulkner replied to this threat by making their own booklets. 91 An example of these is The Jackdaw of Rheims (figure 11), which is printed in a mix of black and white illustrations and two or three-coloured sepia images. Figure 11: Cover and spread from Raphael Tuck's 1889 booklet The Jackdaw of Rheims. Author's collection. It was reviewed in the press as a Christmas card, despite being 32 pages long, and advertised as the "booklet of the season." The cover was printed chromolithographically using six colours. The left page was printed in three tones of brown, while the right-hand image was printed letterpress, using wood-engraved illustrations. With so many fewer stones to use, such booklets could be produced at not vastly more than a 12-colour chromolithographic card. Abruptly, Christmas card advertising became 'Christmas Card and Booklet' advertising. As both a publisher and a bookseller, A. D. Willis would have noticed this change, and there was other food for thought. The New Zealand and South Seas Exhibition, for which New Zealand Illustrated was produced and which Willis would have attended, may have provided him with ideas for a change of direction. The exhibition articulated a response to the crisis of national identity during the 1880's that had seen leading intellectuals, as James Belich puts it, "forging a picture of the Maori past for Pakeha ideological purposes." 92 The exhibition contained an extensive section relating to the "anthropology of the aborigines of the colony." 93 This included objects like the 300-year-old flute played by Tūtānekai to Hinemoa. 94 The effect on non-intellectuals interested in New Zealand's national identity must have been significant. The Auckland Weekly News published a chromolithograph entitled "The Advent of the Maori: Christmas A.D. 1000" in its 1889 Christmas supplement. 95 Willis, ever sensitive to a trend, debuted in 1890 (the 50th year of New Zealand's colonisation) with a very different set of offerings. The new series of Christmas cards used the cheaper-to-produce, but artistic, sepia. 96 The plants, frames and views remained, to remind one of the earlier cards, but it is culture instead of nature that takes centre stage. New Zealandness had expanded from natural environments (with the occasional building or town scene) to exploring what made New Zealand unique.
Jock Phillips sees the "view that any distinctive New Zealand cultural identity could be based in some way upon the Maori," as typical of the 'Maoriland' thinking in the 1890s, 97 and Roger Blackley's book shows how this focus came to affect many aspects of cultural production. 98 This approach was not, however, unique. Nationalism, in many parts of the world, was looking to earlier cultures and myths. 99 Certainly, New Zealand life, according to the following write-up in the Auckland Star equated to Māori life. The approach of Christmas is already heralded by displays of Christmas cards in the shop windows. Among the New Zealand designers of these popular tokens of goodwill, Mr A. D. Willis, lithographer of Wanganui, has taken the lead, and his designs for this season are again of a novel and attractive character, and are intended to illustrate interesting phases of New Zealand life. ... The views comprise : A Maori Canoe (Waka) Race, A Maori Speech (korero), Maori Going to Market in Canoe, The Home of the Maori (kainga), Maoriland, The Old Style and the New, A Family Repast (kai). ….Friends abroad will undoubtedly value much more highly cards of New Zealand design than English cards, however beautiful, and the novelty of such missives from Maoriland will cause them to be specially prized among the gifts of the season. Talking about "interesting phases in New Zealand life," calling New Zealand 'Maoriland,' and both Willis and the Star including the original Māori words in the titles -all acknowledged that Māori culture was fundamental to an understanding of New Zealand identity. And the cards were well received, with the Taranaki Herald saying Willis had "quite outshone his former efforts" with their "natural and artistic manner." 100 Apart from figure 12, images of the 1890 cards have proven particularly difficult to locate, but one that appeared on Trademe is worth mention. 101 Maoriland, the Old Style and the New shows two elderly Māori, one in traditional garb with a whare in the background. The other is dressed in European clothes, with a wooden house and fence behind. There is no caricature of 'the old' but the implication is clearly one of progress bringing assimilation. And the card's design also, subtly, gives a similar message relating to the Christmas card format which, by 1890, was appearing old. Trompe l'oeil is used in the card to make the image appear like it is the cover of a 16-page booklet. Willis was the first New Zealand publisher to appreciate and respond to the booklet trend and, alongside his 1890 Christmas cards he published a 2s 6d booklet called Hinemoa, which was described as "the first booklet ever produced in New Zealand." 102 This little Booklet is issued in the hope that it will be a change from a mere Christmas card, as a memento to send to European friends, to whom it may convey an idea of the scenery and lands by which we in New Zealand are connected, but which they may not have an opportunity to behold. 103 Much the same point is made in the Wanganui Herald's advance notice of Hinemoa. The paper said that "the get up will be exactly on the lines of the booklets we have been so familiar with of late, and which have to a very great extent taken the place of Xmas cards, etc." 104 The industry, it is made clear, was evolving, and Willis's production was keeping pace. And, after the 1888 experiment with Margaret Olrog Stoddart, Willis involved more high-profile artists. 
Lydia Harding made way for well-known local painter, George Sherriff, while the booklet's poet, Eleanor Montgomery, was someone who Willis had previously published. 105 The Arawa story of Hinemoa came from Sir George Grey's Polynesian Mythology, which had been republished in 1885. 106 Grey's translation was printed on the inside covers, while the illustrated poem filled the remaining twelve pages. The story of Hinemoa and Tūtānekai was a prime candidate for what Blackley identifies as Maoriland cultural appropriation. 107 It enjoyed huge popularity at the time, and there would be at least twelve 1890's poetic versions, 108 not to mention music, brochures, and later film. 109 Peter Gibbons sees such work as part of a process whereby settlers constructed a manageable version of 'the Maori.' 110 Len Bell similarly talks about the art of this period as being concerned with "drama, anecdote and fiction. Myths… being created about Maori by Europeans for Europeans." 111 Montgomery was clearly working in this manner, picking up on the poet who most influenced this particular genre -Longfellow, whose Hiawatha functioned similarly. 112 While Montgomery was always acknowledged in her booklets, George Sherriff was only ever known by his initials, and did not sign the 1890 cards that he is generally credited with. Nevertheless, Sherriff was named as Hinemoa's illustrator in the abovementioned Wanganui Herald article. He was also quick to correct what he saw as an error in its reportage. The paper had said that in Hinemoa "the German style has been closely followed by Mr G. Sherriff, the artist, and Mr W. Potts the lithographer." 113 Sherriff wrote : You remark that I am producing the illustrations to Hinemoa after the German style. My illustrations are produced in the usual manner in sepia, and are thoroughly original in every way. It is the lithographer, Mr Potts, who is attempting to reproduce them after the modern German illustrated booklet form. I trust you will kindly excuse me for correcting the error, as the article reads as though I was copying from the German. 114 To Sherriff, the booklet was a German form -or at least was coming from German publishers based or operating in Britain, like Tuck, Hildesheimer and Ernest Nister. However, a few days later William Potts wrote in to correct perceived errors in Sherriff's response: In the first place I am a lithographic artist and not a lithographer. I am producing for Mr Willis exact reproductions of Mr Sherriff's illustrations to Hinemoa in the manner I reproduce my own drawings and not in any German or other style. As Mr Sherriff's drawings are original, and I reproduce them, I cannot see what Germany has to do with them. 115 Apart from providing useful clues as to the booklet form, and exemplifying the status invested in the terms 'artist,' 'lithographer" and 'original', 116 this exchange (which must have resulted in some frosty relations at Willis's works) demonstrates the seriousness with which both Sherriff and Potts undertook this work. It is easy, with hindsight, to see Hinemoa as typical of a 'Maoriland' approach, but most of the work that forms this corpus occurred later. In 1890 Sherriff may have been justified in thinking that he was forging something original in New Zealand, though it was hardly "original in every way." The format was borrowed, and the typeface of a style that had been popular in the mid-1880's, 117 and was employed by Tuck in booklets like the 1888 Songs, Carols and Chimes. 
However, this melding of transnational form and local content appears to have been a success. Willis garnered more good press for his 1890 products than he had for any of his earlier attempts. The measure of this can be seen the following year, when Wildman & Lyell in Auckland were advertising works "by the eminent colour printer, Mr Willis of Wanganui." 118 The New Zealand Herald similarly wrote a piece that exemplifies Willis's growing reputation, and acknowledgement that his works were art. After introducing him as "that most industrious and persevering producer and disseminator of New Zealand art" it went on to say: "Year after year Mr. Willis has toiled on amidst many discouragements in his attempts to show the outside world that we can produce something besides frozen mutton and kauri gum, and we are pleased to hear that his efforts are at last beginning to be appreciated. The great success of last year's Christmas booklet -"Hinemoa" -has induced the publisher to follow on with something on the same lines, but the work of both the artist and poet, and last, but not least, the printer, is this year immeasurably superior." That follow-on, in 1891, was The Land of the Moa (figure 14). It is worth noting here that the booklets, like the Christmas cards, have been routinely misdated in libraries and collections. Fortunately, newspaper advertising and comment provides definitive dates, and hence we can also follow something of the development of these works. The title 'Land of the Moa' predates the booklet. The phrase was an alternative description for New Zealand, and was also the title of a 9x5' painting by George Sherriff, which had been exhibited at the Colonial and Indian Exhibition in London in 1886 (where it was the largest work displayed). 119 It was described in the New Zealand Herald's review as "a wild lake scene, surrounded by rugged, glacier-crowned mountains." 120 In 1890, this work was the main prize in Sherriff's Art Union Lottery, where it was valued at £70. 121 Given that Willis and Sherriff would have been planning Hinemoa at this point, it can hardly be a coincidence that Willis arranged for Sherriff to illustrate a booklet the following year with the same title as the painting. However, when completed, the booklet's treatment could hardly be more different. Whereas the painting was searching for the Romantic sublime, with a bleak and expansive aspect, Sherriff adapted the booklet to the Christmas card approach -realising that a popular audience needed more of a commercial sublime. 122 If the painting was full of empty space (a marker of the highbrow), the booklet was packed and abundant. The central image retains elements of the painting of Wakatipu, but a volcano has been added, along with a waterfall, Māori village and lush framing foliage. The aim is to orchestrate a quick emotional response to New Zealand in as many ways as possible. This agglomeration is slightly less marked in the somewhat disjointed images within the booklet, but what the pictures lack, the poet more than makes up for, with some heady hymns to native flora and fauna, which would have been remarkably difficult for non-locals, who could not distinguish one bird from another, to follow. The interesting thing about the advertising in 1891 is that, unlike with Hinemoa, the identity of the G. S. initials was barely acknowledged.
This may have been Sherriff's choice, but it may also be that Willis had simply purchased the artwork, as he did with Sherriff's painting of The Latest Scandal, which he then published as a lithograph. 123 He may therefore have felt no particular need to acknowledge the artist, preferring instead to promote his own Willis brand -much as overseas Art Publishers like Raphael Tuck routinely did. Certainly the Willis brand was blossoming. The Auckland Star had this to say: "While the foreign goods are so attractive, we in the colonies prefer in sending missives Home to send something characteristic of the land we live in. A demand sprang up years ago, and has been admirably met by Mr A. D. Willis, colour printer of Wanganui, whose publications incidental to the Christmas season have extended his reputation far and wide." 124 The paper went on to praise The Land of the Moa (without mentioning Sherriff) as "a charming publication….the work compares very favourably with that done either in America or Europe," and to say of the new series of Christmas cards that "they are really charming, and in point of artistic merit even superior to those of last year." 125 The cards' subject matter in part continued the approach of the previous year, with educational images of Māori life with their Māori names (figure 15), but there were also new, primarily comic, elements. Thoughts of Christmas depicts a smiling Māori woman with an unsuspecting pig. A Christmas Greeting in Maoriland has an amusing encounter between playful Māori and put-upon Pākehā. These have something of the humour of Sherriff's The Latest Scandal (which showed a group of Māori women laughing about the latest gossip), but here it is more at the expense of Europeans than Māori, who are shown as completely at home in their environment. Whether Māori would have found these cards as entertaining as Pākehā is unclear, as there is no evidence directly related to Māori consumption of any of the cards. But, as Blackley has shown, Māori did consume art on their own terms, 126 and there is evidence that The Latest Scandal, at least, amused. "A crowd of laughing Maoris" was reported as gathering around a Napier shop window where the title of the displayed print had been translated, 127 while in Auckland a similar group responded to the picture with "great gusto." 128 By 1892 Sherriff was probably knee-deep in Mt Somers stone dust, carving the Lion Monument for Whanganui's Queen's Gardens War Memorial, but he still undertook another commission from Willis. This was the booklet Tiki's Trip to Town (figure 16), a largely pictorial piece with a text by James Duigan (editor of the Willis-friendly Wanganui Herald). Smaller and cheaper (at 1s 3d) than the other booklets, it came the closest to matching a Christmas card in price. It has been regarded, since Betty Gilderdale's work on New Zealand children's book history, as a children's book. 129 This may be the case, but there is absolutely nothing in the advertising to mark it as such. What it claims to be is "humorous…." At all events, this connection with Sherriff's previous work allows us to see how Tiki fits into the artist's oeuvre -something that Caroline Campbell was not able to make sense of. 132 Campbell also misses the connection of Tiki's vignette format with Sherriff's previous Christmas cards, seeing this instead as typical of children's illustration.
133 Nevertheless, in the closest and most extensive reading of any of the Willis works discussed here, Campbell's conclusion that "the figurative treatment of the main and supporting characters is an attempt by an immigrant artist/illustrator to articulate an indigenous reality contradicting the aims of colonial agencies of power," seems a fair assessment of Sherriff's approach. 134 The bulk of Willis's 1892 work was, however, done by a different artist. In 1891 The New Zealand Herald had reported that Willis had engaged two Auckland artists for his next year's production, 135 however only one eventuated, and it is less easy to ignore the 'colonial agencies of power' in some of his pieces. Kennett Watkins was by far the most high-profile artist Willis had yet employed, and he not only drew six Christmas card designs, but also the main booklet of the year The Tohunga ( figure 17). This again had poetry by Eleanor Montgomery which, as a review put it, aimed to "reproduce in English verse the kind of naturalism and savagery which we understand to be the characteristics of Maori verse." 136 The artwork, however, seems to have been almost universally appreciated, and described as "real works of art". 137 It was certainly more cohesive than The Land of the Moa and the consensus seems to have been that "nothing could be more suitable than this pretty little book as a Christmas card for sending to friends at home." 138 Watkins' set of six cards for 1892 were similarly well received. Apart from their stronger sepia colouring (as opposed to the bluish grey of Sherriff's earlier cards) several of them follow the pattern of their predecessors fairly closely, with cards like A Native Pet ( figure 18) providing an intimate view of Māori life. However, Watkins also introduced new elements. He had worked hard to establish himself in the high-status genre of history painting. 139 With only 50 years of Pākehā history to play with, the genre almost by default propelled New Zealand painters into depictions of historical Māori, a territory which Watkins regarded as an inheritance. 140 Thus, instead of documenting Māori life in the present, as Sherriff had, Watkins began to show historical customs in cards like A Maori Challenge. For these cards, an explanatory text was added to the back. If the ideology is implicit in the images, it is explicit in the texts. The gist of the Maori Challenge text is that uncivilised races taunt their enemies, whereas with the introduction of European guns, this primitive custom has stopped. The task of fully analysing these images and their accompanying texts is beyond the scope of this paper, but Watkins' version of Maoriland is clearly shot through with colonial attitudes that attempt, as Roger Blackley puts it, to "rescue the Māori past on behalf of Pākehā successors." 141 It also, rather more than Sherriff's work, falls victim to what Stafford and Williams have called "the Maoriland habit of splitting the present from the past, the actual from the ideal." 142 An honoured place in the past did not guarantee a place in the colony's future. And the fact that these cards were a commercial enterprise opens them to the critique that they, like other Maoriland work, were motivated more by economic benefit than any intrinsic interest in Māori. 143 How much these texts were Willis's doing, and how much they were Watkins' ideology is unclear, though their appearance in tandem with Watkins' arrival is suggestive. That they tapped into contemporary sentiment is, however, quite evident. 
The Auckland Star noted that "the subjects….are well chosen and the letterpress description on the back of the card makes these messages of love from Maoriland specially suitable for transmission to friends abroad." 144 Willis's final year of Christmas card production was 1893, and Watkins again provided the artwork. The Observer called them "a set of six exquisite cards illustrative of New Zealand life and scenery [that would] give folks in England an idea of what Maoriland is really like." 145 The cards moved from a three-colour sepia to the tinted lithograph form and are both technically and artistically more ambitious than the previous year's offerings. The subjects also vary from A Travelling Party (figure 19), an image of Māori life with a descriptive and fairly neutral text, to cards that very much explore the impact of European settlement. In Colonial Progress, the Māori included are, like the native bush around the frame, simply symbolic of the old and untamed. Within the picture of progress (Watkins' picture-within-a-picture here stages this like a drama), the land is being tamed, and the accompanying text charts the stages. Progress is seen as inevitable and European. There is a more nuanced approach in Oar Versus Paddle, which seems to document real boat races. The description allows Māori paddling prowess, but the introduction of sleek rowing boats means that four Europeans can take on (and beat over a longer distance) the Māori waka. Progress again inevitably favours the Pākehā. The Wanganui Chronicle agreed with the Observer that the cards were "most artistically got up, ….are eminently suited for the purpose intended, and will give people outside the colony a very favourable impression of picturesque New Zealand." 146 It also praised Willis's new Christmas booklet -Under the Southern Cross (Figure 20) -which reunited George Sherriff with Eleanor Montgomery, who provided an introductory poem. 147 This was a booklet of views and was perhaps a response to a very nicely designed West Coast Sounds booklet published the previous year by J. Wilkie & Co., with illustrations by Robert Hawcridge. 148 It demonstrated that a booklet of views was viable, and hence it is not surprising to see Willis -always aware of market trends -responding to it. However, it did not just include views of New Zealand. The Southern Cross connects many places and some pages show other lands and islands. In this booklet, New Zealand identity moves beyond the national to be located geographically and conceptually within the Pacific -and commercially in Australasia. 149 Despite the good reviews of the year's offerings, 1893 was Willis's Christmas swansong. He would produce no new designs, though his previous cards and booklets continued on sale for several years, 150 and the type of Christmas imagery he had helped popularise would continue in Christmas Issues of journals like the New Zealand Graphic. 151 There are several reasons for this retreat from a format that had helped establish him as a leading art publisher. Firstly, 1893 saw Willis elected to the House of Representatives, replacing John Ballance in the Wanganui seat. Secondly, sometime between 1893 and 1894, William Potts left. 152 Given that Willis's Christmas card production began in earnest with Potts' arrival, and stopped with his departure, it seems likely that he was a driving force in their conception. And with no Potts, and with Willis in Wellington, there was clearly a decision to consolidate production.
By 1893 Willis's printing works was highly successful across multiple fields (books, labels, playing cards, etc.). Now he could put more work into these lines. And quitting the Christmas market as a publisher did not prevent his profiting from it as an importer and bookseller. Looking back, however, Willis would have been able to reflect on a successful decade of Christmas production that had helped promote his hometown as a publishing centre. He had, with Wilkie and the Burton Brothers, forged a place for the local in the teeth of competition from the giants of print culture. He had been attentive to the trends in the stationery industry and responded quickly -including being the first New Zealand publisher to publish Christmas booklets. He had also produced a catalogue of New Zealand views which encompassed most of the country and collectively recorded what was considered at the time to be 'picturesque New Zealand.' He had also managed to convince people across the country of the validity of these pieces of ephemera as being "the highest style of art," 153 had tapped top local talent and, latterly, could call on one of the leading artists of the day to produce works for him. This paper is not the last word on Willis's work. Rather it has sought to shape an armature on which subsequent studies might be sculpted. Nevertheless, some broad themes have emerged, with the two halves of the production having quite different trajectories. The first half was about refining the chromolithographic technique, developing a distinctive visual language with which to address the local audience while remaining accessible to overseas recipients. There was an increasing sense of New Zealandness being related to its unique, wild and awe-inspiring landscape -a development which superseded and incorporated the earlier focus on flora and fauna. The 'view' here becomes more collectible than the specimen. In none of this is Willis markedly different from what was going on around him, but he was very aware of current cultural shifts -as his quick understanding of the need to create visual abundance demonstrated. Nineteenth century print culture could easily act as a transnational force for cultural colonisation. Willis was inevitably part of that culture. His work in the 1890s attempted to address New Zealand's cultural distinctiveness, thereby becoming an early exponent of the type of Maoriland approaches that are now largely discredited -seen as "imprisoning Maori within an imagined past" while manufacturing identity from an appropriated mythology. 154 That Willis would, to twenty-first century eyes, fail spectacularly to properly respect the unique identity and position of Māori was, given his cultural background, fairly much inevitable. The cracks are particularly evident in the cards created for him by Kennett Watkins. Nevertheless, within his own context, Willis's interest in promoting the Māori language through his cards is one indicator that he was trying to do more than just make money, and was perhaps encouraging a base-level of cultural understanding via his publications.
Indeed, he would later purchase the rights to Kōrero Māori: First Lessons in Māori Conversation and publish a fifth edition, 155 suggesting that he was serious about promoting Te Reo Māori. There is, clearly, a great deal more that needs doing to understand Willis's work in relation to the broader issues that Phillips, Bell, Stafford & Williams, Gibbons and Blackley, in particular, have defined. There is also more to be said about the ways that commercial art, the picturesque, the middlebrow and the Christmas card come together. However, for these discussions to be complete, the photographic Christmas cards of the period also need to be considered. Therefore, these will inform a future paper. For now, I hope that the above has shown that the work of this Whanganui chromolithographer deserves a more central place in these discussions than it has hitherto received, and that, by documenting Willis's development, it has established a clear basis from which these discussions can begin.
Disorder of Coagulation-Fibrinolysis System: An Emerging Toxicity of Anti-PD-1/PD-L1 Monoclonal Antibodies A disruption of immune checkpoints leads to imbalances in immune homeostasis, resulting in immune-related adverse events. Recent case studies have suggested the association between immune checkpoint inhibitors (ICIs) and the disorders of the coagulation-fibrinolysis system, implying that systemic immune activation may impact a balance between clotting and bleeding. However, little is known about the association of coagulation-fibrinolysis system disorder with the efficacy of ICIs. We retrospectively evaluated 83 lung cancer patients who received ICI at Kumamoto University Hospital. The association between clinical outcome and diseases associated with disorders of the coagulation-fibrinolysis system was assessed along with tumor PD-L1 expression. Among 83 NSCLC patients, total 10 patients (12%) developed diseases associated with the disorder of coagulation-fibrinolysis system. We found that disorders of the coagulation-fibrinolysis system occurred in patients with high PD-L1 expression and in the early period of ICI initiation. In addition, high tumor responses (72%) were observed, including two complete responses among these patients. Furthermore, we demonstrate T-cell activation strongly induces production of a primary initiator of coagulation, tissue factor in peripheral PD-L1high monocytes, in vitro. This study suggests a previously unrecognized pivotal role for immune activation in triggering disorders of the coagulation-fibrinolysis system in cancer patients during treatment with ICI. Introduction T cell activation and proliferation are initiated through antigen recognition by the T cell receptor (TCR). The T cell response is regulated by a balance between co-stimulatory and inhibitory signals called immune checkpoints [1]. Under normal physiological conditions, immune checkpoints play a crucial role in maintaining immune homeostasis and preventing autoimmunity [1]. In cancer, immune checkpoint pathways delivering inhibitory signals are often activated to suppress anti-tumor immune response in tumor immune microenvironments as one of the mechanisms of tumor immune escape [2][3][4][5]. The first generation of antibody-based immunotherapy, called immune checkpoint inhibitors (ICIs), blocks the receptor and/or ligand interactions of molecules, such as cytotoxic T-lymphocyte antigen 4 (CTLA-4), programmed cell death 1 (PD-1), and programmed cell death ligand 1 (PD-L1) [1,5]. Anti-PD-1/PD-L1 antibodies inhibit the interaction between PD-1 and PD-L1 and unleash immune responses against tumors: activating or boosting the activation of the immune system to attack cancer cells [6]. The immunotherapy targeting the PD-1/PD-L1 pathway has shown significant and durable clinical responses for non-small-cell lung cancer (NSCLC) patients in addition to a more favorable toxicity profile and improved tolerability than chemotherapy [7]. Anti-PD-1/PD-L1 antibody therapy has changed the treatment landscape and led to a paradigm shift in treatment strategies in NSCLC, and is now a standard-of-care for NSCLC [6][7][8]. ICI are recognized as a promising strategy to treat various types of cancer and the indications for the use of ICIs continue to expand with unprecedented speed [5][6][7][8]. 
A disruption of inhibitory immune checkpoint leads to imbalances in immune homeostasis, resulting in adverse effects which are termed immune-related adverse events (irAEs) that share clinical features with autoimmune diseases or inflammatory diseases [9]. The irAEs can affect multiple organs of the body and are commonly seen in the skin, lungs, thyroid, endocrine, adrenal, pituitary, gastrointestinal tract, musculoskeletal, renal, and nervous system [9,10]. Most of the irAEs are usually reversible, but in rare cases, they can be severe and life-threatening. In addition, unexpected severe irAEs have emerged in real-world clinical practice [11][12][13][14][15]. Thus, elucidating mechanisms of irAEs is urgently needed to improve their early diagnosis and develop more precise treatments for irAEs [9,10]. A balance between clotting and bleeding is maintained in normal physiology, but can be altered under the presence of malignancies [16][17][18]. It has been known that a coagulation homeostasis could be further impaired after nonsurgical cancer therapy including radiation therapy, standard chemotherapy, and targeted therapy, which could trigger both bleeding and thrombosis; however, the underlying mechanisms remain unclear [18][19][20]. Recently, several case studies have suggested that anti-PD-1/PD-L1 monoclonal antibodies might trigger disorders of the coagulation-fibrinolysis system in advanced cancer patients, which implies that the systemic immune activation may impact a balance between clotting and bleeding [13,14,[21][22][23]. However, the association of coagulation-fibrinolysis system disorder with the efficacy of anti-PD-1/PD-L1 monoclonal antibody therapies and clinical characteristics of the patients who develop diseases associated with disorders of the coagulation-fibrinolysis system under treatment with ICIs have not been studied yet. Tissue factor (TF) is a transmembrane cell surface glycoprotein that triggers the extrinsic coagulation cascade and is essential for hemostasis. TF binds the coagulation serine protease factor VII/VIIa (FVII/VIIa) to form a bimolecular complex that functions as the primary initiator of coagulation in vivo [24,25]. Studies have shown that levels of circulating TF in the form of microparticles are increased in various diseases, including cardiovascular disease, sepsis, and cancer [16,26]. In addition, circulating TF in blood has been suggested to be a cause of distant thromboses and contributes to the increased incidence of thrombosis observed in these diseases. Importantly, monocytes have been shown to be the major source of intravascular TF in many diseases [16,25,27]. Therefore, we focused on the relationship between T cell activation and induction of TF expression on monocytes in peripheral blood mononuclear cells (PBMCs) in this study. Here, we report the clinical features of NSCLC patients who developed diseases associated with disorders of the coagulation-fibrinolysis system during treatment with ICI. We demonstrate that T cell activation leads to promoting production of a primary initiator of coagulation, TF, in PD-L1 high human peripheral CD14 + monocytes. We also discuss the underlying mechanisms of the onset of disorders of coagulation-fibrinolysis system as an irAE of anti-PD-1/PD-L1 antibody therapy. This study suggests a previously unrecognized pivotal role for immune activation in triggering disorders of coagulation-fibrinolysis system in advanced cancer patients during treatment with ICI monotherapy. 
Patients The medical records of patients with advanced NSCLC who had received nivolumab (3 mg/kg every 2 weeks), pembrolizumab (200 mg every 3 weeks), or atezolizumab (1200 mg every 3 weeks) monotherapy at Kumamoto University Hospital between January 2016 and October 2018 were retrospectively reviewed. Treatments were provided until disease progression, unacceptable toxicity, or consent withdrawal. The present study was approved by the Kumamoto University Institutional Review Board (IRB number, 1685, Approval Date, 27 March 2018.) To maximally characterize the clinical features of NSCLC patients who developed diseases associated with disorders of the coagulation-fibrinolysis system during immune checkpoint blockade therapy, we searched the PubMed database. We did not limit search dates and we looked for articles published in English. Assessments Only adverse events associated with disorders of the coagulation-fibrinolysis system (thromboembolic and bleeding complications), which were not detected before treatment with ICI and newly developed during treatments with ICI (within 30 days of the last administration of ICI), were considered as disorders of the coagulation-fibrinolysis system possibly triggered by immune checkpoint blockade. The clinical severity of coagulation-fibrinolysis system disorders was graded according to the Common Terminology Criteria for Adverse Events, version 5.0. A newly developed purpura involving more than 10% of the body surface area (grade ≥ 2) was assessed as one of the bleeding complications. Disorders of the coagulation-fibrinolysis system accompanied with the abnormal decrease in platelet count less than 100 × 10 9 /L were not considered as coagulation-fibrinolysis system disorders triggered by ICI to exclude immune thrombocytopenia which previously reported as a rare irAE [17,28]. Tumor response to nivolumab, pembrolizumab, or atezolizumab monotherapy was objectively assessed by pulmonary physicians according to Response Evaluation Criteria in Solid Tumors, version 1.1. The Kaplan-Meier method was used to obtain estimates of progression-free survival (PFS) and overall survival (OS). PFS was measured from the date ICIs started to the date of documented progression or death. Patients who were alive and not known to have progressed were censored. OS was measured from the date ICI started to the date of death or last follow-up. The data cutoff date was 15 March 2019. 95% confidence intervals of survival were calculated by the log-log transformation of survival. The analysis was performed using GraphPad Prism 7.0c software (GraphPad Software, San Diego, CA, USA). PD-L1 Staining PD-L1 expression in the lung cancer specimen was analyzed by immunohistochemical staining using the PD-L1 IHC 22C3 pharmDx antibody (clone 22C3, Dako North America, Inc., Carpinteria, CA, USA). The antibody was applied according to DAKO-recommended detection methods. PD-L1 expression in tumor cells was scored as the percentage of stained cells. Isolation of PBMCs Blood samples of healthy donors were collected in cell preparation tubes with sodium citrate (BD Vacutainer CPT Tubes, BD Biosciences, Franklin Lakes, NJ, USA). PBMCs were obtained by centrifugation following the manufacturer's protocol. Flow Cytometric Analyses Multiparameter flow cytometric analysis was performed on PBMCs. Briefly, cells were incubated with Fc receptor blocking agent (Miltenyi Biotec, Bergisch Gladbach, Germany) and stained with monoclonal antibodies for 20 min at 4 • C in a darkened room. 
CD3 and CD14 immunophenotypic markers were used to define T lymphocytes and monocytes. Each population was also evaluated for CD142 (tissue factor; TF) and PD-L1 expression. The following monoclonal antibodies were used (all from BioLegend, San Diego, CA, USA): FITC-CD3 clone OKT3, PerCP/Cy5.5-CD14 clone HCD14, APC-CD69 clone FN50, PE-CD142 clone NY2, PE/Cy7-HLA-DR clone L243, and Brilliant Violet 421-PD-L1 clone 29E.2A3. Matched isotype controls were used for each antibody to establish the gates. Live cells were discriminated by means of LIVE/DEAD Fixable Aqua Dead Cell Stain (Thermo Fisher Scientific, Waltham, MA, USA) and dead cells were excluded from all analyses. All flow cytometric analyses were performed using a BD FACSVerse™ (BD, Franklin Lakes, NJ, USA). Data were analyzed using FlowJo software (FlowJo LLC, Ashland, OR, USA). Disorder of Coagulation-Fibrinolysis System Triggered by Immune Checkpoint Blockade in Advanced Lung Cancer Only diseases associated with disorders of the coagulation-fibrinolysis system that occurred during treatment with ICI were considered as possible irAEs triggered by immune checkpoint blockade [13,14,21,22,29]. Disorders of the coagulation-fibrinolysis system accompanied by an abnormal decrease in platelet count were not considered as coagulation-fibrinolysis system disorders triggered by ICI, to exclude immune thrombocytopenia, which was previously reported as a rare irAE [28]. Disseminated intravascular coagulation (DIC) caused by pneumonia and sepsis, accompanied by elevations of procalcitonin in blood, was seen in two patients during treatment with ICI. However, ICI-related DIC without infectious disease was not observed in the current study. Thus, the two patients who developed DIC were not considered as having coagulation-fibrinolysis system disorders triggered by ICI. Among 83 advanced NSCLC patients receiving nivolumab, pembrolizumab, or atezolizumab monotherapy at Kumamoto University Hospital between January 2016 and October 2018, a total of 10 patients (12%) developed diseases associated with the disorder of coagulation-fibrinolysis system (thromboembolic and bleeding complications) during treatment with ICI, of which 2 patients were cases recently reported from our group [13,14]. To maximally characterize the clinical features of NSCLC patients who developed diseases associated with disorders of the coagulation-fibrinolysis system during immune checkpoint blockade therapy, we added two NSCLC cases identified by the PubMed database search to this study [22,29]. Ferreira et al. have reported a case of an NSCLC patient who had an acute coronary syndrome (ACS) during the second administration of nivolumab, and both published cases are summarized alongside our own in Tables 1 and 2 [22,29]. The characteristics of a total of 12 patients who developed diseases associated with disorder of the coagulation-fibrinolysis system are summarized in Tables 1 and 2. The median age was 70.5 (range, 48-81) years. Of 12 patients, four (33%) were diagnosed as having squamous cell carcinoma and eight (67%) were diagnosed as having nonsquamous NSCLC. Mutation (L858R) in epidermal growth factor receptor (EGFR) was present in one patient (8%). Six patients (50%) had undergone no prior chemotherapy regimen, whereas 6 of 12 patients (50%) had received one or more previous chemotherapies. The median treatment line was 1.5 (range, 1-4).
A broad spectrum of diseases associated with disorders of the coagulation-fibrinolysis system, including ACS (n = 2), cerebral infarcts (CI; n = 5; Figure 1, case 4), DVT (n = 3; Figure 1, case 9), PTE (n = 1), intra-brain-tumor hemorrhage (n = 1), gastrointestinal bleeding (n = 2), purpura involving more than 10% of the body surface area (n = 2), and bronchial hemorrhage (n = 2), was observed in association with the administration of ICIs (Table 1). Of the 12 patients, 11 were treated with anti-PD-1 monoclonal antibodies (nivolumab or pembrolizumab), while 1 patient received an anti-PD-L1 monoclonal antibody (atezolizumab). Severe (≥ grade 3) diseases associated with disorders of the coagulation-fibrinolysis system were seen in 4 out of 12 patients (33%). One death related to intra-brain-tumor hemorrhage was observed. The first onset of diseases associated with disorder of the coagulation-fibrinolysis system developed within 2 cycles of anti-PD-1/PD-L1 monoclonal antibody therapy for 8 patients (67%), with a median onset of 1 cycle (range 1-16 cycles) (Table 1). Among the 12 patients, 4 had a past medical history of diseases associated with disorders of the coagulation-fibrinolysis system. Three of the 4 patients had been taking antiplatelet agents (Table 2). Antiangiogenic agents including bevacizumab had not been used prior to ICI in these 12 patients. Association of Coagulation-Fibrinolysis System Disorders with the Efficacy of Anti-PD-1/PD-L1 Monoclonal Antibody Therapies in NSCLC The PD-L1 tumor proportion score (TPS) was evaluable in 11 of 12 patients who developed diseases associated with disorders of the coagulation-fibrinolysis system (Table 1). Interestingly, all 11 patients were positive for PD-L1. The expression of PD-L1 was abundant (TPS ≥ 50%) in 9 of 11 patients (82%) (Figure 2A) and at low levels (1% ≤ TPS < 50%) in 2 patients (18%). The association between irAEs and the efficacy of anti-PD-1/PD-L1 monoclonal antibody therapies in NSCLC has been reported [5,[30][31][32][33]. Of the 12 NSCLC patients, 11 were assessable for tumor response. One patient died 28 days after starting treatment due to pneumonia and was not assessable for response. Based on Response Evaluation Criteria in Solid Tumors, version 1.1, complete response (CR) was observed in two patients (17%), partial response in six patients (55%), stable disease in two patients (18%), and progressive disease in one patient (9%). The objective response rate (ORR) and the disease control rate (DCR) were 72% and 91%, respectively, in NSCLC patients evaluated for tumor response (n = 11) (Figure 2B). Of 12 NSCLC patients who developed diseases associated with disorders of the coagulation-fibrinolysis system, 10 were assessable for survival. The 10 patients analyzed for PFS and OS had received ICIs at Kumamoto University Hospital. The median PFS was 8.3 months (Figure 2C, left panel), and the median OS was not reached (Figure 2C, right panel).
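As a rough illustration of the response-rate arithmetic and the Kaplan-Meier estimation described in the Assessments section, the sketch below recomputes ORR and DCR from the best-response counts reported above and fits a Kaplan-Meier curve with the lifelines Python package, whose default confidence intervals use a log-log (exponential Greenwood) transformation broadly in line with the approach described. The per-patient PFS durations and event flags are hypothetical placeholders, not the study data.

```python
# Minimal sketch (not the authors' analysis code): ORR/DCR from best overall
# responses, and a Kaplan-Meier median PFS estimate, assuming hypothetical data.
from lifelines import KaplanMeierFitter

# Best overall responses for the 11 response-evaluable patients (RECIST v1.1).
best_response = ["CR"] * 2 + ["PR"] * 6 + ["SD"] * 2 + ["PD"] * 1

n_eval = len(best_response)
orr = sum(r in ("CR", "PR") for r in best_response) / n_eval        # objective response rate
dcr = sum(r in ("CR", "PR", "SD") for r in best_response) / n_eval  # disease control rate
print(f"ORR = {orr:.1%}, DCR = {dcr:.1%}")  # ~72% and ~91%, as reported

# Hypothetical (placeholder) PFS data for the 10 survival-evaluable patients:
# months from ICI start to progression/death; 1 = event observed, 0 = censored.
pfs_months = [2.1, 3.5, 5.0, 7.2, 8.3, 9.0, 10.4, 12.0, 14.6, 16.1]
progressed = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(pfs_months, event_observed=progressed, label="PFS")
print("Median PFS (months):", kmf.median_survival_time_)
# Default survival-curve confidence intervals in lifelines use the exponential
# Greenwood (log-log) formula, in line with the transformation described above.
print(kmf.confidence_interval_.head())
```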
Chemotherapy and radiotherapy in patients with advanced cancer have been known not only to trigger coagulation disorders, but also to enhance the risk of bleeding complications due to local fibrinogen and platelet consumption and induction of endothelial injury [16][17][18][19][20][34]. In the current study, 7 out of 12 patients (58%) had hemorrhagic complications after immune checkpoint blockade therapy (Table 1). Cases 10, 11, and 12 showed only hemorrhagic complications (bronchial hemorrhage, gastrointestinal bleeding, and grade 2 purpura). Among the hemorrhagic complications, hemorrhage from a brain tumor (case 2, Table 1) was a grade 5 adverse event, suggesting clinicians should be aware of the risk of hemorrhagic complications during ICI therapy as a potential life-threatening irAE [13]. T Cell Activation Induces Tissue Factor Expression on PD-L1-Positive Monocytes LPS-activated monocytes have been known to produce abundant TF [17,35,36]. Consistent with the results reported previously, TF was expressed on LPS-activated CD14 + monocytes (Figure 3A,B). However, activated T cells did not express TF (Figure 3A). Little is known about the impact of T cell activation on TF production in human circulating PD-L1-expressing monocytes. Thus, we next assessed the impact of activated T cells on TF expression on monocytes in vitro. To activate T cells, PBMCs from healthy donors were incubated with CD3/CD28/CD2 beads, which provide physiological activation of human T cells. T cell activation was confirmed by the surface expression of CD69 and HLA-DR on CD3 + T cells. T cell activation by CD3/CD28/CD2 beads induced significant TF expression on CD14 + monocytes, which was confirmed by multiparameter flow cytometric analysis and multicolor immunofluorescence staining (Figure 3A,B). Activated monocytes have been shown to express PD-L1 and HLA-DR on the cell surface, and this PD-L1 suppresses tumor-specific T cell immunity under physiological conditions [37,38]. T cell activation by CD3/CD28/CD2 beads markedly increased PD-L1 on monocytes (Figure 4A) and HLA-DR, suggesting that activated T cells induced monocyte activation. Interestingly, PD-L1 high monocytes expressed higher TF compared to PD-L1 low monocytes (Figure 4B). These results suggest that T cell activation leads to monocyte activation and induces high TF expression on peripheral PD-L1 + monocytes.
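As a purely illustrative companion to the flow cytometric comparison above, the sketch below gates CD14-positive monocytes from a hypothetical table of per-event fluorescence intensities and contrasts tissue factor (CD142) signal between PD-L1-high and PD-L1-low subsets. The column names, thresholds, and simulated data are invented for this example and do not reproduce the FlowJo workflow used in the study; with real data, the PD-L1-high subset would be expected to show the higher TF signal reported in Figure 4B.

```python
# Illustrative sketch (not the study's FlowJo analysis): gate CD14+ monocytes from
# a hypothetical table of per-event fluorescence intensities, split them by PD-L1,
# and compare tissue factor (CD142) signal between the two subsets.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
events = pd.DataFrame({
    "live":  rng.random(n) > 0.1,         # LIVE/DEAD stain negative
    "CD3":   rng.normal(100, 30, n),      # arbitrary intensity units
    "CD14":  rng.normal(150, 60, n),
    "PDL1":  rng.lognormal(4.0, 0.8, n),
    "CD142": rng.lognormal(3.0, 0.7, n),  # tissue factor
})

# Thresholds would normally come from isotype/FMO controls; these are placeholders.
monocytes = events[events["live"] & (events["CD3"] < 120) & (events["CD14"] > 180)]

pdl1_cutoff = monocytes["PDL1"].median()  # illustrative split, not a validated gate
pdl1_high = monocytes[monocytes["PDL1"] >= pdl1_cutoff]
pdl1_low = monocytes[monocytes["PDL1"] < pdl1_cutoff]

print("Median CD142 (TF), PD-L1-high monocytes:", round(pdl1_high["CD142"].median(), 1))
print("Median CD142 (TF), PD-L1-low monocytes: ", round(pdl1_low["CD142"].median(), 1))
```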
Taken together, our data suggest that although, under physiological conditions, upregulated PD-L1 on activated antigen-presenting cells (APCs) suppresses the activated T cells and results in the end of immune activation as a homeostatic mechanism, T cell activation by ICIs has the potential to induce abundant TF production on APCs and may trigger disorders of the coagulation-fibrinolysis system (Figure 5) [16,17,25,36,39]. Figure 5: The crosstalk between APCs and T cells provides crucial stimulatory signals for the efficient expansion and development of effector functions in T cells and also induces activation of APCs and TF production. In physiological conditions, upregulated PD-L1 on activated APCs suppresses T cells and results in the end of immune activation as a homeostatic mechanism. However, ICIs provide a forced activation of T cells by blocking immune checkpoint pathways, which may lead to further APC activation and abundant TF production, and may trigger disorders of the coagulation-fibrinolysis system. Discussion A complex interplay between anti-PD-1/PD-L1 monoclonal antibody therapy and host immunity unleashes the antitumor immune response; however, the disruption of immune checkpoint signaling also leads to imbalances in immunologic tolerance, resulting in an unfavorable immune response which clinically manifests as irAEs [9][10][11][12][13][14].
The number of indications for use of ICIs is growing at an unprecedented speed and ICIs have changed clinical practice, whereas unexpected irAEs have emerged in real-world clinical practice [9,10,[12][13][14]. In the current study, we showed that 12% of NSCLC patients receiving ICI monotherapy developed diseases associated with disorders of the coagulation-fibrinolysis system. Interestingly, we found that disorders of the coagulation-fibrinolysis system occurred in patients with high PD-L1 expression on tumor cells and in the early period after ICI initiation. In addition, high tumor responses were observed, including two CR cases, among these patients, suggesting an association between immune activation by ICIs and the onset of disorders of the coagulation-fibrinolysis system (Table 1). Coagulation-fibrinolysis system disorders have not been reported as irAEs in landmark clinical trials of ICIs in cancer patients. In most phase II/III clinical studies of ICIs, only AEs which were considered by the investigators to be related to the study therapy, or high-incidence AEs (≥5-10% of patients who received ICI), have been reported (Table 3) [33,[40][41][42][43]. Thus, not all AEs are known. AEs associated with disorders of the coagulation-fibrinolysis system have not been reported in five landmark clinical studies of ICIs in patients with advanced NSCLC, whereas hemoptysis (n = 16, 6%), pulmonary embolism (n = 1, <1%) and cerebrovascular accident (n = 1, <1%) were reported in CheckMate057 [44]. However, CheckMate057 did not report on the association of coagulation-fibrinolysis system disorders with the efficacy of ICIs. Recently, several case studies have shown a relationship between the administration of ICIs and the occurrence of diseases associated with disorders of the coagulation-fibrinolysis system; in addition, some of the cases were severe and life-threatening, suggesting ICIs might impact the coagulation-fibrinolysis system in advanced cancer patients [13,14,[21][22][23],29]. Understanding the underlying mechanisms of disorders of the coagulation-fibrinolysis system caused by ICIs and the clinical characteristics of the patients who develop them is urgently needed to improve their early diagnosis and develop more precise treatments for these adverse events. Link between PD-L1 Expression on Tumor Cells and Efficacy of ICIs and Disorders of Coagulation-Fibrinolysis System Triggered by ICIs In this study, we investigated the clinical characteristics of the patients who developed diseases associated with disorders of the coagulation-fibrinolysis system under treatment with ICI monotherapy. Although 5 of 18 adverse events (27%; Table 1) associated with disorders of the coagulation-fibrinolysis system were severe (Grade ≥ 3), the ORR was 72%. Furthermore, more than 90% of patients with advanced NSCLC achieved disease control. This is the first study to show benefit for patients with advanced NSCLC who developed diseases associated with disorders of the coagulation-fibrinolysis system under treatment with ICI monotherapy. Recent studies have shown an association of early irAEs with the efficacy of ICIs in NSCLC, suggesting a relationship between systemic immune activation and the efficacy of ICIs [45,46]. The onset of common irAEs, such as rash, pyrexia, and endocrinopathies, has been reported to be an early predictive factor of efficacy.
Intriguingly, the patients who developed diseases associated with disorders of the coagulation-fibrinolysis system in association with the administration of ICIs tended to have better response to the therapy (the ORR was 72%) in view of the recent results from clinical studies, in which ORR to ICI monotherapy or ICI combination therapies was approximately 40-60% even in first-line setting [30,32,33]. In addition, the early onset of the coagulation-fibrinolysis system disorders were seen; the diseases associated with disorders of coagulation-fibrinolysis system developed within two cycles of ICI therapies for 67% patients and the median onset of cycle was one. High PD-L1 expression on tumor cells mirrors immunologically 'hot' tumor, which are characterized by high infiltration of T cells, and the immune system in NSCLC patients with high tumor expression of PD-L1 are ready to be activated by ICIs [5,7,11]. High PD-L1 expression on tumor cells has been indeed associated with a high clinical response to ICIs in advanced NSCLC patients [32,33,47]. In addition, systemic immune activation by ICIs in peripheral blood of cancer patients have been confirmed in ICI responders [48,49]. In our study, all patients who developed diseases associated with disorders of coagulation-fibrinolysis system were positive for PD-L1, in addition, 82% of patients were strongly positive for PD-L1 on tumor (TPS ≥ 50%). Importantly, activated T cells promote procoagulant activity via induction of TF in monocytes/macrophages [16,50,51]. We demonstrated that T cell activation leads to abundant TF in PD-L1 high CD14 + monocytes. Therefore, an association between high PD-L1 expression on tumor cells, systemic immune activation by ICIs, the response to ICIs and disorders of the coagulation-fibrinolysis system during ICI therapy potentially exists in NSCLC patients who receiving immune checkpoint blockade. Underlying Mechanisms of Disorders of the Coagulation-Fibrinolysis System as an irAE The main irAEs of ICIs are skin, lungs, thyroid, endocrine, adrenal, pituitary, gastrointestinal tract, musculoskeletal, renal, and nervous system. Little is known about the risk of vascular events associated with ICIs [9,10]. The hypercoagulable state in cancer involves several complex interdependent mechanisms, including interaction among cancer cells, host immune cells, and coagulation-fibrinolysis system. Key roles in pathophysiology are played by TF, inflammatory cytokines, and platelets [16][17][18][19]25,35,[52][53][54]. TF triggers the extrinsic coagulation cascade and cause disorders of coagulation-fibrinolysis system [24,25]. Importantly, monocytes have been shown to be the major source of intravascular TF in many diseases [16,25,27]. Therefore, we studied the impact of T cell activation on TF expression on monocytes in human PBMCs. In the current study, we demonstrated that T cell activation lead to monocyte activation and markedly increased PD-L1 on monocytes. We showed that PD-L1 high monocytes expressed higher TF compared to PD-L1 low monocytes, suggesting T cell activation by anti-PD-1/PD-L1 antibodies has the potential to induce high TF expression on peripheral PD-L1 + monocytes. The accumulating evidence suggests that activated T cells and APCs such as dendritic cells, macrophages, and monocytes are involved in provoking disorders of coagulation-fibrinolysis system [36,39,50,51,55]. 
Importantly, immune checkpoint blockade not only activates T cells but also activates APCs [37,56,57], indicating that anti-PD-1/PD-L1 monoclonal antibodies may trigger disorders of the coagulation-fibrinolysis system. Two possible mechanisms of the onset of diseases associated with disorders of the coagulation-fibrinolysis system under treatment with ICIs are shown in Figure 6.

Figure 6. Two possible mechanisms of the onset of coagulation-fibrinolysis system disorders under ICI therapy. (A) Anti-PD-1/PD-L1 antibodies promote T cell activation by blocking the interaction between PD-1 and PD-L1 and induce IFN-γ and Th1 cytokine production, which play a crucial role in anti-tumoral effects. In turn, IFN-γ and Th1 cytokines promote APC activation and TF synthesis in monocytes/macrophages. TF-containing membrane fragments or microvesicles released by the monocytes/macrophages could be a cause of distant thromboembolic events in advanced cancer patients. Microvesicles carrying TF activate factor VII. Conversion of factor VII to its active form (VIIa) in complex with TF triggers the production of other coagulation-related proteases in the coagulation cascade. The TF-factor VIIa complex converts factor X to activated factor X (factor Xa). Factor Xa, with its cofactor activated factor V (factor Va), activates prothrombin and generates thrombin, which is required to transform fibrinogen into fibrin and to activate platelets. This hypothetical mechanism could play a role in intravascular thrombosis in cancer patients receiving ICIs. A contrast-enhanced CT image of a leg vein from case 9 is shown on the upper right. (B) In atherosclerotic lesions, T cells and APCs have been shown to be involved in promoting plaque development, progression, and destabilization. Activated APCs, such as monocytes/macrophages and DCs, promote inflammation by secreting pro-inflammatory mediators such as IL-12 and TNF-α or by promoting T cell activation. TF expression on APCs in atherosclerotic lesions is also promoted. Activated T cells produce pro-atherogenic cytokines such as IFN-γ and TNF-α that contribute to both the growth and destabilization of atherosclerotic lesions, which could result in rupture of the lesion. In contrast to cancer, where T-cell activation and pro-inflammatory cytokines produced by immune subsets are highly valued, the unwanted activation of immune subsets needs to be suppressed in atherosclerotic lesions. As a homeostatic mechanism, activation of T cell subsets is suppressed by immune checkpoint pathways in the atherosclerotic lesions; however, ICIs could promote T-cell activation by blocking these pathways. This evidence raises the hypothesis that ICIs might be involved in provoking the growth and destabilization of atherosclerotic lesions and causing disorders of the coagulation-fibrinolysis system through activation and recruitment of T cells and APCs in the lesions.

There is a link between immune activation and thrombotic events within blood vessels [13,17,35,54,55,57]. TF is a primary initiator of fluid-phase blood coagulation and causes disorders of the coagulation-fibrinolysis system [16,17,25]. Not only cancer cells but also activated monocytes and macrophages are known to express abundant TF. It has been shown that monocytes/macrophages are a major source of circulating TF in the blood and that their TF production is triggered by inflammatory cytokines such as IL-1β, TNF-α, and IFN-γ [17]. In addition, activated T cells have been shown to promote procoagulant activity via induction of TF in monocytes/macrophages [13,36,51].
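The extrinsic cascade described in the Figure 6 caption is a strictly ordered chain of proteolytic activations, which can be made concrete with a short schematic sketch. The sketch below is purely illustrative and is not a kinetic model: the step names follow the caption, but the ordered rule set and boolean state are our own simplification.

```python
# Schematic of the TF-initiated extrinsic coagulation cascade described above.
# Purely illustrative: each product forms only once its upstream requirements
# are present; no kinetics or concentrations are modeled.

CASCADE = [
    # (product, requirements)
    ("TF:VIIa complex", {"tissue factor (TF)", "factor VII"}),
    ("factor Xa",       {"TF:VIIa complex", "factor X"}),
    ("thrombin",        {"factor Xa", "factor Va", "prothrombin"}),
    ("fibrin",          {"thrombin", "fibrinogen"}),
]

def run_cascade(present: set) -> set:
    """Propagate activations until no new product can be generated."""
    state = set(present)
    changed = True
    while changed:
        changed = False
        for product, requirements in CASCADE:
            if product not in state and requirements <= state:
                state.add(product)
                changed = True
    return state

# Monocyte-derived microvesicles supply TF; plasma supplies the zymogens.
plasma = {"tissue factor (TF)", "factor VII", "factor X",
          "factor Va", "prothrombin", "fibrinogen"}
print("fibrin" in run_cascade(plasma))                            # True
print("fibrin" in run_cascade(plasma - {"tissue factor (TF)"}))   # False
```

The second call illustrates the point the text makes about monocyte-derived TF: removing TF from the starting state breaks the chain at its first step, so no fibrin is formed.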
Thus, the accumulating evidence suggests a potential risk of triggering disorders of the coagulation-fibrinolysis system within blood vessels in association with ICI therapy: T cell activation by anti-PD-1/PD-L1 monoclonal antibody therapies may promote TF synthesis in monocytes/macrophages, which could trigger disorders of the coagulation-fibrinolysis system such as DVT, PTE, and Trousseau's syndrome in advanced cancer patients (Figure 6A). In arteriosclerotic lesions, the activation of immune subsets, including T cells and APCs (monocytes, macrophages, and dendritic cells), plays critical roles in promoting plaque development, progression, and destabilization, resulting in rupture and thrombus formation [14,39,50,52,53,58,59]. In addition, TF from activated immune subsets has also been suggested to be involved in the onset of ACS [35]. Therefore, the unwanted activation of immunity needs to be suppressed in atherosclerotic lesions. PD-1/PD-L1 signaling plays a critical role in inactivating immune cells and maintaining plaque stabilization in atherosclerotic lesions, even though T cell activation and pro-inflammatory cytokines such as interferon-γ (IFN-γ) and TNF-α are highly valued in terms of anti-tumor effects [14,39,50,58,60]. Activation of both CD4+ T cells and CD8+ T cells is suppressed by immune checkpoint pathways as a homeostatic mechanism in atherosclerotic lesions; however, immune checkpoint blockade invigorates T-cell functions and activates APCs, also suggesting a potential risk of ICI therapy triggering disorders of the coagulation-fibrinolysis system in arteriosclerotic lesions, such as ACS and cerebral infarction (Figure 6B). Bleeding disorders are frequent in advanced cancer patients and were observed in 2.7% of patients during a 1-year period [61]. Chemotherapeutic treatments are known to heighten not only the occurrence of venous thromboembolism but also bleeding complications [17,62]. Although the underlying mechanism of bleeding disorders in cancer patients receiving ICIs remains unclear, ICI-induced systemic immune activation followed by hypercoagulopathy and thromboembolic events may consequently cause a local consumptive coagulopathy; tissue damage and endothelial injury could then develop, leading to internal and/or external bleeding complications such as purpura, intra-tumor hemorrhage, bronchial hemorrhage, and gastrointestinal hemorrhage. Anticancer therapies have been shown to carry an increased risk of thrombotic events [19]. High rates of cancer-associated disorders of the coagulation-fibrinolysis system have also been reported in cancer patients receiving antiangiogenic agents. In a meta-analysis of clinical trials of bevacizumab in combination with chemotherapy or interferon across a variety of cancers, the use of bevacizumab was associated with a 33% relative increase in the risk of venous thromboembolism [34]. However, the pathophysiology of anticancer therapy-associated disorders of the coagulation-fibrinolysis system is not entirely understood [18-20]. Importantly, accumulating evidence from clinical and preclinical studies has shown that conventional and targeted anticancer agents have immunomodulatory or immune-stimulatory effects and promote anti-tumor immunity [31,63-66], suggesting that these immunomodulatory anticancer agents, like ICIs, may also impact the coagulation-fibrinolysis system and increase the risk of thromboembolic/bleeding events through immune activation.
Targeting PD-1/PD-L1 Signaling: A Double-Edged Sword in Cancer

A concept of "immune normalization" for the class of drugs called ICIs has recently been proposed [7]. Anti-PD-1/PD-L1 antibodies are monoclonal antibodies selectively targeting the PD-1/PD-L1 pathway, and the mechanism of action of ICIs is thought to be restoration of lost antitumor immunity in the tumor microenvironment [7]. However, ICIs do not always shift the immune balance in a favorable direction (Figure 7). Although ICIs selectively target the PD-1/PD-L1 pathway, they do not selectively target PD-1/PD-L1 signaling between tumor antigen-specific T cells and tumor cells, because immune cells expressing PD-1/PD-L1 exist not only in the TME but also in peripheral blood and normal tissues; in addition, both PD-1 and PD-L1 are expressed not only on effector CD8+ T cells ("killer T cells") but also on a variety of immune cells, including other T cell subsets, B cell subsets, and antigen-presenting cells such as activated monocytes, macrophages, and dendritic cells [14,37,48,49,67]. Therefore, anti-PD-1/PD-L1 monoclonal antibodies can bind to various non-tumor-specific T cells or non-tumor-directed immune subsets, which may induce unwanted activation of systemic immunity [59]. This may disturb the balance established between tolerance and autoimmunity and result in various irAEs. PD-1 and PD-L1 are expressed on activated 'non-tumor-specific T cells' as well as activated 'tumor-specific T cells'. Thus, immune checkpoint blockade carries a potential risk of shifting the systemic immune balance from a tumor-specific T cell-mediated antitumor immune response to a non-tumor-specific T cell-mediated immune response in cancer patients (Figures 6 and 7A). The crosstalk between APCs and T cells plays a key role in achieving efficient anti-tumor immune responses, which can be supported by various signals derived from T cells, such as IFN-γ [68-72]. The interaction provides crucial stimulatory signals for efficient expansion and development of effector functions in T cells (the so-called "license to kill") [70,71]. APCs, including monocytes and macrophages, express both PD-L1 and PD-1 (Figure 7B) [37,67]. IFN-γ from activated T cells not only activates APCs but also strongly induces PD-L1 expression on APCs to impede T cell function and maintain immune homeostasis, even though IFN-γ is the most important cytokine implicated in antitumor immunity [68,73]. Therefore, blocking PD-1/PD-L1 signaling can activate APCs [67,73-75], and ICIs carry a potential risk of shifting the immune balance from tumor-directed monocyte/macrophage activation to non-tumor-directed monocyte/macrophage activation through T cell activation, resulting in common and unexpected irAEs such as disorders of the coagulation-fibrinolysis system [13-15] (Figures 5-7). In our current study, we demonstrated that T cell activation leads to monocyte activation and promotes the production of TF in PD-L1-high human peripheral CD14+ monocytes, suggesting that T cell activation followed by APC activation during ICI therapy may play a crucial role in triggering the diseases associated with disorders of the coagulation-fibrinolysis system.
Figure 7. Underlying mechanisms of irAEs caused by activated T cells and monocytes/macrophages. (A) A model of immune balance between tumor-specific and non-tumor-specific T cells. ICIs can activate not only tumor-specific T cells but also non-tumor-specific T cells, and thus have the potential to modulate the balance between the tumor-specific and non-tumor-specific T cell responses. PD-1 and PD-L1 are expressed on both tumor-specific and non-tumor-specific T cells. If non-tumor-specific T cells are dominantly activated by ICIs, this may lead to the onset of irAEs. (B) A model of immune balance between tumor-directed and non-tumor-directed monocyte/macrophage activation. PD-1 and PD-L1 are expressed on both tumor-directed and non-tumor-directed monocytes/macrophages. Thus, ICIs could activate both tumor-directed and non-tumor-directed monocytes/macrophages and have the potential to modulate the immune balance. If non-tumor-directed monocytes/macrophages are dominantly activated by ICIs, this may lead to the onset of irAEs such as disorders of the coagulation-fibrinolysis system.

Limitations

Our findings should be interpreted with caution in view of the limited sample size, retrospective design, short observation period, and heterogeneity of the study cohort (the ICIs used in this study and prior lines of therapy); the results need to be confirmed in larger cohorts. Because cancer-associated thromboembolic and bleeding events are common in advanced cancer patients and diverse asymptomatic disorders can be present in a single patient [18-20], it is conceivable that disorders of the coagulation-fibrinolysis system during ICI therapy could go unrecognized by clinicians, and the real incidence of the diseases associated with disorders of the coagulation-fibrinolysis system might be higher than that observed in our study. The cause-and-effect relationship between ICIs and disorders of the coagulation-fibrinolysis system is not completely proven, although we showed in vitro that T cell activation promotes production of TF in PD-L1+ monocytes.
In our current study, we showed TF expression on monocytes; however, TF can also be induced in the endothelial cells of the vessel wall and in smooth muscle cells under various pathologic conditions, and tumor cells also express abundant TF. Thus, various mechanisms of TF production should be considered in cancer patients receiving ICI therapy. Inflammatory cytokines derived from immune subsets activated by ICIs have the potential to play a key role in the pathophysiology of disorders of the coagulation-fibrinolysis system. Excessive inflammatory cytokines may induce tissue damage and endothelial injury, which could lead to TF production from various tissues. TF release from tumor cells killed by activated T cells may also trigger disorders of the coagulation-fibrinolysis system. Further studies, including monitoring of TF expression on circulating monocytes in cancer patients receiving ICI monotherapy, are needed to unveil the mechanism of thromboembolic and bleeding complications in cancer immunotherapy.

Conclusions

This is the first evidence suggesting an association between disorders of the coagulation-fibrinolysis system and immune activation by ICIs in cancer patients with PD-L1+ tumors. The present study may contribute to our understanding of the mechanism of disorders of the coagulation-fibrinolysis system in cancer patients and provide new insights into the complex interplay among cancer, host immunity, and immunotherapy.
Views and experiences of men who have sex with men on the ban on blood donation: a cross sectional survey with qualitative interviews

Objective: To explore compliance with the UK blood services' criterion that excludes men who have had penetrative sex with a man from donating blood, and to assess the possible effects of revising this policy.

Design: A random location, cross sectional survey followed by qualitative interviews.

Setting: Britain.

Participants: 1028 of 32 373 men in the general population reporting any male sexual contact completed the survey. Additional questions were asked of a general population sample (n=3914). Thirty men who had had penetrative sex with a man participated in the qualitative interviews (19 who had complied with the blood services' exclusion criterion and 11 who had not complied).

Main outcome measure: Compliance with the blood services' lifetime exclusion criterion for men who have had penetrative sex with a man.

Results: 10.6% of men with experience of penetrative sex with a man reported having donated blood in Britain while ineligible under the exclusion criterion, and 2.5% had donated in the previous 12 months. Ineligible donation was less common among men who had had penetrative sex with a man recently (in the previous 12 months) than among men for whom this last occurred longer ago. Reasons for non-compliance with the exclusion included self categorisation as low risk, discounting the sexual experience that barred donation, belief in the infallibility of blood screening, concerns about confidentiality, and misunderstanding or perceived inequity of the rule. Although blood donation was rarely viewed as a "right," potential donors were seen as entitled to a considered assessment of risk. A one year deferral since last male penetrative sex was considered by study participants to be generally feasible, equitable, and acceptable.

Conclusions: A minority of men who have sex with men who are ineligible to donate blood under the current donor exclusion in Britain have nevertheless done so in the past 12 months. Many of the reasons identified for non-compliance seem amenable to intervention. A clearly rationalised and communicated one year donor deferral is likely to be welcomed by most men who have sex with men.

Introduction

In the 1980s blood services in many countries introduced measures to prevent HIV and other bloodborne viruses from entering the blood supply. 1 2 Among these was a lifetime ban on donation by men who had ever had oral or anal sex with a man, such as the UK blood services' "MSM donor deferral" (which defined men who have sex with men (MSM) as only those who had engaged in oral or anal sex between men, a departure from the term's use in other contexts), in contrast to a year long deferral for most other high risk groups. 3 Several factors have prompted reconsideration of this rule. The number of heterosexually acquired cases of HIV infection has increased in some settings, 4 and improvements in blood screening techniques have reduced the "window period" between infection and detection, reducing the risk of HIV infected donations entering the blood supply. 5 6 The issue has attracted intense debate 2 7-12 leading, in some countries, to legal challenges to the deferral. 13 14 Opponents argue that a lifetime ban on blood donation by men who have sex with men is discriminatory, infringes individual rights, is disproportionate to risk, and reduces the supply of available blood. 9-12
Those who would retain the ban counter that the safety of the blood supply is paramount, that the lifetime ban is effective in helping to achieve this goal, and that a less stringent rule would be likely to increase the risk of transfusion-transmissible infections. 7 15 In several countries the exclusion has been reduced to a specified time since last having penetrative sex with a man (six months in South Africa 16 ; 12 months in Australia, Sweden, Japan, Hungary, and Argentina 17-19 ; and five years in New Zealand 18 ). In Spain and Italy deferral criteria now apply to high risk sexual behaviours, removing any mention of partner gender. 2 20 In the UK, Canada, and the US any man who has ever had oral or anal sex with another man (or since 1977 in North America), whether protected or unprotected, is permanently excluded from donating blood. 21 22 The study reported here was designed to explore compliance with the lifetime "MSM donor deferral" in Britain, to assess the possible effects of a revision to the rule on compliance and willingness to donate, and to inform how best any revision might be formulated and communicated.

Survey design and procedures

Between April 2009 and June 2010, we conducted a population based survey, followed by qualitative interviews with male survey respondents reporting any sexual contact with a man. We recruited a household sample of men and women in Britain aged 18 and older via the TNS-BMRB Omnibus Face-to-Face survey. 23 The Omnibus survey uses a two stage, random location sampling strategy in which age and gender stratified quota samples are drawn from randomly selected geographical districts of approximately 150 households. 24 A core of sociodemographic questions is followed by bespoke modules of questions funded by other agencies. Verbal informed consent is sought from respondents in accordance with the ethical guidelines of the Market Research Society. 25 We designed, piloted, and incorporated into the survey a 20 question module relating to blood donation, sexual practice, and sexual identity, in CASI (Computer-Assisted Self Interview) format. Male respondents were asked whether they had ever had sexual contact with a man, using a question derived from the National Survey of Sexual Attitudes and Lifestyles (Natsal). 26 Men answering yes were asked about their experience of oral or anal (penetrative) sex with a man, age at first such occasion, self defined sexual identity, history of and most recent blood donation in Britain, donation since first penetrative sex with a man, disclosure of sex with men at last blood donation, reasons for non-disclosure, reasons for not donating blood, awareness of donor deferrals, and perceptions of current and potential revisions to the MSM donor deferral. During the first two weeks of the survey, two questions about the perceived role of the UK blood services and the appropriateness of the lifetime MSM donor deferral were asked of the entire Omnibus sample (1813 men and 2101 women).

Qualitative interview

Men reporting any sexual contact with a man were asked if they would be willing to be re-contacted to take part in the qualitative component of the study. Among those agreeing and providing a contact telephone number, we included both men who had donated ineligibly under the MSM donor deferral and those who had not done so in our invitation to participate in an in depth interview.
The survey identified "non-compliers" as men who had donated blood in Britain since their first experience of penetrative sex with a man and since the introduction of the UK MSM donor deferral in 1985, and identified "compliers" as men who had had penetrative sex with a man but who had not donated blood in Britain since 1985 or since their first experience of penetrative sex with a man (if this happened after 1985). These terms are used to reflect knowing and unknowing ineligible donation. We invited all "non-compliers" who agreed to being re-contacted to interview and purposively selected "compliers" to reflect a range of demographic characteristics and to include men with and without experience of penetrative male sex in the past 12 months. We estimated that a survey sample of 1000 men reporting any male sexual experience would be required to identify a sufficient number of "non-compliers" willing to participate in the interview (based on estimates of the prevalence of same sex sexual experience and blood donation among men in Britain 21 27 ). At follow-up the confidential and anonymous nature of the research was stressed. Interviews were audio-recorded, with written informed consent, and took place in a venue and at a time of each participant's choice. We used a topic guide to explore experiences of and motivations for donating blood, including disclosure of sex with men; views on the MSM donor deferral criterion; suggestions for alternative MSM deferral criteria; likely impact of a revised criterion on donating practice and compliance; views on communicating a revised deferral; and sexual identity. Participants were given illustrative examples of countries with a five year MSM donor deferral (New Zealand), a one year deferral (Australia), and an exclusion criterion based on unprotected sex outside a regular sexual partnership within a defined time period (Spain). Interviews lasted on average 90 minutes. Participants in the qualitative component were offered a £20 voucher in recognition of their contribution to the study, as well as contact information for the UK Blood Services and services offering support on sexual health and sexual identity.

Data analysis

Survey data

Compliance status was assessed from "age at first penetrative sex with a man," "current age," "year of last blood donation," and "interview week" (calculated compliance). Where data were unavailable or chronology was unclear, we used data on reported "donation since first penetrative sex with a man" and "year of last donation" (reported compliance). Concordance between calculated and reported compliance was high (97%, κ=0.796, P<0.001). We estimated the incidence of blood donation using "year of last donation" and "interview week" (see footnote to table 3). Comparisons between sample subgroups were made with Pearson χ² tests and Fisher's exact tests. Although these tests assume a simple random probability sample, the random location design is a good approximation of this. 24 All analyses were performed using Stata 10 (Stata Corp, Texas).

Qualitative data and integrated analyses

Interviews were transcribed verbatim and anonymised. Data were managed using NVivo8 software (QSR International, 2008). Data were analysed iteratively and thematically across accounts. 28 Transcripts were reviewed by multiple authors (PG, WN, KW) to agree on the meanings of emerging codes. Relations between themes were explored and key overarching thematic areas identified.
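The "calculated compliance" classification and the concordance check described above can be reproduced with standard statistical tooling. The sketch below is our own illustration in Python (the study used Stata 10); the record field names, the toy records, and the 2×2 counts are hypothetical (the counts are chosen only to sum to the reported totals of 50 non-compliers among 474 men), and the chronology logic is a simplification of the year/week fields the authors used.

```python
# Minimal sketch (not the authors' Stata code) of compliance classification
# and the calculated-vs-reported concordance and subgroup comparisons.
from scipy.stats import fisher_exact
from sklearn.metrics import cohen_kappa_score

DEFERRAL_YEAR = 1985  # introduction of the UK MSM donor deferral

def calculated_compliance(record):
    """Classify a respondent from survey fields (field names hypothetical).

    'non-complier': donated blood after first penetrative sex with a man
    and after the deferral came into force; otherwise 'complier'.
    Returns None when the chronology cannot be established.
    """
    first_sex_year = record.get("year_first_penetrative_sex")
    last_donation_year = record.get("year_last_donation")
    if first_sex_year is None or last_donation_year is None:
        return None
    ineligible_from = max(first_sex_year, DEFERRAL_YEAR)
    return "non-complier" if last_donation_year >= ineligible_from else "complier"

# Toy records for illustration only.
records = [
    {"year_first_penetrative_sex": 1990, "year_last_donation": 1995,
     "reported": "non-complier"},
    {"year_first_penetrative_sex": 2000, "year_last_donation": 1998,
     "reported": "complier"},
]
calc = [calculated_compliance(r) for r in records]
rep = [r["reported"] for r in records]
print(cohen_kappa_score(calc, rep))  # the study reported kappa = 0.796

# Subgroup comparison in the style of the paper's Fisher's exact tests:
# ineligible donation among men with recent vs non-recent male penetrative
# sex (hypothetical 2x2 counts: [non-compliers, compliers] per row).
table = [[8, 219],    # recent (past 12 months)
         [42, 205]]   # longer ago
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```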
Survey and qualitative findings were integrated at the interpretation stage to address the key research objectives and identify overarching meta-themes. 29 30

Characteristics of the sample

Three per cent of men responding to the Omnibus survey (1028/32 373) reported having had any sexual contact with a man and were routed to the blood donation module of the questionnaire (figure). Their mean age was 41.8 years (SD 16.86). Compared with men reporting no such experience, they were younger, of higher socioeconomic status and educational level, and more commonly single (table 1). Five per cent of male Omnibus survey respondents (1634) did not answer the question on male sexual contact and, compared with those who did, were older (41% v 24% aged ≥65), had lower socioeconomic status (40% v 30% in the two lowest socioeconomic groups), had lower educational level (32% v 20% had not completed secondary education), and were more commonly divorced, widowed, or separated (19% v 11%). Just under half of men reporting male sexual experience (457/1028) agreed to be re-contacted. Eighty eight were selected for in depth interview, of whom 30 participated, 13 declined, and 45 could not be contacted (see figure). The average age of participants in the qualitative interviews was 42 years (range 21-71 years). They represented a range of sociodemographic characteristics and included eight "non-compliers" with the MSM donor deferral, three possible "non-compliers" (the chronology of first penetrative sex with a man and last blood donation was unclear, but there was some indication that they had donated since becoming ineligible), and 19 "compliers." All (possible) "non-compliers," however, were aged ≥35, had not donated blood in the past two years, and did not intend to do so again. Almost half of male survey respondents reporting any same sex sexual contact had ever had penetrative sex with a man; just over half of these (227/489) had done so in the previous 12 months, and 23% described their sexual identity as "straight" or heterosexual (table 2).

Compliance with the lifetime MSM donor deferral

Of the 474 male survey respondents who reported experience of male penetrative sex, 50 (10.6%) had donated blood in Britain since becoming ineligible under the MSM donor deferral ("non-compliers") and 11 (2.5%) had donated in the previous year (table 3). Ineligible donation was significantly less common among men who had had male penetrative sex recently (in the past 12 months) than among those who had last done so longer ago (table 3), and among those who self identified as gay or homosexual compared with those who did not (5.2% v 15.5%, P<0.001). There were no differences in age, socioeconomic status, education, country of residence, or ethnicity by compliance status (data not shown). "Non-compliers" described their reasons for having donated blood ineligibly in the qualitative component. Some had discounted the blood services' risk-based exclusion on various grounds: that they practised safer sex or knew their own risk status, a belief in the infallibility of blood screening procedures, or feelings of resentment over the unfairness of the exclusion in the absence of an equivalent for heterosexual practices.
"I just said 'No' for that question … for whether I'd had sex with men.… I knew I shouldn't but I did because I knew for a fact that my blood was healthy; I didn't have HIV … and I also know the fact that any sex that I did have I always took protection."-"Non-complier," interview 13 Others had discounted the experience that barred them from donating blood. This was particularly the case for men whose experience of sex with men had happened far enough in the past or just once to be considered insignificant to current risk status. A non-consensual sexual experience was too distressing to recall at the time of blood donation. "I answered 'No' [to the screening question asking about sex with a man].... I disowned it, um, because I was abused and raped.… It didn't happen as far as I was concerned at that time."-"Non-complier," interview 5 For some, there was a reluctance to assume an identity associated with sex between men. For those who were, at the time, not open about their sexual practices or identity, the need for discretion had deterred self deferral in a public setting: "They didn't know about me [my sexuality] till I was 25 ... and we [father and son] worked together.… It would have been very difficult to say to dad, 'No, I can't go and donate blood.'"-"Non-complier," interview 20 A lack of clarity regarding the terms of the MSM donor deferral was also a barrier to compliance for some ineligible donors. Knowledge and awareness Survey findings revealed extensive lack of awareness of the rules relating to donation by men who had sex with men. Only one in four men with any experience of male sexual contact was aware that having had penetrative sex with another man barred donation. Almost a third believed that only unprotected penetrative male sex was a criterion for deferral. One in four did not know which groups were excluded (table 4⇓). The proportion of "compliers" who cited having had any male sexual contact as their reason for not donating blood was almost as high as the proportion citing having had male penetrative sex (table 3⇓). A high degree of confidence was expressed in the certainty of medical science. Nearly half of all men with experience of male penetrative sex held that they would donate blood regardless of the rules because they believed the screening of blood to eliminate risk (table 4⇓). This confidence in blood testing procedures was a common argument made by participants in the qualitative component against the lifetime MSM donor deferral. The possibility of administrative error resulting in the release of screened but infected blood was rarely mentioned and awareness of the "window period" between infection and detection, and its implications for screening effectiveness, was limited. Views on the existing lifetime MSM donor deferral Of the sample of 3914 men and women responding in the first two weeks of the Omnibus survey, most (1425 (78.6%) men, 1672 (79.6%) women) were of the view that the role of the blood services was to protect the blood supply rather than individual rights, agreeing with the statement: "The aim of the blood donation service is to make sure the country's blood supply is safe and free from infection, not to enable anyone who wants to to give blood." 
At the same time, a sizeable minority (38.5% men and 43.5% women) saw the MSM donor deferral as inflexible and excessive, agreeing that: "The current ban on gay men seems too rigid; it doesn't make sense for a man who has had a single homosexual experience even before the HIV epidemic to be banned from being a blood donor." Among men with experience of male sexual contact, less than half agreed that the lifetime ban should be retained to ensure the safety of the blood supply (table 4). Their views were further elaborated in the qualitative research. Although blood safety was recognised as the primary priority of the blood services, the ban was seen as outdated and founded more on the need for public reassurance than on current scientific evidence. In view of the perceived absence of an equivalent deferral relating to high risk heterosexual practices, the MSM donor deferral was described as "unfair" and, by some, "discriminatory."

"There's a lot of … STIs and HIV and everything in relationships that don't involve male-male sex so it's really discriminatory … maybe it's the stereotype of guys that sleep around.… It's kind of offensive for me ... I've probably had less sexual encounters than most straight women or men."-"Complier," interview 6

The lack of transparency in the rationale for the exclusion was seen to undermine confidence in its scientific basis. The inclusion of oral and protected sex was considered by some as contrary to safer sex messages. Of widespread concern was the "blanket" nature of the ban and its failure to distinguish between lifestyles conferring different risk status.

"You have gay men, bisexual men, men who identify as heterosexual but have a bit of, you know, a dalliance every now and again, you have that whole range … you have promiscuous people, monogamous people, celibate people. You can never have ... sexual behaviour as a homogenous group."-"Complier," interview 4

Qualitative interviews also revealed tensions between concern for the right of the individual to donate blood and the protection of public health. Although giving blood was rarely considered a right, there was a strong sense that all potential donors were entitled to a considered assessment of risk based on current scientific evidence. Some saw the blood services as failing to benefit from potential supplies of usable blood by excluding healthy donors.

Views on revision of the MSM donor deferral

An individual risk assessment approach, taking account of risk status and risk reduction practice, regardless of one's own or one's partner's gender, was widely considered to be the ideal replacement for the current MSM donor deferral.

"It [would] no longer discriminate against a group of people, it makes it more sensible in a way that if a man's had sex with 50 women then I think he's a lot more at risk than if another man's had sex with two men, kind of thing. So ... it's fairer and more acceptable."-"Complier," interview 22

This option was, however, acknowledged to involve more in depth questioning that would be costly, complex, and a potential deterrent to the wider donor population. A one year MSM deferral (since last sex with a man) was viewed as a generally acceptable, equitable, and sufficiently cautious alternative.
"It's a step in the right direction and it would bring sex with men into the same category as other increased risky sexual behaviours."-"Complier," interview 4 A five year MSM deferral was typically dismissed as "tokenistic" and designed rather to appease gay and bisexual men than to take account of current epidemiological evidence. Such a revision was thought likely to have little impact on the number of eligible donors while retaining the potential to provoke negative public reaction. A one year deferral, on the other hand, was considered more scientifically sound, accounting conservatively for the window period for infections and any risk of men donating towards the latter part of a deferral period. Alignment with donor deferrals for most other high risk groups, and with other countries, was an important consideration for some, who felt that discrepancies undermined confidence in the current exclusion criteria. "It should be consistent. The world ... is a smaller place. We all travel consistently ... you can't tell me the rules in one country should be different to the rules in another."-"Non-complier," interview 20 Possible response to a revised criterion Roughly half of survey respondents with experience of male sexual contact held that a changed criterion would not affect their motivation to donate blood (table 4⇓). There was no significant difference by compliance status. One in three felt they would be more likely to give blood under a revised criterion because the lifetime MSM donor deferral had served as a deterrent in the past. Roughly the same proportion saw themselves as more likely to donate under a revised rule because of newly conferred eligibility, and the proportion was significantly higher among current "compliers" than "non-compliers" (P=0.030) ( Those who would remain ineligible foresaw little effect on their donating behaviour. Although most participants felt they would continue to comply with a revised donor deferral criterion, many were reluctant to speculate on other men's likely future compliance. Although some felt that a revised criterion may encourage donation towards the latter part of a deferral period-on the basis of perceived low risk to the blood supply-this was seen as avoidable by providing a clear rationale for the rule and taking this concern into account when setting deferral periods. Views on the communication and implementation of a revised criterion Less than half of survey respondents reporting male sexual contact considered the current donor deferral rules to be clear and easy to understand, and almost three quarters felt that more explanation was required regarding eligibility criteria (table 4⇓). Clear and transparent communication of the rationale for deferral was considered essential by participants in the qualitative research, both to facilitate compliance and to reassure excluded groups that the criterion was founded on evidence rather than prejudice. Confidentiality was seen as a vital issue. Concerns were raised regarding the reliability of self reported information on sexual behaviour in the semi-public setting of blood donation sessions. probably not the best place to do it ... on the day in the actual centre?"-"Non-complier," interview 9 An online screening questionnaire, which could be completed privately and submitted remotely in advance of donation sessions, was seen as a preferable means of ensuring anonymity. 
In terms of communication, participants favoured a broad advertising strategy combined with tailored information targeted at men identifying themselves as gay or bisexual, and potential donors. Generic messages were considered appropriate for mainstream advertising, while more targeted messages could make specific reference to sexual practices resulting in deferral. Web based resources were seen as an important source of additional information for those who remained unsure of their eligibility.

Summary of findings

This is the first published study reporting experience of and views on blood donation deferral criteria in a general population sample of gay, bisexual, and other men who have had sex with men. Our data show a sizeable minority of men to have donated blood since having had penetrative sex with a man. In depth accounts suggest that this experience had often happened some considerable time in the past or on a single or rare occasion. Of smaller size, but of greater concern, was the proportion of men who had donated ineligibly in the previous year. Reasons for non-compliance included self categorisation as low risk, discounting the sexual experience that barred donation, need for discretion around sexual identity or practice, misconceptions relating to procedures safeguarding blood, misunderstanding of the exclusion criterion, and resentment over its perceived inequity. Awareness of the terms of the MSM donor deferral was disquietingly limited. The current lifetime exclusion was criticised by the men in our study, who saw it as inequitable, discriminatory, and, above all, lacking a clear rationale. Taking into account issues relating to feasibility, equity, and scientific coherence, a one year deferral since last male penetrative sex was seen as a preferable alternative. A five year MSM donor deferral was not considered acceptable, and the lack of support for this option does not augur well for implementation or compliance.

Strengths and limitations

Our study had a number of limitations. To obtain a sufficiently large population based sample of men with experience of male sexual contact cost effectively, we used a frequently repeated Omnibus survey with a random location design rather than a true random probability survey. This inevitably has the potential to introduce bias into the sample. The prevalence of reported male sexual experience in our study was considerably lower than that in the National Survey of Sexual Attitudes and Lifestyles in 2000 (8.4%), 31 possibly reflecting differences in methodology and age range. Differences in the characteristics of men who did or did not respond to the question regarding male sexual contact raise further concerns in this context, particularly if these characteristics are also associated with propensity to disclose sexual practice or comply with the donor deferral. The CASI format of the questionnaire is likely to have facilitated disclosure, but our estimates of non-compliance with the MSM donor deferral and of same sex experience among men should be treated with caution. Further, the inclusion in the qualitative sample of only past, and not current, "non-compliers" aged 35 years and older who were now acting in compliance with the MSM donor deferral meant that the voices of younger men and those who had donated ineligibly more recently were not heard.
Nevertheless, our general population sample has advantages over convenience samples recruited from gay venues, notably its capacity to capture the views and experiences of men who may not be publicly open about their sexual experience with men. Although the qualitative findings reflect a purposive sample of men and so cannot be generalised to the wider population of gay, bisexual, and other men who have sex with men, this mixed methods design allowed us to build on the findings of a population based survey to gain an in depth understanding of these men's experiences and views of the MSM donor deferral.

Conclusions and policy implications

The crucial question is what impact a revised MSM donor deferral criterion might have on compliance. Previous research investigating male donors' history of sex with men has been unable to capture the experiences of men who currently do not donate but who would become eligible under a less restrictive deferral. 32 We are not able to predict with certainty how donation behaviour may change under revised criteria. However, according to our data, men who would remain ineligible to donate under a one year MSM donor deferral were less likely to have ever not complied with the lifetime blood donation ban. Encouragingly, many of the barriers to compliance identified in our study seem amenable to intervention. Our data suggest considerable scope for improving the clarity with which deferral criteria are communicated; the privacy afforded potential donors in disclosing sexual behaviours; and the adequacy of explanation of the rationale for donor deferrals, including the fallibility of blood screening. A crucial finding, that some "non-compliers" dissociated past male sexual experience from the MSM donor deferral, has important implications for communication of any deferral criterion. It is critical that health professionals consider the possibility that men who do not identify themselves as gay, bisexual, or "men who have sex with men" may not absorb health information targeted at these groups. In Sweden, where rates of and reasons for non-compliance among gay and bisexual men are similar to those found in this study, 33 blood safety is predicted to be optimal under a one year MSM deferral (compared with other deferral periods) on the basis of anticipated improved compliance. Early data from Australia comparing blood donations before and after the change from a five year MSM donor deferral to a one year deferral have shown no significant increase in the prevalence of HIV infected donations or in the proportion of infected donors reporting sex with men, although the sample was small and the observation of more new infections in donors reporting sex with men needs further attention. 18 Should a revision be made to the MSM donor deferral criterion in the UK, careful monitoring of its effects will be needed. Replacing the lifetime MSM donor exclusion with one seen as fairer is likely to be welcomed by most gay, bisexual, and other men who have sex with men. Increased endorsement by the constituency in question might improve compliance rates, particularly among men who currently donate ineligibly owing to perceived discrimination.
If the lifetime MSM donor deferral does indeed have the potential to reinforce public prejudice towards gay and bisexual men, a concern raised by opponents of the ban 9-11 and echoed by men in this study, then a revised rule might go some way towards addressing the negative social attitudes that may deter men from disclosing sexual practices and complying with the deferral rule. The findings of this study are intended to inform a review of UK blood donation policy with regard to men who have sex with men and to show the importance of consulting the constituent group before introducing changes to public health policy. Our findings are likely to be of value not only in the UK but in other countries considering changes to policy regarding blood donation by men who have sex with men.

We thank all study participants, including volunteers in the piloting phase; Sarah Shepherd and colleagues at TNS-BMRB Omnibus for survey fieldwork; Ben Armstrong for statistical advice; Rachael Parker for administrative support; the Terrence Higgins Trust for assistance in piloting the questionnaire; and the Terrence Higgins Trust, Stonewall, and the National AIDS Trust representatives for advice on dissemination. We also thank the Health Protection Agency for commissioning the project, Brian McClelland for initiating this research, Su Brailsford for commenting on the final study report, and the Department of Health for funding the study.

Contributors: KW led the study design, with contributions from SM, KS, and JD, and contributions to the development of data collection tools by PG and WN. PG led the data collection and analysis, with contributions from WN, KS, JD, and SM. All authors contributed to data interpretation and revisions to the manuscript, the first draft of which was prepared by PG. All authors had full access to the data derived from the study and can take full responsibility for the integrity of the data and the accuracy of the data analyses. KW is guarantor for the study.

Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that WN worked for the Terrence Higgins Trust until 2009 and continues to undertake consultancy work for the organisation; KS is employed by the UK Health Protection Agency and is the budget holder for other funding received from the Department of Health for work unrelated to this submission. There is no personal or financial gain attached to this publication.

Funding: The Department of Health funded the study and had no involvement in study design, data collection, analysis, or the decision to submit for publication. The views expressed are not necessarily those of the Department of Health.

Data sharing: No additional data available.
What is already known on this topic

Several countries have revised their policy regarding blood donation by men who have sex with men (MSM)

Improved blood screening techniques and epidemiological knowledge have prompted reconsideration of the lifetime MSM blood donor deferral in the UK

What this study adds

A one year donor deferral was preferred by men who have sex with men to the existing lifetime deferral in the UK on the basis of perceived rationality and equity

Improvements to communication and confidentiality, and a clear explanation of the rationale, will be essential for compliance with and acceptability of a revised MSM donor deferral criterion

[Footnotes to tables 3 and 4, partially recovered: MSM donor deferral = UK blood services' lifetime ban on donation by men who had ever had penetrative sex with a man; "non-compliers" and "compliers" refer to compliance with that ban. "Non-compliers" had ever donated blood since first penetrative sex with a man and since the deferral was introduced; cases in which neither reported nor calculated compliance could be established (n=27) were excluded from all compliance related analyses. Attitudinal items used a 5 point Likert scale, with agreement combining "strongly agree" and "tend to agree"; denominators exclude non-response; multiple response items may sum to more than 100%. Geographic scope includes England, Scotland, and Wales.]
Photonic perceptron at Giga-OP/s speeds with Kerr microcombs for scalable optical neural networks

Optical artificial neural networks (ONNs) have significant potential for ultra-high computing speed and energy efficiency. We report a novel approach to ONNs that uses integrated Kerr optical micro-combs. This approach is programmable and scalable and is capable of reaching ultra-high speeds. We demonstrate the basic building block of ONNs, a single neuron perceptron, by mapping synapses onto 49 wavelengths to achieve an operating speed of 11.9 × 10^9 operations per second, or Giga-OPS, at 8 bits per operation, which equates to 95.2 gigabits/s (Gbps). We test the perceptron on handwritten-digit recognition and cancer-cell detection, achieving over 90% and 85% accuracy, respectively. By scaling the perceptron to a deep learning network using off-the-shelf telecom technology we can achieve high throughput operation for matrix multiplication for real-time massive data processing.

We report an optical neural network consisting of a single perceptron that operates with an integrated optical Kerr micro-comb source. The system achieves a single processor throughput of 11.9 Giga-OPS, equivalent to 95.2 Gigabits/s. We demonstrate benchmark tests including cancer cell diagnosis and handwritten digit recognition. We outline different approaches to scale the network to deep learning ONN architectures with significantly increased processing power and throughput speed. This is possible because of the high level of parallelism that can be realized via simultaneous time, spatial, and wavelength multiplexing. Our approach has significant possibilities for real-time analysis of high dimensional data for advanced applications.

Introduction

Artificial neural networks (ANNs) have achieved significant success in making predictions and achieving simple representations of complex and high dimension data. When sufficient data are used for training, ANNs can outperform computational algorithms [1-5] and even humans for many tasks, ranging from the recognition of images to translation of languages, risk assessment and, intriguingly, complex board games [6]. The speed and computational power of ANNs are determined by matrix multiplication operations, or multiply-and-accumulate (MAC) operations. Electronic ANN chips include the IBM TrueNorth and Google TPU chips [7,8]. They use extremely large-scale processor arrays, including the systolic array [8], to enhance parallelism and achieve operational speeds greater than 180 × 10^12 floating point operations per second (Tera-FLOPS). However, in spite of this performance, since they are electronically based they are still subject to relatively inefficient digital protocols and bandwidth bottlenecks arising from the von Neumann architecture [9]. In fact, each individual processor is limited in speed to only about 700 MHz [9,10]. Photonic approaches towards ANNs, or optical neural networks (ONNs), are next generation neuromorphic processors and are currently attracting extremely high levels of interest [11-17]. They are highly promising since they offer the potential to achieve extremely high processing speeds [3]. The key is to achieve the weighted synapses that connect the nodes and neurons.
In contrast to electronic digital systems that store the synapses in memory, photonic systems operate by realizing actual physical embodiments of synapses, whose number determines the scale of the network and depends on the physical parallelism, which is fundamentally analog in nature. ONNs have achieved substantial success by using a number of approaches to multiplex synapses in parallel. Schemes based on spatial multiplexing include coherent integrated photonic chips [3] and diffractive bulk optics [17]. These have successfully achieved classification of alphabetic-numeric characters, including handwritten digits and vowels. Furthermore, they have achieved low power levels, although they face tradeoffs between system footprint and processing power, determined by the number of synapses or degree of parallelism. There are a number of other ways to realise ONNs, including reservoir computing [18-21] as well as spike processing [22-25]. These both use sophisticated schemes to multiplex the synapses and are both very compact. Photonic reservoir computing multiplexes the synapses in time to achieve very large scale systems with hundreds of input layer nodes. Conversely, spike processors have successfully achieved pattern recognition by using phase change materials in integrated devices [24]. This approach operates via wavelength division multiplexing (WDM) and also benefits from a dynamically reconfigurable operation bandwidth [17-19]. Despite the success of these approaches, they still face limitations. Temporal multiplexing is challenging to dynamically train and to scale up to deep learning systems with multiple layers. Spike processing is limited in the degree of parallelism it can achieve because it relies on arrays of discrete laser diodes. The combined use of temporal, wavelength, and spatial multiplexing has the greatest potential to achieve the highest combination of processing power, operation speed, and scale of the network, and this is what our approach uses.

Perceptron

Here, we propose [11,12] a novel scheme for ONNs based on integrated micro-combs to achieve simultaneous temporal, spatial, and wavelength multiplexing, which we then use to perform the dot product of vectors. We perform matrix operations by first flattening the matrices to convert them into vectors at high data rates. Our system is capable of dynamic training and its network structure is highly scalable. We demonstrate a single photonic neuron perceptron with 49 synapses, or wavelengths, using the microcomb. Our fundamental building block for ONNs achieves a speed for matrix multiplication of 11.9 billion operations per second (Giga-OPS), which equates to 95.2 Gigabits/s for 8-bit operations. We do this via simultaneous synapse weighting in the wavelength domain and scaling of the input data in the temporal domain. The device is applied to benchmark tests that include handwritten digit classification, where we obtain an accuracy greater than 93%, and the prediction of cancer classes, distinguishing malignant from benign cases based on a feature set extracted from microscope images of biopsied tissue. We obtain an accuracy of greater than 85% for cancer classification. Figure 1 depicts the mathematical model of the neuron perceptron [25] and Figure 2 outlines the experimental setup, which uses a Kerr optical micro-comb source. The perceptron is based on wavelength multiplexing of 49 microcomb wavelengths, performed simultaneously with temporal multiplexing, with each weighted wavelength forming a single synapse.
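The headline throughput figures can be cross-checked with simple arithmetic. The check below is our own back-of-envelope calculation, using the parameters stated in the text (N = 49 synapses, a symbol period of 84 ps at 11.9 Gigabaud, and 8 bits per symbol, as detailed in the next section):

```latex
% Back-of-envelope check of the quoted throughput (our arithmetic; the
% parameters N = 49, tau = 84 ps, and 8 bits/symbol are from the text).
\begin{align*}
  \text{waveform duration} &= N\tau = 49 \times 84\,\mathrm{ps} \approx 4.12\,\mathrm{ns},\\
  \text{throughput} &= \frac{N\ \text{MAC operations}}{N\tau} = \frac{1}{\tau}
                     \approx 11.9 \times 10^{9}\ \mathrm{OPS},\\
  \text{bit rate} &= 11.9 \times 10^{9}\ \mathrm{OPS} \times 8\ \mathrm{bits}
                   = 95.2\ \mathrm{Gbps}.
\end{align*}
```

In other words, one multiply-and-accumulate is completed per 84 ps symbol slot, so the processing rate equals the modulator's symbol rate.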
The main operation consists of matrix multiplication with vectors formed from flattened matrices. The matrix multiplication occurs between the electronic image input data and the synaptic weights, and is performed in multiple steps using photonics. The input data for classification consist of 28 × 28 electronic digital matrices with 8-bit grey-scale intensity resolution, which are initially down-sampled digitally into 7 × 7 matrices that are then reorganized into 1D vectors X = [x(1), x(2), …, x(49)], which are multiplexed sequentially in the temporal domain by an electronic high-speed D/A converter at 11.9 Gigabaud. Here, each symbol corresponds to the 8-bit pixel input data and occupies one time slot 84 ps in length. Hence, the whole duration of the waveform is N × τ ≈ 4.12 ns with N = 49. In approaches based on digital electronics, the neural network input nodes usually reside in electrical memory and are routed according to memory address. By comparison, the input nodes for the ONN are temporally defined by multiplexing the symbols, which are then routed according to their location in time.

Following this, the electrical input waveform, which is a temporally multiplexed signal, is broadcast via an electro-optic modulator onto all 49 wavelengths (equal to the number of elements of the vector X), the wavelengths being generated by the micro-comb. Here, each comb line carries an identical copy of X, the time-domain multiplexed input data waveform. Every comb line's power is then adjusted by an optical waveshaper, with the weights determined by the synaptic weight vector W = [w(1), w(2), …, w(49)] obtained during training. This effectively multiplexes the synaptic weights in wavelength. If W and X are both 49-element vectors, then the weighted replicas of the input vector X form the matrix

[ w(1)x(1)   w(1)x(2)   …  w(1)x(49)  ]
[ w(2)x(1)   w(2)x(2)   …  w(2)x(49)  ]
[    ⋮           ⋮       ⋱      ⋮      ]
[ w(49)x(1)  w(49)x(2)  …  w(49)x(49) ]

where the nth row (n ∈ {1, …, N}) corresponds to the temporal weighted waveform replica on the nth wavelength. The diagonal components represent the N weighted input nodes, so that the nth weighted input node is the 8-bit symbol w(n)·x(n) that occupies the nth time slot on the nth wavelength. After this, the replicas are transmitted through a medium that provides a dispersive delay equivalent to 2nd-order dispersion, to sequentially delay the weighted replicas and align the diagonal components into the same time window, with the delay step given by Δt = delay(λ_k) − delay(λ_(k+1)). The dispersive delay therefore acts as an addressable time-of-flight memory that lines up the progressively weighted time-dependent symbols w(1)·x(1), w(2)·x(2), …, w(49)·x(49) across all wavelengths in a single time slot.

While this process, as implemented here, does not enhance the network speed because it only uses the diagonal components, in principle a significant increase in speed can be obtained by scaling the network to deep (multi-level) structures through the use of parallel wavelengths as well as time and spatial multiplexing. Finally, the intensities of all of the optical signals in each time bin are summed via sampling and detection to produce the result of the matrix multiplication (equivalent to a dot product of 49×1 vectors for the case of 7×7 matrices) of the neuron, given by

y = Σ_{n=1}^{N} w(n)·x(n) = W·X.

After the matrix multiplication, the summed, weighted output is mapped into a desired range by a nonlinear sigmoid function. In this initial demonstration we implement this last function using digital electronics, which generates the output of the single neuron perceptron, although in principle it could readily be achieved all-optically. Finally, the predicted category of the input data is produced by comparing the neuron output against the decision boundary, a 49-dimensional hyperplane generated during digital learning carried out offline prior to the experiments. The input data can thus be separated into two categories.
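To make the broadcast, weight, delay-and-sum procedure concrete, here is an idealized numerical emulation of the signal flow in discrete time slots: the input waveform X is copied onto N wavelength channels, each copy is scaled by its weight w(n), channel n is delayed by N−1−n slots (the role played by the dispersive fibre), and the photodetector sums all channels in each slot, so that the diagonal terms w(n)·x(n) align in the centre slot, whose value is the dot product W·X. The array sizes follow the paper, but the model itself is a noise-free sketch, not the experimental system.

```python
import numpy as np

def photonic_dot_product(x, w):
    """Emulate the ONN core: broadcast x onto len(w) wavelength channels,
    weight each channel, apply a progressive one-slot-per-channel delay,
    and sum across channels per time slot (photodetection)."""
    N = len(w)
    T = 2 * N - 1                         # output sequence: 48+1+48 slots for N=49
    detected = np.zeros(T)
    for n in range(N):                    # n-th comb line
        delay = (N - 1) - n               # dispersive delay in whole time slots
        detected[delay:delay + N] += w[n] * x   # weighted, delayed replica summed
    # Slot N-1 holds sum_n w[n]*x[n]: the aligned diagonal, i.e. the dot product
    return detected[N - 1], detected

rng = np.random.default_rng(1)
x = rng.random(49)
w = rng.normal(size=49)
mac_result, trace = photonic_dot_product(x, w)
assert np.isclose(mac_result, np.dot(w, x))   # sanity check against direct W.X
print(f"centre-slot value = {mac_result:.6f}")
```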
Micro-combs Based On Soliton Crystals

Kerr optical micro-combs [26-33] have achieved many breakthroughs, including optical frequency synthesis [29], ultra-high bit-rate data transmission [30], generation of advanced quantum states [31], high-level RF signal processing [32], and more. They provide the full capability of mainframe optical frequency combs [34], although in a fully integrated form that has a much more compact footprint as well as the potential to scale the network in power, reliability, and performance [35-42]. The new platforms developed for optical microcombs [27,43,44] have much lower nonlinear absorption than other nonlinear platforms such as chalcogenide glass and semiconductors.

We use a microcomb that operates via soliton crystal states [68,69], produced in integrated ring resonators. Soliton crystals display deterministic generation induced by mode crossings that produce a background wave, with all of these processes sustained by the Kerr nonlinearity. For soliton crystals, there is very little nonlinear pump-induced shift in the resonance that would otherwise require difficult dynamic pumping schemes of the kind that DKS solitons require [26]. This is because the intracavity power of the soliton crystal state is virtually the same as the power of the chaotic state from which it is formed. Hence, very little power jump occurs when they are generated, which allows a reliable and simple method of initiation achievable by simple adiabatic, even manual, tuning of the pump wavelength [68]. This same effect also yields a much higher energy conversion efficiency from pump to comb line [63]. Soliton crystals have enabled a multitude of photonics-based RF and microwave signal processing functions [11,12,32].

The integrated ring resonators were made from Hydex glass, a CMOS-compatible platform [27] (Figure 3). They had a high Q factor of 1.5 million and a 48.9 GHz FSR, with a chip-to-fibre coupling loss of 0.5 dB/facet achieved by on-chip mode converters. The waveguide cross-section of 3 μm × 2 μm produced anomalous dispersion with a mode crossing near 1552 nm. To generate the soliton crystals, a CW pump laser (Yenista Tunics-100S-HP) was employed, with the power amplified to 30 dBm by an optical amplifier (Pritel PMFA-37) and the wavelength swept manually from short to long wavelengths (blue to red) near a resonance. The acquired soliton crystal optical spectra are shown in Figures 3 and 4. We note that by locking the pump wavelength to the resonance of the MRR, the stability of the microcomb can be further enhanced, to the point where it could even serve as a frequency standard [29]. Figures 3f and 4a show the progression from the initial onset of primary combs, to chaotic combs that are not mode-locked, to, finally, soliton crystal combs. Also shown in Figure 4b is the range of different soliton crystal states that can be obtained by adjusting the pump offset (to the nearest resonance) as well as the overall pump wavelength. Figure 3 also indicates that the power jump in transitioning from the chaotic state to the soliton crystal state is extremely small; this arises because the two states have very similar power levels and is a key reason for the stability of soliton crystals.
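For orientation, the resonator figures quoted above translate into a few derived quantities via standard relations (linewidth ≈ ν/Q, cavity photon lifetime ≈ Q/2πν, comb-line count × FSR = optical span). The back-of-envelope script below evaluates these for the stated Q of 1.5 million and 48.9 GHz FSR at a 1550 nm pump; the derived numbers are our own estimates, not values reported in the text.

```python
import math

C = 299_792_458.0            # speed of light, m/s

wavelength = 1550e-9         # pump wavelength, m
nu = C / wavelength          # optical carrier frequency, Hz (~193.4 THz)
Q = 1.5e6                    # loaded quality factor (from the text)
fsr = 48.9e9                 # free spectral range, Hz (from the text)

linewidth = nu / Q                        # resonance linewidth, Hz
photon_lifetime = Q / (2 * math.pi * nu)  # cavity photon lifetime, s
span_49 = 49 * fsr                        # optical span covered by 49 comb lines

print(f"carrier frequency      ~ {nu/1e12:.1f} THz")
print(f"resonance linewidth    ~ {linewidth/1e6:.0f} MHz")
print(f"cavity photon lifetime ~ {photon_lifetime*1e9:.2f} ns")
print(f"49 comb lines span     ~ {span_49/1e12:.2f} THz")
```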
Experiment

As discussed above, the multicasting of the waveform was achieved via simultaneous intensity modulation of all of the wavelength channels supplied by the shaped comb lines. Hence, the optical signal at the kth (k = 1, 2, …, 49) channel was w(k)·X. The delay for the optical signals at all wavelength channels was provided by 13 km of dispersive single-mode fibre, which generated a time delay of (49 − k) × τ for the kth channel, with τ measured to be 84 ps. Thus, the optical signals were progressively shifted in the time domain. The optical signal after the single-mode fibre was converted to the electrical domain by a photodetector (Finisar VPDV2120), and the waveform was then measured by a high-speed oscilloscope (Keysight DSOZ504A). The sampled output of the photodetector was added to the bias symbol and rescaled in intensity by the reference symbol to extract the recovered output of the ONN and locate the hyperplane (a trained subspace in the high-dimensional space of the input data, which serves as a decision boundary separating the different classes of data).

During the experiment, the 7 × 7 grey-scale data of the handwritten digit figures were first converted into a one-dimensional array X = [x(1), x(2), …, x(49)] by assembling each column head-to-tail. A 49-symbol waveform was then generated and coded with the intensity at each time slot in proportion to the value of X at the corresponding position; thus the input data X were multiplexed in the time domain. The 49-symbol waveform was generated by an arbitrary waveform generator (Keysight M8195A), which supported a sample rate of 65 Giga-Samples/s and an analog bandwidth of up to 25 GHz. To acquire stepwise waveforms for the input nodes, we used 5 sample points at 59.421642 Giga-Samples/s to form each single symbol of the input waveform, which also matched the progressive time delay τ (84 ps) of the dispersive buffer.

The optical power of the 49 microcomb lines was shaped according to the intensities of the pre-trained neuron weights W = [w(1), w(2), …, w(49)]. We shaped the comb lines' power with a programmable optical spectral shaper based on liquid-crystal-on-silicon technology (Finisar WaveShaper 4000S), which could dynamically reconfigure the ONN connections within 500 ms with a resolution of 1 GHz. Two stages of programmable optical spectral shapers were employed for a larger dynamic range of loss: the first WaveShaper was used to flatten the microcomb, while the second was used to apply the pre-trained neuron weights. A feedback loop was used to enhance the shaping accuracy, in which the comb lines' power after shaping was measured by an optical spectrum analyser (Anritsu MS9710C) and compared with the pre-trained weights to generate an error signal for the calibration of the WaveShapers' loss characteristics.

Figure 5 shows the time-domain multiplexed input layer for the cancer diagnosis test. The generated 11.9 Gigabaud data stream of the 75 encoded feature sets shows the 30-symbol encoded data for each set and 3 symbols padded for post-measurement, including a trigger symbol to trigger the oscilloscope, a reference symbol to calibrate the reference level, and a bias symbol encoded with the pre-trained bias to locate the decision boundary.
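The construction of the electrical drive waveform can be mimicked in a few lines: flatten the 7×7 image column-wise head-to-tail, quantize to 8 bits, repeat each symbol over 5 AWG samples (one 84 ps symbol), and append the trigger, reference and bias symbols used for post-measurement calibration. The padding levels below are illustrative placeholders, since the text does not give their exact values.

```python
import numpy as np

SAMPLES_PER_SYMBOL = 5       # 5 AWG samples per 84 ps symbol, as in the experiment

def encode_input_waveform(image7x7, bias_symbol=0.5):
    """Build the stepwise drive waveform for one 7x7 input image.
    Column-wise head-to-tail flattening -> 49 symbols, 8-bit quantized,
    then trigger/reference/bias padding (placeholder levels)."""
    x = image7x7.flatten(order="F")                  # columns assembled head-to-tail
    x = np.round(x * 255) / 255                      # 8-bit intensity quantization
    trigger, reference = 1.0, 1.0                    # placeholder calibration levels
    symbols = np.concatenate([x, [trigger, reference, bias_symbol]])
    return np.repeat(symbols, SAMPLES_PER_SYMBOL)    # stepwise waveform samples

img = np.random.default_rng(2).random((7, 7))
wave = encode_input_waveform(img)
print(wave.shape)   # (52 symbols * 5 samples,) = (260,)
```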
Figure 6 shows the experimental cancer diagnosis results. Figure 6a shows the optical spectrum of the shaped (soliton crystal) micro-comb measured by an optical spectrum analyser, while Figure 6b shows the measured and sampled output waveform from the photodetector. Figure 6c shows the recovered ONN predictions X·W + b, acquired by rescaling the sampled results via the reference symbol, together with the hyperplane X·W + b = 0 (black line).

Datasets And Pre-training

The datasets we employed were the MNIST (Modified National Institute of Standards and Technology) handwritten digit database [95] and part of the publicly available Wisconsin Breast Cancer dataset [96]. The datasets for the recognition tasks were first separated into training sets and test sets. The training sets were used for offline training with the back-propagation algorithm [97], performed on an electronic computer using Matlab, to acquire the pre-trained weights and bias. The test sets were evaluated with both the ONN and an electronic computer for comparison. We note that, since the number of training samples is sufficiently large compared with the number of synaptic connections, cross-validation was not employed in this work; in any case, it could be performed offline before the pre-training. We also note that the accuracy of the ONN predictions was experimentally limited by the performance of the arbitrary waveform generator, which introduced errors into the symbols' intensities and thus degraded the correctness of the matrix multiplication. This can be addressed by using an arbitrary waveform generator with a larger analog bandwidth or a higher sampling rate, which would yield higher accuracy than reported here.

Results

First, we evaluated the performance of the network on a number of handwritten digit pairs from a body of 500 images for each digit, from which we randomly selected 920 images for prior offline training, leaving 80 figures to evaluate the system performance. The handwritten digit images were electronically down-sampled from 28 × 28 to 7 × 7 to reduce the size of the images. Each image was then transformed into a 49-symbol one-dimensional array, following which the array was temporally multiplexed with each symbol occupying an 84 ps time slot, yielding a modulation rate of 11.9 Gigabaud. The data vector dimension of our perceptron needed to match the weight vector dimension, given by the number of wavelengths, which was 49; we therefore down-sampled the images to reduce the vector length to 49. The optical power of each of the 49 comb lines was weighted according to the pre-learned synaptic weights to form the neuron synapses. Next, the input data stream was simultaneously imprinted onto all 49 weighted microcomb lines, which were then linearly and progressively delayed in wavelength by 13 km of single-mode fibre that provided a time-of-flight optical buffer via its 2nd-order dispersion of 17 ps/nm/km. The weighted symbols for each wavelength were thereby aligned in time, enabling them to be summed by simple sampling of the centre time slot and subsequent detection. This yielded the matrix multiplication result, the product of the multiply-and-accumulate (MAC) operations. The output was finally compared against the decision boundary, a hyperplane generated during prior network training that classifies the input samples arranged in a 49-dimensional hyperspace.
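Before turning to the quantitative results, the offline pre-training step is easy to sketch: for a single sigmoid neuron, back-propagation reduces to logistic-regression gradient descent on the 49 weights and the bias. The snippet below uses random stand-in data in place of MNIST, and 4×4 block averaging as one plausible 28×28 to 7×7 down-sampling (the text does not specify the exact method).

```python
import numpy as np

def downsample_28_to_7(img28):
    """Reduce a 28x28 image to 7x7 by averaging 4x4 blocks (one plausible
    down-sampling; the paper does not specify the exact method)."""
    return img28.reshape(7, 4, 7, 4).mean(axis=(1, 3))

def train_single_neuron(X, y, lr=0.1, epochs=200):
    """Offline pre-training of the 49-weight perceptron. For a single
    sigmoid neuron, back-propagation reduces to logistic-regression
    gradient descent on the weights w and bias b."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
        grad = p - y                              # dLoss/dz for cross-entropy
        w -= lr * (X.T @ grad) / n                # backward pass / update
        b -= lr * grad.mean()
    return w, b

# Stand-in data (a real run would use the MNIST digit pair, e.g. 0 vs 6)
rng = np.random.default_rng(3)
imgs = rng.random((920, 28, 28))
labels = rng.integers(0, 2, 920).astype(float)
X = np.array([downsample_28_to_7(im).flatten(order="F") for im in imgs])
w, b = train_single_neuron(X, labels)
print(w.shape, b)   # (49,) weights to load onto the WaveShaper, plus bias
```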
The resulting matrix multiplication computations on the multiple input data samples were then compared in intensity against this decision boundary, finally producing the predictions of the perceptron (Figures 7, 8). We tested the perceptron performance on 2 benchmark classification tasks: first for two handwritten digits (0 and 6), followed by determining whether cancer cells are benign or malignant. For the handwritten digits the perceptron produced an accuracy of 93.75%, versus the 98.75% that can be achieved with an electronic digital neural network. For the tissue biopsy data classification for cancer cells (Figure 8), individual cell nuclei were extracted from breast mass tissue via fine needle aspirate and then imaged with a microscope. These images had previously been characterized to extract 30 different features, including texture, perimeter, radius, etc. For our experiments, the data for 521 cell nuclei were used for pre-training the network, with a further 75 used as the basis for the test diagnoses, following a process very similar to that used for the handwritten digit tests discussed above. We obtained an 86.67% accuracy, versus the 98.67% that can be achieved with a digital electronic neural network.

In our experiments we used Intel's approach to evaluating digital microprocessors [98]. Since our system is rather more complex, in that it uses input data and weight vectors for the MAC calculations that come from different sources multiplexed in time and wavelength, we define the throughput speed based on the temporal data sequence at the electronic output port, in order to be unambiguous. According to the broadcast-and-delay protocol, each computation cycle consists of one vector dot product between the 49-symbol data and the weight vectors, resulting in a time data sequence with a length of 48 + 1 + 48 symbols and a total duration of 97 × 84 ps. The 49th symbol represents the desired result, i.e., the vector dot product resulting from 49 MAC operations, and hence the perceptron throughput is given by 49 / (84 ps × 97) ≈ 5.95 Giga-MACs/s. Since each MAC operation consists of two operations (a multiply followed by an accumulate), our throughput measured in operations per second (OPS) is twice that measured in MACs/s, or (49 × 2) / (84 ps × 97) ≈ 11.9 Giga-OPS. The input data sequence contained 8-bit symbols with 256 discrete levels, reflecting the pixel values of the grey-scale image. The 8 bits were limited by the intensity resolution of our electronic arbitrary waveform generator. The WaveShaper had an attenuation range of 35 dB, which is equivalent to a resolution of 11 bits, or 33 dB (= 10 × log10(2^11)). Therefore, every computing cycle had an effective throughput bit rate of (49 × 2) × 8 / (84 ps × 97) ≈ 95.2 Gigabits/s. For analogue systems such as ours, both the intensity resolution and the bit rate are limited by the system SNR (signal-to-noise ratio). Therefore, in order to have a full resolution of 8 bits, our system needed an SNR greater than 20 × log10(2^8) ≈ 48 dB in terms of electrical power. This is well within the capability of analogue photonic microwave links, such as the perceptron system reported here, which had an OSNR > 28 dB.

Our perceptron is the fastest optically based neuromorphic processor reported to date, although making direct comparisons between the different approaches is challenging since they vary so widely.
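The throughput bookkeeping above can be reproduced directly from the stated numbers; the snippet below follows the paper's own formulas (84 ps symbols, a 97-symbol output sequence per cycle, 49 MACs per cycle, 2 operations per MAC, 8 bits per symbol) and also evaluates the SNR needed for full 8-bit resolution. Note that evaluating the quotients exactly gives values about 1% above the rounded 5.95/11.9/95.2 figures quoted in the text, which correspond to the 11.9 Gigabaud symbol rate.

```python
import math

tau = 84e-12            # symbol duration, s
N = 49                  # vector length: MACs per computation cycle
seq_len = 2 * N - 1     # 48 + 1 + 48 = 97 symbols per output sequence
cycle = seq_len * tau   # duration of one computation cycle, s

macs_per_s = N / cycle               # MAC throughput
ops_per_s = 2 * N / cycle            # multiply + accumulate -> operations/s
bit_rate = 2 * N * 8 / cycle         # 8-bit symbols -> effective bit rate
snr_8bit = 20 * math.log10(2 ** 8)   # electrical SNR needed for 8-bit resolution

print(f"throughput : {macs_per_s/1e9:.2f} Giga-MACs/s, {ops_per_s/1e9:.2f} Giga-OPS")
print(f"bit rate   : {bit_rate/1e9:.1f} Gbit/s")
print(f"SNR needed : {snr_8bit:.1f} dB")
```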
As an example, on the one hand, systems based on static or continuous sources that perform one-off or single-shot measurements [11,17,24] can have very low latency. On the other hand, they also suffer from an extremely low throughput, since the input data cannot be updated in any rapid manner. While our perceptron did have a relatively large latency of ~64 μs, this was purely due to the dispersive delay component, which in our case was a simple spool of optical fibre. This did not, however, have any effect on the speed or throughput of our system. Moreover, the latency can be dramatically reduced or virtually eliminated, easily to less than 200 ps, simply by replacing the dispersive fibre delay with any type of compact device, such as sampled Bragg gratings or etalon-based tuneable dispersion compensators [99-103] and other approaches [104-107].

Speed Calculation

Following our definitions of throughput and latency introduced in the manuscript, the overall throughput of the deep ONN is roughly the product of each hidden layer's speed and the number of hidden layers, although we note that a rigorous and accurate calculation of the throughput is only possible for specific configurations of the network. Here is a simple example calculation (this example only illustrates how throughput and latency are calculated; the actual performance in terms of prediction accuracy is not the focus of the discussion here). The input waveform/layer is the same as in the demonstrated perceptron (a 49×1 vector at 11.9 Gigabaud with 8-bit resolution, τ = 84 ps); the network has a hidden layer with 7 fully connected neurons and an output layer with 10 fully connected neurons (to match the number of categories for digits from 0 to 9). As a result, 343 (49 × 7) and 70 (7 × 10) wavelengths would be needed in the hidden and output layers, respectively. This can be achieved by using smaller-FSR microcombs, such as 25 GHz, across a wide optical band (the C + L bands already reach >11 THz in width). In the hidden layer, each initial electrical output waveform (right after photodetection and before digital signal processing) corresponds to the output of a single neuron and has a duration of (49 × 2 − 1) × 84 ps = 8.148 ns; since each neuron's result encodes 49 × 2 = 98 floating-point operations, this corresponds to 98/8.148 ns = 12.0275 Giga-FLOPS per neuron, or 84.1925 Giga-FLOPS for the 7-neuron hidden layer. In the output layer, the generated electrical waveform of each neuron has a duration of (7 × 2 − 1) × 84 ps = 1.092 ns. Only one time slot of each group of symbols represents the result of the matrix multiplication between the input vector (the sampled and re-multiplexed waveform from the hidden layer) and the weight vector, which consists of 7 × 2 = 14 floating-point operations; thus the throughput would be 14/1.092 ns = 12.8205 Giga-FLOPS for each neuron, and the total throughput of the output layer would be 12.8205 × 10 = 128.205 Giga-FLOPS. As such, the total peak throughput of the network would be 84.1925 + 128.205 = 212.3975 Giga-FLOPS.

In addition, the latency of the overall network is the sum of each layer's latency, which mainly comes from the dispersive optical buffer and the electrical sampling and multiplexing module. Assuming the latency to be 200 ps for the buffer in integrated form and twice the waveform duration for the re-sampling unit (2 × 8.148 ns and 2 × 1.092 ns for the hidden and output layers, respectively), the total latency of the example network would be roughly 18.68 ns.
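The worked example above is easy to parameterize. The following sketch reproduces its arithmetic (a hidden layer of 7 neurons fed by a 49-element input, an output layer of 10 neurons, 84 ps symbols, a single 200 ps integrated buffer, and a re-sampling latency of twice the waveform duration per layer), so the same bookkeeping can be applied to other layer widths; the numbers it prints match the 212.4 Giga-FLOPS and ~18.68 ns quoted above.

```python
def layer_stats(n_inputs, n_neurons, tau=84e-12):
    """Per-layer throughput and re-sampling latency following the paper's
    bookkeeping: each neuron's output waveform lasts (2*n_inputs - 1)
    symbols and encodes 2*n_inputs floating-point operations."""
    duration = (2 * n_inputs - 1) * tau
    flops = (2 * n_inputs) / duration * n_neurons   # total layer FLOPS
    resample_latency = 2 * duration                 # electrical re-sampling unit
    return flops, resample_latency

BUFFER_LATENCY = 200e-12   # integrated dispersive buffer, counted once as in the text

hidden_flops, hidden_lat = layer_stats(49, 7)
out_flops, out_lat = layer_stats(7, 10)
total_flops = hidden_flops + out_flops
total_latency = BUFFER_LATENCY + hidden_lat + out_lat

print(f"hidden layer : {hidden_flops/1e9:.2f} GFLOPS")
print(f"output layer : {out_flops/1e9:.2f} GFLOPS")
print(f"network total: {total_flops/1e9:.1f} GFLOPS, latency {total_latency*1e9:.2f} ns")
```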
We note that this latency is only a rough estimate illustrating how to calculate or measure the performance of our approach; practical calculations of the latency depend on more detailed parameters. The speed of the network has the potential to reach 10 Tera-OPS [12], determined as follows. With 20 layers, each layer featuring 20 neurons and a modulation rate of 25 Gigabaud, the overall throughput would be around 20 × 20 × 25 = 10 Tera-FLOPS, following the discussion in the section above. With 8-bit resolution, the total potential throughput in terms of bit rate could reach 10 × 8 = 80 Tbps. We note that other techniques widely used in telecommunications, such as polarization multiplexing and coherent modulation formats, could also potentially boost the computing speed of the neural network proposed in this work.

Table 1 shows the performance metrics of state-of-the-art ONNs. We note that it is difficult to directly compare different kinds of ONNs since, on the one hand, there are no universal and specific definitions of ONN parameters and, on the other hand, the operating principles of existing ONNs are quite different and each has its unique advantages. As such, we highlight the recent advances of existing works and focus on the speed parameters, including the latency and throughput, to reflect our ONN's advantages in this respect. "-" denotes that the corresponding parameter is either not demonstrated or not indicated in the work.

Conclusions

We report an optical neural network consisting of a single perceptron that operates with an integrated optical Kerr micro-comb source. The system achieves a single-processor throughput of 11.9 Giga-OPS, equivalent to 95.2 Gigabits/s. We demonstrate benchmark tests including cancer cell diagnosis and handwritten digit recognition. We outline different approaches to scale the network to deep learning ONN architectures with significantly increased processing power and throughput. This is possible because of the high level of parallelism that can be realized via simultaneous time, spatial, and wavelength multiplexing. Our approach has significant possibilities for real-time analysis of high-dimensional data for advanced applications.

Declarations

Competing interests: The authors declare no competing interests.

Figure captions. Figure 2: Experimental setup for the single perceptron. Figure 5: Time-domain multiplexed input layer of the cancer diagnosis test, showing the generated 11.9 Gigabaud data stream of the 75 encoded feature sets, with 30-symbol encoded data for each set and 3 symbols padded for post-measurement (a trigger symbol to trigger the oscilloscope, a reference symbol to calibrate the reference level, and a bias symbol encoded with the pre-trained bias to locate the decision boundary). Figure 7: ONN predictions of handwritten digits labeled according to their correct answers.
A rare case of multiple paragangliomas in the head and neck, retroperitoneum and duodenum: A case report and review of the literature

Pheochromocytomas and paragangliomas (PGLs) are rare non-epithelial neuroendocrine neoplasms of the adrenal medulla and extra-adrenal paraganglia, respectively. Duodenal PGL is quite rare, with only two previous reports. Herein, we report a case of multiple catecholamine (CA)-producing PGLs in the middle ear, retroperitoneum, and duodenum, and review the literature on duodenal PGLs. A 40-year-old man complained of right-ear hearing loss, and an intracranial tumor was suspected. Magnetic resonance imaging of the head revealed a 3-cm mass at the right transvenous foramen, which was surgically resected following preoperative embolization. The pathological diagnosis was a sympathetic PGL of the right middle ear. Six years later, a family history of PGL with a germline mutation of succinate dehydrogenase complex iron sulfur subunit B, SDHB: c.268C>T (p.Arg90Ter), came to light. The patient again had elevated levels of plasma and urine CAs. Abdominal computed tomography scanning revealed two retroperitoneal tumors: one measuring 30 mm anterior to the left renal vein and one measuring 13 mm near the ligament of Treitz. The larger tumor was laparoscopically resected, but the smaller tumor could not be identified by laparoscopy. After the operation, the patient remained hypertensive, and additional imaging tests suggested a tumor localized in the duodenum. The surgically resected tumor was confirmed to be a duodenal PGL. Thereafter, the patient remained free of hypertension, and urinary levels of noradrenaline and normetanephrine decreased to normal values. No recurrence or metastasis has been found at 1 year after the second operation. CA secretion from PGLs in unexpected locations, like the duodenum of our patient, may be overlooked and lead to a hypertensive crisis. In such cases, comprehensive evaluation including genetic testing, fluorodeoxyglucose-positron emission tomography scanning, and measurement of CAs will be useful for detecting PGLs. Most previous reports of duodenal PGL were in fact gangliocytic PGL, which has been renamed composite gangliocytoma/neuroma and neuroendocrine tumor and is defined as a tumor distinct from duodenal PGL. We review and discuss duodenal PGLs in addition to multiple PGLs associated with SDHB mutation.
Introduction

Pheochromocytomas and paragangliomas (PPGLs) are rare non-epithelial neuroendocrine neoplasms of the adrenal medulla or the extra-adrenal paraganglia along the sympathetic or parasympathetic chains, respectively (1, 2). Paragangliomas (PGLs) account for 10-20% of all patients with PPGLs (3), but gastrointestinal PGLs are extremely rare. Although duodenal PGLs have been thought to be the most common form of gastrointestinal PGLs, most previously reported cases were likely gangliocytic PGLs (4, 5). The 2022 WHO classification of paragangliomas and pheochromocytomas clearly defines PGLs as distinctively different from gangliocytic PGLs. PGLs are composed of spindle (sustentacular) cells and round or polygonal epithelioid cells arranged in a "Zellballen" pattern, and they produce catecholamines (CAs). In contrast, gangliocytic PGLs contain mixtures of three distinct histological patterns: typical neurofibroma (with proliferating neurites and Schwann cells), ganglion cells mixed with Schwann cells, and a proliferation of clear epithelioid cells arranged in clusters or radial patterns resembling carcinoid tumors; they do not produce CAs. However, the two tumors have been described in previous studies as similar diseases (2, 4). Thus, the true number of patients with duodenal PGLs is much smaller than previously reported. Furthermore, production of CAs has not been evaluated in the vast majority of patients with gastrointestinal PGLs (6). Lack of clinical suspicion and unrecognized hypersecretion of CAs may lead to hypertensive crisis (6-8). We herein report our experience with multiple CA-producing PGLs in the duodenum, middle ear, and extra-adrenal retroperitoneum of a single patient, along with a review of the literature.

Case presentation

A 40-year-old man presented to our affiliated hospital with right-ear hearing loss. He had no notable medical history. Audiometry revealed hearing loss in the right ear, but otoscopy showed no abnormalities in the auditory canal or tympanic membrane. The presence of an intracranial tumor was therefore suspected, and imaging studies were performed. Magnetic resonance imaging of the head revealed an approximately 3-cm mass at the right transvenous foramen (Figures 1A-C). In response to this finding, the tumor was resected in the neurosurgery department of our affiliated hospital. At that time, the tumor was suspected to be a PGL, and it was removed after preoperative embolization. The pathological diagnosis was PGL of the right middle ear (Figure S1). Because the patient reported no headaches or hypertension, 24-h urine CA analyses were not performed.
The patient's postoperative course was good, and he was followed up at the same hospital with imaging only. Six years later, his eldest daughter was admitted to another hospital with a hypertensive crisis caused by a retroperitoneal PGL. In addition, it came to light that his twin brother had died of a hypertensive crisis at the age of 23 (Figure S2). This prompted his physician to perform further examinations. Elevated levels of plasma and urine CAs were found, and abdominal magnetic resonance imaging (MRI) revealed a 3-cm retroperitoneal tumor. He was suspected of having PPGLs and was referred to our hospital for further examination. The results of the adrenal hormone tests re-evaluated in our hospital were as follows: adrenaline <0.01 ng/mL (reference: 0-0.17 ng/mL), noradrenaline 2.0 ng/mL (reference: 0.15-0.50 ng/mL), dopamine <0.02 ng/mL (reference: 0-0.03 ng/mL), urinary adrenaline 2. There were no abnormally enlarged retroperitoneal lymph nodes, and no obvious abnormalities were observed in the liver, gallbladder, spleen, kidneys, or gastrointestinal tract. 18F-fluorodeoxyglucose-positron emission tomography (FDG-PET) showed FDG accumulation in these tumors (Figure 1J). 123I-metaiodobenzylguanidine (MIBG) scintigraphy also showed accumulation in an area consistent with a mass, but because of the close proximity of the two tumors, the hormone production of the smaller tumor could not be identified (Figure 1K). Based on these results, he was diagnosed as having abdominal PGLs.

We considered that the tumors resided in the retroperitoneum and therefore attempted laparoscopic removal of both of them. We completely removed the 30-mm tumor. However, we were unable to identify the 13-mm tumor, and we concluded the operation. The resected tumor was encapsulated. Hematoxylin-eosin staining showed a Zellballen structure composed of chief cells and sustentacular cells that is typical of PGLs (Figure S3). Immunohistochemical staining indicated that the tumor was positive for chromogranin A, tyrosine hydroxylase, and dopamine β-hydroxylase but negative for succinate dehydrogenase complex iron sulfur subunit B (SDHB) and choline acetyltransferase (ChAT). The grading system for adrenal pheochromocytoma and paraganglioma (GAPP) score was 5 points, suggesting a tumor of moderate grade for malignancy (Table 1) (9). Pathological investigation confirmed the diagnosis of retroperitoneal PGL (Figure S3). Genetic testing identified a known SDHB germline mutation, SDHB: c.268C>T (p.Arg90Ter) (10), identical to that of his daughter.

However, despite the resection of the retroperitoneal PGL, the patient remained hypertensive, and his urinary CA levels were twice the upper limit of normal at two months after the operation. Therefore, we performed additional imaging studies. 123I-MIBG scintigraphy showed no accumulation, but 18F-FDG PET/CT and a contrast-enhanced CT study revealed that the tumor near the ligament of Treitz was still present and had become mildly enlarged. He was admitted for re-evaluation one year after the surgery. As the residual tumor had not been visible during the previous laparoscopic operation, we considered the tumor to be in the duodenal lumen rather than in the retroperitoneum. Although he underwent esophagogastroduodenoscopy, endoscopic ultrasonography, and enteroscopy, the tumor still could not be identified (Figures 1L-O).
On the basis of the CT findings, we concluded that the tumor was a duodenal PGL and performed an open duodenectomy. Pathological investigation confirmed a 10-mm submucosal tumor located at the junction between the horizontal portion of the duodenum and the jejunum. Histologically, the tumor was located mainly in the submucosal to subserosal layers (Figure 2). The tumor cells, with vacuolated cytoplasm and hyperchromatic nuclei, showed Zellballen patterns associated with well-developed vessels. Immunohistochemical studies were positive for chromogranin, tyrosine hydroxylase, and dopamine β-hydroxylase and negative for ChAT, supporting the diagnosis of a sympathetic PGL. The Ki-67 marker showed a low mitotic index of 1.3%. SDHB immunostaining showed loss of reactivity in both the duodenal and retroperitoneal PGLs. The pathological results of the middle ear, retroperitoneal, and duodenal tumors are summarized in Table 1. After surgery, the patient's blood pressure remained within the normal range without medication, and urinary levels of noradrenaline and normetanephrine decreased to 51.2 µg/day and 0.18 mg/day, respectively. The tumor had disappeared on imaging. No recurrence or metastasis has been observed at 1 year after surgery.

Discussion

PPGLs are rare non-epithelial neuroendocrine neoplasms found within the adrenal medulla or the extra-adrenal paraganglia along the sympathetic or parasympathetic chains (1, 2). Gangliocytic PGLs have occasionally been misinterpreted as unusual lesions of PGL because these tumors include sustentacular cells, one of the characteristic features of PGLs. However, the most recent WHO classification of PPGL clearly shows that duodenal PGLs and gangliocytic PGLs are distinctively different tumors (2). Thus, to review patients with definitive duodenal PGL, we performed a Medline search for PGLs and gangliocytic PGLs of the duodenum previously reported in the English literature (4, 11-18). Based on our strict review of the clinicopathological findings of each report, we identified only two reports describing a definite diagnosis of duodenal PGL (Table 2) (17, 18). Furthermore, we found that the present case appears to be the first to show a CA-producing duodenal PGL expressing the various enzymes involved in CA biosynthesis, with postoperative normalization of plasma and urinary CA concentrations.

An interesting aspect of this patient is the masquerading of the duodenal PGL as a retroperitoneal mass on the various imaging modalities. Although we initially thought that two tumors existed in the retroperitoneum before the second operation, only a solitary tumor could be found in that space at surgery. Unfortunately, overproduction of CAs persisted, and one of the two tumors could still be visualized on the 18F-FDG-PET/CT examination after the surgery (Figure 1M). Hence, following the results of the imaging examinations, the patient underwent enteroscopy and endoscopic ultrasonography to identify lower duodenal and/or upper jejunal PPGLs, but no lesion in the intestinal lumen could be detected. However, because it was conceivable that the tumor could be located in the submucosa of the small intestine, like an epithelial neuroendocrine tumor (e.g., carcinoid), we performed an open duodenectomy, and histopathological examination indicated the presence of a duodenal submucosal PGL. Thus, if retroperitoneal PGLs are encountered, the possibility that some of them might be intestinal in origin should be considered.
In addition, when catecholamine levels remain high but no primary site is found, especially in cases with SDHB mutations, care must be taken to look for sites other than the usual sites of PGL occurrence, such as the duodenum. In the present patient, 123I-MIBG scintigraphy showed no accumulation in the duodenal PGL. Previously, 123I-MIBG was the most widely used diagnostic modality, but several reports have indicated that the examination has poor resolution and lower sensitivity for small tumors (19), extra-adrenal lesions, metastatic sites, and patients with SDHx mutations (20, 21). One possible explanation for the discrepancy between MIBG accumulation in the duodenal versus retroperitoneal PGL in this patient is the difference in mass size, because the two tumors must both harbor the SDHB mutation, can both be classified as extra-adrenal PGLs, and both appear to be non-metastatic. Recent guidelines have recommended FDG-PET-based nuclear modalities as the first option for identifying tumor localization in patients with PPGLs (20). In accordance with this statement, the smaller duodenal PGL in our patient was visualized on 18F-FDG-PET/CT. In summary, higher-resolution imaging techniques, such as 18F-PET/CT and contrast-enhanced CT, should be selected to determine tumor localization and staging of PPGLs, especially in patients with masses smaller than 10 mm (19-21). Endoscopic ultrasound is useful for detecting PGLs in the esophagus, stomach, and the bulbar and descending portions of the duodenum, and PGLs at these sites should be detectable during the examination. However, because the lesion in our patient was located in the horizontal to ascending portion of the duodenum (near the ligament of Treitz), it could not be observed on the endoscopic examination.

Head and neck PGLs (HNPGLs) generally arise from the parasympathetic ganglia located along the glossopharyngeal and vagal nerves in the neck and base of the skull and are recognized as non-CA-producing tumors. A recent immunohistochemical study of ChAT, an enzyme involved in acetylcholine synthesis, demonstrated that most HNPGLs are positive for ChAT, designating HNPGLs as acetylcholine-producing parasympathetic tumors (22). However, HNPGLs can occasionally arise from the cervical sympathetic chain with CA over-secretion (23). Immunohistochemical analysis in our patient clearly showed that the HNPGL synthesized CAs but not acetylcholine (Figures 2, S1, S3). Rijken et al. indicated a higher risk of CA overproduction in patients with HNPGLs harboring an SDHB germline mutation (24). Our case is a typical example of a sympathetic HNPGL harboring an SDHB germline mutation. However, we did not confirm whether the tumor released CAs into the bloodstream, based on the presence or absence of CA-related symptoms other than hypertension at diagnosis or on the hemodynamic records during the first operation. Nonetheless, we believe it is worth investigating a patient's PGLs with comprehensive assessments, including an assay for CA metabolites, radioactive scanning, family history, and genetic testing after the first operation. Currently, nearly 40% of all HNPGLs are recognized as hereditary (23). Although our present case is extremely rare, efforts to enhance awareness of PGLs among non-endocrinologists (e.g., neurosurgeons, otolaryngologists, and gastroenterologists) are pivotal for their recognition.
Such efforts should help avoid hypertensive crises in the perioperative period, prevent patients with multiple PPGLs from being overlooked, and provide helpful medical information for patients' relatives.

Conclusion

We present the first case, to our knowledge, of a CA-producing duodenal PGL that was completely verified by analysis of the expression of the enzymes involved in CA biosynthesis and by postoperative normalization of CA concentrations. The secretion of CAs from a PGL arising in an unexpected location, such as the duodenum in the present patient, may be overlooked. The lack of clinical suspicion and unrecognized hypersecretion of CAs from such tumors may lead to a hypertensive crisis. Because PGLs can occur in various organs, efforts to enhance awareness of PGLs among non-endocrinologists are pivotal for their recognition. Furthermore, if a physician suspects the presence of a PGL, a comprehensive evaluation including genetic testing, FDG-PET scanning, and measurement of CAs will be useful in identifying the disorder.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.

Ethics statement

Ethical review and approval were not required for this study on human participants, in accordance with local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Should hot biopsy forceps be abandoned for polypectomy of diminutive colorectal polyps?

A standardized approach to polypectomy of diminutive colorectal polyps (DCPs) is lacking: cold biopsy forceps have been associated with high rates of recurrence, hot biopsy forceps are considered inadequate and risky, and cold snaring is still under investigation for its efficacy and safety. This has led to confusion and a gap in clinical practice. This article discusses the usefulness and contemporary practical applicability of hot biopsy forceps and provides well-intentioned criticism of the new European guidelines for the treatment of DCPs.

Diminutive colorectal polyps are a source of frustration for the endoscopist, since their small size is accompanied by a considerable risk of premalignant neoplasia and a small but non-negligible risk of advanced neoplasia and even cancer. Since the proportion of diminutive colorectal polyps is substantial and exceeds that of larger polyps, their effective removal poses a considerable workload and a therapeutic challenge. Over the last decade, the introduction of cold snaring to routine endoscopy practice has attempted to supersede prior techniques such as hot biopsy forceps. It is important to recognize that, with the exception of endoscopic methods that are obviously unsafe and inadequate for their purpose, all interventional endoscopic methods are operator-dependent, in the sense that specific expertise and training are obligatory for the success of any therapeutic intervention. Since the relevant publications on hot biopsy forceps still favor its careful use, as it has not demonstrated inferiority compared with newer techniques, it would be prudent for any medical practitioner to evaluate the available tools and judge any newly proposed technique based on the evidence before it is adopted.
The European Society of Gastrointestinal Endoscopy has released guidelines for colorectal polypectomy that include a strong recommendation against the use of hot biopsy forceps (HBF), based on the GRADE system of clinical evidence. The release of guidelines by professional medical societies is acknowledged by the medical community as policy that functions as a deterrent to specific practices. With respect to that notion, the abandonment of a useful technique such as HBF, which for many decades has contributed to the polypectomy of diminutive colorectal polyps (DCPs), should be considered in an appropriately conscientious and judicious manner.

The reasons for the negative criticism are based on the following: (1) unacceptably high risks of adverse events (AEs); (2) inadequate tissue sampling for histopathology (ITSH); and (3) high incomplete resection rates (IRR). The studies cited in support of the recommendation are 4 human studies (1 non-blinded RCT with a small number of patients [2], one anecdotal report [3] and 2 observational studies [4,5]), 3 of which have already been determined to be of low quality, and 2 animal studies [6,7] (Table 1). The overall quality of evidence was nevertheless graded as high. In fact, apart from the methodological quality of the individual studies and their questionable generalizability, these studies are heterogeneous in terms of ITSH and IRR. Moreover, all of the studies are consistent with respect to the absence of perforations, and the few bleeding episodes (0.36%) in one of the studies occurred in patients taking antiplatelet agents [5].

HBF is considered an alternative method for the removal of DCPs (≤5 mm). According to various surveys, HBF appears to remain a viable option preferred by 30%-50% of endoscopists [8-10]. The two studies with the largest numbers of patients and polyps [11,12] showed no complications. The study by Wadas et al [13], which reports a 0.38% major bleeding rate and a 0.05% perforation rate, refers to a questionnaire-type survey from an era (1988) when the HBF technique was not standardized. Even this perforation rate is lower than the reported 0.15% for therapeutic colonoscopies [14]. The rate of AEs is also lower than that for snare polypectomies (3.3 vs 4.5 per 1000), and AEs are more likely to occur when HBF is used by low-volume endoscopists rather than by high-volume endoscopists (>300 polypectomies/year) [15]. HBF has been reported to have a 17% IRR when white coagulum is present [16] and variable rates of ITSH of 0.19%, 13% and 26.7% in studies with different mean polyp sizes [11,17,18]. It is acknowledged that a significant predictor of histological misinterpretation is decreasing polyp size, with a cut-off limit of 2 mm. It is important to mention that, even in studies with high reported rates of cautery artifacts [4], the results showed that a histological diagnosis could indeed have been reached in all specimens. In the sole non-blinded RCT in which HBF and cold snare polypectomy (CSP) were directly compared, the IRR in the intention-to-treat analysis was 29.9% for CSP, which is still unacceptably high. However, the difference in bleeding rates was statistically insignificant, at 8.1% vs 8.8% for HBF and CSP, respectively, and no perforations were observed in either study arm [25].
In conclusion, the available evidence does not seem adequate to justify excluding hot biopsy forceps from routine endoscopy practice. We either need more prospective studies demonstrating favorable comparisons with the newer techniques, or we need to focus on the proper utilization of HBF by more experienced endoscopists.
A Facile Methodology for Engineering the Morphology of CsPbX3 Perovskite Nanocrystals under Ambient Conditions

A facile and highly reproducible room-temperature, open-atmosphere synthesis of cesium lead halide perovskite nanocrystals of six different morphologies is reported, achieved simply by varying the solvent, ligand and reaction time. Sequential evolution of quantum dots, nanoplates and nanobars in one medium, and of nanocubes, nanorods and nanowires in another medium, is demonstrated. These perovskite nanoparticles are shown to be of excellent crystalline quality with high fluorescence quantum yield. A mechanism for the formation of nanoparticles of different shapes and sizes is proposed. Considering the key role of morphology in nanotechnology, this simple method of fabricating a wide range of high-quality nanocrystals of different shapes and sizes of all-inorganic lead halide perovskites, whose potential has already been demonstrated in light-emitting and photovoltaic applications, is likely to help widen the scope and utility of these materials in optoelectronic devices.

Result and Discussion

The NCs are fabricated by preparing a 1:1 molar precursor solution of PbBr2 and CsBr in polar dimethyl formamide (DMF, 4 mL) and then adding this solution (200 μL) dropwise to a vigorously stirred reaction medium containing definite quantities of OLA (20-70 μL) and OA (0.2-0.5 mL) in a larger amount (4 mL) of a less polar solvent (anti-solvent) at room temperature. The morphologies were precisely controlled by varying the anti-solvent (ethyl acetate and toluene), the quantity of the capping agents (OLA and OA) and the reaction time, the details of which are provided in tabular form as supporting information (Tables S1-3). In ethyl acetate, we obtained QDs, NPLs and nanobars; in toluene, nanocubes of different sizes, nanorods of increasing length and nanowires were obtained.

Quantum Dots. When ethyl acetate is used as the anti-solvent, 2.6 ± 0.9 nm quasi-cubic CsPbBr3 QDs are formed immediately on addition of the precursor solution. Figure 1a and Figure S1 (SI) show the transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HR-TEM) images, from which we obtain an inter-planar spacing of 0.559 nm corresponding to the (100) plane. The cubic crystal phase of these QDs is determined from the powder X-ray diffraction (PXRD) study (Figure S1, SI). The bluish-green QD solution shows an intense blue PL (quantum yield (QY) 27%, Table S4, SI) with the first excitonic and PL emission peaks at around 437 and 454 nm, respectively (Fig. 1b). The time-resolved PL behavior is best described by three exponential decay components (Figure S1, SI), giving an average PL lifetime of 3.9 ns.

Nanoplates. Allowing the reaction to proceed for up to 10 minutes, square-shaped CsPbBr3 NPLs a few unit cells thick and 60 nm in edge length are obtained. TEM, HR-TEM and field-emission scanning electron microscopy (FESEM) images are shown in Fig. 1c and Figure S2 (SI), respectively. The HR-TEM images show that the single-crystalline NPLs are bound by the (100) planes. The double peaks at around 30° in the PXRD pattern (Figure S2, SI) confirm the orthorhombic phase [13], and energy-dispersive X-ray (EDX) spectroscopy measurements (Figure S2, SI) confirm the 1:1:3 ratio of Cs, Pb and Br atoms in the synthesized CsPbBr3 NPLs.
Typical thicknesses of the obtained NPLs are ~4.8 nm, as evident from the atomic force microscopy images (Figure S2, SI); this thickness represents that of the primary NPLs capped with the organic ligands [15,16]. The cyan-emitting NPLs exhibit first excitonic and PL emission peaks at 450 and 476 nm (Fig. 1d), with a PL QY of 19%. Triexponential fitting of the PL decay profile yields an average lifetime of 5.1 ns (Figure S2 and Table S4, SI).

Nanobars. With a further increase in reaction time, the nanobars start forming at the expense of the NPLs. After 40 hours, the reaction medium contains mostly nanobars with an average aspect ratio of 2.55 (~140 nm long and ~55 nm wide). Figure 1 shows the TEM and FESEM images of the nanobars, which have very low size dispersion. That the nanobars are highly crystalline is evident from the HR-TEM images (Fig. 1 and S3, SI), their corresponding FFTs (inset) and the selected-area electron diffraction (SAED) pattern (Figure S3, SI); additional images are shown in the SI. The single-crystalline nanobars are of orthorhombic phase (Figure S3, SI) and are bound mainly by (100) and (110) facets in the in-plane and side-plane directions. The EDX spectrum (Figure S3, SI) confirms the 1:1:3 ratio of the constituent elements. Both the first absorption onset and the emission peak of the nanobars appear at 522 nm (Fig. 1h). Unlike the other morphologies, the PL QY of the nanobars is much higher (61%), and the PL decay profile is characterized by biexponential kinetics (Figure S3, SI) with an improvement in the individual lifetimes (Table S4, SI) and an increase in the average lifetime (14.2 ns). The high PL QY and long PL lifetime of the nanobars indicate fewer charge-trapping sites compared with the other NCs and predominantly excitonic radiative recombination. The superior quality of the synthesized nanobars was further confirmed by exposing the NPLs and nanobars to an electron beam: while the NPLs degrade within 10 seconds, the nanobars remain unaffected even after 10 minutes of exposure (Figure S4, SI).

Nanocubes of Different Sizes. By changing the anti-solvent to toluene and keeping OLA (20-70 μL) and OA (200 μL) as the capping ligands, we can further manipulate the morphology of the CsPbBr3 NCs. Well-crystallized nanocubes of 12 ± 2 nm size are obtained at the initial stage of the reaction following a procedure similar to that above. TEM and HR-TEM images are shown in Fig. 2a,b and S5 (SI). PXRD and EDX spectra (Figure S5, SI) confirm the formation of CsPbBr3 nanocubes. The first absorption onset and emission peak appear at 497 and 510 nm, respectively (Fig. 2c). The PL time profile of this material shows triexponential decay behavior with an average lifetime of 11.3 ns. The high PL QY of ~77% and good crystalline quality suggest that the nanocubes are largely free from charge-trapping mid-band-gap states, indicating possible application in optoelectronic devices. Extension of the reaction time up to 1 hour leads to the formation of another two morphologies for two different OLA concentrations. When 70 μL OLA is used, larger nanocubes of ~34 nm edge length are formed, as evident from the TEM (Fig. 2d, S6, SI) and FESEM (Fig. 2e) images. These well-crystalline nanocubes retain the cubic phase and are bound by (100) and (110) planes (Figure S6, SI). The first absorption onset and emission peak, red-shifted compared with those of the smaller (~12 nm) nanocubes, appear at 502 and 514 nm, respectively (Figure S6, SI). The PL QY and lifetime of these nanocubes are 54% and 13.5 ns (Figure S6, SI), respectively.
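Since the PL decay analysis throughout this section relies on multi-exponential fits and quoted "average lifetimes", a short sketch of that procedure may be useful: fit I(t) = a1·exp(−t/τ1) + a2·exp(−t/τ2) + a3·exp(−t/τ3) with SciPy and report the intensity-weighted average lifetime ⟨τ⟩ = Σaᵢτᵢ²/Σaᵢτᵢ, which is one common convention (the paper does not state which averaging it uses). The decay trace below is synthetic placeholder data, not a measured curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    """Triexponential PL decay model I(t) = sum_i a_i * exp(-t / tau_i)."""
    return a1*np.exp(-t/t1) + a2*np.exp(-t/t2) + a3*np.exp(-t/t3)

def average_lifetime(amps, taus):
    """Intensity-weighted average lifetime <tau> = sum(a*tau^2) / sum(a*tau)."""
    amps, taus = np.asarray(amps), np.asarray(taus)
    return np.sum(amps * taus**2) / np.sum(amps * taus)

# Synthetic decay standing in for a measured TCSPC trace (placeholder values)
t = np.linspace(0, 100, 2000)                        # time axis, ns
true = tri_exp(t, 0.5, 1.5, 0.3, 5.0, 0.2, 15.0)
noisy = true + np.random.default_rng(4).normal(0, 0.002, t.size)

p0 = [0.4, 1.0, 0.3, 4.0, 0.2, 12.0]                 # initial parameter guesses
popt, _ = curve_fit(tri_exp, t, noisy, p0=p0, maxfev=20000)
a, tau = popt[0::2], popt[1::2]                      # amplitudes and lifetimes
print(f"fitted lifetimes (ns): {np.round(tau, 2)}")
print(f"<tau> = {average_lifetime(a, tau):.2f} ns")
```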
Further increase in reaction time, up to 50 hours or more, ends up with larger nanocubes and a few nanorods of larger dimension (Figure S7, SI).

Nanorods and Nanowires. A smaller quantity of OLA (20 μL) provides faster kinetics and forms CsPbBr3 nanorods of good crystalline quality (after 1 hour) at the expense of the nanocubes formed initially. TEM (Fig. 2f and S8, SI) and FESEM (Fig. 2g) images show an average nanorod length and diameter of 800 and 70 nm, respectively. The single-crystalline nanorods grow along the <100> direction, as evident from the HR-TEM image (Figure S8, SI). As summarized in Table S2 (SI), the aspect ratio of the nanorods increases with time and finally, after 40 hours of reaction, nanowires of diameter ~70 nm and length ≥15 μm are obtained. As evident from HR-TEM (Fig. 2j inset, S9, SI) and the SAED pattern (Figure S3, SI), these nanowires are of very good crystalline quality with a <100> growth direction. The nanowires are of orthorhombic phase, and the ratio of Cs, Pb and Br is close to 1:1:3. The first absorption onset and emission peak appear at 511 and 520 nm, respectively. These nanowires have an average PL lifetime of 11.4 ns and a QY of 29%. Both the nanorods and nanowires obtained by the present method are of better crystalline quality and show improved PL QY compared to other reports 13,17.

X-ray photoelectron spectroscopy (XPS) measurements have been performed on the different CsPbBr3 morphologies to further investigate their quality and composition. The survey X-ray photoelectron spectra (Figure S10) show the peaks corresponding to Cs 3d, Pb 4f and Br 3d. The XPS peak areas of the survey curve provide a 1:1:3 ratio of Cs, Pb and Br for the NCs capped with oleic acid and oleylamine. The results obtained are consistent with the literature 16 and confirm the identity and purity of our samples.

Effect of Other Solvents. The results obtained by following a similar synthetic protocol but using other organic solvents of different polarity as anti-solvent are summarized in Table S3 (SI). In chloroform, smaller NCs emitting at 487 nm and nanocubes (~12 nm) emitting at 510 nm are formed at the early stages of the reaction. With increasing time, the nanocubes become larger in dimension and undergo slow degradation, showing a decrease in PL after 24 hours of reaction (Figure S11, SI). It is found that the polarity of the anti-solvent used is crucial for the synthesis of the NCs and their stability: the NCs degrade readily when more polar anti-solvents are used (Table S3, SI). Notably, during the reaction in a given anti-solvent, the morphologies evolve in a sequential manner and hence different intermediates can coexist at a given time (Tables S1-S3, SI). The different NC populations were separated from each other by a size-selective precipitation method.

Anion Exchange. CsPbX3 (X = Cl, I) NCs are prepared at room temperature using a simple and fast anion-exchange method 23,24 by adding a definite proportion of PbCl2 or PbI2 to the CsPbBr3 NC solution. Figure 3 illustrates that the entire visible spectral window (405-700 nm) can be covered by the anion-exchange process on ~34 nm nanocubes in toluene. The method is found to be applicable to all other morphologies.

Formation Mechanism. The nature of the ligand, its quantity, the solvent and the reaction time are found to play important roles in determining the shape, size and luminescence properties of the NCs.
Recognizing that OLA binds more strongly to the NCs than OA, and considering the observations that (i) 100% OA leads to the formation of nonfluorescent bulk NCs with a wide size distribution (50-500 nm) and no particle formation takes place in its absence (Tables S1 and S2, SI), (ii) increasing the OLA concentration slows down the formation rate of the NCs and an excess of it retards the nucleation process completely (Tables S1 and S2, SI), and (iii) different morphologies can be observed only in the presence of both OA and OLA, we postulate the following mechanism for the formation of the different morphologies. In the less polar solvents, toluene and ethyl acetate, the long-alkyl-chain hydrophobic ligands form micelles of specific sizes, which lead to the formation of small NCs. Quasi-cubic QDs in ethyl acetate and cubic NCs in toluene are formed inside the micelles because of the inherent cubicity of the CsPbBr3 perovskite material. In ethyl acetate, 2D growth along a plane leading to the formation of NPLs can only be explained if OLA, the stronger ligand of the two, binds preferentially to a given plane. As ethyl acetate is more polar than toluene, it acts both as a solvent and as a nucleophile and can remove some OLA from the bound surface of the NPLs, where small QDs can then attach easily. This process can lead to anisotropic growth of the NPLs with time and result in the formation of nanobars as the final product (Fig. 4). In toluene, in the presence of a small amount of OLA, nanoparticles start growing along the direction in which less OLA (hence, more OA) is present. Nanocubes break their inherent symmetry and add up in a unidirectional manner to form nanorods and, finally, nanowires at longer times, as illustrated in Fig. 4. In the presence of a larger quantity of OLA, the nanocube surfaces become more protected in all directions and, consequently, larger nanocubes (oligomers) are formed by self-aggregation at longer reaction times (Fig. 4).

Conclusion

In conclusion, several CsPbX3 NCs of different shapes and sizes (QDs, NPLs, nanobars, small and large nanocubes, nanorods and nanowires) are obtained following an anti-solvent precipitation method under ambient conditions. These NCs are of excellent crystalline quality and show size-, shape- and composition-dependent PL properties. The high-quality nanobar is shown to be a new member of the CsPbX3 perovskite NC family. This simple method of fabricating a broad range of NCs of various shapes and sizes is likely to boost the potential of this class of promising materials in light-emitting and photovoltaic applications. A study of photo-induced charge separation and recombination dynamics involving these materials is currently underway.

Synthesis of CsPbBr3 nanocrystals (NCs). In a 50 mL round-bottom flask, 4 mL of anti-solvent (a less polar solvent) was loaded with the desired amounts of OLA and OA as capping ligands and kept under vigorous stirring. 200 μL of precursor solution was added dropwise into the stirring solution. Depending on the anti-solvent used and the amount of ligands added, the color of the resulting solution changes from bluish green to green to yellow (in ethyl acetate) or from greenish yellow to yellow (in most of the other organic solvents) with increasing time. The conditions maintained for obtaining specific sizes and shapes of the NCs are summarized in Tables S1-S3.

Separation and purification of the NCs. The crude solutions collected at different stages of the reaction were centrifuged at 5000 rpm for 6 minutes.
When a mixture of NCs was present, centrifugation was carried out for a longer period at 7500 rpm. Following centrifugation, the supernatant liquid was discarded and the precipitate was washed with ethyl acetate and then dispersed in toluene or hexane.

Anion exchange process. 0.15 mmol of PbX2 (X = Cl, I) was dissolved in 5 mL of DMF and added dropwise to the stirring CsPbBr3 NC solution. By controlling the amount of added PbX2 solution, we can easily tune the emission properties of these NCs. The reaction reaches equilibrium after some time, after which the NCs were separated by centrifugation for further studies. The anion-exchange technique is very facile under ambient conditions and was successfully applied to all NCs obtained in this work.

Instrumentation and methods. A Tecnai G2 FEI F12 transmission electron microscope (TEM) at an accelerating voltage of 200 kV was used to obtain images, high-resolution images and selected-area electron diffraction (SAED) patterns. Field-emission scanning electron microscopy (FE-SEM) imaging was carried out using a Carl Zeiss model Ultra 55 microscope. Atomic force microscopy (AFM) images were recorded on an NT-MDT model Solver Pro-M AFM in semi-contact mode using a tip with a force constant of 12 N m−1. Powder X-ray diffraction (PXRD) patterns of the NCs were recorded on a SMART Bruker D8 Advance X-ray diffractometer using Cu-Kα radiation (λ = 1.5406 Å). X-ray photoelectron spectra (XPS) of the samples were recorded with a custom-built ambient-pressure photoelectron spectrometer (APPES) (Prevac, Poland) equipped with a VG Scienta R3000HP analyzer and MX650 monochromator. Monochromatic Al Kα X-rays were generated at 200 W and used for measuring the XPS of the samples. The base pressure in the analysis chamber was maintained in the range of 5 × 10−10 Torr. The energy resolution of the spectrometer was set at 0.7 eV at a pass energy of 50 eV. Binding energy (BE) was calibrated with respect to the Au 4f7/2 core level at 84.0 eV. Samples were flooded with low-energy electrons for efficient charge neutralization. Steady-state absorption and photoluminescence (PL) spectra were taken with a UV-vis spectrophotometer (Cary 100, Varian) and a fluorescence spectrometer (Fluorolog 3, Horiba Jobin Yvon). The PL quantum yield (QY) of the different NCs was calculated using Coumarin 153 as reference (QY = 0.56 in acetonitrile). For PL lifetime measurements, a Horiba Jobin Yvon IBH TCSPC spectrometer was used. A NanoLED laser source with output at 405 nm (fwhm: 200 ps) at 1 MHz repetition rate was used as the excitation source. PL decay curves were fitted to a multi-exponential decay function of the form I(t) = Σ_i α_i exp(−t/τ_i), where τ_i are the lifetime components and α_i are the corresponding amplitudes. The average lifetime, ⟨τ⟩, reported in this work is defined as the intensity-weighted mean of the fitted components, ⟨τ⟩ = Σ_i α_i τ_i² / Σ_i α_i τ_i.
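To make the decay analysis above concrete, the following is a minimal sketch of a triexponential fit and the intensity-weighted average lifetime in Python with NumPy/SciPy. The synthetic trace, the initial guesses, and the omission of instrument-response deconvolution are illustrative assumptions and do not reproduce the original analysis software.

```python
import numpy as np
from scipy.optimize import curve_fit

def triexp(t, a1, tau1, a2, tau2, a3, tau3):
    """Triexponential PL decay model: I(t) = sum_i a_i * exp(-t / tau_i)."""
    return (a1 * np.exp(-t / tau1)
            + a2 * np.exp(-t / tau2)
            + a3 * np.exp(-t / tau3))

def average_lifetime(amps, taus):
    """Intensity-weighted mean lifetime: <tau> = sum(a*tau^2) / sum(a*tau)."""
    amps, taus = np.asarray(amps), np.asarray(taus)
    return np.sum(amps * taus**2) / np.sum(amps * taus)

# Hypothetical decay trace (time in ns); replace with measured TCSPC data.
t = np.linspace(0, 50, 500)
rng = np.random.default_rng(0)
data = triexp(t, 0.5, 1.0, 0.3, 4.0, 0.2, 15.0) + 0.01 * rng.normal(size=t.size)

# Initial guesses for (a1, tau1, a2, tau2, a3, tau3) are assumptions.
p0 = [0.4, 0.5, 0.3, 3.0, 0.3, 10.0]
popt, _ = curve_fit(triexp, t, data, p0=p0, maxfev=10000)

amps, taus = popt[0::2], popt[1::2]
print("fitted lifetimes (ns):", taus)
print("<tau> (ns):", average_lifetime(amps, taus))
```

In practice, TCSPC data would be fitted with reconvolution against the measured instrument response function; the direct fit above is only meaningful when the decay is much longer than the ~200 ps excitation pulse, as is the case for the nanosecond lifetimes reported here.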
Mapping mechanical stress in curved epithelia of designed size and shape

The function of organs such as lungs, kidneys and mammary glands relies on the three-dimensional geometry of their epithelium. To adopt shapes such as spheres, tubes and ellipsoids, epithelia generate mechanical stresses that are generally unknown. Here we engineer curved epithelial monolayers of controlled size and shape and map their state of stress. We design pressurized epithelia with circular, rectangular and ellipsoidal footprints. We develop a computational method, called curved monolayer stress microscopy, to map the stress tensor in these epithelia. This method establishes a correspondence between epithelial shape and mechanical stress without assumptions of material properties. In epithelia with spherical geometry we show that stress weakly increases with areal strain in a size-independent manner. In epithelia with rectangular and ellipsoidal cross-section we find pronounced stress anisotropies that impact cell alignment. Our approach enables a systematic study of how geometry and stress influence epithelial fate and function in three dimensions.

Reviewer #1 (Remarks to the Author):

In this study, Marin-Llaurado et al. designed curved epithelial monolayers of varying size to determine the correspondence between shape and mechanical stress in epithelial layers independently of underlying material properties. In particular, they managed to create anisotropic shapes and observed that cells align with the direction of maximum principal stress. Overall, the authors provide an interesting approach based on the combination of experimental and computational tools to vary epithelium geometries and map stress. This work could be of help in understanding the formation and regulation of 3D epithelia, which is of interest for a broad scientific readership. However, I found the paper too preliminary in some aspects and I recommend clarifying and strengthening some of the experiments presented here before publication.

1- The significance of data presented in figures 2 and 4 should be better clarified, since some conclusions are based on figure 2d, where the significance for different results is different and it is unclear what are the determinants for the increase in the central and right plots. I find it also very difficult to understand Figure 4c and the impact of measurement distribution on the final interpretation for cell orientation.

2- Methods: a) could the authors clarify how they quantify high and low fibronectin coatings and whether matrix surface density is homogeneous at the margin of the pattern (usually matrix proteins tend to accumulate at the boundary). b) have the authors checked the hydrogels after the 15 min UV exposure prior to cell seeding? Elastomers tend to get very brittle and change their surface properties.

3- I am completely missing the molecular mechanism explaining the cell alignment. The authors performed the cell experiment using a line expressing a fluorescent construct to visualize the plasma membrane; however, there is no information about the relation e.g. between cell orientation, adherens junction regulation and actin (I refer to Figure 4). Also, in Figure 4a, the segmentation seems to provide no consistent and accurate representation of cell-cell contacts.

Reviewer #2 (Remarks to the Author):

This is an excellent manuscript, which combined experiments and a novel computational tool to infer stress distributions in inflated epithelia with different geometrical footprints. This framework goes significantly beyond the previous work in the literature that inferred stresses only in spherical epithelia. The provided computational tool will also be a great resource for the community. The manuscript is very clearly written, and all claims are supported by evidence. I recommend publication after the following clarifications are made.

1.) Normal tractions showed a close match with the inferred stress along the contact line. Would it be possible to also compare the measured tractions in the x-y plane with the inferred stresses? I can imagine that such a comparison is difficult because the neighboring cells attached to the substrate are also pulling/pushing on the cells at the contact line.
2.) In most regions, there was a tendency of cells to align with the direction of the maximum principal stress. How does this correlation depend on the magnitude of the maximum principal stress? Are cells more aligned for higher values of the maximum principal stress?

3.) How was the luminal pressure measured in experiments? Was it measured by averaging normal tractions over some regions, or were normal tractions measured at specific locations (e.g., at the center)?

4.) What surface tension is plotted in Fig. 1e? Does each point correspond to average stress over the whole epithelium or to local stress at one of the points on the epithelium?

5.) How are different levels of inflation defined (high/low in Fig. 2, high/medium/low in Fig. 3)? Is there some quantitative metric that was used to bin the data?

6.) Add descriptions for the hydrostatic stress color maps also in the captions of Figs. 2b and 3b.

7.) In the captions of Fig. 3c,d, clarify that the polar angle is measured relative to the long axis of the ellipse.

8.) Clarify what the diamond symbols represent in Fig. 4c. I guess they correspond to the averages of the distribution.

9.) In methods, explain what the parameters 'h' and 'a' are in the areal strain calculation. For consistency with the supplements, 'a' should be renamed to 'R_b'.

10.) The symbols for the curvature tensor and tractions in Extended Data Figure 4 should be made consistent with the notation in the Supplementary Materials (SM).

11.) In Section 1.5 of the SM, explain how the effective cortical viscosity was implemented.

12.) In Section 2.4 of the SM, explain how the initial surface triangulation of the experimental point cloud was done. In 2D one can use standard Delaunay triangulation algorithms, but it is not clear how to do this easily for points in 3D.

13.) In Figs. 5e, 7e, and 8e in the SM, also add green arrows indicating the expected solutions, similarly to how it was done for Figs. 4, 9e, 10e, 11e.

Reviewer #3 (Remarks to the Author):

In their manuscript entitled "Mapping mechanical stress in curved epithelia of designed size and shape", Marín-Llauradó et al. present cMSM, a novel computational method to infer stresses in curved epithelia. For non-spherical epithelial lumens, surface stresses cannot be easily calculated or measured due to the anisotropic nature of the stress distribution. The computational approach presented here allows the inference of stress tensors on the surface of curved epithelia of any shape, solely based on their geometry and luminal pressure. As a model of epithelial lumens, the authors grow doming MDCK epithelia with various footprint shapes, in which luminal pressure can be measured using traction force microscopy. They use this method to map the surface stress on non-spherical epithelial domes with a rectangular or ellipsoidal footprint. As expected, stress tensors along the epithelial surface are anisotropic. They validate their computational approach by comparing experimentally accessible traction forces at the cell-substrate interface with their computed values at these locations and find a good correspondence. Finally, they ask if cellular long axes align with the direction of anisotropic stress tensors in non-spherical epithelia, which they approach by cell segmentation. However, they do not find a global correlation of these parameters. The manuscript is nicely written and the figures are clear. The analysis presented in this paper will be a useful tool for researchers interested in epithelial mechanics. However, due to the lack of an application of the method beyond the synthetic MDCK domes, biological conclusions are limited.
Since we lack the relevant expertise, we cannot comment on the validity of the underlying physical assumptions, computational methods and data evaluations. On the experimental side of the paper, the experiments consist of a simple and elegant way to create epithelia of controlled shapes and sizes. The authors present a few interesting conclusions about epithelial mechanics, such as the correlation of dome diameter and lumen pressure, the absence of a mechanosensitive response in this range of lumen sizes and pressures, as well as the non-global alignment of cell axes with stress orientation.

Specific comments:

- Fig 1d: epithelium curvature seems to scale linearly with lumen pressure (before a plateau for very small lumens of just a few cells, which may be difficult to characterize). This scaling is based on 3 different lumen sizes and the data are quite scattered around the mean values. Could larger domes be made to extend those measurements beyond what is currently explored?

- The authors conclude that different properties of the apical/basal cell surface are negligible when considering forces in the epithelial layer. However, in biological contexts, lumens are often embedded in the ECM/underlying cells with their basal side. How will this be reflected in the cMSM approach?

- To further validate cMSM, the authors could alter the mechanical properties of epithelial sheets by myosin, actin or microtubule manipulation and see how good their method would be at predicting stresses in these conditions.

- The authors find no global correlation of cell axis alignment with the main stress tensor direction. Which other factors could influence cell orientation?

- References 5 and 6 about the developing otic vesicle concern zebrafish and not drosophila.

- Fig 2d: there is "side x" overlaid on the (many) asterisks indicating statistical significance of the middle graph.

We thank the reviewers for their comments and suggestions, which have greatly improved the quality of our paper. We hope the revised version provides a clearer and more comprehensive presentation of our findings. Our point-by-point response to their comments is below.

Reviewer #1 (Remarks to the Author):

In this study, Marin-Llaurado et al. designed curved epithelial monolayers of varying size to determine the correspondence between shape and mechanical stress in epithelial layers independently of underlying material properties. In particular, they managed to create anisotropic shapes and observed that cells align with the direction of maximum principal stress. Overall, the authors provide an interesting approach based on the combination of experimental and computational tools to vary epithelium geometries and map stress. This work could be of help in understanding the formation and regulation of 3D epithelia, which is of interest for a broad scientific readership. However, I found the paper too preliminary in some aspects and I recommend clarifying and strengthening some of the experiments presented here before publication.

We thank the reviewer for the positive comments on our manuscript and for the constructive criticism and suggestions.

1- The significance of data presented in figures 2 and 4 should be better clarified, since some conclusions are based on figure 2d, where the significance for different results is different and it is unclear what are the determinants for the increase in the central and right plots.
I find it also very difficult to understand Figure 4c and the impact of measurement distribution on the final interpretation for cell orientation.

From the reviewer's comment we now see that our description of Fig. 2d was unclear. There are two determinants of the increase in the vertical component of the traction. The first one is inflation: increasing inflation always increases the normal traction, regardless of the shape of the footprint. The second one is the stress anisotropy in the monolayer, which causes the normal traction to be higher along the longest side of the footprint than along the shortest one. Since both determinants of the traction increase are relevant to validating our technique, we tested the statistical significance of both independently using the Wilcoxon rank-sum test for paired or unpaired samples, depending on the comparison. Hence our plotting of statistical significance in the central and right panels of Fig. 2d. We have now rewritten the description of this figure to clarify this point (p. 5, second paragraph).

In Fig. 4, we assessed significance as follows. We generated 10^4 uniform distributions, each with the experimental sample size. We then computed the median of each distribution and the distribution of the medians. Finally, we assessed the p-values by locating the experimental median in this simulated distribution. We clarify this test in the methods section (Statistical Analysis).

2- Methods: a) could the authors clarify how they quantify high and low fibronectin coatings and whether matrix surface density is homogeneous at the margin of the pattern (usually matrix proteins tend to accumulate at the boundary). b) have the authors checked the hydrogels after the 15 min UV exposure prior to cell seeding? Elastomers tend to get very brittle and change their surface properties.

The reviewer raises two relevant issues in surface photopatterning. Regarding point a), we now provide a representative fluorescence image of the photopatterned gels and the quantification of fluorescence levels for several footprints. These are presented in the new Extended Data Figure 1. These data show the absence of systematic accumulation of patterned ECM at the boundary of the pattern. Regarding point b), we used the ball indentation method to quantify substrate stiffness with and without photopatterning, with no significant difference between the two conditions. These data are shown in Extended Data Figure 10 and mentioned in the methods section (Soft PDMS stiffness measurements).

3- I am completely missing the molecular mechanism explaining the cell alignment. The authors performed the cell experiment using a line expressing a fluorescent construct to visualize the plasma membrane; however, there is no information about the relation e.g. between cell orientation, adherens junction regulation and actin (I refer to Figure 4). Also, in Figure 4a, the segmentation seems to provide no consistent and accurate representation of cell-cell contacts.

We agree with the referee on the importance of understanding the mechanisms of cell alignment in curved monolayers. We now provide additional data and discussion on those mechanisms, but their description in molecular detail is beyond the scope of our paper. In this regard, we would like to point out that even in the simplest case of a single cell or a monolayer on a flat 2D substrate, the mechanisms of cell alignment with stress remain poorly understood.
Our goal in this study was to provide a methodological approach to begin to address this problem with high control of geometry and full mapping of the stress tensor. Our data on cell alignment are a proof of concept of the technique, and we expect to address mechanisms of cell alignment in the future. We note that identifying such mechanisms in molecular detail will require not only mapping stress and curvature accurately, as our approach enables, but also controlling the magnitude, rate and strain history of the dome, which is currently unavailable in our system. We are developing new approaches to provide such control by imposing hydraulic pressure on the dome underside. However, we agree with the referee that our original submission required additional evidence and discussion to clarify potential mechanisms of cell alignment. We now provide additional data and discussion on the possibility that, besides aligning in the direction of maximum principal stress, cells might also align in the direction of minimum curvature in order to minimize bending. This might explain why, in the regions of highest curvature of the domes, cells tend to orient perpendicular rather than parallel to the maximum principal stress. New data on curvature measurements can be found in Extended Data Fig. 9. These data are presented on p. 5 (last paragraph) and discussed on p. 7 (second paragraph).

We agree with the reviewer that our segmentation strategy does not accurately capture cell-cell junctions. For the purpose of our study, the main goal of our segmentation approach was the identification of the 3D axes of orientation of the cell body, which does not require accurate information on cell-cell junctions.

Reviewer #2 (Remarks to the Author):

This is an excellent manuscript, which combined experiments and a novel computational tool to infer stress distributions in inflated epithelia with different geometrical footprints. This framework goes significantly beyond the previous work in the literature that inferred stresses only in spherical epithelia. The provided computational tool will also be a great resource for the community. The manuscript is very clearly written, and all claims are supported by evidence. I recommend publication after the following clarifications are made.

We thank the reviewer for the positive comments on our work and for the detailed critique.

1.) Normal tractions showed a close match with the inferred stress along the contact line. Would it be possible to also compare the measured tractions in the x-y plane with the inferred stresses? I can imagine that such a comparison is difficult because the neighboring cells attached to the substrate are also pulling/pushing on the cells at the contact line.

The referee's intuition is correct. At the contact line, the x-y tractions have two contributions (mentioned at the end of the second paragraph of the Results section). The first one is the pulling force exerted by the suspended monolayer. The second one is the pulling force exerted by the adhered monolayer. As a result, the traction vector at the contact line is generally not tangential to the suspended monolayer, and the estimation of x-y tractions is not straightforward.

2.) In most regions, there was a tendency of cells to align with the direction of the maximum principal stress. How does this correlation depend on the magnitude of the maximum principal stress? Are cells more aligned for higher values of the maximum principal stress?
We tested whether the alignment angle was correlated with the maximum principal stress for each of the four regions of the domes (new Extended Data Fig. 8, mentioned on p. 5, last paragraph). We did not find a correlation. As an alternative, we tested whether the alignment angle was correlated with stress anisotropy, expressed as the ratio between the minimal and maximal principal stresses. We did not find a correlation either (also shown in Extended Data Fig. 8). To a large extent we attribute this variability to the high scatter in the angular distribution and possibly to the existence of two competing mechanisms for cell alignment associated with the energy costs of stretching vs. bending. We discuss this possibility on p. 5 (last paragraph) and on p. 7 (second paragraph).

3.) How was the luminal pressure measured in experiments? Was it measured by averaging normal tractions over some regions, or were normal tractions measured at specific locations (e.g., at the center)?

Pressure was measured by averaging normal tractions over a central area of the dome to minimize boundary effects. The size and shape of this area depended on the geometry of the footprint. We now mention this methodological aspect in the methods section (Pressure measurement).

4.) What surface tension is plotted in Fig. 1e? Does each point correspond to average stress over the whole epithelium or to local stress at one of the points on the epithelium?

Each point corresponds to the surface tension computed from Laplace's law using the measured pressure and radius. We now clarify this point in the figure caption.

5.) How are different levels of inflation defined (high/low in Fig. 2, high/medium/low in Fig. 3)? Is there some quantitative metric that was used to bin the data?

Data were binned according to arbitrary height thresholds to ensure a sufficient number of data points in each category. The results did not depend on the specific value of the threshold.

6.) Add descriptions for the hydrostatic stress color maps also in the captions of Figs. 2b and 3b.

Thanks, we added the descriptions to the captions.

7.) In the captions of Fig. 3c,d, clarify that the polar angle is measured relative to the long axis of the ellipse.

We added the requested clarification in the figure caption, as well as a cartoon in the figure illustrating the definition of the angle.

8.) Clarify what the diamond symbols represent in Fig. 4c. I guess they correspond to the averages of the distribution.

They represent the median. We now clarify this in the caption.

9.) In methods, explain what the parameters 'h' and 'a' are in the areal strain calculation. For consistency with the supplements, 'a' should be renamed to 'R_b'.

We agree and have unified the nomenclature with the supplement.

10.) The symbols for the curvature tensor and tractions in Extended Data Figure 4 should be made consistent with the notation in the Supplementary Materials (SM).

We agree and have unified the notation.

11.) In Section 1.5 of the SM, explain how the effective cortical viscosity was implemented.

We added the underlined text to the supplement: However, as discussed in (Perez-Gonzalez et al., Nat Cell Biol, 2021), solving Eq. (4) leads to uncontrolled mesh distortion, as nodes can move tangentially without changing cell area or volume. To avoid such tangential motions of nodes, we added an effective cortical viscosity, which vanishes at equilibrium to avoid biasing the end results.
Taking a similar approach to Ma and Klug (J Comput Phys, 2008), we treated cell cortices as hyperelastic surfaces deforming with respect to an evolving reference configuration. However, instead of an iterative update, we considered the evolution rule of the reference configuration towards the current configuration to be orders of magnitude faster than the dome inflation process, making all stored elastic energy negligible at each increment of the dome inflation process. Therefore, the dome inflation process can be seen as the quasi-static evolution of active-viscous cortices with fixed cellular volume and an increasing lumen volume.

12.) In Section 2.4 of the SM, explain how the initial surface triangulation of the experimental point cloud was done. In 2D one can use standard Delaunay triangulation algorithms, but it is not clear how to do this easily for points in 3D.

The initial surface triangulation of the experimental 3D point cloud was obtained using an implementation of the crust algorithm for surface reconstruction from unorganized 3D sample points for open surfaces, available at: https://www.mathworks.com/matlabcentral/fileexchange/63731-surface-reconstruction-fromscattered-points-cloud-open-surfaces. An explanation and reference to this open-source package has been added to the discussion in Section 2.4 of the Supplementary Material.

13.) In Figs. 5e, 7e, and 8e in the SM, also add green arrows indicating the expected solutions, similarly to how it was done for Figs. 4, 9e, 10e, 11e.

Figs. 5e, 7e, and 8e have been updated to include the green arrows for visual comparison of the expected and obtained solutions. Note that for the spherical cap in Fig. 5e, the mismatch between the directions of the green and black arrows is anticipated, since the surface tension is isotropic, for which the orientation of the principal directions is arbitrary.

Reviewer #3 (Remarks to the Author):

In their manuscript entitled "Mapping mechanical stress in curved epithelia of designed size and shape", Marín-Llauradó et al. present cMSM, a novel computational method to infer stresses in curved epithelia. For non-spherical epithelial lumens, surface stresses cannot be easily calculated or measured due to the anisotropic nature of the stress distribution. The computational approach presented here allows the inference of stress tensors on the surface of curved epithelia of any shape, solely based on their geometry and luminal pressure. As a model of epithelial lumens, the authors grow doming MDCK epithelia with various footprint shapes, in which luminal pressure can be measured using traction force microscopy. They use this method to map the surface stress on non-spherical epithelial domes with a rectangular or ellipsoidal footprint. As expected, stress tensors along the epithelial surface are anisotropic. They validate their computational approach by comparing experimentally accessible traction forces at the cell-substrate interface with their computed values at these locations and find a good correspondence. Finally, they ask if cellular long axes align with the direction of anisotropic stress tensors in non-spherical epithelia, which they approach by cell segmentation. However, they do not find a global correlation of these parameters. The manuscript is nicely written and the figures are clear. The analysis presented in this paper will be a useful tool for researchers interested in epithelial mechanics.
However, due to the lack of an application of the method beyond the synthetic MDCK domes, biological conclusions are limited. Since we lack the relevant expertise, we cannot comment on the validity of the underlying physical assumptions, computational methods and data evaluations. On the experimental side of the paper, the experiments consist of a simple and elegant way to create epithelia of controlled shapes and sizes. The authors present a few interesting conclusions about epithelial mechanics, such as the correlation of dome diameter and lumen pressure, the absence of a mechanosensitive response in this range of lumen sizes and pressures, as well as the non-global alignment of cell axes with stress orientation.

We thank the reviewer for the positive assessment of our work and for the detailed and constructive critique.

- Fig 1d: epithelium curvature seems to scale linearly with lumen pressure (before a plateau for very small lumens of just a few cells, which may be difficult to characterize). This scaling is based on 3 different lumen sizes and the data are quite scattered around the mean values. Could larger domes be made to extend those measurements beyond what is currently explored?

Unfortunately, larger domes cannot be made with our current technology. We rely on the ability of domes to pump osmolytes across their surface to delaminate and inflate. We found that beyond 200 µm in diameter (the maximum size shown in the manuscript), the monolayer does not fully delaminate. We are currently working on new methods to delaminate domes by directly imposing pressure differences, but these methods do not allow for traction measurements and are not yet ready for publication.

- The authors conclude that different properties of the apical/basal cell surface are negligible when considering forces in the epithelial layer. However, in biological contexts, lumens are often embedded in the ECM/underlying cells with their basal side. How will this be reflected in the cMSM approach?

This interesting point highlights an assumption embedded in the presented cMSM formulation: that the membrane supports a uniform luminal pressure. If this is not the case, for instance when the pressurized lumen is supported by ECM on one side, we need to account for the contact traction forces between the ECM and the lumen in the tangential and normal force balance equations. The proposed cMSM approach can be readily modified to account for these additional external forces acting on the membrane. Surface stresses can be resolved in absolute terms if these contact traction forces can be measured in addition to the luminal pressure. This would amount to performing 3D TFM on the ECM in contact with the curved membrane. Although experimentally challenging, the proposed approach can in principle be extended to account for such a biological context. We have added a discussion on this point in Section 2.1 of the Supplementary Material.

- To further validate cMSM, the authors could alter the mechanical properties of epithelial sheets by myosin, actin or microtubule manipulation and see how good their method would be at predicting stresses in these conditions.

This is an excellent suggestion. To further validate our technique, we treated ellipsoidal domes with the ROCK inhibitor Y27632. Upon treatment with this drug, both dome tension and pressure showed a sudden decrease, consistent with the impairment of myosin activity. We show these data in the new Extended Data Fig. 7.
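As a schematic companion to the force-balance discussion above, the following LaTeX fragment states the membrane equilibrium that a stress-inference method of this kind enforces. The notation (surface stress tensor σ^{αβ}, curvature tensor b_{αβ}, external traction t) is generic and chosen here for illustration; it is not taken from the paper's Supplementary Material.

```latex
% Equilibrium of a thin curved membrane carrying surface stress
% \sigma^{\alpha\beta}, with curvature tensor (second fundamental
% form) b_{\alpha\beta}, luminal pressure difference \Delta p, and
% an external traction with normal component t_n and tangential
% components t^\beta (e.g., exerted by surrounding ECM):
\begin{align}
  \sigma^{\alpha\beta} b_{\alpha\beta} &= \Delta p + t_n
    && \text{(normal balance: generalized Laplace law)} \\
  \nabla_\alpha \sigma^{\alpha\beta} &= -\, t^{\beta}
    && \text{(tangential balance)}
\end{align}
% Special case: a spherical cap of radius R with isotropic tension
% \sigma and t = 0, where the trace of b_{\alpha\beta} is 2/R,
% recovers the classical Laplace relation \sigma = \Delta p \, R / 2.
```

With t = 0 this reduces to the free-standing dome case; the spherical-cap special case is the relation invoked earlier when surface tension is computed from Laplace's law using measured pressure and radius. The ECM-supported configuration discussed in the response corresponds to a nonzero t that would have to be measured, for example, by 3D TFM.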
- The authors find no global correlation of cell axis alignment with the main stress tensor direction. Which other factors could influence cell orientation?

We thank the referee for raising this point, which has led us to rethink our mechanical analysis of cell orientation. On flat 2D surfaces, we showed in the past that cells in epithelial monolayers align and move in the direction of maximum principal stress (Tambe et al., Nat Mater, 2011). However, in curved monolayers there is an additional mechanical ingredient to take into account, which is the resistance of cells to bending (see, for example, the theory by Biton and Safran, Phys Biol, 2009, to explain cell orientation on cylindrical wires). When the coupling between bending and alignment dominates, cells will tend to align in the direction of lower curvature (lower bending) rather than in that of maximum stress. In our ellipsoidal domes, the directions of minimum curvature and maximum principal stress are roughly orthogonal, and curvature varies depending on the dome region, providing an interesting configuration to test the competing effects of stretching vs. bending. Consistent with our reasoning, in the regions with the lowest curvature anisotropy (those labelled as minor axis top, minor axis side and major axis top in Fig. 4d) cells were predominantly oriented in the direction of maximum stress. By contrast, in the region of highest curvature anisotropy (labelled as major axis side) cells tended to orient with the direction of minimal curvature (i.e., normal to maximum stress). We now discuss this point in the main manuscript. New data on curvature measurements can be found in Extended Data Fig. 9. These data are presented on p. 5 (last paragraph) and discussed on p. 7 (second paragraph). We plan to further test these ideas in future studies using devices in which the magnitude and rate of inflation, as well as the mechanical history of the sample, can be controlled.

- References 5 and 6 about the developing otic vesicle concern zebrafish and not drosophila.

Thanks for pointing out this ambiguity. Reference 5 is about the zebrafish otic vesicle and reference 6 is about the drosophila embryo. We have split the references and moved them earlier in the sentence to clarify this point.

- Fig 2d: there is "side x" overlaid on the (many) asterisks indicating statistical significance of the middle graph.

Thanks for pointing out this mistake. It has now been corrected.

REVIEWERS' COMMENTS

Reviewer #1 (Remarks to the Author):

The authors have addressed my comments and the manuscript has significantly improved. I just have two minor points:

1. In the discussion, the authors have added a paragraph on the possible explanation for cell alignment. Could the authors elaborate on the involvement of the cell cytoskeleton and cell-cell interactions, considering that the domes are free-standing?

2. Fig 2d central panel has the text "side x" overlapping the significance symbols for the long group. Please remove this text.

Reviewer #2 (Remarks to the Author):

The authors have addressed all of my previous concerns, and I recommend publication. Minor comments:

1.) The polar angle in Figure 3c is defined as beta, but the main text refers to the angle theta.

2.) The middle panel in Figure 2d has the text "side x" overlaid on top of the asterisks.

3.) In Extended Data Figure 8, the top and bottom rows of panels should be switched, otherwise the plots are not consistent with the captions.
Reviewer #3 (Remarks to the Author):

In their revised version, the authors have analyzed the stress and pressure of domes after treatment with Y27632. While this addition provides information about the potential applications of their powerful and interesting analysis, the way this experiment is currently presented is unfortunately not satisfactory. Stresses and pressure are measured before and after the addition of Y27632. There is no description of that experiment in the methods: where is the drug coming from, what is the drug diluted into, at what concentration, are the domes treated with the vehicle before being treated with the drug (if that is DMSO, it could have an effect of its own)? At present, the reader cannot understand how the experiments were performed and they cannot be reproduced. Also, Y27632 is presented as an inhibitor of actomyosin activity. Unfortunately, that is too often what is found in the literature, but it is actually not that straightforward. Y27632 inhibits Rock, as mentioned by the authors, which can lead to reduced actomyosin activity. However, Y27632 is also a potent inhibitor of atypical protein kinase C (aPKC) (Atwood and Prehoda, 2009), which is key to maintaining polarized fluid transport in MDCK cells. Therefore, the measurements presented by the authors could be interpreted very differently if the effects of Y27632 were the result of aPKC inhibition: the pressure drop would result from decreased pumping rather than decreased actomyosin activity. This needs to be discussed, and ideally a different perturbation would be helpful if the authors want to make a point of actomyosin being important for dome stress and luminal pressure. In addition, the authors could look into areas of the monolayer without domes: if tractions to the ECM were decreased after Y27632 treatment, that would be a good indication that actomyosin activity is affected.

While this addition provides information about the potential applications of their powerful and interesting analysis, the way this experiment is currently presented is unfortunately not satisfactory. Stresses and pressure are measured before and after the addition of Y27632. There is no description of that experiment in the methods: where is the drug coming from, what is the drug diluted into, at what concentration, are the domes treated with the vehicle before being treated with the drug (if that is DMSO, it could have an effect of its own)? At present, the reader cannot understand how the experiments were performed and they cannot be reproduced.

In the revised manuscript we provide the protocol used for Y27632 in the methods section. We note that this protocol is the same as that used in Latorre et al. (Nature, 2018).

Also, Y27632 is presented as an inhibitor of actomyosin activity. Unfortunately, that is too often what is found in the literature, but it is actually not that straightforward. Y27632 inhibits Rock, as mentioned by the authors, which can lead to reduced actomyosin activity. However, Y27632 is also a potent inhibitor of atypical protein kinase C (aPKC) (Atwood and Prehoda, 2009), which is key to maintaining polarized fluid transport in MDCK cells. Therefore, the measurements presented by the authors could be interpreted very differently if the effects of Y27632 were the result of aPKC inhibition: the pressure drop would result from decreased pumping rather than decreased actomyosin activity.
This needs to be discussed, and ideally a different perturbation would be helpful if the authors want to make a point of actomyosin being important for dome stress and luminal pressure. In addition, the authors could look into areas of the monolayer without domes: if tractions to the ECM were decreased after Y27632 treatment, that would be a good indication that actomyosin activity is affected.

This is a great point. We now provide new evidence showing that the response to Y27632 arises from inhibition of actomyosin rather than an alteration in flows. First, we follow the reviewer's suggestion and show that in the areas of the monolayer without domes, Y27632 drives an acute drop in tractions, indicating a downregulation of actomyosin activity. Second, we now show that quickly after the addition of Y27632, tension in the dome drops whereas dome volume does not change. This result provides further support for our conclusion that Y27632 mainly affects actomyosin activity rather than transport within the time scale of our experiments. The new data are presented in Supplementary Fig. 7 (panels g and h) and discussed on page 5, paragraph 3.
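As a concrete companion to the statistical test described in the first-round response to Reviewer #1 (locating an experimental median within the medians of simulated uniform distributions), here is a minimal Python sketch. The angle range, sample size, placeholder data and the two-sided p-value convention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical alignment angles (degrees, 0-90) between cell long axes
# and the direction of maximum principal stress; replace with real data.
angles = rng.uniform(0, 90, size=120)
observed_median = np.median(angles)

# Null model: 10^4 uniform distributions with the experimental sample size.
n_sim = 10_000
sim_medians = np.median(rng.uniform(0, 90, size=(n_sim, angles.size)), axis=1)

# Locate the experimental median within the simulated median distribution.
p_low = np.mean(sim_medians <= observed_median)
p_high = np.mean(sim_medians >= observed_median)
p_value = min(2.0 * min(p_low, p_high), 1.0)  # two-sided convention (an assumption)

print(f"observed median = {observed_median:.1f} deg, p = {p_value:.3f}")
```

Whether the original test was one- or two-sided is not stated in the response; the two-sided form above is one reasonable choice.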
Understanding the Strategies to Overcome Phosphorus Deficiency and Aluminum Toxicity by Ryegrass Endophytic and Rhizosphere Phosphobacteria

Phosphobacteria, secreting organic acids and phosphatases, usually favor plant performance in acidic soils by increasing phosphorus (P) availability and complexing aluminum (Al). However, it is not well known how P deficiency and Al toxicity affect phosphobacterial physiology. Since P and Al problems often co-occur in acidic soils, we therefore evaluated the single and combined effects of P deficiency and Al toxicity on growth, organic acid secretion, malate dehydrogenase (mdh) gene expression, and phosphatase activity of five Al-tolerant phosphobacteria previously isolated from ryegrass. These phosphobacteria were identified as Klebsiella sp. RC3, Stenotrophomonas sp. RC5, Klebsiella sp. RCJ4, Serratia sp. RCJ6, and Enterobacter sp. RJAL6. The strains were cultivated in mineral media modified to obtain (i) high P in the absence of Al toxicity, (ii) high P in the presence of Al toxicity, (iii) low P in the absence of Al toxicity, and (iv) low P in the presence of Al toxicity. High and low P were obtained by adding KH2PO4 at final concentrations of 1.4 and 0.05 mM, respectively. To avoid Al precipitation, AlCl3 × 6H2O was previously complexed to citric acid (the sole carbon source) at a concentration of 10 mM. The secreted organic acids were identified and quantified by HPLC, relative mdh gene expression was determined by qRT-PCR, and phosphatase activity was colorimetrically determined using p-nitrophenyl phosphate as substrate. Our results revealed that, although a higher secretion of all organic acids was achieved under P deficiency, the patterns of organic acid secretion were variable and dependent on treatment and strain. Organic acid secretion was exacerbated when Al was added to the media, particularly in the form of malic and citric acid. The mdh gene expression was significantly up-regulated in the strains RC3, RC5, and RCJ6 under P deficiency and Al toxicity. In general, Al-tolerant phosphobacteria under P deficiency increased both acid and alkaline phosphatase activity with respect to the control, an effect that deepened when Al was present. Knowledge of this bacterial behavior in vitro is important to understand and predict the behavior of phosphobacteria in vivo. This knowledge is essential to generate smart and efficient biofertilizers based on Al-tolerant phosphobacteria, which could be used extensively in acidic soils.
INTRODUCTION

Acidic volcanic soils are frequently characterized by high contents of total phosphorus (P) that is only sparingly bioavailable as orthophosphate anions (HPO4^2− and H2PO4^−), which are required as nutrients by plants. Orthophosphate anions are strongly adsorbed on the surfaces of clay fractions and other particles and colloids of acidic soils, or precipitated as inorganic salts, principally with aluminum (Al) and iron (Fe) (Redel et al., 2016). Under acidic soil conditions, harmless Al compounds (such as aluminosilicates and Al oxides) are solubilized into Al3+, which is generally toxic to microorganisms (Lemire et al., 2010) and plants (Wulff-Zottele et al., 2014). In vegetative tissues, low Al3+ concentrations are sufficient to affect cellular integrity at the root apex, thereby limiting water and nutrient uptake by sensitive plants (Kochian, 2003). Accordingly, it is desirable to develop environmentally friendly strategies focused on increasing the efficiency with which plants can access unavailable soil P forms, as well as tolerance strategies to alleviate Al toxicity in plants grown in acidic soils.

Higher plants have naturally developed strategies to survive in acidic soils, in which the production and exudation of phosphatases and low-molecular-weight organic acids, such as citric, malic, oxalic and succinic acids, play a central role (Inostroza-Blancheteau et al., 2012; Chen and Liao, 2016). Root exudates stimulate growth and chemotaxis of microorganisms in the rhizosphere, resulting in a nutrient-rich microbial hot-spot usually inhabited by numerous beneficial bacteria, known as plant growth-promoting rhizobacteria (PGPR) (Richardson et al., 2009). Diverse studies have suggested the use of PGPR to improve plant P nutrition; such PGPR are also named phosphobacteria because they can increase orthophosphate availability to plants by secreting P-hydrolyzing enzymes (Jorquera et al., 2008; Patel et al., 2010) and organic acids (Vyas and Gulati, 2009; Sharon et al., 2016). While phosphobacteria are able to synthesize and secrete both acid (ACP, EC 3.1.3.2) and alkaline (ALP, EC 3.1.3.1) phosphatases (Azcón-Aguilar and Barea, 2015), plants can only produce ACP (Spohn et al., 2015).
In the rhizosphere, both ACP and ALP mineralize P attached to organic compounds, whereas organic acids are fundamental in solubilizing P strongly adsorbed to soil complexes via ligand exchange (Richardson et al., 2009). Organic acids also solubilize P precipitated as inorganic Al and Fe salts via metal chelation in the rhizosphere, thus avoiding further uptake of toxic Al3+ by the plant (Richardson et al., 2009). Organic acids therefore have an important function in acidic soils, both in P solubilization and in Al tolerance. Organic acid production is catalyzed by various interrelated enzymes, such as malate dehydrogenase (MDH, EC 1.1.1.37), which is encoded by the mdh gene. MDH is a ubiquitous enzyme that catalyzes the reversible reduction of oxaloacetate to malate (Lü et al., 2012a). Some studies have demonstrated that both P deficiency (Wang et al., 2014) and Al toxicity up-regulate mdh gene expression in plants, which has been related to an enhanced exudation of the organic acids malate and citrate (Ligaba et al., 2004). Therefore, the up-regulation of mdh gene expression may underpin enhanced P efficiency (Lü et al., 2012b) and Al tolerance (Tesfaye et al., 2001) in acidic soils. Although plant mdh gene expression under Al and P stress is known, mdh gene expression in PGPR under these adverse conditions has not been previously examined. Likewise, although organic acid secretion and phosphatase activity are widely described characteristics of soil bacteria, there is very little information available about their patterns of production and activity in vitro under controlled conditions of P deficiency and Al toxicity. To date, the underlying mechanisms of bacterial adaptation to the environmental stress factors typically present in acidic soils, especially Al toxicity, remain poorly understood. Many studies have suggested and demonstrated the agronomic potential of using phosphobacteria as a suitable and sustainable biotechnological alternative to increase P availability in acidic soils (Jorquera et al., 2008). However, it is still necessary to mechanistically understand the functional responses of phosphobacteria to these adverse soil conditions, as well as the genetic regulation of some of these key responses. We therefore hypothesized that Al-tolerant phosphobacteria use key metabolic strategies similar to those of plants to overcome P deficiency and Al toxicity in soils. To confirm this hypothesis, we evaluated the single and combined effects of P deficiency and Al toxicity on growth, production of organic acids, mdh gene expression and phosphatase activity of five phosphobacteria previously selected for their ability both to solubilize and mineralize insoluble P forms and to tolerate high Al concentrations (Mora et al., 2017).

DNA Extraction and PCR Reaction

Bacterial DNA of each strain was extracted from overnight cultures in LB broth using a Gentra Puregene Yeast/Bact. Kit (Qiagen, Inc.) according to the manufacturer's instructions. The mdh gene fragments were amplified by PCR using the primer set mdh2 (5′-GCG CGT AAG CCG GGT ATG GA-3′) and mdh4 (5′-CGC GGC AGC CTG GCC CAT AG-3′) as described by Yap et al. (2004). Amplicons were sequenced in both directions by Macrogen, Inc. (Seoul, Korea). The consensus nucleotide sequences were compared with the GenBank database of the National Center for Biotechnology Information (NCBI) using BLAST tools (http://www.ncbi.nlm.nih.gov/BLAST).
The nucleotide sequences of the mdh gene segments were deposited in the GenBank database under accession numbers MG023310-MG023319.

Culture Conditions

In order to test the effects of P deficiency and Al toxicity on the selected phosphobacteria, a standard mineral culture medium (MCM) was prepared, modified from those described by Guida et al. (1991) and Appanna and St Pierre (1994), as follows. The MCM contained (L−1): 1.0 g NH4Cl, 1.0 g KCl, 0.01 g CaCl2 × 2H2O, 0.87 g K2SO4, 0.2 g MgSO4 × 7H2O, 54.4 mg KH2PO4 (the only P source), 1.0 mg FeSO4 × 7H2O and trace elements (L−1): 10 µg H3BO3, 11.19 µg MnSO4 × H2O, 124.6 µg ZnSO4 × 7H2O, 78.22 µg CuSO4 × 5H2O and 10 µg MoO3. The FeSO4 solution was sterilized by filtration (0.22 µm mesh), whereas the MgSO4 × 7H2O and trace element solutions were separately autoclaved and added aseptically to the sterile salt medium. The MCM was buffered with citrate buffer (pH 5.4), and the citrate from the buffer was used as the only C source for bacterial growth at a final concentration of 4.0 g L−1. Four adapted MCM media were additionally formulated to obtain the following combinations: (i) high P in the absence of Al toxicity (P+Al−), (ii) high P in the presence of Al toxicity (P+Al+), (iii) low P in the absence of Al toxicity (P−Al−), and (iv) low P in the presence of Al toxicity (P−Al+). High and low P concentrations were defined according to Lidbury et al. (2016) by adding KH2PO4 at final concentrations of 1.4 and 0.05 mM, respectively. To obtain high Al concentrations, the MCM was supplemented with AlCl3 × 6H2O at 10 mM, because this is the maximum tolerable Al concentration detected for these strains by Mora et al. (2017). To avoid Al precipitation, AlCl3 × 6H2O was complexed to the 4.0 g of citric acid prior to sterilization, as described by Appanna and St Pierre (1994). No Al was added to the two Al− media. The pH of all media was adjusted to 5.4 with diluted NaOH. The MCM with P+Al− was considered the control under optimal culture conditions. The phosphobacterial strains were grown in 10 mL of standard MCM at 30 °C on a rotary shaker at 150 rpm. Late-exponential cells were harvested by centrifugation at 3,500 rpm for 5 min. The pellets were washed three times with sterile saline solution (0.85% NaCl), resuspended and diluted to a final optical density of 0.1 at 600 nm (OD600). Subsequently, 50 µL aliquots of each bacterial suspension were inoculated (in triplicate) into 10 mL of each modified MCM medium. All bacterial cultures were incubated at 30 °C on a rotary shaker at 150 rpm. Growth was monitored as the change in optical density at 600 nm for 48 h using a Multiskan GO microplate spectrophotometer (Thermo Fisher Scientific Inc.).

Secretion of Organic Acids

All bacterial cultures were harvested during the late exponential phase of growth. For this, 2 mL aliquots were transferred to 1.5 mL tubes and centrifuged at 13,000 rpm for 1 min. Supernatants were filtered through 0.22 µm syringe filters and stored at −20 °C until analysis. Then, 20 µL of each filtrate (in triplicate) was analyzed by high-performance liquid chromatography (HPLC) (Hitachi Primaide, Japan) equipped with a UV-210 nm detector. The organic acid separation was carried out on RP-18 150833 columns as described by Mora et al. (2017).
Secretion of Organic Acids

All bacterial cultures were harvested during the late exponential phase of growth. For this, 2-mL aliquots were transferred to 1.5-mL tubes and centrifuged at 13,000 rpm for 1 min. Supernatants were filtered through 0.22-µm syringe filters and stored at -20 °C until analysis. Then, 20 µL of each filtrate (in triplicate) was analyzed by high-performance liquid chromatography (HPLC; Hitachi Primaide, Japan) equipped with a UV detector set at 210 nm. Organic acids were separated on an RP-18 column as described by Mora et al. (2017) and identified by comparing their retention times and chromatogram peak areas with those of citric, malic, oxalic, and succinic acid standards, using the Primaide System Manager software. These organic acids were selected because they are the main organic acids exuded by Lolium perenne in acidic soils (Rosas et al., 2011).

Relative Expression of the Malate Dehydrogenase (mdh) Gene

Relative expression of the mdh gene was determined by quantitative reverse-transcription PCR (RT-qPCR) as follows. Total RNA was isolated from 2 mL of each bacterial culture using TRIzol reagent (Life Technologies) and treated with RNase-free DNase I (New England Biolabs) to eliminate DNA contamination, according to the manufacturers' instructions. RNA concentration and purity were measured spectrophotometrically using a Multiskan GO instrument (Thermo Fisher Scientific, Inc.). The RNA concentration was adjusted to 100 ng µL-1, and cDNA was synthesized using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Thermo Fisher Scientific, Inc.) according to the manufacturer's instructions. Relative quantification was performed on an Applied Biosystems StepOne Real-Time PCR System in 20-µL reaction mixtures containing PowerUp SYBR Green master mix (Applied Biosystems, Life Technologies), 1 µL of a 1:10 dilution of the synthesized cDNA, and 600 nM of each primer. PCRs were performed in quadruplicate under the following conditions: an initial denaturing step at 95 °C for 10 min, followed by 40 cycles of 95 °C for 30 s, 52 °C for 30 s, and 60 °C for 1 min. The 16S rRNA gene was used as an endogenous control, amplified with the universal primer set Bac1369F (5'-CGG TGA ATA CGT TCY CGG-3') and Prok1492R (5'-GGW TAC CTT GTT ACG ACT T-3') (Suzuki et al., 2000). Target-gene quantification data were normalized to the endogenous control, and the normalized values were subjected to the 2^-ΔΔCt method (Pfaffl, 2001) to estimate the fold change of the target gene relative to the endogenous control. The resulting data were transformed to log2 fold change.

Phosphatase Activity

Extracellular and cell-associated phosphatase activities of the phosphobacteria were assayed at acidic (pH 5.5) and alkaline (pH 8.0) pH using p-nitrophenyl phosphate (p-NPP) as substrate. Extracellular phosphatase activity was determined in the supernatants of the bacterial culture media, whereas cell-associated phosphatase activity was determined in sonicated pellets as described by Patel et al. (2010). To measure phosphatase activity, 100-µL aliquots of supernatant or sonicated cell suspension were incubated with 5 µL of 0.115 M p-NPP and 5 µL of 0.1 M MgCl2, together with 5 µL of 0.1 M Na-acetate buffer (pH 5.5) for ACP activity or 0.1 M Tris-HCl buffer (pH 8.0) for ALP activity. Reactions were incubated in the dark at 30 °C for 30 min and then stopped by adding 115 µL of 2 N NaOH. The amount of p-nitrophenol (p-NP) released was determined by measuring the absorbance at 405 nm in 96-well microplates using a Synergy HT multimodal detector (BioTek). One unit of phosphatase was defined as the amount of enzyme releasing 1 µmol of p-NP per min. Phosphatase activity was expressed on a protein basis as mU mg-1 protein; protein concentration was quantified colorimetrically by the Bradford method.
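As a worked illustration of the unit definition above, the sketch below converts a blank-corrected A405 reading into specific activity. The calibration slope, incubation time handling, and protein value are invented for the example; in practice the p-NP calibration would come from standards run under the same assay conditions.

```r
# Worked example: blank-corrected A405 -> specific phosphatase activity.
# All numbers are invented for illustration.
a405       <- 0.85    # blank-corrected absorbance at 405 nm
slope      <- 0.055   # A405 per nmol p-NP, from an assumed p-NP standard curve
time_min   <- 30      # incubation time (min)
protein_mg <- 0.12    # protein in the assayed aliquot, by Bradford (mg)

pnp_nmol <- a405 / slope        # nmol p-NP released during the assay
mU       <- pnp_nmol / time_min # 1 mU = 1 nmol p-NP per min (1 U = 1 umol/min)
mU / protein_mg                 # specific activity, mU per mg protein
```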
Data Analysis

The data were analyzed by ANOVA, and means were compared using Tukey's multiple comparison test for mean separation. Correlations among organic acid production and mdh gene expression were tested with Spearman's non-parametric correlation analysis. In all analyses, differences at P ≤ 0.05 were considered significant. The analyses were conducted using IBM SPSS 21 software. In addition, a principal component analysis (PCA) plot was generated using the statistical software package R (http://cran.at.r-project.org).
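A minimal sketch of this statistical workflow in base R is given below; the data frame and column names are placeholders invented for the example, not the authors' dataset.

```r
# Mock data: four media treatments, three replicates each.
set.seed(1)
d <- data.frame(
  treatment  = factor(rep(c("P+Al-", "P+Al+", "P-Al-", "P-Al+"), each = 3)),
  citrate    = runif(12),   # organic acid concentration (placeholder units)
  malate     = runif(12),
  mdh_log2fc = rnorm(12)    # mdh expression, log2 fold change
)

# One-way ANOVA followed by Tukey's HSD for mean separation (P <= 0.05)
m <- aov(citrate ~ treatment, data = d)
summary(m)
TukeyHSD(m)

# Spearman's non-parametric correlation between secretion and expression
cor.test(d$malate, d$mdh_log2fc, method = "spearman")

# PCA on the scaled response variables
pca <- prcomp(d[, c("citrate", "malate", "mdh_log2fc")], scale. = TRUE)
summary(pca)  # proportion of variance explained per component
```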
RESULTS

Bacterial Growth

The effects of P-deficiency and Al-toxicity on the growth of the five selected phosphobacteria are illustrated in Figure 1. The results revealed that all five strains were able to grow in the modified MCM, even under P-deficiency (0.05 mM KH2PO4) and Al addition (10 mM AlCl3·6H2O). In all tested strains, the exponential growth phase was not significantly (P ≤ 0.05) affected by Al when the phosphobacteria were grown at the high P concentration (1.4 mM KH2PO4). However, the stationary growth phase showed a lower cell density (OD600) and began earlier when the phosphobacteria were grown in the presence of Al (18-24 h) than under control conditions (21-33 h). Figure 1 also shows that the growth of all strains was significantly (P ≤ 0.05) affected by P-deficiency: the exponential growth phase was slower, the stationary phase was reached later (36-42 h), and a significantly (P ≤ 0.05) lower OD600 (0.210-0.320) was observed compared with cultures at the higher P concentration (0.360-0.480). Interestingly, the adverse effect of P-deficiency on growth was more pronounced in the presence of Al, particularly in the P-Al+ MCM. Figure 1 and Table S1 (available in the online Supplementary Material) also show different degrees of tolerance to P-deficiency and Al-toxicity among the strains tested. After 24 h of incubation, Klebsiella sp. RCJ4 was the strain least inhibited by Al-toxicity (P+Al+); however, its relative growth with respect to the control gradually decreased, making it the most inhibited strain at 36 and 48 h of growth (Table S1). In contrast, Klebsiella sp. RC3 and Stenotrophomonas sp. RC5 were the strains least inhibited by Al-toxicity at 36 and 48 h, although they were among the most inhibited at 24 h. Interestingly, four strains grown under Al-toxicity equaled the biomass of the controls after 48 h of incubation. Similarly, Klebsiella sp. RC3 and Stenotrophomonas sp. RC5 showed the highest tolerance to P-deficiency, both in the presence and absence of Al, as they exhibited the lowest growth inhibition at 24, 36, and 48 h of incubation in the P-Al- and P-Al+ MCM. Meanwhile, Klebsiella sp. RCJ4, Enterobacter sp. RJAL6, and Serratia sp. RCJ6 were, on average, the strains with the greatest growth inhibition under P-deficiency, Al-toxicity, and the combination of both factors, respectively. These findings demonstrate that phosphobacterial growth is limited more by P-deficiency than by Al-toxicity, at least in Al-tolerant phosphobacteria, which are well adapted to the presence of this toxic cation.

Secretion of Organic Acids

In general terms, HPLC analysis (Figure S1; available in the online Supplementary Material) revealed that the selected strains secreted all of the tested organic acids (Table 1) in at least one of the treatments. Under the conditions considered optimal (P+Al-), the patterns of organic acid secretion were highly variable and strain-dependent. However, when the modified MCM was supplemented with Al (P+Al+), all tested strains secreted succinic acid, while the other organic acids were barely secreted. Similar patterns were observed when the strains were grown under P-deficiency (P-Al-), where all tested strains secreted succinic acid. In addition, comparing culture media with high (P+) and low (P-) P, higher secretion of all organic acids was achieved under P-deficiency. Organic acid secretion was further exacerbated when Al was added to the media, particularly for malic and citric acids. Table 1 also shows that Enterobacter sp. RJAL6 secreted the highest concentration of succinic acid and was the only strain to secrete citric acid under Al-toxicity (P-Al+), whereas under P-deficiency (P-Al-) the highest concentrations of citric and malic acid were secreted by Serratia sp. RCJ6, and of succinic and oxalic acid by Stenotrophomonas sp. RC5. Stenotrophomonas sp. RC5 was also the strain that produced the most citric acid when both stressors were present, while Enterobacter sp. RJAL6 produced a significantly higher concentration of malic acid in the P+Al+ MCM. Thus, the stimulation of organic acid production and secretion under the conditions typical of acidic soils appears to be a general characteristic of plant-associated phosphobacteria, independent of species. With respect to the Spearman's correlation coefficients presented in Table S2 (available in the online Supplementary Material), malic acid secretion had a significant (P ≤ 0.05) positive correlation with oxalic and citric acid secretion under both Al-toxicity treatments (P+Al+ and P-Al+) and under P-deficiency (P-Al-), respectively. Meanwhile, citric acid secretion was negatively correlated with oxalic acid under both P-deficiency treatments (P-Al- and P-Al+) and with malic acid secretion only in the P-Al+ treatment. Likewise, succinic acid secretion was negatively correlated with malic and citric acid under P-deficiency (P-Al-) and positively correlated with citric and oxalic acid secretion under P+Al+ and P-Al+, respectively.

Relative Expression of the mdh Gene

Reverse transcription followed by qPCR was used to determine the relative expression of the mdh gene in the selected strains subjected to P-deficiency and Al-toxicity. The results illustrated in Figure 2 revealed that mdh gene expression was significantly (P ≤ 0.05) up-regulated with respect to the control treatment (P+Al-) under P-deficiency, Al-toxicity, and the combination of both factors in the strains Klebsiella sp. RC3 and Serratia sp. RCJ6. Meanwhile, Al-toxicity at both high and low P (P+Al+ and P-Al+) caused significant (P ≤ 0.05) up-regulation of mdh gene expression in Stenotrophomonas sp. RC5. In contrast, P-deficiency and/or Al-toxicity did not significantly (P ≤ 0.05) affect mdh gene expression in the remaining strains. When the data were subjected to principal component analysis (Figure 3), the first two dimensions explained 57.0% of the total variation, with principal component 1 (PC1) accounting for 32.5% and principal component 2 (PC2) for 24.5% of the variance. Interestingly, the PCA shows that, in general, the samples grouped according to treatment rather than species.
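For reference, the log2 fold-change values reported here follow the standard ΔΔCt arithmetic described in the Methods. The sketch below uses hypothetical Ct values and assumes roughly 100% amplification efficiency; it is an illustration, not the study's actual data.

```r
# Hypothetical Ct values for one strain in control vs. stress medium.
ct <- data.frame(sample = c("P+Al- (control)", "P-Al+"),
                 mdh    = c(21.3, 19.8),   # target gene
                 rrs16S = c(14.1, 14.0))   # 16S rRNA endogenous control

dct  <- ct$mdh - ct$rrs16S   # normalise target Ct to the 16S control
ddct <- dct[2] - dct[1]      # stress treatment minus control
fold <- 2^(-ddct)            # relative expression (2^-ddCt)
log2(fold)                   # log2 fold change, as plotted in Figure 2
```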
Phosphatase Activity

Figure 4 illustrates both the cell-associated and the extracellular acid (ACPase) and alkaline (ALPase) phosphatase activities of the bacteria. The results showed considerable variation in phosphatase activities among treatments and strains. In general, phosphobacteria grown under P-deficiency significantly (P ≤ 0.05) increased cell-associated and extracellular ACPase activity with respect to the control treatment (Figures 4A,C). Similar observations were made for ALPase activity, where significantly (P ≤ 0.05) higher cell-associated and extracellular ALPase activities were attributed to the P-deficiency treatments (Figures 4B,D). Under Al-toxicity (P+Al+), almost all strains significantly (P ≤ 0.05) increased their cell-associated acid and alkaline phosphatase activities, which, however, did not result in a significant (P ≤ 0.05) increase in extracellular phosphatase activity. The highest cell-associated ACPase activity under P-deficiency and Al-toxicity was registered for Klebsiella sp. RCJ4, followed by Stenotrophomonas sp. RC5 (Figure 4A), while the highest extracellular ACPase activity under P-deficiency was also registered for Klebsiella sp. RCJ4, followed by Klebsiella sp. RC3 (Figure 4C). The highest cell-associated ALPase activity was registered for Klebsiella sp. RCJ4 under P-deficiency and Al-toxicity (Figure 4B), while the highest extracellular ALPase activity was registered for Klebsiella sp. RCJ4, followed by Klebsiella sp. RC3, under P-deficiency (Figure 4D). In contrast, the lowest phosphatase activities were mostly registered for Enterobacter sp. RJAL6, particularly for extracellular phosphatase, independently of treatment (Figures 4C,D). These results demonstrate that phosphobacteria respond efficiently to P shortage by increasing their production and secretion of both acid and alkaline phosphatases.

DISCUSSION

P-deficiency and Al-toxicity often coexist in acidic soils, negatively affecting diverse organisms, including bacteria (Appanna and St Pierre, 1994). Al-P interactions are highly relevant to the physicochemical properties of Chilean acidic volcanic soils (Mora et al., 2017). However, the effects of Al-P interactions on the growth and performance of soil bacteria have scarcely been studied, mainly because Al spontaneously precipitates with P at acidic pH. In the present study, we described a mineral culture medium (MCM) that proved suitable for evaluating the response of phosphobacteria to P-deficiency, Al-toxicity, and the combination of both environmental stressors in vitro. The response to these conditions was tested in five phosphobacteria strains, selected particularly for their ability to tolerate high Al concentrations efficiently (Mora et al., 2017) and to grow under P shortage (this study), but also for possessing several plant growth promoting (PGP) traits, such as production of indole acetic acid and siderophores, as well as 1-aminocyclopropane-1-carboxylate deaminase, phytate-mineralizing, and P-solubilizing activities (Mora et al., 2017). In this context, a minimum inhibitory concentration (MIC) assay revealed that, after 4 days of bacterial incubation in culture media supplemented with 0-40 mM Al(NO3)3, the selected strains were able to tolerate Al concentrations of up to 10 mM (Mora et al., 2017).
Similarly, we determined by a MIC assay (data not shown) that bacterial growth was inhibited by 50% at 0.05 mM KH2PO4 (the P concentration chosen to supplement the P- MCM) and almost completely inhibited at 0.003 mM KH2PO4. The Al-tolerance and the ability to grow at low P availability, along with the PGP traits, make the selected strains good candidates for use in sustainable agriculture in acidic soils. The data obtained in the present study demonstrate that high Al concentrations reduced growth after 24 h (by 4.6-18.8%) and induced an earlier stationary phase in the Al-tolerant phosphobacteria strains, as already described for rhizobia and Pseudomonas fluorescens strains (Keyser and Munns, 1979; Wood, 1995). The negative effect of Al on bacterial growth, although not fully elucidated, is attributed to this toxic cation disrupting the function of cell membranes and cell walls (Illmer and Erlebach, 2003), binding to DNA and thereby affecting DNA synthesis (Piña and Cervantes, 1996), and interfering with central metabolism by competing with Fe and Mg (Lemire et al., 2010), resulting in an energy deficit, disturbed aerobic metabolism, and increased oxidative stress (Hamel and Appanna, 2001; Mailloux et al., 2008). Despite this potential negative effect, the biomass of the selected strains after 48 h of incubation under Al-toxicity tended to equal that of the controls, corroborating that the selected bacteria are naturally adapted to the presence of this metal. In line with expectations, our results also revealed that P-deficiency significantly decreased bacterial growth, even more severely than Al-toxicity. It has been extensively described that P-deficiency produces a significant decrease in bacterial growth rates (Munns and Keyser, 1981; Wood and Cooper, 1984). These results are readily explained by the fact that P is an essential macronutrient for all organisms, required for the synthesis of many biomolecules such as ATP, phospholipids, and nucleic acids. It is noteworthy that the detrimental effect of P-deficiency on bacterial growth was exacerbated by the presence of Al in the culture media. Despite the inherent difficulty of studying the joint effect of P and Al on bacterial growth, a previous study by Appanna and St Pierre (1994) described that the Al-tolerant bacterium P. fluorescens decreased its cellular yield by 10-40% in the presence of 3-15 mM Al when grown in P-deficient media. These authors concluded that the Al-tolerance of P. fluorescens depends on the P concentration of the medium: when P is in excess, Al is immobilized as an exocellular, gelatinous, insoluble residue, subsequently identified as phosphatidylethanolamine (Appanna and St Pierre, 1996), a phospholipid typically present in biological membranes. In contrast, under P-deficiency the Al was localized in soluble metabolite(s) in the supernatant (Appanna and St Pierre, 1994), which apparently derive from citric acid, an Al-chelating organic acid (Hamel et al., 1999). Therefore, phosphate and organic acids appear to be important ligands involved in Al detoxification (Appanna and St Pierre, 1994). Organic acids secreted into the rhizosphere have traditionally been implicated in important soil processes, including the release and uptake of nutrients by microorganisms and plants (Jones, 1998; Azcón-Aguilar and Barea, 2015; Sharon et al., 2016) and the detoxification of metals by plants (Chen and Liao, 2016).
The release of organic compounds by soil bacteria is a well-documented phenomenon (Vyas and Gulati, 2009; Gulati et al., 2010), and many aspects of the underlying metabolic machinery have already been investigated (Singh et al., 2009). Nevertheless, the factors influencing organic acid production and secretion have yet to be fully elucidated. This is because organic acid synthesis is a complex and intricate process involving the activities of multiple enzymes and the expression of their corresponding coding genes (Yin et al., 2015), which can be down-regulated or up-regulated depending on the requirements of the bacterial cell (Hamel and Appanna, 2001). Indeed, it is well known that situations such as metal toxicity and the consequent oxidative stress induce several metabolic reconfigurations to ensure energy production and bacterial survival. Our study confirms that the phosphobacteria tested here have the ability to metabolize the Al-citrate complex to obtain energy and produce biomass. It should also be noted that all phosphobacteria strains were able to secrete the main organic acids that are also exuded by L. perenne, their host plant (Rosas et al., 2011; Mora et al., 2017). However, the concentrations and types of secreted organic acids were highly variable, depending on the bacterial strain and the culture conditions imposed (Al-toxicity, P-deficiency, or both). Even strains belonging to the same genus (Klebsiella sp. RC3 and Klebsiella sp. RCJ4) and subjected to identical conditions exhibited different patterns of organic acid secretion, as also described by Vyas and Gulati (2009). Although the tested phosphobacteria did not follow a common pattern of secretion, larger concentrations and a more varied composition of organic acids were secreted under the combination of P-deficiency and Al-toxicity compared with less stressful culture conditions. Thus, the secretion of oxalic, succinic, and malic acids by Serratia sp. RCJ6 and Enterobacter sp. RJAL6, of citric and malic acid by Klebsiella sp. RCJ4, and of citric acid by Klebsiella sp. RC3 and Stenotrophomonas sp. RC5 was augmented under P-deficiency and Al-toxicity. Previous studies in P. fluorescens have corroborated that Al-citrate is translocated into the cell and metabolized intracellularly as a substrate for the production of other important Al-chelating organic compounds, such as oxalate (Appanna et al., 2003a; Singh et al., 2009) and citrate (Appanna et al., 2003b). Curiously, although citrate or Al-citrate was the only C source, the secretion of this organic acid was generally enhanced under stress conditions. This finding could be due to a reconfiguration of bacterial metabolic pathways leading to the production of a polycarboxylic, citrate-derived aluminophore involved in the sequestration of Al, as proposed in the models described by Appanna et al. (2003b) and Lemire et al. (2008). Similarly, Mora et al. (2017) reported that the same phosphobacteria tested here formed Al-chelating siderophores, as evidenced by fluorescence emission assessed by confocal microscopy. It should be noted that the strains showing the highest tolerance to P-deficiency and Al-toxicity (that is, Klebsiella sp. RC3 and Stenotrophomonas sp. RC5) were also the strains with the highest citric acid production under these conditions.
These results may be explained by the stability constants of Al-organic acid complexes, which are higher for citrate and oxalate than for malate and succinate (Hue et al., 1986; Poschenrieder et al., 2015). Previous work has already demonstrated the secretion of oxalic, citric, malic, and succinic acid by several phosphobacteria strains (Arthrobacter, Bacillus, Serratia, Chryseobacterium, Pseudomonas, and Delftia) in response to P-deficiency in vitro (Chen et al., 2006; Vyas and Gulati, 2009; Gulati et al., 2010). These strains showed the ability to solubilize considerable amounts of tricalcium phosphate (Chen et al., 2006; Vyas and Gulati, 2009; Gulati et al., 2010) and to promote growth and the uptake and accumulation of macronutrients (N, P, and K) in maize (Vyas and Gulati, 2009; Gulati et al., 2010). Similarly, oxalic, malic, succinic, and/or citric acids have also been implicated in Al-detoxification by various cellular systems (Hamel and Appanna, 2003; Hamel et al., 2004). In bacteria, augmented oxalic acid production via increased activity of the enzyme isocitrate lyase (Hamel et al., 2004) has been described as the main strategy used by the Al-tolerant soil bacterium P. fluorescens to detoxify high Al concentrations. In addition, the activities of citrate synthase, malate synthase, and malate dehydrogenase are also increased by P. fluorescens in response to Al-toxicity (Lemire et al., 2010). To our knowledge, this is the first report comparing the patterns of bacterial organic acid secretion under the combined effects of P-deficiency and Al-toxicity. In relation to mdh gene expression, our results showed that expression is strain-dependent and does not follow a common pattern when phosphobacteria are subjected to P-deficiency and Al-toxicity. However, three strains (Klebsiella sp. RC3, Stenotrophomonas sp. RC5, and Serratia sp. RCJ6) increased mdh gene expression under P-deficiency and Al-toxicity, and this increase was negatively correlated with malate secretion. Previous reports have associated enhanced organic acid exudation with increased expression of the mdh gene both in bacteria (Lü et al., 2012a) and in plants (Tesfaye et al., 2001; Lü et al., 2012b). Thus, Escherichia coli overexpressing a mitochondrial malate dehydrogenase (mMDH) gene from Penicillium oxalicum C2 increased the secretion of malate, citrate, oxalate, lactate, and acetate into the culture media, which in turn improved the tricalcium phosphate-solubilizing ability of the bacteria (Lü et al., 2012a). Similarly, increased synthesis and exudation of malate have been found in tobacco overexpressing an mdh gene from E. coli (Wang et al., 2010). In contrast to these previous reports, our results show no evident association between increased mdh gene expression and organic acid exudation by phosphobacteria. Further molecular studies are still needed to fully understand the complex network regulating organic acid production and secretion by phosphobacteria in acidic soils. In this regard, it is possible that MDH enzyme activity is regulated not only by mdh gene expression but also by post-translational modification of the protein. Many researchers have described the important role of bacterial phosphatases (both ACP and ALP) in P cycling in terrestrial ecosystems, to the benefit of plants (Fraser et al., 2015, 2017; Acuña et al., 2016).
Phosphatases hydrolyze phosphomonoesters with a wide substrate specificity, providing an alternative P source to biological systems (Fraser et al., 2015). All phosphobacteria strains tested here have previously been described as P-mineralizing bacteria by Mora et al. (2017), which is in agreement with our findings demonstrating both cell-associated and extracellular phosphatase activity. In general, our study showed an increase in ACP and ALP activities under P-deficiency. These observations are consistent with recent findings describing increased bacterial phosphatase activity, both ACP and ALP, under P-limiting conditions (Fraser et al., 2015; Spohn et al., 2015), activity that is reduced as a result of inorganic P fertilization (Spohn et al., 2015). Molecular studies have confirmed that labile P in soil is negatively correlated with the abundance of bacterial non-specific acid (phoC) and alkaline (phoD) phosphatase genes and with phosphatase activity (Fraser et al., 2017). In addition, an early study described that fast-growing rhizobial strains contained high levels of ALP activity under P-limited conditions (Smart et al., 1984). Our study also provided evidence that the ACP and ALP activities of two strains, Stenotrophomonas sp. RC5 and Klebsiella sp. RCJ4, are even more pronounced when Al is present in the culture media. Although few studies have addressed the effects of Al on bacterial phosphatase activity, Kunito et al. (2016) revealed significant influences of the exchangeable Al concentration, as well as pH, on microbial ACP activity in acidic forest soils. Sonicated cell lysates of the phosphobacteria showed higher activities than the extracellular compartment; however, cell-associated phosphatases may be more related to other cell metabolic functions than to any role in extracellular P mineralization (Menezes-Blackburn et al., 2013). Therefore, our study suggests that phosphobacteria have developed strategies to deal with P-deficiency and Al-toxicity similar to those of their host plant.

CONCLUSION

Here, we describe the responses of Al-tolerant phosphobacteria to P-deficiency and Al-toxicity, the principal stressors present in acidic volcanic soils. Although the growth of phosphobacteria is negatively affected by the combination of both stressors, they tolerate and survive these adverse conditions. This is achieved by the synthesis and secretion of organic acids and by the activities of both acid and alkaline phosphatase enzymes. The secretion of organic acids is generally enhanced under stressful conditions, although the patterns of secretion and the concentrations are highly variable and dependent on treatment and strain. Furthermore, bacterial phosphatase activity is more strongly increased by P-deficiency than by Al-toxicity. To generate novel and efficient phosphobacteria-based biofertilizers for extensive application in acidic soils, continued investigation of bacterial responses in vitro is essential to understand the regulation of these organisms in soils.

AUTHOR CONTRIBUTIONS

PB, MJ, and MM designed the research and supervised the study. PB, MJ, PD, and AV contributed intellectually to the data analysis. PB and SV performed the laboratory work. PB and MM wrote the manuscript. PB, MJ, and PD designed the tables and figures. All authors revised the manuscript and approved the final version.
ACKNOWLEDGMENTS

The authors thank the Scientific and Technological Bioresource Nucleus of Universidad de La Frontera (BIOREN-UFRO) for support in the use of the Synergy HT multimodal detector (BioTek). The authors also thank Dr. Mabel Delgado for her constant collaboration.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb.2018.01155/full#supplementary-material

Figure S1 | HPLC chromatograms of standard organic acids and of the five phosphobacteria grown in control mineral culture medium (MCM) supplemented with 1.4 mM KH2PO4 and without added Al (P+Al-).

Table S2 | Spearman's rank correlation matrix for organic acid secretion and malate dehydrogenase (mdh) gene expression.
Exploring Barriers Related to the Use of Latrines and Health Impacts in Rural Kebeles of Dirashe District, Southern Ethiopia: Implications for Community-Led Total Sanitation

Unsanitary disposal of human excreta, together with unsafe drinking water and poor hygiene conditions, contributes to 88% of diarrheal disease, the burden of which is a leading cause of morbidity and mortality, particularly in young children. Lack of access to sanitation also has significant non-health consequences, especially for women and girls, including loss of security, privacy, and basic human dignity, and decreased school attendance. In addition, inadequate sanitation is implicated in helminth infections, enteric fevers, and trachoma. Many factors limit the utilization of latrines in rural settings. A qualitative study was conducted to explore the barriers related to latrine use and their health impacts in rural kebeles of Dirashe district, Southern Ethiopia. Data were collected through focus group discussions, in-depth interviews, and observations. The study revealed that latrine utilization was low, with widespread open-field defecation, and that the community had a poor attitude towards sanitation.

Introduction

Sanitation means the safe and sound disposal of human excreta [1]. According to the World Health Organization and United Nations Children's Fund Joint Monitoring Programme (JMP), sanitation is defined as the 'lowest-cost option that ensures a clean and healthful living environment both at home and in the neighborhood of users' [2]. At the household level, adequate sanitation facilities include an improved toilet and a disposal system that separates waste from human contact [3]. It is the proper utilization of a well-maintained latrine, rather than its mere physical presence, that improves people's health status [1,4]. Unsanitary disposal of human excreta, together with unsafe drinking water and poor hygiene conditions, contributes to 88% of diarrheal disease; this burden is a leading cause of morbidity and mortality, particularly in young children, and lack of access to sanitation has significant non-health consequences, especially for women and girls, including loss of security, privacy, and basic human dignity, and decreased school attendance. In addition, inadequate sanitation is implicated in helminth infections, enteric fevers, and trachoma [5-7]. In contrast, the use of improved sanitation has been found to reduce the transmission of enteric pathogens and intestinal parasites, reducing morbidity and mortality, especially in children. Thus, facilitating access to and use of improved sanitation can prevent the transmission of diarrheal diseases [4,8]. In total, preventing sanitation- and water-related diseases could save some $7 billion per year in health system costs, while the value of deaths averted, based on discounted future earnings, adds another $3.6 billion per year [9]. However, in 2012, an estimated 2.5 billion people in the world had no access to improved sanitation facilities. Of these, 761 million used public or shared sanitation facilities, and another 693 million used facilities that did not meet minimum standards of hygiene. The remaining 1 billion (15% of the world population) still practiced open defecation. The majority (71%) of those without sanitation live in rural areas, where 90% of all open defecation (OD) takes place [10].
The magnitude of the sanitation problem and its health consequences is striking: every year, the failure to tackle these problems claims the lives of 1.5 million children and results in severe welfare losses (wasted time, reduced productivity, ill health, impaired learning, environmental degradation, and lost opportunities) for millions more [5,11,12]. People in developing regions, where only one in every three has access to improved sanitation, are most vulnerable to infection; the vast majority (82%) of people practicing open defecation now live in populous middle-income countries [8,10]. In sub-Saharan Africa, 69% of the population lacks access to improved sanitation facilities, and the practice of open defecation is most prevalent in Southern Asia, Oceania, and sub-Saharan Africa. Open defecation is associated with significant negative externalities, as it releases germs into the environment that can harm rich and poor alike, even those who use latrines; it therefore needs to be brought to an end [13-16]. Poor sanitation and hygiene conditions are among the major causes of public health problems in Ethiopia in general, and in Dirashe district in particular; nearly 40% of Ethiopians lacked access to sanitation facilities in 2009. Even where toilets do exist, many are not used, meaning that open defecation is common for almost all of the rural population. In Ethiopia, 82% of the population uses unimproved sanitation facilities, and 38.1 million people still practice open-field defecation [3,17]. Diarrhea is the leading cause of under-5 mortality in Ethiopia, causing 23% of all under-5 deaths. Around 44% of under-5 children in Ethiopia are stunted, which can be strongly linked to the childhood incidence of diarrhea and to other mechanisms such as environmental enteropathy [15]. According to the most recent Ethiopian Demographic and Health Survey (EDHS), 80% of all incidences of diarrhea are due to unsafe water supply, poor sanitation, and unsafe hygiene behaviors, and 17% of childhood deaths are associated with diarrhea. There is also a high prevalence of worm infestations, contributing to the high levels of malnutrition, mainly among the large population of under-five children, which sanitation can prevent [13,18]. The government of Ethiopia, however, has been promoting universal sanitation coverage to ensure better health and quality of life for all Ethiopians, working hard to increase access to and utilization of improved sanitation among its rapidly growing population. In Dirashe district, the 2013 annual health service report shows that 80% of households have latrines. Beyond simply calculating coverage, however, clear, reliable, consistent, and sustainable sanitation use by all family members is still needed. The district morbidity report indicated that diarrheal disease still ranks fifth among the top ten diseases of the area, and related illnesses have economic impacts, i.e., treatment costs per infection and decreased work time. In addition, the 2009 trachoma survey in the district showed a high prevalence of active trachoma and trachomatous trichiasis. One of the reasons for conducting this study is that open defecation and unsafe excreta disposal continue to be widespread in the district, with major public health and economic consequences, and open defecation is not limited to remote fields in the area.
Human feces are commonly observed in most of the villages, even in and near homes where children play, and this serves as a source of transmission of diarrheal diseases. Many of the communities counted as open defecation free (ODF) do not use the communal (passerby) latrines properly or all of the time: people defecate in open fields, around the latrines, or outside the pit, wash their hands without water and soap (or ash), or have no hand-washing facilities at all. Although latrine utilization in households declared fully open defecation free is assumed to be superior to that in non-ODF households, actual latrine utilization is questionable. Therefore, this study aims to explore the barriers to latrine utilization in rural kebeles of Dirashe district, Southern Ethiopia.

Study area and setting

The study was conducted in Dirashe district, Segen Area People's Zone, SNNPR. Dirashe district is one of the five districts in Segen Area People's Zone. It is located 550 km from Addis Ababa, 330 km from Hawassa, 55 km from Arba Minch, and 42 km from Segen, the capital of the zone. The district is bordered to the north by Gamo Gofa zone, to the south by Konso district, to the west by Ali district, and to the east by Amaro and Konso districts. According to the district report, it has an estimated population of 116,000 with a 1:1 sex ratio. The main livelihood of the population is farming. There are two main rainy seasons, which allow the district to cultivate twice a year. The main cash crops of the district include teff, wheat, barley, grains, and coffee. In addition to their ethnic language, the people speak Afaan Oromo [19,20]. This study was conducted in rural kebeles of the district during 2014/15.

Data collection methods

Data were collected through focus group discussions (FGDs) and in-depth interviews; both methods were used to generate adequate information from the community. Qualitative data were collected using a semi-structured questionnaire and interview guide.

FGDs

The interview guide was developed by reviewing the WASH literature and was designed to address the objectives of the study. The FGD guide covered perceptions of latrine utilization and the effect of latrine utilization on diarrheal diseases. A trained environmental health professional with experience in conducting FGDs moderated the discussions. Four FGDs were conducted, sufficient for saturation of information, with nine participants in each. Three data collectors participated in the FGDs, moderating the discussions and taking notes. The principal investigator participated in the selection of study participants, observation during the discussions, and transcription of the tape-recorded data. Data were collected on the basis of the prepared FGD guide. Careful attention was given to establishing the frequency of occurrence of themes, phrases, and expressions, so that the discussants would describe their opinions in relation to the specific research questions.

In-depth interview

In-depth interviews were conducted with eight key informants, two from each of the study communities. The interviews were used to generate detailed information about the community's thoughts and behaviors with regard to latrine utilization, and about why people do not abide by the recommendations of the health extension program.
These interviews provided context to the other data, offering a more complete picture of what was happening in the community and why people were not utilizing latrines and maintaining basic sanitation. To this end, informants included latrine artisans, water and sanitation committee members, and community opinion leaders.

Observations

Data were also collected using observation checklists covering the presence of solid and liquid waste and human feces in the compound, the availability of latrines, the presence of hand-washing facilities, garbage in the home and compound, and the general cleanliness of the community and households. The checklists were adapted from UNICEF WASH project checklists to suit the local community.

Quality control

The FGDs were carried out in separate rooms, with male and female participants giving their responses in different classrooms. A trained environmental health professional moderated each discussion. During the discussions, data were collected by note taking and tape recording, and were transcribed by the principal investigator and the moderators. To maintain data quality, the tape-recorded data were listened to repeatedly before transcription, and the written notes were transcribed line by line.

Data analysis

The tape-recorded data were analyzed under themes selected on the basis of the guide and summarized manually; OpenCode software was also used in the analysis. The written notes and tape-recorded data were transcribed line by line and translated. The results of the observations were used to supplement the results of the FGDs and interviews.

Perceptions and attitudes of the community towards latrine utilization and sanitation

Whenever the issue of water is raised, it emerges as an unresolved problem for the people of the rural kebeles of the Dirashe community. Without water, the issue of sanitation is meaningless; water comes first and foremost in life. Although awareness of washing children's faces and of latrine utilization is improving, the problem is the availability and accessibility of the necessary resources. There is continuous monitoring and control of latrine availability, latrine utilization, and waste disposal. The major problem in sanitation is the lack of water, which hampers hand washing and personal hygiene. The education given on sanitation is adequate, and people are taking it up and practicing it. As one discussant put it, "Previously, Dirashe people did not utilize latrines, but currently things are changing and open defecation is reduced, though the problem still persists"; even so, discussants did not believe that hygienic practices would be meaningful without water. In reality, there are few public latrines for the population in the different areas, although there are communal areas for solid waste disposal. Utilization of these public latrines is low for various reasons: individuals defecate in open fields and discard much waste here and there. The other most critical problem in the area is the shortage of water: not all of the population uses piped water, with some using rivers and others unprotected springs. Awareness of health activities has clearly increased over time. Hope International Ethiopia tried to solve the water problems of the rural community in the district, but the effort was not sustained.
Even though individuals are told to wash their children's faces and to wash their hands after using the toilet, children's faces are not washed well because of the lack of water. Although awareness has increased over time, personal hygiene is not practiced because of the water shortage. In the town, by contrast, there are many public latrines in different areas, as well as areas for solid waste disposal, but their utilization is low: individuals who come from rural areas defecate in open fields and discard much waste here and there in the town. Some latrines are also not well protected, and individuals do not use them correctly; this can be seen at the new bus station kebele and at the market-area public latrines, where people defecate outside the hole. Even where knowledge has improved, the main problem relates to the shortage of land for latrine construction and for dry waste disposal pits.

Practice of latrine utilization and health behavior

Concerning the community's practice of latrine utilization, waste disposal, and maintaining the personal hygiene of their children (especially face washing), the population can be divided into three categories. The first category comprises those who practice what the health providers advise (appropriate waste disposal, hand washing after using the toilet, and washing their children's faces) without any assistance. The second category comprises those who do what they are told only when prompted; unless they are followed up, they return to their initial situation. The third group comprises individuals with a poor attitude who do not respond to what the health extension workers, or the development army, are saying. From this, one can understand that certain prerequisites must be fulfilled for hygienic practices to be sustained, which may not hold for individuals in the second and third categories. Hand washing after using the toilet is limited by the shortage of water; the jerrycans intended to hold hand-washing water are often observed empty. The washing of children's faces follows the same pattern. There are families who wash their children's faces, and children who themselves urge their families to provide soap and water to wash their faces; individuals in the third category, however, come without washing their faces. Nevertheless, there is improvement, owing to individuals trained in sanitation, trachoma, and health extension education, and to the information delivered during drug distribution campaigns, especially for trachoma. The other theme of discussion was the prevention of various health problems, including diarrheal diseases, trachoma, and waterborne diseases. After exchanging ideas, the majority of discussants identified environmental and water hygiene activities as the means of preventing diarrhea and other related diseases.
This can also be done through implementing the packages of the Health Extension Program (HEP). Since open defecation is widely practiced, the role of water treatment is crucial. However, one of the discussants said that "it is unbelievable to control diarrhea by treating water, because some of the germs may not be killed by boiling; the use of chemicals alters the taste of the water, and it could not be used in this culture" (38-year-old male discussant, grade seven completed). Therefore, an improved water supply is needed where contamination is high, because feces are washed into the water sources. Comprehensive diarrheal disease prevention is seen as unrealistic in a poor setting, and changing the culture and beliefs of the society around using toilets and avoiding open-field defecation may take a long time because of the nature of the people's occupation.

Discussion

In this study, latrine utilization was low, as manifested by open defecation in the fields, and the attitudes of the community towards latrine use were poor. The factors associated with non-use of latrines were poor knowledge of the harmful effects of poor latrine utilization and the fully outdoor nature of the community's work. The latrine utilization observed here is less promising than, and contradicts, the findings of a study conducted in East Gojam, which reported encouraging latrine-use practices [4]. The factors hindering latrine utilization were mainly community attitudes, lack of perceived benefits of latrine use, and the nature of the work in which the community is engaged. This contrasts with the study in East Gojam Zone, where the presence of school children in a household, duration of latrine ownership, peer pressure, and self-initiation to own a latrine arising from the promotional activity of health extension workers were the major factors affecting latrine utilization [4]. Our result is, however, consistent with a study conducted in a similar district in southern Ethiopia [15]. Reports from the health offices state that the majority of households in the community have latrines that are being utilized by the respective age groups, but the reality on the ground is quite different. Another factor acting as a barrier to latrine utilization was the attitude of the community; in this regard, participants reported that utilization was restricted to the times of health professionals' visits. This finding is consistent with a study conducted in Melekoza woreda, southern Ethiopia [21], and similar findings were reported in two other studies [4,15]. This indicates that the barriers to effective latrine utilization also extend to reinforcing factors. Attitudes and practices around latrine utilization are also tied to the water supply: where contamination is high, feces are washed into water sources, and, as noted above, changing the culture and beliefs of the society around toilet use and open defecation may take a long time.

Limitations of the Study

Because this was a one-time study, undefined seasonal variability limited its ability to identify barriers in community perceptions and attitudes towards latrine use and its effects. The scarce availability of literature addressing our research questions was also a limiting factor in discussing our findings.
Conclusions and Recommendations

In this study, latrine utilization was low, with widespread open-field defecation, and community perceptions of latrine utilization were poor. The government should press the local community to implement the HEP packages. Health extension workers should closely supervise the utilization of the latrines available and disseminate strong warning messages about the dangers of open defecation.
Driving Factors of Industry 4.0 Readiness among Manufacturing SMEs in Malaysia

Industry 4.0 increases the production efficiency and competitiveness of companies. However, Industry 4.0 implementation is comparatively low in developing countries, and Industry 4.0 adoption among Malaysian manufacturing Small and Medium Enterprises (SMEs) is still in its infancy. This quantitative study aimed to broaden knowledge of the driving factors that significantly strengthen Malaysian manufacturing SMEs' readiness for the digital revolution. Based on the Resource-Based View theory, the study built a research framework to govern the investigation of organizational capabilities, SME institutional support, perceived advantage, and market factors as the driving factors of Industry 4.0 readiness, with firm size as the moderating variable. The data were collected through an online survey of the owners and managers of Malaysian-owned manufacturing SMEs located throughout Peninsular Malaysia, where the firms had received some form of government assistance. The analysis indicated that organizational capabilities, SME institutional support, and market factors positively correlate with Industry 4.0 readiness. It was determined that firm size only moderates the relationship between SME institutional support and Industry 4.0 readiness. The findings benefit industry practitioners and policymakers who wish to drive the future of Malaysia's SME business ecosystem, and contribute to the Industry 4.0 literature.

Global Revolution Shift

Global manufacturing industries have come to see the digital transformation compelled by Industry 4.0 as an important agenda item owing to its operational advantages and market opportunities [1]. Industry 4.0 revolutionizes the means by which products are designed, fabricated, delivered, produced, used, operated, serviced, and maintained [2]. It also changes the processes, operations, supply chain management, skill requirements, and manufacturing power, as well as the energy footprint of factories [2]. Product lifecycles are becoming shorter, which drives the sector's constant and ongoing flow of product development projects [3]. Furthermore, the current COVID-19 pandemic provides an opportunity for a new generation of entrepreneurs to lead the next industrial transformation and to innovate new methods of doing business using state-of-the-art technology [4]. According to Bawany [5], Industry 4.0 is about the idea of smart factories in which machines are augmented through web connectivity and linked to a system that can envision the whole production chain and make decisions on its own. Smart factories represent a big step from more traditional automation to a fully connected and flexible system, where computers and machines communicate, collect, and exchange data and, based on the data, enhance production efficiency to achieve better positioning in the competitive marketplace [6]. From a macroeconomic perspective, Industry 4.0 is regarded as a new competitive advantage for a nation [7]. This study examines the relationships between the driving factors (organizational capabilities, SME institutional support, perceived advantage, and market factors) and readiness for Industry 4.0, and empirically demonstrates the moderating effect of firm size to address this research gap. Moreover, although there has been some research on the impacts of capability and financial resources on SMEs' engagement in innovative activities, there is currently a dearth of research connecting financial aptitude with digital technology adoption.
In the current literature, the bulk of research has analyzed firm size solely from the perspective of firm performance; consequently, the moderating effect of firm size on the deployment of Industry 4.0 remains insufficiently studied. Based on the discussion above, this study intends to determine how prepared Malaysian manufacturing SMEs are for digital transformation. Importantly, there is a need to comprehend the driving factors that will empower Malaysian manufacturing SMEs to embrace digital transformation. Therefore, this study aims to broaden the current state of knowledge on the Industry 4.0 readiness of Malaysian manufacturing SMEs and the driving factors that significantly strengthen SMEs' readiness for the digital revolution in Malaysia. This study contributes to the existing body of knowledge by examining the moderating effect of firm size on the relationships between the four driving factors and SMEs' readiness for Industry 4.0. Hence, rigorous validation is necessary to identify the moderating effect of firm size between the driving factors and Industry 4.0 implementation.

Theoretical Framework and Hypothesis Development

This section investigates the theoretical foundation and notion of Industry 4.0 readiness, as well as the literature-reported driving factors, leading to the formulation of hypotheses for empirical testing. Figure 1 focuses particularly on the impact of the driving factors on Industry 4.0 readiness and the moderating effect of firm size on Industry 4.0 implementation.

Readiness for Industry 4.0

Industry 4.0 provides new technological capabilities by communicating and integrating information technologies to maximize production performance [38,39]. Adopting Industry 4.0 is a crucial strategic decision. The inadequacy of a shared understanding of the factors that influence the implementation of Industry 4.0 technologies may reasonably explain SMEs' hesitation to embrace the digital revolution under Industry 4.0 [40]. Hence, it is critical to analyze an organization's readiness for Industry 4.0 implementation before making such a significant decision [41]. Readiness for new technology has been defined in different ways. According to Holt et al. [42], a readiness assessment allows leaders to identify gaps in the current organization ahead of, or throughout, the change implementation process. In a more practical sense, the systematic analysis of an organization's ability to deal with and implement a revolutionary process or change is defined as assessing or measuring readiness (Pirola et al. [43]). Furthermore, a readiness assessment is also intended to address potential barriers to success, empowering companies to overcome these barriers before starting the transformation project [43]. In this study, Industry 4.0 readiness is defined as the level of preparedness of organizations to benefit from Industry 4.0 technology [44], which includes management commitment, operational resources, technological skills, and technical requirements [45]. Malaysian SMEs require extensive preparations to address the challenges of getting ready for Industry 4.0, which is not easy.
The Industry 4.0 level process cannot be implemented without adequate planning and support from authorities. Therefore, understanding the current level of readiness for Industry 4.0 among SMEs and the areas in which SMEs should be ready may assist the government in creating a conducive ecosystem and generating important Industry 4.0 initiatives to increase SMEs' readiness. This study evaluates SMEs' current readiness level and examines whether manufacturing SMEs in Malaysia are ready to transform digitally and achieve Industry 4.0 status.

Driving Factors for Industry 4.0

Harvie, Narjoko [46] claimed that SMEs' innovation capability strongly influences SMEs' engagement in production networks. SMEs with plentiful internal financial resources or access to external sources of finance are hypothesized to be more likely to engage in innovative activities than those without [46]. Despite the importance of financial aptitude in acquiring new technology, there is still a scarcity of literature linking it to digital technology. This gap in the literature necessitates an empirical study of the impact of financial capability on digital transformation. According to Tech Wire Asia [47], one of the main barriers to digital transformation is the manufacturing firms' lack of expertise and scarcity of skills. Moreover, a firm with technological capability is likely to be tech-savvy and technologically oriented, since it strives to keep up with technological advancements [48]. Firms with unique technological resources or technical capabilities that are not prone to replication by competitors will gain a sustained competitive advantage from newly emerging technologies [49].
Based on the literature and RBV, organizational capabilities with the dimensions of financial and technological capability are deemed to have great value and impact in driving Malaysian manufacturing firms to prepare for Industry 4.0. The following hypothesis is therefore formed:

Hypothesis 1 (H1): Organizational capabilities positively impact SMEs' readiness for Industry 4.0.

According to Lall [50], governments may need to take a proactive role to overcome market failures that prevent firms from developing the capabilities needed for industrial development. Vaona and Pianta [51] argued that a policy for higher industrial efficiency based on new processes should focus on the acquisition of new machinery through incentives and credit for new technology investment. Given that the Malaysian government has set aside significant funds to help SMEs transition to Industry 4.0, it is important to determine whether this effort has successfully driven SMEs' digital transformation. In a study on South Korean firms, Kim and Lee [52] found that government support positively influences innovation production at the business level but that the effect on obtaining high innovativeness is statistically insignificant. Based on this literature, the researchers posit as follows:

Hypothesis 2 (H2): SME institutional support positively impacts SMEs' readiness for Industry 4.0.

It has been reported in the media that key barriers to Industry 4.0 in Malaysia include the lack of awareness of its benefits and impacts among SMEs [23]. Many manufacturing SMEs are still hesitant to adopt Industry 4.0 because they are unsure about the benefits. In this sense, manufacturers will be more encouraged to consider the implementation of Industry 4.0 when they perceive it as competitively valuable [53]. Since perceived benefits have been empirically validated to positively affect new technology implementation in the IT literature, SMEs will prepare for Industry 4.0 requirements based on their belief in, or perceived benefits of, digital transformation [54]. On the other hand, introducing new services and products enabled by new technologies is expected to bring market opportunities for manufacturing firms [55]. Although market opportunities have been reported, a study by Tortora, Maria [56] revealed that most manufacturing firms are unaware of the potential opportunities that Industry 4.0 technologies may provide. Thus, they must thoroughly understand the various parts of Industry 4.0 and acquire the necessary knowledge, skills, and confidence. As the Industry 4.0 transformation impacts the entire firm, it is critical to understand how the varied elements of Industry 4.0 can take advantage of digitization opportunities [1]. Therefore, the researchers hypothesize as follows:

Hypothesis 3 (H3): Perceived advantage positively impacts SMEs' readiness for Industry 4.0.

Firms will be able to incorporate their customers' needs and preferences into their development and production processes in new ways due to Industry 4.0, which includes direct data sharing with their machinery [32]. Industry 4.0 enables a faster response to customer needs than is currently achievable; hence, manufacturing system suppliers must understand how to apply technologies in new use cases to provide the most value to their customers [57] in order to avoid lower customer satisfaction levels and loss of customers [58]. In this respect, customers' requirements drive the decision to begin the Industry 4.0 process. Adopting new technology is frequently a strategic imperative for remaining competitive in the marketplace [54].
In the literature on technology transformation, competitive pressure has long been acknowledged as a key driver of technology transformation [59][60][61]. By adopting new technology, firms can gain better market insight, improved operational efficiency, and more accurate access to real-time data [59,60]. According to Rajnai and Kocsis [45], company competitiveness is vital for an economy to be well-positioned in markets and value chains in a new and continuously changing environment. In this dynamic business environment, SMEs will prepare for Industry 4.0 when they feel competitive pressure from their competitors. At the same time, governments are keen to examine the health of the local economy and its businesses, particularly to assess their readiness for Industry 4.0. The researchers hypothesize as follows:

Hypothesis 4 (H4): Market factors for Industry 4.0 positively impact SMEs' readiness for Industry 4.0.

Firm Size as the Moderator

Firm size is among the most important factors in implementing new technology and has received strong support in the literature [61]. Firm size has been investigated by several researchers in the field of innovation and is seen as a significant indication of organizational complexity, including by Berry, Bizjak [62], Stock, Greis [63], and Zona, Zattoni [64]. SMEs' firm size is measured using the number of employees [65,66]. Although some researchers have discovered a negative relationship between firm size and technology, for example, cloud computing [67], the majority of studies in various contexts, including ICT innovations [68] and ICT adoption [69], have found a positive relationship. According to Hansen [70], firm size is inversely associated with product innovativeness; nevertheless, when it comes to determining the factors that influence both product and process innovation, the size of the company matters [51,71]. Firms of various sizes have different patterns in the innovation process and diverse factors driving their organizational performance [72]. Although RBV theory indicates that larger firms could enhance growth, Miller and Toulouse [73] revealed that the CEO in a smaller firm has a greater impact on the employees, both directly and through internal procedures and processes, allowing the firm to respond more swiftly. On the other hand, SMEs' flexibility and specificity are frequently thought to make them competitive when executing new technologies and addressing certain customer demands; however, their capacity for invention is constrained by scarce resources [74]. Financial support from the government for technological development has been associated with a higher success rate of technological advancement [75]. Even though there is already a huge amount of literature on the impact of government support on innovation performance for various sizes of businesses in various countries [29], this study focuses on SMEs in Malaysia, as most previous studies have been designed for managers in large firms. SMEs are not obligated to disclose financial statements to the public, so they are notoriously opaque regarding their financial health [76]. Their main funding source for technology and innovation investments is external finance from financial institutions and banks [77]. As the financial system restricts the types of assets a lender can accept as collateral for a loan [78], small borrowers frequently lack assets to pledge as collateral or are limited in their collateral options [79].
In this case, SMEs will require external funding or investment from institutions such as the government. On the other hand, the perceptions of SME owner-managers and supply-side shortcomings have been shown to play a role in the difficulties of financing innovation [80]. Various factors suggest that small firms may have an advantage in innovation, as they are more likely to recognize opportunities [81]. Individual firms should examine and account for their size before investing in innovative products or projects or embracing innovation to enhance their process management [76]. Christensen [72] claimed that competition and consumer needs in the value network affect the firm's cost structure, the firm size required to remain competitive, and the required growth rate in various ways. Furthermore, technological and market factors determine the relationship between firm size and innovation [82]. As international competitive pressures have become more intense in recent years, the role of firm size has become a top priority for policymakers and industrial players [83]. Based on the above discussion, the size of a firm is believed to have a certain impact on technological innovation. Therefore, considering the impact that firm size could bring to this study, the role of firm size is studied as a moderator, leading to the following hypotheses (operationalized as interaction terms, as sketched after this list):

Hypothesis 5 (H5): Firm size moderates the relationship between organizational capabilities and SMEs' readiness for Industry 4.0; as such, the positive relationship between organizational capabilities and SMEs' readiness for Industry 4.0 is stronger when the firm size becomes larger.

Hypothesis 6 (H6): Firm size moderates the relationship between institutional support and SMEs' readiness for Industry 4.0; as such, the positive relationship between institutional support and SMEs' readiness for Industry 4.0 is stronger when the firm size becomes larger.

Hypothesis 7 (H7): Firm size moderates the relationship between perceived advantage and SMEs' readiness for Industry 4.0; as such, the positive relationship between perceived advantage and SMEs' readiness for Industry 4.0 is stronger when the firm size becomes larger.

Hypothesis 8 (H8): Firm size moderates the relationship between market factors and SMEs' readiness for Industry 4.0; as such, the positive relationship between market factors and SMEs' readiness for Industry 4.0 is stronger when the firm size becomes larger.
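Although the study estimates these moderation effects within PLS-SEM, the logic of H5-H8 can be illustrated with a minimal, hypothetical regression sketch in which each driving factor is interacted with firm size. All variable names and the simulated data below are illustrative only, and ordinary least squares stands in for the PLS estimation actually used.

```python
# Hypothetical illustration of H5-H8: moderation is tested by interacting
# each (standardized) driving factor with firm size. OLS is a stand-in for
# the PLS-SEM estimation used in the study; data and names are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 110  # matches the study's sample size
df = pd.DataFrame({
    "capabilities": rng.normal(size=n),
    "support": rng.normal(size=n),
    "advantage": rng.normal(size=n),
    "market": rng.normal(size=n),
    "firm_size": rng.normal(size=n),  # e.g., standardized employee count
})
# Simulate readiness with a support-by-firm-size interaction (an H6-style effect)
df["readiness"] = (0.4 * df["capabilities"] + 0.3 * df["support"]
                   + 0.2 * df["market"]
                   + 0.15 * df["support"] * df["firm_size"]
                   + rng.normal(size=n))

# The four interaction terms carry the moderation hypotheses H5-H8
model = smf.ols(
    "readiness ~ (capabilities + support + advantage + market) * firm_size",
    data=df,
).fit()
print(model.params)  # a significant support:firm_size coefficient mirrors H6
```

A positive and significant interaction coefficient (here, support:firm_size) corresponds to the "stronger when the firm size becomes larger" wording of the hypotheses.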
Methods

This quantitative study investigated the relationships between the independent variables (organizational capabilities, SME institutional support, perceived advantage, and market factors) and readiness for Industry 4.0 in Malaysian manufacturing SMEs. In particular, this study aimed to investigate whether firm size affects managers' decision to transform their business toward Industry 4.0 across three readiness dimensions (managerial, operational, and technological readiness). This study was executed using a cross-sectional design in a non-contrived setting, that is, in the managers' natural work environment, with minimal interference from the researchers. Self-administered questionnaires were used to collect data for each study variable.

Population and Source of Data

The SME Corporation directory was employed as a sampling frame to determine the population of this study. All SMEs in manufacturing form the population of the current study. A sample was drawn from these firms with the following set of conditions:

i. The respondents are the owners or managers of SMEs.
ii. The firm must be from the SME manufacturing sector in Malaysia.
iii. The firm must have received government assistance, including financial and technical assistance.

This study's Malaysian manufacturing SMEs included a variety of businesses from the electronics, electrical, pharmaceutical, and automobile sectors. Thus, purposive sampling focusing on management personnel at the organizational level from Malaysian manufacturing SMEs was employed, and only the owners or managers of Malaysian-owned manufacturing SMEs were chosen as the data source, since they share the common characteristic required for this study. The researchers clarified this matter with the owners or managers of the manufacturing firms during the phone call for the survey invitation.

Unit of Analysis

In this study, the unit of analysis is the organizational level. Management personnel at the organizational level serve as the decision-makers in Malaysian-owned manufacturing SMEs. An owner or manager leads each Malaysian-owned manufacturing SME. Owner-managers in SME firms tend to be the main decision-makers maintaining control of the firm's operations. Lobonţiu and Lobonţiu [84] stated that in small firms, owner-managers tend to be all-powerful and control the two most vital functions of the firm: production, and sales and marketing. Typically, top management members are members of the dominant coalition. In addition, Van Gils [85] found that although the CEO in SMEs is the main decision-maker, more than sixty percent of them set up top-management teams to reinforce the strategic decision-making process. The top-management teams are small, but the executives intensify the company's know-how. On the other hand, Burgelman [86] mentioned that middle-level managers strive to develop wider strategies for areas of new business activity and attempt to persuade top management to support them; this is the sort of strategic behavior encountered in research on internal corporate venturing. Hence, only owners or managers were considered in this study.

Sampling Technique

This study used the purposive sampling technique, which restricted the sample selection to those who could provide the required information. This technique is suitable because only a limited category or number of people have the information required. The owners or managers of Malaysian-owned manufacturing SME firms were selected as the sample for this study because they are in the best position to give the information needed [87].

Minimum Sample Size

Cohen and Klepper [83] suggested that, for a statistical power of 80% at a 5% significance level with four arrows pointing at a construct, the minimum sample size ranges between 65 and 137 for minimum R² values of 0.25 and 0.10, respectively. Meanwhile, another method for identifying the sample size in PLS-based analysis is G*Power, a tool employed to provide power and sample-size calculations. As the model in this study has four predictors pointing at one endogenous variable, the readiness for Industry 4.0 construct, the number of predictors in the sample-size calculation was set to 4. The effect size was set at 0.15, the medium effect size based on the argument of Cohen and Klepper [83], while the required power was set at 0.8, following Green [88]. Based on G*Power, the minimum sample size needed in this study is 84 respondents.
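For transparency, this G*Power calculation can be reproduced from first principles with the noncentral F distribution. The snippet below is a sketch of that calculation (linear multiple regression, f² = 0.15, α = 0.05, power = 0.80, four predictors); small off-by-one differences from G*Power can arise from rounding conventions.

```python
# Sketch of the sample-size calculation described above: the smallest n
# whose F test (4 predictors, alpha = .05, f^2 = .15) reaches 80% power.
from scipy.stats import f as f_dist, ncf

def power_for_n(n, k=4, f2=0.15, alpha=0.05):
    df1, df2 = k, n - k - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
    lam = f2 * n                                # noncentrality parameter
    return 1 - ncf.cdf(f_crit, df1, df2, lam)   # power under H1

n = 10
while power_for_n(n) < 0.80:
    n += 1
print(n, round(power_for_n(n), 3))  # lands in the mid-80s, close to the 84 reported
```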
As a result, this study collected responses from 110 SME owners or managers, exceeding the minimum sample size.

Data Collection Procedures

Self-administered online questionnaires were employed to gather primary data for all variables in this study. The online questionnaire was set up using Google Forms. In January 2022, the researchers began to contact the managers of SMEs in Malaysia to get their approval to execute the survey. Before surveying the owners or managers of Malaysian-owned manufacturing SME firms, the researchers contacted manufacturing SMEs located throughout Peninsular Malaysia, sector by sector, by phone and email, based on the Directory of SMEs, to invite them to participate in this study. To achieve a satisfactory response rate, emails and phone calls served as ways to communicate the importance and benefits of this survey. Ultimately, a total of 500 SMEs agreed to participate. Once the researchers had received approval from the owners or managers of the respective manufacturing SME firms, a cover letter was emailed with the link to the questionnaire survey to each SME owner or manager, assuring the respondent of their anonymity and providing directions on completing the survey questionnaire. The questionnaire contained a total of 81 items representing the study variables along with demographic data. The SME owners and managers were given two weeks to complete the questionnaire. Upon completing the questionnaire, the SME owners and managers submitted their responses directly at the end, and the researchers received them immediately. The researchers followed up with the respondents at the end of the month to check for additional submitted responses.

Research Instrument

This section presents the measurements employed in this study. The dependent variable is readiness for Industry 4.0, which consists of managerial readiness, operational readiness, and technological readiness. The independent variables consist of organizational capabilities, government support, perceived advantage, and market factors. Table 1 summarizes the measures employed in this study, with 69 items used to measure the variables. Another 12 items were included to obtain information on the respondents' companies and personal profiles.

The dependent variable in this study is readiness for Industry 4.0, conceptualized in three dimensions: managerial, operational, and technological. A 23-item measurement for readiness for Industry 4.0 developed by Khin and Kee [54] was employed. The first eight items measure managerial readiness, the next eight measure operational readiness, and the last seven measure technological readiness. The measurement is assessed using a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach's alpha values reported by Khin and Kee [54] for managerial, operational, and technological readiness were 0.88, 0.86, and 0.79, respectively. The 23-item measurement is shown in Table 2; a brief scoring sketch follows the table. To measure the readiness for Industry 4.0 of SMEs, respondents were asked to assess their managerial, operational, and technological readiness on the basis of the 23 items below, using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).

Table 2. Measurement items for managerial, operational, and technological readiness.
1. Our management is convinced that we should consider the Industry 4.0 production process.
2. Our management has the plan to digitalize the production process.
3. Our management is mentally prepared to adopt Industry 4.0.
4. We have the right leadership in place to implement digitalized production.
5. Digital transformation is our corporate priority.
6. Our management does not commit to upgrading the production process (R).
7. Our management has approved a budget for upgrading production processes.
8. Our management has been recruiting new staff necessary for upgrading the production process.
9. Our company is financially prepared to digitalize operations to Industry 4.0 standards.
10. Our staffs are cooperative in upgrading production processes.
11. We are mentally prepared for changes in our production.
12. We have staff to manage the Industry 4.0 process.
13. Our production processes can be digitalized to Industry 4.0.
14. Our production floor is prepared for digitalized production.
15. We have the infrastructure to support the Industry 4.0 production process.
16. We have the resources to start the digital transformation.
17. We have an IT system that could be upgraded for the Industry 4.0 production process.
18. Our key machines could be networked for the Industry 4.0 process.
19. Our staffs are capable of learning new digital skills.
20. Our staffs have sound knowledge of technical requirements for Industry 4.0.
21. Training has been provided to our staff to understand digital technologies.
22. Our staffs have no technical knowledge about digital transformation (R).
23. We have vendors who can provide good service for the technical aspect of digital transformation.
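Assuming item responses are stored one column per item (r1 to r23; these column names are illustrative, not the study's actual data layout), a minimal scoring sketch for this scale would flip the two reverse-coded items (items 6 and 22, marked (R)) before averaging each dimension; the 8/8/7 split follows the description above.

```python
# Illustrative scoring of the 23-item readiness scale on a 5-point Likert
# scale; column names r1..r23 are assumptions, not the study's actual data.
import pandas as pd

def score_readiness(responses: pd.DataFrame) -> pd.DataFrame:
    df = responses.copy()
    for item in ("r6", "r22"):      # reverse-coded (R) items
        df[item] = 6 - df[item]     # maps 1<->5 and 2<->4 on a 5-point scale
    return pd.DataFrame({
        "managerial": df[[f"r{i}" for i in range(1, 9)]].mean(axis=1),
        "operational": df[[f"r{i}" for i in range(9, 17)]].mean(axis=1),
        "technological": df[[f"r{i}" for i in range(17, 24)]].mean(axis=1),
    })
```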
Organizational Capabilities

The two dimensions of organizational capabilities, financial and technological capability, were measured using a 10-item measurement developed by Khin and Kee [54]. The items were measured using a 5-point Likert scale, ranging from 1 (very low) to 5 (very high). The Cronbach's alpha values reported by Khin and Kee [54] for financial capability and technological capability were 0.94 and 0.96, respectively. The 10-item measurement for financial and technological capability is shown in Table 3. To measure SMEs' financial and technological capabilities, the respondents were asked to examine their financial and technological capabilities to invest in the segments below, using a 5-point Likert scale (very low, low, moderate, high, very high).

Table 3. Measurement items for financial capability and technological capability.
5. Operational resources
6. Acquiring important technology-related information
7. Identifying new technological opportunities
8. Responding to current technological trends
9. Learning advanced technologies
10. Upgrading production technologies
Adopted from Khin and Kee [54].

SME Institutional Support

The two dimensions of SME institutional support, financial and technological support, were measured using an 11-item measurement developed by Khin and Kee [54]. The measurement was assessed on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach's alpha values for financial and technological support were 0.85 and 0.91, respectively. To measure SMEs' financial and technological support, respondents were asked to assess the financial and technological support available for investing in the areas below. The measurement for financial and technological support is shown in Table 4.

Table 4. Measurement items for financial support and technological support.
1. We are aware of Industry 4.0-related financial incentives from agencies.
2. We have access to funding for Industry 4.0 provided by agencies.
3. We know where to apply for funding for Industry 4.0 from authorities.
4. We have applied for funding for Industry 4.0.
5. We have received funding for Industry 4.0.
6. We have attended training for Industry 4.0 technology provided by external agencies.
7. We have received technical advice from agencies regarding Industry 4.0.
8. We have access to Industry 4.0-related programs.
9. Governmental agencies have been supportive in providing technological assistance for Industry 4.0.
10. We have access to Industry 4.0-related services from agencies.

Perceived Advantages

The two dimensions of perceived advantages, perceived benefits and opportunities, were measured using a 15-item measurement developed by Khin and Kee [54]. The measurement was assessed on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach's alpha values reported for perceived benefits and opportunities were 0.96 and 0.94, respectively. To measure SMEs' perceived benefits and opportunities, respondents were asked to assess the benefits and opportunities of investing in the areas below on the basis of 15 items, using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The 15-item measurement for perceived benefits and opportunities is presented in Table 5.

Table 5. Measurement items for perceived benefits and opportunities.
8. better access to production data.
9. more market for products with better quality and margin.
10. new export markets.
11. more customers and buyers.
12. more sales.
13. more market shares.
14. new products with better quality.
15. a better image of our company.

Market Factors

The two dimensions of market factors, customer needs and competitive pressure, were measured using a 10-item measurement developed by Khin and Kee [54]. The measurement was assessed on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach's alpha values reported for customer needs and competitive pressure were 0.928 and 0.758, respectively. The 10-item measurement for customer needs and competitive pressure is shown in Table 6; a sketch of the alpha computation follows the table.

Table 6. Measurement items for customer needs and competitive pressure.
1. Our customers are looking for products of Industry 4.0 quality.
2. Our customers are looking for suppliers who use the Industry 4.0 manufacturing process.
3. Our customers requested that we use the Industry 4.0 manufacturing process for the products they buy from us.
4. We risk losing our customers if we do not adopt Industry 4.0.
5. Potential customers in new markets need Industry 4.0 products.
6. Competition is high in our industry.
7. Our competitors are ahead of us in adopting Industry 4.0.
8. Our competitors may attract our existing customers if they could supply products with Industry 4.0 standards.
9. Our customers might switch to competitors if we cannot reduce costs and improve quality.
10. We are in urgent need of adopting the Industry 4.0 process to keep our customers away from competitors.
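Each scale above is accompanied by a Cronbach's alpha value. For reference, a minimal sketch of that computation, assuming `items` is a respondents-by-items DataFrame of Likert scores for one dimension (with reverse-coded items already flipped):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```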
Common Method Variance

Method variances or biases are one of the main sources of measurement error. Common method variance is perceived as a potentially critical concern because this study is cross-sectional and based on survey data, especially since the independent and dependent variables are based on perceptual evaluations and come from the same source. Therefore, as suggested by Podsakoff, MacKenzie [89], several procedural remedies were employed to cope with common method variance. First, when designing and administering the questionnaire, different scale types, including dichotomous and rating scales, were used in order to reduce common method biases [89]. Additionally, respondents were notified that there were no right or wrong answers in the questionnaire [89]. Secondly, the questionnaire was accompanied by a cover letter pledging the anonymity of the respondents, reassuring them that they were not being evaluated on the basis of their answers [89]. Thirdly, Harman's single-factor test technique was used, as noted by Podsakoff, MacKenzie [89].

Statistical Techniques and Analysis

For data analysis and hypothesis testing, this study used IBM SPSS Statistics Campus Edition (Version 26.0) and Partial Least Squares (PLS) with SmartPLS 3.3.2. SPSS was employed for data cleaning, data entry, descriptive analysis, and missing-value imputation, while PLS was used to test the structural equation model.

Descriptive Statistics

The main purpose of employing descriptive statistics is to describe the basic features of the data. Descriptive statistics such as means and standard deviations were obtained to describe the sample. This helped the researchers understand the respondents' firms with respect to the constructs of readiness for Industry 4.0, organizational capabilities, institutional support, perceived advantages, and market factors.

Measurement Model

Evaluation of the measurement model (outer model) is the first step in PLS analysis. This step involves evaluating whether the indicators properly measure the theoretical constructs; that is, the accuracy of the measures and their convergent and discriminant validities are evaluated.

Construct Validity

Sekaran and Bougie [32] illustrated that construct validity demonstrates how well the results obtained from using a measure fit the theories around which the test is designed. This can be evaluated using convergent and discriminant validity. At this stage, the important things to note are the respective loadings; Hair, Ringle [90] suggested a cut-off value of 0.5 for loadings to be considered significant.

Convergent Validity

Hair, Ringle [90] illustrated that convergent validity is the degree to which multiple attempts to assess the same construct are in agreement. The researchers examined the loadings, the composite reliability, and the average variance extracted (AVE), for which threshold values of 0.70, 0.70, and 0.50, respectively, are usually accepted in the literature.

Discriminant Validity

Discriminant validity is the extent to which measures of different concepts are distinct. According to Hair and Ringle [17], in assessing discriminant validity, the AVE for each construct should be greater than the square of the correlations between the construct and all other constructs [33].
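This Fornell-Larcker criterion reduces to a simple comparison; a sketch, assuming `ave` maps each construct to its AVE and `corr` is a hypothetical latent-construct correlation matrix:

```python
# Discriminant validity check (Fornell-Larcker): sqrt(AVE) of each construct
# must exceed its correlations with every other construct.
import numpy as np
import pandas as pd

def fornell_larcker(ave: dict, corr: pd.DataFrame) -> pd.Series:
    result = {}
    for construct in corr.columns:
        others = corr[construct].drop(index=construct).abs()
        result[construct] = np.sqrt(ave[construct]) > others.max()
    return pd.Series(result)  # True = discriminant validity supported
```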
Structural Model Evaluation

In PLS analysis, the step following the evaluation of the measurement model is the examination of the structural model (inner model). The purpose of the structural model is to provide accurate evidence to support the theoretical model. The structural model is assessed based on the significance of the hypothesized relationships between the constructs; the elements include predictive power (R²), effect size (f²), the bootstrap procedure, predictive relevance (Q²), and goodness of fit (GoF).

Results

The researchers distributed 500 questionnaires using Google Forms via email to SME owners or managers for data collection. The expected response was 100 (500 × 20% = 100), which meets the minimum sample size of 84. At the end of data collection, 110 questionnaires were completed and returned to the researchers, surpassing the minimum requirement of 84 samples calculated using G*Power; only 22% responded to the questionnaires. According to the feedback from the owners or managers of the manufacturing firms who declined to respond to the survey, they were either not interested in implementing Industry 4.0 or still at the Industry 2.0 stage, and thus had little information about the adoption of Industry 4.0. This is believed to explain why only 110 out of 500 SMEs responded.

Respondents' Company Profiles

The raw data collected from Google Forms were then prepared for processing with SmartPLS 3.0 for further computation and analysis. Table 7 shows the respondents' background information and company profiles.

Mean and Standard Deviation of the Variables

The means and standard deviations of the study variables are depicted in Table 8.

Data Analysis

Partial least squares structural equation modeling (PLS-SEM) was executed for further statistical analysis using SmartPLS 3 [91] to investigate and statistically explore the structural and measurement models.

Common Method Variance (CMV)

Different scale types, including dichotomous scales and rating scales, were used in this study to reduce common method biases. Readiness for Industry 4.0, SME institutional support, perceived advantage, and market factors were examined using a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree, whereas the Likert scale for organizational capability is a 5-point scale from 1 = very low to 5 = very high. In addition, Harman's single-factor test was used with the SPSS software to assess the presence of any common method variance. An exploratory factor analysis based on an unrotated factor solution was performed by loading all the items used in this study. The unrotated factor analysis demonstrated that no single factor accounted for all of the variance. The results revealed that 11 factors emerged, together explaining 69.77% of the total variance of all measurement items, with the greatest factor accounting for 33.67%. Since the greatest factor does not account for more than half of the total variance, it is believed that there was no significant common method variance. Therefore, it can be concluded that this study had no problem with common method variance.

Reliability Analysis

At this stage, the measurement model was evaluated to identify the validity and reliability of the measurements used. The SmartPLS 3.0 software examined the measurement model by executing the PLS algorithm, which was used to construct the path coefficients, factor loadings, coefficient of determination, and the model's reliability and validity measures.

Construct Validity

Construct validity was used in this study to ensure that the adapted and adopted questionnaire items accurately measured the target concepts.

Convergent Validity

The loadings for all items of the constructs are shown in Table 9. It can be observed from Table 9 that no indicators possessed loadings lower than 0.40. Nevertheless, the indicators PB2, PB4, PB6, PB7 and PB8 were deleted as recommended by Hair, Hult [92], because the AVE for the perceived advantage construct was lower than 0.50. After deleting the mentioned indicators, the AVE for perceived advantage increased to above 0.50. The results of the measurement model after the deletion of said indicators are shown in Table 9.
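This deletion rule follows directly from how AVE is computed from standardized outer loadings; a sketch, where `loadings` is a hypothetical array of loadings for one construct:

```python
# AVE is the mean squared standardized loading; composite reliability (CR)
# follows the usual composite formula. If AVE < 0.50, the weakest-loading
# indicators are removed one at a time and AVE is recomputed, as done above.
import numpy as np

def ave(loadings: np.ndarray) -> float:
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    numerator = np.sum(loadings) ** 2
    return float(numerator / (numerator + np.sum(1 - loadings ** 2)))
```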
In addition to the loadings, the AVE, a summary indicator of convergence, was analyzed. As a rule of thumb, it is recommended that the AVE be at least 0.50 [92][93][94]. Table 10 shows that the AVEs for all constructs were above 0.50. As a result, it can be concluded that the measurements employed in this study possessed adequate convergent validity.

Discriminant Validity

As can be seen from Table 11, the square root of the AVE for each construct surpassed the correlations between the constructs. As a result, the measurements employed in this study revealed sufficient discriminant validity.

Testing and Assessing the Structural Model

The structural model was examined with respect to the hypothesized relationships of the predicted research model after the constructs and instruments had been confirmed for their reliability and validity. Six assessment steps were employed to assess the structural model, in line with PLS-SEM guidelines [92]: the assessment of collinearity, path coefficients, coefficient of determination (R² value), effect size (f² value), predictive relevance (Q² value), and PLSpredict.

Table 12 displays the values of skewness and kurtosis for each variable. It can be observed from Table 12 that the skewness and kurtosis values were within +1 and −1; thus, it can be inferred that the data were not severely non-normal and did not pose a serious concern in the present study.

The inner VIF values for the research variables were derived using SmartPLS. Table 13 demonstrates the inner VIF values for the study variables. It can be observed from Table 13 that none of the VIF values were above 5; thus, multicollinearity was not an issue in the present study.

Bootstrapping was performed to examine the relevance and significance of the relationships in the model, following Hair, Risher [95]. The structural model was reported with the standard errors, path coefficients, t-values, and p-values after complete bootstrapping with 5,000 subsamples at a 95% confidence interval [96]. Since the model had to go through the PLSpredict procedure, items from the endogenous construct had to be included. As proposed by Sarstedt, Hair Jr [97], a disjoint two-stage approach was applied, after which the items of each lower-order construct were turned into single items: the original items of each lower-order construct (LOC) were combined into the latent variable score produced from the hierarchical component model (HCM). This enables PLSpredict to calculate the score for each item in the construct. Hahn and Ang [98] suggested that p-values alone are insufficient to report the significance of a hypothesis; thus, the reporting also includes other criteria, such as confidence intervals and effect sizes.

Path Coefficients and Coefficients of Determination

The structural model is evaluated in this section with respect to the hypothesized links between the variables. Bootstrapping was performed with 5,000 subsamples. To determine the significance of the structural paths, the path coefficients and t-values were analyzed. The path coefficients, which correspond to standardized beta (β) values ranging between −1 and +1, as generated by the software, are used to estimate the predicted relationships between the predictors and the outcome variable.
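The bootstrapping logic referred to here can be sketched as follows, using a standardized simple-path estimate as a stand-in for the PLS path coefficient; the 5,000 resamples and the one-tailed 1.645 cut-off mirror the settings reported below.

```python
# Simplified bootstrap significance test for a single path coefficient.
import numpy as np

def bootstrap_path(x: np.ndarray, y: np.ndarray, b: int = 5000, seed: int = 1):
    rng = np.random.default_rng(seed)
    n = len(x)

    def beta(xs, ys):                 # standardized simple-path estimate
        return np.corrcoef(xs, ys)[0, 1]

    estimate = beta(x, y)
    boots = np.empty(b)
    for i in range(b):
        idx = rng.integers(0, n, n)   # resample respondents with replacement
        boots[i] = beta(x[idx], y[idx])
    t_value = estimate / boots.std(ddof=1)   # estimate / bootstrap std. error
    return estimate, t_value, t_value > 1.645  # one-tailed test, alpha = .05
```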
As illustrated in Figure 2, the bootstrapping process was subjected to a one-tailed test at a 5% level of significance, and the findings can be found in Table 13 under the t-values column. With reference to Table 13, four of the hypothesized relationships have t-values of more than 1.645 and are thus significant at α = 0.05. In summary, organizational capabilities had the strongest effect (β = 0.386, p < 0.05), followed by SME institutional support (β = 0.341, p < 0.05) and market factors (β = 0.220, p < 0.05), while the moderating effect of firm size on SME institutional support was also significant (β = 0.135, p < 0.05). In addition, one non-hypothesized relationship, the direct effect of firm size, was significant (β = 0.191, p < 0.05).

Coefficient of Determination

The next stage is to assess the model's coefficient of determination, which is shown in Figure 3 for the endogenous construct. The R² value, which ranges from 0 to 1, shows the model's explanatory power and in-sample predictive power, with a higher number indicating greater explanatory power (Hair, Sarstedt [99]). This study embraced the three levels of predictive accuracy proposed by Hair Jr, Hult [100], whereby 0.75 indicates considerable predictive accuracy, 0.50 moderate predictive accuracy, and 0.25 weak predictive power. In this study, the R² value is 0.739 and the adjusted R² value is 0.715, indicating more than moderate predictive power.
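As a quick arithmetic check, the reported adjusted R² can be reproduced from the reported R² under the assumption of nine predictors of the endogenous construct (the four driving factors, firm size, and the four interaction terms) and n = 110; both of these counts are inferred from the model description rather than stated explicitly.

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1); with the rounded
# inputs below this yields ~0.716, matching the reported 0.715 once the
# rounding of R^2 itself is taken into account.
n, k, r2 = 110, 9, 0.739
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 3))
```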
Effect Size

In addition to determining the R² value, the fifth step is to examine the effect size (f²) of the exogenous constructs on the endogenous construct. The f² values are tested to examine the impact on the coefficient of determination when the interaction effect is eliminated from the study model [101]. According to Sheko and Braimllari [102], the f² values, also known as Cohen's indicator, can be classified into three levels, with 0.02 being small, 0.15 medium, and over 0.35 large. With reference to Table 13, it can be observed that the construct with the largest effect size is organizational capabilities (f² = 0.304), followed by SME institutional support (f² = 0.236), firm size (f² = 0.112), and market factors (f² = 0.098). Two of the hypothesized moderated relationships had small effect sizes, namely H6 (f² = 0.040) and H7 (f² = 0.042), while the other two had very low or no effect: H5 (f² = 0.002) and H8 (f² = 0.014). In this scenario, firm size does not seem to strengthen the relationships between three driving factors, namely organizational capabilities, perceived advantage, and market factors, and readiness for Industry 4.0; however, it does strengthen the relationship between one driving factor, SME institutional support, and readiness for Industry 4.0. Moreover, since this study involved moderated hypotheses, the moderating effect had to be evaluated; Table 14 demonstrates the outcome of the tested interactions.

Predictive Relevance

The following step assesses predictive relevance. While the R² value describes the model's in-sample predictive and explanatory power, the Stone-Geisser Q² value demonstrates how valid the model's predictions are through the blindfolding process [103]. This evaluation was performed using the SmartPLS 3 software with an omission distance of 7. Ramayah, Cheah [96] suggested that any Q² value larger than 0 shows that the endogenous construct has substantial predictive relevance. As shown in Table 15, the Q² value for the readiness for Industry 4.0 construct is 0.596.

PLSpredict

The PLSpredict procedure is the final stage in the model evaluation [104,105]. The PLSpredict assessment was calculated with ten folds and ten repetitions using the SmartPLS software. With reference to Table 16, the prediction error of one indicator is higher in the LM benchmark than in the PLS model, while the other two are lower, hence suggesting that the model possesses low to moderate predictive power.
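The PLSpredict comparison above boils down to contrasting out-of-sample errors of the PLS model with a naive linear-model (LM) benchmark. A simplified sketch, with all inputs assumed to be hypothetical holdout predictions rather than the study's actual values:

```python
# Q^2_predict > 0 means the model beats predicting the training mean; for
# each indicator, rmse(PLS predictions) < rmse(LM predictions) signals
# predictive power, and mixed results indicate low-to-moderate power.
import numpy as np

def q2_predict(y_true, y_pred, train_mean):
    sse = np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    sso = np.sum((np.asarray(y_true) - train_mean) ** 2)
    return 1 - sse / sso

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```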
Summary of Hypotheses

The data collected from the samples were analyzed in two phases to assess the proposed research framework. The first phase consisted of gathering demographic data, which were then descriptively analyzed to ensure that the data fit the study's prerequisites and requirements as specified in the inclusion criteria. The second phase involved using structural equation modeling to examine the hypothesized relationships. Prior to assessing the relationships, the model's convergent and discriminant validity were assessed, and after the criteria were met, the relationships were evaluated for path coefficients, collinearity statistics (VIF), and moderated relationships with SmartPLS 3. Table 17 presents an overview of the findings.

Discussion

This section contains detailed clarifications of all the relationships evaluated in this study. The explanations help address the two research questions, which were tested through eight hypotheses. Out of the eight hypotheses, four were supported. The following sub-sections comprehensively discuss the hypotheses to clarify the reasons for the study's findings.

5.1.1. Relationships between Organizational Capabilities, SME Institutional Support, Perceived Advantage, Market Factors, and SMEs' Readiness for Industry 4.0

The first research question asked about the relationships between organizational capabilities, institutional support, perceived advantage, market factors, and SMEs' readiness for Industry 4.0. Organizational capabilities, institutional support, perceived advantage, and market factors were hypothesized to positively impact SMEs' readiness for Industry 4.0. The present study found that organizational capabilities, SME institutional support, and market factors have positive impacts on SMEs' readiness for Industry 4.0, whereas perceived advantage was found to have no significant impact on SMEs' readiness for Industry 4.0. The discussion of the findings that answer this research question is presented below.

5.1.2. The Relationship between Organizational Capabilities and SMEs' Readiness for Industry 4.0

The present study found that organizational capabilities have a positive impact on SMEs' readiness for Industry 4.0. This finding contends that SMEs with the organizational capabilities to invest in Industry 4.0 are more prepared to execute digital processes. This is in line with the findings of Agostini and Nosella [106] that SMEs with stronger internal and external capabilities are more willing to adopt Industry 4.0 technologies. This could be because firms recognize that new technology frequently makes business processes easier, faster, and less expensive, and thus they determine to keep up with emerging technologies and harness them in creative ways [107]. Likewise, it has also been claimed that the ability to innovate has emerged as a dynamic capability that distinguishes firms that outperform their competitors [108]. Amid environmental turbulence, such as during an economic downturn, innovation is regarded as a means of withstanding the gales of creative destruction [109]. Malaysia was locked down for approximately two years due to the outbreak of the COVID-19 pandemic, and the country's economy was heavily affected, contracting by 4.5% in the third quarter of 2021 [110]. According to the overall observations of manufacturing firm responses in the survey conducted by UNIDO [111], small-sized firms have been hit the hardest by the COVID-19 crisis. Most of the respondents of the present study were small-sized firms (54.5%), and this could be the reason for their low willingness to adopt Industry 4.0 technologies. According to a report by SME Corp Malaysia, in 2020, approximately 77 percent of SMEs remained at the basic digitalization stage, which signifies that they only have a website, while in 2019, only 53.9 percent of the companies were present on the Internet [112].
The high costs of maintaining cutting-edge technology services discouraged 44 percent of Malaysian SMEs from considering cloud services, and 48 percent of SME owners cited the fact that their employees lacked the technical skills needed to go digital [112]. The MDEC Digital Talent Survey 2021 also revealed that 85 percent of companies surveyed recognize the need to reskill their employees [113]. Nonetheless, Agostini and Nosella [106] claim that firms must invest in advanced manufacturing technology and equipment to fully exploit the advantages of Industry 4.0. At the same time, Bank Negara Malaysia [114] highlights that using digital tools benefits sales and marketing functions, remote work arrangements, and the establishment of new products. Hence, when the owners and managers of SMEs have the ability to transform and recognize the importance of digital transformation, they are more likely to adopt Industry 4.0.

The Relationship between SME Institutional Support and SMEs' Readiness for Industry 4.0

The present study found that SME institutional support positively impacts readiness for Industry 4.0. This highlights that SMEs that obtain institutional support are more likely to adopt digital transformation. Sáfrányné Gubik and Bartha [115] claimed that technology support enhances firms' business development and also serves as a tipping point that gives firms a competitive advantage. Zhang, Xu [116] further delineated that, through improved organizational capabilities, institutional support in the form of technology and government support and partnerships has an indirect positive impact on digital transformation, while technology propels the company's digital strategy and assists top management. Moreover, Pavic, Koh [117] proposed that SMEs should prioritize their supporting activities such that the firm's human resources and technological infrastructure shape the core of e-business planning, using external and internal resources and opportunities to establish value through integration and intervention. According to Sommer [118], SMEs require more institutional support than larger organizations because they are less skilled at dealing with technological, financial, and staffing challenges. As digital transformation requires high investment and small-sized firms have fewer resources, they therefore need financing solutions. In the present study, most of the responding firms were registered as limited liability partnerships (LLPs). According to SSM [119], any debts and obligations of the LLP will be borne by the LLP's assets, not the partners'. Although LLPs have limited liability with respect to debts [120], which is seen as an advantage when seeking financing solutions, it has been reported that 60% of Malaysian SMEs are unaware of any relevant financing methods [112]. On the other hand, industries with great production volumes can take advantage of economies of scale, making them more likely to make greater initial investments to implement Industry 4.0 processes [32]. Lutfi [121] contended that SME CEOs' recognition of government support and incentives plays a critical role in enterprises' IT innovation execution and leads to its prompt implementation. The Malaysian government has recently initiated the National Economic Recovery Plan (Penjana) as an all-inclusive and holistic approach to the country's economic recovery, with the aim of encouraging more SMEs to begin their digital revolution [122].
This institutional support has addressed the financial barrier SMEs face in going digital, and it has been reported that several SMEs have already taken advantage of the opportunity to begin digitalization as part of their digital transformation journey [112]. Therefore, it can be concluded that when SME firms obtain sufficient institutional support, SME owners and managers are more likely to implement digital transformation.

5.1.4. The Relationship between Perceived Advantage and SMEs' Readiness for Industry 4.0

The present study reveals that perceived advantage does not impact readiness for Industry 4.0. This finding suggests that achieving Industry 4.0 readiness is not an easy task: it may take considerable effort and time to learn and master the required mechanisms of new digital technologies and processes, and thus an organization's perceived advantages alone cannot drive its management to prepare for digital adoption. This is supported by Nugroho, Susilo [123], who found that SME firms are unwilling to invest in IT infrastructure because it is difficult to optimize or even use. Furthermore, Stentoft, Jensen [124] argue that some SMEs may overlook some possible benefits of Industry 4.0 technologies due to a primary focus on routine operations, while others are hesitant to use the technologies because they may not, in fact, deliver more benefits than the costs of introducing and executing them. This finding is also supported by studies reporting that implementing Industry 4.0 projects in SMEs is still a cost-driven initiative and that the benefits of business transformation have yet to be demonstrated. Hence, SME managers may be less aware of the opportunities provided by new digital technologies. As a result, a scarce strategic vision toward new market opportunities can prevent them from preparing for Industry 4.0 [54]. The respondents of the present study mainly comprised top management (45.4%) and middle management (40%). Alieva and Powell [125] identified top and middle management involvement as positively influencing factors in the digital transformation process. In Malaysia, only 55% of CEOs recognize the need for digital transformation, and businesses are skeptical of embracing technology to improve their business productivity and growth [126]. On the other hand, more than half of the firms in the present study had been operating for more than 10 years, with 49.1% operating for 11 to 20 years and 12.7% for 21 to 30 years. Based on the findings of Kane, Palmer [127], older companies are generally less digitally mature, and employees in older companies are more likely to inhibit digital transformation. Nonetheless, the findings of Bouncken, Ratzmann [128] suggest that, while all firms across the age spectrum can benefit from mutual knowledge creation in their alliances, older firms can minimize their limitations with respect to innovation value creation when they mutually establish knowledge with their partners. Therefore, to elevate the adoption rate of Industry 4.0, the top and middle management of SME manufacturing firms should be more aware of the benefits and take the initiative to change.

The Relationship between Market Factors and SMEs' Readiness for Industry 4.0

Based on the results, it could be inferred that market factors are important drivers for starting the transformation towards Industry 4.0, as they were found to positively impact readiness for Industry 4.0.
This is in line with Lai, Sun [129] and Gangwar, Date [130], who discovered that competitive pressure drives Industry 4.0 readiness. Additionally, this finding is consistent with the findings of Nugroho, Susilo [124], who reported that SMEs will prepare for Industry 4.0 if they are aware of current or new customer demand for Industry 4.0-produced products. Consumers thus become a trigger for digital implementation, because customer satisfaction is important to a business. In certain manufacturing sectors, such as pharmaceuticals and automobiles, Industry 4.0 has become a mandatory requirement for certain products in certain countries [54]. Moreover, according to the Adyen Malaysia Retail Report 2022, customer requirements in technology have become the trend urging companies to undergo digital transformation, while Malaysian companies undergoing digital transformation outperform their industry peers, with a total value of MYR 334 billion [131]. The report also suggests that companies employ technological solutions to better understand their customers' needs and subsequently fulfil their requirements [131]. As a result, SMEs must be more customer-focused and proactive in seeking out new customers who require Industry 4.0 products. On the other hand, the emergence of new market players and competitors threatens established manufacturing companies' market positions and competitive advantages [132]. Hence, technology has become one of the tools used to support enterprise competitiveness in dealing with consistently changing business dynamics and expanding the various methods of doing the work [124]. Additionally, the National Industry 4.0 policy launched in 2018 by the Malaysian government prioritized support for five main sectors in terms of Industry 4.0 technology implementation, namely electronic and electrical, machinery and equipment, chemical, medical devices, and aerospace, while the emergence of new economies with lower cost structures boosted the development of the electronic and electrical sector [25]. More than one-third (37.3%) of the respondents of the present study came from the mentioned sectors, with the electronic and electrical sector alone accounting for 23.6%, and this could explain why market factors affect the behavior of Malaysian SME manufacturing firms with respect to adopting digital transformation.

The Moderating Effect of Firm Size

The second research question asked whether firm size moderates the relationship between organizational capabilities, institutional support, perceived advantage, market factors, and SMEs' readiness for Industry 4.0. Therefore, the hypothesized relationships concern whether firm size strengthens the relationship between organizational capabilities, SME institutional support, perceived advantage, market factors, and SMEs' readiness for Industry 4.0. To add novelty to this study, the element of firm size was incorporated into the research model to investigate its strengthening effect. As discussed in the earlier section on firm size as a moderator, some findings of other researchers have shown the significance of firm size in the adoption of digital transformation. However, the findings of this study contradict previous findings of researchers such as Pla-Barber and Alegre [82] and Noori, Nasrabadi [133], while being in line with the study of Lee and Kim [75].
The findings of this study demonstrate firm size not to be a strengthening factor of organizational capabilities, perceived advantage, and market factors. However, the results of this study reveal that firm size does moderate the relationship between institutional support and SMEs' readiness for Industry 4.0; as such, the positive relationship between institutional support and SMEs' readiness for Industry 4.0 is stronger when the firm size is larger. The respondents in the present study were mainly small-sized enterprises (54.5%), followed by medium-sized firms (33.6%), and micro units (11.8%). With respect to the number of employees, the majority group of 59 (53.6%) had fewer than 50 employees, 28 (25.5%) had between 51 and 100 employees, 10 (9.1%) had 101 to 150, and 13 (9.8%) 151 to 200. First and foremost, the results show that firm size does not moderate the direct effect of organizational capabilities on Industry 4.0 readiness. This result is in line with the findings of Agostini and Nosella [107] who reported that firm size was not statistically significant in any research model. Additionally, the research discovered a weak relationship between firm size and willingness to adopt technology. This makes sense, because the size of a firm does not guarantee the availability of appropriate technology and sufficient financial resources for business transformation. This is supported by the research executed by Lin, Lee [134], who reported that firm size does not mandate increased use of advanced manufacturing technology for Industry 4.0, most probably due to industrial and product characteristics, such as high value, high safety and reliability requirements, global sourcing, large batch production, and large scale. Moreover, Michna and Kmieciak [135] also identified that SME firms' financial performance was positively related to their willingness to implement Industry 4.0, regardless of firm size. Firms with sufficient financial resources may be able to invest in Industry 4.0 and meet the initial investment and administrative costs despite the risk of failure [136]. Inferring from the accepted moderated hypothesis of firm size between SME institutional support and SMEs Industry 4.0 readiness, firm size does affect the firm's willingness to innovate and grow when they have or have not received institutional support. This is supported by the findings of Motta and Sharma [137], who revealed that SMEs' access to capital might be hampered by firm size, as small businesses may lack the high-quality projects required to obtain bank credit from financial intermediaries. To improve the entire business operation, business owners should consider the trade-off between the cost of financial capital, its advantages, and the firm size restriction [76]. On the other hand, the results of the present study also show that firm size does not moderate the direct effect of perceived advantages on the Industry 4.0 readiness of manufacturing SMEs. This is in line with Ricci, Battaglia [138] finding that firm size does not significantly impact the perception of Industry 4.0 opportunities and the implementation of Industry 4.0 technologies. A possible reason for this might be that SMEs' recognition of the advantages of Industry 4.0 adoption depends on their current automation level, which has nothing to do with the firm size. 
The findings of Müller, Buliga [139] proved that SMEs with high degrees of automation perceive opportunities rather than threats from Industry 4.0, but SMEs involving a high level of human labor are more likely not to expect changes from Industry 4.0 in their business models. The impact of market factors on SMEs' readiness for Industry 4.0 has an insignificant direct effect when firm size is taken into account. A possible reason for this finding is that the responsibilities of identifying and meeting customer needs and market demands, in terms of possessing technological skills that are adequate to the products offered, are at the top management level regardless of the firm size [140][141][142]. Theoretical Contribution From a theoretical perspective, this study complements the literature encompassing Malaysian manufacturing SMEs' readiness towards Industry 4.0 and the driving factors that will empower them to embrace digital transformation. More specifically, this study provides relevant information to the body of knowledge to recognize the relationships between independent variables, namely organizational capabilities, institutional support, perceived advantage, and market factors. Furthermore, this study contributes to the body of research already performed in Industry 4.0, especially in the Malaysian context. This study contributes empirical support to the implementation of resource-based view theory in the conceptualization of Industry 4.0 readiness on the basis of four driving factors. This study contributes to the existing body of knowledge by examining the moderating effect of firm size between the four driving factors and SMEs' readiness of Industry 4.0. In the existing literature, most studies have evaluated firm size only from the perspective of firm performance; thus, knowledge of the moderating role of firm size on Industry 4.0 implementation is still inadequate. Hence, rigorous validation is necessary to identify the moderating effect of firm size between the driving factors and Industry 4.0 implementation. To fill this research gap, this study elongated the relationship between the driving factors, namely organizational capabilities, SME institutional support, perceived advantage, and market factors and readiness for Industry 4.0, and empirically proved the moderating effect of firm size. The results demonstrated that most of the relationships between the driving factors and readiness of Industry 4.0 were not moderated by firm size. Practical Implications These findings lay a sturdy foundation for understanding the driving factors of Industry 4.0 for SMEs in the execution or diffusion stage. The results can be applied to other geographic or industrial areas for implementation and achievement, such as the service sector. According to the study's findings, SMEs must have relevant financial and technological resources for digital revolution. The owners and managers must comprehend the advantage of the digital adoption and make viable decisions to initiate Industry 4.0 implementation. They should set aside sufficient funds to upgrade the current IT infrastructure to make it compatible with this revolution. SME owners and managers must reskill their employees in terms of digital and technology and encourage them to embrace digital transformation. The relationship of market factors was found to be significant with readiness for Industry 4.0. This shows that the present industrial environment and competitors put pressure on Malaysian SMEs to execute Industry 4.0. 
Competing in the local SME sector may produce such results. SMEs may also prioritize market pressure when making digital innovation decisions. Furthermore, institutional support, including financial and technological support from the government or institutions, is critical for technological transformation. Owners and managers must consider obtaining institutional funding to execute Industry 4.0, as the technological evolution is costly. The relationship between perceived advantage and SMEs' readiness for Industry 4.0 was found to be insignificant. Lack of Industry 4.0-related knowledge and awareness among the owners and managers may also have contributed to the result. SME owners and managers should focus on this point and plan to ensure that the organization's performance will be boosted by adopting new Industry 4.0 technologies. The findings of this study could be useful for policy formulation in various ways. Policymakers could use the empirical findings to streamline institutional support and collaborative platforms, as well as a strategic reference for current development. Globalization and the growth of the information economy have rendered traditional policy ineffective, so new policy interventions for Industry 4.0 should be inventive. Industry 4.0 has sparked a technological revolution, and it is becoming ingrained in the DNA of businesses in order to obtain a competitive advantage. Technology is one of the primary drivers that will drive the future of Malaysia's SMEs business ecosystem. Malaysian SMEs should take advantage of digital technology and enhanced automation to maintain the country on cutting-edge technological advancements. Limitations and Recommendations for Future Studies Although this study is useful to SMEs and policymakers, it has a sample size limitation due to time constraints during data gathering. A larger sample size may help researchers to obtain a more generalized picture of the situation regarding status and attitude toward Industry 4.0 adoption and readiness. Furthermore, multiple groups with varied readiness levels (ready vs. not ready) could be studied to produce more intriguing results. In terms of future study, it will be critical to investigate what Industry 4.0 means for a company's business and the entire organization and how it affects present business strategies and business models. Furthermore, other elements that could motivate and encourage SMEs to prepare for digital transformation should be investigated depending on various theories. Summary and Conclusions This study relied on the RBV theory in identifying the driving factors of Malaysian SME manufacturing firms toward Industry 4.0 readiness. The objective of this study was to examine the relationship between organizational capabilities, institutional support, perceived advantage, market factors, and SMEs' readiness for Industry 4.0, and to assess whether firm size moderated the relationship between organizational capabilities, institutional support, perceived advantage, market factors, and SMEs' readiness for Industry 4.0. In this study, organizational capabilities, SME institutional support, perceived advantage, and market factors explained 20.16% of the variance in Industry 4.0 readiness. According to Cohen's (1992) rule of thumb, variances explained for an endogenous variable with values greater than 26% is regarded as high, while values greater than 13% are seen as medium and values greater than 2% are regarded as small. 
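The rule of thumb just quoted can be expressed as a small helper. This is only an illustrative sketch of the thresholds cited above (Cohen, 1992), not code from the study.

```python
def classify_r_squared(r2_percent: float) -> str:
    """Classify explained variance using the thresholds quoted above
    (>26% high, >13% medium, >2% small)."""
    if r2_percent > 26:
        return "high"
    if r2_percent > 13:
        return "medium"
    if r2_percent > 2:
        return "small"
    return "negligible"

print(classify_r_squared(20.16))  # -> "medium"; the value reported for Industry 4.0 readiness
```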
Therefore, the variance explained in Industry 4.0 readiness by the mentioned independent variables can be regarded as medium. Based on the hypotheses tested, organizational capabilities, SME institutional support, and market factors were found to have a significant positive impact on Industry 4.0 readiness, while perceived advantage was not found to have a significant impact. Regarding firm size as a moderator, it can be identified that firm size moderates the relationship between SME institutional support and SMEs' readiness for Industry 4.0. Conversely, firm size does not strengthen the effects of organizational capabilities, perceived advantage, or market factors on SMEs' readiness for Industry 4.0. This study provides theoretical and practical contributions critical to practitioners and policymakers. For policymakers, empirical support is provided in this study for applying the RBV theory in explaining the Industry 4.0 readiness level of Malaysian SME manufacturing firms. For practitioners, the study supplies recommendations that could help to increase the Industry 4.0 readiness level among Malaysian SME manufacturing firms, potentially boosting their organizational performance. Manufacturing SMEs must strive towards a high-tech production model and skilled workforce by embracing Industry 4.0 if Malaysia is to maintain its manufacturing competitiveness in the future. Overall, this research contributes to the body of knowledge showing that readiness for a digital revolution is influenced by several factors that both practitioners and policymakers should pay greater attention to. First and foremost, this study highlights that, to begin the journey towards Industry 4.0, SMEs have to first prepare by planning for all requirements from three key aspects: managerial, operational, and technological. Secondly, it emphasizes that being ready necessitates a coordinated effort to alter the mindsets of management staff before they can shift the mindsets of the non-managerial staff who will be allocated to handle new workers, machines, equipment, systems, procedures, processes, and goods. In conclusion, SMEs should keep in mind that the benefits of digital transformation may not be apparent immediately, but can be achieved over time. The autonomous or digitalized workplace may be a long way off for some, but it is useful to have a sense of what that vision might look like, and what benefits it might bring. In a nutshell, SMEs must be convinced of the advantages of implementing Industry 4.0, and this study addresses the reasons that may prompt them to prepare for this difficult but profitable shift along the digital wave.
Cardioprotective role of zofenopril in patients with acute myocardial infarction: a pooled individual data analysis of four randomised, double-blind, controlled, prospective studies Background Early administration of zofenopril following acute myocardial infarction (AMI) proved to be prognostically beneficial in the four individual randomised, double-blind, parallel-group, prospective SMILE (Survival of Myocardial Infarction Long-term Evaluation) studies. In the present analysis, we evaluated the cumulative efficacy of zofenopril by pooling individual data from the four SMILE studies. Methods 3630 patients with AMI were enrolled and treated for 6–48 weeks with zofenopril 30–60 mg/day (n=1808), placebo (n=951), lisinopril 5–10 mg/day (n=520) or ramipril 10 mg/day (n=351). The primary study end point of this pooled analysis was set to 1 year combined occurrence of death or hospitalisation for cardiovascular (CV) causes. Results Occurrence of major CV outcomes was significantly reduced with zofenopril versus placebo (−40%; HR=0.60, 95% CI 0.49 to 0.74; p=0.0001) and versus the other ACE inhibitors (−23%; HR=0.77, 0.63 to 0.95; p=0.015). The risk reduction observed under treatment with the other ACE inhibitors was nearly statistically significant (−22%; HR=0.78, 0.60 to 1.02; p=0.072). The benefit of zofenopril versus placebo was already evident after the first 6 weeks of treatment (−28%; HR=0.72, 0.54 to 0.97; p=0.029), while this was not the case for the other ACE inhibitors (−19%; HR=0.81, 0.57 to 1.17; p=0.262). In this early phase of treatment, zofenopril showed a non-significant trend towards a larger reduction in CV events versus the other ACE inhibitors (−11%; HR=0.89, 0.69 to 1.15; p=0.372). Conclusions The pooled data analysis from the SMILE Programme confirms the favourable effects of zofenopril treatment in patients with post-AMI and its long-term benefit in terms of prevention of CV morbidity and mortality. INTRODUCTION Activation of the renin-angiotensin-aldosterone system has long been implicated in the pathogenesis of acute myocardial infarction (AMI), and its blockade has been shown to be beneficial in preventing major cardiovascular (CV) complications in several large randomised, prospective early and late intervention trials in patients with post-AMI. [1][2][3][4] Accordingly, current guidelines recommend the prescription of an ACE inhibitor (ACEI) to all patients with ST elevation anterior AMI, post-AMI left ventricular dysfunction (left ventricular ejection fraction, LVEF <40%), or to those who have experienced heart failure in the early phase of the AMI. [5][6][7] KEY QUESTIONS What is already known about this subject? ▸ Use of ACE inhibitors has been shown to be beneficial in preventing major cardiovascular complications in several large randomised, prospective early and late intervention trials in patients with post-acute myocardial infarction (AMI). What does this study add? ▸ Zofenopril is an ACE inhibitor with a proven efficacy in early treatment of AMI. Such evidence comes from four separate randomised studies of similar design comparing zofenopril with placebo, lisinopril or ramipril. A pooled analysis of these studies is provided in order to increase the robustness of the evidence of zofenopril efficacy in patients with post-AMI. How might this impact on clinical practice?
▸ The results confirm the superior efficacy of zofenopril versus placebo in patients with post-AMI, and its long-term benefit in terms of prevention of major cardiovascular events. This further supports the indication to use zofenopril as well as ACE inhibitors in the treatment of AMI. ACEIs should also be given to and continued indefinitely in patients recovering from unstable angina or non-ST elevation AMI, or those with stable coronary heart disease (CHD), even in the absence of left ventricular dysfunction. [7][8][9] Among the various ACEIs, zofenopril proved to be very effective in patients with CHD and AMI, thanks to its unique and effective mechanism of action for improving blood pressure control, left ventricular function and myocardial ischaemia burden, as well as ACE inhibition. 10 The double-blind, randomised, parallel-group, prospective trials of the Survival of Myocardial Infarction Long-Term Evaluation (SMILE) project, which involved more than 3600 patients with CHD, demonstrated that early AMI treatment with zofenopril may reduce mortality and morbidity, also when combined with acetylsalicylic acid, to a greater extent than lisinopril and ramipril. [11][12][13][14] In addition, zofenopril has shown an interesting anti-ischaemic effect in patients with preserved left ventricular function after AMI. 15 The objective of this pooled individual data analysis of the four SMILE studies was to review the cumulative efficacy of the ACEI zofenopril in the patients with CHD enrolled under the SMILE project. Study design and population The four double-blind, randomised, parallel-group SMILE studies compared the efficacy and safety of zofenopril with that of placebo (SMILE-1 and SMILE-3), 11 15 lisinopril (SMILE-2) 12 or ramipril (SMILE-4), 13 in European men and non-pregnant women with AMI. Patients included in the studies were those with (1) an early AMI (<24 h), not eligible for thrombolytic therapy because of late admission to the intensive care unit or with contraindication to systemic fibrinolysis (SMILE-1); 11 (2) a confirmed diagnosis of AMI and a prior thrombolytic treatment within 12 h of the onset of clinical symptoms of AMI (SMILE-2); 12 (3) a recent AMI (within 6±1 weeks) with preserved LVEF (>40%), treated with a thrombolytic treatment and with ACEIs (SMILE-3); 15 and (4) an early myocardial infarction (<24 h), either treated with thrombolysis or not, with primary percutaneous transluminal angioplasty or coronary artery by-pass graft, and with clinical and/or echocardiographic evidence of left ventricular dysfunction (SMILE-4). 13 All studies were conducted in accordance with the Guidelines for Good Clinical Practice and the Declaration of Helsinki, and were approved by the Ethics Committee of each participating centre. Written informed consent was obtained from each patient before enrolment. Treatments Eligible patients were randomly allocated, in a double-blind manner, to treatment with zofenopril or comparator (placebo, lisinopril or ramipril), in addition to standard recommended therapy for AMI. No lead-in observational period was foreseen prior to randomisation, except for the SMILE-4 study. In this study, eligible patients entered a 4-day open-label phase prior to randomisation and were given zofenopril according to the up-titration scheme described below. 11 The initial dose of zofenopril was 7.5 mg two times per day on day 1 and 2, followed by 15 mg two times per day on day 3 and 4 and 30 mg two times per day from day 5 onward.
Up-titration was allowed if systolic blood pressure remained >100 mm Hg and if there were no signs or symptoms of hypotension. The doses of the active comparators were up-titrated as well: up to 10 mg once daily for lisinopril and up to 5 mg two times per day for ramipril. Randomised treatment was continued for 6-48 weeks and patients were seen at enrolment every 1-6 months, depending on the study. For all studies, duration of treatment and follow-up periods overlapped, the only exception being represented by the SMILE-1 study. In this trial, on completion of the 6-week doubleblind treatment period, the patients stopped taking the study medication but continued treatment with their other medications for an additional 48 weeks. Statistical analysis This analysis was an individual patient data (IPD) analysis where the IPD were pooled. In this analysis, a one-step approach was used, the IPD were aggregated and the pooled data set was analysed using appropriate statistical methods. IPD pooled analysis improves the quality of data and produces more reliable results. Since all the four SMILE Studies provided information on fatal and non-fatal CV events, the primary study end point of this retrospective analysis was set to the 1 year combined occurrence of death or hospitalisation for CV causes. In the SMILE-1 study, the primary end point was the combined occurrence of death or severe congestive heart failure during the 6 weeks of treatment with zofenopril or placebo. In the SMILE-2 study, the primary end point was the 6-week occurrence of severe hypotension, either cumulative or related to study drug administration, with zofenopril or lisinopril, while occurrence of CV outcomes was a secondary end point. In the SMILE-3 study, the primary end point was the 6-month global ischaemic burden, defined as the occurrence of postinfarction angina, ischaemic abnormalities during rest or Holter ECG, or treadmill test, recurrent myocardial infarction or need for coronary revascularisation. Finally, in the SMILE-4 study, the primary end point was the 1 year rate or CV mortality or morbidity (hospitalisation for CV causes). For the purpose of the present pooled analysis, the efficacy end point was calculated after weighing for the number of participants contributing from each study. The efficacy analysis was carried out on the full analysis set (intention-to-treat population), made up of all randomised patients treated with at least one dose of study medication and documenting at least once the measure of the primary efficacy assessment, even in case of protocol violation or premature withdrawal from the study. The safety analysis was applied to all randomised patients who took at least one dose of the study medication, by assessing the incidence of adverse events and changes in laboratory data or ECG during the study. The measure of safety used in this pooled analysis was the rate of drug-related adverse events expressed also as the number of drug-related adverse events divided by the person-time at risk throughout the observation period. The baseline characteristics and the distribution of variables in the study populations and subgroups were compared using a χ 2 test (2×4 tables) for categorical variables and an analysis of variance (between groups, F-test) for continuous variables. 
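Purely as an illustrative sketch of such a 2×4 contingency comparison (the counts below are hypothetical and are not taken from the SMILE data), a chi-squared test across the four treatment groups could be run as follows.

```python
# Hypothetical sketch: chi-squared test on a 2 x 4 table (e.g., a yes/no baseline
# characteristic across the four treatment groups). Counts are made up.
import numpy as np
from scipy.stats import chi2_contingency

#                 zofenopril  placebo  lisinopril  ramipril
table = np.array([[540,        290,     160,        110],   # characteristic present
                  [1268,       661,     360,        241]])  # characteristic absent

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```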
HRs and 95% CIs were calculated by a Cox proportional hazard regression model in which treatment group, gender (males vs females), country, age (<65 vs ≥65 years), body mass index (<30 vs≥30 kg/m 2 ) and CV risk factors (yes vs no) were included as covariates. CV risk factors were defined by the presence of previous angina pectoris, previous congestive heart failure, hypercholesterolaemia requiring lipid-lowering drug, previous peripheral artery disease or previous coronary revascularisation. In order to account for the different durations of follow-up among the four studies, the relative risk of CV morbidity and mortality was assessed using a time-dependent Cox regression model and corresponding survival curves were drawn. Heterogeneity for the primary study end point was assessed by the Q Cochrane's statistics. 16 Evaluations for the primary study end point were also made by subgroups of patients according to age, gender, diabetes, hypertension and CV risk factors. All p values are two-sided and the minimum level of statistical significance was set at 0.05. Data are shown as mean±SD or as mean and 95% CI, or as absolute (n) and relative (%) frequencies. Patient population Three thousand six hundred and thirty patients were included in this pooled analysis: 1556 patients (43%) were enrolled in the SMILE-1, 1024 in the SMILE-2 (28%), 334 in the SMILE-3 (9%) and 716 in the SMILE-4 (20%) study. Eleven European countries participated in the study, with 61% of patients recruited in Italy. One thousand eight hundred and eight patients (50%) received zofenopril, 951 (26%) placebo, 520 (14%) lisinopril and 351 (10%) received ramipril. Baseline characteristics for the patients included in the present pooled analysis are summarised in table 1. Some heterogeneity across the four treatment groups was observed. Long-term treatment efficacy As shown in figure 1A, the chance of surviving over 1 year without any major CV event was significantly higher under treatment with zofenopril than placebo (HR and 95% CI 0.60, 0.49 to 0.74; p=0.0001), with a 40% risk reduction. The superiority of zofenopril versus placebo was evident regardless of age, gender, diabetes, hypertension or presence of CV risk factors (table 2). The 1 year risk of mortality and morbidity was reduced by 22% under treatment with the other ACEIs versus placebo (0.78, 0.60 to 1.02), the between-group difference being nearly statistically significant ( p=0.072; figure 1A), also when specific subgroups were examined (table 2). A larger benefit was observed with respect to placebo under lisinopril (HR=0.63, 0.47 to 0.87) than under ramipril (0.94, 0.71 to 1.26; figure 1B). The Q Cochran's analysis showed a moderate heterogeneity in the effect between zofenopril and the other ACEIs (I 2 =42%). Efficacy in the early phase of treatment The superiority of zofenopril versus placebo was particularly evident in the first 6 weeks of treatment, with a 28% reduction in the risk of CV mortality and morbidity (0.72, 0.54 to 0.97; p=0.029; figure 2A). Such a reduction was documented also for the other ACEIs (0.81, 0.57 to 1.17), but it was not statistically significant ( p=0.262; figure 2A). Survival rates with respect to placebo were much better under lisinopril (0.62, 0.41 to 0.95) than under ramipril (1.06, 0.71 to 1.59; figure 2B). In this early phase of the study, zofenopril was as effective as the other ACEIs (0.89, 0.69 to 1.15; p=0.372; figure 2A), and particularly with respect to lisinopril (1.15, 0.83 to 1.60; figure 2B). 
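To make the survival modelling concrete, the sketch below shows how a Cox proportional hazards model of the kind described in the statistical analysis could be fitted with the lifelines library. The file, column names, and covariate coding are hypothetical, and this is not the analysis code used for the SMILE pooled analysis (which, for example, also adjusted for country).

```python
# Hypothetical sketch of a Cox proportional hazards fit (lifelines).
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: time to event (days), event indicator (1 = death or CV
# hospitalisation), and simplified versions of the covariates named above.
df = pd.read_csv("smile_pooled.csv")  # hypothetical file

covariates = ["treatment_zofenopril", "male", "age_ge_65", "bmi_ge_30", "cv_risk_factor"]
cph = CoxPHFitter()
cph.fit(df[["time_days", "event"] + covariates],
        duration_col="time_days", event_col="event")

cph.print_summary()        # coefficients, hazard ratios (exp(coef)) and 95% CIs
print(cph.hazard_ratios_)  # HR for each covariate, e.g. for the treatment indicator
```

In this framing, a hazard ratio of 0.60 for the treatment indicator would correspond to the 40% risk reduction quoted for zofenopril versus placebo.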
The rate of drug-related adverse events ( person-time at risk) was 0.60 under zofenopril (369 events), 0.44 under placebo (102 events, p<0.001 vs zofenopril), 2.78 under lisinopril (152 events, p<0.001 vs zofenopril) and 0.08 under ramipril (16 events, p<0.001 vs zofenopril). Thus, the rate of drug-related adverse events was lower under ramipril and higher under lisinopril, as compared to zofenopril. In patients treated with ACEIs, a total of 60 drug-related serious adverse events occurred, of which 36 were under zofenopril (1.5% of total adverse events), 22 were under lisinopril (2.2%) and 2 were under ramipril (0.5%). DISCUSSION In the present pooled analysis, we confirmed what was documented in the individual SMILE studies, namely, that treatment of patients with AMI, with or without left ventricular dysfunction, with zofenopril, effectively reduces the risk of the combined end point of CV death or hospitalisation for CV causes with respect to placebo. [11][12][13][14][15] Interestingly, most of the beneficial effects of active drug treatment with zofenopril in our study were already evident in the first weeks following initiation of treatment, and were well maintained over time. As a matter of fact, in zofenopril-treated patients, 70% of the risk reduction was achieved in the first 6 weeks, while an additional 30% was reached at the end of the follow-up. These results are in agreement with previous data showing that treatment with ACEIs begun days to weeks after AMI improves clinical outcomes. 17 18 Given the peculiar pharmacological characteristics of zofenopril, we may suggest that most of the benefit is achieved with this drug through a primary vasculoprotective and cardioprotective effect, as shown in preclinical studies in animals 19 20 and in clinical studies in humans, 21 22 as well as through the prompt blockade of the deleterious effects of neurohumoral activation. 23 Our study also compared the effects of zofenopril with those of two other ACEIs (lisinopril and ramipril): treatment with zofenopril reduced the chance of occurrence of the combined end point slightly more than did lisinopril or ramipril, at least in the long term. The only other available large trial assessing the efficacy of different ACEIs after AMI is a non-randomised, observational study by Hansen et al. 24 In this study, no differences were observed in the risk of mortality and reinfarction among trandolapril, ramipril, enalapril, captopril, perindopril and other ACEIs, suggesting a class effect rather than a specific activity of the single ACEI. Our results are in contrast with those of the study by Hansen et al, and do not support a class effect but, rather, support differences in the efficacy between different ACEIs. Although our data are quite consistent, being collected through double-blind, randomised, parallel-group, prospective studies with similar designs, we cannot exclude the fact that the superiority of zofenopril might simply be related to a larger number of participants included in this group and to some heterogeneity across the studies. Future direct comparative studies should explore this aspect in detail. Safety results confirmed that when treatment with zofenopril is initiated at low dose within the first days or weeks of onset of symptoms and signs of AMI, and up-titrated to optimal dose within a week, its tolerability is good, comparable to that observed with the reference drugs, lisinopril and ramipril, and consistent with previous clinical observations in the same field. 
25 Study limitations Although the design of the four SMILE studies was very similar, there were some differences in the inclusion criteria, and treatment duration and follow-up, which might have biased the study results, particularly when direct comparisons between different active drug treatments were attempted. For instance, the SMILE-1 study included only those patients who were nonthrombolysed, the SMILE-2 and SMILE-3 included only thrombolysed patients, and the SMILE-4 study included both types of patients. The SMILE-3 study excluded patients with a LVEF <40%, while patients with left ventricular dysfunction were included in the SMILE-4 study. In the SMILE-1 study, active treatments lasted 6 weeks, while observation continued for the subsequent 12 months. In the other SMILE studies, treatment duration and observation coincided, but the time interval differed. In assessments of the differences between treatments, resulting variations in baseline characteristics might have tended to decrease the sensitivity of such analyses to show interaction. However, such differences are inherent to all pooled analysis and the bias introduced into ascertainment of the average effects among the patients is usually limited in size. This is particularly true in our case, because we adjusted comparisons for confounding variables and we used individual patients' data instead of averages. Another important study limitation concerns the interpretation of the results of the safety analysis. As a matter of fact, in the SMILE-2 study, the primary end point was a safety factor: the incidence of drug-related severe hypotension. This might explain why the proportion of patients with an adverse event was particularly high in the group of patients receiving lisinopril. Since the rate of adverse events was low in ramipril-treated patients of the SMILE-4 study, when data of these two different ACEIs were pooled together, differences were counterbalanced and thus elided. Conclusions The results of the pooled data analysis of the SMILE studies confirm the favourable effects of zofenopril treatment in patients with CHD. The reduction in mortality and morbidity observed in zofenopril-treated patients in comparison to placebo supports the fact that the ACE inhibition and specific pharmacological profile both contribute to the unbeaten efficacy of ACEIs in CHD. These results also strongly support the strategy of starting ACEIs early after AMI, in order to maximise their potential benefits. However, since the clinical benefits persisted during long-term treatment, this also suggests that ACEIs should not be withheld.
Can a low-cost exercise monitor provide useful heart rate monitoring for use in low-resource emergency departments? Objective Our objective was to study the clinical monitoring capabilities of a low-cost fitness wristband while measuring patient satisfaction with a mobility permitting device in the emergency department. Methods Patients enrolled were on continuous three-lead telemetry monitoring in a high acuity zone of the emergency department. Patients were given a fitness band to wear while simultaneously monitored with standard three-lead monitor. A brief survey was conducted upon study end, and data was compared between wristband and three-lead telemetry. Median heart rate (HR) values were calculated, a Bland-Altman plot was generated, and sensitivity and specificity were calculated for comparison of the formal telemetry and the inexpensive wristband. Results Thirty-four patients with an average age of 61.5 years were enrolled. From June to October 2019, over 100 hours of data were collected. In comparison for comfort, participants scored 9.5 of 10, preferring wristband over telemetry. Using a correlation coefficient graph, we found a significant disparity of HR readings within a telemetry range of 40 to 140 beats/min. An R-value of 0.36 was detected. Using a Bland-Altman plot, we observed a significant difference in HR between the telemetry monitor and the wristband. The sensitivity and specificity of the wristband to detect bradycardia (HR <60 beats/min) were 76% and 86%, respectively, while the sensitivity and specificity of the wristband to detect tachycardia (HR >100 beats/min) were 92% and 51%, respectively. Conclusion Inexpensive fitness bands cannot be a suitable tool for monitoring patient’s HR because of inaccuracy in detecting bradycardia or tachycardia. INTRODUCTION The practice of medicine in low-resource settings is fraught with challenges. In addition to scarce medical supplies, lack of infrastructure, and lack of financial support, there are often insufficient tools to adequately monitor and treat patients. 1 While nursing ratios in the United States and Canada are on average 1:4.4 on general medicine wards, and as high as 1:1 or 1:2 for critically ill patients, 2 in low-resource settings these ratios can be as high as 1:31. 3 Under these conditions, it is no surprise that tasks as fundamental as routinely documenting vital signs can be untenable. [3][4][5] While telemetry monitoring is a useful tool to extend monitoring capability, it is often out of the financial reach of lower resource emergency departments (EDs) and hospitals. As the authors have previously proposed, 6 there may be an opportunity to bridge this gap through the use of commercially available low-cost fitness monitoring devices. Increasingly ubiquitous fitness monitoring devices and smart watches monitor the heart rate (HR) through an inexpensive optical technique known as photoplethysmography. 7 While there have been multiple studies validating moderate to high-cost exercise monitoring devices in the 150 to 400 US dollars range, [8][9][10] there are now ultra-low-cost monitors available in the < 50 US dollars range that offer the possibility of use in the lowest resource settings. These ultra-low-cost monitors have yet to be tested in clinical settings. In this study, we examine the clinical monitoring capabilities of a low-cost fitness band, which is available commercially in the 20 to 40 US dollars price range. 
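To give a sense of how photoplethysmography-based HR estimation works in principle, the sketch below detects beat-to-beat peaks in a synthetic pulse-like waveform and converts them to beats per minute. This is only an illustration; it is not the wristband's firmware nor any code used in the study, and the sampling rate and signal are assumed.

```python
# Illustrative sketch: estimating heart rate from a synthetic PPG-like signal.
import numpy as np
from scipy.signal import find_peaks

fs = 50                       # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)  # 30 seconds of signal
true_hr = 72                  # beats per minute used to synthesise the waveform

# Crude synthetic pulse waveform: a dominant component at the pulse rate plus noise
ppg = np.sin(2 * np.pi * (true_hr / 60) * t) + 0.1 * np.random.randn(t.size)

# Detect one peak per beat; enforce a refractory period of ~0.4 s (max ~150 bpm)
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)

beat_intervals = np.diff(peaks) / fs        # seconds between successive beats
estimated_hr = 60 / beat_intervals.mean()   # beats per minute
print(f"Estimated HR: {estimated_hr:.1f} bpm")
```

Motion artefact and poor skin contact corrupt exactly this peak structure, which is one plausible contributor to the error states described later in the paper.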
Our objective was to study the clinical monitoring capabilities of a low-cost fitness wristband while measuring patient satisfaction with a mobility permitting device in the ED. What is already known Exercise monitors have become less expensive and are universally used by patients in nonclinical settings. What is new in the current study While expensive exercise monitoring has been tested for clinical validity, a low cost fitness monitor has not been tested. Our goal was to find applicability of a low-cost fitness device for clinical use especially in low-resource emergency department settings. The combination of a low-cost heart monitor and a commercially available open source application was not able to reliably detect heart rates at a suitable safety level for clinical use. (Table 2 lists the two satisfaction survey items, each scored from 1 to 10: the comfort of the wristband compared with the telemetry system, and willingness to wear a wristband monitor during future ED visits; values in the tables are presented as mean (range) or number (%).) Study design Study participants were enrolled in the ED at Beth Israel Deaconess Medical Center located in Boston, Massachusetts, approved by the local institutional review board (2018P000380). Each participant presented with various medical complaints, not always involving cardiovascular issues. Due to the observational nature of the study, written consent was waived and verbal informed consent was utilized. Patients undergoing continuous telemetry as part of routine care were screened and selected by research staff to approach and obtain verbal consent. The decision to enroll a patient was based on whether the patient was on continuous three-lead telemetry monitoring as well as placed in the high-acuity area of the ED in a tertiary, urban, academic medical center. These patients were of greater interest due to the potential for a more dynamic range of HR in critically ill patients. Patients were excluded if they were < 18 years of age, unable to comfortably fit the device on their wrist, were on infectious disease contact precautions, had an allergy to silicone or aluminum, or if the application of the device would interfere with clinical care. Particular attention was given to those patients who were bradycardic and tachycardic so that the device could be tested on a broad range of HR. Patients who met no exclusion criteria were consented for inclusion in the study and a Xiao-mi Pulse fitness band (Xiaomi Company, Beijing, China) was applied to their wrist. This wristband communicated via Bluetooth to a nearby Android tablet, the Samsung Galaxy 8-inch edition (Samsung Electronics Co., Seoul, South Korea). While the included Xiao-mi Android application is only able to obtain "spot check" HR measurements, by using a publicly available application called "Notify & Fitness for Mi Band" (OneZeroBit, Padova, Italy), we were able to obtain HR data as frequently as once per second. Patients were simultaneously monitored with a standard three-lead monitor as part of their clinical care, which sent HR information to our central hospital database every minute.
When the subject was ready to leave the ED (either admitted or discharged), researchers removed the wristband and administered a brief verbal survey regarding the patient experience as well as collected basic demographic information. Data collection The Android application session for each subject, including HR data and survey answers, was exported into a REDCap database with a subject number ID assigned for each participant. Contemporaneous HR telemetry data were exported to the REDCap database for comparison. Preliminary data analysis was performed when 100 hours of HR data had been collected. During this period, 34 patients were enrolled from June to October 2019, providing over 100 hours of data. Median HR values were calculated for every minute of wristband data for comparison with the gold standard telemetry data. Data analysis was performed in R version 4.0.0 (R Foundation for Statistical Computing, Vienna, Austria). Correlation between the two data sets was calculated utilizing the Pearson method. A Bland-Altman plot was generated, and sensitivity and specificity were calculated for identification of tachycardic and bradycardic episodes. The primary outcome of the study was the correlation between the gold standard telemetry and the wristband monitor. The secondary outcome was patient acceptability of this device. A convenience sample of patients was enrolled until 100 hours of data were collected. RESULTS A total of 34 participants were enrolled. Females (n = 22) accounted for 64.7% of the total study participants, averaging 61.5 years of age with a range of 22 to 88 (Table 1). Satisfaction survey response descriptive statistics are reported in Tables 1 and 2. In comparison to standard three-lead telemetry, participants notably scored in favor of the wristband with a satisfaction score of 9.5 of 10, an overall comfort score of 9.6, and preferred the wristband (score of 10) over telemetry (score of 1) with a mean of 9.8 (Table 2). Fig. 1 shows the correlation between the median HR values of the wristband and the telemetry monitor. There was a significant disparity in HR readings within a telemetry range of 40 to 140 beats/min. Significant bidirectional spikes were seen in the median watch HR, to as high as 224 beats/min and as low as 26 beats/min. As a result, a low R-value of 0.36 was detected, indicating a poor correlation between the two HR monitoring devices. A Bland-Altman plot was used to determine how the performance of the wristband compared to the telemetry across a range of HRs (Fig. 2). We observed a significant difference in HRs between the telemetry monitor and the wristband. As the HR increased into the tachycardic range (HR > 100 beats/min), the magnitude of error grew significantly larger. At times, the wristband data showed HRs up towards the 200s while the telemetry monitor showed a HR in the 60s. The wristband's sensitivity and specificity to detect bradycardia (HR < 60 beats/min) and tachycardia were also investigated (Figs. 3, 4). The sensitivity and specificity of the wristband to detect bradycardia were 76% and 86%, respectively, while the sensitivity and specificity of the wristband to detect tachycardia were 92% and 51%, respectively. Additionally, 39% of the data points captured by telemetry had no corresponding data collected by the wristband and did not contribute to our analysis.
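As a hedged illustration of the agreement analysis described above (per-minute median alignment, Pearson correlation, Bland-Altman limits, and bradycardia/tachycardia detection), the sketch below shows how such metrics could be computed in Python. The study itself used R 4.0.0; the file and column names here are hypothetical.

```python
# Hypothetical sketch: align ~1 Hz wristband HR to 1-minute telemetry HR, then
# compute correlation, Bland-Altman statistics, and sensitivity/specificity.
import pandas as pd

wrist = pd.read_csv("wristband_hr.csv", parse_dates=["timestamp"])  # ~1 Hz samples (hypothetical file)
tele = pd.read_csv("telemetry_hr.csv", parse_dates=["timestamp"])   # 1-minute samples (hypothetical file)

# Median wristband HR for every minute, matched to the telemetry minute stamps
wrist_min = (wrist.set_index("timestamp")["hr"]
                  .resample("1min").median()
                  .rename("wrist_hr"))
merged = tele.set_index("timestamp").join(wrist_min, how="inner").dropna()

# Pearson correlation between the two devices
r = merged["hr"].corr(merged["wrist_hr"], method="pearson")

# Bland-Altman statistics: mean bias and 95% limits of agreement
diff = merged["wrist_hr"] - merged["hr"]
bias = diff.mean()
loa = 1.96 * diff.std()
print(f"r = {r:.2f}, bias = {bias:.1f} bpm, limits of agreement = +/- {loa:.1f} bpm")

def sens_spec(event_true: pd.Series, event_pred: pd.Series) -> tuple[float, float]:
    """Sensitivity and specificity of the wristband for a threshold-defined event."""
    tp = (event_true & event_pred).sum()
    tn = (~event_true & ~event_pred).sum()
    fn = (event_true & ~event_pred).sum()
    fp = (~event_true & event_pred).sum()
    return tp / (tp + fn), tn / (tn + fp)

brady = sens_spec(merged["hr"] < 60, merged["wrist_hr"] < 60)
tachy = sens_spec(merged["hr"] > 100, merged["wrist_hr"] > 100)
print(f"bradycardia sens/spec: {brady}, tachycardia sens/spec: {tachy}")
```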
DISCUSSION While the wristband was strongly preferred over the three-lead telemetry monitor in overall satisfaction, we found an unacceptable level of error in this device. Additionally, it is important to note that of the telemetry monitor data collected, 39% of those minutes were missing data from the wristband. This is likely due to the wristband moving out of range of the device or the application intermittently failing to connect to Bluetooth. Figs. 1 and 2 highlight the significant discrepancies seen between the devices. Some of the data collected by the wristband demonstrated HRs as high as 200 beats/min while the gold standard telemetry monitor read a HR of 60 beats/min. Many of these anomalies appeared to cluster, with HRs reading reliably for some time and then suddenly showing several minutes of incorrectly elevated HRs in the 200 to 300 beats/min range. This phenomenon likely explains the significant discrepancies visualized in the tachycardic range of our Bland-Altman analysis (Fig. 2). We suspect that this intermittent error state likely contributed to much of the error between the wristband and the gold standard telemetry. Fig. 2. A Bland-Altman plot comparing the wristband to the telemetry heart rate readings. The larger the discrepancy between the two readings, the greater the distance from 0 a point will appear; "Measurements" on the x-axis are defined as heart rate. Fig. 3. Area under the receiver operating characteristic curve showing sensitivity and specificity of the wristband to detect a heart rate less than 60 beats/min. The sensitivity of detecting a heart rate <60 beats/min was 76%, while the specificity was 86%. AUC, area under curve. Fig. 4. Area under the receiver operating characteristic curve showing sensitivity and specificity of the wristband to detect a heart rate greater than 100 beats/min. The sensitivity of detecting a heart rate >100 beats/min was 92%, while the specificity was 51%. AUC, area under curve. We also specifically examined the ability of the device to detect bradycardic and tachycardic events as they can be significant clinical indicators of adverse events. Figs. 3 and 4 illustrate these detection rates of both bradycardia and tachycardia in the wristband device. We found unacceptable sensitivity and specificity for detecting both bradycardia (sensitivity 76%, specificity 86%) and tachycardia (sensitivity 92%, specificity 51%). Our analysis indicates that the use of a low-cost wristband with a publicly available monitoring application is not accurate enough for clinical use as a monitoring device. In conclusion, utilizing the low-cost fitness band tested in our study is not a suitable tool for clinical monitoring. We found reliability, correlation, sensitivity, and specificity all to be unacceptable for clinical use. We hypothesize that the observed sudden periods of false tachycardia were most likely the cause of the significant deviation in test characteristics from prior studies on more expensive devices. This error state led to significant skew in test characteristics and tachycardia detection. It is likely that these errors were secondary to software errors rather than limitations of the photoplethysmography method itself. We observed no clear trigger to entering the error state and noted that it would often take several minutes to correct itself. Additionally, we observed a failure of connectivity 39% of the time.
This did not appear to correct when the distance between the device and the Android tablet was decreased. As such, this inexpensive fitness band could not be a suitable tool for monitoring patients' HR because of inaccuracy in detecting bradycardia or tachycardia. Going forward, it is clear that patient acceptability is high for a wristband telemetry monitor. Future studies with a wristband device designed specifically for continuous clinical monitoring may have better success. We suspect that by using a publicly available application to operate the wristband in a manner in which it was not originally designed, significant issues with performance stability were introduced, notably Bluetooth connection issues. The high loss rate was due to software instability inherent to utilization of a non-native application. This can be improved through purpose-built software and hardware but would require the initial investment of funds for development and would not provide the "off-the-shelf" availability that we had hoped for in this project. Unfortunately, at this time, there is no commercially available device at this ultra-low price point that offers out-of-the-box continuous monitoring. We are hopeful that as the trend towards less expensive and more reliable monitoring equipment continues, we may have the opportunity to extend monitoring capabilities to low-resource clinical settings. Unfortunately, we are not there yet.
Combating Stimulated Raman Scattering Nonlinear Effect on 8-channels DWDM Systems The performance of dense wavelength division multiplexing (DWDM) system in optical fiber network communication is influenced by various factors, one of them is called a nonlinear effect. Stimulated Raman Scattering (SRS) is one of the nonlinear effects that occur due to a high-power level utilization, causes a signal scattering phenomenon that grown exponentially as the power increases. This works aimed to analyze the bit error rate (BER) and Q-factor performance of DWDM systems that suffer from SRS nonlinear effects on optical power launch, channel spacing, and bit rate variations. Observations were made on a model of 8-channels DWDM, over 100 km optical fiber cable with channel spacing variations of 50, 100, and 200 GHz. The DWDM system was designed using the erbium-doped fiber amplifier (EDFA). The system performance was observed with optical power launch variations of -6, - 4, -2, 0, 2, 4, 6 dBm, and bit rates for 10 and 40 Gbps. Based on the result, the 6 dBm optical power launch, with 200 GHz channel spacing, and 10 Gbps data rates, provides the best performance of 1.78 x 10−151 BER values, and Q-factor of 48.57. The observation of nonlinear systems performance is measured with an optical spectrum. Changes in the value of optical power launch, channel spacing, and data rate had affected the performance of the DWDM nonlinear system. Introduction Utilizing a different wavelength as an information channel to transmit data through an optical fiber link can improve the system performance, especially in bandwidth capacity. The dense wavelength division multiplexing (DWDM) systems can transmit several different wavelengths with large quantities of traffic through the same optical fiber channel. It could be applied in the long-distance telecommunication networks which require a large bandwidth consumption. Despite its superiority in efficiency and scalability, DWDM also has some disadvantages, one of which is the presence of the nonlinear effects [1,2]. The nonlinear effects are events that could decrease the performance of a system while transmits the signals. There are several types of nonlinear effects on DWDM links, such as Stimulated Raman Scattering (SRS), stimulated Brillouin scattering (SBS), self-phase modulation (SPM), carrier-induced phase modulation (CIP), and four-wave mixing (FWM). SRS is a stimulating effect induced by the inelastic scattering phenomenon at a higher power level and grown exponentially along with the increasing power. When a high power light beam propagates throughout the fiber, the SRS emerges because of an interaction between vibrational mode of the fiber silica molecules with the light beam. This work analyzes the performance of DWDM system with the SRS nonlinear effect scenario in OptiSystem 15 simulator. To analyze the performance, we use a variety of optical power launch, bit rate, and channel spacing scenarios to obtain BER and Q-factor values. For a better understanding, the rest of this paper is organized as follows. In section 2, we cover the related works. It is followed by the proposed method in section 3, and the results of the simulation are discussed in section 4. Finally, the conclusions are presented in section 5. Related Works In recent years, several research that analyzes the nonlinear effect on optical communication has been published [1,2,3,4,5,6,7,8,9]. Aldila et al. 
[3] in 2015 examined the impact of linear and nonlinearity of CWDM optical fiber link systems. They employ a wavelength variation from 1,460 nm to 1,625 nm with 20 nm of channel spacing. In this work, the CWDM link was varied by length. The erbium-doped fiber amplifier (EDFA), as an optical amplifier, was observed by Q-factor and bit error rate (BER) parameter. Based on the discussion, the CWDM link with EDFA has a better performance than both linear and nonlinear link without reinforcement. Firnandya et al. [4] in 2015 study the effect of the FWM nonlinear effect in DWDM link with three scenarios. The first scenario was conducted by varying bit rate and optical fiber length. Followed by analyzing the effect of channel spacing. Finally, in the last scenario, they vary the optical power launch. The results in these works were the FWM nonlinear effect can have a detrimental impact on the DWDM system. It caused by the parameter such as Q-factor for all scenarios did not meet the ITU-T standard. Ditya et al. [5] in 2017 examined the effect of three-wave mixing in the DWDM system with three scenarios. The first scenario in this work was changing the bit rate and length of the optical link. The second one changed the channel spacing variable, and the last scenario was to change the transmitter power variable. The results of this study are nonlinear effects of three-wave mixing (TWM) can cause adverse effects on DWDM systems because almost all values of Q-factors are below the standard set. In 2016, Pamukti et al. [6] published a paper that discusses the impact of the nonlinear effect on soliton transmission in DWDM link. The result of these work showed an influence on DWDM performance caused by the soliton transmission and proved the channel reducing can overcome the nonlinear effect in DWDM. Kumari et al. [7] in 2015 showed the SRS effect could occur on the WDM system using the variation of channel spacing. In these work, the SRS effect on WDM system was influenced by optical power launch. Another works by Kaur et al. [8] and Patni et al. [9] in 2016 discussed the performance improvement on the DWDM system using an optical amplifier and dispersion compensator. Those works can be used to overcome the linear effect in the DWDM system. The results of this study indicate an escalation of signal quality in DWDM systems with EDFA amplifiers since the implementation of higher pump power value. Proposed Method This study uses a model to analyze the performance of DWDM systems that suffer from SRS nonlinear effect. In this section, we elaborate on how the simulation is conducted. Fig. 1 shows the block diagram of the systems comprised of a transmitter, transmission, and receiver. In the transmitter block diagram, located CW laser, Mach-Zender modulator, nonreturn to zero (NRZ) pulse generator, pseudo-random bit sequence (PRBS), and multiplexer, as depicted in Fig. 2. The simulation is performed on 10 and 40 Gbps 8-channels DWDM system, with EDFA optical amplifier over a transmission distance of 100 km. As shown in Table 1, the channel spacing used in this study is the standard of ITU-T G.694.1 of 50 GHz, 100 GHz, and 200 GHz [10]. The input power varied to -6, -4, -2, 0, 2, 4 and 6 dBm. CW laser has a role as an optical transmitter. Table 2 summarizes several different frequencies (in THz) for 8-channels DWDM system, with 0.05, 0.1, and 0.2 THz channel spacing variations. Figure 2. Transmitter block diagram. 
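Table 2 in the original paper lists the exact channel frequencies used. As a small illustrative sketch only, an 8-channel plan for each spacing can be generated from an anchor frequency; the 193.1 THz anchor below is an assumption based on the ITU-T G.694.1 grid reference and may differ from the values actually used in Table 2.

```python
# Illustrative sketch: generating 8-channel DWDM frequency plans for
# 0.05, 0.1, and 0.2 THz channel spacing. The 193.1 THz anchor is an assumption.
def channel_plan(start_thz: float, spacing_thz: float, n_channels: int = 8) -> list[float]:
    return [round(start_thz + i * spacing_thz, 3) for i in range(n_channels)]

for spacing in (0.05, 0.1, 0.2):
    plan = channel_plan(193.1, spacing)
    print(f"{spacing * 1000:.0f} GHz spacing:", plan)
```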
System Design
The transmission block consists of two types of optical fiber: single-mode fiber (SMF) and dispersion compensating fiber (DCF). Table 3 shows the parameters of the SMF and DCF. We use 80 km of SMF and 20 km of DCF to reach the 100 km transmission distance. Meanwhile, as the last-mile system, the receiver block comprises a demultiplexer, an APD photodetector, a low-pass Bessel filter (LPBF), and a BER analyzer, as shown in Fig. 3. The detailed parameter values of the APD and LPBF are summarized in Table 4: the responsivity and gain of the APD photodetector are 1 A/W and 3 dB, respectively, while the LPBF cut-off frequency is set to 30 and 7.5 GHz (for the 40 and 10 Gbps bit rates, respectively).

Scenarios
The performance parameters tested in this simulation are the BER and Q-factor. These two variables are simulated under channel spacing, launch power, and bit rate variations. The detailed scenarios of this work are summarized in Table 5.

Results and Discussion
In this section, we evaluate the effects of channel spacing, optical launch power, and bit rate variations on the BER and Q-factor parameters, based on the arrangement in section 3. The BER compares the number of error bits to the total number of received bits; in the DWDM system it is restricted to a maximum of 10^-12, meaning that out of 1,000,000,000,000 bits transmitted, at most one of them is in error. It is expressed in (1):

BER = nc / nb (1)

where nc is the number of bit errors and nb is the number of bits received in a defined time interval [11]. In a simple transmission channel model with an assumed data source, the BER value can be calculated analytically [12]. However, in this work, the BER value is determined through stochastic computer simulations. As mentioned earlier in section 3.1, and depicted in Fig. 3, the BER value is obtained by utilizing the BER analyzer instrument. Aside from BER, another key parameter in this study is the Q-factor, which characterizes the two digital SNRs (electrical and optical) combined into a single convenient measurement of overall system quality. The Q-factor can be determined from (2):

Q = |yopt - iL| / viL (2)

where yopt is the optimal threshold level, iL is the optical power level, and viL is the standard deviation of the noise. However, similar to the BER value, in this work we also determined the Q-factor from the simulation results.

Before discussing the BER and Q-factor, we provide the input and output optical spectrum images at the transmitter block with 8 data channels. As shown in Fig. 4 to Fig. 9, the SRS nonlinear effect is portrayed at the output optical spectrum (Fig. 5, 7, and 9). From the obtained results, the SRS nonlinear effect is indeed affected by the optical input power, bit rate, and channel spacing variations. Utilizing a higher input power escalates the SRS effect in the DWDM system. On the other hand, the SRS nonlinear effect becomes smaller when a higher bit rate is employed. This happens because, during the signal transmission process, the transmitted optical light experiences scattering; hence, some of the scattered light loses energy (Stokes shift) or gains energy (anti-Stokes shift) [13].

The Effects of Channel Spacing Variation
As summarized in Table 5, scenario 1 performs a channel spacing variation (50, 100, and 200 GHz) to obtain the BER and Q-factor values, simulated on a 40 Gbps optical link with 6 dBm input power.
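Equations (1) and (2) are linked in practice: under the usual Gaussian-noise approximation, a standard result in optical link analysis that is not stated explicitly in this paper, BER ≈ 0.5·erfc(Q/√2). A minimal sketch of this conversion, useful for sanity-checking the simulated values reported in the next subsections:

```python
# Minimal sketch converting Q-factor to BER under the common Gaussian-noise
# approximation BER ~= 0.5 * erfc(Q / sqrt(2)). mpmath is used because the
# BER values involved are far below double-precision range.
from mpmath import mp, erfc, sqrt

mp.dps = 50  # 50 significant digits is ample here

def ber_from_q(q):
    return 0.5 * erfc(q / sqrt(2))

for q in (17.71, 26.18, 48.57):  # Q values reported in section 4
    print(f"Q = {q:5.2f}  ->  BER ~ {mp.nstr(ber_from_q(q), 3)}")
```

Notably, Q = 26.18 maps to roughly 1.8 x 10^-151, matching the best BER of 1.78 x 10^-151 reported for the 40 Gbps scenarios, while Q = 48.57 would correspond to a far smaller BER, indicating that the headline BER and Q-factor values come from different bit-rate scenarios.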
Bit Error Rate (BER)
The impact of channel spacing variation on the BER of the 8-channel DWDM system is depicted in Fig. 10. The graph shows that employing a wider channel spacing yields a better BER. The best BER, 1.78 x 10^-151, results from the 200 GHz channel spacing and is located on channel 1, while the worst BER occurs at 50 GHz channel spacing, on channels 4 and 6.

Q-factor
Similar to the BER, the Q-factor also improves as the channel spacing increases (shown in Fig. 11). The best Q-factor, 26.18, appears at 200 GHz channel spacing on channel 1.

The Effects of Optical Launch Power Variation
As summarized in Table 5, scenario 2 performs an optical launch power variation (-6, -4, -2, 0, 2, 4, 6 dBm) to obtain the BER and Q-factor values, simulated with 200 GHz channel spacing and a 40 Gbps bit rate. Fig. 12 depicts the impact of launch power variation on the BER of the 8-channel DWDM system. As the input power increases, the BER also improves. The best BER, 1.78 x 10^-151, results from the 6 dBm launch power and is located on channel 1, while the worst BER, 2.17 x 10^-18, occurs at -6 dBm on channel 2.

Q-factor
The Q-factor behaves similarly to the BER: as the power level increases, both improve, as shown in Fig. 13. The best Q-factor at the 6 dBm power setting is 43.34, located on channel 7, while the worst value, 17.71, occurs on channel 5 at -6 dBm input power.

The Effects of Bit Rate Variation
As summarized in Table 5, scenario 3 performs a bit rate variation (10 and 40 Gbps) to obtain the BER and Q-factor values, simulated with 6 dBm optical power and 200 GHz channel spacing. Fig. 14 depicts the impact of bit rate variation on the BER of the 8-channel DWDM system. The best BER at the 40 Gbps bit rate is 1.78 x 10^-151, located on channel 1, while the worst outcome is 1.50 x 10^-114. Figure 10. Optical power launch variation vs BER. Figure 11. Optical power launch variation vs Q-factor.

Q-factor
In contrast, the best Q-factor is reached when employing a 10 Gbps bit rate rather than 40 Gbps. A Q-factor of 48.57 is obtained on channel 6 with the 10 Gbps bit rate, 6 dBm input power, and 200 GHz channel spacing, as shown in Fig. 15. Meanwhile, the worst Q-factor, 22.70, occurs on channel 7 with the 40 Gbps bit rate.

Discussion
For all three examined scenarios, a detailed summary of the best and worst BER and Q-factor values affected by the channel spacing, launch power, and bit rate variations is presented in Table 6.

Conclusions
In this paper, we have simulated an 8-channel DWDM system that suffers from the SRS nonlinear effect in the OptiSystem 15 simulator environment. Based on the results and discussion in section 4, the SRS nonlinear effect on DWDM systems manifests as the emergence of new wavelengths in the spectrum analyzer. In each scenario, the SRS nonlinear effects could be mitigated by increasing the input power at the transmitter block, widening the channel spacing, and raising the bit rate. In terms of the BER parameter, the system performs best, with a value of 1.78 x 10^-151, at 6 dBm launch power with a 40 Gbps bit rate and 200 GHz channel spacing, while the worst BER values occur when using a 40 Gbps bit rate with 50 GHz channel spacing.
The Q-factor parameter reaches its best value of 48.57 at 6 dBm launch power with 200 GHz channel spacing and a 10 Gbps bit rate, while the worst value occurs when using 50 GHz channel spacing with a 40 Gbps bit rate.
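For a rough physical feel of the SRS coupling at these channel spacings, the sketch below estimates the small-signal power fraction transferred between two co-propagating channels using the widely used triangular approximation of the silica Raman gain profile. All fiber constants are typical textbook values and are assumptions, not parameters taken from this paper.

```python
# Back-of-the-envelope SRS power transfer between two co-propagating DWDM
# channels, using the triangular approximation of the silica Raman gain
# profile. All constants are typical textbook values (assumed, not taken
# from the paper's tables).
import math

G_PEAK = 6e-14      # peak Raman gain coefficient [m/W] near 1550 nm (assumed)
DF_PEAK = 13.2e12   # frequency shift of the Raman gain peak [Hz] (assumed)
A_EFF = 80e-12      # effective core area [m^2] (80 um^2, assumed)
ALPHA = 0.2 * math.log(10) / 10 / 1e3            # 0.2 dB/km -> [1/m]
LENGTH = 100e3                                   # link length [m]
L_EFF = (1 - math.exp(-ALPHA * LENGTH)) / ALPHA  # effective length [m]

def srs_fraction(p_launch_dbm, spacing_hz):
    """Fraction of power drained from the shorter-wavelength channel toward
    one longer-wavelength neighbour; the factor 2 accounts for random
    relative polarization."""
    p_w = 1e-3 * 10 ** (p_launch_dbm / 10)  # dBm -> W
    g = G_PEAK * spacing_hz / DF_PEAK       # triangular gain ramp
    return g * p_w * L_EFF / (2 * A_EFF)

for ghz in (50, 100, 200):
    print(f"{ghz:3d} GHz, 6 dBm: ~{100 * srs_fraction(6.0, ghz * 1e9):.4f}%")
```

Under these assumptions the pairwise transfer stays well below one percent and actually grows with spacing in this sub-THz regime, so the better performance the paper observes at 200 GHz presumably also reflects other spacing-dependent impairments (such as four-wave mixing and inter-channel crosstalk) easing as channels move apart; this framing is an interpretation, not a claim from the paper.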
2019-11-14T17:13:27.985Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "e3403a4bd81f9fe648d5237471a8ba96c29a5d9c", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1367/1/012064", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bfe3dde10f021657b7b1ad95851a7e181737cdea", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
140172359
pes2o/s2orc
v3-fos-license
The seasonality of butterflies in a semi-evergreen forest: Gibbon Wildlife Sanctuary, Assam, northeastern India

Author Details: Arun P. Singh works on the ecology and conservation of the biodiversity of the Himalaya and northeastern India, with special reference to butterflies and birds. Presently, he heads the Ecology and Biodiversity Conservation Division, Rain Forest Research Institute (ICFRE), Jorhat. Lina Gogoi is an environmental science postgraduate from Tezpur University, Assam, who has worked on the weathering geochemistry of the Lohit River, the Dibang River, and Dibru Saikhowa National Park. She also worked for a short period on ecological studies of butterflies in Arunachal Pradesh at the Rain Forest Research Institute and is currently working at Tezpur University as a project fellow in a biochar-related project. Jis Sebastian did her MSc in forestry at FRI University in Dehradun, Uttarakhand. She has work experience with the Wildlife Trust of India and as a JRF at the Rain Forest Research Institute on ecological studies of butterflies in Arunachal Pradesh, and she is currently pursuing PhD research in botany at Sacred Heart College, Cochin, Kerala.

INTRODUCTION
The northeastern region of India, which lies south of the Brahmaputra River, is part of the Indo-Burma biodiversity hotspot. It is located at the trijunction of the Indo-Chinese, Indo-Malayan, and Palaearctic biogeographic realms, exhibiting a profusion of habitats characterized by diverse biota with a high level of endemism (http://www.biodiversityhotspots.org/xp/hotspots/indo_burma/Pages/default.aspx). More than 50% of the butterfly species found in India occur in the northeast, also called the "Papilionidae-rich zone" in the 'Indo-Burma hotspot' as per IUCN (New & Collins 1991). The high species richness and endemism make this an important region for the conservation of biodiversity in India.

Study Area
The Gibbon Wildlife Sanctuary (GWS), 26°40'-26°45'N & 94°20'-94°25'E, lies in Jorhat District in upper Assam in northeastern India. It is today an isolated forest patch covering approximately 21 km² of mainly lush green 'tropical semi-evergreen forest' sparsely interspersed with 'wet evergreen forest' patches, classified as 'Assam plains alluvial semi-evergreen forests (2B/C1a)' (Champion & Seth 1968). Dipterocarpus retusus (Hollong) is the predominant element in the forest. The associated species are Ailanthus integrifolia, Altingia excelsa, Artocarpus chama, Castanopsis purpurella, Cinnamomum bejolgheta, Dysoxylum gobara, Mesua ferrea, Michelia champaca, and Vatica lanceafolia (Baruah & Khatri 2010), with most of the tree species being utilized by the Western Hoolock Gibbon Hoolock hoolock here (Barua & Gogoi 2012). The altitudinal range of GWS varies between 100-120 m above sea level, the average temperature ranges from 18.95-27.9°C, the average humidity varies between 64.5% and 94.5%, and the annual rainfall of the study area is ~250 cm. The sanctuary was carved out of Hollongapar Reserve Forest, set aside in 1881 and named after the dominant tree species, Hollong (Dipterocarpus retusus). Subsequently, more forest areas were added to this RF, and by 1997 the total area of the Hollongapar RF had increased to 2,098.62 ha. The Government of Assam declared this entire RF area as the Gibbon Wildlife Sanctuary in 1997. GWS is surrounded mostly by tea gardens and small villages. The Bhogdoi River flows from Nagaland (south) to Assam (north-west) and distinctly demarcates the eastern boundary of this sanctuary as a permanent physical barrier (Image 1).
GWS was once contiguous with a large forest tract that extended to the Dissoi Valley Reserve Forests of Nagaland in the south; the two are now separated by a vast stretch of tea gardens, presenting a barrier to the effective migration of wildlife such as elephants (Bhattacharjee 2012). GWS today is still home to many species of animals of global concern, namely the Hoolock Gibbon Hoolock hoolock (Endangered; Brockelman et al. 2008) and the Capped Langur Trachypithecus pileatus (Bordoloi 2010). Image 1. Gibbon Wildlife Sanctuary and its surrounding areas. The published literature on the butterflies of the GWS is scanty. Senthilkumar et al. (2006) recorded 37 species from GWS. A blog by Abhijit Narvekar (http://butterflyinggibbonwls.blogspot.in/) lists 31 species from GWS, recorded in May 2013. Besides these, there are no other published records of butterflies from GWS. The authors hereby report the results of a three and a half year study carried out by them in the GWS.

Sampling
Twenty-eight sampling surveys covering all the months were carried out in Gibbon WS from 4 August 2010 to 26 April 2014. Sampling was carried out along forest trails, up to 5 m on both sides, along a stretch of 3.5 km from the village Melang Grant to the Gibbon Forest Rest House (FRH) and along the two parallel trails that go from the FRH towards the river Bhogdoi in the east (Fig. 1). The 'Pollard walk' (Pollard & Yates 1993) method was used for sampling butterflies. Sampling was carried out between 08.00 hr and 15.00 hr, mostly on sunny days, but the sampling duration varied between 1.5-3 hours across samplings. The taxa encountered were recorded in each sampling. The data on abundance, however, could not be recorded for each survey, but species occurring in exceptionally high numbers (peak abundance) were noted. A total of ~65 hours of sampling was carried out. Butterflies were identified from photographs and using field guides (Evans 1932; Wynter-Blyth 1957; Haribal 1992; Smith 1989 & 2006; Kehimkar 2008; Sondhi et al. 2013) and websites (www.flutters.org/ and www.ifoundbutterflies.org/).

Data Analysis
Data for the number of species recorded in each survey were pooled. A species accumulation curve was then plotted from the first to the last sampling to see the rate of species accumulation during the study period. Sorensen's similarity index, or β, was calculated to assess the similarity in butterfly species between four different seasons [pre-monsoon (March-May), monsoon (June-September), post-monsoon (October-November), and winter (December-February)] in this semi-evergreen forest:

β = 2c/(S1+S2)

here, S1 = the total number of species recorded in one season/site, S2 = the total number of species recorded in the other season/site, and c = the number of species common to both seasons/sites. The Sorensen's similarity index (Sorensen 1948) is a very simple measure of beta diversity, ranging from a value of zero, where there is no species overlap between the communities, to a value of one, when exactly the same species are found in both communities (a short computational sketch of this index is given at the end of this article). The seasonality of butterflies in GWS was then compared with trends available from studies of other forest habitats in the Himalaya and the northeast to see the variation in this forest type.

Species richness
Amongst the 211 species belonging to 115 genera recorded during 28 sampling surveys (Appendix 1), 19 species were of the family Papilionidae.
This suggests that the species richness of the area could be as high as 257 species based on the family proportion model (Singh & Pandey 2004), taking Papilionidae's proportion as 7.4% of the total for northeastern India (Wynter-Blyth 1957). The present sampling thus represents about 82% of the species expected in the study area. The proportions of the families Lycaenidae and Hesperiidae are lower than those of the northeastern region; these two families are thus under-represented (Table 1) in the present surveys, and there is a need to look for more species of these two families in GWS.

Species accumulation
An increasing trend in the species accumulation curve shows that new species were added during every sampling up to the last one, at a prominently higher rate from just after the monsoon rains (Aug-Sep) until pre-monsoon (March) every year (Fig. 2). The trend obtained during the last six samplings suggests that new species (mainly Lycaenidae and Hesperiidae) were still being discovered until the end.

Seasonality
The maximum number of species was recorded during the 'post-monsoon' season in the region (Fig. 3). The first peak in species richness (102 species) during March and April was smaller than the second peak in September to October (118 species), when most of the species are in flight in GWS. The two-peak seasonal trend in butterflies is very typical of the Himalaya and northeastern India. In GWS, which is a semi-evergreen forest, the second peak is higher than the first peak, however. This pattern differs considerably from the sub-tropical lowland forests in Bhutan (Fig. 4; Singh 2012), lying between 100-220 m, where both peaks are high but the first peak in April is slightly greater than the second peak in December (Fig. 4). The reason for the first peak being smaller than the second peak in GWS may be related to the pattern of rainfall here. In GWS, the rains begin early in spring (from April), the monsoon is less severe, and there is a short, moderately dry winter, in comparison to Bhutan, where the rains arrive relatively late in May-June, the monsoon is severe, and the winter season is longer.

Species similarity among seasons
Sorensen's similarity index between seasons varied from 0.25-0.55. This suggests that the species composition in GWS varied across the seasons of the year. However, the highest similarity was noticed between post-monsoon and autumn, post-monsoon and spring, winter and spring, and spring and autumn, respectively. In other words, from post-monsoon to spring the species composition in GWS showed much similarity. The similarity index was lowest between spring and pre-monsoon, followed by monsoon and winter, respectively (Fig. 5). This suggests that major changes in species composition in the semi-evergreen forest occur between these seasons, which may be related to the life history patterns of these butterflies. The number of species in flight during the rainy season was small in comparison to the dry season. This could be due to the proximity and continuity of Jeypore RF with the Himalayan foothills of Arunachal Pradesh, from where these species come down, in contrast to the non-connectivity of the GWS forest with the nearest hills in Nagaland and the absence of freshwater mountain streams inside the GWS. Besides, 30 papilionids have been recorded in the Garo Hills (Sondhi et al.
2013), of which 10 have not been recorded at GWS; but the Garo Hills have diverse habitats under at least three forest types and a large altitudinal gradient when compared to GWS. The three forests (Table 2), all having a semi-evergreen forest component in common, also share at least 53 percent of their papilionid species.

Significant records
A dead female of the Great Blue Mime Papilio paradoxa telearchus, a rare species (Evans 1932), crushed by a vehicle on the forest road, was recorded on 25 March 2011; the species is distributed in the Naga Hills, Siam, and Borneo (Evans 1932). Norman (1956), however, had recorded S. w. woolletti Riley from Sibsagar District of Assam, a taxon previously also known from Manipur. The record of the Snowy Angle Darpa pteria dealbata on 4 August 2012 (Image 6) is the second photographic record of this species from India. Earlier, it had been recorded from the forests of Jeypore-Dehing in Assam between April 24 and 29, 2011, the distribution of the species extending further south through Burma, Thailand, Laos, the Malay Peninsula, Tioman, Borneo, Sumatra, Java, and Palawan, Philippines in South-east Asia (Karthikeyan & Venkatesh 2011). The Constable Dichorrhagia nesimachus (Image 7), a very rare species (Evans 1932), was observed in the sunshine and on wet mud inside the forest. The Sylhet Oakblue Arhopala silhetensis (Images 9, 10) is another rare species (Evans 1932) recorded in GWS.

CONCLUSION
Being a remnant forest of 21 km², GWS supports a rich diversity of the butterflies found in northeastern India. The seasonality and diversity of butterflies of a 'semi-evergreen forest' is distinct from that of the lowland subtropical forests of the lower Himalaya. Earlier studies have also found that rainfall has a strong correlation with the abundance of some papilionids in northeastern India, besides a strong seasonality in continental South-east Asian butterfly assemblages. GWS, besides supporting butterfly diversity, also needs to be preserved as a gene bank of the biodiversity of flora and fauna (birds, mammals, herpetofauna, orchids, canes, bamboos, etc.) unique to northeastern India, and it functions as an island habitat for the movement of large mammals and birds between larger protected areas in the landscape. Also, the good accessibility and location of GWS, with the national highway in the region, proximity to Jorhat town, a location in the plains, and a rest house, increase its potential for attracting tourists for butterfly-inclusive eco-tourism in a natural semi-evergreen forest habitat. By using local villagers as guides to generate livelihoods for the communities involved, thereby reducing biotic pressure on the one hand and conserving this magnificent forest on the other, along with researchers and students, GWS can easily be taken up as a role model in conservation biology.
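Returning to the two simple calculations used above, Sorensen's β and the family-proportion richness estimate, both can be sketched in a few lines of code. The example species sets below are hypothetical placeholders, not survey data from GWS.

```python
# Sketch of the two simple computations used in the paper. The species
# sets are hypothetical placeholders, not actual GWS survey data.

def sorensen(s1, s2):
    """Sorensen's similarity index: beta = 2c / (S1 + S2)."""
    c = len(set(s1) & set(s2))
    return 2 * c / (len(set(s1)) + len(set(s2)))

pre_monsoon = {"Papilio polytes", "Graphium sarpedon", "Danaus genutia"}
post_monsoon = {"Papilio polytes", "Graphium sarpedon",
                "Arhopala silhetensis", "Darpa pteria"}
print(f"beta = {sorensen(pre_monsoon, post_monsoon):.2f}")  # 0.57 here

# Family-proportion estimate of total richness (Singh & Pandey 2004):
# Papilionidae is taken as 7.4% of the regional total, so with the 19
# papilionids recorded, the expected richness and sampling coverage are:
estimated_richness = 19 / 0.074            # ~257 species
coverage = 211 / estimated_richness        # ~0.82, i.e., about 82%
print(round(estimated_richness), f"{coverage:.0%}")
```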
2019-04-24T13:12:50.463Z
2015-01-26T00:00:00.000
{ "year": 2015, "sha1": "aed5eb5a65928d9a90e2f6e188c9e60924aef9e3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.11609/jott.o3742.6774-87", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4fb74064830dd785bfe5ff36b87bc5c9ed8f532b", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Geography" ] }
248458969
pes2o/s2orc
v3-fos-license
Shedding Light on the Complex Regulation of FGF23

Early research suggested a rather straightforward relation between phosphate exposure, increased serum FGF23 (Fibroblast Growth Factor 23) concentrations, and clinical endpoints. Unsurprisingly, however, subsequent studies have revealed a much more complex interplay between autocrine and paracrine factors acting locally in bone, like PHEX and DMP1; concentrations of minerals, in particular calcium and phosphate; calciprotein particles; and endocrine systems like parathyroid hormone (PTH) and the vitamin D system. In addition to these physiological regulators, an expanding list of disease states has been shown to influence FGF23 levels, usually increasing them, and as such increasing the burden of disease. While some of these physiological or pathological factors, like inflammatory cytokines, may partially confound the association of FGF23 with clinical endpoints, others are in the same causal path, are targetable, and hence hold the promise of future treatment options to alleviate FGF23-driven toxicity, for instance in chronic kidney disease, by far the most prevalent FGF23-associated disease. These factors will be reviewed here and their relative importance described, thereby possibly opening potential means for future therapeutic strategies.

Introduction
Fibroblast Growth Factor 23 (FGF23) has emerged as an important biomarker in chronic kidney disease (CKD) [1]. Accumulating evidence suggests that it is not only a risk predictor for cardiovascular disease, in particular heart disease and heart failure, but also a uremic toxin itself, directly causing disease [2]. For both properties, being either an independent risk predictor or a direct toxin, in-depth knowledge of its regulation is of paramount importance. In the setting of FGF23 as an independent risk factor that does not directly inflict harm, its association with clinical endpoints is confounded by hitherto hidden mechanisms that are in the causal path to these endpoints. Exploring these regulators of FGF23 may thus reveal novel targets of treatment and hold the promise of improving outcomes for patients with kidney disease. In turn, if FGF23 itself is the causative molecule, intervening in its regulators may also modify FGF23-driven morbidity. Besides being a prominent hormone in CKD, the discovery of FGF23 solved the quest for a humoral factor explaining several inheritable diseases characterized by renal phosphate wasting, which could then be explained by mutations of FGF23 itself or of factors involved in its regulation [3]. FGF23 is a hormone secreted by osteocytes and has a central physiological role in phosphate homeostasis. It promotes phosphaturia and inhibits the activation of vitamin D, thereby limiting vitamin D-mediated phosphate absorption from the diet via the transcellular uptake route of enterocytes in the gastro-intestinal tract. There are several principal ways in which FGF23 concentrations can be regulated, and all of these appear to play a role. These modes of regulation are production and secretion by the cells of origin, ectopic production, and cleavage or breakdown at the cells of origin or after release into the circulation. The currently available immunoassays measure either the full-length and biologically active hormone, termed intact FGF23 (iFGF23), or both iFGF23 and its c-terminal fragment, termed cFGF23.
It should be noted that the term cFGF23 for this assay is confusing, because the assay does not measure only the c-terminal fragment, the latter originating after cleavage of the full-length polypeptide. This cleavage obviously also generates an N-terminal fragment, but no commercially available assay detects this fraction. Although intact FGF23 is assumed to be the physiological effector molecule, debate exists on the role of the fragments, which possibly have agonistic or antagonistic effects, the latter as a competitive inhibitor [4].

Phosphate
Given the key role of FGF23 in protecting the organism against hyperphosphatemia, it can be expected that phosphate increases FGF23 concentrations. Indeed, several studies have shown that an increase in the dietary intake of phosphate by both healthy volunteers and people with CKD increased its concentrations, albeit with some delay of around 24 h [5][6][7], to restore phosphate balance. In turn, phosphate restriction has the ability to lower FGF23; but, different from PTH secretion from healthy parathyroid glands in a setting of hypercalcemia, FGF23 has never been described to be fully suppressed following hypophosphatemia, for instance when induced by mutations in the renal phosphate transporter NaPi2c, which give rise to hereditary hypophosphatemic rickets with hypercalciuria (HHRH) [8,9]. Until recently, the underlying molecular mechanism by which phosphate modulates FGF23 levels had been elusive. It has been shown that phosphate transport into bone cells across the inorganic phosphate transporter 1 (PiT-1) may be involved [10]. A recent study revealed an additional remarkable mechanism [11]. Bone cells that produce FGF23 express its receptor FGFR1 as well. It has now been shown that phosphate itself can bind to this unliganded receptor, leading to upregulation of the Galnt3 gene, the protein product of which leads to O-glycosylation of full-length FGF23, as will be discussed below. The consequence of this post-translational modification of the FGF23 molecule is that it escapes intracellular cleavage, increasing the proportion of biologically active FGF23. This mechanism does not suggest that phosphate induces FGF23 expression, even though a previous study suggested it can in a cell line [12], but rather that it stabilises the hormone. This mode of action of phosphate on FGF23 concentrations is in line with clinical studies in patients with CKD that addressed the question of whether dietary phosphate restriction can lower FGF23. A recent meta-analysis of these studies found a more pronounced reduction of iFGF23 than of cFGF23, the latter also measuring FGF23 fragments [13]. In normophosphatemic CKD patients, short-term treatment with non-calcium-containing phosphate binders did not change FGF23 [14,15], while prolonged treatment induced a substantial decline [16]. The use of calcium-containing binders did increase FGF23 [17].

Calcium
Interestingly, there appears to be a minimal concentration of calcium required for phosphate to be able to increase FGF23 levels. In an animal model testing varying serum concentrations of calcium, it was shown that the increment of FGF23 by PTH was completely abolished when ionized calcium concentrations were below 4 mg/dL [18]. The physiological functionality of this phenomenon might be that it prevents the catabolism of vitamin D by FGF23 in a setting of hypocalcemia. Moreover, in an animal model, calcium itself was shown to be able to directly increase FGF23 transcription by acting on the promotor of the Fgf23 gene [19,20].
These findings from experimental research are in line with most, but not all, clinical observations. In a clinical trial among 30 early CKD patients studying the effects of adding calcium carbonate to calcitriol, it was shown that this induced an increase of FGF23, which was paralleled by an increase in serum calcium concentration [21]. In more advanced CKD, the non-calcium-containing phosphate binder lanthanum carbonate was able to lower FGF23 levels, while a calcium-containing binder could not [17]. However, in a short-term study, acute increments or decrements of serum calcium concentrations had no effect on FGF23 [22].

Calciprotein Particles
Apart from the synergistic effects of combined higher levels of calcium and phosphate on increasing FGF23, it is possible that this effect is mediated by the formation of calciprotein particles (CPPs) [23,24]. Even at physiological concentrations, human plasma is supersaturated with calcium and phosphate, which would induce spontaneous hydroxyapatite crystal formation [25]. These potentially damaging crystals, however, are prevented from being formed and freely floating in the circulation by being scavenged into soluble amorphous primary calciprotein particles (CPP1), which are nanoparticles containing the serum protein Fetuin-A as the main protein constituent. In a setting of increased availability of these minerals, as is the case for phosphate in CKD, or of suppressed hepatic production of Fetuin-A in a setting of chronic inflammation, the stage is set to overwhelm the capacity of this defence system, leading to the formation of the more toxic, larger, crystalline secondary CPPs (CPP2) [26][27][28]. Like high exposure to phosphate, exposure to high calcium levels also increases the amount of CPPs, as was shown in a patient with renal sarcoidosis, and this was paralleled by an increase in FGF23 [29]. The role of calcium in the formation of CPPs was also shown in a clinical study comparing calcium carbonate with lanthanum carbonate [30]. After switching to lanthanum carbonate, the total amount of CPPs declined substantially, without major differences in the serum concentrations of calcium and phosphate between the two phosphate binders. A recent clinical observational study demonstrated an association between the amount of CPPs and FGF23, suggesting an induction of the phosphaturic hormone by CPPs [31]. Indeed, a recent in vitro study found that CPPs are capable of increasing FGF23 expression in osteoblast-like cells [32]. Remarkably, this effect was restricted to the smaller-sized CPP1. It is therefore conceivable that an increased amount of CPPs triggers FGF23, which in turn induces phosphaturia and lowers the levels of active vitamin D. FGF23 thereby slows the formation of CPPs by lowering the concentrations of the minerals that form their mineral content. This concept is supported by the ability of CPPs to exit the circulation, enter the bone marrow, and reach FGF23-producing bone cells [32].

Magnesium
Given its resemblance to calcium as a bivalent cation, and its beneficial effects on the formation of CPPs [33][34][35], it is likely that magnesium is also involved in the regulation of FGF23. Data, however, are scarce. In an animal study of cats with chronic kidney disease, a negative association between serum magnesium concentration and FGF23 was found, which was independent of calcium, phosphate, and PTH [36]. In an observational study among young healthy men, it was shown that a lower dietary intake of magnesium was associated with higher FGF23 [37].
When rats were exposed to a short-term (7 day) magnesium-deficient diet, FGF23 levels were higher compared to a normal diet at all time points following the intervention, reaching statistical significance after one week [38]. However, clinical trials demonstrating a beneficial effect of magnesium supplements on clinically relevant endpoints are lacking [39]. The roles of minerals and calciprotein particles are summarized in Figure 1.

Parathyroid Hormone
PTH was shown to be a relevant regulator of FGF23 by directly increasing its expression in bone in an experimental model of CKD [40]. Moreover, in that same study, parathyroidectomy before the onset of CKD completely abolished the FGF23 increment, even in a subsequent setting of hyperphosphatemia. This is probably mediated by the receptor PTH1R for PTH on bone cells, the same receptor that is involved in regulating bone turnover, with Nuclear Receptor Related-1 protein (Nurr1) as an intermediate intracellular molecule [41]. Another established action of PTH on bone cells is the suppression of the gene encoding sclerostin (Sost). Sclerostin acts as a local inhibitor of the Wnt pathway, thereby suppressing FGF23 [42][43][44]. PTH therefore unleashes FGF23 by suppressing sclerostin. Clinical studies suggest a biphasic response to PTH. During a short-term (3 h) 1-34 PTH infusion in healthy young persons, FGF23 declined, most likely driven by PTH-induced renal phosphate loss [45]. During this period, 1,25 dihydroxyvitamin D3 (1,25D) started to rise, which would be expected to increase dietary phosphate uptake. This, and the potential direct effects of PTH on bone cells, may be the dominating effect following more prolonged exposure, giving rise to FGF23 increments. This was indeed suggested by a two-day PTH infusion study that led to increased cFGF23 in healthy persons and in people treated by dialysis, regardless of bone turnover status [46]. As for many other aspects, however, the role of PTH is complex, because when endogenous levels rose as a consequence of a decline of serum calcium induced by sodium citrate infusion, FGF23 did not increase [22]. Obviously, the stimulating effects of PTH on FGF23 may have been nullified by the low levels of calcium. There seems to be a logical physiological basis for the induction of FGF23 by PTH. The key purpose of PTH is to correct hypocalcemia, and it does so in part by liberating calcium from bone. This is paralleled by a release of phosphate, which is, besides through the phosphaturic effects of PTH itself, excreted by the kidneys under the influence of FGF23. Observations of persons with dialysis-dependent end-stage kidney disease treated with calcimimetics appear to be in line with the notion that lowering PTH is accompanied by declining FGF23 [47,48]. Remarkably, however, in both of these clinical studies, using the oral cinacalcet or the intravenous etelcalcetide, the decline of FGF23 followed the reductions of phosphate and calcium, rather than the PTH reductions.

Vitamin D
There is strong evidence that 1,25D directly induces Fgf23 gene transcription. Mice injected with the active form of vitamin D had increased levels of FGF23 mRNA, exclusively in bone, which was accompanied by a rise in serum FGF23 levels [49]. In that same study,
rat-derived UMR-106 osteoblast-like cells had a 1000-fold increase of FGF23 mRNA 4 h after exposure to 1,25D. In another study, focused on exploring the Fgf23 gene promotor region, this role of 1,25D was confirmed [50]. Collins and co-workers observed three patients who received a high dose of calcitriol after parathyroidectomy and noted steep increments of FGF23 [51]. Many clinical trials have been performed in which either active or nutritional vitamin D was the key intervention. In several of these trials, FGF23 levels were part of the follow-up parameters. The results of these observations have been summarized in two meta-analyses. In the first of these, it was found that in patients who were deficient in vitamin D at baseline, the intervention induced a statistically significant increase of iFGF23 [52].
There was also an increase of cFGF23, but this did not reach statistical significance. A very recent meta-analysis could not confirm this effect of vitamin D, but that meta-analysis included trials in which participants did not have vitamin D deficiency at baseline, which may explain the discrepancy with the previous analysis [53]. In a study among children treated by dialysis, active vitamin D compounds (calcitriol or doxercalciferol) induced a substantial increase in FGF23 [54]. Collectively, these studies strongly suggest that vitamin D, especially active vitamin D, induces FGF23. A summary of the roles of PTH and vitamin D is provided in Figure 2.

Factors Involved in FGF23 Expression
Dentin Matrix Protein 1 (DMP1) and PHosphate regulating gene with homologies to Endopeptidases on the X chromosome (PHEX) are both suppressors of FGF23 gene expression and appear to act in concert locally in bone for that function [55][56][57]. PHEX is also believed to promote FGF23 cleavage, which would induce a lower iFGF23 to cFGF23 ratio. Mutations in either PHEX (XLH, X-linked hypophosphatemic rickets) or DMP1 (ARHR, autosomal recessive hypophosphatemic rickets) cause renal phosphate wasting and its clinical sequelae through primary elevations of FGF23. There are no descriptions in the literature of acquired malfunction or suppression of the PHEX protein, with the possible exception of a report on a patient with leprosy [58]. For DMP1, however, diseases that induce acquired suppression appear to exist. In a mouse model of CKD, it was shown that renal failure lowered osteocyte DMP1 expression, followed by FGF23 increases, while supplementation of DMP1 partially restored FGF23 towards the normal lower range [59].
The extent to which this is of relevance in clinical CKD remains to be established, but it has been shown that lower circulating levels of DMP1 are associated with cardiovascular events [60], and this finding may be mediated by increases of FGF23. In addition, uremia-induced suppression of DMP1, and hence increments of FGF23, may explain the clinical observation that in earlier stages of CKD, FGF23 and phosphate levels appear to diverge, pointing to another inducer of FGF23 than phosphate itself, namely suppressed DMP1 [61]. α-Klotho is intricately involved in phosphate homeostasis and the biological activity of FGF23 [1]. Its colocalization with FGFR1 is mandatory for signal transduction of FGF23 across the cell membrane to exert its actions in the proximal tubules and induce phosphaturia. Recent research has now revealed that the circulating form of α-klotho, generated after cleavage of its large ectodomain [62], is involved in the expression and excretion of FGF23 from osteocytes. This hitherto unknown role of α-klotho was postulated after the analysis of a 13-month-old girl with an unexplained elevation of FGF23 leading to hypophosphatemic rickets [63]. She was found to have a translocation near the α-klotho gene. This phenotype could be mimicked in an animal model by using an adenovector-induced increased expression of α-klotho, leading to high levels of circulating α-klotho, accompanied by a very steep rise of FGF23 and hypophosphatemia [64]. A recent study employed targeted deletion of the α-klotho gene from long bones and found that this led to an attenuated increase of FGF23 after the induction of CKD, both at the osteocyte expression level and in its circulating concentration [65]. This strongly suggests that α-klotho is required in an autocrine fashion for FGF23 expression in osteocytes. Both studies revealed that the presence of FGFR1 on osteocytes is required.

Post-Translational Modification of FGF23 in Bone
Following the translation of FGF23, the full-length polypeptide can be cleaved intracellularly before being secreted, thereby preventing the biologically active compound from entering the circulation.
This cleavage occurs between the arginine residues at positions 176 and 179, and mutations at either of the arginine residues render FGF23 resistant to proteolytic cleavage, giving rise to autosomal dominant hypophosphatemic rickets (ADHR) [66]. This cleavage is assumed to occur at the Golgi apparatus by one of seven serine proteases belonging to the family of subtilisin-like proprotein convertases (SPC), which act by cleaving polypeptides from preproteins to their mature polypeptide backbones. The most likely SPC is furin, because its knock-out completely prevented FGF23 cleavage [67]. Prior to being exposed to these proteases, in particular furin, FGF23 can be O-glycosylated by N-acetylgalactosaminyltransferase 3 (GalNT3) at the threonine residue at position 178, which induces resistance to proteolytic cleavage of FGF23. As indicated above, exposure to phosphate may increase this O-glycosylation and thereby increase the relative amount of full-length FGF23, the active form, as a feedback mechanism to restore phosphate to lower concentrations. In turn, FGF23 can also be phosphorylated at the serine residue at position 180 by a kinase termed Fam20c, which prohibits O-glycosylation by GalNT3 and ultimately makes FGF23 more prone to proteolytic cleavage [67].

Anemia and Iron Deficiency
Patients with ADHR, one of the inherited forms of renal phosphate wasting due to inappropriate elevations of FGF23, can present rather late (from puberty, or not even before their mid-forties) and frequently do not present with typical features such as short stature or bowing deformations of the lower extremities [68]. While these patients have a limited or absent capacity to cleave FGF23, it is assumed that as long as the baseline transcription of FGF23 is rather low, circulating iFGF23 can remain relatively normal for years without severe phosphate losses. Iron deficiency in these patients was associated with increased iFGF23 levels [69], and in a small open-label trial, oral iron supplementation substantially lowered FGF23 levels in patients with ADHR [70]. These clinical observations are in line with animal research on models of ADHR [71]. In that experimental study, it was additionally shown that exposure of osteoblastic cells (UMR-106) to low-iron conditions increased FGF23 mRNA up to 20-fold. The mechanisms involved were mitogen-activated protein kinase (MAPK) dependent. In addition, iron deficiency also induced increments of Hypoxia Inducible Factor 1α (HIF1α), and HIF1α itself could also boost FGF23 expression. Indeed, it was shown that a HIF1α binding site exists in the promotor region of the Fgf23 gene [72]. In addition, HIF1α prevents the cleavage of FGF23 [73]. Collectively, these findings would lead to higher circulating levels of iFGF23 in a setting of increased expression of HIF1α due to either iron deficiency or hypoxia. However, in a study using the HIF1α stabiliser molidustat in an animal model of CKD, and in additional in vitro experiments, it was shown that improved iron availability to osteocytes induced by the compound abolished the increased FGF23 expression [74]. This same study also revealed that EPO increased FGF23 [74], a finding previously shown in both patients and animal models [75]. This latter study demonstrated that the effect was sustained after bone marrow ablation, where upregulation of the Fgf23 gene persisted, strongly suggesting a direct effect on these cells in cortical bone.
Also in human studies, either EPO levels or exogenous doses were associated with FGF23, in particular total FGF23, while the effects on iFGF23 were indeterminate [76]. The role of hepcidin, a liver-derived acute phase protein that induces functional iron deficiency, as an intermediate in anemia- and iron-deficiency-associated FGF23 upregulation is not yet well established.

Inflammation
Several reports point to the role of inflammatory mediators acting on bone cells, leading to increased expression and secretion of FGF23 [73,77]. In turn, FGF23 can upregulate inflammatory mediators from hepatocytes [77]. Especially in the setting of advanced CKD, with its remarkably high concentrations of FGF23, this may initiate a pro-inflammatory vicious circle, further driving FGF23. Indeed, several pro-inflammatory cytokines, such as tumor necrosis factor (TNF), interleukin-1β (IL-1β), and TNF-like weak inducer of apoptosis (TWEAK), and also bacterial lipopolysaccharides (LPS), have been shown to stimulate both Fgf23 gene expression and protein excretion in a cell model of osteocytes [78]. In another study, LPS injections increased FGF23 despite a low-phosphate diet [79]. Interestingly, the exposure to LPS also caused renal FGF23 resistance through suppression of kidney α-klotho, thereby dismantling the FGF23 receptor. Using several animal models of CKD, or TNF injections in mice with normal kidney function, it was found that TNF increased FGF23, while anti-TNF prevented this [80]. Importantly, the source of FGF23 in that study was the kidney itself, possibly driven by the highest local concentrations of TNF being in that organ. This role of TNF is in line with the identification of a TNF-responsive FGF23 enhancer, suggesting direct upregulation of FGF23 by this inflammatory cytokine [81], although it has also been suggested that increases of NF-κB are required.

Chronic Kidney Disease
An extensive review of the impact of chronic kidney disease on FGF23 is beyond the scope of this review and has been provided elsewhere recently [1,82]. Besides the propensity to accumulate phosphate as a driver of FGF23 increases, hyperparathyroidism, DMP1 suppression (as outlined above), chronic inflammation, iron deficiency, and FGF23 resistance due to α-klotho deficiency have all been implicated in the exponential rise of FGF23 as CKD progresses. Importantly, experimental studies found that FGF23 cleavage in CKD is impaired, as it is in ADHR [73,83]. This feature of CKD, the precise molecular mechanism of which is currently unknown, fits with the observation that in end-stage kidney disease, most circulating FGF23 is intact [84]. Interestingly, it was recently shown that in a model of acute kidney injury, the kidneys themselves produce glycerol-3-phosphate (G3P), which directly stimulates FGF23 production, exclusively in bone [85]. It is likely that, besides novel regulators like G3P, the impact of CKD on many, if not all, of the mechanisms described above is huge, and collectively creates a perfect storm for essentially unopposed upregulation of FGF23. In addition, it seems plausible that in the setting of CKD, the cleavage of FGF23 is attenuated or its capacity overwhelmed, leading to extremely high levels of biologically active FGF23 in end-stage kidney disease, most likely contributing to uremic toxicity.

Conclusions
The physiological regulation of bone-derived FGF23 is complex, and occurs at the levels of gene transcription, post-translational modification, cleavage, and cellular release.
In addition, its remote biological activity varies with the dynamic affinity of its receptor due to changing α-klotho abundance, possible competitive inhibition by FGF23 fragments, and varying expression of the FGF receptors themselves [86]. Moreover, ectopic FGF23 production has been described as well, as outlined above for the kidney, and cardiac production has also been reported [87,88]. The machinery regulating the metabolism of FGF23 involves an intricate interplay between minerals, calciprotein particles, the endocrine system, and local regulators acting in the vicinity of osteoblasts and osteocytes in an autocrine or paracrine fashion. Since FGF23 is most likely involved in the pathogenesis of an expanding list of diseases, in-depth knowledge of these regulatory pathways is the first step towards ultimately targeting the molecular mechanisms that lie in the path to clinical events. The exploration of these pathways is far from finalized, and the design of safe and effective interventions is only at its beginning.
2022-05-01T15:12:44.309Z
2022-04-28T00:00:00.000
{ "year": 2022, "sha1": "17ef432cad3d1e1d42a0665fc6cb401295df3f93", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/12/5/401/pdf?version=1651857738", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "00a417afa67314f08c1cae1483d9adb8b97ad043", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252120550
pes2o/s2orc
v3-fos-license
Existential loneliness and life suffering in being a suicide survivor: a reflective lifeworld research study

ABSTRACT
Purpose: The aim of the study was to describe the loss of a family member by suicide, based on the lived experience of suicide survivors.
Methods: A phenomenological study with a Reflective Lifeworld Research approach was conducted, consisting of sixteen interviews with eight suicide survivors.
Results: The essence of losing a family member by suicide encompasses experiences of involuntary and existential loneliness, life suffering, and additional burdens in a life that is radically transformed, comprising prolonged and energy-intensive attempts to understand. Life for the family member encompasses a constant fear of being judged and an ambiguous silence, where this silence can both lead to involuntary loneliness and be a source of support and fellowship. Support mechanisms inside the family fall apart, and it becomes obvious that the survivors' experiences affect others. The loss also implies an active endeavour to maintain the memory of the deceased.
Conclusions: Based on these results, it is important for professionals to accept the survivors as suffering human beings early, from the point of the notification of death, and to consider them as patients in need of compassionate care. Such support might reduce life suffering, counteract stigma and involuntary loneliness, and work simultaneously as suicide prevention.

Background
A sudden death is unexpected and unforeseen, happening without warning, in contrast to an expected death, which happens gradually and allows the family to prepare for the loss. Suicide is often considered a sudden death and places suicide survivors within a major life transition (Iserson, 2000). Every year, more than 700,000 people die by suicide, which means that one person dies every 40 seconds (World Health Organization, 2022). For each suicide, an average of about 15 extended family members are estimated to be affected (Berman, 2011). This means that 11,500,000 suicide survivors are left behind every year. In previous studies, both quantitative and qualitative differences have been found between the loss of a loved one by suicide and other types of bereavement. Suicide survivors are at risk of a range of adverse mental and physical health outcomes compared with the general population (Erlangsen et al., 2017). A family that has been affected by suicide has a higher risk of being affected by suicide again (Niederkrotenthaler et al., 2012; Runeson & Asberg, 2003; Tidemalm et al., 2011; Qin et al., 2002). However, quantitative methods may fail to identify differences in bereavement when suicide-specific domains are not considered, for example, meaning-making around the death; feelings of guilt, blame, and responsibility; and feelings of rejection and abandonment (Jordan, 2001). A systematic literature review indicates that bereavement after suicide is different and more difficult when compared with bereavement after other types of deaths (Jordan, 2001). Suicide survivors are not only faced with the typical reactions that occur following a death, but they also have unique experiences, such as the unending question, "why?" (Pompili et al., 2013). Another systematic literature review has also found that suicide survivors' grief differed significantly concerning levels of rejection, shame, stigma, blaming, and concealing the cause of death when compared with all other survivor groups (Sveen & Walby, 2008).
Previous research has focused on the bereavement process following suicide (Shields et al., 2017), but has not fully addressed the essence of what it means to lose someone by suicide by openly and curiously engaging with the suicide survivors' own stories. By telling their own stories, the suicide survivors are given the opportunity to address their experiences honestly and in an uncensored way, to convey what the loss by suicide means to them. Such stories make it possible to understand the essence of this phenomenon. Therefore, the aim of this study was to describe the loss of a family member by suicide, based on the lived experiences of suicide survivors.

Research design
This phenomenological study was based on the Reflective Lifeworld Research (RLR) approach, where the ontology is drawn from Husserl's (1936/1970) and Merleau-Ponty's (1945/2002) lifeworld theory. RLR advocates that researchers must "go to the things themselves" and discover human experience in all its variety in order to obtain a profound understanding of the phenomenon (Dahlberg et al., 2008). The phenomenon in this study is losing a family member by suicide.

Recruitment of participants
The inclusion criteria for participation in the study were: a) confirmed suicide outside hospital; b) suicide within the last three years; c) death notification provided by a professional with a duty to perform this task; and d) suicide survivors residing in a specific medium-sized region (36 inhabitants/km²) in Sweden. In comparison with other countries, Sweden is described as a society based on secular-rational and self-expression values, which means that less emphasis is placed on religion, traditional family values, and authority. Divorce, abortion, euthanasia, and suicide are seen as relatively acceptable. Further, a society with self-expression values gives high priority to environmental protection, growing tolerance of other cultures and LGBTQ+ people, and promoting gender equality (World Values Survey, 2022). The exclusion criterion was: suicide survivors whom the first author had met in her work as a prehospital emergency care nurse. Participants were recruited by publishing an advertisement in two daily newspapers and by posting advertisements in seven grocery stores located across a wide geographical spread in the region. People were instructed to send an email or call a toll-free number to report their interest in participating. Thereafter, an information letter was sent to those who indicated an interest in participating in the study. A total of 14 suicide survivors responded to the advertisement. Five persons did not meet the inclusion criteria, as the cause of death was not confirmed as suicide, the death happened inside a hospital, or they had received the death notification from a family member. One potential participant declined the invitation to participate. The eight participants who were included comprised five women and three men, aged between 42 and 74 years. The time since the loss of the deceased ranged from three months up to three years before the first interview. The re-interviews were conducted six months after the first interview. The deceased were six men and two women, and the types of relation to the person who died by suicide included brother, daughter, husband, mother, partner, and son, and there were different types of suicide. Seven of the participants received the death notification from a police officer, and one from a general practitioner.
Data gathering
Individual interviews were performed with eight suicide survivors in the first round, with a second re-interview held between three and six months afterwards, which resulted in a total of 16 interviews. The first author performed all of the interviews, which took place from September 2019 to March 2020. The interviews in the first round proceeded by posing an initial question: "You lost your X (relation) by suicide, can you please tell me about it?" To obtain a deeper understanding of the phenomenon, the participants were encouraged by asking further questions and probes, for example: "Can you describe a little bit more about . . . ?" and "Can you tell me more about what you mean by . . . ?" Three pilot interviews were completed with suicide survivors to test the interview approach and the initial question. The pilot interviews did not lead to any changes in the questions and were not included in the analysis. The first round of interviews lasted between 41 and 89 minutes (mean 63 minutes, median 64 minutes) and, after the initial question, the survivors spoke for a median of 12 minutes (mean 15 minutes) before other probes were given. After the first round of interviews, a close reading of all the transcripts was made, aiming to identify and clarify any unclear meanings in the data, so that the authors were able to grasp the participants' own words and descriptions and to avoid making any assumptions or misunderstanding their meanings. In the re-interviews, two questions were first posed to each participant: "Is there something that you have thought about after the first interview?" and "Have you something further that you want to tell me about since the last interview?" Additional questions, based on the readings of the interview transcripts from the first round, were prepared for each participant. The interviewer asked for clarification about responses given during the first interview, and the participant was given the opportunity to further elaborate on their previous descriptions and to clarify using other words. The re-interviews lasted between 37 and 113 minutes (mean 71 minutes, median 64 minutes). The participants chose the time and place for the interviews. The first round of interviews was held either in the participants' homes (n = 3) or in different care units (n = 5). The re-interviews were also held either in their homes (n = 4) or in different care units (n = 4). If the participants presented suicidal behaviour, the moral obligation to protect the participant's health was temporarily given first priority, and the research requirement became secondary.

Data analysis
According to Gadamer (1960/2013), our preunderstanding is a prerequisite for understanding; to maintain a critical attitude as a researcher is crucial. The willingness to understand the interviewee, together with the methodological principles in RLR of openness, curiosity, and bridling, helps the researcher in developing an understanding of the phenomenon (Dahlberg et al., 2008). All interviews were audio-recorded and carefully transcribed verbatim by an experienced research secretary to include non-verbal information. The tripartite structure of the data analysis comprised a movement between the whole, the parts, and the whole (Dahlberg et al., 2008). The initial phase aimed to allow the researchers to become familiar with the data.
The first author listened to the first-round interview and the re-interview for each participant and read the transcripts several times to get a sense of them as a whole, using curiosity and openness to open up the mind to the text and the meanings that are present. This involved looking for otherness or something new, rather than looking for things that confirm what is already known (Gadamer, 1960/2013). One of the co-authors read all of the transcripts, and it was a conscious choice to allow the other co-authors to read several of the transcripts. The rationale was to allow the possibility for all to ask curious and critical questions about the analysis and the meanings, so that the analysis did not proceed too quickly. Bridling is a process and an approach that involves researchers moving between subjective and objective dimensions in their ambitions to investigate and uncover meanings that are valid. All of the authors used bridling throughout the whole analysis, to "keep an eye on and keep in check": to slow down and incorporate a continuous investigation of one's own preunderstanding and so avoid making assumptions and jumping to conclusions (Dahlberg & Dahlberg, 2019, 2020) and, further, to be open to new understandings (Dahlberg et al., 2008). After the initial readings, keeping the phenomenon in focus, an understanding of the meaning structure presented in the data was illustrated by describing essential, nuanced, and varied meanings. Then, all authors worked together to cluster meanings that possess the same characteristics and which might be related to each other, including all the variations of the phenomenon. This clustering formed a broader understanding of patterns within the phenomenon. The analysis was performed by all of the authors and, to facilitate the clustering process within the extensive data, the NVivo software package version 12 (QSR International, 2018) was used. A new whole description, comprising the essential structure of meanings that explicate the phenomenon, was presented. First, the essential structure of meanings was written. Then, the individual constituents were formulated, including variations within meanings and unique experiences, and were illustrated by presenting quotations (Dahlberg et al., 2008). To illustrate the findings and to increase their trustworthiness, quotations were identified. According to the RLR approach, the results must be presented in the present tense (Dahlberg et al., 2008).

Ethical considerations
Several established researchers in this area have previously studied suicide survivor experiences, and each predominantly adopts a positive attitude towards the ethical implications of including such participants in studies (Andriessen et al., 2022; Dyregrov, 2004; Dyregrov et al., 2010; Hawton et al., 1998; Omerov et al., 2014; Runeson & Beskow, 1991). Andriessen et al. (2022) conclude, in a study with 61 bereaved family members, that the participants experienced little distress and would recommend participation to others. Dyregrov et al. (2010) found, in their study with 92 suicide survivors, four categories of motivations for individuals to participate in such research: "Helping Others", "Venting", "Insight", and "Just Because". Their conclusion was that suicide survivors are motivated to participate in interviews and that the motives are multifaceted and reflexive. Further, suicide survivors should be allowed to decide for themselves whether to participate in research or not.
The researchers were fully aware that the participants could regard the topic of this study as being sensitive. Therefore, all participants received the phone number of a counsellor in the event that anyone needed further support before or after the interview. The authors prepared a plan for how to respond if a participant expressed suicidal thoughts during the interview. After the interview, the first author's ambition was to change from the role of researcher to that of prehospital emergency care nurse and make an assessment according to medical service guidelines. Further standard procedures related to the assessment would be followed. Ethics approval was obtained from the Swedish Regional Ethical Review Board of Uppsala (No. 2017/228 and No. 2017/228 K) and the Swedish Ethical Review Authority (No. 2019/03259). The participants were informed in writing and orally that their participation was voluntary and that they had the right at any time to refrain from further participation without specifying any reasons. Before the interview, written informed consent was obtained from all participants.

Essence
The loss of a family member by suicide is a phenomenon characterized by existential loneliness, which means life suffering with additional burdens being placed on those closest to the deceased. These include the perception that the family member did not have the strength to live any longer, and feelings where one's own existence seems to crack. Everyday life means heavy feelings of guilt and a prolonged and energy-intensive effort to try to understand. At the same time, to understand is hard to bear, as it presents painful insight into unknown elements about the deceased. Life includes existential loneliness and silence from people around them, while at the same time there is an appreciable need to speak to someone who dares to remain silent and listen. This ambiguous silence can both lead to involuntary loneliness and be a source of support and fellowship. Disclosures in relationships where someone is listening are exposed to extreme challenges, and the usual support mechanisms inside the family become unstable. There is a striving to seek an existence that is familiar, despite the fact that life has been radically transformed. The loss implies an active endeavour to maintain the memory of the deceased and manage the grief to prevent a total obliteration of the deceased's imprint. Stigma and a constant fear of being judged and held liable are ever present in life, which also involves embarking on experiences that adversely affect others and contribute to an undesirable identity. The following constituents further describe the phenomenon: A loss with additional burdens; The need to understand; The ambiguous silence; Memories as an eternal companion; and The fear of being convicted.

A loss with additional burdens
Loss from a suicide constitutes more burdens of grief and produces extra "layers" of pain. To be left behind is an additional burden: first, in the sense that a family member did not have the strength to live any longer, and second, because the family member actively ended their life. The realization that insurmountable worries led to the loss of the family member is painful: We [the family] have probably talked a lot about that part precisely because (pause) yes, because for the debt issue as well, as one feels guilty because someone is so sad that they do not want to live. So then, yeah, so that [the insight] we probably have, that [the insight] has been the major part.
The time nearest after the family member's death is dizzying and painful. The suicide survivor breaks down, stops and feels, but also switches off and manages, under this "shutdown", to arrange practical tasks. It is like having a nightmare, where the whole world seems to be cracking. In connection with the death notification being delivered, difficulties are experienced in understanding what is being said, panic takes hold, and reactions such as crying, screaming, and anger arise. The bodily feelings associated with the death notification remain as an embodied memory that can return at any moment. The death notification can be described as a complete shock, an unreal dream, a brutal message, a total change of life where reality is questioned, and at the same time is so obviously real. You just lose everything. It is uncontrollable. It is like you just, you cannot control it. It is a sign that you are alive, to a very high degree. That's how you can express it. That you are, you are, you cannot be more alive. The death notification is surprising for some, but not surprising for others, and, in some cases, the notification was only a formal legal process, as they already knew that the family member was dead. It can also imply a long and lonely wait to obtain information. The sense of powerlessness was palpable when the survivor had to wait patiently for the police officers to deliver the death notification to other family members. Further burden occurs when the survivor tries to convey the deceased's mood before death in meetings with healthcare professionals and feels that the response is cool. The inability to provide care allows the blame to be partially shifted onto the healthcare service. At the same time, this insight into the healthcare service's shortcomings and inadequacies burdens the survivor, and they express how feelings of powerlessness, betrayal, and disappointment take hold. Further burden and pain are experienced when insights about the deceased's life are revealed, for example, their suicidal thoughts and actions, through having access to their medical records. Here, a father describes reading his son's records: So, I imagine, we [healthcare] cannot take it. Now have we been doing this for a long time and we have come to the end of the road. And this guy [therapists] who spoke to my son, he even wrote, 'I'm not coming any longer. I have been doing this for a very long time and I'm not getting anywhere'. But then you have to change to another path for fuck's sake, and do something else! No, they don't change to something else, just business as usual, in the same shit. A burdensome feeling is an increased sense of responsibility, involving many practical tasks that cannot be escaped. The responsibility implies a loss of energy, where the ability, for example, to receive news or receive visitors is reduced and the space for making new impressions is exceedingly limited. Despite this, the survivors are met by incomprehensible demands from those around them to move on after what has happened. After a long period of mental illness of the deceased, it can feel as though a "wet oppressive cloth" has been lifted, bringing a need to manage ambivalence, and a feeling of lightness and wellbeing that is at the same time filled with guilt. Impaired memory, increased vulnerability to stress, and difficulty sleeping are described as being caused by the death.
Furthermore, survivors experience a lack of energy, a struggle against an uncompassionate employer, and guilt for what one's children have to deal with. To feel some form of normalization, some survivors choose to move house and start over to avoid being constantly reminded of what has happened. The boundary between life and death is blurred, with a feeling of indifference to death, sometimes bringing with it one's own suicidal thoughts and sometimes a newfound joy in life. The experiences of the loss provide insights into what life can be and remind us of the vulnerabilities of life and what life can do to one's own existence.

The need to understand
After the death notification, a prolonged and energy-intensive effort begins, to try to understand the inconceivable. Gathering all the pieces becomes crucial in having the opportunity to solve the puzzle. The survivors are active and seek facts and information about the deceased. They receive information about the deceased from friends and other acquaintances, and by obtaining the deceased's medical records and notes. Reading through the documents involves exposure to further painful insight about the deceased's unknown life, and their relationships with others become apparent. Sometimes, survivors are looking back many years, in some cases all the way back to childhood. The survivors, in solving their puzzle, turn to professionals in the form of seeking conversational support. This is something that the survivors arrange for themselves. The survivors spend a lot of time assembling the information and putting the pieces together. Puzzling, thinking, and trying to understand also implies experiencing an examination process similar to being the accused in a judicial trial. It is a trial all the time (pause) Yes, it is a trial in my head all the time when you like (pause) to (pause) and I can (pause) I can get it to that is my fault. But then I know reason-wise I am not. Through a process of self-examination, questions are asked about what should have been done and what should have been observed. At the same time, there are feelings of guilt. It is described as being placed in a yoke, and the search sometimes reveals insights that ease off the yoke, easing one's own guilt. At the same time, it is difficult to reach an end to this work. Gradually, an insight also emerges about the suicidal signals that existed before the suicide which were not understood then; these insights need to be managed. Below is a description of the moment when a picture of the puzzle begins to emerge: Maybe two months before [the death] and all of a sudden we went down to her storeroom for a look. And, as I said, she was very orderly, you know, and she was down there and looked so - she could barely walk - and she said, 'Oh, [participant's name], look how many cartons and how much stuff there is.' 'But you're kidding,' I said, 'There are some boxes on the shelves and they are marked and everything is folded and nice, this is like, this will take half an hour to clear out,' I said, and then I thought, then, like all that was done back then. That like, yes, even then she had cleaned up like [you would] after a death long before that. There is also a need to see the deceased in the morgue or in the place where the family member died. When it was not possible to see the deceased at the morgue, this is described as being a barrier to understanding.
Solving the puzzle and gaining an understanding creates a sense of peace and puts the survivor's mind at rest once they stop dwelling on the matter, which also means avoiding questioning oneself: You have to, no matter how much we try to dig into these things, we still do not know everything. And then you have to realize that, yes, but here now okay, now I know enough.

The ambiguous silence
The loss implies a clear need to share thoughts and feelings, by writing to or telling someone who genuinely and compassionately dares to listen sensitively and has the ability to remain quiet during the narration. For the survivor, it is a relief to be able to tell their story: Then it was just, you weave it (pause) it just came like that. But then he just sat there and listened to my story. I would not say that it was hard to tell (pause) it is like, you unload emotional discharge a little bit, it is when you tell the story. Close relationships undergo extreme challenges. Huge demands are placed on the family to cope with providing support to each other. Some of the survivors are in mourning themselves and, at the same time, need to be available to provide support for other family members. The foundations of support within the family are unstable, as the loss is managed differently by the family members, depending on their relationship with the deceased. Sometimes the closeness to other family members is soothing and partially supportive. This can also be described as strengthening the family relationships, for example, meeting as a family more often, prioritizing the family more than before, and taking financial responsibility for the children who remain. The survivors experience that they limit themselves in meetings with others and at the same time have a major need to share their thoughts with someone outside the family. However, sharing thoughts with someone outside the family can also bring distress. One aspect of meeting someone outside the family who is in a similar situation is that it can be too much: It is still difficult. I do think that the group meetings with SPES [suicide survivor organization] were good, but at the same time I felt sometimes that they gave me more pain rather than doing me good. Because you have your own grief and then you're sitting with others with their grief and are listening to their stories. The survivors have a need for practical help from those around them. Such practical help from the employer consists of, for example, reducing their requirements and allowing the survivors to attend their workplaces without heavy demands, or offering conversational support from the occupational healthcare service. The practical help needed from family members and people around them involves an everyday life that is familiar and a need for routines, and such support is also experienced as being listened to and encouraged. The survivor has a need for flexible support, which means being offered continued support and being given a feeling of security by those who dare to listen. The survivors, on the one hand, sometimes lacked support from those from whom support had been expected, which could lead to an unexpected exclusion of former friends. On the other hand, they were sometimes met with unexpected support from people whom they had thought unlikely to provide it. The silence from the healthcare service brings with it a feeling of being forgotten and abandoned, where they themselves must be active in seeking and receiving help.
Where can you call? We called the hospital / . . . / and we could come down to the hospital to some unit somewhere. But it all ended up in that we could not go to the hospital because he had not died in the hospital or something like that. Yes, it was some weird explanation like that. There is also a great need to be treated with compassion and support, particularly when encounters with the healthcare service, other professionals, society, and other individuals feel unengaged, distanced, and cold. Even people who know the survivors seem to avoid any contact and even intentionally turn away. The survivor encounters inhumanity when society does not feel present and supportive. Initially, a feeling of support is experienced in connection with the death notification being delivered, which, after a while, transforms into a lack of support. This sense of abandonment feels especially clear when promised support turns out to be nonexistent: The police officer went / . . . / and they [police officers] said: 'You know where we are,' and they also said that they work until then and then. They work until nine or something like that. So I got a telephone number and then a name I think, on a piece of paper. / . . . / But at least I got this person on the line finally: 'We need help' 'Yes, no but, but we cannot come we do not have any help to give. No, we cannot send anyone but we would like that one of children come to us so we can swab them' [take saliva samples]. The genuine compassion when the death notification is delivered confirms and responds attentively to the need for calm, closeness, honesty, and touch. It occurs when the person who delivers it introduces themselves, maintains a sense of calm, promotes a feeling of care, is kind, and is prepared to listen. A description of how compassion can also be conferred in a phone call is made here: Then she [police officer] stayed calm all the time. You had the feeling that she cared about me in some way and that she wanted to frame the conversation in a sort of security in some way, in the middle of all the strange things that of course happened then. And then you also got the feeling of care when I started, like, 'Yep, but what will happen then? And will we find out about it? And do we expect to get anything home, or?' No, we do not. 'But I [police officer] can call you later.' In this way you felt that she wanted it to be good. And you feel like she took care of you in the way that she could on the telephone. And she stayed calm, collected, and pleasant, and yeah, that conversation felt very good. In contacting and in meeting with the healthcare service after the death, the survivors describe an inability from the healthcare professionals to show compassion. Even a lack of compassion in those delivering the death notification was described: Maybe they [police officers] said 'Bye'. I do not know. Even if I had not been able to say anything, just 'Yep'. Or something. I think that it (pause) I do not know. A certain compassion was missing. Because you are in shock. They [police officers] come and give you a shock. And then you can just say, 'Bye, we're leaving now'. Then you know that here is someone who, they have just announced, but they have also cared I do not have any memory of when the police officers actually went, it is that. I see it very clearly now, that they would have said 'Bye'. Maybe only 'Bye'. Perhaps, 'Do you want to ask something?' Of course, you cannot when you are in shock, but they could still ask the question. 
Because inside me I have registered it for maybe a long time afterwards.

Memories as an eternal companion
There is a need for sharing memories about the deceased with family, friends, and others, particularly when others choose not to talk about the family member. This silence is experienced as if the family member never existed and intensifies the feelings of emptiness. No one even mentions him, of those who have found out [that he died by suicide] / . . . / but I do not want them to remove him. He should be allowed to join. I wish they might just [ask] 'How was (the name of the deceased) when he was ?' / . . . / I would like it to be a bit like that, so that he can be a bit alive. There is a value in preserving the memory, and the ways in which the survivors accomplish this vary. Some have the need for solitude and an opportunity to meet the deceased in a secluded "room" and to preserve the memory of the family member there. Memories of the deceased reveal themselves in both the good times and the bad. The memory can also come to life without an active prompt and can appear at any time; these are memories that need to be managed and, in some cases, even processed. Further, there is an emptiness and the need to adjust to a life where the family will never be the family that it once was. Grief is consciously managed as a way to prevent a total obliteration of the deceased's memory and imprint. This conscious management becomes a constant companion in life: My mom and dad have passed away, but still it's like not the same. This was like (pause) losing an arm or a part of the body. Yeah. And now I am going to learn how to live without a part of my body, in a different way. In the memory somehow. Mm (pause) yeah, but it is like I have been amputated somehow. He amputated me.

The fear of being convicted
There are barriers to talking about one's own feelings, and it is not possible to unburden one's feelings without affecting others. In some situations, survivors choose not to discuss their emotions with others, to avoid affecting the atmosphere. Sometimes they choose not to tell others about the real cause of death, to spare themselves from being blamed by others, and other family members from feeling guilty. These experiences are like secrets that are carried through barbed wire and that are only revealed after the wire has torn open wounds, such as in sudden and unexpected moments, when the survivor is forced to explain why the family member is no longer here. There is a constant fear of being judged by others and being blamed, or that the whole family is being judged in the eyes of others and being held responsible for what has happened. The conscious choice to tell one's story is made only when trust is established. Often, in meetings with other suicide survivors, the survivors are seen as being on the same level and, for that reason, such silences can be supportive and not judgemental. You do not have to explain so much, what you feel and how you feel. And it is about the silence, the silent language, you know exactly how it is. No one asks the extra question or thinks that it is odd, what you said or that. You understand each other exactly. It feels like being in a vacuum, not being able to share thoughts with others, together with the difficulties that come with the fear of being judged.
Reluctantly, the need to be seen is weighed against the risk of being attributed an undesirable identity; to be seen as a family where someone had died as a result of suicide: I feel that there will be (pause) It is something that you defend yourself against hearing. And it is very hard to hear that a young person committed suicide or 'What is going on?' And that also creates a lot of fantasies about what our family really looked like.

Discussion
Based on the phenomenon of losing a family member by suicide and the present results, the question of "losing" should be understood in relation to "being" a suicide survivor, that is to say, human existence in all its dimensions. The opening question of the interviews was directed at the experience of losing, while the answers were often about being, which implies a tightly entangled meaning that is impossible to separate. The survivors did not only describe losing a family member; they also described a need to orient themselves within the lifeworld of other human beings. This could explain the closely intertwined and dialectical relationship between being and losing, which clarifies "being into the world" for the survivors. Being is always about "being into the world", and the world is always something that we share with others (Heidegger, 1927/1998; Merleau-Ponty, 1945/2002, 1948/1968). In this study, being a suicide survivor signifies existential loneliness and life suffering. The results indicate that survivors seek a deeper understanding by solving a puzzle in order to ease off the yoke and cope with the question of guilt. Previous studies have described that survivors struggle in silence with unanswered questions (Bowden, 2017), searching for reasons why the suicide happened (Begley & Quayle, 2007), and needing answers to central questions such as, "Why did he commit suicide?" (Fielden, 2003). It seems to be central for the survivors to understand what, in some sense, is elusive and not fully possible to understand. An increased understanding does not necessarily reduce feelings of guilt, but instead helps the survivors to cope with the feelings of guilt in everyday life. It is not the answers to the question "Why?" that are important per se; instead, it is vital to really understand. The lived experiences of the survivors in the present study indicate a profound life suffering and feelings of involuntary loneliness. The survivors described a continual fear of being judged. Previous studies also describe that survivors blamed and judged themselves for the death (Peters et al., 2016a), felt blame and judgement from others (Ford, 2016; Sheehan et al., 2018), and felt judged by the first responders in the immediate aftermath of a suicide (McKinnon & Chonody, 2014). When others condemn a human being and act as judges, much suffering is caused by feelings of being accused, blamed, or judged. The suffering, guilt, and pain may be found among family members who had a close relationship with the deceased. The feeling of humiliation and shame is a type of torment that afflicts a person who fights against suffering (Eriksson, 2006). The survivors experienced barriers to talking about their own feelings and burdens, and it is impossible to ease these without affecting others. A previous study has described similar results, where survivors felt a sense of responsibility to alleviate the discomfort demonstrated by others (Peters et al., 2016a).
The survivors in the present study needed to share their thoughts and feelings with others, and sometimes the expected support was lacking. The lack of support also meant a lack of opportunities to talk about and maintain the memory of the deceased. Similar results have been found in a phenomenological study, where the survivors were found to have kept the memory alive through themselves and by regarding the deceased as still being a part of the family, and never saying goodbye (Kinsey, 2019). In a study of the enigmatic phenomenon of loneliness, the meaning of loneliness can be understood as an existential deficit and as being rejected by people who they want to be with, which makes this imposed loneliness painful (Dahlberg, 2007). Further, the suffering caused by silence from the people around them left the survivors in a state of involuntary loneliness. An additional suffering in care and healthcare was caused by a violation of the person's dignity through omitted care or a non-caring attitude. According to Eriksson (2006), suffering in care and healthcare often arises from unconsciously acting, deficient knowledge, and a lack of reflection. Other studies have also expressed deficient availability and compassion in healthcare organization (Peters et al., 2016b), which can also be considered as suffering caused by healthcare. Instead, healthcare professionals need to support and provide compassionate care to the survivors. Compassionate care is, according to Dewar et al. (2014), a relational activity involving noticing another person's vulnerability, experiencing an emotional reaction to this, and acting in some way. In the present results, it was obvious that the usual support mechanisms within the family became unstable as the survivors coped with the loss differently, depending on their relation to the deceased family member. Previous studies describe similar results, for example, that the suicide leads to the fracturing of family relationships (Peters et al., 2016a), strained family relationships as family members are unable to acknowledge the grief of others in the family (Tzeng et al., 2010), avoidance and distancing from close relationships, superficial or troubled close relationships (Hoffmann et al., 2010), and family conflicts (Lee et al., 2017). Further, the present study indicates that the support from others is not taken for granted. Sometimes the survivors encountered incomprehensible demands from people around them to move on with their lives. Previous studies also describe that survivors were confronted with avoidant and even repellent behaviours from others, instead of receiving support (Peters et al., 2016a), making unreasonable demands on the survivor with statements such as, "let it go and move on" (Kinsey, 2019;Sheehan et al., 2018). The survivors concealed the loss and carried the loss like a secret, as partly revealed in this present study, and as also described in other studies (Azorina et al., 2019;Peters et al., 2016a). In a conceptual review of loneliness, existential loneliness was found to offer a different perspective on loneliness when compared to social or emotional loneliness. Existential loneliness means not simply the absence of meaningful relationships, it also signifies a feeling of fundamental separateness from others and the wider world, often in conjunction with traumatic events and death (Mansfield et al., 2019). The deepest feeling of loneliness occurs perhaps when a person is not seen by others, which is perhaps the deepest suffering of all. 
Not being seen is, therefore, in some sense, like being considered as being "dead" (Eriksson, 2006). Methodological considerations This topic may be difficult to talk about and, with respect for the complexities of the phenomenon, the researchers chose the RLR approach for this study in seeking the meanings of a phenomenon (Dahlberg et al., 2008). A strength is that both women and men were included in this study; as Maple et al. (2014) point out, the existing literature lacks studies about men's experiences as suicide survivors, along with experiences of losing a woman by suicide. A further strength is that the survivors had a variety of relationships with the deceased, which is also an element that is missing in previous studies (Sveen & Walby, 2008). It can also be considered a strength that the participants were not recruited from support groups, unlike most studies conducted to date (Maple et al., 2014). The demographic variation among the participants is judged to also have contributed important and varying experiences (Dahlberg et al., 2008). The study has some limitations regarding the risks associated with recruiting a selective sample. First, most of the participants had received the death notification from police officers. Despite this, the results might be transferable to other professionals and also to everyone who may encounter suicide survivors. Second, no young adults participated in the study. It is conceivable that, if the advertisement had been posted on other platforms, perhaps we may have reached them. This raises the likelihood that certain nuances of the phenomenon were missed, but the essential structure of meanings would probably remain the same. Promoting additional objectivity is the rationale for posing a wide opening question, so the researchers do not tend to steer the conversations with participants in any one direction except towards talking about the phenomenon. The open question allows them to feel free to choose where to begin. In that way, the participants had control of the interviews and the first author could adapt to the participant. This also minimizes the risk of steering them towards a specific answer and allows the participants to speak about as much of the phenomenon that they wish to and not only about the things that the researcher finds interesting. It is also a way to ensure that the preunderstanding of the researcher does not dominate the interview (Kvale & Brinkmann, 2009). The decision to perform a second interview was a good one, as it gave the participants an opportunity to clarify and provide more nuanced experiences. Having a difficult story to tell acted as a release for them, and a reinterview allowed the interviewer to focus more on asking them to describe their impressions of the phenomenon. All participants said before the first interview was finished, "Can we meet again? I have probably missed something". However, the researcher must keep in mind that they have met before and use openness and curiosity to look for new insight and otherness, especially during the second interview. These are particularly strong emotional stories, and, to find a balance between fellow human being and a researcher, the interviewer had to focus on the phenomenon instead of being drawn into the story and completely lose the phenomenon. According to Dahlberg et al. 
(2008), openness, curiosity, and adopting a bridling attitude help the interviewer to perform these emotionally strong interviews and to reveal rich meanings by moving between subjectivity and objectivity. Further, to promote the opportunity to find the essence of a phenomenon, the data have to be rich in meanings (Dahlberg, 2006). In this study, the interviews with the survivors were very rich. On one occasion, it was necessary for the first author to change from the role of researcher to the role of prehospital emergency care nurse to make an assessment about a participant's disclosure of suicidal thoughts.

Conclusions and implications
The loss of a family member by suicide is a phenomenon characterized by additional burdens and suffering being placed on those close to the deceased. Being a survivor implies existential loneliness and life suffering, which are new insights revealed in this study. Searching to understand, a constant fear of being judged, and being attributed an undesirable identity are partly new insights. The silence from people around the survivor, a welcome need to speak to someone, and the struggle to preserve the memory and imprint of the deceased are found in other studies and confirmed in the results. Because the survivors are extremely sensitive to meeting others, it seems important that professionals are aware of this sensitivity, which means that there is an extra need for them to be attentive and responsive in their relationship with the survivor. From the very first stage, in connection with the death notification, professionals should already accept survivors as suffering human beings, and the healthcare professionals should consider them as patients. Hence, the healthcare professionals should take the initiative and provide compassionate care. Such support might reduce life suffering, counteract stigma and involuntary loneliness, and work simultaneously as suicide prevention.

Notes on contributors
Christina Nilsson is a PhD student at Örebro University and, alongside her studies, works as a Prehospital Emergency Care Nurse (PEN) in the ambulance service. Her interest in the research area has arisen from her work as a PEN in encountering suicide survivors. She is interested in qualitative research and has experience of moderating and facilitating focus group discussions with professionals. Karin Blomberg is a Professor of Nursing Sciences at the School of Health Sciences, Örebro University. Karin is a Registered Nurse with experience and expertise in palliative care. Her research focuses on interventions for relation-centred care and the conditions for such care, i.e., professional development and learning. Another area of her research is related to concepts such as dignity and compassionate care, which has relevance to suicide survivors' experiences and need for support. Methodological development in qualitative research is also an interest. Anders Bremer is an Associate Professor in caring sciences at Linnaeus University, and a PEN with extensive experience as a clinician in the ambulance service. His research is about ethical problems and the ethics of care in the prehospital context. An area with high relevance in his research relates to the family members' situation during cardiac arrest and sudden death, including death by suicide. The research concerns equality and fairness within healthcare with a special focus on older patients. He is interested in various qualitative research methods, and especially phenomenology.
Direct measurement of the pulse duration and frequency chirp of seeded XUV free electron laser pulses

We report on a direct time-domain measurement of the temporal properties of a seeded free-electron laser pulse in the extreme ultraviolet spectral range. Utilizing the oscillating electromagnetic field of terahertz radiation, a single-shot THz streak camera was applied for measuring the duration as well as the spectral phase of the generated intense XUV pulses. The experiment was conducted at FLASH, the free electron laser user facility at DESY in Hamburg, Germany. In contrast to indirect methods, this approach directly resolves and visualizes the frequency chirp of a seeded free-electron laser (FEL) pulse. The reported diagnostic capability is a prerequisite to tailor amplitude, phase and frequency distributions of FEL beams on demand. In particular, it opens up a new window of opportunities for advanced coherent spectroscopic studies making use of the high degree of temporal coherence expected from a seeded FEL pulse.

Introduction
Free-electron lasers (FELs) are unequalled in providing tuneable, intense, ultrashort light pulses in the soft- and hard-x-ray regime. The self-amplified spontaneous emission (SASE) principle, underlying most FEL implementations, guarantees a high spatial coherence of the beam. Its longitudinal coherence, however, is known to be rather limited [1,2]. This important property is now being improved using FEL seeding techniques, where a seeding wave at the target wavelength [3] or an integer fraction of it [4,5] imprints a well-behaved and reproducible time evolution on the phase of the amplified electromagnetic field. This opens the door for a novel class of experiments investigating the response of matter to the mutual action of synchronized light fields in a previously unexplored spectral regime. A first such coherent-control type experiment [6] has demonstrated the steering of photoelectrons formed in a nonlinear ionization event by adjusting the phase between the EUV pulse and its simultaneously generated second harmonic. The full exploitation of these opportunities calls for a precise measurement of the temporal pulse properties, including their temporal phase evolution. Existing methods for measuring the temporal pulse properties of FEL pulses in the soft and hard x-ray regime adapt measurement principles developed for optical fs-laser pulses, like the autocorrelation [7] or SPIDER [8] technique. Recently, the duration of pulses from the seeded FEL FERMI has been determined using nonlinear cross-correlation with an infrared laser pulse [9], and a chirp has been indirectly inferred by successively following the change in the FEL pulse duration when modifying the chirp of the seed beam [10]. In the work presented here, we aimed at a direct measurement of the temporal properties of a seeded FEL pulse by employing the THz streak camera principle [11], which provides information on both the pulse duration and the spectral chirp [12]. Importantly, the single-shot capability of THz streaking makes it possible to assess the reproducibility of these pulse parameters.
While this method is not able to achieve a complete single-shot field reconstruction (as potentially provided by the SPIDER method), it can deliver the pulse's duration and linear chirp if more than one time-of-flight spectrometer is used [13]. Furthermore, as the XUV pulse structure is not affected by the gas target or the streaking THz electric field, this method provides a non-invasive and fully parasitic temporal diagnostic, allowing for a re-use of the undisturbed and characterized FEL pulse for downstream experiments. In the last section we explicitly compare the seeded FEL pulse measurement results obtained from the THz streaking method with an electron-bunch-related measurement method, which uses a transverse deflecting rf-structure (temporal deflecting structure, TDS, [14]) to observe the energy modulation imprint of the seeding process in the electron bunch directly.

Transient THz light field streaking of photoelectrons
To directly measure both the pulse duration and the chirp of individual XUV pulses, this diagnostic utilizes the energetic streaking of photoelectrons in a strong electromagnetic field at its zero phase. This technique has been developed in the field of attosecond science [15] and was adapted to be used on ionizing light sources with pulses in the femtosecond to picosecond regime [11]. The underlying principle is to map the temporal intensity and phase profile of the XUV pulse onto an electron yield distribution over kinetic energies. It also has conceptual similarity to other 'zero-phase' techniques for analyzing the structure of an electron bunch of a linear electron accelerator, as discussed in [16] or [17]. In order to comply with the expected pulse duration of the seeded FEL, in the range of several tens to hundreds of femtoseconds, the frequency of the adjacent field is chosen to be in the terahertz regime. The THz streaking diagnostic has been explained in detail elsewhere [12], so here we focus on the salient points for its application at a seeded FEL. Figure 1 depicts the experimental setup. The THz pulse is transported via metallic mirrors and is focused with a plano-convex lens (Zeonex polymer 440r, Viaoptics) of 75 mm focal length to a spot size of 1.2 mm FWHM width and 1.9 mm FWHM height. The XUV pulse is collinearly guided through 2 mm apertures in the THz lens and the metallic mirror (see figure 1). The XUV pulse ionizes an argon gas target, which is located 5 mm in front of the entrance of a time-of-flight spectrometer. The beam diameter of the XUV pulse is reduced to 0.5 mm with a pinhole aperture. Using a temporal delay stage and a set of temporal and spatial diagnostics such as an electro-optical crystal, a fluorescent screen, a pyro detector and a fast XUV diode, the spatio-temporal overlap between the THz pulse and the XUV pulse is established.
Figure 1. Scheme of the THz streaking diagnostic: a near-infrared femtosecond laser generates a synchronized Gaussian laser pulse, which is split into two pulses. One pulse is frequency tripled and injected into the FEL to seed the electron bunch inside an undulator. The other part is guided to the THz streaking diagnostic, where it is frequency down-converted to THz radiation and brought to spatio-temporal overlap with the XUV pulse inside an argon gas target.
The experiment makes use of a variety of different light pulses, which all originate from the same optical femtosecond laser pulse.
It is created in a 10 Hz flash-lamp-pumped, chirped-pulse-amplification titanium-sapphire laser system (HIDRA, Coherent) with 30 mJ pulse energy at 800 nm central wavelength and a Fourier-limited pulse of 35 fs FWHM duration. The titanium-sapphire laser is optically synchronized to the master oscillator of the FEL with a temporal precision of 13 fs rms [18]. The amplified output pulses are split into two parts; one is used for the seeding of the FEL process and the second for the THz streak camera. The output pulse of the laser system is temporally uncompressed. To reduce unwanted nonlinear effects during the pulse transport, the compression of each of the two parts is performed in front of and close to any nonlinear frequency conversion component. For seeding, a pulse with 5 mJ pulse energy is frequency tripled to 266 nm wavelength and a pulse energy of 500 μJ. The pulse duration of the optical seed pulse has been measured with an optical single-shot line cross-correlator between the fundamental and the frequency-tripled pulse to be t_laser = (140 ± 10) fs rms. A spectral measurement of the seed pulse provides a spectral width of 1.35 nm, which corresponds to a Fourier-limited pulse duration of 34 fs rms. Thus, the seed pulse was affected by an initial chirp as a result of the tripling process. The nature of this chirp is currently unknown and will be subject to further measurements. The frequency-tripled pulse is focused into a dedicated section of the linear accelerator of the FEL, where the amplification process is initiated by means of external seeding [19]. In the so-called high-gain harmonic generation mode of operation, a sinusoidal modulation of the energy of an ultra-relativistic electron beam is introduced by overlapping it with a GW-level optical seed laser pulse with 266 nm wavelength inside an undulator magnet. Arrangements of four dipole magnets generate longitudinal dispersion and convert the energy modulation into a current density modulation. The harmonic content of the current spikes allows the coherent emission of radiation at an integer fraction of the initial seed wavelength in a subsequent undulator. In contrast to FEL radiation pulses which are initiated by spontaneous synchrotron radiation in the so-called SASE mode, the seeded FEL pulses exhibit a high degree of temporal coherence [19]. In this experiment, the 8th harmonic of the 266 nm seed laser radiation was generated.
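As a quick consistency check of the seed-pulse numbers quoted above, the Fourier-limited duration implied by the measured 1.35 nm bandwidth at 266 nm can be estimated with a few lines of Python. The sketch below assumes a Gaussian spectral shape and treats the 1.35 nm width as an FWHM; it is an illustration, not part of the published analysis.

```python
# Estimate of the Fourier-limited seed-pulse duration from its bandwidth,
# assuming a Gaussian spectrum whose 1.35 nm width is the FWHM.
import math

c = 2.998e8            # speed of light in m/s
lam = 266e-9           # seed central wavelength in m
dlam_fwhm = 1.35e-9    # measured spectral width in m (assumed FWHM)

dnu_fwhm = c * dlam_fwhm / lam**2                     # bandwidth in Hz
dt_fwhm = 0.441 / dnu_fwhm                            # Gaussian time-bandwidth product ~0.441
dt_rms = dt_fwhm / (2 * math.sqrt(2 * math.log(2)))   # FWHM -> rms for a Gaussian

print(f"{dt_fwhm * 1e15:.0f} fs FWHM  ->  {dt_rms * 1e15:.0f} fs rms")
# prints roughly 77 fs FWHM, i.e. about 33 fs rms, consistent with the 34 fs rms
# quoted above; the measured 140 fs rms therefore implies a substantial chirp.
```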
A f=75 mm plastic lens (Zeonex 480R) with a 2 mm diameter hole focused the THz radiation into the interaction area (figure 1) with a spot size of 1.5 and 2.2 mm FWHM. With a pulse duration of 1200 fs FWHM a THz pulse energy of 0.42 μJ is obtained. Including energy losses due to Fresnel reflection, absorption of water in air and clipping in the vacuum entrance window, the conversion efficiency is estimated to be 0.05%. The reason for this relatively low conversion efficiency (see [21] for example) is attributed to a too low available energy of the optical driver laser during that experimental campaign, which prevented saturation of the THz conversion process. THz streaking scan In figure 2 the temporal delay scan between the XUV and the THz pulse is presented. For each delay time, kinetic electron energy spectra are recorded and plotted vertically in false color encoding the electron signal amplitude normalized to the overall maximum. The scan was recorded in time steps of Δt = 50 fs with n=10 spectra measured at each step. In the figure, all single-shot spectra are plotted such that for each spectrum of delay time t i a new interpolated delay time t j according = + D · t t j j i t n is assigned, with = j n 1 ... . This plotting technique preserves the single shot characteristics such as shot-to-shot fluctuations of the pulse amplitudes and the relative pulse arrival time. The red trend line connects the centers of mass of each spectrum. It represents the timedependent energy gain ( ) W t and is further used to derive the streaking THz field ( ) E t THz . The scan was limited to the central part of the THz vector potential since we extract the relevant information to calculate the XUV pulse information only from spectra taken at the streaking points  O and from unstreaked spectra of the region R 0 as will be further explained. According to [12], the energy modulation D ( ) W t is connected with the streaking THz field by the relation D » THz the vector potential of the streaking THz field. It should be emphasized that the quality of a THz streaking scan strongly depends on the technical abilities to keep any source of spatio-temporal jitter of the THz and XUV pulse as well as fluctuations of the energy and of the spectral center-wavelength of the XUV pulse as small as possible. Since the THz scan is measured using a seeded FEL pulse, the relative pulse arrival time jitter of THz and XUV pulse is expected to be small as will be discussed later [22]. R 0 , + R and -R denote regions of similar kinematics: in the region of R 0 the temporal overlap is not yet established and spectra are not streaked, thus, representing the unperturbed spectrum of the ionized 3p-argon valence electrons. In this area, one can study the shot-to-shot variation of the pure FEL pulse, which is mandatory to further evaluate the average XUV pulse properties from the streaked spectra. Region + R reaches from 0.3 to 0.6 ps. In this delay time interval the slope of the energy-gain curve is positive and almost constant as is the case at -R ranging from 0.9 to 1.2 ps, where the slope is negative. In the two inflexion points of the energygain curve the slope has an extremum providing highest streaking strength. Those two positions are further denoted as positive and negative operating points  O . A linear fit of the streaking trace at these two points reveal streaking speeds + s ands with + 16.8 meV fs −1 and −23.0 meV fs −1 , respectively. 
Determination of XUV pulse duration and chirp

Under the assumption that the pulse is subject to a frequency chirp of no more than linear order, it is possible to extract the pulse duration and chirp from the spectral broadenings σ_+ and σ_− at the two streaking points together with the original width σ_0 in region R_0 [12]. Assuming a Gaussian temporal shape, the electric field of an XUV pulse with chirp parameter c and rms pulse duration τ_XUV leads to a closed set of equations for the widths of the deconvoluted streaked spectra, which can be solved for τ_XUV and c. Because only a single time-of-flight detector was used in this THz streaking experiment, the average values σ_±, σ_0, and thus the averages of σ_±,decon, c and τ_XUV, were acquired successively. The error of the average values for pulse duration and chirp is therefore dominated by the shot-to-shot fluctuations of the FEL pulse. However, as this experiment was performed on a seeded FEL, these fluctuations are expected to be smaller than for an unseeded SASE FEL. This expectation is supported by a closer inspection of the streaking data: in figure 3 (left column), a series of single-pulse spectra acquired at the two streaking points and in region R_0 is presented. The center column displays individual spectra of arbitrarily chosen XUV pulses as waterfall plots. All amplitudes are normalized to the maximum amplitude found in the series of non-streaked spectra. The right column shows the histogram of the spectral widths of the subset of spectra used to calculate the average temporal pulse properties. Owing to shot-to-shot fluctuations of the temporal and spatial overlap between the seeding pulse and the electron bunch in the undulator magnet, the seeding process proceeded with fluctuating efficiency, leading to fluctuations of the XUV pulse energy and thus of the amplitude of the kinetic photo-electron spectra. By selecting only spectra with an amplitude higher than 10% of the highest amplitude of a measurement series, only spectra with a good signal-to-noise ratio were used for the evaluation. The total number of selected spectra used in one series varied between 100 and 300. The temporal resolution of the pulse-duration measurement is limited by the smaller streaking speed s_+, hence σ_res = σ_0/s_+ = 16.6 fs. A detailed analysis of the influence of THz wave-front deformations in the focus shows that the spectra are also subject to broadening due to the Gouy phase shift, which has been discussed in the literature [12]. (Table: retrieved pulse properties τ_XUV,rms and chirp c.)

Comparison with TDS measurements

The FLASH setup makes it possible to temporally resolve the energy spectrum of the electron beam after it has generated the FEL radiation [17]. In a TDS, an x-z correlation is imprinted on the electron beam, which allows mapping of the longitudinal phase space into the x-y coordinate system. The increase of the local energy spread in the seeded region of the electron pulse, compared to the unseeded electron pulse, was used to determine the FEL pulse duration. A direct comparison of the FEL pulse durations measured by TDS and by THz streaking shows that the TDS value for the 8th harmonic agrees with our measured pulse durations within the error bars.
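The three measured widths can be turned into a pulse duration and a chirp by solving a small linear system. The sketch below assumes a Gaussian pulse with a purely linear chirp, for which the streaked widths obey σ_±² = σ_0² + s_±²τ² + 4ħc·s_±·τ² (our paraphrase of the model in [12]); the numerical widths are placeholders, not the measured values, and the deconvolution of detector and Gouy-phase broadening is omitted.

```python
import numpy as np

# Placeholder rms spectral widths (eV) and the streaking speeds from the text (eV/fs).
sigma_0 = 0.50        # unstreaked width in region R0
sigma_p = 0.90        # width at operating point O+
sigma_m = 1.10        # width at operating point O-
s_p = +16.8e-3
s_m = -23.0e-3

# sigma_pm^2 - sigma_0^2 = s_pm^2 * tau^2 + 4 * s_pm * (hbar*c*tau^2)
# Unknowns: x1 = tau^2, x2 = hbar*c*tau^2  ->  linear 2x2 system.
A = np.array([[s_p**2, 4 * s_p],
              [s_m**2, 4 * s_m]])
b = np.array([sigma_p**2 - sigma_0**2,
              sigma_m**2 - sigma_0**2])
x1, x2 = np.linalg.solve(A, b)

hbar = 0.6582                     # eV*fs
tau_xuv = np.sqrt(x1)             # rms pulse duration (fs)
chirp = x2 / (hbar * x1)          # chirp parameter c (rad/fs^2)
print(f"tau_XUV = {tau_xuv:.1f} fs rms, chirp c = {chirp:+.2e} rad/fs^2")
```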
Summary and conclusion

We have measured the pulse duration as well as the frequency chirp of a seeded XUV free-electron laser using the THz streaking method. The obtained pulse duration matches well an independent TDS-based measurement of the duration of the energetic modulation of the electron bunch. The spectral components of the FEL XUV pulse were found to carry a temporal chirp. In [24] it is predicted that, owing to the high degree of nonlinearity involved, the HGHG seeding process is very sensitive to temporal phase variations of the optical seed pulse. Such a transfer of temporal phase from the seed to the amplified FEL beam has also been confirmed experimentally at FERMI [10]. A detailed quantitative investigation of this transfer mechanism, however, is beyond the scope of this work and will be the subject of a further study. Our simulations of the frequency-tripling process generating the seed beam are compatible with a chirp of positive or negative sign, depending on the exact settings of the laser compressor, the phase-matching conditions in the tripling crystals, and the residual dispersion introduced by the optical materials in the beam path. In the present experiment, the resulting XUV chirp of the seeded FEL pulse was found to have a negative sign. With phase diagnostics for the seed beam currently being implemented, we expect to be able to realize and measure seed-pulse chirps of both signs and of variable magnitude. With a powerful metrology for the spectro-temporal properties of individual FEL pulses now at hand, we will aim at full control over the pulse shape and the longitudinal coherence properties of seeded FEL pulses.
Radiation-Induced Enhancement of the Hydrogen Influence on the Luminescent Properties of nc-Si/SiO2 Structures

Using photoluminescence, infrared spectroscopy and the electron-spin-resonance technique, silicon dioxide films with embedded silicon nanocrystals (nc-Si/SiO2 structures) have been investigated after γ-irradiation with a dose of 2 × 10^7 rad and subsequent annealing at 450 °C in a hydrogen ambient. For the first time, it is shown that such a radiation-thermal treatment results in a significant increase of the luminescence intensity, in a red shift of the photoluminescence spectra, and in the disappearance of the electron-spin-resonance signal related to silicon broken bonds. The effect is explained by passivation of silicon broken bonds at the nc-Si-SiO2 interface with hydrogen and by the generation of new luminescence centers, created at elevated temperatures through the transformation of radiation-induced defects.

Background

Optimization of the emission characteristics (intensity enhancement and widening of the emission spectral range) of nc-Si/SiO2 structures remains a topical task. To this end, over the last decades many researchers have studied the influence of the formation techniques as well as of post-growth treatments on the structural and luminescent properties of these film systems. Summarizing the results of numerous investigations, changes in the impurity-defect state of the nc-Si/SiO2 structure can substantially influence its luminescent characteristics. The region most sensitive to the effect of doping atoms and structural defects (similar to the planar microelectronic Si/SiO2 system) is the nc-Si/oxide interface. In general, this effect is caused by a wide range of physical processes, from passivation of broken Si chemical bonds (quenching of non-radiative recombination channels) to formation of complex defects able to mediate radiative recombination (formation of radiative recombination channels). The effect also depends on several factors: the type of impurities, the size of the impurity atoms, their solubility, and their ability to form stable chemical bonds with silicon or oxygen. These impurities may be introduced at the stage of nc-Si/SiO2 structure formation [1,2] as well as through low-temperature treatment in an ambient atmosphere of a chemically active gas (hydrogen, nitrogen, or oxygen) [3-9]. The influence of such thermal treatments on radiative recombination in nc-Si depends on the chemical composition of the annealing environment, the temperature and the duration; annealing in a hydrogen-containing atmosphere is the most efficient [5-8]. The defect-impurity state of the nc-Si surface, and accordingly the light-emission parameters, may also be changed by ionizing-radiation treatment. For instance, irradiation of porous silicon samples with γ-quanta (doses of 4.3 × 10^6 to 3 × 10^8 rad) resulted in a remarkable (up to five-fold) increase of the red luminescence band near 710 nm (~1.75 eV) and in its blue shift [10]. Irradiation with a larger dose (10^10 rad) led to an I_PL decrease (up to 40 times) [11]. It is important to note that this I_PL enhancement took place only for radiation treatment in an air atmosphere [12] and was explained by porous-silicon surface oxidation during irradiation, which created a corrosive ionized ambient around the sample containing ozone, atomic hydrogen and oxygen.
A certain enhancement (~1.5 times) of the luminescence intensity was observed earlier [13] after low-dose (~10^4 rad) ionizing irradiation (γ-rays) of SiO2 films containing Si nanocrystals. Contrary to the case of porous silicon, the effect in nc-Si/SiO2 structures was observed after radiation treatment even in an inert gas or in vacuum; hence, it had a different nature. Recently, the authors of [14] reported that proton irradiation of nc-Si in SiO2 multilayers leads to a significant increase in the intensity of the luminescence centered near 750 nm, to a blue shift of the photoluminescence spectra, and to the appearance and growth of a luminescence band near 500 nm. Infrared-spectroscopy results demonstrated a decrease in the absorption peak corresponding to SiO2. On the other hand, irradiation of semiconductor-dielectric systems with rather high doses (~10^5-10^6 rad) is known to substantially facilitate the elimination of surface recombination defects under subsequent thermal treatments, resulting in a marked improvement of the electrical (Si/SiO2 systems) and luminescent (GaAs/oxide, InSb/oxide systems) characteristics [15]. Therefore, one may expect that irradiation and subsequent thermal treatment of nc-Si/SiO2 structures are also able to enhance their light-emitting efficiency. To check this hypothesis, an investigation of the influence of high-dose γ-irradiation followed by low-temperature annealing on the luminescent properties of nc-Si/SiO2 structures has been carried out.

Methods

SiOx layers were obtained by thermal evaporation of SiO (Cerac Inc., purity 99.9%) in vacuum at a residual pressure of 2 × 10^−3 Pa. Double-side-polished p-Si wafers with resistivities of 10 and 50 Ω·cm were used as substrates. The substrate temperature during deposition was 150 °C. The film thickness (d) was estimated in situ by a quartz-crystal-oscillator monitor system (KIT-1) and was 450 nm (on the substrate with ρ = 10 Ω·cm, for photoluminescence and infrared measurements) or 1000 nm (on the substrate with ρ = 50 Ω·cm, for electron-spin-resonance measurements). After deposition, the thickness was measured with an MII-4 microinterferometer and a Dektak 3030 profilometer. As a result of subsequent high-temperature (T = 1100 °C) thermal treatment in an Ar atmosphere for 15-30 min, the nc-Si/SiO2 structures were formed. Previous investigations of such samples using high-resolution transmission electron microscopy [16] have shown that the average size of the formed nanocrystallites is about 3 nm and that they are rather evenly distributed through the film. These structures were irradiated by γ-quanta (60Co) with an intensity of 36.77 rad s−1 and energies of 1.17 and 1.33 MeV. The exposure dose was 2 × 10^7 rad. The temperature of the samples during irradiation did not exceed 30 °C. Initial and irradiated samples were thermally treated at 450 °C in an H2 atmosphere for 2 h. Infrared (IR) transmission spectra were measured using a PerkinElmer Spectrum BXII Fourier-transform IR spectrometer. A silicon substrate (without an oxide film) served as the reference sample. The absorption band related to Si-O bonds (maximum position within the range 1000-1100 cm−1, depending on the oxygen content of the oxide film) was investigated. To investigate the luminescent properties of the samples, a HORIBA Jobin-Yvon T64000 Raman and photoluminescence spectrometer was used.
As the excitation source for photoluminescence, the 488 nm (2.54 eV) line of an Ar-Kr laser was applied. Photoluminescence (PL) measurements were carried out at room temperature and at various excitation powers. Because the measured red luminescence band (maximum position in the vicinity of 800 nm) of the initial samples, and of those annealed in a hydrogen ambient, was not symmetrical, its deconvolution into elementary profiles was carried out. To obtain more reliable results, the well-known approach of [17] was used. According to this method, the PL spectra are measured several times under different conditions, selected in such a manner that the contribution of each component is different in every case, while the kind of profile and its main parameters (maximum position and full width at half maximum) are kept constant. The number of measured spectra should exceed the number of individual components. Preliminary measurements showed that, in the case of initial or annealed nc-Si/SiO2 structures, the shape of the PL spectra depended substantially on the excitation light intensity (I_ex); therefore, we measured a set of PL spectra at different values of I_ex (within the range 10^17-10^19 quanta cm−2 s−1). The spectral curves for each kind of treatment were then deconvoluted into Gaussian profiles with stable parameters. The deconvolution accuracy was characterized by the root-mean-square deviation of the summed Gaussian profiles from the experimental curve and in these experiments did not exceed 10^−2. Electron-spin-resonance (ESR) spectra were measured at T = 300 K and T = 77 K using nc-Si/SiO2 structures on p-Si substrates (ρ = 50 Ω·cm) with an oxide-matrix thickness of ~1 μm. The measurements were performed in the X-band (microwave frequency ν ≈ 9.4 GHz), with the magnetic field modulated at a frequency of 100 kHz. The microwave power and the amplitude of the magnetic-field modulation did not exceed 1 mW and 0.06 mT, respectively. To avoid saturation and overmodulation effects when registering the rather narrow and weak ESR lines, averaging of 16 to 49 measured spectra was carried out. The number of paramagnetic defects and the value of the g-factor were determined using a reference MgO:Mn2+ sample with a known number of spins, located inside the microwave resonator simultaneously with the sample under investigation. The number of defects was determined by comparing the double integrals of the first derivatives of the absorption signals of the studied sample and the reference. The absolute error in determining the number of defects was ±40%; the relative error when comparing different samples did not exceed 15%.

Results and Discussion

Thermal treatment of the initial nc-Si/SiO2 samples in an inert (Ar) atmosphere at 450 °C did not influence the PL intensity (I_PL). After irradiation, which decreased the original I_PL value about two-fold, this annealing only restored the I_PL magnitude, which was expected and is evidently related to the elimination of radiation damage. In Fig. 1, the PL spectra of nc-Si/SiO2 structures annealed in a hydrogen ambient are presented. It is seen that the thermal treatment of the initial samples (Fig. 1, curve 2) leads to a ~5-fold increase in the PL intensity, in good agreement with known results [5-8].
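As a concrete illustration of the fixed-parameter deconvolution described above, the following sketch holds the two Gaussian centers and widths fixed at the values quoted later in the text and fits only the non-negative amplitudes to a spectrum. The wavelength grid and the "measured" spectrum are synthetic placeholders of our own.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic wavelength axis (nm) and a placeholder measured PL spectrum.
wl = np.linspace(600.0, 1000.0, 800)
measured = np.exp(-((wl - 760) / 60) ** 2) + 0.4 * np.exp(-((wl - 860) / 50) ** 2)

def gaussian(x, center, fwhm):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Fixed profile parameters: 755 nm / 120 nm FWHM and 863 nm / 100 nm FWHM.
basis = np.column_stack([gaussian(wl, 755.0, 120.0),
                         gaussian(wl, 863.0, 100.0)])

# Only the (non-negative) amplitudes are free: linear least squares per spectrum.
amplitudes, resid_norm = nnls(basis, measured)
rms = resid_norm / np.sqrt(len(wl))
print(f"I1, I2 amplitudes = {amplitudes}, rms deviation = {rms:.3e}")
```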
In the case of preliminarily irradiated samples, the influence of the hydrogen annealing on the I_PL value was much stronger and depended on the time lapse between the irradiation and annealing processes. If the latter was rather short (days or weeks), the PL intensities of irradiated and unexposed samples differed by up to 2.5 times (Fig. 1, curve 3). When the irradiated samples were annealed after a few years, this difference decreased to ~1.5 times (Fig. 1, curve 4). From the results presented in Fig. 1, one more observation should be made: the maximum position of the PL band of the irradiated and annealed sample is red-shifted (~30-50 nm) compared with that of the initial film. Thus, irradiation and subsequent low-temperature treatment of nc-Si/SiO2 structures in a hydrogen ambient result in a new effect: an essential enhancement of the PL intensity and a red shift of the PL band. To obtain additional information on the mechanisms of the observed phenomena, an analysis of the PL band and of its behavior under the applied treatments was carried out. It turns out that the studied red PL band consists of two stable Gaussian profiles, I1 and I2, with peak positions at λ1 = 755 ± 5 nm and λ2 = 863 ± 3 nm and full widths at half maximum w1 = 120 ± 5 nm and w2 = 100 ± 5 nm, respectively (Fig. 2). The contribution of each component to the overall PL band depends on the type of sample treatment and on the excitation power. For example, for the non-irradiated (initial and hydrogen-annealed) structures, the elementary component I1 was always dominant. Its contribution to the overall PL band (S1/S, where S1 and S are the areas of the I1 profile and of the overall red PL band, respectively) slightly (~10%) decreases when the excitation power is reduced by two orders of magnitude. At the same time, the contribution of the weak I2 component (S2/S, where S2 is the area of the I2 profile) slightly increases. Irradiation of the samples leads to the disappearance of the I2 component. The subsequent thermal treatment restores the above-mentioned PL band, but the behavior of the contributions now becomes fundamentally different from that of the non-irradiated samples. At a decrease of the excitation power by two orders of magnitude, the contribution of the I1 profile falls by more than two times. Thus, the contribution of the I2 profile becomes dominant already at an excitation level of ~0.1 of the maximum one (10^19 quanta cm−2 s−1), and when the excitation power is decreased to 0.001 of the maximum value, only the I2 profile remains. In Fig. 2, the dependences of the intensities of both PL contributions on the excitation level are shown.

Fig. 2. Deconvolution of the PL spectra into Gaussian profiles for initial (a), thermally treated in hydrogen (b), γ-irradiated with a dose of 2 × 10^7 rad (c), and γ-irradiated and thermally treated in hydrogen (d) nc-Si/SiO2 structures. PL spectra were measured at the maximum excitation intensity (10^19 quanta cm−2 s−1). The insets display the excitation-power dependences of the areas of the corresponding Gaussians.

Usually, the dependence of the PL intensity (I) on the excitation power (P) is approximated by a power-law function I = P^α, where the exponent α (slope coefficient) is often used to identify the origin of emission from semiconductors: α ~ 1 for exciton-like transitions and α << 1 for free-to-bound and donor-acceptor pair transitions [18].
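In practice, α is obtained as the slope of a straight line in log-log coordinates, as in the minimal sketch below; the intensities are synthetic stand-ins for the Gaussian areas plotted in the insets of Fig. 2.

```python
import numpy as np

# Excitation powers (quanta cm^-2 s^-1) and synthetic integrated PL intensities.
P = np.logspace(17, 19, 7)
I1 = 3e-16 * P ** 0.9   # stand-in for an exciton-like component (alpha ~ 1)
I2 = 2e-5 * P ** 0.3    # stand-in for a defect-related component (alpha << 1)

# I = A * P^alpha  =>  log I = alpha * log P + log A: alpha is the fitted slope.
for name, I in (("I1", I1), ("I2", I2)):
    alpha, logA = np.polyfit(np.log10(P), np.log10(I), deg=1)
    print(f"{name}: alpha = {alpha:.2f}")
```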
In fact, we ascertained sublinear dependences with slopes of 0.9 and 0.3 for the I1 and I2 components, respectively. This result allows us to conclude that two luminescence mechanisms of different nature operate in the studied samples, and that their relative contributions depend on the type of sample treatment. Moreover, the value α1 (close to 1) allows us to conclude that the I1 component is caused by band-to-band recombination of electron-hole pairs in the Si nanocrystals. This mechanism, attributed to the quantum-confinement effect, is considered the major one when explaining the emission of Si nanocrystallites both in porous Si and when incorporated into SiO2 films [19]. The obtained value λ1 ≈ 755 nm allows us to estimate the energy band gap of nc-Si as ΔE ≈ 1.65 eV. Taking into account that the average size of the Si nanocrystallites embedded in the studied oxide films is ~3 nm, this value of ΔE agrees well with calculations as well as with experimental data (see, for example, [20]). The substantially lower value α2 ≈ 0.3 can be attributed to radiative transitions involving interface defect states in the nc-Si/SiO2 transition region [18,21]. The value λ2 ≈ 860 nm allows us to estimate the energy position of these interface defect states: it lies about 0.2 eV from the corresponding edge of the nc-Si band gap. In Fig. 3, the recombination processes responsible for the PL emission in the studied samples are shown schematically. Thus, light emission in the non-irradiated (both initial and hydrogen-annealed) nc-Si/SiO2 structures is realized mainly through radiative recombination of electron-hole pairs in the Si nanocrystallites. The minor part of the PL can be explained by radiative recombination through defect centers at the nc-Si-SiO2 interface. The enhancement of the PL of nc-Si/SiO2 structures after low-temperature annealing in a hydrogen ambient is usually explained by hydrogen-atom passivation of the silicon broken bonds (non-radiative recombination centers) that exist in high concentration at the surface of the Si nanocrystallites [22]. In Fig. 4, the ESR spectra of our samples are shown. It is seen that the band corresponding to silicon broken bonds (including Pb centers), with g = 2.0058, disappears after thermal treatment in a hydrogen ambient of both the initial and the irradiated nc-Si/SiO2 samples. This fact enables us to infer that the passivation of silicon broken bonds by hydrogen atoms was completed with practically the same efficiency in both cases. In other words, passivation of silicon broken bonds due to hydrogen annealing of the irradiated structures does take place, but it cannot be the reason for the enhanced PL intensity. Hence, to clarify this effect one has to invoke additional mechanisms, whose decisive factor should be the interaction of the ionizing radiation with the components of the nc-Si/SiO2 nanocomposite. It is well known, for example, that intense ionizing irradiation of silicon oxide films in vacuum can lead to the formation of Si nanocrystallites due to the effect of radiation-induced reduction of SiO2 [23,24]. In principle, the formation of additional nc-Si should lead to an enhancement of the PL intensity. However, in our case the studied samples were irradiated in air and the irradiation power was not high.
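The band-gap and level-position estimates quoted above follow from simple photon-energy arithmetic, E[eV] ≈ 1239.84/λ[nm]; a short numerical check is given below.

```python
# Photon-energy check for the two PL components (E[eV] = 1239.84 / lambda[nm]).
HC = 1239.84  # eV*nm

E1 = HC / 755.0   # band-to-band component -> nc-Si optical gap, ~1.64 eV
E2 = HC / 863.0   # defect-related component, ~1.44 eV
print(f"gap ~ {E1:.2f} eV, defect emission ~ {E2:.2f} eV, "
      f"level depth ~ {E1 - E2:.2f} eV below the gap edge")
```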
Furthermore, the IR absorption band related to stretching vibrations of oxygen atoms in Si-O-Si units (maximum position near ~1090 cm−1), with the shape inherent to the SiO2 phase [25], practically did not vary after annealing of either the initial or the irradiated nc-Si/SiO2 samples in an inert or hydrogen environment (see Fig. 5). This means that the thermal treatments did not affect the concentration and structural arrangement of the bridging oxygen atoms in the oxide matrix. In other words, this processing did not additionally increase the elemental-silicon content of the sample, unlike, for example, the high-temperature treatment of the initial SiOx films (see Fig. 5). Hence, the growth of the PL intensity observed after the radiation-thermal treatment of nc-Si/SiO2 structures cannot be attributed to the formation of additional Si nanocrystallites. The analysis of the dependences of the PL intensities on the excitation level has shown that, in the case of annealing of the irradiated structures, the role of radiative recombination through defect states is significantly increased. It is logical to conclude that the radiation-thermal treatment increases the concentration of defect centers whose energy levels lie rather close to the edges of the optical gap of the nanocrystallites. Recombination of charge carriers involving these defects provides excess light emission in comparison with the non-irradiated samples. The energy of these quanta is slightly (~0.2 eV) lower than the width of the optical gap, which in fact causes the observed red shift of the PL band. In the literature, several candidates for the role of surface defects able to create similar radiative recombination states have been reported: the ESR centers P′ce and P′h, whose concentration correlates with the intensity of the red PL band [26]; silanol groups Si=O [27]; and SiHx complexes (x = 1, 2) [28]. In our case, the latter can be formed during the thermal treatment in a hydrogen ambient in excess concentrations (compared with the non-irradiated structures), because the most probable radiation-induced defects formed in the region of the Si-SiO2 interface are the broken silicon bonds Si3+ [29]. However, the above-mentioned surface defects were invoked to explain the relatively low emission energy of small Si nanocrystallites (with sizes below ~2.8 nm [28]), for which, according to the quantum-confinement effect, the width of the optical gap increases so that the energy states of these defects fall near its edges. In our case (rather large nanocrystals), this does not apply. The observed changes in the emission of the nc-Si/SiO2 structures as a result of the radiation-thermal treatment (the increase of the intensity and the red shift of the PL band, as well as their dependence on the time interval between exposure and annealing) we explain in the following way. The ionizing radiation leads to substantial damage in the Si-nanocrystal/oxide interface region: (i) broken Si bonds (efficient non-radiative recombination centers) are generated, (ii) the radiative recombination centers disappear, and (iii) some metastable defects (which can be partially eliminated even at room temperature) are created. The latter are neither light-emitting nor electrically active, possibly because their energy levels are not located within the nc-Si optical gap; hence, they do not affect the red PL band. However, processes (i) and (ii) cause its remarkable quenching.
The subsequent low-temperature thermal treatment in a hydrogen ambient leads to passivation of the silicon broken bonds (enhancing the intensity of the I1 contribution) and, which is the key point, gives rise to a transformation of the oxide structural arrangement in the vicinity of the metastable defects, thereby changing their energetics as a consequence of, e.g., a change in the mechanical stresses. As a result, the energy level of the metastable defects is shifted and falls inside the nc-Si optical gap, permitting them to take part in radiative recombination. This brings about the considerable growth of the intensity of the I2 contribution and the corresponding red shift of the PL band. The nature of the metastable defects and the role of hydrogen in the structural transformations remain unknown as yet. Recently, we have obtained preliminary results demonstrating a strong enhancement of the red PL band of the exposed nc-Si/SiO2 structures after annealing even in an inert ambient; however, a special temperature regime of the thermal treatment is necessary. Additional research will enable a more specific conclusion on the mechanism of the radiation-thermal treatment.

Conclusions

Summarizing the presented experimental results, we can conclude that thermal annealing in a hydrogen ambient of γ-irradiated nc-Si/SiO2 structures leads to a substantial (more than two-fold in comparison with the annealed initial samples) increase of the photoluminescence intensity, causes a red shift (up to ~50 nm) of the PL band, and quenches the ESR signal attributed to broken Si bonds (including Pb centers). These effects are explained by the passivation with hydrogen of silicon broken bonds on the surface of the Si nanocrystals and by the generation of new light-emitting centers. These centers are most probably formed as a result of thermally stimulated transformation of radiation-induced defects at the nc-Si-SiO2 interface. Their nature and the mechanism of their creation require further investigation and discussion.

Authors' Contributions

IL and MV conceived the idea of the study. VV and MV conducted the deposition, annealing, and radiation-thermal treatment of the films. MV performed the FTIR characterization. YN performed the photoluminescence characterization. VB performed the ESR measurements. VL and VV discussed the results of the experiment and helped in the preparation of the manuscript. IL and MV interpreted the experiments and wrote the manuscript. All authors read and approved the final manuscript.

Competing Interests

The authors declare that they have no competing interests.
Physiologically based microenvironment for in vitro neural differentiation of adipose-derived stem cells

The limited capacity of the nervous system to regenerate spontaneously and the high incidence of neurodegenerative diseases are key factors stimulating research both into the molecular mechanisms of pathophysiology and into putative strategies for inducing neural tissue regeneration. In the latter respect, the application of stem cells appears to be a promising approach, even though controlling their differentiation and maintaining a safe state of proliferation remain troublesome. Here, we focus on adipose tissue-derived stem cells and survey recent advances in promoting their neural differentiation, critically integrating the basic biology and physiology of adipose tissue-derived stem cells with the functional modifications that the biophysical, biomechanical and biochemical microenvironment induces in the cell phenotype. Pre-clinical studies have shown that neural differentiation by cell stimulation with growth factors benefits from integration with biomaterials and with biophysical interactions such as microgravity. All these elements have been reported to furnish microenvironments with desirable biological, physical and mechanical properties. A critical review of current knowledge is proposed here, underscoring that real progress toward a stable, safe and controllable clinical application of adipose stem cells will derive from a synergic multidisciplinary approach involving material engineering, basic cell biology, and cell and tissue physiology.

INTRODUCTION

The majority of neurological diseases are characterized by primary or secondary neurodegeneration accompanied by different degrees of inflammation [1,2]. Parkinson disease [3,4], multiple sclerosis [5], traumatic injury [6] and lysosomal storage diseases with neurological symptoms such as Krabbe disease [7] represent conditions in which the disappearance of neural cells translates into a decline of the patient's quality of life. Therapeutic approaches are mostly symptomatic and not restorative. Owing to their capacity for immunomodulation, for supporting the brain parenchyma and for transdifferentiation, stem cells (SC) are under evaluation in preclinical tests for the promotion of neural regeneration. Among the different sources of SC, adipose stem cells (ASCs) are becoming more and more popular and attract researchers' interest because they are easily accessible by subcutaneous liposuction, can be obtained in large quantities [8], can be cultured for several months in vitro with low levels of senescence [9,10], and are applicable without ethical and political issues [11]. Moreover, ASCs have been shown to possess self-renewal properties and multipotential differentiation toward adipocytes [12], chondrocytes [13,14], osteoblasts [15], myocytes [16], neurocytes [17], and other cell types [18], including neurons [19] and neural cells [20]. All these hallmarks give ASCs potential application in regenerative medicine and clinical studies [21,22]. As regards the transdifferentiation potential of ASCs into neural cells, the transduction properties need to be further characterized.
For a long time, the stimulation of ASCs with growth factor-enriched media has been the most widely applied procedure to induce a specific cell lineage [23], but recently it has become clear that conventional two-dimensional systems do not mimic the cellular connections and the spatial distribution that occur in vivo [24,25], especially when compared with the structural complexity of the nervous system. A reliable solution to this problem resides in three-dimensional (3D) biomaterial scaffolds, which show great potential as engineered neural tissue for cell-based therapy [26,27]. This review integrates the basic physiology of ASCs with the functional modifications of the cell phenotype furnished by enriching the microenvironment with appropriate biophysical, biomechanical and biochemical stimuli. In particular, the effects of chemicals (such as drugs and growth factors), biomaterials and microgravity are discussed both as single and as co-applied parameters for inducing ASCs toward the neural lineage.

DESIRABLE?

SCs are defined as unspecialized cells capable of self-renewal and of giving rise to a wide range of mature cell types [28]. During their proliferation, SCs do not follow the classical asymmetric cell division that generates one SC and one differentiated daughter at each division. Their "potential" resides in generating both more SCs and differentiated daughters [29]. Two types of SCs have been classified according to their origin and their differentiation potential: embryonic stem cells (ESCs) and somatic SCs. ESCs derive from the early blastocyst and the inner cell mass of the embryo and are able to differentiate into cell types of all three germ layers [30]. Even though they represent the most powerful tool for cell therapy in animal models, their application is associated with ethical issues and with a high degree of variation in differentiation potential due to their genetic and epigenetic instability. Somatic SCs are obtained from fetal (after gastrulation) or adult tissues and traditionally differentiate only toward cell types belonging to the tissue from which they originate. Among adult tissues, somatic SCs have been isolated from bone marrow [31], brain [32], blood [33], epidermis [34], skeletal muscle [35] and fat [10]. In each tissue, somatic SCs guarantee the maintenance of tissue homeostasis, but their capacity to replace damaged cells after intense insults is limited by a mostly quiescent status or weak activity. This is the case for neural stem cells (NSCs), located in adult mammals within a cellular niche [36] in the sub-ependymal layer of the ventricular zone and in the dentate gyrus of the hippocampus [37]. However, even though these differentiate in vitro into neurons, astrocytes and oligodendrocytes [38], they are not effective in containing neurodegenerative processes. Adult SCs offer the potential for autologous stem cell donation, reducing the risk of immune rejection and complications [39], and are additionally far removed from ethical and religious debates. We underscore that these advantages represent a solid basis for a cultural renaissance and for scientific efforts to define the best source of adult SCs and to optimize methods for safe, controlled and long-lasting differentiation. In our experience, the first description of a population of cells derived from human adipose tissue with multilineage differentiation and high proliferation capacity in vitro [9] represents the milestone of this scientific awakening and of the effort to overcome specific tissue-linked limitations.
Compared to bone marrow, adipose tissue is obtained with a non-invasive, well-tolerated and safe procedure such as liposuction surgery. Moreover, the yield of obtained cells is relatively higher than from other stem cell sources [40], and digestion of the lipoaspirate permits the isolation of approximately 0.5 × 10^4 to 2 × 10^5 stem cells per gram of adipose tissue [41]. Furthermore, ASCs can be cultured for several months in vitro with low levels of senescence [10]. The latter aspect is essential because it translates into a reduction of permanent post-mitotic states, and the cells remain viable and proliferative over extensive periods during which terminal differentiation can be stimulated. Thus, the critical point is the induction of a stable phenotype not restricted to mesodermal cells but including ectodermal ones. There is widespread disagreement about the pluripotential properties of ASCs, but in our experience the phenotype of ASCs can be addressed toward mesodermal [12,14] and non-mesodermal lineages [20]. Herein lie the scientific efforts to evaluate ASCs as tools for the generation of neural cells to apply in cell therapy strategies and in cell models of various neurodegenerative disorders [42]. Because the non-neural differentiation potential falls outside the scope of the present review, we focus on in vitro methods to induce neural differentiation. A systematic literature search was conducted using PubMed, WoS, and Scopus. Studies providing results for in vitro neural phenotype induction from ASCs and preclinical examination were included. When preliminary tests on animal models of disease had been performed, the most relevant findings were discussed.

DIFFERENTIATION OF ASCs

The experimental conditions for ASC neural induction and differentiation contemplate at least three main categories of microenvironment factors: (1) the elaboration of chemically defined or growth factor-enriched media; (2) the creation of a functionalized three-dimensional structure with biomaterials; and (3) the application of appropriate biophysical forces.

CHEMICALS AND GROWTH FACTORS FOR ASCs NEURAL DIFFERENTIATION

The most widely applied protocols for neural differentiation of ASCs are designed as "run-through" procedures, in which ASCs are sequentially propagated in different media enriched with growth factors or chemicals until they transdifferentiate into the desired phenotype. These approaches can be defined as "physiologically inspired" or "chemical-based", as they try to mimic in vitro the complex environment of the nervous system by adding growth factors or chemicals. In the earlier reports, a two-step method was adopted, in which a phase of cellular preconditioning or induction was followed by the application of differentiation stimuli. As preconditioning media, Safford et al [43] tested enrichment with epidermal growth factor (EGF) and basic fibroblast growth factor (bFGF), whereas Zuk et al [10] used DMEM supplemented with 20% fetal bovine serum and β-mercaptoethanol. After this step, neuronal differentiation was performed with a medium composed of DMEM plus butylated hydroxyanisole, KCl, valproic acid, forskolin, hydrocortisone, and insulin, or with a serum-free, β-mercaptoethanol-enriched medium, respectively. In both experimental conditions, ASCs developed to an early neuronal stage, as no expression of established oligodendrocyte and astrocyte markers or of mature neuronal markers was observed. These two works are milestones for the neuronal differentiation of ASCs, but they lacked electrophysiological tests.
Indeed, a delayed-rectifier-type K+ current (an early developmental ion channel), concomitant with morphological changes and increased expression of neural-specific markers, suggested that ASCs differentiate toward early progenitors of neurons and/or glia after 2 wk in a differentiating medium with isobutylmethylxanthine, indomethacin, and insulin [44]. Pre-induction was also performed with bFGF for seven days [19] or for twenty-four hours [45], followed by incubation with forskolin alone [19] or in combination with N2 supplement, butylated hydroxyanisole, KCl and valproic acid [45]. Despite the similar protocols, the relevant findings were different. Krampera et al [45] reported a transient and reversible differentiation within 48-72 h of culture in basal medium. In the protocol of Jang et al [19], by contrast, the acquired neuron-like functions were demonstrated by the evaluation of voltage-dependent tetrodotoxin (TTX)-sensitive sodium currents, outward potassium currents, and prominently negative resting membrane potentials. These events underscore that the in vitro microenvironment is capable of interfering with the multiple functional ion-channel currents that are physiologically present in undifferentiated ASCs [46]. Another approach showed morphological, immunocytochemical and electrophysiological evidence of stable neuronal differentiation of ASCs. It is based on the induction of floating spheres in serum-free medium in the presence of bFGF and EGF. The spheres were dissociated into single cells and cultured with brain-derived neurotrophic factor (BDNF) and retinoic acid [47]. The possibility of transdifferentiating ASCs with a neural induction medium (high-glucose DMEM, β-mercaptoethanol, and butylated hydroxyanisole) supplemented with 10% autologous platelet-rich plasma (PRP), isolated and prepared from venous blood of the same patient who underwent liposuction, was also investigated [48]. Some reports showed induction toward a neural-like phenotype by media previously conditioned through incubation with neuroblastoma or olfactory ensheathing cells (OECs) [49] or with ASCs induced to secrete neurotrophic factors [50], also in the presence of estrogen [51]. All the major protocols considered for the differentiation of ASCs into neural cells have been reviewed by linking them to the neural markers that should be used in each procedure and to the possible pathways involved in this process [52]. Here we focus on the physiological input, trying to define a profile that links chemicals and growth factors to ASC fate. According to the studies performed so far, EGF and bFGF seem able to induce a useful preconditioning microenvironment for ASC induction toward the ectodermal lineage [53]. The co-administration of EGF and bFGF is essential because, as shown in Table 1, when tested alone on ASCs, EGF acts to promote ASC proliferation through robust phosphorylation of SHC and ERK1/2 [54], to induce migration, to delay senescence, and to maintain differentiation potency via EGF-induced activation of the STAT signaling pathway [55]. bFGF alone, in turn, enhances the proliferation and the hepatocyte growth factor expression ability of ASCs [58], promoting adipogenic [59] and chondrogenic [60] differentiation with contemporaneous inhibition of the osteogenic one [61]. This biochemical behavior occurs because ASCs express EGF and bFGF receptors [54]. Furthermore, they express PDGF receptors α and β. PDGFR-α is highly expressed, but its ligand only slightly increases the proliferation of ASCs.
Therefore, it is reasonable to assume that PDGF-β and PDGF receptor-β signalling is involved primarily in the stimulation of ASCs [62]. PDGF is released from activated platelets on bleeding; thus, stimulation with autologous platelet-rich plasma (PRP) represents an effective method to exert a stimulatory effect on cell proliferation, to increase the yield of ASCs and to reduce the cost of ASC differentiation.

Table 1. Growth factors applied for ASC preconditioning and neural induction.
- EGF. Physiological role: development of the oral cavity, lungs, gastrointestinal tract, epidermis, derma, eyelids and central nervous system [56,57]. Effect on ASCs: promotion of proliferation with delay of senescence and preservation of differentiation potency [55]; EGF and bFGF co-administration limits ASC differentiation abilities by inducing ASCs into an ectodermal lineage rather than the mesodermal one [53].
- bFGF. Profile: non-glycosylated polypeptide of 18 kDa and 155 amino acids in length (heparin-binding growth factor family). Physiological role: stimulator of tissue repair and cellular viability, released from an injured extracellular matrix [64]. Effect on ASCs: enhancement of proliferation, differentiation and hepatocyte growth factor expression ability [58]; induction of the adipogenic [59] and chondrogenic [60] potential, with inhibition of osteogenic differentiation [61].
- PDGF. Profile: dimeric glycoprotein. Physiological role: potent mitogen for cells of mesodermal lineage and stimulator of tissue repair, released from activated platelets on bleeding [65]. Effect on ASCs: supports cell proliferation in vitro and increases ASC yield; promotes neural differentiation in an antioxidant microenvironment [48]; receptor-β signalling is involved primarily in ASC stimulation [62]; stimulation with autologous platelet-rich plasma reduces the cost of differentiation [48].
EGF: epidermal growth factor; bFGF: basic fibroblast growth factor; PDGF: platelet-derived growth factor; ASCs: adipose stem cells.

In the same manner, incubation with conditioned media appears to be a good technique for introducing an enriched cocktail of growth factors, with advantages from both the financial and the practical point of view. Specifically, the secretome of neuroblastoma B104 cells has been reported to contain PDGF-AA, bFGF and IGF-1 [63], whereas brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), neurotrophin-4/5 (NT-4/5), neuregulin, secreted protein acidic and rich in cysteine (SPARC) and matrix metalloproteinase-2 (MMP-2) have been reported as typical elements of the OEC secretome. Among these growth factors, nerve growth factor-β (NGF-β), BDNF and neurotrophin-3 (NT-3) were applied to EGF- plus bFGF-preconditioned ASCs to induce a neural-like phenotype [20]. Thus, the growth factors that physiologically act in rapid tissue turnover [56,64,65] seem to stimulate proliferation and to improve the responsiveness of ASCs toward ectoderm-directed stimuli. From this basic speculation, the major open question concerns what happens in humans with neurodegenerative syndromes, or in which way the loss or increase of expression of a biochemical stimulus may interfere with the ASC phenotype, enhancing or limiting clinical application. Up to now, no scientific result provides an answer. Among the chemical reagents, the most widely applied were antioxidants or compounds active on DNA (Table 2). An antioxidant microenvironment, obtained with N-acetyl-L-cysteine and ascorbic acid-2-phosphate, has been reported to reduce the ASC doubling time and to increase cell number [66].
β-mercaptoethanol sustained the induction of the neural phenotype after pre-induction and differentiation [10], whereas butylated hydroxyanisole promoted neural stem cell survival.

Table 2. Chemical reagents applied for neural induction of ASCs.
Antioxidants:
- β-mercaptoethanol: in the peripheral intestinal nervous system it increases the number of synapses and the vesicle population in nerve terminals [69]; a key element of neural induction media, since the reduction of oxidative stress and of reactive-oxygen-species production could support the neural population; improves meiotic maturation of in vitro cultured oocytes [70].
- Butylated hydroxyanisole: mixture of two isomeric organic compounds; inhibits 17β-estradiol (E2)-mediated oxidative stress and oxidative DNA damage.
- N-acetyl-L-cysteine: synthetic derivative of the endogenous amino acid L-cysteine, precursor of the antioxidant glutathione; stimulator of glutathione synthase; activator of the NMDA1 receptor; when co-administered with ascorbic acid-2-phosphate, it reduces the ASC doubling time and increases cell number compared with bFGF-alone supplementation [66].
- Ascorbic acid-2-phosphate (vitamin C): water-soluble essential vitamin; reducing agent and coenzyme in several metabolic pathways.
Interference with DNA:
- Valproic acid: branched-chain fatty acid acting as a histone deacetylase inhibitor; wide range of neuroprotection [71,72]; inhibitor of glycogen synthase kinase-3 [73]; inducer of chromatin remodeling [74]; promoter of neuron-like cells [75]; in vivo, it improves homing of ASCs via overexpression of CXCR4 and CXCR6 [76].
- 5-azacytidine: analog of the cytidine nucleoside acting as a demethylating agent [77,78]; inducer of cell plasticity and active molecule for cellular differentiation into multiple phenotypes [79]; stimulated cells ameliorate neurological deficits when injected into rats after cerebral ischemia [80].
Immunomodulation:
- Indomethacin: synthetic nonsteroidal indole derivative; inhibitor of COX1/2; component of a neural induction medium applied for two weeks [44].
- Hydrocortisone: glucocorticoid hormone; suppressor of cell-mediated immunity; supports the formation of multi-nucleated myotubes yielding protein markers of myocytes [9].
Energetic balance:
- N2 supplement: chemically defined formulation containing insulin, transferrin, progesterone, putrescine and selenite; supports the in vitro survival and expression of post-mitotic neurons in primary cultures from both the peripheral and the central nervous system.

All these beneficial effects of antioxidants on the acquisition of a neural phenotype should be related to the essential role that a pro-oxidant microenvironment plays in the induction of the adipogenic phenotype [67]. It has recently been demonstrated that oxidative stress and reactive-oxygen-species (ROS) overproduction can drive the activation of molecular pathways able to convert myoblasts into brown adipocytes [68]. Likewise, the B27 reagent, routinely applied in laboratory procedures for the growth and maintenance of neurons or for differentiating SCs into neurons and glial cells, contains tocopherol, vitamin A, catalase, superoxide dismutase and glutathione, which, among other effects, are widely described as oxidative stress-limiting agents. The drugs active on DNA usually applied for the neural induction of ASCs are histone deacetylase (HDAC) inhibitors or methylation inhibitors, such as valproic acid and 5-azacytidine, respectively. Valproic acid is commonly used for the treatment of seizures and bipolar disorder.
Valproic acid has demonstrated a wide range of neuroprotective properties in cellular and animal models of neurodegenerative diseases [71,72], probably through its activity both in inhibiting glycogen synthase kinase-3 (GSK-3) [73] and in enhancing CXCR4 expression [74]. In ASCs, in vitro treatment with valproic acid resulted in the promotion of neuron-like differentiation [75], and in vivo an enhanced homing of ASCs was reported via overexpression of CXCR4 and CXCR6 [76]. The demethylating agent 5-azacytidine, in turn, is commonly employed to treat blood disorders such as myelodysplasia and leukemia [77]. It has historically been described as an inducer of cell plasticity and as an active molecule for cellular differentiation into multiple phenotypes [78,79]. Enrichment of the ASC microenvironment with 5-azacytidine has been effective in improving neural differentiation and in ameliorating neurological deficits after cerebral ischemia in rats [80]. Thus, the media formulation for ASC neural differentiation is very far from a "magic recipe", but its definition, amelioration and reproducibility should necessarily start not only from ASC physiology but also from the analysis of ASC reactivity toward environmental stimuli. We think this aspect is essential, especially with a clinical application in mind.

BIOMATERIALS FOR ASCS NEURAL DIFFERENTIATION

Chemicals and growth factors act as signal transducers to induce ASCs toward a neural-like phenotype, but controlling their differentiation toward a specific and stable lineage requires not only a controlled biochemical microenvironment but also a milieu in which cell-cell and cell-environment interactions are evaluated within a three-dimensional architecture. Biomaterials offer the possibility of delivering stem-cell regulatory signals in a precise and near-physiological manner without excluding 3D space as a parameter.

Table 3. Biomaterials for inducing a neural phenotype of adipose stem cells.
- Chitosan films: naturally derived polysaccharide from chitin [81,82]; spontaneous cell organization into a 3D architecture; animal test: higher cellular retention ratio of ASC spheroids after intramuscular injection in nude mouse [81]; limitation for clinical use: not declared.
- Chitosan and gelatin: elastic-dominant, porous scaffold; conditioning toward a neuron-like phenotype; animal test: better repair in a mouse model of traumatic brain injury [83]; limitation for clinical use: not declared.
- Chitosan and silk: complex structural framework; efficient as a delivery vehicle for ASCs; animal test: proposed as nerve grafts in the regeneration of the injured rat sciatic nerve [84]; limitation for clinical use: not declared.
- Collagen gel: engineered neural tissue; cells must be aligned to the collagen fibres; animal test: supported robust neural regeneration of the injured rat sciatic nerve [85]; limitation for clinical use: not declared.
- Albumin: serum-derived porous scaffold; promotion toward neurons; animal test: filler effect on the spinal cord cavity in animal models of spinal cord injury [86]; limitation for clinical use: not declared.
- Matrigel: commercially available hydrogel; good cell encapsulation and delivery [87]; animal test: mouse models of spinal cord injury; limitation for clinical use: not applicable because of its isolation from the basement membrane of a mouse sarcoma.
- Alginate: hydrogel; neurosphere encapsulation and neural promotion [88,89]; good biocompatibility profile.
- Nanosized graphene oxide-laminin hybrid patterns: engineered tissue; efficient neuron-like cell differentiation [90].
ASCs: adipose stem cells; 3D: three-dimensional.
The effectiveness of biomaterials in driving ASC differentiation has already been reported for differentiation into epithelial cells. The geometric dependence of the ASC phenotype in fibrin culture was emblematic and noteworthy [81]. In that well-conducted study, the experimental plan clearly demonstrated that the growth factor-enriched medium increased ASC growth and chemotaxis, but differentiation into epithelial cells was effective only in a 3D fibrin structure. Under this condition, the authors identified the formation of a bilayer of two segregated cell phenotypes: a superficial layer of ASC-derived epithelial cells and a deeper layer of mesenchymal cells. This evidence strongly suggests that biomaterials may allow control of the proliferation and differentiation not only of ASCs but also of their neural derivatives, since one may hypothesize biomaterials that act by reducing ASC propagation after differentiation and by maintaining a niche of ASCs with a high level of stemness, to be reprogrammed according to tissue necessity. Table 3 summarizes the principal biomaterials for neural differentiation of ASCs investigated in preclinical studies. The spontaneous formation of three-dimensional spheroids has been reported using chitosan, a naturally derived polysaccharide from chitin [82,83]. ASC spheroids formed on chitosan films because pure chitosan cannot support adequate cell adhesion owing to its biophysical parameters. This property enhanced spontaneous cell organization into a 3D architecture that permitted close association of the cells and a transmission of signal cues easier and faster than in monolayer cultures. Considering that chitosan lacks biological activity, the upregulation of pluripotency marker genes [82], the transdifferentiation efficiency into neural-like cells [83] or neurons [82] in vitro, and the higher cellular retention ratio of ASC spheroids after intramuscular injection in nude mice [82] can be ascribed to the 3D organization of the cells. Moreover, chitosan combinations with gelatin or silk were tested in animal models of neurodegeneration. In the case of elastic-dominant, porous scaffolds made from photocurable, chemically modified chitosan and gelatin, ASCs were conditioned toward neuron-like cells capable of better repairing injury in a traumatic-brain-injury mouse model [84]. The chitosan/silk fibroin scaffold, in turn, has been proposed as a nerve graft for its efficiency as a delivery vehicle for ASCs and as a structural framework in the regeneration of the injured rat sciatic nerve [85]. In the same animal model, an engineered neural tissue (EngNT) composed of collagen gel and aligned rat ASCs supported robust neural regeneration across the gap and into the distal stump [86]. Similar efficiency was proved in animal models of spinal cord injury after implantation of ASCs seeded on a serum-derived albumin scaffold, a porous biomaterial that completely filled the spinal cord cavity and permitted the passage of descending and ascending neurons [87]. Similar results have been obtained using Matrigel [88], a commercially available hydrogel that, because of its isolation from the basement membrane of a mouse sarcoma, is unlikely to be approved for clinical use. When selecting hydrogels, patient safety should always be considered in order to perform translational research.
For example, the alginate hydrogel, which has been used to encapsulate neurospheres obtained from the neural differentiation of ASCs [89,90], has a good biocompatibility profile. More recently, nanosized graphene oxide (NGO)-laminin hybrid patterns were reported to support ASC transdifferentiation into neuron-like cells at a rate up to 30% higher than in the control group. In the same work, it was shown that cells grown on NGO grid patterns were more differentiated than those grown on PLL-coated Au or on NGO-coated Au [91]. Taken together, these results strongly suggest that biomaterials provide a benefit for the neural differentiation of ASCs: they should mimic the shape of the interconnected neuronal network and the nanoscale topographical features of the extracellular matrix, as already reported for the nanoengineered polystyrene surface containing a nanopore array-patterned substrate [92]. Moreover, biomaterials have been applied to realize an electrical cell-culture system for the selective induction of neurons. This in vitro technique is realized by seeding cells on a conductive polypyrrole/chitosan membrane with a thickness of 0.4 mm. The membrane can be connected to an electric stimulator by two thin platinum electrodes; in this way, a defined electric field intensity (V/cm) can be applied to the cell culture. It has been employed on Schwann cells [93], on OECs [94] and on mouse bone-marrow stromal cells [95] to induce and sustain their phenotype regulation. Yang et al [96] applied it to ASCs, testing the possibility of promoting neuronal differentiation by using electrical stimulation and Nurr-1 gene transduction, alone and in combination. The results clearly showed that both the electrical extracellular microenvironment and the intracellular patterning profile were capable of promoting neuronal differentiation in ASCs, but the best result was achieved by the synergistic combination of electrical forces and genetic modification.

NEURAL CELLS FROM ASCs

Biophysical forces, particularly electro-mechanical coupling and deformation forces, are important physiological regulators of the nervous system. Microgravity, as a mechanical factor, is increasingly under investigation, especially for its implications for the health of spaceflights and of astronauts in orbit. As summarized by Mariggiò and Fanò-Illic [97], the effects of microgravity are not fully characterized and contrasting findings have been reported: in some cases, cell differentiation and tissue assembly were not affected by microgravity, while in other cases alterations of cell morphology and function were found. For this reason, a three-dimensional glia-neuron co-culture cell model has been proposed as a useful tool for investigating microgravity as a new environment in which to successfully manipulate cell functions and phenotype. Generally, in monolayer cultures, an improvement of stem-cell differentiation into neurons has been reported for PC12 neuron-like cells [98] and for ESCs [99]. As for ASCs, the effect of microgravity is very little known. Up to now, only a single, recent study has tried to define a mechanistic link between microgravity and the neural induction of ASCs [100]. In that experimental setup, it was found that microgravity stimulation with a clinostat instrument increased ASC differentiation toward neural-like cells in the presence of the classical chemically defined, growth factor-enriched medium.
Even though differentiation was demonstrated by evaluation of neural cell lineage markers, no data on the specific effect of microgravity were produced. Thus, it remains unclear whether microgravity alone can modify the cell phenotype in the absence of growth factors and biochemical stimuli.

CONCLUSION

Since ASCs can be readily isolated, expanded and transplanted, their application in cell-based therapies is increasingly under investigation. The differentiation of ASCs was initially considered restricted to mesodermal cells, but recent advances demonstrate the ability of ASCs to transdifferentiate, acquiring cell phenotypes different from the mesenchymal one, including ectodermal phenotypes. In the past decade, most research has focused on driving ASCs toward neural-like cells in order to evaluate their potential application in neurodegenerative disorders. Different strategies have been adopted. Among them, cultures in chemically defined or growth factor-enriched media were applied to stimulate in vitro the physiological process of neural induction. This dynamic event involves many biological processes and signaling events that can be potentiated by the elaboration of an appropriate biophysical and biochemical microenvironment, and it should be evaluated in a 3D architecture. In this context, biomaterials provide a sophisticated microenvironment. Even though terminally differentiated, functional neurons have yet to be achieved, the reported data from animal tests have shown that some biomaterials have great potential as nerve grafts. Synergistic work between cell and tissue physiologists and biomaterial production experts appears essential for the future development of ASC-based clinical therapeutics for use in neurodegenerative disorders.
Critical study of the Asavarishta preparations of the Brhattrayee.

This study deals with the Asavarishta preparations of the Ayurvedic system of medicine and scans various classical texts to find out the different types of constituents required for their preparation, along with their proportions, the method of preparation, the time required to complete the process, the fermentation pots, the fermenting materials, and the place and time (season) of fermentation, with a view to developing certain common norms for their preparation.

INTRODUCTION

'Asavas' and 'Aristhas' are the most important Ayurvedic preparations prepared through a fermentation process. In the Ayurvedic system of medicine they have been very popular since the time of the 'Brhattrayee'. In Ayurveda the 'Brhattrayee' is considered the most authentic and popular literature and consists of the three most important texts, viz. the 'Caraka Samhita', the 'Susruta Samhita', and the 'Astanga Samgraha' and 'Hrdaya'. Of the three, the first two texts (i.e. the 'Caraka' and 'Susruta Samhita') belong to a comparatively earlier period, i.e. they may be placed somewhere between 600 B.C. and 300 B.C., and they represent two different disciplines, i.e. 'Kayachikitsa' (medicine) and 'Salya Tantra' (surgery) respectively. The third text of this group belongs to a somewhat later period, i.e. 500 A.D. to 700 A.D. It is not an original text; rather, it is compiled on the basis of the ideas and materials of the first two texts. Thus, historically it may be said that the 'Caraka Samhita' is the earliest and the 'Astanga Samgraha' and 'Hrdaya' are the latest texts of this series. Hence the subject matter dealt with in these texts also has historical importance. It is important to mention here that in the texts of the 'Brhattrayee' there is a detailed description of fermented preparations and their technology, along with other aspects of Ayurveda. There are a number of fermented preparations, among which the 'Asavaristhas' are the most important from the therapeutic point of view. In addition, 'Asavaristhas' are also considered superior to the other types of preparation in the sense that they are more palatable, better absorbed and more quickly effective, because of their higher drug concentration, sweet taste and liquid form. Further, they can be preserved for a longer period than other herbal preparations due to the presence of self-generated alcohol. It may further be pointed out that there are three kinds of drugs from the point of view of their sources of origin, i.e. 'Jangama', 'Audbhid' and 'Parthiva'; more or less all three types are found used in the 'Asavaristha' preparations. The present study was undertaken to collect various details regarding the fermentation process and its technology as adopted in ancient times, i.e. during the period of the 'Brhattrayee', and to examine whether there was a systematic chronological development of the technology or whether it remained static over a span of more than 1200 years. In this study all the references concerning 'Asavaristha yogas' were collected from the texts of the 'Brhattrayee' and studied critically to find out the different types of constituents required for their preparation along with their proportions, the method of preparation, the time required to complete the process, the fermentation pots, the fermenting materials, and the place and time (season) of fermentation.
Efforts were also made to explore the possibility of drawing conclusions, on the basis of the descriptions available in these texts, with regard to preparations of a fermented nature (Asavaristhas), with a view to evolving some common principles for their preparation.

MATERIALS & METHODS

Fermented preparations (Asavaristhas) and their technology.

Caraka Samhita: In the 'Caraka Samhita' the term 'Asava' is explained as 'Sandhanarupatvat', which means that, due to the involvement of the 'Sandhana' (fermentation) process, these preparations are known as 'Asavas'. It is further mentioned in the same chapter that there are nine 'Yonies' (source materials) of 'Asavas', i.e. the 'Asavas' are prepared from the following nine sources: Dhanya, Phala, Mula, Sara, Puspa, Kanda, Patra, Tvak and Sarkara. It is also mentioned in the 'Caraka Samhita' that eighty-four 'Asavas' can be prepared from these sources. In 'Caraka' Cikitsa, 24th chapter, there is a mention of 'Madatyaya roga' and of its cikitsa (treatment), which indicates that during that period alcoholic preparations were frequently prepared and used, and that as a result persons using them in excess became victims of 'Madatyaya roga' and needed its treatment. Besides this, in Caraka Cikitsa there are twenty-six 'Asavaristha yogas' which have been recommended for the treatment of various diseases; further, in Kalpasthana there are four more 'Asavaristha yogas', so the total comes to thirty. As regards the technology, a few terms are found used in the 'Caraka Samhita' in the context of different 'Asavaristha yogas'. These references indicate several things involved in the preparation technique, viz.:

1. Specific types of pots are needed, i.e. the pot made of mud should be anointed or smeared with ghee. In some cases a medicinal paste is also recommended for anointing these pots. They should be well cleaned before use. In some cases only a 'Snigdha ghata' is mentioned, while in other cases honey is recommended for anointing along with medicinal paste and ghee. Fumigation of the pot is also advised.

2. Regarding the place, it is mentioned that the pot filled with material should either be kept under the sky or inside a mass or heap of barley corn or paddy.

3. Regarding the duration, three or four terms have been used, viz. 'Dasaratram' or 'Dasaham', i.e. 10 days; 'Paksam' or 'Masardham', i.e. 15 days; and 'Masam', i.e. one month. It is further mentioned that in winter this limit should be taken as twice, or double, the original (summer) limit.

As regards the signs of completion of the process, the texts mention these as 'Jata rasam' or, in some cases, 'Vyaktamla Katukam Jatam', which means that the preparation should develop a specific type of taste, and when that has developed the preparation may be taken as complete. In the case of Takraristha, 'Caraka' has mentioned that its taste should be predominantly Amla (sour) and Katuka (pungent). In this way many details regarding the technology are available in this text.

Susruta Samhita

Here also many details regarding fermented preparations are available. (a) In 'Susruta' Sutra, 44th chapter, there is a mention of the term 'Aristha', viz. 'Aristho Dravya Samyoga Samskaradadhiko gunaih', i.e. 'Aristha' possesses better properties and effects than any other preparation because of the 'Dravya Samyoga' (combination of different types of drugs) and the 'Samskara' (special processing). While commenting on this, 'Dalhana' mentioned that in 'Aristhas' there is a predominance of 'dravyas' (drugs), whereas in 'Asavas' 'drava' (liquid) is more important.
Thus 'Dalhana', the commentator on 'Susruta', was the first scholar to describe the difference between the 'Asavas' and 'Aristhas', the fermented preparations of therapeutic importance. (b) In 'Susruta' Sutra, 45th chapter, there is a description of the 'Madya Varga', which includes twenty-seven types of fermented preparations, among which 'Asavas' and 'Aristhas' also figure. This text, for the first time, classified the fermented preparations into 'Madya' and 'Sukta' groups on the basis of their alcoholic and acidic contents. In this text twenty-one 'Asavaristha yogas' are mentioned, but a detailed description of the contents and the method of preparation is found for only six 'Asavaristha yogas'.

ASTANGA SAMGRAHA AND ASTANGA HRDAYA

Here also some details regarding the fermented preparations and fermentation technology are available.

Astanga Samgraha

In 'Astanga Samgraha' Sutra, 6th chapter, there is a description of five 'Madyakaras' (sources of alcoholic preparations), namely Draksa, Iksu, Madhu, Sali and Sasthi. All of these are rich in either sugar or carbohydrate content, which is highly essential for the production of alcohol during the fermentation process; this is definitely a new addition of this text to knowledge on the subject and may be taken as the outcome of the deep scientific thinking of its author. In this text there is a description of seventeen 'Asavaristha yogas'.

Astanga Hrdaya

In this text a number of alcohol-containing preparations are mentioned and recommended for use in the context of 'Ritucarya', i.e. in Hemanta ritu, Gauda, Accha Sura and Sura; in Vasanta ritu, Asava, Aristha, Sidhu and Mardwika; and in Varsa ritu, Madhwaristha etc. In this text only eight 'Asavaristha yogas' are found mentioned. Thus, in both of the later texts a total of twenty-five 'Asavaristha yogas' are described. If all the 'Asavaristha yogas' of the 'Brhattrayee' are combined, the total number of such yogas comes to seventy-six (Appendix I), of which twenty-six are 'Asava yogas'. The detailed study of these 'Asava' and 'Aristha yogas' further revealed that during that period they were not differentiated on the basis of their method of preparation, i.e. on the basis of boiling or not boiling, as there are many 'Asava yogas' which are prepared by boiling and many 'Aristha yogas' prepared without boiling. The 'Asavas' and 'Aristhas' prepared with and without boiling, as mentioned in the texts of the 'Brhattrayee', are shown in the following table.

Medicinal Drugs

For preparing 'Asavas' and 'Aristhas' it is seen that all three types of drugs (i.e. of herbal, mineral and animal origin) are found used in some way or other. Though herbal drugs are used abundantly in these preparations, the other types are also not uncommon. Among the herbal drugs, 'Caraka' has advised the use of the following parts of the plants for the Asavaristha preparations: roots, fruits, seeds, flowers, leaves, stem bark, hard wood, gum/resin etc. In addition to the above, 'Susruta' has recommended the use of the ash of certain drugs for this purpose; in this context 'Palasa bhasma' and 'Tilanala bhasma' deserve mention. Among the mineral drugs there is a mention of 'Loha Curna' (iron powder) to be used in some preparations. As regards animal products, ghee and 'Madhu' (honey) deserve mention. Ghee is generally used to anoint the vessel or pot to be used for the purpose, while honey is used as a sweet substance and also for anointing the pots in a number of preparations.
Liquids

Among the liquids, though many are recommended, water is the commonest of all. Other liquids include plant juice, fruit juice, decoctions, buttermilk, curd water, gomutra, kanji and dhanyamla. Of these, water, decoctions and juices are more commonly used than the others (i.e. the acidic and alkaline liquids). In some cases two or three liquids are used together. As regards their proportions, nothing definite can be said on the basis of the descriptions in the texts of the 'Brhattrayee' group.

Among the sweet substances, it is important to mention here that 'Guda' is the most commonly used, i.e. out of the 76 Asavaristha yogas it is recommended in 33 cases. Honey stands next to 'Guda', being recommended in 23 cases. Sugar and its kinds come next to honey, being found used in 11 preparations. It may also be mentioned here that the three main sweet substances (i.e. 'Guda', 'Madhu' and 'Sarkara') are used either separately or in combination; in some cases all three sweet substances are found used together. It may further be pointed out that honey is also used as a 'Lepana dravya' (anointing material) in some preparations. As regards their proportion, no definite proportion of the sweet substance can be fixed on the basis of the descriptions in the texts of the 'Brhattrayee'. The minimum and maximum percentages found mentioned in the different texts are shown in Table 2. It is evident from Table 2 that, according to the 'Brhattrayee', the minimum percentage recommended is 20% while the maximum percentage is up to 40%, these being mentioned in the Caraka and Susruta Samhitas respectively.

Praksepa Dravyas

The 'Praksepa dravyas' are also important constituents of these preparations, but they are not recommended for all the preparations of the 'Asavaristha' group. It is noticed that 'Praksepa dravyas' occur in 12 of the preparations mentioned by 'Caraka'; in 'Susruta' they are used in only 6 yogas, in 'Astanga Samgraha' only 10 preparations contain them, and in 'Astanga Hrdaya' only 4 preparations contain 'Praksepa dravyas'. Thus, out of 76, only 32 'Asavaristha yogas' contain 'Praksepa dravyas'. 'Dhatakipuspa', which is also considered one of the 'Praksepa dravyas', was recommended for the first time by the 'Astanga Samgrahakara' for addition to the preparations of 'Asavaristhas'. Among the 'Praksepa dravyas' the 'Sugandhi dravyas' (fragrant drugs) predominate, but other drugs are not uncommon. According to the texts of the 'Brhattrayee', no definite proportion can be fixed for the 'Praksepa dravyas' either.

(B) Sandhana Process

Mix all the constituent materials properly in the liquids. The mixture should then be filled into the well-prepared and recommended containers and kept in a suitable place recommended for the purpose, for a specified time limit, to allow the alcoholic fermentation to start and to proceed smoothly. At the end, i.e. when the fermentation has stopped and the necessary organoleptic characters have developed, the fermented liquid should be filtered and kept for some time to allow the sediments to settle to the bottom. The supernatant clear fluid is then collected for use. The terms used for the fermentation containers indicate four things, viz.:

1. They should be made either of earth or of metals such as iron and copper.

2. They should be prepared specifically for the purpose, viz. 'Ghrta bhavita' (smeared with ghee), 'Madhulipta' (pasted with honey), and fumigated.

3. They should be well cleaned and disinfected, i.e. by applying certain pastes and by fumigation.

4. They should also be strong enough to withstand a long duration of use.
Nowadays wooden and plastic containers are also being used for preparing Asavaristhas.

TYPES OF CONTAINERS

In ancient times only earthen and/or metallic pots were recommended.

PREPARATION OF THE CONTAINERS

These containers need some kind of preparation before being used for fermentation.

A) BHAVANA/LEPANA (PASTING/ANOINTING)

A few terms appearing in the 'Brhattrayee' indicate that the containers should either be anointed with ghee or pasted with honey or with some other medicinal pastes. This is done with a view either to minimizing the porosity of the earthen pots or to strengthening or disinfecting them before their actual use in the fermentation process.

B) DHUPANA (FUMIGATION)

In some cases 'Dhupana' of the containers is also indicated by all the texts, and for this Sarkara, Agaru, Rala, Candana and Guda are recommended. This is done either to disinfect or to perfume the containers before use.

C) PLACE OF FERMENTATION

The texts indicate that the fermenting pots should be kept inside a mass or heap of barley corn or paddy, which may contribute either to the safety of the pot or to the prevention of the effect of temperature variation. It seems that the ancient scholars were aware that variation in temperature is likely to affect the fermentation process; hence they evolved the above technique to prevent the effect of temperature variation, making full use of the facilities available and the knowledge at their command.

D) SANDHANA AVADHI (DURATION OF FERMENTATION)

This study further revealed that the ancient scholars seem to have given due consideration to this point also, i.e. how much time is required for completion of the fermentation process. The literary review revealed that it varies from season to season and from preparation to preparation. According to the texts, the minimum time limit is seven days, while the maximum time limit is about six months. There seems to be a difference of opinion among the texts on this point: in the 'Caraka Samhita' the minimum time limit is seven days, which has been followed by the 'Susruta Samhita' and the 'Astanga Samgraha' also, but in the 'Astanga Hrdaya' it is 15 days. As regards the maximum time limit, 'Caraka' mentions 1½ months, 'Susruta' 4 months, 'Astanga Samgraha' 6 months and 'Astanga Hrdaya' one month. Further, there are some yogas (preparations) in which iron is advised to be added; in all such cases the maximum time limit advised is 'till the metal dissolves completely in the solution', all the texts using the term 'Aloha Samksayat' for this, which means until the whole of the metal is lost, dissolved or passed into the solution completely. It may further be mentioned in this connection that where iron is recommended it should be used in powder form only, which is obtained by heating thin sheets of iron to strong heat and then dipping them into the specified decoctions several times. No metal other than iron is recommended for this purpose. The minimum and maximum time limits described in the different texts for completion of the fermentation process are shown in the following table.

Text               Minimum    Maximum
Caraka Samhita     7 days     1½ months
Susruta Samhita    7 days     4 months
Astanga Samgraha   7 days     6 months
Astanga Hrdaya     15 days    1 month
Reassembly of Phospholipase C-β2 from Separated Domains

Phosphatidylinositol-specific phospholipase C-βs (PLC-βs) are the only PLC isoforms that are regulated by G protein subunits. To further understand the regulation of PLC-β2 by G proteins and the functional roles of PLC-β2 structural domains, we tested whether the separately expressed amino and carboxyl halves of PLC-β2 could associate to form catalytically active enzymes as two polypeptides, and we explored how the complexes thus formed would be regulated by G protein βγ subunits (Gβγ). We expressed cDNA constructs encoding PLC-β2 fragments of different lengths in COS-7 cells and demonstrated by coimmunoprecipitation that the coexpressed fragments could assemble and functionally reconstitute an active PLC-β2. The pleckstrin homology domain of PLC-β2 was required for its targeting to the membrane and for substrate hydrolysis. Reconstituted enzymes that contained the linker region joining the two catalytic domains were as active as or more active than wild-type PLC-β2. When the linker region was removed, basal PLC-β2 enzymatic activity increased further, suggesting that the linker region exerts an inhibitory effect on basal PLC-β2 activity. The reconstituted enzymes, like wild-type PLC-β2, were activated by Gβγ; when the C-terminal region was present in these constructs, they were also activated by Gαq. Gβγ and Gαq activated these PLC-β2 constructs equally in the presence or absence of the linker region. We conclude that the linker region is an inhibitory element in PLC-β2 and that Gβγ and Gαq do not stimulate PLC-β2 by easing the inhibition of enzymatic activity by the linker region.

All ten mammalian PLC isozymes identified to date are modular proteins. As shown in Fig. 1, the PLCs contain a pleckstrin homology (PH) domain, four EF-hand motifs, a catalytic domain (composed of X and Y regions separated by a linker region) and a C2 domain. PLC-βs have an additional 400-residue C-terminal region, which is required for activation by Gαq (9,10) and may also contribute to membrane localization (11). Among the PLC isozymes, only members of the PLC-β family (PLC-β1-3) are activated by Gβγ. Part of the Gβγ-binding site on PLC-β2 is located in the Y region, as shown by cross-linking (12) and copurification (13). Gβγ can also bind to the isolated PH domain of PLC-β2 (14). Indirect evidence suggests that this interaction may lead to activation of PLC-β2 (15). Despite this progress, the mechanism whereby Gβγ activates PLC-β2 is still unclear. It seems unlikely that PLC-β2 activation by Gβγ involves translocation of PLC-β2 to the plasma membrane, because Gβγ does not significantly alter the binding affinity of PLC-β2 for phospholipid vesicles (16-18). The goal of this study was to further understand the mechanism of substrate hydrolysis by PLC-β2, its regulation by G protein subunits, and the functional contribution of some of the PLC-β2 domains to enzyme function. Although some PLC domains are homologous to known domains in other proteins, and highly homologous domains can be found among the various PLC isoforms, these domains may have different functions in the different isoforms. For example, PH domains are found in many proteins, but only some of them can bind WD-repeat-containing proteins (such as Gβγ) (for review see Ref. 19). In addition, the PH domains of the various PLC isoforms show very different affinities for phospholipids (14,20,21).
It is therefore necessary to test individual proteins to find out which role(s) a particular domain plays in a particular context. The roles of other specific PLC domains in basal and ligand-regulated PLC catalytic activity have not yet been clearly identified. For example, the role of the linker region between the X and Y regions of the catalytic domain is less well understood. In the crystal structure of the PLC-δ1 molecule, the X and Y regions are tightly associated to form a triose phosphate isomerase barrel-like structure (22). Although the X and Y regions are well conserved among the PLC isozymes, the linker region shows little similarity among them. For example, PLC-γs have a long linker region that contains two SH2 domains, one SH3 domain, and an additional PH domain, whereas the linker regions in PLC-β and PLC-δ are less than 100 residues long and contain no obvious structural domains. In the crystal structure of PLC-δ1, the linker region shows a disordered structure (22). The linker region is not essential for PLC catalytic activity. Coexpression of the N- and C-terminal fragments of PLC-γ1 lacking the linker region produces a catalytically active complex with an activity substantially higher than that of the holoenzyme (23). Trypsin digestion of PLC-δ1 cleaves the enzyme at the linker region and generates two associated fragments that retain catalytic activity (24). Proteolysis at or near the linker region of a truncated form of PLC-β2, after it had folded into an active enzyme, suggested that the linker region serves as an inhibitory element (25). In that study, the linker region was cleaved but not removed, and the exact site of tryptic or V8 protease cleavage was not determined. Because the authors used a truncated form of PLC-β2 that was not stimulated by Gαq, it was not possible to determine the effect of proteolysis of the enzyme in or near its linker region on Gαq-dependent PLC activity. It is usually straightforward to analyze the contributions of domains at the N or C termini of a protein, because truncated forms of the enzyme can be made, and these truncated proteins are often active. In addition to analyzing the role of the N-terminal PH domain and the C terminus, we were particularly interested in the linker region. Because it is often difficult to study the function of internal domains owing to misfolding of proteins with internal deletions, we attempted to reconstitute PLC-β2 from two separate fragments, each containing one of the two catalytic X and Y regions. We tested whether or not the N- and C-terminal halves of PLC-β2 could associate to form catalytically active enzymes when expressed as two separate polypeptides, and whether reconstituted PLC-β2 could still be activated by Gβγ and Gαq. Using PLC fragments of different lengths, we examined the functional contributions of the PH domain, the linker region and the C-terminal region to basal activity and to Gα- and Gβγ-mediated PLC-β2 activation. Answers to these questions will expand our understanding of the mechanism of substrate hydrolysis by PLC-β2 and its regulation by G proteins.

EXPERIMENTAL PROCEDURES

cDNA Constructs - Plasmids containing cDNA sequences encoding various fragments of human PLC-β2 were constructed by polymerase chain reaction. Full-length wild-type PLC-β2 in the pMT2 vector (a gift from M. Simon of the California Institute of Technology, Pasadena, CA) was used as template. The primer at the 5′-end included a HindIII site, a Kozak sequence (GCCGCC), and a start codon.
The primer at the 3′-end included an EcoRI site and a stop codon. To add a FLAG or hemagglutinin (HA) epitope tag to a construct, one of the two primers contained the sequence encoding the epitope. The polymerase chain reaction products were digested with HindIII and EcoRI and cloned into a HindIII/EcoRI-cut pcDNA3 vector. All sequences were confirmed by DNA sequencing.

Cell Culture and Transfection - COS-7 cells were maintained in complete growth medium (Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum, 50 units/ml penicillin, and 50 µg/ml streptomycin). Cells in 6-well plates (for immunoprecipitation) or 12-well plates (for PLC activity assay) were transfected using LipofectAMINE (Life Technologies). Prior to transfection, cells were transferred to Opti-MEM I medium (Life Technologies) for 1 h. The medium was replaced with 1 ml (6-well plates) or 500 µl (12-well plates) of Opti-MEM I containing preformed DNA-LipofectAMINE complexes. The final concentration of DNA in the medium was 1.5 or 2.5 µg/ml. The exact amount of each plasmid was as indicated in the legends to individual figures. The ratio (w/w) between DNA and LipofectAMINE was always kept at 1:8. After 5 h, 2 ml (6-well plates) or 500 µl (12-well plates) of complete growth medium was added to each well. The medium was replaced with complete growth medium the next day.

35S Metabolic Labeling and Immunoprecipitation - Forty-eight hours after transfection, cells in 6-well plates were starved for 2 h in 2 ml of starvation medium (RPMI 1640 without glutamine, methionine, and cysteine (Sigma) supplemented with 10% dialyzed, heat-inactivated fetal bovine serum and 2 mM L-glutamine). The cells were then metabolically labeled in 1 ml of starvation medium containing 150 µCi of [35S]-Express Protein Labeling Mix (NEN) for 4 h. The cells were rinsed with PBS and lysed in 1 ml of lysis buffer (50 mM HEPES-Na (pH 7.5), 6 mM MgCl2, 1 mM EDTA, 75 mM sucrose, 3 mM benzamidine, 1% (v/v) Triton X-100, and 1 mM dithiothreitol) at 4 °C for 30 min. The cell lysates were precleared with 30 µl of protein G-agarose (Roche Molecular Biochemicals) or 50 µl of protein A-Sepharose (Sigma) slurry (50% (v/v) in PBS) for 30 min. After a 10-min centrifugation, the supernatants were mixed with 2 µl of M2 anti-FLAG antibody (Sigma) or anti-HA-epitope antibody 12CA5 (Babco) at 4 °C overnight. The samples were centrifuged at 15,000 × g for 15 min. The supernatants were then mixed with 30 µl of protein G-agarose or 50 µl of protein A-Sepharose slurry for 1.5 h. The resins were washed twice at 4 °C for 15 min each with 1 ml of lysis buffer containing 150 mM NaCl and once at room temperature for 15 min with 1 ml of PBS. 25 µl of 3× sample buffer (187.5 mM Tris-Cl (pH 6.8), 6% SDS, 30% glycerol, 0.003% bromphenol blue) was added to each of the final pellets, and 20 µl was loaded onto an SDS-PAGE gel. The gel was Coomassie Blue-stained, destained, treated with EN3HANCE (NEN), dried, and used for autoradiography with intensifying screens at −80 °C.

Inositol Phosphate Production in COS-7 Cells - PLC activity was analyzed as production of inositol phosphates (26,27). Twenty-four hours after transfection, the medium was replaced with 1 ml of inositol-free DMEM supplemented with 5% fetal bovine serum. Two hours later, the medium was again replaced with the same medium containing 2 µCi of myo-[3H]inositol. After 15 min, 10 µl of 1 M LiCl was added to each well (the final LiCl concentration was 10 mM).
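As a quick sanity check on the quantities quoted in this protocol, the short Python sketch below recomputes the DNA-LipofectAMINE amounts for a 12-well transfection and the final LiCl concentration from the numbers given above. It is a hypothetical helper added for illustration, not part of the original methods; the roughly 1 ml working volume used for the LiCl dilution is an assumption taken from the text.

```python
# Hypothetical helper to sanity-check the quantities quoted in the protocol.

def transfection_mix(volume_ml: float, dna_conc_ug_per_ml: float, lipid_ratio: float = 8.0):
    """DNA and LipofectAMINE amounts for one well (DNA:lipid = 1:8, w/w)."""
    dna_ug = volume_ml * dna_conc_ug_per_ml
    return dna_ug, dna_ug * lipid_ratio

def dilution_mM(stock_M: float, added_ul: float, well_ul: float) -> float:
    """Final concentration (mM) after adding a stock solution to a well (C1V1 = C2V2)."""
    return stock_M * 1000.0 * added_ul / (well_ul + added_ul)

# 12-well plate: 500 ul of Opti-MEM I at 1.5 ug/ml DNA
dna, lipid = transfection_mix(0.5, 1.5)
print(f"{dna:.2f} ug DNA + {lipid:.1f} ug LipofectAMINE per well")  # 0.75 ug + 6.0 ug

# 10 ul of 1 M LiCl into an assumed ~1 ml well -> ~10 mM, as stated in the text
print(f"LiCl: {dilution_mM(1.0, 10.0, 1000.0):.1f} mM")  # ~9.9 mM
```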
No difference in the uptake/incorporation of myo-[3H]inositol was found between cells incubated with LiCl-containing medium for 1 h and for 24 h. Forty-eight hours after transfection, the cells were washed with 1 ml of PBS and extracted twice for 30 min each with 500 µl of 20 mM formic acid. The extracts were combined and neutralized to pH 7.5 with a solution containing 7.5 mM HEPES and 150 mM KOH. The neutralized extracts were loaded onto 0.5-ml AG1-X8 (Bio-Rad) anion exchange columns. Prior to use, the columns were washed with 2 ml of 1 M NaOH and 2 ml of 1 M formic acid and equilibrated with H2O to neutrality. After the extracts were loaded, the columns were washed with 5 ml of H2O and 5 ml of 5 mM Borax/60 mM sodium formate. The inositol phosphates were eluted with 3 ml of 0.9 M ammonium formate/0.1 M formic acid. The eluates were counted in a scintillation counter.

Subcellular Fractionation - COS-7 cells cultured in 6-well plates were transfected and metabolically labeled with [35S]-Express Protein Labeling Mix as described above. The cells were washed with PBS and detached by incubation in 500 µl of Trypsin-EDTA solution (1×) (Sigma) at 37 °C for 1 min. After mixing with 2 ml of DMEM/8% fetal bovine serum, the cells were collected by centrifugation at 500 × g at 4 °C for 5 min. After being washed with 3 ml of a buffer identical to the lysis buffer used for immunoprecipitation but containing no Triton X-100, the cells were resuspended in 600 µl of the same buffer and subjected to three freeze-and-thaw cycles in ethanol/dry ice (3 min per period). The broken cells were then passed through a 23-gauge (or smaller) needle ten times to shear DNA and centrifuged at 100,000 × g in a Beckman SW55 rotor at 4 °C for 30 min. The supernatant was the soluble fraction. The pellet was resuspended in 600 µl of lysis buffer containing 1% Triton X-100 and incubated at 4 °C for 30 min. The supernatant after a 5-min centrifugation at 15,000 × g was the particulate fraction. Both the soluble and the particulate fractions were later used in immunoprecipitation.

Western Blot Analysis - Forty-eight hours after transfection, cells in 6-well plates were washed with 2 ml of PBS and harvested in 1 ml of lysis buffer. The cells were lysed at 4 °C for 30 min. After a 10-min centrifugation at 15,000 × g, an aliquot (10 µl) of the supernatant was loaded on an SDS-PAGE mini-gel, and the resolved proteins were wet-electroblotted to a nitrocellulose membrane and probed with specific primary and peroxidase-conjugated secondary antibodies using a chemiluminescence kit according to the manufacturer's instructions (NEN).

RESULTS

Design of PLC-β2 Plasmids - We constructed several mammalian expression plasmids encoding various fragments of PLC-β2, as shown in Fig. 1. To further understand the mechanism of substrate hydrolysis by PLC-β2 and its regulation by G protein subunits, we used these constructs to determine whether the amino and carboxyl halves of PLC-β2 could associate when expressed as two polypeptides and, if they could, how the complexes thus formed would be regulated by Gβγ and Gαq. These constructs were designed to allow us to test the role of the PH domain, to compare the activity and G protein regulation of enzyme with the linker region either attached to the C-terminal fragment or completely removed, and to compare the activity of reconstituted enzyme with and without the C-terminal domain required for activation by Gαq (a summary of the fragment compositions, as they can be inferred from the text, is sketched below).
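To keep the many fragment names straight, the sketch below tabulates the domain content of each construct as it can be inferred from the descriptions in this paper. The exact residue boundaries are given in Fig. 1 of the original and are not reproduced here, and the placement of the EF-hands within the N-terminal fragments is an assumption, so treat this as an illustrative summary rather than an authoritative map.

```python
# Inferred domain composition of the PLC-beta2 constructs described in the text.
# PH = pleckstrin homology domain, EF = EF-hands, X/Y = catalytic half-domains,
# L = X-Y linker region, C2 = C2 domain, CT = long C-terminal region.
# Residue boundaries come from Fig. 1 of the paper and are not shown here.

CONSTRUCTS = {
    "WT": ["PH", "EF", "X", "L", "Y", "C2", "CT"],  # full-length wild type
    "A'": ["PH", "EF", "X", "L", "Y", "C2"],        # C-terminally truncated
    "A":  ["EF", "X", "L", "Y", "C2"],              # A' without the PH domain
    "E":  ["EF", "X", "L", "Y", "C2", "CT"],        # full length without PH
    "B'": ["PH", "EF", "X"],                        # N-terminal fragment
    "B":  ["EF", "X"],                              # B' without the PH domain
    "C":  ["L", "Y", "C2"],                         # C-terminal fragment with linker
    "C'": ["Y", "C2"],                              # C-terminal fragment, no linker
    "D":  ["L", "Y", "C2", "CT"],                   # C plus the C-terminal region
    "D'": ["Y", "C2", "CT"],                        # C' plus the C-terminal region
}

# Reconstitution pairs tested in the paper, e.g. B' + C' (active, G-beta-gamma
# responsive) or B' + D' (also G-alpha-q responsive via the CT region).
for n_frag, c_frag in [("B'", "C"), ("B'", "C'"), ("B'", "D"), ("B'", "D'")]:
    domains = CONSTRUCTS[n_frag] + CONSTRUCTS[c_frag]
    print(f"{n_frag} + {c_frag}: {'-'.join(domains)}")
```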
To all these fragments (except construct A), a FLAG or an HA epitope tag-encoding sequence was attached at one end (see Fig. 1). Construct A had both a FLAG tag at the N terminus and an HA tag at the C terminus, to allow comparison of immunoprecipitation through the FLAG and HA tags.

Assembly of PLC-β2 Fragments Expressed in COS-7 Cells - Antibodies directed against the epitope tags were used to (co)immunoprecipitate metabolically labeled PLC-β2 fragments that had been expressed in COS-7 cells. The representative autoradiograph in Fig. 2A shows that most of the fragments were robustly expressed. Only the A and E fragments were expressed at significantly lower levels (about one-tenth to one-fifth) compared with their corresponding PH domain-containing fragments (the A′ fragment and wild type, respectively). PLC-β2 has an endogenous proteolytic site at which the C-terminal region necessary for Gαq activation is cleaved off (10). The wild-type enzyme expressed in COS-7 cells was partially cleaved at this site, giving rise to a fragment of about the same size as the A′ fragment. The A′ fragment was further cleaved to a polypeptide of approximately the same size as the B′ fragment. Similarly, the A and E fragments had shorter polypeptides of the same size as the B fragment, suggesting that there is another proteolytic site near the C terminus of the X region. The remaining proteolytic or background bands were not identified but represented only a small fraction of the total protein. Lanes 8-11 of Fig. 2A show that the B′ fragment could bind the C, C′, D, and D′ fragments (the C′ and D′ fragments lacked the linker region, whereas the C and D fragments included it). The expression level of the B′ fragment was higher when the C-terminal fragments were coexpressed, suggesting that they stabilized the B′ fragment. Immunoprecipitation through the FLAG tag on the B′ fragment coimmunoprecipitated the C, C′, D, and D′ fragments, indicating that the B′ fragment was able to form complexes with each of them. The B′ fragment coimmunoprecipitated approximately equal amounts of the C and C′ fragments. The numbers of methionines in these fragments were: 15 in B′, 17 in C, 16 in C′, 27 in D, and 26 in D′. The D and D′ fragments were cleaved at or near the site described by Park et al. (10) to generate C and C′ fragments, which also bound to the B′ fragment. About 90% of the D fragment and 75% of the D′ fragment were cleaved. The implication of this cleavage for the interpretation of activity measurements is described below. The B fragment, which lacked the PH domain, could also coimmunoprecipitate a C-terminal fragment (C, C′, D, or D′), but its capacity was lower compared with the B′ fragment (Fig. 2B). Therefore, removal of the PH domain reduced but did not block assembly. Immunoprecipitation and coimmunoprecipitation could also be performed through the HA epitope tag on the A, C, C′, D, and D′ fragments, but the efficiency was lower. For this reason, we performed the coimmunoprecipitations in all our other experiments through the FLAG epitope tag. We also examined whether the PLC-β2 fragments could form complexes after they had been synthesized. When the B′ fragment and the C, C′, D, or D′ fragments were expressed in COS-7 cells separately and later mixed after cell lysis, none of the C, C′, D, or D′ fragments were coimmunoprecipitated by the B′ fragment (data not shown).

FIG. 1. Schematic representation of a human PLC-β2 molecule and its constructs with epitope tags.
PH, pleckstrin homology domain; EF, EF-hands; X and Y, X and Y regions of the catalytic domain; L, X-Y linker region; C2, C2 domain. The long black bar at the C-terminal end is the C-terminal region. See text for more details.

FIG. 2. Expression of PLC-β2 fragments in COS-7 cells. COS-7 cells in 6-well plates were cotransfected with plasmids encoding various PLC-β2 fragments. 625 ng/ml of each DNA was used. Vector DNA (pcDNA3) was added as needed to make a total DNA concentration of 1.5 µg/ml. The cells were metabolically labeled with [35S]methionine/cysteine. The PLC-β2 fragments were immunoprecipitated with the anti-FLAG tag antibody as described under "Experimental Procedures" and resolved on two 10% SDS-PAGE gels. Positions of expressed PLC-β2 fragments are indicated by arrows. When more than one band of a protein is present, only the band of the largest size is indicated. The first lane on the left in both gels is a control (transfected with vector only).

Therefore, the PLC-β2 fragments need to be coexpressed in cells to form complexes with one another. In addition, we failed to coimmunoprecipitate Gβ1γ2 through a FLAG tag on the PLC-β2 fragments (including wild-type PLC-β2) or to coimmunoprecipitate the PLC-β2 fragments with Gβ1γ2 through an HA tag on Gβ1 or Gγ2, suggesting that the interaction was weak.

Roles of the PH Domain in the Enzymatic Activity and Subcellular Distribution of PLC-β2 Fragments - We next tested the catalytic activity of the PLC-β2 fragments, measured as production of inositol phosphates. Full-length, wild-type PLC-β2 was used as control. As shown previously in this laboratory (26,27), inositol phosphate production increased when COS-7 cells were transfected with PLC-β2 itself (Fig. 3A). Coexpression of Gβ1γ2 caused a pronounced rise in PLC activity. PLC-β2 truncated at the C terminus (the A′ fragment) had basal and Gβ1γ2-stimulated activity equal to that of the full-length enzyme. Even though the wild-type enzyme was substantially cleaved, if the activity of the wild-type enzyme were much higher than that of the A′ fragment, the mixture should still have shown higher activity than the A′ fragment. These results were consistent with previous reports (9,10,28). However, when the PH domain was removed from the full-length or truncated enzyme (E fragment and A fragment, respectively), both were inactive. Fig. 3A also shows that individual fragments containing only one of the two catalytic regions (B, B′, C, C′, D, and D′) had no PLC activity, whether or not the PH domain was present. Fig. 3B illustrates that basal and Gβ1γ2-stimulated PLC-β2 activity could be reconstituted from two fragments, each containing one of the catalytic domains, only if the PH domain was present. The characteristics of the reconstituted activity are discussed below. These results indicate that the PH domain was required for PLC-β2 to hydrolyze its substrate in COS-7 cells. Because the fragments and reconstituted complexes lacking the PH domain (i.e. E, A, and complexes formed with the B fragment) were expressed at levels significantly lower than those containing the PH domain (the wild-type enzyme, A′, and complexes formed with the B′ fragment) (Fig. 2), it was possible that their lower catalytic activity was simply a result of lower expression.
To test this possibility, we compared the expression and PLC activity of B + C′ at a higher DNA dose (625 ng of DNA/ml at transfection) with those of B′ + C′ transfected at one-tenth of this DNA dose (62.5 ng/ml). We chose this pair because, in contrast to the wild-type enzyme and the A′ fragment, at the lower DNA dose B′ + C′ were expressed well and had a basal activity substantially higher than the blank. Despite a higher expression level of the B fragment (due to the higher DNA dose) compared with the B′ fragment, and a similar amount of coimmunoprecipitated C′, B + C′ showed no enzymatic activity. Similar results were also observed for other fragments (data not shown). Although this experiment did not completely exclude the possibility that the absence of PLC activity of E and A was in part due to low expression, our results indicated that removal of the PH domain abolished the function of PLC-β2 in COS-7 cells. Removal of the PH domain also altered the subcellular distribution of the expressed proteins (Fig. 4). When the PH domain was present (wild-type, A′, and B′ fragments), approximately 30% of the enzyme was found in the particulate fraction. Constructs lacking the PH domain (E, A, and B fragments) were found almost exclusively in the soluble fraction. Therefore, inaccessibility to a membrane-associated substrate may account for the observed loss of PLC-β2 activity in constructs lacking the PH domain. However, using these experimental approaches in transfected cells, we could not distinguish an intrinsic loss of catalytic activity in truncated fragments from effects secondary to alterations in subcellular localization.

Effects of Gαi on Gβ1γ2 Activation of Wild-Type and Reconstituted PLC-β2 - Gβγ needs to dissociate from Gα to interact with its effectors. Therefore, excess Gαi should block Gβγ activation of PLC-β2 by scavenging free Gβγ to form heterotrimers (26,27). This is an important control, because it shows that Gβγ activates PLC-β2 with the characteristics expected for a heterotrimeric G protein. Fig. 5 shows that, although Gαi1 itself did not exhibit any significant effect in any group, it blocked the activation by Gβ1γ2: the PLC activity dropped to a level equal to or slightly lower than the basal activity. Cotransfection of the COS-7 cells with plasmids encoding Gαi1 and/or Gβγ protein subunits had no effect on the amount of immunoprecipitated PLC-β2 fragments (data not shown).

FIG. 3. Activity of PLC-β2 fragments and reconstituted PLC-β2. A, activity of single PLC-β2 fragments. COS-7 cells in duplicate wells of 12-well plates were transfected with 625 ng/ml of each DNA. In all cases, vector DNA was added to give a total DNA concentration of 2.5 µg/ml. Black bars, no Gβ1γ2; shaded bars, in the presence of Gβ1γ2. WT, wild type. A representative experiment analyzed in duplicate is shown. The error bars indicate the ranges of duplicate determinations. Each construct was tested at least three times. B, activity of two cotransfected PLC-β2 fragments. Experimental conditions were identical to those in A except that the concentration of each PLC-β2 DNA was 125 ng/ml. Wild-type PLC-β2 and the A′ fragment were used as positive controls.

Effects of the Linker Region on Basal and Gβ1γ2-stimulated Activity of Reconstituted PLC-β2 - The above experiments showed that the N-terminal fragment containing the PH domain (the B′ fragment) could reconstitute active enzyme when coexpressed with the C-terminal fragments.
The reconstituted complexes had higher basal PLC activities than wild-type PLC-β2 and, like wild-type PLC-β2 and the A′ fragment, were activated by Gβ1γ2 (Figs. 3B and 5). Among the four complexes, those lacking the linker region (B′ + C′ and B′ + D′) had higher basal and Gβ1γ2-stimulated activity than those with the linker region connected to a C-terminal fragment (B′ + C and B′ + D). Although the reconstituted complexes had Gβ1γ2-stimulated activities that were equal to or greater than that of the wild-type enzyme, this was entirely due to increased basal activity. There was no significant change in the increment due to Gβ1γ2 between the wild-type enzyme and any mutant/complex except B′ + C, whose increase was higher than that of any other group. This indicates that, although cleavage or removal of the linker region leads to increased basal activity, it does not affect the activation of PLC-β2 by Gβ1γ2.

Regulation of PLC-β2 Activity by Gαq - PLC-βs are the only PLC isozymes that are activated by Gαq (9,29-31). This activation involves the long C-terminal region characteristic of the PLC-β isozymes. We showed above (Figs. 3 and 5) that the recombined PLC-β2 complexes could be activated by Gβ1γ2, and we next tested whether they could also be activated by Gαq (Fig. 6). We used wild-type PLC-β2 and the A′ fragment, which lacks the C-terminal region, as positive and negative controls, respectively. Among the four complexes, no significant activation by Gαq was observed in the combinations lacking the C-terminal region (B′ + C or B′ + C′). In contrast, B′ + D and B′ + D′ were activated by Gαq. As with Gβγ activation, the increment in PLC-β2 activity due to Gαq was very similar in wild-type PLC-β2, B′ + D and B′ + D′. Again, removal of the linker region (as in B′ + D′ and B′ + C′) increased the basal activity, but the basal activity and the Gαq-induced increase in activity were additive. These results documented that the reconstituted PLC-β2 complexes containing the C-terminal region were still subject to regulation by Gαq. We also tested the effect of Gαq on fragments and reconstituted enzymes lacking the PH domain (E, B + C′, and B + D′). These fragments or reconstituted proteins showed no increase in PLC activity, providing further evidence that removal of the PH domain abolished the function of PLC-β2 (Fig. 6B).

DISCUSSION

Our study is the first to reconstitute active enzymes from PLC-β fragments. In this study, we have characterized several constructs encoding various fragments of human PLC-β2. We reassembled PLC-β2 from enzyme fragments each containing one of the two catalytic regions (Fig. 2) and found that the PH domain was required both for enzymatic activity (Fig. 3) and for membrane targeting of PLC-β2 (Fig. 4). These reassembled enzymes were still subject to regulation by G protein subunits (Figs. 5 and 6). We identified the X-Y linker region as an inhibitory element in the intact enzyme. However, changes at the linker region did not affect the regulation of PLC-β2 by G protein subunits (Figs. 3, 5, and 6).

The Roles of the PH Domain - We found that, although membrane targeting of PLC-β2 fragments lacking the PH domain may be impaired (Fig. 4), these fragments were still assembled (Fig. 2). Moreover, the PH domain was essential for PLC-β2 to hydrolyze its substrates in COS-7 cells (Fig. 3).
We could not distinguish whether removal of the PH domain in PLC-β2 prevented the enzyme from gaining access to its substrates in the plasma membrane, caused a loss of enzymatic activity, or both. A PLC-δ1 construct in which the PH domain was replaced by glutathione S-transferase has full enzymatic activity (32), suggesting that the major role of the PH domain in PLC-δ1 is to ensure membrane localization.

FIG. 5. Regulation of PLC-β2 fragments by Gβ1γ2 and Gαi1. COS-7 cells in duplicate wells of 12-well plates were transfected with 125 ng/ml of each PLC-β2 DNA, but the concentrations of Gβ1, Gγ2, and Gαi1 DNA were kept at 625 ng/ml each. In all cases, vector DNA was added to give a total DNA concentration of 2.5 µg/ml. A representative experiment out of four independent experiments analyzed in duplicate is shown. The error bars indicate the ranges of duplicate determinations.

FIG. 6. Activation of PLC-β2 fragments by Gαq. A, activation of PLC-β2 fragments containing the PH domain. COS-7 cells in duplicate wells of 12-well plates were transfected with 125 ng/ml of each PLC-β2 DNA. The concentration of Gαq DNA was 625 ng/ml. In all cases, vector DNA was added to give a total DNA concentration of 1.5 µg/ml. The counts of the blank group (vector DNA only) with or without Gαq were subtracted from those of the other groups under the same conditions. The error bars indicate standard deviations of three independent experiments. Black bars, no Gαq; shaded bars, in the presence of Gαq. B, absence of activation of PLC-β2 fragments lacking the PH domain. The error bars indicate the ranges of duplicate determinations in the same experiment, which was repeated twice with similar results.

The PH domain of PLC-δ1 binds to the PIP2 polar headgroup with an affinity and specificity comparable to the native enzyme (20) and is proposed as the anchor localizing the enzyme to the plasma membrane in the "tether-and-fix" model based on the crystal structure of PLC-δ1 (22). The PH domain of PLC-γ1 binds PIP3 strongly and specifically and targets the enzyme to the membrane in response to growth factor stimulation (21). In contrast, PLC-β1, PLC-β2 and their PH domains bind to phospholipid membrane surfaces with lower affinities, and the binding is independent of PIP2 concentration (14). We observed that PLC-β2 fragments lacking the PH domain were not found in the particulate fraction, whereas constructs containing the PH domain were partially targeted to the particulate fraction (Fig. 4). Therefore, this domain is also involved in membrane targeting of PLC-β2. Besides binding some inositol phosphates, the PH domains identified in some proteins bind proteins containing WD-repeats (33). An example is the strong interaction between β-adrenergic receptor kinase and Gβγ (34). The isolated PH domain of PLC-β2 binds Gβγ with an affinity comparable to that of full-length PLC-β2 (14), but the significance of this interaction for the enzyme's regulation by Gβγ needs to be tested further.

The Roles of the Linker Region - The X and Y regions form the catalytic domain of PLC. In the present study, we demonstrated by coimmunoprecipitation experiments that, when COS-7 cells were cotransfected with two plasmids each containing the DNA sequence encoding one of the two catalytic regions, the two in vivo coexpressed fragments associated tightly with each other (Fig. 2).
In contrast, fragments that were separately expressed and then combined could not bind to each other, suggesting that the association occurs during translation. The reassembled enzymes possessed catalytic activity similar to or higher than that of wild-type PLC-β2 (Figs. 3B and 5). The most dramatic elevation of basal catalytic activity was found in the two combinations lacking the linker region, B′ + C′ and B′ + D′. These results suggest that the linker region, when present in the intact enzyme, inhibits basal PLC-β2 activity. When the linker region was attached to the C-terminal fragment containing the Y domain but (in contrast to wild-type PLC-β2) was not linked to the X domain, thereby allowing more flexibility, the basal activity almost doubled (Figs. 3B and 5; compare wild-type PLC-β2 with B′ + D). However, the linker region may still interfere with PIP2 hydrolysis, because complete removal of the linker region resulted in an even greater increase in basal activity. The long C-terminal region was not essential for the basal or Gβγ-stimulated activity of PLC-β2 (Fig. 3A). However, the highest basal activity of all was given by B′ + D′, which lacked the linker region but retained the long C-terminal domain. As shown in Fig. 2, the C-terminal domain did not lead to the formation of more reassembled PLC-β2. Therefore, we conclude that the presence of the C-terminal domain allows those complexes that do reassemble to acquire a more active conformation. We found that even when the linker region was cleaved or completely removed, PLC-β2 was activated by Gβ1γ2, and this activation was completely blocked by Gαi1 (Fig. 5). These PLC-β2 fragments were also subject to activation by Gαq. Consistent with previous findings (9,10), activation by Gαq was contingent upon the presence of the C-terminal region (Fig. 6). In addition, all PLC-β2 fragments showed a similar increment in PLC-β2 activity upon activation by Gβ1γ2 or Gαq. Therefore, it is highly unlikely that the G protein αq and βγ subunits regulate PLC-β2 through direct effects on the linker region.

Conclusions - Our experiments show that the PH domain is required for basal as well as Gαq- and Gβγ-stimulated PLC-β2 activity in a heterologous cell expression system. Like PLC-γ1, functional PLC-β2 can be reconstituted from two coexpressed enzyme fragments, each containing one of the two catalytic regions. The linker region is an inhibitory element in PLC-β2, but cleavage or removal of the linker region does not affect the G protein-mediated regulation of PLC-β2. Therefore, Gβγ and Gαq appear to activate PLC-β2 by mechanisms other than easing the inhibition of PLC-β2 activity by the linker region, thereby providing evidence for a regulatory pathway for PLC-β2 involving mechanisms distinct from those of other PLC isoforms.
Highly Reactive Bis-Cyclooctyne-Modified Diarylethene for SPAAC-mediated Cross-Linking

Photoisomerizable diarylethenes equipped with triple bonds are promising building blocks for constructing bistable photocontrollable systems. Here we report on the design, synthesis and application of a cross-linking reagent which is based on a diarylethene core and features two strained cyclooctynes. The high reactivity of the cyclooctyne rings in catalyst-free 1,3-dipolar cycloaddition reactions was suggested to stem from the additional strain imposed by the fused thiophene rings. This hypothesis was confirmed by quantum chemical calculations.

Introduction

Reversibly photoisomerizable diarylethene (DAE) units have been inserted in countless molecules and systems whose properties and functions can be controlled with light.1 DAEs undergo pericyclic transformations under irradiation with UV or visible light (Fig. 1). The "open" and "closed" photoforms are stable at ambient temperatures, which makes them attractive for numerous applications, especially in biology and medicine.1i,j,m,o One popular synthetic approach utilizes functionalized DAE-derived reagents ("building blocks") for the modular construction of photocontrollable molecules. For example, the bis-diethynyl-substituted diarylethene 1 (Fig. 2) has been proposed as an entry to new photochromic DAE materials through the palladium-catalyzed Sonogashira cross-coupling reaction2 or through Cu-catalysed cycloaddition to azides (a "click" reaction) involving its triple bonds.3 We were interested in 1 and similar compounds because they could be used for cross-linking the azide-substituted side chains in biologically active peptides in order to stabilize their conformation via so-called "stapling". This could be done, for example, using a two-component strategy employing the click reaction.4 Stapling can increase the biostability and improve the binding affinities and pharmacokinetic properties of peptides,5 and photoisomerizable cross-linkers can additionally make their bioactivities photocontrollable, a feature which is attracting much interest due to potential applications in biotechnology and medicine.1i,j,o Cross-linking of peptides with azobenzene-derived, mainly thiol-reactive photoisomerizable cross-linkers has been studied extensively since the early 2000s. Efficient photocontrol of peptide conformation,6a-i folding,6j,k and affinity of binding to DNA6l,m was documented. It was demonstrated that biologically relevant processes like protein-protein interaction (PPI)6n-q and insulin secretion6r could be switched on and off reversibly with the use of azobenzene-derived cross-linked peptides. Photoisomerizable spiropyran6s and rhodopsin-like fragments6t were also utilized in peptide cross-linkers to enable photocontrol of peptide conformation and properties. These studies have laid the groundwork for the development of practically useful photocontrollable biologically active compounds for biotechnology and in vivo applications.1i The only proof-of-principle experiment demonstrating successful photocontrol of peptides stapled by a DAE building block, 2, has been reported for DNA-binding peptides (Fig.
The activated carboxylic groups in 2 reacted with the amino groups of ornithine side chains to form amide bonds in the stapled peptides. To the best of our knowledge, no DAE building blocks equipped for a click reaction have been used so far for peptide stapling. In this paper, we report on the design and synthesis of such a building block, and its validation for peptide stapling applications.

Of particular interest for applications are bio-orthogonal cross-linking reagents utilizing the strain-promoted azide-alkyne cycloaddition (SPAAC) as the click reaction, which avoids toxic Cu catalysts (Fig. 3).7 SPAAC has already been used successfully for peptide stapling: a cyclooctadiyne derivative 3 (Fig. 2) was employed as the stapling reagent and demonstrated excellent performance.8 Here, we aimed at developing a DAE-based cross-linking building block suitable for Cu-free SPAAC.

Design of the target compound

The utility of SPAAC in biological systems critically depends on the reactivity of the strained alkynes involved in the reaction: more reactive reagents can address faster biological processes and can react with azide-modified biomolecules even when these are present at low concentrations in living systems. After the pioneering work describing relatively slow-reacting cyclooctynes,7a much effort was put into the development of stable but more reactive fluoro- and difluoro-substituted analogues,7b heteroatom-containing cyclooctynes,7e dibenzo-annulated cyclooctadiynes9 and twisted systems10 (see a recent review11).

When designing our DAE-based building block, we were inspired by a recent report demonstrating that the addition of one mole of azide to dibenzo[a,e]cyclooctadiyne 3 made the second triple bond 500-fold more reactive.12a This can be attributed to additional strain imposed on the medium-sized carbocycle by the annulation of the five-membered triazole ring, resulting in enhanced reactivity of the remaining triple bond. An exceptionally high reactivity was also reported for compound 4, a cyclooctyne mono-annulated to a five-membered pyrrole ring.12b Taking these findings into account and aiming at enhanced reactivity of our reagent, we targeted structure 5 (Fig. 2), in which two cyclooctyne residues are symmetrically annulated to the five-membered thiophene rings of the DAE fragment. Compared to the known DAE-derived building block 2, compound 5 forms a less conformationally flexible cross-linker (due to the presence of two additional rings), which might help to better convey its structural changes to the cross-linked molecular unit upon photoisomerization.

Synthesis

The key intermediate in our synthesis of 5 was the cycloheptanone derivative 6 (Fig. 4). This compound is easily available through the technically simple 1,1,1,3,3,3-hexafluoro-2-propanol-promoted intramolecular Friedel-Crafts acylation of 7, as described for its non-methyl-substituted analogue.13 Compound 7 was obtained in good yield starting from 2-methyl-4-bromothiophene 8, following procedures first described more than half a century ago.14 The whole synthetic sequence can be scaled up to multigram quantities of compound 6, which can be prepared in a reasonable time (1-2 weeks). The easy synthetic availability of 6 prompted us to explore the cyclooctyne ring construction first on this compound as a model, and then to apply the elaborated procedure to the more complex and expensive DAE-derived bis-cyclooctyne precursor of 5, which could also be synthesized from 6.
The exocyclic alkene 9 was obtained from 6 in excellent yield using the Wittig reaction. Rearrangement of 9 to the corresponding cyclooctanone 10 proceeded smoothly under the action of hydroxy(tosyloxy)iodobenzene followed by aqueous-methanol work-up, a procedure reported recently for β-benzocycloalkenones.15 Formation of the enol triflate 11 followed by elimination completed the synthesis of the model cyclooctyne 12, in 18% overall yield.

Fig. 2 The cross-linking reagents (1-3), a pyrrole-containing cyclooctyne (4) described in the literature, and the new reagent developed in this work (5).

We were pleased to find that compound 12 was highly reactive in SPAAC, yet stable enough to allow performing the reaction with azides in situ at ambient temperature. Although attempts at isolating pure 12 failed in our hands due to compound decomposition, the compound could be stored in solution at −20 °C after preparation with almost no degradation for several days. Reaction of 12 with benzyl azide proceeded to completion in less than 30 minutes at 1 mM concentration and 0 °C, giving the isomeric triazoles 13a and 13b (syn : anti 24 : 76). Both isomers were separated by preparative HPLC and their structures were assigned through a series of HMBC and NOESY 2D-NMR experiments (see ESI†).

DFT calculations of the structure of 12 and its reaction with methyl azide confirmed that the high reactivity of this compound stems from the additional strain imposed by the five-membered heterocycle fused to the cyclooctyne ring (Fig. 5). Transition state activation barriers for SPAAC between 12 and methyl azide were calculated using Gaussian 09 16 with the B3LYP density functional and the 6-31G(d) basis set within the CPCM model for methanol as solvent at standard conditions (see ESI† for full computational details). We found that the activation barrier for the formation of the anti-regioisomer (21.8 kcal mol−1) was lower than for the syn-regioisomer (22.6 kcal mol−1), which is consistent with the experimental observation of the anti-isomer being the predominant product (Fig. 5a). Furthermore, the activation barrier for SPAAC with 12 (21.8 kcal mol−1) was found to be lower than the activation barrier reported for both cyclooctyne 14 and benzocyclooctyne 15 (24.9 kcal mol−1),17 suggesting increased reactivity. We also calculated the strain energy of 12 (18.4 kcal mol−1), which was found to be higher than the calculated strain energies of cyclooctyne 14 (13.6 kcal mol−1) and benzocyclooctyne 15 (14.4 kcal mol−1), implying increased reactivity due to the fusion of the five-membered thiophene ring (Fig. 5b). This increase in ring strain due to the fused thiophene ring was also reflected by the decreased alkyne bond angles in 12 (151.1°, 156.7°) compared to 14 (157.4°, 157.5°) and 15 (155.1°, 157.2°).
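To put these barrier differences on a kinetic footing, transition-state theory gives the rate-constant ratio between two pathways as exp(ΔΔG‡/RT). The following is a minimal sketch of this arithmetic in Python; it is our illustration, not part of the reported workflow, and it treats the computed barriers as activation free energies, which is an approximation.

# Sketch: rate-constant ratios implied by the computed activation barriers,
# via the Eyring/transition-state relation k1/k2 = exp((dG2 - dG1) / RT).
# Assumption: the reported barriers are treated as activation free energies.
import math

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15     # temperature, K

def rate_ratio(barrier_low, barrier_high):
    """Ratio k(low barrier) / k(high barrier) at temperature T."""
    return math.exp((barrier_high - barrier_low) / (R * T))

# Barriers from the paper (kcal mol^-1): 12 vs cyclooctyne 14 / benzocyclooctyne 15
print(rate_ratio(21.8, 24.9))  # ~190-fold faster for 12
# anti vs syn regioselectivity for 12
print(rate_ratio(21.8, 22.6))  # ~4-fold preference for the anti product

On this estimate, the roughly 3 kcal mol−1 lower barrier of 12 corresponds to about two orders of magnitude of rate acceleration over 14 and 15, and the 0.8 kcal mol−1 anti/syn gap to a roughly 4 : 1 preference, consistent with the observed 76 : 24 anti : syn ratio.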
With these encouraging results in hand, we performed the synthesis of the target DAE-derived bis-cyclooctyne 5, also starting from 6. The corresponding synthetic route is shown in Fig. 6. The cycloheptanone 6 was brominated, and the resulting bromo derivative 16 was converted (through alkene 17) into the cyclooctanone 18, similarly to the transformation of 6 to 10 described above. The carbonyl group in 18 was protected to allow synthesis of the DAE derivative 20 using the procedure reported previously by Irie et al.18 The carbonyl groups were then liberated (forming compound 21). Formation of the enol triflate (compound 22) followed by HOTf elimination completed the synthesis of the DAE building block 5 in about 10% overall yield, calculated from 6.

As expected, the bis-cyclooctyne 5 was as reactive as 12 towards benzyl azide: colourless 23a was formed as the major product in its SPAAC with benzyl azide (the storage of 12 after preparation should be at −20 °C due to its relative instability). Compound 23a was isolated in the open form and fully characterized. Blue-coloured 23a(closed) was prepared from the colourless 23a(open) by irradiation with UV light (256 nm) and characterized; 23a(closed) could, in turn, be transformed back to 23a(open) by irradiation with visible light (590 nm). Both transformations proceeded in quantitative yield, demonstrating a high efficiency of the photoconversion. Notably, the conversion of 23a(closed) to 23a(open) can also be achieved with red light: the low-energy absorption band of the closed isomer is intense enough at 630-650 nm (see the UV-VIS absorption spectrum in Fig. 7a). This feature is important for the application of DAE-derived compounds in vivo, because red light penetrates deeply into live tissues. Hence, photocontrol of biologically active derivatives of 23 should be feasible non-invasively as deep as 1-2 cm beneath the tissue surface.19

The photoisomerization of 23a(closed) to 23a(open) proceeded within minutes (Fig. 7b), which is typical for DAEs.1a-d Observation of a perfect isosbestic point (298 nm) confirmed that no chemical processes other than the photoisomerization took place during irradiation.
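The "within minutes" time course in Fig. 7b is the kind of trace commonly quantified by fitting a monoexponential to the absorbance at a fixed wavelength under constant irradiation. A minimal sketch follows; the trace below is synthetic and the fitted values are placeholders, not measurements from this work.

# Sketch: fitting first-order photoisomerization kinetics,
# A(t) = A_inf + (A0 - A_inf) * exp(-k * t), to an absorbance trace.
# The data below are synthetic placeholders, not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a_inf, a0, k):
    return a_inf + (a0 - a_inf) * np.exp(-k * t)

t = np.linspace(0, 300, 31)                          # irradiation time, s
a_true = mono_exp(t, 0.05, 0.80, 0.02)               # "true" trace, k = 0.02 s^-1
a_obs = a_true + np.random.normal(0, 0.005, t.size)  # add measurement noise

popt, _ = curve_fit(mono_exp, t, a_obs, p0=(0.0, 1.0, 0.01))
a_inf, a0, k = popt
print(f"k_obs = {k:.3g} s^-1, half-life = {np.log(2)/k:.1f} s")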
Finally, to check the utility of the new building block 5 for peptide stapling, we prepared a stapled version of a peptide originally stemming from the PDI sequence (LTFEHYWAQLTS), which had been identified by phage display as an efficient inhibitor of the p53/MDM2 and p53/MDMX protein-protein interactions.20 These interactions are important targets for anti-cancer drug candidates,21 and peptide inhibitors of p53/MDM2 and p53/MDMX are among the most promising compounds currently under investigation and development.22 Recently, PDI analogues stapled by SPAAC employing the cyclooctadiyne 3 as a linker have been prepared.8 For one of the most potent MDM2 binders identified, the linear precursor 24 bearing two azide-substituted side chains in positions (i, i + 7) was used to prepare a stapled peptide 25 (Fig. 8). In this work, we also used precursor 24 to prepare the DAE-modified stapled peptides.

SPAAC between 24 and the cross-linker 5 was performed at 1 mM concentration of both reactants and was complete within less than one hour (methanol, 25 °C, LC-MS monitoring of the reaction mixture). Three different stapled peptide isomers (of the general formula 26, Fig. 8) were easily separated by preparative HPLC using standard chromatographic approaches (see the ESI† for details).

Conclusions

A novel DAE-derived bis-cyclooctyne 5 was synthesized for use as a cross-linking reagent by SPAAC. The stable compound was shown to be highly reactive towards azides due to the additional strain imposed on the cyclooctyne rings by the fused thiophene rings. The high azide reactivity of 5 makes it a useful building block for azide cross-linking, e.g. for obtaining stapled peptides, as demonstrated on a peptide inhibitor of the p53/MDM2 and p53/MDMX protein-protein interactions.

Fig. 1 Diarylethene fragment used in photocontrollable molecules and systems; R1, R2 = alkyl; R3, R4 = H, alkyl, aryl; Y = S, O or N. The core unit undergoing photoinduced pericyclic transformations is highlighted in red.
Fig. 4 Synthesis of the model cyclooctyne 12 and its reaction with benzyl azide.
† Electronic supplementary information (ESI) available: synthetic procedures, analytical data for novel compounds and details of the calculations. See DOI: 10.1039/c8ob02428f
Influence of bionic structure on hydraulic performance and drag reduction effect of a centrifugal pump

Based on bionic theory, miniature dimple-type structures are constructed on the blades of a single-stage centrifugal pump to investigate the influence of the bionic structures on the hydraulic performance and drag reduction effect of the pump. The results show that dimples with shallower depth, located near the front section of the blade suction surfaces, are more effective in improving the efficiency and reducing the drag of the pump. At small flow rate, due to the mismatch between the incoming flow angle and the blade placement angle, serious flow separation occurs in several flow channels, accompanied by the formation of vortices, which deteriorates the performance of the pump. With the arrangement of dimples on the blade suction surfaces, low-speed vortices form in the dimples, which is equivalent to increasing the effective thickness of the viscous sublayer and decreasing the velocity gradient of the boundary layer. The velocity distribution in the flow channels therefore becomes more uniform, and the turbulence kinetic energy and wall shear stress are reduced.

Introduction

Energy conservation and environmental preservation have progressively become major topics of worldwide concern as a result of rising energy consumption, and low-carbon energy development has emerged as a key pathway [1]. According to statistics, the annual power consumption of pumps accounts for about 17% of China's total electricity consumption [2], and surface friction drag is one of the important sources of energy dissipation. For large-scale industrial equipment or systems that operate continuously, reducing the drag of fluid movement inside pumps not only saves energy but also reduces operation and maintenance expenses significantly. It is therefore crucial to conduct research on pump energy conservation and drag reduction.

The bionic non-smooth surface drag reduction method is a technique for reducing drag by mimicking biological epidermal structures [3], originally derived from observation of the epidermis of marine species. The remarkable swimming skills of large marine organisms like sharks have long attracted the interest of scientists. Researchers have found that shark skin is covered with non-smooth shield-scale structures bearing microscopic grooves aligned parallel to the direction of fluid flow [4]. These grooves alter the structure of the boundary layer near the shark's body surface, delaying the formation of turbulent eddies and thereby reducing swimming drag. In the 1980s, Walsh et al.
[5, 6] conducted the first experimental study of the drag reduction performance of groove structures on shark skin. The results confirmed that the groove structure can significantly reduce wall friction drag, and that the drag reduction effect is closely related to the shape of the grooves. Symmetric V-shaped grooves were found to reduce friction drag by up to 8% in high-speed flow. This achievement provided insights for research in bionics. Various non-smooth structural features have since been extracted from the surfaces of other organisms, such as the raised blocky nodules at the leading edge of the humpback whale's flipper [7]. When water flows over these nodules, vortex structures form and perturb the flow field, effectively reducing flow drag and noise. In addition, micro-dimple structures have been observed on the surface of fish such as the Chinese sturgeon, and studies have shown that dimple-type structures trigger the transition of the laminar boundary layer to a turbulent boundary layer in advance, thereby suppressing flow separation in the channel [8].

In recent years, with the development of bionic theory, progress has been made in the application of bionic microstructures in fluid machinery. Wang et al. [9] investigated the effect of triangular bionic microgrooves on the internal flow of a centrifugal pump. They found that the bionic microgrooves could reduce the energy loss caused by the impact of the flow and improve the hydraulic performance of the pump under various operating conditions. At the design flow rate, the head and efficiency of the pump increased by 3.7% and 0.8%, respectively. Dai et al. [10] constructed a shark-skin-like V-groove structure on the blades of a centrifugal pump. The V-groove structure improved the efficiency, the drag reduction rate, and the total sound pressure level noise reduction rate of the internal sound field at the rated operating condition, further verifying the drag and noise reduction function of the bionic structure. Zhao et al. [11] arranged a bionic nodular structure at the leading edge of the blades of a centrifugal pump. The results showed that the bionic model effectively suppressed the development of vacuoles, with the best inhibition performance at the cavitation incipient stage, giving a 99.72% reduction in the average vacuole volume fraction. Qian et al. [12] studied the impact of bionic convex domes on the erosion resistance of a double-suction pump. They found that the bionic blades exhibited much better erosion resistance than the smooth blades: the high erosion-rate area was significantly reduced, and the erosion became more evenly spread across the entire bionic blade surface. Zhang et al. [13] optimized a vertical pipeline pump with reference to the sawtooth structure at the end of the owl feather. The results showed that the bionic sawtooth blade can significantly reduce the pressure pulsation and noise in the flow field, with the noise reduction effect being more noticeable under the design and high-flow conditions.
In summary, non-smooth surfaces have achieved promising results in hydraulic machinery; however, the shapes studied usually adopt grooves, and the drag reduction effect of other surface shapes needs further study. In the current paper, bionic dimple-type structures are introduced on the blade suction surfaces. Multiple schemes are put forward to examine the impact of the size and arrangement location of the dimple-type structures on the hydraulic performance and drag reduction effect of the pump, providing theoretical guidance for the optimization of the centrifugal pump.

2.1.1. Original model. This paper takes a single-stage centrifugal pump as the research object. The design parameters of the model pump are as follows: flow rate Qn = 500 m3/h, head H = 48.2 m, rotating speed n = 1450 rpm. The main geometrical parameters of the model pump are: impeller inlet diameter Dj = 200 mm, impeller outlet diameter D2 = 420 mm, impeller outlet width b2 = 36 mm, blade outlet angle β2 = 20°, blade number Z = 6, volute inlet diameter D3 = 450 mm, volute inlet width b3 = 72 mm. Figure 1 shows the assembly diagram of the model pump; the calculation domains mainly include the inlet and outlet pipes, the impeller, and the volute.

2.1.2. Bionic model. The blade flow direction is defined as U, and the blade spanwise direction is defined as V. Four rows of dimple-type structures are arranged along direction U in matrix form on the blade suction surfaces. Taking the depth and the arrangement location of the dimples as design variables, 9 bionic schemes are put forward, denoted SSij (1 ≤ i ≤ 3, 1 ≤ j ≤ 3). The first subscript i represents the location of the dimples: i = 1, 2, and 3 indicate that the dimples are located at the leading section, middle section, and trailing section of the blades, respectively. The second subscript j represents the depth of the dimples: j = 1, 2, and 3 indicate depths of h = 0.6 mm, h = 1 mm, and h = 1.5 mm, respectively. Moreover, the depth-to-diameter ratio of the dimples is h/d = 0.25 [14], the distance between the dimples in direction U is sU = 1.7d, and the distance between the dimples in direction V is sV = 1.4d. Table 1 presents the detailed parameters of the different bionic schemes. Figure 2 shows the schematic diagram of the dimple-type structures in the flow cross-section, and figure 3 shows the structural comparison of the original blade and the bionic blades.

Grid generation

Since the surface of the bionic blades is relatively complicated, an unstructured grid with strong adaptability is applied for the impeller domain, and structured grids with high accuracy are adopted for the other domains using the ICEM CFD software. The grid at the bionic structure is locally refined, and the mesh division of the impeller is shown in figure 4. Meanwhile, the grid independence of the model pump was checked to verify the accuracy of the simulation, and the results are displayed in table 2. When the number of cells reaches 5.43 million, the head of the pump remains essentially unchanged, hence a mesh of 5.433 × 10^6 cells is chosen for the following calculations.
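The grid-independence criterion used above (head essentially unchanged upon further refinement) can be automated as a simple scan over successive meshes. The sketch below is illustrative; the cell counts and head values are placeholders standing in for table 2, not the actual entries.

# Sketch: pick the coarsest mesh whose head differs from the next refinement
# by less than a relative tolerance. Values below are placeholders for table 2.
def select_mesh(meshes, heads, tol=0.005):
    """meshes: cell counts (ascending); heads: simulated head for each mesh.
    Returns the first mesh whose head changes by < tol on further refinement."""
    for i in range(len(meshes) - 1):
        if abs(heads[i + 1] - heads[i]) / heads[i] < tol:
            return meshes[i]
    return meshes[-1]  # fall back to the finest mesh

meshes = [3.1e6, 4.2e6, 5.43e6, 6.8e6]   # placeholder cell counts
heads  = [49.8, 49.4, 48.9, 48.89]       # placeholder heads, m
print(select_mesh(meshes, heads))        # -> 5430000.0 once the head plateaus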
Turbulence model and boundary conditions

The internal flow in the impeller is characterized by large curvature, strong rotation, and a high adverse pressure gradient; the selection of the turbulence model is therefore important for obtaining reliable simulation results. Since the RNG k-ε model has good accuracy in simulating boundary layer flow [15], flow separation, and secondary flow under a strong adverse pressure gradient, the current paper adopts the RNG k-ε model and uses ANSYS CFX software to carry out steady numerical simulations of the different models. A multiple reference frame approach was employed, with the impeller designated as the rotating domain and the other domains designated as stationary. The rotor-stator interfaces were set as frozen-rotor interfaces, and all physical surfaces of the pump were designated as no-slip walls. Furthermore, the inlet boundary condition was set as total pressure, the outlet boundary condition was set as mass flow rate, and the convergence residual (RMS) was specified as 10^-4.

Performance characteristics curves

The comparison between the test and simulated values of the external characteristic parameters is shown in figure 5. The simulated head and efficiency of the model pump are basically consistent with the test values. At the design point, the calculation errors for head and efficiency are 0.28% and 2.1%, respectively. The maximum calculation error is 4.2% when the flow rate decreases to 0.4 Qn. These deviations are within the acceptable range, so further analysis is based on the numerical simulation method.

Analysis of flow drag reduction effect of the bionic structure

The flow in the pump is subject to various drags during operation, among which fluid drag and mechanical drag are the main factors. Sufficient torque must be exerted on the pump shaft to overcome the drag and rotate the impeller: the more drag there is, the more torque is necessary. As a result, the drag reduction effect of the bionic model can be measured using the torque of the impeller. Gu et al. [16] provided the formula for the drag reduction rate:

C = (Ns − Nr) / Ns × 100%  (1)

where Ns is the torque of the smooth-surface impeller and Nr is the torque of the bionic-surface impeller. A positive C indicates that the dimple-type structures reduce drag, and vice versa; the greater the absolute value of C, the more effective the dimples are in reducing or increasing drag.

Fluid drag, such as friction drag and differential pressure drag during the operation of the pump, is primarily determined by the properties of the fluid and the flow conditions [17]. Among these, friction drag has a significant impact on the energy consumption of the pump and is the object to be optimized in the structural design of the pump. Since the drag reduction rate C gives only a global measure of the drag in the pump, it cannot identify the specific locations where drag arises. Therefore, a new parameter needs to be introduced to further analyze the local drag in the pump.
According to Newton's law of internal friction, the friction drag is directly proportional to the wall shear stress. Therefore, the magnitude of the wall shear stress can be used to measure the drag reduction effect of the bionic model. The reduction rate D of wall shear stress is defined by equation (2):

D = (Ws − Wr) / Ws × 100%  (2)

where Ws is the average wall shear stress of the smooth-surface impeller and Wr is the average wall shear stress of the bionic-surface impeller.

Effect of depth of the dimples on the performance of the centrifugal pump. Table 3 compares the external characteristic parameters and drag reduction rates of the original model and the different bionic models under the rated working condition. The head of all bionic models is elevated compared to the original model, while the efficiency of the bionic models depends on the depth of the dimples. The efficiency of schemes SSi1 all increased relative to the original model; in general, the hydraulic performance of the SSi1 schemes is better than that of the other bionic schemes, and scheme SS11 increases the head and efficiency of the original pump by about 1.1% and 0.6%, respectively. The hydraulic performance of the model pump gradually decreases as the depth of the dimples increases; therefore, a shallower dimple depth is recommended to improve the performance of the centrifugal pump. In addition, the drag reduction rates of all bionic schemes are negative, indicating that the torque of the bionic impellers under the design condition is increased with respect to the original model, but the increment is quite small, with a maximum drag increment of 0.7846%.

Effect of location of the dimples on the performance of the centrifugal pump. According to the previous section, a smaller dimple depth is beneficial to the performance of the model pump. Therefore, schemes SS11, SS21, and SS31 with h = 0.6 mm are selected to further examine the effect of the location of the dimples on the performance of the centrifugal pump.

Figure 6 shows the hydraulic performance comparison between the original model and the three bionic models. As seen in figure 6(a), the efficiency of all bionic models is lower than that of the original pump at 0.4 Qn, with scheme SS11 showing the smallest reduction. As the position of the dimples moves away from the front section of the blade, the efficiency of the bionic pump gradually decreases, and the efficiency of scheme SS31 drops by 5.8% compared with the original model. When the flow rate is greater than 0.4 Qn, the efficiency of all bionic models exceeds that of the original model. In general, scheme SS11 has the best effect on efficiency, with an increase of up to 1.7% at 0.6 Qn. According to figure 6(b), the heads of schemes SS31 and SS21 differ very little from that of the original model at the various flow rates. Scheme SS11, on the other hand, exhibits a different pattern: when the flow rate is less than 0.6 Qn, its head is lower than that of the original model; as the flow rate approaches the design flow rate, its head gradually increases and reaches its maximum at the design point.
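Equations (1) and (2) reduce to the same one-line ratio. The following sketch shows how C and D can be tabulated from simulation outputs; the torque and shear-stress values are placeholders chosen only to reproduce the sign conventions discussed above, not the paper's results.

# Sketch: drag reduction rate C (eq. 1) and wall-shear reduction rate D (eq. 2).
# Positive values mean the bionic surface reduces torque / wall shear stress.
def reduction_rate(smooth, bionic):
    """100 * (smooth - bionic) / smooth, per equations (1) and (2)."""
    return 100.0 * (smooth - bionic) / smooth

# Placeholder values (N*m for torque, Pa for average wall shear stress)
Ns, Nr = 540.0, 544.2   # smooth vs bionic impeller torque
Ws, Wr = 118.0, 102.0   # smooth vs bionic average wall shear stress

print(f"C = {reduction_rate(Ns, Nr):+.3f} %")  # negative -> slight drag increase
print(f"D = {reduction_rate(Ws, Wr):+.2f} %")  # positive -> shear stress reduced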
Figure 7(a) depicts the impeller torque of the different schemes, where the vertical axis denotes the absolute value of torque. The torque of the four schemes follows similar trends, increasing gradually with flow rate. At 0.4 Qn, the torque of scheme SS11 is slightly lower than that of the original model, while the torque of the other two schemes is higher. Table 4 compares the drag reduction rates of the three bionic impellers. The torque of schemes SS21 and SS31 increased by nearly 2.9% and 6.5% compared with the original model, meaning that placing the dimple-type structures in the middle and trailing sections of the blades has a negative influence on the drag reduction of the pump at lower flow rates. With increasing flow rate, the torque of the bionic models falls below that of the original model, with the largest decrement at 0.6 Qn; according to table 4, the torque of scheme SS11 is then reduced by 2.7% compared with the original model. When the flow rate rises to the design point and above, the dimple-type structures show little effect on the torque of the model pump.

Figure 7(b) presents the comparison of the average wall shear stress on the impeller blades of the different schemes. As the flow rate increases, the average wall shear stress on the blade first decreases and then increases, reaching its minimum at 0.6 Qn. The wall shear stress of scheme SS11 is lower than that of the original model at all working conditions. Table 5 displays the reduction rates of wall shear stress of the three bionic impellers; the arrangement of the dimples significantly reduces the average shear stress on the blade surface, with scheme SS11 showing the largest reduction: the wall shear stress of the impeller is reduced by 13.54% compared with the original model at the rated condition. Overall, scheme SS11 outperforms the other two schemes, so the internal flow field of scheme SS11 is further analyzed to investigate the drag reduction mechanism of the bionic structure.
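The link between torque and efficiency can be made explicit: the shaft power is Tω and the hydraulic power is ρgQH, so at fixed head and flow rate a torque reduction translates directly into an efficiency gain. A minimal sketch using the pump's design parameters follows; the torque value is a placeholder, not a simulated result.

# Sketch: pump efficiency from simulated torque, eta = rho*g*Q*H / (T*omega).
# Design parameters come from the paper; the torque T is a placeholder.
import math

rho, g = 998.0, 9.81          # water density (kg/m^3), gravity (m/s^2)
Q = 500.0 / 3600.0            # design flow rate, m^3/s (500 m^3/h)
H = 48.2                      # design head, m
n = 1450.0                    # rotating speed, rpm
omega = 2.0 * math.pi * n / 60.0

T = 540.0                     # placeholder impeller torque, N*m
eta = rho * g * Q * H / (T * omega)
print(f"eta = {eta:.1%}")     # -> roughly 80% with this placeholder torque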
The internal flow field of the original model and the bionic model

To describe the typical flow structures in the model pump, the relative velocity is non-dimensionalized according to equation (3). Figure 8 presents the distribution of velocity and streamlines at the mid-span of the two models. The original model and the bionic model have similar flow patterns under the different operating conditions. At small flow rate, the velocity distribution inside the pump is extremely uneven, and a wide range of low-velocity regions appears in the flow channels close to the tongue, accompanied by flow separation, vortices, and other unstable flow structures. With increasing flow rate, the distribution of velocity and streamlines in the pump becomes more uniform, and the unsteady structures gradually disappear.

As can be seen from figures 8(d)-(f), the bionic structure performs admirably in improving the internal flow of the impeller. At 0.6 Qn, the bionic structure effectively controls the flow separation on the blade suction surfaces and reduces the number of vortices in the flow channels, thereby improving the performance of the pump to a certain extent. At both the rated and over-load operating conditions, the bionic structure stabilizes the streamlines inside the impeller, the vortices in the flow channels essentially disappear, and the fluid moves steadily along the blade profile. In addition, the velocity distribution at the outlet of the impeller becomes more uniform, and the area of the high-speed zone is reduced. At 1.2 Qn, the high-speed region at the outlet of the flow channel facing the tongue essentially vanishes.

Turbulence kinetic energy is a parameter that reflects the intensity of fluid turbulence; the flow field grows more unstable as the turbulence kinetic energy increases. Figure 9 depicts the distribution of turbulence kinetic energy at the mid-span of the pump; the turbulence kinetic energy is quite strong at the impeller exit and near the tongue. At 0.6 Qn, the streamline diagram shows multiple large-scale vortices in the flow channel near the tongue; these vortices continuously shed and strike the volute, causing strong turbulence kinetic energy in the corresponding region near the tongue. As evident in figure 9(d), the bionic structure improves the internal flow of the pump, and the turbulence kinetic energy at the impeller outlet close to the tongue is significantly reduced. In addition, regions of moderate turbulence kinetic energy can be observed, as marked by the red circle in figure 9(a), caused by flow separation on the blade suction surface. The turbulence kinetic energy in this region is also substantially decreased because the bionic structure successfully suppresses the flow separation on the suction surface.
As the flow rate increases, the flow in the centrifugal pump grows more uniform; at the design flow rate, the internal flow of the pump is the most stable, and the turbulence kinetic energy inside the pump reaches its lowest level. Figures 9(b) and (e) show that the distribution of turbulence kinetic energy in the original model and the bionic model is similar, yet the bionic structure reduces the turbulence kinetic energy in the regions on the left-hand side and in the diffusion section of the volute. When the flow rate rises to 1.2 Qn, the impacting effect of the fluid increases, and the high-speed fluid from the impeller exit interacts intensively with the tongue, causing significant turbulence to form close to the tongue. As seen in figure 8(f), the bionic structure improves the uniformity of the velocity in the flow field, and the local high-speed region at the impeller exit vanishes, which weakens the impact of the fluid on the tongue and reduces the turbulence kinetic energy near the tongue.

Figure 10 shows the distribution of wall shear stress for the different models. The regions with higher wall shear stress are mainly concentrated on the leading section of the blade suction surfaces. This can be explained by the fact that the axially moving fluid acquires a peripheral velocity at the inlet of the impeller and is thrown towards the blade suction surfaces under the action of centrifugal force, resulting in higher wall shear stress at the corresponding location. With increasing flow rate, the inflow velocity rises, which increases the wall shear stress on the suction surfaces. Moreover, the distribution of wall shear stress gradually spreads from the leading edge to the entire suction surface.
From figures 10(d)-(f), it is evident that the bionic structures effectively reduce the wall shear stress over the whole blade surfaces. As shown in table 5, the reduction rate of wall shear stress decreases with increasing flow rate. This is because the velocity gradient on the blade surface is large and severe flow separation can occur under small flow conditions; the bionic structure inhibits the flow separation, thus appreciably reducing the wall shear stress. At large flow rate, the flow in the impeller is relatively stable and only slight flow separation occurs, so the dimple-type structure has less influence.

As shown in figure 11, two circumferential positions, R1 = 76 mm and R2 = 210 mm, are selected to analyze the relative velocity distribution at the inlet and outlet of the impeller. Figure 12 shows the relative velocity distribution of the two models at the circumferential position R1. The parameter θ, defined in figure 11, represents the angle of the point from the vertical axis. The relative velocity of the original model and the bionic model follows similar trends at the different flow rates. In general, the velocity distribution at the inlet of the impeller is relatively uniform at the design and large flow conditions, with the relative velocity gradually increasing from the blade pressure surface to the suction surface. The relative velocity of the bionic model is slightly lower than that of the original model; as the velocity at the impeller inlet is reduced, the impact of the inflow on the wall is weakened accordingly, and the wall shear stress at the inlet of the impeller is therefore reduced. The relative velocity within the different channels is noticeably distinct at 0.6 Qn. Overall, the relative velocity in channels 1 and 6 is lower than in the other channels, and a sharp decline in relative velocity occurs at the suction side of channel 1 and the pressure side of channel 6. This is because the inflow velocity is comparatively low at small flow rate, and the incoming flow angle is smaller than the blade placement angle, resulting in apparent flow separation. From figure 12(a), it is clear that the bionic structure increases the relative velocity at the inlet of the flow channel near the tongue and eliminates the steep drop in velocity in channel 6, thereby inhibiting the flow separation on the blade surface to some degree.

According to Zhang et al. [19, 20], the internal flow field of the volute is significantly influenced by the jet-wake structure at the impeller exit. The average velocity in the jet region is usually high, whereas the average velocity in the wake region is relatively low. As a result, the inflection point of the velocity profile can be regarded as the demarcation point of the jet-wake structure.
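The demarcation rule just cited (the inflection point of the circumferential velocity profile marks the jet-wake boundary) can be located numerically from sampled profile data. The sketch below uses a synthetic tanh-shaped profile, not data extracted from the figures.

# Sketch: locate inflection points (sign changes of the second derivative)
# in a sampled velocity profile w(theta) to mark the jet-wake demarcation.
# The profile below is synthetic, not data from the paper.
import numpy as np

theta = np.linspace(0.0, 60.0, 121)                    # blade-to-blade angle, deg
w = 0.5 + 0.3 * np.tanh((theta - 35.0) / 6.0)          # synthetic jet-to-wake step

d2w = np.gradient(np.gradient(w, theta), theta)        # numerical 2nd derivative
sign_change = np.where(np.diff(np.sign(d2w)) != 0)[0]  # indices of inflections
print(theta[sign_change])                              # ~35 deg for this profile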
Figure 13 presents the relative velocity distribution of the two models at the circumferential position R2. At both the rated and over-load operating conditions, the variation of the relative velocity at the impeller exit is relatively similar across the flow channels: from the blade pressure surface to the suction surface, the relative velocity first ascends and then descends. The relative velocity of flow channel 1 is comparatively large, especially at 1.2 Qn. As can be seen from figure 11, channel 1 faces the tongue, where the cross-sectional area of the volute is smallest; the convergence area of the jet-wake structure at the channel exit is therefore small, resulting in higher velocity there. Strong energy loss is generated by the interaction between the high-velocity fluid and the tongue. From figures 13(b) and (c), it is evident that the bionic structure effectively reduces the velocity in the jet region of channel 1 and makes the velocity distribution in each channel more uniform, thereby weakening the interaction between the jet-wake structure and the tongue.

When the flow rate decreases to 0.6 Qn, the relative velocity in the original model begins to show an irregular trend. Sharp declines in relative velocity occur at the pressure surfaces of channels 1, 5, and 6, indicating that unstable flow structures may occur in these regions, which is consistent with the distribution of streamlines shown in figure 8. Figure 13(a) shows that the bionic structures effectively suppress the low-velocity regions near the pressure surfaces of these flow channels. In addition, it is worth noting that the bionic structure increases the overall velocity of the jet-wake structures in channels 1 and 6 yet makes the velocity distribution more uniform, thus improving the flow field at the impeller outlet. Overall, the velocity fluctuation at the impeller exit of the bionic model is small and the velocity distribution in each channel is relatively uniform, indicating that the bionic structure reduces the velocity gradient of the jet-wake structure. This reduces the shear between the high-velocity and low-velocity fluid, further stabilizing the flow field inside the impeller and reducing the energy loss.

To further reveal the drag reduction mechanism of the bionic structure, figure 14 displays the distribution of the velocity vectors and streamlines on the blades at 0.6 Qn. As can be seen from figure 14(a), at small flow rate the flow inside the impeller is characterized by low velocity and a high adverse pressure gradient, and the momentum of the moving fluid cannot resist the combined effects of viscosity and the pressure difference. As a result, backflow occurs on the blade surfaces, which squeezes the fluid particles in the boundary layer away from the surface; the boundary layer gradually thickens and eventually detaches from the wall, causing severe flow separation in the flow channel. With the arrangement of the dimples on the suction surface, vortices with lower velocity form in the dimples, as shown in figure 14(b), which is equivalent to increasing the effective thickness of the viscous sublayer and decreasing the velocity gradient of the boundary layer [21], thus effectively reducing the friction drag on the wall.

Conclusions

In this paper, dimple-type structures are constructed on the blade surfaces of a centrifugal pump based on bionic theory. The influence of the depth and location of the dimples on the hydraulic performance and drag reduction effect of the model pump is investigated by numerical simulation. The main conclusions are as follows:

(1) The hydraulic performance of the model pump gradually decreases as the depth of the dimples increases; therefore, a shallower dimple depth is recommended to improve the performance of the centrifugal pump. Moreover, arranging the dimples near the front section of the blade suction surfaces is more effective in improving the efficiency and reducing the drag of the pump. At 0.6 Qn, the best bionic scheme, SS11, increases the efficiency and drag reduction rate of the model pump by up to 1.7% and 2.7%, respectively.
(2) At the design and large flow conditions, the velocity distribution in the impeller channels is relatively uniform, and the relative velocity at the inlet and outlet of the bionic model is lower than that of the original model, which weakens the impact of the fluid on the wall and thereby reduces the wall shear stress on the blade surfaces. At small flow rate, sharp declines in relative velocity occur at the blade surfaces of several channels, indicating that unstable flow structures may occur in these regions. The bionic structures effectively suppress the low-velocity regions of these flow channels, making the velocity distribution more uniform, stabilizing the flow field, and reducing the energy loss.

(3) With the arrangement of the dimples on the suction surface, vortices with lower velocity form in the dimples, which changes the velocity distribution and decreases the velocity gradient of the boundary layer. The friction drag on the blade surface is therefore reduced.

Figure 5. Comparison between simulated and test results.
Figure 6. External characteristic parameters of different models under various working conditions.
Figure 7. Torque and wall shear stress of different models under various working conditions.
Figure 8. Relative velocity distributions and streamlines at the mid-span of the pump.
Figure 9. Turbulence kinetic energy distributions at the mid-span of the pump.
Figure 10. Wall shear stress distributions of the blades.
Figure 11. Definition of the circumferential parameters of the impeller.
Figure 12. The relative velocity distribution on the circumferential position R1.
Figure 13. The relative velocity distribution on the circumferential position R2.
Figure 14. The vector distributions and streamlines on the blade of the original and bionic models at 0.6 Qn.
Table 1. Detailed parameters of different bionic blade schemes.
Table 2. Validation of mesh independence.
Table 3. Performance parameters of pumps with different bionic schemes.
Table 4. Comparison of the drag reduction rates of three bionic impellers.
Table 5. The reduction rates of wall shear stress of three bionic impellers.
Sequential detection of pseudocowpox virus and bovine papular stomatitis virus in the same calf in Japan

We detected parapoxviruses from environmental samples and from calves with and without intraoral clinical signs, and conducted molecular and serological analyses. Pseudocowpox virus (PCPV) was detected from a calf showing anorexia, frothy salivation, and erosion of the mucosa of the lip and tongue. At the time that PCPV was detected, bovine papular stomatitis viruses (BPSVs) were detected in environmental samples as well as in calves without intraoral clinical signs. BPSV, but not PCPV, was detected in the same calf after 22 days. Phylogenetic analysis revealed that genetically different PCPV strains exist in Japan. This is the first report of the sequential detection of PCPV and BPSV in the same calf and of the coexistence of PCPV and BPSV on the same farm in Japan. doi: 10.1292/jvms.18-0367

(Table 1 and Fig. 1). After 22 days, oral swab samples were collected from all calves again; these are indicated as "post samples" in Table 1. The swab samples were homogenized with phosphate-buffered saline (PBS) and centrifuged at 1,270 × g for 5 min at 4°C. The supernatants were filtered through a 450-nm pore-size membrane (Merck Millipore, Cork, Ireland). For virus isolation, swab samples were inoculated into primary bovine testis (BT) cells in rolling tubes by rotary culture as described previously [11]. Cells were passaged more than three times in a blind manner. Viral DNA was extracted from the swab samples with a High Pure Viral Nucleic Acid Kit (Roche, Mannheim, Germany). Polymerase chain reaction (PCR) amplifications were carried out with a Taq PCR Master Mix Kit (Qiagen, Hilden, Germany) using a TaKaRa Thermal Cycler Dice Touch (TaKaRa Bio, Kusatsu, Japan) and the primer set PPP-1/PPP-4 for detection of a partial-length (594 bp) fragment of the B2L gene, which encodes the parapoxvirus envelope protein [6]. Restriction fragment length polymorphism (RFLP) analysis was conducted with DrdI (marker for ORFV), XmnI (marker for BPSV), and PflMI (marker for PCPV) as described previously [7]. PCR products were purified using a QIAquick PCR Purification Kit (Qiagen), and the nucleotide sequences were determined by direct sequencing using a BigDye Terminator Cycle Sequencing Kit v3.1 (Applied Biosystems, Austin, TX, U.S.A.). Sequence data were aligned using the ClustalW method [15], and phylogenetic analysis was performed using MEGA7 software [8]. Phylogenetic trees were constructed using maximum-likelihood estimation, and the reliability of the branches was evaluated by bootstrapping with 1,000 replicates. The nucleotide and deduced amino acid sequences were compared with those of available corresponding parapoxviruses. For serological analysis, serum samples were collected from all calves at the time of appearance of the intraoral clinical signs and again after 22 days (Table 1) and used for agar gel immunodiffusion (AGID) tests as described previously [9]. Oral swab sampling and blood collection were performed within the veterinary scope of practice with informed owner consent. This study was approved by the Gifu University Animal Care and Use Committee (approval numbers 14094 and 17046).

No cytopathic effects were observed in BT cells, and thus virus isolation was unsuccessful. PCR specific for the partial parapoxvirus B2L gene gave positive results for all oral swab samples of the first collection (pre samples) and for three of the oral swab samples of the second collection (post samples) (Table 1).
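The marker-enzyme logic of the RFLP assay described above can be mirrored in silico once an amplicon sequence is available. The sketch below uses Biopython's Restriction module; the enzyme-to-species mapping follows the paper, while the function name and the example call are ours.

# Sketch: classify a B2L amplicon by in-silico RFLP with Biopython.
from Bio.Seq import Seq
from Bio.Restriction import DrdI, XmnI, PflMI

# Marker enzymes from the RFLP scheme: DrdI -> ORFV, XmnI -> BPSV, PflMI -> PCPV
MARKERS = {"ORFV": DrdI, "BPSV": XmnI, "PCPV": PflMI}

def classify_b2l(amplicon):
    """Return the species whose marker enzyme cuts the 594-bp B2L amplicon."""
    seq = Seq(amplicon)
    hits = []
    for species, enzyme in MARKERS.items():
        if enzyme.search(seq):  # list of cut positions; empty if the enzyme does not cut
            hits.append(species)
    return hits

# Example (hypothetical amplicon string obtained from PPP-1/PPP-4 PCR):
# classify_b2l(b2l_sequence)  # -> ["BPSV"] if only XmnI cuts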
By RFLP analysis, three PCR products from the oral swab samples of the first collection (pre samples from cattle A-2 to A-4) and three PCR products from the oral swab samples of the second collection (post samples from cattle A-1 to A-3) were cut with XmnI and thus classified as BPSV; however, the product from the first-collection (pre) sample of A-1 was cut with PflMI and was therefore classified as PCPV (Table 1). The nucleotide sequences of two representative PCR products that were amplified clearly were determined and designated MZ17-3 (cattle A-1, GenBank/EMBL/DDBJ accession no. LC350284) and MZ17-4 (A-3, LC350285). Based on the nucleotide and deduced amino acid identities and the phylogenetic analysis, the MZ17-4 strain was classified as BPSV, whereas the MZ17-3 strain was classified as PCPV, in accordance with the RFLP analysis. The phylogenetic tree showed that MZ17-4 clustered with the previously reported BPSV strains Iwate-bovine-2007 (AB538385) and Chiba (AB044798), which were isolated from cattle in Iwate and Chiba Prefectures, Japan, and with V660 (AB044793), which was isolated from a calf in Europe. MZ17-3 clustered with the previously reported PCPV strains YG2828 (LC230119) and IW10-H (AB921003), which were isolated from cattle in Yamaguchi and Iwate Prefectures. Additionally, a nucleotide sequence was determined from a stored DNA sample previously extracted from the oral cavity of a cow (Holstein, female, 39 months old, cattle ID B-1) without intraoral clinical signs on another farm (farm B, located about 70 km south of farm A, with no epidemiological relation to farm A) in Miyazaki Prefecture in 2016; the PCPV pattern was observed on RFLP analysis, and the sequence was designated MZ16-1 (LC350286) (Table 1). MZ16-1 was classified as PCPV, but instead of clustering in the same group as MZ17-3, it clustered with another group of PCPV, namely the VR634 strain (AB044792), which was isolated from a human in the United States of America (Fig. 2).

Among the environmental samples, PCR gave positive results for four samples out of 11 at farm A (Fig. 1). RFLP analysis of selected PCR products from farm A revealed that all of them were cut with XmnI (data not shown) and were thus classified as BPSV. The AGID test gave positive results for both serum samples of the first (pre) and second (post) collections (Table 1).

The detection of multiple parapoxviruses in cattle has been reported overseas [4, 10], but not in Japan. Moreover, the coexistence of multiple parapoxvirus members on a farm has not been reported in Japan. In this study, PCPV and BPSV were detected sequentially from the same calf with intraoral clinical signs; although BPSVs were detected, no PCPV was simultaneously detected in the other calves or in the environment. This is the first report of the sequential detection of PCPV and BPSV in the same calf, and also the first report of the coexistence of PCPV and BPSV at the same time on a farm in Japan. As BPSV was detected in calves without clinical signs and in the environment, we speculate that calf A-1 may already have been infected with PCPV subclinically in Hokkaido and developed intraoral clinical signs after transport to farm A, and that BPSV may have existed subclinically in the other calves and in the environment of farm A. Presumably, the fact that calves on farm A are regularly introduced from three regions may explain the coexistence of multiple parapoxvirus members.
The clinical signs seen in the calf infected with PCPV were similar to those reported in a previous study of PCPV infection in Japan, such as anorexia, frothy salivation, and hyperemia of the mucosa under the tongue surface [11]. BPSV was detected in all calves; two calves (A-2 and A-3) had positive oral swab samples from both the first and second collections, and BPSV was detected in environmental samples as well. Additionally, the calves had just been introduced from Hokkaido before the appearance of the intraoral clinical signs. Previous studies reported that BPSV can cause subclinical infection, persistent infection, and reinfection [5, 13, 17], and that parapoxvirus infection can be induced by stress factors [13, 17]. Therefore, we hypothesize that the parapoxvirus infections in this study may be related to transport. BPSV could have caused subclinical infection and incubated for a long period in the calf oral cavity; BPSV-infected calves could then have shed the virus and contaminated the farm environment, given the environmental resistance of BPSV.

In this study, two genetically different strains of PCPV (MZ16-1 and MZ17-3) were detected in Miyazaki Prefecture. MZ17-3, which was detected at farm A, clustered with the two PCPV strains previously reported in Japan (IW10-H [16] and YG2828 [11]). These are the only two PCPV strains reported in Japan, and both were detected from cattle showing papular stomatitis but no clinical signs on the teats and udder. However, MZ16-1, which was detected from a calf without intraoral clinical signs at farm B in 2016, fell in a different cluster from the PCPV IW10-H and YG2828 strains. These results suggest that multiple genetically different PCPV strains are spreading among cattle with or without intraoral clinical signs in Japan. It has been reported overseas that co-infection with multiple BPSVs causes more severe clinical signs [4]. In Japan, only two studies detecting PCPVs have been reported [11, 16], and the prevalence of co-infection of PCPV with BPSV and other viruses is unknown. Further investigations in the field and of the pathology of parapoxvirus infections are needed.
Best management in isolated right ventricular hypoplasia with septal defects in adults

Hypoplastic right ventricle is a rare congenital disease usually associated with pulmonary atresia or tricuspid atresia. Isolated right ventricular hypoplasia is a rare anomaly without important valvular abnormalities. It is associated with interatrial septal defects leading to right-to-left shunting of blood. Patients with isolated right ventricular hypoplasia usually have different and variable courses. In some patients, it is recognized in the perinatal period and necessitates prompt intervention; nonetheless, there are some reports of this anomaly in old age with no significant symptoms. In this report, we describe the clinical data and management of 6 adult cases with isolated right ventricular hypoplasia treated medically or surgically based on the severity of the disease and symptoms, and then offer an in-depth discussion regarding this rare anomaly.

Introduction

Right ventricular hypoplasia, unassociated with severe pulmonary or tricuspid valvular malformations, is a primary congenital abnormality with an underdeveloped trabeculated sinus of the ventricle. In right ventricular hypoplasia, a patent foramen ovale or an atrial septal defect can serve as an escape valve. 1 The clinical manifestations are usually nonspecific and vary with the degree of hypoplasia and right ventricle compliance as well as the degree of right-to-left shunting via the atrial septal defect or the patent foramen ovale. Physical examinations are often normal. 2 The diagnosis and evaluation of the severity of the syndrome are routinely performed by echocardiography, cardiac magnetic resonance imaging, and hemodynamic evaluation. 3 Mild hypoplasia of the right ventricle can be corrected by atrial septal defect closure; nevertheless, for subjects with severe hypoplasia, the Glenn shunt, one-and-a-half ventricle repair, or even the Fontan surgery should be chosen. 4 This report describes the history, para-clinical data, and management of 6 patients with isolated right ventricular hypoplasia treated medically or surgically based on their symptoms.

Case Presentation

Case 1

A 22-year-old woman was admitted to our clinic with mild dyspnea. The patient had a history of cardiac disease in her brother, who had died. Additionally, she had had an abortion due to cardiac abnormalities in the fetus. Physical examinations revealed cyanosis (oxygen saturation at room air = 85%), clubbing, and a systolic murmur in the mitral area (2/6) radiating to the anterior axillary line. Two-dimensional Doppler echocardiography demonstrated enlargement of the right atrium and a normal-sized tricuspid valve annulus; however, in the subcostal view, there was hypoplasia of the apical portion of the right ventricle while the sub-pulmonary outflow was normal. The systolic pulmonary artery pressure was 30 mm Hg. A redundant interatrial septum with a large atrial septal defect and a bidirectional shunt was illustrated, and there was also evidence of a small apical muscular ventricular septal defect with no significant left-to-right shunting (Figure 1B and C). Cardiac magnetic resonance revealed a normal volume and function for the left ventricle, while a large atrial septal defect and a small ventricular septal defect were seen along with localized interventricular bulging of the septum at the site of the ventricular septal defect (Figure 2B).
According to the cardiac magnetic resonance results, the right ventricle volume was at the lower normal limit with mildly reduced function (Figure 2A). Cardiac catheterization was performed in order to delineate the right ventricle and pulmonary artery hemodynamics and pressures. The data obtained from the right ventricle angiogram showed a small right ventricle with apical hypoplasia. The right atrial pressure and right ventricular end-diastolic pressure were increased, and evidence of an atrial septal defect with right-to-left shunting was observed. The saturation and pressure data from right-heart catheterization and the cardiac magnetic resonance results are depicted in Table 1 and Table 2, respectively. Medical treatment with diuretics was given, and the patient was subsequently discharged. Re-evaluation was performed after 6 months by catheterization. The right ventricular end-diastolic pressure and right atrial pressure had decreased significantly, and the sizing balloon occlusion test showed no dramatic changes in the pressures. Accordingly, the atrial septal defect was closed percutaneously with an Occlutech® device (21 mm). After 1 month, the patient had neither symptoms nor cyanosis, and follow-up after 2 years showed no symptoms.

Case 2

A 36-year-old woman with a history of surgical atrial septal defect closure many years previously was referred to our hospital for an evaluation of her dyspnea and cyanosis. The patient mentioned a history of pulmonary stenosis in her son, who had undergone pulmonary valvuloplasty. Her surgical records from the time of the atrial septal defect closure showed that the surgeon had decided to re-open the patch because of her unstable hemodynamic status and the infeasibility of weaning from the cardiorespiratory pump. Electrocardiogram (ECG) data showed sinus rhythm and incomplete right bundle branch block. Echocardiographic assessments revealed a normal size and mild dysfunction of the left ventricle, abnormalities in the apical part of the right ventricle with moderate tricuspid regurgitation, and a normal-sized tricuspid valve annulus. In the interatrial septum, there were 2 residual septal defects, about 12 mm in size, with right-to-left shunting. The function and size of the pulmonary valve were normal. For further evaluation, cardiac catheterization was done, and the right ventricle angiogram confirmed a hypoplastic right ventricle at the apical portion with moderate systolic dysfunction. Additionally, cardiac pressure assessments showed elevated right atrial and right ventricular pressures (Table 1 and Table 2). Cardiac magnetic resonance also showed an abnormal shape of the right ventricle, with the right ventricular end-diastolic volume at the lower normal limit. Given the severe symptoms in this case, a one-and-a-half ventricle repair (Glenn shunt, tricuspid valve repair, and reduction of the atrial septal defect size) was performed. Early postoperative assessments showed improvement of the functional class and cyanosis.

Case 3

A 35-year-old woman was admitted to our hospital with progressive dyspnea and cyanosis, with an oxygen saturation of 78% at room air. This patient had a family history of severe pulmonary stenosis in her child. Physical examinations revealed cyanosis and digital clubbing, and heart auscultation was normal.
Echocardiographic and cardiac magnetic resonance assessments showed a normal size and function of the left ventricle; however, the right ventricle was small, with a large patent foramen ovale leading to a right-to-left shunt. Hemodynamic and cardiac pressure evaluations were done through catheterization (Table 1). The results showed increased right ventricular end-diastolic pressure and right atrial pressure, with evidence of right-to-left shunting. Given the patient's cyanosis and the above-mentioned symptoms, the Glenn procedure was performed. Clinical evaluations soon after the surgical operation showed improvement in dyspnea and an increase in oxygen saturation (92%). Nonetheless, in further follow-ups, she complained of an exacerbation of her symptoms relative to the early days following surgery.

Case 4

A 19-year-old man was referred to our clinic with the chief complaint of dyspnea. Physical examinations revealed cyanosis and clubbing, with an oxygen saturation of 82% at room air. ECG showed sinus rhythm without significant ST-T changes. Based on the echocardiographic data, the size and function of the left ventricle were normal, while the right ventricle was small, with apical hypoplasia and moderate dysfunction. Assessment of the tricuspid valve revealed moderate tricuspid regurgitation, and the pulmonary artery pressure was normal. There was also a large atrial septal defect with a bidirectional shunt. Angiographic and catheterization data showed increased right atrial pressure and right ventricular end-diastolic pressure (Figure 3). On the basis of the information gathered, the patient underwent the Glenn surgery with incomplete closure of the atrial septal defect. The surgery conferred significant alleviation of the patient's symptoms.

Case 5

A 28-year-old man was admitted to the emergency service of our hospital with progressive dyspnea and cyanosis. Clinical examinations showed normal lungs, a systolic murmur, and an oxygen saturation of 79% at room air. Echocardiographic evaluations showed a normal left ventricle, whereas there was hypoplasia of the apical part of the right ventricle with moderate dysfunction, moderate tricuspid regurgitation, and a moderately sized atrial septal defect with a right-to-left shunt. Cardiac catheterization confirmed elevated right-chamber pressures. The right ventricle angiogram provided evidence of a hypoplastic right ventricle with moderate dysfunction, absence of the apical part of the trabeculated right ventricle, and an atrial septal defect with right-to-left shunting (Figure 4A and B). Given the patient's severe symptoms and cyanosis, a one-and-a-half ventricle repair (Glenn shunt and incomplete closure of the atrial septal defect) was suggested. The patient refused surgery, and medical treatment with spironolactone, digoxin, and furosemide was chosen for him.

Case 6

A 20-year-old woman with a history of mild dyspnea and cyanosis from infancy was referred to our hospital. She had a family history of congenital heart disease in her sister. Clinical examinations, followed by echocardiographic assessments, showed a normal left ventricle, a small right ventricle (Figure 1A) with moderate dysfunction, and a large stretched patent foramen ovale with right-to-left shunting. Cardiac magnetic resonance confirmed all the above-mentioned results.
Further cardiac catheterization illustrated the absence of the apical part of the right ventricle, increased right atrial pressure and right ventricular end-diastolic pressure, and evidence of a patent foramen ovale with bidirectional shunting. Medical treatment with diuretics was started for the patient, and 3 months after the medical therapy, catheterization and the sizing balloon occlusion test were performed. Given the absence of any dramatic changes in right atrial pressure and right ventricular end-diastolic pressure, the patent foramen ovale was closed with an Occlutech® atrial septal defect device (18 mm). After the procedure, the patient's cyanosis was eliminated, and she was asymptomatic in her follow-ups.

Discussion

Isolated right ventricle hypoplasia is a rare anomaly in which the trabecular component of the right ventricle is absent or underdeveloped, without significant tricuspid or pulmonary valve malformations. From 1950 to 2009, there were 74 reports of patients suffering from isolated right ventricle hypoplasia in 41 different studies, as reviewed by Lombardi et al. 1 In the present study, we summarized data from 2010 to 2019 (Table 3). Only a few cases of right ventricle hypoplasia are diagnosed in adulthood; most manifest in infancy.

Pathophysiology

The right ventricle consists of 3 parts: the atrioventricular valve as an inflow tract, the trabecular portion, and the outflow tract. 5 A hypoplastic right ventricle is characterized by an anomaly of 1 or more of these 3 components causing a reduction in the chamber size. A hypoplastic right ventricle can be associated with different abnormalities such as pulmonary valve atresia and tricuspid valve atresia, as well as other congenital defects such as interventricular septal defects. 6 The underdevelopment of the trabecular portion with normally developed pulmonary and tricuspid valves can lead to a reduction in the right ventricle size, characterized as isolated right ventricle hypoplasia, which is a rare disease with only a few cases reported. 7 The main cause of isolated right ventricle hypoplasia in most cases is unknown; in some cases, however, the cause appears to be familial. 8,9 We detected this association in our cases 1, 2, 3, and 6; however, our genetic tests on these cases, aimed at finding a mutation or a single-nucleotide polymorphism, yielded no evidence to confirm this hypothesis. The existing literature lacks a reliable estimate of sex predominance in isolated right ventricle hypoplasia. 1

Clinical presentation

As mentioned earlier, isolated right ventricular hypoplasia is a cyanotic congenital heart disease without significant associated anomalies, and its clinical manifestations and the onset of symptoms are highly variable depending on the severity of hypoplasia. In cases with less malformation, symptoms such as cyanosis, dyspnea, and digital clubbing may appear later, whereas in more severe cases, congestive heart failure and cyanosis can appear during infancy. 10 In general, the severity and timing of the onset of symptoms vary greatly from patient to patient, depending on the severity of the underlying abnormality and the degree of right ventricle hypoplasia. 11 In addition, published articles have reported a few cases in which the diagnosis was made during surgery. 12
Diagnostic methods

Complementary diagnostic tests are crucial to the differential diagnosis of isolated right ventricular hypoplasia from other diseases presenting with cyanosis; the diagnosis is often established by echocardiography, cardiac magnetic resonance, or hemodynamic assessment. The simplest test is the ECG, which may show decreased right ventricular electrical forces, signs of right atrial or biatrial hypertrophy, leftward deviation of the cardiac axis, and sometimes atrioventricular conduction disorders; in many patients, however, it is not diagnostic. 13,14 Chest X-ray contributes little to the diagnosis, as it may show a normal cardiac silhouette, cardiomegaly, and/or normal or decreased pulmonary blood flow. 15 Echocardiography is another diagnostic method that is simple and widely available. We used it as the first-line modality for evaluating these patients; it usually shows a decreased right ventricle size and hypoplasia of the trabecular portion with no anomalies of the tricuspid and pulmonary valves. Besides, a patent foramen ovale or interatrial septal defect appears in most patients as a compensatory component with a right-to-left shunt. 16 Cardiac magnetic resonance can provide comprehensive evaluations and is widely used in the assessment of congenital heart defects. It does not use ionizing radiation or potentially nephrotoxic contrast agents, and it provides precise data on ventricular function and volume and accurate quantification of right-to-left shunts via the atrial septal defect or patent foramen ovale. 17 Cardiac catheterization demonstrates a rise in end-diastolic pressures and in right atrial pressure. The presence of a mixed shunt at the atrial level in the absence of pulmonary hypertension and tricuspid or pulmonic valvular disease demonstrates the restriction to inflow caused by the hypoplastic right ventricle. 18 Furthermore, an increase in right atrial pressure and in initial and final diastolic pressures demonstrates reduced ventricular filling capacity. Moreover, right-to-left or bidirectional shunts can be found, as well as oxygen saturations of 66% to 90%. 11 One of the best modalities for evaluating the right ventricle shape, and especially for detecting the absence of the right ventricular apical part, is right ventricle angiography. A precise evaluation of isolated right ventricle hypoplasia needs multimodality imaging.

Management

In patients with isolated right ventricle hypoplasia, decision-making vis-à-vis medical treatment, catheter intervention, or surgical treatment depends on the severity of the disease and the symptoms. The right ventricular size measurement is the most important factor in choosing the appropriate approach. A sizing balloon occlusion test in the catheterization laboratory or intraoperatively can provide informative data to identify patients eligible for simple closure of the atrial septal defect by device or surgery. The atrial septal defect should be closed only when the right ventricle can adapt to the increased volume load. 19 In cases with severe hypoplasia and significant elevation of right atrial pressure and right ventricular end-diastolic pressure after the balloon occlusion test, incomplete closure of the atrial septal defect with a superior vena cava-to-pulmonary artery connection (Glenn shunt) is recommended. The aims of the surgical repair of isolated right ventricle hypoplasia in this situation are to offload the small right ventricle through a direct connection between the vena cava and the pulmonary artery and to completely or incompletely close the atrial septal defect to improve cyanosis.
In cases with very severe hypoplasia, the Fontan operation is sometimes the better surgical approach. Our postoperative evaluations of the patients showed that all the chosen interventions were successful in alleviating the symptoms.

Conclusion

With regard to the different presentations of isolated right ventricle hypoplasia, we suggest meticulous examination and evaluation of patients with an atrial septal defect presenting with cyanosis. The evaluation of our cases revealed the importance of combining echocardiography, angiography, and cardiac magnetic resonance with physical examination and clinical history taking for a precise assessment of patients and for decision-making about medical or surgical treatment. Early interventions for patients with isolated right ventricle hypoplasia, decreased pulmonary blood flow, and cyanosis may include catheter-based or surgical closure of the atrial septal defect when the right atrial pressure and right ventricular end-diastolic pressure are reasonable (especially with a good hemodynamic response to temporary balloon occlusion), Glenn and/or Fontan surgery for offloading the right ventricle, and, in some patients, one-and-a-half ventricle repair.
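As a worked illustration of the oximetry data obtained at catheterization in these cases, the pulmonary-to-systemic flow ratio (Qp:Qs) can be estimated with the standard Fick shortcut; this formula is general knowledge rather than taken from the article, and the saturation values below are hypothetical.

```python
def qp_qs(sao2, smvo2, spvo2, spao2):
    """Qp:Qs from the Fick principle using oxygen saturations (%):
    sao2 systemic arterial, smvo2 mixed venous,
    spvo2 pulmonary venous, spao2 pulmonary arterial."""
    return (sao2 - smvo2) / (spvo2 - spao2)

# Hypothetical right-to-left atrial shunt: systemic arterial
# desaturation (82%) despite near-normal pulmonary venous saturation.
print(round(qp_qs(sao2=82, smvo2=60, spvo2=98, spao2=62), 2))  # ~0.61
```

A ratio below 1 is consistent with the net right-to-left shunting described in these patients.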
MODELLING THE CONSTRAINTS OF SPATIAL ENVIRONMENT IN FAUNA MOVEMENT SIMULATIONS: COMPARISON OF A BOUNDARIES ACCURATE FUNCTION AND A COST FUNCTION

Landscape influences fauna movement at different levels, from habitat selection to choices of movement direction. Our goal is to provide a development frame in order to test simulation functions for animal movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and those that hinder them. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements, and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

* Corresponding author

INTRODUCTION

1.1 General context

The context of our research refers to landscape planning. When defining projects, planners need knowledge and insight on ecosystem functioning. Animal species are part of current ecosystems. We focus on how fauna movement can be taken into account within future projects. Movements play a part in the survival of individuals and act upon the environment. Movement patterns depend on the species, as species have different needs and abilities to move. Patterns are also related to the general type of environment where a species lives and, at a more precise level, to the characteristics of its home range.
An important aspect to recall is that knowledge on animal species requires a great amount of field work framed by precise protocols. Electronic devices such as radio- and satellite-based technologies help to increase the amount of information on where individuals go and how they move. Spatial features are described in databases created from field work and from remote sensing techniques. Databases tend to be enriched and more accurate, in particular concerning attributes and spatial unit aggregation on land cover and land use. The selection of the data needed for a particular analysis is a problem tackled by Geographical Information Science. It includes being able to define the quality of the data and how it propagates into the results (Devillers et al., 2007). It also includes at what scale data should be considered (Wilkin et al., 2007) and how they should be analysed and the results interpreted (Li & Wu, 2004). Factors such as the trophic specialization rank of species can be taken into account while defining which spatial features should be studied. The more specialized species are, the more dependent on their environment they tend to be (Holt, 1996). Besides, perception of the environment depends strongly on the species (Tolman, 1948; Von Uexküll, 1934).

Field information and recordings contribute to the knowledge on animals. However, they cannot be exhaustive, as not all study sites and species can be scanned and followed. In parallel, the need in landscape planning is to visualize the effects of future projects on movements. Modelling allows formalizing the concepts and mechanisms at stake in the movement patterns of an individual. It then allows testing scenarios corresponding to projects. Virtually modifying a landscape enables observing the consequences by comparing simulated movements before and after modification. In this article, the initial need was to be able to simulate movements in order to test the expected effects of planning projects.

1.2 Previous works and our problematic

Modelling fauna movement implies identifying what acts upon it. Elements influencing fauna movement come from different sources. They come from the environment: the living environment (other individuals, microorganisms including bacteria, for instance), the spatial surroundings (e.g. landscape features, mineral fluxes and pollution particles), and other variables like acoustics and light. Another aspect to be taken into account lies in the characteristics of the species: its capacities and needs for living. Landscape elements will not have the same effect depending on the species. Habitat preferences are defined partly by geophysical features like relief (Dickson & Beier, 2007) and by components like vegetation type and cover (Bélisle et al., 2001). To define models of animal movements, it is necessary to identify hindrances and corridors. These two functions are linked to certain habitat (where animals live) and travel-area (where they move) preferences. Preferred places can be located and characterised by comparing the proximity of individuals during their usual activities with their larger surroundings (Matthiopoulos, 2003).
A large number of models for animal movement have been defined. General models can be listed in several categories, including cost functions, object-oriented approaches, and statistical approaches. Approaches relying on cost functions generally divide the studied areas into regular units. For each unit, landscape elements are associated with a cost value, and a function estimates the cost, or the facility, for individuals to move from a starting point towards various directions (La Morgia et al., 2011; Palmer et al., 2011). Object-oriented programming formalizes explicitly the characteristics and the properties of individuals as well as those of their environment (Vuilleumier & Metzger, 2006). What we call statistical approaches integrate equations that evaluate the probability of movement in a direction; they can rely, for instance, on differential equations, as in Coulon et al. (2008). Modelling movements can also rely on random walks, as in Miller & Maude (2010). The parameters of the different approaches have to be fixed depending on the objectives of the study. Various types of movements (daily movements, seasonal migrations, dispersal) can be modelled by changing the values of the parameters, as can the studied influence on movement patterns (landscape, other individuals).

Our work stands in the modelling of movement within a geographical space. It focuses on the characterization of this space depending on the species. Our contribution is to use high-scaled data describing the spatial environment, these data being available for the whole French territory. From these data, we aimed at formalizing the influence of each type of landscape feature depending on study cases. We worked on medium-sized animal species whose movements are coherent with the databases used to describe space. Some problems are related to data quality. Qualification of data is a major issue in analysing processes and in re-exploiting the data for assessment objectives. From fauna movement analysis, it can be noticed that the data used for discovering and corroborating knowledge are central. Indeed, several aspects of the data intervene, in particular:
- Geometrical accuracy
- Exhaustiveness of the cover
- Attribute definition and details
Spatial scale is defined depending on the adaptability of data to the represented element or phenomenon. It depends on geometry accuracy and spatial aggregation. Temporal accuracy is linked to the time separating updates and to the adequacy between the date of the data and the reality at that moment. Our problematic is to define what data are needed for understanding and representing fauna movements and how these data can be treated to be used in evaluating landscape planning projects. This question is tackled through a movement simulation proposal.
1.3 Objective and approach

Our main objective is to propose a simulation model for movements. Modelling fauna movement is quite a challenge, as many parameters are involved. This is why we focused on the interactions between animal species and landscape features. We also limited the scope to daily movement. We aimed at defining a model that formalizes a) the influence of landscape features on movements and b), for more realism, movement patterns in terms of travelled distances and activity rhythms. We implemented the model in order to be able to simulate trajectories. Changing the spatial environment will then allow visualising the consequences of modifications on the simulated trajectories, and so formulating hypotheses about the possible consequences on real fauna movements. We believe that modelling all the elements that influence and define the patterns of animal movements would be too complex. Being aware of that complexity, we have chosen to focus on only one part of the influence and to exploit existing data for the sake of clarity. The application of such modelling lies more in landscape planning, including communication towards stakeholders and a basis for debates, than in trying to discover new animal behaviours.

Our global approach first consisted in gathering information on animal movements depending on the species. Then we proposed a model that formalizes available data and knowledge on individuals' behaviours and the relations between individuals and their environment (part 2). We present our proposed model in the next section. We detail the approach for simulating trajectories, which is based on agent-oriented modelling. In the results (part 3), we focus on the last part of the global methodology: two simulation functions that have been proposed and tested.

2.1 Data and knowledge for modelling

To establish a model, we needed to have knowledge about the influence of landscape on animals. Three types of sources were exploited: literature, experts and data analyses. We analysed particular study cases with their corresponding datasets. We then confronted the results with previous conclusions from works described in the literature, especially the ecology literature on spatial behaviours and the apprehension of landscape elements depending on the species. The databases on animal movements we had access to contain GPS tracking records from collars. Several species were concerned; however, in this article we focus only on the study case of red foxes in an urbanised environment. The data come from an existing survey led by the ANSES agency. The GPS recordings have an average time interval of 15 minutes over 24 h for 4 individuals. In this dataset, it is considered that the GPS planimetric coordinates were accurate to within about 20 m. The urban environment contains masks for the satellite signal, such as buildings. Wooded vegetation likewise has a masking effect, which can occur when foxes stay in patches to rest or to hide. We did not use the Z coordinates given by the GPS, as their accuracy can be worse than 20 m; we chose instead to use the projection onto the DTM. The data describing the spatial environment come from diverse institutional sources. BD TOPO® and the DTM from RGE®ALTI are produced by the French mapping agency IGN. BD TOPO® is a metric database, meaning that the objects are digitized within 1 meter accuracy.
2.2.1 Description of the databases

A first step was to analyse the data. We based the analyses on the comparison between landscape elements near the individual and those further away. This allows the extraction of preferred landscape elements and of those that are avoided or that can hinder movements. We do not present those analyses in this article (details can be found in Authors-xx). The accuracy of the input data impacts the detected spatial behaviours. Spatial preferences for particular land covers could be identified, as well as avoidance of transport infrastructures. However, the exact place where an individual crosses a road, or the pattern of its movement to by-pass buildings, remains unknown. It is not only a matter of completeness and accuracy in the data describing the study site but also a matter of the capacity of the tracking devices. Results from the analyses of the study cases were then confronted with the literature. Our aim was not to discover new behaviour but to characterize properly the movement in our study cases. This enabled the construction of the concepts relevant to our model. The model focuses on the type of influence associated with landscape features. We present the influences identified in the case of the red fox in the next section (2.2.2). These correspond to the conceptual knowledge given as input to the model. Four types of roles for landscape features were determined:
- Obstacle
- Corridor for movement
- Interesting element
- Avoided element
The influences associated with landscape features depend on the animal species. For the red fox, the knowledge we defined in the model is synthetized in Figure 1. The red fox is a generalist species that can adapt to various environments. We focused on the case of a peri-urban area. Individuals were observed covering large distances at night, up to 10 km, within an area of around 1.5 km². The given values are an average for the four individuals. Landscape features are linked to diverse roles. For example, railway lines can reduce crossings, though their surroundings can be corridors due to the vegetation and to low human disturbance.

2.2.2 The agent-based simulation model

Our proposal for simulating movements is agent-oriented. An agent is an individual. The environment includes landscape features with geometry and attributes on their characteristics and their influences. Activities are indirectly enclosed in two phases: a phase during daytime corresponding to small movements, and a phase of nocturnal foraging. In the model, we favoured the second phase, corresponding to larger movements, as well as the relations between individuals and landscape features. As a consequence, the results are simulated trajectories over longer distances than those usually travelled by an individual in 24 h. We are interested in daily movements; we thus left aside dispersal and migration dynamics.

The strategy for constructing a trajectory is as follows:
(1) The agent records its location and perceives its spatial environment within a radius.
(2) The agent selects an element as a destination. This element is chosen if it is inside the perception radius and if it corresponds to the most accessible (fewest obstacles in between) and to the largest interesting area (building or vegetation patch as described in Figure 1; commercial and industrial areas are not considered here as they correspond to typology classes).
(3) The agent moves towards the destination step by step. It calculates its trajectory thanks to one of the two functions that we will evaluate and to a medium speed.
(4) When the agent reaches the destination, it stays inside it (vegetation patch) or around it (building), as if it exploits the potential resources.

We developed two functions for constructing the trajectories. The first function takes into account the boundaries of the landscape elements. The second function relies on a cost valuation of how elements hinder or facilitate movements. The knowledge model presented in Figure 1 remains valid for the two methods. Figure 2 illustrates the principle of the two functions. We present the results in section 3, along with the details of the functions and the protocol implemented for evaluating their advantages and drawbacks.

RESULTS

We first present the results of animal movements depending on the trajectory simulation functions, and then we analyse the results.

3.1 The proposed two trajectory functions

We have proposed two simulation functions so that an agent is able to build its trajectory. Our concern focuses on step (3) of the strategy for simulation: a destination has been selected and the agent is heading towards it. The two methods differ in their consideration of the spatial environment. The roles of landscape elements depend on the species, here the red fox.

Function (a), boundaries accurate: The first method takes into account the boundaries of the features, as shown in Figure 3. This implies that obstacles are avoided: buildings are walked around. A permeability value has been attributed to roads, railways and water lines according to their importance (traffic intensity and width). Vegetation patches are destinations, as they represent interesting elements, especially for hiding.

Figure 3. The object-accurate function: the points of the trajectory built step by step by an agent. In this example, buildings are avoided and roads have permeability values.

Function (b), interest sum: The second method summarizes the cost values of the landscape elements around an agent. The spatial environment around the agent is considered inside a circle divided into sections. The direction taken by the agent is the direction of the section with the lowest cost, and so with the highest value given by our interest function. The interest function is defined as follows:

I = I_favour + I_obstacle + I_direction    (1)

where
I = total interest for movement associated with a section
I_favour = index for the presence of corridors in the section (vegetation)
I_obstacle = index depending on the number of obstacles (buildings, roads, railways, water lines)
I_direction = index for the direction of the section

All indexes used in the sum in function (1) are whole numbers between 0 (high-cost section for movement) and 3 (low-cost section). The function is illustrated in Figure 4.

Figure 4. The interest function: it takes into account the cost values of landscape elements for movement. In this example, half a circle represents the surroundings of the agent; it is divided into 5 equal sections and its main direction is towards the destination. The more interesting a section is, the redder it is.
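A minimal Python sketch of function (b) follows, under our reconstruction of equation (1) as a plain sum of the three indexes; the section layout and the index values are illustrative, not the study's parameterization.

```python
import math
import random

def section_interest(i_favour: int, i_obstacle: int, i_direction: int) -> int:
    """Equation (1): total interest I for one section (each index in 0..3,
    where 3 means a low-cost section for movement)."""
    return i_favour + i_obstacle + i_direction

def choose_direction(sections):
    """sections: list of (bearing_radians, i_favour, i_obstacle, i_direction).
    Returns the bearing of the most interesting (lowest-cost) section;
    ties are broken at random, mirroring the stochastic part of the model."""
    scored = [(section_interest(f, o, d), bearing) for bearing, f, o, d in sections]
    best = max(score for score, _ in scored)
    return random.choice([bearing for score, bearing in scored if score == best])

# Five sections of a half-circle facing the destination (bearing 0).
sections = [
    (-math.pi / 3, 1, 2, 1),  # some vegetation, few obstacles, off-axis
    (-math.pi / 6, 0, 1, 2),
    (0.0,          3, 3, 3),  # corridor, no obstacles, towards destination
    (math.pi / 6,  2, 0, 2),  # vegetated but obstacle-dense
    (math.pi / 3,  0, 3, 1),
]
print(choose_direction(sections))  # 0.0 -> heads straight for the destination
```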
3.2.1 The defined protocol

Our protocol to evaluate the differences between the two proposed simulation functions is to launch 50 trajectories under the same initial conditions for each trajectory function. The whole simulation strategy has a random part, when an agent selects destinations step by step. The study site is located in the suburban area of Nancy in the east of France, shown in Figure 5. This site is mainly covered by forests. Grassland and agricultural fields are not present in the dataset (represented by a white background on the map in Figure 5). On the west part of the site lie a town centre and a residential area. Some parameters have to be defined. The parameters are divided between those identical for the two functions and those that differ due to their specific inputs. We present the values in Table 2.

The values of the parameters were settled based on hypotheses on environment perception drawn from the knowledge corpus. We ran tests in order to evaluate the effects of parameter values on the results and to determine which was most likely close to reality. The medium speed is the average of the estimated speeds between GPS points over 24 h. The model integrates a decrease of the speed when an obstacle is encountered. We fixed the general perception radius of the spatial environment at 200 m. This value is a hypothesis, as there is no unique value for such perception; it depends on the type of environment and on the individual's state at a given time. We ran simulation tests with different values from 10 m to 500 m. The most coherent results in terms of covered surfaces correspond to values between 50 m and 200 m. We took the higher value to fit both the high-scaled function (boundaries accurate) and the medium-scaled one (interest sum). In the interest sum function, the sections are defined as a compromise between calculation time and the accuracy of the consideration of the spatial environment. For instance, if the number of sections is increased, the cut-out of space might end up with areas too small to differ from their neighbours. The distance of 50 m was fixed in order to be smaller than the perception radius and to be coherent with the distances covered between the simulated points. The possible re-selection of destinations is set for the red fox species, as it has been observed from the GPS data. Probabilities of crossing obstacles were set to fit global knowledge on the permeability of the diverse linear landscape elements (a code sketch of this crossing rule is given below). There are no values that fit every element or every context of crossing (e.g. necessity, exploration). It is also an objective of the model to allow modifying parameter values so as to test and compare with observations. The values of the parameters were fixed to run simulations of trajectories for red foxes with a short time length, around a few minutes, between points (e.g. perception radius). The values can be adapted to another simulation framework.
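The obstacle-crossing rule of the boundaries-accurate function can be sketched as follows; the segment-intersection test is standard, and the permeability value and geometries are illustrative assumptions, not the parameters of Table 2.

```python
import random

def _ccw(a, b, c):
    """Orientation test: True if points a, b, c turn counter-clockwise."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2 (standard CCW test)."""
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)
            and _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def attempt_step(pos, target, roads, rng=random):
    """Try to move from pos to target; roads = [(q1, q2, permeability)].
    Each crossed road is passed with its permeability probability;
    otherwise the agent stays put and will re-select a destination."""
    for q1, q2, permeability in roads:
        if segments_intersect(pos, target, q1, q2):
            if rng.random() > permeability:
                return pos  # crossing refused
    return target  # no obstacle, or crossing accepted

roads = [((0, -50), (0, 50), 0.3)]  # a medium-traffic road, 30% permeable
print(attempt_step((-20, 0), (20, 0), roads))
```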
3.2.2 Simulated trajectories obtained with the two functions

We launched simulations with both the boundaries-accurate function and the cost function. This allows mapping and explicitly comparing the results of the two functions. The experiments were run for a time length of 24 h, with agents standing for red fox individuals. The time length between simulated points is 2 minutes. A visual comparison is given in Figure 6. The initial point is located at the southern border of an urbanised area, and the majority of trajectories are located in and around this town. We defined criteria to compare the two functions quantitatively. The criteria deal with the form of the trajectory and with the relations between a trajectory and landscape elements.

Table 3. Comparison of the two functions based on criteria calculated for a 24 h simulation. Calculations are made from the simulated locations, related to geographical objects. The main difference lies in the total distance.

The first remark on the calculated criteria is that the distance from the cost function is longer than that from the boundaries-accurate one. First, both functions largely exaggerate the distances, as the model insists on representing movements; besides, we did not integrate the difference between day and night. Second, the difference between the two functions comes from the fact that the cost function is less sinuous than the other function. Indeed, paths are chosen depending on the direction assigned the lowest cost; there is no by-passing around buildings or when a road has to be crossed. When the agent exploits the reached destination, it stays inside or near the destination (phase 4 of the strategy in 2.2.2). In both functions, we fixed the time length of the exploitation at 10 minutes, which is quite short: for a 2-minute time length, it corresponds to 5 simulated points. This contributes to the increase of the covered distances, as the agent is in fact still moving during the exploitation, even if the distance can be shorter in case of an obstacle, with the reduction of speed. The relations between simulated trajectories and wooded areas are quite similar for the two functions, as is the average distance to roads (all types: with low and strong traffic). However, the majority of roads are associated with medium traffic, and for this type of road the cost function is associated with a larger average distance. This is partially because trajectories from the cost function can be located in areas far from the urbanised area, and so, as a consequence, far from roads. This last point also explains the difference in the number of different wooded areas crossed, as the cost function corresponds to larger distances.
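The form criteria of Table 3 can be computed directly from the simulated points; below is a minimal sketch, in which the exact definitions (total distance as the sum of step lengths, sinuosity as path length over straight-line displacement) are our assumptions rather than the paper's formulas.

```python
import math

def total_distance(points):
    """Sum of step lengths along a simulated trajectory."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def sinuosity(points):
    """Path length over straight-line displacement between the endpoints;
    1.0 means perfectly straight, larger means more sinuous."""
    straight = math.dist(points[0], points[-1])
    return total_distance(points) / straight if straight else float("inf")

path = [(0, 0), (30, 40), (60, 10), (100, 50)]  # illustrative points (m)
print(round(total_distance(path), 1), round(sinuosity(path), 2))  # 149.0 1.33
```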
Discussion

The process of modelling always introduces a simplification of reality. In fauna movements, the complexity of the influencing factors led us to focus on landscape elements available in existing geographical databases. We aimed at selecting few aspects of animal behaviours in the simulation process so as to be able to interpret the simulation results and to adapt the process. The comparison of two considerations of spatial information allows identifying the relevancy of each in fauna movement simulations. From the results of simulated trajectories based on the study case of red foxes, we state that each of the two functions is adapted to a certain spatial and temporal scale. The function that integrates accurate geometries of geographical objects allows a precise modelling of the known influence of geographical features on movements. This function is bottom-up in the discovery of trajectory patterns. The details of the trajectories around and through elements are interesting if they make sense in terms of influence of landscape features. We focused on the global pattern of the trajectory and the spatial preferences and obstacles rather than on the formalisation of the exploitation of precise landscape elements. The second function is based on a cost calculation of the effort the animals would need to generate. It summarizes, in the neighbourhood of an individual, the presence and amount of features that would facilitate or hinder movements.

The boundaries-accurate function corresponds to high-scaled data, whereas the cost function can be qualified as describing the spatial environment at medium scale. However, this medium-scale view of the spatial environment is made possible thanks to high-scaled databases. The consideration of the spatial environment is also linked to temporal aspects and to the type of movement. Taking into account the geometry of landscape features is relevant if movements are studied at a fine temporal scale, such as in Coulon et al. (2008) and Dickson & Beier (2007). In our observation datasets, daily movements were described. It was convenient to have a high frequency of GPS location recordings so as to analyse as precisely as possible the tracks taken by animals, as well as the influence of landscape elements on directions. The accuracy of the GPS tracks is quite adequate for the high-scaled spatial data. Despite positioning errors, it allows estimating the global trend of an animal's locations in relation to landscape features. Simulations were undertaken with a time length of 2 minutes, which is a very high frequency. This simulation frequency is coherent with the representation of by-passing buildings, roads or other obstacles, even if the description of how an individual goes around buildings and other obstacles can only be estimated from the observation datasets. We can say that the boundaries-accurate function is more realistic than the cost function.
However, it can be long to calculate (for instance in densely urbanised areas). Besides, it is sometimes difficult to model the exact movement behaviour of an animal in a way that would be relevant at that high level of accuracy. Indeed, observation data, even at high frequency, cannot always be precise enough to clearly understand how and where an animal apprehends landscape features (e.g. where and how roads are crossed). Dispersal movements generally cover larger distances than daily movements, such as single long-distance movements for exploration. The cost function might be more relevant for that type of movement, as in La Morgia et al. (2011), even if Vuilleumier & Metzger (2006) use a vector-based description of space for long-distance movements. In our tests, the simulated trajectories are less sinuous, as space units are considered and the global direction is decided from the analysis of the units. This can be associated with a lower consideration for the quality of geometrical information. However, it can be clearer for identifying areas of corridors or obstacles: not considering every building or landscape feature, but having synthetized information on landscape permeability.

To sum up, here are the links we found relevant between the simulation process and geographical scales.
(a) Geometry-accurate function:
- High-scaled geographical data are taken into account
- Precise apprehension of landscape features
- Daily movement and the influence of landscape elements on it
(b) Cost function:
- Synthetized information on corridors and obstacles for regular spatial units
- Long-distance movements (possibly exploratory or migrations)
Note that another aspect should be taken into account in the case of simulating different types of movements: landscape elements can have various influences or importance during a daily movement inside a home range compared with a migration.

For the evaluation of the results, the same study area is concerned as the site where the GPS data have been analysed. We can say that the quality of the data analyses depends on the quality of the input data. Concerning the description of land cover types and that of specific landscape features, the completeness of the themes influences the detection of spatial preferences. Besides, it is necessary to have access to relevant attributes on the spatial elements (e.g. type and structure of vegetation patches). Accuracy of the geometry in the spatial description and in the movement data limits misinterpretation of animal behaviour and its spatial preferences. In the data analyses, we noticed that relevant landscape elements are missing from the dataset used for describing the spatial environment. The needed enrichment depends on the species. For foxes, it can be relevant to have more precise information on favourable resource areas like agricultural fields or urban patches with little disturbance. Other useful data could concern hedgerows, which can act as corridors, and physical barriers such as fences. We assume that this note is also valid for the simulation process. Simulated trajectories could be improved with an enrichment of the environment description. In that case, the simulation model could increase the realism of movements through better-modelled interactions between individuals and space.
CONCLUSIONS

We have presented a model for simulating daily animal movements based on the influence of landscape features. We have developed two functions to calculate trajectories. We saw that each function is related to a certain spatial scale of geographical data. Our function taking into account the geometries of buildings is interesting, as it is a bottom-up approach for discovering global patterns in simulated trajectories and for linking them to the landscape description at fine or regional scale. Summarising the geographical information, as we did with the cost function, decreases the quality of the description of the spatial environment; yet it is useful to have a synthetic view of the potential paths along which animals would go. Our agent-oriented approach can accept the addition of individual parameters like personality parameters (e.g. boldness, perseverance) and, more generally, the individual characteristics that can greatly influence some types of movements (e.g. migration, exploration). The perspective is to test the method on another species. We mentioned that we have worked on and analysed data on roe deer as well as on red deer. Their movement patterns are quite different due to their needs. They are herbivores, which, among other aspects, leads them to exploit specific landscape elements as resources. It would be interesting to see how the two simulation functions fit their movement rhythms and paths, and what adaptations would be needed. With a well-parametrized model and a relevant consideration of the spatial environment, landscape planning projects can be assessed in order to specify the type of concrete action and its best location.

Figure 1. Knowledge on red fox movements in a peri-urban environment and the influence of landscape features.

Figure 2. Overview of the principle of the two functions for an agent to build a trajectory between its actual location and the selected destination: 1) the first function takes into account the geometry of landscape features; 2) the second function calculates a cost value for movement.

Figure 5. The study site in the east of France covers 15 km². It is mainly forested, with an urbanised area on the west part. The red rectangle corresponds to the area shown in Figure 6.

Figure 6. Two trajectories constructed with the two simulation functions, extraction concerning one hour. We notice that the one built with the boundaries-accurate function (a) is more sinuous than the one built with the cost function (b).

Table 1 (note). BD TOPO® includes more than 95% of the buildings and more than 98% of the road network; main forest stands are indicated, but some themes, such as crop fields, are excluded. The European database CORINE Land Cover takes into account homogeneous land cover larger than 0.25 km²; it is generally used for medium-scale mapping, and its advantage is that it characterizes land cover continuously thanks to a typology. Characteristics of the databases are given in Table 1.

Table 3 gives the averages over the 50 trajectories for several criteria.
Maternal betel quid use during pregnancy and child growth: a cohort study from rural Bangladesh

ABSTRACT

Background: Chewing betel quid (BQ) – a preparation commonly containing areca nut and slaked lime wrapped in betel leaf – is entrenched in South Asia. Although BQ consumption during pregnancy has been linked to adverse birth outcomes, its effect on postnatal growth remains largely unexplored.

Objective: We examined the associations of BQ use during pregnancy with children's height-for-age and body mass index-for-age z-scores (HAZ and BAZ, respectively) and fat and fat-free mass, along with sex-based differences in association, in rural Bangladesh.

Methods: With a prospective cohort design, we assessed BQ use among mothers enrolled in the Preterm and Stillbirth Study, Matlab (n = 3140) with a structured questionnaire around the early third trimester. Children born to a subset of 614 women (including 134 daily users) were invited to follow-up between October 2021 and January 2022. HAZ and BAZ were calculated from anthropometric assessment, and fat and fat-free mass were estimated using bioelectric impedance. Overall and sex-specific multiple linear regression models were fitted.

Results: Growth data were available for 501 children (mean age 4.9 years): 43.3% of them were born to non-users, 35.3% to those using prior to or less-than-daily during the survey, and 21.3% to daily users. No statistically significant associations were observed after adjusting for sex, parity, maternal height and education, and household wealth.

Conclusions: There was no effect of BQ use during pregnancy on postnatal growth in this study. Longitudinal studies following those born to heavy users beyond childhood are warranted to capture the long-term implications of prenatal BQ exposure.

Background

The practice of chewing betel quid (BQ) is widely prevalent in south and south-east Asia, the Pacific islands, and high-income countries with migrant populations from these regions [1]. BQ, known as 'paan' in Bangla and Hindi, is a chewable wrap ready for placing and retaining in the mouth. The preparation of BQ varies across regions and by user preferences. Nonetheless, it invariably contains areca nut and slaked lime (calcium hydroxide) – with or without catechu, cured tobacco leaves and various flavorings – enclosed in a leaf of the vine Piper betle (betel leaf). The core component, areca nut, is the fibrous seed inside the fruit of Areca catechu (oriental palm) [1,2]. An estimate published in 2002 [3] put the number of areca-nut users at 600 million globally.
Multiple studies have associated BQ use with oral cancers and precancerous conditions [6]. In addition, the non-cancerous health consequences of chewing BQ are primarily related to the metabolic effects and diabetogenic potential of areca alkaloids. Observational studies have linked BQ use with increased waist circumference [7], obesity [8], hyperglycemia, type 2 diabetes and dyslipidemia [8-10]. The metabolic impairments may propagate intergenerationally through parents who chew BQ habitually [11]. Understanding this intergenerational impact requires an examination of the relationship of BQ chewing during pregnancy with birthweight and postnatal growth, as both influence future cardiometabolic risk [12]. While an average reduction in birthweight of 89.5 g (p = 0.0028) [13] and a pooled odds ratio of 1.75 for low birthweight (<2500 g) [14] from maternal BQ use have been reported, the effects on growth during early childhood remain unexplored. The biological plausibility of a negative impact of BQ use during pregnancy is supported by mechanistic studies demonstrating that arecoline crosses the placenta [15], that arecoline is toxic to mouse embryos [16], and that arecoline exposure retards growth in zebrafish through cytotoxicity from depletion of intracellular thiols [17]. Nonetheless, whether the potential growth-retarding effects of using BQ during pregnancy persist into early childhood has not been examined in population-based studies.

The lifetime prevalences of BQ use in south and south-east Asia range from 2.3% to 43.6% [18]. Studies conducted in Bangladesh have documented prevalences of current BQ use of 32.7%-52.8% [19-21]. This high population burden underlines that the practice of chewing BQ – unlike smoking or drinking alcohol – is socially acceptable owing to long-standing cultural norms [22]. Furthermore, BQ use during pregnancy is considered a means of preventing morning sickness and a relaxant [23]. Unsurprisingly, the prevalence of BQ use among pregnant women can be as high as 61% in rural Bangladesh [24]. In 2021, Bangladesh was the second largest producer of areca nut globally after India, with an annual yield of 345.8 kilotonnes [25]. However, in contrast to maternal dietary behaviors and nutritional deficiencies, potential risks to child growth from maternal BQ chewing have received less attention. Stunting and underweight among under-five children in Bangladesh continue to be high (prevalences: 31.1% and 22.5%, respectively) [26], whereas overweight and obesity have started increasing [27]. As these growth abnormalities may influence cardiometabolic risk later in life [28], it is imperative to examine whether maternal BQ use affects child growth in a setting where habitual BQ consumption is common. Hence, we evaluated the associations of BQ use during pregnancy with four indicators of child growth – height-for-age z-score (HAZ), BMI-for-age z-score (BAZ), and fat and fat-free mass – along with potential differences in associations by sex in a rural community of Bangladesh.
Study design, setting and participants This prospective study drew on the PreSSMat (Preterm and Stillbirth Study, Matlab) birth cohort [29,30] that was set up in the rural area of Matlab, southeast Bangladesh, to examine the biological, environmental and socio-demographic predictors of adverse pregnancy outcomes. Matlab is located about 55 km to the southeast of the capital city of Dhaka, and is typical of many low-lying, riverine areas of Bangladesh where agriculture and fishing are the major sources of income [31]. The International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b) has been running a Health and Demographic Surveillance System (HDSS) in Matlab since 1966. PreSSMat utilized the bi-monthly HDSS household visits to identify pregnant women from those with overdue menstrual periods (≥2 weeks) using urine pregnancy tests. The women with a positive test were offered ultrasound to confirm the gestational age and viability of the pregnancy. In total, 3644 women prior to gestational week (GW) 20 were enrolled from May 2015 to June 2017. The enrolled women were followed prospectively till delivery and at 6 weeks post-partum to collect sociodemographic, anthropometric and behavioral data; biological samples; and information on gestational age, pregnancy complications and birth outcomes. Complete data on BQ use were available for 3140 women (median GW at assessment 28, interquartile range: 27-30). Out of these women, children born to a randomly selected subset of 500 women (which included 20 daily users of BQ) and to the remaining 114 daily users (n = 614) were invited to the growth follow-up between October 2021 and January 2022. Figure 1 at the beginning of the Results section illustrates the flow of participants into the present study.

Assessment of exposure A structured questionnaire based on the standard GATS (Global Adult Tobacco Survey) instrument [32] from the World Health Organization (WHO) was employed to assess maternal use of BQ. The enumerators had been trained extensively in the implementation of the field survey. According to precoded responses, the mothers were grouped as: non-users (did not use BQ in the current pregnancy), previous users (used prior to GW 27-30), current less-than-daily users (used on a less-than-daily basis at GW 27-30), and current daily users (used daily at GW 27-30). For the analysis, the previous and less-than-daily users were categorized together as having 'intermediate exposure' and the daily users were labelled as having 'high exposure'.

We employed leg-to-leg bioelectrical impedance analysis [35] using the tetrapolar Tanita TBF-300A analyzer for assessing body composition. The impedance value from the analyzer reading was extracted and a prediction equation [36] was used to derive fat-free mass (FFM) from the impedance. Khan et al. [36] developed and validated this equation against deuterium oxide dilution (adjusted R² = 0.89; standard error = 0.90; p < 0.001) for measuring total body water in a sample of children from Matlab. The values of FFM were subtracted from children's weight to calculate fat mass.
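As a minimal illustration of this body-composition step, the sketch below assumes the prediction equation is linear in the impedance index (height squared divided by impedance) and weight, which is typical of such equations but not stated in the text; the coefficients a, b and c are placeholders, not the published values of Khan et al. [36].

```python
# Illustrative sketch only: a, b, c are placeholder coefficients, not the
# published values of the Khan et al. [36] prediction equation.

def fat_free_mass_kg(height_cm: float, impedance_ohm: float, weight_kg: float,
                     a: float, b: float, c: float) -> float:
    """FFM assumed to be linear in the impedance index (height^2 / Z) and weight."""
    impedance_index = height_cm ** 2 / impedance_ohm
    return a + b * impedance_index + c * weight_kg

def fat_mass_kg(weight_kg: float, ffm_kg: float) -> float:
    """Fat mass obtained by subtracting fat-free mass from body weight."""
    return weight_kg - ffm_kg
```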
Additional variables of interest Children's sex (dichotomous: male/female) and age were ascertained during the follow-up. Maternal data extracted from the PreSSMat database included: age, height and weight at enrollment; parity; educational status in terms of completed years of formal education; and household asset score. The asset score was derived by principal component analysis [37] of data on ownership of durable items (e.g., television, almirah, mobile phone, bicycle, etc.), dwelling characteristics (wall and roof materials), source of drinking water and type of toilet used. A household wealth variable was created by converting the asset scores into tertiles, whereby the lowest, intermediate and highest tertiles represented the poorest, middle-status and richest households, respectively. Mothers' weight prior to GW 20 was measured with a bathroom scale (Seca Uniscale, 100 g) and height with a locally manufactured wooden scale (precision 0.1 cm) [29]. For describing their weight status, mothers were categorized as underweight (BMI <18.5 kg/m²), normal-weight (BMI 18.5-24.9 kg/m²), and overweight or obese (BMI ≥25 kg/m²) [38]. Maternal education was categorized into: primary or below (≤5 years of formal education), secondary (6-10 years), and higher secondary or above (≥11 years). Based on parity, the women were categorized as nulliparous, primiparous, and multiparous (≥2).

Statistical analysis Data were analyzed using R statistical software (version 4.3.0) [39]. Visualization of the numerical data involved examination of histograms, boxplots and quantile-quantile plots. Two right-skewed variables were natural log (Ln) transformed: fat and fat-free mass. Children with BAZ > +5 (n = 2) were flagged [33] and removed from the respective analyses. At the bivariate level, characteristics were compared across exposure categories using the Chi-squared test for categorical variables and one-way analysis of variance or the Kruskal-Wallis test for continuous variables. We fitted linear regression models – overall and sex-specific – to evaluate the associations of BQ use with the four growth parameters, and reported unstandardized regression coefficients with 95% confidence intervals (CI). We adjusted for children's sex (when not stratified by it), parity, and maternal height, education and household wealth based on a directed acyclic graph (Supplementary Figure S1). As maternal age and parity had a strong, positive correlation (Spearman's rho: 0.8, p < 0.0001), only parity was included in the model to avoid collinearity. Assumptions of linear regression were checked using quantile-quantile plots of the residuals and residuals-versus-fitted plots. To examine possible effect modification by child sex, a two-way interaction term (BQ use × child sex) was tested and found non-significant. Statistical significance was set at p < 0.05 for all analyses.

Ethics approval The Research and Ethical Review Committees of icddr,b (https://www.icddrb.org/), Dhaka, Bangladesh, approved the study (Protocol #14067; 13 April 2019). Participation was voluntary and participants retained the right to withdraw throughout. Written informed consent was obtained from the pregnant women before enrollment.

Results Figure 1 illustrates the flow of children into the 5-year follow-up. The mothers selected for the follow-up gave birth to 626 children including 12 pairs of twins. Of these 626 children, data from 501 (80%) were available around 5 years of age. The major reasons for loss to follow-up were: outmigration (n = 78), parental refusal (n = 15) and child death (n = 15).
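For readers who want a concrete picture of the adjusted models described under "Statistical analysis" above, the following is a minimal sketch. The original analysis was carried out in R; this illustration uses Python's statsmodels formula interface, and the file name and all column names (haz, bq_exposure, and so on) are hypothetical.

```python
# Hedged sketch of the adjusted linear regression models described above;
# the actual analysis was done in R 4.3.0, and all names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("growth_followup.csv")            # hypothetical data file
df["ln_fat_mass"] = np.log(df["fat_mass_kg"])      # natural-log transform of a skewed outcome

# bq_exposure has levels 'none' (reference), 'intermediate', 'high'
adjusted = smf.ols(
    "haz ~ C(bq_exposure, Treatment(reference='none')) + C(child_sex)"
    " + C(parity_cat) + maternal_height_cm + C(maternal_education) + C(wealth_tertile)",
    data=df,
).fit()
print(adjusted.params)                 # unstandardized regression coefficients
print(adjusted.conf_int(alpha=0.05))   # 95% confidence intervals

# Effect modification by child sex via a two-way interaction term
interaction = smf.ols(
    "haz ~ C(bq_exposure) * C(child_sex) + C(parity_cat) + maternal_height_cm"
    " + C(maternal_education) + C(wealth_tertile)",
    data=df,
).fit()
```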
Table 1 presents the basic anthropometric and socio-demographic characteristics of the mothers at enrollment and growth parameters of the children around 5 years of age. Out of the 501 mothers with corresponding data on children's growth, 217 (43.3%) did not report any BQ use when surveyed at GW 22-24; whereas 177 (35.3%) reported using BQ either in the previous trimester or currently on a less-than-daily basis ('intermediate exposure' category) and 107 (21.3%) were using daily ('high exposure' category). Median maternal age was higher among daily users than non-users (28.7 versus 25.2 years), and about 47% of the daily users were multiparous. Nearly 21% of the mothers were overweight/obese at enrollment and the proportion rose to 24.3% among the daily users. Males (52.9%) were over-represented among the children. Approximately 15% of the children were stunted and 7% were overweight/obese. The proportions of both stunting and overweight or obesity were higher among children born to daily users than those born to non-users. There were no differences in children's fat and fat-free mass levels by BQ exposure categories of the mothers (Table 1).

Table 2 presents the regression coefficients from overall and sex-specific linear regression models analyzing the associations of BQ use with the four outcome variables. None of the regression coefficients were statistically significant. When stratified by sex, female children born to daily users had on average 0.03 standard deviation (SD) lower HAZ, 0.06 SD lower BAZ, 2% lower fat mass and 1% lower fat-free mass than those born to non-users; but the regression coefficients did not attain statistical significance.

Discussion One-fifth of the children were born to mothers who used BQ on a daily basis by the early third trimester, but there was no association between BQ use and child growth around 5 years of age in this cohort study. In addition to the lack of statistical significance, the effect size (based on unstandardized regression coefficients) appeared notably small.
Given that the placenta is not an efficient barrier to areca alkaloids [15,40] and BQ use during pregnancy may lower birthweight [13], there may have been an attenuation of differences in postnatal growth by maternal BQ exposure. This attenuation could be reflective of a catch-up growth [41] among children born to mothers with higher exposure to BQ. Two aspects related to the daily users of BQ lend support to this proposition. First, we observed approximately 108 g (95% CI: 28-189; p = 0.008) lower birthweight among the children born to daily users compared to non-users after adjusting for confounders when the full cohort (n = 2570) was analyzed (manuscript under preparation). A small proportion of those born to daily users was premature: 14.9% in the full cohort and 13.1% in the present study (Table 1). Longitudinal studies on postnatal growth trajectory associate lower birthweight without prematurity with a catch-up in weight and height starting from 12 months [42,43]. Second, nearly a fourth of the daily users in our study were overweight or obese in early pregnancy (Table 1). Maternal overweight or obesity in early pregnancy has been linked to intense catch-up growth and more rapid weight gain among children during the first 3-5 years of life [44,45]. Whereas the catch-up growth may have attenuated the impact of BQ exposure, such catch-up growth tends to be associated with greater acquisition and more centralized distribution of body fat and insulin resistance later on [41,46-48], leading to a heightening of cardiometabolic risk.

Differences in how exposure was assessed (biomarker assay versus survey data) may affect studies exploring the impact of behavioral exposures – such as use of tobacco, alcohol and caffeine during pregnancy – on postnatal growth. A recent meta-analysis [49] of 21 effect measures from children aged <10 years documented a pooled mean difference of 0.23 kg/m² higher BMI due to maternal smoking during pregnancy. The authors noted that studies ascertaining exposure objectively from serum levels of cotinine – the main metabolite of nicotine – reported higher mean differences in BMI [49]. Conversely, Howe et al. [50] did not find any association between self-reported smoking during pregnancy and children's BMI change from 2 to 10 years in the ALSPAC cohort from south-east England. Muraro and colleagues [51] followed up a Brazilian birth cohort around 1.5 years (n = 2405) and 12 years (n = 1716) of age. They did not find any association between smoking during pregnancy (any trimester) and HAZ after controlling for relevant confounders [51]. Among 7597 mother-child dyads of the ALSPAC cohort, no association was observed between self-reported alcohol consumption during pregnancy and children's height or weight at 2 and 10 years of age despite a negative effect on weight and length at birth [52].

[Table 1 notes. Abbreviations: BMI, body mass index; HH, household; BAZ, BMI-for-age z-score; HAZ, height-for-age z-score. Values represent percentages for categorical variables; mean with standard deviation for continuous variables that were (approximately) normally distributed, or median with inter-quartile range for continuous variables that were skewed. ¹Born to mothers who used betel quid prior to, or at gestational weeks 22-24 on a less-than-daily basis; ²born to mothers who had been daily users of betel quid at gestational weeks 22-24; ³born before 37 completed weeks of gestation.]
Higher caffeine consumption during pregnancy has been associated with lower HAZ (~0.2 SD between top and bottom quartiles) but not BAZ or fat mass at ages 4-8 years in a cohort study from the US. The investigators ascertained the exposure with blood levels of caffeine and paraxanthine instead of self-reports, yet the magnitude of reduction in HAZ was fairly low [53]. Although influenced by methodological and population-level differences, these findings broadly point toward a recovery in postnatal growth that alleviates the impact of prenatal exposure to these substances to some extent. However, the link between this recovery and future cardiometabolic risk [48] needs to be examined for understanding the long-term implications. The stratified regression analyses did not reveal any notable sex-based differences in the impact of BQ exposure, except for HAZ, where the adjusted regression coefficient was positive for male but negative for female children of daily users (Table 2). Nevertheless, the associations did not attain statistical significance and the interaction terms incorporating children's sex were not significant either. The lack of statistical power in stratified analyses should be considered here, as there were 51 female and 56 male children born to 107 daily users in the present study. With sample proportions of 43% non-user, 35% intermediate exposure and 21% high exposure, and an alpha of 0.05 (two-tailed), there was 80% power to detect differences of about 0.5 SD across non-user versus high or intermediate exposure categories on sex stratification. Thus, our sample size was unlikely to have captured sex-based differences smaller than that.

Some limitations of the present study are noteworthy. We assessed exposure from responses to survey questions – without measurement of arecoline levels – which may have led to misclassification from recall bias and underestimation of the strength of associations. Residual confounding could not be entirely ruled out in spite of multivariable regression models accounting for confounders identified from a DAG. Attrition from outmigration, refusal to participate and child death contributed to a lowering of statistical power, particularly in stratified analyses. While initially planned, blood samples could not be collected from the children due to general reluctance and refusal by parents towards invasive procedures during the early post-COVID period when the follow-up was conducted. Consequently, we were unable to assess any effect of BQ exposure on cardiometabolic markers. The key strengths of the study relate to reliable data from a well-characterized birth cohort in a setting with long-established research infrastructure; rigor in maintaining internal validity; application of a validated equation for bioelectric impedance-based assessment of body composition; and use of simple, cost-effective tools that reduced inter-rater and instrumental biases. The findings are generalizable to under-5 children in Matlab, given the area-wide, HDSS-based recruitment of pregnant women in PreSSMat, and also to similar agrarian settings of rural Bangladesh.

Conclusion In this prospective analysis, no association was observed between maternal BQ use during pregnancy and child growth in terms of HAZ, BAZ, fat and fat-free mass around five years of age. Catch-up growth among children born to daily users may have played a role in attenuating the impact of BQ use. However, this sort of catch-up growth tends to be associated with future heightening of cardiometabolic risk.
Longitudinal studies following up those born to heavy users in adolescence and early adulthood remain key to understanding the implications and cardiometabolic sequelae of any such growth recovery following prenatal exposure to BQ.

Figure 1. Flow of children into the present study around 5 years of age. BQ, betel quid; GW, gestational week.

Table 1. Maternal and child characteristics.

Table 2. Association of maternal betel nut use during pregnancy with child growth at 5 years. CI, confidence interval; BAZ, body mass index-for-age z-score; HAZ, height-for-age z-score; Ln, natural-log-transformed variable (base of the log 2.71828). β represents the unstandardized regression coefficient. ¹Born to mothers who used betel quid prior to, or at gestational weeks 22-24 on a less-than-daily basis; ²born to mothers who had been daily users of betel quid at gestational weeks 22-24; ³adjusted for children's sex (when not stratified by it), parity, maternal height and education, and household wealth.
Autologous growth factors used for the treatment of recurrent fistula-in-ano: preliminary results

Dear Sir, The risk of postoperative complications such as fistula recurrence or incontinence increases in patients with recurrent fistulas and high transsphincteric and suprasphincteric fistulas [1, 2]. Due to the large number of postoperative complications, in recent years there have been some attempts to treat this disease conservatively. Methods of treatment include tissue adhesives and plugs that close the internal orifice, as well as autologous growth factors that are present in platelet-rich plasma and used as a "natural glue" to close the lumen of the fistula. Autologous platelet-rich plasma (APRP) is a platelet concentrate derived from centrifuged whole blood collected from a patient right before surgery. The platelet-rich plasma (PRP) obtained using the Gravitational Platelet Separation III System (GPS® III, Biomet Merck, UK) constitutes 10 % of the original volume of the collected blood, and more than 90 % of its mass is made of platelets. The platelet-rich plasma preparation is administered to the fistula tract following curettage (Fig. 1). We report our experience with the GPS® System in 2 patients with anal fistula.

Fig. 1 Autologous platelet-rich plasma administration to the fistula tract

A 50-year-old male underwent surgery for a recurrent suprasphincteric posterior horseshoe fistula at our unit. At the time of surgery, curettage of the fistula tract was performed, and then a GPS® III preparation was administered to the fistula tract. The patient was discharged home 2 days after surgery. Fourteen days after surgery, the patient presented at the ward due to an inflammatory infiltration. Two months after surgery, a follow-up examination revealed fistula recurrence. A 37-year-old male with a recurrent intersphincteric anterior fistula-in-ano, with a narrow transsphincteric branch running towards the puborectalis loop, also underwent surgical treatment at our unit. At the time of surgery, the main tract of the fistula was removed and APRP (GPS® III) was administered to the narrow lateral tract following curettage. The patient was discharged home on postoperative day 3. No recurrence of the fistula was reported within the 6-month follow-up period. The first experiences with using growth factors were related to the treatment of patients with chronic skin ulcers [3]. At present, indications for APRP therapy are being expanded. APRP is used in burn units and for the treatment of non-healing wounds. There are also reports of using APRP for the treatment of tendon injuries, muscle injuries, degenerative lesions of tendons and joints, and bone voids, mainly in maxillofacial surgery. APRP therapy is also offered by aesthetic surgery centres, where it is used to fill deep wrinkles. In the healing process, thrombin stimulates platelet alpha-granules to release growth factors which, through appropriate stem cell receptors, stimulate stem cell differentiation. Reports of using autologous growth factors for the treatment of fistula-in-ano date back to the late 1990s. In 1999, a paper on the treatment of patients with recurrent fistula-in-ano was published [2]. It described a group of 30 patients with complex fistulas, rectovaginal fistulas and vesicorectal fistulas. The cure rate was 60 %. A similar cure rate was reported by Cintron et al. [1]. In the literature, there are reports of using APRP as an element of multi-stage treatment of fistula-in-ano.
The first stage involves inserting a loose seton through the fistula tract in order to drain and heal any local inflammatory lesions. The second stage involves APRP administration, which resulted in fistula healing in as many as 75 % of patients [4]. Hagen et al. [5] used the two-stage method, and in 2009 they published the results of administering APRP to 10 patients with high transsphincteric fistulas. The first stage of treatment involved draining the fistula tract and then closing the internal opening with a patch of mucous membrane and filling the tract with APRP. After almost 2 years of follow-up, fistula recurrence was observed in 10 % of the patients. Available publications include papers that present small groups of patients, different types of fistulas qualified for APRP treatment and different methods of surgery. However, the results are encouraging since this method has no negative effect on anal sphincter function.
Determination of Modelling Error Statistics for Cold-Formed Steel Columns

In this article, an attempt has been made to estimate the Modelling Error (ME) associated with compression capacity models available in international standards for different failure modes of compression members fabricated from Cold-Formed Steel (CFS) lipped channel sections. For the first time, a database has been created using test results available in the literature for compression capacities of CFS lipped-channel sections. The database contains details of 273 compression member tests which have failed in different failure modes, namely, (i) flexural, torsional, flexural-torsional, local, and distortional buckling and (ii) failure by yielding. Only those sources which report all the details required to compute the capacities using different standards are included in the database. The results of experimental investigations carried out at CSIR-Structural Engineering Research Centre, Chennai, are also included in this test database. The international codes of practice used in the calculation of compression capacities of the database columns considered in this paper are ASCE 10-15 (2015), AISI S100-16 (2016), AS/NZS 4600: 2018 (2018), and EN 1993-1-3:2006 (2006). The ASCE, AISI, AS/NZS, and EN design standards have different design guidelines with respect to the failure modes; e.g., the ASCE 10-15 (2015) standard provides stringent criteria for the maximum width-to-thickness ratio for stiffened and unstiffened elements. Hence, guidelines for the distortional buckling mode are not provided, whereas the AISI S100-16 (2016) and AS/NZS 4600: 2018 (2018) standards provide separate guidelines for the distortional buckling mode and the EN 1993-1-3:2006 (2006) standard considers a combined local and distortional buckling mode. Further, the sample size for each design standard varies depending on the design criteria and failure mode. Studies on the statistical analysis of ME suggest that the compression capacity prediction models for the flexural-torsional buckling mode are associated with large variation irrespective of the design standard. Similar observations are made for the flexural buckling model as per the EN 1993-1-3:2006 standard and the distortional buckling models as per the AISI S100-16 (2016) and AS/NZS 4600: 2018 (2018) standards. The compression capacities for the test database sections are evaluated by neglecting the partial safety factors available in the design standards. The probabilistic analysis to determine the statistical characteristics of compression capacity indicates the importance of consideration of ME as a random variable. Hence, the ME results will be useful in code calibration studies and may have potential reference to design practice.

Introduction The exponential growth in all sectors, viz., industrial, housing, transportation, communication, and services, has increased the demand for power supply manyfold. In order to cater to this increased demand, transmission line towers have to support larger and heavier conductors in bundle configuration (Juette and Zaffanella [1]). This makes the tower configurations taller and wider, and hence, the tower system would become heavier. Conventionally, transmission line towers are fabricated using Hot-Rolled Steel (HRS) angle sections. To reduce the tower weights, it is necessary either to consider eco-friendly alternate sustainable materials (viz., GFRP sections) or to produce lighter steel sections with higher strengths.
Generally, CFS sections are used for achieving higher strengths for the structure and structural members without compromising on overall drift requirements. For CFS sections, the influence of cold work on the mechanical properties of structural steel was investigated at Cornell University by Chajes et al. [2], Karren [3], Karren and Winter [4], and Winter and Uribe [5]. It was found that the changes in mechanical properties of CFS are due to cold stretching caused mainly by strain hardening and strain aging, occurring predominantly in the bent portions of the CFS sections. The effect of cold working increases the yield stress by a minimum of 15% [6]. Some of the advantages of CFS sections as against HRS sections are as follows:
(i) The cold rolling process produces any desired shape of cross section over longer lengths.
(ii) A high strength-to-weight ratio can be achieved in CFS sections.
(iii) The connection methods that can be used for CFS sections are the same as those for HRS sections.
(iv) Since CFS sections/members are lighter and stronger, they can be easily transported and erected.
(v) Pregalvanised or precoated CFS sections have high resistance to corrosion.
The abovementioned advantages of CFS members enable engineers to design cost-effective transmission line towers. The CFS members can be fabricated to closely fit the design requirements. Recommendations for the design of plain and lipped angles, produced by the cold-forming process for transmission towers, were given by Gaylord and Wilhoite [7]. These recommendations are included in the ASCE 10-15 [8] standard. Some of the other international standards providing design guidelines for the use of CFS sections are AISI S100-16 [9], AS/NZS 4600: 2018 [10], EN 1993-1-3:2006 [11], and IS 801 [12]. These codes recommend equations/models for the prediction of compression and tension capacities of CFS members failing in different modes depending on geometrical, material, and boundary condition details. In this paper, the focus is on estimation of the ME associated with equations specified in the ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11] standards for compression capacity estimation of CFS members. Some of the highlights of the international design standards considered are as follows.
(i) The ASCE 10-15 [8] standard provides design guidelines for CFS sections for transmission line towers. In this standard, the CRC curve is used for the estimation of compression capacity without embedding any partial safety factor.
(ii) The AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11] standards provide design guidelines for CFS sections for general building structures.
The JCSS probabilistic model code [13] states that the models used in the calculation of structural responses are usually not complete and exact, so the actual outcomes cannot be predicted without error. The variables used in model functions are associated with possible uncertainties which account for random effects that are neglected in the models, and uncertainties arise due to simplification in the mathematical relations. As has been already pointed out, the focus of this paper is to establish the ME associated with the compression capacity formulae given in the ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11] codes. In order to estimate the same, a database of experimental capacities of columns (pinned ended), made of CFS lipped channel sections failing in different modes, is created for the first time.
This database includes experimental results for CFS lipped channel columns obtained from the literature along with the test results of three nominally similar columns tested at CSIR-SERC. A brief review of the literature on column tests, probabilistic analysis, and determination of the reliability index for CFS lipped channel sections is presented here. From the review of literature, it has been noted that recent research is directed towards experimental and analytical investigations on built-up CFS channel sections under axial compression. Ting et al. [32] have carried out numerical and experimental investigations to study the buckling behaviour of back-to-back built-up CFS channel sections covering stub columns to slender columns. The geometric imperfections were measured using equipment with LVDTs (0.01 mm accuracy) and considered in the finite element modelling. It was observed that the FEA and test results were in good agreement with each other and conservative relative to the calculated strengths as per the AISI and AS/NZS standards for short, intermediate, and slender columns, which failed through a combination of local and global buckling and/or global buckling. Whittle and Ramseyer [33] have conducted experimental compression tests on closed-section, built-up members formed of intermediately welded CFS channel sections, and the test capacities were compared to theoretical buckling capacities based on the AISI specification's modified slenderness ratio. Use of the modified slenderness ratio was exceedingly conservative. Capacities based on the unmodified slenderness ratio provisions were less conservative. Ye et al. [34,35] presented the results of experimental and numerical investigations carried out on CFS plain and lipped channels under axial compression to study the interaction of local and overall flexural buckling. The measured initial geometric imperfections and material properties were used in the development of the finite element models. The finite element and experimental results were compared to the compression capacity calculated using the Eurocode and the direct strength method. It was observed that the Eurocode provides conservative predictions for the compressive capacity of plain and lipped channel sections, while the direct strength method predictions are more accurate for lipped channels. It has been stated that these studies represent the state of the art. Balaji Rao and Appa Rao [36] carried out studies on probabilistic analysis of the strength of imperfect steel columns. Based on these studies, a characteristic strength equation is proposed which will be useful in rational design of imperfect columns. The researchers have estimated ratios of load carrying capacity to the yield load of the column (ME) using CRC, AISC-ASD, AISC-PD, AISC-LRFD, SSRC Curve 2, IS 800: 1984, Lui and Chen [37], and Rondal and Maquio [38] formulae/model functions. The determination of the reliability index for CFS elements or members is presented in several research reports of the University of Missouri-Rolla [39-43] and Supomsilaphachai et al. [44], where both the basic research data as well as the reliability index inherent in the AISI Specification are presented in great detail. The entire set of data for HR steel and CFS was reanalysed by Ellingwood et al. [45], Galambos et al. [46], and Ellingwood et al. [47] using (a) updated load statistics and (b) a more advanced level of probability analysis which was able to incorporate probability distributions and to describe the true distributions more realistically.
From the review of the literature presented above, it is found that studies dealing with ME estimation for CFS compression members are scanty. However, while estimating the ME, care is taken to neglect the use of any partial factors embedded in the codal equations. The main objectives of this paper are (i) creation of a database for compression capacities of CFS lipped-channel sections from the test results available in the literature, (ii) estimation of ME from the database results for the various failure modes as per the ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600:2018 [10], and EN 1993-1-3: 2006 [11] standards, (iii) studies on the statistical characteristics of the ME estimated at step (ii), and (iv) fitting a statistical distribution for the ME. The details of the experimental investigations carried out at CSIR-Structural Engineering Research Centre, Chennai, which are included in the test database, are provided in the next section.

Compression Tests on CFS Lipped Channels Carried Out at CSIR-SERC At CSIR-SERC, Chennai, experimental investigations were carried out to study the buckling behaviour and to evaluate the compression capacity of axially loaded CFS lipped channel compression members (leg members of an X-braced tower panel). The scope of these experimental investigations consists of measurement of sectional dimensions for the procured lipped channels to study the imperfections associated with the CFS sections, coupon tests for determining the mechanical properties of CFS, and element-level studies on axially loaded compression members to observe the buckling failure mode and to evaluate the capacity. The details of the abovementioned experimental investigations are summarised below.

Sectional Dimensions. The sectional dimensions were verified for each specimen of 3.0 m length with the help of a Vernier calliper (least count 0.01 mm) at seven locations (starting from one end at 0.0 m and then every 0.5 m) along the length, and thicknesses were verified using a micrometer screw gauge (least count 0.001 mm) at both ends. The details of the dimension measurements for two sample specimens are given in Table 1. The tolerance limit for profile dimensions as per IS: 811-1987 [48] is ±0.5 mm, and for the thickness of the strip used in making the cold-formed section it is ±0.5 mm as per IS: 1852-1985 [49]. It is observed that the sectional dimensions and member thicknesses are within the tolerance limits. To measure the straightness, the specimens were placed horizontally on a plane surface and a long precision ruler was used as a reference plane.

Coupon Tests. The material properties were determined by tensile coupon tests. The coupons were cut from the centre of the web of CFS lipped channel sections of the same batch as per ASTM Standard E8/E8M-15a [50] and tested as per ASTM Standard E6-15 [51] in a 100 Ton Universal Testing Machine (UTM). An extensometer of 50 mm gauge length was used to measure the longitudinal strain. The displacement control for load application was kept at 0.5 mm/min, and the inbuilt data acquisition system of the UTM with 1 Hz sampling frequency was used to record the load and strain values during the test. The stress-strain curve obtained along with the coupon specimen details is presented in Figure 1. As the material is a gradually yielding material, the yield stress was determined by the 0.2% offset method [52]. The tension coupon test set-up for CFS and the coupon test specimen after failure are shown in Figure 2.
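As an aside, the 0.2% offset construction mentioned above can be sketched as follows; this is a minimal illustration assuming measured strain and stress arrays from a coupon test, not the processing actually used with the UTM data acquisition system.

```python
# Minimal sketch of the 0.2% offset method for a gradually yielding material.
# `strain` and `stress_mpa` are hypothetical measured arrays from a coupon test.
import numpy as np

def offset_yield_stress(strain, stress_mpa, e_modulus_mpa, offset=0.002):
    """Yield stress at the intersection of the curve with a line of slope E offset by 0.2% strain."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_mpa, dtype=float)
    diff = stress - e_modulus_mpa * (strain - offset)   # positive before the intersection
    idx = int(np.argmax(diff < 0))                      # first point past the offset line
    # linear interpolation between the two bracketing points
    t = diff[idx - 1] / (diff[idx - 1] - diff[idx])
    return stress[idx - 1] + t * (stress[idx] - stress[idx - 1])
```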
Element-Level Tests. The element-level compression tests were carried out on lipped channel sections LC 90 × 50 × 20 × 3.15 mm (Lipped Channel – web depth × flange width × lip depth × thickness). The tests were categorized as concentrically loaded members corresponding to leg members in a latticed tower panel. The tests were conducted using a 250 Ton capacity displacement-control UTM with spherical joints and a ball bearing head at the top end and a fixed base at the bottom. The CFS lipped channel specimens fabricated were 2.225 m in length and connected with specially made fixtures at the ends using three 16 mm diameter bolts, as shown in Figure 3. This was in order to simulate the exact member end condition for a leg member of an X-braced panel. During the tests, the lateral displacements of the flanges and web were measured using displacement dial gauges (0.01 mm least count). Linear foil strain gauges were installed across the cross section of the specimens to measure the longitudinal strain variations throughout the load application. The test responses from the dial and strain gauges were acquired using an HBM data logger. In order to measure the strain and displacement responses at the critical location, specimens were instrumented at the sections of maximum lateral displacement, i.e., at mid-length of the specimens. The test set-up and photographs of the three tested specimens are shown in Figure 4. As the end fixtures are made of 16 mm thick HRS plates and the specimen thickness is 3.15 mm, the stiffness of the end fixtures is higher compared to the specimen. Hence, it is assumed that the end conditions of the test specimens act as pinned ends. Accordingly, the effective length for analysis is considered between the centres of the end connections. The test specimens failed in the flexural-torsional buckling mode, and the observed test capacities are 119, 118, and 113 kN, with an average compression capacity of 116.67 kN. The capacities calculated using the assumed effective length as per ASCE 10-15 [8], AISI S100-16 [9], and AS/NZS 4600:2018 [10] are in good agreement with the test capacity. This observation shows that the effective length determined assuming pinned end conditions is satisfactory. However, it is to be noted that exact determination of the effective length is difficult. The results of the test capacities and codal predictions for the test specimens are presented in Table 2. The capacity predictions are based on the design standards under consideration and calculated for an effective length equal to the centre-to-centre distance of the bolted connections (1915 mm) and without considering the safety or partial safety factors present in the model equations. The specimen details and results of the above compression tests, along with the results of compression tests available in the literature for CFS lipped channel pinned columns considered in the creation of the database, are presented in the next section.

Database of Compression Capacity for CFS Lipped Channel Members In this study, for the first time, a test database is created from the experimental details available in the literature for CFS lipped channel compression members. Table 3 provides the details of the database created. Further, it gives information on the test data exclusion criteria used in filtering the test data of each study considered. It also indicates the reference data numbers for particular referenced compression members from the detailed test database presented in Table 4 and Figure 5.
The selection of test compression members, obtained from the literature, for the ME estimation was based on the following criteria (a sketch of this screening is given at the end of this subsection):
(1) The member is a CFS lipped channel section.
(2) The test results are close to the calculated capacities (± percentage variations used).
(3) The member is concentrically loaded during the test.
(4) The member is tested with pinned end conditions at both ends.
(5) The member does not have any perforations along the length.
(6) The member does not have any intermediate stiffeners in the web.
A total of 577 compression members' test details available in the literature were used in the creation of the test database. Out of these, 273 compression members (120 columns and 153 stub columns) were filtered using the abovementioned criteria and used in the test database. The detailed test database (refer Table 4 and Figure 5) includes test compression member details like out-to-out cross-section dimensions (lipped channel section web depth, flange width, and lip depth), thickness of section, corner radius (inner), length of the member, material properties (Young's modulus, yield, and ultimate strength), test load, type of compression member (column or stub column), torsion indicator, and forming method (press brake or roll forming). While creating the test database, it was noticed that not all the required details were available, and they were assumed appropriately as follows:
(a) In some of the references, material properties like the Young's modulus and/or ultimate strength were not available; these were taken from the standards based on the value of the yield strength and the country/region in which the tests were conducted.
(b) The pinned end condition was assumed at both ends for some of the compression members for which the details were not available.
(c) For the compression members where it was not clearly mentioned that torsional buckling was allowed, it is assumed to be allowed. This assumption provides calculated results that reflect a more realistic behaviour.
(d) In one of the references, the member length was not indicated for the stub columns. Though it is not required for calculating the section capacity, the length was assumed as three times the web depth, just for indication in the test database.
Further, to estimate the ME, the test database of CFS lipped channel compression members was initially filtered through the geometric proportion criteria stipulated for each code separately, which are explained in the next section. Then the database compression members were further filtered for the limiting maximum slenderness ratio. The limiting maximum slenderness ratio for tower leg members (axially loaded compression members in a transmission tower) is given as 120 in the ASCE standard. Though this limitation is not present in the AISI, AS/NZS, and EN standards, the same is applied in this study, as the ASCE standard provides design guidelines for "Design of Latticed Steel Transmission Structures," whereas the other standards are applicable to general steel structures. This philosophy of limiting the slenderness ratio of the axially loaded compression members is based on the fact that the overall deformation of transmission line towers is governed by the stiffness of the leg members. Large deformations resulting from the higher slenderness ratios of lower-stiffness members may cause disturbance to the safe clearance limits of the electrical conductors with respect to the tower profile.
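The following is a hedged sketch of how such a screening could be applied to the assembled database; the file name and column names are hypothetical and do not correspond to the actual fields of Table 4.

```python
# Hypothetical screening of the test database; file and column names are
# illustrative only, not the actual fields of Table 4.
import pandas as pd

db = pd.read_csv("cfs_lipped_channel_tests.csv")

screened = db[
    (db["section_type"] == "lipped_channel")
    & (db["loading"] == "concentric")
    & (db["end_condition"] == "pinned-pinned")
    & (~db["has_perforations"])
    & (~db["has_web_stiffeners"])
    & (db["kl_over_r"] <= 120)   # slenderness limit adopted from the ASCE standard
]
print(f"{len(screened)} of {len(db)} test members retained for ME estimation")
```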
Before estimating the ME associated with the different codal equations, a brief overview of the codal clauses that would help in discussing the results of the statistical analysis is presented in the next section.

Codal Provisions for Estimation of Compression Capacity of CFS Columns In this section, a brief review of the design guidelines for CFS sections available in the international design standards, i.e., ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11], is presented (refer Table 5). The design guidelines of the AISI and AS/NZS standards are similar and hence presented in a single column (refer Table 5). The design guidelines covered here focus on the increase in strength due to cold working, limiting geometrical proportions, and column capacity prediction models for different failure modes. Considering the above codal provisions, the database CFS lipped channel sections are filtered for the limiting criteria, and then the compression capacities are calculated for the estimation of ME. The ME estimation procedure and the statistical properties of ME for the database CFS lipped channel sections are presented in the next section.

Modelling Error Estimation Methodology The mathematical relations or model functions used to predict the capacity of a compression member are based on simplifying assumptions and/or neglected random effects for some of the parameters involved in the prediction equation [13]. Hence, the model function may not completely describe the physical phenomenon under consideration. This, in combination with the lack of complete knowledge of the modeller, causes the difference between the actual and predicted compression member capacities. The difference in capacity can be quantified using ME analysis. Accordingly, the ME should be defined as the ratio of actual capacity to calculated capacity. It may be possible that the test capacity is not the true capacity, as there may be some unavoidable errors involved in the test and measurement procedure, but even then, it provides a close estimate of the true capacity. In this study, the ME is taken as the ratio of the test to the calculated capacity of the compression member. Further, in the ME analysis, the model functions are considered for various modes of failure, viz., flexure, flexure-torsion, local and distortional buckling, and yielding of the section, based on the ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11] standards as described in the above section. Accordingly, after passing through the filtering criteria as discussed in "Database of Compression Capacity for CFS Lipped Channel Members", the member compression capacity has been calculated for the remaining test database members for the governing failure modes given in each design standard, after removing the safety or partial safety factors present in the model functions. Then the ME is estimated by taking the ratio of the test capacity as per the database to the calculated capacity. The results of the calculated ME and its statistical properties for the test database compression members in various failure modes as per the abovementioned design standards are presented in Table 6. The histograms of ME for various failure modes with respect to the design standards are shown in Figures 6-8. Further, a Chi-square goodness-of-fit test has been performed on the ME for the selection of statistical distributions, and the results of this test are provided in Table 7.
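A hedged sketch of this step (computing ME as the test-to-calculated capacity ratio, summarising its statistics, and checking a candidate distribution with a Chi-square goodness-of-fit test) is given below. The file and column names are hypothetical, and the lognormal candidate is shown purely to illustrate the fitting procedure, not as the distribution reported in Table 7.

```python
# Sketch of ME statistics and a Chi-square goodness-of-fit check; names are
# hypothetical and the lognormal candidate is only an example distribution.
import numpy as np
import pandas as pd
from scipy import stats

screened = pd.read_csv("screened_cfs_tests.csv")          # hypothetical file
me = screened["test_capacity_kN"] / screened["calc_capacity_kN"]

mean, sd = me.mean(), me.std(ddof=1)
print(f"mean ME = {mean:.3f}, SD = {sd:.3f}, COV = {sd / mean:.3f}")

# Chi-square GOF against a fitted lognormal distribution
shape, loc, scale = stats.lognorm.fit(me, floc=0)
edges = np.quantile(me, np.linspace(0.0, 1.0, 9))          # 8 bins with roughly equal counts
observed, _ = np.histogram(me, bins=edges)
expected = len(me) * np.diff(stats.lognorm.cdf(edges, shape, loc, scale))
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 2                                 # bins - 1 - fitted parameters
print(f"chi-square = {chi2:.2f}, p-value = {stats.chi2.sf(chi2, dof):.3f}")
```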
The failure modes considered for column members are flexure, flexure-torsion, and local and distortional buckling; for stub columns, they are failure by yielding and local and distortional buckling.

[Table 3 notes. The criteria used to exclude some of the test compression members are given against each reference. @ Some of the test compression members of this reference were initially considered in the test database; finally, they were excluded based on a very high ME or no match between the calculated and experimental results. $ The data are taken from Mulligan [17] as the original reference is not available. * The data are taken from Chou et al. [29] as the original reference is not available.]

[Table 4: Test database of axially loaded cold-formed steel lipped channel section compression members.]

The algorithm for the estimation of ME and the determination of its statistical characteristics is given as follows:
(1) Prepare the test database for ME estimation with the exclusion criteria.
(2) Filter the test database members (columns and stub columns) for the geometric proportion criteria and the maximum slenderness ratio limit.
(3) Calculate the capacities for the different governing failure modes for the various standards without considering the safety or partial safety factors present in the model functions.
(4) Calculate the ME as the ratio of test to calculated capacity.
(5) Plot histograms for the ME and calculate its statistical properties.
(6) Perform the Chi-square goodness-of-fit test for the selection of a statistical distribution for the ME.
The probabilistic analysis is carried out for the simulation of the compression capacity of a typical CFS lipped channel column from the database considering ME as a random variable. The details of the probabilistic analysis and its statistical characteristics are provided in the next section. Considering the above ME statistics for different buckling failure modes of CFS lipped channel columns, probabilistic analysis is carried out on the compression capacity as discussed in the next section.

Probabilistic Analysis of Compression Capacity The aim of this section is to bring out the importance of the consideration of ME in assuming the distribution of compression capacity. In order to study the effect of ME on compression capacity, simulation of the compression capacity is carried out with and without considering ME as a random variable. The yield strength, Fy, ultimate strength, Fu, and the modulus of elasticity, E, are the material properties of CFS used in the compression capacity estimation of CFS columns as random variables. Ravindra and Galambos [59] and Hess et al. [60] suggested the lognormal distribution for Fy and E and the normal distribution for Fu for high-tensile steel plates. The coefficients of variation (COV), as per the above studies, are 0.089 for Fy, 0.091 for Fu, and 0.038 for E, and the same values are adopted in the present study. Simulations of 10^6 cycles are used in the present investigations to estimate the statistical properties of the compression capacity of a typical CFS lipped channel column (database column no. 18) as per the ASCE 10-15 [8], AISI S100-16 [9], AS/NZS 4600: 2018 [10], and EN 1993-1-3:2006 [11] standards, with and without considering ME as a random variable along with the other random variables. The column is selected such that it has a mean ME of more than 1.15 and a COV of more than 0.25. The column is critical in the flexural-torsional mode of buckling as per the abovementioned standards.
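A minimal Monte Carlo sketch of this simulation, under stated assumptions, is shown below. The nominal material values, the ME statistics (mean 1.2, COV 0.25, consistent with the stated selection thresholds but not the reported values), and the placeholder capacity function are all assumptions for illustration; the actual analysis used the codal flexural-torsional buckling models for database column no. 18.

```python
# Hedged Monte Carlo sketch: Fy and E lognormal, Fu normal, ME lognormal.
# Nominal means, the ME statistics and `capacity_model` are illustrative
# assumptions, not the codal model or the actual column properties.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

def lognormal_samples(mean, cov, size):
    """Lognormal samples parameterised by mean and coefficient of variation."""
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean) - 0.5 * sigma ** 2
    return rng.lognormal(mu, sigma, size)

fy = lognormal_samples(450.0, 0.089, n)        # MPa, assumed nominal mean
e_mod = lognormal_samples(2.05e5, 0.038, n)    # MPa, assumed nominal mean
fu = rng.normal(520.0, 0.091 * 520.0, n)       # MPa, assumed nominal mean

def capacity_model(fy, fu, e_mod):
    """Placeholder for the codal compression capacity model (illustrative only)."""
    return 0.25 * fy

cap_without_me = capacity_model(fy, fu, e_mod)
me = lognormal_samples(1.2, 0.25, n)           # assumed ME statistics for the chosen column
cap_with_me = me * cap_without_me

for label, cap in (("without ME", cap_without_me), ("with ME", cap_with_me)):
    print(f"{label}: mean = {cap.mean():.1f}, COV = {cap.std() / cap.mean():.3f}")
```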
The histograms along with the statistical properties of the simulated compression capacity for the abovementioned standards, estimated with and without ME as a random variable, are presented in Figures 9-11. Further, to predict the statistical distributions, a Chi-square goodness-of-fit test has been performed on the simulated compression capacity, and the results of this test are given in Table 8. The abovementioned results of the modelling error estimation and probabilistic analysis are discussed in the following section.

Results and Discussions The ME is estimated for the various failure modes considered in the ASCE 10-15, AISI S100-16, AS/NZS 4600: 2018, and EN 1993-1-3: 2006 standards for the database columns. The statistical details of the ME and the results of the Chi-square goodness-of-fit test are presented in Figures 6-8 and Tables 6 and 7, respectively. The distributions of the simulated compression capacity based on the probabilistic analysis and the corresponding results of the Chi-square goodness-of-fit tests are indicated in Figures 9-11 and Table 8, respectively. Based on these results, the following observations are made.

Modelling Error The mean ME for column buckling in flexure is close to 0.9 for the ASCE 10-15 and AISI S100-16 & AS/NZS 4600:2018 standards, whereas the same for the EN 1993-1-3:2006 standard is 0.95. In the case of flexural-torsional buckling, the mean ME is close to 1.2, irrespective of the design standard. For local buckling, the mean ME is 0.92 for the ASCE 10-15 standard and 0.82 for the AISI S100-16 & AS/NZS 4600:2018 standard. The ASCE 10-15 standard has stringent criteria for dimensional proportions with remote chances of distortional buckling and hence does not provide any guideline for it. As per the AISI S100-16 & AS/NZS 4600:2018 standard, the mean ME in distortional buckling is 1.34, which is a high value. For the EN 1993-1-3:2006 standard, a combined model function is provided for local and distortional buckling, and it gives a mean ME of 1.11. In the case of stub columns, the mean ME for yield failure of the cross section is 1.11 for ASCE 10-15 and 1.02 for the AISI S100-16 & AS/NZS 4600:2018 standard. The EN 1993-1-3:2006 standard does not provide separate guidelines for yield failure of the cross section. For local buckling, the mean model error is 0.97 and 0.88, respectively, for the ASCE 10-15 and AISI S100-16 & AS/NZS 4600:2018 standards. The mean ME in distortional buckling for the AISI S100-16 & AS/NZS 4600:2018 standard is 0.95. The combined mode of failure for local and distortional buckling is suggested in the EN 1993-1-3:2006 standard, and the mean ME in this mode is 1.02. It is observed that for columns, the mean MEs are within ±10% variation for the single-mode failures, viz., flexure and local buckling. In the case of the combined failure mode in flexure and torsion, the variation is around +20%, and for distortional buckling it is +34%. This indicates the requirement of extensive experimental research in the combined and distortional buckling modes. For stub columns, a good estimate within ±10% of the mean ME was observed irrespective of the design standards and failure modes, except for local buckling as per the AISI S100-16 & AS/NZS 4600:2018 standard.

[Table 5. Codal provisions for estimation of compression capacity of CFS columns.]
(i) The elastic critical buckling load for a long column is determined by Euler's equations. (ii) For locally stable columns, the AISC LRFD specification (1993) equation is adopted for the elastic and inelastic ranges of buckling. The resistance of compressed members is based on the "European design buckling curves" (ECCS, 1978) that relate the reduction factor to the nondimensional slenderness.
These (five) curves were the result of an extensive experimental and numerical research programme (ECCS, 1976), conducted on hot-rolled and welded sections, that accounted for all imperfections in real compressed members (initial out-of-straightness, eccentricity of the applied loads, residual stresses). The analytical formulation of the buckling curves was derived by Maquoi & Rondal [33] and is based on the Ayrton-Perry formula, considering an initial sinusoidal deformed configuration corresponding to an "equivalent initial deformed configuration" whose amplitude was calibrated in order to reproduce the effect of all imperfections.

Torsion. As per this code, the local buckling strength is not equal to the torsional buckling strength; hence, the torsional-flexural buckling strength is determined. The torsional buckling strength in the elastic range is computed based on the equation provided by Winter [53] for the elastic critical stress. For members with "point-symmetric" open cross sections (e.g. a Z-purlin with equal flanges), the possibility that the resistance of the member to torsional buckling might be less than its resistance to flexural buckling is considered in this code.

Torsion-Flexure. The design compressive stress for the torsional-flexural buckling strength is determined using an equivalent radius of gyration. The governing elastic flexural-torsional buckling load of a column can be found from the equation suggested by Chajes and Winter [54], Chajes et al. [55], and Yu and LaBoube [56]. For members with mono-symmetric open cross sections, the possibility that the resistance of the member to torsional-flexural buckling might be less than its resistance to flexural buckling is considered in this code.

Local buckling. If the element slenderness ratio is not small enough to develop a uniform design stress distributed over the full cross section, then the post-buckling strength of an element that buckles prematurely is taken into account by using an effective width of the element in determining the area of the member cross section. The effective width of an element is the width which gives the same resultant force under a uniformly distributed design stress as the nonuniform stress that develops in the entire element in the post-buckled state. In this code, the effective width method's approach to local buckling is adopted: it conceptualizes the member as a collection of "elements" and investigates local buckling of each element separately. The effective width method determines a plate buckling coefficient, k, for each element, then the buckling stress, and finally the effective width. An effective width approach is adopted whereby "ineffective" portions of a cross section are removed and section properties may be determined based on the remaining effective portions. In this standard, the local and distortional buckling modes for cross sections with edge stiffeners are considered together while estimating the resistance.

Distortional buckling. Distortional buckling is an instability that may occur in members with edge-stiffened flanges, such as lipped C- and Z-sections. This buckling mode is characterized by instability of the entire flange, as the flange along with the edge stiffener rotates about the junction of the compression flange and the web. The expressions in this specification were derived by Schafer [57] and verified for complex stiffeners by Yu and Schafer [58]. EN 1993-1-3 does not provide explicit provisions for distortional buckling.
However, a calculation procedure is obtained from the interpretation of the rules given in the code for plane elements with edge or intermediate stiffeners in compression. The design of compression elements with either edge or intermediate stiffeners is based on the assumption that the stiffener behaves as a compression member with continuous partial restraint. This restraint has a spring stiffness that depends on the boundary conditions and the flexural stiffness of the adjacent plane elements of the cross section. The spring stiffness of the stiffener may be determined by applying a unit load per unit length to the cross section at the location of the stiffener. The rotational spring stiffness characterizes the bending stiffness of the web part of the section.

Yielding. Not indicated. A very short, compact column under an axial load may fail by yielding; hence, the yield load is determined by multiplying the gross area by the yield strength. The design resistance is computed by multiplying the gross area by an increased basic yield stress, in which the contribution from the difference between the average and basic yield stress is reduced by a factor based on the ratio of the relative slenderness of the elements.

Maximum element slenderness ratio (w/t). (i) For elements supported on both longitudinal edges, w/t ≤ 60, and (ii) for elements supported on one longitudinal edge, w/t ≤ 25, where w/t = flat width-to-thickness ratio. (i) For a stiffened element in compression, w/t ≤ 500; (ii) for an edge-stiffened element in compression, w/t ≤ 90 for I_s ≥ I_a and w/t ≤ 60 for I_s < I_a; and (iii) for an unstiffened element in compression, w/t ≤ 60, where w/t = flat width-to-thickness ratio. (i) For a stiffened element in compression, w/t ≤ 500; (ii) for an edge-stiffened element in compression, w/t ≤ 60 for the element and w/t ≤ 50 for the stiffener; and (iii) for an unstiffened element in compression, w/t ≤ 50, where w/t = out-to-out width-to-thickness ratio.

Material strength. At Cornell University, the influence of cold work on mechanical properties was investigated by Chajes et al. [2], Karren [3], and Karren and Winter [4]. It was found that the changes in mechanical properties due to cold-stretching are caused mainly by strain-hardening and strain-aging (Chajes et al. [2]). The Cornell research also revealed that the effects of cold work on the mechanical properties of corners usually depend on (1) the type of steel, (2) the type of stress (compression or tension), (3) the direction of stress with respect to the direction of cold work (transverse or longitudinal), (4) the Fu/Fy ratio, (5) the inside radius-to-thickness ratio (R/t), and (6) the amount of cold work. Investigating the influence of cold work, Karren derived equations for the ratio of the corner yield stress to the virgin yield stress [3]. With regard to the full-section properties, the tensile yield stress of the full section, approximated by a weighted average, is used in this specification. Good agreement between the computed and tested stress-strain characteristics for a channel section and a joist chord section was demonstrated by Karren and Winter [4]. The increased yield strength due to cold forming may be taken into account in axially loaded members in which the effective cross-sectional area equals the gross area; in determining the effective area, the yield strength should be taken as the basic yield strength.
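Karren's corner-strength model referred to above lends itself to a short numerical illustration. The sketch below reproduces the corner yield-stress ratio and the weighted-average full-section yield stress in the form in which they are commonly quoted alongside the AISI provisions; the coefficients and the example inputs (virgin yield and tensile strengths, R/t, corner area fraction) are stated from memory and for illustration only, and should be checked against the specification before any use.

```python
def corner_yield_stress(Fyv, Fuv, R_over_t):
    """Corner yield stress after cold forming (Karren's model as commonly
    quoted; valid roughly for Fuv/Fyv >= 1.2 and R/t <= 7)."""
    r = Fuv / Fyv
    Bc = 3.69 * r - 0.819 * r**2 - 1.79
    m = 0.192 * r - 0.068
    return Bc * Fyv / R_over_t**m

def full_section_yield(Fyv, Fuv, R_over_t, corner_area_fraction):
    """Weighted-average tensile yield stress of the full section,
    Fya = C * Fyc + (1 - C) * Fyv, with C the corner-to-gross area ratio."""
    Fyc = corner_yield_stress(Fyv, Fuv, R_over_t)
    C = corner_area_fraction
    return C * Fyc + (1.0 - C) * Fyv

# Illustrative numbers: a mild-steel lipped channel with small corner radii
print(round(full_section_yield(Fyv=250.0, Fuv=345.0, R_over_t=2.0,
                               corner_area_fraction=0.08), 1))
```

For typical lipped channels the corner area fraction is small, so the enhancement of the full-section yield stress is modest, which is consistent with treating it as a secondary effect in the capacity calculations.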
Kurtosis. For the flexure and flexure-torsion buckling failure modes of columns, irrespective of the design standard, as well as for the local buckling mode of columns and stub columns as per the AISI S100-16 & AS/NZS 4600:2018 standard and the combined local and distortional buckling mode as per the EN 1993-1-3:2006 standard, the kurtosis values are found to be more than three. This indicates a non-Gaussian distribution for the ME data, and the positive excess kurtosis in these cases indicates that the data are leptokurtic, having heavy tails and containing extreme values. For the other failure modes, the kurtosis values are close to three with negative excess kurtosis, which indicates that the data are platykurtic, having flat tails with a small probability of extreme values. In this case also, the statistical distribution of the ME data is non-Gaussian.

A Chi-square goodness-of-fit test is used to select the statistical distribution of ME. Three different hypothetical distributions, namely the Normal, Lognormal, and Uniform distributions, are considered for the test. The results of these tests are presented in Table 7. From the results of the Chi-square goodness-of-fit test, it is found that the hypothesis of the assumed distributions cannot be rejected at the 1% and 5% significance levels. However, the distribution with the lowest Chi-square value is adopted for the particular failure mode. In some cases the Normal distribution governs, but considering the nonzero values of skewness and kurtosis values not equal to 3.0, a lognormal distribution is assumed. The assumption of a lognormal distribution for ME is also justified since negative values of ME, however small they may be, have no engineering meaning. The ME for the failure modes for which the sample size is small can be assumed to follow a uniform distribution.

Probabilistic Analysis of Compression Capacity. From Figures 9-11, it is observed that the distribution of the compression capacity without ME as a random variable follows a normal distribution irrespective of the design standard, and the mean resistance value is equal to the compression capacity calculated as per the respective design standard. However, the distribution of the compression capacity with ME as a random variable appears to follow a lognormal distribution irrespective of the design standard, and the COV of the simulated compression capacity with ME as a random variable is 0.3, which is significant compared with the COV of the resistance without considering ME. The Chi-square goodness-of-fit test results presented in Table 8 justify the use of the distributions for the resistance suggested earlier on the basis of visual judgement. A summary of conclusions, based on the above discussions, is drawn and presented in the following section.

Summary and Conclusions

The model functions available for the different failure modes in the ASCE 10-15, AISI S100-16, AS/NZS 4600:2018, and EN 1993-1-3:2006 standards are used to predict the compression capacity of a set of 273 CFS lipped channel compression members. The test data of the compression members are obtained from the literature. Using these test results, together with those of three nominally similar compression members tested at CSIR-SERC, a database is created for the first time and presented in this paper. Each test compression member has sufficient information for calculating the compression capacity using the abovementioned standards.
Using the ratio of the test capacity to the predicted compression capacity, an ME analysis was carried out to assess the accuracy of the model functions available in these standards for calculating the compression capacity for the various failure modes, and to suggest probability distributions for the ME. The results of the statistical analysis are briefly summarised as follows:

(i) From the results presented in Tables 6 and 7, it is inferred that the ME for the various failure modes (except for the cases with few data points) follows a lognormal distribution at the 5% significance level, with means approximately equal to 1.10 and a COV of about 0.25. A higher value of COV is recommended to offset the effect of overestimation of capacity by the ASCE 10-15 standard. In general, the EN 1993-1-3:2006 standard appears to perform satisfactorily in the estimation of the capacity of compression members. The values of the coefficient of skewness and the kurtosis obtained in this investigation also suggest the use of a distribution that is unsymmetrical about the mean (in this paper, the lognormal distribution).

(ii) It is also noted from Tables 6 and 7 that where the data points are few, the maximum entropy distribution, i.e. the uniform distribution, is recommended for ME.

(iii) As can be expected, the COVs of the ME associated with the prediction models for the combined failure modes are higher. This indicates that more tests need to be carried out in this range to reduce the statistical uncertainties.

(iv) The probabilistic analysis of the compression capacity of CFS columns has brought out the importance of considering ME; in general, it can be assumed that the compression capacity of a column follows a lognormal distribution at the 5% significance level.

(v) The results presented in this paper are the first of their kind, could help in carrying out calibration studies for CFS columns, and may serve as a reference for design practice.

Data Availability

The data used to support the findings of this study are included within the article as Table 4 and Figure 5.

Disclosure

This work was carried out as a part of the PhD work of the first author at CSIR-SERC.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Mitral Valve Repair for Barlow's Disease with Mitral Annular and Subvalvular Calcification: A Case Report

Barlow's disease with mitral annular calcification encompassing the subvalvular apparatus, including the valve leaflet and chordae, is extremely rare, and mitral valve repair in such cases is challenging. We report a case of a 60-year-old woman with mitral valve regurgitation that was successfully controlled by resecting the rough zone of P2 and the calcifications on the excess leaflet regions and subvalvular apparatus, while retaining the calcification of P3, and by implanting artificial chordae and an annuloplasty ring. Mitral valve repair in such cases requires an individualized and compounded surgical strategy, combining the technique used to treat Barlow's disease with the management of calcification, in order to control mitral regurgitation.

Case report

Barlow's disease (BD), which is known to involve leaflet myxoid degeneration (leaflet thickening, large redundant leaflets, chordal elongation or rupture, and annular dilation), sometimes causes mitral regurgitation (MR) [1]. Although some factors correlate with the prevalence of mitral annular calcification (MAC) [2], BD with MAC encompassing the subvalvular apparatus is rare, and surgical strategies for mitral valve repair (MVr) remain unclear. In this case report, we present a case in which we successfully controlled mitral valve (MV) regurgitation by resecting the valve leaflet, performing decalcification, and retaining a part of the calcification.

A 60-year-old woman was referred to Saiseikai Kumamoto Hospital with dyspnea on exertion. Echocardiography revealed cardiac enlargement, no asynergy of left ventricular wall motion, an ejection fraction (EF) of 65%, and severe MR (effective regurgitant orifice area of 0.59 cm²) with bi-leaflet thickening (Fig. 1A), myxomatous changes with excessive tissue formation (Fig. 1B), and billowing with prolapse (Fig. 1C), which are characteristic pathologic changes of BD. Computed tomography (CT) revealed calcification of the mitral annulus, leaflets, and subvalvular apparatus, such as the chordae (Fig. 1D). Our cardiology team recommended surgery based on the severity of MR.

The surgical procedure was as follows (Supplementary Video 1). Surgery was performed with the da Vinci surgical system (Intuitive Surgical Inc., Sunnyvale, CA, USA) using 4 chest ports, and cardiopulmonary bypass was initiated using the right femoral artery and vein. Cardiac standstill was obtained with antegrade cardioplegia, and we approached the MV through a right-sided left atriotomy. Both leaflets of the MV were large and excessively thickened; annular, leaflet, and subvalvular calcifications at P2 and P3 were observed. We resected the rough zone of P2 in a pentagonal shape and performed decalcification by en bloc resection of the calcium deposits at the leaflet and chordae of P2. The calcification of the P3 leaflet was retained. After suturing the P2 leaflet in an inverted T-shape, we implanted 2 artificial chordae from the bilateral papillary muscles and a 36-mm Physio Flex annuloplasty ring (Edwards Lifesciences, Irvine, CA, USA). MV competence upon normal saline injection was satisfactory. The operative, cardiopulmonary bypass, and cross-clamp times were 359 minutes, 277 minutes, and 200 minutes, respectively. The postoperative course was uneventful.
At 1 week postoperatively, echocardiography showed a preserved EF (59%), improvement of the left ventricular end-diastolic diameter from 55 mm preoperatively to 45 mm, a mean pressure gradient of 2.2 mm Hg, and no residual MR (Fig. 2A). Enhanced CT performed at 1 week postoperatively revealed resection of most of the calcification in the region of P2, whereas the calcification in P3 was retained (Fig. 2B). At the 1-year follow-up visit, progression of MR and cardiac enlargement were not observed. Our institutional review board decided that ethical approval was not needed for this study. Informed consent was obtained from the patient.

Discussion

The prevalence of MAC in patients undergoing MVr is relatively high. Fusini et al. [2] reported that some factors correlated with MAC and that posterior annular involvement was the most common site of MAC. However, Carpentier et al. [3] reported that cases of MAC involving the subvalvular apparatus, such as the chordae and papillary muscles, were rare. Therefore, MVr for BD with MAC encompassing the subvalvular apparatus is a challenging procedure, and the management of calcifications, the method of MVr, and whether to repair or replace the valve are the key surgical considerations.

Various methods have been previously reported for the management of MAC. Carpentier et al. [3] reported en bloc resection and reconstruction of the atrioventricular groove with 2-0 vertical sutures. Loulmet et al. [4] reported a modified patch technique for repairing the atrioventricular groove after en bloc resection of extensive MAC. Residual MAC might limit leaflet mobility and cause residual MR or postoperative mitral stenosis (MS); however, excessive resection may also disrupt the morphology of the MV, making repair impossible. In this case, calcification involved the leaflet and chordae of P2 and P3, as well as the annulus of P3. The P2 calcification contributed to restrained movement of the leaflet and may have caused postoperative MS. Conversely, the P3 calcification involved a smaller area and might not have contributed to postoperative MS; however, its resection may have collapsed the MV morphology.

Many reports have investigated methods of MVr for BD. Ben Zekry et al. [5] analyzed simple repair for BD with only an annuloplasty ring; however, this seems incomplete because it included a certain number of cases that had systolic anterior motion (SAM) of the anterior leaflet. Some reports have suggested a relationship between BD and mitral annular disjunction, and Hiemstra et al. [6] reported annular remodeling and stabilization as being important in the surgical treatment of BD in order to correct the dilation and abnormal annulus movement. Miura et al. [7] reported a restoration technique that involved reducing the lateral volume of excess leaflets in the rough zone and correcting the prolapse using a few artificial chordae. Therefore, we followed an MVr strategy according to these concepts of fixing the MV by annuloplasty, reducing excess volume by resection, and correcting the prolapse with artificial chordae, and we controlled MR without causing residual MR, MS, or SAM.

In conclusion, MVr for BD with MAC encompassing the subvalvular apparatus requires careful assessment of the distribution of calcifications, the valve morphology, and the surgical techniques for managing calcification.

Conflict of interest

No potential conflict of interest relevant to this article was reported.
Mitochondrial dysfunction has divergent, cell type-dependent effects on insulin action The contribution of mitochondrial dysfunction to insulin resistance is a contentious issue in metabolic research. Recent evidence implicates mitochondrial dysfunction as contributing to multiple forms of insulin resistance. However, some models of mitochondrial dysfunction fail to induce insulin resistance, suggesting greater complexity describes mitochondrial regulation of insulin action. We report that mitochondrial dysfunction is not necessary for cellular models of insulin resistance. However, impairment of mitochondrial function is sufficient for insulin resistance in a cell type-dependent manner, with impaired mitochondrial function inducing insulin resistance in adipocytes, but having no effect, or insulin sensitising effects in hepatocytes. The mechanism of mitochondrial impairment was important in determining the impact on insulin action, but was independent of mitochondrial ROS production. These data can account for opposing findings on this issue and highlight the complexity of mitochondrial regulation of cell type-specific insulin action, which is not described by current reductionist paradigms. INTRODUCTION Type 2 diabetes (T2D) involves insulin resistance in skeletal muscle, the liver and adipose tissue [1]. One of the most contentious issues in metabolic research is the role of mitochondrial dysfunction in the development of insulin resistance. The term mitochondrial dysfunction can describe impairments in numerous mitochondrial function indices, including respiration, ATP production, membrane potential, proton leak and reactive oxygen species (ROS) production [2]. Impaired mitochondrial function has been observed in skeletal muscle [3e6], the liver [7e9] and adipose tissue [10e12] of T2D patients and animal models of T2D. Similar impairments have been observed in the skeletal muscle of insulin resistant offspring of T2D patients [13]. This could suggest a role for impaired mitochondrial function in the development of insulin resistance. This is supported by observations of insulin resistance in humans with mitochondrial DNA mutations that result in impaired mitochondrial function [14e16]. Mitochondrial dysfunction has been proposed to induce insulin resistance through ectopic lipid accumulation secondary to reduced b-oxidation, which impairs insulin signalling [17,18]. More recently, production of ROS has emerged as a direct link between mitochondrial dysfunction and insulin resistance driven by numerous insults such as saturated fatty acids, inflammatory cytokines and glucocorticoids [19,20]. However, numerous animal models with impaired mitochondrial function have either unchanged or increased insulin sensitivity [21e23], questioning both the causality of this relationship and whether mitochondrial dysfunction is necessary and/or sufficient for insulin resistance. Further adding to controversy on this issue, anti-diabetic agents such as the biguanide and thiazolidinedione family of compounds, which enhance insulin action primarily in the liver and adipose tissue respectively, have been reported to inhibit complex I of the electron transport chain and/or the mitochondrial pyruvate carrier (MPC), which impairs mitochondrial function [24e26]. These counterintuitive findings have been balanced by evidence that biguanides can prevent ROS production by complex I under conditions of electron backflow from complex II, such as during high fat feeding [27]. 
However, given that most of our knowledge regarding the role of mitochondria in the regulation of insulin action has been generated from studies of skeletal muscle, coupled with the fact that the primary tissues of action of these anti-diabetic drugs are not skeletal muscle, could raise the possibility that there are cell/tissue type-specific responses in this relationship that are not yet fully understood. Indeed, studies of either insulin resistant humans or animal models of mitochondrial dysfunction have not been able to mechanistically dissect this relationship with any certainty. This is due to reasons such as the non-physiological nature of gene ablation, the markedly different mitochondrial respiratory rates and metabolic function of tissues involved in whole body insulin action, the complexity in controlling substrate flux to individual insulin-sensitive tissues and inter-tissue cross-talk in vivo. While these factors are important for the development of the whole body metabolic phenotype in insulin resis-tance, they are nonetheless confounding variables when examining the fundamental link between mitochondrial dysfunction and insulin action. The fundamental biology underpinning these paradoxical findings describing the relationship between mitochondria and insulin action is poorly understood. Indeed, the potential importance of specific mitochondrial enzyme impairments and the tissue/cell type in which impairments occur are unknown. Furthermore, it is unknown how alterations in many of the interdependent indices of mitochondrial function impact on cellular insulin action. As in vivo studies have been unable to dissect these mechanisms, fundamental studies in cellular systems that define the biological complexity in the relationship between mitochondrial function and insulin action in multiple cell types are required before our understanding of the physiological role of mitochondria in the development of insulin resistance can advance. Therefore, the aims of this study were to: 1. determine whether mitochondrial dysfunction is necessary and/or sufficient for cellular insulin resistance; 2. establish whether specific mitochondrial enzyme impairment is important for this response in a cell type-dependent manner; and 3. assess whether ROS production is a universal link between impaired mitochondrial function and insulin resistance. We used 3T3L1 adipocytes and FAO hepatoma cells, as models of adipocytes and hepatocytes, respectively, to address these aims, assessing glucose uptake and suppression of glucose production as measures of insulin action. These cell lines have been used extensively to study insulin action and these cell types are characterised by impaired mitochondrial function in insulin resistant states [7,12]. However, few studies have utilised these cell types to mechanistically examine the role of mitochondria in insulin action. Cell culture Mouse immortalised 3T3L1 fibroblasts were cultured in 10% CO 2 at 37 C in growth media consisting of DMEM (4.5 g/L glucose; Invitrogen), 10% heat-inactivated foetal bovine serum (HI-FBS; Thermo Scientific) and antibiotics (100 units/mL penicillin and 100 mg/mL streptomycin; Life Technologies). Cells were induced to differentiate 2 days after reaching confluence (day 0), by supplementing growth media with 3 nM insulin (Humulin R; Eli Lilly), 0.25 mM dexamethasone (SigmaeAldrich) and 0.5 mM 1-methyl-3-isobutyl-xanthine (Sigmae Aldrich). 
From day 3 until day 7, cells were maintained in growth media supplemented with 3 nM insulin after which the mature adipocytes were maintained in growth media. All treatments were for 24 h in growth media unless stated otherwise. Rat FAO immortalised hepatoma hepatocytes [28] were cultured in 5% CO 2 at 37 C in growth media consisting of RPMI 1640 medium (2 g/L glucose; Invitrogen) and 10% foetal bovine serum (FBS; Thermo Scientific). Assays were carried out when cells were w90% confluent. All treatments of hepatocytes were for 24 h in glucose-and serum-free RPMI 1640 media supplemented with 2 mM sodium pyruvate, 20 mM sodium L-lactate and 0.1% BSA (glucose production media; GP media), except where indicated. For 3T3L1 models of insulin resistance, cells were treated with 25 mU/mL glucose oxidase (G.O.; SigmaeAldrich), 10 ng/mL tumour necrosis factor-a (TNFa; Peprotech) or 10 nM chronic insulin. For chronic insulin treatments, cells were returned to growth media containing no insulin 2 h before beginning glucose uptake assays or protein collection. Treatment doses for oligomycin (SigmaeAldrich) and rotenone (SigmaeAldrich) models of mitochondrial dysfunction as well as Antimycin A (SigmaeAldrich) and FCCP (Carbonyl cyanide 4-(trifluoromethoxy) phenylhydrazone; SigmaeAldrich) in both 3T3L1 and FAO cells are as stated in Figures 2 and 3, Figures S2 and S3 . The doses of rosiglitazone and phenformin are stated in Figure 4 and Figure S4. MnTBAP (Manganese (III) tetrakis (4-benzoic acid) porphyrin chloride; Enzo Life Sciences) co-treatments were at a dose of 300 mM and wortmannin (wort; SigmaeAldrich) co-treatments were at a dose of 100 nM. Bioenergetics and respiration analyses The cellular bioenergetics profile of 3T3L1 adipocytes and FAO hepatocytes was assessed using the Seahorse XF24 Flux Analyzer (Seahorse Bioscience). 3T3L1 fibroblasts were seeded into a 24-well XF24 cell culture microplate (Seahorse Bioscience) and were differentiated to maturity, as described above, at which time the cells were treated for 24 h. FAO hepatocytes were also seeded into a XF24 microplate at a density of 50,000 cells per well and 4 h later, 24 h treatments were begun in growth media. Cells were washed and incubated in 600 ml unbuffered DMEM (containing 25 mM glucose, 1 mM pyruvate and 1 mM glutamate) pH 7.4, at 37 C in a non-CO 2 incubator (1 h prior to bioenergetics assessment). Three basal oxygen consumption rate (OCR) measurements were performed using the Seahorse analyzer, and measurements were repeated following injection of oligomycin (1 mM), FCCP (1 mM) and Antimycin A (1 mM). Basal extracellular acidification rate (ECAR) was determined from data collected at basal measurement points. Calculations of respiratory parameters of mitochondrial function were performed as previously described [29] and included subtraction non-mitochondrial respiration from all mitochondrial respiration parameters. Following completion of the assay cell number was determined using the CyQuant Ò Cell Proliferation Assay kit (Molecular Probes) according to manufacturer's instructions. Glucose uptake assay Mature adipocytes in 24-well plates were treated for 24 h with insults, mitochondrial inhibitors or anti-diabetic agents, as described above, in serum-starve media consisting of DMEM (4.5 g/L glucose) supplemented with 0.2% fatty acid-free bovine serum albumin (BSA; USB Corporation). 
To begin the assay, cells were washed twice in 1× Dulbecco's Phosphate-Buffered Saline, pH 7.4 (Life Technologies), containing 0.5 mM MgCl2, 0.5 mM CaCl2 and 0.2% fatty acid-free BSA, and then incubated in the presence or absence of 10 nM insulin.

2.4. Glucose production assay

FAO hepatocytes in 48-well plates were treated for 24 h with mitochondrial inhibitors or insulin sensitising agents in GP media in the presence or absence of 0.1 nM insulin. To measure the glucose produced by the cells, 40 µl of media was collected from each well and combined with 250 µl of Assay Buffer consisting of 0.12 M NaH2PO4·2H2O pH 7.0, 1 mg/mL phenol, 0.5 mg/mL 4-aminoantipyrine, 1.6 units/mL peroxidase and 10 units/mL glucose oxidase. This was incubated for 25 min at 37 °C, after which absorbance was measured at 490 nm. Cells were lysed in 100 µl 0.03% SDS and protein was quantified using a BCA Protein Assay kit (Pierce). Results are expressed as mg glucose per mg protein. An alternative method was used to measure glucose production in hepatocytes co-treated with MnTBAP, as the colour of MnTBAP interferes with any colorimetric measurement. Cells were seeded and treated as in the colorimetric assay, and glucose production was instead measured by a fluorometric assay kit (Cayman Chemical Company) following the manufacturer's instructions. Cells were lysed in 100 µl 0.03% SDS and protein was quantified using a BCA Protein Assay kit (Pierce). Results are expressed as mg glucose per mg protein.

2.6. Lactate dehydrogenase cytotoxicity assay

Lactate dehydrogenase (LDH) is released from cells upon loss of membrane integrity due to apoptosis or necrosis and can therefore be used as a measure of cell viability. Cells in 48-well plates were treated as described above (3T3L1 in serum-starve media). After 24 h, the cytotoxic effects of our treatments were assessed using the CytoTox 96® Non-Radioactive Cytotoxicity Assay kit (Promega) as per the manufacturer's instructions. Viability was expressed as LDH in the media normalised to total LDH from the media and cell lysate.

Mitochondrial membrane potential measurement

Mitochondrial transmembrane potential was measured using the membrane-permeable JC-1 dye (Invitrogen). Cells in black-walled 96-well plates were treated for 24 h as described above. In the final 10 min of treatment, cells were incubated with 10 µg/mL JC-1 dye at 37 °C. After three washes with PBS, both green and red fluorescence emissions were measured using an excitation wavelength of 488 nm and emission wavelengths of 522 and 605 nm, respectively.

Mitochondrial superoxide measurement

MitoSOX™ Red (Molecular Probes) was used to measure mitochondria-specific superoxide. Cells in black-walled 96-well plates were treated for 24 h as described above. In the final 30 min of treatment, cells were incubated with 1 µM MitoSOX™ Red at 37 °C. After two washes with PBS, fluorescence was measured using an excitation wavelength of 510 nm and an emission wavelength of 580 nm.

2.9. Statistical analysis

All values are expressed as mean ± SEM. Data were analysed for statistical significance using Minitab 15 Statistical Software. Unpaired t-tests were used where two individual treatments were compared. One-way ANOVA with Tukey's post-hoc test was used where more than two groups were compared within a single treatment, and p values less than 0.05 were considered significant.
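To illustrate how measurements like these are typically reduced to the reported units and compared across treatments, the short sketch below converts raw 490 nm absorbance readings to glucose per mg protein via a standard curve and then applies the one-way ANOVA with Tukey's post-hoc comparison described above. All numerical values (standard curve, absorbances, volumes, protein amounts) are hypothetical placeholders, not data from the study, and the Tukey step assumes SciPy 1.8 or later rather than the Minitab software used by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical glucose standard curve: concentration (mg/mL) vs. A490
std_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])
fit = stats.linregress(std_abs, std_conc)

def glucose_per_protein(a490, media_volume_ml=0.040, protein_mg=0.05):
    """Convert absorbance to glucose produced, normalised to cell protein."""
    conc = fit.slope * a490 + fit.intercept        # mg/mL in the sampled media
    return conc * media_volume_ml / protein_mg     # mg glucose per mg protein

# Hypothetical absorbance readings for three treatment groups (n = 4 wells)
vehicle = glucose_per_protein(np.array([0.35, 0.38, 0.33, 0.36]))
insulin = glucose_per_protein(np.array([0.22, 0.25, 0.21, 0.24]))
rotenone = glucose_per_protein(np.array([0.18, 0.20, 0.17, 0.19]))

# One-way ANOVA followed by Tukey's HSD, mirroring the analysis described above
f_stat, p_value = stats.f_oneway(vehicle, insulin, rotenone)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
print(stats.tukey_hsd(vehicle, insulin, rotenone))
```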
Mitochondrial dysfunction is not necessary for insulin resistance in adipocytes In vivo studies have not been able to determine whether mitochondrial dysfunction is necessary for insulin resistance with any certainty. To establish whether mitochondrial dysfunction is necessary for insulin resistance, we assessed mitochondrial function indices in diverse cellular models of insulin resistance in 3T3L1 adipocytes. These included glucose oxidase, TNFa and chronic insulin treatment, which all impaired insulin-stimulated glucose uptake ( Figure 1A). Cellular insulin resistance in these models was not attributed to a common defect in insulin signalling at the level of IRS1, Akt or AS160 ( Figure 1B). There was a small but significant increase in LDH release with glucose oxidase ( Figure 1C), indicating compromised cell viability. As impaired mitochondrial respiration has been linked to the development of insulin resistance, we measured multiple mitochondrial respiration parameters including basal respiration, respiration due to ATP turnover and respiration due to proton leak. Glucose oxidase decreased respiration due to ATP turnover ( Figure 1D) and increased mitochondrial ROS production ( Figure 1F). Chronic insulin increased basal respiration, respiration for ATP turnover ( Figure 1D) and decreased MMP ( Figure 1E). As TNFa had no effect on any of the parameters measured and there was no consistent mitochondrial perturbation across all insults, these data suggest that a common Original article defect in mitochondrial function, and mitochondrial dysfunction more generally, is not necessary for all forms of cellular insulin resistance. 3.2. Mitochondrial dysfunction by inhibition of ATP synthase or complex I is sufficient to induce insulin resistance in adipocytes, independent of ROS production A number of animal models with mitochondrial impairments have unchanged or enhanced insulin action. We sought to clarify these findings by investigating whether physiological impairment of mitochondrial function was sufficient to induce insulin resistance. As reduced ADP-stimulated respiration has been observed in insulin resistant states [3,4,13], we used the ATP synthase inhibitor oligomycin to mimic aspects of this defect. In 3T3L1 adipocytes, 24 h treatment with 50 nM and 100 nM oligomycin reduced basal respiration by w20% and w40%, respectively (Figure 2A), similar to that observed in insulin resistant states [13]. Three-variable (respiration, membrane potential and ROS production) surface plots were generated to represent mitochondrial function regulated by oligomycin ( Figure 2B). Dose-dependent reductions in respiration due to ATP turnover ( Figure S2A), but not proton leak ( Figure S2B), were associated with increased MMP and ROS production ( Figure 2B, Figure S2C and D). At 100 nM, oligomycin increased LDH release ( Figure S2E). The extracellular acidification rate (ECAR), a proxy measure of glycolytic rate, was increased with both doses ( Figure 2C) and was associated with an increase in cellular ATP levels ( Figure 2D). Insulin-stimulated glucose uptake was decreased with both doses of oligomycin, and at 50 nM, basal glucose uptake was increased ( Figure 2E). Insulin action, assessed as the percentage increases in insulin-stimulated glucose uptake over basal, was ablated at both oligomycin doses ( Figure 2F). There was also a global reduction in expression and phosphorylation of key components of the insulin signalling pathway ( Figure 2G). 
As mitochondrial ROS production has been implicated as a causative factor in insulin resistance [19,20] and was increased in this model, the SOD mimetic MnTBAP was used to reduce mitochondrial ROS production ( Figure S2F). MnTBAP did not restore oligomycin-induced insulin resistance, but did reduce insulin-stimulated glucose uptake in the absence of oligomycin ( Figure 2H), suggesting that increased mitochondrial ROS production does not mediate insulin resistance in this model. This was further supported by evidence that antimycin A, a complex III inhibitor that increases ROS production, had no effect on glucose uptake ( Figure S2G) at concentrations that reduced respiration by w20% and w80% respectively (data not shown). These data show that mitochondrial dysfunction, in a model that replicates aspects of mitochondrial dysfunction seen in insulin resistant states in vivo, is sufficient to induce insulin resistance independent of mitochondrial ROS production. As inhibition of complex I is thought to be the mechanism of action of some anti-diabetic agents, we examined the effect of the complex I inhibitor rotenone on insulin action. In 3T3L1 adipocytes, 1 nM and 5 nM rotenone reduced basal respiration by w20% and w60% respectively ( Figure 2I). Altered mitochondrial function induced by rotenone ( Figure 2J) varied from that induced by oligomycin ( Figure 2B) and did not include alterations in MMP or mitochondrial ROS production ( Figure S2H and I). However, 5 nM but not 1 nM rotenone, significantly reduced respiration associated with ATP turnover ( Figure S2J) and proton leak ( Figure S2K). While 5 nM rotenone increased ECAR ( Figure 2K), both doses of rotenone had no effect on ATP concentration ( Figure 2L) or cell viability ( Figure S2L). We next tested the effect of rotenone on insulin action in 3T3L1 adipocytes. At 5 nM, but not 1 nM, rotenone significantly reduced insulin-stimulated glucose uptake ( Figure 2M) and insulin action ( Figure 2N). There were no obvious impairments in insulin signalling through IRS1, Akt and AS160 ( Figure 2O). These data suggest that inhibition of complex I and its associated mitochondrial dysfunction is sufficient to induce insulin resistance in 3T3L1 adipocytes. However, this occurred only at supraphysiological impairment of respiration. Nonetheless, these data show that complex I inhibition does not have insulin sensitising effects in 3T3L1 adipocytes. The divergent effects of oligomycin and rotenone on insulin signalling and the magnitude of insulin resistance show that the exact mechanism of mitochondrial impairment is important in predicting subsequent effects on insulin action. 3.3. Impairment of ATP synthase or complex I of the ETC has no effect on the insulin sensitivity of FAO hepatocytes, despite increasing ROS production Insulin-sensitive cells can differ widely in their oxidative capacity and metabolic functions. Therefore, to determine whether mitochondrial dysfunction is sufficient to induce insulin resistance in multiple cell types, we investigated these effects in FAO hepatocytes. Low doses of oligomycin reduced basal respiration ( Figure 3A) and altered mitochondrial function ( Figure 3B) in FAO hepatocytes. At 0.5 nM, which reduced respiration by w10% ( Figure 3A), oligomycin did not significantly alter respiration due to ATP turnover ( Figure S3A) and proton leak ( Figure S3B), but did increased MMP ( Figure S3C) and ROS production ( Figure S3D). There was no change in cell viability ( Figure S3E). 
There were no effects on ECAR ( Figure 3C) or ATP levels ( Figure 3D). We tested whether oligomycin-induced mitochondrial dysfunction was sufficient to reduce insulin sensitivity in FAO hepatocytes by measuring insulin suppression of glucose production ( Figure 3E). Oligomycin did not alter basal or insulin suppression of glucose production, or insulin action ( Figure 3F). No effect on the insulin signalling pathway was detected ( Figure 3G). These data provide evidence that increased mitochondrial ROS production does not induce insulin resistance in these cells. Rather a reduction in mitochondrial ROS ( Figure S3F) and alteration of the cellular redox state with MnTBAP completely impaired glucose production ( Figure S3G), suggesting a more fundamental role for ROS signalling in the control of glucose production in these cells. We next assessed the effect of rotenone on respiration in FAO hepatocytes ( Figure 3G), which altered mitochondrial function ( Figure 3H). At 1 nM, which reduced respiration by w20%, this included decreased respiration due to ATP turnover ( Figure S3H) and proton leak ( Figure S3I) but had no effect on MMP, ROS production ( Figure 3H, Figure S3J and K) or cell viability ( Figure S3E). ECAR was increased ( Figure 3I) and ATP levels were unchanged ( Figure 3J). Interestingly, rotenone reduced both basal and insulin-stimulated glucose production ( Figure 3M), but had no effect on insulin action ( Figure 3N). As rotenone increased both basal and insulin-stimulated tyrosine phosphorylation of IRS1 ( Figure 3O), we tested whether inhibiting phosphatidylinositol 3-kinase (PI3K) and subsequent insulin signalling with wortmannin, would eliminate the ability of rotenone to reduce glucose production. Wortmannin significantly increased basal glucose production in vehicle-treated cells, but had no effect on the ability of insulin to suppress glucose production, suggesting that the canonical insulin signalling pathway is not essential for the suppressive effects of insulin on glucose production in this model. In the presence of wortmannin, rotenone was still able to decrease both basal and insulinstimulated glucose production ( Figure 3P), suggesting that rotenone's actions on glucose production are not mediated through insulin signalling pathway sensitisation. The anti-diabetic drugs rosiglitazone and phenformin alter mitochondrial function and have cell type-dependent effects on insulin action Our data show that mitochondrial dysfunction is not a universal initiating factor for insulin resistance. However, inducing mitochondrial dysfunction is sufficient to induce cell type-dependent insulin resistance. As rosiglitazone and phenformin exert their effects, in part, through inhibition of complex I and the mitochondrial pyruvate carrier, resulting in impaired mitochondrial respiration, we examined the acute effects of these agents on mitochondrial function and insulin sensitivity. In 3T3L1 adipocytes, rosiglitazone reduced mitochondrial respiration ( Figure S4A) and mitochondrial function ( Figure 4A, Figure S4B and C), however the nature of that dysfunction, based on multiple parameter analysis varied from that induced by oligomycin and rotenone in these same cells ( Figure 2B and J), reinforcing the importance of measuring multiple parameters of mitochondrial function. Rosiglitazone increased ATP levels ( Figure S4D) independent of increases in ECAR ( Figure S4E). 
Consistent with our data for other mitochondrial inhibitors in these cells, insulin action was decreased by rosiglitazone ( Figure 4B and C), but this was driven by an increase in basal glucose uptake ( Figure 4B). Phenformin induced similar alterations in mitochondrial respiration ( Figure S4F) and function ( Figure 4A), in adipocytes, albeit with greater inhibitory action on respiration associated with ATP turnover and proton leak ( Figures S4G and H). No significant changes in ATP levels ( Figure S4I) or ECAR ( Figure S4J) were detected with phenformin treatment. Although phenformin increased basal glucose uptake ( Figure 4E), insulin action was also completely abrogated ( Figure 4F). As phenformin increased mitochondrial ROS production (refer ahead to Figure 5D), we used MnTBAP to counter this increase, with no effect on insulin action ( Figure 4E and F). This shows that in this cell type, heterogeneous forms of mitochondrial dysfunction converge to induce insulin resistance, independent of increased mitochondrial ROS production. These drugs also altered mitochondrial function in FAO hepatocytes ( Figure 4G and K, Figure S4KeO, QeU). Rosiglitazone at high doses reduced both basal and insulin suppression of glucose production ( Figure 4H) and increased insulin action ( Figure 4I). As recent evidence links the efficacy of biguanides on hepatic glucose production to reduced ATP levels [34], we also assessed ATP concentrations. However, rosiglitazone did not reduce ATP levels ( Figure 4J), nor ECAR ( Figure S4P). Phenformin at higher doses also reduced basal and insulin-stimulated glucose production ( Figure 4L), but not insulin action ( Figure 4M), despite reduced ATP levels ( Figure 4N) and no compensatory change in ECAR ( Figure 4SV). These data show that similar to traditional mitochondrial inhibitors, these anti-diabetic agents have cell type-dependent effects on mitochondrial function and insulin action. Rosiglitazone and phenformin do not impair mitochondrial ROS production in 3T3L1 adipocytes While the exact mechanism linking the effects of rosiglitazone and phenformin on mitochondrial function and insulin action remain unclear, one proposed mechanism is that these compounds can reduce ROS production under conditions of mitochondrial dysfunction [27], suggesting that these drugs could have altered efficacy in stressed and healthy states. While we have not found a role for ROS production linking mitochondrial dysfunction and insulin resistance in our models, we nonetheless explored this possibility. We induced mitochondrial dysfunction in 3T3L1 adipocytes with glucose oxidase, which increased ROS production ( Figure 1F), and co-treated with rosiglitazone or phenformin before assessing insulin action ( Figure 5A). Glucose oxidase-induced insulin resistance was worsened with increasing doses of both rosiglitazone and phenformin ( Figure 5A). ROS production was also increased with co-treatment of glucose oxidase with rosiglitazone ( Figure 5B) and phenformin ( Figure 5D). This was associated with a loss in MMP with rosiglitazone that was also worsened with glucose oxidase ( Figure 5C) and with 100 mM phenformin and glucose oxidase ( Figure 5E). These data suggest that these compounds do not reduce ROS production under states of mitochondrial dysfunction in all cell types and that these anti-diabetic drugs can further exacerbate mitochondrial dysfunction and its associated insulin resistance under context-specific conditions. 
DISCUSSION These fundamental studies on the relationship between mitochondrial function and cell-autonomous insulin action revealed that acute, physiological impairments in mitochondrial function are sufficient, but not necessary, to induce insulin resistance ( Figure 6). Definitive conclusions on this issue have been elusive in human and animal studies. This has been partly due to the difficultly in inducing physiological impairments in mitochondrial function in animal models either through genetic or other means. In addition, interventionist human studies have found it difficult to separate mitochondrial dysfunction with other confounding variables, such as increased fatty acid availability, which is also associated with insulin resistance [30,31]. However the utility of our cell-autonomous system that allows titration of mitochondrial impairment to physiological levels and analysis of insulin action under conditions of constant substrate availability, without the influence of inter-tissue crosstalk, has assisted with mechanistic dissection of this relationship. These findings support data in humans with mitochondrial impairments due to mitochondrial DNA mutations, which can also manifest in insulin resistance in key insulin-sensitive tissues [14e16]. Numerous animal models with impaired mitochondrial function due to genetic or dietary modification of components of the electron transport chain have produced opposing conclusions on this same issue. For example, deletion of the apoptosis initiating factor (AIF) enzyme that is a component of complex I, leads to oxidative phosphorylation defects and protection against insulin resistance in response to high fat feeding [22]. Similar findings are observed with the ablation of cytochrome c oxidase subunit VI peptide 2a (Cox6a2; 23) and iron-containing enzymes of the ETC [21]. However, data from the present study could reconcile these findings. For example, we showed that complex I inhibition decreased insulin-stimulated glucose uptake in adipocytes, but potentiated insulin suppression of glucose production in hepatocytes. These data showing that identical insults have cell type-dependent effects on mitochondrial function and insulin action highlights the complexity in this relationship ( Figure 6). Multi-parameter measurement and representation of mitochondrial function in a multidimension format in the form of surface plots was also useful in identifying the heterogeneous nature of mitochondrial dysfunction by common insults. Furthermore, the specific mitochondrial enzymes impaired also determine the impact on insulin action. Indeed, oligomycin and rotenone had differential effects on insulin signalling and the magnitude of insulin resistance in 3T3L1 adipocytes, despite similar impairments in respiration. These data suggest that animal models with global defects in mitochondrial function could manifest different phenotypes depending on the site of the mitochondrial impairment and on the specific insulin-sensitive tissue that dominates the whole body phenotype. The impact on whole body insulin action would therefore be difficult to identify and predict. The functional significance of these findings are that any phenotypic investigation into the role of mitochondria in the pathogenesis of insulin resistance in patients or disease models will likely have to establish the exact tissue and enzyme(s) affected to understand the mechanisms involved. 
Furthermore, the biological complexity in mitochondrial regulation of insulin action identified in our studies suggests that universal and reductionist theories describing this relationship may not be possible. Our findings also revealed that mitochondrial ROS is not a universal driver of insulin resistance in the cellular systems examined. This theory on the aetiology of insulin resistance has gained momentum following a number of recent studies showing that skeletal muscle insulin resistance can be prevented through buffering of mitochondrial ROS [19,20]. Furthermore, buffering total cellular ROS in a number of adipocyte models of insulin resistance was sufficient to reverse insulin resistance [32]. However, the compartmentalisation of ROS production and the cell type-dependent responses to ROS require further consideration when interpreting these data, as ROS is also required for insulin action [33]. Nonetheless, mitochondrial ROS-mediated insulin resistance appears to be a valid mechanism in skeletal muscle. It is unclear why mitochondrial ROS production was not associated with insulin resistance in our models, however this is an area that warrants further investigation and could involve differences in anti-oxidant capacities. Such experiments could be important in determining the utility of anti-oxidant therapies in heterogeneous insulin resistant states. The mechanisms that link mitochondrial dysfunction and insulin action in adipocytes and hepatocytes are also unclear. In adipocytes, mitochondrial dysfunction-induced alterations in calcium handling could be important, as insulin-stimulated GLUT4 translocation and glucose uptake are calcium sensitive [34]. Protein signalling from the mitochondria under states of stress could also play a role. Notably, deletion of apoptosis-inducing factor (AIF), a protein typically released from the mitochondria by pro-apoptotic stimuli, protects against insulin resistance despite a reduction in mitochondrial respiration [22]. This could suggest that protein-mediated signalling from mitochondria might impact on insulin action. In hepatocytes, a recent study has found that reduced cellular energy status can impede glucagon signalling, which opposes insulin signalling in the control of glucogeogenesis [35]. Increased AMP might also allosterically inhibit gluconeogenic enzymes directly to reduce gluconeogenic flux [36]. However, we found no association between reduced ATP, or any involvement for the canonical insulin signalling pathway in our mitochondrial dysfunction models that reduced insulin suppression of glucose production. The mechanisms by which mitochondrial dysfunction impacts on insulin action in hepatocytes remain to be determined. We also observed that the anti-diabetic agents phenformin and rosiglitazone showed cell type-dependent impairments in mitochondrial function and insulin action. This included enhanced suppression of glucose production and enhanced insulin action in hepatocytes, but also included induction of insulin resistance in adipocytes. The acute effects of these drugs in adipocytes might be offset by longer term remodelling of adipocyte metabolism, particularly in the case of rosiglitazone, which enhances the expression of PPAR dependent genes to increase glucose disposal and lipogenesis, thereby reducing hyperglycemia in vivo [37]. Indeed, the reduced insulin action we observed in rosiglitazone treated adipocytes was primarily due to an increase in basal glucose uptake, rather than reduced insulin-stimulated glucose uptake. 
Nonetheless, both anti-diabetic drugs altered mitochondrial function in adipocytes, which could have longer term implications in patients treated with these drugs for extended periods. Indeed, these findings could explain inconsistencies in efficacy associated with these drugs and their reduced efficacy over time. The finding that both drugs exacerbated insulin resistance in the context of existing mitochondrial dysfunction also highlights that personalised medicine approaches that identify any mitochondrial defects could be used to develop improved treatment strategies for patients. In conclusion, our findings show that acute, physiological impairments in mitochondrial function are sufficient, but not necessary, for the development of cell type-dependent insulin resistance and this is independent of mitochondrial ROS production. The translational implications from these findings are that therapies for insulin resistance that involve modulation of mitochondrial function should be tissue specific and their mechanism of action on mitochondria well characterised to ensure optimal therapeutic outcomes. The complexity in the relationship between mitochondrial function and insulin action revealed by these studies provides insights into opposing conclusions on this issue and questions reductionist theories that attempt to describe this relationship.
Time-frequency analysis of bivariate signals Many phenomena are described by bivariate signals or bidimensional vectors in applications ranging from radar to EEG, optics and oceanography. The time-frequency analysis of bivariate signals is usually carried out by analyzing two separate quantities, e.g. rotary components. We show that an adequate quaternion Fourier transform permits to build relevant time-frequency representations of bivariate signals that naturally identify geometrical or polarization properties. First, the quaternion embedding of bivariate signals is introduced, similar to the usual analytic signal of real signals. Then two fundamental theorems ensure that a quaternion short term Fourier transform and a quaternion continuous wavelet transform are well defined and obey desirable properties such as conservation laws and reconstruction formulas. The resulting spectrograms and scalograms provide meaningful representations of both the time-frequency and geometrical/polarization content of the signal. Moreover the numerical implementation remains simply based on the use of FFT. A toolbox is available for reproducibility. Synthetic and real-world examples illustrate the relevance and efficiency of the proposed approach. Introduction Bivariate signals are a special type of multivariate time series corresponding to vector motions on the 2D plane or equivalently in R 2 . They are specific because their time samples encode the time evolution of vector valued quantities (motion or wavefield direction, velocity, etc). Non-stationary bivariate signals appear in many applications such as oceanography [1,2], optics [3], radar [4], geophysics [5] or EEG analysis [6] to name but a few. In most of these scientific fields, the physical phenomena (electromagnetic waves, currents, elastic waves, etc) are described by various types of quantities which depend on the frequency over different ranges of frequencies. As frequency components evolve with time, time-frequency representations are necessary to accurately describe the evolution of the recorded signal. Over the last 20 years, several works have proposed to develop time-frequency representations of bivariate signals. Most authors have made use of augmented representations [7,8], i.e. the real bivariate signal [x(t), y(t)] ∈ R 2 or the complex version [f (t), f (t)] ∈ C 2 where f (t) = x(t) + iy(t). The latter has been used to characterize second order statistics of complex valued processes [9] as well as higher order statistics [10,11]. In both stationary and non-stationary cases, this approach leads to the extraction of kinematics and polarization properties of the signal [12,13,14,15,16]. In the deterministic setting, several bivariate extensions of the Empirical Mode Decomposition (EMD) have been proposed [17,18] to decompose bivariate signals into "simple" components 1 . Bivariate instantaneous moments were introduced recently in [19] and further extended to the multivariate case in [20]. The bivariate instantaneous moment method permits to extract kinematic parameters from a bivariate signal for example. However, as noted by the authors in [19], this method is not applicable when the signal is multicomponent, i.e. consists of several monochromatic signals. Existing methods all share the use of the standard complex Fourier transform (or complex-valued Cramèr representation in the random case). 
In this article, we demonstrate that a different deterministic approach to bivariate time-frequency analysis is possible, thanks to an alternate definition of the Fourier transform borrowed from geometric algebra. In the univariate case, i.e. when the signal is real-valued, it is well known that its (complex) Fourier transform obeys Hermitian symmetry. This feature is fundamental for the physical interpretation of the Fourier transform. In signal analysis it leads to the definition of the analytic signal [21,22], which is the first building block towards more sophisticated time-frequency analysis tools. The analytic signal carries exactly the same information as the original real-valued signal, but with zeros in the negative-frequency spectrum. Negative frequencies are redundant, since they can be obtained by Hermitian symmetry from the positive-frequency spectrum. The analytic signal makes it possible to define the instantaneous amplitude and the instantaneous phase of a real signal: it can be seen as the very first time-frequency analysis tool, at least for amplitude-modulated signals. In the bivariate case, or when the signal is complex-valued, its (complex) Fourier transform lacks Hermitian symmetry. Both positive and negative frequencies have to be considered. This observation has motivated a type of analysis often called the rotary spectrum. In the simple case where the signal is stationary/periodic (depending on the setting, deterministic or random), the kinematic and polarization properties of the signal at frequency ω can be extracted from the spectrum values at both ω and −ω [16,13]. This approach involves a systematic parallel processing of positive and negative frequencies. Physicists would prefer a tool that directly yields a representation of the bivariate signal content as a function of positive frequencies only. This is desirable for a direct physical interpretation of the proposed representation, as in a spectrogram for instance. In this work, we handle bivariate signals as complex-valued signals. We show that they can be efficiently processed using the Quaternion Fourier Transform (QFT), an alternate definition of the Fourier transform. As a first benefit, it restores a special kind of Hermitian symmetry in the quaternion spectral domain. The positive frequencies of the quaternion spectrum carry all the information about the bivariate signal. This permits the definition of the quaternion embedding of a complex signal, the bivariate counterpart of the analytic signal. This quaternion-valued signal is uniquely related to the original complex signal. An interesting feature of quaternion-valued signals is that geometric information is easily accessible. Indeed, polar forms of quaternions permit the definition of phases that can be interpreted geometrically. Much like the phase and amplitude defined from an analytic signal, we will introduce the same type of parameters to describe complex-valued (i.e. bivariate) signals thanks to their quaternion embedding. It will appear that this description receives a direct geometrical interpretation. Even the usual Stokes parameters used by physicists to describe the polarization state of electromagnetic waves arise as natural parameters. Furthermore, the quaternion Fourier transform (QFT) makes it possible to define time-frequency representations for multicomponent bivariate signals in a proper manner. 
We introduce the Quaternion Short-Term Fourier Transform (Q-STFT) and the Quaternion Continuous Wavelet Transform (Q-CWT) and provide new theorems demonstrating their ability to access geometric/polarization features of bivariate signals. The proposed approach leads to a consistent generalization of classical time-frequency representations to the case of bivariate signals. In practice, known concepts such as ridge extraction extend nicely to the bivariate case, revealing simultaneously the time-frequency and polarization features of bivariate signals at a low computational cost, since only FFTs are needed. Table 1 summarizes the main notations and symbols used throughout this paper:
f * g: convolution product between f and g (noncommutative in general);
Sf: Quaternion Short-Term Fourier Transform of f;
Wf: Quaternion Continuous Wavelet Transform of f;
C_µ: subfield R ⊕ µR isomorphic to C, where µ is a pure unit quaternion;
L^p(R, X): L^p-space of functions taking values in X;
H^2(R, X): Hardy space of square-integrable functions taking values in X;
P: Poincaré half-plane {(x, y) ∈ R^2 : y ≥ 0}.
The paper is organized as follows. Section 2 first reviews the usual elements of quaternion calculus and introduces the Quaternion Fourier Transform (QFT) in a general setting. Then we consider the special case of bivariate signals and study this QFT. Section 3 introduces the quaternion embedding of a complex signal as well as a tailored polar form that recovers both the frequency content and the polarization properties. Section 4 addresses some limitations of the quaternion embedding by introducing the Quaternion Short-Term Fourier Transform and the Quaternion Continuous Wavelet Transform. This section proves two fundamental theorems regarding bivariate time-frequency analysis. Section 5 provides an asymptotic analysis and defines the ridges of the transforms presented in Section 4. Section 6 illustrates the performance of these new tools on several examples of bivariate signals. A (non-exhaustive) summary of notations and symbols is provided in Table 1 above. Appendices gather technical proofs. Quaternion algebra Quaternions were first described by Sir W.R. Hamilton in 1843 [23]. The set of quaternions H is one of the four existing normed division algebras, together with the real numbers R, the complex numbers C and the octonions O (also known as Cayley numbers) [24]. Quaternions form a four-dimensional noncommutative division ring over the real numbers. Any quaternion q ∈ H can be written in its Cartesian form as q = a + bi + cj + dk, (1) where a, b, c, d ∈ R and i, j, k are roots of −1 satisfying i² = j² = k² = ijk = −1. The canonical elements i, j, k, together with the identity of H, form the quaternion canonical basis {1, i, j, k}. Throughout this document, we will use the notation S(q) = a ∈ R for the scalar part of the quaternion q, and V(q) = q − S(q) ∈ span{i, j, k} for the vector part of q. As for complex numbers, we can define the real and imaginary parts of a quaternion q as R(q) = a, I_i(q) = b, I_j(q) = c, I_k(q) = d. A quaternion is called pure if its real (or scalar) part is equal to zero, that is a = 0; e.g. i, j, k are pure quaternions. The quaternion conjugate of q is obtained by changing the sign of its vector part, leading to q̄ = S(q) − V(q). The modulus of a quaternion q ∈ H is defined by |q| = (a² + b² + c² + d²)^{1/2} = (q q̄)^{1/2}. When |q| = 1, q is called a unit quaternion. The set of unit quaternions is homeomorphic to S³ = {x ∈ R⁴ : ‖x‖ = 1}, the 3-dimensional unit sphere in R⁴. 
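To make the algebraic conventions above concrete, here is a minimal, self-contained Python sketch of quaternion arithmetic (Cartesian form, Hamilton product, conjugate, modulus). It is an illustration of the standard rules i² = j² = k² = ijk = −1, not the authors' toolbox.

```python
import numpy as np

class Quaternion:
    """Minimal quaternion q = a + b*i + c*j + d*k over the reals."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = float(a), float(b), float(c), float(d)

    def __mul__(self, o):
        # Hamilton product: noncommutative, follows i^2 = j^2 = k^2 = ijk = -1.
        return Quaternion(
            self.a*o.a - self.b*o.b - self.c*o.c - self.d*o.d,
            self.a*o.b + self.b*o.a + self.c*o.d - self.d*o.c,
            self.a*o.c - self.b*o.d + self.c*o.a + self.d*o.b,
            self.a*o.d + self.b*o.c - self.c*o.b + self.d*o.a,
        )

    def conj(self):
        # Conjugate: keep the scalar part, negate the vector part.
        return Quaternion(self.a, -self.b, -self.c, -self.d)

    def __abs__(self):
        # Modulus |q| = sqrt(a^2 + b^2 + c^2 + d^2) = sqrt(q * conj(q)).
        return np.sqrt(self.a**2 + self.b**2 + self.c**2 + self.d**2)

# Noncommutativity in action: i*j = k but j*i = -k.
i = Quaternion(0, 1, 0, 0); j = Quaternion(0, 0, 1, 0)
print((i*j).d, (j*i).d)   # 1.0  -1.0
```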
The inverse of a nonzero quaternion is given by q^{−1} = q̄/|q|². It is important to keep in mind that quaternion multiplication is noncommutative, that is, in general for p, q ∈ H one has pq ≠ qp. Involutions with respect to i, j, k play an important role, and are defined as q^i = −iqi, q^j = −jqj, q^k = −kqk. The combination of conjugation and involution with respect to an arbitrary pure unit quaternion µ is denoted by q^{*µ} = (q̄)^µ, and for instance (a + bi + cj + dk)^{*j} = a + bi − cj + dk. The Cartesian form (1) is not the only possible representation for a quaternion. Most of the content of this paper is indeed based on other useful representations, which better reflect the geometry of H. First, we introduce the Cayley-Dickson form, which decomposes a quaternion q into a pair of 2D numbers isomorphic to a pair of complex numbers. The most general Cayley-Dickson form of a quaternion reads q = q₁ + q₂µ⊥, with q₁, q₂ ∈ C_µ, (8) where C_µ ≡ R ⊕ µR is a complex subfield of H isomorphic to C. Here µ and µ⊥ are pure unit orthogonal quaternions: S(µµ⊥) = 0. In particular, the decomposition given by the choice of µ = i and µ⊥ = j in (8) will be used extensively. Just like the polar form of complex numbers, a quaternion Euler polar form can be defined. (This polar form, first introduced by Bülow in [25], is rather different from what is often referred to as the polar form in the quaternion literature, which usually denotes the decomposition q = |q| exp(µθ), with µ a pure unit quaternion and θ ∈ R.) Any quaternion q has a Euler decomposition which reads q = |q| e^{iθ} e^{−kχ} e^{jϕ}, (9) where (θ, ϕ, χ) is called the phase triplet of q. The phase triplet is (almost uniquely) defined within the intervals θ ∈ [−π/2, π/2), χ ∈ [−π/4, π/4], ϕ ∈ [−π, π). The term almost refers to the two singular cases χ = ±π/4, where the phase of a quaternion is not well defined. This phenomenon is known as gimbal lock, since the three angles above are indeed the xzy-Euler angles corresponding to the rotation associated to the unit quaternion q/|q|. (Recall that the unit sphere S³ in R⁴ is a two-fold covering of the rotation group SO(3): every rotation matrix R ∈ SO(3) can be identified with two antipodal points q and −q on S³. Rotations can be described by three Euler angles [26], corresponding to three successive rotations around the canonical axes; the xzy-convention is used in this work.) Figure 1 presents the procedure to obtain the phase triplet (θ, ϕ, χ). Table 2 summarizes important properties relevant to quaternion calculus (here q = a + bi + cj + dk is an arbitrary quaternion, µ is a pure unit quaternion, µ² = −1, and the quantities θ, χ, ϕ are real-valued):
S(q), V(q): scalar and vector parts of the quaternion q;
R, I_i, I_j, I_k: real and imaginary part operators;
(pq)̄ = q̄ p̄: conjugate of a product;
q^µ = −µqµ: involution by the pure unit quaternion µ;
q = q₁ + q₂µ⊥: Cayley-Dickson form of q;
e^{µθ} = cos θ + µ sin θ: exponential of a pure unit quaternion µ, θ ∈ R;
q = |q| e^{iθ} e^{−kχ} e^{jϕ}: Euler polar form of q.
Quaternion Fourier Transform Most of the literature about quaternion Fourier transforms concerns a particular type of transform, namely for functions f : R² → H. For a review of the subject, we refer the reader to [27] and references therein. This particular set of transforms is for instance of particular interest in color image processing [28]. In contrast, we review here the one-dimensional Quaternion Fourier transform, for functions f : R → H. 
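Since the Euler polar form is used throughout the paper, a short Python sketch of the phase triplet extraction may help. It uses closed-form half-angle formulas that can be derived by expanding (9); this is a hedged illustration (not the authors' toolbox), and the gimbal-lock cases χ = ±π/4 of the procedure in Figure 1 below are not handled.

```python
import numpy as np

def reconstruct(theta, chi, phi):
    """Components (a, b, c, d) of e^{i theta} e^{-k chi} e^{j phi}."""
    ct, st = np.cos(theta), np.sin(theta)
    cc, sc = np.cos(chi), np.sin(chi)
    cp, sp = np.cos(phi), np.sin(phi)
    return (cc*ct*cp - sc*st*sp, cc*st*cp + sc*ct*sp,
            cc*ct*sp + sc*st*cp, cc*st*sp - sc*ct*cp)

def phase_triplet(q):
    """Euler polar form q = |q| e^{i theta} e^{-k chi} e^{j phi}.

    q is a length-4 array (a, b, c, d) for q = a + b i + c j + d k.
    Returns (|q|, theta, chi, phi); gimbal lock (chi = +/- pi/4) not handled.
    """
    q = np.asarray(q, dtype=float)
    norm = np.linalg.norm(q)
    a, b, c, d = q / norm
    # Ellipticity: 2(bc - ad) = sin(2 chi), as in Figure 1 below.
    chi = 0.5 * np.arcsin(np.clip(2.0 * (b*c - a*d), -1.0, 1.0))
    theta = 0.5 * np.arctan2(2.0 * (a*b + c*d), a**2 + c**2 - b**2 - d**2)
    phi = 0.5 * np.arctan2(2.0 * (a*c + b*d), a**2 + b**2 - c**2 - d**2)
    # The decomposition is defined up to q -> -q: if we reconstructed -q,
    # shift phi by pi (e^{j(phi + pi)} = -e^{j phi}).
    if not np.allclose(reconstruct(theta, chi, phi), (a, b, c, d), atol=1e-8):
        phi = phi + np.pi if phi < 0 else phi - np.pi
    return norm, theta, chi, phi
```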
[Figure 1: procedure to obtain the phase triplet (θ, ϕ, χ) of q ∈ H. Normalize q̃ = q/|q| = a + bi + cj + dk; compute the ellipticity angle χ = arcsin[2(bc − ad)]/2; treat the gimbal-lock cases χ = ±π/4 separately; finally, correct θ and ϕ when e^{iθ} e^{−kχ} e^{jϕ} = −q̃, depending on the sign of θ.] We define the Quaternion Fourier Transform (QFT) of axis µ of a function f : R → H by f̂(ω) = ∫ f(t) e^{−µωt} dt. (11) In comparison with the standard complex Fourier transform, there are several differences. The position of the exponential function is crucial due to the noncommutative product in H; in our work the exponential function will always be placed on the right. Moreover, the axis µ is a free parameter; it is only restricted to be a pure unit quaternion. Details on the choice of µ are given in Section 2.4. The existence and invertibility of the QFT have been studied by Jamison [29] in his PhD dissertation, which dates back to 1970, for functions in L¹(R, H) and L²(R, H). Although in his definition the exponential function is placed on the left, the proofs are straightforwardly adapted to our convention. We recall the fundamental results only, and refer to Jamison's manuscript for an extensive discussion. As for the standard complex Fourier transform, the existence and invertibility of the QFT are first proven for functions in L¹(R, H). The inverse QFT reads f(t) = (1/2π) ∫ f̂(ω) e^{µωt} dω. Using a density argument (see [30, Chapter 2] for instance) adapted to the present context, the extension of the QFT to functions in L²(R, H) can be worked out. The steps are identical to what is known in the complex case and will not be reproduced here. One can show that L²(R, H) is a (right) Hilbert space over H [29,31]. This will be of great interest; from now on we will write inner products between functions f, g ∈ L²(R, H) as ⟨f, g⟩ = ∫ f(t) g(t)̄ dt, such that the induced norm reads ‖f‖ = ⟨f, f⟩^{1/2}. We now prove a fundamental theorem, which extends well-known results to the case of H-valued functions. Theorem 1 (Parseval and Plancherel formulas). Let f, g ∈ L²(R, H). Then the following holds: ⟨f, g⟩ = (1/2π) ⟨f̂, ĝ⟩ and ∫ f(t) g(t)^{*µ} dt = (1/2π) ∫ f̂(ω) ĝ(ω)^{*µ} dω. (16) Proof. Since the Plancherel formulas are obtained directly by setting f = g in the Parseval formulas, we only give a proof for the Parseval formulas. Let f, g ∈ L²(R, H). Expanding the inner product with the inversion formula gives the first equality; the other equality is proven analogously. Note that this theorem shows two things. It shows that the QFT is an isometry of L²(R, H). It also shows that another quantity of geometrical nature is preserved by the QFT, the integral ∫ f(t) g(t)^{*µ} dt. This quantity will appear naturally later on in the time-frequency analysis of bivariate signals. General properties We develop some properties of the Quaternion Fourier Transform of axis µ, for arbitrary functions in L²(R, H). For brevity, most demonstrations are omitted, and we refer the reader to [28] for completeness. Linearity. First, let us consider f, g ∈ L²(R, H), and denote by f̂ and ĝ their Fourier transforms. It is straightforward to note that for all α, β ∈ H, the QFT of αf + βg is αf̂ + βĝ. Therefore the QFT is left-linear. Scaling. Let α ∈ R*. The QFT of the function f(t/α) is |α| f̂(αω), as can be checked by direct calculation. Derivatives. The QFT of the n-th derivative f^(n) of f is given by f̂(ω)(µω)ⁿ. Note that the multiplication by (µω)ⁿ is from the right, as the exponential kernel was placed on the right in the QFT definition (11). Invariant subspace. An important feature of the QFT of axis µ defined in (11) is that the subspace L²(R, C_µ) ⊂ L²(R, H) is invariant under it. This shows that the restriction of the QFT of axis µ to L²(R, C_µ) defines a transform isomorphic to the well-known complex Fourier transform. Convolution and product. Convolution is one of the cornerstones of signal processing. 
The following proposition gives the expression of the QFT of a convolution product under some conditions. Proposition 1 (Convolution). Let f ∈ L²(R, H) and g ∈ L²(R, C_µ), with respective QFTs of axis µ denoted by f̂ and ĝ. Then the QFT of axis µ of the convolution product f * g is given by f̂(ω)ĝ(ω). Note that in this case the convolution product is not commutative, i.e. f * g ≠ g * f, as f and g are quaternion-valued functions. Proof. A direct calculation gives the result, using the fact that ĝ(ω) and exp(−µωu) commute, since C_µ is an invariant subspace of the QFT of axis µ. Proposition 2 (Product). Let f ∈ L²(R, H) and g ∈ L²(R, C_µ), with respective QFTs of axis µ denoted by f̂ and ĝ. Then the QFT of axis µ of their product fg is given by (1/2π)(f̂ * ĝ)(ω). Proof. Similar to the proof of the convolution property. Uncertainty principle. Of great importance is the uncertainty principle, also known as the Gabor-Heisenberg uncertainty principle. First, considering a function f ∈ L²(R, H), we define the temporal mean u as u = ‖f‖^{−2} ∫ t |f(t)|² dt and the mean frequency ξ as ξ = (2π‖f‖²)^{−1} ∫ ω |f̂(ω)|² dω. The spreads around these mean values are defined as σ_t² = ‖f‖^{−2} ∫ (t − u)² |f(t)|² dt and σ_ω² = (2π‖f‖²)^{−1} ∫ (ω − ξ)² |f̂(ω)|² dω. Theorem 2 (Gabor-Heisenberg uncertainty principle). Given a function f ∈ L²(R, H) with QFT f̂ and time (resp. frequency) spread σ_t² (resp. σ_ω²), the following holds: σ_t² σ_ω² ≥ 1/4. Proof. Using a change of variables, it is sufficient to prove the theorem in the case u = ξ = 0. Since f̂(ω)(µω) is the QFT of f′(t), the Plancherel identity applied to f̂(ω)(µω) relates σ_ω² to the energy of f′. Schwarz's inequality (legitimate since L²(R, H) is a Hilbert space) followed by an integration by parts then yields the bound. Bivariate signals and choice of the axis of the transform It is well known that bivariate signals can be equivalently described as a pair of real signals or as a single complex-valued signal. For instance, given a bivariate signal f(t), the decomposition f(t) = f_r(t) + i f_i(t), (39) where f_r(t) and f_i(t) are real-valued, is a valid decomposition of f(t) in C_i. Actually, any subfield C_µ could have been used to decompose f(t), as they are all isomorphic to the complex field. In the sequel, we will thus use the terms bivariate, complex, or C_i-valued interchangeably depending on the context, and will assume that a bivariate signal f is of the form (39). There are several possibilities for the choice of the axis µ of the QFT. If one chooses µ = i, then this is simply the classical complex Fourier transform. This shows that the QFT definition encompasses the standard complex case, but that it allows different choices as well. This point is interesting, as it is well known that the complex Fourier transform does not exhibit Hermitian symmetry when f is complex-valued. This prevents one from defining standard time-frequency tools (e.g. the analytic signal) that rely on the Hermitian symmetry property of the transform. Is it possible to obtain a transform that exhibits a "Hermitian-like" symmetry for a C_i-valued signal? Proposition 3 below brings a positive answer. Writing down explicitly the QFT of axis µ for a signal as in (39), we have f̂(ω) = ∫ f_r(t) e^{−µωt} dt + i ∫ f_i(t) e^{−µωt} dt. The key idea is now to treat the real and imaginary parts of f separately in the QFT. That is, we would like to impose that the QFT maintain the separation between the real and imaginary parts of f in the frequency domain. This constraint is easily satisfied provided that µ is orthogonal to i, that is S(iµ) = 0. The simplest choice for µ is the second imaginary axis of H, namely µ = j, and we will stick to this choice in the remainder of the paper. 
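Before formalizing this choice, note its computational consequence: with µ = j and f = f_r + i f_i, the quaternion spectrum splits as f̂ = f̂_r + i f̂_i with f̂_r, f̂_i both C_j-valued, so each part can be stored as an ordinary complex array whose imaginary unit plays the role of j, and the whole transform costs two standard FFTs. A minimal sketch for a uniformly sampled signal (an illustration, not the authors' toolbox):

```python
import numpy as np

def qft_j(f):
    """QFT of axis j of a bivariate signal f = f_r + i*f_i (complex array).

    The quaternion spectrum a + b i + c j + d k is returned as the pair
    (fr_hat, fi_hat) = (a + c*1j, b + d*1j): two C_j-valued arrays, where
    the NumPy imaginary unit stands for the quaternion axis j.
    """
    fr_hat = np.fft.fft(f.real)   # QFT (= ordinary FT) of the real part
    fi_hat = np.fft.fft(f.imag)   # QFT of the i-part
    return fr_hat, fi_hat

def iqft_j(fr_hat, fi_hat):
    """Inverse QFT of axis j; returns the C_i-valued signal f_r + i*f_i."""
    return np.fft.ifft(fr_hat).real + 1j * np.fft.ifft(fi_hat).real

# Each C_j-valued part obeys the usual Hermitian symmetry of the FFT of a
# real signal, which is the source of the i-Hermitian symmetry below.
rng = np.random.default_rng(0)
f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
fr_hat, fi_hat = qft_j(f)
print(np.allclose(f, iqft_j(fr_hat, fi_hat)))              # True
print(np.allclose(fr_hat[1:], np.conj(fr_hat[1:][::-1])))  # True
```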
Therefore we choose to use the following QFT definition: f̂(ω) = ∫ f(t) e^{−jωt} dt. This QFT of a complex-valued signal exhibits a particular symmetry. Proposition 3 (i-Hermitian symmetry). Let f ∈ L²(R, C_i) with QFT of axis j denoted by f̂. Then for all ω, f̂(−ω) = −i f̂(ω) i. (42) Proof. We write f(t) = f_r(t) + i f_i(t), with f_r(t) and f_i(t) real-valued functions. Therefore, the QFT of axis j reads f̂(ω) = f̂_r(ω) + i f̂_i(ω), with f̂_r, f̂_i the QFTs of f_r, f_i respectively. Recall that f_r and f_i are real-valued functions, so that f̂_r(ω), f̂_i(ω) are C_j-valued. Therefore, their QFTs satisfy the usual Hermitian symmetry (as the QFT of real signals is isomorphic to the standard Fourier transform). As a result, f̂(−ω) = f̂_r(ω)̄ + i f̂_i(ω)̄ = −i f̂(ω) i. The last equation is obtained by recalling that if z ∈ C_j, then z i = i z̄. This result is fundamental and will permit the construction of the quaternion embedding of a complex signal in Section 3. This result is, with regard to the Hermitian symmetry of the Fourier transform of real signals, the very first building block of nonstationary bivariate signal processing tools. Quaternion embedding of complex-valued signals For simple real-valued signals, it is natural to write f(t) = a(t) cos[ϕ(t)], where a(t) ≥ 0 and ϕ(t) are respectively identified as the instantaneous amplitude and phase [32,33]. When the signal is richer, time-frequency analysis techniques aim to provide a set of pairs [a_k(t), ϕ_k(t)] that faithfully describes the signal. Nevertheless, the decomposition f(t) = a(t) cos[ϕ(t)] is the simplest time-frequency model. It has many shortcomings, but understanding it is helpful in understanding how things work. The analytic signal of f provides a well-known way to associate to a simple real signal f an instantaneous amplitude a(t) and phase ϕ(t) [32]. It is obtained by suppressing the negative frequencies from the spectrum [34,35]. This operation is motivated by the Hermitian symmetry of the spectrum of real signals: the negative-frequency part of the spectrum carries no information. This operation associates a unique canonical pair [a(t), ϕ(t)] to the real signal f [34]. When the signal f takes complex values (i.e. is bivariate), the analytic signal approach is not directly applicable, since its spectrum no longer exhibits Hermitian symmetry. It was proposed in [19] to overcome this limitation by considering an analytic and an anti-analytic signal, the latter being the analytic signal associated to the negative part of the spectrum. However, this approach has some inherent limitations due to the complex Fourier transform. The usual complex Fourier transform brings an intrinsic ambiguity between the geometric content and the frequency content. For instance, a linearly polarized signal f oscillating in the direction exp(iθ) at angular frequency ω can be written as f(t) = exp(iθ) cos(ωt). Its positive-frequency content, proportional to exp(iθ) exp(iωt), can be read either as that of a linearly polarized signal at pulsation ω in the direction exp(iθ), or as the analytic signal of g(t) = cos(ωt + θ). Therefore the geometric content (the direction of oscillation) is mixed with the frequency information. The low dimensionality of the complex field is what prevents one from directly separating the geometric content from the frequency content. Using the QFT and the four dimensions of the quaternion algebra makes it possible to account for the geometric and frequency variables separately. Moreover, the i-Hermitian symmetry (42) of the QFT of complex-valued signals will lead to the definition of a QFT counterpart of the analytic signal: the quaternion embedding. We build upon the recent paper by one of the authors [36] and present developments on the interpretation of the physical quantities offered by this construction. Definition 1 (Quaternion embedding of complex signals). 
Let f : R → C_i. Its quaternion embedding f+ is defined as f+(t) = f(t) + H{f}(t) j, where H{·} denotes the Hilbert transform H{f}(t) = (1/π) p.v. ∫ f(s)/(t − s) ds, and p.v. stands for the Cauchy principal value. By construction, f+ is H-valued. The original signal f is recovered in the {1, i}-components of f+, and the Hilbert transform of f is found in the {j, k}-part of f+. Let us compute the QFT of axis j of the quaternion embedding f+: f̂+(ω) = f̂(ω) + Ĥ{f}(ω) j, which can be further decomposed since the quaternion Hilbert transform can be separated into two Hilbert transforms acting on the real and i-imaginary parts of f respectively. As a consequence, with f̂_r, f̂_i being C_j-valued, the QFT of the quaternion Hilbert transform of f reads Ĥ{f}(ω) = f̂(ω)(−j sign ω), so that f̂+(ω) = f̂(ω)(1 + sign ω) = 2 f̂(ω) U(ω), (52) where U(ω) is the Heaviside unit step function. Equation (52) shows that: (i) the quaternion embedding of any complex-valued signal has a one-sided spectrum; (ii) f and f+ have the same frequency content in the positive-frequency region ω > 0, up to a factor 2. These properties are direct continuations of those of the analytic signal of a real signal. Let us define the real Hardy space H²(R, H) as H²(R, H) = {f ∈ L²(R, H) : f̂(ω) = 0 for ω < 0}. By construction, the quaternion embedding f+ of a complex signal f ∈ L²(R, C_i) belongs to H²(R, H). More precisely, the quaternion embedding establishes a one-to-one mapping between L²(R, C_i) and H²(R, H), that is, between a complex signal and its quaternion embedding. Instantaneous complex amplitude, ellipticity and phase To any real signal one can associate the canonical pair [a(t), ϕ(t)] using the polar form of its analytic signal. We show here that one can associate a canonical triplet to any complex signal using its quaternion embedding, by means of an appropriate factorization. In [36] a complex signal is described by a canonical pair [a(t), ϕ(t)], with a(t) and ϕ(t) a priori taking complex values; the authors use the recently introduced polar Cayley-Dickson form of a quaternion [37]. While a(t) is interpreted as the instantaneous complex amplitude, the instantaneous phase ϕ(t) is restricted to be real, as the meaning of a complex instantaneous phase is not clearly interpretable. This restriction prevents one from considering generic bivariate signals. What follows circumvents this issue. The key idea is to decompose any quaternion embedding f+ using the Euler polar form introduced in Section 2.1: f+(t) = |f+(t)| e^{iθ(t)} e^{−kχ(t)} e^{jϕ(t)}, (54) or equivalently, f+(t) = a(t) e^{−kχ(t)} e^{jϕ(t)}, with a(t) = |f+(t)| e^{iθ(t)} ∈ C_i. (55) This decomposition defines the canonical triplet [a(t), χ(t), ϕ(t)] ∈ C_i × [−π/4, π/4] × [−π/2, π/2]. The canonical triplet describes the bivariate signal f(t). By construction, f(t) is the projection of f+(t) onto C_i, so that f(t) = a(t) [cos χ(t) cos ϕ(t) + i sin χ(t) sin ϕ(t)]. (56) Note the choice of the negative sign in (55), which leads to a positive second term in (56). A clear physical interpretation of the canonical triplet [a(t), χ(t), ϕ(t)] is possible under some usual restrictions. We consider bivariate AM-FM signals, which can be seen as a thorough generalization of univariate AM-FM signals. A (monocomponent) bivariate AM-FM signal is such that the variations of ϕ(t) are much more rapid than those of the other components: |a′(t)|/|a(t)|, |χ′(t)| ≪ |ϕ′(t)|. (57) The quantity ϕ(t) is called the instantaneous phase of f. This choice is natural since ϕ(t) appears in (9) along the same axis as the QFT, i.e. j in our case. The instantaneous frequency of f is thus given by ϕ′(t). The term a(t) is referred to as the instantaneous complex amplitude. The term χ(t) is to be interpreted as the instantaneous ellipticity. Figure 2 depicts the ellipse traced out by the model (56) over time under condition (57). 
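Before discussing the geometry of Figure 2, a hedged numerical sketch of the embedding and of the canonical triplet may help (it assumes scipy is available: scipy.signal.hilbert returns f + iH{f} for a real input, from which the Hilbert transform of each part of f is read off; the half-angle formulas are those of the earlier phase-triplet snippet, up to the q ↔ −q ambiguity of Figure 1):

```python
import numpy as np
from scipy.signal import hilbert

def quaternion_embedding(f):
    """Quaternion embedding f+ = f + H{f} j of a complex signal f.

    Returned as an (N, 4) array of components (1, i, j, k):
    f+ = f_r + f_i i + H{f_r} j + H{f_i} k.
    """
    Hfr = np.imag(hilbert(f.real))   # Hilbert transform of the real part
    Hfi = np.imag(hilbert(f.imag))   # Hilbert transform of the i-part
    return np.stack([f.real, f.imag, Hfr, Hfi], axis=-1)

def canonical_triplet(fp):
    """Instantaneous [a(t), chi(t), phi(t)] from an (N, 4) embedding array.

    The complex amplitude a(t) = |f+| e^{i theta} absorbs the orientation.
    """
    a_, b, c, d = fp[..., 0], fp[..., 1], fp[..., 2], fp[..., 3]
    n2 = a_**2 + b**2 + c**2 + d**2
    chi = 0.5 * np.arcsin(np.clip(2 * (b*c - a_*d) / n2, -1.0, 1.0))
    theta = 0.5 * np.arctan2(2 * (a_*b + c*d), a_**2 + c**2 - b**2 - d**2)
    phi = 0.5 * np.arctan2(2 * (a_*c + b*d), a_**2 + b**2 - c**2 - d**2)
    amp = np.sqrt(n2) * np.exp(1j * theta)
    return amp, chi, phi
```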
Over a short enough period of time, the quantities a(t) = a and χ(t) = χ can thus be assumed constant. The dot on the ellipse represents the value of the complex signal f(t) at a fixed arbitrary time. The argument θ = arg a of a ∈ C_i gives the orientation of the instantaneous ellipse. Its modulus |a| acts as a scale factor of this ellipse. The ellipticity angle χ controls the shape of the ellipse: if χ = 0, the ellipse degenerates into a line segment, while if χ = ±π/4 one obtains a circle. The sign of χ(t) controls the direction of rotation along the ellipse. Finally, the angle ϕ(t) gives the position of f(t) on the ellipse. Section 3.4 explores this model further with some examples. The ellipse parameters can evolve over time. The succession of instantaneous ellipses describes a three-dimensional tube (time being considered as an axis), and thus defines the bivariate envelope of the signal. Lilly and Olhede [19] have proposed a model similar to (56), called the Modulated Elliptical Signal (MES) model. It originates from oceanographic signal processing applications, with a slightly different parametrization. The quaternion embedding method a posteriori justifies the relevance of this model. Instantaneous Stokes parameters There is a straightforward connection between the canonical triplet and the notion of polarization state in optics. The pair [θ(t), χ(t)] represents exactly the spherical coordinates on the Poincaré sphere that describes the polarization state of the bivariate signal f. Moreover, the square magnitude |a(t)|² is the radius of this Poincaré sphere. In practice, one often characterizes the polarization state via the associated Stokes parameters [38, p. 31]. In optics there are four Stokes parameters, which read (in the case of fully polarized light) S0(t) = |a(t)|², S1(t) = |a(t)|² cos 2χ(t) cos 2θ(t), S2(t) = |a(t)|² cos 2χ(t) sin 2θ(t), S3(t) = |a(t)|² sin 2χ(t). Here S0(t) is simply the instantaneous energy density of the signal. The three Stokes parameters S1, S2, S3 decompose S0 in terms of polarization content, as for all t, S0²(t) = S1²(t) + S2²(t) + S3²(t). Note that the four Stokes parameters are energetic quantities. In contrast with what is generally used in physics, the Stokes parameters defined here are time-dependent. In optics textbooks, Stokes parameters are commonly defined as time-averaged values [38, Eq. (64), p. 554]: the time-averaging operator acts with respect to the oscillation of the electromagnetic field. Here, it corresponds to averaging only with respect to the instantaneous phase ϕ(t). The instantaneous Stokes parameters so defined thus describe evolving polarization properties. The quaternion embedding f+ leads naturally to the Stokes parameters of f. The first Stokes parameter is simply obtained by |f+(t)|² = S0(t). Moreover, basic quaternion arithmetic shows that f+(t) f+(t)^{*j} = S1(t) + i S2(t) − k S3(t), so that the three remaining Stokes parameters are readily obtained from the quantity f+(t) f+(t)^{*j}. This allows one to identify quantities of the type f+(t) f+(t)^{*j} as natural energetic measures of the polarization content of f, while |f+(t)|² is simply an energy density. Note that the two quantities |f(t)|² and f(t) f(t)^{*j} are also invariants of the QFT, as stated by the Parseval-Plancherel Theorem 1. Examples Equation (56) is the bivariate (polarized) counterpart of the well-known AM-FM model a(t) cos[ϕ(t)]. Investigating which signals are generated by (56) is thus of great importance; a numerical illustration is sketched below. 
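Here is a small worked example, a sketch reusing the hypothetical quaternion_embedding and canonical_triplet helpers introduced above: synthesize a signal from (56) with constant orientation θ₀, constant ellipticity χ₀ and a pure oscillation ϕ(t) = ω₀t (a bin-aligned tone, so the discrete Hilbert transform is exact), then check that the embedding recovers the parameters and yields the instantaneous Stokes parameters.

```python
import numpy as np

# Model (56): f = a [cos(chi0) cos(phi) + i sin(chi0) sin(phi)],
# with a = |a| e^{i theta0} and phi(t) = omega0 t (64 cycles over 1024 samples).
t = np.arange(1024) / 1024.0
theta0, chi0 = 0.7, 0.3
phi = 2 * np.pi * 64 * t
a0 = 1.5 * np.exp(1j * theta0)
f = a0 * (np.cos(chi0) * np.cos(phi) + 1j * np.sin(chi0) * np.sin(phi))

fp = quaternion_embedding(f)              # hypothetical helper (sketch above)
amp, chi, phi_hat = canonical_triplet(fp)

# Instantaneous Stokes parameters from the canonical triplet.
S0 = np.abs(amp)**2
S1 = S0 * np.cos(2 * chi) * np.cos(2 * np.angle(amp))
S2 = S0 * np.cos(2 * chi) * np.sin(2 * np.angle(amp))
S3 = S0 * np.sin(2 * chi)

print(np.allclose(chi, chi0, atol=1e-6),             # ellipticity recovered
      np.allclose(np.angle(amp), theta0, atol=1e-6))  # orientation recovered
```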
Limitations Exactly as in the univariate case, the quaternion embedding does not provide useful information when the signal is multicomponent. Consider the signal f(t) = α cos ω₀t + α cos ω₁t, with α ∈ C_i, which is a sum of two linearly polarized signals at angular frequencies ω₀ and ω₁. Its quaternion embedding reads f+(t) = α(e^{jω₀t} + e^{jω₁t}) = 2α cos((ω₁ − ω₀)t/2) e^{j(ω₀+ω₁)t/2}, which gives us immediately the Euler polar form, with the canonical parameters given by χ(t) = 0, a(t) = 2α cos((ω₁ − ω₀)t/2) and ϕ(t) = (ω₀ + ω₁)t/2. While χ(t) = 0 shows that we indeed have linear polarization, the values of θ(t) and ϕ(t) poorly reflect the multicomponent nature of the signal f. This motivates the construction of dedicated time-frequency representations for bivariate signals. Time-frequency representations of bivariate signals We will focus here on two fundamental theorems, leaving the interpretation of such representations to Section 5. Examples illustrating the use of the newly introduced time-frequency representations will be given in Section 6. Definition and completeness The quaternion embedding is unable to separate multiple components, so we introduce the quaternion short-term Fourier transform (Q-STFT). In this section, we suppose that f ∈ L²(R, H). Let g be a real and symmetric normalized window, with ‖g‖ = 1. For u, ξ ∈ R, its translated-modulated version is g_{u,ξ}(t) = e^{jξt} g(t − u). The exponential is on the left; this choice has no influence since g is real, but it is convenient because of the permutation arising when taking the quaternion conjugate. The functions g_{u,ξ}(t) define time-frequency-polarization atoms. The definition of g_{u,ξ} is classical: the term polarization solely indicates that the atoms are C_j-valued, rather than C_i-valued. The resulting Q-STFT of f is given by Sf(u, ξ) = ⟨f, g_{u,ξ}⟩ = ∫ f(t) g(t − u) e^{−jξt} dt. The first fundamental theorem needed to build a time-frequency analysis of bivariate signals ensures energy conservation and a reconstruction formula. Theorem 3 (Inversion formula and energy conservation). Let f ∈ L²(R, H). Then the inversion formula reads f(t) = (1/2π) ∬ Sf(u, ξ) g(t − u) e^{jξt} du dξ, (67) and the energy of f is conserved, as well as the polarization properties of f: ∫ |f(t)|² dt = (1/2π) ∬ |Sf(u, ξ)|² du dξ, (68) and ∫ f(t) f(t)^{*j} dt = (1/2π) ∬ Sf(u, ξ) Sf(u, ξ)^{*j} du dξ. (69) This fundamental result extends classical results [30] to the bivariate setting. Equation (68) shows that |Sf(u, ξ)|² defines an energy density in the time-frequency plane. Equation (69) shows that the instantaneous state of polarization given by the Stokes parameters is conserved by the representation. We can thus call the quantity Sf(u, ξ) Sf(u, ξ)^{*j} the polarization spectrogram of f. Redundancy: the RKHS structure The Q-STFT does not span the whole functional space L²(R², H), where R² stands here for the time-frequency plane. It has a reproducing kernel Hilbert space (RKHS) structure. Writing the Q-STFT of some function f ∈ L²(R, H) at some (u₀, ξ₀) ∈ R², the inversion formula (67) yields Sf(u₀, ξ₀) = (1/2π) ∬ Sf(u, ξ) K(u, ξ, u₀, ξ₀) du dξ, (71) where we have introduced the kernel K(u, ξ, u₀, ξ₀) = ⟨g_{u,ξ}, g_{u₀,ξ₀}⟩. Equation (71) shows that the image of L²(R, H) by the Q-STFT is an RKHS with kernel K. Note that this result is a bivariate extension of a property of the usual STFT [30]. Examples To understand the behavior of the Q-STFT, we look at two very simple signals; Section 5 will provide a more systematic interpretation of these results. Monochromatic polarized signal. Let f(t) be such that its quaternion embedding reads f+(t) = a₀ exp(−kχ₀) exp(jω₀t), with a₀ ∈ C_i, χ₀ ∈ [−π/4, π/4] and ω₀ ∈ R. Its Q-STFT reads Sf+(u, ξ) = a₀ e^{−kχ₀} e^{j(ω₀−ξ)u} ĝ(ξ − ω₀), which is localized around the frequency ξ = ω₀ in the time-frequency plane, as expected. The polarization spectrogram of f+ is Sf+(u, ξ) Sf+(u, ξ)^{*j} = |ĝ(ξ − ω₀)|² (S1 + i S2 − k S3), so that the polarization spectrogram of f+ immediately gives the three time-frequency Stokes parameters that fully characterize f+. 
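As with the QFT itself, the Q-STFT of a bivariate signal reduces numerically to two ordinary STFTs, one per C_j-valued component of Sf = Z₁ + i Z₂; the polarization spectrogram then follows from the quaternion identity Sf Sf^{*j} = S1 + i S2 − k S3. A minimal sketch (assuming scipy.signal.stft, whose window normalization conventions differ slightly from the paper's; an illustration, not the authors' toolbox):

```python
import numpy as np
from scipy.signal import stft

def q_stft(f, fs=1.0, nperseg=256):
    """Q-STFT of a bivariate signal f as two C_j-valued STFTs."""
    freqs, times, Z1 = stft(f.real, fs=fs, nperseg=nperseg)
    _,     _,     Z2 = stft(f.imag, fs=fs, nperseg=nperseg)
    return freqs, times, Z1, Z2

def polarization_spectrogram(Z1, Z2):
    """Energy density S0 and time-frequency Stokes parameters S1, S2, S3.

    With Sf = Z1 + i Z2 (Z1, Z2 in C_j), quaternion arithmetic gives
    Sf Sf^{*j} = S1 + i S2 - k S3.
    """
    S0 = np.abs(Z1)**2 + np.abs(Z2)**2
    S1 = np.abs(Z1)**2 - np.abs(Z2)**2
    S2 = 2 * np.real(Z1 * np.conj(Z2))
    S3 = 2 * np.imag(Z1 * np.conj(Z2))
    return S0, S1, S2, S3
```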
Quaternion continuous wavelet transform The fixed size of the Q-STFT atoms in the time-frequency plane prevents one from analysing a large range of frequencies over short time scales. Following the classical theory, we now introduce the quaternion continuous wavelet transform (Q-CWT). The wavelet atoms are C_j-valued, to mimic the Q-STFT atoms. The resulting quaternion continuous wavelet transform decouples geometric and frequency content. Definition and completeness Throughout this section, we restrict our analysis to the real Hardy space H²(R, H) introduced in Section 3. Recall that one can associate to any f ∈ L²(R, C_i) a unique element f+ ∈ H²(R, H) called the quaternion embedding of f. Definition 2 (Polarization wavelets). A polarization wavelet is a C_j-valued function ψ ∈ H²(R, C_j), normalized with ‖ψ‖ = 1 and centered at t = 0. Time-scale-polarization atoms are defined as translated-dilated versions of the wavelet ψ: ψ_{u,s}(t) = s^{−1/2} ψ((t − u)/s). The translation and dilation parameters run over the Poincaré half-plane, i.e. (u, s) ∈ P (see Table 1). These atoms are normalized, so that ‖ψ_{u,s}‖ = 1. This definition is again classical; the term polarization indicates that the atoms are C_j-valued, rather than C_i-valued. The quaternion continuous wavelet transform of a bivariate signal f at time u and scale s is W f(u, s) = ⟨f, ψ_{u,s}⟩ = s^{−1/2} ∫ f(t) ψ((t − u)/s)̄ dt. The second fundamental theorem of this paper ensures energy conservation and the existence of an inversion formula for the quaternion continuous wavelet transform. Theorem 4 (Inversion formula and energy conservation). Let f ∈ H²(R, H), and let ψ ∈ H²(R, C_j) be a polarization wavelet. Suppose that the admissibility condition is satisfied, that is, C_ψ = ∫₀^{+∞} |ψ̂(ω)|² dω/ω < +∞. The inverse reconstruction formula reads f(t) = C_ψ^{−1} ∬_P W f(u, s) ψ_{u,s}(t) du ds/s², (80) and the energy is conserved, as well as the polarization properties: ∫ |f(t)|² dt = C_ψ^{−1} ∬_P |W f(u, s)|² du ds/s², (81) and ∫ f(t) f(t)^{*j} dt = C_ψ^{−1} ∬_P W f(u, s) W f(u, s)^{*j} du ds/s². (82) Equation (81) indicates that the quantity |W f(u, s)|² can be interpreted as an energy density in the time-scale plane. Equation (82) means that the polarization properties of f are conserved by the representation. This leads to the definition of the polarization scalogram, which is the image of the coefficients W f(u, s) W f(u, s)^{*j} in the time-scale plane. While taking the wavelet ψ ∈ H²(R, C_j) is necessary to extract time-frequency (or time-scale) tones, the condition f ∈ H²(R, H) is not restrictive. Indeed, since there is a one-to-one correspondence between a bivariate signal f ∈ L²(R, C_i) and its quaternion embedding f+ ∈ H²(R, H), the results presented here are also valid for signals f ∈ L²(R, C_i); one then has W f(u, s) = (1/2) W f+(u, s). Polarization wavelet design. Definition 2 is classical, and polarization wavelets are constructed similarly to classical analytic wavelets [30]. Namely, they can be built as the frequency modulation of a real, symmetric window g, which yields ψ(t) = g(t) exp(jηt), which admits the QFT ψ̂(ω) = ĝ(ω − η). If ĝ(ω) = 0 for |ω| > η, then the wavelet belongs to H²(R, C_j). RKHS structure As with the Q-STFT, the image of L²(R, H) by the Q-CWT does not span the whole space L²(P, H), where P denotes the upper-half time-scale plane. Rather, it spans only a subspace of it, where the redundancy of the representation is encoded within an RKHS structure. Starting from the definition of the Q-CWT and plugging in the inversion formula (80), one gets W f(u₀, s₀) = C_ψ^{−1} ∬_P W f(u, s) K(u, s, u₀, s₀) du ds/s², where we have introduced the kernel K(u, s, u₀, s₀) = ⟨ψ_{u,s}, ψ_{u₀,s₀}⟩. 
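The Q-CWT admits the same two-FFT implementation. Below is a minimal frequency-domain sketch with a Morlet-type wavelet (a hedged illustration: the Gaussian window is only numerically admissible, and normalization constants are simplified). The polarization scalogram is obtained from the pair (W1, W2) exactly as in the Q-STFT sketch, via the same polarization_spectrogram helper.

```python
import numpy as np

def q_cwt_morlet(f, scales, eta=5.0):
    """Q-CWT of a bivariate signal with a Morlet-type polarization wavelet.

    Computed in the frequency domain: at scale s, W f(u, s) picks out the
    part of the quaternion spectrum around omega = eta / s. Returns two
    C_j-valued coefficient arrays of shape (len(scales), len(f)).
    """
    N = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(N)        # rad/sample
    fr_hat, fi_hat = np.fft.fft(f.real), np.fft.fft(f.imag)
    W1 = np.empty((len(scales), N), dtype=complex)
    W2 = np.empty((len(scales), N), dtype=complex)
    for m, s in enumerate(scales):
        # Morlet-type window in frequency, supported on omega > 0 so that
        # the wavelet lies (numerically) in the Hardy space.
        psi_hat = np.sqrt(s) * np.exp(-0.5 * (s * omega - eta)**2) * (omega > 0)
        W1[m] = np.fft.ifft(fr_hat * psi_hat)    # C_j part along {1, j}
        W2[m] = np.fft.ifft(fi_hat * psi_hat)    # C_j part along {i, k}
    return W1, W2
```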
Asymptotic analysis and ridges The goal of this section is the so-called ridge analysis, that is, to provide results about the energy localisation in the time-frequency plane (resp. time-scale plane) of bivariate signals. Ridge analysis has attracted much interest since the 90's. Early work from [39] provided an asymptotic approach. Subsequent theoretical results were developed in a more general setting in [30] and in the context of the analytic wavelet transform by [40]. The discussion here follows closely the approach presented in [39] for univariate signals. It relies upon an asymptotic hypothesis on the signal, which essentially means that condition (57) is satisfied: the phase varies much faster than the geometric components. Finally, we will discuss how well-known algorithms in ridge analysis can be applied to the bivariate setting. Ridges of the quaternion short-term Fourier transform The time-frequency-polarization atoms g_{u,ξ} are of the form g_{u,ξ}(t) = g(t − u) exp(jξt), where g is a real, symmetric and normalized window. Under some conditions detailed in Appendix B.2, the ridge of the transform is given by the set of points (u, ξ) such that ξ = ξ_R(u) = ϕ′(u); it gives the instantaneous frequency of the signal. On the ridge, the Q-STFT is simply the quaternion embedding of f up to some corrective factor with values in C_j. As a consequence, assuming that the ridge has been extracted from the Q-STFT coefficients, the polarization properties of f are readily obtained from the Euler polar decomposition of Sf(u, ξ_R(u)). The polarization spectrogram on the ridge is proportional to S1 + i S2 − k S3: on the ridge, one has direct access to the instantaneous Stokes parameters of f+. Ridges of the quaternion continuous wavelet transform Recall that we consider wavelets ψ(t) = g(t) exp(jηt), where g is a real, symmetric window and η > 0 is the central frequency, such that ψ belongs to H²(R, C_j). Under some conditions detailed in Appendix B.2, the ridge of the transform is given by the set of points (u, s) ∈ P such that η/s = ϕ′(u), i.e. s_R(u) = η/ϕ′(u), which again yields the instantaneous frequency of the signal. The restriction of the Q-CWT to the ridge is again simply the quaternion embedding of f up to some corrective factor with values in C_j. Computing the Euler polar decomposition of W f(u, s_R(u)) immediately gives the instantaneous polarization properties, and the polarization scalogram on the ridge again shows that the instantaneous Stokes parameters are available on the ridge. Ridge extraction and discussion It is possible to show that the ridge can be extracted from the j-phase of the Q-STFT and Q-CWT coefficients, as originally suggested in the univariate case by [39] (not discussed here). This approach is known to have shortcomings when the signal-to-noise ratio is low, and other approaches have to be used instead [41,42]. Existing ridge extraction algorithms can be straightforwardly adapted to the bivariate setting. A detailed discussion of ridge extraction methods is out of the scope of the present paper. In our simulations we have used a heuristic method which identifies at each instant u the local maxima of the energy density in the time-frequency (resp. time-scale) plane; a sketch is given below. This method, although not optimal, provides reasonably good results for our purpose. Time-frequency representations of bivariate signals: illustration We finally illustrate the time-frequency representations presented in Section 4 and the subsequent results of Section 5. Two synthetic examples are presented, which are polarized counterparts of classical examples (see e.g. [30]). A real-world example is also provided. 
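The ridge heuristic mentioned above admits a compact sketch (a deliberate simplification: a single ridge, taken as the argmax of the energy density at each time index, rather than all local maxima):

```python
import numpy as np

def extract_ridge(S0, axis_vals):
    """Heuristic single-ridge extraction from an energy density S0.

    S0 has shape (n_freq_or_scale, n_times); axis_vals holds the frequency
    (or scale) value of each row. At each instant, the ridge is taken as
    the row maximizing the energy density.
    """
    idx = np.argmax(S0, axis=0)                   # per-column argmax
    ridge = axis_vals[idx]                        # instantaneous frequency/scale
    on_ridge = (idx, np.arange(S0.shape[1]))      # to index coefficients on the ridge
    return ridge, on_ridge

# Usage sketch with the earlier (hypothetical) helpers:
# S0, S1, S2, S3 = polarization_spectrogram(Z1, Z2)
# ridge, on_ridge = extract_ridge(S0, freqs)
# S3_on_ridge = S3[on_ridge]    # instantaneous Stokes parameter along the ridge
```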
Two equivalent time-frequency representations of bivariate signals can be proposed. The first one is directly related to classical frequency analysis: one extracts ridges from a time-frequency energy density, and the instantaneous polarization properties are then unveiled using the Euler polar form on these ridges. The other approach is to compute the polarization spectrogram (resp. polarization scalogram) of the signal, which gives the three time-frequency Stokes parameters of the signal. We explore the benefits of the two methods. As they are equivalent representations, we will use the term polarization spectrogram (resp. scalogram) to denote one or the other. Sum of linearly polarized chirps Consider a superposition of two linear chirps, each having its own polarization properties, given by their Euler polar decompositions (95) and (96). The signal is defined on the time interval [0, 1] by N = 1024 equispaced samples. It can be written as the superposition f(t) = f₁(t) + f₂(t), where f₁ and f₂ are two linear chirps with distinct, fixed linear polarizations. This signal can be seen as a polarized version of the classical parallel linear chirps signal [30]. The Q-STFT was computed with a Hanning window of size 101 samples, providing good time-frequency clarity. Figure 4 shows the two equivalent polarization spectrograms of f. Figures 4a, b and c depict the three time-frequency Stokes parameters. Figures 4d, e and f show respectively the time-frequency energy density, the instantaneous orientation and the ellipticity. The three Stokes parameters provide a reading of the time-frequency-polarization properties of the two chirps. They have been normalized to be meaningfully interpreted: we have normalized S1, S2 and S3 by S0, which is simply the time-frequency energy density depicted in Figure 4d. While S3 is directly an image of the ellipticity, the orientation has to be recovered by simultaneously inspecting the three Stokes parameters. The time-frequency energy density permits the identification of the two linear chirps. This time-frequency energy density can be retrieved from the three Stokes parameters, as S0² = S1² + S2² + S3². Moreover, Figures 4e and 4f show the instantaneous orientation and ellipticity extracted from the ridges. The polarization properties of each chirp are correctly recovered. Sum of hyperbolic polarized chirps Consider two hyperbolic chirps, each having its own polarization properties. The signal is defined on the time interval [0, 1], with N = 1024 samples. It can be written as the superposition f(t) = f₁(t) + f₂(t), where the Euler polar decompositions of f₁ and f₂ are defined analogously. The Q-CWT was computed using a Morlet wavelet with η = 5. Figure 5 shows the two equivalent polarization scalograms of f. Figures 5a, b and c represent the three time-scale Stokes parameters S1, S2, S3. Figures 5d, e and f give the equivalent representation using the time-scale energy density, and the instantaneous orientation and ellipticity extracted from the ridges. The polarization properties of each chirp are correctly recovered. These two examples validate the use of the Q-STFT and Q-CWT representations for nonstationary bivariate signals. Time-frequency (resp. time-scale) resolution is governed by the choice of the window (resp. wavelet), so that the usual trade-offs apply. A real-world example Figure 6 shows a seismic trace of the 1991 Solomon Islands earthquake. This signal has already been studied by several authors [12,43,44]. The data is available as part of JLab [45]. It displays the time evolution of the process f(t) = y(t) + i r(t), where y is the vertical component and r is the radial component. 
The part of the signal which is represented contains N = 9000 samples, equispaced by 0.25 s. Figure 6 suggests that the signal is on average elliptically polarized, whereas the instantaneous orientation does not appear clearly from the seismic trace. The Q-STFT of the signal has been computed using a Hanning window of size 801 samples, with a window spacing equal to 10 samples. The Q-CWT of the signal has been computed on 200 scales, using a Morlet wavelet with η = 5. Figures 7 and 8 depict respectively the polarization spectrograms and polarization scalograms of f. In Figures 7d and 8d, the ridge has been represented on top of the time-frequency/time-scale energy density. Figures 7e, f and 8e, f show the instantaneous orientation and ellipticity on the ridge. From the ridge, this signal can be described to a first approximation as a slow linear chirp in frequency. Moreover, as seen in both descriptions (Q-STFT and Q-CWT), the orientation of the signal remains constant in the most energetic part, at around −100 degrees. The instantaneous ellipticity is on average equal to χ ≈ π/5, confirming the elliptical polarization obtained by visual inspection of Figure 6. However, we also see that the instantaneous ellipticity sporadically reaches π/4 (almost circular polarization) and 0⁺ (almost linear polarization), thus revealing more details about the signal. Conclusion We have proposed a generalization of time-frequency analysis to bivariate signals. Our approach is based on the use of a Quaternion Fourier Transform (QFT). It appears that, despite the apparent complexity of the quaternion algebra due to noncommutativity, the natural extension of the usual definitions leads to a bivariate time-frequency toolbox with nice properties. This new framework interestingly includes the usual univariate Fourier analysis, and a Gabor-Heisenberg uncertainty principle still holds. The definition of the quaternion embedding, a counterpart of the analytic signal for bivariate signals, yields a natural elliptic description of the instantaneous polarization state thanks to the Euler polar form. As a consequence, we have introduced instantaneous Stokes parameters, the relevant physical quantities to describe the polarization state of polarized waves. Turning to time-frequency representations, we have defined a quaternion short-term Fourier transform (Q-STFT) and a quaternion continuous wavelet transform (Q-CWT) as simple generalizations of the usual definitions, by using the QFT in place of the usual Fourier transform. This extension makes it possible to prove fundamental theorems on the conservation of energy and polarization quantities, as well as reconstruction formulas. These theorems permit the definition of spectrograms and scalograms, including the representation of the evolution of the polarization state in the time-frequency plane. These spectrograms and scalograms possess an underlying RKHS structure. In practice, due to the uncertainty principle, spectrograms and scalograms are never perfectly localized, so one often needs to extract ridges to accurately identify the time-frequency content of the signal. Classical ridge extraction algorithms apply to the present framework. The application of the proposed toolbox to synthetic as well as real-world data has demonstrated the efficiency of the proposed approach. The resulting graphical representations make the time-frequency content of bivariate signals very readable and intelligible. 
On a practical ground, the numerical implementation remains simple and cheap, since it relies on the use of a few fast Fourier transforms. The code will be made available from our websites. We emphasize the general relevance and efficiency of the proposed approach to analyze a wide class of bivariate signals without any ad hoc model. This approach is very generic. We believe that it will be useful in many applications where the joint time-frequency analysis of two components is required. Moreover, this work paves the way to the definition of even more general tools to deal with either bivariate signals indexed by more than one dimension, or multivariate signals with values in a D-dimensional space with D ≥ 2. Appendix A.1 (Proof of Theorem 3). Since g is real, its QFT is C_j-valued and it commutes with the exponential kernel, i.e. e^{jtω} ĝ(ω) = ĝ(ω) e^{jtω}. We conclude the proof of the inversion formula (67) by using that ‖g‖ = 1. From (A.2) we know that the QFT of Sf(u, ξ) with respect to u is f̂(ω + ξ) ĝ(ω). Then using the usual Plancherel formula in u yields (1/2π) ∬ |Sf(u, ξ)|² du dξ = (1/2π)² ∬ |f̂(ω + ξ)|² |ĝ(ω)|² dω dξ = ‖g‖² ‖f‖² = ‖f‖², which concludes the proof of the energy conservation property (68). The polarization conservation property (69) is proven along the same lines, using the second Plancherel formula in (16). Appendix A.2 (Proof of Theorem 4). We first prove a preliminary result. The polarization wavelets ψ are C_j-valued and we use the notation ψ_s(t) = s^{−1/2} ψ(t/s). Let f_s(u) = W f(u, s) denote the wavelet coefficients at scale s. The QFT with respect to u is f̂_s(ω) = f̂(ω) ψ̂_s(ω)̄. If the admissibility condition is satisfied (i.e. if C_ψ is finite), f+(t) and C_ψ^{−1} b(t), where b denotes the synthesis integral of the wavelet coefficients, have the same Fourier transforms. This proves the inversion formula (80). Then, in the ridge asymptotics of Appendix B, usual results on Gaussian integrals give an expression proportional to A(τ) |ϕ″(τ)|^{−1/2} e^{j sign(ϕ″(τ)) π/4} e^{jϕ(τ)}. (B.4)
2016-09-08T15:13:00.000Z
2016-09-08T00:00:00.000
{ "year": 2019, "sha1": "621972891d330ccc149042ca48bab1635e6e887e", "oa_license": null, "oa_url": "https://www.sciencedirect.com/science/article/am/pii/S1063520317300507", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3a7523816dd52143fb0e2e56602bfec4e249c660", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
218532851
pes2o/s2orc
v3-fos-license
Serum Neuropeptide Y Levels Are Associated with TNF-α Levels and Disease Activity in Rheumatoid Arthritis Background Neuropeptide Y (NPY) is a sympathetic neurotransmitter with effects on the regulation of inflammatory cells. The role of NPY in autoimmune inflammatory diseases such as rheumatoid arthritis (RA) is not completely understood. Therefore, we evaluated whether NPY levels are markers of disease activity in RA and whether there is a correlation between NPY levels and tumor necrosis factor-alpha (TNF-α), leptin, and interleukin 6 (IL-6) levels. Methods Cross-sectional design, including 108 women with RA. We assessed disease activity by DAS28-ESR (considering a score of ≥2.6 as active disease). Serum NPY levels and anti-CCP2 antibody, TNF-α, IL-6, and leptin levels were quantified (ELISA). Results Sixty-eight RA patients had active disease (RA-active), and 40 were in remission (RA-remission). RA-active patients had higher NPY levels vs. RA-remission (22.8 ± 13.6 vs. 17.8 ± 10.3; p = 0.04). NPY levels correlated with increased TNF-α levels (r = 0.32, p = 0.001). Leptin and IL-6 did not correlate with NPY levels. In the logistic regression analysis, NPY increased the risk of disease activity (OR: 1.04, 95% CI 1.006-1.09, p = 0.03). Conclusion Higher NPY levels are an independent marker of disease activity in RA. This study encourages the quantification of NPY levels as a surrogate marker for active RA. Future studies evaluating the role of NPY levels interacting with other proinflammatory cytokines are required. Introduction Rheumatoid arthritis (RA) is a systemic inflammatory disease affecting the synovial joints, leading to pannus formation with joint destruction and functional disability [1]. The aims of treatment for rheumatoid arthritis include maintaining low disease activity or remission and improving pain, fatigue, inflammation, working capacity, and health-related quality of life [2,3]. Nevertheless, there is a high frequency of failure of conventional treatments, particularly of monotherapy with synthetic disease-modifying antirheumatic drugs (syntDMARDs) [4]. Although the persistence of chronic inflammation in RA has a multifactorial pathogenesis, to date there is new evidence of the participation of the sympathetic nervous system in the regulation of inflammation in RA [5]. In this context, neuropeptide Y (NPY), a sympathetic neurotransmitter, might mediate effects on cardiovascular function, hypertension, obesity [6,7], and the regulation of inflammatory cells [5,8]. NPY also has a role in the link between the immune system and the neuroendocrine system [9]. In experimental studies, TNF activated the neuronal NPY promoter. In NPY−/− knockout enteric neurons from mice, lower secretion of TNF compared to wild-type mice has been demonstrated [10]. NPY can induce the activation of immune cells, including macrophages, neutrophils, and lymphocytes, inducing the release of proinflammatory cytokines including TNF-α and interleukin 6 (IL-6) [9]. Several reports have demonstrated abnormal concentrations of NPY in systemic lupus erythematosus and RA [11,12]. Although studies have been performed in RA patients, the findings have shown discordant results regarding NPY levels [11][12][13][14]. For instance, Härle et al. identified higher NPY levels in RA patients compared to those seen in healthy subjects, but there was no association observed between NPY levels and the clinical characteristics of RA patients [12]. Härle et al. 
also identified higher NPY concentrations in RA and SLE patients compared to controls, although they found no correlation between NPY levels and disease activity in RA patients [11]. Vlcek et al. identified that NPY levels did not differ between RA patients and controls [14]. To date, this lack of consistency between the results of the studies referred to above implies that the role of NPY levels as a possible marker of active disease in RA patients should be assessed in studies with a multivariate approach controlling for potential confounders, including the serum levels of other proinflammatory cytokines and leptin. Therefore, this study is aimed at determining whether NPY levels are markers of disease activity in RA and whether there is a correlation between NPY levels and TNF-α levels. Study Design. This study has a cross-sectional design. Study Population. This study included 108 women with RA from an outpatient clinic of a secondary care centre in Guadalajara, Mexico. Selected patients were women aged ≥18 years who met the 1987 American College of Rheumatology (ACR) criteria for RA [15] and signed a voluntary consent form for the study. We excluded patients with other autoimmune diseases, including overlapping syndromes; acute or chronic infections such as hepatitis B or C, human immunodeficiency virus, or tuberculosis; a diagnosis of cancer or chronic kidney disease; or an increase in serum transaminase levels of >2-fold the normal values. Pregnant or breastfeeding patients were also excluded from the study. 2.3. Ethics and Consent. The study protocol was performed according to the guidelines of the Declaration of Helsinki (64th WMA General Assembly). The Research and Ethics Committee of the Hospital General Regional #110, IMSS, in Guadalajara, Mexico, approved the study protocol under the registration number R-2014-1303-19. All participants were asked to sign a voluntary informed consent form before inclusion in the study; this form was also approved by the Research and Ethics Board of the hospital, in accordance with the ethical practices for research studies following the guidelines of the Helsinki Declaration. Study Development. Patients were assessed by trained researchers with a structured interview, a physical examination including the clinimetrics of the disease, and laboratory studies. 2.5. Assessment of Disease Activity. We evaluated disease activity using the Disease Activity Score for 28 joints with the Erythrocyte Sedimentation Rate (DAS28-ESR) as the acute-phase reactant [16]. DAS28-ESR is a widely validated index accepted worldwide, used as a criterion of the intensity of disease activity in RA patients. We classified the RA patients into the following two groups: (a) patients with active disease (DAS28-ESR score of ≥2.6; RA-active) and (b) RA patients in clinical remission (DAS28-ESR < 2.6) [17]. Additionally, we evaluated physical functioning using the validated Spanish version of the Health Assessment Questionnaire-Disability Index (HAQ-DI) [18]. 2.6. Anthropometric Measurements. Body weight was measured using bioelectrical impedance (Tanita™), following standardised protocols. Height was measured using a wall stadiometer (Seca™ model 206). 
Body Mass Index (BMI) was calculated in kg/m² and classified per the parameters described by the World Health Organization (WHO) as follows: normal weight (18.5 to 24.9 kg/m²), overweight (25 to 29.9 kg/m²), and obesity (≥30 kg/m²) [19]. Body Composition Measurements Using Densitometry. Body composition was assessed using Dual-energy X-ray Absorptiometry (DXA) (LUNAR 2000, Prodigy Advance; General Electric™, Madison, WI, USA) following the standardised protocols described by the manufacturer. We obtained by DXA the following parameters: fat mass (%) and lean mass (%). Laboratory Determinations. An 8 h fasting venous blood sample was obtained from the RA patients and controls. From this blood sample, the serum was separated and stored at −20°C for the NPY determinations. Levels of serum rheumatoid factor (RF) and second-generation anticyclic citrullinated peptide (anti-CCP) antibodies were also quantified. RF was measured by nephelometry in 73 patients at the time of the study, whereas anti-CCP was quantified by ELISA in 106 patients using a commercial kit (EUROIMMUN, Lübeck, Germany). 2.9. Determination of NPY, Leptin, Interleukin 6, and TNF-α Levels. Serum NPY levels were determined by ELISA using a commercial kit (EMD Millipore™; MI, USA). The detection range of NPY levels is 2 to 1,000 pg/mL. TNF-α levels were determined in 106 patients, IL-6 levels in 89 patients, and leptin levels in 91 patients. All these molecules were quantified by ELISA using commercial kits (R&D Systems, Inc., Minneapolis, MN, USA). All the laboratory determinations were performed by researchers blinded to the characteristics of the patients. Statistical Analysis. Quantitative variables were expressed as means and standard deviations (SD) and qualitative variables as frequencies and percentages (%). We used Pearson's correlation tests to identify correlations between NPY levels and age, duration of RA, BMI, fat mass, and serum levels of TNF-α, IL-6, leptin, RF, and anti-CCP antibodies. Comparisons of proportions between groups were computed using chi-squared tests (or Fisher's exact test when required). Comparisons of means between the RA and control groups were calculated using independent-sample Student's t-tests; a similar statistical approach was used to compare mean NPY levels between the RA-active and RA-remission groups. Multivariate linear and logistic regression analyses were performed to adjust the associations with the dependent variables for confounders. Covariates included in these models were those with biological plausibility of modifying the corresponding dependent variable (disease activity or serum NPY levels), together with those variables with a p value ≤ 0.20 in the bivariate analysis. A logistic regression model was built to identify whether NPY levels were associated with disease activity after adjusting for other variables. The variables included in the adjustment in the final model were as follows: age, BMI, body fat mass, biologic-DMARD use (anti-TNF-α agents), HAQ-DI score, and serum NPY levels. In this model, we used DAS28-ESR ≥ 2.6 (RA-active) as the dependent variable, and the forward conditional method was used to adjust for confounding variables. Multivariate linear regression analyses were performed with serum NPY levels as the dependent variable. 
In the linear regression model for NPY, age, BMI, anti-CCP levels, TNF-α levels, leptin levels, lean mass (%), fat mass (%), and RA disease duration were used as covariates. All analyses were performed using SPSS statistical software (IBM SPSS Statistics for Windows, Version 25.0; Armonk, NY: IBM Corp.).

3. Results and Discussion

3.1. Results. Table 1 presents the clinical characteristics of the 108 patients with RA included in the study. These patients had a mean age of 58.7 years and a mean disease duration of 13.9 years, and 63% had active disease. The following comorbidities were observed: 70% overweight or obesity, 97.2% body fat mass >33%, 75.9% dyslipidaemia, 39.8% hypertension, and 11.1% diabetes mellitus (data not shown in tables).

3.2. Discussion. In this study, we identified that the presence of disease activity in RA is associated with higher levels of NPY and that the serum levels of this neurotransmitter correlate positively with TNF-α levels but not with serum levels of leptin or IL-6. After adjusting for confounders, the increase in NPY levels remained associated with higher TNF-α levels and lower body fat mass. In a multivariate logistic regression analysis adjusting for potential confounders, NPY and high disability (HAQ-DI score) were both risk factors associated with disease activity. Only a few studies have evaluated the relationship between NPY levels and clinical characteristics in RA, with inconsistent results. To our knowledge, however, none has previously assessed the relation of NPY levels to both disease activity and serum TNF-α levels in the same study while adjusting for other proinflammatory molecules. TNF-α is a proinflammatory cytokine that plays a central role in the pathogenesis of disease activity in RA. Some authors have identified that TNF-α increases the secretion of other proinflammatory cytokines and adipokines, including leptin [20]. Nevertheless, we did not observe any correlation between TNF-α levels and leptin or IL-6. Some authors have assessed a possible association between NPY and clinical variables in RA patients treated with anti-TNF agents [12]. In the present study we examined a wide range of potential confounders of the relation between NPY levels and disease activity that had not been investigated previously, and we identified an association between NPY levels and higher serum levels of TNF-α. Härle et al. noted that treating RA patients with anti-TNF agents decreased their NPY levels [11]. The mechanisms explaining the effects of anti-TNF agents on NPY levels require further investigation. We hypothesize that NPY levels might decrease because of the inhibition of TNF-α activity on the cells that express NPY. The correlation between serum TNF-α concentrations and NPY levels observed in the present study might support the hypothesis that TNF-α increases NPY, although this effect could also be secondary to other factors. Härle et al. observed that an increase in fat mass accompanied the decrease in NPY in their patients after the use of TNF-α blockers [12]. TNF-α might also modify the synthesis of leptin, which inhibits neuropeptide Y secretion [21], although in the present study we did not observe any correlation between leptin and TNF-α levels or NPY levels.
Therefore, TNF-α might increase leptin secretion in certain tissues without this being reflected in circulating levels, contributing to changes in body mass and, similarly, to an increase in NPY [21]. However, we must acknowledge as a limitation of the present study that leptin, IL-6, and TNF-α concentrations were not measured in all the RA patients, because there was insufficient serum to quantify these molecules in all samples. Nevertheless, the sample size was sufficient to demonstrate that high NPY levels correlated with serum TNF-α levels. NPY concentrations did not correlate with the other proinflammatory molecules; although we are confident that the statistical power was sufficient to draw conclusions regarding TNF-α levels, for the leptin and IL-6 correlations we should be aware of the possibility of a type II error if no true correlation with these two molecules exists.

Table 1 (excerpt): Clinical characteristics of the RA patients.
Erythrocyte sedimentation rate (mm/h): 29.2 ± 13.7
Rheumatoid factor (IU/mL), n = 73: 230.05 ± 533.4
Positivity for rheumatoid factor (≥12 IU/mL), n (%): 57 (52.8)
Serum TNF-α levels (ng/mL), n = 106: 37.9 ± 137.6
Serum IL-6 levels (pg/mL), n = 89: 17.3 ± 38.3
Serum anti-CCP levels (RU/mL), n = 107: 97.9 ± 125.5
Serum leptin levels (pg/mL), n = 91: 73.5 ± 62.9
Serum NPY levels (pg/mL), n = 108: 20.9 ± 12.7
RA: rheumatoid arthritis; HAQ-DI: Health Assessment Questionnaire-Disability Index; DMARDs: disease-modifying antirheumatic drugs; TNF-α: tumor necrosis factor-alpha; IL-6: interleukin 6; NPY: neuropeptide Y.

NPY plays a crucial role in communication between the sympathetic nervous system (SNS) and the immune system [8]. However, the exact function of NPY in the immune system in RA requires additional research. In murine models, NPY can inhibit cytokines produced by T cells, decreasing some subpopulations of B cells and increasing subpopulations of naïve T cells [22]. Furthermore, NPY might affect leucocyte migration and adhesion to the endothelial wall [23]. In experimental studies, NPY can also modulate the response of immune cells, including macrophages, dendritic cells, neutrophils, and lymphocytes, and induce the release of various proinflammatory cytokines, including TNF-α and IL-6, as well as interferon-gamma (IFN-γ) in activated macrophages [9]. We identified that the presence of disease activity is associated with higher levels of NPY. Similar to our results, Härle et al. observed a correlation between NPY concentrations and DAS28-ESR [12]. Additionally, our results showed that high NPY levels were associated simultaneously with disease activity and increased TNF-α levels. In contrast, Härle et al. did not find a correlation between NPY levels and DAS28-ESR or other inflammatory markers [11], but those data should be considered in light of the absence of a multivariate approach adjusting for the effect of potential confounders. Our study is, to our knowledge, the first to evaluate the correlation between NPY levels and a broad spectrum of clinical variables, including abnormalities in body composition, disease activity, TNF-α, IL-6, and leptin. Additionally, we performed different statistical models to evaluate the determinants of these variables using an adjusted analysis.
We demonstrated that NPY levels are associated with TNF-α; the clinical significance of this finding should be explored further in long-term follow-up studies to determine whether NPY levels might serve as a surrogate marker of disease activity in RA.

4. Conclusions

Serum levels of NPY are significantly related to TNF-α levels and disease activity in RA. This study demonstrates that NPY levels are associated with an increase in disease activity in RA independently of IL-6, TNF-α, or leptin levels, so NPY levels may be considered a marker of disease activity. These results encourage future longitudinal studies to evaluate whether higher NPY levels are associated with the development of other outcomes, such as high disability and erosions, in RA patients.

Data Availability

The database used to support the findings of this study is available on request. If this database is required, please direct correspondence to Dr. Laura Gonzalez-Lopez (dralaura-gonzalez@prodigy.net.mx) or Dr. Norma A. Rodriguez-Jimenez (azul_umi@hotmail.com).
Conceptual Model for Mitigating Human-Wildlife Conflict Based on System Thinking

In the conservation process, it is unavoidable that conflict incidents occur between people and the wildlife surrounding a conservation area. Mitigating conflict between wildlife and people is considered a top conservation priority, particularly in landscapes where high densities of people and wildlife co-occur. Such conflict also occurs in the Leuser conservation area, located on the border of the North Sumatra and Aceh provinces, Indonesia. Easing the conflict problem is very difficult. This paper proposes a conceptual model based on system thinking to explore the factors that may strongly influence the conflict and to work out how to mitigate it. We show how this conceptual framework can be utilized to analyze the conflicts that occur and, further, how it can be used to develop a multi-criteria decision model.

1. Introduction

Wherever conservation takes place, particularly of a forest, conflict between humans and wildlife (HWC) can occur unavoidably. By analogy with the general meaning of conflict, HWC can be defined as interaction between humans and wildlife that results in negative impacts on human social, economic, or cultural life, on the conservation of wildlife populations, or on the environment. The issue carries a negative connotation because such conflict can obstruct effective conservation and hinder economic development and resource sustainability ([1]; [2]). A set of global trends relating to human populations, habitat evolution, and animal distribution and behaviour has contributed to the escalation of human-wildlife conflict worldwide. While humans and wildlife have co-existed for millennia, the frequency of conflicts involving problem animals has grown in recent decades, mainly because of the exponential increase in human populations and the consequent expansion of human activities ([3]; [1]), the partitioning of wildlife distributions ([4]; [5]; [6]), and the nature of landscape characteristics ([7]). Responses to these conflicts vary. However, when conflicts are handled improperly, they can create continuing public frustration, further reducing the credibility of the board that administers the program and detracting from long-term objectives ([8]; [9]). Some authors have found that conflict management approaches can be used effectively to manage HWC ([8]; [9]; [10]). [11] propose a decision model based on multi-criteria analysis to resolve human-wildlife conflicts, which requires the participation of local communities and other stakeholder groups. [2] suggests that social factors should be considered if HWC is to be resolved effectively, pointing out that direct wildlife damage is the main driver of conflict. Technical approaches, which are usually used to limit the damage ([4]; [12]), cannot be expected to lessen the conflict. However, we argue that this approach alone cannot handle HWC effectively, as factors other than social ones should also be involved. HWC can be regarded as a complex system, and in order to mitigate the conflict we need to include all the elements of that system. [13] use mental models to analyze HWC from the perspective of a social-ecological system in Namibia. They explore the process of mind mapping to gain insight into the understanding of HWC in a social-ecological system, and show that the model can be used to identify significant variables for easing the conflict. [14] also consider HWC in terms of its social dimensions.
Therefore, they use multi-criteria decision analysis (MCDA) as a decision support tool to evaluate management options for reducing HWC. [7] point out that although landscape characteristics are crucial in shaping human-wildlife interactions, a better insight into the mechanisms that drive those interactions is necessary. As many factors are involved, they use a conceptual model to integrate those factors in such a way as to provide a decision-making model with multiple objectives. Indicators derived from social science also focus on wildlife population size as an independent variable. Metrics such as the sociological carrying capacity [15] have been used to understand the diversity of stakeholder viewpoints on wildlife management. Regardless of the metric, ecological and social frameworks consistently describe wildlife population size as a primary driver of the human-wildlife interactions that lead to conflict. Fewer conceptual frameworks have integrated the ecological and social factors that affect human-wildlife interactions. This paper is concerned with the HWC occurring in the Leuser Landscape. We present a conceptual framework for mitigating the conflict, using system thinking to describe the inter-relationships among the components of the human-wildlife system as a whole.

2. Leuser Landscape

The Leuser Landscape is an area of forest located in the provinces of Aceh and North Sumatra on the island of Sumatra in Indonesia. Covering more than 2.6 million hectares, it is one of the richest expanses of tropical rain forest in Southeast Asia and is the last place on earth where the Sumatran elephant, Sumatran rhinoceros, Sumatran tiger, and Sumatran orangutan are found within one area. It has one of the world's richest yet least-known forest systems, and its vegetation is an important source of Earth's oxygen. It is among the most biodiverse and ancient ecosystems ever documented by science. The ecosystem stretches from the coast of the Indian Ocean to the Malacca Straits. It encompasses two vast mountain ranges, including Mount Leuser, which reaches 3,455 m, two major volcanoes, three lakes, and more than nine major river systems. As well as providing habitats for a number of endangered wildlife species, the ecosystem acts as a life-support system for approximately four million people who live around it by providing a steady supply of water, soil fertility, flood control, climate regulation, and pest mitigation. The Leuser Ecosystem comprises one of the remaining examples of Indo-Malayan (Malesian) vegetation communities, with an estimated 45% of the approximately 10,000 recorded plant species. In general, the ecosystem can be characterised as a montane rainforest community. However, the typical vegetation type up to an altitude of 600 metres is moist tropical lowland forest, characterised by multilayered storeys with emergent trees reaching between 45 and 60 metres in height and high densities of fruit tree species. The large variety of tree species found in Leuser represents virtually all life strategies of trees, from root-flowering and trunk-flowering to common twig-flowering types. Among the most important and impressive trees are several species of strangling fig, and the largest flower on earth, the parasitic Rafflesia, is relatively common in the ecosystem.

3. System Thinking

The approach of systems thinking is to consider any given issue, such as HWC, as a whole, emphasizing the interrelationships between components rather than the components themselves.
It does not try to break systems down into parts in order to understand them; instead, it focuses attention on how the parts act together in networks of interactions. Systems thinking is not a discipline but rather an interdisciplinary conceptual framework used in a wide range of areas. Despite the absence of a commonly accepted definition of systems thinking, the diverse available definitions clearly yield two main complementary meanings: rising above the separate components to see the whole system, and thinking about each separate component as a part of the whole system. Systems thinking is considered an effective means of facing real-world situations. It is not a new concept; however, it is increasingly being regarded as a new way to understand and manage complex problems at both the local and the global level [16]. [17] use the analogy of an iceberg to illustrate the conceptual model known as the Four Levels of Thinking as a framework for systemic interventions. In this model, events or symptoms (those issues that are easily identifiable) represent only the visible part of the iceberg above the waterline. Most decisions and interventions currently take place at this level, because 'quick fixes' (treating the symptoms) appear to be the easiest way out, although they do not provide long-lasting solutions. At the deeper (fourth) level of thinking, which hardly ever comes to the surface, are the mental models of individuals and organisations that influence why things work the way they do. Mental models reflect the beliefs, values, and assumptions that a person holds ([17]). Moving to the third level of thinking is a critical step towards understanding how these mental models can be integrated into a systems structure that reveals how the different components are interconnected and affect one another; thus, systemic structures unravel the intricate web of relationships in complex systems. The second level of thinking is to explore and identify the patterns that become apparent when a larger set of events (or data points) is linked to create a 'history' of past behaviours or outcomes, and to quantify or qualify the relationships between the components of the system as a whole. The systems thinking paradigm and methodology embrace these four levels by moving decision-makers and stakeholders from the event level to deeper levels of thinking and by providing a systemic framework for dealing with complex problems ([17]), such as the HWC problem.

4. Conceptual Framework

Figure 1 shows the conceptual framework model for HWC based on system thinking, which involves interactions and feedbacks within the human and natural systems. We use the notion of sets to describe the model; for example, {conflict, human} has an impact on {human behaviour}, and the set {human characteristic} influences the set {human reaction, human behaviour}. The factors that influence human-wildlife conflict can be described as follows:
i. Landscape characteristics
ii. Human encroachment
iii. Human population
iv. Wildlife distribution
v. Government policies
In diagram form, these factors can be shown as in Figure 2.
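Since Figures 1 and 2 are not reproduced in this text, the following minimal sketch shows one way the factor linkages might be encoded for later analysis. The node names come from the list above, but every edge is an illustrative assumption, not a reading of the actual figures.

```python
# Illustrative encoding of the HWC conceptual framework as a directed
# influence graph. Node names follow the factor list above; the edges
# are assumptions for demonstration only.
influences = {
    "landscape characteristics": ["wildlife distribution", "conflict"],
    "human encroachment":        ["wildlife distribution", "conflict"],
    "human population":          ["human encroachment"],
    "wildlife distribution":     ["conflict"],
    "government policies":       ["human encroachment", "human behaviour"],
    "conflict":                  ["human behaviour"],
    "human behaviour":           ["conflict"],  # feedback loop
}

def downstream(node, graph, seen=None):
    """Return every factor reachable from `node`, i.e. everything it can
    influence directly or indirectly."""
    seen = set() if seen is None else seen
    for nxt in graph.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, graph, seen)
    return seen

print(downstream("human population", influences))
# e.g. {'human encroachment', 'wildlife distribution', 'conflict', 'human behaviour'}
```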
5. Multi-criteria Decision Making

Nowadays, in the realm of decision making, one quite often encounters problems that involve more than one objective. These objectives are typically independent and may conflict with one another. In terms of mathematical programming, this type of problem belongs to an approach called multi-criteria decision making (MCDM). In practice, goal programming (GP) is the most popular of all MCDM techniques ([18]). An interesting characteristic of GP is that the decision maker is allowed to bring the environmental, organizational, and managerial situation into the model via goal levels and priorities. Mathematically, the lexicographic GP can be expressed as follows:

min a = { P_1(d_1^- + d_1^+), P_2(d_2^- + d_2^+), ..., P_m(d_m^- + d_m^+) }
subject to
  f_i(x_1, ..., x_n) + d_i^- - d_i^+ = b_i,  i = 1, ..., m   (goal constraints)
  g_j(x_1, ..., x_n) ≤ c_j,                  j = 1, ..., k   (system constraints)
  x_1, ..., x_n ≥ 0;  d_i^-, d_i^+ ≥ 0,      i = 1, ..., m

The objective is to minimize the deviations from the desired goals, taken in priority order P_1 > P_2 > ... > P_m, consistent with the AHP ranking of the alternatives. It can be seen that there are m goals, k system constraints, and n decision variables, where d_i^+ is the deviational variable for overachievement of goal i and d_i^- is the deviational variable for underachievement of goal i (an illustrative sketch of such a model is given after the conclusion).

6. Conclusion

We have focused on the conflicts between humans and wildlife that arise from conservation concerns, particularly in the Leuser Landscape. The conflict, which threatens human lives and crops, necessarily has to be managed properly. We consider HWC as a system and place it within the concept of system thinking; the linkages between all the factors involved are described in the conceptual framework model. In order to mitigate the conflict optimally, we will use multi-criteria decision making.
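As a concrete illustration of the goal-programming formulation in Section 5, the sketch below sets up a small weighted goal program. All goals, coefficients, and weights are hypothetical, and a genuinely lexicographic GP would solve the priority levels sequentially rather than weighting them.

```python
# Minimal weighted goal-programming sketch (not the paper's model).
# Numbers are hypothetical, chosen only to illustrate the
# deviational-variable formulation given above.
import numpy as np
from scipy.optimize import linprog

# Decision variables: x1 = buffer-crop hectares, x2 = patrol effort (days).
# Variable order: [x1, x2, d1m, d1p, d2m, d2p]  (dm = under-, dp = over-achievement)
#
# Goal 1 (priority weight 3): 2*x1 + 1*x2 + d1m - d1p = 40  (conflict-reduction target)
# Goal 2 (priority weight 1): 1*x1 + 3*x2 + d2m - d2p = 30  (budget target)
A_eq = np.array([
    [2.0, 1.0, 1.0, -1.0, 0.0,  0.0],
    [1.0, 3.0, 0.0,  0.0, 1.0, -1.0],
])
b_eq = np.array([40.0, 30.0])

# System constraint (k = 1): x1 + x2 <= 25 (available land/labour).
A_ub = np.array([[1.0, 1.0, 0.0, 0.0, 0.0, 0.0]])
b_ub = np.array([25.0])

# Objective: minimize the weighted deviations from the goals.
c = np.array([0.0, 0.0, 3.0, 3.0, 1.0, 1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print(res.x[:2])  # chosen decision variables
print(res.fun)    # total weighted deviation (0 here: both goals are attainable)
```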
Dislodged tales: Javanese goddesses and spirits on the silver screen

Indonesian films and television shows often feature popularly though only superficially known figures from Javanese mythology, including the Goddess of the Southern Ocean Nyai Roro Kidul and her counterpart the Queen of the Snakes Nyi Blorong. In this study I examine the effects of placing the stories about these entities in 'media space' (Sen and Hill 2000:199), thus removing them from the local context that in the past infused them with its truth, and making possible their apposition to other truths and values that were previously unconnected to them, and may or may not be congenial with them.

Introduction

Indonesian films and television shows often feature popularly though only superficially known figures from Javanese mythology, including the Goddess of the Southern Ocean Nyai Roro Kidul and her counterpart the Queen of the Snakes Nyi Blorong. In this study I examine the effects of placing the stories about these entities in 'media space' (Sen and Hill 2000:199), thus removing them from the local context that in the past infused them with its truth, and making possible their apposition to other truths and values that were previously unconnected to them, and may or may not be congenial with them. Starting with a short discussion of the nature of stories and their relationship to locally perceived truths, I briefly look at some defining features of the mythological characters Nyai Roro Kidul and Nyi Blorong and how these, as well as references to other mythologies, are used in films. I then consider how, through the agency of various supporting roles, social and moral judgments are made on events in the film (and implicitly in society). The context of the viewing, for example theatre versus television, is an important factor that influences the reception of the film's message; as the films remove our characters from a local, intimate immediacy, they in the process gain a voice in a debate concerning the place of these mythological figures and the beliefs associated with them in a modern Indonesia where a variety of Islamic voices increasingly clamour to be heard.

In spite of the increasing influence of Islam in daily life in Java, mythological themes and ideas about magical powers remain important (Fish n.d.; Wessing 2006a). These ideas find expression in beliefs about spirits and supernatural forces, both local and supra-local, even though there is an ongoing discussion between various adherents of Islam and proponents of modernity about how these phenomena should be regarded (Wessing 2002). The degree to which these matters are part of people's daily reality can be seen in the popularity of ghost-reality shows on television in Java (Arps and Van Heeren 2006) and people's almost routine, if sometimes anxious, day-to-day involvement with spirit entities (Fish n.d.; Wessing 2006b). This obvious presence of spirit beliefs should not lead us to believe that there is unanimity about either their reality or, among those to whom they are real, about how to value them. Some dismiss most Javanese spirits out of hand, calling these beliefs old-fashioned, though perhaps retaining a belief in Islamically sanctioned ones like jin (genies). Others acknowledge the reality of the spirit pantheon (Wessing 2006a), but disagree about whether at least some of these entities are benign or whether they should all be classified as setan (devils), interaction with which is dangerous and perhaps idolatrous (syirik).
However, believer or not, most everyone is aware of the social fact of the phenomenon, which is part of the 'story', in the sense of Fisher (1987), Niles (1999), and Bruner (2002), through which people in Java constitute their social reality.

Narration

Fisher (1987:xi) and Niles (1999:3, 8) see humankind as tellers of tales, Homo narrans, who constitute their communities and realities through the stories they tell, participation in which differentiates the insider from the outsider (Wessing 2001; see also Bruner 2002:16). Mutual participation in these tales and the premises that underlie them causes the realities envisioned (note 3) in the tales to be realized performatively, in the sense of Austin (1975). However, as Bruner (2002:91) mentions, there are many stories, and not everyone in a community needs to agree with any one of them, with every detail of the ones they do agree with, or about the veracity of particular versions. This leads to discussion and, ideally, to attempts at compromise, though it can also lead to schisms and discontent. Usually, however, people will mute their disagreements (Beatty 1996; Wessing 2002). It is, furthermore, not necessary for every member of the community to know the tales in detail or be able to tell them with great accuracy. This is within the purview of a small number of people, the local experts to whom the community as a whole defers (Niles 1999:175).

Note 3: Envisioned in the sense of Anderson (1991): a mental construction that may, under the right circumstances, lead to a physical or social reality. Fisher (1987) and Niles (1999) use narration in the broadest sense, including myths, history, personal descriptions, and the like (see also Turner 1981:153).

Truth, then, is a matter of social agreement, and the acceptability or 'truth' of a myth or legend can vary with changes in the social context of its telling (note 4). Thus, 'truth' here is seen as a social fact that is continually judged against other truths by the participants in the discussion (Wessing 1978b; Bruner 2002:91; Hobart 1999:281). As I have pointed out elsewhere (Wessing 2001), such a judgment, and thus also the reality it defines, is usually a quite local matter, involving known people and local icons that come together to create a reality that in turn defines the people's and icons' actuality. Of course, the scope of this localness must vary with the general applicability of the tale in question. Since adherence to a particular version defines an in-group, the acknowledged teller of a socially important tale becomes a person of consequence in the community it defines: he or she is one who structures reality (Niles 1999:3, 212 note 48), and where different tales or different versions define significantly different realities, social relationships may well be altered (Niles 1999:87; Wessing 2002; Nourse 1999:175, 191), while the tales themselves can become arenas for political competition between competing experts (note 5). As is clear from Nourse's work (1999), the acceptance of versions of stories that are especially important to a community often depends on who the dominant voice among these experts is. The role of the audience is vitally important here, as acceptance of the narration depends on it. As Hobart observes, social life is essentially dialogic, and without an audience a speaker is just talking, without the social validation that brings his or her tale into the realm of truth (note 6).
Yet the audience is a heterogeneous assemblage that is conditioned by previous beliefs as well as by the relative and changeable political/power positions occupied by the narrator and the individuals making up the audience (Bruner 2002:64, 66; Spitulnik 1993:297). This is, of course, more generally true than just in the area of mythological recitation and can be applied to all statements and social actions (note 7).

Note 4: Wessing 2006b; Bruner 2002:58; Hobart 2002:380. The truth of a myth or legend need not be factual (Niles 1999:133). Rather, these are stories 'which people may infuse with their truth' (O'Flaherty 1988:35).

Note 5: When doing field research in 2004 among East Javanese teenagers on their knowledge of mythology, my interviews in one locality were often disturbed by an older self-appointed expert, who, owing to the superiority conferred by his (middle) age, would effectively silence my informants, who then deferred to him.

Note 6: Niles 1999:61; see also Fish n.d. This seems to be especially true in, among others, Balinese theatre, where interaction with the audience is a vital aspect of the production (Emigh 1996; Hobart 2002:377, 380-1; see also Tuti Indra Malaon 1988:13).

Note 7: Elsewhere (Wessing 1978b:173) I note that everyone is simultaneously actor and audience, judging the actions of others as to their acceptability and simultaneously having their own actions judged through the reactions of others.

In the view presented here, then, communities come about as a result of the interaction between the various narrations of tales presented by their members: the community talks itself together. Some of these tales carry more weight than others, depending on the position of the narrator within the group, and both the tales and the position of the narrator can change over time as they adapt to changing circumstances (Bruner 2002:58; Nourse 1999:175-91): future generations may have to reinterpret or reconstruct the 'truth'. Furthermore, although communities are based on shared 'truths', these need not be based on objective reality. Rumours about something that has happened, or fears about something that might happen, are equally effective in binding people together for shorter or longer periods of time (Spyer 2002). Although the literature about narration seems to focus primarily on regional or national communities, the above observations apply equally to local communities, especially since national or regional tales are also told or referred to locally. This tends to link wider concerns with local realities, but can also place local matters in the context of a broader discussion (Wessing 2001, 2002b). Thus, while the Javanese goddesses and other figures we will be looking at here used to be primarily matters of local concern (Wessing 2006b), since their film debuts they have progressively become caught up in a larger discussion about Indonesia, modernity, and Islam and their position vis-à-vis these mythological entities. This is increasingly apparent in television films about them, where their portrayal can vary from absolute evil, to neutral, to benign and advisory. Before going into these portrayals, however, I will first present the mythological figures in brief outline.

The characters

Nyai Roro Kidul, a figure well known in the literature on Java (note 8), has become more widely known among the people of Java since the film debuts of her and her 'daughter' Nyi Blorong. Especially since the 1982 film Nyi Blorong (Putri Nyi Roro Kidul), she and Nyi Blorong have gained broader popularity and name recognition.
Prior to this Nyi Blorong was relatively unknown. Now she is mainly recognized as the Queen's daughter and ruler of the snakes, but even with this recognition detailed knowledge about either is rare among the general public (Wessing 2006b:53). Nyai Roro Kidul, Queen of the Southern Ocean, is actually part of two separate traditions, one relevant to Java's courts and the other to the fishermen on the island's southern coast. In the first, part of a Southeast Asia-wide tradition of liaisons between rulers and naga princesses, she is a guardian spirit (dhemit) responsible for the welfare of Java, having entered a relationship with the founder of the Muslim state of Mataram, Panembahan Senopati (Wessing 1997b). While this court tradition is often retold in collections of folk tales (for example, Terada 1994:143), it has not been portrayed in any of the films about her that I have seen (note 9). In these printed stories the queen is a beautiful princess (note 10) who, in some tales owing to a foul-smelling skin disease, came to be banished from the palace. Wandering through the forests of Java, she came to the Indian (Southern) Ocean, where she heard a voice telling her to enter the waters and be cured. Having done so, she became the ruler of the spirits of Java, living in a sumptuous underwater palace (Jordaan 1984; Nyai Roro Kidul 1991:130). While there is some dispute about this matter (Schlehe 1998:144), mermaid-like, her lower body today is reputedly covered with scales, which can refer either to her marine habitat or to her association with mythical snakes (naga) (Har n.d.:36; Harnaeni Hamdan Hs. n.d.:16). Indeed, Jordaan's informants referred to the Queen as a naga, and her (flying) carriage is decorated with a naga motif (see Plate 1), a motif also found in related mythology (Jordaan 1984:108-9). Among fishermen on Java's south coast, a quite different tradition about the Queen is current. The court tradition generally goes unmentioned there, except in the Parang Tritis/Parang Kusuma area of Central Java, where annual offerings are made to the Queen by the court and where she reputedly met Panembahan Senopati. Along most of the coast, where her origins are not discussed as such, she is considered the ruler of the ocean and the controller of its wealth (note 11), as well as one who punishes disorderly behaviour. Fishermen's catches are said to depend on her goodwill, and she is reputed to demand offerings of human lives in return for her beneficence (Wessing 1997a). She is the ruler of the spirits of Java and she is easily angered, especially by those who dare to wear her favourite colour, green, to the beach. Such persons are apt to be swept away by a large wave to henceforth serve her in her underwater palace and, if they are handsome young men, to satisfy her considerable sexual appetite. Her daughter is also attracted to handsome young men and, like her 'mother' with Senopati, may dally with them under water for several days, returning them to the shore afterwards, now able to effect cures (note 12).

Note 9: There have been plans for a television special about the meeting between Nyai Roro Kidul and the ruler of Mataram (TVRI Yogyakarta 1991:5).

Note 10: In the Panembahan Senopati tale, the queen is usually portrayed as both beautiful and sexually attractive. Senopati is said to have spent three days and nights making love with her (Olthof 1987:80-2). Nyai Roro Kidul is usually portrayed as a beautiful young woman. It is said, though, that she changes with the phases of the moon, appearing young and beautiful when it waxes but old and ugly when it wanes (Sri Sultan 1988:156).
This may be the basis for the description of her in the Babad Demak (History of Demak) (Sabariyanto 1981:44), where she is said to have an enormous body with thick hair and tusk-like teeth, three hand-spans long. Her breasts are enormous and she snores loudly in her sleep. When Senopati saw her real form, the Babad continues, 'he was sore afraid'.

Note 11: She is also the protectress of the harvesters of swallows' nests on the steep southern cliffs (Air liur 1982:75-6; Knappert 1977:72-4).

Note 12: Bamar Eska n.d.:58. Subiyanto Hr. (n.d.:50) writes that the Queen has three daughters and two sons, all unnamed, whose fathers are unknown. All five engage in free love with both men and women.

Children's books are generally about the Queen's relationship with fishing communities (note 13). The films about her or Nyi Blorong refer either to coastal communities or are set in urban areas. Most of my informants in East Java were unaware of the court connection, unlike those living in Yogyakarta and in Parang Tritis and Parang Kusuma, all in Central Java, who, being near the court and its ritual centres, were aware of it. As Headley (2004:138) points out, probably referring to the Central Javanese court tradition, her 'cult hardly extends beyond the palaces and a number of villages on the southern coast'. The court and coastal traditions, then, tend to address different interests: the one is concerned with legitimating Senopati's rule and the founding of Mataram, while the other has to do with the risks of making a living from the dangerous waters of the Indian Ocean. In children's books and films depicting the coastal tradition, Nyai Roro Kidul is often dressed in some archetypal 'court dress', befitting her status as queen.

Note 13: Harnaeni HHs 1985a, 1985b; Rully n.d. The only exception is a rather deviant one by Harnaeni Hamdan Hs. (n.d.).

Nyi Blorong

Until the actress Suzzanna (note 14) portrayed her in 1982, Nyi Blorong was a relatively obscure figure. She is said to be a beautiful woman (Schlehe 1998:240) who, like Nyai Roro Kidul, can appear as a woman with a fathom-long snake's tail covered with jewels or golden scales (see Plate 2) (note 15). She is a money goddess who lives in still waters or swamps, though according to Meijboom-Italiaander (1924:235) she lives in a palace in the Indian Ocean. If properly appeased, she is said to bring one untold riches, in return for which the supplicant, generally a man, must be prepared to copulate with her every thirty-five days at the Jum'at-Kliwon conjunction of the Javanese seven- and five-day weeks. This wealth, however, turns out to be as ephemeral as the sexual satisfaction, and after seven years the beneficiary must pay by being physically made part of her palace (note 16). Since the 1982 film, Nyi Blorong is popularly said to be Nyai Roro

Note 14: For the actress Suzzanna, see http://mitglied.lycos.de/uzumaki/specials/suzzanna.htm (accessed 21-7-2006).

Note 15: Kreemer 1879:6, 9 note 4. Elsewhere this figure is the male Kiai Blorong or Belorong, who has a shark's tail as his lower body (Knappert 1977:75).

Note 16: Van Hien 1912:145; see the skulls in the illustration in Kreemer 1879: facing p. 1. In the case of the Sundanese ipri (Wessing 1988:54; Rosidi 1977:95-105), whom Nyi Blorong closely resembles, this fate can be postponed if the supplicant brings her other human sacrifices. Wormser (1920) attributes the suffering-for-wealth aspect to Nyai Roro Kidul.
Kidul's daughter but, as Schlehe (1998:143) points out, in older mythologies she is said to be the offspring of Raja Angin-angin, the ruler of the spirit world who preceded Nyai Roro Kidul. In summary, in both the court and the coastal traditions Nyai Roro Kidul is associated with underworld elements like fish and snakes (naga). In the court tradition she originated as a princess who entered the ocean to be cured of a skin disease, while in the coastal tradition her origins are unknown. In both she is the Queen of an underwater realm who enjoys an active sex life: in the court tradition with Panembahan Senopati and his heirs, whose liaisons with her legitimate their power, and in the coastal tradition just to satisfy her lust and to add to her palace staff in exchange for good catches. Nyi Blorong shares with the Queen the association with snakes and sexual license, though Nyi Blorong exchanges it for personal wealth rather than for political power or general welfare, and she makes her supplicants physically part of her palace rather than using them as servants or lovers. All these elements are exploited in the films, to which I now turn.

Screen goddesses

The films we will be looking at can, with one possible exception, be placed in two categories: the ones made for the theatre, in which Nyai Roro Kidul or Nyi Blorong play essential and active roles in the story, and those made for television, in which they are relatively passive background figures who occasionally exercise a deus ex machina function. The possible exception is the 2003 film Anugerah Nyi Roro Kidul (note 17), which, like the television films, is more a family drama featuring the Queen than a film in which she plays an active role. In the films made for television, Nyai Roro Kidul and Nyi Blorong are usually surrounded by attendants and dressed in rather stereotyped zaman dahulu ('olden times') (note 18) costume, which is popularly associated with the times of the courts and Javanese kingdoms. Their costume therefore invokes the past and royalty (see Plate 3). In theatre-release films their costume tends to vary between 'court' dress, ordinary street clothes where relevant (for example, in Bangunnya Nyai Roro Kidul), and a green gown, a reference to a painting by Basuki Abdullah (Schlehe 1998: plate 24), which at one point in Pembalasan Ratu Laut Selatan is an integral part of the film. The association with snakes is made abundantly clear in most of the films. In Nyi Blorong (Putri Nyi Roro Kidul) Nyi Blorong hatches from a naga egg, and snakes writhe Medusa-like on her head in Petualangan Cinta Nyi Blorong (see Plate 4). In both films she travels in a flying naga carriage, similar to the one used by the Queen in Plate 1. A similar carriage also appears in the made-for-television Nyai Roro Kidul that I viewed in 2004. Although neither has much of a role in the episodes made for television, where they act primarily as sage observers who interestedly keep track of the human protagonists (Misteri Dua Alam: Mustika Nyi Roro Kidul, Indosiar 17 April 2006), when they do appear, snakes and snake symbolism are abundantly present. Nyai Roro Kidul rides in a naga carriage, while in Mustika Nyi Roro Kidul she is said to be an incarnation of a cobra (ular sendok). A very large snake emerges from her hand when she engages in silat combat (Indosiar 17 April 2006), which in Misteri Dua Alam: Anak Titipan Nyi Blorong (Indosiar 27 March 2006) also happens to Nyi Blorong's son, the result of one of her affairs.
In the myths as recounted above, the sexual aspect of the two figures plays a relatively important, though not dominant, part. The Queen trysts with Panembahan Senopati, in the process giving him power over his future realm (Wessing 1997b:330-2), and in the coastal tradition she desires young men as lovers. Nyi Blorong demands sexual satisfaction from men in return for wealth. In the films this aspect is exploited to the degree that Pembalasan Ratu Laut Selatan was temporarily forbidden by the censors, which only added to its allure (note 19). Such publicity draws attention to the films, making them especially attractive to the teenage audiences that, according to Heider (1991:20-1), come to see these films (note 20). Finally, there is the Queen's insistence that on the beach she is the only one allowed to wear green, a monopoly that she occasionally shares with Nyi Blorong, who in the television film Nyi Blorong: Titisan Ratu Nagandini (1 April 2006) appears as a green snake. So jealous is she of this prerogative that in Kutukan Nyai Roro Kidul she kills a painter who not only wears a green T-shirt to the beach, but also has the temerity to use his girlfriend as a model for the Queen, wearing a green dress in the painting. Even when in ordinary street clothes, as in Pembalasan Ratu Laut Selatan, the Queen is recognizable by an obvious piece of green clothing. When shooting Kisah cinta Nyi Blorong, the star Joice Erna established spiritual contact with the Queen to ask permission both to portray her daughter and to wear green (note 21).
This idea of spiking a spirit to render it impotent derives from beliefs about the sundel bolong or kuntilanak, 23 which was featured in the 2004 Malaysian film Kuntilanak; Harum sundal malam and is known throughout the Malay world and into the Philippines. 24 It is the ghost of a woman who died in childbirth and is 21 Pokok & tokoh 1989. Joice Erna is not alone in asking for permission to portray the Queen or her daughter. Offerings are made and permission is asked in the Queen's room (#308) in the Pelabuhan Ratu Hotel (Uchrowi et al. 1988). The actress Suzzanna, moreover, is reputed to practise Javanese mysticism daily, making her the best candidate to portray Nyi Blorong (Film-film horor 2003:70). 22 Pokok & tokoh 1989. Fish's informants were not concerned whether the story as presented in the media was accurate. What mattered to them was the story as such (Fish n.d.:2). A concern with such authenticity or originality may well be an academic burden. 23 Also sundal bolong and kuntianak or puntianak. In Flores it is known as logo lia (Forth 1998:88). 24 Skeat 1972:329;Endicott 1970:60, 62, 81;Sell 1955:70-2;Wilken 1912:319. especially dangerous to women giving birth. She has disordered hair, a mutilated face, and a hole (bolong) in her back (Wessing 1978a:105;Eringa 1984:429;Prawirasuganda 1964:14). Metal objects, knives, or swords and lances are used as protection against her malign influence (Sell 1955:121), while in Flores a spike is inserted into her head to render her harmless (Forth 1998:88). Another intertextual reference, which reinforces the connection with both sex and snakes, is to the story of Ken Dedes, the legendary founding queen of the royal house of Singhasari. It was said of Ken Dedes that she had a luminous or flaming vulva and that the man who could possess her would become a universal ruler (Brandes 1920:59). Sometimes she is said to be Nyai Roro Kidul (Wachtel 1977:19-20;Rully NH n.d.). This idea of a woman's vulva as the locus of power and danger 25 became the prototype for vulvas that house a poisonous snake, endangering prospective suitors: a poisonous snake inhabits the woman's vulva and kills her husband on their wedding night, and only the man who, for example through meditation, manages to control his sexual urges can possess her in the end. In one tale the snake emerges and, caught by the famous Muslim saint Sunan Kali Jaga, turns into a keris, a ritual dagger (Prawirasuganda 1964:85-6). In Kutukan Nyai Roro Kidul the role of the Sunan is taken by an ustadz, a religious teacher, who marries a woman whose vulva the Queen has infested with a snake and several of whose husbands perished on their wedding night. He then refrains from consummating his marriage for forty days. Impatient, the snake finally appears, is caught, and the woman is healed. In Pembalasan Ratu Laut Selatan the Queen, here cast as an evil character, kills men by copulating with them, in the process of which the snake residing in her vulva bites off their penises. One man catches the snake, however, and it turns into a keris, which eventually kills the Queen herself. In Ajian Ratu Laut Selatan, on the other hand, the snake placed within the female protagonist by the Queen is a weapon with which the woman defeats the truly evil village head and his henchmen. When it shows itself, the snake appears from above her belt rather than during sexual activity. 
In Kisah cinta Nyi Blorong the risqué location of the snake is only hinted at when it appears from underneath Nyi Blorong's sarong, although it also appears from her mouth. A last external reference is to shape shifting, because both Nyai Roro Kidul and Nyi Blorong can switch between being beautiful women and being snakes. While under certain socially controlled conditions shape shifting may be valued, for example when done by a curer or a shaman, it is generally associated with socially uncontrolled, dangerous magic and evil intentions (Wessing 1986:115-6) and is frowned upon, especially in more strict Muslim circles that tend to disapprove of things mystical. 26 Shape shifting, with its inherent uncertainty as to which one is dealing with, thus lends an extra aura of danger and dubiousness to general perception of both the Queen and Nyi Blorong. Cops, clowns, and preachers Not part of the original mythology, but usually present in the film versions, are elements like policemen, clowns, and religious leaders. Even though, as Sen and Hill (2000:142) write, Indonesian films are not supposed to put the police in a bad light, an informant pointed out that the tendency in these films is to have the cops arrive, often in excessive numbers (Bangunnya Nyi Roro Kidul), when the action is over and the criminals have been neutralized. An exception is Santet 2: Wanita harimau, in which the police manage to round up a gang of smugglers, but even there the worst offender is dealt with in the end by the film's female protagonist. Village government, too, is often portrayed as weak and unable to oppose the criminals, which may reflect reality, as an informant pointed out. 'In real life', he said, 'the village head is often afraid of preman (thugs), as opposing them could get one killed' (Rozaki 2004). If we can see this as a 'comment on contemporary Indonesia' (Sen 1988:1), the clowning in these films of the late H. Bokir 27 and his sidekick Dorman are statements about good intentions and human frailty. Bokir's position reflects that of the clowns in shadow-puppet presentations (wayang) in that he appeals to 'subordinate elements […] servants, children, women'. 28 Bokir generally plays out a sub-plot in the film, portraying for instance a phony sorcerer without real powers, 29 a bumbling guard in a house of prostitution, or a village watchman (hansip) who is as afraid as anyone of the forces he is supposed to guard against. 30 He has a roving eye for the ladies, 31 a healthy fear of ghosts, 32 easily jumps to the wrong conclusion, 33 but comments on social justice and quits his job as a guard when a criminal becomes village head. and Bayi Misteri, in all of which shape shifting is depicted as anti-social and evil. In the television series Legenda Ular Putih, to the contrary, the shape-shifting snake-woman opposes an evil sorcerer, and together with her human sweetheart stands for righteousness. 27 http://www.kompas.com/kompas-cetak/0210/19/dikbud/seni09.htm (accessed 20-7-2006). 28 Anderson 1990:167 clowns in wayang presentations sometimes do (Sears 1996:272), he delivers government messages about people's civic duties and conservation. 35 Bokir represents the common man, the anti-hero, who is as much at sea when faced with the real world as any member of the audience. He is part of the film but, like the audience, can only stand by and watch the main drama unfold. 
36 For all his foibles and bumbling, however, Bokir stands for order in the face of the disorder brought about within the film's tale, a position also taken by another common character, the religious teacher (ustadz) or preacher (kiai), with whom we arrive at a major arena of discussion in which these films take part. If Bokir comments on the story line and on the position of the common man in Indonesia today, these spiritual figures are part of a nation-wide debate about the nature and future of Indonesian society. This debate is carried on between the side of Islamic modernization and those who, actively or passively, continue to participate in what the modernizers of religion at best label as superstition and at worst as syirik (idolatry). As noted earlier, belief in all kinds of supernatural beings and their powers for good or ill is still very common. So too is the fear of sorcery, which many believe to be still widely practised, 37 a belief that is reflected in the films as well. Tempo observes that the religious figure is imposed on these films by the Board of Film Censorship (Badan Sensor Film), an 'ultra-nationalist gatekeeper of Indonesian culture' (Sen and Hill 2000:138), according to which, though Heider (1994:167) disagrees, these kinds of films are supposed to have a religious mission (Film-film horor 2003:72). However, although this policy fits in with the greater freedom to express religious points of view in the post-Soeharto era, it cannot be said that the films as a whole have shown a greater religious emphasis since then. Indeed, the depiction of our two mythological figures in the films is remarkably balanced. In fact, a teenage informant complained that portrayals of the Queen tend to be too tame, making the films less attractive to his age group, who prefer films that are serem (terrifying). In Ajian Ratu Laut Kidul (1991) and Bangunnya Nyi Roro Kidul (1985), both starring Suzzanna and made during Soeharto's New Order regime, the Queen is depicted as supporting the forces of order and foiling nasty schemes, though in the latter film she does engage in some (implied) explosive underwater sex with the leading man. As 35 Santet 2: Wanita Harimau 36 Anderson 1990:167;Foley 1992:27. As such he resembles the penasar cenikan of Balinese masked theatre, described by Emigh (1996:134), in that in playing 'across that gap' between the film and the audience, he makes 'the telling of the story […] more recognizably human'. If in Balinese theatre the gap to be bridged is between the past and the present, in these films it is one created by the fact that, unlike in live theatre, the audience is not physically present at the performance, but is a step removed into the anonymity of a movie theatre. On the challenges faced by Balinese performers when their audiences are removed, see Hobart 2002. 37 Wessing 1996;Fish n.d.;Dituduh nyantet 2006;Isu santet 2006;Korban 2006;Sakit perut 2006;Lagi 2006aLagi , 2006b observed earlier, in the made-for-television films Nyai Roro Kidul and Mustika Nyi Roro Kidul she mainly acts as a deus ex machina who helps foil the forces of disorder. In only two films, Kutukan Nyai Roro Kidul (1979) and Pembalasan Ratu Laut Selatan (1999), neither of which star Suzzanna, is she portrayed as a purely evil creature, a spirit out only for her own interests. In the latter film this is because she is trying to regain an object that was stolen from her. 
Even though, as Schlehe (1998:240) notes, Nyi Blorong is 'tempting, awe inspiring, violent, bloodthirsty and […] in the final analysis […] just', she is portrayed negatively in only one film, the post-Soeharto television drama Nyi Blorong; Titisan Ratu Nagandini, in which she eats human flesh and kills newborn babies to counter a rival. However, in another post-Soeharto television film, Misteri dua alam: Anak titipan Nyi Blorong, she acts as the hero's fairy godmother (she is actually his real mother) and supports him when he battles the forces of crime and deception. In most films about her she lectures people on the dire consequences of their desire for instant wealth. In Petualangan cinta Nyi Blorong she chides the fake sorcerer Bokir for his phony act, and his gullible clients for believing in such nonsense, while in Kisah cinta Nyi Blorong she advises the petitioner not to ask for wealth. The moral message, Schlehe (1998:240) observes, is that those who go to the spirits for wealth, rather than earning it through their own labours, will receive their just rewards. This balanced portrayal of our two personages notwithstanding, in many films magic and sorcery are very negatively valued. 38 A moralizing tone is also adopted by the films' religious figures, even though they do not always only appear at the end of the film to save the day, as Film-film horor (2003:71) claims. Though this is often true, in Kutukan Nyai Roro Kidul the Islamic teacher arrives rather early in the film and immediately makes his mark: using only one hand, he prevents a woman from being molested -in his other hand he holds his bag. He then prevents the leading lady from being lynched, teaches the village to pray and leave sin behind, marries the leading lady, releases her from the snake in her vulva, and goes off with her, probably to spread the word of God somewhere else. Similarly, in Santet; Ilmu pelebur nyawa, the Islamic teacher calls on God and subdues the spirit that has been upsetting order in the village. Mentioning government slogans like Pancasila 39 and the law, and admonishing people not to take the law into their own hands (jangan main hakim sendiri), he takes the hand of the female lead and walks off with her. Even those in league with the forces of disorder can be redeemed, as long as they bertobat (repent) (Nyi Blorong: Titisan Ratu Nagandini): in the presence of a religious leader, sorcerers and their ilk are powerless (Gibson 2000:42). The religious leaders in the films, then, tend to preach: as one informant said, 'Dakwa masuk TV' (Islamic missionizing has come to television). Interestingly, however, none of my informants mentioned that belief in the Queen or Nyi Blorong was forbidden by religion. Indeed, on the television show Pemburu hantu (Ghost hunters), the Islamic authority who is one of the show hosts 'confirms the existence of supernatural beings and explains that this is also acknowledged in the Quran'. 40 To summarize, the films under discussion here begin by referring to notable and sometimes exciting elements of the myths as these are known from the literature and folklore, like the colour green, snakes, and a proclivity for promiscuous sexual activity, and in this way they gain a measure of authenticity. 41 As Bruner (2002:94) writes, the possible worlds created by narrative fiction are extrapolated from the known world. Of course, this is not something idiosyncratically Indonesian, but rather inherent in how stories work. 
Moving on from there, the filmmaker is free to add further elements, for example by referring to other mythologies, especially where this adds spice to the tale he is telling. Such remixing, while permanent once the film is made, can vary from film to film since, of course, it is not a uniform process but is at least partially determined by the filmmaker's purpose in telling this particular tale. The filmmaker is relatively free here because each telling of the tale is in principle a new one and thus a reconstruction, constrained by the context of its presentation. This then allows new elements to be incorporated into the story and new interpretations to be given to old story lines. 42 The addition of comic-relief figures like Bokir compensates for the loss of anchorage in a particular locale, because through these figures the film's locale becomes any village, its problems common to many. Thus the stories have a greater freedom to develop, unrestricted by particular local parameters, while at the same time, as discussions with viewers showed, being subject to testing against the local truths held by the individual viewers (Fish n.d.). The addition of the religious leader places the tale in a wider, national perspective, addressing supra-local questions about the nature of modernity, the place and reality of mythological figures, and the parameters of being an Indonesian Muslim in the present time. The variety of answers given by the films to these questions shows that this is an ongoing debate in Indonesia and that the question of how to define an 'Indonesian Muslim' is far from resolved. 43 Many voices are trying to be heard, each claiming to tell the authentic or preferable version of the national tale and each heard by an audience with varying perceptions of what is being said, even if the speaker, be he preacher, politician or filmmaker, has a particular ideal audience in mind (Hobart 1999:266; Wessing 1978b). In all this, television plays a peculiar role in that, while on the one hand it is part of 'national media space' (Sen and Hill 2000:199) that plays an important role in creating 'national imaginaries' (Ginsburg, Abu-Lughod and Larkin 2000:11; Anderson 1991), its messages, and increasingly those of films on VCD, 44 upon entering people's living rooms, become, as Fish (n.d.) has shown, part of local reality with which they then start interacting. This is, of course, far truer of television than of films shown in local theatres, whose audiences are assemblies of relative strangers who do not necessarily all have the same reference points (Hatley 1988:20). 45 Yet even there, local perceptions are not absent, as is illustrated by moviegoers' reluctance to wear green to the theatre where a Nyi Blorong film was playing.

40 Arps and Van Heeren 2006. In any case, the Queen is said to have embraced the Muslim faith (Woodward 1989:261, note 27; Ricklefs 1974:203).
41 The emphasis on sex in advertisements for the films may also increase their commercial potential.
42 Filmmakers are not alone in doing this, of course. In his 1990 novel Perang the celebrated author Putu Wijaya integrates elements like modern weapons, mobile phones, and computers into the Bharata Yuda War. The book's back cover calls it a contemporary wayang story.
Therefore, rather than considering television and VCD discs as intrusive and external, it is perhaps better to see them as yet another local (or localized) source of information and authority on supra-local matters, while in the supernatural area they are part of a continuum between local spirits and those made familiar by the media. Indeed, as Randal Baier (personal communication 2005) observed, electronic mediation seems to give tales greater authority or truth, just as publishing a version of a tale tends to raise its status to that of the 'correct' version (Sweeney 1980:7; cf. Goody 1996:670), which tends to disempower traditional storytellers (Kitley 1999:138), leading to a decline in storytelling at home (AW 1991). Viewing television or VCDs is generally done in the context of a group whose religious, political and other preferences are known, if not always agreed with (Ruby 2000:188). Even here, the presence of a respected narrator can influence how messages are received, especially in situations where television or films are watched communally. This can be a way for neighbours to socialize (Hamilton 2002:158), but it can also be a forum in which an opinion leader can elucidate his or her views. 46 In this way 'media space' interdigitates with 'local space'. As television and VCD viewing become more common in individual homes, the immediate influence of local opinion leaders gives way to that of the most respected member of the household and, of course, of the views that are expressed in the film or programme.

43 See for instance the issues addressed by former president Abdurrahman Wahid (Jangan samakan 2006:12) and by the Muslim scholar M. Dawam Raharjo (Novriantoni 2006:10). See also the debate on the proposed pornography laws and the reaction by for example the Front Pembela Islam (Front for the Defense of Islam) to the stand taken by the popular singer/dancer Inul (http://www.indonesiamatters.com/297/human-trash-dangdut-singers/, last accessed 24-7-2006). At issue here is the debate whether Islam in Indonesia should be considered as a cultural aspect of life (for instance the Indonesianization of Islam) or whether it should become an institutional part of society (Wahid 2006; Kolaborasi Islam 2007). By extension this also includes the place of 'folk Islam' and the place of entities like the Queen and Nyi Blorong in the belief system. Although it has recently intensified in post-Soeharto Indonesia, this debate is actually not new and has been carried on since at least colonial times (Baso 2006:400-3).
44 In 1990 Jember, East Java, had a three-theatre Cineplex and a rather run-down movie house showing 'porno' (occasionally slightly risqué) films. In 2005, however, the Cineplex had succumbed to the combined pressures of the monetary crisis and the advent of VCD and DVD modes of viewing (Tiket supermurah 2006). The 'porno' house continues to show its usual fare.
45 I am reminded of a small group of anthropologists from the University of Illinois (Urbana) that, stopping in London after fieldwork in Africa, decided to go see Stanley Kubrick's then recently released film 2001. At the point in the film where the computer HAL states that he was assembled in 'Urbana, Illinois' the anthropologists as a group burst out laughing, to the amazement of the rest of the London audience that, of course, did not have this place as an immediate reference point.
These, in turn, reflect the interaction of numerous parameters, including those of the film and television industries and the restrictions to which these are subject, and, of course, local sensibilities, which they sometimes get wrong. 47 Through the media, therefore, local beliefs and forms of expression become subject to influences from outside agencies, including the state, the Broadcasting Commission, 48 and the religious establishment. The way stories are presented places the viewer in what Steele (2005:148) calls an 'interpretive framework' that shapes the meaning of the tale and attempts to lead the viewer to certain preferred conclusions. Given the nature of television viewing, during which people frequently do all kinds of other things, from answering the telephone to doing homework, these attempts are often less than successful as 'viewers simply attribute to a picture what they already know [...] regardless of what the producer intended' (Ruby 2000:189, 184). Personal predilections or predispositions can, of course, further influence this process, either positively or negatively.

Conclusion

As can be concluded from our discussion about the role of narrative, 'talked together' communities are emergent, contingent, contested, ideological and political, and reflect a variety of competing interests (Niles 1999:143). In daily life, this ongoing local discussion is the reference context for understanding both the realities and the mythologies that people encounter. This is the context in which matters like the meaning of the Queen, but also religious and thus social and political matters, are clarified and understood (Sen and Hill 2000:124). Here too, the Queen quickly becomes linked to local spirits (Wessing 2006b:52-6). Myths and other stories reflect a specific kind of truth (Fischer 1987:6), being models of the world (Bruner 2002:25) and also models for our continued construction of it. Given the importance of these questions, we could therefore be said to be speaking here of a contest for mythical space, since 'media space' presents us with a new niche in which new interpretations of both myths and religion are possible. Even sacred texts can become 'arenas for contestation' (Abshar-Abdallah 2003:133), in which attributed meaning might depend more on the nature of the audience than on the intentions of the communicator (Caldarola 1990:3-4, cited in Ruby 2000; Wessing 1978b). If in the past the interpretation of these mythologies was tied to specific locales (Wessing 2006b:49-50), their move into 'media space' has loosened them from these old ties which, as happens to spirits and gods generally, has created for them the possibility of a wider appeal but which also subjects them, again like religion generally, to the possibility of a much greater range of interpretations or manipulation. As Sean Williams reminded me, 49 the visual aspect of film plays an important role here, its impact being quite different from an oral rendering of the tale. She suggests that live performance necessarily involves considerable audience interaction and makes possible immediate feedback (which is also shown by Hobart's data (2002:380)) while linking the tale with local spirit entities.

46 These views may be publicly accepted, but away from the opinion leader they may later be rejected (Ruby 2000:188-9).
47 For example the 1991 airing in Surabaya of the Canadian cooking programme Wok with Yan, featuring the preparation of pork at the start of the fasting month of Ramadan (Kitley 1992:100).
The mediated versions do the reverse and remove the tales from the local and bring them into a national arena. Thus, what the tale loses in local intimacy it gains in scope and impact. 50 For the Indonesian state, 51 one of films' preoccupations is the maintenance of social order: Sen and Hill (2000:43, 145) speak of their 'moralizing tone' (see also Heider 1994:170). Thus the Sundel bolong opposes the gang that victimized her in life (as does the kuntilanak in Kuntilanak; Harum sundal malam), and the religious leaders in many of the films join in the fight against disruptive forces like were-creatures and criminals, the latter depicted as crude (kasar) and emotional in contrast with the imperturbable calm exuded by the religious leader. In this way the films attempt to socialize their teenage viewers into becoming proper members of the state, the first plank of whose national philosophy (Pancasila) is a belief in one God. 'The guru agama [religious teacher] figure teaches us not to be afraid of devils', a male informant in his early twenties from a religiously rather relaxed household said. 'It shows that reciting verses from the Quran can help overcome them.' The same person, however, would walk away or change the channel when the religious messages were laid on too thick for his taste. As Heider (1991:109) points out, 'films must make money by telling entertaining stories', and when they cease to entertain, people will turn away. My informant's younger siblings, however, an eleven-year-old girl and a fifteen-year-old boy, tended to stay glued to the screen, the boy even reciting religious phrases along with the television preachers during the call to prayer. 52 While the mythological figures we have been looking at used to be primarily matters of local concern, since their film debuts they have progressively become caught up in a larger discussion about Islam and its position vis-à-vis these beliefs, a discussion in which their portrayal varies from absolute evil, to neutral, to benign and advisory. 53 Being mythological figures, they have little if any say in these portrayals, though those asking their permission to do the films do not, perhaps, see them as totally powerless. But, with or without permission, the film's director is relatively free to develop the narrative as he and the interests behind the film wish. As we have seen, these portrayals have little to do with the role that these figures played locally, such as ensuring that fishermen get good catches or legitimizing the state. The only one relatively true to her original form is Nyi Blorong, but she too is made to denigrate the petitioners who come to her and, in some of the television films, has even become a benign figure. 54 Given their much wider audience with its varied individual local spirit concerns, the Queen and Nyi Blorong continue to be mediated by local spirit icons, although these are perhaps ones that they might in the past not have had to face. They are also mediated by the influence of local and supra-local (for instance televised) religious figures. Both mediations make a single valuation of them difficult.

49 Email, 21-8-2006.
50 Yet the intimacy of a local telling should not be underestimated, as anyone who has heard such tales told by flickering lamplight or at a campfire can testify.
51 Sen and Hill write about Soeharto's regime, but the concern has not changed since then, even though the newspapers these days seem to reflect the existence of a higher degree of disorder than they did under Soeharto.
As Heider (1991:10, 1994) writes, films are helping shape Indonesian national culture, which includes rewriting those aspects of past belief that no longer fit in with Indonesia's new social realities (Niles 1999:83). Yet there are many aspects of that culture that are still matters for contention, including the position of these myths and religion in the nation's system of values, as well as the system of values itself.
2019-06-16T13:15:28.297Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "b26fedd2e707aecec48233612701cc11e7746208", "oa_license": "CCBYNC", "oa_url": "https://brill.com/downloadpdf/journals/bki/163/4/article-p529_4.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "60d4a81924156a06be186332189176172d16e986", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
18834273
pes2o/s2orc
v3-fos-license
Emerging aspects of assessing lead poisoning in childhood

This review covers the epidemiology of lead poisoning in children on a global scale. Newer sources of lead poisoning are identified. The methods that are used to assess a population of children exposed to lead are discussed, together with the ways of undertaking an exposure risk assessment; this includes assessing the time course and identifying sources of lead exposure. Human assessment measures for lead toxicity, such as blood lead concentrations, deciduous tooth lead, and zinc protoporphyrin estimations, are evaluated. The role of isotopic fingerprinting techniques for identifying environmental sources of exposure is discussed. Among emerging data on the cognitive and behavioral effects of lead on children, the review considers the growing evidence of neurocognitive dysfunction with blood lead concentrations even below 10 µg/dl. The challenge of assessing and explaining the risk that applies to an individual as opposed to a population is discussed. Intervention strategies to mitigate risk from lead are examined together with the limited role for and limitations of chelation therapy for lead. Lessons learned from managing a population lead-dust exposure event in Esperance, Western Australia in 2007 are discussed throughout the review.

Introduction

Lead exposure remains a major environmental issue around the world, as the poverty of measures to deal effectively with the problem in both developing and developed countries has led to significant ongoing exposure. The annual cost of the health effects of lead exposure in the United States alone was estimated at US$43.5 billion in 1997, much higher than that associated with any other environmental toxin. 1 Few doubt that lead exposure has significant health consequences at levels below those considered medically acceptable decades ago, but there is still debate over what levels of lead exposure in the modern world, if any, can be considered of minimal harm. This review focuses on lead poisoning in children because of the high prevalence of lead in the environment, and because the impact of lead exposure on children's neurocognitive development, in particular, is substantial. 1 Emerging data suggest a health impact on neurocognitive function at much lower blood concentrations of lead than thought earlier. [2][3][4][5] Children are particularly sensitive and susceptible to lead toxicity and as such are a subpopulation 6 at which prevention strategies need to be targeted. Childhood lead poisoning remains a major public health problem for certain groups of children, specifically African-American children in the USA, 1,7 children living in areas of low socioeconomic status, 8 children living in rural mining communities, and children in developing countries such as India and the Philippines. 9,10 This paper also presents a case study of the assessment of children exposed to lead in Esperance, Western Australia, to illustrate the principles of managing such events from a public health perspective, as well as the scientific rationale for this approach.

Ethical background

To illustrate the issues of children's lead exposure and management, a case example of environmental exposure in Esperance, Western Australia, is frequently referred to in this review. All human data in this paper were obtained for clinical or public health prevention purposes, and thus a formal research ethics application was not submitted or required to undertake this review.
However, the blood sampling conformed to Western Australian Health department guidelines and ethical principles. This paper uses publicly available data on the Esperance lead-contamination incident.

Lessons from managing the lead exposure incident in Esperance

Esperance is a town on the Southern Coast of Western Australia, 721 km from Perth, which enjoys a reputation of a 'pristine environment,' with 'snow white beaches,' clear 'aqua blue waters,' and an 'abundance of wildlife.' 11 In December 2006, Esperance community members reported that birds were 'actually falling from the sky,' and by the end of March 2007 local government agencies had estimated that the total number of bird deaths in the area was 4500. 11 The detection of elevated lead concentrations in the livers of these birds prompted an environmental investigation that identified significant lead-dust contamination in the town. The point source was found to be the Port of Esperance, which had begun storing and shipping lead carbonate (ore) in 2005. Dust from the port had contaminated roofs; therefore, drinking water from rainwater tanks was a significant source of lead exposure for the population. This newer source contrasts in duration of exposure with the lead mining and smelters found in areas of Australia such as Broken Hill and Port Pirie. 12,13 The difference is seen in the magnitude of clinical effects but also in terms of appropriate remediation strategies and health surveillance. Sadly, much disproportionate fear has been invoked in the population of Esperance by stories of lead-smelting areas in Australia. Concern was expressed by the local population about the potential for lead poisoning among children, and those with blood lead concentrations ≥5 µg/dl entered a blood test surveillance programme, where testing occurred at three-month intervals. The aim of such a programme was to evaluate whether ongoing exposure and lead accumulation were occurring after environmental remediation had taken place. The level of 5 µg/dl was chosen to give a margin of safety (being significantly lower than the intervention level of 10 µg/dl set by the US Centers for Disease Control and Prevention, CDC, discussed below), and to ensure a sufficient sample size across the town for meaningful data collection. Isotopic lead estimation (described below) was also used, and data collected with this technique revealed that several children had significant blood lead levels associated with exposures from sources other than the primary point source (i.e., the Port). Active measures were taken to control the population's lead exposure through dust. These measures included advice against drinking water from rainwater tanks (contaminated by dust), wet mopping of dust, cleaning with other methods, and hand washing. As a consequence, average blood lead concentrations in children five years of age or younger fell rapidly at three and six months after these measures were instituted. 14 The role of the toxicologist in risk communication is important, and conventional concepts in risk communication, such as consistency of messaging and explaining relative risks, were applied in Esperance. [15][16][17][18][19] Risk messages about the absence of serious toxicity risk were communicated while urging the residents not to be complacent. This message was both simple and wholly reliable. It was important that the toxicologist involved in the response to the Esperance incident had no connection with government agencies.
The role was to keep an active communications channel with concerned members of the public and professionals alike, and to problem-solve where communications had slipped. To ensure that the message had reached the entire community, toxicologists and public health physicians personally saw parents, general practitioners, pediatricians, and community leaders, such as those of aboriginal communities and activist groups. 11 It was important to be prepared to explain the detailed science and medicine behind decisions to those who sought to understand them; for example, explaining the risks to an individual versus the whole population for lead's effect on IQ, and explaining some of the confounding factors considered in interpreting existing studies. It was also helpful to give people choice in how they responded to the situation, for example, by offering a variety of intervention strategies (see below) to reduce ongoing exposure. These included hand washing, availability of high-efficiency particulate air (HEPA) vacuums, personal house cleaning, and professional house cleaning. 11 It was important to acknowledge the fear provoked by the event. Key to managing this incident was to understand that the community had chosen to live in Esperance because it was a pristine environment. It was also important to understand concerns about the impact of pollution on that image and the prospects of loss of income from tourism.

Potential sources of lead exposure

Longstanding and ongoing sources of external exposure to environmental lead include lead-smelter areas, 12 melted lead batteries, 20 lead in drinking water, 21,22 the glazing industry, 23 and lead paint. 22 Moreover, exposure can occur in work environments 24 and through the transfer of lead from mother to fetus, [25][26][27] which occurs with a placental transmission ratio of 0.6. The transfer of lead from mothers to nursing infants through breast milk occurs in much lower amounts by comparison, with the mammary gland being a barrier that effectively maintains a low milk:plasma ratio for lead. [25][26][27] Despite a documented wide range of lead concentrations in human milk, there have been no reports of toxicity caused by breast feeding. 25 For children in non-lead-polluting industrial areas, paint provides the most common source of exposure to lead. In many developed countries, a reduction in the use of leaded petrol over the last decade has diminished lead poisoning; this has led to complacency. 7,28 The ban on leaded petrol in Australia, in effect since 2002, has resulted in declining blood lead levels, though interestingly, even today some dust from roadsides still contains significant concentrations of lead (Peter Baghurst, personal communication). The most recent Australian National blood lead survey, conducted in 1996 by the Australian Institute of Health and Welfare (AIHW), found a mean blood lead level of 5.8 µg/dl in a random sample of children, where 92.7% of lead levels recorded were below 10 µg/dl. 29 The US National Health and Nutrition Examination Survey (NHANES) of 1976-1980, 1991-1994, and 1999-2002 has documented a steady decline in the percentage of children aged one to five years in the USA with blood lead levels >10 µg/dl, from 77.8% to 4.4% and then to 1.6%, respectively. 30 The average blood lead level for Swedish children aged 7-11 years whose residence was not near industrial sources of the chemical was 2.1 µg/dl between 1995 and 2001. 31 However, pockets of ongoing population exposure to lead still occur in developed countries.
For example, the Guy's and St. Thomas' Medical Toxicology Clinic in the UK saw many tens of patients with significant lead poisoning requiring chelation therapy in South London between 1998 and 2006. Lead paint from Victorian houses remains a source of lead poisoning when it chips and flakes before and during restoration. 32 This is also a source of exposure in the USA, 33 where the economically disadvantaged, recent migrants, and children with developmental delays are at a higher risk of lead exposure than the general population. 7 Mean blood lead concentrations in these higher-risk populations have declined over time, but remain elevated in some locations and among some populations. 34 More recently identified sources of lead exposure for children include industrial environmental sources, such as the shipping of lead in the case of Esperance. Newly identified sources of lead poisoning over the last 10 years also include fishing weights, otherwise known as sinkers, 35 snooker chalk, 36 and lead paint found on products including children's toys and barbeques. Increasing globalization has a marked impact on risk as, for example, in the case of lead paint on toys made in China 37 appearing in shops in the USA and Australia. As a result, there is an increasing need for public health authorities to be vigilant for both domestic and imported lead hazards and to put surveillance systems in place for early identification of such hazards. The most recent US death associated with lead toxicity 38 was in a child who swallowed metal jewelry. This 'imported risk' may affect certain ethnic groups more than others. For example, significant amounts of lead are found as a contaminant or an intentional adulterant in some herbs and ethnic remedies, including ayurvedic herbal products, 39 imported spices, or Hispanic folk remedies such as 'litargirio'. 27 However, toys made with lead paint are distributed more widely than these remedies.

What effect does lead have on children? General principles

Lead is neurotoxic. 40 It interferes with signal transmission at the synapse and with cellular adhesion molecules, causing disruption of cellular migration during critical times of nervous system development. Disruption of subunit expression of the N-methyl-D-aspartate receptor (NMDAR) and of NMDAR-mediated calcium signaling in glutamatergic synapses is considered the main mechanism of lead-induced deficits in synaptic plasticity, and in the learning and memory deficits documented with animal models of lead toxicity. 41 At fairly substantial levels of exposure, lead inhibits the enzymes ferrochelatase and delta-aminolevulinic acid dehydratase, 42 resulting in microcytic hypochromic anemia. There is no single clinical neurological set of effects that makes up a 'signature' injury associated with lead exposure. Deficits have been reported in verbal IQ, performance IQ, academic skills such as reading and maths, visuo-spatial skills, problem solving, fine and gross motor skills, memory and language skills. 7,43-45 As methods of measuring both lead exposure and cognitive development become more sensitive, subtle adverse impacts of very low blood lead levels become more quantifiable. One of the greatest challenges facing clinicians dealing with lead issues today is determining what this means for individual patients and populations, and mitigating these risks.
Emerging literature on health effects of blood lead concentrations >10 µg/dl

Several older meta-analyses of observational epidemiological studies indicate that a child's IQ scores decline 2-3 points per 10 µg/dl increase in blood lead level 12,46 between 10 and 30 µg/dl. Below 10 µg/dl, a pooled analysis using a log-linear model shows that the function that best describes the data predicts a 9.2-point decline in IQ over the range of <1 to 30 µg/dl, with two thirds of the decline predicted to occur in the range of <1 to 9.9 µg/dl. 2 A plausible explanatory hypothesis for such findings is a lead-sensitive effect that is rapidly saturated at blood levels <10 µg/dl. 7 Some long-running prospective studies suggest that lead-associated neurodevelopmental deficits induced by postnatal exposure resolve over several years. 3 But other studies, such as those involving the Port Pirie data, 2,13 do not indicate this. Moreover, brain functional imaging studies show differences between those people with and without past exposure to the metal. 47 At higher doses, the impacts of lead include damage to the nervous, haemopoietic, endocrine, 48 and renal 49 systems. Data indicate that lead contributes to nephrotoxicity even at blood lead levels below 5 µg/dl, especially in susceptible population groups such as those with hypertension, diabetes mellitus, or chronic renal disease. 1,10,30,49,50

Emerging literature on health effects of blood lead concentrations <10 µg/dl

Recent studies suggest that there is no concentration threshold for injury from absorbed lead in children, and blood lead levels under 10 µg/dl have been correlated with declining IQ scores. 4,5,51,52 Lanphear et al. 5 found that for every 1 µg/dl increase in lead concentration there was a 0.5-point decrease in average scores of arithmetic and reading for children whose blood lead concentrations were <5 µg/dl. Individual studies associating blood lead levels below 10 µg/dl with adverse cognitive impacts must be interpreted carefully, in light of what is known about timing of exposure in relation to child development, outcome measures, methodological limitations, and the importance of controlling for confounding and effect-modifying variables that include socioeconomic status, maternal education, and housing quality. 51,52 The evidence that has evolved over the past five years on the dose-response relationship below lead levels of 10 µg/dl has clear implications for policy decisions based on it. 53 In 1991, the US CDC chose 10 µg/dl as an initial screening level of concern for lead in children's blood. Current data on health risks and intervention options do not support generally lowering that level, but federal lead-poisoning prevention efforts in the USA have revised the follow-up testing schedule for infants aged one year or less with blood lead levels of 5 µg/dl or higher, rather than 10 µg/dl or higher. 51,52 This level was also applied as the cutoff for ongoing surveillance in Esperance, as described above. One review suggests lowering the blood lead action level from 10 to 2 µg/dl. 54 Such a suggestion needs to be put in the context of the laboratory methods' sensitivity to detect lead. 55 To help in interpreting blood lead levels, clinicians need to understand the laboratory error range for blood lead values, and to select a laboratory that achieves levels within ±2 µg/dl. 55
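To make the shape of this dose-response evidence tangible, the short sketch below calibrates a log-linear loss curve so that the total predicted decline across <1 to 30 µg/dl equals the 9.2 IQ points of the pooled analysis cited above. The functional form and its calibration are assumptions of this illustration, not the published model itself.

```python
import math

# Illustrative calibration only: a log-linear loss anchored so that the
# total predicted decline between 1 and 30 ug/dL equals the 9.2 IQ
# points reported for the pooled analysis discussed in the text.
TOTAL_DECLINE_1_TO_30 = 9.2
BETA = TOTAL_DECLINE_1_TO_30 / math.log(30.0)

def predicted_iq_decline(blood_lead_ug_dl: float) -> float:
    """IQ points lost relative to a 1 ug/dL baseline (log-linear shape)."""
    return BETA * math.log(max(blood_lead_ug_dl, 1.0))

for level in (1, 5, 10, 20, 30):
    print(f"{level:>2} ug/dL -> {predicted_iq_decline(level):4.1f} IQ points")
# Under this calibration roughly two thirds of the 9.2-point decline
# falls below 10 ug/dL, consistent with the saturating, supra-linear
# pattern the text describes.
```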
The US Agency for Toxic Substances and Disease Registry (ATSDR) has refused to set a minimum risk level because some of the health effects associated with exposure to lead occur at blood levels so low as to indicate that toxicity occurs essentially without a threshold. 54 Bernard 51 has suggested that very young children with blood lead levels above the national average should be tested more frequently. The CDC also suggested more frequent testing for some children whose blood lead levels are in the range of 5-9 µg/dl. 52 Bernard advocates education about the dangers of lead, the use of blood lead surveillance, and the collection of additional data to identify populations at risk. The risk to an individual of a small drop in IQ is minimal, but the population risk of an effect on IQ, or other health outcomes, is great. 48

Factors that make children most susceptible to lead toxicity

It is only recently that the concept of considering children as a sensitive subpopulation in health risk terms has gained acceptance in environmental toxicology. Children are more susceptible to the adverse effects of lead than adults for several reasons. Lead exposure around 28 weeks of gestation coincides with a time of critical neurological development, leading potentially to permanent effects even at low levels. 56,57 As infants and toddlers, their behavior is marked by a high frequency of hand-to-mouth activities, and hence they tend to ingest more dust. 7,58 The fraction of ingested lead absorbed by young children is higher than that absorbed by adults, 7,58 and the developing nervous system is more susceptible to toxins. 7,58,59 In addition, some children have an urge to repetitively consume non-food products such as lead paint flakes; this is called pica. 7,58

Assessing a population that may have been exposed to lead

When assessing a population that may have been exposed to lead, the following factors need to be taken into consideration:
• the time course and sources of exposure;
• the type of lead compound involved and its physical form, for example dust, pellets, paints, or dissolved lead in solution;
• the age distribution of the children exposed.
Children's risk of lead poisoning correlates positively with their ability to walk and their hand-to-mouth behavior. Typically, peak blood lead levels occur by 18-30 months of age and then decline gradually through the rest of the toddler and school years. 7 Children who have persistent pica are at high risk for continued lead exposure well into their school-aged years; this includes those children with developmental delays. Most young children are poisoned by the ingestion of lead-containing dust as a result of hand-to-mouth behaviors. In a poisoning incident, inquiry into the possible lead contamination of all the environments in which the child spends significant amounts of time needs to be undertaken. What is the age of the house? Have any renovations taken place recently? Are there bite marks on windows or furniture? Other potential sources of lead also need to be considered (Table 1). What source of water does the family use for drinking? The family's dust control and hand-washing behaviors need to be known. A full physical examination with an emphasis on neurological function is required as part of the assessment. Parents may describe irritability, insomnia, aggressiveness, lack of focus and attention, poor appetite, and speech delays.
But these features are, of course, also found in non-lead-exposed children, 7 and most children with blood lead levels considered elevated will be asymptomatic, showing no physical signs of poisoning. A developmental evaluation should be considered and appropriate developmental monitoring should be established. 83,84

Table 1. Potential sources of lead exposure
Occupational sources: car repair, 91 mining, 84 smelting, 84 demolition, 92 battery manufacture, 84 construction, 93 pipe fitting, 94,95 plumbing, 94,95 shipbuilding, 96 bridge reconstruction, 97 glazing and pottery, 85 renovations 90
Other sources: contaminated foods, e.g., flour, 84,85 surma (kohl) cosmetics, 85 ceramic bowls and glazes, 85 drinking water from lead pipes, 86 dust, 87 traditional remedies, 84 soldered pots and kettles, 85 paint, plaster and putty, 83 metallic jewelry, 38 soil, 88 snooker chalk, 89 lead fishing weights/sinkers 85

Diagnostic modalities for assessing lead poisoning in children

The best way to assess the degree of lead poisoning in children is by taking venous blood for lead estimation. Heel or finger prick testing is prone to significant error because of environmental skin contamination. 7 However, with prior thorough cleaning of the skin, it has been used by many to collect biomonitoring samples. Recently, in undertaking health surveillance after the lead-dust exposure scenario in Esperance, venous blood testing was performed on children under five years of age without the anticipated difficulties of venous access. Retesting was carried out at three-monthly intervals in children with lead concentrations >5 µg/dl, and showed significant expected falls (the estimated elimination half-life of lead from blood being 30 days). 60 This showed that bioaccumulation from ongoing environmental exposure was not occurring, since a rapid fall in concentration indicates short-term rather than ongoing long-term exposure. Roberts et al. 60 found that the average time for blood lead to decline was linearly related to the peak concentration of blood lead, but the time for 50% of the blood lead to decline to <10 µg/dl was not linear and varied with peak lead levels. Venous blood lead estimation is a short-term measure of lead exposure (half-life of 30 days) that reflects exposure from current exogenous sources and the release of lead from bone. 61 The US CDC has, for screening purposes, defined a blood lead level of 10 µg/dl as the threshold level of concern, a value never intended as a definition of what is safe or 'normal'. 55 The CDC recommends that state and local health departments develop appropriate screening strategies for their areas. In the USA, some states have adopted universal annual screening of preschool children 1-5 years old for blood lead, and others have targeted those at highest risk. 7 In many cases, the costs of universal screening exceeded the value of the health benefits. 62 The recommendations for Medicaid-eligible children are mandated by those with authority over Medicaid, and CDC guidelines are in agreement with this. All Medicaid-eligible children and children living in high-risk communities (e.g., those in which 12% or more of the children have blood lead levels ≥10 µg/dl) are screened. Venesection is traumatic to many children and the balance of benefit and harm is a fine one, particularly where unselective, indiscriminate testing occurs.
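The surveillance logic described above (enrolment at a venous blood lead of 5 µg/dl or more, retesting at three-month intervals, and comparison of each result against the washout expected from a 30-day elimination half-life) can be sketched in a few lines. This is a minimal illustration: the threshold, interval and half-life come from the text, while the function names, record layout and the flagging margin are invented, and the one-compartment decay deliberately ignores re-release from bone, which is discussed below in the context of chelation.

```python
from datetime import date, timedelta

ENROLMENT_THRESHOLD = 5.0             # ug/dL, margin-of-safety level used in Esperance
RETEST_INTERVAL = timedelta(days=90)  # three-month surveillance interval
HALF_LIFE_DAYS = 30.0                 # estimated elimination half-life of lead in blood

def expected_level(previous_ug_dl: float, days_elapsed: float) -> float:
    """One-compartment first-order washout; ignores release from bone."""
    return previous_ug_dl * 0.5 ** (days_elapsed / HALF_LIFE_DAYS)

def review(previous: float, previous_date: date,
           current: float, current_date: date) -> str:
    """Classify a retest result against the decay expected with no re-exposure."""
    if current < ENROLMENT_THRESHOLD:
        return "below threshold: exit surveillance"
    predicted = expected_level(previous, (current_date - previous_date).days)
    if current > 2 * predicted:  # arbitrary illustrative margin
        return (f"retest on {current_date + RETEST_INTERVAL}: "
                f"level exceeds predicted ~{predicted:.1f} ug/dL, suspect ongoing exposure")
    return (f"retest on {current_date + RETEST_INTERVAL}: "
            f"falling as expected (~{predicted:.1f} ug/dL predicted)")

# A child enrolled at 8.0 ug/dL; after 90 days (three half-lives) the model
# predicts ~1.0 ug/dL, so a result of 6.5 would flag possible re-exposure.
print(review(8.0, date(2007, 4, 2), 6.5, date(2007, 7, 1)))
```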
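Isotopic source attribution, used in Esperance as noted earlier and discussed further below, amounts at its simplest to comparing a sample's lead isotope ratios with those of candidate sources. The toy matching step here uses invented 206Pb/207Pb and 208Pb/206Pb values purely for illustration; real source apportionment relies on measured ratios with proper uncertainty handling.

```python
# Toy source matching in (206Pb/207Pb, 208Pb/206Pb) ratio space.
# All numeric ratios below are invented for illustration only.

def ratio_distance(a, b):
    """Euclidean distance between two isotope-ratio pairs."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

candidate_sources = {
    "port ore stockpile": (1.042, 2.168),  # hypothetical point source
    "old paint":          (1.120, 2.075),
    "roadside dust":      (1.095, 2.110),
}

blood_sample = (1.118, 2.079)

best = min(candidate_sources,
           key=lambda s: ratio_distance(candidate_sources[s], blood_sample))
print(best)  # -> "old paint": this child's exposure does not match the point source
```

A match like this is how, in Esperance, several children's elevated levels could be attributed to sources other than the Port.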
Fetal risk from maternal exposure to lead during pregnancy is substantial, so women engaged in occupations or crafts known to carry a risk of lead contamination should be screened periodically. Blood levels of concern for pregnant females are 5 µg/dl or higher. 61 Gardella 63 showed a strong positive correlation between maternal and umbilical cord blood lead levels exceeding 10 µg/dl. This is important in testing exposure to lead in utero. The lead content of the surface enamel of deciduous teeth in children can be estimated by atomic absorption photometry. 59 Lead accumulated in this part of the tooth is linked to the environment in which people reside, and as such it can be used as a biomarker of lead exposure. 64,65 It is unsuitable as a measure of acute recent exposures. The use of zinc protoporphyrin (ZPP) can be helpful in individuals with moderate-to-high blood lead concentrations, where the objective is to determine the chronicity of exposure. 7,66 An elevated ZPP indicates circulating lead in the preceding 90 days. An elevated ZPP in a child shown to be iron-sufficient indicates a longer duration of exposure to lead, with a body burden (i.e., more lead deposition in bone) that will require more extensive chelation therapy. 7 In cases where the timing and location of exposure are known, it carries little value. Potentially, it can also be used as a screening tool as a surrogate marker for blood lead. 7,66 But the best screening tool and marker of lead exposure in children remains blood lead, and it should not be replaced by ZPP. ZPP measurements are fraught with difficulty because there is a poor correlation between ZPP and blood lead at lower blood lead concentrations. 66 Moreover, there are other conditions (e.g., iron deficiency) that can increase ZPP, and there is significant inter-individual variation in values. Thus, in the case of lead-dust exposure in Esperance, its use was rejected because of its expected relatively low sensitivity and specificity. 67 There is better correlation between ZPP and blood lead at higher concentrations (>20 and particularly >40 µg/dl). 67 Speciation of lead by isotopic analysis is a newly developed research application of a technique that is very helpful in identifying the source of lead exposure. 65,68 By comparing the isotopic profiles (isotope ratios) of lead among samples, it is possible to identify or exclude source(s) that contribute to cases of pediatric lead poisoning. In cases of environmental exposure, it can confirm the source of exposure and also help to identify individuals who have not been exposed by this point source. An abdominal X-ray may reveal recently ingested lead paint chips, 33 sinkers, or plaster: sources of exposure that require removal with polyethylene glycol to prevent further lead absorption. 7,69 Long bone X-rays can show 'lead lines' (which represent growth arrest), 33 but do not alter the way a case of child poisoning should be managed. 7,69 Routine screening of populations with iron deficiency for lead is also of very limited value because of the low detection rate. 70 Other risk factors for lead poisoning include concurrent iron deficiency, and it is important that pale or anemic-looking children are screened for this. 71 Both conditions cause anemia and produce a more severe form when combined. The explanation for this is that lead is somehow taken up by the iron transport system in the gut, which is up-regulated in iron-deficient states. 72,73
It follows that treatment of iron deficiency limits uptake of lead and helps with haematopoiesis, 74 and thus the prevention of iron deficiency may represent a public health intervention for reducing lead exposure in humans. 75 However, iron supplementation has not been shown to benefit iron-replete children with pre-existing lead poisoning and may reduce lead excretion. 59

Current and future intervention strategies to mitigate risk from lead

The US CDC and the World Health Organization define a blood lead level of 10 µg/dl as the threshold level of concern, at which point active management of exposure to lead should be initiated. 7 In addition to strategies discussed above, other lead-reduction strategies include changes in industrial working practices and litigation. 7,22 Childhood lead-prevention programmes should concentrate on home visits and lead source investigations. 54,76 Five randomized trials examined the effectiveness of intervention strategies: professional house cleaning, vacuuming with HEPA air filters, provision of an individualized healthcare plan, and parental teaching on lead exposure prevention. 76 Only repeated professional house cleaning reduced the blood lead concentration significantly. However, because of ethical constraints, in four trials the control group also received information about lead poisoning prevention, which is a confounding factor for the effects of the study intervention alone. 76 Individual lead exposure-reduction strategies such as hand washing and wet mopping of dust reduce the lead burden attributed to dust, and allow an individual to regain a sense of control. 7,22 In explaining the measures taken to reduce lead risk in a population, it is important to give people control over their own risk. There is a very limited role for chelation therapy, using meso-2,3-dimercaptosuccinic acid (DMSA), in cases where levels of lead in the body are elevated as a result of environmental exposure. In a randomized, double-blind, placebo-controlled trial conducted between September 1994 and June 2003 in the USA, 1854 children aged 12-33 months with referral blood lead levels between 20 and 44 µg/dl (0.96-2.12 µmol/l) received chelation therapy with DMSA. This lowered their average blood lead concentrations for approximately six months, but resulted in no benefit in cognitive, behavioral, or neuromotor end points. 77 There was no relationship between falling blood lead levels and improved cognition in the group treated with the active drug. 77,78 Although DMSA lowers blood lead in moderately poisoned children, it has no beneficial effect on growth and may have adverse effects. 79 Even with higher blood lead concentrations, the problem in essence arises from ongoing exposure situations where continuing lead mining operations, such as Broken Hill in New South Wales, Australia, result in bone deposition of lead. When this happens, the reservoir of lead in bone is in equilibrium with blood. DMSA removes lead from blood, but probably only 1% of the total body burden. [80][81][82] Re-equilibration then takes place and blood levels rise again. This can happen particularly sharply if ongoing exposure takes place. 80,81 Thus, repeated chelation therapy is required to remove the lead. 80,81 This may cause mild and reversible elevation of liver enzymes. In one study, there were no reported differences in liver function tests between participants treated with DMSA and placebo groups. 44
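The bone-blood re-equilibration described above can be caricatured with a two-pool difference-equation model: chelation strips the blood pool, and the far larger bone reservoir then pushes levels back up. Every pool size and rate constant below is invented; the sketch reproduces only the qualitative rebound the text describes, not any measured kinetics.

```python
# Toy two-pool (blood <-> bone) model of post-chelation rebound.
# All pool sizes and rate constants are invented for illustration.

blood, bone = 20.0, 1000.0  # arbitrary ug/dL-equivalent units; bone dominates
K_RELEASE = 0.0005          # daily fraction of bone store released to blood
K_UPTAKE = 0.002            # daily fraction of blood lead deposited in bone
K_ELIM = 0.023              # elimination, approximating a 30-day blood half-life

for day in range(1, 121):
    if day == 10:
        blood *= 0.2        # a chelation course removes most *blood* lead only
    release = K_RELEASE * bone
    blood += release - (K_UPTAKE + K_ELIM) * blood
    bone += K_UPTAKE * blood - release
    if day in (9, 10, 30, 60, 120):
        print(f"day {day:3d}: blood {blood:5.1f}, bone {bone:7.1f}")
# Blood lead creeps back toward its pre-chelation level because the
# chelator touched only a tiny fraction of the total body burden,
# which is why repeated courses are needed under ongoing exposure.
```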
Prevention of exposure is thus unquestionably a much more effective intervention strategy than post-exposure chelation therapy. However, enormous effort and time are required to make a home 'lead safe'. 82 Prevention of lead exposure is key to averting risk, so engineering solutions, dust monitoring of industrial sites, and lowering environmental pollution all show promise in this regard. As the evidence accumulates that lead toxicity is significant at lower doses than recognized earlier, there will be more pressure to reduce exposure from known lead sources. In meeting these needs, emerging technologies will need to be developed further to give early warning of the potential for exposure to take place. One such early-warning technology is lead-dust monitoring of the air around ports and smelter stacks.

Conclusions

Many countries in the world have a public health burden from lead. In many developed regions, there is complacency because leaded petrol has been phased out, yet certain subpopulations remain exposed to the toxic metal. In the developing world, lead exposure is ongoing and comes from multiple sources. None of us can afford to be ignorant of the risks associated with lead. We need to know how to approach the issue of preventing exposure where possible, and how to apply appropriate health screening and surveillance approaches if exposure has occurred. Measuring blood lead concentrations remains the cornerstone of testing for the degree of lead poisoning. Dealing with a population's fear of unemployment versus its fear of risks to children and tolerance of environmental toxins is a key issue in the modern world. Nowhere is there a better example of that than in cases of lead exposure. Toxicologists have a duty of care to present risks in a balanced way, to allow communities to take informed decisions about prevention of exposure, and to recognize the likely health consequences of exposure to lead. People quite rightly expect their clinicians to be better informed and to demonstrate excellence in risk communication.
2018-04-03T02:53:51.410Z
2009-05-13T00:00:00.000
{ "year": 2009, "sha1": "d5e8da7d8885dbdaff9374cd19cb21234ce351c9", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3167648?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "2bbafe9fd0c92b0eca651c14be57d0fc77fa23cd", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235492650
pes2o/s2orc
v3-fos-license
Pre-hospital emergency anaesthesia in trauma patients treated by anaesthesiologist and nurse anaesthetist staffed critical care teams

Background: Pre-hospital tracheal intubation in trauma patients has recently been questioned. However, not only the trauma and patient characteristics but also airway provider competence differ between systems, making simplified statements difficult.
Method: The study is a subgroup analysis of trauma patients included in the PHAST study. PHAST was a prospective, observational, multicentre study on pre-hospital advanced airway management by anaesthesiologist and nurse anaesthetist manned pre-hospital critical care teams in the Nordic countries, May 2015 to November 2016. Endpoints include intubation success rate, complication rate (airway-related complication according to the Utstein Airway Template by Sollid et al), scene time (time from arrival of the critical care team to departure of the patient) and pre-hospital mortality.
Result: The critical care teams intubated 385 trauma patients, of whom 65 were in shock (SBP <90 mm Hg), during the study. Of the trauma patients, 93% suffered from blunt trauma, the mean GCS was 6, and 75% were intubated by an experienced provider who had performed >2500 tracheal intubations. The pre-hospital tracheal intubation overall success rate was 98.6% and the complication rate was 13.6%, with no difference between patients with or without shock. The mean scene time was significantly shorter in trauma patients with shock than in those without (21.4 vs 25.1 min). Following pre-hospital tracheal intubation, 97% of trauma patients without shock and 91% of the patients in shock with measurable blood pressure were alive upon arrival at the ED.
Conclusion: Pre-hospital tracheal intubation success and complication rates in trauma patients were comparable with in-hospital rates in a system with very experienced airway providers. Whether the short scene times contributed to a low pre-hospital mortality needs further investigation in future studies.

| INTRODUCTION

Trauma is a leading cause of premature mortality. 1 While pre-hospital trauma care has developed rapidly in the past decades, pre-hospital emergency anaesthesia remains controversial. There is substantial heterogeneity with regard to the operating procedures of emergency medical services and the competencies of providers. [2][3][4] What interventions should be performed in the pre-hospital setting, and who should perform them, is widely debated. [5][6][7] Minimizing the time from injury to definitive care is uncontroversial and is associated with decreased mortality and morbidity. 8 Airway compromise in severely injured patients is frequent and is a significant cause of poor outcome. Pre-hospital emergency anaesthesia and tracheal intubation (PHEA) is a critical but high-risk intervention with potential serious adverse events including hypoxia, hypotension, tracheal aspiration, as well as difficult or unsuccessful intubation. 9 Recently, the benefit of pre-hospital emergency anaesthesia (PHEA) in trauma patients in haemorrhagic shock was disputed. 10 PHEA has been documented to increase in-hospital mortality in awake, hypotensive trauma patients, and a delay of induction of anaesthesia until hospital arrival was proposed for that subset of patients. 11 Differences in airway provider competence may affect the performance and outcomes of advanced airway management. 12 In the Nordic countries, with few exceptions, PHEA is performed by experienced anaesthesiologists.
The objective of the present study is to describe PHEA outcomes in trauma patients with and without shock. Outcomes include intubation success rates, complication rates, scene time and pre-hospital mortality.

| Context

In many regions in the Nordic countries, the emergency medical services are reinforced by rapid-response car- and helicopter-based critical care teams. 2 Anaesthesiologists staff the vast majority of these higher-tier units. In the Nordic countries, anaesthesiologists are board certified in both anaesthesiology and intensive care medicine. Pre-hospital anaesthesiologists commonly rotate between pre-hospital duties and in-hospital theatre and intensive care work. These advanced pre-hospital providers routinely perform rapid sequence induction before tracheal intubation. This study encompasses pre-hospital critical care teams in both rural and urban areas, covering populations of more than 7 million inhabitants.

| Data collection

This study is a subgroup analysis of trauma patients included in the Nordic PHAST (Pre-hospital advanced airway management by anaesthetists and nurse anaesthetists critical care teams) study. 13

| Endpoints and definitions

Tracheal intubation in trauma patients is described with focus on intubation success rate and airway complications, as well as scene time and pre-hospital mortality. A tracheal intubation attempt was defined as laryngoscopy with the intent to intubate. Successful tracheal intubation was confirmed with lung auscultation and/or capnography. Tracheal intubation complications were defined in accordance with Sollid et al as dental trauma, vomiting, aspiration of gastric contents or blood, intubation of the oesophagus or right main stem bronchus, oxygen saturation <90%, systolic blood pressure (SBP) <90 mm Hg and pulse <60 beats/min. 14 Shock was defined as a systolic blood pressure <90 mm Hg. Scene time was defined as the time from the arrival of the critical care team on scene until the departure of the response vehicle carrying the patient.

Editorial Comment: For management of the severely injured at the accident scene, there is always a decision point to either administer advanced treatment at the site, which can take time to optimize, or rapidly transport the severely injured to the nearest advanced hospital. This study shows that prehospital emergency anaesthesia and airway management can be performed with very high success rates if done by experienced providers.

| Statistical analysis

Baseline characteristics were described as means and standard deviations for continuous variables and numbers and percentages for categorical variables. The association between shock (the exposure) and scene time (the outcome) was analysed using linear regression. The difference in mean scene time in patients with shock compared to no shock was reported with 95% confidence intervals, with and without multivariable adjustment. Linear regression was used to identify variables associated with scene time that might confound the association between shock and scene time. A P value of <.1 was chosen as a threshold for inclusion in the multivariable analysis.
Variables eligible for inclusion were age, sex, estimated weight, intoxication, aggravated conditions (darkness; hostile environment), traumatic brain injury, and the provider's total number of intubations (Table 1). Variables in Table 1 not eligible for inclusion were those regarded as part of a causal pathway, either causing shock or secondary to shock: the National Advisory Committee on Aeronautics (NACA) severity score, multitrauma, blunt trauma, burns, penetrating trauma, cardiac arrest, seizure, intoxication, strangulation, and the first vital signs (Glasgow Coma Scale, respiratory rate, oxygen saturation and systolic blood pressure). Some data were missing in the dataset. The frequency of missing values in the variables was: total number of intubations 0.7%, age 12%, sex 2.0%, estimated weight 3.4%, NACA 20%, Glasgow Coma Scale 1.0%, oxygen saturation 6.8%, respiratory rate 12%, heart rate 0.7% and number of endotracheal intubation attempts 0.3%. Data management and statistical analysis were performed using Stata version 15 (StataCorp).

| Ethics

The study is a pre-specified subgroup analysis of the PHAST study with Ethical Review Board approvals obtained from Sweden

| RESULTS

Among the 2028 patients who underwent attempted pre-hospital tracheal intubation in the PHAST study, 385 were trauma patients (Figure 1). Records that had missing information on systolic blood pressure (n = 66), missing information on scene time (n = 17) and those who were trapped on scene (n = 5) were excluded. One patient was regarded as an outlier and possibly incorrectly registered with a scene time of 330 min. Two patients had a scene time of 0 min; these were regarded as incorrect values and were excluded. After exclusion, we identified 294 trauma patients who underwent attempted pre-hospital tracheal intubation. The mean patient age was 45 years and 74% were male (Table 1). The mean NACA score was 5.

| Tracheal intubation outcomes

The overall success rate of pre-hospital tracheal intubation was 98.6% (290/294), with a first-pass success rate of 88.4% (Table 2). There was no difference in the tracheal intubation overall success rate in patients with shock vs no shock (100% vs 98%; P = .28). Tracheal intubation complications were registered in 13.6% of the patients, with no difference between patients with shock and without shock (12% vs 14%; P = .73). The reported complications are presented in Table 3. Pre-intubation checklists were less frequently used in patients in shock compared to patients without shock (15% vs 63%; P < .01). Following attempted pre-hospital tracheal intubation, 97% of the trauma patients not in shock were alive at arrival to the emergency department (ED). Of the trauma patients in shock (SBP <90 mm Hg) with measurable blood pressure, 91% were alive at arrival to the ED (Table 4).

| DISCUSSION

This study documents successful pre-hospital tracheal intubation among trauma patients by experienced pre-hospital anaesthesiologists and nurse anaesthetists. Furthermore, the study documents short scene times following pre-hospital tracheal intubation, with even shorter scene times for patients in shock. 15,20
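As a small illustration of the proportion comparisons reported above (for example, overall success of 100% vs 98% in shock vs no shock, P = .28), the fragment below applies Fisher's exact test to a 2 x 2 table. The cell counts are reconstructed to approximate the reported percentages and totals, and are not the study's exact figures.

```python
# Proportion comparison for intubation success, shock vs no shock.
# Cell counts below are approximate reconstructions, not study data.
from scipy.stats import fisher_exact

#                 success  failure
shock_group    = [51,      0]
no_shock_group = [239,     4]    # 51 + 0 + 239 + 4 = 294 patients

oddsratio, p = fisher_exact([shock_group, no_shock_group])
print(f"P = {p:.2f}")  # non-significant, in line with the reported P = .28
```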
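Likewise, the scene-time analysis described in the Statistical analysis subsection, a crude and then a covariate-adjusted linear regression of scene time on shock, can be sketched on synthetic data. The column names and simulated numbers below are invented; only the overall design (crude versus multivariable regression with a 95% confidence interval on the shock coefficient) mirrors the study.

```python
# Hedged sketch of the scene-time regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 294
df = pd.DataFrame({
    "shock": rng.integers(0, 2, n),  # SBP < 90 mm Hg, yes/no
    "age": rng.normal(45, 18, n),
    "tbi": rng.integers(0, 2, n),    # traumatic brain injury, yes/no
})
# Simulate shorter scene times for patients in shock (about -3.7 min).
df["scene_time"] = 25.1 - 3.7 * df["shock"] + 0.02 * df["age"] + rng.normal(0, 6, n)

crude = smf.ols("scene_time ~ shock", data=df).fit()
adjusted = smf.ols("scene_time ~ shock + age + tbi", data=df).fit()
for name, fit in [("crude", crude), ("adjusted", adjusted)]:
    lo, hi = fit.conf_int().loc["shock"]
    print(f"{name}: diff {fit.params['shock']:.1f} min (95% CI {lo:.1f} to {hi:.1f})")
```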
In the present study, we demonstrate relatively short scene times, even though the optimal scene time has not been defined for the severely injured and the evidence is conflicting as to which patients benefit most from on-scene treatment versus rapid transport.15,20

CONCLUSION
Following pre-hospital tracheal intubation in trauma patients by anaesthesiologist- and nurse anaesthetist-manned pre-hospital critical care teams, the success and complication rates were comparable with in-hospital rates. The overall scene times were short, and even shorter among patients with shock. This may contribute to the low pre-hospital mortality rate observed in trauma patients in shock with measurable SBP <90 mm Hg. However, future well-designed studies are needed to investigate this hypothesis.

ACKNOWLEDGEMENTS
We thank the local investigators (E. Fevang, A. Bäckman, A. Skallsjö) and the critical care teams participating in the original PHAST study.

CONFLICTS OF INTEREST
The authors declare that they have no competing interests.

AUTHOR CONTRIBUTIONS
MGe conceived and initiated the study. BA, DH and MGe contributed to the design of the study. BA, DH and MGe gathered and structured the dataset and performed the analysis. BA drafted the manuscript. MGe, DH, DK and MGu critically revised the manuscript and approved the manuscript to be submitted. BA affirms that the manuscript is an honest, accurate and transparent account of the study being reported, and that no important aspects of the study have been omitted.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE
The study is a pre-specified subgroup analysis of the PHAST study, with Ethical Review Board approvals obtained from Sweden.

CONSENT FOR PUBLICATION
Not applicable.

DATA AVAILABILITY STATEMENT
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Analysis on the performance of hot water extraction and alkaline extraction for sodium hydroxide-assisted steam exploded empty fruit bunch at pilot scale

Empty fruit bunches (EFB) contribute the most to the biomass waste produced by the palm oil industry. Biomass waste is made up of cellulose, hemicellulose, and lignin. With its high cellulose content, EFB has great potential for cellulose production. However, the cellulose extraction process has yet to be optimized. Therefore, this study examines the operating conditions for extracting cellulose from EFB by investigating the sodium hydroxide (NaOH) soaking process prior to steam explosion pre-treatment. The effects of retention time in the hot water extraction (HWE) treatment and of NaOH concentration in the alkaline extraction (AE) treatment on the amount of dissolved sugar were observed. The chemical properties of the original and treated fibres were analysed by Fourier Transform Infrared (FTIR) spectroscopy, and the surface morphology was observed using scanning electron microscopy (SEM). In this study, it was found that the best condition for alkaline extraction was a 10% alkaline concentration, and FTIR spectroscopy shows that there are no changes in the chemical structure of the fibre. SEM also shows changes in the surface morphology of the fibre, indicating that the sodium hydroxide-assisted steam explosion pre-treatment greatly influences the subsequent processing steps.

Introduction
EFB constitutes about 20% to 22% of the weight of fresh fruit bunches and contains 30.5% dry matter, 2.5% oil and 67% water. Due to the abundance of EFB waste produced by oil palm mills [1-3], the approach of making value-added products from EFB by chemical modification is very promising. Typically, EFB comprises about 24-65% cellulose, 21-34% hemicellulose and 14-31% lignin [4]. With this high cellulose content, there is a possibility of using cellulose from EFB as fibre reinforcement in composites for the automotive industry. In recent decades, many researchers from all around the globe have focused their studies on applications using cellulose as a replacement for traditional reinforcing fibres in fibre-reinforced composites [5-10]. The increasing number of studies in this field is mainly due to the low density, high strength, low cost and biodegradability of natural fibres. However, in order to make natural fibres compatible with the polymer matrix, chemical treatments are needed to modify the fibre surface [11]. Cellulose is a simple linear macromolecular polymer, and it is embedded in a matrix of lignin and hemicelluloses [12]. Together, they form tightly packed cellular structures that make up fibre bundles and are the base for most biomass tissues. This naturally packed structure allows plant fibres to bear high mechanical loads and to resist chemical and enzymatic degradation by microorganisms. This common feature of plant fibres is often termed biomass recalcitrance, and it is a major technical obstacle for most biorefinery processes. Thus, the implementation of steam explosion and alkaline extraction treatment technology is essential for overcoming the recalcitrance of the biomass structure [11-14]. Steam explosion pre-treatment is a physico-chemical process.
Highly pressurized saturated steam is used to heat up the biomass fibres, and the process ends with an instantaneous release of pressure that causes the biomass fibres to rupture. The high temperature during this process catalyses the release of acetic acid from the cleavage of acetyl groups and triggers an autohydrolysis effect. This eventually results in the hydrolysis of hemicellulose and the depolymerisation of lignin [15], which makes the biomass fibres more accessible for subsequent treatment. Steam explosion assisted by an acid catalyst prior to the treatment has been proven to increase the efficiency of the pre-treatment [16,17], but using acid causes drawbacks such as corrosion of the equipment and extensive downstream processing of effluents, leading to high water consumption [18]. Thus, alternative alkaline-based catalysts have gathered the attention of some researchers because of their proven capability to increase the removal of hemicellulose [17,19]. Alkaline extraction, or mercerization, is the most popular chemical treatment for the extraction of cellulose from biomass fibres. It alters the physical and chemical structure of the fibres and removes the lignin and hemicellulose [20]. The dissolved hemicellulose is in sugar form [21,22]. The major reaction that facilitates the formation of sugar oligomers begins with the depolymerisation and dissolution of hemicellulose. The sugars are further degraded to form monosaccharides and sugar-decomposition products [22,23]. In this study, soaking of EFB fibres in sodium hydroxide (NaOH) was implemented prior to the steam explosion pre-treatment. The steam explosion pre-treatment was followed by hot water and alkaline extraction treatments for the removal of the lignin and hemicellulose structure. The aim of this study was to analyse the effectiveness of sodium hydroxide-assisted steam explosion on the post-treatments, i.e. hot water and alkaline extraction. Throughout the process, the dissolved sugar content of the solutions was analysed, and the changes in the chemical structure of the fibres were observed using FTIR analysis.

Experimental
Material
The raw EFB (60% moisture content) was supplied by the LCSB Oil Palm Mill, owned by LKPP Corporation Sdn. Bhd., located in Pahang, Malaysia. The chemical reagent used was Sigma Aldrich's 99% sodium hydroxide (NaOH) pellets, provided by a local vendor.

Sample preparation. The EFB was dried under atmospheric conditions for 7 days. The dried EFB was then manually chopped into smaller pieces and ground using a SIMA grinder model FG 400×200 fitted with a 200 mm mesh sieve.

Soaking of EFB. Ground EFB fibres were soaked in 3% (w/v) NaOH solution for 16 hours. The soaked EFB fibres were then dried under atmospheric conditions until the moisture content fell below approximately 30%.

Steam explosion pre-treatment (SEP). SEP was carried out in a 700 L carbon steel reactor located at the Cellulose Pilot Plant, Pahang, Malaysia. The sodium hydroxide-soaked EFB (SHIEFB) fibres were fed into the reactor, followed by the supply of saturated steam until the pressure reached 20 bar. Once 20 bar was reached, a 10-minute countdown started, and when the countdown finished, the blowdown valve was opened to create the steam explosion reaction.
The exploded EFB fibres were collected, washed with deionized (DI) water until the pH reached 7±0.5 (neutral) and dried overnight in an oven at 105°C.

Hot water extraction treatment (HWE). The exploded SHIEFB fibres underwent hot water extraction at 5% fibre consistency and 80°C, stirred for various retention times (15 min, 30 min, 45 min, 60 min and 90 min). The treated fibres were then washed with DI water and dried at 80°C overnight.

Alkaline extraction treatment (AE). The hot-water-treated fibres were then treated with different concentrations (5%, 10% and 20%) of NaOH solution at 0.05% fibre consistency and 80°C, stirred for 1 hour. The treated fibres were then washed with DI water until a neutral pH (7±0.5) was reached and dried in an oven at 105°C overnight.

Characterization
Sugar content analysis. The sugar content of the produced solution was measured using a refractometer. A tiny drop of solution was placed onto the flat, slanted surface of the refractometer; the cap was then closed and the reading taken through the eyepiece.

Fourier Transform Infrared (FTIR) analysis. FTIR spectroscopy was carried out using a Thermo Scientific Nicolet iS5 FTIR instrument with a resolution of 4 cm-1, 32 scans per minute and the transmittance technique. The wavenumber range was set from 400 cm-1 to 4000 cm-1. The analyses were performed for all the treated fibres.

Scanning electron microscopy (SEM) analysis. The microstructure and surface morphology of the produced pulp were analysed by SEM using a FEI Quanta 450 at acceleration voltages between 1 and 15 kV. The sample was placed on an aluminium stub and observed at different magnifications.

Sugar content analysis
Based on figure 2, the dissolved sugar content increased with increasing HWE time. The trend started with a rapid increase in dissolved sugar content from 0 to 15 minutes, with the dissolved sugar measured at 1% at 15 minutes. The dissolved sugar content then increased gradually to 1.2% at 30 minutes and 1.3% at 45 minutes. After that, a rapid increase of 0.5% occurred, reaching 1.8% at 60 minutes. The highest dissolved sugar content, 2.2%, was recorded at 90 minutes. The increase in dissolved sugar content was mainly caused by the organic acetic acid released during the HWE, which causes the removal of hemicellulose. This release of organic acid happens when the fibres are exposed to high temperature [24,25]. Hemicellulose comprises polysaccharides containing many different sugar monomers [26]. Chang (2014) likewise observed that a higher amount of hemicellulose was released as the HWE time increased [4]. Figure 3 displays the sugar content against varied NaOH concentrations. At 5% concentration, the sugar content recorded was 1.6%. The dissolved sugar then increased sharply to 2.2% when 10% NaOH concentration was used. At the highest NaOH concentration (20%), the sugar content still increased, but only subtly, to 2.3%. The subtle increase from 2.2% to 2.3% between 10% and 20% NaOH occurred because most of the hemicellulose had already been removed at 10% NaOH, leaving only a small portion of the hemicellulose structure within the fibres. The amount of dissolved sugar increased due to the hydrolysed sugar released into the alkaline hydrolysate during the AE [27].
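As a quick numerical check on these trends, the values reported in the text (figures 2 and 3) can be tabulated and fitted; this is an illustrative sketch, not part of the original analysis:

import numpy as np

t = np.array([15, 30, 45, 60, 90])               # HWE retention time, min
sugar_hwe = np.array([1.0, 1.2, 1.3, 1.8, 2.2])  # dissolved sugar, % (figure 2)
slope, intercept = np.polyfit(t, sugar_hwe, 1)
print(f"~{slope:.3f} % dissolved sugar per extra minute of HWE")

naoh = np.array([5, 10, 20])                     # NaOH concentration, % (figure 3)
sugar_ae = np.array([1.6, 2.2, 2.3])             # plateau above 10% NaOH suggests
                                                 # little hemicellulose remains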
Although AE is intended to remove lignin, it may also affect the molecular structure of hemicellulose and cause its hydrolysis [28]. Therefore, increasing the NaOH concentration promotes the release of hemicellulose.

FTIR analysis
The absorption bands located near 1740 cm-1 are assigned to C=O stretching of acetyl or carboxylic acid groups, corresponding to hemicellulose and lignin. On the other hand, the wavenumbers at 1610 cm-1, 1598 cm-1, 1510 cm-1, and 1465 cm-1 correspond to lignin, being assigned to C=C stretching of the aromatic ring, C-C stretching, C-C stretching of the aromatic ring, and asymmetric bending of C-H3, respectively [28,29]. As seen in figure 4, the spectra follow the trends of the spectral analysis of Nieves et al. (2011), in which the transmittance increases gradually with the treatment parameters. Based on figure 5, the untreated SHIEFB has the highest absorbances, namely 0.016, 0.031, 0.034, 0.026, and 0.032 at wavenumbers 1740 cm-1, 1610 cm-1, 1598 cm-1, 1510 cm-1, and 1465 cm-1, respectively. At 15 minutes, the absorbance started to decrease, and it then decreased gradually with lengthening treatment time. At 90 minutes, the lowest absorbance was recorded at all five wavenumbers. The band at 1740 cm-1 indicates hemicellulose; therefore, as the time increased, the hemicellulose content significantly decreased [28]. According to figure 6, the spectra again follow the trends of Nieves et al. (2011), with the transmittance increasing gradually with the treatment parameters [28]. Based on figure 7, the untreated SHIEFB has the highest absorbances, namely 0.014, 0.024, 0.026, 0.023, and 0.029 at wavenumbers 1740 cm-1, 1610 cm-1, 1598 cm-1, 1510 cm-1, and 1465 cm-1, respectively. At 5% NaOH concentration, the absorbance started to decrease, and it then decreased gradually with increasing concentration. At 10% and 20% NaOH concentration, however, the absorbance did not follow the expected pattern, in which 20% NaOH should have given a lower absorbance at all five wavenumbers. This shows that increasing the NaOH concentration above 10% does not significantly lower the absorbance further, because most of the hemicellulose and lignin structure is already absent from the fibre. The wavenumbers 1610 cm-1, 1598 cm-1, 1510 cm-1, and 1465 cm-1 indicate lignin content [28,30,31]; therefore, as the NaOH concentration increased, the lignin content was reduced, up to 10% NaOH concentration. According to Nieves et al. (2011), the absorbance of the un-soaked EFB at 1740 cm-1 is 0.121. Comparing the absorbances of all parameters in the hot water treatment, none exceeded 0.121; the highest was 0.015, at 15 minutes of treatment. This demonstrates that most of the hemicellulose had already been removed during the steam explosion process. Since all wavenumbers other than 1740 cm-1 are assigned to lignin, one of them is taken into account for comparison with data from the previous study [30-32]. According to Nieves et al. (2011), the absorbance at 1610 cm-1 was 0.198. Comparing this to the data in figure 7, the absorbances at 5%, 10%, and 20% NaOH are 0.023, 0.012, and 0.011, respectively. This indicates that the removal of lignin in figure 7 is more effective than in the previous study. The sodium hydroxide concentration used in the previous study was 8%, without performing SEP.
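The band-height comparison above can be expressed compactly in code; the sketch below assumes a spectrum stored as wavenumber/transmittance arrays (the loader name is hypothetical) and uses the standard conversion A = -log10(T):

import numpy as np

bands = [1740, 1610, 1598, 1510, 1465]   # cm-1: hemicellulose and lignin markers

def band_absorbance(wavenumber, transmittance, band, window=4.0):
    # mean absorbance within +/- window cm-1 of the marker band;
    # transmittance is expected as a fraction in (0, 1]
    mask = np.abs(wavenumber - band) <= window
    return float(np.mean(-np.log10(transmittance[mask])))

# wn, T = load_spectrum("shiefb_hwe_90min.csv")   # hypothetical loader
# print({b: round(band_absorbance(wn, T, b), 3) for b in bands})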
The effectiveness of the lignin removal in this study is due to the soaking of EFB with sodium hydroxide, by which most of the lignin is removed after SEP.
Figure 7. Absorbance against NaOH concentration.

SEM analysis
The SEM provided a clear image of the pulp produced, as shown in figure 8. The morphology of the produced pulp shows that the fibre has been defibrillated, with the original fibre separated into smaller fibrils. Figure 8A shows that cleavages formed between the fibrils. It can also be observed from figure 8B that silica bodies were removed during pulp production, as evidenced by the craters that used to embody the silica bodies [33]. The fibrils also show a smooth structure because impurities such as oil that covered the surface have been removed [33-35]. The formation of cleavages and the removal of embedded silica bodies from the fibres were due to the severe treatments throughout the pulp production process, which increase the porosity and surface area of the fibre [33]. The formation of cleavages between the fibres is an advantage, because many studies have shown that cleavage increases the surface roughness of the fibre, leading to better mechanical interlocking and consequently improved adhesion of the fibre [34,36].

Conclusion
Soaking of empty fruit bunch with 3% sodium hydroxide has been shown to increase the effectiveness of hemicellulose and lignin removal from EFB fibres. The HWE treatment removed the most hemicellulose at 90 minutes, achieving the highest dissolved sugar content of 2.2%. For the AE treatment, 10% NaOH was the most preferable condition for the removal of hemicellulose, achieving 2.2% dissolved sugar content compared to 2.3% at 20% NaOH. This will greatly reduce the chemical consumption and cost of alkaline reagent for the AE treatment. These results prove that the steam-exploded SHIEFB effectively assisted the removal of lignin and hemicellulose during the HWE and AE treatments, and that it is possible to implement the sodium hydroxide-assisted steam explosion pre-treatment in the production of cellulose from EFB biomass.
Study of high-strength steel fiber concrete strength characteristics under elevated temperatures using mathematical modelling methods

This article presents a study of the influence of various factors on the strength characteristics of high-strength steel fiber concrete, using a mathematical model obtained on the basis of experimental data. The factors under study are the steel-fiber reinforcement, the scale factor, the temperature and the duration of heating. The fundamental approach to building the model is the use of exponential functions, which allows moving from a product of the studied factors to their sum and greatly simplifies the further use of the obtained dependencies in engineering practice. Separately, it should be noted that one of the studied factors (the duration of exposure to elevated temperature) is specified through the result of its impact on the final response function; that is, it accounts for the destructive processes in the concrete structure under short-term heating and the structural processes under long-term heating. In addition, an optimization of the proposed mathematical model is performed, as a result of which it is established that the maximum strength values of the steel fiber concrete are achieved within a certain interval of the HSFRC reinforcement percentage.

Introduction
High rates of construction of residential and industrial buildings with complex architectural forms, and the construction of special structures such as large-span bridges, skyscrapers, offshore oil platforms, tanks for liquid and gas storage, and the protective shells of nuclear power plants, require the development of new effective concretes. Dispersion-reinforced high-strength concrete is one such material. Fiber reinforcement is an effective means of increasing the strength and deformability of concrete under compression and tension, as well as the crack resistance and rigidity of reinforced concrete structures. This is especially important for heavily loaded structures of high-rise buildings, and for structures exposed to variable temperature and humidity effects. However, the use of high-strength steel fiber concrete (HSFRC) in structures subjected to thermal effects is constrained by insufficient knowledge of the influence of elevated temperatures, and of the duration of their action, on HSFRC strength characteristics. Studies of the physical and mechanical properties of steel fiber concrete and of methods for calculating and designing structures, without considering the effects of elevated temperatures, are presented in the works of S Caprielov [1], A P Krichevsky [2], L G Kurbatov [3,4], I A Lobanov [5], V I Korsun [6,7], V I Morozov [8], Yu V Pukharenko [9], F P Rabinovich [10], A E Sargsyan [11], K V Talantova [12], and A Kelly [13]. On the other hand, sufficiently complete experimental data on the effect of elevated temperatures (up to +200°C) on the main strength and deformation characteristics (though not of HSFRC), and on reinforced concrete structures under uneven heating conditions, were obtained in the works of V I Veretennikov [14], M A Ivanov [15], V I Korsun [16,17], K D Nekrasov [18], M Collegadi [19], C Galle [20], V M Malhotr [21], A P Krichevsky [22], A F Milovanov [23], N So Tupov [24], and S L Fomin [25]. On this basis, the study of the strength characteristics of high-strength steel fiber concrete at elevated temperatures can be considered an urgent scientific task of sectoral importance.
However, such studies require special equipment and long test durations, and it is not rational to conduct them without mathematical and computer modelling tools. Fiber: made of steel with curved ends, produced by the Khartsyzsk branch of PJSC «Production Association «STALKANAT-SILUR». It has the following characteristics: length l = 60.0 ± 6.0 mm, diameter d = 0.75 ± 0.07 mm; length and height of the bent end, respectively, l1 = 5.0 ± 1.0 mm, h1 = 2.9 ± 0.5 mm; ultimate tensile strength 1160-1290 MPa. The research program included three groups of experiments:
- research of HSFRC temperature and shrinkage deformations, and study of the characteristics of their strength and deformation properties under axial compression and tension in the temperature range from +20°C to +200°C;
- investigation of the dependence of the shrinkage deformations and of the strength and deformation characteristics of the modified concrete on the size (scale) of the prototypes;
- investigation of the effect of indirect mesh and dispersed reinforcement on the strength and deformation of modified concrete prism samples under axial compression.
As a result, the experimental data in table 1 were obtained.

A mathematical model of the HSFRC strength characteristics in compression under heating up to +200°C
For the analytical description of this dependence, we use a polynomial of the 3rd degree, which is able to provide an inflection point. In general, such a polynomial can be represented as

P(x) = a0 + a1 x + a2 x^2 + a3 x^3,   (1)

where x is one of the factors that may affect the desired structural behaviour. To reduce the product of coefficients to a sum of polynomials, we define the coefficients k_i using an exponential function with polynomials of varying degrees,

k_i = exp(P_i(x_i)),   (2)

so that the model takes the form

y = y0 · k_t · k_μ · k_m,   (3)

where k_t is the coefficient corresponding to the temperature change of the concrete, k_μ to the percentage of reinforcement, and k_m to the scale factor. It should be noted that when adding the polynomials we obtain a single free term; considering that all three coefficients are treated equally, the obtained value of the free term can subsequently be divided into three equal parts, one for each coefficient. Taking the logarithm of both sides of equation (3) gives

ln y = ln y0 + P_t(t) + P_μ(μ) + P_m(m).   (4)

Thus, we obtain an equation that is linear in its 7 unknown coefficients. Next, we use the least squares method to determine the values of the polynomial coefficients at which the sum of the squared regression residuals takes its minimum value; a sketch of this fitting step is given at the end of this section. As a result, a mathematical model was obtained for short-term heating, and, correspondingly, for long-term heating a similar equation was obtained with a coefficient of determination R^2 = 0.892. In accordance with the research of Professor V Korsun [17], the functions expressing the influence of temperature and of heating duration on the strength characteristics of concrete under elevated temperatures reflect mainly destructive processes in the concrete structure during short-term heating and mainly structural processes during long-term heating. It should be noted that increasing the number of polynomial coefficients does not affect the accuracy of the simulation results.

Research of the strength characteristics of steel fiber concrete on the basis of the mathematical model
To analyze the resulting model, it is convenient to use its graphical display (figure 2). Similarly, we study the influence of the factors on the other physical and mechanical characteristics of steel fiber concrete.
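A minimal Python sketch of the least-squares step, assuming a hypothetical table of test records and assuming the seven unknowns are split as cubic in temperature, quadratic in reinforcement percentage, and linear in the scale factor (plus the free term); the file name and column layout are illustrative:

import numpy as np

# hypothetical records: temperature, reinforcement %, scale factor, strength
T, mu, m, y = np.loadtxt("hsfrc_tests.csv", delimiter=",", unpack=True)

# ln y = b0 + b1*T + b2*T^2 + b3*T^3 + b4*mu + b5*mu^2 + b6*m  (7 unknowns)
X = np.column_stack([np.ones_like(T), T, T**2, T**3, mu, mu**2, m])
b, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

y_hat = np.exp(X @ b)
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print("coefficient of determination:", r2)  # the paper reports 0.892 for long-term heating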
At the same time, the mathematical model itself remains unchanged; only the coefficients of the polynomial dependencies of equation (3), obtained on the basis of the experimental data, change.
Figure 2. Graphic visualization of the results of modelling the physical and mechanical properties of steel fiber concrete.

Conclusion
In this paper, we proposed an approach to creating a mathematical model of the physical and mechanical characteristics of HSFRC under prolonged temperature exposure. Three factors (the percentage of reinforcement, the scale factor and the temperature) enter the final model in explicit form. The fourth factor (the duration of exposure to temperature) is specified implicitly, not by specific values but by the result of its impact on the final response function (accounting for destructive processes in the concrete structure during short-term heating and structural processes during long-term heating). On the basis of the obtained mathematical model, the interval of optimal HSFRC reinforcement percentages was selected, which allows the maximum steel fiber concrete strength values to be reached. In addition, the resulting mathematical model can be applied to the calculation of any material characteristic, requiring only the recalculation of the polynomial coefficients obtained on the basis of experimental data.
Exclusive $B \to PV$ Decays and CP Violation in the General Two-Higgs-Doublet Model

We calculate all the branching ratios and direct CP violations of $B \to PV$ decays in a most general two-Higgs-doublet model with spontaneous CP violation. As the model has rich CP-violating sources, it is shown that the new physics effects on direct CP violations and branching ratios in some channels can be significant when adopting the generalized factorization approach to evaluate the hadronic matrix elements, which provides good signals for probing new physics beyond the SM in future B experiments.

I. INTRODUCTION
Understanding the origin of CP violation (CPV) is an important subject, not only for exploring the basic symmetries of space-time and elementary particles but also for understanding the evolution of our universe. It is well known that in the Standard Model (SM) of particle physics, CP violation is characterized by a single weak phase in the Cabibbo-Kobayashi-Maskawa matrix [1], which provides a good explanation for the direct CP violation ε′/ε [2] established in kaon decays [3], and also for the direct CP violation [4] observed in B-meson decays [5]. Though the theory of the strong and electroweak (EW) interactions in the SM has met with extraordinary success, it is widely believed that the SM cannot be the final theory of particle physics, in particular because the Higgs sector of the SM is not yet well understood and the CP phase in the CKM matrix is not enough to explain the baryon-antibaryon asymmetry of the universe. It was suggested that CP symmetry may be broken spontaneously [6]. Many possible extensions of the SM Higgs sector have been proposed [7]. Other possible extensions of the SM have also been explored, such as the supersymmetric model (SUSY), little Higgs models and extra dimensions, all of which improve on the SM in some respects. But no single model is good enough to solve all the problems of the SM, and it is therefore worthwhile to consider all possibilities beyond the SM. As one of the simplest extensions of the SM, the so-called two-Higgs-doublet model (2HDM), which introduces an extra Higgs doublet without imposing ad hoc discrete symmetries, has been investigated widely from various points of view [8-17]. Motivated solely by the origin of CP violation, a general two-Higgs-doublet model with spontaneous CP violation (model III 2HDM) has been shown to provide one of the simplest and most attractive frameworks for understanding the origin and mechanism of CP violation at the weak scale [12,13]. In such a model, there exist additional physical neutral and charged Higgs bosons and rich induced CP-violating sources arising from a single CP phase of the vacuum. In particular, the model III 2HDM allows flavor-changing neutral currents (FCNC), suppressed by an approximate U(1) flavor symmetry; this differs from the so-called model I and model II 2HDM, in which an ad hoc discrete symmetry (Z2 symmetry) is imposed to avoid FCNC. It is known that FCNCs involving the first two generations are highly suppressed by low-energy experiments, while those involving the third generation are not as severely suppressed. So the model III 2HDM can be parameterized in a way that satisfies the current experimental constraints. The constraints on the model III 2HDM from neutral meson mixings (K0-K̄0, D0-D̄0, B0-B̄0) [18] and from radiative decays of the bottom quark [19-21] have been studied in detail.
In this note, we shall investigate the possible new effects of the model III 2HDM on two-body charmless nonleptonic B decays B → h1h2, with h1, h2 being charmless light hadrons. These decays have triggered considerable theoretical interest in understanding the SM, and the channels are also thought to be sensitive and important for exploring new physics beyond the SM, as they involve the so-called tree (current-current) b → (u, c) amplitudes and/or b → (d, s) penguin amplitudes, with both QCD and electroweak penguin transitions participating. In the 2HDM there are five Higgs particles, including the H0 Higgs of the SM, and the extra Higgs bosons mediate all the penguin transitions. As the couplings of the Higgs bosons to fermions carry complex CP phases in the model III 2HDM, CP-violating effects occur even in the simplest case in which all the tree-level FCNC couplings are negligible. With the improvement of experimental precision, more and more direct CPV has been observed and will be tested much more precisely in future experiments. The paper is organized as follows. In Sec. II, we first describe the theoretical framework, including a brief introduction to the two-Higgs-doublet model with spontaneous CP violation, i.e., the model III 2HDM, as well as the effective Hamiltonian and the generalized factorization formula, which is our basic tool for estimating the branching ratios and CPV asymmetries of B meson decays. In Sec. III, we make a detailed calculation, with numerical results evaluated from a factorization ansatz that allows us to express the matrix elements <h1h2|Heff|B> as a product of two factors <h1|J1|0><h2|J2|B>, and make quantitative predictions. Our conclusions and discussions are presented in the last section.

A. Outline of the two-Higgs-doublet model
One of the important developments of the SM is the so-called Higgs mechanism, a spontaneous symmetry breaking mechanism by which the gauge bosons and fermions acquire their masses. In the SM, a single Higgs doublet of SU(2) is sufficient to break the SU(2)L × U(1)Y symmetry down to U(1)em and to generate masses for the gauge bosons and fermions. Nevertheless, the Higgs sector of the SM has not been experimentally tested, although enormous efforts have been made. As for the origin of CP violation, the SM gives no explanation, since there is only a single neutral Higgs boson in the SM and its interaction couplings are fixed by the known parameters and the fermion masses. Many attempts have been made by both theorists and experimentalists to explore the mechanisms of CP violation since its discovery in 1964. Spontaneous CP violation requires at least two Higgs doublets. A consistent and simple model that provides a spontaneous CP violation mechanism was constructed completely in a general two-Higgs-doublet model [12,13]. Such a model III 2HDM not only addresses the origin of CP violation in the SM but also induces rich new sources of CP violation, which can lead to new phenomenological effects that are promising to be tested at the future B factories and LHCb. In this note, we focus on the phenomenological applications of the model III 2HDM to two-body charmless hadronic B → PV decays. The two complex Higgs doublets in the model III 2HDM are expressed as in [12,13,14,16,17], with a Higgs potential whose couplings λi (i = 1, 2, ..., 8) are all real parameters.
If all λi are non-negative, the minimum occurs at a vacuum configuration in which v1 and v2 are the vacuum expectation values of φ1 and φ2, respectively, and δ is the relative phase of the vacuum. It is clear that in the above potential, CP nonconservation can only occur through the vacuum, with δ ≠ 0. Such a CP violation appears as an explicit one in the potential when λ6 ≠ 0 [13]. After a unitary transformation, it is natural and convenient to use a basis in which v = √(v1² + v2²) is related to the W mass by MW = gv/2. Here H0 plays the role of the Higgs boson of the standard model, and H± are the charged scalar pair, H± = sin β φ1± e−iδ − cos β φ2±, where tan β = v2/v1. As for the neutral Higgs bosons, φ1 and φ2 are not the neutral mass eigenstates but linear combinations of the CP-even neutral mass eigenstates H0 and h0, parameterized by a mixing angle α; when α = 0, (φ1⁰, φ2⁰) coincide with (H0, h0). For simplicity, mixing with the pseudoscalar A0 is not considered here. Let us consider a Yukawa Lagrangian in which φi (i = 1, 2) are the two Higgs doublets, φ̃1,2 = iτ2φ*1,2, Qi,L (Uj,R) with i = 1, 2, 3 are the left-handed isodoublet quarks (right-handed up-type quarks), Dj,R are the right-handed isosinglet down-type quarks, and ξU,D1,ij and ξU,D2,ij (i, j = 1, 2, 3 are family indices) are, in general, non-diagonal matrices of Yukawa couplings. After diagonalizing the quark mass matrices, the part of the Yukawa Lagrangian relevant to the decays considered in this paper can be written in terms of U, the mass eigenstates of the u, c, t quarks, and D, the mass eigenstates of the d, s, b quarks, where VCKM is the Cabibbo-Kobayashi-Maskawa matrix and ξ̂U,D are the FCNC couplings in the mass-eigenstate basis. They may be parameterized in terms of the quark masses, ξ̂U,D_ij = λij √(mi mj)/v, so that the FCNCs of the first two generations are naturally suppressed by the small quark masses, while the third generation has more room for FCNC contributions. In this paper, we choose ξU,D to be diagonal, ξU,D_ii ≡ ξU,D_i (i = s, c, b, t), and neglect the contributions of the first-generation quarks. The really leading contribution then arises from the diagram with a top quark in the loop, and the relevant couplings are ξ̂U,D_ts and ξ̂U,D_tb. From the above parameterization, the free parameters of the model are the couplings λij and the Higgs masses, and their values can be constrained through experiments. In the model III 2HDM with spontaneous CP violation, the induced CP violation can be classified into the following four types according to the interactions involved [12,13]: i) from the CKM matrix; ii) from the charged Higgs couplings to the fermions, ξcharged; iii) from the neutral Higgs couplings to the fermions, ξneutral; iv) from the CP-nonconserving Higgs potential V(φ), via mixings among the scalar and pseudoscalar bosons. The model allows flavor-changing neutral currents at tree level and via loop effects due to exchanges of Higgs bosons. One of the most stringent tests comes from the radiative decays of B mesons, and in particular from the inclusive decay rate of b → sγ, which has the least hadronic uncertainty. Other constraints come from B0-B̄0 mixing, ρ0, Rb, the neutron electric dipole moment, etc. In this note, we shall consider possible new effects in charmless hadronic two-body decays of bottom mesons.
B. Effective Hamiltonian and Wilson coefficients
The effective Hamiltonian for charmless B decays with ∆B = 1 is built from the operators Q1,...,10, Q7γ and Q8g, which can be found in [24]; Q1 and Q2 are the current-current operators, Q3-Q6 are the QCD penguin operators, and Q7γ and Q8g are, respectively, the magnetic penguin operators for b → sγ and b → sg. Here the mass of the external strange quark is neglected compared to the external bottom-quark mass. The additional new operators, related to the neutral-Higgs-mediated processes b → sq̄q, are the operators Q11,...,16 of [25], built from the densities (q̄1q2)S±P = q̄1(1 ± γ5)q2 with q = u, d, s, c, b. The primed operators Q′i are obtained from the Qi by exchanging L ↔ R. As the contributions of the primed operators are suppressed by ms/mb, we shall neglect their effects in our present considerations. The Wilson coefficients Ci, i = 1, ..., 10 have been calculated at LO [22,23] and NLO [24] in the SM, and at LO in the 2HDM [26,27]; their initial coefficient functions in the 2HDM are given in [27,28], and the LO expressions for C7γ and C8g are sufficient, where xt = mt²/MW² and y = mt²/MH±². The Inami-Lim functions A, B, D, E, ... are known in the SM and the 2HDM [26]. For the new operators Q11,...,16, the corresponding Wilson coefficients Ci, i = 11, ..., 16 have been calculated at leading order in [25,29]; the explicit expressions of CQ1 and CQ2 can be found in [29]. For the B → PV processes, the Wilson coefficients must be run from the MW scale down to the scale O(mb). For C1-C10 the NLO corrections should be included, while for C8g and C7γ the LO results are sufficient; the details of the running of the Wilson coefficients can be found in Ref. [24]. As for the neutral-Higgs-induced operators, the one-loop anomalous dimension matrices can be divided into two disentangled groups [25]. Since no NLO Wilson coefficients Ci, i = 11, 12, ..., 16 are available, we simply use the LO Wilson coefficients for a numerical estimate.

C. Generalized factorization formula
For our present purpose, we use the generalized factorization method [30-33] to evaluate the hadronic matrix elements. We know that in the full theory, the leading-order QCD corrections to the weak transition are of the form αs ln(MW²/(−p²)) for massless quarks, where p is the off-shell momentum of the external quark lines and depends on the system under consideration. We can choose a renormalization scale µ and split this logarithm: the piece containing ln(MW²/µ²) is included in the Wilson coefficients c(µ) and summed to all orders in αs using the renormalization group equation, while the second piece, arising in the matrix element evaluation, is small. The matrix element is related to the tree-level matrix element through a factor in which the µ dependence of the matrix elements is approximately extracted into a function g(µ); in this way, the effective Wilson coefficients c_eff should in principle be renormalization-scale independent. Thus it is necessary to incorporate the QCD and EW corrections into the operators. The perturbative QCD and EW corrections to the matrices m̂s and m̂e from the vertex and penguin diagrams can be found in [33-35]. Using the standard parameterization of the decay constants and form factors, and using the Fierz transformation, one can easily obtain all the tree-level matrix elements of Q1,...,10 [30,32]. For the new operators Q11,...,16, additional factorization formulas are needed [36], with k = pB − pP and q = pB − p; here f⊥V and fPT are the tensor decay constant of the vector meson and the tensor form factor relevant to B → P decays.
ε* is the polarization vector of the vector meson. The tree-level matrix elements of Q11,...,16 can be factorized (taking b → s as an example) into products of hadronic matrix elements of the form <V|q̄′σµνq|0><P|s̄σµνb|B>, where N′c is the effective color number associated with the six new operators, which is taken to be universal in all decay channels; in this paper we fix it to N′c = 3 to estimate the neutral-Higgs effects. As for the SM operators, besides the perturbative QCD and EW corrections to the hadronic matrix elements that can be factorized into the effective Wilson coefficients, there still exist nonfactorizable effects, such as spectator-quark effects, annihilation diagrams and space-like penguins. Consider an arbitrary operator of the form O = q̄1α Γ q2β q̄3β Γ′ q4α, which arises from the Fierz transformation of a singlet-singlet operator, with Γ and Γ′ some combinations of Dirac matrices. Using the color identity, the matrix element of M → P1P2 can be expanded into a factorizable term plus two terms on the right-hand side that are nonfactorizable; the contributions of the latter are included in the effective color number Neffc. To evaluate the decay amplitudes, it is useful to introduce the combinations of Wilson coefficients a2i−1 = c2i−1 + c2i/Neffc and a2i = c2i + c2i−1/Neffc. The values of Neffc can be found in [32], and it is reasonable to take Neffc(V − A) = 2 and Neffc(V + A) = 5. From now on, we drop the superscript "eff" throughout the paper for convenience.

III. B → PV DECAYS IN THE MODEL III 2HDM
Based on the effective Hamiltonian obtained via the operator product expansion and renormalization-group evolution, one can write down the amplitudes for B → PV decays and calculate the branching ratios and CP-violating asymmetries, once a method is specified for computing the hadronic matrix elements. For the purpose of this paper, we explore the new physics contributions to the exclusive decays B → PV in the general model III 2HDM with spontaneous CP violation. For the numerical estimates, we employ the generalized factorization approach described in the previous section. We begin with the definitions of the branching ratio and the CP-violating asymmetry in terms of the decay amplitudes A and Ā of B and B̄, respectively, where ε is the polarization vector of the vector meson. The input parameters used in the calculation are listed in Table I. Here fM and fTM are the decay constants of the mesons; fM comes from experimental measurements, while fTM is calculated from quenched lattice QCD and QCD sum rules [37,38]. For the form factors of the pseudoscalar and vector mesons we use the results from light-cone sum rules (LCSR) [36,39], but for the form factor of η′ we use the value of the BSW model [40]; for the η-η′ mixing effects we use the results of [41]. The B → P(V) form factor values are listed in Table II: for comparison, the first row gives the LCSR results [36,37,39], the second row the results from sum rules in the framework of heavy quark effective field theory [42], and the third row the BSW model values [40], with the values in square brackets giving the B → η′ form factors. In the model III 2HDM, λij (i, j = c, s, b, t), mH±, mh0, mA0 and mH0 are free parameters that should be constrained by experiment. It was shown from B0d,s-B̄0d,s mixing that the parameters |λcc| and |λss| can be as large as around 100 [21], and their phases are not strongly constrained.
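The two bookkeeping steps above, forming the effective combinations a_i and evaluating the direct CP asymmetry from the B and B̄ amplitudes, reduce to a few lines of Python; the operator-pair assignment of Neff and the A_CP sign convention below are our assumptions rather than statements of the paper:

def effective_coeffs(c, nc_va=2.0, nc_vpa=5.0):
    # a_{2i-1} = c_{2i-1} + c_{2i}/Nc ; a_{2i} = c_{2i} + c_{2i-1}/Nc
    a = {}
    for i in (1, 3, 5, 7, 9):                  # operator pairs (1,2), (3,4), ..., (9,10)
        nc = nc_vpa if i in (5, 7) else nc_va  # pairs Q5-Q8 assumed (V-A)(V+A)
        a[i] = c[i] + c[i + 1] / nc
        a[i + 1] = c[i + 1] + c[i] / nc
    return a

def direct_cp_asymmetry(amp, amp_bar):
    # illustrative convention: A_CP = (|Abar|^2 - |A|^2) / (|Abar|^2 + |A|^2)
    return (abs(amp_bar)**2 - abs(amp)**2) / (abs(amp_bar)**2 + abs(amp)**2)

# c = {i: ... for i in range(1, 11)}   # effective Wilson coefficients at mu ~ m_b
# a = effective_coeffs(c)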
In our present considerations, we simply fix their phases to π/4 to see their effects. For λtt and λbb, the constraints come from the experimental results on B-B̄ mixing, Γ(b → sγ), Γ(b → cτν̄τ), ρ0, Rb and the electric dipole moments (EDMs) of the electron and neutron [14,16,25,29,43]. For the numerical calculation, we consider the following three typical parameter sets, all allowed by the present experiments: Case A: |λtt| = 0.15, |λbb| = 50; Case B: |λtt| = 0.3, |λbb| = 30; Case C: |λtt| = 0.03, |λbb| = 100. Fixed values are assumed for the phases of λtt and λbb and for the Higgs masses. All the numerical results are presented in Tables V-IX.

IV. CONCLUSIONS AND DISCUSSIONS
The charged-Higgs-mediated one-loop FCNC effects on the ∆B = 1 charmless decays are mostly characterized by the Wilson coefficient Ceff8g, which enters Ceff3,...,8, and there are no new operators beyond the basic operators Q1,...,10; the charged-Higgs contributions to the Wilson coefficients are given in Table III. By contrast, the neutral-Higgs-mediated processes bring in the new operators Q11,...,16 with the new Wilson coefficients C11,...,16; these are nonzero when the neutral Higgs couples to the second and third generations of quarks, and the numerical results are presented in Table IV. From the above calculations, it is seen that in some decay channels the new physics contributions can be significant, especially to the CP violations.
a) As we have set the Yukawa couplings λiu and λid to zero, the neutral Higgs contributions to the B → (ρ, ω, K*)π and (ρ, ω)K decays are effectively absent; only the charged Higgs gives new contributions. One can see that the branching ratio of the B → K̄0ρ0 decay in the model III 2HDM is the same as the SM prediction (about 1.55 × 10−6), far below the large central value of the experimental result, (5.4 ± 0.9) × 10−6. Though the annihilation and exchange diagrams are not taken into account here, their contributions are still not expected to provide such an enhancement, so some new mechanism is needed to explain this discrepancy. The same situation appears in the B → K−ρ+ and K*−π+ decays, where the experimental results are much larger than the theoretical predictions in both the SM and the 2HDM when simply using the generalized factorization approach. Though the branching ratios can be enhanced by using improved QCD factorization (QCDF) [44], the resulting values are still smaller than the measured results.
b) The model III 2HDM prediction for the CP violation of the Bd → Kφ decay is 5-7 times larger than the SM prediction, which can be a signal for new physics in future experiments; however, both predictions for the branching ratio are smaller than the experimental one.
c) The SM and model III 2HDM predictions for the branching ratio of Bd → K*0π0 are the same in size and consistent with the experimental result at the 1σ level, while the new physics prediction for the CPV can flip the sign of the SM one and be 1-5 times larger in size, still within the 1σ error of the experiments.
d) In Bd → K*(η, η′) decays, the new physics effects on the CPV become significant. In B → K*η, the 2HDM prediction is negative while the SM one is positive.
In B → K*η′, the 2HDM prediction can be as large as 40%, about seven times the SM prediction.
e) In the Bd → ρ+π− decay, the model III 2HDM can enhance the CP violation from about −20% in the SM to about −30%. Both the SM and 2HDM predictions for the branching ratio of the Bd → ρ0π0 decay are much smaller than the experimental result; this inconsistency cannot be improved even in the QCD factorization method [44]. As for the CP violation, the SM and model III 2HDM predictions have opposite signs, with magnitudes of (10-15)%. As the current experimental error is still too large to draw a conclusion, a much more precise measurement is needed to test this.
f) In Bd → ωπ decays, the new physics effect on the CP violation may be distinguishable from the SM prediction, as it not only flips the sign but also enhances the magnitude by a factor of three.
For B0s → PV decays, the new physics contribution can be large in some decay channels.
a) In Bs → K*η, the 2HDM enhances the direct CP violation to about −50%, compared to the SM prediction of −28.8%; but for Bs → K*η′, the new physics contribution is destructive and reduces the SM prediction of −37% to about −20%.
b) In Bs → ρη(′) decays, the new physics contribution to the branching ratio is destructive but enhances the CPV to about four times the SM prediction.
c) In Bs → φη(′) decays, the new physics effects on both the branching ratios and the CPV are significant.
d) In the Bs → K0K̄*0 decay, the 2HDM can give about a 25-70% enhancement of the branching ratio.
e) In the Bs → K0φ decay, the new physics effect on the CPV is very significant: the SM prediction is almost zero, but the new physics effects can enhance it to about −10%.
For Bu → PV decays, there are also some new effects from the extra Higgs contributions:
a) In the Bu → π−K̄*0 decay, the 2HDM prediction for the CPV can be ten times the SM one and is closer to the experimental value.
b) In the Bu → K−φ decay, the CPV can be about 10% in the 2HDM, which is much larger than the SM prediction of 1.44% and lies within 2σ of the experimental results.
c) In Bu → K*−η decays, the new physics contribution reduces the CPV to about a half or a quarter of the SM value and is much closer to the experimental result.
d) In the Bu → ρ−η decay, the 2HDM prediction for the CPV is 2-3 times the SM one and much closer to the experimental central value.
e) In the Bu → π−φ decay, the new physics enhancements of the branching ratio and the direct CPV can both be significant.
f) In the Bu → K*−K0 decay, the 2HDM predictions for the CPV are 20-24%, much larger than the SM prediction of −1.73%. On the contrary, in the Bu → K*0K− decay, the 2HDM prediction for the CPV can be much smaller than the SM one.
From the above results, we see that in some decay channels the theoretical predictions for the branching ratios are still far from the experimental results, not only in the SM but also in the model III 2HDM, e.g. in the B → Kρ and K*π decays; even using the improved QCDF, the situation cannot be improved much. There must be some new mechanism to remedy these discrepancies. For simplicity, we have not considered the possible effects of final state interactions (FSI), nor the contributions from annihilation and exchange diagrams, although they may play a significant role in some decay channels. As for the factorization part, in principle Neffc can vary from channel to channel, as in the case of charm decays.
However, in energetic two-body B decays, Neffc is expected to be process-insensitive [30,32], and the preferred values obtained from the data are Neffc(V − A) = 2 and Neffc(V + A) = 5 [30,32]. In the numerical calculation, we have considered only three representative parameter choices in the general model III 2HDM, consistent with the experimental constraints. We have also entirely neglected the first-generation Yukawa couplings and the off-diagonal matrix elements of the Yukawa coupling matrix, such as λtc and λsb, in order to eliminate the FCNC at tree level. However, it is still possible that FCNC involving the third-generation quarks exists at tree level, in which case the constraints can be less stringent, allowing nonzero off-diagonal elements.
Variation of Canonical Height for Fatou points on $\mathbb{P}^1$

Let $f: \mathbb{P}^1\to \mathbb{P}^1$ be a map of degree $>1$ defined over a function field $k = K(X)$, where $K$ is a number field and $X$ is a projective curve over $K$. For each point $a \in \mathbb{P}^1(k)$ satisfying a dynamical stability condition, we prove that the Call-Silverman canonical height for the specialization $f_t$ at the point $a_t$, for $t \in X(\bar{\mathbb{Q}})$ outside a finite set, induces a Weil height on the curve $X$; i.e., we prove the existence of a $\mathbb{Q}$-divisor $D = D_{f,a}$ on $X$ so that the function $t\mapsto \hat{h}_{f_t}(a_t) - h_D(t)$ is bounded on $X(\bar{\mathbb{Q}})$ for any choice of Weil height associated to $D$. We also prove a local version, that the local canonical heights $t\mapsto \hat{\lambda}_{f_t, v}(a_t)$ differ from a Weil function for $D$ by a continuous function on $X(\mathbb{C}_v)$, at each place $v$ of the number field $K$. These results were known for polynomial maps $f$ and all points $a \in \mathbb{P}^1(k)$ without the stability hypothesis, and for maps $f$ that are quotients of endomorphisms of elliptic curves $E$ over $k$. Finally, we characterize our stability condition in terms of the geometry of the induced map $\tilde{f}: X\times \mathbb{P}^1 \rightarrow X\times \mathbb{P}^1$ over $K$; and we prove the existence of relative N\'eron models for the pair $(f,a)$, when $a$ is a Fatou point at a place $\gamma$ of $k$, where the local canonical height $\hat{\lambda}_{f,\gamma}(a)$ can be computed as an intersection number.

Introduction
In this article, we study the variation of the canonical height in families of maps f : P^1 → P^1. More precisely, we fix a number field K and a smooth projective curve X defined over K. Let k = K(X) be the associated function field, and let K̄ denote an algebraic closure of K. Any map f : P^1 → P^1 of degree d defined over k will specialize to a morphism f_t : P^1 → P^1 of degree d, defined over K̄, for all but finitely many t ∈ X(K̄). For points a ∈ P^1(k), we are interested in properties of the function t → ĥ_{f_t}(a_t), where ĥ_{f_t} is the Call-Silverman canonical height for f_t as defined in [5], as t varies in X(K̄). An important case was studied in the early 1980s. Given any elliptic surface E → X with a zero section, defined over a number field K, and given a section P : X → E also defined over K, the fiber-wise canonical height t → ĥ_{E_t}(P_t) is known to define a Weil height on the base curve X(K̄) [32]. That is, there exists a Q-divisor D_{E,P} on X, of degree equal to the geometric canonical height ĥ_E(P) (viewing E as an elliptic curve over the function field k), so that

ĥ_{E_t}(P_t) = h_{D_{E,P}}(t) + O(1)    (1.1)

for any choice of Weil height associated to D_{E,P}. The notation O(1) represents a bounded function, defined on the complement of finitely many points in X(K̄); the bound depends on the pair (E, P) and the choice of Weil height h_{D_{E,P}}. This can be viewed as a dynamical example on P^1 as follows. Projecting each smooth fiber E_t to P^1 by the natural degree-two quotient that identifies a point x ∈ E_t with its inverse −x, and taking, for example, the multiplication-by-2 endomorphism on E_t, we obtain a family of maps f_t : P^1 → P^1, well-defined for all but finitely many t ∈ X(K̄). See, for example, [31, §6.4]. The section P projects to an element p ∈ P^1(k), and we have ĥ_{f_t}(p_t) = 2ĥ_{E_t}(P_t), so that

ĥ_{f_t}(p_t) = h_{D_{f,p}}(t) + O(1)    (1.2)

on the complement of finitely many points in X(K̄), for a Q-divisor D_{f,p} = 2D_{E,P} on X.
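For intuition, the Call-Silverman height of [5] can be computed numerically as a limit of naive heights along the orbit, ĥ_f(a) = lim h(f^n(a))/d^n. A small sketch over Q, with an illustrative map and starting point that are ours and not taken from the paper:

from fractions import Fraction
from math import log

def f(z):            # sample degree-2 map z -> z^2 - 1 (illustrative only)
    return z * z - 1

def naive_height(z):  # h(p/q) = log max(|p|, |q|) for z = p/q in lowest terms
    return log(max(abs(z.numerator), abs(z.denominator)))

z, d = Fraction(1, 2), 2
for n in range(1, 9):
    z = f(z)
    print(n, naive_height(z) / d**n)   # the quotients converge to the canonical height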
For any given map f : P^1 → P^1 of degree > 1 defined over k, and each point a ∈ P^1(k), Call and Silverman proved that the specializations satisfy

ĥ_{f_t}(a_t) = h_D(t) + o(h_D(t))    (1.3)

as h_D(t) → ∞, for any choice of Weil height h_D on X(K̄) associated to a divisor D of degree equal to the geometric (i.e., over k) canonical height ĥ_f(a) [5, Theorem 4.1]. Recently, Ingram improved the error term o(h_D(t)) in (1.3) to O(h_D(t)^{2/3}) [22]. Inspired by (1.2) and (1.3), Call and Silverman asked whether there can exist a divisor D = D_{f,a} on X so that the stronger result of the form (1.2) will hold for every f and a; see the Remark after Theorem 4.1 in [5]. We give a partial answer to this question.

Definition 1.1. A point a ∈ P^1(k) is said to be totally Fatou for f if it is an element of the non-archimedean Fatou set at every place γ ∈ X(K̄) of k.

We refer the reader to Section 4 for more information. We note here that throughout this article we identify the places of k with those of k ⊗ K̄ and with the points γ ∈ X(K̄). The notion of a totally Fatou point has also appeared in [26] in the setting of number fields.

Theorem 1.2. Let K be a number field and X a smooth projective curve over K. Let f : P^1 → P^1 be a map of degree > 1 defined over k = K(X), and suppose that a ∈ P^1(k) is totally Fatou for f. Then there exists a Q-divisor D = D_{f,a} on X, of degree equal to the geometric height ĥ_f(a), so that t → ĥ_{f_t}(a_t) defines a Weil height for D on X(K̄). More precisely, for any choice of Weil height h_D associated to D, we have

ĥ_{f_t}(a_t) = h_D(t) + O(1)

as a function of t ∈ X(K̄) \ Y, for a finite set Y outside of which ĥ_{f_t}(a_t) is well defined. The bounds on ĥ_{f_t}(a_t) − h_D(t) depend on f, a, and the choice of Weil height h_D.

We shall see that the divisor D is given by

D = Σ_γ λ̂_{f,γ}(a) · (γ),

where ĥ_f = Σ_γ λ̂_{f,γ} is a local decomposition of the geometric canonical height for f over k. The fact that D is a Q-divisor for totally Fatou points a, so that λ̂_{f,γ}(a) ∈ Q and therefore also ĥ_f(a) ∈ Q, is new; see Proposition 6.1, addressing a question in [10]. As a special case of Theorem 1.2 we recover (1.1) and (1.2), because all points in P^1(k) are totally Fatou for the maps f coming from elliptic curves. The statement of Theorem 1.2 was proved by Ingram for polynomial maps f(z) ∈ k[z] and for all points a ∈ P^1(k), without the totally Fatou assumption [21]. Polynomial maps have a totally invariant superattracting fixed point at ∞, simplifying computations of the canonical height. In fact, much more is known for polynomials f and for maps f coming from elliptic curves, and we address some of this below in the context of Theorem 1.7; see the works of Favre and Gauthier [14,15] and of Silverman [28,29,30]. However, even with the totally Fatou assumption, new complications arise for rational maps that do not exist for polynomials or for maps coming from elliptic curves, as we discuss after Theorem 1.7 and illustrate by example in Section 7.

The totally Fatou condition. In contrast with the setting of number fields, it may be true that every point a ∈ P^1(k) is either preperiodic or totally Fatou for maps f defined over k. (Note that the statement of Theorem 1.2 holds trivially when a is preperiodic for f, as ĥ_{f_t}(a_t) = 0 at all points t where f_t is defined, and we can take D = 0.) We know of no examples, nor any mechanisms to prove existence, of maps f defined over k and points a ∈ P^1(k) with infinite orbit for which a lies in the non-archimedean Julia set of f at a place γ of k.

Conjecture 1.3. Let K be a number field and X a smooth projective curve over K.
Let $f : \mathbb{P}^1 \to \mathbb{P}^1$ of degree $> 1$ be defined over $k = K(X)$. Then every point $a \in \mathbb{P}^1(k)$ is either preperiodic or totally Fatou for $f$.

Note that the conjecture remains open for polynomial maps $f$, though the conclusion of Theorem 1.2 is known to hold for all points $a \in \mathbb{P}^1(k)$ in that case [21]. In Section 7, we observe that for all of the previously known cases of Theorem 1.2 in the literature where the maps $f$ are not polynomials (nor conjugate to polynomials), the points $a \in \mathbb{P}^1(k)$ are totally Fatou for $f$. Here we prove that "most" points in $\mathbb{P}^1(k)$, from a density point of view, are totally Fatou. Let $k_\gamma$ denote the completion of $k$ at the place $\gamma \in X(\bar{K})$.

Theorem 1.4. For any $f : \mathbb{P}^1 \to \mathbb{P}^1$ of degree $> 1$ defined over $k = K(X)$, the set of totally Fatou points for $f$ in $\mathbb{P}^1(k)$ is open and dense in the product topology on $\mathbb{P}^1(k)$, coming from the embedding of $k$ into $\prod_{\gamma \in X(\bar{K})} k_\gamma$.

Theorem 1.4 exploits the non-local-compactness of $k_\gamma$; it is false for maps $f$ defined over number fields $K$, where the Fatou set in a completion $K_v$ can fail to be dense at archimedean or non-archimedean places $v$.

To understand the totally Fatou condition better, we relate it to the geometry of the induced rational map $\tilde{f} : X \times \mathbb{P}^1 \dashrightarrow X \times \mathbb{P}^1$ on the complex surface $X \times \mathbb{P}^1$, defined by $(t, z) \mapsto (t, f_t(z))$. Let $I(\tilde{f})$ denote the (finite) indeterminacy set of $\tilde{f}$ in $(X \times \mathbb{P}^1)(\bar{K})$. For a point $a \in \mathbb{P}^1(k)$, let $C_a$ denote the graph in $X \times \mathbb{P}^1$ of the associated holomorphic map $t \mapsto a(t)$ from $X$ to $\mathbb{P}^1$.

Theorem 1.5. Let $f : \mathbb{P}^1 \to \mathbb{P}^1$ be of degree $> 1$, defined over a function field $k = K(X)$, with the number field $K$ chosen so that all indeterminacy points of $\tilde{f}$ lie in $(X \times \mathbb{P}^1)(K)$. A point $a \in \mathbb{P}^1(k)$ is totally Fatou for $f$ if and only if there exists a birational morphism $Y \to X \times \mathbb{P}^1$, defined over $K$, so that the induced map $\tilde{f}_Y : Y \dashrightarrow Y$ satisfies $C^Y_{f^n(a)} \cap I(\tilde{f}_Y) = \emptyset$ for all $n \geq 0$, where $C^Y_{f^n(a)}$ is the proper transform of the curve $C_{f^n(a)}$ in $Y$. Moreover, the modification $Y$ can be chosen so that $\tilde{f}_Y$ is algebraically stable, meaning that no curve is mapped by an iterate $(\tilde{f}_Y)^n$ into the indeterminacy set $I(\tilde{f}_Y)$, and such that $C^Y_{f^n(a)}$ intersects the singular fibers of the projection $Y \to X$ only at smooth points, for all $n \geq 0$.

Remark 1.6. It was proved in [9, Theorem E] that, for every $f$ of degree $> 1$ over $k$, there exists a modification $Y \to X \times \mathbb{P}^1$ so that the induced map $\tilde{f}_Y : Y \dashrightarrow Y$ is algebraically stable. Theorem 1.5 implies we can further modify $Y$ so that the orbit of $C_a$ is disjoint from the indeterminacy locus of $\tilde{f}_Y$, when $a$ is totally Fatou. The choice of $Y$ will depend on $a$.

We use Theorem 1.5 to prove that the geometric local canonical height $\hat{\lambda}_{f,\gamma}(a)$ can be computed as an intersection number in $Y$, assuming the point $a$ is Fatou at $\gamma$; see Theorem 4.11 and compare with [5, Theorem 6.1]. In analogy with the study of elliptic curves and abelian varieties, the concept of a "weak Néron model" at a place $\gamma$ of $k$ was introduced in [5] for dynamical systems; but it is known that these models often fail to exist for maps $f : \mathbb{P}^1 \to \mathbb{P}^1$ defined over $k_\gamma$ in the absence of good reduction, for example when there is a repelling periodic point in $k_\gamma$ [20]. In fact, as Ingram noted in the Introduction to [21], if $f : \mathbb{P}^1 \to \mathbb{P}^1$ defined over $k$ is neither Lattès nor isotrivial, then it cannot have a weak Néron model at every place $\gamma$. The proof of Theorem 1.5 provides the existence of a relative type of weak Néron model, for a pair $(f, a)$ with $a$ being Fatou at $\gamma$, in which the orbit of the Fatou point can be arranged to be integral.
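Before turning to the structure of the proofs, the asymptotic (1.3), and the sharper boundedness asserted in Theorem 1.2, can be observed numerically in the simplest polynomial family, where the theorem is known by [21]. In the sketch below (ours; the function name and parameter choices are our own), we take $f_t(z) = z^2 + t$ over $k = \mathbb{Q}(t)$ and $a = 1$; here $\hat{h}_f(a) = \lim_n \deg_t f^n(1)/2^n = 1/2$, so we expect $\hat{h}_{f_t}(1) - \frac{1}{2}\log t$ to remain bounded for integer parameters $t > 0$.

```python
# Numerical sanity check (ours) of the variation of canonical height for the
# polynomial family f_t(z) = z^2 + t and the point a = 1. For integer t > 0
# the orbit stays in Z, so only the archimedean place contributes and
#   h_hat_{f_t}(1) = lim_n 2^{-n} log |f_t^n(1)|.
import math

def canonical_height_at_one(t, iterations=12):
    x = 1
    for _ in range(iterations):
        x = x * x + t          # exact integer arithmetic; x grows doubly fast
    return math.log(x) / 2**iterations

for t in [2, 10, 100, 10**6]:
    h = canonical_height_at_one(t)
    print(f"t = {t:>8}: h_hat = {h:10.6f},  h_hat - (1/2)log t = "
          f"{h - 0.5 * math.log(t):+.6f}")
```

The printed differences stay bounded (and in fact shrink as $t$ grows), as boundedness of the form (1.2) predicts.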
Theorem 1.5 follows from the proof of [9, Theorem D] and the classification of $\gamma$-adic Fatou components in the Berkovich projective line $\mathbb{P}^{1,an}_\gamma$ (over a complete and algebraically closed field $\mathbb{C}_\gamma$ containing the completion $k_\gamma$) [27] [2] [9, Appendix]; many of the ideas were already present in [20], and what remained was to show that the full orbit $\{f^n(a)\}_{n \geq 0}$ can be disjoint from the indeterminacy set after only finitely many blowups of $X \times \mathbb{P}^1$.

Local version of Theorem 1.2. In the setting of elliptic surfaces $E \to X$, Silverman strengthened Tate's result (1.1) by showing that the function $B_{E,P}(t) := \hat{h}_{E_t}(P_t) - h_{D_{E,P}}(t)$, defined for all but finitely many $t \in X(\bar{\mathbb{Q}})$, can be expressed as a sum over all places $v$ of the number field $K$ of functions with good behavior [28, 29, 30]. More precisely, he proved that the local height functions for $\hat{h}_{E_t}$ on $E_t(\bar{\mathbb{Q}})$ and for $h_{D_{E,P}}$ on $X(\bar{\mathbb{Q}})$ can be chosen so that all $v$-adic contributions to $B_{E,P}$ extend to define continuous functions on $X(\mathbb{C}_v)$, even across the singular fibers, and that all but finitely many of the $v$-adic contributions are $\equiv 0$. We also prove a local continuity result, strengthening the conclusion of Theorem 1.2:

Theorem 1.7. Under the hypotheses of Theorem 1.2, we assume that the number field $K$ is extended so that $\mathrm{supp}\, D_{f,a} \subset X(K)$. There are local decompositions
$$\hat{h}_{f_t}(a_t) = \sum_{v \in M_K} N_v\, \hat{\lambda}_{f_t,v}(a_t) \qquad \text{and} \qquad h_D(t) = \sum_{v \in M_K} N_v\, \lambda_{D,v}(t)$$
so that, for each place $v$ of $K$, the difference
$$V_v(t) := \hat{\lambda}_{f_t,v}(a_t) - \lambda_{D,v}(t)$$
extends to a continuous function on the Berkovich analytification $X^{an}_{\mathbb{C}_v}$.

Here, $M_K$ denotes the set of places of the number field $K$, and the weights $N_v$ are the same as those appearing in the product formula $1 = \prod_{v \in M_K} |\alpha|_v^{N_v}$ for $\alpha \in K^*$. The conclusion of Theorem 1.7 is known for polynomial maps $f(z) \in k[z]$ and for all $a \in \mathbb{P}^1(k)$ without the totally Fatou hypothesis [19, 14]; their proofs take clever advantage of the compactness of the orbit-closures of all points in the $\gamma$-adic Julia sets, as subsets of $\mathbb{P}^1(k_\gamma)$ (see [14, Theorem 3], [20, Theorem 4.8], [33, Proposition 6.7]), which does not hold for general rational maps $f$. See, for example, the $f$ of §7.4. Moreover, even for totally Fatou points $a \in \mathbb{P}^1(k)$, the proof of Theorem 1.7 requires a new approach. The local canonical height functions $\hat{\lambda}_{f_t,v}$ for polynomials $f$ can be normalized so they are always non-negative. The challenge here is the absence of a uniform lower bound on the functions $V_v$ of Theorem 1.7, independent of $a$. (This unboundedness was exploited in [11] to show $V_v$ can fail to extend continuously for maps $f(z) \in k(z)$ when a point $a \in \mathbb{P}^1(k')$ is defined over a larger field such as $k' = K_v(X)$; see Remark 1.8.)

Finally, we remark that Theorem 1.7 as stated does not imply Theorem 1.2. For polynomial maps $f$ and each $a \in \mathbb{P}^1(k)$, the functions $V_v$ of Theorem 1.7 will satisfy $V_v \equiv 0$ at all but finitely many places $v$ of $K$ [21], as is the case for sections of elliptic surfaces [30]. However, by contrast, it is not the case that the functions $V_v$ will be $\equiv 0$ for all but finitely many places $v$ for general rational maps $f$; there can be nontrivial contributions at infinitely many places, even for totally Fatou points $a \in \mathbb{P}^1(k)$. See the example of §7.1; such examples were studied in depth in [25]. Nevertheless, we extract the summability of the magnitudes of $V_v$, over all places $v$ of $K$, from the proof of Theorem 1.7.

Julia points. We use the totally Fatou hypothesis on $a \in \mathbb{P}^1(k)$ in a crucial way in our proofs of Theorems 1.2 and 1.7. The study of Julia points with infinite orbit is more subtle.
As we show in Section 7, there exist examples of the following: (1) a map f : P 1 → P 1 of degree 2 defined over k = Q(t) with bad reduction at t = 0, for which the non-archimedean Julia set at t = 0 is a Cantor set in the completion P 1 (k 0 ) at t = 0, and the local geometric heightλ f,0 (a) is in R \ Q for all Julia points a with infinite orbit. See §7.3; compare the main results of [10]. (2) a map f : P 1 → P 1 of degree 2 defined over k = Q(t) with bad reduction at t = 0, and a point a defined by a formal power series in Q[[t]] in the non-archimedean Julia set of f at t = 0 for which, at the place v = ∞ of Q, the function V v of Theorem 1.7 will fail to be defined at t = 0. See §7.4. For either example, if such a point a can be constructed to be algebraic over k, then, upon replacing k with a finite extension, it would provide a counterexample to Conjecture 1.3, and the results we prove for totally Fatou points would fail to extend to all a ∈ P 1 (k). More precisely, example (1) would show that the divisor D constructed in Theorem 1.2, defined by (1.4), needs to be an R-divisor instead of a Q-divisor; compare Proposition 6.1. Example (2) would show that the sequences of functions converging to define the V v of Theorem 1.7 would not always converge uniformly in the neighborhood of a singularity; compare Theorem 5.1. Remark 1.8. It is known that, working with maps f defined over the field ℓ = C(t), there exist points a ∈ P 1 (ℓ) that are totally Fatou for f but for which the analog of the (archimedean) function V ∞ of Theorem 1.7 is unbounded on the base curve P 1 (C) [11]. The construction in [11] is different from the construction for example (2) and uses Baire Category. The results of Favre and Gauthier show that such examples over ℓ or examples of the types (1) and (2) above cannot exist for polynomials f [14]. Acknowledgements. We thank Hexi Ye for helpful discussions about this problem. We also thank the anonymous referees for their comments and suggestions. This research was supported by NSF grant DMS-2050037. M K -terminology In this section, we fix some basic terminology associated to the number field K and remind the reader of fundamental facts about elements of k = K(X). Let M K denote the set of places of the number field K, each giving rise to an absolute value | · | v on K which is normalized to extend one of the standard absolute values on the field Q of rational numbers. The set M K satisfies the product formula, for all x ∈ K * . We let K v denote the completion of K at v, so that For each place v of K, we let C v be the completion of an algebraic closure K v . We also fix an embedding K ֒→ C v . We let X an v denote the Berkovich analytification of the curve X over the field C v . We will use the following terminology, as in [24, Chapter 10]: An M K -constant is a function C : M K → R so that C v = 0 at all but finitely many places v. An M K -quasiconstant is a function C : Fix a point γ ∈ X(K) and a choice of ω γ ∈ K(X) defining local coordinates for X near γ. An M K -neighborhood of γ is a collection of open neighborhoods U v of γ in X(C v ), for v ∈ M K , given locally by {|ω γ | v < 1} for all but finitely many places v. This definition is independent of the choice of ω γ uniformizing X near γ, as a consequence of the following proposition. Let g denote the genus of X. For each γ ∈ X(K), choose ξ γ ∈ K(X) so that ξ γ has a pole of order 2g + 1 at γ and no other poles in X. The divisor of a function h ∈ K(X) is for all t ∈ X(K) \ supp(h) and v ∈ M K . 
Moreover, for each γ ∈ X(K), the notion of M K -neighborhood of γ is well defined. Proof. For each γ ∈ supp(h), let U γ be the complement in X of supp(h) \ {γ} and all zeroes of ξ γ , so that U γ is a Zariski-open neighborhood of γ. The functions h γ := h 2g+1 (ξ γ ) ordγ h and 1/h γ and 1/ξ γ and ξ γ ′ for γ ′ = γ in supp(h) are all regular on U γ . Let U h = X \ supp(h), so that h, 1/h and each ξ γ , γ ∈ supp(h), are regular on U h . Note that As in [24, Chapter 10, Lemma 1.1], there exists a projective embedding of X into P N , defined over K, so the complement of each coordinate hyperplane in P N intersects X in an open subset of some U ∈ U. Indeed, letting F U be the divisor consisting of the sum of points in the complement of U ∈ U, we can find effective divisors H U so that the elements of {F U +H U : U ∈ U} are linearly equivalent, and so that there is no point in the intersection of the supports of F U +H U . (This is because mH −F U will be very ample for any choice of ample H and every U ∈ U, for all sufficiently large m ∈ N.) The elements {F U + H U : U ∈ U} thus induce a morphism φ : X → P k for some k. Choosing any projective embedding i : X ֒→ P r defined over K, for some r > 0, our desired embedding comes from postcomposing φ × i : X → P k × P r with the Segre embedding P k × P r ֒→ P (k−1)(m−1)−1 . Let A N (K) denote affine space of dimension N, and let · · · : x N ) are the coordinates of P N . For each j = 0, . . . , N, let U(j) ∈ U be an element containing X ∩ {x j = 0}. For each v ∈ M K , we let E j,v be the set of all points in P N (K) with projective coordinates (x 0 : x 1 : · · · : x N ) so that |x j | v is maximal. Then E j,v is the unit polydisk in the affine chart where x j = 0 with coordinates y i = x i /x j . For each v ∈ M K , these affine bounded sets cover all of P N (K) and so also X, and the intersection of E j,v with X is a subset of U(j). We let E j be the collection {E j,v : v ∈ M K }, for j = 0, . . . , N. Fix j. For U(j) = U γ , since h γ and 1/h γ are both regular on U γ , we have an M K -constant g γ such that It is also the case that 1/ξ γ is regular on U γ , and so is ξ γ ′ for each γ ′ = γ in supp(h), so we can enlarge g γ if needed so that on E j,v . Moreover, we can also arrange that on E j,v , because either h or 1/h is regular on U γ . By increasing g γ yet again, it follows that Similarly for U(j) = U h , we can find an M K -constant s so that . This completes the proof of the first statement of the proposition. To see that the notion of M K -neighborhood is well defined, we fix γ 0 ∈ X(K) and choose any ω 0 ∈ K(X) with a simple zero at γ 0 . For the covering U of X associated to ω 0 , note that U γ 0 is the unique element containing γ 0 . So for each v and j, if the set E j,v contains γ 0 , then it must lie in U γ 0 . The inequality (2.1) implies that |ω 0 | 2g+1 v = |ξ γ 0 | −1 v on such E j,v , for all but finitely many v. On the other hand, we also have that if |ξ γ 0 (t)| v > 1 at a point t ∈ E j,v , for some j, then E j,v is contained in U γ 0 for all but finitely many v (because |ξ γ 0 | v ≤ 1 on the E j,v 's in the other elements of U). In other words, any M K -neighborhood of γ 0 defined by ω 0 coincides with {t ∈ X(C v ) : |ξ γ 0 (t)| v > 1} for all but finitely many places v of K. This completes the proof of the proposition. Escape rates and Weil heights Throughout this section, we fix f : P 1 → P 1 of degree d ≥ 2, defined over k = K(X), and any point a ∈ P 1 (k). 
We let Res F ∈ k * denote the homogeneous resultant of P and Q; see, for example, [31, §2.4]. Set Note that S(F, A) is a finite set. Convention 3.1. We enlarge the number field K, if needed, so that S(F, A) ⊂ X(K). 3.2. Geometric escape rates and a divisor on X. Recall here that throughout we identify the places of k with the points γ ∈ X(K), with a slight abuse of terminology. For each γ ∈ X(K), we work with the absolute value on k defined by |z| γ := e − ordγ z , and the norm · γ on k 2 given by There is a constant C γ ≥ 1 so that for all (z, w) ∈ k 2 ; we can take C γ = 1 for all γ ∈ S(F ) [31,Proposition 5.57]. The escape rate of A for F at γ is the quantity It exists in R, by (3.3), and it is equal to 0 for all γ ∈ S(F, A); see, e.g., [31,Proposition 5.58]. We define an R-divisor by The support of D(F, A) is contained in S(F, A) and so in X(K) by Convention 3.1. If we had chosen different lifts of f and a, say cF and bA for c, b ∈ k * , then It follows that D(cF, bA) and D(F, A) are linearly equivalent R-divisors on X. 3.3. A Weil height associated to D(F, A). Let g denote the genus of X. For each γ ∈ X(K), choose a meromorphic function ξ γ ∈ K(X) so that ξ γ has a pole of order 2g + 1 at γ and no other poles. Let D = D(F, A) be defined by (3.5), and recall that supp This function extends continuously to the Berkovich analytificiation X an v \ supp D. A Weil height for D can be defined by for all t ∈ X(K) \ supp D, and we may set h D (t) = 0 for t ∈ supp D. This h D is indeed a Weil height associated to the R-divisor D, as it is an R-linear combination of Weil heights built from the local functions 3.4. Arithmetic escape rates. For each place v of the number field K, we define a norm For each t ∈ X(K) \ S(F ), we let F t denote the specializations of F . We continue to use the collection of functions {ξ γ : γ ∈ S(F )} from §3. 3 Proof. Recall that F = (P, Q) for homogeneous polynomials P and Q of degree d with coefficients in k. By our choice of β γ , there is an M K -neighborhood U γ and an M K -constant b so that e −bv ≤ max{|c t | v : coefficients c of β γ P and β γ Q} ≤ e bv (3.8) for each t ∈ U γ v and each v ∈ M K , by Proposition 2.1. By increasing the constant b, the upper bound on β γ t F t (z, w) v / (z, w) d v follows from the triangle inequality. We can enlarge b at the archimedean places, if needed, so that The final statements of the proposition follow from the same combination of Proposition 2.1 with [31, Proposition 5.57], because the coefficients of F will have no poles and Res F will have no poles or zeroes outside of S(F ). Similar to the geometric escape rates of (3.4), we can define arithmetic escape rates, working at each place v of the number field K. For each v ∈ M K , the escape rate function for the pair It exists in R for all t ∈ X(K) \ S(F, A) by Proposition 3.2; see, e.g., [31,Proposition 5.58]. The proof of convergence for (3.9) shows it is locally uniform in t, so that In fact, it extends to be continuous on the Berkovich analytification X an v \ S(F, A); see, e.g., [1, pp. 295-296] where the escape rate is "Berkovich-ized". If we had chosen different lifts of f and a, say cF and bA for c, b ∈ k * , then These escape rate functions provide local height expressions for the canonical heightĥ ft evaluated at a t . In particular, we havê for all t ∈ X(K) \ S(F, A). See, for example, [31,Theorem 5.59]. Note that the sum over all places of K is independent of the choice of lifts F and A, by the product formula. Variation of canonical height. 
Recall that we are trying to understand if the difference
$$\hat{h}_{f_t}(a_t) - h_D(t)$$
is bounded, as claimed in Theorem 1.2, where $h_D$ is a choice of Weil height for $D = D(F, A)$ defined by (3.5). Recalling that any two choices of Weil height for the same divisor differ by a bounded function (and in fact, an $M_K$-bounded one), it suffices to work with the Weil height constructed in (3.7). Assuming that the point $a \in \mathbb{P}^1(k)$ is totally Fatou for $f$, a hypothesis which will be defined and examined in the next section, we aim to prove three things: (1) that the local geometric height $G_{F,\gamma}(A)$ is in $\mathbb{Q}$ at all points $\gamma \in X(K)$, so that the divisor $D = D(F, A)$ of (3.5) will be a $\mathbb{Q}$-divisor; (2) that the local differences $V_v$ between the arithmetic escape rates $G_{F_t,v}(A_t)$ of (3.9) and the local Weil functions for $h_D$ extend to bounded (and in fact continuous) functions on the Berkovich analytification $X^{an}_{\mathbb{C}_v}$; and (3) that the sum $\sum_{v \in M_K} N_v\, V_v(t)$ is uniformly bounded over all points $t \in X(\bar{K}) \setminus S(F, A)$.

The non-archimedean Fatou set

Throughout this section, we fix $f : \mathbb{P}^1 \to \mathbb{P}^1$ of degree $d \geq 2$, defined over $k = K(X)$. For each fixed $\gamma \in X(K)$, we let $k_\gamma$ be the completion of $k$ with respect to the valuation $\mathrm{ord}_\gamma$, and let $L_\gamma$ be the completion of an algebraic closure of $k_\gamma$. In this section, we introduce and study the totally Fatou condition that is assumed for Theorem 1.2, and we prove Theorems 1.4 and 1.5.

4.1. The Fatou set. Fix $\gamma \in X(K)$. Let $d_\gamma(x, y)$ denote the chordal distance between $x$ and $y$ in $\mathbb{P}^1(L_\gamma)$. Explicitly, if $x = (x_1 : x_2)$ and $y = (y_1 : y_2)$, then
$$d_\gamma(x, y) = \frac{|x_1 y_2 - x_2 y_1|_\gamma}{\max\{|x_1|_\gamma, |x_2|_\gamma\}\,\max\{|y_1|_\gamma, |y_2|_\gamma\}}.$$
The non-archimedean Fatou set of $f$ at $\gamma$ is the set $\Omega_\gamma(f)$ of all points $x \in \mathbb{P}^1(L_\gamma)$ for which we can find an open disk $D_x$ containing $x$ so that the family of functions $\{f^n|_{D_x}\}$ is equicontinuous in the distance $d_\gamma$. See, for example, [3, Chapter 5]. Its complement is the non-archimedean Julia set of $f$ at $\gamma$. This Fatou set $\Omega_\gamma(f)$ will be all of $\mathbb{P}^1(L_\gamma)$ at $\gamma$ where $f$ has good reduction. In our case, this implies that $\Omega_\gamma(f) = \mathbb{P}^1(L_\gamma)$ for all $\gamma \notin S(F)$, the singular set defined in (3.1), for any choice of homogeneous polynomial lift $F$ of $f$. A point $a \in \mathbb{P}^1(k)$ is totally Fatou for $f$ if $a \in \Omega_\gamma(f)$ at all $\gamma \in X(\bar{K})$.

4.2. Hole-avoiding pairs. Now fix a point $a \in \mathbb{P}^1(k)$. Fix $\gamma \in X(\bar{K})$, and choose homogeneous lifts $F$ of $f$ and $A$ of $a$ normalized so that
$$\mathrm{ord}_\gamma F = \mathrm{ord}_\gamma A = 0, \qquad (4.1)$$
where $\mathrm{ord}_\gamma F$ and $\mathrm{ord}_\gamma A$ are defined in §3.1, so that the specializations $F_\gamma$ and $A_\gamma$ are well defined. The holes of $f$ at $\gamma$ are the points $x = (x_1 : x_2) \in \mathbb{P}^1(\bar{K})$ for which $F_\gamma(x_1, x_2) = (0, 0)$. Holes exist if and only if $\mathrm{Res}\, F_\gamma = 0$. We say that the pair $(f, a)$ is hole-avoiding at $\gamma$ if the specializations satisfy $F^n_\gamma(A_\gamma) \neq (0, 0)$ for all $n \geq 0$. In particular, the pair $(f, a)$ is hole-avoiding at $\gamma$ for all points $a \in \mathbb{P}^1(k)$ if $\mathrm{Res}\, F_\gamma \neq 0$. It is easy to check that this definition is independent of the choice of lifts $F$ and $A$, as long as they satisfy (4.1).

Example 4.1. Consider a quadratic map $f$ over $k = \mathbb{Q}(t)$ whose lift $F$ specializes at $t = 0$ to $F_0(z, w) = (z(z - w), zw)$, so that $f_0(z) = z - 1$ away from the hole. The point $0 = (0 : 1) \in \mathbb{P}^1(\mathbb{Q})$ is the unique hole for $f$. A point $a \in \mathbb{P}^1(k)$ will therefore fail to be hole-avoiding at $t = 0$ if and only if it specializes to $a_0 \in \mathbb{Z}_{\geq 0}$. Indeed, for $a_0 = n_0 \in \mathbb{Z}_{\geq 0}$, the iterates of the lift $A = (a, 1)$ will satisfy
$$F_0^{n_0 + 1}(A_0) = (0, 0).$$

We may view $f : \mathbb{P}^1 \to \mathbb{P}^1$ over $k$ as a rational map of the surface $X \times \mathbb{P}^1$ to itself, defined over the number field $K$ by $(t, z) \mapsto (t, f_t(z))$. We may view the point $a \in \mathbb{P}^1(k)$ as a section of the projection $X \times \mathbb{P}^1 \to X$, also defined over $K$. The following is immediate from the definitions: the pair $(f, a)$ is hole-avoiding at $\gamma \in X(K)$ if and only if the iterates $f^n(a)$ (as sections of the fibered surface $X \times \mathbb{P}^1 \to X$) are disjoint from the indeterminacy locus of the induced map $\tilde{f} : X \times \mathbb{P}^1 \dashrightarrow X \times \mathbb{P}^1$ within the fiber $\{\gamma\} \times \mathbb{P}^1$, for all $n \geq 0$. Note that all of the indeterminacy points of $\tilde{f}$ in $X \times \mathbb{P}^1$ are contained in the fibers over $S(F) \subset X$ for every choice of homogeneous polynomial lift $F$.
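The hole-avoiding condition is easy to test in examples by iterating the specialized lift in exact arithmetic. The sketch below is ours; it uses the map $f(z) = z(z+1)/(z+t)$ that reappears in Remark 5.3 and §7.1 below, whose specialized lift at $t = 0$ is $F_0(z, w) = (z(z+w), zw)$ with $f_0(z) = z + 1$ and unique hole at $0 = (0:1)$, so that the pair $(f, a)$ fails to be hole-avoiding at $t = 0$ exactly when $a$ specializes to $a_0 \in \mathbb{Z}_{\leq 0}$.

```python
# Test (ours) of the hole-avoiding condition of Section 4.2 at t = 0 for the
# map f(z) = z(z+1)/(z+t) of Remark 5.3 / Section 7.1, with specialized lift
#   F_0(z, w) = (z(z+w), z*w),  i.e. f_0(z) = z + 1 away from the hole (0:1).
from fractions import Fraction

def F0(z, w):
    return z * (z + w), z * w

def is_hole_avoiding_at_0(a0, max_iter=50):
    """Iterate the lift A_0 = (a0, 1); the pair fails to be hole-avoiding
    iff some F_0^n(A_0) = (0, 0), which happens iff a0 is in {0, -1, -2, ...}
    (the orbit under z -> z + 1 then passes through the hole at z = 0)."""
    z, w = Fraction(a0), Fraction(1)
    for n in range(1, max_iter + 1):
        z, w = F0(z, w)
        if z == 0 and w == 0:
            return False, n
        if w != 0:                        # rescale; harmless, since F_0 is
            z, w = z / w, Fraction(1)     # homogeneous of degree 2
        else:
            z, w = Fraction(1), Fraction(0)   # the fixed point at infinity
    return True, None

for a0 in [1, Fraction(1, 2), 0, -3, Fraction(-5, 2)]:
    ok, n = is_hole_avoiding_at_0(a0)
    print(f"a_0 = {a0}: " +
          ("hole-avoiding" if ok else f"lift reaches (0,0) at n = {n}"))
```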
The term "hole" for an indeterminacy point was first used in [6]; it was meant to capture the idea that the mass of the measures of maximal entropy in the family f t , for t ∈ X(C) \ S(F ), was "falling into the holes" of f and its iterates f n at t = γ. The same condition appears in [25]. 4.3. The action of f on the Berkovich projective line. Fix γ ∈ X(K). We now reinterpret the notion of hole-avoiding in the language of the Berkovich projective line defined over the field L γ , which we denote by P 1,an γ , and the extension of f to a dynamical system on P 1,an γ . A good basic reference for the dynamics of f on P 1,an γ is [3]. Note that the definition of hole-avoiding extends naturally to elements a ∈ P 1 (k γ ), for k γ the completion of k at γ, By definition, the hole-directions for f from a Type II point ζ ∈ P 1,an γ are the connected components of P 1,an γ \ {ζ} that intersect the set of preimages f −1 (ζ). When ζ = ζ G is the Gauss point in \ Ω an γ (f ) has the property that the union U = n≥0 f n (U) is dense in P 1,an γ ; in fact, the set U omits at most 2 points, both in P 1 (L γ ) [3,Theorem 8.15]. Recall that Ω an γ (f )∩P 1 (L γ ) = Ω γ (f ), the non-archimedean Fatou set as we have defined it in §4.1. Proof. Let U be a connected component of P 1,an γ \ {ζ}. If U has non-empty intersection with the non-archimedean Julia set J an γ (f ) ∩ P 1 (L γ ), then the iterates of U must contain all Type II points, including ζ itself [3,Theorem 8.15]. So U must be a hole-direction from ζ for some iterate of f . We are now ready to prove the following result, needed to analyze the dynamics of totally Fatou points for the proof of Theorem 1.2 and Theorem 1.7: , and any point a ∈ P 1 (k γ ). The point a lies in the non-archimedean Fatou set Ω γ (f ) if and only if there exist a change of coordinates B ∈ PGL 2 (k) and iterates f n and f m so that the pair (Bf n B −1 , B(f m (a))) is hole-avoiding at γ. We shall see that one implication is straightforward from the definitions, that the existence of the hole-avoiding pair implies that a ∈ Ω γ (f ). To prove the converse implication, assuming a ∈ Ω γ (f ), we follow the proof of [9, Theorem D], which itself uses the Rivera-Letelier classification of Berkovich Fatou components in the Berkovich space P 1,an γ [27] [9, Appendix] and the Benedetto wandering domains theorem [2], while also keeping track of the orbit of the point a. In the language of [9], given a finite set Γ of Type II points in P 1,an γ , a connected component The k-split Type II points ζ are those in the PGL 2 (k)-orbit of the Gauss point ζ G . These are the Type II points that have k-rational points in infinitely many connected components of P 1,an \ {ζ}. Our proof strategy for Thoerem 4.6 also gives the following statement, which will be used in our proof of Theorem 1.5. Theorem 4.7. Fix any f : P 1 → P 1 of degree d ≥ 2 defined over k = K(X), a point γ ∈ X(K), and any Fatou point a ∈ Ω γ (f ) ∩ P 1 (k γ ). For any finite set of Type II points Γ, there exists a finite set Γ ′ ⊃ Γ so that the pair (f, Γ ′ ) is analytically stable and each point f n (a) of the orbit of a lies in an F -disk for Γ ′ . Moreover, if the elements of Γ are k-split, then we can choose Γ ′ so that its elements are also k-split. Remark 4.8. In [9, Theorem D], the existence of an analytically stable Γ ′ ⊃ Γ for the map f is proved, but without the additional conclusion about the orbit of the Fatou point a. Proof of Theorems 4.6 and 4.7. 
Fix a ∈ P 1 (k γ ), and assume first that there exist B ∈ PGL 2 (k) and integers n ≥ 1 and m ≥ 0 so that the pair ( where ζ G is the Gauss point. Then by Lemma 4.4, the point f m (a) and all iterates f jn+m (a), for j ≥ 0, do not lie in the hole-directions of f n from ζ B . But the existence of such an orbit implies that either f n (ζ B ) = ζ B or that f n (ζ B ) lies in a direction from ζ B which is not a hole-direction (for otherwise all points would either be in a hole-direction or mapped into a hole-direction under one iterate). If f n (ζ B ) = ζ B , then the hole-directions from ζ B for an iterate f jn , with j ≥ 1, coincide with directions that are mapped to the hole-directions for f n by some f ℓn with ℓ < j; if f n (ζ B ) = ζ B , then the hole-directions for the iterates f jn must coincide with the holes for f n , for all j ≥ 1. In either case, it then follows from Lemma 4.5 that f m (a) is not in J an γ (f ) ∩ P 1 (L γ ). In other words, a must be an element of the Fatou set Ω γ (f ). This proves one implication of Theorem 4.6. To prove the converse implication in Theorem 4.6, we need to find a coordinate change B with good properties. We do this by constructing a Type II point ζ B with the desired properties, and then we will choose any B ∈ PGL 2 (k) sending ζ B to the Gauss point ζ G . Along the way, we will prove Theorem 4.7. Let Γ be any finite set of Type II points. From [9, Theorem D], we know that there is a finite set of Type II points Γ ′ ⊃ Γ (which can be chosen to be k-split if Γ is k-split) so that the pair (f, Γ ′ ) is analytically stable. More precisely, the main theorem of [9, §3.2] states that, for every ζ ∈ Γ ′ , one of the following three cases must hold: (1) the orbit of ζ lies in Γ ′ , and f k (ζ) = f ℓ (ζ) for some ℓ > k ≥ 0; (2) some iterate of ζ lies in a wandering F -disk for Γ ′ , with a periodic boundary point ζ ′ ∈ Γ ′ ; or (3) some iterate of ζ lies in an F -component for Γ ′ that contains an attracting periodic point. Now fix a point a ∈ Ω γ (f ). Choose a Berkovich disk D a containing a and contained in Ω an γ (f ), with k-split Type II boundary point ζ a . Choose D a small enough so that the elements of Γ ′ are disjoint from the forward iterates f j (D a ) for all j ≥ 0; this is possible because D a ⊂ Ω an γ (f ). Following the proof in [9, §3.2], we use the classification of Fatou components (see [9,Theorem A.1]) to analyze the orbit of the disk D a , to choose a distinguished k-split Type II point to be ζ B , and to increase the set Γ ′ further so it contains ζ a and ζ B and remains analytically stable. First assume that a (and so also D a ) lies in a wandering Fatou component U. . The analytically stable pair (f, Γ ′ ) guaranteed by Theorem 4.7 gives rise to a birational morphism Y γ → U γ × P 1 defined over K, which is an isomorphism outside of {γ} × P 1 , and an algebraically stable mapf γ : Y γ Y γ liftingf . (See [9, §4] for details on the relationship between vertex sets Γ and modifications of the surface X × P 1 .) Recall that C a denotes the curve in X × P 1 defined the graph of t → a t , and that C Yγ f n (a) denotes the proper transform of the curve C f n (a) in Y γ , for each n ≥ 0. Let π : Y γ → U γ denote the projection. The indeterminacy points for the iterates (f γ ) n in π −1 (γ) are identified with J-components of Γ ′ , and the F -disks for Γ ′ are identified with smooth points in the fiber π −1 (γ) that are not indeterminate for any iterate off γ . 
Therefore, the conclusion of Theorem 4.7 about the Fatou point a guarantees that the curves C Yγ f n (a) are disjoint from I(f γ ) and intersect the fiber over γ in smooth points, for all n ≥ 0. Assuming that the point a is totally Fatou, we can repeat this argument over each γ ∈ X(K) wherẽ f : X × P 1 X × P 1 has indeterminacy; we glue the surfaces Y γ and maps f γ to obtain our desired rational mapf For the converse implication, let Y → X ×P 1 be any choice of birational morphism defined over K, and let π : Y → X be the projection to the first factor. Assume that a ∈ P 1 (k) lies in the non-archimedean Julia set at γ ∈ X(K). Then we know that the curve C Y a will intersect an indeterminacy point of some iterate (f Y ) j in the fiber of Y over γ, by Lemma 4.5. Indeed, any small Berkovich disk around a will map, under large iterates of f , over each of the Type II points corresponding to the components of Y over γ. There are now two cases to consider. If C Y a or some iterate C Y f n (a) intersects a component of the fiber over γ which is mapped byf Y into an indeterminacy point, then we are done. If not, then since the point p = C Y a ∩ π −1 (γ) is indeterminate for (f Y ) j , it must be that the point p is sent by (f Y ) m , for some m < j, to an element of I(f Y ). Consequently, C Y f m (a) intersects the indeterminacy set off Y , and the proof is complete. 4.5. Proof of Theorem 1.4. Assume that f is defined over k = K(X), for number field K. Enlarging K if necessary, we can assume that all places γ of bad reduction for f lie in X(K). At each place γ ∈ X(K) of k, we know that the non-archimedean Fatou set Ω γ (f ) ∩ P 1 (k) is open in P 1 (k), in the γ-adic topology. We also know that Ω γ (f ) ∩ P 1 (k) = P 1 (k) for all but finitely many γ. We will show that Ω γ (f ) ∩ P 1 (k) is dense in P 1 (k) for the remaining γ. Fix γ ∈ X(K) and a point b ∈ P 1 (k), and let ζ be any k-split Type II point in P 1,an γ bounding a disk around b. Consider all the connected components of P 1,an γ \ {ζ}. If one of these disks intersects P 1 (k), we call it a k-disk at ζ. In the natural identification of the set of components of P 1,an γ \ {ζ} with P 1 (K), the k-disks correspond to the rational points P 1 (K). If a k-disk at ζ intersects the Berkovich Julia set, we call it a Julia k-disk at ζ. If there are only finitely many Julia k-disks at ζ, then we can always find infinitely many k-disks at ζ that are fully contained in the Fatou set. This shows the existence of Fatou elements of P 1 (k) in the closed Berkovich disk around b bounded by ζ. If there are infinitely many Julia k-disks at ζ, then ζ is in the Julia set (because the Julia set is closed in P 1,an γ ), and ζ is therefore preperiodic [9, Proposition 3.9]. We are still able to find infinitely many k-disks at ζ that are fully contained in the Fatou set. Suppose that f m+n (ζ) = f m (ζ) for some m ≥ 0 and n ≥ 1. Let e be the local degree of f n at f m (ζ), as defined in [3, §7.4], so that e ≥ 1. For e > 1, the iterate f n induces a map g : P 1 → P 1 of degree e, defined over K, by the natural identification of P 1 (K) with the set of directions from f m (ζ). The Julia set of f (which coincides with the Julia set for f n ) in P 1,an γ is contained in the union of the hole-directions from f m (ζ) for f n and its iterates f jn , j ≥ 1, by Lemma 4.5. In other words, the Julia directions are identified with a subset of the union j≥0 g −j (E) for a finite set E ⊂ P 1 (K), corresponding to the hole-directions for f n . 
But this implies that there are only finitely many Julia k-disks from f m (ζ), because they correspond to a set in P 1 (K) with bounded Weil height, since deg g > 1. It follows that there were only finitely many Julia k-disks from ζ, a contradiction. So we conclude that e = 1. The action of f n at f m (ζ) therefore induces an automorphism A ∈ PGL 2 (K) acting on P 1 (K), the set of directions from f m (ζ). The Julia k-directions from f m (ζ) are contained in the union of the finitely many hole-directions of f n at f m (ζ) and the hole-directions for all iterates f nj , j ≥ 1. As before, these directions are identified with the union of a finite set E in P 1 (K) and the orbit of E under A −1 . Now let h : P 1 → P 1 be the map induced by f m from ζ to f m (ζ), defined over K, under any choice of identification of the k-directions from ζ and f m (ζ) with P 1 (K). Then we can find infinitely many k-disks at ζ that are fully contained in the Fatou set, as a consequence of the following: Lemma 4.9. For any number field K, any finite set E in P 1 (K), any A : P 1 → P 1 of degree 1 defined over K, and any nonconstant h : Proof. Choosing coordinates on P 1 over K, we can assume that A −1 (x) = αx for some α ∈ K * or that A −1 (x) = x + 1. If A has finite order, then there is nothing to show, as then j≥0 A −j (E) is finite while h(P 1 (K)) is infinite. So if A −1 (x) = αx, we can assume there exists a place v of K for which |α| v > 1. Then the set j≥0 A −j (E) = j≥0 (α j E) has no v-adic accumulation points except ∞. Choose any y 0 ∈ P 1 (K) so that h(y 0 ) = ∞, and let {y n } be any infinite sequence in P 1 (K) for which y n → y 0 v-adically. Then h(y n ) → h(y 0 ) v-adically. Therefore, letting Y be this sequence {y n }, after excluding at most finitely many elements from the sequence, we may conclude that For A −1 (x) = x + 1, we work with any archimedean place of K. Let y 0 ∈ P 1 (K) be a point for which h(y 0 ) = ∞, and select any sequence y n ∈ K for which y n → y at this place. Then, as before, letting Y be the complement of finitely many points in {y n }, we conclude that Repeating the above argument for all k-split points ζ, we see that U γ := Ω γ (f ) ∩ P 1 (k γ ) is open and dense in P 1 (k γ ) in the γ-adic topology. Let γ 1 , . . . , γ s ∈ X(K) denote the places for which U γ = P 1 (k γ ). Via the canonical embedding of k into γ∈X(K) k γ , we can approximate any tuple (x 1 , . . . , x s ) ∈ i U γ i by elements in k. This shows that totally Fatou points are open and dense in P 1 (k) in the topology induced from the product topology. 4.6. Intersection theory for a Fatou point. The existence of the resolution Y → X × P 1 constructed in Theorems 1.5 and 4.7 shows more. We now prove that the local geometric canonical heightλ f,γ (a), at each place γ of the function field k = K(X), can be computed as an intersection number in Y when a ∈ P 1 (k) is totally Fatou. In this way, for each place γ of k, we can view our surface Y as providing a relative type of Néron model, associated to the pair (f, a). Fix a choice of local canonical height functions {λ f,γ : γ ∈ X(K)} on P 1 (k) as in [31, §3.5], so thatĥ f (a) = γ∈X(K)λ f,γ (a) for every a ∈ P 1 (k). The local canonical height can be computed asλ f,γ (a) = − min{0, ord γ (a)} at all but finitely many places γ; we enlarge the number field K so that this finite set of places is contained in X(K). Recall that, as in the statement of Theorem 1.5, the curve C a is the section of X ×P 1 → X defined by t → a t , for any a ∈ P 1 (k). 
The curve C Y a is its proper transform in Y . Proposition 4.10. Let f : P 1 → P 1 be of degree d > 1, defined over a function field k = K(X). Fix γ ∈ X(K). Let π : Y → X × P 1 be a birational morphism defined over the number field K, which is an isomorphism outside of the line L γ = {γ} × P 1 . Let and assume thatf Y maps no component Y γ,i into an indeterminacy point off Y in E γ . Then, there exist rational numbers c γ,i ∈ Q, for i = 1, . . . , m γ , so the following holds. For each point a ∈ P 1 (k) such that the curve C Y f n (a) is disjoint from the indeterminacy locus I(f Y )∩E γ and the singular locus of E γ for every n ≥ 0, the local geometric canonical height of a at γ is computed byλ where (C a ·C ∞ ) γ is the intersection multiplicity of the curves C a and C ∞ in X ×P 1 at (γ, ∞). Combined with Theorems 1.5 and 4.7, we obtain: Theorem 4.11. Let f : P 1 → P 1 be of degree d > 1, defined over a function field k = K(X), and let a ∈ P 1 (k) be a totally Fatou point. Extending the number field K if necessary, let Y be the surface of Theorem 1.5, and let {c γ,i } be the rational numbers guaranteed by Proposition 4.10 over each γ ∈ X(K). Then the geometric canonical height of a satisfieŝ Proof. The theorem is almost immediate from Proposition 4.10 and the statement of Theorem 1.5, summing over all γ ∈ X(K). We only need the additional input of Theorem 4.7 that the orbit of a will always lie in an F -disk for the vertex set Γ ′ . This guarantees that the curves C Y f n (a) intersect the singular fibers only in their smooth points. Proof of Proposition 4.10. As for Theorem 4.7, we continue to follow the arguments of [9], and we also build on the machinery developed in [10]. We identify the components Y γ,i with a finite set of Type II points in the Berkovich space P 1,an γ over the field L γ . (We caution that some components Y i may be non-reduced, so we need to keep track of their multiplicities as well.) See the discussion in, e.g., [9, §4]. Let Γ ⊂ P 1,an γ be the union of this finite set of Type II points; note that Γ must include the Gauss point of P 1,an γ because π : Y → X × P 1 is regular. Suppose that a ∈ P 1 (k) is a point for which the curves C Y f n (a) are disjoint from the points of indeterminacy forf Y for all n ≥ 0. This means, as in §4.3, that f n (a) lies in an F -component for Γ for every n ≥ 0. Fixing a homogeneous lift F of f so that ord γ F = 0, we define the order function σ(F, ·) on P 1,an γ as in [10, §3.1]. Specifically, for each n ≥ 0, we let A n denote a homogeneous lift of f n (a) ∈ P 1 (k) so that ord γ A n = 0, and then σ n := σ(F, f n (a)) = ord γ F (A n ). From [10, Lemma 3.1], the local canonical height at γ (associated to this choice of F ) can be computed asλ The key observation is contained in [10, Proposition 4.1, Theorem 4.2]: for a point a that lies in an F -disk component of P 1,an γ \ Γ, the order function depends only on the boundary point of that F -disk. When each iterate of a lies in an F -disk, the sequence σ n depends only on the sequence of boundary points of these F -disks containing f n (a), over all n ≥ 0. However, by the stability of the pair (f, Γ), these sequences, in turn, depend only on the boundary point of the disk containing a itself. Indeed, every F -disk with boundary point ζ will map into an F -disk with the same boundary point. 
Moreover, the order function can only take finitely many possible values on the F -disks of Γ (by [10,Theorem 4.2]) and the stability of (f, Γ) implies that the sequence {σ n } will be eventually periodic. In other words, the sequence {σ n } depends only on the component Y γ,i that intersects C Y a in E γ . The coefficient c γ,i is rational because the sequence {σ n } is eventually periodic. Near a singularity: uniform convergence to the escape rate In this section, we fix γ ∈ X(K). We assume that we are given f : P 1 → P 1 of degree d ≥ 2, defined over k = K(X), and a point a ∈ P 1 (k). We choose lifts F and A, as defined in §3.1,with ord γ F = ord γ A = 0 and we assume that We also assume that the pair (f, a) is hole-avoiding at γ, as defined in §4.2, so that for all n ≥ 0. We set A n := F n (A) ∈ k 2 and we study the convergence of the sequence of functions Theorem 5.1. Fix γ ∈ X(K) and a hole-avoiding pair (f, a) at γ with lifts F and A satisfying ord γ F = ord γ A = 0 and Res(F γ ) = 0. There exists an M K -neighborhood U of γ in X so that, for each v ∈ M K , the functions g n,v converge uniformly on U v to a continuous function g v . Note that the limit function g v coincides with the escape-rate function G Ft,v (A t ) defined by (3.9) in §3.4, for t = γ. So we know that the convergence of g n,v to g v is uniform on neighborhoods where t remains bounded away from γ and the other singularities of f . The steps in the proof of Theorem 5.1 are inspired by the arguments in [12], [13], and [25]. 5.1. Convergence of the constant terms g n,v (γ). Proposition 5.2. Fix γ ∈ X(K). Under the hypotheses of Theorem 5.1, the limit In other words, {α v : v ∈ M K } defines an M K -quasiconstant. Remark 5.3. For each fixed n, we have (A n ) γ v = 1 for all but finitely many v. But as n grows, the number of places for which (A n ) γ v = 1 can also grow, so that α v can be nonzero for infinitely many v ∈ M K . A simple example is given by the function f (z) = z(z + 1)/(z + t) defined over k = Q(t), which is similar to Example 4.1, at t = 0. Take a = 1. Fix homogeneous polynomial lift F (z, w) = (z(z + w), (z + tw)w), so that F 0 (z, w) = (z(z + w), zw), and set A = A 0 = (1, 1). Then for every prime p, we have (A n ) 0 p = 1 for all n < p, and (A n ) 0 p < 1 for all n ≥ p. We show below in (5.18) that this will imply that the limit α p of Proposition 5.2 will be negative for all primes p. Many more examples are given in [25]. Bear in mind that this does not happen for Lattès examples (the maps arising as quotients of endomorphisms of elliptic curves) or for polynomials; in other words, for those types of maps, the α v of Proposition 5.2 always define an M K -constant. Proof of Proposition 5.2. Since Res(F γ ) = 0, specializing F at γ, we can write where H(z, w) ∈ K[z, w] is a nonconstant homogeneous polynomial of degree k ≤ d, and F (z, w) ∈ (K[z, w]) 2 is a homogeneous polynomial map of degree ℓ = d − k < d inducing a morphism of degree ℓ on P 1 . The zeroes of H in P 1 are called the holes of f at γ, as defined in §4.2. Because the pair (f, a) is hole-avoiding, the lift A satisfies F n γ (A γ ) = (0, 0) for all n. So it must be that either ℓ > 0 or, if ℓ = 0, the value ofF is not a root of H. Consequently, as in [6,Lemma 2.2], the specialization of each iterate F n can be expressed in terms of H andF by for all n ≥ 1. In particular, this shows that for every n. For ℓ = 0, the mapF is constant, soF n (A γ ) = (z 0 , w 0 ) ∈ K 2 \ {(0, 0)} for some point (z 0 , w 0 ) and for all n ≥ 1. 
The formula (5.3) gives as n → ∞, for all places v of K. The statements of the proposition follow immediately in this case. Now assume that ℓ ≥ 1. There exists an M K -constant L so that for all (z, w) ∈ K 2 and for all v ∈ M K [31, Proposition 5.57]. This implies that Recalling at all places v and for all i ≥ 1. Note that the bound on the right side of (5.6) can be > 1 at only finitely many places v of K, independent of i. Let S + denote this finite set of places. Therefore, since H(F i (A γ )) ∈ K * for all i, we can apply the product formula to observe that there is a constant c > 0 so that for all i ≥ 1. Using the formula (5.3), we combine (5.5) with (5.6) and (5.7) to deduce the existence of From (5.6) and the summation expression for α v in (5.8), we see that α v ≤ 0 for all v ∈ S + . To show that the sum over all places of the α v is finite, we use (5.7) to estimate Summing over all i, we can then use Fubini's theorem to deduce that This completes the proof of the proposition. 5.2. Proof of Theorem 5.1. Throughout this proof, we work in an M K -neighborhood U of γ ∈ X(K), so that the conclusion of Proposition 3.2 holds. For simplicity, we let u ∈ K(X) denote a choice of local coordinate on X near γ so that u = 0 represents γ. We now fix v ∈ M K , and we drop the dependence on v to ease notation. Let δ denote the v-adic radius of the largest disk {|u| v < δ} contained in the M K -neighborhood U v . Let C = e bv ≥ 1 be the constant appearing in Proposition 3.2 at this place. For each n, we write A n (u) for the specialization of A n = F n (A) at u. For every n ≥ m, we define for |u| < δ. Let q = ord γ Res(F ). From Proposition 3.2, we have d m log C for all n ≥ m ≥ 0 and for all |u| < δ. Let α = lim n→∞ 1 d n log A n (0) ; its existence is guaranteed by Proposition 5.2. Step 1: a choice of N and δ N for a uniform upper bound. Fix ε > 0. Choose N so that we have Now choose δ N > 0 so that, by continuity of A N (u), we have for all |u| ≤ δ N . Applying the upper bound of (5.9) and using (5.11), this implies that (5.13) g n (u) ≤ g N (u) + 1 d N log C ≤ α + 3ε for all n ≥ N and for all |u| ≤ δ N . Note that the lower bound of (5.9) is not enough to get uniform control on g n from below for n ≥ N, because of the log |u| term. Step 2: the Maximum Principle and lower bounds within δ n . By the triangle inequality, we have for all |u| ≤ δ N , from (5.10) and (5.12). Note that the coordinates of A N (u) − A N (0) vanish at t = 0, and so the Maximum Principle (applied to 1 u (A N (u) − A N (0))) gives for all |u| ≤ δ N . For a non-archimedean Maximum Principle see e.g. [1,Proposition 8.14]. Using the upper bound of (5.13), the same argument implies that for all n ≥ N and for all |u| ≤ δ N . This implies that for all n ≥ N and for all |u| ≤ δ N . Now define (5.15) δ n := δ N ε 2e 4d n ε for all n > N. So we have for all |u| ≤ δ n and for all n > N. Combined with the lower bound of (5.12) and the condition on N in (5.11), this shows that for all |u| ≤ δ n and for all n ≥ N. Step 3: Choosing larger N 0 and completing the proof. From the definition of δ n , we see that 1 d n log δ n = 1 d n log(δ N ε/2) − 4ε for all n > N. Now choose n 0 > N so that 1 d n 0 log(δ N ε/2) < ε. Recall that the sequence {g n } converges uniformly on neighborhoods in u that are bounded away from u = 0 (and any other singularities for f in X), so, by our choice of M Kneighborhood, there exists N 0 ≥ n 0 so that |g n − g m | < ε for all n, m ≥ N 0 , uniformly on {δ n 0 ≤ |u| < δ}. 
For |u| ≤ δ n 0 , we know that g n (u) ≤ α + 3ε for all n ≥ n 0 by (5.13). And we know that for all |u| ≤ δ n and for all n ≥ n 0 , by (5.17). On the other hand, for δ n < |u| ≤ δ n 0 we can choose n > m ≥ n 0 so that δ m+1 ≤ |u| ≤ δ m and then (5.9) gives for all n ≥ N 0 and for all |u| ≤ δ n 0 . This completes the proof of uniform convergence. 5.3. A summable lower bound on a disk. We conclude this section with a consequence of Proposition 5.2 and its proof that will be used to prove Theorem 1.2. Proposition 5.4. Fix γ ∈ X(K). Under the hypotheses of Theorem 5.1 and in the notation of Proposition 5.2, there exists a finite set S γ ⊂ M K so that for every v ∈ S γ and all t in an M K -neighborhood of γ. Proof. We first let S γ be the finite set of places v ∈ M K , including all archimedean places, at which the quantities L v and H v in the proof of Proposition 5.2 differ from 0 and where A γ v = 1. It follows from the computations in Proposition 5.2 (specifically, equation (5.6) and (5.8)) that α v ≤ 0 for all v ∈ S γ . Recall the formula for (A n ) γ given in (5.3). For all v ∈ S γ , we have for all n. For all v ∈ M K \ S γ , we also have F n (A γ ) v = 1 for all n. So (A n ) γ v < 1 for some n > 0 if and only if there exists i < n so that |H(F i (A γ ))| v < 1. Furthermore, from (5.18), such an n exists if and only if α v < 0, for each v ∈ S γ . Now let u denote a local coordinate on X with u = 0 representing γ. From Proposition 2.1, the coefficients of F and the coordinates of A are M K -bounded on an M K -neighborhood of γ. We enlarge S γ if needed to assume that these coefficients are ≤ 1 in absolute value and so that the neighborhood is given by {|u| v < 1} for all v ∈ S γ . We further enlarge S γ to include all places at which the M K -constant b v from Proposition 3.2 differs from 1, and also so that |u| 2g+1 v = |ξ γ | v for v / ∈ S γ on the M K -neighborhood of γ (applying Proposition 2.1 to u). Then, for all v ∈ S γ , the upper bound on the coefficients of F and the coordinates of A gives (5.19) A n (u) v ≤ 1 for all n ≥ 1 and for all |u| v < 1. This implies immediately that g v (u) ≤ 0 for all |u| v < 1 with v ∈ S γ , proving the desired upper bound of the proposition. Moreover, for v ∈ S γ where α v = g v (0) = 0, we conclude from the Maximum Principle (applied to the subharmonic g v ) that g v (u) = 0 for all |u| v < 1, and the estimate of the proposition holds for these v. For the rest of the proof, we fix v ∈ S γ with α v < 0, and choose minimal m ≥ 0 so that (A m+1 ) γ v < 1. Since (A n ) γ 1/d n v is a non-increasing sequence from (5.18), we see that (A n ) γ v decreases to 0 as n → ∞, and Let q = ord γ (Res F ). Proposition 3.2 then gives for all n ≥ m and for all |u| v < 1. Therefore, for all u satisfying (5.21) and (5.20). This shows that On the other hand, for |u| v < A m+1 (0) v , we can choose j ≥ m + 1 so that Then, writing A j (u) = A j (0) + uR j (u) for u near 0, we know that R j (u) v ≤ 1 for all |u| v < 1, by (5.19) and the Maximum Principle, and therefore and for all n ≥ j. This implies that g v (u) ≥ (1 + dq)α v for u values in this region. Since A n (0) v → 0 as n → ∞, the proof of the lower bound on g v is complete, thus completing the proof of the proposition. 6. Proofs of Theorems 1.2 and 1.7 In this section, we complete the proofs of Theorems 1.2 and 1.7. We fix f : P 1 → P 1 defined over the field k = K(X), of degree d > 1, and we assume that a ∈ P 1 (k) is totally S(F, A). 
For each place v of K, we examine the function on X(K) and its extension to X(C v ) and the Berkovich analytification X an v . Recall that the steps needed to complete the proofs were outlined in §3.5. 6.1. Changing coordinates and lifts. If we change the lifts F and A, multiplying each by an element of k * , it follows from (3.6) and (3.10) that for any choice of γ ∈ X(K). Moreover, the sum of the last two terms is M K -bounded on an M K -neighborhood of γ, as a consequence of Proposition 2.1. If we conjugate F by an element B ∈ GL 2 (k), we have from the definitions of the escape rates, for each γ ∈ X(K), and each place v of K and all t ∈ X(K) \ S(F, A) ∪ S(BF B −1 , B(A)) ∪ S(B) . Replacing F or A by an iterate gives for all n ≥ 1 and m ≥ 0, again immediate from the definitions. 6.2. The divisor D = D(F, A) is a Q-divisor. We need to show that G F,γ (A) ∈ Q for each γ ∈ S(F, A). This is immediate from the following proposition. (It also follows from the statement of Proposition 4.10.) We present an alternative short argument in the following proposition. Recall that k γ denotes the completion of k at γ. Proposition 6.1. Let f : P 1 → P 1 be of degree d ≥ 2, defined over k = K(X), γ a point in X(K), and a ∈ P 1 (k γ ). If the point a is an element of the non-archimedean Fatou set Ω γ (f ) at γ, then the geometric escape rate G F,γ (A) is a rational number, for any choice of lifts F and A. Remark 6.2. In [10] it was shown, for maps f defined over k, that there can exist points a ∈ P 1 (k γ ) with irrational local canonical height. Proposition 6.1 implies that these points must always lie in the non-archimedean Julia set of f at γ. We provide examples in Section 7. It is not known if the Julia points can be algebraic over k. If the pair (f, a) is not hole-avoiding at γ, then Theorem 4.6 implies the existence of a change of coordinates B ∈ GL 2 (k) and iterates so that the pair (Bf n B −1 , B(f m (a))) is hole-avoiding at γ. The conclusion then follows from (6.3) and (6.4). 6.3. Variation of canonical height: proofs of the main theorems. Assume that a ∈ P 1 (k) is totally Fatou for f , and let D = D(F, A). Proposition 6.1 implies that D is a Q-divisor, so it remains to study properties of the functions V v , defined in (6.1), associated to this divsor D on the curve X at each place v of the number field K. We begin by proving Theorem 1.7, which states that the functions V v are continuous on the Berkovich analytification X an v at all places v. This implies, in particular, the existence of a uniform bound C v so that |V v | ≤ C v at all points of X(K). (Recall that we have fixed an embedding of K ֒→ C v for each place v.) Towards proving Theorem 1.2, we then find a finite set of places S ⊂ M K outside of which we have strong bounds on V v , so that we can show the sum v∈M K \S N v V v (t) is uniformly bounded on X(K). Combined with the bound C v for each place v, we obtain a uniform bound on the sum v∈M K N v V v (t), for all t ∈ X(K). Averaging over Galois orbits will complete the proof of Theorem 1.2. Proof of Theorem 1.7. Fix γ ∈ S(F, A). First assume that the pair (f, a) is hole-avoiding at γ, as defined in §4.2. Choose functions α, β ∈ k so that ord γ βF = ord γ αA = 0. This places us in the setting required for the results of Section 5. For the function g v defined there, note that on an M K -neighborhood of γ, as a consequence of (6.2), and this difference is continuous at all places v ∈ M K and an M K -bounded function. 
If the pair $(f, a)$ fails to be hole-avoiding at $\gamma$, then from Theorem 4.6, we can find a change of coordinates $B \in GL_2(k)$ and pass to iterates so that the pair $(Bf^nB^{-1}, B(f^m(a)))$ is hole-avoiding at $\gamma$. From properties (6.3) and (6.4) of the escape rates, we can replace the lifts $(F, A)$ with $(BF^nB^{-1}, BF^m(A))$, and these changes do not affect the computation of $V_v$ on an $M_K$-neighborhood of $\gamma$ (outside of $\gamma$ itself, where the specialization $B_\gamma$ may fail to be invertible), except to multiply it by $d^m$ at every place $v$. So we can assume that $(f, a)$ is hole-avoiding at $\gamma$. We can apply Theorem 5.1 to conclude that $V_v$ is a continuous function on an $M_K$-neighborhood of $\gamma$, for every $v \in M_K$, and that it extends to a continuous function on the closure of this neighborhood in the Berkovich analytification of $X$, for each $v$. This completes the proof of Theorem 1.7, because the continuity of $V_v$, when bounded away from the elements of $S(F, A)$ in $X^{an}_v$, is immediate from the definitions of the escape rates $G_{F_t,v}(A_t)$ and the local height functions for $h_D$.

Proof of Theorem 1.2. Fix $\gamma \in S(F, A)$. As in the proof of Theorem 1.7, it suffices to assume that $(f, a)$ is hole-avoiding at $\gamma$. Choose functions $\alpha, \beta \in k$ so that $\mathrm{ord}_\gamma\, \beta F = \mathrm{ord}_\gamma\, \alpha A = 0$. Let $S_\gamma$ be a finite set of places of the number field $K$ so that the function in (6.5) vanishes on an $M_K$-neighborhood of $\gamma$ for all $v \in M_K \setminus S_\gamma$. The function $V_v$ for the given pair $(F, A)$ then coincides with the function $V_v$ for the pair $(\beta F, \alpha A)$ for all $v \in M_K \setminus S_\gamma$ on an $M_K$-neighborhood of $\gamma$ and is equal to $g_v$ at these places. We can enlarge the finite set $S_\gamma$ so that Propositions 5.4 and 5.2 imply the existence of an $M_K$-quasiconstant $a(\gamma)$ for which
$$|V_v(t)| \leq a_v(\gamma) \qquad (6.6)$$
for all $v \notin S_\gamma$ and $t$ in an $M_K$-neighborhood of $\gamma$.

Let $U$ be the union of these $M_K$-neighborhoods over all $\gamma \in S(F, A)$. From Proposition 3.2, we know that there exists an $M_K$-constant $c$ so that
$$e^{-c_v} \leq \frac{\|F_t(z, w)\|_v}{\|(z, w)\|_v^d} \leq e^{c_v} \qquad (6.7)$$
for all $t \in X(\mathbb{C}_v)$ outside of $U_v$ and all $v \in M_K$. From Proposition 2.1, we can increase the $M_K$-constant $c$ so that
$$e^{-c_v} \leq \|A_t\|_v \leq e^{c_v}. \qquad (6.8)$$
Let $S \subset M_K$ be a finite set containing $S_\gamma$ for each $\gamma \in S(F, A)$, containing all places for which $c_v \neq 0$, and containing all places for which $U_v$ is not equal to the union $\bigcup_{\gamma \in S(F,A)} \{|\xi_\gamma|_v > 1\}$. Then, for each $v \notin S$ and each $t \in X(\mathbb{C}_v)$ outside of $U_v$, we have
$$V_v(t) = 0, \qquad (6.9)$$
because $\|F^n_t(A_t)\|_v = 1$ for all $n$, from (6.7) and (6.8). On the other hand, if $t \in U_v$ for $v \notin S$, then we still have the bound
$$|V_v(t)| \leq a_v(\gamma) \qquad (6.10)$$
from (6.6). Recalling the summability of the bounds in (6.6) near each $\gamma \in S(F, A)$, inequalities (6.9) and (6.10) yield
$$\sum_{v \notin S} N_v\, |V_v(t)| = O(1)$$
for each $t \in X(\bar{K})$. By the continuity of $V_v$ on $X^{an}_v$ for every $v \in M_K$, from Theorem 1.7, there is a constant $C_v$ for each $v \in S$ so that $|V_v(t)| \leq C_v$ for all $t \in X(\bar{K})$. It follows that, taking averages over the Galois orbit of $t$, we have
$$\hat{h}_{f_t}(a_t) - h_D(t) = O(1)$$
for all $t \in X(\bar{K}) \setminus S(F, A)$. This completes the proof of Theorem 1.2.

Examples

In this final section, we present examples to illustrate some of the subtle phenomena that can arise for non-polynomial maps $f : \mathbb{P}^1 \to \mathbb{P}^1$, even in the simplest setting of degree $d = 2$, with $K = \mathbb{Q}$ and $k = \mathbb{Q}(t)$.

7.1. The difference of heights in Theorem 1.2 is bounded but not $M_K$-bounded. We first present an example, already seen in Remark 5.3, where the conditions of Theorem 1.2 hold but the functions $V_v$ of Theorem 1.7 are nontrivial at infinitely many places $v$ of $K = \mathbb{Q}$. A mechanism to construct many other such examples appears in [25]. Consider
$$f(z) = \frac{z(z+1)}{z+t}, \qquad F(z, w) = (z(z+w),\ (z+tw)w),$$
so that $S(F)$ consists only of the three points $t = 0, 1, \infty$ in $X = \mathbb{P}^1$. Let $a = 1$, and take $A = (1, 1)$ so that $S(F, A) = S(F) = \{0, 1, \infty\}$.
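All of the claims in this example, including the $p$-adic behavior recalled from Remark 5.3 in the next paragraph, are mechanical to check. The sketch below (ours) iterates the specialized lift $F_0(z, w) = (z(z+w), zw)$ on $A_0 = (1, 1)$ in exact integer arithmetic and locates, for each small prime $p$, the first $n$ with $p$ dividing both coordinates of $A_n = F_0^n(A_0)$, i.e., with $\|(A_n)_0\|_p < 1$; it occurs exactly at $n = p$.

```python
# Check (ours) of the p-adic claim from Remark 5.3 for f(z) = z(z+1)/(z+t)
# at t = 0: with F_0(z, w) = (z(z+w), z*w) and A_0 = (1, 1), the norm of the
# specialization satisfies |(A_n)_0|_p = 1 for n < p and < 1 for n >= p.
from sympy import primerange

z, w = 1, 1
orbit = [(z, w)]
for _ in range(12):
    z, w = z * (z + w), z * w
    orbit.append((z, w))

for p in primerange(2, 12):
    first = next(n for n, (zz, ww) in enumerate(orbit)
                 if zz % p == 0 and ww % p == 0)
    print(f"p = {p:2d}: |(A_n)_0|_p < 1 first at n = {first}  (claim: n = p)")
```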
The point $a$ is totally Fatou as a consequence of Theorem 4.6, because the pair $(f, a)$ is hole-avoiding at all points of $S(F)$. Indeed, at $t = 0$, we have $F_0(z, w) = (z(z+w), zw)$ with hole at $z/w = 0$ and orbit $f^n_0(a) = n + 1$ for all $n \geq 0$. At $t = 1$, we have $F_1(z, w) = (z(z+w), (z+w)w)$ with hole at $z/w = -1$, and orbit $f^n_1(a) = 1$ for all $n \geq 0$. Finally, at $t = \infty$, we can choose a new lift $F' = \frac{1}{t}F$ so that $(F')_\infty(z, w) = (0, w^2)$ with hole at $z/w = \infty$ and orbit $f^n_\infty(a) = 0$ for all $n \geq 1$. It follows from these computations that $D = D(F, A) = (\infty)$ is the divisor of degree 1 on $X = \mathbb{P}^1$ supported at the point $t = \infty$. This implies, in particular, that $\hat{h}_f(a) = 1$.

Fix a prime $p$ of $\mathbb{Q}$. To see that the function $V_p$ is nontrivial on $X$, it suffices to show that $V_p(0) \neq 0$. Let $A_n = F^n(A)$. As explained in Remark 5.3, we have $\|(A_n)_0\|_p = 1$ for all $n < p$, and $\|(A_n)_0\|_p < 1$ for all $n \geq p$. As computed in (5.18), we know that $\|(A_n)_0\|_p^{1/2^n}$ is a decreasing sequence for all primes $p$, so that the $\alpha_p$ of Proposition 5.2 (defined as $g_p(0)$ for the function $g_p(t) = \lim_{n \to \infty} 2^{-n} \log \|(A_n)_t\|_p$ in a $p$-adic neighborhood of $t = 0$) is non-zero for all primes $p$. Moreover, as explained in the proof of Theorem 1.7, we also have that $V_p(0) = g_p(0)$ and so $V_p(0) = \alpha_p < 0$ for all primes $p$.

7.2. All known non-polynomial examples are totally Fatou. Here we survey the results in the literature where the conclusions of Theorems 1.2 and 1.7 were known for examples $f : \mathbb{P}^1 \to \mathbb{P}^1$ that are not polynomial maps (nor conjugate to a polynomial). In every case, the points $a \in \mathbb{P}^1(k)$ that were treated satisfy our totally Fatou hypothesis.

The first example is the one presented in the Introduction, where the variation of canonical height $t \mapsto \hat{h}_{f_t}(p_t)$ for a family of Lattès maps $f_t$ (those arising as quotients of endomorphisms of elliptic curves) is known to differ from a Weil height for a $\mathbb{Q}$-divisor on the base curve $X$ by a bounded amount, for any choice of $p \in \mathbb{P}^1(k)$ [32]. The continuity of the local contributions $V_v$, as defined in Theorem 1.7, was shown by Silverman in [29]. Also as mentioned in the Introduction, it is well known that all points are totally Fatou for these maps; see, e.g., the computation of the Berkovich Julia set in [16, §5]. Alternatively, note that the existence of a Néron model forces all points to be hole-avoiding in appropriate coordinates.

In [17], the authors prove Theorem 1.2 for rational maps $f$ defined over $k = K(X)$ for a curve $X$ and points $c \in \mathbb{P}^1(k)$, under the assumptions that (1) there exists $t_0 \in X$ so that the map $f$ has good reduction at all $t \neq t_0$; (2) $f$ has a super-attracting fixed point at $z = \infty$; and (3) the point $c$ satisfies $\mathrm{ord}_{t_0} f^n(c) \to -\infty$. Condition (3) implies that $c$ is in the basin of attraction of the super-attracting fixed point at $\infty$, so it is clearly Fatou at $t_0$. (The hypothesis (3) is stated in [17, Theorem 5.4] as $\{\deg f^n(c) : n \geq 0\}$ is unbounded, but for a notion of degree defined in their Section 5 on the regular functions on $X \setminus \{t_0\}$ and extended to $k$ after equation (5.4).)

In [18], the authors studied maps of the form $f(z) = \frac{z^d + t}{z}$ over $k = \mathbb{Q}(t)$, for $d \geq 3$, and they prove Theorem 1.2 for all points $a \in \mathbb{P}^1(k)$. (The map $f$ for $d = 2$ is isotrivial, making the theorem true but much easier.) In this example, the point $z = \infty$ is a super-attracting fixed point, and there are two places of bad reduction, at $t = 0$ and $t = \infty$. All points $a \in \mathbb{P}^1(k)$ are totally Fatou. Indeed, at $t = 0$, the reduction is $f_0(z) = z^{d-1}$ with only hole at $z = 0$.
So the only points we need to consider are those which vanish at t = 0. But for any integer m ≥ 1, if ord_0 a = m, then ord_0 f(a) = 1 − m ≤ 0, so f(a) will no longer specialize to 0 at t = 0; this implies that the pair (f, f(a)) is hole-avoiding at t = 0 for all a ∈ P^1(k). At t = ∞, a computation shows that if ord_∞ a = r < 0, then ord_∞ f(a) = (d − 1)r; iterating implies that f^n(a) → ∞ in the ∞-adic topology, so the point a will be Fatou at t = ∞. Moreover, if ord_∞ a = r ≥ 0, then ord_∞ f(a) = −r − 1 < 0, and again a is Fatou at ∞.

In [13], the authors consider

(7.1)    f(z) = λz / (z² + tz + 1)

for a fixed λ ≠ 0 in Q, defined over k = Q(t), having a fixed point of multiplier λ. For λ not a root of unity or for λ = 1, the result of Theorem 1.2 is obtained there for the critical points c_± = ±1 of f. The critical points will be totally Fatou for any choice of λ. It suffices to check the dynamics of f at t = ∞. For λ = 1, we can conjugate f by B(z) = 1/(tz) so the new map z + 1 + 1/(t²z) specializes to z ↦ z + 1 with hole at z = 0, and the critical values in the new coordinate system B(f(c_±)) = (±2 + t)/t specialize to z = 1, so the pairs (f, f(c_±)) are seen to be hole-avoiding in the new coordinate system. For λ not a root of unity, the map f can be conjugated to a map that specializes to z ↦ λz with a hole at z = 1, and so that the critical values f(c_±) in the new coordinates will specialize to z = λ. Again the pairs (f, f(c_±)) are hole-avoiding in the new coordinate system. These facts appear in the proof of [13, Proposition 2.2] and in [7, §5]; these cases are also covered by [23, Lemma 3.4].

Finally, in [25], the authors obtain the result of Theorem 1.2 for the maps f of the form (7.1) when λ is a root of unity and for a large class of points c ∈ Q(t) satisfying a hole-avoiding condition at the place of bad reduction t = ∞. This includes in particular maps of the form (7.1) with c_± = ±1. As explained in Theorem 4.6 above, this means that the points considered are totally Fatou.

7.3. Julia points: irrational local heights and R-divisors. This next example is a map of degree 2 defined over the field k = Q(t), with the property that all points with infinite orbit that lie in a local non-archimedean Julia set at the place t = 0 of k will have an irrational local canonical height. If such a point can be algebraic over k, it would show that the conclusion of Theorem 1.2, as stated, would fail for Julia points; the divisor D should be an R-divisor on the curve X. Set

(7.2)    f(z) = ((t² + t + 1)z² + tz + t² − 1) / ((2t² + t)z + t).

This map has fixed points at z = 1 (with multiplier 1/t), at z = −1 (with multiplier 1/t²), and at z = ∞. At the place t = 0 of k, the fixed points at ±1 are repelling, and the fixed point at ∞ is attracting (but not super-attracting). The Julia set in P^{1,an}_{t=0}, defined over the field L of formal Puiseux series in t, is a Cantor set of Type I points, and f is conjugate to the full 2-shift [23, Theorem 3(1)]. All points outside of the Julia set will tend to ∞ under iteration. This map f exhibits a polynomial-like behavior near its Julia set, and it can be computed that the Julia set is a subset of the formal completion k_0 := Q[[t]]. But this example is not strongly polynomial-like, in the sense of [10, Theorem 1.5], because the multipliers at the two repelling fixed points have distinct absolute values. We can compute local canonical heights over the field k with the procedure described in [10].
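The fixed points and multipliers just asserted for (7.2) can be checked by hand; the following display is our own verification (not part of the original argument), writing f = N/D with N(z) = (t² + t + 1)z² + tz + t² − 1 and D(z) = (2t² + t)z + t:

\[
N(1) = 2t^2 + 2t = D(1), \qquad N(-1) = 2t^2 = -D(-1),
\]
so \(f(\pm 1) = \pm 1\), and at a fixed point \(z_0\) the multiplier is
\[
f'(z_0) = \frac{N'(z_0) - z_0\,D'(z_0)}{D(z_0)}, \quad\text{giving}\quad
f'(1) = \frac{(2t^2+3t+2) - (2t^2+t)}{2t^2+2t} = \frac{1}{t}, \qquad
f'(-1) = \frac{(-2t^2-t-2) + (2t^2+t)}{-2t^2} = \frac{1}{t^2}.
\]

Both multipliers have absolute value greater than 1 at the place t = 0, consistent with the fixed points ±1 being repelling there.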
In homogeneous coordinates, put

F(z, w) = ((t² + t + 1)z² + tzw + (t² − 1)w², (2t² + t)zw + tw²).

The conjugacy between f on its Julia set and the shift map on 2 symbols is given by the itinerary of a point as it moves between D_+ and D_−. For a point a ∈ k_0 with lift A ∈ (k_0)² \ {(0, 0)}, a sequence of orders is defined by

(7.3)    A_0 = A,   A_n = t^{−σ_{n−1}} F(A_{n−1}),   σ_{n−1} := ord_{t=0} F(A_{n−1}),

so that ord_{t=0} A_n = 0 for every n. From the formula for F, we can compute that σ_n = 1 if f^n(a) ∈ D_+ and σ_n = 2 if f^n(a) ∈ D_−, for all n ≥ 0. Because of the conjugation to the shift map, we see that the sequence {σ_n} is eventually periodic if and only if the point a is eventually periodic. Indeed, since F is homogeneous of degree 2, (7.3) gives ord_{t=0} F^n(A) = Σ_{k=0}^{n−1} 2^{n−1−k} σ_k, so that lim_{n→∞} 2^{−n} ord_{t=0} F^n(A) = Σ_{k≥0} σ_k 2^{−(k+1)}, and a series of this form with digits σ_k ∈ {1, 2} is rational if and only if the digit sequence is eventually periodic. Therefore, G_{F,0}(A) (and so also any presentation of the geometric local canonical height λ̂_{f,0}(a) at t = 0) is irrational for all Julia points with infinite orbit.

Remark 7.1. The function f of (7.2) is conjugate to z ↦ (t²z² + z)/(tz + t²), in a standard normal form for quadratic rational maps, with fixed points at 0 and ∞ of specified multipliers (in this case, having multiplier 1/t² at 0 and 1/t at ∞). We then moved the two repelling fixed points to 1 and −1 and the attracting fixed point to ∞.

7.4. Julia points with divergent escape rates. Our final example is

(7.4)    f(z) = (z² + (t² − t − 1)z − t³ − 2t² + t) / (z − t² − 1),

defined over the field k = Q(t), at the place γ corresponding to t = 0. As for the example (7.2), all Julia points at t = 0 with infinite orbit for f will have an irrational local canonical height at t = 0. This can be seen from the proof of [10, Theorem 1.3], because this f is conjugate to the map z ↦ (z + 1)(z − t)/(z + t) studied there, combined with an identification of the Julia set with the shift on 2 symbols [23, Proposition 4.2].

We construct (formal) points a ∈ Q[[t]] in the Julia set of f at t = 0 so that the sequence of functions (5.1) that define V_∞ (at the archimedean place) will diverge at t = 0. We do not know if the points a we construct can be algebraic over k, nor even if the series will converge on a disk around t = 0. We use these examples to illustrate some of the features of Julia points that do not arise for the Fatou points.

Remark 7.2. As we shall see, taking any unbounded sequence of positive integers {m_k}_{k≥0} in the construction below, this example also shows that the orbits of points in P^1(k_γ) can have non-locally-compact closures. This is distinct from what happens for polynomials; compare [14, Theorem 3].

More precisely, we construct examples so that the sequence α_n := 2^{−n} log ‖(A_n)_0‖_v, as defined and studied in Proposition 5.2, will diverge to −∞ at the place v = ∞. In particular, this would show that, if the point a defines a convergent series in Q[[t]], the conclusion of Theorem 5.1 would fail. That is, the sequence of functions g_n(t) := d^{−n} log ‖(A_n)_t‖ would converge, locally uniformly on a punctured disk around t = 0, and we know that the limit function g(t) must be bounded by o(log |t|) as t → 0 [8, Proposition 3.1]. But the convergence to g would not be uniform in a neighborhood of t = 0.

For the construction, note first that f specializes to the identity transformation f_0(z) = z at t = 0 with a hole at z = 1. The Berkovich Julia set is contained in the direction z = 1 from the Gauss point, and all Julia points have the form 1 + mt + O(t²) for some integer m ≥ 0. We write a for the point associated to a given sequence {m_k}_{k≥0}. For each n ≥ 1, we set A_n = t^{−σ_{n−1}} F(A_{n−1}), where σ_{n−1} is chosen so that ord_{t=0} A_n = 0, as above in (7.3). In fact, σ_n = 1 whenever f^n(a) = 1 + wt + O(t²) for w ≠ 0, and σ_n = 2 for f^n(a) = 1 + O(t²).
For each n ≥ 1, we let (A_n)_0 denote the specialization at t = 0, and set α_n = 2^{−n} log ‖(A_n)_0‖ in the archimedean norm. Again, we can make α_{m_0+m_1+2} as negative as desired by choosing m_2 ≫ m_1. We see that the pattern continues, and so, by choosing the sequence {m_k}_{k≥0} to grow to infinity very fast, we conclude that the sequence {α_n}_{n≥0} is unbounded from below.
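As a closing worked example (our own recap, assembled from the order computations quoted in §7.2 for the family f(z) = (z^d + t)/z = z^{d−1} + t/z), the valuation bookkeeping behind the hole-avoiding and Fatou claims there runs as follows:

\[
\operatorname{ord}_0 a = m \ge 1 \;\Longrightarrow\;
\operatorname{ord}_0(a^{d-1}) = (d-1)m \ge 2, \qquad
\operatorname{ord}_0(t/a) = 1 - m \le 0,
\]
so the term of smaller order wins and \(\operatorname{ord}_0 f(a) = 1 - m \le 0\): the image f(a) no longer vanishes at t = 0 and thus avoids the hole z = 0 of the reduction \(f_0(z) = z^{d-1}\). Likewise, with \(r = \operatorname{ord}_\infty a\),
\[
\operatorname{ord}_\infty f(a) = \min\{(d-1)r,\; -1 - r\} =
\begin{cases} (d-1)r, & r < 0,\\[2pt] -1 - r, & r \ge 0,\end{cases}
\]
since the two orders never coincide for an integer r (that would force dr = −1). This matches the two cases recorded in §7.2.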
Carotidynia presenting as an atypical cause of unilateral neck pain in the emergency department

Abstract

Carotidynia is a rare presentation of atypical neck and face pain, which is due to inflammation around the carotid artery. Symptoms can be aggravated by head and neck movements, jaw movements, and deglutition. It is usually a self-limiting illness, and it is treated conservatively with analgesics. Because of its rarity, and partly due to physicians' lack of understanding, it remains underdiagnosed. Our case report aims to shed light on why its diagnosis should not be missed.

INTRODUCTION

Carotidynia, also known as carotid artery pain syndrome or Fay syndrome, is a rare condition characterized by unilateral or bilateral pain in the neck, face, and head caused by inflammation of the carotid artery. The condition typically affects individuals in their forties or fifties, and it is more common in women than men. The etiology of carotidynia remains unclear, but it is believed to be caused by infectious or inflammatory processes affecting the vascular wall [1, 2].

CASE REPORT

A 49-year-old woman with hypertension and Graves disease presented to the ED with complaints of left-sided neck discomfort for the last 6 days. She later began developing fever and progressively increasing left-sided neck swelling with episodic pain that began radiating to the left half of her face. She also complained of dysphagia. The patient had no history of trauma, recent infection, or vascular disease. She denied any visual changes, difficulty speaking, or weakness in her limbs. She was compliant with her lisinopril and thyroxine. Her vitals showed a heart rate of 95 beats/min, blood pressure of 157/85 mmHg, and a respiratory rate of 18/min with normal room air oxygen saturations. Physical examination revealed tenderness over the entire extent of the left carotid artery, but no palpable mass or bruit was noted. Neurological examination was unremarkable. She was treated with analgesics. After engaging in a shared decision-making process with the patient's family and physicians, it was collectively agreed that, due to the presence of severe pain directly overlying the carotid artery, a comprehensive examination using an enhanced computerized tomography scan of her neck would be conducted. The computed tomography (CT) scan showed eccentric, anteromedial thickening around and below the carotid bifurcation, extending inferiorly along the fascial planes between the left thyroid lobe and the carotid vasculature into the superior mediastinum (Figs 1 and 2). Overall appearances were suggestive of carotidynia. The patient was admitted for pain management, and her symptoms gradually improved. She was discharged on non-steroidal anti-inflammatory drugs (NSAIDs) with advice to follow up with a vascular specialist.
DISCUSSION

The pathophysiology of carotidynia remains unclear. Several hypotheses have been proposed, including infectious, autoimmune, and traumatic etiologies. The most widely accepted theory is that carotidynia is an inflammatory condition affecting the vascular wall of the carotid artery. The diagnosis of carotidynia is based on clinical presentation and imaging findings. Patients typically present with unilateral or bilateral pain in the neck, face, and head, which may be aggravated by swallowing or turning the head. Physical examination may reveal tenderness over the affected carotid artery, but no palpable mass or bruit is typically noted. Carotid Doppler ultrasound (US) may show increased blood flow velocity and wall thickening of the affected carotid artery [3]. Lecler et al. have argued that the term carotidynia should be replaced by transient perivascular inflammation of the carotid artery (TIPIC syndrome), owing to the consistent radiographic finding of inflammation surrounding the carotid trunk at or near the bifurcation. To diagnose TIPIC syndrome, the following criteria need to be fulfilled: sharp pain over the carotid arteries with or without radiation to the head, peculiar perivascular infiltration, exclusion of other vascular and non-vascular causes by radiological imaging, and spontaneous or anti-inflammatory-induced recovery within 14 days [4]. However, controversy remains between carotidynia and TIPIC syndrome, and these two terms are still used synonymously by many clinicians when investigating differentials for idiopathic unilateral neck pain [5].

The choice of imaging modality depends on various factors such as the severity of symptoms, clinical presentation, and availability of resources. Duplex Doppler US is a commonly used initial imaging modality for evaluating carotidynia. It can assess blood flow and detect any structural abnormalities in the carotid artery, such as plaque buildup or narrowing. An enhanced CT scan can visualize stenosis, aneurysm, or dissection in the carotid artery. It is particularly useful in emergency situations when a quick assessment is required. Other advanced imaging techniques include magnetic resonance imaging, which can provide further detailed images of the carotid artery and surrounding structures in the neck, and positron emission tomography, which evaluates the metabolic activity of the carotid artery and helps differentiate between active inflammation and non-inflammatory causes of carotid pain [3, 6].

The treatment for carotidynia is based on symptom relief and addressing any underlying conditions. NSAIDs are typically the first-line therapy for pain relief, and corticosteroids may be considered in refractory cases [7]. Infection or other inflammatory conditions should be ruled out and treated if present.

Prognosis for carotidynia is generally good, with most patients experiencing resolution of symptoms within several weeks to months. Recurrence is uncommon but may occur in some patients [6].

Figure 2. Coronal section of CT neck showing anteromedial inflammation around the carotid bifurcation (white arrow).
Teaching Practices that Foster Self-regulated Learning: a case study

The aim of this paper is to present a case study of an elementary school teacher who changed her practices to foster self-regulated learning (SRL) strategies in her students. Specifically, this study describes the process of how she developed her teaching strategies to promote SRL strategies such as self-evaluation, goal-setting and planning, and lastly, rehearsing and memorization. The teacher's classroom practices promoted opportunities to encourage her students to become conscious of their learning process as they used these specific SRL strategies and as they executed reading and writing tasks from the curriculum of English as a Foreign Language. The results reflect the importance of developing SRL strategies in students from the early years on in the classroom while accomplishing mandatory tasks from the curriculum.

Introduction

Teaching children today has brought forth much discussion amongst the teaching community as to which teaching practices should be adopted and which teaching instruments should be used. What's more, despite these resources and teachers' efforts to use them, children continue to have difficulties in many of the academic areas. Students struggle to learn how to learn, a skill needed to reach academic objectives in diverse subjects (Rosário, Pérez, & Pienda, 2004). Teachers should acquire training in terms of explicit teaching of self-regulated learning (SRL) strategies, which is crucial for students to develop general learning skills that are cross-curricular to any academic subject (Carneiro & Veiga Simão, 2007). As a way of observing this phenomenon and perhaps contributing to a possible improvement of the teaching of learning strategies, we decided to observe a primary school teacher teaching English as a foreign language (EFL) to Portuguese children, and to propose a challenge to her: to change her teaching practices and to foster SRL in her students. Our suggestion was to give her training and information regarding SRL before she actually decided whether to change her teaching practices and how the process would unfold. Essentially, we wanted to observe what she could do as a teacher to improve her students' learning skills. We believe that the role of the teacher is crucial when promoting SRL strategies in students because there is a need for systematic and contingent interaction between students and a skillful model, such as their teacher. From an academic point of view, we consider this skillful model to be the teacher and this contingent interaction to include consistent periods of deliberate practice. In agreement with Ericsson (2002), when expert teachers transmit and guide students in acquiring the necessary knowledge and, consequently, the techniques needed to obtain it, students can become expert performers in their area of preference. Therefore, and as Cho (2004) exemplified in his study, teachers serve as a reflective and analytical example of adaptability which students can follow by scaffolding strategies in their learning environment and, as Pintrich and Blumenfeld defend in their 1985 study, by providing adequate and timely feedback. According to the Portuguese National Curriculum for Primary Education - Essential Competencies (Department of Primary Education), teachers should adopt teaching methods that will allow their students to plan and organize their own learning; to identify, select, and apply learning strategies; to self-evaluate and adapt learning strategies to learning objectives; to identify
and express difficulties; and to be able to transfer knowledge from one context to another. In addition, this study contemplated developmental factors which condition students' capacity to acquire and develop such strategies autonomously at the ages of 9 to 11, thus the importance of considering the teacher as an expert in modeling and monitoring SRL strategies. In accordance with Bronson (2000), there is a potential for students to develop SRL strategies at this age, although this potential is mainly reactive and dependent on external events, such as what the teacher models and verbalizes. For this reason, and as we have seen in other studies (Cook-Sather, 2008; Perry, Phillips, & Hutchinson, 2006; Siegler, 2005), teachers should be aware of the learning environment they provide their students with, so as to offer them support and opportunities to take risks and think critically. Specifically in terms of strategies to learn a foreign language, and according to the indications in Cohen's study (1998) and in Wesche and Skehan's study (2002), a good strategy could be communicating in the target language in the classroom, a strategy the teacher we observed had not implemented prior to this study. Accordingly, Chomsky, Belletti, and Rizzi (2002) declare that any language learner has the potential to produce the target language that is placed before them. In this way, we believe teachers must emphasize communication practice in a meaningful context. Otherwise, students spend their time merely studying grammatical rules and memorizing vocabulary, rather than focusing on strategies that allow them to regulate their learning. Bygate goes further in his 2002 study and claims that if teachers do not provide a meaningful learning environment with effective teaching practices, then students' learning strategies will be negatively influenced. Besides, and in accordance with Jeffrey (2006) and Wilson and Fowler (2005), we consider that teaching practices and environments are linked to students' performance. Also in agreement with these authors, we took into consideration that a supportive teaching environment has the potential to motivate students to adopt more efficient learning practices, such as the use of SRL strategies. In addition, Gibbs (1995) and Biggs, Kember, and Leung (2001) demonstrated in their studies how teachers should build on these strategies, ensuring that students understand the objectives they are pursuing because, in accordance with Hinkel (2005), they need to identify and make sense of the target language. Also, Biggs and Collis (1982), and later Swain (2001) and Ping (2009), claim that learners' success in mastering a second language depends on their use of learning strategies. We believe that this requires teachers to intervene with their teaching techniques to help their students establish adequate objectives and to use precise learning strategies to reach them.
Aims and Questions Raised

From what we have seen in various studies, and from what we have studied regarding students' needs, teachers should focus on and favor different teaching practices to foster SRL strategies in EFL. In this way, we have chosen four to consider in our study, namely that teachers can (i) encourage their students to go beyond surface learning and have a more meaningful approach to learning (Entwistle, 1990); (ii) guide students in using SRL strategies by explicitly establishing objectives, developing and delivering stimulating activities, and clarifying evaluation procedures (Zimmerman, 2000); (iii) give students pedagogical instruments in order to facilitate and stimulate strategic learning and improve their performance (Ericsson, 2002); and (iv) monitor students' learning process and strategy use and not merely acknowledge results (Bruner, 1971). With these teaching practices in mind, the aim of the current study is to investigate how an EFL elementary-school teacher developed her practices to promote SRL strategies in her students during a didactic unit of the academic year. Essentially, we wanted to investigate how, through this teacher's teaching practices, students were guided to consciously and intentionally influence their learning process. Therefore, we opted for Zimmerman's model (2000), where he considers SRL as an array of competencies that allows students to control the variables which have an impact on their learning process. What's more, we also considered it important to study the teacher's ability to stimulate students to acquire skills that allowed them to transfer knowledge to other contexts, an ability Beck (2008) regarded as central to students' learning process. After carefully analyzing the teaching context, content, and practices of this teacher and, consequently, having verified that SRL strategies were not being taught explicitly, we proposed that this teacher change her practices so as to promote these skills in her students during reading and writing tasks. Hence, the teacher focused on 3 of the 14 SRL strategies provided by Zimmerman and Martinez-Pons (1986) in a study where they developed a structured interview for assessing student use of SRL strategies. These strategies included self-evaluation, goal-setting and planning, and rehearsing and memorizing. With these guidelines, the following questions arose for this study: (i) Did the teacher develop teaching practices to foster SRL in her students? If so, how, and what was her role? And (ii) did her students reveal any improvement in self-evaluation, goal-setting and planning, and lastly, rehearsing and memorization during reading and writing tasks? We hope that by focusing on the aims of this study and by answering the questions raised, this study may contribute to the improvement of teachers' awareness of their students' learning strategies.
Method

This case study provides food for thought about the uniqueness, as in other case studies (McDonough & McDonough, 1997; Nisbet & Watt, 1984; Nunan, 1992; Stake, 1994), of a teacher's practices to foster SRL strategies in her students during a didactic unit of the academic year. With this purpose, it follows Yin's guidelines (1984) for conducting a case study and provides a descriptive list of the teacher's and her students' actions. Similarly to other studies (Cohen, Manion, & Morrison, 2000; Merriam, 1988; Qi, 2009), it also offers an interpretative and evaluative analysis of the findings through appropriate operational measures for the development of SRL strategies as they were being used, such as classroom observations, documentation (e.g., teaching material and work produced by the students), and teacher interviews, namely, daily reflective interviews and a follow-up semi-structured interview (Ransdell, 1993). We did not expect to generalize the results of this study to other domains or populations, considering its design. This case study is subjective in nature, but objective in its particular teaching context and research area. Thus, we may have acquired a broader understanding of the impact teaching techniques might have on students' use of SRL strategies.

Participants and Context

All participants agreed to participate in the study. Parents gave their consent regarding their children's participation in the study.

Description of the teacher

The teacher who participated in this study was Portuguese, 28 years of age at the time of the study, and had no previous experience with explicit teaching of SRL strategies in her classroom. Her academic background consisted of a four-year degree in English, as well as a two-year degree in didactic and pedagogical training in EFL. In terms of professional development, this teacher had regular continuous teacher training and observations in an English language institute and 5 years of experience in teaching.

Description of the students

We chose the students for this study based on their age and school grade because of their cognitive development. Children begin to understand the constructive nature of the mind in academic settings as they realize memory exists and distinguish it from inferences. Therefore, they can benefit from the explicit instruction of learning strategies (Demetriou, 2000; Miller & Byrnes, 2001; Paris & Winograd, 2003; Wood, Willoughby, McDermott, Motz, Kaspar, & Ducharme, 1999; Zohar & David, 2008), and they could begin working explicitly with learning strategies. We decided on a fourth-grade class from a primary school located in the district of Lisbon. This class was made up of 10 boys and 9 girls aged 9 to 11 who were in English class for the second consecutive year. The students were essentially from lower-middle-class families. The teacher initially described these students as individuals who had "little practice in reflecting about their own learning process in communicating orally and in working in pairs and groups". These students were used to listening to the teacher and doing assignments as they were told. They had few opportunities and little initiative for autonomy. The teacher also described these students as "participative and as individuals interested in the English language", although their participation was usually in their mother tongue. Their English level was A1, according to the Common European Framework of Reference for Languages: learning, teaching, assessment (CEFR).
Description of the location of the study

The following information was drawn from the school's official Educational Project. The school was located in the outskirts of Lisbon, Portugal. The neighborhood surrounding the school consisted of both economically middle-class and lower-class families. The school had a total of 1052 primary school students, 35% of whom needed financial aid, and 28 teachers, 2 of whom had a Bachelor's degree, 28 of whom had a 4-year degree, and 2 of whom had a Master's degree. The percentage of total students that failed the academic year was 23%. The school's structure consisted of 11 classrooms, a cafeteria, a computer room, a library, a sports hall, two game fields, a multiuse pavilion, and 4 offices.

Description of the school's pedagogical tradition

Information regarding the pedagogical tradition of the school was extracted from observations done of other classes with other teachers throughout the academic year, a meeting that was conducted with the Board of Directors, as well as from the school's official Educational Project. The teaching method that prevailed in the school was essentially teacher-focused, rather than student-focused. The teacher played the main role in most classes and gave students instructions on how to do tasks. Little or no pair work and group activities were conducted by teachers. Most assignments were individual. There were 2 teachers that taught technology classes and that provided students with contact with computers. In terms of technology, other teachers used CD players, TVs, and DVDs. The school's main academic concern was the students' performance in Math and in their mother tongue. We hope that with this study, we may make the school community more aware of students' needs in terms of learning strategies and of teachers' knowledge of how to teach those strategies explicitly while proposing tasks from the curriculum. We feel that, since these strategies are cross-curricular, they may help students learn to study for Math, Portuguese, as well as other subjects.
Instruments and Procedures

As mentioned previously, we used operational measures such as classroom observations, documentation (e.g., teaching material and work produced by the students), and teacher interviews, namely, daily reflective interviews and a follow-up semi-structured interview. The teaching materials and the teacher's daily reflections allowed us to study the teacher's daily planning and metacognitive exercise in relation to her own work. To capture specific teacher and student actions and behavior during the classes, we did systematic participative and non-participative observations with one observer. This type of observation attains genuine perceptions of actual occurrences during lessons (Tuckman, 1994). To be more precise, while there were moments when the observer sat quietly in a corner of the classroom, there were other moments when she circulated around the room, checking students' participation in order to better understand how to interpret students' reactions and actions in class. Additionally, field notes helped record the events descriptively and chronologically. The observer registered the events in the classroom. The data resulting from the observations were compared with the teacher's perception and reflection on each lesson in post-lesson interviews. The material produced by the students served as a guide to understand what the students were actually able to do effectively. We used a semi-directed structure for the follow-up interview, with reference to Zimmerman and Martinez-Pons' interview objectives (1986) for assessing students' use of SRL strategies, because it could capture the interviewee's detailed insight on the students as well as her own work progress and learning process. Essentially, the objectives of the interview with the teacher included the teacher's perspective in regards to (1) her own experience with teaching practices that promoted SRL and (2) the students' experience with the teacher's teaching practices. To view the questionnaire, please see appendix 1.

Findings

The data for this study were analyzed by two researchers. Both researchers analyzed the data gathered from the observations, the students' work, and the teacher interviews. These data were then analyzed through content analysis with cross-referencing. The teacher and student quotes were selected in terms of pertinence from both the observations and the teacher interviews.
Teaching Practices and Teaching Material

We proposed that the teacher use a different approach to her teaching practices when restructuring her lessons. Specifically, we asked her to think about how she could encourage her students to go beyond surface learning and have a more meaningful approach to learning. We also asked her to consider how she could guide her students in using SRL strategies by explicitly establishing objectives, developing and delivering stimulating activities, and clarifying evaluation procedures. Subsequently, we asked her to analyze and choose pedagogical instruments in order to stimulate, facilitate, and improve their performance. Lastly, we asked her to reflect on how she could monitor her students' learning process and strategy use (see appendix 2 for an example of the teaching material the teacher used). The teacher chose to use a children's story (The Little Engine that Could by Watty Piper) in order to deliver the content from the curriculum and simultaneously help her students develop SRL strategies. Figure 1 shows an example of how she planned and organized her teaching practices to foster SRL. To help the teacher monitor her students' learning process and strategy use, the observer kept track of which strategies students were seen using. Accordingly, the teacher decided to register on paper which strategies the students used with and without difficulty so that she could monitor their learning process better.

Class Observations and Written Material Produced by the Students

We observed a unit (twelve lessons) out of the entire academic year. The average number of pupils per lesson was 16. The information considered most relevant for this study consisted of the teacher's task proposals and intervention in class, as well as the students' reactions to the conscious and intentional practice of SRL strategies (such as goal-setting and planning, rehearsing and memorization, and self-evaluation).

Table 1. Proposed tasks, the teacher's intervention, and the students' participation in class:
- Rehearsing: Students read the text in class in pairs.
- Memorization: Repetition of words orally and in written form. The teacher helped students pronounce the words properly and wrote words on the board. Student responses: "I can pay a ticket on a bus."; S12: "I can tell the driver where I want to go."; S15: "I can turn on the radio in a car."
- Self-evaluation: Open-ended written question at the end of class ("Today I learned…"). The teacher monitored students and answered questions. Student responses: "I learned how to make a plan"; "I learned new words in English"; "I think I can write better about what food I like."; "I think my choices could be better if I had chosen other food."
Are we going to continue the story?I want to know what happens to the food and toys.").We also registered situations where students insisted on participating in English frequently even when some of their peers responded in their mother tongue (e.g."read the story in English").This type of participation turned into debate situations, where students discussed details related with learning content and strategy use ("It's not a male train, it's a female train because the text says she, not he"; "that's not how you pronounce that"; and "we've never underlined before" as opposed to "we've underlined before in Social Studies", "I like planning things like parties"). Daily reflection Interviews The observer and the teacher had daily reflection interviews where they reflected on her teaching practices in class.On the whole, the teacher mentioned various occurrences in the classroom that we grouped into topics which can be seen in figure 2. Semi-structured Follow-up Interview The following figure illustrates the information we gathered from the interview with the teacher.Essentially, the teacher focused on a number of topics, namely, her role as a teacher, the teaching strategies and material she used, her awareness of her students' previous knowledge of SRL strategies, and the students' reaction to SRL strategies. Discussion A primary-school EFL teacher developed her practices to teach SRL strategies to her fourth-grade students throughout a unit of the academic year.From the results, we were able to answer the questions proposed for this study.Essentially, the study focused on: (i) whether the teacher developed teaching practices to foster SRL in her students and if so, what her role was; and (ii) if her students revealed any development in self-evaluation, goalsetting and planning, and lastly, rehearsing and memorization during reading and writing tasks.Findings show that the teacher developed her practices throughout the unit to promote SRL competencies in her students, such as, self-evaluation, goalsetting and planning, and lastly, rehearsing and memorization (Zimmerman & Martinez-Pons, 1986) because from the daily reflective interviews, the observations and the follow-up interview, she continuously mentioned her concern in working these competencies (e.g."I feel that this is a beginning for them to regulate their learning and talk about their learning so I have to guide them..."). Accordingly, the study investigated the type of teaching practices she used to promote SRL strategies during reading and writing tasks and how the students reacted to them (e.g.her students "adapted well to the new story and the different ways I [she] worked with them, like the pair work and group work, which they don't usually do.I [the teacher] believe this type of work helped them get the meaning of and memorize words because they had each other's help.They learned how to learn English from the book that is filled with strategies... because of the characters.").She verbalized some of these practices in her daily meetings with the observer and in the follow-up interview (e.g."I need to explain to them what an objective is... 
planning. They don't know. If I don't explain..."; "I knew I had to tell them they could underline something to memorize it or identify a word that's difficult to pronounce..."). In short, she spoke of how she encouraged her students to have a more meaningful approach to learning, as seen in previous studies (Entwistle, 1990), and how she guided them in using SRL strategies by explicitly establishing objectives with them, and by developing, adapting, and delivering stimulating activities (e.g., "As these lessons continue, I find myself setting and changing objectives according to what I think is feasible for these students..."). These results are similar to those in other studies (Zimmerman, 2000). She also spoke of how she gave her students opportunities and tools to improve their performance, as suggested by other authors (Bruner, 1971; Cook-Sather, 2008; Ericsson, 2002), as well as how she tried to monitor their learning process and their use of strategies (e.g., "I'm giving them the exercise and I'm walking around the classroom. I'll ask them in the end what they responded as a group. I want to see how they work in group"; "I think they're getting a bit better at self-evaluating their work because they are responding more specifically to the question about what they learned in the lesson."; "They're reading better because they're understanding what they're reading and because they're practising and working on the text before they read it.").

Students reacted positively to the fact that in this unit they had the opportunity to learn through the use of other sources of information, such as children's literature ("we like reading stories like Little Red Riding Hood" and "we can learn English with stories"). This reaction had been previously studied by some of the authors we analysed in the theoretical review (Bruner, 1971).

The observations, as well as the material the students produced, revealed that they also reacted well to the teacher's teaching practices to promote SRL ("I want to know what happens to the food and toys."). The students seem to have enjoyed and been successful in organizing the plans that were proposed to them in class ("I learned to organize in this lesson"). Once they accomplished this task effectively, they proceeded to create their own plans related to the content of the story. Students revealed high levels of motivation during this task and specifically mentioned it in class ("My favorite activity was to organize a plan"). They also demonstrated high levels of effort in trying to organize coherent plans that would be presented to colleagues later (all students put a plan in order and wrote it in English in lesson 8). These plans were completed in English, which inevitably led students to practice writing skills in a more fluent and conscious manner, as mentioned previously by Spolsky (1998) in his own study. Specifically as regards SRL skills during reading and writing tasks, the teacher focused essentially on self-evaluation, goal-setting and planning, and rehearsing and memorizing to teach students comprehension of written texts as well as production and reproduction of short written texts. Nonetheless, and in agreement with other studies (Bronson, 2000; Perry, Phillips, & Hutchinson, 2006; Siegler, 2005), when students answered questions and when they reported what they thought about their own learning process, they had difficulty in specifying the obstacles they felt when trying to acquire knowledge and develop competencies ("In this lesson I learned English"). Nevertheless, in the last three lessons of the twelve-lesson unit plan, the students were able to evaluate their tasks more successfully because they expressed that they were more familiarized with the concept ("I learned to speak about my choices in this class"; "I learned how to make a plan").
The teacher reflected upon and spoke about her role while using these new practices in this unit with her students. She regarded herself as being an active participant in her students' learning process, and she played a principal role in her own knowledge acquisition cycle in order to successfully adapt her pedagogical methodology accordingly. This type of teacher behaviour is proposed by different authors (Hatch, Eiler White, & Capitelli, 2005; Land, 2000; Randi & Corno, 2000). Furthermore, from the statements about monitoring her students, she revealed that she was responsible for interpreting her students' behaviour and performance, as indicated by some authors (Zimmerman, 2000). Hence, the teacher emphasized the importance of portraying a leading role as a learner, as a guide, and as an effective model, as Cho (2004) suggested in his study.

One last focus area in this study arose regarding the link between SRL strategies, teaching practices, and language learning. We cross-referenced all of the data to understand if the teacher did in fact promote SRL strategies during reading and writing tasks and if the former helped the students with the latter. The teacher stated "the students as well as my performance gradually improved as we became familiar with SRL"; "So I think it [self-regulated learning] helped a lot. It showed them that they can learn English by using a book and by talking about what they're learning. The same goes for writing."; "This helped them with reading comprehension and word association. Let's say, this [the teacher's teaching practices] created a basis for them to self-regulate their work." Students also commented on this, stating for example, "I think I can write better about what food I like." and "I think my choices could be better if I had chosen other food." As students should be able to consider alternatives when learning something and trying to solve problems, teachers should also consider alternatives to what they usually do in the classroom. Students' results can improve significantly by adopting different teaching techniques.

In conclusion, this study can serve as a basis for future research on teacher methodologies and training in other learning environments. It would be interesting to develop SRL methodology in Technology-enhanced Learning Environments (TELEs) in order to verify whether or not SRL has an impact on learning with new literacy instruments as well. Additionally, new measuring instruments for SRL could be created and used with the help of new technologies. Bearing this in mind, this study may contribute to finding ways of increasing the understanding of learning and to improving the quality of learning experiences for both teachers and students. Furthermore, other researchers may be enticed to observe and intervene in other education scenarios by guiding teachers in helping their students learn to learn. We contemplate a specific teacher and her students in this study, but similar studies may be conducted with teachers and students from other ethnic backgrounds.

Figure 1. The teacher's action plan to foster self-regulated learning in her students.
Figure 2. Topics and comments that emerged in the daily interviews with the teacher.
Figure 3. The teacher's perception of her role as a teacher, her teaching practices, her students' previous knowledge of SRL, and their reaction to these teaching practices.
Exploration of Noncoding Sequences in Metagenomes

Environment-dependent genomic features have been defined for different metagenomes, whose genes and their associated processes are related to specific environments. Identification of ORFs and their functional categories are the most common methods for association between functional and environmental features. However, this analysis based on finding ORFs misses noncoding sequences and, therefore, some metagenome regulatory or structural information could be discarded. In this work we analyzed 23 whole metagenomes, including coding and noncoding sequences, using the following sequence patterns: (G+C) content, Codon Usage (Cd), Trinucleotide Usage (Tn), and functional assignments for ORF prediction. Herein, we present evidence of a high proportion of noncoding sequences discarded in common similarity-based methods in metagenomics, and the kind of relevant information present in those. We found a high density of trinucleotide repeat sequences (TRS) in noncoding sequences, with a regulatory and adaptive function for metagenome communities. We present associations between trinucleotide values and gene function, where metagenome clustering correlates with microorganism adaptations and kinds of metagenomes. We propose here that noncoding sequences have relevant information to describe metagenomes that could be considered in a whole metagenome analysis in order to improve their organization, classification protocols, and their relation with the environment.

Introduction

Metagenomes represent a gold mine for biology, biomedicine, and biotechnology. Their studies have opened a window to find new products and environmental solutions, as well as to define relevant biological and ecological knowledge regarding the microorganisms. Most metagenomic data published has revealed new insights about the microbial world itself. Frequently, the study of metagenomes begins by decoding information in assembled or unassembled sequences, the principal goal being to analyze the genomic composition, functional dynamics, and biodiversity, which can be accomplished by different methods of prediction and comparison. Nowadays, metagenomic studies have revealed dependence among functional features, pathways, or biological processes and metagenome niches [1][2][3]; for instance, some genes, metabolic pathways, and genomic features are associated to conditions of the environment studied [4]. These characteristics are the result of studying only the coding sequences, which depend on ORF predictions [5], leaving aside the noncoding sequences (NCS). Interestingly, the proportion of NCS in some metagenomes is up to ∼21% [6], which in big metagenomes could exclude many significant sequences. The NCS in a metagenome could correspond to regulatory elements in prokaryotic or simple eukaryotic organisms [7]. However, there are other elements in NCS with a structural or organizational genome function, like repetitive DNA, which in some free-living bacteria is necessary for homologous DNA recombination and rearrangements [8]. Additionally, when a metagenome has a high amount of eukaryotic microorganisms, repetitive DNA is highly abundant and NCS increase due to their larger genomes and lower gene density [9]. Thus, different elements related to genome structure and regulation of metagenomes could be defined by exploring NCS.
Different methods have been used to search for information in NCS in genomes and metagenomes, for example, identification of riboswitches, noncoding RNA, or transcription factors in microbial genomes [10][11][12]. The most successful approaches to analyze these sequences are supported by sequence-based methods, not by sequence similarity-based methods like BLAST [13]. These sequence-based approaches analyze both coding sequences and NCS from a different perspective, and not from comparisons [5]. In microbial genomics, sequence-based methods work by defining sequence patterns such as (G+C) content, noncoding RNA, codon usage, or di-, tetra-, or pentanucleotide frequencies [14]. These strategies can be used to identify regularities among microorganisms, for example, the existing relationship between trinucleotide frequencies and fingerprinting of geographic origins of Mycobacterium tuberculosis [15]. In contrast, the application of sequence-based methods in metagenomics has allowed comparison of organisms based on structural patterns, such as tetranucleotide frequencies [14,16], and assignments of taxonomic groups in metagenome samples based on noncoding elements [17]. They have also been used to define new features in coding sequences and NCS, such as structural RNA organization in archaea [18], or for metagenome binning based on l-mer composition [19]. The (G+C) content, codon usage, and tetranucleotide frequencies have been the most successful and most studied sequence patterns in metagenomics [14,16,18]; however, codons and tetranucleotides are directly associated with coding sequences [20], so they are not useful for analysis of NCS or whole metagenome studies. In this work, we evaluated the trinucleotide usage pattern in conjunction with the whole metagenome composition and their biological significance. We analyzed the coding sequences and NCS from several metagenomes deposited at the DOE Joint Genome Institute (JGI) (http://www.jgi.doe.gov/), making comparisons of structural and functional profiles defined by sequence- and similarity-based methods.

Results

In this work we examine four main approaches to study the noncoding sequences in twenty-three metagenomes with different environmental conditions. Table 1 shows the metagenomes and their sizes. DOE JGI classifies these metagenomes as environmental (Env), host-associated (HAs), and engineered (Eng) based on the type of ecosystem, host phylogeny, and function [21]. One important feature related to this classification is the size of the metagenomes, where those with more than 17 Mbp were defined as dense, and those with less than 9.4 Mbp were defined as non-dense. For example, soil microbial communities from a Minnesota Farm (SMF) represent a dense metagenome, and the Olavius algarvensis endosymbiont (OAEM) represents a non-dense metagenome. It is important to consider that in non-dense metagenomes it is common to find large DNA sequences (more than 1 Kb) that compensate for the few sequences and allow application of the sequence-based approaches.

Metagenome Dataset and Noncoding Sequences

We identified the proportion of coding sequences and NCS for each metagenome (Figure 1A), finding a smaller proportion of NCS (∼20.5%) that contrasts with a significant amount of coding sequences (∼79.5%) to be analyzed. Six metagenomes had more than 20.5% of NCS (EMR, OAMD1, OAMD4, OAMDG1, OAMED3, and SMF).
From this global landscape, the association between NCS and environmental conditions for some metagenomes, like the Endophytic microbiome from rice (EMR) and the Olavius algarvensis endosymbiont metagenomes (OAEM), is exposed, showing a relation between a high proportion of NCS and the HAs metagenomes. However, expected associations, like dense metagenomes with a high proportion of NCS, were discarded because dense metagenomes like SMF or the Methylotrophic community from Lake Washington sediment (MLWSF) have less NCS than others.

The association of functions to predicted ORFs or coding sequences via BLAST programs is a similarity-based method common in metagenomics that allows understanding the functional complexity of the metagenomes. Upon identifying which of the predicted coding sequences have associations with functional information (Pfam categories) [22], we found that not all coding sequences had functional assignments and, therefore, could not be used for metagenome functional description. The proportion of predicted ORFs associated to Pfam models was very low, ∼10% (Figure 1B), which in the context of all metagenomes can be represented as ∼13% of coding sequences with functional assignments, and ∼66.5% of coding sequences without functional assignments. Interestingly, there were non-dense metagenomes with more functional associations than dense metagenomes, as was the case for the Anaerobic methane oxidation (AOM) and Archaeal virus community from Yellowstone (AVCY) metagenomes, which had more than 40% of coding sequences with functional associations. In contrast, SMF or MLWS (dense metagenomes) had less than 10% of the coding sequences with functional associations. Finally, there were no associations between dense and non-dense metagenomes and coding sequences, because the proportions of coding sequences with functional associations varied among all metagenomes.

Metagenome Description by Sequence-based Methods

The sequence patterns used in this sequence-based approach exposed features associated with the composition and organization of DNA sequences. For composition, (G+C) content was the first measure used to characterize coding and complete (coding and noncoding) metagenome sequences (Table S2), radially plotted in Figure 2A. This pattern showed different ranges of distribution for coding and complete sequences, in which small peaks in the radial distribution represent non-specific (G+C) content and large peaks indicate a tendency to high (G+C) content. This analysis revealed that coding sequences (blue peaks) had some specific (G+C) content peaks, for example, around 68, 62, 56, and 44.5%, while the complete sequences (red peaks) only had one (G+C) content peak, around 43%, given by the AOM metagenome, which corresponds to a high proportion of (G+C) content for noncoding elements.

A second measure to characterize NCS in the metagenomes was implemented using the codon (for coding sequences) and trinucleotide (for complete sequences) contents (Figure 2B, Table S3). The radial distribution of these patterns clearly showed similarities and differences between coding and complete sequences. According to this, there are similar codon and trinucleotide compositions with similar usage tendencies, like GGC or GCG (red asterisk), which shows a relationship between coding sequences and NCS. That means that the codons and triplets might be used simultaneously for protein synthesis and likely for promoter regions.
On the other hand, the high use of trinucleotide compositions different from codons in complete sequences is the most relevant feature in this work. This is because the trinucleotides CGC, CCG, TTT, and AAA are highly used in NCS (green asterisks), which may be a relevant structural feature of metagenomes, like that observed for TRS. Interestingly, these tendencies, or high use of trinucleotides, were observed for aquatic metagenomes (UCG, MLWSF) and might be associated with a new environment-dependent feature for those metagenomes.

Metagenome Description by Similarity-based Methods

Similarity-based methods were applied to compare functional and structural features. The coding sequences with Pfam [22] associations were studied to identify relevant functions in metagenomes, but are not described further because functional environment-dependent features have already been described extensively [1][2][3]. A comparative file called the ''functional profile'' was generated for all metagenomes, which has all the functional assignments and their frequency of use in each metagenome. This profile was analyzed by hierarchical clustering, as shown in Figure 3 (Table S4). This approach allowed us to define clustering of metagenomes according to functional assignments. Herein, we identified regularities among the kinds of metagenomes and their sets of functions. For example, clusters were formed with the metagenomes from the Methylotrophic community from Lake Washington sediment (MLWSMO, MLWSME, MLWSFD, MLWSF) or from the Olavius algarvensis endosymbiont (OAEMD4, OAEMG3, OAEMD1), which are examples of specific niches with common sets of functions, whose microbial communities maintain similar sets of proteins related to the environment requirements or cell necessities. Interestingly, the metagenome SMF showed several common functions with the MLWS cluster, suggesting possible similarity in the microbial community and functional requirements in these soil and sediment ecosystems. The other metagenomes were arrayed in diverse clusters and involved a combination of Env, HAs, and Eng metagenomes, indicating that there are several common functions among these metagenomes. These common functions were selected and the most conserved functions were identified (Figure 4). As expected, these functional associations are related to cell viability, such as (catalytic and anabolic) enzymes, mobile element mechanisms, translocation of various substrates across membranes by ABC transporters, and phosphorylation-mediated switches by response regulator receiver domains (Table S5). These common functions show common dynamics among microorganisms from different environments, but not specific functions for each metagenome.

In order to identify the proportion of unique functional assignments for each metagenome, we used the functional profile to extract the number of unique assignments for each metagenome (Figure 5, Table S6). The result of this approach showed only 8 metagenomes with a unique set of functions. This feature was associated with specific adaptations in accordance with different niches or environmental conditions, because these metagenomes are distributed in the three studied categories. A particular feature in the metagenomes from MLWS (Methylotrophic community from Lake Washington sediment) is revealed by the fact that four of the five metagenomes had unique sets of functions, not common to all, which could reflect metabolic adaptations for particular substrates in the same community, as has been proposed [23].
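To make the two triplet measures concrete, the sketch below shows one minimal way to compute in-frame codon usage for predicted coding sequences versus frame-free, overlapping trinucleotide usage for whole sequences, and to normalize the latter into a Tn-style profile. This is our own illustration, not the authors' pipeline; all function names are hypothetical, and the exact normalization used in this study (see Methods) may differ.

from collections import Counter
from itertools import product

BASES = "ACGT"
TRIPLETS = ["".join(p) for p in product(BASES, repeat=3)]  # the 64 possible triplets

def codon_usage(cds):
    """Non-overlapping, in-frame triplet counts for a predicted coding sequence."""
    cds = cds.upper()
    return Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))

def trinucleotide_usage(seq):
    """Overlapping, frame-free triplet counts; applies to coding and noncoding DNA."""
    seq = seq.upper()
    return Counter(seq[i:i + 3] for i in range(len(seq) - 2)
                   if set(seq[i:i + 3]) <= set(BASES))  # skip ambiguous bases (N, ...)

def tn_profile(seqs, by="triplets"):
    """64-dimensional trinucleotide profile for one metagenome.

    by="triplets" normalizes by the total triplet count (a Tn(ts)-style value);
    by="length" normalizes by the total sequence length (a Tn(ls)-style value).
    """
    counts, total_len = Counter(), 0
    for s in seqs:
        counts.update(trinucleotide_usage(s))
        total_len += len(s)
    denom = (sum(counts.values()) if by == "triplets" else total_len) or 1
    return [counts.get(t, 0) / denom for t in TRIPLETS]

Profiles computed this way for every metagenome can be stacked into a matrix and fed to standard hierarchical clustering (for example, scipy.cluster.hierarchy.linkage) to produce trees analogous to those compared in Figure 7; triplets that score high in trinucleotide_usage but low in codon_usage (such as TTT or AAA here) flag candidate noncoding elements like TRS.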
Subsequently, we investigated whether metagenomic NCS were present in complete annotated genomes by examining the proportion of NCS mapped to available sequenced genomes. A total of 4189 genomes and sets of coding sequences were challenged against the entire set of metagenomic NCS. Figure 6A represents the percentage of BLAST hits associated with 31 taxonomic classes. Approximately 60% of the hits were found in the first four taxonomic classes (Gammaproteobacteria, Bacilli, Alphaproteobacteria, and Betaproteobacteria); the remaining 40% involved the other taxonomic classes. Figure 6B shows similar behavior for the hits obtained using coding sequences (Table S7).

Functional and Structural Profiles

Finally, the trinucleotide and codon usage profiles Tn(ls), Tn(ts), Cd(ls), and Cd(ts) were calculated. These correspond to normalized values used to compare metagenomes, based on the length (ls) and the number of triplets (ts) per sequence. These values defined four structural profiles, and the Pfam assignments defined one functional profile (see Methods). The comparison of the functional and structural profiles was obtained by means of hierarchical clustering trees (Figure 7). It is important to note that the structural and functional profiles were based on different percentages of analyzed sequences, since this depends on the method used: the sequence-based approaches defined Tn(ls) and Tn(ts) with 100% of the sequences and Cd(ls) and Cd(ts) with ~80%, while the similarity-based approach used ~13% of the sequences.

In order to analyze the relevance of the structural patterns for classifying the metagenomes, several comparisons were made between the structural profile trees and the functional profile tree. The Env and HAs metagenomes were organized in two clear clusters, showing patterns of organization that have been described by other authors [1-3]. These clusters were then compared with the structural profile trees (lines in Figure 7). Although the structural hierarchical trees differed in cluster distribution, some regularities were observed (shaded fringes), such as conserved clustering in the categories Env, HAs, or Eng between the functional and Tn(ts) profiles for some metagenomes.

Discussion

Here, we studied particular metagenomic features based on whole-sequence analysis that includes noncoding elements, which are usually left out of standard methods in metagenomics. This means that, with the common method of ORF prediction, only a subset of sequences is analyzed in metagenomes, and NCS are discarded or used only to improve gene-finding methods [24,25]. In this work, seven relevant aspects will be discussed.

The NCS from Several Metagenomes were Studied

The NCS are not well studied and are not used to identify functional or environmental features in metagenomic analysis. However, the proportion of NCS (~20.5%) is higher than that of coding sequences with Pfam assignments (~10%) that are commonly used in metagenome functional analysis (Figure 1). Although these proportions can depend on the prediction methods, a similar proportion of NCS was defined previously for other metagenomes and by different programs [5]. Thus, it is plausible to define these proportions of coding sequences and NCS as a particular feature of metagenome composition. In addition, considering the proportion of NCS in prokaryotes (~18%) [26] and in simple unicellular eukaryotes (~30%) [27], these metagenomic NCS could harbor relevant information regarding the different microbial populations.
A Wide Range of (G+C) Contents in Metagenomic NCS was Revealed by Sequence-based Methods

In microbial genomes, NCS are involved in regulation and rearrangements of the genomic content, both of which are important for adaptation to changing environments [9-11]. These features can be related to sequence patterns in NCS that differ from those in coding sequences, and such patterns are discriminatory elements for gene prediction [25]. This idea agrees with the sequence patterns presented in Figure 2A, where there are evident differences in the range of distribution of (G+C) content between coding and complete sequences, which could reflect abundant elements in NCS with a large range of compositions. Additionally, Figure 2A shows that all coding sequences are mainly distributed in a (G+C) range between 32 and 73% (dashed lines), where all metagenomes are located. This 'range of life' seems to be flanked by sequences rich in repetitions, perhaps subjected to different processes of selection, adaptation, or environmental stress. In contrast, below 32% and above 73% there seems to be no complete metagenome. Analysis of all complete bacterial genomes deposited at the NCBI shows that below 13.5% and above 75% it is hardly possible to find any living organism (Table S8). (G+C) percentages <32% and >73% seem to be primarily occupied by organisms involved in symbiotic associations and intracellular lifestyles or by aerobic organisms, in which (G+C) values are higher [28].

An Abundant Number of TRS Elements were Found in NCS

The results obtained with the codon and trinucleotide usage (Figure 2B) indicate that the abundant elements in NCS are TRS (TTT, AAA, CGC, CGG, and CCG). The definition of TRS in metagenomes depends on triplet density and on comparison with codons: a similar triplet density in both coding sequences and NCS points to the same element, whereas differences assign specific triplets to coding sequences (as codons) or to NCS (as TRS). Accordingly, we identified three relevant TRS (CGG, CGC, and CCG) by their high density and distribution across several metagenomes, mainly in the UCG and MLWSF metagenomes. These TRS could be involved in adaptations and genetic susceptibility to variations [15], or they could be associated with noncoding RNA with a regulatory function in transcriptional processes [11]. Thus, the TRS represent simple sequence repeats, abundant in metagenomes and possibly involved in adaptation to different environmental conditions, as has been defined in prokaryotic genomes [29]. This idea still has to be explored more deeply.

A Large Proportion of NCS is Present in Complete Genomes (Figure 6)

This can be discussed in two ways. One explanation might be that many sequenced bacterial organisms are part of the microbiota of these metagenomes or related to them or, in the worst case, reflect contamination; a further analysis with 16S rRNA might verify the presence of the taxonomic classes identified here. Another explanation might be related to lateral gene transfer.

Figures 3, 4, and 5 showed several typical behaviors of functional assignments per metagenome. This complex distribution (Figure 4) seems to be related to metagenome size; that is, there is a strong relationship between the number of functional assignments and the metagenome size (R² ≈ 0.91) (Table S1). For example, the SMF metagenome has the highest value, whereas the AVCYNL metagenome has the lowest one.
This is because the SMF metagenome is environmental (soil) while AVCYNL is host-associated, which might be expected, since this trend is observed for other related metagenomes. On the other hand, no functional pattern can be studied in the NCS via functional profiles, because there is no annotation for these sequences in the bacterial database. However, a diversity of functional elements, such as types of noncoding RNA (ncRNA), has been identified in NCS as key players in gene regulation [30].

Trinucleotide Patterns and Structural Profiles Help to Identify Features among Metagenomes and the Environment

In this work we carried out a whole-metagenome analysis using coding sequences and NCS and showed that NCS are significant and contain relevant information, such as the trinucleotide organization that in some cases is common to several metagenomes. With the aim of comparing the metagenomes based on associated trinucleotide values, we propose Tn(ls), Tn(ts), Cd(ls), and Cd(ts) as structural profiles with the capacity to embrace all the trinucleotides (Tn) or codons (Cd) and to define a comparable value for each metagenome. An increase in any of these values means that specific trinucleotides are used with high frequency in the metagenome; in contrast, low values indicate a non-conserved use of trinucleotides or codons. These patterns, which could help to identify features among metagenomes and correlations with environments, were used to classify the metagenomes in a hierarchical tree (Figure 7). The relevance of this clustering organization lies in the proportion of metagenome sequence used for each profile; for example, Tn(ts) uses 100% of the metagenome sequences, whereas the functional profile uses only ~13% of them. As a result, Tn(ts) can capture regularities in the NCS.

Here, we propose that the clustering similarities and differences of metagenomes based on Tn(ts) and functional profiles have biological meaning. The similarities are related to conserved cellular mechanisms in coding sequences and NCS, such as specific mechanisms of regulation for specific genes. In contrast, the differences are related to conserved elements not present in functional profiles but present in NCS, such as TRS or ncRNAs [11]; these describe possible connections among microorganisms based on complex mechanisms of regulation. The differences in clustering among the structural profiles are directly related to the normalization constants, i) length (ls) and ii) number of trinucleotides or codons per sequence (Tn(ts) or Cd(ts)), where Tn(ts) is the more precise way to compare metagenomes, judging by the comparison with the functional profile tree. This could be because there are more changes in the NCS than in the coding sequences, which conserve basic functions but also allow for a more dynamic genome.

The clustering of the MLWSMO, MLWSME, MLWSFD, and SMF metagenomes across all the trees would indicate that the possible organization or patterns in the NCS could be connected to the protein motifs present in the coding sequences. This regularity is only revealed for Env metagenomes, which are not affected by drastic environmental fluctuations and allow a controlled organization, as a model for genetic exchange and adaptation [31]; consider, for example, the temperature in archaeal organisms and the corresponding GC variations [32].
Thus, possible reorganization of genome elements in the NCS occurs less frequently in Env than in HAs metagenomes, where microorganisms need to adapt to the imposed and varying host cell conditions [27]. Finally, Eng metagenomes show no specific distribution or clustering, possibly because these communities are subjected to strong and varied environmental pressures to carry out the great variety of functions required for specific adaptations and genomic rearrangements in each environment [33]. However, it would be important to identify elements that could lead to a possible connection and be used in biotechnology.

A Framework for Studying the Environmental Metagenomes is Proposed

All these results suggest a related metagenomic framework. Despite analyzing a small number of metagenomes, this sample allows us to identify some significant correlations and trends in the direction Eng → HAs → Env. For that, some relevant features were examined and discussed (Figure 8, Table S1). Initially, the average (G+C) content per metagenome category increases very little (from 52.5 to 56%), but this trend may only be relevant for aerobic organisms [27]. Nonetheless, the Tn(ts) and Tn(ls) usages are moderately correlated with the (G+C) contents (R² ≈ 0.63). For some specific triplets (CGC, CCG, TTT, and AAA) these relationships are considerably higher (R² ≈ 0.9, ≈0.95, ≈0.85, and ≈0.86, respectively). The number of functional assignments increases greatly, and this is inversely related to the percentage of NCS, the abundance of TRS (especially TTT and AAA), the reorganization of the genomic NCS, and adaptation to the environment. These features, by metagenomic category, would thereby be connected to a larger number of NCS (rich in regulatory sequences and TRS) that might contribute to increasing the number of genomic rearrangements and establishing selective adaptation processes through the use of a smaller number of functional assignments. All these trends and directions seem to suggest a related framework of metagenomic parameters (or features) moving from 'restrictive' environments to environments of 'free-living organisms'.

In conclusion, the sequence-based methods, specifically Tn(ts), effectively help to define regularities in the organization of the metagenomes, and the NCS can contain relevant information for metagenome classification and microorganism functional description that needs to be studied more deeply. Undoubtedly, the common functional environment-dependent features proposed by other authors could be associated with structural environment-dependent features. Consequently, environment-dependent features could be defined by the study of the whole metagenome. Thus, the proposed metagenomic framework is only possible when taking into account all the information encoded by complete metagenomes.

Materials and Methods

Five methodological steps were followed in this study (Figure S1).

Metagenome Data Sets

A total of 23 metagenomes were downloaded from the metagenomics program at the DOE Joint Genome Institute, JGI (http://www.jgi.doe.gov/) (February 2010). Based on type of ecosystem, host phylogeny, and function, these metagenomes are classified as environmental (Env), host-associated (HAs), and engineered (Eng) [21]. The downloaded sequences correspond to DNA scaffolds as DOE-JGI presents the data, pre-cleaned. An additional cleaning step was performed with Python scripts to discard sequences of ≤20 bp or with X (unknown) and N (unspecified) contents >25%.
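The additional cleaning step lends itself to a few lines of code. This is a sketch of the stated filter only, not the authors' scripts (which are not included with this text); the example scaffolds are invented:

```python
def keep_scaffold(seq, max_xn_frac=0.25):
    """Keep scaffolds longer than 20 bp whose X/N content is at most 25%."""
    seq = seq.upper()
    if len(seq) <= 20:
        return False
    return (seq.count("X") + seq.count("N")) / len(seq) <= max_xn_frac

scaffolds = {"s1": "ATGC" * 10, "s2": "N" * 15 + "ATGCATGCAT", "s3": "ATG"}
cleaned = {name: s for name, s in scaffolds.items() if keep_scaffold(s)}
print(list(cleaned))  # -> ['s1']; s2 fails the X/N filter, s3 is too short
```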
Noncoding Sequence Identification and Sequence-based Methods

Coding and noncoding sequences were determined through ORF prediction with the MetaGeneMark algorithm [24]. Three data sets were defined for each metagenome: coding sequences (ORF predictions), complete sequences (coding and noncoding sequences), and noncoding sequences (regions of the sequences without ORF predictions). The sequence-based methods applied in this work involved the definition and analysis of three sequence patterns: (G+C) content, codon usage (Cd), and trinucleotide usage bias (Tn), the latter following [34]. These patterns were applied to coding and complete sequences, conferring structural pattern values defined by two assessments: i) trinucleotide (complete sequences) or codon (coding sequences) values based on length, Tn(ls) or Cd(ls) respectively, defined as the sum of trinucleotide usage frequencies (Tn) or codon usage frequencies (Cd) over the length of the sequence (l): Tn(ls) = Σ(Tn)/l or Cd(ls) = Σ(Cd)/l; and ii) trinucleotide or codon values based on the number of trinucleotides or codons per sequence, Tn(ts) or Cd(ts) respectively, defined as the sum of trinucleotide usage frequencies (Tn) or codon usage frequencies (Cd) over the number of trinucleotides n(Tn) or codons n(Cd): Tn(ts) = Σ(Tn)/n(Tn) or Cd(ts) = Σ(Cd)/n(Cd). These values were organized in a comparative table named the 'structural profiles' (Table S3).

Functional Assignments and Similarity-based Methods

The peptides from predicted ORFs were assigned to functional features using BLASTP [13] (BLAST 2.2.25 release), as proposed in [35]. Pfam-A was used as the local database (February 2010 release, 11912 models in total, available at www.sanger.pfam.com) [22], with cutoffs of e-value ≤ 1e-30 and identity ≥ 95%. The resulting Pfam assignments were integrated into a unique file named the 'functional profile' table (Table S4), which lists the Pfam models with a value for each model defined as the frequency of assigned sequences for that model per metagenome: f(Pfam) = (Pfam_nq)/N(Pfam), where f(Pfam) is the frequency of the Pfam model in the metagenome, Pfam_nq is the number of BLAST query assignments for the model, and N(Pfam) is the total number of Pfam models with associations in the metagenome. An additional approach was applied, consisting of BLAST searches of NCS against complete bacterial genomes for the association of any annotated function or taxonomy (BLASTn, e-value ≤ 1e-10).

Functional and Structural Profiles

Four structural profiles were made, two based on coding sequences (Cd(ls), Cd(ts)) and two based on complete sequences (Tn(ls), Tn(ts)), along with one functional profile based on functional associations. These profiles are comparative tables across the 23 metagenomes. The functional and structural profiles were analyzed by hierarchical trees using the Hierarchical Cluster Explorer tool (HCE) [36].

Metagenomic Framework

For each metagenome category (Env, HAs, and Eng), ten parameters (size, whole-metagenome (G+C) content, functional assignments, Tn(ls), Tn(ts), CGC, CCG, TTT, AAA, and percentage of NCS) were averaged and calculated per metagenome and per metagenome category (Table S1). Coefficients of correlation were calculated by simple linear regression for some of these parameters.

Figure 8. A metagenomic framework. At the top, the three metagenomic categories are shown. Averaged values per category for each parameter are shown above the arrows. The parameters (1-8) were calculated from complete metagenomes, parameter 9 was calculated from NCS (Table S1), and parameters 10 and 11 are behaviors inferred from the literature [9-11].
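The structural-profile formulas above are stated compactly, and one point is ambiguous in the text: over which set the usage frequencies are summed. The sketch below adopts one plausible reading, summing the usage frequency of the triplet observed at each position; treat the exact normalization as an assumption rather than the authors' definition:

```python
from collections import Counter

def structural_values(seq, step):
    """Return (Tn(ls), Tn(ts)) for one sequence. step=1 gives trinucleotide
    values on complete sequences; step=3 gives the codon analogues Cd(ls)
    and Cd(ts) on coding sequences."""
    seq = seq.upper()
    triplets = [seq[i:i + 3] for i in range(0, len(seq) - 2, step)]
    n = len(triplets)
    freqs = {t: c / n for t, c in Counter(triplets).items()}  # usage freqs Tn
    s = sum(freqs[t] for t in triplets)                       # Σ(Tn)
    return s / len(seq), s / n                                # Tn(ls), Tn(ts)

tn_ls, tn_ts = structural_values("ATGGCGCCGTTTAAACGC" * 20, step=1)
print(round(tn_ls, 4), round(tn_ts, 4))
```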
Focusing on structural features of a scenario: An activity on expected frequencies, deviations, and chi-square contributions

In this paper, I present a way to teach statistics by helping students reuse structural features of a problem to interpret those features. It consists of explaining concepts by presenting corresponding analogical examples following a given story. Expected frequencies, deviations, and chi-square contributions are presented in relation to a story where six people found tokens on a geocaching course. The activity prepares students to gradually interpret these three elements of the chi-square test statistic on their own. The activity also allows for a discussion of the benefits of the chi-square test of independence, namely its added value compared to proportions, which can show spurious results.

Concept to be presented

Several studies encourage the use of real-world examples and humour when teaching statistics (Bizon, 2018; Carver et al., 2016; Cousineau & Harding, 2017; Zeidner, 1991). This approach doesn't exclude integrating independent work on the part of the students. A recent study suggests that making students read worked examples might enhance their performance (Brisbin & Maranhao do Nascimento, 2019). Structured yet casual guidance seems to be a favourable teaching practice.

A given scenario where a problem has to be resolved can be deconstructed into two main components: structural features and surface features (Quilici & Mayer, 1996). Structural features include elements of a problem that allow a solution to be found. As for surface features, they include elements related to the story itself. However, structural and surface features are integrated differently depending on the type of statistical analysis presented. For example, faced with various scenarios, undergraduates are better at identifying situations resolved by correlation analysis than those resolved by chi-square independence tests (Quilici & Mayer, 2002). The proposed activity helps students direct their attention to the structural elements of a problem using a parallel scenario. These structural elements are then used to interpret results.

The chi-square test of independence is interesting in this respect: in addition to being considered by some teachers as "essential for good statisticians" (Harth, 2018), it may be one of the first statistical tests that students are introduced to in their academic journey. We can make it more intuitive for two interrelated reasons: its familiarity and its simple calculation. First, the chi-square test of independence is based on frequencies, and count data are found in a variety of everyday situations: the number of items purchased in a store, the number of grammar errors in a text, the number of crimes committed by teenagers, etc. Second, the chi-square test of independence is relatively easy to calculate in a spreadsheet or by hand (McHugh, 2013). Thus, teaching this test through a scenario can introduce expected frequencies, deviations, and chi-square contributions in a more familiar way.

The chi-square test of independence tests the relationship between two qualitative variables with nominal scales. Because the mean of categorical data is meaningless, the test compares observations with predictions. In other words, it compares observed frequencies with expected frequencies (Field, Miles, & Field, 2012).
This comparison allows contrasting two scenarios and answering, for example, the following question: which modality of which group has higher or lower frequencies than expected? The greater the difference between observed and expected frequencies, the higher the chi-square statistic and the less likely the difference is due to chance. Observed frequencies come from the cross-frequency table directly drawn from the data. The expected frequencies are calculated as follows for each cell:

E_f = (r_t × c_t) / T_t

where E_f is the expected frequency, r_t is the row total, c_t is the column total, and T_t is the table total. In a way, the expected frequencies table is the scenario where all cells contribute to the cross-frequency table according to their weight per group and per category (or modality) in the sample.

The differences between observed and expected frequencies are called deviations, which are calculated as follows for each cell, excluding the totals:

D = O_f − E_f

where D is the deviation, O_f is the observed frequency, and E_f is the expected frequency. The larger the deviations, the less likely the hypothesis of independence between the two qualitative variables (Gilles & Maranda, 1994). The chi-square contributions indicate which cells contribute most to the result:

C = D² / E_f

where C is the contribution, D is the deviation, and E_f is the expected frequency. After calculating all the chi-square contributions and summing them up, we refer to a threshold. If the sum of the chi-square contributions is greater than the threshold, the deviation between observed and expected frequencies is statistically significant; therefore, the probability that the two variables are independent of each other is low. This threshold, as well as the degrees of freedom, is not covered by the proposed activity, for the sake of parsimony. The proposed activity focuses on expected frequencies, deviations, and contributions to the chi-square statistic.

Activity

Typically, metaphors present concepts so that a source models a target. For example, a correspondence metaphor involves relating entities or attributes to their counterparts in the target. This strategy can be used to teach statistics (Tay, 2022). We propose the use of a correspondence metaphor to highlight the structural features of a scenario by equating these features in the story. The proposed activity therefore follows Cousineau and Harding's (2017) recommendation: we present calculations as a step in a story rather than as an intimidating equation. Accordingly, the familiar scenario is presented first. In a second step, we present a survey situation.

Firstly, six people must find tokens on a geocaching course. Common names are suggested: Alice, Bob, Chuck, Dan, Eve, and Franck. We are interested in the number of tokens found by everyone. These six people have different characteristics. Some are amateur athletes: this is the case for Alice, Chuck, and Eve. Others are professional athletes: Bob, Dan, and Franck compete professionally. In addition, these people used different means of transport to find their tokens: Alice and Bob used their cars that day, Chuck and Dan cycled, and Eve and Franck walked the course. The people and their performance are illustrated in Figure 1.

The setting of the survey context is presented next: 337 sports amateurs and athletes are asked what their usual mode of transportation is; their answers appear in the cross-frequency table found in Figure 1, to the right. The columns represent the two groups, i.e., sports enthusiasts and athletes.
The rows represent three means of transport: car, bike, and walk. In both scenarios, the idea is to compare an actual performance to an expected performance. The structural elements of the story include the columns and rows of the table, i.e., the two types of people and their means of transport to carry out the course, but also the "performance", whether expressed as tokens found or as people answering a survey. The six people correspond to the number of cells of the cross-frequency table (excluding the marginal totals). Whether in the geocaching course scenario or in the survey and its cross-frequency table, to make a comparison we ask how the groups (non-athletes and athletes) relate to the modalities (means of transport). In other words, did Alice, considering that she is not a professional athlete and that she used her car on the course, perform as expected? And what is meant by "performed as expected"? If the tokens were distributed according to the table columns and rows, we would draw two observations:

1. The amateur athletes would have 56.38% of the tokens of their column, because they have 190 tokens out of the total of 337 tokens found on the geocaching course (190/337 × 100% = 56.38%). The professionals would have the balance of tokens for their respective column; indeed, they found a total of 147 tokens (147/337 × 100% = 43.62%).

2. The tokens would be distributed according to the mode of transportation used in the geocaching course. For example, both Alice and Bob used their car. Together, they collected a little over half of the tokens found (185/337 × 100% = 54.90%). Thus, this portion of the tokens should be divided between them so that Alice gets 56.38% of the tokens and Bob gets the balance, to reflect the type of athlete they are, amateur or professional.

Figure 1. Six people, amateur and professional athletes, found tokens on a geocaching course using different means of transportation. Left: pictorial representation; right: cross-frequency table of the usual means of transportation of amateur sports enthusiasts and athletes (n = 337). The images were designed using the resources of Flaticon.com.

Tables 1 and 2 illustrate the proportions required to obtain a distribution by groups and modalities according to the situation presented. To produce the expected performance for each person, we gather the frequencies of a proportional distribution among groups and categories following the proportions presented above. This prediction represents the number of tokens expected per person. The prediction is theoretical: there are decimals even though a token can't be split.

How close (or far!) is the prediction to the actual performance? Some performed better, some performed worse than expected. To find out, we subtract the predicted performance from the actual performance. The remainder is called a deviation. When a person finds more tokens than predicted, the deviation is positive. In that case, the concerned group in the cross-frequency table is said to be overrepresented for the corresponding qualitative category. Figure 2 illustrates the predicted performance for each person and the token difference with the actual performance. For example, Alice found 120 tokens when 104.3 were predicted. She overperformed; the deviation indicates a difference of +15.7 tokens. In the upper right of Figure 2, tables present the theoretical frequencies of the usual means of transportation of amateur sports enthusiasts and athletes, and the deviations.
Amateur athletes are overrepresented in the car category, as indicated in the table on the lower right of Figure 2. As another example, Eve found 14 tokens when 25.93 were predicted for her: she underperformed, with a deviation of −11.93 tokens. In her case, the deviation is negative, indicating that, in the survey, amateur athletes are underrepresented in the walking category.

Knowing which table cell or person over- or underperformed, the number of predicted tokens must be put into perspective: if this number is small, each additional token found by a given person will have more impact than if the number is large. Figure 3 illustrates this conceptually. For each person, the number of tokens found is presented with a comparison to their prediction in percentages; the contributions to the chi-square statistic are presented on the right side of Figure 3. The cells with the highest contributions to the chi-square statistic correspond to the individuals who deviated most, in proportion to the number of tokens found, from their predicted performance. In other words, differences between types of athlete and means of transportation appear to be more pronounced for those who walk. For example, on one hand, Franck, the professional athlete, found 32 tokens when 20.065 were predicted: he overperformed by 11.94 tokens, or 37.30%. On the other hand, Eve, the sports enthusiast, found 14 tokens when 25.935 were expected: she underperformed by 85.25%! In the survey, in comparison with other means of transportation, the over-representation of athletes in the walking category contributes most to the difference between sports enthusiasts and athletes in terms of the usual means of transportation. Together, the two walking cells contribute nearly two thirds of the sum of the contributions to the chi-square statistic (12.591/18.550 × 100% = 67.86%). By relying solely on proportions, we wouldn't have been able to see this difference, since they represent only 46 people out of 337.

Strategy to Assess the Activity and Conclusion

Once the setting-driven story is completed, the activity is repeated in a slightly different version. During this second iteration, students must reconstruct a similar story in small groups. This second situation concerns a research object related to their academic discipline, for example sociology or criminology. For example, it could be the type of crime for a sample of incarcerated adults and youths: violent crime, property crime, drug crime, etc. In this case, these offenders could also participate in a geocaching course for the story scenario. The question would then be whether being an adult or youth offender is associated with over- or under-representation by crime type. Students are encouraged to identify and use the structural elements to formulate the research question, but also in the interpretation of the results. This second setting is partly guided. Conclusively, a third situation empowers the students to interpret expected frequencies, deviations, and contributions by themselves. This one is not guided. The students must work on their own in small groups and present their story to the whole class.
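The three quantities of the activity are easy to verify numerically. The sketch below recomputes them for the story's cross-frequency table; the car and walk rows and the column totals are quoted in the text, while the bike row (56 and 50) is reconstructed here from those marginal totals, so treat it as inferred. The sum reproduces the 18.550 quoted above:

```python
# Cross-frequency table: rows are means of transport, columns are
# [amateur sports enthusiasts, athletes].
observed = {
    "car":  [120, 65],
    "bike": [56, 50],   # reconstructed from the marginal totals in the text
    "walk": [14, 32],
}
col_totals = [sum(row[j] for row in observed.values()) for j in (0, 1)]  # 190, 147
table_total = sum(col_totals)                                            # 337

chi2 = 0.0
for transport, row in observed.items():
    row_total = sum(row)
    for j, o in enumerate(row):
        e = row_total * col_totals[j] / table_total  # expected frequency E_f
        d = o - e                                    # deviation D
        c = d ** 2 / e                               # chi-square contribution C
        chi2 += c
        print(f"{transport:4s} col{j}: E={e:7.3f}  D={d:+7.3f}  C={c:5.3f}")
print(f"sum of contributions = {chi2:.3f}")          # 18.550, as in the text
```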
Enhanced diffusion and enzyme dissociation

The concept that catalytic enzymes can act as molecular machines transducing chemical activity into motion has conceptual and experimental support, but much of the claimed support comes from experimental conditions where the substrate concentration is higher than biologically relevant and accordingly exceeds kM, the Michaelis-Menten constant. Moreover, many of the enzymes studied experimentally to date are oligomeric. Urease, a hexamer of subunits, has been considered to be the gold standard demonstrating enhanced diffusion. Here we show that urease and certain other oligomeric enzymes of high catalytic activity above kM dissociate into their smaller subunit fragments, which diffuse more rapidly, thus providing a simple physical mechanism of enhanced diffusion in this regime of concentrations. Mindful that this conclusion may be controversial, our findings are supported by four independent analytical techniques: static light scattering, dynamic light scattering (DLS), size-exclusion chromatography (SEC), and fluorescence correlation spectroscopy (FCS). Data for urease are presented in the main text and the conclusion is validated for hexokinase and acetylcholinesterase with data presented in supplementary information. For substrate concentration regimes below kM, at which these enzymes do not dissociate, our findings from both FCS and DLS validate that enzymatic catalysis does lead to the enhanced diffusion phenomenon.

INTRODUCTION

The ubiquity of enzyme catalysis in biology and technology has become even more interesting with the discovery that enzyme catalysis appears to transduce chemical activity into motion leading to enhanced diffusion, a conclusion that came originally from experiments [1-9] and is now buttressed by theoretical analysis [7,8,10-16]. Much of the experimental support comes from considering enzymes of high catalytic turnover, among them urease, acetylcholinesterase, and hexokinase, the three enzymes that we consider in this study. We are motivated by noticing that these enzymes are oligomeric and evolved to operate within biological cells at substrate concentrations below the Michaelis-Menten constant kM, which for urease is 3 mM.17 As many (not all) of the experiments demonstrating enhanced diffusion operate at significantly larger substrate concentrations, it is interesting and relevant to inquire into the origins of enhanced diffusion when substrate concentrations exceed those that are biologically relevant. We focus on urease, which has been considered the gold standard demonstrating enhanced diffusion.1-4,7-9,14,18 One product of urease catalysis, CO2, is a gas whose presence might influence mobility. For generality, we also study other enzymes, hexokinase and acetylcholinesterase.

Fluorescence-based measurements of diffusion in the urease system show that it grows in two steps. This enzyme's effective diffusion coefficient, measured by fluorescence correlation spectroscopy (FCS), grows smoothly with increasing substrate concentration up to kM and saturates at a plateau of 25% enhancement.8 We interpreted this regime in terms of enzyme leaps stimulated by the catalytic activity, such that chemical activity led to the enhanced mobility.8 When the substrate concentration was further increased, a second rise of enhanced diffusion was observed, up to 80% faster than in the absence of substrate.8
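Because kM anchors the two concentration regimes discussed throughout, it may help to restate the standard Michaelis-Menten rate law from which it comes (a textbook relation, not a result of this paper):

```latex
% v: catalytic rate; v_max: saturating rate; [S]: substrate concentration.
% At [S] = k_M the rate is half-maximal, which separates the biologically
% relevant regime ([S] < k_M) from the high-substrate regime studied here.
\begin{equation}
  v \;=\; \frac{v_{\max}\,[S]}{k_M + [S]}
\end{equation}
```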
The observation of a two-step rise is intriguing because the second concentration regime, substrate concentrations in the 0.1-1 M range,1-4,14,18 was the condition of many prior experimental studies. We speculated that the second regime of extra-enhanced diffusion might reflect enzyme dissociation8 but made no direct test of this hypothesis for this system, though enzyme dissociation into subunits was reported long ago for F1-ATPase19 and has more recently been discussed for other oligomeric enzymes.20,21 Meanwhile, concerns were raised that fluorescence-based measurements might introduce experimental artifacts incorrectly interpreted as enhanced diffusion.9,20-23 With these considerations in mind, here we revisit the urease system and test the enzyme dissociation hypothesis. Mindful that they may be controversial, our conclusions are tested by four independent analytical techniques: static light scattering, dynamic light scattering (DLS), and size-exclusion chromatography (SEC), in addition to fluorescence correlation spectroscopy (FCS). Buffer conditions and other experimental protocols are specified in Supplementary Material.

RESULTS AND DISCUSSION

Choice of enzyme samples. To the best of our knowledge, all studies of enhanced diffusion in the urease system concern urease extracted from Canavalia ensiformis, the common jack bean. The source of the urease sample was specified in some studies1,2,8 and not specified in others,3,4,7,9,14,18 as summarized in Table S1 (Supplementary Information). To anticipate the conclusions of the following discussion, we found it reassuring that, despite quantitative differences according to which source of urease we used, the qualitative conclusions were the same. Bearing in mind the doubts recently expressed about whether FCS is a true measure of translational diffusion,9,20-23 we were motivated to perform experiments independent of and complementary to FCS. In order to make the findings most comparable to the FCS (fluorescence) measurements on which so much earlier data in the literature relied,1-4 our principal independent tests were performed on enzymes labeled in the same manner as for FCS experiments, with fluorescent dye, using the procedures described in Supplementary Material. Specifically, the light scattering experiments were performed on urease labeled with fluorescent dye.

Figure 1. Static light scattering of urease. (a) Schematic diagram in which a multimeric enzyme may dissociate into subunits. (b) Zimm plot for sample Ur1f at various urea concentrations, where c is the mass concentration of enzyme, q is the wavevector, and the symbols K, R, and k are constants with standard textbook meanings in static light scattering: K is an optical constant, R is the Rayleigh ratio, and k is a constant chosen arbitrarily to shift curves on the x-axis according to the Zimm plot method. Data are open symbols, plotted from top toward bottom at progressively smaller c. Filled symbols denote these data extrapolated to zero concentration. Lines are least-squares fits to the data. Yellow, blue, black, brown, grey, and green show urea concentrations of 1 M, 10⁻¹ M, 10⁻² M, 10⁻³ M, 10⁻⁴ M, and 10⁻⁵ M in 100 mM PBS buffer (pH 7.2), respectively. (c) Weight-average molecular weight of urease, which is the inverse of the y-intercept in (b), plotted against urea concentration.

Table I. Enzymes studied and their Michaelis-Menten characterization. Characterization was done in this laboratory except when identified by literature reference.
We now mention some differences between the samples listed in Table S1, especially our finding that the urease of highest catalytic activity aggregated when its concentration exceeded 100 nM. Indeed, the tendency of urease to aggregate in 100 mM PBS buffer (pH 7.2) when the enzyme concentration exceeded 100 nM was noted by us earlier.7,8 Those experiments were performed using enzymes of the highest purity available to us commercially, Sigma-Aldrich "Type C-3" urease. This presented a difficulty: we wished to respond constructively to the voiced concerns that FCS is artifact,9,20-23 yet only FCS possesses the sensitivity needed to measure diffusion at nM concentrations.

Of all the experiments one might use to test the conditions under which oligomeric enzymes might dissociate (Fig. 1a), scattering experiments are the most direct, as they can give absolute measurements of molar mass. Using the sample of highest catalytic activity, we attempted static light scattering at nM conditions where FCS showed the absence of aggregation, but these attempts failed owing to insufficient sensitivity; assessing this sample with our complementary experiments was therefore not feasible. We consequently turned to a sample that we found to be less aggregation-prone, Sigma-Aldrich "Type IX". In what follows, we refer to this as sample Ur1, and to the sample of higher catalytic activity used in our earlier experiments7,8 as sample Ur2. Although control experiments showed the same qualitative conclusions regardless of labeling (Supplementary Information), we found that labeling the enzyme with fluorescent dye modulated the catalytic activity, probably by modulating access to the active site. In what follows, we refer to unlabeled and dye-labeled urease as samples Ur1u and Ur1f, respectively. Labeling and purification processes are described in Supplementary Information. Figure S1 shows Michaelis-Menten kinetic curves for samples Ur1f (urease) and Acf (acetylcholinesterase).

Static light scattering. First, the absolute weight-average molecular weight (Mw) of urease sample Ur1u was determined using static light scattering.35 The so-called Zimm plot is the standard way to analyze data of this kind. On the ordinate, a quantity proportional to sample concentration (c) is multiplied by instrumental constants (K) and divided by a measure of scattering at a given specified angle (R). On the abscissa, one plots the wavevector squared (q²) shifted by concentration (kc), according to the standard Zimm plot fitting method. Extrapolating both wavevector and concentration to zero, one obtains the y-intercept, which is the inverse weight-average molecular weight, Mw. At substrate concentrations below kM this gave Mw = 5.5 × 10⁵ g·mol⁻¹ (Da), consistent with the known hexameric form of this enzyme.17 The substrate concentration was then increased in small increments by up to 4 orders of magnitude, up to 1 M. It is obvious in Fig. 1b that Mw decreases. Inspecting a plot of Mw against substrate concentration (Fig. 1c), one sees that Mw is constant below 1 mM but decreases when the substrate concentration is higher. At 1 M concentration the molecular weight is slightly above one-half the original value, suggesting that in the presence of urea this enzyme became heavily dissociated. Dissociation into trimers was not complete, as Mw slightly exceeded one-half the initial value.

Figure 2 (caption fragment). (b) Distributions of hydrodynamic radius Rh inferred using the CONTIN algorithm from the data in panel (a); relative abundance is plotted against radius; the Rh of the black peak is consistent with the reported value.36 (c) Hydrodynamic radius Rh plotted against logarithmic substrate concentration across 4 orders of magnitude: the low-Rh peak of the bimodal distribution (open black), the high-Rh peak (filled black), and the average Rh weighted by relative abundance (blue) are shown. (d) Relative diffusion coefficients implied by the data in panel (c), plotted against logarithmic substrate concentration across 4 orders of magnitude; symbols as in panel (c).

Slopes of Zimm plots quantify pairwise interactions, as they are proportional to the second virial coefficient, A2; positive and negative slopes imply repulsion and attraction, respectively. The A2 is negative at substrate concentrations above 10 mM, more strongly so with increasing substrate concentration, indicating that pairwise attraction grows with increasing concentration (Fig. S2), that is, a growing tendency toward aggregation. Control FCS measurements described below confirm the same pattern of two-regime enhanced diffusion, below kM and above it, as reported earlier for sample Ur2f.8

Dynamic light scattering (DLS). The static light scattering experiments measured molar mass, not diffusion. In order to measure diffusion without using FCS, we turned to dynamic light scattering (DLS). This standard method quantifies the photon autocorrelation function and extracts from it the implied translational diffusion coefficient D, from which one infers the hydrodynamic radius Rh of an equivalent sphere.37 Using sample Ur1f, measurements were made over a relatively short time, as soon as feasible after adding substrate (30 s), to minimize the opportunity for aggregation. In the absence of substrate and under conditions of very low substrate concentration, the measured Rh ≈ 8.5 nm is consistent with literature values for the radius of urease.36 Fig. 2a compares the autocorrelation functions G(t) below and above kM that we determined for this sample (Fig. S1). The curve for the latter is shifted to faster time lags, indicating faster diffusion, and also shows a two-step process, obvious to the eye in this curve. This contributes to a bimodal distribution when these curves are deconvoluted to show the relative abundance of diffusing entities of different hydrodynamic radius Rh, as plotted in Fig. 2b. To perform the deconvolution we used the standard CONTIN algorithm.38 The bimodal distribution at high substrate concentration shows one peak close to the original one, and also a second peak of the size expected if urease dissociates into trimers.

Figure 3 (caption fragment). (b) Elution of standard proteins on a Superose 6 column; elution volumes (Ve) are identified with the maximum peak height of each respective protein; thyroglobulin, ferritin, and conalbumin have molar masses of 669,000 g/mol, 440,000 g/mol, and 75,000 g/mol, respectively. (c) Relative sizes of the eluted urease, extracted from the peaks of each chromatogram, plotted against logarithmic substrate concentration; the ordinate of this bar graph shows the relative abundance of hexamer (black), trimer (red), and dimer (grey). The relatively high enzyme concentration needed for this experiment is believed to explain quantitative differences between Fig. 3c and Fig. 4d.

From these distributions we took the peak maxima, calculated their abundance-weighted averages, and plotted these quantities against substrate concentration in Fig. 2c. Finally, diffusion coefficients were calculated from the Stokes-Einstein equation.
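The conversion from hydrodynamic radius to diffusion coefficient, and the later estimate of Rg from molecular weight (see "Static and dynamic measurements compared" below), both reduce to a few lines. A minimal sketch, assuming water-like viscosity at 25 °C; the numerical inputs are illustrative:

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
NA = 6.02214076e23         # Avogadro constant, 1/mol

def stokes_einstein(rh_m, temp_k=298.15, eta_pa_s=8.9e-4):
    """Diffusion coefficient D = kB*T / (6*pi*eta*Rh), SI units."""
    return KB * temp_k / (6 * math.pi * eta_pa_s * rh_m)

def radius_from_mass(mw_g_mol, density_g_cm3=1.0):
    """Equivalent-sphere radius (m) for a globular protein of given Mw."""
    vol_cm3 = mw_g_mol / (density_g_cm3 * NA)
    return (3 * vol_cm3 / (4 * math.pi)) ** (1 / 3) / 100.0  # cm -> m

print(f"D(hexamer, Rh=8.5 nm) = {stokes_einstein(8.5e-9):.2e} m^2/s")  # ~2.9e-11
print(f"Rg estimate for Mw=5.5e5: {radius_from_mass(5.5e5) * 1e9:.1f} nm")
```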
Diffusion enhancement of this enzyme relative to the substrate-free situation is plotted in Fig. 2d against substrate concentration. Note the peak from the highest Rh, which diminishes with increasing substrate concentration; the peak with lower Rh, which also diminishes with increasing substrate concentration; and the average inferred from the average Rh.

Size-exclusion chromatography (SEC). This standard method to characterize enzyme purity39 was implemented by measuring elution through a Superose 6 SEC column (GE Healthcare), which has a measurement range from 5,000 to 5,000,000 Da. The column was calibrated using standard proteins (thyroglobulin, ferritin, conalbumin), as shown in Figure 3b. This allowed the approximate molecular weights of individual peaks of our unknown sample to be determined. For sample Ur1f, representative elution curves are plotted in Fig. 3a. In the absence of substrate, the SEC chromatogram of urease shows one major peak at elution volume Ve = 1.6 ml, and from comparison to the peptide standards this corresponds to 550,000 g·mol⁻¹, the molecular weight of the urease hexamer. From 5 mM urea and above, a slight shoulder appears on the higher-elution side, indicating generation of smaller units. With increasing urea, this becomes a distinct peak centered at 75,000 g·mol⁻¹. There are also signs of aggregation: for 100 mM urea, but not yet for 10 mM urea, a second distinct peak appears at Ve = 1.5 ml, assigned to 700,000 g·mol⁻¹, some kind of aggregate that grew with further increase of urea concentration. Focusing on dissociation of the enzyme into subunits, we deconvoluted the elution peak areas to give the relative abundance of hexamers, trimers, and dimers as a function of substrate concentration, as plotted in Figure 3c.

Measurement by SEC takes at least one hour of elution after the sample solution is injected into the column. Unlike the measurements we made using static and dynamic light scattering, which were completed within a few minutes, the SEC experiment was therefore more sensitive to the slow process of protein aggregation. The relatively high enzyme concentration needed for this experiment, 200 nM (Supplementary Information), is believed to explain the quantitative differences between Fig. 3c and Fig. 4d. Aggregation is suspected to involve denatured urease, but as aggregation was not the point of this study, this matter was not pursued.

Intensity-weighted FCS. Fluorescence correlation spectroscopy (FCS), a standard method to measure the diffusion of nM-level quantities of molecules including proteins, was used by us and others in earlier studies of enhanced diffusion. The measured autocorrelation curves G(t) fit a free-diffusion model nicely regardless of urea concentration, except that upon inspecting the fitting residuals for high urea concentration, small systematic deviations are observed at the most rapid time scales, faster than hundreds of microseconds (Fig. 4a). As this time scale is a minor contribution to the overall fit, one-component fitting was used. While customarily analyzed through the intensity-intensity autocorrelation function, the raw data from this experiment consist of fluorescence intensity traces as a function of time. In fact, perfect dye-labeling efficiency is impossible, but the labeling protocol uses an excess of dye, so for this argument we assume that the dye has labeled all subunits.
To the extent this argument holds, it is relevant to consider how raw values of the fluorescence intensity may change. The intensity was time-independent during measurements in buffer and in 1 mM urea. On the other hand, the intensity gradually diminished over time in the presence of 1 M urea (Fig. 4b). Pursuing these differences and using sample Ur1f in the absence of substrate and at substrate concentrations below kM, we observed a nearly Gaussian intensity distribution. For higher substrate concentrations this became progressively broader, so we deconvoluted the intensity distributions as illustrated in Fig. 4c. The idea behind the deconvolution was that if unperturbed urease hexamers are uniformly labeled so as to display intensity Imax when passing through the FCS confocal spot, dissociated trimers will display (1/2)Imax and dimers will display (1/3)Imax. Deconvolution was performed according to this reasoning. As a function of substrate concentration, the fluorescence intensity was separated into relative abundances of hexamers, trimers, and dimers, as plotted in Figure 4d.

A technical point is that in this analysis, certain quantitative differences can result depending on how the intensity traces are binned in time. Single-Gaussian and bimodal distributions are a robust conclusion regardless of binning size at 1 mM and 10 mM urea, conditions under which urease has experienced little dissociation into subunits. On the other hand, at the 100 mM substrate concentration, 1 ms binning caused self-averaging of short-time events; under this condition, we found that binning at 0.2 ms revealed a trimodal rather than the bimodal distribution implied by 1 ms binning. These considerations are believed to be why the intensity traces do not show evidence of the smallest oligomeric subunits, monomers and dimers (Fig. S3).

Another oligomeric enzyme, acetylcholinesterase. Bearing in mind that urea is a common protein denaturation agent,40,41 a fact that might influence the action of urea on the enzyme urease despite urease having been considered a model system in which to study enhanced diffusion, the generality of these findings was checked with acetylcholinesterase (AChE), another oligomeric enzyme that in the literature was interpreted to display enhanced diffusion at substrate concentrations above kM.7 AChE is a tetramer and its substrate is acetylcholine. We denote the unlabeled and dye-labeled samples as Acu and Acf, respectively. Studying sample Acf, this enzyme's hydrodynamic radius was measured by DLS and the distributions of Rh were inferred when the substrate concentration was increased to values well above kM. As shown in Fig. S4, these data follow the same dissociation patterns as urease.

Another oligomeric enzyme, hexokinase. Hexokinase I, a dimeric enzyme of size 104,000 g/mol used in several earlier studies in which enhanced diffusion was reported at substrate concentrations above kM, was also investigated.5 The substrate is glucose. This enzyme's hydrodynamic radius was measured by DLS and the distributions of Rh were inferred when the substrate concentration was increased to values well above kM. The data are similar to those presented above for urease and acetylcholinesterase (Fig. S5).
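Returning briefly to the intensity-weighted FCS analysis of Fig. 4c: the deconvolution idea, with components centered at Imax, (1/2)Imax, and (1/3)Imax, can be sketched numerically as below. This illustrates the reasoning only and is not the authors' analysis code; the intensity values and weights are synthetic:

```python
import numpy as np

def oligomer_mixture(i, imax, weights, sigma):
    """Sum of Gaussians at imax (hexamer), imax/2 (trimer), imax/3 (dimer)."""
    centers = (imax, imax / 2.0, imax / 3.0)
    return sum(w * np.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))
               for w, c in zip(weights, centers))

intensity = np.linspace(0.0, 120.0, 500)         # synthetic intensity axis
# Trial decomposition: fitted weights would give the relative abundances
model = oligomer_mixture(intensity, imax=90.0, weights=(0.6, 0.3, 0.1), sigma=8.0)
print(intensity[np.argmax(model)])               # dominant (hexamer) peak, ~90
```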
Regarding enhanced diffusion, the data obtained by FCS at high substrate concentrations are intermediate between the D/D0 of the undissociated enzyme and that of its dissociated components, as expected of a measurement that does not distinguish between them.

Comparing enzymes of different commercial provenance. To assess generality, the remaining samples in Table I were also studied for completeness. The Michaelis-Menten characterization of these additional samples is shown in Fig. S6. For urease, DLS experiments are compared for samples Ur1u (Fig. S7) and Ur2f (Fig. S8). For each, the enzyme's hydrodynamic radius was measured by DLS and the distributions of Rh were inferred when the substrate concentration was increased to values well above kM. Between the samples there is excellent consistency, with quantitative differences that may reflect differences of turnover rate. For acetylcholinesterase, similar comparisons were made for a sample unlabeled with fluorescent dye, sample Acu (Fig. S9). Between the samples there is excellent consistency.

Static and dynamic measurements compared. We were interested to compare diffusion from the different experiments. To do this, it was reasonable to suppose that the hydrodynamic radius Rh equals the static radius of gyration Rg within our experimental uncertainty. However, Rg measured by static light scattering is notorious for high experimental uncertainty in the regime of our relatively low molar mass, so instead we estimated Rg from the measured Mw. Our reasoning was to identify Rg with the radius of the equivalent sphere, knowing that globular proteins have density 1 g·cm⁻³. From Rg ≈ Rh, D was calculated using the Stokes-Einstein equation.

CONCLUSIONS

Our experiments offer an alternative explanation for many experiments in the literature that were performed at substrate concentrations above kM: in this regime we confirm that dissociation of oligomeric enzymes into subunits can explain those findings, though this does not exclude that enhanced diffusion may also contribute. It is interesting to speculate about the biological function of this enzyme dissociation phenomenon. It is not presently known whether this question has functional significance, as such high concentrations are not believed to occur in natural settings; it might function as a biological regulatory mechanism. At the same time, in the regime of biologically relevant substrate concentrations below kM, we do observe that the presence of substrate enhances diffusion, even of the oligomeric enzyme. This is broadly consistent with the qualitative conclusion of much previous work and helps to clarify the regime of its potential validity.1-16 In particular, while the enhanced diffusion concept may not apply in the substrate concentration regime where enzymes dissociate into subunits, it may apply at lower concentrations, and this is interesting because the regime of lower concentration is more relevant biologically.

This study is not believed to be directly relevant to an interesting parallel family of studies in which catalytically active enzymes, urease in many instances,1-4,7-9,14,18,24-30 were attached chemically to the surfaces of colloidal beads or nanoparticles. Enhanced mobility or ballistic motion of colloidal beads is observed when substrate is added.4,24-30 It is unknown, however, how the methods of enzyme surface attachment might influence the opportunities for enzyme dissociation into subunits.
Also, enzyme-driven colloids are surely influenced by diffusiophoresis produced by a concentration gradient of reaction products near the surfaces of colloidal beads.31-34 Diffusiophoresis is not believed to contribute to the situations considered here, of enzymes at nM concentrations.

ASSOCIATED CONTENT

Supporting Information. Additional data related to this paper are present in the Supplementary Materials: experimental details, enzyme assays, and additional DLS results for various samples.

Author Contributions. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.

Notes. The authors declare no competing financial interests.

ACKNOWLEDGMENT

This work was supported by the taxpayers of South Korea through the Institute for Basic Science, project code IBS-R020-D1. For instrument access we thank IBS-R0190D for dynamic light scattering and IBS-R022-D1 for size-exclusion chromatography. We are indebted to Dr. Hyun Suk Kim in the IBS Center for Genomic Integrity for help with SEC measurements.

Contents

Experimental procedures
Figure S1. Assays of enzyme activity and fit to the Michaelis-Menten equation for urease sample Ur1f and acetylcholinesterase sample Acf.
Figure S5. Dynamic light scattering of hexokinase, sample Hex.
Figure S6. Assays of enzyme activity and fit to the linearized Michaelis-Menten equation for samples not shown in Fig. S1.
Figure S7. Dynamic light scattering of unlabeled urease, sample Ur1u.
Figure S8. Dynamic light scattering of dye-labeled high-activity urease, sample Ur2f.
Figure S9. Dynamic light scattering of unlabeled acetylcholinesterase, sample Acu.

Experimental procedures

Samples. Urease (Type IX, Type C-3) from jack bean, purchased from Sigma, was labeled at the amine residue with DyLight 488 maleimide dye by a protocol involving 150 mM phosphate buffer (pH 7.2) with added 2 μM urease and 40 μM fluorescent dye solution, stirred for 6 h at room temperature. Acetylcholinesterase from Electrophorus electricus (electric eel), purchased from Sigma-Aldrich, was labeled at its carboxyl residue with DyLight 488-NHS (N-hydroxysuccinimide) dye by a protocol in which 30 μM dye solution and 1 μM enzyme were added to a mixture of 80% phosphate buffer solution (PBS) and 20% dimethyl sulfoxide (DMSO) before 6 h of stirring at room temperature. Finally, the dye-labeled enzymes were purified by removing the free dye by membrane dialysis (Amicon Ultra-4 centrifugal filter; Millipore). Hexokinase I from Saccharomyces cerevisiae was purchased from Sigma-Aldrich and labeled with an Alexa Fluor 488 protein fluorescence labeling kit (Invitrogen).

Enzyme activity assay. The urease and acetylcholinesterase assays were performed using the urease activity kit (MAK120, Sigma-Aldrich) and the acetylcholinesterase activity kit (MAK119, Sigma-Aldrich), following the manufacturer's instructions. The hexokinase activity listed in Table I was taken from the literature.1

Static light scattering (SLS). A commercial laser light scattering instrument (ALV/DLS/SLS-5022F) equipped with a multi-τ digital time correlator (ALV5000) and a cylindrical 22 mW He-Ne laser (λ0 = 632.8 nm, Uniphase) was used in the laboratory of Jiang Zhao at the Chinese Academy of Sciences. The measurements were conducted at scattering angles from 30° to 150° in steps of 10°.
For SLS measurements, dye-labeled urease and its substrate were mixed at the desired concentrations in 100 mM PBS buffer (pH 7.2) and filtered twice using a 100 nm pore-size syringe filter (Whatman). The urease concentration ranged from 50 nM to 150 nM.

Dynamic light scattering (DLS). A Brookhaven ZetaPALS instrument with the ZetaPlus option at a 90° scattering angle was used in the IBS Center for Multidimensional Carbon Materials. For DLS measurements, 30 nM dye-labeled enzyme (urease or AChE) and the substrate solution (urea for urease, acetylthiocholine for AChE) were mixed at the desired concentration in 100 mM PBS buffer (pH 7.2) and filtered twice using a 100 nm pore-size syringe filter (Whatman). For the hexokinase reaction, 50 nM of dye-labeled
2019-09-10T15:14:52.171Z
2019-09-07T00:00:00.000
{ "year": 2019, "sha1": "90559cecb27605317fff56fba64c5831127c82df", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1909.03283", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "90559cecb27605317fff56fba64c5831127c82df", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Physics", "Biology", "Chemistry", "Medicine" ] }
267063503
pes2o/s2orc
v3-fos-license
Quantifying the minimum localization uncertainty of image scanning localization microscopy
Modulation enhanced single-molecule localization microscopy (meSMLM), where emitters are sparsely activated with sequentially applied patterned illumination, increases the localization precision over single-molecule localization microscopy (SMLM). The precision improvement of meSMLM derives from retrieving the position of an emitter relative to individual illumination patterns, which adds to the point spread function information already used in SMLM. Here, we introduce SpinFlux: modulation enhanced localization for spinning disk confocal microscopy. SpinFlux uses a spinning disk with pinholes in its illumination and emission paths to sequentially illuminate regions in the sample during each measurement. The resulting intensity-modulated emission signal is analyzed for each individual pattern to localize emitters with improved precision. We derive a statistical image formation model for SpinFlux and quantify the theoretical minimum localization uncertainty in terms of the Cramér-Rao lower bound. Using the theoretical minimum uncertainty, we compare SpinFlux to localization on Fourier-reweighted image scanning microscopy reconstructions. We find that localization on image scanning microscopy reconstructions with Fourier reweighting ideally results in a global precision improvement of 2.1 over SMLM. When SpinFlux is used for sequential illumination with three patterns focused around the emitter position, the localization precision improvement over SMLM is twofold. If four donut-shaped illumination patterns are used for SpinFlux, the maximum local precision improvement over SMLM increases to 3.5. Localization on image scanning microscopy reconstructions thus has the largest potential for global improvements of the localization precision, whereas SpinFlux is the method of choice for local refinements.

INTRODUCTION Single-molecule localization microscopy (SMLM) increases the precision with which single molecules can be localized beyond the diffraction limit (1)(2)(3). Methods in SMLM require sparse activation of single emitters, after which emitters can be localized sequentially with reduced uncertainty. In recent years, various modulation enhanced SMLM (meSMLM) methods were introduced that increase the localization precision over SMLM by sparsely activating emitters with intensity-modulated illumination patterns (4). As a result, information is added to the data about the relative position of the emitter with respect to the illumination patterns. meSMLM methods include SIMFLUX (5), SIMPLE (6), and repetitive optical selective exposure (ROSE) (7), which use sinusoidally shaped intensity patterns, and MINFLUX (8) and RASTMIN (9,10), which use a donut-shaped illumination pattern. Patterned illumination can also be used to improve axial resolution, for example with modulated localization (ModLoc) (11,12) and ROSE-Z (13), which use illumination with both axial and lateral structure. Additional improvements to the localization precision can be attained through iterative meSMLM (14,15), where patterns are iteratively moved through the sample using prior information from earlier measurements, to improve the localization precision locally around single emitters.
Specifically for SIMFLUX (5), it has been shown that meSMLM with sinusoidal patterns improves the resolution over both SMLM and structured illumination microscopy (SIM) (16). SIM uses nine sinusoidal patterns in total, aligned on three lateral axes, and subsequent reconstruction results in at most a 2-fold resolution improvement over the diffraction limit. SIMFLUX, on the other hand, uses only six patterns in total, aligned on two lateral axes, and subsequent localization results in a 2.4-fold maximum improvement of the localization precision over SMLM. Therefore, the combination of structured illumination with sparse localization in meSMLM can result in better resolution than existing reconstruction approaches, while using fewer illumination patterns in the process. These factors motivate the incorporation of meSMLM into existing systems in which image reconstruction, rather than localization, is the current state of the art.

A promising candidate system is spinning disk confocal microscopy (SDCM) (17)(18)(19)(20)(21) (see Fig. 1 a). SDCM introduces a spinning disk with pinholes in the illumination and emission paths. Rapidly pulsing the excitation laser causes stroboscopic illumination of the sample with moving illumination foci. If used for image scanning microscopy (ISM) (22), the fluorescent emission signal is recorded on an image detector. Subsequent reconstruction of the recorded images results in an expected resolution improvement of a factor 2 over diffraction-limited imaging (18,19).

Recently, SDCM was used for PAINT- and STORM-based localization microscopy, where SMLM localization algorithms were used to localize emitters in raw camera data (20,21). It was shown that this improves the detection rate and signal-to-background ratio compared with widefield SMLM, at the cost of a reduced signal photon count, resulting in a localization precision that is at best comparable with that of SMLM (20).

However, these methods do not take the information contained in the illumination pattern into account, as one would do in meSMLM. In this text, we therefore develop a statistical image formation model suited for modulation enhanced localization in SDCM (see Fig. 1 b). Our method, called SpinFlux, sequentially applies patterned illumination generated by a spinning disk to excite the sample. Subsequently, emitters are localized in the recordings from a sequence of individual pattern acquisitions, taking knowledge about the pattern into account. The resulting intensity-modulated emission signal is described by our image formation model. To evaluate the potential localization precision improvements of SpinFlux, we need to study the information contained in a single-pattern exposure, the localization precision obtained by sequential illumination with multiple patterns, and the optimal pattern configuration to maximally improve the precision. To accomplish this, we calculate the theoretical minimum uncertainty of SpinFlux in terms of the Cramér-Rao lower bound (CRLB) (23,24). The CRLB is often used in (me)SMLM to quantify the theoretical minimum uncertainty of localizations. Using the SpinFlux image formation model, we calculate the CRLB for various illumination pattern configurations. Based on the CRLB, we compare SpinFlux with SMLM.
Secondly, we consider a localization approach that is comparable with SpinFlux. Here, isolated emitters are localized directly in ISM reconstructions (25) rather than in individual pattern acquisitions as done in SpinFlux. Specifically, we consider localization in ISM reconstructions with a factor √2 reduction in the point spread function (PSF) width. We also consider ISM reconstructions that are Fourier reweighted (see Fig. 1 b), resulting in a factor 2 reduction in the PSF width. We approximate the maximum localization precision of these approaches and compare it with SpinFlux.

METHODS In SpinFlux (see Fig. 1 a), a spinning disk containing pinholes is placed in the illumination and emission paths. The spinning disk is rotated, thereby sequentially moving illumination patterns over the sample. As in SDCM (19), the excitation laser is rapidly switched on and off. Within the time frame where the laser is on, the spinning disk can be considered stationary. This causes stroboscopic illumination of emitters in the sample. Furthermore, the illumination has a nonuniform intensity profile over the field of view due to the spinning disk architecture. This causes patterned illumination of emitters in the sample, which in turn results in intensity modulation of the emission signal. The rotation angle of the spinning disk determines the position of each illumination pattern with respect to the emitter position. Subsequently, the intensity-modulated emission signal is windowed by the same pinhole, after which the signal is imaged on a camera.

The image analysis (see Fig. 1 b) consists of extracting emitter localizations from the recordings, as well as retrieving the relative distance between the illumination pattern and the emitter from the photon count. To evaluate the total amount of information that can be extracted from the measurements with this approach, we first develop an image formation model for SpinFlux. We subsequently use this model to calculate the theoretical minimum uncertainty of SpinFlux in terms of the CRLB. The CRLB allows us to quantify the maximum amount of information contained in each exposure with a single pattern. In turn, we use this to derive the localization precision that can be attained through sequential exposures with multiple patterns. In addition, we can explore how the pattern configuration, the pinhole radius, and the mutual spacing between patterns affect the maximum localization precision.

Model for SpinFlux image formation. To calculate the theoretical minimum uncertainty that can be attained with SpinFlux localization, we need a model that describes the number of photons collected by a camera pixel. Existing models for (me)SMLM (5,8,14,26,27) do not suffice for this, as they do not include a pinhole in the illumination and emission paths. In this subsection, we therefore develop a statistical image formation model for SpinFlux. A detailed derivation of this model can be found in Note S2.

For the image formation, we assume that pinholes are separated far enough on the spinning disk that only one pinhole can appear in a region of interest during each camera frame. This assumption is valid for the magnifications, pinhole sizes, and pinhole separations in existing SDCM setups (19)(20)(21). In line with this, we can assume that there is no cross talk of emission signals between different pinholes. This allows us to describe the regions of interest on the camera frames as separate regions of interest from individual patterns.
We model the pinhole in the emission path as a circular window. In the absence of readout noise, the measurements on each camera pixel can be described as independent realizations of a Poisson process (26). For each pixel i with center coordinates (x_i, y_i) and for the measurement corresponding to illumination pattern k, the expected photon count μ_{i,k} after illumination through the pinhole with position (x_{p,k}, y_{p,k}) is described by (see Note S2):

μ_{i,k} = A [ θ_I P(θ_x − x_{p,k}, θ_y − y_{p,k}) H(θ_x, θ_y, x_i, y_i) + θ_b B_{i,k} ].   (1)

Here, (θ_x, θ_y) is the emitter position, θ_I is the expected signal photon count under maximum illumination, and θ_b is the expected background photon count. Each illumination pattern P(θ_x − x_{p,k}, θ_y − y_{p,k}) is assumed to be a known function with a known pinhole position (x_{p,k}, y_{p,k}) in our image formation model. We model each illumination pattern as a Gaussian PSF in the center of the pinhole, with standard deviation σ_illum. Alternate illumination patterns can be generated by placing a phase mask in the illumination path. We therefore also include a model of the donut-shaped pattern from, e.g., MINFLUX (8), with a zero-intensity minimum at the center of the pinhole and standard deviation σ_illum.

Note that the signal photon budget of a single emitter stays constant when going from one pattern location to multiple pattern locations. In particular, this means that one pattern exhausts the full signal photon budget, whereas multiple patterns need to share the same signal photon budget. Each pattern in a multiple-pattern illumination sequence gets a share of the signal photon budget proportional to its illumination intensity at the emitter position.

We model the emission PSF as a Gaussian with standard deviation σ_PSF. The term H(θ_x, θ_y, x_i, y_i) describes the discretized emission PSF after windowing by the pinhole (see Note S2: illumination and emission point spread functions).

In existing work on meSMLM, such as MINFLUX (8), it is assumed that meSMLM is able to record the same number of signal photons as SMLM. This assumption allows benchmarking between methods at the same signal photon count. However, the assumption is not trivial, as additional illumination power or time is needed to exhaust the signal photon budget with nonmaximum illumination intensity. Properly adjusting the illumination power to compensate for the reduced photon flux requires accurate prior knowledge about the emitter position, which is generally unavailable. Increasing the illumination time increases the probability of sample degradation. As such, the image formation model should include the possibility that meSMLM does not exhaust the signal photon budget.

The normalizing constant A describes how the signal photon budget is affected by nonmaximum illumination intensity. This constant plays a vital role in benchmarking meSMLM (when the summed intensity over all patterns does not result in a uniform profile), as it gives a physical explanation of the fair signal photon count against which meSMLM should be compared (14). Specifically, when comparing meSMLM with SMLM, the normalization constant models whether meSMLM would have recorded the same number of signal photons as SMLM, despite the additional illumination power or time needed to do so. Results on the improvement of meSMLM compared with SMLM should thus only be given in the context of the normalizing constant A.

We choose A to model two scenarios (see Note S2: multiple emission patterns). In the first scenario, which we explore in this text, we assume that the entire signal photon budget is exhausted after illumination with all patterns, independent of the total brightness at the emitter position. We thus assume the illumination power and time are sufficient to exhaust the signal photon budget of the emitter. Here, A is inversely proportional to the summed illumination patterns. The only signal photon loss in this scenario comes from the windowing effect of the emission pinhole. This scenario is consistent with the assumption used in, e.g., MINFLUX (8), stating that meSMLM will record the same number of photons as SMLM. In the second scenario, the illumination power and time are constant for each pattern, such that the total illumination power and time equal those of SMLM, even though this does not exhaust the signal photon budget for nonmaximum illumination. Instead, the maximum possible signal photon count occurs when the emitter is placed at the brightest position of the total illumination pattern. Here, A is inversely proportional to the number of illumination patterns K.

The constant B_{i,k} describes how the background is affected by illumination pattern k. As such, the term A θ_b B_{i,k} represents the effective background under patterned illumination. It depends on the camera pixel area, the pinhole area, the PSF, and the illumination pattern, but not on the emitter position (see Note S2: effective background B_i). In the analysis of, e.g., MINFLUX (8), the pattern dependency of the background is neglected. We can incorporate this in our image formation model for SpinFlux by modeling B_{i,k} as the overlapping area between camera pixel i and the approximation of pinhole k (see Note S2: pattern-independent background).
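To make the model concrete, the following Python sketch (an illustrative stand-in, not the authors' implementation) evaluates the expected photon counts of Eq. 1 for Gaussian illumination patterns and a pixel-integrated Gaussian emission PSF. For brevity it omits the pinhole windowing of H and sets B_{i,k} = 1, both of which the full model of Note S2 treats properly.

```python
import numpy as np
from scipy.stats import norm

def gaussian_pattern(dx, dy, sigma_illum):
    """Gaussian illumination pattern P, normalized to 1 at its center."""
    return np.exp(-(dx**2 + dy**2) / (2.0 * sigma_illum**2))

def pixel_psf(theta_x, theta_y, x_i, y_i, pixel, sigma_psf):
    """Fraction of the emission PSF falling on pixel i (no pinhole windowing)."""
    fx = (norm.cdf(x_i + pixel / 2, theta_x, sigma_psf)
          - norm.cdf(x_i - pixel / 2, theta_x, sigma_psf))
    fy = (norm.cdf(y_i + pixel / 2, theta_y, sigma_psf)
          - norm.cdf(y_i - pixel / 2, theta_y, sigma_psf))
    return fx * fy

def expected_counts(theta, pinholes, pixel_centers, pixel, sigma_illum, sigma_psf):
    """Expected photon count mu_{i,k} per pattern k and pixel i (Eq. 1, simplified).

    theta = (theta_x, theta_y, theta_I, theta_b). A normalizes the signal photon
    budget so that all K patterns together exhaust theta_I signal photons;
    background weights B_{i,k} are taken as 1 here for brevity.
    """
    tx, ty, tI, tb = theta
    weights = np.array([gaussian_pattern(tx - xp, ty - yp, sigma_illum)
                        for xp, yp in pinholes])
    A = 1.0 / weights.sum()  # inversely proportional to the summed patterns
    mu = np.empty((len(pinholes), len(pixel_centers)))
    for k, w in enumerate(weights):
        for i, (xi, yi) in enumerate(pixel_centers):
            mu[k, i] = A * (tI * w * pixel_psf(tx, ty, xi, yi, pixel, sigma_psf)
                            + tb)
    return mu

# Example: two patterns offset along x, a 5x5 pixel region of 65 nm pixels
pix = 65.0
centers = [(ix * pix, iy * pix) for ix in range(-2, 3) for iy in range(-2, 3)]
mu = expected_counts((10.0, -5.0, 2000.0, 8.0),
                     pinholes=[(-150.0, 0.0), (150.0, 0.0)],
                     pixel_centers=centers, pixel=pix,
                     sigma_illum=90.0, sigma_psf=93.3)
print(mu.sum(axis=1))  # photons collected per pattern
```

The normalization A here implements the first scenario described above: the signal photon budget θ_I is exhausted over all K patterns, shared in proportion to each pattern's intensity at the emitter, and the background term is amplified by the same factor A.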
Cramér-Rao lower bound. To quantify the theoretical minimum uncertainty of localizations, the CRLB is often used (23,24). Under regularity conditions on the likelihood of the data (23), the CRLB states that the estimator covariance C_θ of any unbiased estimator θ̂ of the parameters θ satisfies the property that (C_θ − I^{−1}(θ)) is positive semidefinite. Here, I(θ) is the Fisher information, of which entry (u,v) is described by:

I_{u,v}(θ) = E[ (∂l(θ|c)/∂θ_u)(∂l(θ|c)/∂θ_v) ],   (2)

where l(θ|c) is the log-likelihood function given the recorded photon counts c on the camera pixels. The matrix I^{−1}(θ) is the CRLB. Consequently, the diagonal of the CRLB bounds the estimator variance from below. Specifically for SMLM, the CRLB is attained by the covariance of the maximum likelihood estimator for 100 or more signal photons (26). As the localization uncertainty of the maximum likelihood estimator converges asymptotically to the CRLB (28,29), we can also use the CRLB to investigate the theoretical minimum uncertainty of SpinFlux.

Using the image formation model from Eq. 1, we can derive the CRLB for SpinFlux. When using K pinholes and a camera consisting of an array with N_pixels pixels, any entry (u,v) of the Fisher information is given by (see Note S3):

I_{u,v}(θ) = Σ_{k=1}^{K} Σ_{i=1}^{N_pixels} (1/μ_{i,k}) (∂μ_{i,k}/∂θ_u)(∂μ_{i,k}/∂θ_v).   (3)

To evaluate Eq. 3, the partial derivatives of the image formation model of Eq. 1 with respect to the unknown parameters θ_x, θ_y, θ_I, and θ_b need to be computed. Expressions for these partial derivatives are found in Note S4.
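Given any such model, Eq. 3 can be evaluated numerically. The sketch below (again an illustrative stand-in; Note S4 supplies analytical derivatives) approximates the derivatives of μ_{i,k} by central finite differences, assembles the 4 × 4 Fisher information of the Poisson model, and inverts it to obtain the CRLB. It reuses the expected_counts helper, pixel centers, and pixel size from the previous sketch.

```python
import numpy as np

def fisher_information(theta, mu_fn, eps=1e-4):
    """4x4 Fisher information I(theta) for a Poisson imaging model.

    mu_fn(theta) must return the expected counts mu_{i,k} as an array.
    Derivatives d mu / d theta_u are approximated by central differences.
    """
    theta = np.asarray(theta, dtype=float)
    mu = mu_fn(theta).ravel()
    grads = []
    for u in range(len(theta)):
        step = np.zeros_like(theta)
        step[u] = eps * max(1.0, abs(theta[u]))
        dmu = mu_fn(theta + step).ravel() - mu_fn(theta - step).ravel()
        grads.append(dmu / (2.0 * step[u]))
    G = np.stack(grads)        # shape (4, n_patterns * n_pixels)
    return (G / mu) @ G.T      # I_uv = sum over pixels of dmu_u * dmu_v / mu

def crlb(theta, mu_fn):
    """Diagonal of the CRLB: lower bounds on the estimator variances."""
    return np.diag(np.linalg.inv(fisher_information(theta, mu_fn)))

# Example, reusing expected_counts, centers, and pix defined above
mu_fn = lambda th: expected_counts(tuple(th),
                                   pinholes=[(-150.0, 0.0), (150.0, 0.0)],
                                   pixel_centers=centers, pixel=pix,
                                   sigma_illum=90.0, sigma_psf=93.3)
bound = crlb([10.0, -5.0, 2000.0, 8.0], mu_fn)
print("sigma_x >= %.2f nm, sigma_y >= %.2f nm" % (bound[0] ** 0.5, bound[1] ** 0.5))
```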
Simulations and parameter values. We sampled measurements from the image formation model and evaluated the CRLB using representative in silico experiments. The model parameters (see Table S1) are considered to be representative of an SDCM experiment (20). To maximize the information contained in the Gaussian illumination and emission PSFs, we choose their standard deviations to be diffraction limited (30). Specifically, we approximate the standard deviation of the illumination as σ_illum = 0.21 λ_ex/NA and the standard deviation of the PSF as σ_PSF = 0.21 λ_em/NA. Here, λ_ex and λ_em, respectively, describe the excitation and emission wavelengths, and NA is the numerical aperture.

Emitters are located in the center of the region of interest, consisting of 10 × 10 pixels. The pinhole was discretized on a mesh with N_{M,x}, N_{M,y} = 100 pixels in each direction. For N_{M,x}, N_{M,y} = 100 mesh pixels, the relative error in the CRLB caused by the discretized pinhole approximation is at most 0.02% (see Fig. S2).

RESULTS A spinning disk can be designed with various pinhole sizes, spacings, and arrangements (20). In addition, the rotation of the spinning disk gives additional freedom, as patterns and pinholes can appear arbitrarily close to each other via sequential illumination with a rotating spinning disk. For SpinFlux, this means that a wide variety of illumination pattern configurations can be created via the appropriate spinning disk and rotation angle. Furthermore, donut-shaped illumination patterns can be used by adding a phase mask in the illumination path (see Fig. S3). In this section, we explore how the theoretical minimum localization uncertainty of SpinFlux depends on pattern configurations and positions.

In Figs. 2-5 and S4-S17, we calculate the theoretical minimum uncertainty for the scenario where the entire signal photon budget is exhausted after illumination with all patterns. We compute the theoretical minimum localization uncertainty for three standard configurations. These pattern configurations can be created via sequential illumination with a rotating spinning disk, where the rotation angle of the spinning disk determines the position of an illumination pattern. In Localization on ISM reconstruction data, we establish localization on ISM reconstruction data as a benchmark for SpinFlux. In Single-pattern configuration, we simulate the theoretical minimum uncertainty using a single pattern and pinhole, akin to confocal microscopy. In Two-pattern configuration, we compute the CRLB for a two-pattern configuration where pinholes are separated by a distance s along the x-axis, resembling raster-like configurations of earlier work on meSMLM (9,10,14). In Triangular pattern configuration, patterns and pinholes are arranged in an equilateral triangle configuration, similar to the configuration found in MINFLUX (8,15). Donut-shaped intensity patterns shows the effect of donut-shaped illumination patterns. A summary of the most important simulation results is found in Table 1.

To rigorously quantify the improvement of SpinFlux, we also evaluate the localization precision in the two other scenarios described in Model for SpinFlux image formation. Figs. S18-S31 show the theoretical minimum uncertainty in the case in which the illumination power and time are constant for each pattern. There, the maximum possible signal photon count occurs when the emitter is placed at the brightest position of the total illumination pattern. Figs. S32-S45 show the CRLB where the pattern dependency of the background is neglected and where the entire signal photon budget is exhausted after illumination with all patterns.
Localization on ISM reconstruction data. As a straightforward implementation of localization, we consider localizing isolated emitters in ISM reconstruction data. In this approach, an ISM image is first acquired and reconstructed, resulting in a reduction of the PSF width by at most a factor √2 (18,19). If the ISM image is subsequently Fourier reweighted (18), the PSF width is reduced further, by a total factor 2. Subsequently, individual emitters are localized in the ISM reconstruction data. We approximate the CRLB for this localization approach (see Note S1). For a signal photon count of 2000 photons per emitter and a background photon count of 8 photons per pixel, the best-case localization precision of localization on the ISM reconstructions is 1.77 nm, or 1.25 nm with Fourier reweighting, whereas SMLM would achieve a localization precision of at most 2.62 nm. The improvement of localization on the ISM reconstructions over SMLM is thus 1.48, or 2.10 with Fourier reweighting. These results agree with the improvements that were recently found experimentally (25).

Fig. 2 shows the localization precision of localization of individual emitters in the ISM data over a range of signal and background photon counts, PSF standard deviations, and camera pixel sizes. From Fig. 2 b, we see that the improvement of localization on the ISM data over SMLM for a PSF standard deviation of 93.3 nm and a camera pixel size of 65 nm is at most 1.8, or 3.0 with Fourier reweighting. This is achieved at a signal photon count of 200 photons and a background photon count of 16 photons per pixel. Furthermore, the improvement decreases to 1.4, or 1.9 with Fourier reweighting, as the background goes to zero. For zero background, the improvement over SMLM is constant as a function of the signal photon count: in our approximation, the localization precision of localization on ISM reconstructions is proportional to 1/√θ_I if the background is zero, and therefore the improvement over widefield SMLM is constant.

Fig. S1 shows the localization precision of localization of individual emitters in the ISM data over a range of PSF standard deviations and camera pixel sizes. From Fig. S1 b, we see that the improvement of localization on the ISM data over SMLM for a signal photon count of 2000 photons and a background photon count of 8 photons per pixel is at most 1.7, or 2.8 with Fourier reweighting, achieved at a PSF standard deviation of 250 nm and a camera pixel size of 50 nm. Furthermore, the improvement decreases to 1.3, or 1.5 with Fourier reweighting, for an increasing camera pixel size and a decreasing PSF size.
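The improvement factors quoted above follow largely from the PSF-width scaling. As a rough cross-check, the sketch below evaluates a commonly used pixelation- and background-corrected precision approximation (in the style of Mortensen et al.) for the SMLM PSF width and for the same width shrunk by √2 (ISM) and by 2 (Fourier-reweighted ISM). Because this is not the Note S1 model used in the paper, the resulting numbers agree with those above only approximately.

```python
import math

def approx_sigma(sigma_psf, N, bg, a):
    """Approximate localization precision (nm) for a pixelated Gaussian PSF.

    One common MLE approximation (Mortensen-style); sigma_psf: PSF std (nm),
    N: signal photons, bg: background photons per pixel, a: pixel size (nm).
    """
    sa2 = sigma_psf**2 + a**2 / 12.0  # pixelation-corrected squared width
    var = (sa2 / N) * (16.0 / 9.0 + 8.0 * math.pi * sa2 * bg / (N * a**2))
    return math.sqrt(var)

N, bg, a, s = 2000, 8.0, 65.0, 93.3
smlm = approx_sigma(s, N, bg, a)
ism = approx_sigma(s / math.sqrt(2), N, bg, a)  # ISM: sqrt(2)-narrower PSF
fr = approx_sigma(s / 2.0, N, bg, a)            # Fourier-reweighted ISM
print(f"SMLM {smlm:.2f} nm | ISM x{smlm / ism:.2f} | FR-ISM x{smlm / fr:.2f}")
```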
Single-pattern configuration. In Fig. 3, we evaluate the theoretical minimum uncertainty in the case in which a single pinhole is used for illumination and emission, as illustrated in Fig. 3 a. Results are shown for the scenario where the entire signal photon budget is exhausted after illumination with all patterns.

From Fig. 3, d and e, we see that the localization precision is optimal when the pinhole and pattern are centered directly on the emitter position. Without a pinhole, this results in an improvement of at most 1.17 over SMLM. For a pinhole with radius r_p = 4σ_PSF, the difference with SMLM is negligible, indicating that the confocal effect of the pinhole has been lost. The improvement can thus be attributed to the effect of pattern-dependent background, as the background is reduced on camera pixels that are not located on the maximum of the Gaussian illumination pattern. This background reduction is visualized in Fig. S4 g, showing a 10.2-fold reduction in the average background count per pixel compared with SMLM for r_p = 4σ_PSF.

For pinholes of radius r_p = 3σ_PSF and below, the localization precision deteriorates with respect to the no-pinhole case. Already for r_p = 2σ_PSF, no position of the pinhole results in an improvement over SMLM. In these cases, the pinhole not only blocks background photons, but also signal photons carrying information about the emitter position. Fig. S4, f and g show that, in the best case (for x_p = θ_x), 248 signal photons are lost when going from r_p = 3σ_PSF to r_p = 2σ_PSF, whereas the average background is reduced by only 0.21 photons per pixel. As such, more information about the emitter position is lost through the loss of signal photons than is gained by blocking background, resulting in a reduction of the improvement factor from 1.14 to 0.90. Similarly, moving the pinhole away from the emitter position blocks signal photons, thereby reducing the localization precision. For r_p = 3σ_PSF, the improvement over SMLM goes from 1.14 at x_p = θ_x to 0.73 at a 130 nm distance between x_p and θ_x. From this, we can conclude that larger pinholes are in principle better for SpinFlux, as more information about the underlying signal is revealed through the larger pinhole.

Two-pattern configuration. In Figs. 4 and S6, we evaluate the theoretical minimum uncertainty in the case in which multiple patterns are used sequentially for illumination and emission. We first consider the scenario of pinholes that are separated in the x-direction around focus coordinates (x_f, y_f), as illustrated in Fig. 4, a-e. Results are shown for the scenario where the entire signal photon budget is exhausted after illumination with all patterns. For these simulations, the pinhole radius was set to r_p = 3σ_PSF for both pinholes.

From Fig. 4, d and e, we see that using multiple patterns is beneficial for SpinFlux, maximally resulting in a 2.62-fold precision improvement over SMLM in the x-direction when using a pinhole separation s = 4σ_PSF. This improvement decreases only moderately, to 2.17, when the pattern y-coordinate is moved 130 nm out of focus (see Fig. S7). When the illumination time and power are adjusted to exhaust the entire signal photon budget, the low-intensity tails of the Gaussian intensity profile increase the information content of signal photons, as these contain increased information about the relative position of the emitter with respect to the illumination pattern. As discussed in Model for SpinFlux image formation, the multiple-pattern configuration has the same signal photon budget as the single-pattern configuration. These results therefore show that the same signal photon budget is utilized more efficiently by using multiple pattern locations.
However, increasing the pinhole separation also reduces the region where SpinFlux improves over SMLM. For a pinhole separation s = 3σ_PSF, the domain where SpinFlux improves over SMLM by at least a factor 1.2 spans 175 nm, whereas this domain spans 111 nm for s = 4σ_PSF. In the case where the pinholes are not centered around the emitter position, one of the patterns takes more of the signal photon budget than the other. As such, highly informative signal photons carrying information from the tails of the Gaussian illumination pattern are traded for lowly informative photons coming from the center of the pattern. This is shown in Fig. S6 f: for a pinhole separation s = 4σ_PSF, 1573 signal photons are collected in total when x_f = θ_x, with the remaining 427 photons being blocked by the spinning disk. When considering a 130 nm distance between x_f and θ_x, 1956 signal photons are collected in total, as one pinhole has moved close to the emitter position. Yet these photons are lowly informative, resulting in a precision improvement of only 1.09 over SMLM. For increasing separations, the relative difference in illumination intensity between noncentered patterns increases, thereby reducing the domain of improvement.

Furthermore, Fig. 4, d and e show that there is an optimal pinhole separation of s = 4σ_PSF for SpinFlux. When increasing the pinhole separation beyond this, the localization precision decreases again. This is caused by a combination of two factors. First, as shown in Fig. S6 f, the spinning disk blocks an increasing number of signal photons for increasing pinhole separations, as the overlap between the pinhole and the emission PSF is reduced. Between s = 4σ_PSF and s = 5σ_PSF, the number of signal photons is reduced by 324 when x_f = θ_x. This effect is eliminated when the pinhole is removed, as shown in Fig. S8. Secondly, increasing the pinhole separation results in illumination with the low-intensity tails of the Gaussian illumination patterns. As we exhaust the signal photon budget in this scenario and as the background is pattern dependent, this results in an amplification of the background. Fig. S6 g shows that the average background count increases from 7.75 photons per pixel at s = 4σ_PSF to 26.7 photons per pixel at s = 5σ_PSF.

Up until now, we have only considered the localization precision in the x-direction. Because the pattern has a different structure in the x- and y-directions, the modulated emission intensity will carry different information about the emitter x- and y-positions. Specifically in this configuration, both patterns lie on the x-axis. Therefore, the intensity difference in the modulated emission signal is strongly affected by the emitter x-position. However, as both patterns have the same y-coordinate, there is no difference between the patterns in the effect of the emitter y-coordinate on the modulated emission intensity. Therefore minimal information is carried about the emitter y-position.
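The strong x-sensitivity of this configuration can be understood from a back-of-the-envelope estimator: for two Gaussian patterns placed at ±s/2 around the focus, the log-ratio of the photon counts collected under the two patterns is linear in the emitter x-position. The sketch below illustrates this; it ignores the pinhole, background, and pixelation, and is not the maximum likelihood estimator underlying the CRLB analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, s, x_true, budget = 90.0, 4 * 90.0, 12.0, 2000  # nm, nm, nm, photons

# Intensity of each pattern at the emitter; the photon budget is shared
# in proportion to these intensities (scenario 1 of the model).
w = np.exp(-((x_true - np.array([-s / 2, s / 2])) ** 2) / (2 * sigma**2))
expected = budget * w / w.sum()

# For Gaussian patterns at -s/2 and +s/2: ln(N2/N1) = x * s / sigma^2,
# so the emitter x-position can be read off from the count ratio.
n1, n2 = rng.poisson(expected)
x_hat = (sigma**2 / s) * np.log(n2 / n1)
print(f"true x = {x_true:.1f} nm, ratio estimate = {x_hat:.1f} nm")
```

Because ln(N2/N1) = x s / σ^2, a larger separation s changes the counts more per nanometer of emitter displacement, which is the intuition behind the informative pattern tails and the optimal separation discussed above.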
To investigate how the two-pattern configuration of Fig. 4 a affects the y-precision, we equivalently consider the x-precision that can be obtained with the rotated pattern (see Fig. S9). From Fig. S9, we see that the x-precision for the rotated pattern results in negligible improvements or even reductions over SMLM if the entire signal photon budget is exhausted. Specifically for s = 4σ_PSF, the improvement factor over SMLM is 0.83 when the patterns are perfectly centered around the emitter position, whereas the improvement increases to 1.12 when the distance between y_f and θ_y is 130 nm. From the equivalence, we can thus conclude that the two-pattern configuration of Fig. 4 a results in optimal x-precision, but the associated y-precision is diminished.

Triangular pattern configuration. In Figs. 4, f-j and S10, we evaluate the theoretical minimum uncertainty in the case in which multiple pinholes are used for illumination and emission in an equilateral triangle configuration centered around focus coordinates (x_f, y_f). Results are shown for the scenario where the entire signal photon budget is exhausted after illumination with all patterns. For these simulations, the pinhole radius was set to r_p = 3σ_PSF for all pinholes.

From Fig. 4, i and j, we see that the triangle configuration from Fig. 4 f results in a precision improvement in the x-direction of at most 1.94 compared with SMLM, when the distance between the pinholes and the center of the triangle is r = 2σ_PSF. As seen for the two-pattern case, this optimum is a result of two contrasting factors. On one hand, increasing the pattern distance illuminates the emitter with the tail of the Gaussian intensity profile, thereby increasing the information that signal photons carry about the relative distance between the illumination pattern and the emitter. On the other hand, increasing the distance between the emitter and the pinholes also increases the number of signal photons that are blocked by the spinning disk, while the pattern-dependent background increases due to the low illumination intensity.

Note that the x-localization precision of the triangle configuration is worse than that of the two-pattern configuration described in Two-pattern configuration. The reason for this is that the triangle configuration contains one pinhole of which the x-coordinate is located close to the true emitter x-coordinate (i.e., the blue pattern in Fig. 4 f). As such, signal photons that are collected after illumination with this pattern contain little information about the emitter x-position. The two-pattern configuration is thus able to distribute signal photons more efficiently to maximize the information about the emitter x-position. On the other hand, as discussed earlier for Fig. S9, the two-pattern configuration contains little information about the emitter y-position.
To investigate this for the triangle configuration, Fig. S11 shows the x-localization precision that can be achieved when the triangle pattern is rotated clockwise by 90° for all three scenarios under consideration. Equivalently, these results also hold for the y-precision that can be attained with the nonrotated pattern. It can be seen that the optimal spacing r and the localization precision are comparable with those for the nonrotated triangle configuration. We find a precision improvement in the y-direction of 2.05 over SMLM. As the rotated pattern is asymmetric along the x-axis, the precision also scales asymmetrically around the optimum. In addition, the asymmetry causes a shift of the optimal x-coordinate of the pattern focus. For example, the optimal focus position is x_f = θ_x − 0.13 nm when considering the scenario where the entire signal photon budget is exhausted. From the equivalence, we find that the triangle configuration balances the localization precision in the x- and y-directions at approximately a twofold improvement in either direction, at the cost of suboptimal precision in each individual direction.

In MINFLUX (8,15), a triangle configuration was also used for illumination, where an additional fourth pattern was added in the center of the configuration. As such, we also consider the scenario where an additional pinhole and pattern are added in the center of the triangle, for both rotations of the configuration (see Figs. S12 and S13).

From Figs. S12 and S13, we find that adding a center pinhole causes a deterioration of the localization precision compared with the triangle configuration without a center pinhole. The precision improvement over SMLM is at most 1.44 for the nonrotated pattern, and at most 1.78 for the rotated pattern. On the other hand, the domain where SpinFlux attains an improvement over SMLM has increased due to the addition of the center pinhole. For the nonrotated pattern with spacing r = 2σ_PSF, the improvement over SMLM varies between 1.39 and 1.44 as long as the pattern focus and the emitter remain within a 130 nm distance of each other.

The explanation for both these effects is that the center pinhole blocks the least amount of signal photons and also claims the majority of the signal photon budget due to illumination with near-maximum intensity. As such, as shown in Figs. S12, f, g and S13, f, g, the effect of the pinhole spacing r on the usage of the signal photon budget and the background count is strongly reduced. For pattern spacings between r = 0.5σ_PSF and r = 2σ_PSF, pattern focus positions within a 130 nm range of the emitter position, and either rotation, signal photon counts vary between 1753 and 1968 photons, and average backgrounds vary between 0.88 and 4.30 photons per pixel. When the center of the triangle is displaced from the emitter position, another pinhole is able to cover the emitter position, thereby enlarging the range of similar photon counts and increasing the domain of precision improvement.

Donut-shaped intensity patterns. Note that MINFLUX uses a donut-shaped intensity pattern for illumination, which contains an intensity minimum in its center. As described until now, SpinFlux uses a Gaussian intensity profile, with an intensity maximum in the center.
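For reference, the two illumination profiles can be written down explicitly. The sketch below uses one common peak-normalized parameterization of the donut; the paper's exact donut model is given in Note S2 and may differ in detail.

```python
import numpy as np

def gaussian_profile(r, sigma):
    """Gaussian illumination profile, maximum 1 at r = 0."""
    return np.exp(-r**2 / (2 * sigma**2))

def donut_profile(r, sigma):
    """Donut profile with a zero-intensity minimum at r = 0.

    One common parameterization: P(r) ~ r^2 * exp(-r^2 / (2 sigma^2)),
    scaled so the ring maximum (at r = sqrt(2) * sigma) equals 1.
    """
    return (np.e / (2 * sigma**2)) * r**2 * np.exp(-r**2 / (2 * sigma**2))

sigma = 90.0  # nm, illustrative
r = np.linspace(0.0, 300.0, 7)
print(np.round(gaussian_profile(r, sigma), 3))
print(np.round(donut_profile(r, sigma), 3))
```

Near r = 0 the donut intensity grows quadratically with distance from the minimum, which is why signal photons collected near the donut center carry more position information than photons from the flat top of a Gaussian.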
By incorporating two phase masks in the system (see Fig. S3), SpinFlux can be adapted to utilize donut-shaped illumination. As the donut-shaped pattern increases the information content of signal photons in its center rather than at its boundary (8), it will mitigate the situation where highly informative signal photons are blocked by the pinhole, which in turn improves the theoretical minimum localization uncertainty. We explore this effect in Figs. 5 and S14-S17.

Figs. S14 and S15 show the SpinFlux localization precision of the triangular configuration without a center pinhole, in the scenario where the entire signal photon budget is exhausted. Here, the improvement of SpinFlux with donut-shaped illumination over SMLM is approximately 1.64 in the x-direction and 1.74 in the y-direction at a pinhole spacing r = 3σ_PSF. This improvement is comparable with that of SpinFlux with Gaussian illumination, as the intensity minimum of the illumination donut is placed 3σ_PSF away from the emitter. The Gaussian pattern at r = 2σ_PSF and the donut-shaped pattern at r = 3σ_PSF are comparable at the emitter coordinates, thereby negating the advantages of the donut-shaped pattern.

This changes when including a center pinhole in the triangular configuration, as shown in Figs. 5, S16, and S17. Here, the maximum improvement over SMLM is 3.5 in the x- and y-directions at a pinhole spacing of r = 4σ_PSF. When increasing the spacing r between the pinholes (beyond the width of the donut-shaped beam), a larger share of the signal photon budget will be claimed by the center pinhole. The intensity minimum of the center pattern increases the information content of signal photons, thereby improving the resolution over SpinFlux with Gaussian illumination. However, this improvement decays sharply when the pattern focus is not centered on the emitter position. Specifically for r = 4σ_PSF, the improvement exceeds 1.5 in either direction only when the emitter-focus distance is smaller than 5 nm. Therefore, it is more practical to choose a smaller spacing between the pinholes. For r = 3σ_PSF, the maximum improvement over SMLM is 3.3 in the x- and y-directions, and the improvement is larger than 1.5 in either direction when the emitter-focus distance is at most 37 nm.

DISCUSSION In meSMLM, sparse activation of single emitters with patterned illumination results in improved localization precision over SMLM. The precision improvement of meSMLM is derived from retrieving the position of an emitter relative to individual illumination patterns, which adds to existing PSF information from SMLM. In addition, meSMLM improves the resolution over image reconstruction in SIM while reducing the required number of illumination patterns. This suggests that meSMLM can improve the localization precision in existing setups, which are limited by image reconstruction in processing.

We developed SpinFlux, which incorporates meSMLM into SDCM setups. In SpinFlux, patterned illumination is generated using a spinning disk with pinholes to sequentially illuminate the sample. Subsequently, the emission signal is windowed by the same pinhole before being imaged on the camera. During the analysis, emitters are localized in the recordings from a sequence of individual pattern acquisitions, taking knowledge about the pattern into account.
We have derived a statistical image formation model for SpinFlux, which includes the effects of patterned illumination, windowing of the emission signal by the pinhole, and pattern-dependent background. For our analysis, we considered Gaussian illumination patterns and a Gaussian emission PSF. We also considered donut-shaped illumination patterns, which can be generated by incorporating a phase mask in the illumination path. In addition, we have derived and evaluated the CRLB for this model. We applied the CRLB to various illumination pattern configurations to quantify the theoretical minimum uncertainty that can be gained with SpinFlux. We compared SpinFlux with SMLM and with localization on ISM reconstruction data, which results in an average global improvement of 1.48 over SMLM, or 2.10 with Fourier reweighting.

When using one pattern only, pattern dependency of the background causes an improvement of at most 1.17 over SMLM, whereas no improvement is found when neglecting this effect. In the single-pattern case, the pinhole blocks signal photons that carry information about the emitter position. As such, it is beneficial for SpinFlux to use pinholes that are as large as possible, to reduce the number of signal photons blocked by the pinhole. In other words, we find that a spinning disk with pinholes is convenient for generating patterned illumination, although the pinhole itself has an adverse effect on the localization precision due to the blockage of signal photons. However, we have not considered neighboring emitters in our analysis, nor have we modeled out-of-focus background. In ISM, optical sectioning is achieved with the spinning disk by reducing the effects of neighboring or out-of-focus fluorescent signals, thereby improving the resolution. We expect that the pinhole has a similar effect on the localization precision that can be attained with SpinFlux, which would result in an optimal pinhole radius. Future research should focus on incorporating these effects into the image formation model.

Based on the single-pattern results, we conclude that SpinFlux requires multiple patterns to generate a significant precision improvement over SMLM. We explored various multiple-pattern configurations, which can be obtained via sequential illumination. We found that a configuration of two pinholes with radius 3σ_PSF, separated in the x-direction around the emitter position by a distance of 4σ_PSF, results in a precision improvement of 2.62 in the x-direction compared with SMLM, while the y-improvement is at most 1.12. For larger separations, the information content of signal photons increases due to illumination with the low-intensity tails of the Gaussian illumination pattern. However, when the separation increases above 4σ_PSF, the loss of signal photons due to the windowing effect of the pinhole causes deterioration of the localization precision.
We also evaluated the theoretical minimum uncertainty of a triangular pattern configuration, where pinholes are sequentially placed at the corners of an equilateral triangle around the emitter position. This results in approximately a twofold x-precision improvement over SMLM, which is a reduction compared with the two-pattern configuration. However, the triangle configuration also attains approximately a twofold precision improvement in the y-direction. As such, the triangle configuration balances the localization precision in the x- and y-directions at the cost of suboptimal precision in each individual direction. Including a center pinhole in the triangle does not improve the maximum localization improvement, but it extends the domain on which any improvement can be attained.

By including a phase mask in the illumination and emission paths, illumination patterns with arbitrary diffraction-limited intensity profiles can be created. We evaluated the localization precision of SpinFlux with donut-shaped illumination. As the donut-shaped pattern increases the information content of signal photons in its center rather than at its boundary, it mitigates the situation where highly informative signal photons are blocked by the pinhole. We find that, in the triangular configuration with a center pinhole, the maximum improvement over SMLM is increased to 3.5 in the x- and y-directions at a pinhole spacing r = 4σ_PSF.

FIGURE 2 Approximation of the theoretical minimum localization uncertainty of SMLM on reconstructions acquired from (Fourier-reweighted) ISM. For this simulation, a PSF standard deviation of 93.3 nm and a camera pixel size of 65 nm were used. (a) Approximate CRLB in the x-direction as a function of the expected signal photon budget for varying values of the expected background photon count. (b) Improvement of the approximate CRLB over SMLM as a function of the expected signal photon budget for varying values of the expected background photon count.

FIGURE 3 Theoretical minimum localization uncertainty of SpinFlux localization with one x-offset pinhole and pattern. For this simulation, 2000 expected signal photons and 8 expected background photons per pixel were used. Results are evaluated for the scenario where the entire signal photon budget is exhausted after illumination with the pattern (disregarding signal photons blocked by the spinning disk). (a) Schematic overview of SpinFlux localization with one pinhole with radius r_p centered at coordinates (x_p, y_p). In (d) and (e), the x-distance (x_p − θ_x) between the pinhole and the emitter is varied, where y_p = θ_y. (b) SpinFlux CRLB in the x-direction as a function of the emitter-pinhole x- and y-distances for pinhole radius r_p = 3σ_PSF. (c) Improvement of the SpinFlux CRLB over SMLM as a function of the emitter-pinhole x- and y-distances, for pinhole radius r_p = 3σ_PSF. (d) CRLB in the x-direction as a function of the emitter-pinhole x-distance. Simulations show SpinFlux with varying pinhole sizes, widefield SMLM, and localization on ISM reconstructions. (e) Improvement of the SpinFlux CRLB over SMLM as a function of the emitter-pinhole x-distance for varying pinhole sizes.
FIGURE 5 Theoretical minimum localization uncertainty of SpinFlux localization with four pinholes and donut-shaped patterns in an equilateral triangle configuration with a center pinhole. For this simulation, 2000 expected signal photons and 8 expected background photons per pixel were used, with pinhole radius r_p = 3σ_PSF. Results are evaluated for the scenario where the entire signal photon budget is exhausted after illumination with all patterns (disregarding signal photons blocked by the spinning disk). (a) Schematic overview of SpinFlux localization with a triangle of three pinholes and an additional center pinhole, centered at focus coordinates (x_f, y_f). In (d) and (e), the x-distance (x_f − θ_x) between the pattern focus and the emitter is varied, where y_f = θ_y. (b) SpinFlux CRLB in the x-direction as a function of the emitter-pinhole x- and y-distances for pinhole spacing r = 3σ_PSF. (c) Improvement of the SpinFlux CRLB over SMLM as a function of the emitter-pinhole x- and y-distances for pinhole spacing r = 3σ_PSF. (d) CRLB in the x-direction as a function of the emitter-focus x-distance. Simulations show SpinFlux with varying pinhole spacing, widefield SMLM, and localization on ISM reconstructions. (e) Improvement of the SpinFlux CRLB over SMLM as a function of the emitter-focus x-distance for varying pinhole spacing.

TABLE 1 Summary of simulation results for localization on ISM reconstructions and SpinFlux variants considered in the main text
2024-01-22T16:04:49.038Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "633ee37a34090196a9cacd8c6f9f6ae9252b7383", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.bpr.2024.100143", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5ec5053d15b453008d92d08717def7708ea8611", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
4526705
pes2o/s2orc
v3-fos-license
Detection of plasma tumor necrosis factor, interleukins 6, and 8 during the Jarisch-Herxheimer Reaction of relapsing fever.
The Jarisch-Herxheimer Reaction (J-HR) is a clinical syndrome occurring soon after the first adequate dose of an antimicrobial drug to treat infectious diseases such as Lyme disease, syphilis, and relapsing fever. Previous attempts to identify factors mediating this reaction, which may cause death, have been unsuccessful. We conducted a prospective trial in Addis Ababa, Ethiopia on 17 patients treated with penicillin for proven louse-borne relapsing fever due to Borrelia recurrentis, to evaluate the association of symptoms with plasma levels of tumor necrosis factor (TNF) and interleukins 6 and 8 (IL-6 and IL-8). 14 of the 17 (82%) patients experienced a typical J-HR consisting of rigors, a rise in body temperature (1.06 ± 0.2°C) peaking at 2 h, leukopenia (7.4 ± 0.6 × 10^3 cells/mm^3) at 4 h, and a slight decrease and then rise of mean arterial blood pressure. Spirochetes were cleared from blood in 5 ± 1 h after penicillin. There were no fatalities, but constitutional symptoms were severe during J-HR. Plasma TNF, IL-6, and IL-8 were raised in several patients on admission, but sevenfold, sixfold, and fourfold elevations of these plasma cytokine concentrations over admission levels, respectively, were detected, occurring transiently and coinciding with the observed pathophysiological changes of J-HR. Elevated plasma cytokine levels were not detected in the three patients who did not suffer J-HR. We conclude that the severe pathophysiological changes characterizing the J-HR occurring on penicillin treatment of louse-borne relapsing fever are closely associated with transient elevation of plasma TNF, IL-6, and IL-8 concentrations.

The Jarisch-Herxheimer Reaction (J-HR) is an eponymous title for a clinical phenomenon, first described at the turn of the century, resulting from the treatment of early syphilis with mercury (1,2). The original description was that the spots of roseola syphilis became more defined and numerous on treatment, and that the reaction was accompanied by fever, sweating, and anorexia occurring within 24 h of mercury inunction. It is now recognized that similar transient reactions occur soon after the first adequate dose of a drug, usually an antibiotic, in the treatment of a wide spectrum of infectious diseases (3). Antibiotic treatment of spirochetal infections, for example syphilis, relapsing fever, yaws, leptospirosis, Lyme disease, and Vincent's angina, has typically been identified with J-HR, but the phenomenon has been described in many other bacterial infections. In addition, J-HR has been described in protozoal infection; for example, in African trypanosomiasis it is a serious complication of treatment (4). Pathophysiological changes occurring on penicillin treatment of early syphilis have been previously documented (5), and consist of an increase in body temperature of more than 0.8°C, peaking between 6 and 8 h, often with rigors, associated with a fall in systemic arterial blood pressure and an increase in metabolic rate. The clinical worsening of symptoms associated with antibiotic treatment of louse-borne relapsing fever, caused by Borrelia recurrentis, is the most severe form of J-HR documented. Patients with this infection present with high fever, severe constitutional disturbance, hepatalgia, and splenomegaly (3).
The diagnosis of relapsing fever is made by microscopic observation of spirochetes typical of Borrelia recurrentis on Wright's-stained blood smears. The level of bacteremia may be intense, with densities of up to 10^5 organisms per mm^3. The name relapsing fever describes one of the principal clinical features of this infection, in which there is spontaneous resolution of high pyrexia, usually by crisis, within 10 d, followed by a transient remission lasting 5 or 6 d and then a relapse of symptoms. Each remission is accompanied by the production of IgG and IgM specific for the bacterial strain responsible for that relapse, and up to four relapses may occur during the course of the infection. The J-HR of relapsing fever may be fatal, with a mortality approaching 40% (6). The J-HR of louse-borne relapsing fever (7-9) is very predictable in its form and is similar to that of syphilis, but it occurs sooner after the administration of antibiotics, at about 90 min, and is associated with considerably more severe constitutional disturbances. It is hardly surprising that the severe pathophysiological changes of the J-HR of relapsing fever have provided a paradigm for studies on the pathogenesis of inflammation and shock. However, previous studies aimed at defining the nature of the mediators involved in the etiology of this syndrome have failed. Since many of the features of J-HR resemble those now known to be caused by cytokines in man or animals, we have carried out experiments to test the hypothesis that the J-HR of louse-borne relapsing fever is associated with the appearance of the cytokines TNF, IL-6, and IL-8 in the circulation.

Materials and Methods
Selection of Patients. Patients attending the emergency outpatient clinic of the Black Lion Hospital, Addis Ababa, with a proven diagnosis of relapsing fever were included in the study. The diagnosis of relapsing fever was made on clinical suspicion and proven by visual identification of Borrelia recurrentis organisms on blood smears stained with Wright's stain and examined using light microscopy. Patients were excluded from the study if evidence of any other infection was found and, in particular, if malarial parasites were seen on the blood smear in addition to Borrelia. Only patients experiencing the first episode of fever in the course of B. recurrentis infection were included. In all, 17 patients were enrolled in the study. The age range was 14-40 yr, and all patients were male.

Protocol and Clinical Parameters. Patients were nursed throughout the study reclining on a bed. Clinical measurements of rectal temperature (using a mercury thermometer) and blood pressure were made on admission until steady values had been obtained, then immediately before intramuscular injection of 600,000 U procaine penicillin, and at 0.5, 1, 2, 4, 8, 12, and 24 h after injection. In addition, 10 ml of blood were drawn from a peripheral vein at the same time points and placed into endotoxin-free tubes containing EDTA as anticoagulant. Thin blood smears were made on glass microscope slides and stained using Wright's stain. In addition, an aliquot of blood was taken for dilution and estimation of the total white cell count using a hemocytometer. The mean of three estimations of the white cell count was taken for each blood sample. Each blood sample was centrifuged at 1,000 g for 5 min, and plasma was immediately frozen in aliquots and stored at −20°C before cytokine assay.
Microscopy of the stained blood films was carried out by an independent observer who examined them by eye for the presence of B. recurrentis. A minimum of 25 microscopic fields (×100) were examined for the presence of the organism. A negative score was not given until all microscopic fields examined were free of the organism. Patients were discharged from the hospital at 24 h, but some discharged themselves after 12-24 h.

Cytokine Assays. All cytokine assays were performed in the Department of Pathology, the University of Michigan. Samples were assayed within 3 mo of aspiration and were continuously maintained at −20°C until the time of assay. Measurements were made in a blinded fashion, so that the investigators handling coded samples were unaware of patients' clinical details or of the timing of samples.

TNF Analysis. TNF bioactivity was assayed using the highly sensitive cell line WEHI 164, subclone 13 (the generous gift of Anders Waage, University of Trondheim, Norway), which is able to detect TNF at a concentration of 2 pg/ml (10). We have previously shown that this assay is unaffected by IL-1, -2, or -6, and that there is no synergy between interferon-γ and TNF in this assay (11). Briefly, samples were serially diluted in 96-well microtiter plates. The WEHI cells were resuspended at 5 × 10^5 cells/ml in RPMI 1640 with 10% FCS, 2 mM L-glutamine, and 0.5 μg/ml actinomycin D (Calbiochem Corp., La Jolla, CA). The next day, cell lysis was detected by adding MTT-tetrazolium (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide; thiazolyl blue) and incubating the plates an additional 4 h. The dark purple tetrazolium salts were then dissolved in acidified isopropanol. Units of TNF were calculated based on a recombinant human standard run in the same assay.

IL-6 Assay. The IL-6 assay was performed using the B9 cell line, which is very sensitive to IL-6. Briefly, serial dilutions of the samples were placed into 96-well microtiter plates, to which were added 5,000 B9 cells in IMDM with 2 mM L-glutamine, 25 mM Hepes, 1% penicillin/streptomycin, and 10% FCS (12). The cells were incubated for 68 h at 37°C and pulsed for the final 6 h with MTT-tetrazolium. The crystals were dissolved with acidified isopropanol, and the units calculated based on a standard curve run in the same assay. We have previously established the specificity of the B9 assay for IL-6, and have shown that it is unaffected by multiple other cytokines (13).

IL-8 ELISA. IL-8 was measured with an ELISA procedure following a previously described protocol (14). A polyclonal rabbit anti-IL-8 antibody was prepared by repeated intradermal injections of purified rIL-8. The IgG from high-titer antisera was purified on protein A-agarose columns (Pierce Chemical Co., Rockford, IL). ELISA plates (Nunc-Immuno Plate MaxiSorp, Neptune, NJ) were coated with 50 μl/well of anti-IL-8 diluted to 1 μg/ml in borate-buffered saline (50 mM H3BO3, 120 mM NaCl, pH 8.6) and incubated overnight.
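Each of the readouts above is converted to a concentration by reference to a standard curve run in the same assay. As a generic illustration of that step (with hypothetical standard values; this is not the authors' analysis code), a four-parameter logistic curve can be fitted to the standards and inverted for the unknowns:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic curve commonly used for immunoassay standards."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Hypothetical standard curve: known concentrations (pg/ml) and readings
conc = np.array([2.0, 8.0, 31.0, 125.0, 500.0, 2000.0])
od = np.array([0.08, 0.15, 0.35, 0.80, 1.40, 1.80])  # illustrative values

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.0, 100.0, -1.0])

def concentration(od_sample):
    """Invert the fitted 4PL curve to read a concentration off the standard."""
    bottom, top, ec50, hill = params
    return ec50 * ((top - bottom) / (od_sample - bottom) - 1.0) ** (1.0 / hill)

print(f"sample OD 0.60 -> {concentration(0.60):.0f} pg/ml")
```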
Results

Patients. 17 patients were enrolled in the study. Patients were all febrile on admission, with a rectal temperature of 39.5 ± 0.9°C (SEM), and had been ill with symptoms consistent with a first episode of B. recurrentis infection for 4.1 ± 1.0 d (SEM) before inclusion in the study. 14 patients (82%) experienced a severe J-HR with rigors and profound constitutional symptoms. Three patients experienced no J-HR. There were no deaths in the study period, and all patients were discharged from the hospital.

All patients completed 12 h of study, and 11 patients completed 24 h of study. Six discharged themselves from the hospital early, between 12 and 24 h. Fig. 1 shows results of the clinical parameters measured during the study as a function of time. Rigors were experienced by all 14 patients experiencing J-HR and occurred between 60 and 120 min after penicillin administration, each episode lasting between 20 and 30 min. Even though all patients were febrile on admission to the study, all 14 patients suffering J-HR experienced a rise in body temperature of 1.06 ± 0.2°C (SEM), which peaked at 120 min and coincided with rigors. In addition, a leukopenia (7.4 ± 0.6 × 10³ cells mm⁻³) beginning with the onset of rigor was documented. Furthermore, the typical small decrease then rise in mean arterial blood pressure was observed. Spirochetemia, while not being quantitatively recorded, had disappeared in all patients within a mean of 4.5 h after antibiotic treatment was commenced.

Plasma cytokine concentrations measured during the study are shown in Fig. 2. Since the patients had been symptomatic for an average of 4 d before diagnosis, it was not surprising that several patients had circulating cytokines detectable before the initiation of antibiotic treatment. To provide a clearer understanding of the relative changes in cytokine levels, the data in Fig. 2 are expressed as a fold increase over admission levels. These data show that TNF was the most rapidly induced and showed the greatest increase relative to admission levels. A detailed description of the rise in plasma cytokine concentrations and body temperature in the early phase of the J-HR is shown in Fig. 3 in conjunction with the time of onset of rigor. This figure shows that TNF is elevated over admission levels within 30 min. These TNF levels were significantly greater than pretreatment controls at the 1 h time point (Wilcoxon rank sum test), whereas IL-6 and IL-8 levels were not elevated above baseline at this time point. Plasma IL-6 was elevated above admission at the 2 h time point, and IL-8 at the 4 h time point (all comparisons made using the Wilcoxon rank sum test). Thus, the evolution of plasma cytokine levels in this form of J-HR occurred in a distinct temporal sequence, with the early, rapid rise in TNF being sequentially followed by IL-6 and then by IL-8. This figure clearly shows the relationship of the appearance of plasma cytokines to the development of rigors and pyrexia. TNF appeared before the onset of symptoms, IL-6 was detected as the symptoms developed, and IL-8 was detected well after the onset of rigors and pyrexia.
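The fold-increase normalization and the rank-based comparisons above can be reproduced in a few lines. A sketch on hypothetical values; `scipy.stats.ranksums` is one implementation of the Wilcoxon rank sum test named in the text, and all numbers below are invented:

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical plasma TNF values (pg/ml): admission vs. 1 h post-penicillin.
admission = np.array([5, 12, 0, 8, 20, 3, 15, 7])
one_hour = np.array([90, 150, 40, 110, 300, 60, 200, 95])

# Fold increase over admission levels (small offset avoids division by zero
# for patients with no detectable cytokine on admission).
fold_increase = (one_hour + 1) / (admission + 1)
print(fold_increase.round(1))

# Wilcoxon rank sum comparison of the two time points, as named in the text.
stat, p = ranksums(one_hour, admission)
print(f"rank-sum statistic {stat:.2f}, p = {p:.4f}")
```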
Discussion

The J-HR is a well-known but infrequently diagnosed complication of antimicrobial therapy. The reaction has been documented in the treatment of many bacterial infections (3), but particular attention has been focused on the phenomenon in spirochetal infections, namely syphilis (5), relapsing fever (6-9), and leptospirosis (15). There has been considerable debate in the past concerning the nature and the identity of the mediators involved in the severe clinical manifestations of the J-HR, particularly for syphilis and relapsing fever, but no definitive etiological explanation for the phenomenon has emerged. The present study sheds light on this question for the first time and suggests that a cytokine cascade consisting, at the least, of TNF, IL-6, and IL-8 may be responsible.

We have demonstrated that these cytokines appear in the circulation in transient fashion related to the appearance of severe symptoms and pathophysiological changes associated with the most severe form of the J-HR, namely those associated with relapsing fever caused by B. recurrentis infection. Analysis of the plasma cytokine profiles demonstrates that the appearance of elevated TNF in the plasma precedes that of IL-6, which in turn precedes that of IL-8. In the current study, 82% of patients suffered a J-HR, a percentage consistent with published data of previous studies in this condition (6,7). The three patients who did not experience a J-HR in the current study were cured of B. recurrentis bacteremia and became apyrexial after penicillin treatment, but demonstrated no release of any of the three measured cytokines. The peak plasma levels of TNF measured in this study, 126 ± 38 pg/ml (range 0-469), are close to levels detected upon hospital admission of patients who subsequently died of meningococcal bacteremia (16), and slightly lower than peak levels of TNF detected in plasma of human volunteers receiving bolus infusion of endotoxin (17). In addition, the peak detected plasma levels of IL-6, namely 9,578 ± 1,808 pg ml⁻¹ (range 2,262-28,447), are comparable with those measured previously in septic shock and bacteremia (18), but considerably higher than in human volunteers receiving bolus infusion of endotoxin (19) or rTNF (20). Elevated levels of plasma IL-8 have not previously been reported in any human infection, but elevated levels have been detected in a primate model of bacteremia (21). Bolus infusion of endotoxin into human volunteers has been reported to cause a rise in plasma IL-8 concentration (19), but peak levels reported in that study were only ~10% of the peak values detected in the study reported here on the J-HR. Thus, the peak values of the cytokine concentrations recorded in this J-HR study are of the same order of magnitude as those recorded in previous studies of lethal bacteremia. In this study of the J-HR of relapsing fever, no fatalities occurred; however, severe pathophysiological changes were documented. The transient nature of elevated plasma TNF, IL-6, and IL-8 is similar to that recorded in humans in response to a bolus infusion of endotoxin (17,19). Furthermore, a similar profile of cytokine release has been recorded in vitro (22), reflecting control of cytokine production at the level of mRNA concentration within stimulated cells. A reaction resembling the J-HR has been demonstrated in renal transplant patients about 1 h after receiving antithymocyte globulin (23) or OKT3 mAb (24) to treat rejection, and transient elevation of plasma TNF concentration was associated with fever and constitutional symptoms. Treatment of cancer patients with high-dose rTNF induces symptomatology very similar to that observed during the J-HR (25). In addition, experiments extending these studies showed that incubation of peripheral blood mononuclear cells in vitro with either antithymocyte globulin or OKT3 mAb induced secretion of TNF. The stimulus for cytokine generation and release into the circulation during the J-HR clearly deserves more detailed investigation. The presence of soluble pyrogenic agents has been documented in plasma of patients experiencing J-HR by bioassay employing intravenous infusion of such plasma into rabbits (26). However, the identity of the factor(s) responsible was not determined.
It was suggested that endotoxin per se may be the mediator, since the pathophysiological events of the J-HR closely resemble those of endotoxin administration (27). However, although a further study suggested that endotoxin might be responsible, it also demonstrated that endotoxin fractions prepared from B. recurrentis and injected intravenously into rabbits caused none of the pathophysiological changes associated with the J-HR (28). The correlation of the disappearance of the spirochetes from the circulation, demonstrated in the present study, with the observed pathophysiological changes and pulsatile release of cytokines suggests that removal of bacteria, presumably by phagocytosis, may represent the stimulus for cytokine release. Spirochetes are not removed en masse from the circulation until penicillin is administered, and it is likely that abnormal forms of the organism, produced in response to the antibiotic, are rendered susceptible to phagocytosis by macrophages, for example Kupffer cells in the liver. In our laboratory, we have recently shown that phagocytosis of pathogens by human macrophage cell lines (THP-1) or peripheral blood monocyte/macrophages caused a rapid increase in mRNA for TNF, IL-6, and IL-8, and that these cytokines are released into the incubating medium (G. E. Griffin, unpublished observations). It is therefore possible that phagocytosis of spirochetes made susceptible by the action of penicillin is an important stimulus for the production and release of cytokines. This hypothesis is strengthened by the finding that a nonendotoxin, heat-stable, particulate pyrogen has been isolated from B. recurrentis, which was proposed as a possible stimulus for the J-HR (26). The hypothesis is testable using a murine model of Borreliosis (29) in which the administration of ampicillin has been shown to produce a J-HR-type reaction. In addition, a previous study of the interaction of Lyme disease spirochetes with adherent human monocytes or murine macrophage cell lines demonstrated the production of IL-1 as determined by bioassay (30). The bioassay in this study was probably not specific for IL-1 and may well have detected IL-6. Lyme disease spirochetes were phagocytosed in vitro by macrophages in the absence of opsonins, serum, or antibiotics. In addition, experiments were carried out that eliminated endotoxin as a stimulus for cytokine release in this system. Furthermore, in another series of experiments, it was shown that Borrelia spirochetes induced the production of leukocyte pyrogen and thromboplastin from human blood leukocytes (31). Attempts have been made to ameliorate the severity of the J-HR using pharmacological agents. The use of intravenous hydrocortisone was ineffective (7), and the use of an opiate partial agonist, meptazinol, was only partially successful (32) in relapsing fever. The use of a more potent glucocorticoid, prednisone, in the J-HR of syphilis was also unsuccessful in reducing symptoms (33). The demonstration in the present study that cytokines may have a role to play in the etiology of the J-HR may lead to more rational therapy aimed at blocking the action of cytokines using specific mAbs that have already been shown to be efficacious in animal models of severe infection (34,35).
Indoor birch pollen concentrations differ with ventilation scheme, room location, and meteorological factors

Indoor pollen concentrations are an underestimated human health issue. In this study, we measured hourly indoor birch pollen concentrations on 8 days in April 2015 with portable pollen traps in five rooms of a university building at Freising, Germany. These data were compared to the respective outdoor values right in front of the rooms and to background pollen data. The rooms were characterized by different aspects and window ventilation schemes. Meteorological data were equally measured directly in front of the windows. Outdoor concentration could be partly explained with phenological data of 56 birches in the surroundings showing concurrent high numbers of trees attaining flowering stages. Indoor pollen concentrations were lower than outdoor concentrations: the mean indoor/outdoor (I/O) ratio was highest in a room with fully opened window and additional mechanical ventilation (0.75), followed by rooms with fully opened windows (0.35, 0.12), and lowest in neighboring rooms with tilted window (0.19) or windows only opened for short ventilation (0.07). Hourly I/O ratios depended on meteorology and increased with outside temperature and wind speed oriented perpendicular to the window opening. Indoor concentrations additionally depended on the previously measured concentrations, indicating accumulation of pollen inside the rooms even after the full flowering period.

Detailed information is needed on the indoor pollen concentrations that have the more meaningful influence on humans compared to outdoor or even background pollen concentrations. Although there are a few studies on indoor pollen or mold spores, [8-16] there is still a need for a comprehensive study that combines outdoor concentrations on rooftop level and ground level as well as indoor concentrations. 17 Most of the studies reported decreased values of pollen concentration inside buildings, but some studies also showed that these concentrations were not correlated with outdoor levels (eg, Ref. 12). O'Rourke and Lebowitz 11 stated that atmospheric transport plays a negligible role for indoor pollen concentrations and identified feet and bodies of people and animals as the main vectors. The great influence of pollen transport into houses via clothing was also supported by Jantunen and Saarinen. 18 Furthermore, indoor pollen concentrations were found to increase when rooms were more frequently accessed and outdoor activities of people were higher. 8,9 Equally for birch pollen antigens in dust, it has been suggested that they are carried indoors via footwear and clothes. 19 However, there is a lack of knowledge about how meteorological parameters are able to influence the indoor concentration of pollen in relation to its outdoor concentration. We additionally realize that little is known about differences caused by different ventilation schemes. A deeper understanding of the influence of window ventilation will allow a better adaptation of individual behavior. Therefore, this study aimed to answer the following questions:
• Is there a significant correlation between outdoor and indoor pollen concentrations?
• Will indoor pollen concentrations and indoor/outdoor ratios change under different ventilation schemes and room locations?
• How do meteorological conditions, especially wind direction, influence the number of floating pollen in indoor air?
| Study site and rooms

The study was conducted inside and outside the forest faculty building of the Technical University of Munich at Freising, Germany (48°24′N, 11°45′E). The three-story building is situated at the western edge of the green campus area, on which agricultural fields and forests border. The edifice itself is surrounded by extensively managed meadows, hedges, and groups of trees comprising different species, including some birch (Betula pendula Roth) specimens (Figure 1). Indoor (I) and outdoor (O) concentrations of birch pollen were assessed for five rooms in the building: three office rooms, one large combined laboratory/seminar room, and one small laboratory room (Table 1, Figure 1). Ventilation schemes and other properties of the rooms are listed in Table 1. All rooms have a central heating system with heating elements under the windows. In all rooms, the windows were opened for starting and stopping the respective outdoor personal pollen samplers (see section Pollen monitoring), which were placed on the window sills. Windows were closed at the end of the day. All five rooms, especially the office rooms, were frequently entered by co-workers, students, and the regular users. All sampling days except the first one (April 19, 2015, DOY 109) were working days. Thus, the experimental conditions were in accordance with real-life situations. The rooms East-Tilt and East-Vent lie directly next to each other; one tall birch tree that was flowering during the measurements is situated right in front of their windows.

| Pollen monitoring

Air is aspirated at 10 L min⁻¹ through a vertically oriented intake, and pollen is deposited on microscope slides that are coated with white pharmaceutical Vaseline (Molyduval). Microscope slides were inserted every second hour for 60 minutes between 8 a.m. and 7 p.m., resulting in six measurements per day (8–9 a.m., 10–11 a.m., 12–1 p.m., 2–3 p.m., 4–5 p.m., and 6–7 p.m.). In total, the sampling campaign resulted in 480 pollen samples, of which five had to be discarded due to failure in the sampling. To prepare permanent samples, we applied a mixture of distilled water, gelatine, gelvatol, and safranin (staining) to cover slips and fixed them to the microscope slides. The edges were sealed with common transparent nail varnish. Samples were assayed under a light microscope at 400× magnification (Axio Lab.A1 connected to a Motic Moticam 3, 3.0 MP; Zeiss Microscopy GmbH, Jena, Germany).

Practical Implications
• The assessment of indoor pollen is crucial for human wellbeing as people stay most of the day inside buildings. Although restricted to one season and five rooms, our study demonstrated that outdoor pollen concentrations varied with room location and that weather and ventilation schemes strongly influenced indoor/outdoor ratios. In addition to the wise choice of room location and …

| Meteorological data

Meteorological … Although in meteorology, wind direction is reported by the direction from which it originates (eg, southerly wind), for clarity in the analyses, we refer to the direction it is going to (eg, wind toward north).

| Statistical analyses

Linear regressions were performed to determine the combined influence of … where i = 1…235 is the observation id, and the rest is as above. Residual plots showed that errors were heteroscedastic with respect to outdoor pollen concentrations; thus, we included a variance function that is an exponential of the outdoor pollen concentration with different parameters for each room. This corresponds to the following formulation:

Var(ε_i) = σ² exp(2 δ_r(i) O_i),

where O_i is the outdoor pollen concentration of observation i and δ_r(i) is the variance parameter of the room to which observation i belongs.
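A minimal numerical sketch of such a heteroscedastic fit on hypothetical hourly data: the fixed variance parameter delta approximates the exponential variance function (the study estimated room-specific parameters, in the style of nlme's varExp); all column names and values below are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical hourly records for one room (pollen grains per m^3).
df = pd.DataFrame({
    "indoor":  [12, 30, 55, 40, 22, 18],
    "outdoor": [60, 150, 300, 220, 90, 70],
})
df["io_ratio"] = df["indoor"] / df["outdoor"]   # hourly I/O ratio
df["indoor_prev"] = df["indoor"].shift(1)       # previously measured value
d = df.dropna()

# Weighted least squares approximating Var(eps) ~ exp(2*delta*outdoor);
# delta = 0.005 is an assumed, not fitted, value.
delta = 0.005
X = sm.add_constant(d[["outdoor", "indoor_prev"]])
w = np.exp(-2 * delta * d["outdoor"])           # inverse-variance weights
fit = sm.WLS(d["indoor"], X, weights=w).fit()
print(fit.params)
```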
| Birch pollen and flowering season

The aerobiologically defined birch pollen season started on April 13th and ended on April 25th, when 5% or 95% of the annual sum had been collected (Figure 3). The birch pollen concentration sharply increased to its annual peak on April 16th with 1722 pollen grains m⁻³.

| Outdoor meteorological conditions

Under the influence of a high-pressure system, the weather from April 19th to 22nd was predominantly sunny with only a few clouds on April 22nd. 21,23 Dry air masses of subpolar origin subsequently warmed up, and daily maximum temperatures increased up to ~20°C. During nights, some frost was observed. Winds in this period were generally weak and toward west and south. On April 23rd, a small low-pressure system in the higher troposphere reached Bavaria from northwest and led to lower air temperatures. On the following day, it was sunny and warm again with weak winds toward east. There was light rain on the non-sampling days, the 25th and 26th. On April 27th, a low-pressure system moved over Bavaria, and this was the only sampling day on which 6 mm precipitation was registered. On April 29th, the weather was cooler but sunny again. The outdoor measurements (Figure 4) reflected the diurnal patterns, with maximum temperatures around midday or in the early afternoon, depending on the aspect (east: midday; south and west: early afternoon). On the south and sometimes also on the west side, the highest temperatures were recorded, corresponding to the lowest air humidity values. Air pressure records mirror the frontal system passing on April 27th.

| Wind speed and directions

Wind speeds were generally low during the sampling days, not exceeding the category of moderate breeze (Beaufort scale 4, 5.5–7.9 m s⁻¹). Opposite to the general pattern toward east, 20% of winds were also toward west (Figure 5). The outdoor wind field in front of the windows …

| Outdoor and indoor birch pollen concentrations

With respect to outdoor pollen concentrations, the eight sampling days can be divided into two periods: medium-to-high pollen concentrations on the first four sampling days (April 19th till 22nd) of up to 600 pollen grains m⁻³, and low concentrations rarely exceeding 50 pollen grains m⁻³ on the last four sampling days (April 23rd, 24th, 27th, and 29th) (see Figure 6). This dichotomy is equally seen in the … Background (building rooftop) and outdoor (window) pollen concentrations were highly correlated for all rooms (all P<.001, Table 2). Outdoor and indoor pollen concentrations were highly correlated for South-Open and less so for the other rooms, with no significant correlation for North-Open. The correlations between indoor and background concentrations were very similar to the indoor-outdoor correlations.

| Modeling

The significant explanatory variables for modeling indoor pollen concentrations …

| DISCUSSION

Background pollen concentration and phenological observations largely matched; however, some small discrepancies are obvious, which have to be carefully interpreted in light of the three-day temporal resolution of the phenological observations. The peak concentration … For West-Open, the value was smaller (0.77), probably due to its largest distance to the roof trap and the building structure (see Figure 1). For complex city structures, the representativeness of pollen traps is known to be limited. 27
The presence of high buildings and complex surfaces may, for example, increase turbulence, thereby causing pollen concentrations to differ considerably over short distances, both vertically and horizontally. [28-31] For most allergic people, due to their living and working conditions, indoor pollen concentrations are more relevant than outdoor background concentrations, and the suitable siting of office or living rooms and their ventilation matters. 13,32 How relevant these differences between outdoor and indoor conditions may be is underlined by our study. First symptoms in people allergic to Betula pollen occur when airborne pollen concentrations exceed ~20 pollen grains per m³, 33,34 a threshold which was exceeded on nearly all days of the birch pollen season (see Figure 3). This is the range which can be underrun by the "best performing" room in our study (16 pollen grains per m³ in East-Vent), which even has a birch tree at 5 m distance in front of its window. The average I/O ratio of birch pollen grains found in this study (0.33) largely matches results reported in the literature; especially the average rate for the four rooms which were not influenced by an installed ventilation system (0.22) is in line with previous studies. 8,9,13,32

Figure 8: Same as Figure 7, but for effects on the I/O ratio of pollen.

However, our study revealed distinct and significant differences between the five rooms. The highest mean I/O ratio of 0.75 was found for South-Open, strongly supported by the highest correlation between indoor and outdoor pollen concentration (0.95, see Table 2). This relatively small laboratory room is characterized by a high window/room volume ratio (see Table 1); thus, the constantly opened window allowed considerable exchange of air, boosted by the exhaust hood working at a very small extraction rate, which obviously constituted an effective ventilation … however, their window to room volume ratio was similar (Table 1) … (date coefficients had P<.05) and were higher in the second half of the sampling period (t tests comparing the first to the last four measurement days, all P<.05). During this time, both outdoor and indoor concentrations of pollen grains decreased substantially. The linear model for indoor pollen concentrations revealed still an additional dependence on the previous concentration after accounting for outdoor pollen concentration; thus, most likely pollen grains also accumulated inside the rooms, which in the end may also influence I/O ratios. As the sampling sites were not cleaned daily, pollen grains probably settled down and accumulated over time. Even if the ground is cleaned, pollen grains from inaccessible areas of a room can be moved to open areas and appear in the samples beyond the pollen season. 38 In addition, pollen grains are able to accumulate in house dust. Thus, they can reach a peak even a long time after the pollination season and maintain their antigenic activities until the next pollination season. 39,40 Yli-Panula 41 and Enomoto et al. 42 …
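The ~20 pollen grains per m³ symptom threshold cited above suggests a simple way to compare rooms: the share of measured hours in which the threshold is exceeded. A minimal sketch on invented hourly values (room names reused from the study; the numbers are hypothetical):

```python
import pandas as pd

# Hypothetical hourly birch pollen concentrations (grains per m^3) per room.
hourly = pd.DataFrame({
    "East-Vent":  [5, 12, 16, 9, 4, 3],
    "South-Open": [40, 120, 260, 180, 60, 25],
})

THRESHOLD = 20  # grains per m^3, approximate symptom threshold cited above

# Fraction of measured hours exceeding the symptom threshold, per room.
exceedance = (hourly > THRESHOLD).mean()
print(exceedance)
```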
Prior Infection of Chickens with H1N1 or H1N2 Avian Influenza Elicits Partial Heterologous Protection against Highly Pathogenic H5N1

There is a critical need for vaccines that can protect against emerging pandemic influenza viruses. Commonly used influenza vaccines are killed whole virus that protects against homologous but not heterologous virus. Using chickens, we have explored the possibility of using live low pathogenic avian influenza (LPAI) A/goose/AB/223/2005 H1N1 or A/WBS/MB/325/2006 H1N2 to induce immunity against heterologous highly pathogenic avian influenza (HPAI) A/chicken/Vietnam/14/2005 H5N1. H1N1 and H1N2 replicated in chickens but did not cause clinical disease. Following infection, chickens developed nucleoprotein- and H1-specific antibodies and reduced H5N1 plaque size in vitro in the absence of H5 neutralizing antibodies at 21 days post infection (DPI). In addition, heterologous cell mediated immunity (CMI) was demonstrated by antigen-specific proliferation and IFN-γ secretion in PBMCs re-stimulated with H5N1 antigen. Following H5N1 challenge of both pre-infected and naïve control chickens housed together, all naïve chickens developed acute disease and died, while H1N1- or H1N2-pre-infected chickens had reduced clinical disease and 70-80% survived. H1N1- or H1N2-pre-infected chickens were also challenged with H5N1 and naïve chickens placed in the same room one day later. All pre-infected birds were protected from H5N1 challenge but shed infectious virus to naïve contact chickens. However, disease onset, severity, and mortality were reduced and delayed in the naïve contacts compared to directly inoculated naïve controls. These results indicate that prior infection with LPAI virus can generate heterologous protection against HPAI H5N1 in the absence of specific H5 antibody.

Introduction

Influenza A viruses can infect a variety of animal species including birds, swine, and humans. Highly pathogenic avian influenza continues to cause economic losses to the poultry industry worldwide, with outbreaks of H5N2 and H7N3 in North America [1,2,3] as well as outbreaks of H5N1 originating in Hong Kong [4,5] spreading throughout Asia and into Africa and Europe. These Eurasian H5N1 viruses are zoonotic, can cause serious disease leading to death in humans [6], and are feared to be capable of causing the next influenza pandemic [7]. The demonstration that H5N1, through a combination of mutations, can transmit between ferrets has further raised alarms that H5N1 could cause the next influenza pandemic [8,9]. Influenza viruses are segmented negative-sense single-stranded RNA viruses and can undergo genetic drift, when the individual genes change slowly through mutation over time, or genetic shift, where entire gene segments can be exchanged between different influenza viruses. The reservoir for avian influenza is wild birds, in which hemagglutinin (HA) (H1-H16) and neuraminidase (NA) (N1-N9) subtypes circulate [10,11]. Recently an H17 subtype has been discovered in bats [12]. In birds, low pathogenic avian influenza (LPAI) viruses replicate but do not cause severe clinical disease; however, LPAI can result in a drop in egg production even when no clinical signs are observed. In contrast, highly pathogenic avian influenza (HPAI) can evolve from some H5 and H7 subtype viruses through the acquisition of a polybasic amino acid motif at the HA0 cleavage site. Highly pathogenic avian influenza causes severe clinical disease and death in poultry [1].
There is currently an unmet need for a vaccine that can protect against newly emerging influenza viruses before their subtype is known. Although currently used conventional influenza vaccines are generally effective in protecting animals and humans if used properly, they are not ideal, since new vaccines need to be matched and generated against currently circulating influenza viruses. This lag time in vaccine generation was demonstrated by the H1N1 2009 pandemic, where a vaccine was not available at the start of the pandemic [13]. Therefore, the development of universal influenza vaccines able to protect against an unknown newly emerging pandemic influenza virus is critical. To generate a universal vaccine, the correlates of immune protection against influenza would be valuable to aid development. Currently, influenza neutralizing antibodies are one known correlate of immunity. However, a universal vaccine eliciting neutralizing antibodies against multiple influenza virus subtypes is currently not feasible because the generation of escape mutants can occur through genetic drift [14]. Killed influenza vaccines must be closely matched with the HA subtype to be effective, and even small changes result in the vaccine losing effectiveness [15]. It is possible to generate cell mediated immunity to protect against different influenza subtypes using a variety of approaches. These include DNA vaccines [16], vector-based vaccines [17], and attenuated influenza viruses [18]. Heterologous immunity has been demonstrated to influence influenza virus infection [19]. Furthermore, the role of natural infection with influenza viruses in generating heterologous immunity against HPAI H5N1 influenza has been evaluated in various animal models such as ferrets [20], pigs [21], Canada geese [22], wood ducks [23], mallard ducks [24], swans [25], and chickens [26]. These publications demonstrate that previous infection with several different live influenza viruses can either protect against or influence the outcome of HPAI influenza virus infection in a wide variety of animal species. Hence, prior infection with a heterologous influenza virus may offer potential protection against pandemic influenza viruses. Evaluating heterologous immunity generated by prior infection with influenza virus may lead to improved vaccination strategies. Chickens were chosen to evaluate heterologous immunity since they are highly susceptible to HPAI, and HPAI influenza is transmissible between chickens. Hence, heterologous protection of chickens may provide insight into heterologous protection of other species. To address the role of heterologous immunity following natural infection, we used LPAI A/goose/AB/223/2005 H1N1 or A/WBS/MB/325/2006 H1N2 virus to infect chickens prior to challenge with A/chicken/Vietnam/14/2005 H5N1. In addition, the transmission of H5N1 from chickens previously infected with LPAI prior to H5N1 challenge to naïve contact chickens was assessed.

Ethics Statement

All animal work was carried out in compliance with Canadian Council on Animal Care guidelines and was approved by the Animal Care Committee at the Canadian Science Centre for Animal and Human Health.

Viruses

A/goose/AB/223/2005 H1N1 was isolated in 2005 from a wild goose in Alberta, Canada. A/WBS/MB/325/2006 H1N2 was isolated in 2006 from a wild bird in Manitoba, Canada. Both these viruses were grown and titrated in 9–10-day-old embryonated chicken eggs.
A/chicken/Vietnam/14/2005 H5N1 (H5N1) used in this study had an intravenous pathogenicity index (IVPI) score of 2.9. Propagation and titration of this H5N1 virus were done in Japanese quail fibrosarcoma (QT-35) cells. At the amino acid level, the N1 of H5N1 was 87% similar to the N1 of H1N1 and 43% similar to the N2 of H1N2. In addition, the N1 of H5N1 had a 20 amino acid deletion in the stalk region.

Infection of chickens with H1N1 and H1N2

Specific pathogen-free (SPF) chickens were obtained at 40 days of age from the Ottawa CFIA Fallowfield laboratory. The birds were floor-housed in heated enhanced BSL3 animal cubicles and allowed 1 week of acclimatization before the start of experiments. Each chicken was inoculated with 10⁵ plaque-forming units (pfu) of H1N1 or H1N2 in 1 ml sterile PBS via the cloaca, trachea, nares, and eyes. Chickens were monitored twice daily and clinical signs scored as 0, 1, 2, or 3 for normal, mildly sick, sick, and very sick (moribund), respectively. Cloacal and oropharyngeal swabs were collected from each chicken on day 0 (prior to challenge) and at predetermined time points post challenge. Blood for serum separation was collected from the wing vein of anaesthetized chickens on day 0 and 21 days post infection (DPI). Swabs and sera were stored at −70°C.

Heterologous challenge and evaluation of heterologous cross-protection

Chickens pre-infected with LPAI H1N1 or LPAI H1N2 at 21 DPI, as well as age-matched control chickens, were used for the H5N1 challenge. The H5N1 challenge groups were: 1) 9 LPAI H1N1-infected plus 5 uninfected control chickens co-housed in the same cubicle; 2) 9 LPAI H1N1-infected chickens with 5 non-infected contact controls added to the cubicle the day after the challenge; 3) 10 LPAI H1N2-infected chickens with 5 uninfected control chickens co-housed in the same cubicle; and 4) 10 LPAI H1N2-infected chickens with 5 non-infected contact controls added to the room the day after the challenge. The H5N1 challenge was performed using 10⁵ pfu of H5N1 delivered by the intranasal, oral, and ocular routes. Clinical signs were monitored and scored as previously described. For animal welfare reasons, moribund chickens were humanely euthanized and assigned a clinical score of 3. The number of dead and/or euthanized birds was recorded. Cloacal and oropharyngeal swabs were collected from each chicken prior to and at predetermined days post challenge (DPC). Blood for serum separation was also collected from the wing vein of surviving chickens at 21 DPC.

RNA extraction and quantitative real-time reverse transcriptase polymerase chain reaction (qRT-PCR)

Total RNA was extracted from 0.5 mL of clarified oral and cloacal swab specimens using the RNeasy Mini kit (Qiagen, Mississauga, Ontario, Canada) according to the manufacturer's protocol. To semi-quantify the amount of virus nucleic acid in each swab specimen, a semi-quantitative real-time RT-PCR specific for the M1 gene of influenza A was performed as previously described [27]. Standard curves were generated for each run using serial dilutions of full-length in vitro transcribed influenza A Matrix gene. The nucleic acid copy number in each specimen was extrapolated from the standard curve. In addition, qRT-PCR specific for H5 was performed to confirm that the post-challenge shedding was H5N1.
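The copy-number extrapolation described above follows from the linear relation between Ct and log₁₀(copies) across the Matrix-gene standards. A minimal sketch with invented Ct values; the helper name `copies_from_ct` is illustrative, not from the paper:

```python
import numpy as np

# Illustrative standard curve: Ct values measured for serial dilutions of
# in vitro transcribed Matrix-gene RNA of known copy number.
std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7])  # hypothetical Ct values

# Ct is linear in log10(copies); fit the line and invert it.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def copies_from_ct(ct: float) -> float:
    """Extrapolate the copy number of a swab specimen from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

print(f"{copies_from_ct(27.5):.2e} copies")
```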
Serology

All serum samples were heat-inactivated in a water bath for 30 minutes at 56°C. The hemagglutination-inhibition (HI) assay was performed according to the WHO manual on animal influenza diagnosis and surveillance [28]. Briefly, 4 HA units of virus were added to equal volumes of 2-fold serially diluted sera and incubated at room temperature for 30 minutes. This was followed by the addition of a 0.5% (v/v) suspension of chicken red blood cells (CRBC). The highest dilution of serum which completely inhibited the agglutination of CRBC was determined, the reciprocal of which was considered the HI titre for that serum specimen. The neuraminidase inhibition (NI) assay was performed as previously described [29], using binary ethylenimine (BEI)-inactivated whole LPAI H1N1 virus antigen. The antigen was first titrated to determine the optimum antigen dilution for use in the NI assay. The presence of anti-NA antibodies in serum and the NI titre were then detected by adding the optimal antigen concentration to serial 2-fold dilutions of sera. The highest dilution of antiserum to inhibit NA activity was considered the NI titre. The virus neutralisation assay was also performed according to the WHO manual on animal influenza diagnosis and surveillance [28], with minor modifications. Briefly, equal volumes of 100 pfu of influenza virus and serial 2-fold dilutions of sera were mixed and incubated at 37°C. After 1 hour, 100 μL of the serum/virus mixture was transferred to the corresponding wells of a 96-well plate with confluent MDCK cells (ATCC CCL-34) and incubated at 37°C for 1 hour. The serum/virus mixture was then replaced with fresh culture medium and the plates incubated at 37°C for 3-4 days. Plates were examined for cytopathic effect (CPE), and the reciprocal of the highest serum dilution to completely neutralize AIV growth in at least 2 out of 4 wells was considered the VN titre for that serum sample. Serological cross-reactivity between H1N1 or H1N2 and the HPAI H5N1 was assessed by testing the virus in combination with sera from LPAI H1N1- or H1N2-infected chickens in both HI and VN assays. The competitive ELISA for detecting antibodies against influenza A nucleoprotein (NP) was performed as previously described [30], using a baculovirus-expressed recombinant influenza A NP antigen. The plaque size reduction assay was performed following a published protocol [31] with some modifications. Briefly, 50 pfu of H5N2 in 500 μL AMEM was added into each well containing a confluent monolayer of MDCK cells in a 12-well plate. Virus back-titration and cell control wells were also included. After a 1 hour incubation at 37°C, the virus inoculum was removed from all wells, and 1.5 mL of DPI 21 serum from LPAI H1N1-infected chickens, diluted 1/10, 1/50, and 1/100 in an overlay of 1.5% carboxymethylcellulose, 2% FBS in DMEM (CMC overlay), was added to duplicate wells of H5N2-infected cells. Overlay lacking chicken serum was added to the virus back-titration and the cell control wells. After 96 hours at 37°C, the cells were fixed in 10% phosphate-buffered formalin and immunostained using an anti-influenza A NP monoclonal antibody. Plaques were visualized under an Olympus microscope, and plaque diameter was determined using computer-based cellSens Imaging software version 1.4.1 (Olympus Corporation). The average plaque size of at least 20 plaques per serum dilution was calculated.
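Several of the readouts above (HI, NI, VN) are defined as the reciprocal of the highest serum dilution that still meets a criterion. A small sketch of that readout logic for an HI-style 2-fold series, on invented well results:

```python
# Hypothetical HI readout: agglutination fully inhibited (True) at each
# 2-fold serum dilution, starting at 1:8.
inhibited = [True, True, True, True, False, False]  # 1:8 ... 1:256
START_DILUTION = 8

def hi_titre(results, start=START_DILUTION):
    """Reciprocal of the highest dilution that still fully inhibits."""
    titre = 0
    dilution = start
    for fully_inhibited in results:
        if not fully_inhibited:
            break
        titre = dilution
        dilution *= 2
    return titre

print(hi_titre(inhibited))  # -> 64
```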
Cell mediated immunity

Peripheral blood mononuclear cells (PBMC) were purified from heparinized blood collected from 4 H1N1-infected and 2 uninfected control chickens, using Ficoll-Paque Plus (GE Healthcare Bio-Sciences AB, Uppsala, Sweden) following a previously described protocol [32]. Purified PBMC were resuspended in RPMI-1640 supplemented with 2 mM L-glutamine, 100 U/ml penicillin, 100 μg/ml streptomycin, and 10% chicken serum (culture medium). To detect cell mediated immune (CMI) responses, PBMC were stimulated with H5N1 antigen, and IFN-γ secretion and cell proliferation were measured. PBMC were resuspended in culture medium at 1 × 10⁷ cells/ml, and 100 μl (1 × 10⁶ cells) were transferred to individual wells of a 96-well cell culture plate. Duplicate wells were stimulated with 100 μg/ml of purified BEI-inactivated H5N1 antigen to give a final volume of 200 μl/well. Cultures were incubated for 48 hours at 37°C and supernatants harvested after removing the cells by centrifugation. IFN-γ concentrations in supernatants were measured by ELISA using the Chicken IFN-γ CytoSet (Invitrogen, Camarillo, CA, USA) according to the manufacturer's protocol. Antigen-driven PBMC proliferation was assayed by the carboxyfluorescein diacetate succinimidyl ester (CFSE) dilution method [32]. PBMC were first labelled with CFSE according to the manufacturer's protocol (Molecular Probes, Eugene, OR, USA), then resuspended in culture medium, and 1 × 10⁶ cells in 100 μl were transferred to individual wells of a 96-well cell culture plate. Duplicate wells were stimulated as described above. Cultures were incubated at 37°C for 5 days; PBMC from each well were resuspended by pipetting up and down and then transferred to FACS tubes. A 2-laser flow cytometer (Beckman Coulter, Mississauga, ON, Canada) was used to determine the percentage of proliferated cells, and the data were analyzed with CXP Software (Beckman Coulter). The percent proliferation value for antigen-stimulated cells was divided by that for unstimulated controls to obtain the stimulation index (SI).

Statistics

Data from multiple time points were analyzed by 2-way ANOVA and Bonferroni multiple comparisons post test, using GraphPad Prism version 5. Differences between pairs of data collected at a single time point were analyzed by Student's t-test. Any p<0.05 was considered statistically significant.
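The stimulation index defined above is simply the ratio of percent proliferation in stimulated versus unstimulated wells. A quick sketch with invented duplicate-well readouts:

```python
import numpy as np

# Hypothetical flow-cytometry readouts: percent CFSE-diluted (proliferated)
# PBMC in duplicate antigen-stimulated vs. unstimulated wells per chicken.
stimulated   = np.array([[14.2, 15.1], [9.8, 10.4], [12.0, 11.3], [8.1, 8.8]])
unstimulated = np.array([[4.0, 4.4], [3.1, 3.0], [3.8, 4.1], [2.9, 3.2]])

# SI = mean % proliferation (stimulated) / mean % proliferation (unstimulated)
si = stimulated.mean(axis=1) / unstimulated.mean(axis=1)
print(si.round(2))
```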
Chickens infected with A/goose/AB/223/2005 H1N1 or A/WBS/MB/325/2006 H1N2 develop no clinical disease despite virus replication and shedding

All chickens infected with H1N1 or H1N2 avian influenza shed virus beginning on DPI 3 as detected by quantitative real-time RT-PCR (Figure 1). Virus shedding peaked on DPI 7 and was undetectable in most chickens at DPI 20. There was significantly more virus in cloacal than in oral swabs at DPI 7 and 10 (p<0.05) for both H1N1 and H1N2. However, despite the replication and shedding of virus, all H1N1- and H1N2-infected chickens showed no signs of disease throughout the 21 days of observation. The kinetics and duration of shedding were very similar for both H1N1 and H1N2 infections.

Chickens seroconverted following A/goose/AB/223/2005 H1N1 and A/WBS/MB/325/2006 H1N2 infection

Serum samples collected from all chickens prior to H1N1 or H1N2 infection had no HI or VN antibodies specific for H1N1 or H1N2. However, at 21 DPI all H1N1-infected chickens had elicited H1-specific HI (mean titre 338, standard deviation 239) and VN (mean titre 840, standard deviation 436) antibodies, and all H1N2-infected chickens had H1-specific HI (mean titre 188, standard deviation 181) and VN (mean titre 920, standard deviation 399) antibodies. In addition, all the serum samples at 21 DPI from H1N1- or H1N2-infected chickens tested positive for influenza A NP antibodies, with greater than 95% inhibition in the cELISA, confirming influenza infection. All H1N1-infected chickens at 21 DPI developed N1-specific antibodies detected with the N1 neuraminidase inhibition assay (mean titre 212, standard deviation 74). Chickens infected with the N2 subtype did not react in the N1 neuraminidase inhibition assay. In contrast, no heterologous antibody activity against H5N1 was detected in sera from H1N1- or H1N2-infected chickens using VN assays. Although there were no neutralizing antibodies specific for H5N1 in chicken sera at 21 DPI following either H1N1 or H1N2 infection, these antibodies were able to significantly decrease H5N1 virus plaque size compared to negative sera (Figure 2). There was no significant difference in H5N1 plaque size when sera from H1N1- and H1N2-infected chickens were compared against each other. However, when compared against negative control sera, antibodies from H1N1-infected chickens significantly reduced H5N1 plaque size at all dilutions tested (p<0.05). Similarly, antibodies from H1N2-infected chickens significantly reduced H5N1 plaque size at 1/10 and 1/50 dilutions (p<0.05) compared to negative control sera.

Infection of chickens with A/goose/AB/223/2005 H1N1 induces heterologous cell mediated immune responses to H5N1

Heterologous cell mediated responses were assessed by antigen-induced IFN-γ secretion and PBMC proliferation. PBMC from LPAI H1N1-infected chickens proliferated more than PBMC from uninfected controls following stimulation with H5N1 antigen (Figure 3A). Similarly, stimulation of PBMC from LPAI H1N1-infected chickens with inactivated H5N1 antigen induced close to 50% higher IFN-γ secretion compared to PBMC from uninfected controls (Figure 3B). The higher IFN-γ and proliferative responses in LPAI H1N1-infected chickens were indicative of a heterologous recall response.

Prior infection with LPAI protects chickens against HPAI H5N1

Following H5N1 challenge, oral and cloacal swabs were collected on days 3, 5, 7, 10, and 14 and evaluated for influenza viral RNA using qRT-PCR. When A/goose/AB/223/2005 H1N1 pre-infected and control chickens were co-housed and all challenged with H5N1, the control chickens shed higher levels of virus in both oral and cloacal swabs at 3 DPC (Figure 4A and B). The difference in virus shedding between the 2 groups was statistically significant for 3 DPC cloacal swabs (p<0.05). In H1N1 pre-infected chickens, H5N1 shedding peaked at 3 DPC and decreased with time. Control chickens started to develop clinical signs of H5N1 disease at 2 DPC (Figure 4C). In contrast, the majority of chickens pre-infected with H1N1 did not develop clinical signs of disease until 7 DPC, when 3 chickens developed clinical signs. The difference in clinical score between the 2 groups reached statistical significance on 3 and 4 DPC (p<0.05). All control chickens were dead by 4 DPC, but all H1N1 pre-infected chickens survived H5N1 challenge until 8 DPC, when 2 of 9 died (Figure 4D).
Similarly, when A/WBS/MB/325/2006 H1N2 pre-infected and control chickens were co-housed and all challenged with H5N1, the control chickens shed slightly higher levels of virus in both oral and cloacal swabs at 3 DPC (Figure 4E and F). Likewise, control chickens started to show clinical signs of H5N1 disease at 2 DPC (Figure 4G), but only 3 of 10 H1N2 pre-infected chickens developed clinical signs, starting at 4 DPC (Figure 4G). The difference in clinical score between the 2 groups reached statistical significance on 3 and 4 DPC (p<0.05). All control chickens were dead by 4 DPC, while only 3 of 10 H1N2 pre-infected chickens died of H5N1, by 7 DPC (Figure 4H).

Prior infection with LPAI does not prevent H5N1 transmission to contact control chickens following challenge

To evaluate whether prior infection with LPAI would prevent transmission of H5N1 to contact control chickens following challenge, LPAI pre-infected chickens were challenged with H5N1, and contact control chickens were placed in the rooms the following day. H1N1 pre-infected chickens had peak viral shedding at 3 DPC, which decreased with time (Figure 5A and B). Contact control chickens also shed virus starting at 3 DPC (Figure 5A and B), indicating that the pre-infected chickens shed enough virus to infect contact birds. However, virus shedding in the contact controls and the H1N1 pre-infected chickens was approximately the same at all DPC. None of the H1N1 pre-infected chickens showed any clinical signs of H5N1, while the contact control chickens started to show clinical signs of disease on 13 DPC (Figure 5C). The difference in clinical score between the 2 groups was statistically significant at DPC 16, 19, and 20 (p<0.05). However, 2 contact control chickens did not develop any clinical disease at all. All H1N1 pre-infected chickens survived H5N1 challenge. On the other hand, 1 contact control chicken died on DPC 14 and 2 others on DPC 17 (Figure 5D). Similarly, H1N2 pre-infected chickens had peak viral shedding at 3 DPC, which decreased with time (Figure 5E and F). Contact control chickens also shed virus starting at 3 DPC (Figure 5E), at approximately the same levels as H1N2 pre-infected chickens in cloacal swabs at all DPCs. However, H5N1 shedding in oral swabs from contact controls was slightly higher than in H1N2 pre-infected chickens at DPC 5, 7, and 10 (Figure 5F). In addition, H1N2 pre-infected chickens did not develop clinical disease, while contact control chickens started to develop clinical signs of H5N1 on 4 DPC (Figure 5G). The difference in clinical score between the 2 groups was statistically significant at DPC 5, 6, and 7 (p<0.05). All H1N2 pre-infected chickens survived H5N1 challenge (Figure 5H). However, in the contact control group, 1 chicken died on DPC 6 and another on DPC 7 (Figure 5H). Once again, 2 contact control chickens survived H5N1 challenge without showing any clinical disease.

Pathology of chickens that died from H5N1 challenge

Histological lesions consistent with HPAI were observed in chickens that developed clinical disease. However, the organs affected and the extent of the lesions were variable between the individual birds examined. Common lesions included interstitial pneumonia, splenic necrosis and macrophage hyperplasia, lymphohistiocytic and necrotizing myocarditis, and multifocal pancreatic necrosis. Influenza viral antigen was most consistently detected in lung, spleen, heart, skeletal muscle, kidney, proventriculus, and pancreas by immunohistochemistry.
Seroconversion in chickens following A/chicken/Vietnam/14/2005 H5N1 challenge

Twenty-one days following H5N1 challenge, sera collected from all surviving chickens were assessed for H5-specific antibodies using an H5 competitive ELISA, H5N1 virus neutralization, and HI. The majority of chickens from the H1N1 and H1N2 pre-infected groups, as well as surviving contact control chickens, developed H5-specific antibodies following challenge (Figure 6).

Discussion

In this study, the influence of prior infection with two different heterologous LPAI viruses on subsequent challenge with HPAI H5N1 was evaluated. In the first experiment, naïve control and pre-infected chickens were housed together and both challenged, to allow for amplification of H5N1 following the initial challenge. Approximately 70-80% of chickens previously infected with LPAI H1N1 or H1N2 survived H5N1 challenge, whereas all naïve control chickens died following acute disease. In the second experiment, only chickens previously infected with LPAI H1N1 or H1N2 were challenged with H5N1, and then naïve controls were placed in the room the day following challenge to evaluate whether transmission of H5N1 could occur from pre-infected chickens to naïve control birds. In this second scenario, the survival rate of pre-infected chickens was improved, since none developed clinical disease or died from H5N1 challenge. In contrast, 40-60% of the contact control chickens died from H5N1 challenge, demonstrating that the pre-infected chickens transmitted virus to the control birds. Despite transmission of H5N1 from pre-infected to naïve birds, there was a delay in the onset of clinical signs and a reduction in virus shedding, clinical disease, and mortality. This was expected, since the amount of H5N1 virus shed by chickens previously infected with either H1N1 or H1N2 was two orders of magnitude lower than the H5N1 challenge dose used. This study demonstrates that previous infection with LPAI H1N1 or H1N2 was able to partially protect chickens against HPAI H5N1 challenge. This heterologous protection is not mediated by neutralizing antibodies, since antibodies to H5 were not detectable prior to challenge. However, antibodies developed against both H1N1 and H1N2 infection in chickens were able to decrease H5N1 virus plaque size compared to naïve serum. This inhibition of virus spread in the absence of specific neutralizing activity indicates that non-neutralizing antibodies may play a role in the clearance of H5N1. Antibodies against the NA protein can prevent the release of new virus particles from infected cells, thereby restricting virus replication [21,33]. Therefore, the anti-N1 antibodies in chickens pre-infected with H1N1 could have potentially restricted H5N1 replication, consequently reducing disease and mortality. The H5N1 plaque size reduction was slightly better using H1N1 sera compared to H1N2 sera; however, this difference was not statistically significant. It is likely that the N1 component of the H1N1 sera is responsible for this observation, as sera from H1N1-infected chickens developed N1-specific antibodies determined using a neuraminidase inhibition assay. Similarly, this could have been the case in H1N2-infected chickens if anti-N2 antibodies were to cross-react with the N1 of H5N1. However, this is very unlikely, given that there was only 43% homology between the NA proteins of H1N2 and H5N1, with a 20 amino acid deletion in the stalk region of the N1 from H5N1.
Together, this indicates that N1 antibodies as well as other antibodies may be responsible for the H5N1 plaque size reduction. Therefore, additional mechanisms were likely involved in the protection against H5N1. Antibodies against the matrix protein 2 (M2) would prevent the uncoating of influenza virus during infection and thus restrict virus replication. This has been demonstrated using monoclonal antibodies against influenza M2, which reduced influenza virus plaque size without affecting the number of plaques [31]. In addition, this antibody reduced the replication of influenza virus in mouse lungs, indicating that antibodies against M2 can provide protection in vivo [34]. However, heterologous cell mediated immunity against influenza generated at mucosal sites is likely another mechanism responsible for protection. Prior infection of chickens with H9N2 was able to protect them from H5N1, and this protection was cell mediated, because adoptive transfer of cytotoxic CD8 T cells from H9N2-primed chickens protected naïve inbred chickens from H5N1 challenge [26]. A similar mechanism is responsible for protection against H5N1 in mice previously infected with H9N2 [35] and in pigs previously infected with swine influenza H1N1 [21]. The highly conserved internal proteins (PB2, PB1, PA, and NP) of influenza virus are the most likely source of cross-reactive epitopes recognised by cytotoxic T cells [36,37]. The immunity generated by prior infection with LPAI H1N1 or H1N2 was not able to prevent virus replication; however, it did decrease shedding of HPAI H5N1 in oral and cloacal swabs. Reduced virus shedding is expected in this situation, since infection cannot be completely prevented in the absence of neutralizing antibodies. Prior infection with H9N2 was able to protect chickens from H5N1, but surviving birds shed low amounts of virus in feces [26]. Similarly, when Canada geese are pre-exposed to LPAI, they develop partial protection against HPAI [22]. In addition, in wood ducks, prior infection with H1N1 influenza provided partial protection against H5N1 [23]. Chickens that survived H5N1 challenge generated antibodies specific for H5. This seroconversion for H5 allows for the differentiation of infected and vaccinated animals (DIVA), whereby H1-specific antibodies are only present in chickens pre-infected with H1N1 or H1N2. Vaccination of chickens using LPAI to protect against HPAI H5, with diagnostic tests for DIVA, could be used for improved control of HPAI in endemic countries. In addition, it could be most effective when all chickens are vaccinated to generate herd immunity, as 100% protection was demonstrated when only H1N1 or H1N2 pre-infected chickens were challenged, compared to 70-80% protection when pre-infected chickens were mixed with naïve chickens and all challenged with H5N1. An ideal vaccine should prevent virus infection and shedding via neutralizing antibodies. Therefore, the partial protection provided by LPAI against H5N1 is potentially dangerous, as it can mask the clinical manifestation of H5N1, allowing the virus to spread. Nevertheless, a reduction in virus shedding, severity of illness, and deaths due to IAV might be acceptable outcomes during a pandemic. This can then be combined with a DIVA test to identify and eliminate H5-positive chickens. The duration of immunity is unknown; however, it is likely that it would last for the usually short lifetime of a chicken.
A major question remains whether a similar strategy, using a low pathogenic influenza virus that does not cause disease in humans, would elicit protective immunity against H5N1. Epidemiological data from Vietnam suggest that previous, and probably repeated, infection of humans by influenza makes them less likely to die from H5N1 infection. This was based largely on the finding that humans aged ≤16 years were more likely to die from H5N1 infection than older people [38]. In humans, protection against influenza H3N2 or H1N1 challenge correlated with pre-existing influenza-specific CD4+ T cells [39]. It has been previously demonstrated that prior infection with H3N2 in ferrets can protect against H5N1, illustrating that protection against H5N1 is possible in other animal species. Furthermore, using a temperature-sensitive attenuated H1N1 influenza virus, it was recently demonstrated that protection against H5N1 could be achieved in mice [40]. It is possible that the protective immunity generated by replicating low pathogenic influenza viruses may differ between animal species, including humans. Chickens are one of the most susceptible species to HPAI H5N1, with mortalities approaching 100%. We have demonstrated in chickens that protective immunity can be generated in the absence of H5-specific neutralizing antibodies. We are currently investigating whether live attenuated influenza vaccines can elicit protection similar to that of LPAI against HPAI in chickens. The development of several live attenuated influenza viruses covering the HA subtypes that have pandemic potential would enhance influenza pandemic preparedness. Such vaccines can be manufactured and stockpiled prior to the emergence of a new influenza virus and then immediately tested for efficacy, thereby avoiding the need to develop and produce a new vaccine at the beginning of an outbreak.
Design of Battery Charge Control System on Hybrid Power Plants

A battery charge control system has been built for a hybrid power plant. The voltage sources are a 500 W horizontal-axis wind turbine and a 200 Wp monocrystalline solar cell, used to charge 12 V 35 Ah batteries. The aim of this research is to optimize the battery charging process using the potential energy sources from wind and sun. Control of the hybrid power plant is performed by adjusting the voltage or electric current according to the needs of battery charging. The control system was built from several software and hardware components; the main ones are an ATmega16 microcontroller and voltage sensors. The control system was tested by monitoring the voltage and current during the control process. The test results show that the system works properly in controlling the input voltage from the sources and the charging voltage of the battery.

Introduction

Wind turbines are a potential alternative to meet energy needs, especially in archipelagic areas with sufficient wind potential. Solar cells, likewise, are an environmentally friendly and promising energy source for the future: the conversion process produces no pollution, and the energy source, sunlight, is widely provided by nature [1,2]. A hybrid energy system combines multiple energy sources to supply electrical energy to the load, maximizing energy yield at low cost with excellent power quality and sustainability [3]. Existing control systems control battery charging until the battery is full; once that condition is reached, they cut the source energy from the battery [3,4]. A control system that addresses this limitation is therefore needed. One efficient approach is an automatic charge controller that optimizes the energy potential from wind and solar sources for battery charging.

Methodology

The design of the battery charging control system for the hybrid power plant consists of three main parts: the input variables, the control system, and the output variables. The input variables from the wind turbine and solar cell are monitored using voltage sensors, while the control system is implemented with several hardware circuits, such as the regulator circuit, charger circuit, and voltage sensor circuit. The output consists of 5 relays, controlled by the ATmega16 microcontroller, for the battery charging process and for switching the load ON/OFF. The configuration of the control system is shown as a block diagram in the figure below.

Results and Discussion

The battery charging control system was built on the hybrid power plant in three steps: hardware design, software design, and equipment testing. The hardware design consists of the one-source and two-source terminal designs, voltage sensor design, charger circuit design, microcontroller circuit design, regulator design, and control panel box design. The software was developed with CodeVision AVR and written in the C language. The control system test was conducted by collecting monitoring data and observing the operation of the control system during charging.

Fig. 2. Hybrid Control System.

Control system performance for battery charging was tested by monitoring and controlling the built battery charging system.
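As one illustration of the voltage-sensing path described above: the ATmega16's ADC is 10-bit, so a monitored source voltage can be recovered from a raw reading once the sensor's scaling is known. The sketch below assumes a 5 V ADC reference and a resistive-divider scaling; both values are assumptions for illustration, since the paper does not give the sensor circuit values.

ADC_BITS = 10          # ATmega16 ADC resolution
V_REF = 5.0            # assumed ADC reference voltage (V)
DIVIDER_RATIO = 5.0    # assumed resistive divider, e.g. 40k:10k, ~25 V full scale

def adc_to_volts(raw: int) -> float:
    """Convert a raw ADC count (0..1023) to the measured source voltage."""
    v_pin = raw / (2**ADC_BITS - 1) * V_REF   # voltage at the ADC pin
    return v_pin * DIVIDER_RATIO              # undo the divider scaling

print(round(adc_to_volts(812), 2))  # ~19.8 V, in the range reported for the solar input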
The test was performed on all of the control system components built, including hardware and software. The system monitors the input voltages of the two sources, the solar cell and the wind turbine, along with the operation of the control system. The monitoring test covers the source voltage, the load voltage, the voltage of each battery, and the current, all read by the sensors. The performance of the control software was assessed by monitoring the control actions triggered by interrupts according to the specified program. The battery charging control test in the figure above shows that voltage and current change during the control process. In this test, the voltage delivered from the solar cell source ranged from 19.79 V at 9:00 a.m. to 19.30 V at 1:00 p.m. The solar cell test graph shows that the voltage distribution rises during the day from 9:00 a.m. to 2:30 p.m. due to solar heating, while the supply voltage of the constant power source used is 19.00 V. The monitored source voltage changes significantly during the battery charging and loading process once connected to the control circuit, and the solar cell source voltage exhibits a voltage drop across the circuit. The monitoring and control results also show that when the source voltage condition is met, the charging control process takes place. For two 12 V 35 Ah batteries, both reconditioned to empty, charging of battery one ran from 9:00 a.m. to 4:05 p.m. After the first battery was full, the second battery was automatically charged, and the first battery was connected to the load. The battery voltage displayed on the LCD differs slightly from the directly measured value.
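The charge-sequencing behaviour reported here (charge battery one until full, then switch charging to battery two and connect battery one to the load) can be captured in a few lines of control logic. A minimal sketch follows; the full-charge cut-off and minimum-source thresholds are assumptions, since the paper does not state them.

FULL_V = 14.4      # assumed full-charge cut-off for a 12 V battery
MIN_SRC_V = 15.0   # assumed minimum source voltage needed to charge

def control_step(v_source: float, v_bat1: float, v_bat2: float) -> dict:
    """One control iteration: return the desired relay states."""
    relays = {"charge_bat1": False, "charge_bat2": False, "bat1_to_load": False}
    if v_source < MIN_SRC_V:
        return relays                  # source too weak: stop charging
    if v_bat1 < FULL_V:
        relays["charge_bat1"] = True   # stage 1: charge battery 1 first
    else:
        relays["charge_bat2"] = True   # stage 2: battery 1 full, charge battery 2 ...
        relays["bat1_to_load"] = True  # ... and feed the load from battery 1
    return relays

print(control_step(v_source=19.3, v_bat1=12.1, v_bat2=12.6))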
Brassica napus Roots Use Different Strategies to Respond to Warm Temperatures

Elevated growth temperatures are negatively affecting crop productivity by increasing yield losses. The modulation of root traits associated with improved response to rising temperatures is a promising approach to generate new varieties better suited to face the environmental constraints caused by climate change. In this study, we identified several Brassica napus root traits altered in response to warm ambient temperatures. Different combinations of changes in specific root traits result in an extended and deeper root system. This overall root growth expansion facilitates root response by maximizing root–soil surface interaction and increasing roots' ability to explore extended soil areas. We associated these traits with coordinated cellular events, including changes in cell division and elongation rates that drive root growth increases triggered by warm temperatures. Comparative transcriptomic analysis revealed the main genetic determinants of these root system architecture (RSA) changes and uncovered the necessity of a tight regulation of the heat-shock stress response for adjusting root growth to warm temperatures. Our work provides a phenotypic, cellular, and genetic framework of root response to warming temperatures that will help to harness root response mechanisms for crop yield improvement under the future climatic scenario.

Introduction

The effects of climate change are threatening crop productivity across the globe. The increasing incidence of heat waves, drought, and other extreme weather events experienced worldwide is negatively affecting agricultural production [1,2]. Feeding the world's population will require a significant rise in food production against the backdrop of these climatic constraints [3,4]. In view of a future increase in food insecurity, agriculture needs to find new ways to adapt crops to adverse environmental changes [5][6][7]. One of the major alterations triggered by climate change is a global trend of warmer temperatures [6,8]. Elevated ambient temperatures have profound effects on plant physiology and development, leading to substantial declines in crop yield and quality [9,10]. Although crops are heterogeneously affected, higher temperatures generally shorten crop growth periods, affect photosynthetic rates, reduce plant shoot and root biomass, promote fruit senescence, decrease seed numbers and sizes, and alter seed composition [11][12][13][14]. Even though the predicted increase of a few degrees in ambient temperature can have profound effects on crop growth and yield, information on how crops adapt to warmer temperatures is still scarce [15]. Roots are the main organs that control nutrient and water uptake, and changes in soil temperatures alter these processes, limiting crop growth. Root systems are also highly plastic in response to environmental conditions and can modulate different physiological and morphological traits to adapt their architecture and functionality to disadvantageous conditions.

Table 1. Changes in major root traits of Brassica napus varieties result in extended and deeper root systems in response to warm temperatures. Comparative values of mean and coefficient of variation of major root traits classified according to their categories (extent, size, distribution, and shape-related traits) show significant changes in most traits between 21 °C and 29 °C in all Brassica napus varieties analyzed.
Two-way ANOVA analysis of the contributions of genotype (G), temperature (T), and genotype × temperature interaction (G×T) to changes in root trait values after warm temperature treatment was also assessed. Significant differences are indicated by asterisks, * p < 0.01, ** p < 0.001, *** p < 0.0001; ns indicates non-significant differences. Ndepth (network depth, cm), Nwidth (network width, cm), ConvA (network convex area, cm²), MajA (major ellipse axis, cm), MinA (minor ellipse axis, cm), Nlength (total network length, cm), Narea (network area, cm²), Nper (network perimeter, cm), Nsurf (network surface area, cm²), Nbush (network bushiness), NLdist (network length distribution), Nsolid (network solidity, cm cm⁻²), MaxR (maximum number of roots), MedR (median number of roots), SRN (secondary root number), SRD (secondary root density, n cm⁻¹), AspR (aspect ratio, cm cm⁻¹), Nw/d (network width-to-depth ratio, cm cm⁻¹). Detailed descriptions of root traits are provided in Table S1.

Figure 1. Differential RSA changes in B. napus varieties led to an increase in soil exploration in response to warm temperature. (A) Differential values of representative root trait categories: extent (network depth (Ndepth, cm)), size (network length (Nlength, cm)), distribution (network length distribution (NLDist)), and shape (ellipse axis ratio (AspR), cm cm⁻¹) traits of a collection of 10 spring oilseed rape genotypes grown at 21 °C and 29 °C. Statistical t-test analysis, *** FDR < 0.01. (B) Biplot of principal component analysis (PCA) based on all root traits analyzed, showing the high contribution of size and shape traits to variability of SOSR genotypes. Traits and varieties are colored based on contribution to the variance. Circles highlight the three different groups based on their root response to warming. (C) Dendrogram plot of SOSR genotypes. AGNES (agglomerative nested) hierarchical clustering was used, where the Y-axis represents (dis)similarity based on Ward's minimum variance method. Color boxes highlight three main clusters according to their root trait values. (D) Representative root organization of Drakkar and Duplo Brassica napus varieties grown in a pouch-and-wick system for 7 days at 21 °C or 29 °C.
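The G, T, and G×T terms reported in Table 1 correspond to a standard two-way ANOVA decomposition (the study performed this in GraphPad Prism 6). A minimal sketch of the same test for a single trait, assuming a long-format table with placeholder file and column names:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("root_traits_long.csv")  # one row per seedling: genotype, temp, ndepth

# Fit trait ~ G + T + G:T and report the ANOVA table (type II sums of squares).
model = ols("ndepth ~ C(genotype) * C(temp)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))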
More interestingly, together with a common effect of warm temperatures, we also found that there was a significant effect of genotype on all the evaluated root traits (Table 1). In order to identify and characterize the genetic variability associated with RSA response to warming, we performed multivariate principal component analysis (PCA) and hierarchical clustering (HC). Using all previous trait measurements, we found that 69% of the variability in temperature-dependent alteration of RSA traits among genotypes could be explained by the first two components, 46% and 22.5% in PC1 and PC2, respectively (Figure 1B). Traits with higher influence in PC1 were size-related traits, such as network surface and network area, followed by network perimeter and network length. The high contribution of size traits indicates that there is high variation in these traits between the varieties, and accordingly, values ranged from 0.98 to 1.67 for the network surface or from 0.99 to 1.66 for the network area in the Drakkar and Wesway varieties, respectively (Figures 1A,B and S1B and Table S1). Likewise, shape traits also had an important contribution to PC1, with aspect ratio values from 1.37 to 0.83 in Wesway and Westar (Figure 1A). Interestingly, high values of these two traits correlated with high values of extent traits, such as the minor ellipse axis and network width in PC1, but also with negative high values of the major ellipse area or network depth in PC2. These data suggest that most varieties change the depth of their root system mainly through the growth of their primary roots (major ellipse area vs. network depth traits), but key differences in the growth of the secondary roots account for the diversity found in the width of the network (minor ellipse area vs. network width) (Figures 1A,B and S1A, Table S1). Altogether, these results imply that each variety uses different arrangements of its root traits to increase medium exploration. Consistently, when we positioned the genotypes according to their PC scores, we identified three root response groups. A first group comprised the Westar, Karat, Dux, and Line varieties, which showed low scores in all PCs but particularly in PC2 scores, network width, and minor ellipse area, thus representing a low response with no preferential top allocation of root networks. Although having similar PC1 scores, the Drakkar variety was allocated separately from this group due to the lowest scores of network depth (1.03) and major ellipse area (0.60), together with a high network width-to-depth ratio (1.15). Consequently, the growth of secondary roots was increased, but not that of the main roots. Interestingly, Drakkar had high network solidity and secondary root density values, suggesting that although their roots explore less area, plants of this variety augmented their surface of interaction with the medium through an increase in the number of secondary roots (Figures 1A,B and S1A-E, Table S1). Therefore, Drakkar represents a variety that develops wider but not deeper root networks in response to warm temperatures. A broad third group was constituted by varieties distinguished by similar PC1 and PC2 scores (Industry, Marnoo, Fido, Wesway, and Duplo) that coincide with genotypes showing moderate changes in most root traits but a very substantial increase in extent traits, major ellipse area, and network depth. Thus, this group mainly represents varieties growing significantly more extensive and deeper roots in response to warming. Accordingly, this group contains the variety Duplo, with the highest score for convex area (2.1), resulting from high network width (1.47) and length (1.84) scores. Therefore, Duplo represents a positive root response to warm temperatures by displaying a root system covering more extension due to a combination of wider and deeper roots (Figures 1A,B and S1A-E, Table S1).
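The PCA underlying Figure 1B can be reproduced with any standard implementation (the study used the FactoMineR R package on scaled data). A sketch using scikit-learn on z-scored 29 °C/21 °C trait ratios; the input file and trait matrix are placeholders:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

traits = pd.read_csv("trait_ratios.csv", index_col="genotype")  # genotypes x traits

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(traits))

# Expect roughly 46% + 22.5% explained variance on the published data.
print(pca.explained_variance_ratio_)
# Trait loadings show which traits drive each component.
print(pd.DataFrame(pca.components_.T, index=traits.columns, columns=["PC1", "PC2"]))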
Hierarchical clustering (HC) of the genotypes based on these RSA traits further supported these PCA results. Thus, the varieties were grouped in three clusters according to the prevailing combination of traits displayed to respond to warming (Figure 1C). This clustering underlies the same phenotypic categorization of the varieties obtained in the previous analysis. In summary, the variability in RSA response in B. napus roots reflects the plasticity of the root system to respond to different environmental conditions. Thus, B. napus roots seem to take advantage of this plasticity by deploying different modifications of root traits to better exploit their surrounding environment.
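The AGNES-style clustering with Ward's minimum variance method (performed in the study with the agnes command from the R cluster package) has an equivalent SciPy formulation; the input file is a placeholder:

import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

traits = pd.read_csv("trait_ratios.csv", index_col="genotype").apply(zscore)

Z = linkage(traits.values, method="ward")           # Ward's minimum variance
clusters = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 groups
print(dict(zip(traits.index, clusters)))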
Combinatory Changes in Cell Elongation and Cell Division Drive Differential Root Response to Warm Temperatures

We have shown that an increase in root growth is a common response to warm conditions in B. napus. To identify the cellular processes that are responsible for this growth, we carried out a comparative analysis of the primary root growth process between two B. napus varieties, Drakkar and Duplo, that differ significantly in their primary root response to warm temperatures (network depth 1.03 and 1.41, respectively) (Figure 1A,D and Table S1). First, we monitored the primary root growth dynamics of the two varieties in seedlings grown at 21 °C and 29 °C. Warming temperatures led to a significant increase in primary root length in Duplo, whereas no differences in length were detected in Drakkar after exposure to 29 °C compared to 21 °C (Figure 2A). Moreover, this differential root growth response was also maintained when B. napus seedlings were grown in perlite pots and exposed to warm temperatures for longer time periods. After 15 days, the root length ratio between 29 °C and 21 °C was similar to the values we observed at 7 days, 0.81 in Drakkar and 1.36 in Duplo (Figure 2B), suggesting that the root growth response triggered by warm temperatures might be maintained through the whole seedling growth.

Root length growth is the result of combined cell proliferation in the meristem and rapid longitudinal cell expansion in the elongation zone. Hence, we measured these cell parameters in Drakkar and Duplo roots after 7 days exposed to warm (29 °C) and control (21 °C) temperature conditions. First, we measured root meristem parameters. Confocal microscopy on mPS-PI-stained roots revealed that both varieties displayed shorter root meristems at 29 °C than at 21 °C, at 776 µm vs. 612 µm in Drakkar and 731 µm vs. 609 µm in Duplo (Figures 3A and 4B). This decreased meristem size was correlated with a reduced number of meristematic cells in both genotypes, 75 vs. 58 and 73 vs. 56 in Drakkar and Duplo, respectively. However, when we measured the average meristematic cell length, we found that it was higher at 29 °C than at 21 °C in Duplo, but it remained similar in Drakkar at both temperatures (Figure 3A). A cell size frequency distribution analysis revealed that in Duplo, warming resulted in a reduction in the number of shorter cells and a consistent increase in the frequencies of longer-sized cells, from 8 µm (24.08% at 21 °C to 18.3% at 29 °C) to 12 µm (16.9% at 21 °C to 22.4% at 29 °C) (Figure 3B). In contrast, Drakkar displayed a similar distribution of cell length frequency (from 22.6% to 22.5% or from 19.7% to 20% for cell length frequencies corresponding to 8 or 12 µm, respectively) at 21 °C and 29 °C.

The root meristem is divided into the apical and basal meristem according to the upward position from the root tip. Cells in the apical meristem are continuously dividing and expanding at a constant rate, whereas cells at the basal meristem also divide, but rapidly increase their size to exit the meristem through the transition zone. Since we found differences in average meristem cell length in Duplo, we were interested in mapping where these differences arise in the meristem. We found no significant differences in cell length, cell number, or meristem length in the basal meristem between 21 °C- and 29 °C-treated roots in any of the genotypes (Figure 3C). However, when we recorded the average cell length of the meristematic cells according to their position in the root meristem, from the initials close to the quiescent center (QC) up to the meristem transition zone, we found that cells sharply increased their cell length earlier at 29 °C, at 411 µm compared to 561 µm from the QC in Drakkar and at 385 µm compared to 517 µm at 21 °C in Duplo, suggesting that the boundary between the apical and basal meristem starts closer to the QC when roots were grown at warm temperatures (Figure 3D). Consistently, the meristem length and total number of cells of the apical meristem were also reduced in both genotypes (58 vs. 42 in Drakkar and 57 vs. 38 in Duplo), and the average cell length increased significantly only in Duplo (Figure 3C).
Altogether, these data indicate that warming temperatures provoke a shortening of the root meristem caused by a reduction in the apical meristem size.

Cell production in the meristem mainly relies on two factors, the number of dividing cells and their rate of cell division. Since the meristems of both genotypes contained fewer meristematic cells in response to warm temperatures, we analyzed whether there were changes in their rate of cell division that could account for the differential root growth response of the two varieties. We quantified the differences in cell division rates in response to warm temperatures by scoring double-labeled EdU and DAPI nuclei using confocal microscopy. First, we monitored warmth-induced changes in DNA replication by measuring EdU incorporation rates, which represent the portion of meristematic cells in the S-phase among all meristematic cells (EdU-labeled nuclei/DAPI-labeled nuclei). As shown in Figure 3E,F, EdU incorporation ratios were not significantly different between roots grown at 21 °C and 29 °C in either of the two varieties, indicating that warming temperatures did not strongly affect DNA replication. Then, we measured the relative number of mitotic cells (mitotic index) by scoring M-phase EdU-labeled nuclei (number of EdU-labeled M-phase nuclei/total EdU-labeled nuclei). We found a significant increase in mitotic figures in Duplo roots grown at 29 °C compared to 21 °C, with the EdU-related mitotic index rising from 2.41 at 21 °C to 5.49 at 29 °C, but we did not detect any difference in Drakkar (Figure 3G). Altogether, these results suggest that Duplo meristems compensate for the reduction in meristematic cell numbers in response to warm temperatures by increasing their cell division rate.
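The two proliferation indices defined above are simple ratios of nuclei counts. A sketch of the arithmetic, assuming per-root counts exported from the image analysis (the function and field names are placeholders):

def edu_incorporation_ratio(n_edu: int, n_dapi: int) -> float:
    """Fraction of meristematic cells in S-phase: EdU+ nuclei / DAPI+ nuclei."""
    return n_edu / n_dapi

def mitotic_index(n_edu_mphase: int, n_edu_total: int) -> float:
    """EdU-related mitotic index: M-phase EdU+ nuclei / all EdU+ nuclei, in %."""
    return 100 * n_edu_mphase / n_edu_total

# Example roughly consistent with the Duplo values reported at 29 °C:
print(mitotic_index(n_edu_mphase=11, n_edu_total=200))  # 5.5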
Next, we quantified the number and length of the cells in the root elongation zone (EZ), where cells no longer divide but expand rapidly. When we examined the elongation zones of mPS-PI-stained roots grown at 29 °C and compared these with roots grown at 21 °C, we observed a significant decrease of 44% in the length of the Drakkar EZ, whereas no change was observed in Duplo (Figure 4A,B). The differences in EZ length correlated with a decrease in the number of elongating cells, from 33 to 24 cells (27%), and a reduction in average cell length, from 92 to 76 µm (18%), in Drakkar. By contrast, in Duplo, a significant increase in average cell length, from 72 to 83 µm (14%), compensates for the decrease in cell number, maintaining the EZ length (Figure 4A,B). Accordingly, we observed a clear increase in the relative frequencies of longer cells in the Duplo EZ compared to the Drakkar EZ (Figure 4C). The analysis of the distribution of cell size relative to cell position in the EZ of both varieties showed that, as expected, at both temperatures, cell length increased as cells were reaching the differentiation zone. However, significant differences among the genotypes were revealed at 29 °C. Cell length increased more sharply along the root EZ in Duplo than in Drakkar (Figure 3D), suggesting that cells elongated more quickly in the Duplo EZ compared to the Drakkar EZ in response to warm temperatures. Consequently, the first cell in the differentiation zone was significantly bigger in Duplo at 29 °C than in Drakkar, further evidencing the differences between both genotypes (Figure 4E). These results showed that warm temperatures enhance cell elongation in the Duplo EZ but not in the Drakkar EZ.

Our data suggest that B. napus roots might combine different cellular changes, such as increasing cell proliferation in the meristem followed by enhanced cell elongation, to integrate the root growth response to warm temperatures. However, this combinatorial strategy must be differentially applied between varieties, since the spatial root growth response varies among genotypes. Thus, when we analyzed the cellular response of Marnoo, another genotype that increased the length of its primary root in response to warming, although to a lesser extent than Duplo (Figure S2A), we found that the meristem length, cell number, and average cell length did not display significant changes when grown at 29 °C (Figure S2B). In contrast, in the EZ, average cell length was significantly increased compared to 21 °C (Figure S2C), suggesting that cell elongation is the main cellular mechanism employed in this variety to respond to warming. These results further support the hypothesis that B. napus deploys different cellular changes to respond to adverse temperatures.

Balanced Regulation of Transcriptional Temperature Response Is Crucial to Adjusting Root Growth to Warming Conditions

We have shown that B. napus roots modify their developmental program to adjust their growth to warming temperatures. To understand the genetic regulation of this response, we compared the root patterns of gene expression of the B. napus varieties Drakkar and Duplo using transcriptomic analysis. Correlation analysis of the differences in gene expression with the differences in primary root elongation observed in these two different genetic backgrounds will allow us to identify the major transcriptomic changes that define the primary root response to warming temperatures, as well as some of the gene regulatory networks that may contribute to the differences in RSA detected in these varieties. With this aim, we carried out RNA sequencing (RNA-seq) of Drakkar and Duplo root tips grown at 21 °C and 29 °C for 7 days. Differential gene expression analysis of these data identified 930 differentially expressed genes (DEGs) in Duplo and nearly three times more genes, 2605, in Drakkar (DEGs, adjusted p-value < 0.05 and |log2FC| > 1) (Figure 5A,B and Table S2). Interestingly, 77.7% of the Drakkar DEGs were upregulated, whereas 58.5% were induced in Duplo, suggesting that an active transcriptional reprogramming is triggered in response to warm temperatures. Comparative analysis of these DEGs showed a low correlation between the gene expression responses to warming temperatures of the two varieties (Figure S3A). Only 12.2% of Drakkar DEGs were shared with Duplo DEGs, whereas 34.4% of Duplo DEGs overlapped with Drakkar DEGs (Figure 5C), indicating that there are significant differences in the temperature-dependent transcriptome reprogramming between the two varieties.
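The DEG cut-off just described (adjusted p-value < 0.05 and |log2FC| > 1) is a simple filter on the DESeq2 results table. A sketch with placeholder file and column names:

import pandas as pd

res = pd.read_csv("deseq2_results_drakkar.csv")  # columns: gene, log2FoldChange, padj

degs = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
up = (degs["log2FoldChange"] > 0).mean() * 100   # % upregulated, cf. 77.7% in Drakkar
print(len(degs), f"DEGs, {up:.1f}% upregulated")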
Hierarchical clustering of all the genes with altered expression at 29 °C in both varieties revealed six major patterns of transcriptional response to warmer temperature (Figure 5B and Table S3). Cluster 1 (1105 genes) and Cluster 5 (422 genes) contained genes predominantly induced or repressed in both varieties, respectively. Gene ontology enrichment analysis showed that Cluster 1 was enriched in genes related with the response to oxidative stress (BnPRX71), fatty acids (BnACX2), and sugars (BnSUC7), as well as ethylene (ET) biosynthesis (BnSAM1, BnACO1) and splicing machinery (BnSR30), whereas Cluster 5 contained genes related to metabolic processes and beta-glucosidase activity (Figure 5C and Table S3). These clusters represent the common patterns of gene expression between genotypes, and therefore, either the transcriptional activation (Cluster 1) or repression (Cluster 5) of these groups of genes underlies their shared temperature-induced response, as confirmed by qPCR (Figures 5B,C, 6 and S3B,C). Remarkably, the biological processes that were overrepresented, such as oxidative, lipid, and splicing pathways, have been defined as part of the common mechanisms of temperature sensing and signaling in plants [20,40,41].

Figure 6. Transcriptional dynamics are altered in response to warm temperatures in B. napus roots. Gene expression levels of several genes representative of the main GOs enriched in three of the most significant transcriptional response patterns identified by hierarchical clustering analysis of Drakkar (DK) and Duplo (DP) root tips grown at 21 °C compared to 29 °C. As measured using quantitative RT-PCR (qPCR), relative gene expression values of genes from Cluster 1 were related with the response to oxidative stress (BnPRX71), fatty acids (BnACX2), cell wall (BnXTH24), as well as ethylene biosynthesis (BnSAM1) and cytokinin catabolism (BnCKX1), and showed a similar pattern of expression in both varieties. Meanwhile, relative gene expression values of genes from Cluster 2 were related to the response to hydrogen peroxide (BnCAT3) and high light intensity (BnGols1), together with core heat-shock response (HSR) genes such as chaperones (BnP23) and heat-shock proteins (HSPs) (BnHSA32 and BnHSP15.7). Cluster 4 was related with cell wall biogenesis and organization (BnXTR6 and BnEXPA8), as well as hormonal regulation, such as ABA signalling (BnPYL5), brassinosteroid (BnSTE1) and gibberellin (BnGA20ox2) metabolism. All of these confirmed the differential expression patterns between varieties in response to warming. All the experiments were performed using three biological and three technical replicates. Expression values were normalized with those of BnACT7 (Chen et al., 2010). The second and third biological replicates are shown in Figure S3C.
The remaining clusters comprised genes showing differential expression between the two genotypes and are therefore more likely to contain the genetic determinants of the variability in root response to warming conditions. Clusters 3, 4, and 6 represented genes that had an opposite expression in Duplo compared to Drakkar. We found genes that were repressed in Drakkar but less repressed or induced in Duplo (Cluster 3), induced in Duplo but less induced in Drakkar (Cluster 4), or repressed in Duplo but induced or less repressed in Drakkar (Cluster 6). Regarding the type of genes contained in these clusters, Cluster 3 (260 genes) was enriched in genes related to nitrate assimilation, metabolism, and transport (BnNRT2.1) and to cell wall-related processes. Cluster 4 (352 genes) included genes involved in peroxidase activity, cell wall biogenesis and organization (BnXTR6, BnEXPA8 and BnXYL4), and abscisic acid (ABA) signaling (BnPYL4 and BnPYL5). Thus, in Duplo, the transcription of several cell growth regulators is activated or repressed to enhance root growth, whereas in Drakkar, this response is not fully accomplished (Figures 5B and S3B). Finally, Cluster 6 (153 genes) encompassed genes related with either light-signaling pathways or negative regulation of photomorphogenesis (BnPIL2) (Figures 5C and S3B). Comparison and validation by qPCR of the differential expression patterns of key genes belonging to these biological processes in each cluster confirmed that B. napus roots specifically modify their transcriptional programs to adjust their growth to respond to warming temperatures by coordinating the activation of temperature sensing/signaling and cell growth regulatory pathways (Figures 6 and S3B,C). Lastly, Cluster 2 (922 genes) was the largest differential cluster, representing more than half of the genes (54.66%) with opposing expression patterns.
This cluster comprised genes that are highly induced in Drakkar but less induced or repressed in Duplo, and it was enriched in genes related to the response to hydrogen peroxide (BnCAT3) and high light intensity (BnGols1). However, the most represented GO of this cluster, corresponding to 56 genes, was "response to heat", and this included several core heat-shock response (HSR) genes, such as heat-shock proteins (HSPs) (BnHSA32, BnHSP15.7), chaperones (BnP23) and heat-shock transcription factors (HSFs) (Figure 5C and Table S3). Moreover, this group of heat-shock response genes not only showed the highest fold changes among Drakkar DEGs (13 HSR genes out of 15 genes with log2FC > 8, 86%) (Figure 7A), but also the highest differences between Drakkar log2FC and Duplo log2FC (13 HSR genes out of 19 genes with ΔDK log2FC/DP log2FC > 8, 68.4%) (Figure 7B). The negative correlation between the high induction of HSR gene expression and the reduction in primary root growth in Drakkar, together with the positive correlation of increased primary root elongation with the repression of the same genes in Duplo, as assayed by qPCR, strongly suggested that a tight control of the HSR is required to stimulate warmth-triggered primary root growth (Figures 6A and S3B,C). Consequently, we speculated that when roots are exposed to warming temperatures, an initial activation of the HSR is triggered in both varieties, but subsequent HSR repression is initiated in Duplo and not in Drakkar. To test this hypothesis, we analyzed the transcriptional dynamics of several HSR genes belonging to Cluster 2 at 24 h, 48 h, 4 days, and 7 days after exposure to the control temperature, 21 °C, and to 29 °C in both varieties. Thus, we monitored the pattern of expression of four well-known core HSR genes, including the small heat-shock protein BnHSP17.6; the ATP-dependent chaperone of the HSP70 family, BnHSP70; the mitochondrion-localized small heat-shock protein BnHSP23.6M; and the HS-induced galactinol synthase BnGolS1, by qPCR (Figure 7C) [42,43]. As expected, we found that all these genes started to increase their expression at 24 h after 29 °C exposure. In Duplo, this increase was constantly maintained until reaching the maximum at 4 days, after which the expression levels were reduced (BnHSP17.6 and BnHSP70) or steadily maintained (BnHSP23.6 and BnGols1) up to 7 days. On the contrary, in Drakkar, this gene induction gradually increased from 24 h without declining until day 7, when the genes reached the maximum differential expression levels compared to Duplo (Figure 7C). These gene expression patterns correlated well with the opposite primary root growth responses to warm temperature observed for these two varieties and suggest that the phenotypic RSA response could be associated with differences in the transcriptional dynamics of the heat-shock response. Based on these results, we propose that in the roots, warming temperatures lead to an early activation of the HSR. Once this response is triggered, roots attenuate this stress response to avoid its detrimental effect on growth. We postulate that roots of oilseed rape varieties such as Drakkar, which could not counterbalance their temperature stress response and maintained high levels of HSR gene expression, strive to readjust their root growth. Further analysis of the effect of the sustained expression of some of the key HS regulators identified in this study on root response to warming will contribute to corroborating this hypothesis.
Red boxes correspond to HS genes, whereas yellow boxes correspond to nHS genes. (B) Differences between Drakkar log2FC and Duplo log2FC of HS response genes and nHS response genes, showing the highest differences between Drakkar log2FC and Duplo log2FC. X-axis represents three consecutive intervals of ΔDK log2FC/DP log2FC values (DP, Duplo). Red boxes correspond to HS genes, whereas yellow boxes correspond to nHS genes. (C) Differential dynamics of heat-shock response gene expression (BnHSP17.6, BnHSP70, BnHSP23.6M, and BnGols1) at 24 h, 48 h, 4 days, and 7 days after being exposed to 29 °C compared to the control temperature, 21 °C, in Drakkar and Duplo roots. Statistical t-test analysis of three biological replicates, * p < 0.05. (D) Warm temperature-triggered changes of several root traits promote root response by facilitating roots' access to extended areas of soil (blue square box). Enhanced cell division and elongation support this increase in RSA growth (orange square box). Transcriptional changes of different gene regulatory networks related with temperature signaling, plant growth, and nutrient balance drive these cellular changes that result in the differential RSA response (brown square box). Finally, coordinated attenuation of the heat response is required for root response to warming temperatures in B. napus (brown square box).

Discussion

Root system architecture determines the root capacity of supplying water and nutrients to the plant. Under changing environmental conditions, RSA needs to undergo rearrangements to effectively reach water- and nutrient-rich patches in the soil. The identification of root traits responsible for this response is crucial for developing climate-adapted cultivars [18,35]. Several effects on root system architecture due to changes in soil temperatures have been previously described in crops. Thus, a reduction in root growth associated with higher temperatures has been shown in maize, sorghum, rice, and potato [44][45][46][47]. On the other hand, changes in adventitious and lateral roots' initiation and elongation have also been described in potato, maize, sweet potato, and cassava [48][49][50]. However, few studies have focused on characterizing the root response to warm temperatures, which is predicted to be the most likely temperature condition confronting crops. Our results suggest that constant warming temperatures enhance some beneficial root traits related to the roots' ability to explore their environment in several oilseed rape varieties at early stages of seedling development. These root traits included an increase in the extent and depth of primary roots, changes in the distribution and elongation of secondary roots, and an increase in the root surface area due to the combination of primary and secondary root extension. Bigger root systems assist water and nutrient uptake to support active growth during seedling establishment. They also increase the growth of the crop canopy, enhancing seedling survival against pests and environmental stresses and improving their competition with weeds. Consistently, some of the RSA changes that we have described in response to warming are coincident with root traits already associated with QTLs that are found in cultivars that are more tolerant to high temperatures and drought stress in some crops, such as rice, wheat, and barley [51][52][53][54][55].
These QTL-associated traits correspond to higher root length or enhanced superficial root distribution, suggesting that the similar root responses displayed by some of the B. napus varieties analyzed could be positive root traits for tolerance. Moreover, in other species, larger root size and fast early root expansion have been linked to adaptive advantages to temperature by enhancing plant competition in the field [23]. Contrary to this observed positive RSA response to warming, the exposure of roots to temperatures that are higher than optimal generally causes a negative effect on overall root growth, hinting that plants have a narrow temperature margin between triggering an adaptation or a stagnation root growth response [20,55]. Harnessing the modulatory mechanism responsible for this root growth readjustment may help to develop crops that are equally adapted to either heat waves or an increase of a few degrees in temperature triggered by climate change [56].

Our analysis of root response variability to warm temperatures in B. napus seedlings also uncovered differential growth responses between primary and secondary roots. At warmer temperatures, the elongation of primary and secondary roots was not concertedly enhanced in all varieties, but it underlies Drakkar's divergent response, consisting of the elongation of secondary roots, though not of primary roots (Figure 1B,C). These phenotypic responses suggest that specific root developmental signals could differentially modulate the growth programs of primary and lateral roots at warming temperatures. Several hormones, such as auxins, ABA, brassinosteroids, or gibberellins, have been described as modulators of differential responses to environmental stress displayed by primary and secondary roots in several plant species [41,[57][58][59]. Since some regulatory components of these hormonal pathways were differentially expressed in Drakkar primary roots compared to Duplo, one tempting possibility is that a hormonally dependent mechanism could be responsible for their contrasting growth responses to warm temperatures (Figure 5C and Table S4). Further comparisons of primary and lateral roots' specific transcriptional programs will be needed to elucidate the regulatory mechanisms enabling their distinct growth responses.

On the other hand, the shortening of root meristems upon exposure to warm temperatures is a common response mechanism between varieties. Negative effects of elevated temperatures on root meristem size have been described previously in other species, causing a decrease in root growth [60][61][62]. However, in B. napus roots, the reduction in meristem size at warm temperatures positively correlates with an increase in longitudinal root growth. Hence, the alteration of meristem length should be compensated for by changes in either cell division rates or cell length. Coordination between size and cell division to maintain meristem structure has already been described in Arabidopsis shoots [63,64]. Warm temperatures increase the size and number of mitotic root cells, so regardless of the differential organization of both meristems, a similar compensatory mechanism might also maintain root meristem homeostasis in B. napus roots. Cell growth and the cell cycle need to be coordinated in each cell, so warm temperatures must affect the progression of the cell cycle [65,66]. It has already been described that cell cycle length is dependent on environmental conditions, including increased temperatures [67].
Acute heat stress causes G2 arrest in Arabidopsis, but in maize root cells, the cell cycle phases progress more rapidly due to a shorter cell cycle time when exposed to 30 °C [68,69]. Although our results point to a warming-induced acceleration of the cell cycle, to determine precisely how warm temperatures control cell cycle progression, the application of recently developed methods for analyzing spatial and temporal dynamics of the cell cycle "in vivo" in roots would probably be needed [70,71].

A second cellular change driving B. napus root growth in response to warm temperatures is a progressive increase in cell elongation. The remodeling and loosening of cell walls by different cell wall enzymes are essential steps in promoting cell elongation. Accordingly, warm temperatures preferentially induced the expression of several of these enzymes, such as expansins, XTHs, and XYLs, in Duplo roots, where cell elongation increased more sharply [72]. Brassinosteroids (BRs) elicit cell expansion by altering cell wall properties under various environmental conditions [73,74]. In particular, BRs have been shown to mediate an increase in root cell length at warm temperatures in Arabidopsis [61]. Coincidentally, the expression of BnSTE1, a BR biosynthetic enzyme, as well as of some orthologous groups of BR-regulated cell wall enzymes (BnXTH24, BnEXPA8, and BnXTR6), was induced in response to warming in B. napus roots [75]. These results suggest that a BR-regulated pathway might also control warmth-induced cell elongation in this crop. It has also been shown that cytokinins (CKs) negatively control cell elongation in the root elongation zone in Arabidopsis [76]. We have observed that warming induces two negative regulators of CKs, BnCKX1, a cytokinin oxidase/dehydrogenase that catalyzes the degradation of CKs, and BnKMD1, a member of a family of F-box proteins called KISS ME DEADLY (KMD) that targets type-B ARR proteins for degradation. Moreover, we detected that the expression of BnHB-3, a downstream effector of CK signaling, is also differentially repressed in response to warm temperatures, suggesting that changes in cell elongation may be mediated by the downregulation of the CK response (Figure 5 and Table S3).

Regulation of gene expression is an essential component of root response to warming temperatures [77,78]. Among the processes uncovered by our analysis of transcriptomic changes triggered by warm temperatures in B. napus roots, we also found an activation of several hormonal responses. We detected induced levels of two ET biosynthesis genes, BnACO1 and BnSAM, and of two ABA sensors, BnPYL4 and BnPYL5, in the primary roots in response to warm temperatures. Both hormones increase their levels under heat stress and are known to mediate thermotolerance in several crops [79][80][81], suggesting that an activation of biosynthesis or signaling of ET and ABA could also control the warming response in B. napus roots. An analysis of the role of hormonal pathways in the cellular and developmental responses of roots to warming temperatures would be needed to test this hypothesis. Together with the differential transcriptional responses, we found common pathways of gene activation between varieties in our analysis. Components of the ROS/redox-signaling pathway (BnPRX71), lipid signaling (BnACX2), and splicing machinery (BnSR30), together with physiological responses such as sugar metabolism (BnSUC7) and metabolic processes that are considered primary temperature-sensing events, displayed altered expression in B. napus roots independently of their specific primary root response [40,77,82].
Thus, this activation of genes related to temperature sensing, together with the induction of HS core genes, reinforces the idea that a common set of mechanisms for sensing and signaling temperature changes is shared between warming and heat stress in B. napus roots. Similarly, a conserved transcriptomic response between warming and higher temperatures has previously been observed in Arabidopsis seedlings at very early times upon temperature stress [83]. Following this initial activation, some varieties seem to decrease the HSR to protect their root growth from the negative effect of a maintained stress response. To attenuate the activation of the HSR, several molecular mechanisms could be involved, including specific transcriptional repression. However, none of the key HS transcriptional repressors known to negatively regulate the temperature stress response, primarily the HSFB family [84], were differentially upregulated in the varieties analyzed in our study. Another possibility is the fine-tuned control of post-translational modifications of HSFs that change their transport, localization, or turnover, setting positive or negative effects [85]. Alternatively, similar epigenetic changes concerning the histone variant H2A.Z could be mediating this process, as they have been shown to mediate the transcriptomic response to warm temperatures in Arabidopsis [86]. Further analysis will be necessary to fully uncover the mechanisms that contribute to this attenuation mechanism in B. napus roots.

Ambient temperatures are gradually rising as a consequence of global warming, negatively impacting crop productivity. This problem will increase even more in agricultural systems in which increased temperatures are usually accompanied by complex and concomitant detrimental soil conditions, such as enhanced evapotranspiration and compaction of the soil, changes in nutrient composition and moisture, and soil salinization. Root systems will be required to respond to all these heterogeneous soil environments by producing a combination of root traits that ensure plant survival [87]. For example, the production of shallow roots is an effective strategy for a scarcity of water, but when this is accompanied by poor soil nutrient content, root growth and lateral root branching have to be redirected into deeper regions of the soil where these resources are more abundant [88]. Therefore, improving crop tolerance to warming will necessarily require strategies involving the exploitation of combined morphological, cellular, and genetic pathways underlying beneficial root responses (Figure 7D). Challenging differential root responses to combined environmental stresses will uncover the beneficial traits and possible trade-offs of a specific RSA for enhanced tolerance to warming and other associated stresses. In this context, the association between root trait indexes and the induction of specific gene networks, as well as differences in the dynamics of the transcriptional heat-shock response such as the ones uncovered in our study, may provide the biotechnological tools needed to exploit root responses to these complex environmental changes.

Plant Materials and Growth Conditions

A total of 10 B. napus SOSR varieties were used (Table S1). Seeds were germinated on 1/4 MS agar plates for 3 days and transferred to a pouch-and-wick system.
The system consisted of growth pouches assembled from two moistened black cardboards (42 × 29.7 cm) and overlaid with 0.5 mm polypropylene black covers. The paper and cardboards were clipped together on each side using PVC spine bars. The growth pouches were set vertically into plastic trays so that the lowest 5 cm of the blotting paper were submerged in 2 L of nutrient solution (1/4 MS) (adapted from [89,90]). Each pouch (experimental unit), containing six seedlings of each variety, was randomly placed in a growth chamber, where the seedlings were subjected to a constant temperature (21 °C/29 °C), 16 h daylight (150 µmol/m²/s), 40% RH, and the same watering regime with nutrient solution (1/4 MS) for 7 days [91]. For root length measurements at 15 days, 3-day-old seedlings were transferred to perlite pots (15 cm diameter and depth) and watered with either 100 mL/pot of 1/4 MS (plants grown at 21 °C) or 200 mL/pot (plants grown at 29 °C) to maintain constant perlite humidity at both temperatures. Perlite pots were randomly placed in a growth chamber, where they were subjected to a constant temperature (21 °C/29 °C), 16 h daylight, and 40% RH.

Root Trait Analysis

For root trait analysis, three independent biological experiments using pouches containing 6 seedlings for each variety and temperature treatment were used. The seedlings were grown in a single pouch with enough separation between each individual seedling to avoid any overlap of the root systems. Then, pictures of intact B. napus roots from the three independent experiments, grown for 7 days (when the first and second leaves unfold) at 21 °C or 29 °C, were taken using a copy stand at a resolution of 314 ppi. Seedlings were not removed for imaging; individual pictures of each seedling from the same pouch were taken. For quantification of root traits, the GiaRoots semi-automated software v0.1 was used [92]. Secondary root number was quantified using saRIA v0.1 [93]. Two-way ANOVA analyses of the contribution of genotype (G), temperature (T), and their interaction (G×T) to the variance of root trait values after warm temperature treatment were performed using GraphPad Prism 6 software. After testing the data for normal distribution using the Shapiro test (significance level 0.05) and for homoscedasticity using the Brown-Forsythe and Welch tests, grouped individual data corresponding to the 3 biological replicates were treated as unmatched data for a regular (not repeated measures) two-way ANOVA, followed by a multiple comparison unpaired t-test with the desired false discovery rate (Q) value set to 1.000%. Traits that did not pass the normality test were compared using a non-parametric Kruskal-Wallis test. Principal component analysis (PCA) was performed on scaled data using the corresponding command from the FactoMineR R package. Hierarchical clustering was performed using the agnes command from the cluster R package. The optimal number of clusters was determined using K-means cluster analysis and the elbow method in R. Three independent experiments containing 5 seedlings of each variety were used for quantification of the total root length of plants grown in perlite pots for 15 days at 21 °C/29 °C. Seedlings were grown individually in each pot; each seedling was removed from the pot, and the roots were washed carefully to remove the perlite. Then, each seedling was placed against a black background with the roots extended, and an individual picture was taken.
Three independent experiments containing five seedlings of each variety were used for quantification of the total root length of plants grown in perlite pots for 15 days at 21 °C or 29 °C. Seedlings were grown individually in each pot; each seedling was removed from its pot, and the roots were washed carefully to remove the perlite. Then, each seedling was placed against a black background with the roots extended, and an individual picture was taken. Total root length was quantified using Fiji v1.53k [94].

Cellular Parameter Analysis

Root tips from three independent biological replicates (three experimental units, 18 seedlings, per variety and temperature treatment per replicate) of pouch-and-wick-grown B. napus seedlings of each variety and temperature treatment (21 °C and 29 °C) were stained using the modified pseudo-Schiff propidium iodide (mPS-PI) method, as described in [95]. Sequential pictures of the root tip (from the meristematic zone up to the first differentiated cells) were taken using a vertical confocal Zeiss LSM-880-Axio-Imager2 microscope with an LD/LCI-Plan-Apochromat 25×/0.8 Imm Korr-DIC-M27 objective (Carl Zeiss Microscopy GmbH, Jena, Germany). The excitation wavelength was 561 nm, and emission was collected at 568-702 nm. Pictures were manually adjusted and stitched using Fiji [94]. Cell number and size of the external cortical layer were quantified using the Cell-o-Tape macro [96]. Root meristem size was measured from the quiescent center (QC) to the first cortical cell that doubled its size at the start of the elongation zone. The apical meristem is defined as the part of the meristem in which the cells are continually dividing and expanding at a continuous rate; therefore, its size was measured from the QC to the first notably larger cortical cell. The basal meristem was then measured from the end of the apical meristem to the elongation zone, where the first cell more than doubles its size. Cell proliferation analysis was performed by in vivo 5-ethynyl-2′-deoxyuridine (EdU) labeling as described in [97], using root tips from three independent biological replicates (one experimental unit for each variety and temperature treatment per replicate). EdU is a thymidine analogue in which a terminal alkyne group replaces the methyl group in the 5-position. EdU is incorporated into newly synthesized DNA during DNA replication by cells in the root sample. A fluorescent azide, iFluor-488, is then added; it diffuses freely through native tissues and covalently cross-links to the incorporated EdU in a "click" chemistry reaction. DAPI (4′,6-diamidino-2-phenylindole) is a fluorescent stain that binds strongly to adenine-thymine-rich regions in DNA. Detection of EdU was performed using the baseClick-EdU488 kit (BCK-EdU488) according to the manufacturer's instructions (baseclick GmbH, Munich, Germany). DAPI at 2 µg/mL was used for DNA counterstaining. For mitotic index quantification, plants grown for 6 days in the pouch-and-wick system were treated for 2 h by submerging their roots in ¼ MS solution containing 10 µM EdU (pulse). After the pulse period, EdU was rinsed off with water, and plants were grown in ¼ MS for 8 h (chase period) before root tips were collected for EdU detection. Sequential image stacks of the meristematic zone were taken using an excitation wavelength of 488 nm and emission collection at 498-598 nm for EdU, or an excitation wavelength of 405 nm and emission collection at 410-498 nm for DAPI. Image stacks were projected and stitched using Fiji [94]. EdU- and DAPI-stained nuclei were analyzed using CellProfiler software v3.1.9 [98].
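As an illustrative sketch only (the paper does not give its exact quantification formula), one simple way to summarize such counts is the fraction of EdU-positive nuclei among all DAPI-stained nuclei per image; the CSV file and column names below are hypothetical.

```r
# Hypothetical per-image nuclei counts exported from CellProfiler; the file
# and column names (variety, temp, edu_pos, dapi_total) are assumptions.
counts <- read.csv("nuclei_counts.csv")

# Labeling index: EdU-positive nuclei as a fraction of all DAPI-stained nuclei
counts$edu_index <- counts$edu_pos / counts$dapi_total

# Mean index per variety and temperature treatment
aggregate(edu_index ~ variety + temp, data = counts, FUN = mean)
```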
RNA Extraction and Sequencing Analysis

Total RNA from three independent biological replicates (three experimental units, 18 seedlings, for each variety and temperature treatment per biological replicate) was extracted from root tips of pouch-and-wick-grown B. napus seedlings grown at 21 °C or 29 °C. Root tips of 0.5 cm, corresponding to the distance from the tip up to the first differentiated cell (root hair cell), were dissected, and RNA was extracted using TRIzol reagent (Invitrogen, Boston, MA, USA). Total DNA-free RNA was cleaned using RNeasy Plant Mini Kit purification columns (Qiagen, Hilden, Germany). RNA quality and integrity were assessed on an Agilent 2200 TapeStation (Agilent, Santa Clara, CA, USA). Library preparation was performed with 1 µg of high-integrity total RNA (RIN > 8) using the TruSeq Stranded mRNA library preparation kit. Libraries were sequenced on an Illumina HiSeq 2000 platform using paired-end sequencing with reads of 125 bp in length. Quality control of RNA-seq reads was performed using FastQC v0.11.1. Quality-filtered reads were mapped to the rapeseed genome (AST_PRJEB5043_v1 [99]) using HISAT2 [100]. Differential expression analysis of raw count data was performed using DESeq2 [101] in R (a minimal sketch of this step is given at the end of this Methods section). Correlation analysis of differentially expressed genes was performed using ggplot2 in R. Hierarchical clustering of transcriptomic data was performed using Cluster 3.0 [102]. Over-represented biological functions of gene clusters were assessed using SeqEnrich [103].

Expression Analysis by qRT-PCR

Total RNA from three biological replicates (three experimental units, 18 seedlings, for each variety and temperature per biological replicate), collected independently of the replicates used in the RNA-seq analysis, was extracted from root tips of pouch-and-wick-grown B. napus seedlings grown at 21 °C or 29 °C. First-strand cDNA was synthesized with the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific, Waltham, MA, USA). qPCR was performed using the LightCycler 480 SYBR Green I Master mix (Roche, Basel, Switzerland) on a LightCycler 480 System (Roche, Basel, Switzerland). All expression analyses were performed using three biological and three technical replicates. Expression values were normalized to those of BnACT7 [104].

Statistical Analysis

Quantification data were analyzed using GraphPad Prism 6. All statistical analyses, namely one-way ANOVA (Tukey's multiple comparisons test), two-way ANOVA, and t-tests, were performed with built-in analysis tools and parameters.
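As referenced above, here is a minimal R sketch of the DESeq2 differential-expression step, assuming `cts` is a gene-by-sample matrix of raw counts from the HISAT2 alignments and `coldata` records the temperature of each sample; the object names and the 0.05 cut-off are illustrative, not taken from the study.

```r
# Minimal DESeq2 sketch, assuming `cts` (genes x samples, raw counts) and
# `coldata` (one row per sample, with a `temperature` factor: "21C"/"29C").
library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = cts,
                              colData   = coldata,
                              design    = ~ temperature)
dds <- DESeq(dds)  # size factors, dispersion estimation, Wald tests

# Warm (29C) versus control (21C) contrast
res <- results(dds, contrast = c("temperature", "29C", "21C"))
summary(res)

# Illustrative significance cut-off (adjusted P < 0.05)
sig <- subset(res, !is.na(padj) & padj < 0.05)
```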
Analysis of Battery Swapping Technology for Electric Vehicles – Using NIO's Battery Swapping Technology as an Example

The electric vehicle (EV) industry is growing rapidly at the moment. However, refueling an electric vehicle could be a time-consuming process. This was the case until the emergence of battery swapping technology. Using the battery swapping technology developed by EV manufacturer NIO as an example, this paper aims to analyze the history, current state, and future of the technology. In order to conduct the analysis, data provided by the EV industry is closely examined, as data from NIO is compared with data from other aspects of the EV industry. Through the analysis, it is found that battery swapping technology provides efficiency and convenience, but suffers from high costs and inadequate infrastructure at scale. Battery swapping technology has a promising future, but in order to effectively compete with conventional technologies such as charging stations, it would be beneficial for the technology to cut down on cost and expand its infrastructure.

INTRODUCTION

Electric vehicles (EVs) are widely used around the world. However, charging an EV can be quite time-consuming. Battery swapping technology was developed to resolve this issue, as the drained battery inside an EV can be replaced with one that is fully charged. Based on its past and current state, the future of battery swapping technology will remain promising if its current issues can be properly resolved.

Battery swapping technology has been around for quite some time and has started to boom recently due to the rapid growth of the EV industry. Although the technology has matured over the years, it is still relatively new compared to other battery-related technologies such as charging stations. For that very reason, battery swapping technology exhibits a tremendous amount of potential. But on the other hand, the technology also has its unrealized downsides. As battery swapping technology takes a more dominant role in the EV world, this article intends to uncover its potential and hidden weaknesses by investigating the history, current state, and future of the technology. Using Chinese EV manufacturer NIO's battery swapping technology as an example, data regarding refueling time, cost, and amount of infrastructure is inspected. Battery swapping technology has the potential to dominate the EV industry in the future. For that to become reality, it is important for the technology to maintain its strengths and eliminate its weaknesses.

BACKGROUND INFORMATION ON BATTERY SWAPPING TECHNOLOGY AND CHINESE EV MANUFACTURER NIO

Battery swapping technology was first developed in 2007 by an Israeli startup named Better Place [1]. The company paired up with French carmaker Renault to launch an electric sedan with battery swapping features, and it also built battery swap stations to perform the battery replacement procedure on its electric sedans. Due to limited market demand at the time, the product saw very low adoption. As a result, Better Place filed for bankruptcy in 2013 [1].
The technology is currently dominated by Chinese EV manufacturer NIO. Since its founding in 2014, NIO has been involved in developing electric vehicles that possess battery swapping capabilities. Similar to Better Place, NIO also developed battery swap stations, the first of which opened in 2018 in Shenzhen, China [1]. The system developed by NIO has seen much success. With 700 completed battery swap stations in China in 2021 [1], the company continues to expand its operations within China and overseas.

Despite its current success, the battery swapping system developed by NIO is not perfect. In order to accurately project its future, its operation, strengths, and weaknesses need to be further examined.

ANALYSIS: BATTERY SWAPPING TECHNOLOGY IN THE PRESENT DAY

The core of battery swapping technology is the battery swap station. A battery swap station is a location where the discharged battery of a vehicle can be instantly replaced with a fully charged one, eliminating the delay due to the charging of the vehicle's battery [2].

Since NIO is a leader in this technology at the moment, the battery swapping technology it has developed is an appropriate representation of the current state of this technology. Using battery swap stations developed by NIO, this section assesses the status of the technology today by examining its operation and analyzing its strengths and weaknesses.

Operation of Battery Swapping Technology

A labeled concept drawing of a typical battery swap station developed by NIO is displayed below in Figure 1.

Figure 1 Labeled concept drawing of battery swap station developed by NIO [3]

Core components of such a battery swap station include the Flexible Locking and Unlocking Platform, drills for battery replacement procedures, and the battery storage compartment [3]. To swap a drained battery, an electric vehicle first enters the battery swap station and is placed on the Flexible Locking and Unlocking Platform. The drills then remove the screws that attach the battery to the vehicle, and the displaced battery travels on a conveyor belt into the battery storage compartment. Meanwhile, the battery storage compartment sends a fully charged battery through the conveyor belt back to the platform. The drills then install this battery into the vehicle, which completes the battery swapping process. As of January 2021, there were already at least 800 such battery swap stations around China, and that number will continue to grow [3].

The Advantages of Battery Swapping Technology

The most significant advantage of battery swapping technology is the increased efficiency of the refueling process for electric vehicles. Charging an EV from an average 120 V household outlet, also known as Level 1 charging, gains the vehicle a range of 3-5 miles per hour [3]. Given these statistics, assuming a specific electric vehicle could gain 4 miles of range per hour using a household outlet, it would take 25 hours just for the vehicle to gain 100 miles of range. If an EV is charged at a Level 2 charging outlet, available at most conventional charging stations, it would take 6-12 hours for the vehicle to fully charge [3]. With Level 3 charging, also known as DC fast charging at over 480 V, it could still take up to approximately 45 minutes to reach an 80% charge [3].
On the contrary, an electric vehicle manufactured by NIO spends 277 s (about 4.6 min) in a battery swap station to replace its discharged battery [4]. It is also worth noting that parking is included in this timeframe, further emphasizing the increased efficiency that battery swapping technology provides.

Based on the data above, Figure 2 below is created to display the refueling times for vehicles using different methods.

Figure 2 Refueling time for electric vehicles using different methods

Given the data gathered above and the cases presented in Figure 2, it is determined through calculations that NIO's refueling process through battery swapping is 99.7% faster than refueling through Level 1 charging, at least 7% faster than refueling through Level 2 charging, and approximately 89.7% faster than Level 3 DC fast charging (a back-of-the-envelope check of these comparisons is given at the end of this section). Through these statistics, it is clear that battery swapping technology provides much increased efficiency in the refueling process of an electric vehicle.

The Downsides of Battery Swapping Technology

Despite the high efficiency of NIO's battery swapping system, it also has its weaknesses. The biggest disadvantage of the system is the high cost associated with such advanced and complex battery swapping facilities. The construction cost alone of one such battery swap station amounts to $3,000,000 [5]. In comparison, it would only cost between $100,000 and $175,000 to construct a Tesla Supercharger Station [6], and $500,000 to construct a new gas station with four pumps [7]. To present this cost difference visually, Figure 3 is created based on this data.

Figure 3 Construction cost of refueling facilities

Given the data, it can be found through calculations that the construction cost of building NIO's battery swap station is 5 times more expensive than that of a gas station, and a minimum of approximately 16 times more expensive than that of a Tesla Supercharger Station. This high cost contributes to limiting the amount of battery swapping infrastructure that is available to the growing number of electric vehicles.

Besides cost, the limited resources available inside a battery swap station are also disadvantageous. Since there is only one spot per station, only one vehicle can be serviced at a time, while there are usually multiple spaces available at an EV charging station or a gas station. A typical battery swap station developed by NIO has 13 batteries stored in the facility, which allows up to 312 swaps per day [3]. This number is dwarfed when compared to the 1,700 vehicles that visit a conventional gas station per day on average [8]. For the betterment of battery swapping technology in the future, these weaknesses will need to be amended.
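As referenced above, the following R sketch checks the relative-time and relative-cost claims using only the figures quoted in the text; the 6 h value is the lower bound of the quoted Level 2 range.

```r
# Refueling times quoted in the text, converted to hours
swap_h   <- 277 / 3600   # NIO battery swap, 277 s
level1_h <- 25           # Level 1: 25 h for 100 miles at 4 miles/h
level2_h <- 6            # Level 2: lower bound of the 6-12 h range
level3_h <- 45 / 60      # Level 3: ~45 min to 80% charge

pct_faster <- function(other) (other - swap_h) / other * 100
round(c(level1 = pct_faster(level1_h),
        level2 = pct_faster(level2_h),
        level3 = pct_faster(level3_h)), 1)
# ~99.7% (Level 1), ~98.7% (Level 2 lower bound), ~89.7% (Level 3)

# Construction-cost ratios quoted in the text
swap_cost <- 3e6
round(swap_cost / 5e5, 1)    # vs. a four-pump gas station: 6x the cost
round(swap_cost / 1.75e5, 1) # vs. a Tesla Supercharger station: ~17x
```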
FUTURE DIRECTIONS AND RECOMMENDATIONS FOR BATTERY SWAPPING TECHNOLOGY

Battery swapping technology will continue to expand in the future, as NIO aims to reach 3,000 battery swap stations in China by 2025 [5], up from the current 800 as of January 2021 [3]. NIO is also going to expand its battery swapping services outside of China: it opened its first battery swap station in Norway in 2019 and plans on having 20 such stations in 2022 [4]. Currently, NIO also has 534 power charger stations and 600 destination charger stations around China [1]. Based on this data, Figure 4 below is created to visually present the distribution of NIO's refueling facilities.

Figure 4 Distribution of NIO's refueling facilities

According to the figure and the data presented, battery swap stations make up 41% of all of NIO's refueling facilities. With the planned increase of battery swap stations around the world, battery swapping could soon become the dominant type of refueling method, at least for NIO's vehicles. Expanding battery swapping infrastructure should be a priority, not only for NIO but for battery swapping technology in general. Using China as an example again, as of today the number of charging stations in the country still significantly surpasses the number of battery swap stations available, as there are over 430,000 third-party charging sites available to EVs in all of China [1]. Figure 5 below is created based on this data.

Figure 5 Battery swap stations vs. third-party charging facilities in China

According to the figure and data presented, it can be determined through calculations that battery swap stations make up only approximately 1.8% of the total number of refueling facilities available to electric vehicles in China. Even though battery swapping may soon dominate within NIO, it still has a long way to go before it can dominate the whole EV industry. This further confirms the point that expanding battery swapping infrastructure is key to the future of this technology, and NIO is headed in the right direction.

Given its agenda to increase the amount of battery swapping infrastructure, it will be crucial for NIO to cut down on the cost of building these battery swap stations. Since NIO still dominates the technology today, its ability to resolve the issue of cost will also play an important role in the fate of battery swapping technology overall.

Maintenance is another important aspect that could significantly affect the future of this technology one way or the other. Since the technology is relatively new, so are the battery swap stations, and properly maintaining them will be beneficial in the long run. Similarly, it is equally important to appropriately maintain the connections between the battery and the vehicle itself. Management of the batteries is also needed. For the lithium-ion batteries used by EVs to remain within their ideal temperature range, proper cooling needs to be applied, whether from the vehicle itself or from the stations. So far, no evidence or reports have suggested that these critical factors have brought problems upon NIO or the technology, but since these factors are key to the technology, they will always be worth paying attention to.
Overall, the future of battery swapping technology is promising. The technology revived despite the initial failure of Better Place. The rapid expansion and growth of NIO fully demonstrate the potential of battery swapping technology. With governmental policies leaning towards the development of EVs [11], the growth of this technology will also be supported by policymakers around the world. Furthermore, companies other than NIO have come up with their own battery swapping systems, demonstrating the high faith that the tech industry has in this technology. CATL from Ningde, China launched its battery swap service for EVs in January 2022 [9]. In response, NIO has announced that it will open up its battery swapping services to electric vehicles made by other companies [10]. The entrance of other companies into the field will create competition, but that will also spark further innovation and drive the development of the technology forward. It is truly exciting to see what the future of battery swapping has to offer, as the industry gradually resolves its current issues and continues to mature on its journey ahead.

CONCLUSION

Battery swapping technology is a booming industry that exhibits much potential for the future. Based on the analysis conducted throughout this article, it is clear that the technology reduces refueling time significantly for EVs, but it will require solutions regarding its high cost and inadequate infrastructure. To improve upon the research done in this paper, more analysis could be done regarding potential bias in the data, and more objective data should be collected. Since the technology is still relatively new, there is also an insufficient amount of data on its overall performance. In the future, more data should be collected on the performance of battery swapping technology in order to further examine its successes and failures.
Preoperative risk factors for early recurrence after resection of perihilar cholangiocarcinoma

Abstract

Background: Early recurrence after curative resection of perihilar cholangiocarcinoma (PHCC) often occurs within a year of surgery. Preoperative predictors of early recurrence remain unclear. The aim of this study was to define reliable preoperative predictors of early recurrence.

Methods: Medical records and preoperative multidetector-row CT of patients with PHCC who underwent resection between 2002 and 2018 were reviewed. Clinical findings, tumour markers, and radiological appearances, including a 'periductal enation sign' (PES) where there was evidence of soft tissue enhancement appearing to arise from the extrahepatic bile duct, were analysed.

Results: Among 261 patients who underwent resection for PHCC, 67 (25.7 per cent) developed early recurrence. Multivariable analysis identified four preoperative risk factors for early recurrence, namely carbohydrate antigen 19-9 (CA19-9) of 37 U/ml or higher (OR 2.19, 95 per cent confidence interval (c.i.) 1.08 to 4.46), positive PES (OR 7.37, 95 per cent c.i. 2.46 to 22.10), mass-forming tumour (OR 4.46, 95 per cent c.i. 1.83 to 10.90), and luminal-occlusion tumour (OR 4.52, 95 per cent c.i. 2.11 to 9.68). The ORs of the preoperative risk factors were used to define four risk subgroups for early recurrence. The early recurrence rates in the low, moderate, high, and very-high risk groups were 0, 9.4, 39.7, and 65.0 per cent, respectively.

Conclusion: CA19-9, PES, mass-forming tumour, and luminal-occlusion tumour identify patients at higher risk of early recurrence after resection of PHCC.

Introduction

Surgical resection is the cornerstone of achieving long-term survival in perihilar cholangiocarcinoma (PHCC) [1-3]. Around 24-30 per cent of patients who have undergone resection develop early recurrence within a year of their operation, with a dismal prognosis [4-6]. A detailed analysis of preoperative factors that might predict early recurrence has yet to be described, although some studies have reported risk factors for early recurrence after resection using both preoperative and postoperative factors [5-7]. Recently, the present authors [8] reported the periductal enation sign (PES) on preoperative multidetector-row CT (MDCT) as being associated with perineural invasion, poor outcomes, and shortened survival in resected distal cholangiocarcinoma. The possible relationship between PES and tumour radiological appearance for predicting perineural invasion and early recurrence in PHCC has not yet been clarified. The present study aimed to identify preoperative risk factors associated with early recurrence after resection of PHCC.

Methods

This study was approved by the institutional ethics committee (approval number J2019-142-2019-1-3) and is reported in accordance with the STROBE statement [9]. The medical records and MDCT scans of patients with PHCC who underwent resection with curative intent at Shizuoka Cancer Center between September 2002 and December 2018 were analysed. In-hospital deaths after surgery were excluded. All patients underwent major hepatectomy and bile duct resection with or without vascular resection or pancreatoduodenectomy [10]. The Bismuth classification was used to assess the extent of the tumour [11]. The plasma disappearance rate of indocyanine green clearance (ICGK) and future liver remnant volume were used to evaluate the functional reserve of the remnant liver [12].
The ASA physical status (PS) [13] and Charlson co-morbidity index [14] were used for preoperative assessments. Preoperative carbohydrate antigen 19-9 (CA19-9) and carcinoembryonic antigen (CEA) values were usually measured within 2 weeks before the day of surgery, after the resolution of jaundice and cholangitis. Neoadjuvant treatment was never performed. Early recurrence was defined as recurrence within 1 year of resection of PHCC [5].

MDCT analysis

MDCT with a standard protocol optimized for cholangiocarcinoma was used for preoperative tumour assessment before biliary stent placement. MDCT was performed in the early arterial, late arterial, portal venous, and delayed phases. Raw data were reconstructed with a slice thickness of 2 mm. MDCT images were reviewed by experienced radiologists blinded to the other clinical findings. As described previously [8], PES was defined as a surrounding soft tissue enhancement that seemed to emanate from the circumference of the enhanced extrahepatic bile duct on MDCT (Fig. 1a). The length of PES was defined as the perpendicular distance from the circumference of the bile duct to the vertex of the enation, and a positive PES was defined as a PES length of 2 mm or more (Fig. 1b) [8]. A mass-forming PHCC was defined when an extraductal mass was identified (Fig. 1c). Intraductal tumour appearance was divided into 'luminal occlusion', where the lumen of the bile duct was completely invisible (Fig. 1d), and 'non-luminal occlusion', where the lumen remained visible (Fig. 1e,f). The tumour abutment angle of the portal vein (PV) or hepatic artery (HA) was assessed, with an angle of 180° or more considered significant. Lymphadenopathy was defined as the detection of enhancing round-shaped lymph nodes with a short diameter of 1 cm or more within the regional lymph node area.

Postoperative follow-up

Pathological examinations were performed in accordance with the International Union Against Cancer (UICC) TNM Classification, eighth edition [15]. Adjuvant treatment was not routinely performed, except for patients who participated in clinical trials (13 patients) and those who had a positive surgical margin at final pathology (18 patients). Clinical and radiological follow-up was scheduled on a 3-month basis for the first year after resection. Recurrence was diagnosed through either radiological or histological evidence.

Statistical analyses

Continuous data were described as medians with interquartile ranges and compared using the Mann-Whitney U test. Categorical variables were compared using Fisher's exact test. A logistic regression analysis using stepwise backward selection was performed as the multivariable analysis to determine preoperative risk factors for early recurrence after resection of PHCC. All preoperative variables were entered into the model, and those with P ≥ 0.050 were removed from the final model by backward selection [16]. Factors found to be significant on multivariable analysis were given weighting points based on their ORs [17]. The factor with the lowest OR was given one point, and, depending on the ratio of each remaining factor's OR to it, two or three points were given. These points were then summed and divided into four risk subgroups (low, 0-2 points; moderate, 3-4 points; high, 5-6 points; and very-high, 7-8 points); a code sketch of this scoring scheme is given below. Survival curves were generated using the Kaplan-Meier method, and differences were compared using the log rank test. Two-sided P values < 0.050 were considered statistically significant.
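As referenced above, here is an illustrative R sketch of the OR-weighted scoring; the point weights apply the stated rule (lowest OR = 1 point; others 2 or 3 points by OR ratio) to the ORs reported in this study, and the variable names are invented for illustration.

```r
# Points per risk factor, following the stated weighting rule applied to
# the reported ORs; factor names are illustrative.
points <- c(ca19_9_high  = 1,  # CA19-9 >= 37 U/ml, OR 2.19 (lowest -> 1 point)
            pes_positive = 3,  # positive PES, OR 7.37 (~3x the lowest OR)
            mass_forming = 2,  # mass-forming tumour, OR 4.46 (~2x)
            luminal_occl = 2)  # luminal-occlusion tumour, OR 4.52 (~2x)

# Classify a patient from a named logical vector of risk-factor flags
risk_group <- function(flags) {
  score <- sum(points[names(flags)[flags]])
  cut(score, breaks = c(-1, 2, 4, 6, 8),
      labels = c("low", "moderate", "high", "very-high"))
}

# Example: positive PES + luminal occlusion -> 3 + 2 = 5 points -> "high"
risk_group(c(ca19_9_high = FALSE, pes_positive = TRUE,
             mass_forming = FALSE, luminal_occl = TRUE))
```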
The statistical analyses were performed using R (version 4.1.0, The R Foundation for Statistical Computing, Vienna, Austria).

Results

Table 2 shows the clinical and pathological characteristics related to early recurrence. Patients who developed early recurrence had higher CA19-9 and CEA values. According to the MDCT findings, a positive PES, mass-forming tumour, and luminal-occlusion tumour were observed more frequently in the early recurrence group, but the rates of tumour abutment to the PV or HA and of lymphadenopathy did not differ significantly. Multivariable analysis identified four preoperative risk factors that independently predicted early recurrence: CA19-9 of 37 U/ml or higher (OR 2.19), positive PES (OR 7.37), mass-forming tumour (OR 4.46), and luminal-occlusion tumour (OR 4.52) (Table 3). Figure 2 displays the early recurrence rates, ranging from 0 to 65 per cent, according to the four risk subgroups created from the multivariable analysis, followed by the association with survival (Fig. S1). The correlation between each risk factor and the pathological features is shown in Table 4.

Discussion

Radical surgery represents the cornerstone of PHCC treatment, despite being very invasive, especially once hepatopancreatoduodenectomy and vascular resection are performed [1-3,18-25]. The present study has highlighted the importance of early recurrence after resection for PHCC. Predicting the likelihood of early recurrence before surgery could have a profound effect on decision-making for many patients. The present study identified four preoperative risk factors for early recurrence: CA19-9 of 37 U/ml or higher, positive PES, mass-forming tumour, and luminal-occlusion tumour. CA19-9 is a well-established prognostic factor in PHCC [26,27]. The present study revealed that the PES was associated with perineural invasion in PHCC, as shown previously in distal cholangiocarcinoma [8]; however, while a positive PES was an independent risk factor for early recurrence, perineural invasion was not. Luminal-occlusion tumours were associated with a highly malignant tumour phenotype, which might account for their high early recurrence rate. A mass-forming tumour was also found to be an independent risk factor for early recurrence in the present analysis. Large cohort studies of intrahepatic cholangiocarcinoma showed that 22 per cent of patients developed recurrence within 6 months after surgery [28], and 44 per cent developed recurrence within 1 year [29]. PHCC comprehensively includes intrahepatic cholangiocarcinoma with invasion of the hepatic hilum, as it is difficult to clearly distinguish hilar cholangiocarcinoma from intrahepatic cholangiocarcinoma with invasion of the hepatic hilum on clinical images [15,30,31]. The inclusion of intrahepatic cholangiocarcinoma among mass-forming PHCC may have been responsible for the high early recurrence rate. To provide a clinically relevant message, a risk classification associated with early recurrence was developed, with increasing rates of recurrence. This identified a group at the highest risk of early recurrence (around 65 per cent) who should be carefully informed about their dismal prognosis before surgery. The present study has several limitations, including its single-centre and retrospective nature. To validate these results, and in particular the relatively new concepts of PES and luminal occlusion, a multi-institutional study with a large patient population is warranted.